Dataset schema: full_name (string, 7-104 chars), description (string, 4-725 chars), topics (string, 3-468 chars), readme (string, 13-565k chars), label (int64, 0 or 1)
twitch4j/twitch4j-chatbot
Chatbot example using the Twitch4J API [https://github.com/twitch4j/twitch4j]
chatbot hacktoberfest twitch4j
# Twitch4J - Chatbot Template

Support: [![Discord](https://img.shields.io/badge/Join-Twitch4J-7289DA.svg?style=flat-square)](https://discord.gg/FQ5vgW3) [<img src="https://discordapp.com/api/guilds/143001431388061696/widget.png?style=shield">](https://discord.gg/FQ5vgW3)

--------

## A quick note:

This chatbot is part of the [Twitch4J API](https://github.com/PhilippHeuer/twitch4j) project.

## Chat Token

You can generate an OAuth chat token using the `Twitch Chat OAuth Password Generator`: http://twitchapps.com/tmi/
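As a hedged sketch of where such a token ends up, this is how a chat connection is typically wired with Twitch4J v1 (the channel name and token are placeholders, and the snippet is an illustration assuming the standard `TwitchClientBuilder` API, not code taken from this template):

```java
import com.github.philippheuer.credentialmanager.domain.OAuth2Credential;
import com.github.twitch4j.TwitchClient;
import com.github.twitch4j.TwitchClientBuilder;

public class ChatbotSketch {
    public static void main(String[] args) {
        // Placeholder: token generated via the Twitch Chat OAuth Password Generator above
        OAuth2Credential credential = new OAuth2Credential("twitch", "oauth:your-chat-token");

        // Build a client with chat enabled, authenticated with the generated token
        TwitchClient twitchClient = TwitchClientBuilder.builder()
                .withEnableChat(true)
                .withChatAccount(credential)
                .build();

        // Join a channel and say hello
        twitchClient.getChat().joinChannel("your-channel");
        twitchClient.getChat().sendMessage("your-channel", "Hello from the chatbot template!");
    }
}
```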
1
colintheshots/RxJavaExamples
Several simple examples demonstrating how to use RxJava along with a few exercises to try.
null
RxJavaExamples
==============

Several simple examples demonstrating how to use RxJava, along with a few exercises to try, as described in the comments. All code here consists of simple Java examples wrapped up so you can try them in Android Studio's console.

* Use this online documentation: https://github.com/ReactiveX/RxJava/wiki/Filtering-Observables
* Try the following Rx tricks:
  1. takeLast() the last 2 results.
  2. elementAt() to obtain value #567.
  3. take() the first 10 results.
  4. Do #3 and also filter() only values divisible by two.
* Keep in mind that order of operations matters!
* Use bigInteger.mod(BigInteger.valueOf(2)).equals(BigInteger.ZERO) to test for even values.
* Use this online documentation: https://github.com/ReactiveX/RxJava/wiki/Transforming-Observables
  5. Use map() to convert from BigInteger values to Long values and take only the first 50 results.
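A minimal solution to exercise 4 could be sketched like this (assuming RxJava 2; `Observable.range` stands in here for whatever source the actual examples use):

```java
import io.reactivex.Observable;

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class Exercise4 {
    public static void main(String[] args) {
        List<BigInteger> results = new ArrayList<>();

        // Exercise 4: take() the first 10 results, filtered to even values only.
        // Order matters: filter first, then take, so we get 10 *even* values.
        Observable.range(1, 1000)
                .map(BigInteger::valueOf)
                // even-value test exactly as suggested in the comments above
                .filter(v -> v.mod(BigInteger.valueOf(2)).equals(BigInteger.ZERO))
                .take(10)
                .subscribe(results::add);

        System.out.println(results); // [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
    }
}
```

Swapping `.filter(...).take(10)` to `.take(10).filter(...)` would instead yield only the even values among the first 10 numbers, which is why the comments stress order of operations.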
0
JohT/showcase-quarkus-eventsourcing
Shows an example of how to use AxonFramework in conjunction with MicroProfile on Quarkus
axon axon-framework event-driven event-sourcing eventsourcing jasmine-tests microprofile poc quarkus reactive server-sent-events showcase sse
# showcase-quarkus-eventsourcing

Shows an example of how to use [AxonFramework][AxonFramework] in conjunction with [MicroProfile][MicroProfile] on [Quarkus][Quarkus]. More information can be found inside the module [showcase-quarkus-eventsourcing](./showcase-quarkus-eventsourcing/README.md).

[AxonFramework]: https://github.com/AxonFramework/AxonFramework
[Quarkus]: https://quarkus.io
[MicroProfile]: https://projects.eclipse.org/projects/technology.microprofile
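A minimal sketch of the event-sourcing cycle the showcase is built on, using Axon 4's standard annotations (the `Account` aggregate and its command/event classes are hypothetical names for illustration, not taken from the showcase module):

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;

// Hypothetical command and event - not taken from the showcase module
class CreateAccountCommand {
    final String accountId;
    CreateAccountCommand(String accountId) { this.accountId = accountId; }
}

class AccountCreatedEvent {
    final String accountId;
    AccountCreatedEvent(String accountId) { this.accountId = accountId; }
}

// Minimal event-sourced aggregate: commands are validated and turned into
// events, and state is rebuilt purely by replaying those events
public class Account {

    @AggregateIdentifier
    private String accountId;

    protected Account() { } // required by Axon for event-sourced reconstruction

    @CommandHandler
    public Account(CreateAccountCommand command) {
        // Decide: validate the command, then apply an event
        AggregateLifecycle.apply(new AccountCreatedEvent(command.accountId));
    }

    @EventSourcingHandler
    public void on(AccountCreatedEvent event) {
        // Evolve: update state from the event (also invoked on replay)
        this.accountId = event.accountId;
    }
}
```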
1
aws-samples/amazon-qldb-dmv-sample-java
A DMV-based example application that demonstrates best practices for using QLDB & the QLDB Driver for Java.
amazon-qldb sample
# Amazon QLDB Java DMV Sample App

[![license](https://img.shields.io/badge/license-Apache%202.0-blue)](https://github.com/awslabs/amazon-qldb-driver-java/blob/master/LICENSE) [![AWS Provider](https://img.shields.io/badge/provider-AWS-orange?logo=amazon-aws&color=ff9900)](https://aws.amazon.com/qldb/)

The samples in this project demonstrate several uses of Amazon QLDB. For our tutorial, see [Java and Amazon QLDB](https://docs.aws.amazon.com/qldb/latest/developerguide/getting-started.java.html).

## Requirements

### Basic Configuration

See [Accessing Amazon QLDB](https://docs.aws.amazon.com/qldb/latest/developerguide/accessing.html) for information on connecting to AWS. See the [Setting Region](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/region-selection.html) page for more information on using the AWS SDK for Java. You will need to set a region before running the sample code.

### Java 8 and Gradle

The examples are written in Java 8 using the Gradle build tool. Java 8 must be installed to build the examples; however, the Gradle wrapper is bundled in the project and does not need to be installed. See the links below for details on installing Java 8 and for more information on Gradle:

* [Java 8 Installation](https://docs.oracle.com/javase/8/docs/technotes/guides/install/install_overview.html)
* [Gradle](https://gradle.org/)
* [Gradle Wrapper](https://docs.gradle.org/3.3/userguide/gradle_wrapper.html)

## Running the Sample code

The sample code creates a ledger with tables and indexes, and inserts some documents into those tables, among other things. Each of the examples in this project can be run in the following way:

Windows:

```
gradlew run -Dtutorial=CreateLedger
```

Unix:

```
./gradlew run -Dtutorial=CreateLedger
```

The above example will build the CreateLedger class with the necessary dependencies and create a ledger named `vehicle-registration`. You may run other examples after creating a ledger.
### Samples

Below is a list of the sample applications included in this repository, in the recommended order of execution.

### Setting up the test ledger

- CreateLedger
- ListLedgers
- DescribeLedger
- ConnectToLedger
- CreateTable
- CreateIndex
- ConnectToLedger: Run it again to see the created tables.
- InsertDocument
- ScanTable

### Transaction management, PartiQL query examples and History

- AddSecondaryOwner
- DeregisterDriversLicense
- FindVehicles
- RegisterDriversLicense
- RenewDriversLicense
- TransferVehicleOwnership
- DeregisterDriversLicense
- QueryHistory
- InsertIonTypes

### Exporting data

- ExportJournal
- ListJournalExports
- DescribeJournalExport

**Note:** To execute this test, you need to pass the ExportId that will be in the output of `ListJournalExports`. You can execute the test like this:

```bash
./gradlew run -Dtutorial=DescribeJournalExport --args="<Export Id obtained from the output of ListJournalExports>"
```

### Verifying data

- GetRevision
- GetDigest
- GetBlock
- ValidateQldbHashChain

### Other Ledger management operations

- TagResource
- DeletionProtection
- DeleteLedger

### Documentation

Javadoc is used for documentation. You can generate HTML locally with the following:

```
mvn site
```

It will generate the Javadoc for public members (defined in `<reporting/>`) using the given stylesheet (defined in `<reporting/>`), and with a help page (the default value for `nohelp` is true).

```
mvn javadoc:javadoc
```

It will generate the Javadoc for private members (defined in `<build/>`) using the stylesheet (defined in `<reporting/>`), and with no help page (defined in `<build/>`). Please see [Javadoc usage](https://maven.apache.org/plugins/maven-javadoc-plugin/usage.html).

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This library is licensed under the Apache 2.0 license.
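The tutorial classes above wrap the QLDB driver for you; as a hedged sketch of the underlying calls (assuming the v2 builder API of amazon-qldb-driver-java, with an illustrative PartiQL statement - this is not one of the sample classes):

```java
import software.amazon.awssdk.services.qldbsession.QldbSessionClient;
import software.amazon.qldb.QldbDriver;

public class CreateTableSketch {
    public static void main(String[] args) {
        // Connects to the ledger created by the CreateLedger sample;
        // region/credentials come from the AWS SDK default chain
        QldbDriver driver = QldbDriver.builder()
                .ledger("vehicle-registration")
                .sessionClientBuilder(QldbSessionClient.builder())
                .build();

        // Each execute(...) call runs inside a single QLDB transaction
        driver.execute(txn -> txn.execute("CREATE TABLE VehicleRegistration"));

        driver.close();
    }
}
```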
1
jonashackt/spring-rabbitmq-messaging-microservices
Example project showing how to build a scalable microservice architecture using Spring Boot & RabbitMQ
null
spring-rabbitmq-messaging-microservices
======================================================================================

[![Build Status](https://github.com/jonashackt/spring-rabbitmq-messaging-microservices/workflows/build/badge.svg)](https://github.com/jonashackt/spring-rabbitmq-messaging-microservices/actions) [![renovateenabled](https://img.shields.io/badge/renovate-enabled-yellow)](https://renovatebot.com)

Example project showing how to build a scalable microservice architecture using Spring Boot & RabbitMQ

![spring-rabbitmq-messaging-diagram](https://yuml.me/diagram/scruffy/class/[weatherservice]->[RabbitMQ],[weatherservice]^-.-[RabbitMQ],[RabbitMQ]->[weatherbackend03],[RabbitMQ]^-.-[weatherbackend03],[RabbitMQ]->[weatherbackend02],[RabbitMQ]^-.-[weatherbackend02],[RabbitMQ]->[weatherbackend01],[RabbitMQ]^-.-[weatherbackend01])

We're using the [RabbitMQ Docker image](https://hub.docker.com/_/rabbitmq/) here. So if you fire it up with `docker-compose up -d`, you can easily log in to the management GUI at http://localhost:15672 using `guest` & `guest` as credentials.

### Testcontainers only inside weatherbackend

![spring-rabbitmq-messaging-diagram](https://yuml.me/diagram/scruffy/class/[weatherbackend]->[RabbitMQ],[weatherbackend]^-.-[RabbitMQ])

Although we could also use the [docker-compose.yml](docker-compose.yml) right here in the weatherbackend test classes, this could lead to errors - because [testcontainers](https://www.testcontainers.org/) would also try to spin up a `weatherbackend` Docker container, which we don't have at build time of the weatherbackend itself (because the Spring Boot jar isn't ready at that point). But there's the `org.testcontainers.containers.GenericContainer`, which we can use to spin up RabbitMQ without a docker-compose.yml. Just have a look into the test class [SendAndReceiveTest](weatherbackend/src/test/java/de/jonashackt/SendAndReceiveTest.java):

```
...
import org.junit.ClassRule;
import org.junit.Rule;
...
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
...

@RunWith(SpringRunner.class)
@SpringBootTest(classes = WeatherBackendApplication.class)
@ContextConfiguration(initializers = {SendAndReceiveTest.Initializer.class})
public class SendAndReceiveTest {

    static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertyValues.of(
                    "spring.rabbitmq.host=" + rabbitMq.getContainerIpAddress(),
                    "spring.rabbitmq.port=" + rabbitMq.getMappedPort(5672))
                    .applyTo(configurableApplicationContext.getEnvironment());
        }
    }

    @ClassRule
    public static GenericContainer rabbitMq = new GenericContainer("rabbitmq:management")
            .withExposedPorts(5672)
            .waitingFor(Wait.forListeningPort());
```

As Testcontainers doesn't guarantee that the RabbitMQ container is reachable under the host name `localhost` (see https://github.com/testcontainers/testcontainers-java/issues/669#issuecomment-385873331), the test execution leads to `ConnectionRefused` exceptions from the Spring Boot autoconfiguration trying to reach RabbitMQ on this host, so we need to go a different path. Since we need to configure the RabbitMQ host and port at an early stage of the SpringBootTest initialization, we need the help of an `org.springframework.context.ConfigurableApplicationContext` to dynamically set our `spring.rabbitmq.host` property. Together with [TestPropertyValues](https://dzone.com/articles/testcontainers-and-spring-boot) this can be done easily, as seen in the code.

### Testcontainers, the 'real' docker-compose.yml and the weatherservice

At the weatherservice we're also using [testcontainers](https://www.testcontainers.org/) to fully instantiate every microservice needed to test the whole interaction with RabbitMQ:

![spring-rabbitmq-messaging-diagram](https://yuml.me/diagram/scruffy/class/[weatherservice]->[RabbitMQ],[weatherservice]^-.-[RabbitMQ],[RabbitMQ]->[weatherbackend],[RabbitMQ]^-.-[weatherbackend])

Therefore the sequence of the module builds inside our [pom.xml](pom.xml) is crucial:

```
<modules>
    <module>weathermodel</module>
    <module>weatherbackend</module>
    <module>weatherservice</module>
</modules>
```

First the shared domain & event classes are packaged into a .jar file, so that every service is able to use them. Then the weatherbackend is built and tested - which does everything in the context of one microservice. The final `weatherservice` build then uses the successful build output of the `weatherbackend` inside the corresponding [Dockerfile](weatherbackend/Dockerfile):

```
...
# Add Spring Boot app.jar to Container
ADD "target/weatherbackend-0.0.1-SNAPSHOT.jar" app.jar
...
```

Now the service definition inside the [docker-compose.yml](docker-compose.yml) again uses that Dockerfile to spin up a microservice Docker container containing the `weatherbackend`:

```
version: '3.7'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    tty: true
  weatherbackend:
    build: ./weatherbackend
    ports:
      - "8090"
    environment:
      - "SPRING.RABBITMQ.HOST=rabbitmq"
    tty: true
    restart: unless-stopped
```

Note the definition of the environment variable `spring.rabbitmq.host`: the RabbitMQ container's host seen from inside the weatherbackend's Docker container isn't `localhost`, but the Docker-DNS-style name `rabbitmq`!

The test class [WeatherServiceSendAndReceiveTest](weatherservice/src/test/java/de/jonashackt/WeatherServiceSendAndReceiveTest.java) uses `org.testcontainers.containers.DockerComposeContainer` to leverage the 'real' docker-compose.yml:

```
package de.jonashackt;

import com.fasterxml.jackson.core.JsonProcessingException;
import de.jonashackt.messaging.MessageSender;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.SystemOutRule;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.DockerComposeContainer;
import org.testcontainers.containers.wait.strategy.Wait;

import java.io.File;

import static de.jonashackt.common.ModelUtil.exampleEventGetOutlook;
import static de.jonashackt.messaging.Queues.QUEUE_WEATHER_BACKEND;
import static org.hamcrest.Matchers.containsString;
import static org.junit.Assert.assertThat;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = WeatherServiceApplication.class)
public class WeatherServiceSendAndReceiveTest {

    @ClassRule
    public static DockerComposeContainer services =
            new DockerComposeContainer(new File("../docker-compose.yml"))
                    .withExposedService("rabbitmq", 5672, Wait.forListeningPort())
                    .withExposedService("weatherbackend", 8090, Wait.forListeningPort());

    @Rule
    public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();

    @Autowired
    private MessageSender messageSender;

    @Test
    public void is_EventGetOutlook_send_and_EventGeneralOutlook_received() throws JsonProcessingException, InterruptedException {
        messageSender.sendMessage(QUEUE_WEATHER_BACKEND, exampleEventGetOutlook());

        Thread.sleep(5000); // We have to wait a bit here, since our Backend needs 3+ seconds to calculate the outlook

        assertThat(systemOutRule.getLog(), containsString("EventGeneralOutlook received in weatherservice."));
    }
}
```

### Scale weatherbackend & observe which backend retrieves the Events with the Elastic stack

To scale the weatherbackend Docker containers, we can easily use Docker Compose service scaling:

```
docker-compose up -d --scale weatherbackend=3
```

Now we have 3 weatherbackends, as the original architecture diagram suggests:

![spring-rabbitmq-messaging-diagram](https://yuml.me/diagram/scruffy/class/[RabbitMQ]->[weatherbackend03],[RabbitMQ]^-.-[weatherbackend03],[RabbitMQ]->[weatherbackend02],[RabbitMQ]^-.-[weatherbackend02],[RabbitMQ]->[weatherbackend01],[RabbitMQ]^-.-[weatherbackend01])

If we fire up our `weatherservice` now, we can send events that one of the weatherbackends will retrieve. But which backend is retrieving which event? We need log correlation for that, for example with the [Elastic stack](https://www.elastic.co/). The easiest way to do so is to use https://github.com/jonashackt/docker-elk. Just clone that repo and do another `docker-compose up -d`. To connect our microservices to the Elastic stack, there are multiple possibilities. An easy way is to use the [logstash-logback-encoder](https://github.com/logstash/logstash-logback-encoder) and configure it via a `logback-spring.xml` inside the `resources` directory in each app:

```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <logger name="org.springframework" level="WARN"/>
    <logger name="de.jonashackt" level="DEBUG"/>

    <!-- Logstash-Configuration -->
    <!-- For details see https://github.com/logstash/logstash-logback-encoder -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerData>true</includeCallerData>
            <customFields>{"service_name":"weatherservice"}</customFields>
            <fieldNames>
                <message>log-msg</message>
            </fieldNames>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <root level="INFO">
        <appender-ref ref="logstash" />
    </root>
</configuration>
```

Now with everything in place, fire a request to `weatherservice` with ``. Open up Kibana after successful startup at http://localhost:5601/app/kibana and first create an index pattern in __Management/Index Patterns__ called `logstash-*`. Then click __Next step__ and choose `@timestamp` from the dropdown. Finally click __Create index pattern__. Then head over to __Discover__.

Now fire up some events after starting `weatherservice`:

```
curl -v localhost:8095/event

# or 100 events like this
curl -v localhost:8095/events/100
```

Now we should see our services working:

![events-in-kibana](screenshots/kibana-logs.png)

### Scale Containers automatically depending on workload

Initialize local Swarm mode:

```
docker swarm init
```

Now deploy our application as a Docker Stack:

```
docker stack deploy --compose-file docker-stack.yml {{ application_stack_name }}
```

### Architects' heaven: GitHub + Diagram + Markdown-Code

Should be easy, right?! I tried https://yuml.me/diagram/scruffy/class/samples and there's also a nice editor: [https://yuml.me/diagram/scruffy/class/edit/](https://yuml.me/diagram/scruffy/class/edit/[weatherservice]->[RabbitMQ],[weatherservice]^-.-[RabbitMQ],[RabbitMQ]->[weatherbackend03],[RabbitMQ]^-.-[weatherbackend03],[RabbitMQ]->[weatherbackend02],[RabbitMQ]^-.-[weatherbackend02],[RabbitMQ]->[weatherbackend01],[RabbitMQ]^-.-[weatherbackend01])
1
jonashackt/spring-boot-openapi-kong
Example project showing how to integrate Spring Boot microservices with Kong API Gateway
null
# spring-boot-openapi-kong

[![Build Status](https://github.com/jonashackt/spring-boot-openapi-kong/workflows/openapi-to-kong-config-full-setup/badge.svg)](https://github.com/jonashackt/spring-boot-openapi-kong/actions)
[![License](http://img.shields.io/:license-mit-blue.svg)](https://github.com/jonashackt/spring-boot-buildpack/blob/master/LICENSE)
[![renovateenabled](https://img.shields.io/badge/renovate-enabled-yellow)](https://renovatebot.com)
[![versionspringboot](https://img.shields.io/badge/dynamic/xml?color=brightgreen&url=https://raw.githubusercontent.com/jonashackt/spring-boot-openapi-kong/main/weatherbackend/pom.xml&query=%2F%2A%5Blocal-name%28%29%3D%27project%27%5D%2F%2A%5Blocal-name%28%29%3D%27parent%27%5D%2F%2A%5Blocal-name%28%29%3D%27version%27%5D&label=springboot)](https://github.com/spring-projects/spring-boot)

Example project showing how to integrate Spring Boot microservices with Kong API Gateway

[![asciicast](https://asciinema.org/a/370557.svg)](https://asciinema.org/a/370557)

Bringing together Kong & Spring Boot. But wait, what is https://github.com/Kong/kong?

> Kong is a cloud-native, fast, scalable, and distributed Microservice Abstraction Layer (also known as an API Gateway or API Middleware).
This project is also used as a sample for this article: https://blog.codecentric.de/en/2020/11/spring-boot-kong

* [Idea & Setup](#idea--setup)
* [Step by step...](#step-by-step)
* [The current problem with springdoc-openapi and WebFlux based Spring Boot apps](#the-current-problem-with-springdoc-openapi-and-webflux-based-spring-boot-apps)
* [Create a Spring Boot App with REST endpoints](#create-a-spring-boot-app-with-rest-endpoints)
* [Generate an OpenAPI spec with the springdoc-openapi-maven-plugin](#generate-an-openapi-spec-with-the-springdoc-openapi-maven-plugin)
* [Tweak the API information in the generated OpenAPI spec](#tweak-the-api-information-in-the-generated-openapi-spec)
* [Import OpenAPI spec into Kong](#import-openapi-spec-into-kong)
* [Install Insomnia Designer with Kong Bundle plugin](#install-insomnia-designer-with-kong-bundle-plugin)
* [Import Springdoc generated openapi.json into Insomnia Designer](#import-springdoc-generated-openapijson-into-insomnia-designer)
* [Generate Kong Declarative Config from OpenAPI](#generate-kong-declarative-config-from-openapi)
* [Docker Compose with Kong DB-less deployment & declarative configuration](#docker-compose-with-kong-db-less-deployment--declarative-configuration)
* [Access the Spring Boot app through Kong](#access-the-spring-boot-app-through-kong)
* [Configuring the correct upstream in Kong (connect() failed (111: Connection refused) while connecting to upstream)](#configuring-the-correct-upstream-in-kong-connect-failed-111-connection-refused-while-connecting-to-upstream)
* [Automating the OpenAPI-Kong import](#automating-the-openapi-kong-import)
* [Install Inso CLI](#install-inso-cli)
* [Inso CLI install problems on Mac](#inso-cli-install-problems-on-mac)
* [Use Inso CLI to generate Kong declarative config from OpenAPI spec](#use-inso-cli-to-generate-kong-declarative-config-from-openapi-spec)
* [Run the OpenAPI spec generation and Kong declarative config transformation inside the Maven build](#run-the-openapi-spec-generation-and-kong-declarative-config-transformation-inside-the-maven-build)
* [Integrate the full Maven build into Cloud CI](#integrate-the-full-maven-build-into-cloud-ci)
* [Fire up our Kong Docker Compose setup & testdrive the Spring Boot service access](#fire-up-our-kong-docker-compose-setup--testdrive-the-spring-boot-service-access)
* [Links](#links)

### Idea & Setup

Some microservices to access with Kong... I once worked heavily with the Spring Cloud Netflix tooling. Here's the example project: https://github.com/jonashackt/cxf-spring-cloud-netflix-docker and the blog post I wrote back then: https://blog.codecentric.de/en/2017/05/ansible-docker-windows-containers-scaling-spring-cloud-netflix-docker-compose

The goal is to rebuild that project using Kong (https://github.com/Kong/kong) in a more agnostic, pattern-like way.

Setup idea: Spring Boot REST --> [generate OpenAPI spec yamls via springdoc-openapi-maven-plugin](https://www.baeldung.com/spring-rest-openapi-documentation) --> Insomnia: Kong config file with [Kong Bundle plugin](https://insomnia.rest/plugins/insomnia-plugin-kong-bundle/) --> import into Kong and run via decK (normal Kong gateway without EE)

Nothing really out there right now: https://www.google.com/search?q=openapi+spring+boot+kong

* No change in the Spring Boot dev workflow required, no custom annotations
* elegant integration of Kong and Spring Boot services

PLUS: a CI process to regularly generate OpenAPI specs from Spring code -> and automatically import them into Kong, which is an enterprise feature - or it is possible via:

## Step by step...

### The current problem with springdoc-openapi and WebFlux based Spring Boot apps

Why didn't I go with a reactive WebFlux based app? WebFlux based Spring Boot apps currently need some `springdoc-openapi` specific classes in order to fully generate the OpenAPI live documentation in the end.
See the demos at https://github.com/springdoc/springdoc-openapi-demos, and especially the WebFlux functional demo at https://github.com/springdoc/springdoc-openapi-demos/blob/master/springdoc-openapi-spring-boot-2-webflux-functional, where these imports are used:

```java
import static org.springdoc.core.fn.builders.apiresponse.Builder.responseBuilder;
import static org.springdoc.core.fn.builders.parameter.Builder.parameterBuilder;
import static org.springdoc.webflux.core.fn.SpringdocRouteBuilder.route;
```

I don't want to fully discourage this approach, but for this project I wanted a 100% "springdoc-free" standard Spring Boot app, where the springdoc features are __ONLY__ used to generate OpenAPI specs - without relying on any springdoc dependencies. Otherwise, every Spring Boot project that wanted to adopt the solution outlined here would need to integrate springdoc classes into its own codebase.

### Create a Spring Boot App with REST endpoints

This is the easy part. We all know where to start: go to start.spring.io and create a Spring REST app skeleton.
As I wanted to rebuild my good old Spring Cloud Netflix / Eureka based apps, I simply took the `weatherbackend` app from https://github.com/jonashackt/cxf-spring-cloud-netflix-docker/tree/master/weatherbackend

Here's the [WeatherBackendAPI.java](weatherbackend/src/main/java/io/jonashackt/weatherbackend/api/WeatherBackendAPI.java) - nothing special here:

```java
package io.jonashackt.weatherbackend.api;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.jonashackt.weatherbackend.businesslogic.IncredibleLogic;
import io.jonashackt.weatherbackend.model.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/weather")
public class WeatherBackendAPI {

    private static final Logger LOG = LoggerFactory.getLogger(WeatherBackendAPI.class);

    @PostMapping(value = "/general/outlook", produces = "application/json")
    public @ResponseBody GeneralOutlook generateGeneralOutlook(@RequestBody Weather weather) throws JsonProcessingException {
        ...
        return outlook;
    }

    @GetMapping(value = "/general/outlook", produces = "application/json")
    public @ResponseBody String infoAboutGeneralOutlook() throws JsonProcessingException {
        ...
        return "Try a POST also against this URL! Just send some body with it like: '" + weatherJson + "'";
    }

    @GetMapping(value = "/{name}", produces = "text/plain")
    public String whatsTheSenseInThat(@PathVariable("name") String name) {
        LOG.info("Request for /{name} with GET");
        return "Hello " + name + "! This is a RESTful HttpService written in Spring. :)";
    }
}
```

### Generate an OpenAPI spec with the springdoc-openapi-maven-plugin

See the docs at https://github.com/springdoc/springdoc-openapi-maven-plugin on how to use the springdoc-openapi-maven-plugin.

> The aim of springdoc-openapi-maven-plugin is to generate json and yaml OpenAPI description during build time.
The plugin works during the integration-test phase and generates the OpenAPI description. It works in conjunction with the spring-boot-maven-plugin.

But in order to successfully run the springdoc-openapi-maven-plugin, we need to add the [springdoc-openapi-ui](https://github.com/springdoc/springdoc-openapi) dependency (for Tomcat / Spring MVC based apps) or the [springdoc-openapi-webflux-ui](https://github.com/springdoc/springdoc-openapi#spring-webflux-support-with-annotated-controllers) dependency (for reactive WebFlux / Netty based apps) to our [weatherbackend/pom.xml](hellobackend/pom.xml):

```xml
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.4.8</version>
</dependency>
```

Otherwise the `springdoc-openapi-maven-plugin` will run into errors like this (as described [in this SO answer](https://stackoverflow.com/a/64677754/4964553)):

```
[INFO] --- springdoc-openapi-maven-plugin:1.1:generate (default) @ hellobackend ---
[ERROR] An error has occured: Response code 404
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.5.RELEASE:stop (post-integration-test) @ hellobackend ---
[INFO] Stopping application...
2020-11-04 10:18:36.851  INFO 42036 --- [on(4)-127.0.0.1] inMXBeanRegistrar$SpringApplicationAdmin : Application shutdown requested.
```

As a sidenote: if you fire up your Spring Boot app from here with `mvn spring-boot:run`, you can already access the live API documentation at http://localhost:8080/swagger-ui.html

![openapi-swagger-ui](screenshots/openapi-swagger-ui.png)

Now we can add the `springdoc-openapi-maven-plugin` to our [hellobackend/pom.xml](hellobackend/pom.xml):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>pre-integration-test</id>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>post-integration-test</id>
                    <goals>
                        <goal>stop</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.springdoc</groupId>
            <artifactId>springdoc-openapi-maven-plugin</artifactId>
            <version>1.1</version>
            <executions>
                <execution>
                    <phase>integration-test</phase>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

As you can see, we also need to tell the `spring-boot-maven-plugin` to start and stop around the integration-test phase. In order to generate the OpenAPI spec we need to execute Maven with:

```
mvn verify
```

The output should contain something like this:

```
...
[INFO] --- springdoc-openapi-maven-plugin:1.1:generate (default) @ hellobackend ---
2020-11-04 10:26:09.579  INFO 42143 --- [ctor-http-nio-2] o.springdoc.api.AbstractOpenApiResource : Init duration for springdoc-openapi is: 29 ms
...
```

This indicates that the OpenAPI spec generation was successful.
Now we need to have a look into the `weatherbackend/target` directory, where a file called [openapi.json](weatherbackend/target/openapi.json) should be present (you may need to reformat it inside your IDE so it's not a one-liner ;) ):

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "OpenAPI definition",
        "version": "v0"
    },
    "servers": [
        {
            "url": "http://localhost:8080",
            "description": "Generated server url"
        }
    ],
    "paths": {
        "/weather/general/outlook": {
            "get": {
                "tags": [
                    "weather-backend-controller"
                ],
                "operationId": "infoAboutGeneralOutlook",
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "string"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```

### Tweak the API information in the generated OpenAPI spec

I really didn't want to change much here in the first place. But as I got into more details regarding the Kong integration, I wanted to configure at least some information in the generated `openapi.json`. Especially the `"title": "OpenAPI definition"`, which is then used as the Kong service name, should be optimized :) Therefore we can [use the @OpenAPIDefinition annotation](https://github.com/springdoc/springdoc-openapi#adding-api-information-and-security-documentation) to configure the service info.
So let's create a class [OpenAPIConfig.java](weatherbackend/src/main/java/io/jonashackt/weatherbackend/api/OpenAPIConfig.java) and specify some info:

```java
package io.jonashackt.weatherbackend.api;

import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.info.Info;
import io.swagger.v3.oas.annotations.servers.Server;

@OpenAPIDefinition(
        info = @Info(
                title = "weatherbackend",
                version = "v2.0"
        ),
        servers = @Server(url = "http://weatherbackend:8080")
)
public class OpenAPIConfig {
}
```

With that we can generate our `openapi.json` again by running `mvn verify -DskipTests=true` and should see the new information propagated:

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "weatherbackend",
        "version": "v2.0"
    },
    "servers": [
        {
            "url": "http://weatherbackend:8080",
            "variables": {}
        }
    ],
    "paths": {
        "/weather/general/outlook": {
            "get": {
                "tags": [
                    "weather-backend-api"
                ],
                "operationId": "infoAboutGeneralOutlook",
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "string"
                                }
                            }
                        }
                    }
                }
            },
```

### Import OpenAPI spec into Kong

First we start with the manual process in order to test-drive our solution.
### Install Insomnia Designer with Kong Bundle plugin

On a Mac simply use brew (or have a look at https://insomnia.rest):

```
brew cask install insomnia-designer
```

Then go to https://insomnia.rest/plugins/insomnia-plugin-kong-bundle, click on `Install in Designer` & open the request in Insomnia Designer:

![insomnia-designer-kong-bundle-plugin](screenshots/insomnia-designer-kong-bundle-plugin.png)

### Import Springdoc generated openapi.json into Insomnia Designer

Now let's try to import the generated [openapi.json](weatherbackend/target/openapi.json) into our Insomnia Designer by clicking on `Create` and then `Import from / File`:

![insomnia-designer-import-openapi-json](screenshots/insomnia-designer-import-openapi-json.png)

That was easy :) Now you can already interact with your API through Insomnia.

### Generate Kong Declarative Config from OpenAPI

The next step is to generate the Kong configuration from the OpenAPI specification. Therefore we need to click on the `Generate Config` button:

![insomnia-designer-kong-declarative-config](screenshots/insomnia-designer-kong-declarative-config.png)

And voilà, we have our Kong declarative configuration ready:

```yaml
_format_version: "1.1"
services:
  - name: weatherbackend
    url: http://weatherbackend:8080
    plugins: []
    routes:
      - tags:
          - OAS3_import
        name: weatherbackend-path-get
        methods:
          - GET
        paths:
          - /weather/general/outlook
        strip_path: false
      - tags:
          - OAS3_import
        name: weatherbackend-path_1-post
        methods:
          - POST
        paths:
          - /weather/general/outlook
        strip_path: false
      - tags:
          - OAS3_import
        name: weatherbackend-path_2-get
        methods:
          - GET
        paths:
          - /weather/(?<name>\S+)$
        strip_path: false
    tags:
      - OAS3_import
upstreams:
  - name: weatherbackend
    targets:
      - target: weatherbackend:8080
    tags:
      - OAS3_import
```

For now let's save this yaml inside the [kong/kong.yml](kong/kong.yml) file.

### Docker Compose with Kong DB-less deployment & declarative configuration

I have two goals here: first, I want a simple deployment solution.
If I can avoid it, I don't want to run multiple services just for the API gateway. I want to start small and you as a reader should be able to easily follow. As [the official Docker Compose file](https://github.com/Kong/docker-kong/blob/master/compose/docker-compose.yml) has two (!!) database migration services, one database service and one service for Kong, I was really overwhelmed at first. What I learned in my years in the IT industry: Every component you don't have is a good component. So there must be a way to deploy Kong without that much "hassle". And I found one: The documentation at https://docs.konghq.com/2.2.x/db-less-and-declarative-config/ & https://docs.konghq.com/install/docker says that DB-less mode is possible since Kong 1.1 and has a number of benefits:

* reduced number of dependencies: no need to manage a database installation if the entire setup for your use-cases fits in memory
* it is a good fit for automation in CI/CD scenarios: configuration for entities can be kept in a single source of truth managed via a Git repository
* it enables more deployment options for Kong

But be aware that there are also some drawbacks. [Not all plugins support this mode](https://docs.konghq.com/2.2.x/db-less-and-declarative-config/#plugin-compatibility) and [there is no central configuration database](https://docs.konghq.com/2.2.x/db-less-and-declarative-config/#no-central-database-coordination) if you want to run multiple Kong nodes. But for our simple setup we should be able to live with that.

And there's another advantage: we don't need to use [decK](https://github.com/Kong/deck) here, which, as my colleague [already outlined](https://blog.codecentric.de/en/2019/12/kong-api-gateway-declarative-configuration-using-deck-and-visualizations-with-konga/), is used with Kong for declarative config handling. decK is only needed if you use Kong with a database deployment!
If you choose the DB-less/declarative configuration approach, your declarative file is already everything we need! :) So let's do it. I'll try to set up the simplest possible `docker-compose.yml` here in order to spin up Kong. I'll derive it [from the official one](https://github.com/Kong/docker-kong/blob/master/compose/docker-compose.yml), the one my colleague Daniel Kocot [used in his blog posts](https://github.com/danielkocot/kong-blogposts/blob/master/docker-compose.yml) and others [like this](https://medium.com/@matias_azucas/db-less-kong-tutorial-8cbf8f70b266).

```yaml
version: '3.7'

services:

  kong:
    image: kong:2.2.0
    environment:
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: '0.0.0.0:8001'
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /usr/local/kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
    volumes:
      - ./kong/:/usr/local/kong/declarative
    networks:
      - kong-net
    ports:
      - "8000:8000/tcp"
      - "127.0.0.1:8001:8001/tcp"
      - "8443:8443/tcp"
      - "127.0.0.1:8444:8444/tcp"
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 10s
      retries: 10
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure

  # no portbinding here - the actual services should be accessible through Kong
  weatherbackend:
    build: ./weatherbackend
    ports:
      - "8080"
    networks:
      - kong-net
    tty: true
    restart: unless-stopped

networks:
  kong-net:
    external: false
```

I literally threw out everything we don't really, really need! No `kong-migrations`, `kong-migrations-up`, `kong-db` services - and no extra `Dockerfile` [as shown in this blog post](https://medium.com/@matias_azucas/db-less-kong-tutorial-8cbf8f70b266). I only wanted to have a single `kong` service for the API gateway - and a `weatherbackend` service that is registered in Kong later.
As stated [in the docs for DB-less deployment](https://docs.konghq.com/install/docker/?_ga=2.266755086.1634614376.1604405282-930789398.1604405282) I used `KONG_DATABASE: "off"` to switch to DB-less mode and `KONG_DECLARATIVE_CONFIG: /usr/local/kong/declarative/kong.yml` to tell Kong where to get the `kong.yml` we generated with the Insomnia Designer's Kong Bundle plugin.

To have the file present at `/usr/local/kong/declarative/kong.yml`, I used a simple volume mount like this: `./kong/:/usr/local/kong/declarative`. No need to manually create the volume as described in the docs - or to create another Dockerfile solely to load the config file into the Kong container. Nothing else is needed besides this sweet volume mount! Now this is starting to be fun :)

Now let's fire up our Kong setup with `docker-compose up`:

```
$ docker-compose up
Starting spring-boot-openapi-kong_kong_1 ... done
Starting spring-boot-openapi-kong_weatherbackend_1 ... done
Attaching to spring-boot-openapi-kong_weatherbackend_1, spring-boot-openapi-kong_kong_1
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: using the "epoll" event method
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: openresty/1.17.8.2
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: built by gcc 9.3.0 (Alpine 9.3.0)
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: OS: Linux 5.4.39-linuxkit
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: start worker processes
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: start worker process 22
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: start worker process 23
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: start worker process 24
kong_1 | 2020/11/04 14:21:11 [notice] 1#0: start worker process 25
kong_1 | 2020/11/04 14:21:11 [notice] 23#0: *2 [lua] cache.lua:374: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
kong_1 | 2020/11/04 14:21:11 [notice] 23#0: *2 [lua] cache.lua:374: purge(): [DB cache] purging (local)
cache, context: init_worker_by_lua*
kong_1 | 2020/11/04 14:21:11 [notice] 23#0: *2 [kong] init.lua:354 declarative config loaded from /usr/local/kong/declarative/kong.yml, context: init_worker_by_lua*
weatherbackend_1 |
weatherbackend_1 |   .   ____          _            __ _ _
weatherbackend_1 |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
weatherbackend_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
weatherbackend_1 |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
weatherbackend_1 |   '  |____| .__|_| |_|_| |_\__, | / / / /
weatherbackend_1 |  =========|_|==============|___/=/_/_/_/
weatherbackend_1 |  :: Spring Boot ::        (v2.3.5.RELEASE)
weatherbackend_1 |
weatherbackend_1 | 2020-11-04 14:21:13.226 INFO 6 --- [ main] io.jonashackt.weatherbackend.WeatherBackendApplication : Starting WeatherBackendApplication v2.3.5.RELEASE on 209e8a7cbb36 with PID 6 (/app.jar started by root in /)
weatherbackend_1 | 2020-11-04 14:21:13.239 INFO 6 --- [ main] io.jonashackt.weatherbackend.WeatherBackendApplication : No active profile set, falling back to default profiles: default
weatherbackend_1 | 2020-11-04 14:21:15.920 INFO 6 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
weatherbackend_1 | 2020-11-04 14:21:15.958 INFO 6 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
weatherbackend_1 | 2020-11-04 14:21:15.960 INFO 6 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.39]
weatherbackend_1 | 2020-11-04 14:21:16.159 INFO 6 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
weatherbackend_1 | 2020-11-04 14:21:16.163 INFO 6 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2714 ms
weatherbackend_1 | 2020-11-04 14:21:16.813 INFO 6 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
weatherbackend_1 | 2020-11-04 14:21:18.534 INFO 6 --- [
main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
weatherbackend_1 | 2020-11-04 14:21:18.564 INFO 6 --- [ main] io.jonashackt.weatherbackend.WeatherBackendApplication : Started WeatherBackendApplication in 7.188 seconds (JVM running for 8.611)
kong_1 | 172.19.0.1 - - [04/Nov/2020:14:25:16 +0000] "GET / HTTP/1.1" 404 48 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:82.0) Gecko/20100101 Firefox/82.0"
```

A crucial part here is that Kong successfully loads the declarative configuration file and logs something like `[kong] init.lua:354 declarative config loaded from /usr/local/kong/declarative/kong.yml`.

If your log looks somewhat like the above, you can also have a look at the admin API by opening http://localhost:8001/ in your browser. [As the docs state](https://docs.konghq.com/2.2.x/db-less-and-declarative-config/#setting-up-kong-in-db-less-mode) everything should be ok if the response says `"database": "off"` somewhere like:

![docker-compose-db-less-deploy-database-off](screenshots/docker-compose-db-less-deploy-database-off.png)

We can also double check http://localhost:8001/status , where we have a good overview of Kong's current availability.

### Access the Spring Boot app through Kong

The next thing we need to look at is how to access our `weatherbackend` through Kong. Specifically we need to have a look into the configured Kong services to see if the OpenAPI spec import worked out the way we expected it in the first place.

Without using Kong's declarative configuration [we would need to add services and routes manually through the Kong admin API](https://blog.codecentric.de/en/2019/09/api-management-kong-update/). But as we use declarative configuration, which we generated from the OpenAPI spec, everything is already taken care of for us.
Therefore let's have a look into the list of all currently registered Kong services at http://localhost:8001/services

![kong-admin-api-services-overview](screenshots/kong-admin-api-services-overview.png)

You can also access the Kong routes of our Spring Boot-backed service with this URL: http://localhost:8001/services/weatherbackend/routes

### Configuring the correct upstream in Kong (connect() failed (111: Connection refused) while connecting to upstream)

If you run into problems like this:

```
kong_1 | 2020/11/04 18:56:05 [error] 24#0: *14486 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: kong, request: "GET /weather/Jonas HTTP/1.1", upstream: "http://127.0.0.1:8080/weather/Jonas", host: "localhost:8000"
kong_1 | 2020/11/04 18:56:05 [error] 24#0: *14486 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: kong, request: "GET /weather/Jonas HTTP/1.1", upstream: "http://127.0.0.1:8080/weather/Jonas", host: "localhost:8000"
kong_1 | 172.19.0.1 - - [04/Nov/2020:18:56:05 +0000] "GET /weather/Jonas HTTP/1.1" 502 75 "-" "insomnia/2020.4.2"
```

we should have a look at the `upstreams` configuration of our generated Kong declarative config:

```yaml
upstreams:
  - name: weatherbackend
    targets:
      - target: localhost:8080
    tags:
      - OAS3_import
```

With our setup Kong needs to access the `weatherbackend` from within the Docker network. So an `upstreams` target pointing to `localhost` will not work and leads to the error `connect() failed (111: Connection refused) while connecting to upstream`. So we need to think about a working `host` configuration.

[Daniel did the trick](https://blog.codecentric.de/en/2019/09/api-management-kong-update/) of simply using `host.docker.internal` as host name in his post and I also remember it from my last work with Traefik.
Coming from this solution I thought about my [post about the Spring Cloud microservice setup](https://blog.codecentric.de/en/2017/05/ansible-docker-windows-containers-scaling-spring-cloud-netflix-docker-compose) back in 2017: There I simply used the Docker (Compose) service names, which I aligned with the names of the microservices.

So having a look into our [docker-compose.yml](docker-compose.yml) it would be easy to simply use `weatherbackend` as the host name, since that one should also be available inside the Docker network. And: We can enrich this later by using a DNS resolver and so on...

In order to configure another host name inside Kong, we need to tweak our [OpenAPIConfig.java](weatherbackend/src/main/java/io/jonashackt/weatherbackend/api/OpenAPIConfig.java) with another configuration option called `servers`:

```java
@OpenAPIDefinition(
        info = @Info(
                title = "weatherbackend",
                version = "v2.0"
        ),
        servers = @Server(url = "http://weatherbackend:8080")
)
public class OpenAPIConfig {
}
```

Now running the OpenAPI spec and Kong declarative config generation again, our setup should come up with a working configuration to access our Spring Boot service through Kong!
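After the re-generation, the `upstreams` section of [kong/kong.yml](kong/kong.yml) should point at the Docker Compose service name instead of `localhost` (matching the full generated config shown earlier):

```yaml
upstreams:
  - name: weatherbackend
    targets:
      - target: weatherbackend:8080
    tags:
      - OAS3_import
```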
Finally we can use Postman, Insomnia Core or the like to access our Spring Boot app with a GET on http://localhost:8000/weather/MaxTheKongUser

![service-access-postman-success](screenshots/service-access-postman-success.png)

Looking into our Docker Compose log we should also see the successful responses from our `weatherbackend` service:

```shell script
weatherbackend_1 | 2020-11-05 07:54:48.381 INFO 7 --- [nio-8080-exec-1] i.j.controller.WeatherBackendController : Request for /{name} with GET
kong_1 | 172.19.0.1 - - [05/Nov/2020:07:54:48 +0000] "GET /weather/MaxTheKongUser HTTP/1.1" 200 133 "-" "PostmanRuntime/7.26.1"
weatherbackend_1 | 2020-11-05 07:54:59.951 INFO 7 --- [nio-8080-exec-2] i.j.controller.WeatherBackendController : Request for /{name} with GET
kong_1 | 172.19.0.1 - - [05/Nov/2020:07:54:59 +0000] "GET /weather/MonicaTheKongUser HTTP/1.1" 200 136 "-" "PostmanRuntime/7.26.1"
weatherbackend_1 | 2020-11-05 07:55:06.573 INFO 7 --- [nio-8080-exec-3] i.j.controller.WeatherBackendController : Request for /{name} with GET
kong_1 | 172.19.0.1 - - [05/Nov/2020:07:55:06 +0000] "GET /weather/MartinTheKongUser HTTP/1.1" 200 136 "-" "PostmanRuntime/7.26.1"
```

### Automating the OpenAPI-Kong import

We also need to import the OpenAPI spec every time the code changes, since otherwise the configuration in Kong will drift with every commit! Additionally we want to be able to run our process on our CI servers as well, since we're in 2020 and want to be sure everything still runs after code changes.
And maybe there's a way: Inso CLI https://github.com/Kong/insomnia/tree/develop/packages/insomnia-inso

Because there we have an [openapi-2-kong functionality](https://github.com/Kong/insomnia/tree/develop/packages/insomnia-inso#-inso-generate-config-identifier) - see also https://www.npmjs.com/package/openapi-2-kong:

> Similar to the Kong Kubernetes and Declarative config plugins for Designer, this command can generate configuration from an API specification, using openapi-2-kong.

### Install Inso CLI

So let's try Inso CLI! (did I say that this is starting to get really cool :D )

Install it with:

```shell script
npm i -g insomnia-inso
```

#### Inso CLI install problems on Mac

I ran into the following error:

```
node-pre-gyp WARN Using request for node-pre-gyp https download
  CXX(target) Release/obj.target/node_libcurl/src/node_libcurl.o
clang: error: no such file or directory: '/usr/include'
```

This is a problem, since [MacOS command line tools do not add the `/usr/include` folder by default anymore](https://stackoverflow.com/questions/64694248/node-libcurl-installation-fails-on-macos-catalina-clang-error-no-such-file-or/64694249#64694249) (OMG!).

In order to fix that problem, you need to install `node-libcurl` (which has the above problem and is needed by insomnia-inso) first and use the environment variable `npm_config_curl_include_dirs` to show the installation process the new location of `/usr/include`, which is `$(xcrun --show-sdk-path)/usr/include`. The command must also include `insomnia-inso`:

```
npm_config_curl_include_dirs="$(xcrun --show-sdk-path)/usr/include" npm install -g node-libcurl insomnia-inso
```

### Use Inso CLI to generate Kong declarative config from OpenAPI spec

As we want to go from `openapi.json` to `kong.yml`, we need to [use the `inso generate config` command as described in the docs](https://github.com/Kong/insomnia/tree/develop/packages/insomnia-inso#-inso-generate-config-identifier).
We should also use the option `--type declarative`, since the output should result in a Kong declarative configuration file. Also our OpenAPI spec file at `weatherbackend/target/openapi.json` can be directly passed to Inso CLI. The last part is to tell Inso where to output the Kong declarative configuration: `--output kong/kong.yml`.

```
inso generate config weatherbackend/target/openapi.json --output kong/kong.yml --type declarative --verbose
```

If you want to see a bit more of what the Inso CLI is doing, you can add `--verbose` to the command.

If your node/npm installation is broken like mine, you can add the `node_modules/insomnia-inso/bin` directory to your `.bash_profile`, `.zshrc` etc. like that:

```
export PATH="/usr/local/Cellar/node/15.1.0/lib/node_modules/insomnia-inso/bin:$PATH"
```

### Run the OpenAPI spec generation and Kong declarative config transformation inside the Maven build

Every time we change our Spring Boot app's code, we should trigger a re-generation of our Kong declarative config in our `kong.yml` file, since the API could have changed! Playing with different possibilities of where to put the generation (Docker, Compose, CI server) I found a really simple solution to bind the step to our standard build process: I just used the [exec-maven-plugin](https://www.mojohaus.org/exec-maven-plugin/usage.html) to execute the `inso` CLI. Although the XML syntax may look a bit strange at first sight, it totally makes sense to have the generation of our `kong.yml` directly coupled to our build process as well.
Therefore let's have a look at our [weatherbackend/pom.xml](weatherbackend/pom.xml):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.0.0</version>
    <executions>
        <execution>
            <id>execute-inso-cli</id>
            <phase>verify</phase>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <executable>inso</executable>
        <arguments>
            <argument>generate</argument>
            <argument>config</argument>
            <argument>target/openapi.json</argument>
            <argument>--output</argument>
            <argument>../kong/kong.yml</argument>
            <argument>--type</argument>
            <argument>declarative</argument>
            <argument>--verbose</argument>
        </arguments>
    </configuration>
</plugin>
```

Using `mvn exec:exec` we are now able to execute the `inso` CLI through Maven:

```
$ mvn exec:exec
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< io.jonashackt.weatherbackend:weatherbackend >-------------
[INFO] Building weatherbackend 2.3.5.RELEASE
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (default-cli) @ weatherbackend ---
Configuration generated to "kong/kong.yml".
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.671 s
[INFO] Finished at: 2020-11-05T14:05:04+01:00
[INFO] ------------------------------------------------------------------------
```

As you can see, the `inso` CLI logging `Configuration generated to "kong/kong.yml".` is part of the output. And we can push the integration into our build process even further: As [mentioned by Pascal on Stack Overflow](https://stackoverflow.com/a/2472767/4964553) we can even bind the execution of the `exec-maven-plugin` to the standard Maven build.
Using the `<phase>` tag we bind the execution to the `verify` phase, where the generation of the OpenAPI spec also takes place already:

```xml
<executions>
    <execution>
        <id>execute-inso-cli</id>
        <phase>verify</phase>
        <goals>
            <goal>exec</goal>
        </goals>
    </execution>
</executions>
```

This is marvelous, since with this addition a normal `mvn verify` does every step needed to generate a Kong declarative config file at [kong/kong.yml](kong/kong.yml)!

```shell script
$ mvn verify
...
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.822 s - in io.jonashackt.weatherbackend.api.WeatherBackendAPITests
2020-11-05 14:07:49.261 INFO 66585 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO]
[INFO] --- maven-jar-plugin:3.2.0:jar (default-jar) @ weatherbackend ---
[INFO] Building jar: /Users/jonashecht/dev/spring-boot/spring-boot-openapi-kong/weatherbackend/target/weatherbackend-2.3.5.RELEASE.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.5.RELEASE:repackage (repackage) @ weatherbackend ---
[INFO] Replacing main artifact with repackaged archive
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.5.RELEASE:start (pre-integration-test) @ weatherbackend ---
[INFO] Attaching agents: []
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.5.RELEASE)

2020-11-05 14:07:50.978 INFO 66597 --- [ main] i.j.w.WeatherBackendApplication : Starting WeatherBackendApplication on PikeBook.fritz.box with PID 66597 (/Users/jonashecht/dev/spring-boot/spring-boot-openapi-kong/weatherbackend/target/classes started by jonashecht in /Users/jonashecht/dev/spring-boot/spring-boot-openapi-kong/weatherbackend)
2020-11-05 14:07:50.981 INFO 66597 --- [ main] i.j.w.WeatherBackendApplication : No active profile set, falling back to default profiles: default
2020-11-05 14:07:51.657 INFO 66597 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-11-05 14:07:51.665 INFO 66597 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-11-05 14:07:51.665 INFO 66597 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.39]
2020-11-05 14:07:51.735 INFO 66597 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-11-05 14:07:51.736 INFO 66597 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 715 ms
2020-11-05 14:07:51.889 INFO 66597 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-11-05 14:07:52.292 INFO 66597 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-11-05 14:07:52.300 INFO 66597 --- [ main] i.j.w.WeatherBackendApplication : Started WeatherBackendApplication in 1.585 seconds (JVM running for 1.978)
[INFO]
[INFO] --- springdoc-openapi-maven-plugin:1.1:generate (default) @ weatherbackend ---
2020-11-05 14:07:52.764 INFO 66597 ---
[nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-11-05 14:07:52.764 INFO 66597 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-11-05 14:07:52.768 INFO 66597 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 4 ms
2020-11-05 14:07:52.936 INFO 66597 --- [nio-8080-exec-1] o.springdoc.api.AbstractOpenApiResource : Init duration for springdoc-openapi is: 148 ms
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.5.RELEASE:stop (post-integration-test) @ weatherbackend ---
[INFO] Stopping application...
2020-11-05 14:07:52.989 INFO 66597 --- [on(4)-127.0.0.1] inMXBeanRegistrar$SpringApplicationAdmin : Application shutdown requested.
2020-11-05 14:07:53.052 INFO 66597 --- [on(4)-127.0.0.1] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (execute-inso-cli) @ weatherbackend ---
Configuration generated to "../kong/kong.yml".
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.185 s
[INFO] Finished at: 2020-11-05T14:07:54+01:00
[INFO] ------------------------------------------------------------------------
```

With that our Spring Boot app is built & tested, then the `openapi.json` gets generated using the `springdoc-openapi-maven-plugin` and then transformed into `kong.yml` by the `Inso CLI` executed by the `exec-maven-plugin` :)))

### Integrate the full Maven build into Cloud CI

As we want to make sure everything works as expected every time the code changes, we need to include the build into a CI system. Now that we depend on an `Inso CLI` installation, which needs Node.js/npm, and on Maven at the same time, we need to go with a very flexible cloud CI solution.
As we probably also need Docker Compose on our CI system, I decided to go with GitHub Actions, since there we have a full-blown virtual machine to do everything we want.

So let's create a [.github/workflows/openapi-to-kong-config-full-setup.yml](.github/workflows/openapi-to-kong-config-full-setup.yml) to execute our Maven build (and don't forget to add `--no-transfer-progress` to the Maven command, since otherwise our build logs get polluted with downloads):

```yaml
name: openapi-to-kong-config-full-setup

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install Node/npm for Inso
        uses: actions/setup-node@v2
        with:
          node-version: '14'

      - name: Install Java & Maven
        uses: actions/setup-java@v1
        with:
          java-version: 15

      - name: Install Inso and run Maven build, that'll generate OpenAPI spec and Kong declarative config later needed for Docker Compose
        run: |
          echo "Install insomnia-inso (Inso CLI) which is needed by our Maven build process later"
          npm install insomnia-inso

          echo "Show Inso version"
          node_modules/insomnia-inso/bin/inso --version

          echo "Build Spring Boot app with Maven"
          echo "This also generates OpenAPI spec file at weatherbackend/target/openapi.json and the Kong declarative config at kong/kong.yml from the OpenAPI spec with Inso CLI"
          mvn clean verify --file weatherbackend/pom.xml --no-transfer-progress -Dinso.executable.path=node_modules/insomnia-inso/bin/inso
```

I also ran into another problem: GitHub Actions couldn't find the `inso` executable (see this build) and produced the following error:

```
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:3.0.0:exec (execute-inso-cli) on project weatherbackend: Command execution failed.: Cannot run program "inso" (in directory "/home/travis/build/jonashackt/spring-boot-openapi-kong/weatherbackend"): error=2, No such file or directory -> [Help 1]
```

So I thought about directly defining the full path to the `inso` executable inside our Maven build.
But using the `exec-maven-plugin` - and not the command line directly - this isn't that easy anymore. But maybe we can define a default - and override it on our CI system? I looked [at this SO Q&A](https://stackoverflow.com/questions/34746347/pom-xml-environment-variable-with-default-fallback) and finally [at this answer](https://stackoverflow.com/a/13709976/4964553). So let's try it! We alter our [weatherbackend/pom.xml](weatherbackend/pom.xml) slightly to use a new property called `${inso.executable.path}`:

```xml
<properties>
    ...
    <inso.executable.path>inso</inso.executable.path>
</properties>
...
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.0.0</version>
    ...
    <configuration>
        <executable>${inso.executable.path}</executable>
        <arguments>
            ...
```

With this change we should be able to run our normal `mvn verify` locally - and a special `mvn verify -DskipTests=true -Dinso.executable.path=inso-special-path` on GitHub Actions like this:

```
mvn clean verify --file weatherbackend/pom.xml --no-transfer-progress -Dinso.executable.path=node_modules/insomnia-inso/bin/inso
```

Now [our build works like a charm](https://travis-ci.com/github/jonashackt/spring-boot-openapi-kong/builds/198466347) :)

At the end we also look into the generated [kong/kong.yml](kong/kong.yml):

```shell script
echo "Show kong.yml"
cat kong/kong.yml
```
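By the way, the same default-with-override idea also works nicely on the shell level if you want a local helper around the Inso call - a hypothetical sketch (`regen_kong` and the `INSO_BIN` variable are made-up names here, not part of Inso or the project):

```shell
# Made-up convenience wrapper around the Inso CLI call used in this project.
# INSO_BIN falls back to plain `inso` on the PATH, but can be overridden
# (e.g. with node_modules/insomnia-inso/bin/inso) - mirroring the Maven
# property default with -D override shown above.
regen_kong() {
  local inso_bin="${INSO_BIN:-inso}"
  local spec="${1:-weatherbackend/target/openapi.json}"
  local out="${2:-kong/kong.yml}"
  "$inso_bin" generate config "$spec" --output "$out" --type declarative
}
# usage: regen_kong
# or:    INSO_BIN=node_modules/insomnia-inso/bin/inso regen_kong
```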
So let's add both to our [.github/workflows/openapi-to-kong-config-full-setup.yml](.github/workflows/openapi-to-kong-config-full-setup.yml): ```yaml - name: Fire up Docker Compose setup with Kong & do some checks run: | docker-compose up -d echo "Let's wait until Kong is available (we need to improve this)" sleep 10 echo "Also have a look into the Kong & Spring Boot app logs" docker ps -a docker-compose logs kong docker-compose logs weatherbackend echo "Have a look at the /services endpoint of Kong's admin API" curl http://localhost:8001/services echo "Verify that we can call our Spring Boot service through Kong" curl http://localhost:8000/weather/MaxTheKongUser echo "Again look into Kong logs to see the service call" docker-compose logs kong ``` Right after starting `docker-compose up` we need to wait for the containers to spin up. Currently [I use `sleep` here](https://stackoverflow.com/a/47439672/4964553) - it's dead simply, but it works right now :) We then also have a look into the Kong & Spring Boot app logs with `docker-compose logs kong` & `docker-compose logs weatherbackend`. After checking the service admin API with `curl http://localhost:8001/services` we finally curl for our service through Kong with `curl http://localhost:8000/weather/MaxTheKongUser`. 
## Links

https://blog.codecentric.de/en/2017/11/api-management-kong/

https://docs.konghq.com/hub/

#### Spring & OpenAPI

https://github.com/springdoc/springdoc-openapi-maven-plugin

https://stackoverflow.com/questions/59616165/what-is-the-function-of-springdoc-openapi-maven-plugin-configuration-apidocsurl

https://www.baeldung.com/spring-rest-openapi-documentation

#### Insomnia (Core) & Insomnia Designer

> Insomnia (Core) is a graphical REST client - just like Postman

https://blog.codecentric.de/2020/02/testen-und-debuggen-mit-insomnia/

Since 2019 Insomnia (Core) is part of Kong - and is the basis for Kong Studio (Enterprise) https://github.com/Kong/insomnia

> Insomnia Designer is an OpenAPI / Swagger Desktop App

https://blog.codecentric.de/en/2020/06/introduction-to-insomnia-designer/

> you can preview your specification using Swagger UI

https://medium.com/@rmharrison/an-honest-review-of-insomnia-designer-and-insomnia-core-62e24a447ce

With https://insomnia.rest/plugins/insomnia-plugin-kong-bundle/ you can deploy API definitions into the Kong API gateway.

To see the integration in action, have a look at https://blog.codecentric.de/en/2020/09/offloading-and-more-from-reedelk-data-integration-services-through-kong-enterprise/ and https://github.com/codecentric/reedelk-bookingintegrationservice

#### decK

> declarative configuration and drift detection for Kong

https://blog.codecentric.de/en/2019/12/kong-api-gateway-declarative-configuration-using-deck-and-visualizations-with-konga/

https://github.com/Kong/deck

#### Deployment incl. Konga

Official Kong docker-compose template: https://github.com/Kong/docker-kong/blob/master/compose/docker-compose.yml

https://blog.codecentric.de/en/2019/09/api-management-kong-update/

https://github.com/danielkocot/kong-blogposts/blob/master/docker-compose.yml

First question: What is this `kong-migration` Docker container about?
[Daniel answers it](https://blog.codecentric.de/en/2019/09/api-management-kong-update/):

> the kong-migration service is used for the initial generation of the objects in the kong-database. Unfortunately, the configuration of the database is not managed by the kong service.

Idea: Database-less deployment possible for showcases?

> Konga: graphical dashboard

https://github.com/pantsel/konga#running-konga

https://blog.codecentric.de/en/2019/12/kong-api-gateway-declarative-configuration-using-deck-and-visualizations-with-konga/

Konga with its own DB: https://github.com/asyrjasalo/kongpose/blob/master/docker-compose.yml

Konga on the same DB as Kong: https://github.com/abrahamjoc/docker-compose-kong-konga/blob/master/docker-compose.yml
1
EnjoyAndroid/RecyclerviewNestedRecyclerview
An example of a recyclerview nested recyclerview
nested recyclerview recyclerview-nested-recyclerview
# RecyclerviewNestedRecyclerview An example of a RecyclerView nested inside another RecyclerView. A recent project used a layout with two nested RecyclerViews, i.e. each item of the outer RecyclerView is itself a RecyclerView. Two fairly typical problems came up: 1. when the items are oriented vertically, the parent RecyclerView jumps on first load; 2. when the items are oriented horizontally, the child RecyclerViews reset their scroll position after the parent RecyclerView is scrolled up and down. This article focuses on solving these two problems. First, a look at the two problems in action: ![before-fix.gif](http://upload-images.jianshu.io/upload_images/2032177-caf62e812c3c8243.gif?imageMogr2/auto-orient/strip) And the result after the fix: ![after-fix.gif](http://upload-images.jianshu.io/upload_images/2032177-546c6cce03f9b4f4.gif?imageMogr2/auto-orient/strip) Source-code walkthrough: http://www.jianshu.com/p/91b6ef2c4c29
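A common fix for the second problem (horizontal child RecyclerViews resetting after the parent scrolls) is to cache each row's scroll offset when its view is recycled and restore it on rebind. A language-agnostic sketch of that bookkeeping follows - in a real adapter this logic lives in `onViewRecycled()`/`onBindViewHolder()`, and the names here are invented for illustration:

```python
class RowScrollCache:
    """Remember each row's horizontal scroll offset across view recycling.

    Illustration only: in the real Android adapter, on_view_recycled would
    read scrollX from the recycled child RecyclerView, and on_bind would
    scroll the freshly bound child to the cached offset.
    """

    def __init__(self):
        self.offsets = {}  # adapter position -> cached scrollX in pixels

    def on_view_recycled(self, position, scroll_x):
        self.offsets[position] = scroll_x

    def on_bind(self, position):
        # rows never seen before start at offset 0
        return self.offsets.get(position, 0)


cache = RowScrollCache()
cache.on_view_recycled(3, 240)   # row 3 was scrolled 240px when recycled
print(cache.on_bind(3), cache.on_bind(5))  # -> 240 0
```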
1
jonashackt/cxf-spring-cloud-netflix-docker
Example project combining Spring Boot apps with Spring Cloud Netflix (Eureka, Zuul, Feign) & cxf-spring-boot-starter
docker eureka eureka-server feign spring-boot spring-cloud-netflix web-services zuul
cxf-spring-cloud-netflix-docker ====================================================================================== [![Build Status](https://github.com/jonashackt/cxf-spring-cloud-netflix-docker/workflows/cxf-spring-cloud-netflix-docker/badge.svg)](https://github.com/jonashackt/cxf-spring-cloud-netflix-docker/actions) [![License](http://img.shields.io/:license-mit-blue.svg)](https://github.com/jonashackt/spring-boot-buildpack/blob/master/LICENSE) [![renovateenabled](https://img.shields.io/badge/renovate-enabled-yellow)](https://renovatebot.com) Spring Boot & Spring Cloud compatibility: https://spring.io/projects/spring-cloud ## Example project combining Spring Boot apps with Spring Cloud Netflix & Docker It was created as a showcase for this blog post: [Scaling Spring Boot Apps on Docker Windows Containers with Ansible: A Complete Guide incl Spring Cloud Netflix and Docker Compose](https://blog.codecentric.de/en/2017/05/ansible-docker-windows-containers-scaling-spring-cloud-netflix-docker-compose/) and is used by this Ansible repository: [ansible-windows-docker-springboot](https://github.com/jonashackt/ansible-windows-docker-springboot) It's roughly structured as shown in this sketch: ![multiple-apps-spring-boot-cloud-netflix](https://blog.codecentric.de/files/2017/05/multiple-apps-spring-boot-cloud-netflix-768x543.png) ### Usage As the whole example application is Dockerized, just do a `docker-compose up -d` and all apps will be started for you. Run a `docker ps` to see what's going on. Then enter http://localhost:8761/ to see all services registering in Eureka.
The zuul-edgeservice proxies the weatherservice (by retrieving routes dynamically from eureka-serviceregistry), which itself calls the weatherbackend. Example: http://localhost:8080/api/weatherservice/soap There's a client application inside this project too, so you can fire requests at the weather-service with that one too - and it should be clear how to implement a consumer :) For that, just fire up the [weatherclient](https://github.com/jonashackt/cxf-spring-cloud-netflix-docker/tree/master/weatherclient) - it should be right there after a `mvn clean package` ran inside the root directory. ### Spring Cloud 2.x Upgrade Renamed starters: https://github.com/spring-projects/spring-cloud/wiki/Spring-Cloud-Edgware-Release-Notes ##### Error: bean overriding ``` *************************** APPLICATION FAILED TO START *************************** Description: The bean 'weatherServiceClient', defined in de.jonashackt.WeatherclientTestApplication, could not be registered. A bean with that name has already been defined in class path resource [de/jonashackt/configuration/WeatherclientConfiguration.class] and overriding is disabled. ``` See https://stackoverflow.com/questions/51367566/trouble-when-changing-spring-boot-version-from-2-0-3-release-to-2-1-0-build-snap, bean overriding (DI) isn't the default behavior anymore and you have to use: ``` spring.main.allow-bean-definition-overriding: true ``` inside `src/test/resources/application.yml`. ### Links http://projects.spring.io/spring-cloud/ http://cloud.spring.io/spring-cloud-static/Dalston.RELEASE/ https://github.com/sqshq/PiggyMetrics https://github.com/kbastani/spring-cloud-microservice-example
1
brunoborges/javaee7-jms-websocket-example
Example of a Java EE 7 application that integrates JMS 2.0, WebSockets 1.0, CDI events, and EJB 3
null
Java EE 7 Example for JMS and WebSockets integration ========================= This application demonstrates a full-duplex scenario using WebSockets and JMS, with a fully functional server-side asynchronous push, using CDI events and EJB. A detailed step-by-step walkthrough was blogged here: https://blogs.oracle.com/brunoborges/entry/integrating_websockets_and_jms_with # How to run Download and install JDK 8 and GlassFish 4.1. Open the project in NetBeans or build the package with Maven. Deploy on GlassFish through NetBeans or via the admin console.
1
gregwhitaker/springboot-rsocketjwt-example
Example of using JWT with RSocket and Spring Boot
jwt reactive reactive-programming reactive-streams rsocket rsocket-java spring-boot spring-messaging spring-security
# springboot-rsocketjwt-example ![Build](https://github.com/gregwhitaker/springboot-rsocketjwt-example/workflows/Build/badge.svg) An example of using [JWT](https://jwt.io/), for authentication and authorization, with [RSocket](http://rsocket.io) and Spring Boot. This example consists of an RSocket service, `hello-service`, that returns hello messages based upon the method called and the supplied JWT token from the `hello-client` application. The example assumes that you have already retrieved valid JWT tokens from your choice of Authorization Server. To mimic this, a `token-generator` project has been included to get valid tokens for use with this demo. ## Building the Example Run the following command to build the example: ./gradlew clean build ## Running the Example Follow the steps below to run the example: 1. Run the following command to generate the admin and user JWT tokens to use for authenticating with the `hello-service`: ./gradlew :token-generator:run If successful, you will see the tokens displayed in the console: > Task :token-generator:run Generated Tokens ================ Admin: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6ImhlbGxvLXNlcnZpY2UiLCJzY29wZSI6IkFETUlOIiwiaXNzIjoiaGVsbG8tc2VydmljZS1kZW1vIiwiZXhwIjoxNTc2ODY4MjE0LCJqdGkiOiIyYjgwOTUwMC0wZWJlLTQ4MDEtOTYwZS1mZjc2MGQ3MjE0ZGUifQ.fzWzcvelcaXooMa5C3w7BI4lJxcruZiA7TwFyPQuH1k User: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoiaGVsbG8tc2VydmljZSIsInNjb3BlIjoiVVNFUiIsImlzcyI6ImhlbGxvLXNlcnZpY2UtZGVtbyIsImV4cCI6MTU3Njg2ODIxNCwianRpIjoiOGQzZDE2YWUtZTg5MS00Nzc4LWFjNWEtN2NhY2ExOGEwMTYwIn0.Tlg1WxTcrMliLOBmBRSPR33C3xfbc6KUEkEZit928tE 2. 
In a new terminal, run the following command to start the `hello-service`: ./gradlew :hello-service:bootRun If successful, you will see a message stating the service has been started in the console: 2019-12-20 10:33:59.223 INFO 1889 --- [ main] e.service.hello.HelloServiceApplication : Started HelloServiceApplication in 1.185 seconds (JVM running for 1.546) Now you are ready to start calling the `hello-service`. 3. In a new terminal, run the following command to call the unsecured `hello` endpoint: ./gradlew :hello-client:bootRun --args="hello Bob" Notice that the request was successful and you received a hello response: 2019-12-20 10:37:24.282 INFO 1919 --- [ main] e.client.hello.HelloClientApplication : Response: Hello, Bob! - from unsecured method 4. Next, run the following command to call the `hello.secure` method which requires that the user is authenticated: ./gradlew :hello-client:bootRun --args="hello.secure Bob" You will receive an `io.rsocket.exceptions.ApplicationErrorException: Access Denied` exception because you have not supplied a valid JWT token. 5. Now, run the same command again, but this time supply the `User` JWT token you generated earlier: ./gradlew :hello-client:bootRun --args="--token {User Token Here} hello.secure Bob" You will now receive a successful hello message because you have authenticated with a valid JWT token: 2019-12-20 10:42:14.371 INFO 1979 --- [ main] e.client.hello.HelloClientApplication : Response: Hello, Bob! - from secured method 6. Next, let's test authorization by calling the `hello.secure.adminonly` endpoint with the `User` token by running the following command: ./gradlew :hello-client:bootRun --args="--token {User Token Here} hello.secure.adminonly Bob" You will receive an `io.rsocket.exceptions.ApplicationErrorException: Access Denied` exception because while you are authenticated, you are not authorized to access the method. 7. 
Finally, let's call the `hello.secure.adminonly` endpoint again, but this time use the `Admin` token by running the following command: ./gradlew :hello-client:bootRun --args="--token {Admin Token Here} hello.secure.adminonly Bob" You will receive a successful hello message because you have supplied a valid JWT token with admin scope: 2019-12-20 10:47:56.047 INFO 2054 --- [ main] e.client.hello.HelloClientApplication : Response: Hello, Bob! - from secured method [admin only] ## Bugs and Feedback For bugs, questions, and discussions please use the [Github Issues](https://github.com/gregwhitaker/springboot-rsocketjwt-example/issues). ## License MIT License Copyright (c) 2019 Greg Whitaker Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
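The admin/user distinction exercised above comes from the `scope` claim inside the JWT payloads shown earlier. As a plain-Python illustration (not part of the example project, and deliberately skipping signature verification, which a real service must never do), the claims of such a token can be inspected like this:

```python
import base64
import json


def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.

    JWTs are three dot-separated, URL-safe base64 segments without
    padding; we restore the padding before decoding the middle one.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# The Admin token printed by the token-generator above:
admin_token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6ImhlbGxvLXNlcnZpY2UiLCJzY29wZSI6IkFETUlOIiwiaXNzIjoiaGVsbG8tc2VydmljZS1kZW1vIiwiZXhwIjoxNTc2ODY4MjE0LCJqdGkiOiIyYjgwOTUwMC0wZWJlLTQ4MDEtOTYwZS1mZjc2MGQ3MjE0ZGUifQ.fzWzcvelcaXooMa5C3w7BI4lJxcruZiA7TwFyPQuH1k"

claims = decode_jwt_payload(admin_token)
print(claims["sub"], claims["scope"])  # -> admin ADMIN
```

The `hello-service` checks exactly this `scope` claim (after verifying the signature) to decide whether `hello.secure.adminonly` may be called.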
1
idris/spring-example-gae
Example of Spring 3.0 on Google App Engine
null
null
1
enpassio/Databinding
Various examples on databinding which can help not only understand how to get started with it but it also shows how this can be used in complex and real use cases
null
# Android Data Binding Samples This repository contains three samples demonstrating the use of the Android Data Binding Library. All three samples have both Java and Kotlin versions. The three samples are of varying difficulty levels. We tried to avoid unnecessary complexity and minimize the use of external libraries, in case some learners might not be familiar with those libraries. But we also wanted to show something more or less realistic. The first sample, Data Binding with RecyclerView, is intentionally kept very simple so that you can concentrate on learning the basics of data binding. It contains a single activity with two fragments: the first fragment shows a list of items (a hardcoded list of products) within a RecyclerView, and the second fragment shows the details of the chosen product. In this sample, we demonstrated how to set up data binding in XML layouts, how to access the binding instance from Java/Kotlin code, how to use imports and helper methods, how to implement data binding in RecyclerView item layouts and how to set an item click listener with data binding. The second one, Data Binding with News Api, again has two fragments, one showing a list of articles and the second one showing details of the chosen article. However this one makes a network call to fetch articles from News Api and uses some Android Jetpack components, like ViewModel and LiveData. Regarding data binding, besides the similar concepts that were used in the first sample, this sample also demonstrates the use of custom binding adapters and the integration of ViewModel and LiveData with data binding. And the third one, Two-Way Databinding, demonstrates the use of two-way data binding. It is an inventory app, which uses Room for data persistence. This one again uses ViewModel, LiveData and the Repository pattern. It also demonstrates the use of ObservableFields and converter methods.
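The ObservableFields mentioned for the third sample boil down to a value that notifies registered listeners whenever it changes, which is what lets the UI and the model stay in sync in both directions. A minimal language-agnostic sketch of that mechanism (plain Python, not the Android library's actual API):

```python
class Observable:
    """Tiny observable value: notify listeners on every changed set().

    Sketch of the idea behind ObservableField; the real Android class
    additionally integrates with generated binding classes.
    """

    def __init__(self, value):
        self._value = value
        self._listeners = []

    def observe(self, fn):
        self._listeners.append(fn)

    def get(self):
        return self._value

    def set(self, value):
        # only notify on an actual change, like the library does
        if value != self._value:
            self._value = value
            for fn in self._listeners:
                fn(value)


model = Observable("3 items")
view_text = []
model.observe(view_text.append)  # model -> view direction
model.set("4 items")             # a user edit would also call set() (view -> model)
print(view_text)  # -> ['4 items']
```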
0
yole/comparisonChainGen
Example plugin created during the "Live Coding an IntelliJ IDEA Plugin from Scratch" webinar.
null
null
1
spektom/realtime-dashboard-example
This is a real-time dashboard example using Spark Streaming and Node.js
dashboard-application flink kafka meetup rethinkdb spark spark-streaming
coding-with-af-spark-streaming =============================== Spark Streaming makes it easy to build scalable fault-tolerant streaming applications. At AppsFlyer, we use Spark for many of our offline processing services. Spark Streaming joined our technology stack a few months ago for real-time work flows, reading directly from Kafka to provide value to our clients in near-real-time. In this session we will code a real-time dashboard application based on Spark Streaming technology. You will learn how to collect events from Kafka, aggregate them by time window and present aggregated insights in a dashboard. ## Screenshot ![Screenshot](https://github.com/spektom/coding-with-af-spark-streaming/raw/master/screenshot.gif) ## Architecture The following scheme presents what we are going to create in this session: Read events +------------+ +------------------------+ | | | | | Kafka | -----------> | Spark Streaming Job | | | | | +------------+ +------------------------+ | | Update V +---------------+ +--------+------+ | | | | Push! | Real-time | | RethinkDB | ---------> | Dashboard | | | | _.. | +---------------+ | (../ \) | +---------------+ 1. We'll read real-time events from Kafka queue. 2. We'll aggregate events in Spark streaming job by some sliding window. 3. We'll write aggregated events to RethinkDB. 4. RethinkDB will push updates to the real-time dashboard. 5. Real-time dashboard will present current status. ## Prerequisites 1. Laptop with at least 4GB of RAM. 2. Make sure Virtualization is enabled in BIOS. 3. [VirtualBox v5.x](https://www.virtualbox.org/wiki/Downloads) or greater installed. 4. [Vagrant v1.8.1](https://www.vagrantup.com/downloads.html) or greater installed. ## Preparation & Setup Please do these steps **prior** to coming to the Meetup: 1. Clone or download this repository. 2. Run `vagrant up && vagrant halt` inside the `vagrant/` directory. Please be patient, this process may take some time... ## Running the Application 1. 
Go inside `vagrant/` directory. 2. Boot the Vagrant virtual machine: `vagrant up` 3. Start the Spark Streaming process: `vagrant ssh -c /projects/aggregator/run.sh` 4. Start the Dashboard application in another terminal window: `vagrant ssh -c /projects/dashboard/run.sh` If everything was successful, you should be able to access your Dashboard application here: [http://192.168.10.16:3000](http://192.168.10.16:3000) ## Cleaning Up After the meetup, you may want to close running virtual machine. To do so, run inside `vagrant/` directory: vagrant suspend To destroy the virtual machine completely, run: vagrant destroy
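Step 2 of the flow above - aggregating events by a time window - can be sketched in plain Python. This is only an illustration of the windowing idea (a simplified tumbling-window variant, not the actual Spark Streaming job; the event fields are invented):

```python
from collections import Counter


def count_by_window(events, window_sec=10):
    """Count (timestamp, key) events per fixed time window.

    Each event falls into the window starting at the largest multiple of
    window_sec not exceeding its timestamp.
    """
    counts = Counter()
    for ts, key in events:
        window_start = ts - (ts % window_sec)
        counts[(window_start, key)] += 1
    return dict(counts)


events = [(3, "click"), (7, "view"), (12, "click"), (14, "click")]
print(count_by_window(events))
# -> {(0, 'click'): 1, (0, 'view'): 1, (10, 'click'): 2}
```

In the real pipeline, Spark Streaming performs this aggregation over a sliding window on the Kafka stream and writes the per-window counts to RethinkDB, which then pushes them to the dashboard.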
1
lijingyao/microservice_coffeeshop
An example of microservice architecture with SpringBoot,SpringCloud-Eureka,JPA and so on.You can build docker image with gradle.
null
# microservice_coffeeshop ## Overview This is a simple coffee-shop example built on a microservice architecture. The main microservice projects are: 1. service_user maintains the domain model for the coffee shop's customers. 2. service_item maintains the shop's main products, including the domain model for coffee drinks. 3. service_trade contains the domain models for orders and sub-orders. The main technologies used: * Service discovery via Eureka Server. The Eureka Server Docker image can be downloaded from Docker Hub. * Data aggregation and forwarding between microservices lives in the API Gateway project; you can check out the code from the [API-Gateway](https://github.com/lijingyao/gateway_coffeeshop) project. * Build tools: Gradle and the Gradle wrapper (gradlew) * Storage: MySQL (InnoDB) * Frameworks involved: SpringMVC, SpringBoot, SpringCloud-Netflix, Hibernate, RxJava (in the gateway project) * Inter-service communication: RESTful APIs * Underlying design principle: DDD (Domain-Driven Design) Other: encoding: utf-8 ## Deployment ### Infrastructure The following local containers/services need to be started. Note: the services below can run as Docker containers or be installed locally. Replace the DB, Eureka and Nexus addresses (and credentials) in the sample code with your local configuration. 1. MySQL: the official Docker image works: [docker mysql image](https://hub.docker.com/_/mysql/) 2. Eureka: image: [netflixoss-eureka image](https://hub.docker.com/r/netflixoss/eureka/) 3. Docker registry for images: [docker registry image](https://hub.docker.com/_/registry/) 4. Nexus for internal and third-party artifacts: [docker nexus image](https://hub.docker.com/r/sonatype/nexus/) ### Deploying the microservices #### Building the Docker images Run the Gradle task to build the Docker images and push them to the local Docker registry: ``` ./gradlew buildDocker -x test ``` Then run **docker images** to inspect the images. #### Running the Docker images You can run them directly with docker run, or do simple container orchestration with docker-compose: ``` docker run -p :8080 -t localhost:5000/item:1.0.0 --name service-item ``` If you don't run them as Docker containers, you can also run each microservice's Spring Boot instance locally. ## Exercises for the reader 1. Use the Java 8 *Predicate* API to validate input parameters and business logic. In DDD there is both parameter validation and validation of the domain model's own invariants. This demo project implements neither kind of validation. Food for thought: what would be a cleaner, more cohesive way to implement basic validation? 2. The *AdditionalTasteVO* contains a DDD **value object** design; is there a better way to model price calculation? Currently only the surcharge for an *espresso* is calculated - readers can extend the pricing model further. ## Solutions 1. After the code update, note the new *validators* package under the service package, which encapsulates validators for the various business rules. Taking *UserValidator* as an example, it mainly adds validation of user-registration input. Intermediate business results produced along the way can also be expressed with **Predicate**. A unified validator factors out similar logic and reduces generic parameter checking in service code, making the core domain logic easier to maintain. 2. The new code is in the *manager* package of service_item. **PriceSelector** acts as the situation selector of the strategy pattern; in the example the concrete strategy is chosen from the current system time, while real business rules would be more complex. The example has two strategies, Summer and Winter: the coffee add-on price differs by season, and the strategy pattern cleanly decouples the different calculation models. If the calculation flow can be reused, a **Template** pattern can also be combined with the strategies.
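The seasonal pricing from solution 2 (a PriceSelector choosing a Summer or Winter strategy) can be sketched in a few lines. This is a language-agnostic illustration only - the real project implements it in Java inside service_item, and the surcharge values and month boundaries here are invented:

```python
from datetime import date


# Two interchangeable pricing strategies (hypothetical surcharges):
def summer_price(base):
    return base + 0.5  # small seasonal surcharge on iced add-ons


def winter_price(base):
    return base + 1.0  # larger surcharge for hot add-ons


def select_strategy(today: date):
    """Pick a strategy from the current season, like PriceSelector does
    from the system time (simplified: April-September counts as summer)."""
    return summer_price if 4 <= today.month <= 9 else winter_price


price = select_strategy(date(2021, 7, 1))(3.0)
print(price)  # -> 3.5
```

Swapping the season check for any other business condition changes which calculation runs without touching the strategies themselves - the decoupling the solution describes.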
1
jonashackt/spring-boot-graalvm
This example project shows how to compile a Webflux based Spring Boot application into a Native App using GraalVM Native Image locally & on GitHub Actions with & without Docker
docker github-actions graalvm graalvm-native-image heroku heroku-container-registry heroku-docker java native-image native-image-maven-plugin spring-boot spring-graal substratevm travis travis-ci
# spring-boot-graalvm [![Build Status](https://github.com/jonashackt/spring-boot-graalvm/workflows/native-image-compile/badge.svg)](https://github.com/jonashackt/spring-boot-graalvm/actions) [![License](http://img.shields.io/:license-mit-blue.svg)](https://github.com/jonashackt/spring-boot-graalvm/blob/master/LICENSE) [![renovateenabled](https://img.shields.io/badge/renovate-enabled-yellow)](https://renovatebot.com) [![versionspringboot](https://img.shields.io/badge/dynamic/xml?color=brightgreen&url=https://raw.githubusercontent.com/jonashackt/spring-boot-graalvm/master/pom.xml&query=%2F%2A%5Blocal-name%28%29%3D%27project%27%5D%2F%2A%5Blocal-name%28%29%3D%27parent%27%5D%2F%2A%5Blocal-name%28%29%3D%27version%27%5D&label=springboot)](https://github.com/spring-projects/spring-boot) [![versionspring-graalvm-native](https://img.shields.io/badge/dynamic/xml?color=brightgreen&url=https://raw.githubusercontent.com/jonashackt/spring-boot-graalvm/master/pom.xml&query=%2F%2A%5Blocal-name%28%29%3D%27project%27%5D%2F%2A%5Blocal-name%28%29%3D%27properties%27%5D%2F%2A%5Blocal-name%28%29%3D%27spring-native.version%27%5D&label=spring-native)](https://github.com/spring-projects-experimental/spring-graalvm-native) [![versionjava](https://img.shields.io/badge/graalvm_ce-21.2.0_JDK11-orange.svg?logo=java)](https://www.graalvm.org/) [![Deployed on Heroku](https://img.shields.io/badge/heroku-deployed-blueviolet.svg?logo=heroku&)](https://spring-boot-graal.herokuapp.com/hello) [![Pushed to Docker Hub](https://img.shields.io/badge/docker_hub-released-blue.svg?logo=docker)](https://hub.docker.com/r/jonashackt/spring-boot-graalvm) This example project shows how to compile a Webflux based Spring Boot application into a Native App using GraalVM Native Image > This project here shows a technical demo of what's possible right now - stable GraalVM Native Image support for Spring Boot could be expected with [Spring Frameworks 5.3 release planned in October 
2020](https://spring.io/blog/2019/12/03/spring-framework-maintenance-roadmap-in-2020-including-4-3-eol), on which Spring Boot 2.4 will be based. [![asciicast](https://asciinema.org/a/313688.svg)](https://asciinema.org/a/313688) A live deployment is available on Heroku: https://spring-boot-graal.herokuapp.com/hello This project is used as example in some articles: * [blog.codecentric.de/en/2020/05/spring-boot-graalvm/](https://blog.codecentric.de/en/2020/05/spring-boot-graalvm/) * [blog.codecentric.de/en/2020/06/spring-boot-graalvm-docker-heroku/](https://blog.codecentric.de/en/2020/06/spring-boot-graalvm-docker-heroku/) * [blog.codecentric.de/en/2020/06/spring-boot-graalvm-native-image-maven-plugin/](https://blog.codecentric.de/en/2020/06/spring-boot-graalvm-native-image-maven-plugin/) [![javamagazin-092020-cover-small](screenshots/javamagazin-092020-cover-small.jpg)](https://public.centerdevice.de/41c5481e-5782-4c0e-bf7b-a62ec68d3854) ## Table of Contents * [New to GraalVM with Spring Boot?](#new-to-graalvm-with-spring-boot) * [Graal Native Image & SpringBoot](#graal-native-image--springboot) * [Dynamic Graal Native Image configuration with @AutomaticFeature](#dynamic-graal-native-image-configuration-with-automaticfeature) * [Install GraalVM with SDKMAN](#install-graalvm-with-sdkman) * [Install GraalVM Native Image](#install-graalvm-native-image) * [Create a simple WebFlux Reactive REST Spring Boot app](#create-a-simple-webflux-reactive-rest-spring-boot-app) * [Make Spring Boot app Graal Native Image friendly](#make-spring-boot-app-graal-native-image-friendly) * [Relocate Annotation classpath scanning from runtime to build time](#relocate-annotation-classpath-scanning-from-runtime-to-build-time) * [Disable usage of CGLIB proxies](#disable-usage-of-cglib-proxies) * [Detect Autoconfiguration](#detect-autoconfiguration) * [Get Spring Graal @AutomaticFeature](#get-spring-graal-automaticfeature) * [Set start-class element in pom.xml](#set-start-class-element-in-pomxml) 
* [Craft a compile.sh script](#craft-a-compilesh-script) * [Run the compile.sh script & start your native Spring Boot App](#run-the-compilesh-script--start-your-native-spring-boot-app) * [Doing all the steps together using the native-image-maven-plugin](#doing-all-the-steps-together-using-the-native-image-maven-plugin) * [Tackling the 'No default constructor found Failed to instantiate java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.()' error](#tackling-the-no-default-constructor-found-failed-to-instantiate-javalangnosuchmethodexception-iojonashacktspringbootgraalspringboothelloapplication-error) * [Comparing Startup time & Memory footprint](#comparing-startup-time--memory-footprint) * [Build and Run your Native Image compilation on a Cloud-CI provider like TravisCI](#build-and-run-your-native-image-compilation-on-a-cloud-ci-provider-like-travisci) * [Prevent the 'java.lang.UnsatisfiedLinkError: no netty_transport_native_epoll_x86_64 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]' error](#prevent-the-javalangunsatisfiedlinkerror-no-netty_transport_native_epoll_x86_64-in-javalibrarypath-usrjavapackageslib-usrlib64-lib64-lib-usrlib-error) * [Tackling the 'There was an error linking the native image /usr/bin/ld: final link failed: Memory exhausted' error](#tackling-the-there-was-an-error-linking-the-native-image-usrbinld-final-link-failed-memory-exhausted-error) * [Build and Run your Native Image compilation on GitHub Actions](#build-and-run-your-native-image-compilation-on-github-actions) * [Use Docker to compile a Spring Boot App with GraalVM](#use-docker-to-compile-a-spring-boot-app-with-graalvm) * [Tackling 'Exception java.lang.OutOfMemoryError in thread "native-image pid watcher"' error](#tackling-exception-javalangoutofmemoryerror-in-thread-native-image-pid-watcher-error) * [Run Spring Boot Native Apps in Docker](#run-spring-boot-native-apps-in-docker) * [Running Spring Boot Graal 
Native Apps on Heroku](#running-spring-boot-graal-native-apps-on-heroku) * [Configure the Spring Boot Native app's port dynamically inside a Docker container](#configure-the-spring-boot-native-apps-port-dynamically-inside-a-docker-container) * [Use Docker to run our Spring Boot Native App on Heroku](#use-docker-to-run-our-spring-boot-native-app-on-heroku) * [Work around the Heroku 512MB RAM cap: Building our Dockerimage with TravisCI](#work-around-the-heroku-512mb-ram-cap-building-our-dockerimage-with-travisci) * [Tackling 'Error: Image build request failed with exit status 137' with the -J-Xmx parameter](#tackling-error-image-build-request-failed-with-exit-status-137-with-the--j-xmx-parameter) * [Pushing and Releasing our Dockerized Native Spring Boot App on Heroku Container Infrastructure](#pushing-and-releasing-our-dockerized-native-spring-boot-app-on-heroku-container-infrastructure) * [Pushing and Releasing our Dockerized Native Spring Boot App on Heroku Container Infrastructure using GitHub Actions](#pushing-and-releasing-our-dockerized-native-spring-boot-app-on-heroku-container-infrastructure-using-github-actions) * [Autorelease on Docker Hub with TravisCI](#autorelease-on-docker-hub-with-travisci) * [Links](#links) # New to GraalVM with Spring Boot? Current status of Spring's Graal support: * https://github.com/spring-projects/spring-framework/wiki/GraalVM-native-image-support * https://github.com/spring-projects/spring-framework/issues/22968 > Note: [GraalVM](https://www.graalvm.org/) is an umbrella for many projects - if we want to fasten the startup and reduce the footprint of our Spring Boot projects, we need to focus on [GraalVM Native Image](https://www.graalvm.org/docs/reference-manual/native-image/). 
### Graal Native Image & SpringBoot There are some good intro resources - like the [Running Spring Boot Applications as GraalVM Native Images talk @ Spring One Platform 2019](https://www.infoq.com/presentations/spring-boot-graalvm/) by [Andy Clement](https://twitter.com/andy_clement). One can tell Native Image to initialize Java classes: ``` # at build time: native-image --initialize-at-build-time=your.package.YourClass # or at runtime: native-image --initialize-at-run-time=your.package.YourClass ``` GraalVM Native Image supports: * __static configuration:__ via JSON files * either hand-crafted or * [generated by the Graal Native Image agent](https://medium.com/graalvm/introducing-the-tracing-agent-simplifying-graalvm-native-image-configuration-c3b56c486271) * __dynamic configuration:__ with the help of a [Graal Feature interface](https://www.graalvm.org/sdk/javadoc/index.html?org/graalvm/nativeimage/hosted/Feature.html) * implementing classes are called back throughout the image build process (see https://github.com/oracle/graal/blob/master/substratevm/REFLECTION.md#manual-configuration) ### Dynamic Graal Native Image configuration with @AutomaticFeature [Andy Clement](https://twitter.com/andy_clement) also seems to lead an experimental Spring project that provides a Graal @AutomaticFeature for typical Spring applications: https://github.com/spring-projects-experimental/spring-graalvm-native There are also already some example projects available: https://github.com/spring-projects-experimental/spring-graalvm-native/tree/master/spring-graalvm-native-samples
Therefore you need to [have SDKMAN itself installed](https://sdkman.io/install): ``` curl -s "https://get.sdkman.io" | bash source "$HOME/.sdkman/bin/sdkman-init.sh" ``` If SDKMAN has been installed successfully, the following command should work: ``` $ sdk list java ================================================================================ Available Java Versions ================================================================================ Vendor | Use | Version | Dist | Status | Identifier -------------------------------------------------------------------------------- AdoptOpenJDK | | 14.0.0.j9 | adpt | | 14.0.0.j9-adpt | | 14.0.0.hs | adpt | | 14.0.0.hs-adpt | | 13.0.2.j9 | adpt | | 13.0.2.j9-adpt ... GraalVM | >>> | 20.2.0.r11 | grl | installed | 20.2.0.r11-grl | | 20.2.0.r8 | grl | | 20.2.0.r8-grl | | 20.1.0.r11 | grl | | 20.1.0.r11-grl | | 20.1.0.r8 | grl | | 20.1.0.r8-grl | | 20.0.0.r11 | grl | | 20.0.0.r11-grl | | 20.0.0.r8 | grl | | 20.0.0.r8-grl | | 19.3.1.r11 | grl | | 19.3.1.r11-grl | | 19.3.1.r8 | grl | | 19.3.1.r8-grl ... ``` The list itself is much longer and you could see the wonderful simplicity of this approach: Don't ever mess again with JDK installations! Now to install GraalVM based on JDK11, simply run: ``` sdk install java 20.2.0.r11-grl ``` SDKMAN now installs GraalVM for us. To have the correct `PATH` configuration in place, you may need to restart your console. If everything went fine, you should see `java -version` react like this: ``` $ java -version openjdk version "11.0.8" 2020-07-14 OpenJDK Runtime Environment GraalVM CE 20.2.0 (build 11.0.8+10-jvmci-20.2-b03) OpenJDK 64-Bit Server VM GraalVM CE 20.2.0 (build 11.0.8+10-jvmci-20.2-b03, mixed mode, sharing) ``` ### Install GraalVM Native Image GraalVM brings a special tool `gu` - the GraalVM updater. 
To list everything that's currently installed, run ``` $ gu list ComponentId Version Component name Origin -------------------------------------------------------------------------------- graalvm 20.2.0 GraalVM Core ``` Now to install GraalVM Native Image, simply run: ``` gu install native-image ``` After that, the `native-image` command should work for you: ``` $ native-image --version GraalVM Version 20.2.0 (Java Version 11.0.8) ``` # Create a simple WebFlux Reactive REST Spring Boot app As famous [starbuxman](https://twitter.com/starbuxman) suggests, we start at: https://start.spring.io/! As https://github.com/spring-projects/spring-framework/wiki/GraalVM-native-image-support suggests, the GraalVM Native Image support becomes better every day - so [we should choose the newest Spring Boot `2.3` Milestone release](https://github.com/spring-projects-experimental/spring-graalvm-native) available: > Spring Boot 2.3.0.M1 (you may be able to get some things working with Boot 2.2.X but not 2.1 or earlier) ![spring.start.io](screenshots/spring.start.io.png) Stable Native Image support for Spring Boot could be expected with [Spring Framework's 5.3 release planned in October 2020](https://spring.io/blog/2019/12/03/spring-framework-maintenance-roadmap-in-2020-including-4-3-eol), on which Spring Boot 2.4 will be based. Let's create a simple Spring Boot Reactive REST service.
First we need a handler like [HelloHandler](src/main/java/io/jonashackt/springbootgraal/HelloHandler.java):

```java
package io.jonashackt.springbootgraal;

import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

@Component
public class HelloHandler {

    protected static String RESPONSE_TEXT = "Hello Reactive People!";

    public Mono<ServerResponse> hello(ServerRequest serverRequest) {
        return ServerResponse
                .ok()
                .contentType(MediaType.TEXT_PLAIN)
                .body(BodyInserters.fromValue(RESPONSE_TEXT));
    }
}
```

In the reactive Spring approach we also need a router - let's create [HelloRouter](src/main/java/io/jonashackt/springbootgraal/HelloRouter.java):

```java
package io.jonashackt.springbootgraal;

import org.springframework.context.annotation.Bean;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.server.*;

@Component
public class HelloRouter {

    @Bean
    public RouterFunction<ServerResponse> route(HelloHandler helloHandler) {
        return RouterFunctions.route(
                RequestPredicates.GET("/hello").and(RequestPredicates.accept(MediaType.TEXT_PLAIN)),
                serverRequest -> helloHandler.hello(serverRequest)
        );
    }
}
```

Now we have everything in place to create a test case [HelloRouterTest](src/test/java/io/jonashackt/springbootgraal/HelloRouterTest.java) using the non-blocking [WebClient](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/reactive/function/client/WebClient.html):

```java
package io.jonashackt.springbootgraal;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.reactive.server.WebTestClient;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class HelloRouterTest {

    @Test
    void should_call_reactive_rest_resource(@Autowired WebTestClient webTestClient) {
        webTestClient.get().uri("/hello")
                .accept(MediaType.TEXT_PLAIN)
                .exchange()
                .expectBody(String.class).isEqualTo(HelloHandler.RESPONSE_TEXT);
    }
}
```

If you want to create another Spring Boot app, I can recommend the great [Getting Started Guides](https://spring.io/guides)!

# Make Spring Boot app Graal Native Image friendly

From https://github.com/spring-projects/spring-framework/wiki/GraalVM-native-image-support#experimental-support:

> "The spring-graalvm-native experimental project, created by Andy Clement, shows how it is possible to run a Spring Boot application out of the box as a GraalVM native image. It could be used as a basis for a potential upcoming official support."

So let's try this currently available implementation!

### Relocate Annotation classpath scanning from runtime to build time

The `spring-context-indexer` is an annotation processor, which pushes the scan for annotations from runtime to build time - see the docs: https://docs.spring.io/spring/docs/5.2.4.RELEASE/spring-framework-reference/core.html#beans-scanning-index:

> While classpath scanning is very fast, it is possible to improve the startup performance of large applications by creating a static list of candidates at compilation time. In this mode, all modules that are target of component scan must use this mechanism.

We could use the spring-context-indexer by importing it with Maven:

```xml
<dependencies>
	<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-context-indexer</artifactId>
		<optional>true</optional>
	</dependency>
</dependencies>
```

This would produce a `META-INF/spring.components` file containing a list of all Spring components, entities and so on.
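Such a generated file is a plain properties-style listing, mapping each discovered class to its stereotype. For this project it would contain roughly the following (assuming only the three annotated classes shown above):

```properties
io.jonashackt.springbootgraal.HelloHandler=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.HelloRouter=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.SpringBootHelloApplication=org.springframework.stereotype.Component
```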
__But we don't have to do this manually__, since the [Spring Graal @AutomaticFeature](https://github.com/spring-projects-experimental/spring-graalvm-native) (again, this is in an experimental stage right now) does this automatically for us.

The `@AutomaticFeature` will additionally chase down imported annotated classes like `@Import` - it knows which kinds of annotations lead to reflection needs at runtime, and with GraalVM those need to be registered at build time. And as resource files like `application.properties` also need to be registered at build time, the Feature covers those too.

### Disable usage of CGLIB proxies

With Spring Boot 2.2, CGLIB proxies are no longer necessary - it introduces the new `proxyBeanMethods` option to avoid CGLIB processing. Let's have a look at our [SpringBootHelloApplication.java](src/main/java/io/jonashackt/springbootgraal/SpringBootHelloApplication.java):

```java
@SpringBootApplication(proxyBeanMethods = false)
public class SpringBootHelloApplication {
    ...
}
```

The usage of [JDK proxies is supported by GraalVM](https://github.com/oracle/graal/blob/master/substratevm/DYNAMIC_PROXY.md) - they just need to be registered at build time. This is also taken care of by the [Spring Graal @AutomaticFeature](https://github.com/spring-projects-experimental/spring-graalvm-native).

### Detect Autoconfiguration

Spring Boot ships with lots of autoconfiguration projects, which only kick in when specific classes are found on the classpath. Since this is done at runtime, it wouldn't work with GraalVM. But the Spring Graal `@AutomaticFeature` also takes care of this: it simply analyses the `META-INF/spring.factories` file, where the autoconfiguration classes are listed.
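A `spring.factories` file is itself just a properties file: it maps an extension-point interface to the implementation classes Spring Boot should pick up. Schematically (the class names here are purely hypothetical):

```properties
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.autoconfigure.FooAutoConfiguration,\
com.example.autoconfigure.BarAutoConfiguration
```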
An [example of such a file](https://github.com/codecentric/cxf-spring-boot-starter/blob/master/cxf-spring-boot-starter/src/main/resources/META-INF/spring.factories) can be found in the community-driven Spring Boot starter [cxf-spring-boot-starter](https://github.com/codecentric/cxf-spring-boot-starter). The `@AutomaticFeature` again pulls the work from runtime to build time - and eliminates the need for runtime autoconfiguration.

### Get Spring Graal @AutomaticFeature

In order to compile our Spring Boot app as a native image, we need to have the latest [Spring Graal @AutomaticFeature](https://github.com/spring-projects-experimental/spring-graalvm-native) in place. Until March 2020 there was no Maven dependency available, since the project was in a very early stage of development. So I initially crafted a script `get-spring-feature.sh` that cloned and built the project for local usage.

But the Spring guys are moving fast! As there was also [a spring.io post released by starbuxman](https://spring.io/blog/2020/04/16/spring-tips-the-graalvm-native-image-builder-feature) on April 16th, I think he got [Andy Clement](https://twitter.com/andy_clement) and [Sébastien Deleuze](https://twitter.com/sdeleuze) to make a Maven dependency available on https://repo.spring.io/milestone :)

So there we go! Now we don't need to manually download and compile the @AutomaticFeature, we simply add a dependency to our [pom.xml](pom.xml):

```
<dependencies>
	<dependency>
		<groupId>org.springframework.experimental</groupId>
		<artifactId>spring-graalvm-native</artifactId>
		<version>0.7.1</version>
	</dependency>
	...
	<repositories>
		<repository>
			<id>spring-milestones</id>
			<name>Spring Milestones</name>
			<url>https://repo.spring.io/milestone</url>
		</repository>
	</repositories>
	<pluginRepositories>
		<pluginRepository>
			<id>spring-milestones</id>
			<name>Spring Milestones</name>
			<url>https://repo.spring.io/milestone</url>
		</pluginRepository>
	</pluginRepositories>
```

Be sure to also have the separate `Spring Milestones` repository definitions in place, since the library isn't available on Maven Central right now!

### Set start-class element in pom.xml

To successfully execute the `native-image` compilation, we need to provide the command with the full name of our Spring Boot main class. At first I provided a parameter for my `compile.sh` script, which we'll have a look into later on. But as the [native-image-maven-plugin](https://mvnrepository.com/artifact/com.oracle.substratevm/native-image-maven-plugin) also relies on this setting, I found it rather okay to provide this class name inside the [pom.xml](pom.xml):

```
<properties>
    ...
    <start-class>io.jonashackt.springbootgraal.SpringBootHelloApplication</start-class>
</properties>
```

After setting this class once in our `pom.xml`, we don't need to bother with this parameter again - we can read it from our pom automatically in the later steps.

### Craft a compile.sh script

I'm pretty sure that this step will not be necessary once Spring officially releases full Graal support. But right now, we need to do a little grunt work. There are great examples of working compile scripts inside the [spring-graalvm-native-samples](https://github.com/spring-projects-experimental/spring-graalvm-native/tree/master/spring-graalvm-native-samples) project.
So let's try to derive our own from that - just have a look into this project's [compile.sh](compile.sh):

```shell script
#!/usr/bin/env bash

echo "[-->] Detect artifactId from pom.xml"
ARTIFACT=$(mvn -q \
-Dexec.executable=echo \
-Dexec.args='${project.artifactId}' \
--non-recursive \
exec:exec);
echo "artifactId is '$ARTIFACT'"

echo "[-->] Detect artifact version from pom.xml"
VERSION=$(mvn -q \
-Dexec.executable=echo \
-Dexec.args='${project.version}' \
--non-recursive \
exec:exec);
echo "artifact version is '$VERSION'"

echo "[-->] Detect Spring Boot Main class ('start-class') from pom.xml"
MAINCLASS=$(mvn -q \
-Dexec.executable=echo \
-Dexec.args='${start-class}' \
--non-recursive \
exec:exec);
echo "Spring Boot Main class ('start-class') is '$MAINCLASS'"
```

The first part of the script is dedicated to defining the variables needed for the later GraalVM Native Image compilation. The variables `ARTIFACT`, `VERSION` and `MAINCLASS` can simply be derived from our [pom.xml](pom.xml) with [the help of the Maven Exec plugin](https://stackoverflow.com/a/26514030/4964553).

In the next section of the [compile.sh](compile.sh) script, we clean (aka remove) the `target` directory and build our Spring Boot app via the well-known `mvn package`:

```shell script
echo "[-->] Cleaning target directory & creating new one"
rm -rf target
mkdir -p target/native-image

echo "[-->] Build Spring Boot App with mvn package"
mvn -DskipTests package
```

After the build, the Spring Boot fat jar needs to be expanded and the classpath needs to be set to its contents. The Spring Graal AutomaticFeature also needs to be available on the classpath. This is taken care of by using all the libraries found in `BOOT-INF/lib`, since by using the Maven dependency of `spring-graalvm-native` the automatic feature also resides there.
```shell script
echo "[-->] Expanding the Spring Boot fat jar"
JAR="$ARTIFACT-$VERSION.jar"
cd target/native-image
jar -xvf ../$JAR >/dev/null 2>&1
cp -R META-INF BOOT-INF/classes

echo "[-->] Set the classpath to the contents of the fat jar (where the libs contain the Spring Graal AutomaticFeature)"
LIBPATH=`find BOOT-INF/lib | tr '\n' ':'`
CP=BOOT-INF/classes:$LIBPATH
```

Now finally the GraalVM Native Image compilation is triggered with lots of appropriate configuration options:

```shell script
GRAALVM_VERSION=`native-image --version`
echo "[-->] Compiling Spring Boot App '$ARTIFACT' with $GRAALVM_VERSION"
time native-image \
  -H:+TraceClassInitialization \
  -H:Name=$ARTIFACT \
  -H:+ReportExceptionStackTraces \
  -Dspring.graal.remove-unused-autoconfig=true \
  -Dspring.graal.remove-yaml-support=true \
  -cp $CP $MAINCLASS;
```

I also altered this section compared to the example scripts, since I wanted to see the compilation process in my console.

### Run the compile.sh script & start your native Spring Boot App

We can now run the compile script with:

```shell script
./compile.sh
```

The compile step does take its time (depending on your hardware!). On my MacBook Pro 2017 it takes around 3 to 4 minutes.
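The classpath assembly trick from the script above can be sanity-checked in isolation - here's a standalone toy reproduction (synthetic jar names, plus a `-name '*.jar'` filter and a `sort` for deterministic order, which the original script doesn't use):

```shell
# Recreate a minimal fat-jar layout in a temp directory
workdir=$(mktemp -d)
mkdir -p "$workdir/BOOT-INF/lib"
touch "$workdir/BOOT-INF/lib/a.jar" "$workdir/BOOT-INF/lib/b.jar"
cd "$workdir"

# find lists the jars newline-separated, tr joins them with ':'
LIBPATH=$(find BOOT-INF/lib -name '*.jar' | sort | tr '\n' ':')
CP=BOOT-INF/classes:$LIBPATH
echo "$CP"   # BOOT-INF/classes:BOOT-INF/lib/a.jar:BOOT-INF/lib/b.jar:
```

The trailing `:` (an empty classpath entry) is harmless for `java -cp` and is also present in the original script's output.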
I prepared a small asciinema record so that you can have a look at how the compilation process works:

[![asciicast](https://asciinema.org/a/320745.svg)](https://asciinema.org/a/320745)

If your console shows something like the following:

```shell script
[spring-boot-graal:93927]    (typeflow):  74,606.04 ms, 12.76 GB
[spring-boot-graal:93927]     (objects):  58,480.01 ms, 12.76 GB
[spring-boot-graal:93927]    (features):   8,413.90 ms, 12.76 GB
[spring-boot-graal:93927]      analysis: 147,776.93 ms, 12.76 GB
[spring-boot-graal:93927]      (clinit):   1,578.42 ms, 12.76 GB
[spring-boot-graal:93927]      universe:   4,909.40 ms, 12.76 GB
[spring-boot-graal:93927]       (parse):   6,885.61 ms, 12.78 GB
[spring-boot-graal:93927]      (inline):   6,594.06 ms, 12.78 GB
[spring-boot-graal:93927]     (compile):  33,040.00 ms, 12.79 GB
[spring-boot-graal:93927]       compile:  50,001.85 ms, 12.79 GB
[spring-boot-graal:93927]         image:   8,963.82 ms, 12.79 GB
[spring-boot-graal:93927]         write:   2,414.18 ms, 12.79 GB
[spring-boot-graal:93927]       [total]: 232,479.88 ms, 12.79 GB

real	3m54.635s
user	16m16.765s
sys	1m55.756s
```

you're now able to __fire up your first GraalVM native app!__ How cool is that?! All you have to do is to run the generated executable `/target/native-image/spring-graal-vm`:

```shell script
$ ./target/native-image/spring-graal-vm

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

2020-03-26 15:45:32.086  INFO 33864 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication on PikeBook.fritz.box with PID 33864 (/Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target/spring-boot-graal started by jonashecht in /Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target)
2020-03-26 15:45:32.086  INFO 33864 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-03-26 15:45:32.133  WARN 33864 --- [           main] io.netty.channel.DefaultChannelId        : Failed to find the current process ID from ''; using a random value: 801435406
2020-03-26 15:45:32.136  INFO 33864 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2020-03-26 15:45:32.137  INFO 33864 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 0.083 seconds (JVM running for 0.086)
```

I also prepared a small asciicast - but be aware, you might not even catch it since it's damn fast :)

[![asciicast](https://asciinema.org/a/313688.svg)](https://asciinema.org/a/313688)

__Your Spring Boot App started in 0.083 seconds!!__ Simply access the app via http://localhost:8080/hello.

# Doing all the steps together using the native-image-maven-plugin

Currently it really makes sense to hand-craft a bash script like our [compile.sh](compile.sh) in order to be able to debug all those `native-image` options! But the development of GraalVM and the spring-graalvm-native project really moves fast. See [this post about the GraalVM 20.1.0 release](https://medium.com/graalvm/graalvm-20-1-7ce7e89f066b) for example.
So it also makes sense to have a look at the possibility of doing all the steps needed to compile a Spring Boot app into a GraalVM native image using only the [native-image-maven-plugin](https://search.maven.org/search?q=g:org.graalvm.nativeimage%20AND%20a:native-image-maven-plugin).

> For more information about the `native-image-maven-plugin` see this post: https://medium.com/graalvm/simplifying-native-image-generation-with-maven-plugin-and-embeddable-configuration-d5b283b92f57

Therefore let's add a new Maven profile to our [pom.xml](pom.xml) as [described in the spring-graalvm-native docs](https://repo.spring.io/milestone/org/springframework/experimental/spring-graalvm-native-docs/0.7.0/spring-graalvm-native-docs-0.7.0.zip!/reference/index.html#_add_the_maven_plugin):

```xml
<profiles>
	<profile>
		<id>native</id>
		<build>
			<plugins>
				<plugin>
					<groupId>org.graalvm.nativeimage</groupId>
					<artifactId>native-image-maven-plugin</artifactId>
					<version>20.2.0</version>
					<configuration>
						<buildArgs>-J-Xmx4G -H:+TraceClassInitialization -H:+ReportExceptionStackTraces -Dspring.graal.remove-unused-autoconfig=true -Dspring.graal.remove-yaml-support=true</buildArgs>
						<imageName>${project.artifactId}</imageName>
					</configuration>
					<executions>
						<execution>
							<goals>
								<goal>native-image</goal>
							</goals>
							<phase>package</phase>
						</execution>
					</executions>
				</plugin>
				<plugin>
					<groupId>org.springframework.boot</groupId>
					<artifactId>spring-boot-maven-plugin</artifactId>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>
```

The `buildArgs` tag is crucial here! We need to configure everything needed to successfully run a `native-image` command for our Spring Boot app, as already used inside our [compile.sh](compile.sh). But we can leave out the `-cp $CP $MAINCLASS` parameters, since they are already provided by the plugin. Remember, we now run the `native-image` compilation from within the Maven pom context, where all of this is already known.
Using `<imageName>${project.artifactId}</imageName>` is a good idea in order to use our `artifactId` for the resulting executable image name. Otherwise we'd end up with a fully qualified class name like `io.jonashackt.springbootgraal.springboothelloapplication`.

Just remember to have the `start-class` property in place:

```
<properties>
    <start-class>io.jonashackt.springbootgraal.SpringBootHelloApplication</start-class>
    ...
</properties>
```

That should already suffice! Now we can simply run our Maven profile with:

```
mvn -Pnative clean package
```

### Tackling the 'No default constructor found ... java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.<init>()' error

After executing the build process (which went fine), the resulting native image doesn't start without errors:

```
$ ./spring-boot-graal

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

Jun 05, 2020 10:46:27 AM org.springframework.boot.StartupInfoLogger logStarting
INFO: Starting application on PikeBook.fritz.box with PID 33047 (started by jonashecht in /Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target)
Jun 05, 2020 10:46:27 AM org.springframework.boot.SpringApplication logStartupProfileInfo
INFO: No active profile set, falling back to default profiles: default
Jun 05, 2020 10:46:27 AM org.springframework.context.support.AbstractApplicationContext refresh
WARNING: Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springBootHelloApplication': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.jonashackt.springbootgraal.SpringBootHelloApplication]: No default constructor found; nested exception is java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.<init>()
Jun 05, 2020 10:46:27 AM org.springframework.boot.autoconfigure.logging.ConditionEvaluationReportLoggingListener logMessage
INFO: Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
Jun 05, 2020 10:46:27 AM org.springframework.boot.SpringApplication reportFailure
SEVERE: Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'springBootHelloApplication': Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.jonashackt.springbootgraal.SpringBootHelloApplication]: No default constructor found; nested exception is java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.<init>()
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1320)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1214)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:226)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:895)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
	at org.springframework.boot.web.reactive.context.ReactiveWebServerApplicationContext.refresh(ReactiveWebServerApplicationContext.java:62)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:758)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:750)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
	at io.jonashackt.springbootgraal.SpringBootHelloApplication.main(SpringBootHelloApplication.java:10)
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.jonashackt.springbootgraal.SpringBootHelloApplication]: No default constructor found; nested exception is java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.<init>()
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:83)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1312)
	... 18 more
Caused by: java.lang.NoSuchMethodException: io.jonashackt.springbootgraal.SpringBootHelloApplication.<init>()
	at java.lang.Class.getConstructor0(DynamicHub.java:3349)
	at java.lang.Class.getDeclaredConstructor(DynamicHub.java:2553)
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:78)
	... 19 more
```

> But what is the difference between the way our [compile.sh](compile.sh) works compared to the `native-image-maven-plugin`, really? The parameters are the same!

I had a hard time figuring that one out! But finally I found a difference - it's all about the Spring Feature's computed `spring.components`:

```
$ ./compile.sh
...
Excluding 104 auto-configurations from spring.factories file
Found no META-INF/spring.components -> synthesizing one...
Computed spring.components is
vvv
io.jonashackt.springbootgraal.HelloRouter=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.HelloHandler=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.SpringBootHelloApplication=org.springframework.stereotype.Component
^^^
Registered 3 entries
Configuring initialization time for specific types and packages:
#69 buildtime-init-classes   #21 buildtime-init-packages   #28 runtime-init-classes   #0 runtime-init-packages
```

With our [compile.sh](compile.sh), the Feature finds the 3 classes that are Spring components and thus relevant for our application to work.

```
$ mvn -Pnative clean package
...
Excluding 104 auto-configurations from spring.factories file
Found no META-INF/spring.components -> synthesizing one...
Computed spring.components is
vvv
^^^
Registered 0 entries
Configuring initialization time for specific types and packages:
#69 buildtime-init-classes   #21 buildtime-init-packages   #28 runtime-init-classes   #0 runtime-init-packages
```

Our Maven plugin does not recognize the three needed classes!
And thus it also doesn't successfully run our application in the end, since the REST controller doesn't work if we access it via http://localhost:8080/hello.

In a non-native world, our Spring components would be explored at runtime via component scanning. But with GraalVM native image compilation, all notion of a thing called classpath is lost at runtime! So we need something to do the component scanning at build time. The one utility that does this is the [spring-context-indexer](https://stackoverflow.com/questions/47254907/how-can-i-create-a-spring-5-component-index/48407939), which is executed by the Spring @AutomaticFeature for us if we use our `compile.sh`. But using the `native-image-maven-plugin`, this isn't done automatically! So we have to explicitly include the [spring-context-indexer](https://mvnrepository.com/artifact/org.springframework/spring-context-indexer/5.2.6.RELEASE) dependency inside our [pom.xml](pom.xml):

```xml
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-context-indexer</artifactId>
</dependency>
```

Now running a Maven build, the file `target/classes/META-INF/spring.components` containing our 3 needed classes is created:

```
io.jonashackt.springbootgraal.HelloHandler=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.HelloRouter=org.springframework.stereotype.Component
io.jonashackt.springbootgraal.SpringBootHelloApplication=org.springframework.stereotype.Component
```

And using that dependency, our Maven build finally works as expected:

```
$ mvn -Pnative clean package
...
Excluding 104 auto-configurations from spring.factories file
Processing META-INF/spring.components files...
Registered 3 entries
Configuring initialization time for specific types and packages:
#69 buildtime-init-classes   #21 buildtime-init-packages   #28 runtime-init-classes   #0 runtime-init-packages
...
```

__The question remains why the Spring @AutomaticFeature doesn't do this automatically when executed via the `native-image-maven-plugin`!__

# Comparing Startup time & Memory footprint

Ok, the initial goal was to run our beloved Spring Boot apps at lightning speed. Now we have a "normal" Spring Boot app, which we're able to run with:

```
$ java -jar target/spring-boot-graal-0.0.1-SNAPSHOT.jar

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.3.0.M4)

2020-04-30 15:40:21.187  INFO 40149 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication v0.0.1-SNAPSHOT on PikeBook.fritz.box with PID 40149 (/Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target/spring-boot-graal-0.0.1-SNAPSHOT.jar started by jonashecht in /Users/jonashecht/dev/spring-boot/spring-boot-graalvm)
2020-04-30 15:40:21.190  INFO 40149 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-04-30 15:40:22.280  INFO 40149 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2020-04-30 15:40:22.288  INFO 40149 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 1.47 seconds (JVM running for 1.924)
```

The standard way takes about `1.47 seconds` to start up and it uses around `491 MB` of RAM:

```
PID TTY TIME CMD
Processes: 545 total, 2 running, 1 stuck, 542 sleeping, 2943 threads                                          16:21:23
Load Avg: 1.35, 1.92, 2.30  CPU usage: 3.96% user, 3.84% sys, 92.19% idle
SharedLibs: 240M resident, 63M data, 19M linkedit.
MemRegions: 224056 total, 3655M resident, 50M private, 6794M shared.
PhysMem: 16G used (3579M wired), 93M unused.
VM: 2744G vsize, 1997M framework vsize, 64447396(189) swapins, 66758016(0) swapouts.
Networks: packets: 34854978/40G in, 30746488/34G out.
Disks: 28626843/545G read, 11039646/423G written.

PID   COMMAND  %CPU TIME     #TH #WQ #POR MEM  PURG CMPR PGRP  PPID STATE    BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW  MSGS MSGR SYSBSD SYSM CSW    PAGE IDLE POWE
40862 java     0.1  00:05.46 27  1   112  491M 0B   0B   40862 1592 sleeping *0[1]  0.00000 0.00000    501 136365 1942 5891 2919 52253+ 8577 21848+ 7148 733+ 0.8
```

Now comparing this with our natively compiled Spring Boot app, we see a startup time of about `0.078 seconds`:

```
$ ./spring-boot-graal

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

2020-05-01 10:25:31.200  INFO 42231 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication on PikeBook.fritz.box with PID 42231 (/Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target/native-image/spring-boot-graal started by jonashecht in /Users/jonashecht/dev/spring-boot/spring-boot-graalvm/target/native-image)
2020-05-01 10:25:31.200  INFO 42231 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-05-01 10:25:31.241  WARN 42231 --- [           main] io.netty.channel.DefaultChannelId        : Failed to find the current process ID from ''; using a random value: 635087100
2020-05-01 10:25:31.245  INFO 42231 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2020-05-01 10:25:31.245  INFO 42231 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 0.078 seconds (JVM running for 0.08)
```

and uses only `30 MB` of RAM:

```
Processes: 501 total, 2 running, 499 sleeping, 2715 threads                                                   10:26:05
Load Avg: 5.73, 10.11, 6.17  CPU usage: 4.33% user, 3.86% sys, 91.79% idle
SharedLibs: 162M resident, 34M data, 9248K linkedit.
MemRegions: 214693 total, 2846M resident, 72M private, 1677M shared.
PhysMem: 11G used (3607M wired), 4987M unused.
VM: 2448G vsize, 1997M framework vsize, 77090986(192) swapins, 80042677(0) swapouts.
Networks: packets: 31169140/37G in, 27833716/33G out.
Disks: 29775686/600G read, 11686485/480G written.

PID   COMMAND      %CPU TIME     #TH #WQ #POR MEM PURG CMPR PGRP  PPID STATE    BOOSTS %CPU_ME %CPU_OTHRS UID FAULT COW  MSGS MSGR SYSB SYSM CSW PAGE IDLE POWE INST CYCL
42231 spring-boot- 0.0  00:00.08 7   1   38   30M 0B   0B   42231 1592 sleeping *0[1]  0.00000 0.00000    501 17416 2360 77   20   2186 186  174 27   2    0.0  0    0
```

So with a default Spring app we have around 500 MB memory consumption, while a natively compiled Spring app needs only 30 MB. That means we could run more than 15 Spring microservices with the same amount of RAM we needed for only one standard Spring microservice! Wohoo! :) And not to mention the startup times: around 1.5 seconds versus only 78 milliseconds. So even our Kubernetes cluster is able to scale our Spring Boot apps at lightning speed!

# Build and Run your Native Image compilation on a Cloud-CI provider like TravisCI

As we are used to test-driven development and rely on very new code, which is for sure subject to change in the near future, we should also be able to automatically run our GraalVM Native Image compilation on a cloud CI provider like TravisCI.

In order to run the compilation process, we need to [install GraalVM and GraalVM Native Image first on TravisCI](https://stackoverflow.com/a/61254927/4964553).
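Before wiring the build into CI, it can help to fail fast when the toolchain isn't complete. A small hypothetical pre-flight helper (not part of this repo) that just reports which of the required commands are on the `PATH`:

```shell
#!/usr/bin/env bash
# Report for each required tool whether it is installed
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "MISSING: $1"
  fi
}

# A native image build needs at least these commands available
for tool in java mvn gu native-image; do
  check_tool "$tool"
done
```

If anything shows up as `MISSING`, install it first (locally via SDKMAN and `gu`, on TravisCI via the `install` steps below).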
Therefore let's have a look into our [.travis.yml](.travis.yml):

```yaml
dist: bionic
language: minimal

install:
  # Install GraalVM with SDKMAN
  - curl -s "https://get.sdkman.io" | bash
  - source "$HOME/.sdkman/bin/sdkman-init.sh"
  - sdk install java 20.2.0.r11-grl
  # Check if GraalVM was installed successfully
  - java -version
  # Install Maven, that uses GraalVM for later builds
  - sdk install maven
  # Show Maven using GraalVM JDK
  - mvn --version
  # Install GraalVM Native Image
  - gu install native-image
  # Check if Native Image was installed properly
  - native-image --version

script:
  # Run GraalVM Native Image compilation of Spring Boot App
  - ./compile.sh
```

There are two main things to notice here. First, we simply leverage the power of SDKMAN again to install GraalVM, just as we already did on our local machines. Second: __don't use `language: java`!__ The default Java build environments ship with pre-installed Maven versions, which are configured to use the pre-installed OpenJDK - and __NOT our GraalVM installation__. Therefore we simply use `language: minimal`, which is [a simple way of getting our Travis builds based on a basic build environment without pre-installed JDKs or Maven](https://stackoverflow.com/a/44738181/4964553), together with `dist: bionic`, which tells Travis to use the latest available `minimal` build image (see https://docs.travis-ci.com/user/languages/minimal-and-generic/).

Now our TravisCI builds should run a full native image compilation:

```
Warning: class initialization of class io.netty.handler.ssl.JettyNpnSslEngine failed with exception java.lang.NoClassDefFoundError: org/eclipse/jetty/npn/NextProtoNego$Provider. This class will be initialized at run time because option --allow-incomplete-classpath is used for image building. Use the option --initialize-at-run-time=io.netty.handler.ssl.JettyNpnSslEngine to explicitly request delayed initialization of this class.
[spring-boot-graal:5634]    (typeflow): 238,622.47 ms,  6.23 GB
[spring-boot-graal:5634]     (objects): 122,937.15 ms,  6.23 GB
[spring-boot-graal:5634]    (features):  10,311.79 ms,  6.23 GB
[spring-boot-graal:5634]      analysis: 379,203.23 ms,  6.23 GB
[spring-boot-graal:5634]      (clinit):   2,542.77 ms,  6.23 GB
[spring-boot-graal:5634]      universe:   9,890.85 ms,  6.23 GB
[spring-boot-graal:5634]       (parse):  20,901.16 ms,  6.23 GB
[spring-boot-graal:5634]      (inline):  14,131.55 ms,  6.23 GB
[spring-boot-graal:5634]     (compile):  94,847.99 ms,  6.23 GB
[spring-boot-graal:5634]       compile: 133,862.12 ms,  6.23 GB
[spring-boot-graal:5634]         image:   8,635.21 ms,  6.23 GB
[spring-boot-graal:5634]         write:   1,472.98 ms,  6.23 GB
```

See this build for example:

![successfull-travis-compile](screenshots/successfull-travis-compile.png)

### Tackling the 'There was an error linking the native image /usr/bin/ld: final link failed: Memory exhausted' error

I now had Travis finally compiling my Spring Boot App - but with a last error (you can [see the full log here](https://travis-ci.org/github/jonashackt/spring-boot-graalvm)):

```
[spring-boot-graal:5634]    (typeflow): 238,622.47 ms,  6.23 GB
[spring-boot-graal:5634]     (objects): 122,937.15 ms,  6.23 GB
[spring-boot-graal:5634]    (features):  10,311.79 ms,  6.23 GB
[spring-boot-graal:5634]      analysis: 379,203.23 ms,  6.23 GB
[spring-boot-graal:5634]      (clinit):   2,542.77 ms,  6.23 GB
[spring-boot-graal:5634]      universe:   9,890.85 ms,  6.23 GB
[spring-boot-graal:5634]       (parse):  20,901.16 ms,  6.23 GB
[spring-boot-graal:5634]      (inline):  14,131.55 ms,  6.23 GB
[spring-boot-graal:5634]     (compile):  94,847.99 ms,  6.23 GB
[spring-boot-graal:5634]       compile: 133,862.12 ms,  6.23 GB
[spring-boot-graal:5634]         image:   8,635.21 ms,  6.23 GB
[spring-boot-graal:5634]         write:   1,472.98 ms,  6.23 GB

Fatal error: java.lang.RuntimeException: java.lang.RuntimeException: There was an error linking the native image: Linker command exited with 1

Linker command executed:
cc -v -o
/home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal -z noexecstack -Wl,--gc-sections -Wl,--dynamic-list -Wl,/tmp/SVM-8253584528623373425/exported_symbols.list -Wl,-x -L/tmp/SVM-8253584528623373425 -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64 /tmp/SVM-8253584528623373425/spring-boot-graal.o /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libjava.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libzip.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnio.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libextnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libffi.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/liblibchelper.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libjvm.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libstrictmath.a -lpthread -ldl -lz -lrt Linker command ouput: Using built-in specs. 
COLLECT_GCC=cc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper OFFLOAD_TARGET_NAMES=nvptx-none OFFLOAD_TARGET_DEFAULT=1 Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 7.4.0-1ubuntu1~18.04.1' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-7 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1) COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/ LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-o' '/home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal' '-z' 'noexecstack' '-L/tmp/SVM-8253584528623373425' '-L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib' 
'-L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64' '-mtune=generic' '-march=x86-64' /usr/lib/gcc/x86_64-linux-gnu/7/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/7/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper -plugin-opt=-fresolution=/tmp/ccHdD8kF.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --sysroot=/ --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o /home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal -z noexecstack /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o -L/tmp/SVM-8253584528623373425 -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64 -L/usr/lib/gcc/x86_64-linux-gnu/7 -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/7/../../.. 
--gc-sections --dynamic-list /tmp/SVM-8253584528623373425/exported_symbols.list -x /tmp/SVM-8253584528623373425/spring-boot-graal.o /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libjava.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libzip.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnio.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libextnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libffi.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/liblibchelper.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libjvm.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libstrictmath.a -lpthread -ldl -lz -lrt -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o /usr/bin/ld: final link failed: Memory exhausted collect2: error: ld returned 1 exit status at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:600) at java.base/java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006) at com.oracle.svm.hosted.NativeImageGenerator.run(NativeImageGenerator.java:462) at com.oracle.svm.hosted.NativeImageGeneratorRunner.buildImage(NativeImageGeneratorRunner.java:357) at 
com.oracle.svm.hosted.NativeImageGeneratorRunner.build(NativeImageGeneratorRunner.java:501) at com.oracle.svm.hosted.NativeImageGeneratorRunner.main(NativeImageGeneratorRunner.java:115) at com.oracle.svm.hosted.NativeImageGeneratorRunner$JDK9Plus.main(NativeImageGeneratorRunner.java:528) Caused by: java.lang.RuntimeException: There was an error linking the native image: Linker command exited with 1 Linker command executed: cc -v -o /home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal -z noexecstack -Wl,--gc-sections -Wl,--dynamic-list -Wl,/tmp/SVM-8253584528623373425/exported_symbols.list -Wl,-x -L/tmp/SVM-8253584528623373425 -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64 /tmp/SVM-8253584528623373425/spring-boot-graal.o /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libjava.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libzip.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnio.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libextnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libffi.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/liblibchelper.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libjvm.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libstrictmath.a -lpthread -ldl -lz -lrt Linker command ouput: Using built-in specs. 
COLLECT_GCC=cc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper OFFLOAD_TARGET_NAMES=nvptx-none OFFLOAD_TARGET_DEFAULT=1 Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 7.4.0-1ubuntu1~18.04.1' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-7 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1) COMPILER_PATH=/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/ LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/7/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/:/lib/x86_64-linux-gnu/:/lib/../lib/:/usr/lib/x86_64-linux-gnu/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-linux-gnu/7/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-o' '/home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal' '-z' 'noexecstack' '-L/tmp/SVM-8253584528623373425' '-L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib' 
'-L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64' '-mtune=generic' '-march=x86-64' /usr/lib/gcc/x86_64-linux-gnu/7/collect2 -plugin /usr/lib/gcc/x86_64-linux-gnu/7/liblto_plugin.so -plugin-opt=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper -plugin-opt=-fresolution=/tmp/ccHdD8kF.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --sysroot=/ --build-id --eh-frame-hdr -m elf_x86_64 --hash-style=gnu --as-needed -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -z now -z relro -o /home/travis/build/jonashackt/spring-boot-graalvm/target/native-image/spring-boot-graal -z noexecstack /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o -L/tmp/SVM-8253584528623373425 -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib -L/home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64 -L/usr/lib/gcc/x86_64-linux-gnu/7 -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/7/../../.. 
--gc-sections --dynamic-list /tmp/SVM-8253584528623373425/exported_symbols.list -x /tmp/SVM-8253584528623373425/spring-boot-graal.o /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libjava.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libzip.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libnio.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/libextnet.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libffi.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/liblibchelper.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libjvm.a /home/travis/.sdkman/candidates/java/20.0.0.r11-grl/lib/svm/clibraries/linux-amd64/libstrictmath.a -lpthread -ldl -lz -lrt -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o /usr/bin/ld: final link failed: Memory exhausted collect2: error: ld returned 1 exit status at com.oracle.svm.hosted.image.NativeBootImageViaCC.handleLinkerFailure(NativeBootImageViaCC.java:424) at com.oracle.svm.hosted.image.NativeBootImageViaCC.write(NativeBootImageViaCC.java:399) at com.oracle.svm.hosted.NativeImageGenerator.doRun(NativeImageGenerator.java:657) at com.oracle.svm.hosted.NativeImageGenerator.lambda$run$0(NativeImageGenerator.java:445) at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1407) at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) at 
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Error: Image build request failed with exit status 1

real	9m11.937s
user	17m46.032s
sys	0m11.720s
```

# Build and Run your Native Image compilation on GitHub Actions

Since Travis scaled back their OpenSource support to a massive degree, many maintainers moved their repos over to GitHub Actions - see also this post: https://blog.codecentric.de/en/2021/02/github-actions-pipeline/

So let's implement a [.github/workflows/native-image-compile.yml](.github/workflows/native-image-compile.yml):

```yaml
name: native-image-compile

on: [push]

jobs:
  native-image-compile-on-host:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Cache SDKMAN archives & candidates
        uses: actions/cache@v2
        with:
          path: ~/.sdkman
          key: ${{ runner.os }}-sdkman-${{ hashFiles('pom.xml') }}
          restore-keys: |
            ${{ runner.os }}-sdkman-

      - name: Install GraalVM, Maven, Native Image & Run Maven build
        run: |
          echo 'Install GraalVM with SDKMAN'
          curl -s "https://get.sdkman.io" | bash
          source "$HOME/.sdkman/bin/sdkman-init.sh"
          sdk install java 20.2.0.r11-grl

          echo 'Check if GraalVM was installed successfully'
          java -version

          echo 'Install GraalVM Native Image'
          gu install native-image

          echo 'Check if Native Image was installed properly'
          native-image --version

          echo 'Install Maven, that uses GraalVM for later builds'
          source "$HOME/.sdkman/bin/sdkman-init.sh"
          sdk install maven

          echo 'Show Maven using GraalVM JDK'
          mvn --version

          echo 'Run GraalVM Native Image compilation of Spring Boot App (Maven version instead of ./compile.sh)'
          mvn -B clean package -P native --no-transfer-progress
```

This one does exactly what we did with TravisCI - building the native image using Maven and installing GraalVM beforehand.
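As a side note, the SDKMAN candidate identifier `20.2.0.r11-grl` used above encodes both the GraalVM release (`20.2.0`) and the underlying Java major version (`11`). A small sketch making that naming scheme explicit — both helper functions are hypothetical, purely for illustration:

```shell
#!/bin/sh
# Split an SDKMAN GraalVM candidate id like "20.2.0.r11-grl" into its parts.
graal_release() {           # "20.2.0.r11-grl" -> "20.2.0"
  echo "${1%%.r*}"
}
graal_java_version() {      # "20.2.0.r11-grl" -> "11"
  v="${1#*.r}"
  echo "${v%-grl}"
}
```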
# Use Docker to compile a Spring Boot App with GraalVM

There's an [official Docker image from Oracle](https://github.com/orgs/graalvm/packages/container/package/graalvm-ce), but this one sadly lacks both Maven with its `mvn` command, and the `native-image` plugin isn't installed either.

But we can help ourselves - we just craft a simple [Dockerfile](Dockerfile). We're already used to leveraging SDKMAN to install Maven. Since SDKMAN needs both `unzip` and `zip` to work properly, we install them first:

```dockerfile
# Simple Dockerfile adding Maven and GraalVM Native Image compiler to the standard
# https://github.com/orgs/graalvm/packages/container/package/graalvm-ce image
FROM ghcr.io/graalvm/graalvm-ce:ol7-java11-20.3.1.2

# For SDKMAN to work we need unzip & zip
RUN yum install -y unzip zip

RUN \
    # Install SDKMAN
    curl -s "https://get.sdkman.io" | bash; \
    source "$HOME/.sdkman/bin/sdkman-init.sh"; \
    sdk install maven; \
    # Install GraalVM Native Image
    gu install native-image;

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && mvn --version

RUN native-image --version

# Always use source sdkman-init.sh before any command, so that we will be able to use 'mvn' command
ENTRYPOINT bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && $0"
```

In order to enable the `mvn` command for a user of our Docker image, we craft a slightly more interesting `ENTRYPOINT` that always prefixes commands with `source $HOME/.sdkman/bin/sdkman-init.sh`.

Now let's build our image with:

```shell script
docker build . --tag=graalvm-ce:20.3.0-java11-mvn-native-image
```

Now we should be able to launch our GraalVM Native Image compilation inside the official Oracle GraalVM image with:

```shell script
docker run -it --rm \
  --volume $(pwd):/build \
  --workdir /build \
  --volume "$HOME"/.m2:/root/.m2 \
  graalvm-ce:20.3.0-java11-mvn-native-image ./compile.sh
```

When I first thought about a Docker usage, I wanted to pack this build into a `Dockerfile` also - but then I realized that there's [no easy way of using Docker volumes at Docker build time](https://stackoverflow.com/questions/51086724/docker-build-using-volumes-at-build-time). But I really wanted to mount a Docker volume to my local Maven repository like `--volume "$HOME"/.m2:/root/.m2`, to prevent the download of all the Spring Maven dependencies over and over again every time we start our Docker container.

So I went another way: We simply use a `docker run` command that compiles our native Spring Boot app into our project's working directory (with `--volume $(pwd):/build`). The resulting `spring-boot-graal` native app should be ready after some minutes of heavy compilation.

__But!__ We're not able to run it! Hell yeah - because we turned our platform-independent Java app into a platform-dependent one! That's the price for speed I guess :)

### Tackling 'Exception java.lang.OutOfMemoryError in thread "native-image pid watcher"' error

Sometimes the `docker run` seems to take ages to complete - and then a `java.lang.OutOfMemoryError` is thrown into the log:

```
14:06:34.609 [ForkJoinPool-2-worker-3] DEBUG io.netty.handler.codec.compression.ZlibCodecFactory - -Dio.netty.noJdkZlibEncoder: false
Exception in thread "native-image pid watcher"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "native-image pid watcher"
```

Then it is very likely that your Docker Engine simply doesn't have enough RAM available!
In my Mac installation the default is only `2.00 GB`:

![docker-mac-memory](screenshots/docker-mac-memory.png)

As [stated in the comments of this Stack Overflow Q&A](https://stackoverflow.com/questions/57935533/native-image-building-process-is-frozen-in-quarkus), you have to give Docker much more memory, since the GraalVM Native Image compilation process is really RAM-intensive. I had a working local compilation in the Docker container when I gave Docker `12.00 GB` of RAM.

### Run Spring Boot Native Apps in Docker

Now that our Docker build works in general, we should also run our native Spring Boot app inside a Docker container. A Docker multi-stage build comes in handy here, since we can do the build & Native Image compilation in the first container - and then only take the resulting native app over into the second container and run it there.

Therefore let's refactor our Dockerfile:

```dockerfile
# Simple Dockerfile adding Maven and GraalVM Native Image compiler to the standard
# https://github.com/orgs/graalvm/packages/container/package/graalvm-ce image
FROM ghcr.io/graalvm/graalvm-ce:ol7-java11-20.3.1.2

ADD . /build
WORKDIR /build

# For SDKMAN to work we need unzip & zip
RUN yum install -y unzip zip

RUN \
    # Install SDKMAN
    curl -s "https://get.sdkman.io" | bash; \
    source "$HOME/.sdkman/bin/sdkman-init.sh"; \
    sdk install maven; \
    # Install GraalVM Native Image
    gu install native-image;

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && mvn --version

RUN native-image --version

RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && ./compile.sh

# We use a Docker multi-stage build here in order that we only take the compiled native Spring Boot App from the first build container
FROM oraclelinux:7-slim

MAINTAINER Jonas Hecht

# Add Spring Boot Native app spring-boot-graal to Container
COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal

# Fire up our Spring Boot Native app by default
CMD [ "sh", "-c", "./spring-boot-graal" ]
```

Additionally, the second container isn't based on the `ghcr.io/graalvm/graalvm-ce` image containing a GraalVM installation, Maven and the `native-image` command - instead it uses [the base image of this image](https://github.com/oracle/docker-images/blob/master/GraalVM/CE/Dockerfile.java11), which is `oraclelinux:7-slim`. With that we reduce the resulting Docker image size from around `1.48GB` to only `186MB`!

Let's run our multi-stage build with the following command:

```shell script
docker build . --tag=spring-boot-graal
```

This again will take a while - you may grab a coffee :)

After the Docker build successfully finished with some output like that:

```
[spring-boot-graal:289]    (typeflow): 114,554.33 ms,  6.58 GB
[spring-boot-graal:289]     (objects):  63,145.07 ms,  6.58 GB
[spring-boot-graal:289]    (features):   6,990.75 ms,  6.58 GB
[spring-boot-graal:289]      analysis: 190,400.92 ms,  6.58 GB
[spring-boot-graal:289]      (clinit):   1,970.98 ms,  6.67 GB
[spring-boot-graal:289]      universe:   6,263.93 ms,  6.67 GB
[spring-boot-graal:289]       (parse):  11,824.83 ms,  6.67 GB
[spring-boot-graal:289]      (inline):   7,216.63 ms,  6.73 GB
[spring-boot-graal:289]     (compile):  63,692.52 ms,  6.77 GB
[spring-boot-graal:289]       compile:  86,836.76 ms,  6.77 GB
[spring-boot-graal:289]         image:  10,050.63 ms,  6.77 GB
[spring-boot-graal:289]         write:   1,319.52 ms,  6.77 GB
[spring-boot-graal:289]       [total]: 313,644.65 ms,  6.77 GB

real	5m16.447s
user	16m32.096s
sys	1m34.441s
Removing intermediate container 151e1413ec2f
 ---> be671d4f237f
Step 10/13 : FROM docker pull ghcr.io/graalvm/graalvm-ce:ol7-java11-20.3.1.2
 ---> 364d0bb387bd
Step 11/13 : MAINTAINER Jonas Hecht
 ---> Using cache
 ---> 445833938b60
Step 12/13 : COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal
 ---> 2d717a0db703
Step 13/13 : CMD [ "sh", "-c", "./spring-boot-graal" ]
 ---> Running in 7fa931991d7e
Removing intermediate container 7fa931991d7e
 ---> a0afe30b3619
Successfully built a0afe30b3619
Successfully tagged spring-boot-graal:latest
```

We are able to run our Spring Boot native app with `docker run -p 8080:8080 spring-boot-graal`:

```
$ docker run -p 8080:8080 spring-boot-graal

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::

2020-04-19 09:22:51.547  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : Starting SpringBootHelloApplication on 06274db526b0 with PID 1 (/spring-boot-graal started by root in /)
2020-04-19 09:22:51.547  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : No active profile set, falling back to default profiles: default
2020-04-19 09:22:51.591  WARN 1 --- [           main] io.netty.channel.DefaultChannelId        : Failed to find the current process ID from ''; using a random value: -949685832
2020-04-19 09:22:51.593  INFO 1 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2020-04-19 09:22:51.594  INFO 1 --- [           main] i.j.s.SpringBootHelloApplication         : Started SpringBootHelloApplication in 0.063 seconds (JVM running for 0.065)
```

Now simply access your App via http://localhost:8080/hello

# Running Spring Boot Graal Native Apps on Heroku

Finally we are where we wanted to be in the first place! We're able to run our natively compiled Spring Boot apps inside Docker containers. So it should be easy to deploy them to [a cloud provider like Heroku](https://heroku.com)!

It's also worth revisiting last year's article on running [Spring Boot on Heroku with Docker, JDK 11 & Maven 3.5.x](https://blog.codecentric.de/en/2019/08/spring-boot-heroku-docker-jdk11/), since our Graal setup may need similar tweaks.

As we move forward to a deployment of our Spring Boot Native app on a cloud provider's Docker infrastructure, we need to make the app's port configurable in a dynamic fashion. Most cloud providers want to set this port dynamically from the outside - as [we can see with Heroku, for example](https://devcenter.heroku.com/articles/setting-the-http-port-for-java-applications).
[As the Heroku docs state](https://devcenter.heroku.com/articles/container-registry-and-runtime#dockerfile-commands-and-runtime):

> The web process must listen for HTTP traffic on $PORT, which is set by Heroku. EXPOSE in Dockerfile is not respected, but can be used for local testing. Only HTTP requests are supported.

### Configure the Spring Boot Native app's port dynamically inside a Docker container

To achieve that, we need to somehow pass a port variable to our Spring Boot Native app from the command line. Since the GraalVM support is just in its early stages, we can't rely on extensive documentation. But since this is a problem other frameworks needed to solve as well, I thought about [Quarkus.io](https://quarkus.io/), which has been around for some time now - and should have run into exactly this problem already.

And [there's the Stack Overflow answer](https://stackoverflow.com/a/55043637/4964553) :) With Quarkus, you simply need to pass the port as a `-D` parameter like `-Dquarkus.http.port=8081` to the native app.

[Could this be mapped onto Spring Boot too?](https://stackoverflow.com/questions/61302412/how-to-configure-the-port-of-a-spring-boot-app-thats-natively-compiled-by-graal) Luckily, yes! Just run your Spring Boot native app with

```shell script
./spring-boot-graal -Dserver.port=8087
```

and your app starts using port `8087` :)

Now we are able to pass the port dynamically from a `docker run` command. Therefore we need to make a small change to our [Dockerfile](Dockerfile):

```dockerfile
...
# Add Spring Boot Native app spring-boot-graal to Container
COPY --from=0 "/build/target/native-image/spring-boot-graal" spring-boot-graal

# Fire up our Spring Boot Native app by default
CMD [ "sh", "-c", "./spring-boot-graal -Dserver.port=$PORT" ]
```

With this we are able to run our Dockerized Spring Boot native app with a dynamic port setting from the command line like this:

```
docker run -e "PORT=8087" -p 8087:8087 spring-boot-graal
```

Finally, try to access your app at http://localhost:8087/hello

### Use Docker to run our Spring Boot Native App on Heroku

First things first: Let's start by creating your Heroku app, if you haven't already:

```
heroku create spring-boot-graal
```

Then you simply set the Heroku stack:

```
heroku stack:set container --app spring-boot-graal
```

Sadly we can't use the section __'Configuring Heroku to use Docker'__ of my article on running [Spring Boot on Heroku with Docker, JDK 11 & Maven 3.5.x](https://blog.codecentric.de/en/2019/08/spring-boot-heroku-docker-jdk11/) in this case, since we would run into the `Error: Image build request failed with exit status 137`. My first attempts on Heroku led to these build problems:

```
Error: Image build request failed with exit status 137
real	2m51.946s
user	2m9.594s
sys	0m19.085s
The command '/bin/sh -c source "$HOME/.sdkman/bin/sdkman-init.sh" && ./compile.sh' returned a non-zero code: 137
```

This error usually appears [when Docker does not have enough memory](https://codefresh.io/docs/docs/troubleshooting/common-issues/error-code-137/). And since the free Heroku dyno only guarantees us `512MB` of RAM :( ([see Dyno Types](https://devcenter.heroku.com/articles/dyno-types)), we won't get far this way.

But [as the docs state](https://devcenter.heroku.com/categories/deploying-with-docker), the way of [Building Docker Images with heroku.yml](https://devcenter.heroku.com/articles/build-docker-images-heroku-yml) isn't the only way to run Docker containers on Heroku.
There's another way of using the [Container Registry & Runtime (Docker Deploys)](https://devcenter.heroku.com/articles/container-registry-and-runtime)! With that we could decouple the Docker image build process (which is so much memory hungry!) from simply running the Docker container based on that image. ### Work around the Heroku 512MB RAM cap: Building our Dockerimage with TravisCI So we need to do the Docker build on another platform - why not simply use Travis?! It already proofed to work directly on the host, why not also [using the Travis Docker service](https://docs.travis-ci.com/user/docker/)?! Leveraging [Travis jobs feature](https://docs.travis-ci.com/user/build-stages/), we can also do both in parallel - just have a look at the following screenshot: ![travis-parallel-jobs-direct-and-docker](screenshots/travis-parallel-jobs-direct-and-docker.png) Therefore we implement two separate Travis jobs `"Native Image compile on Travis Host"` and `"Native Image compile in Docker on Travis & Push to Heroku Container Registry"` inside our [.travis.yml](.travis.yml) and include the `docker` services: ```yaml # use minimal Travis build image so that we could install our own JDK (Graal) and Maven # use newest available minimal distro - see https://docs.travis-ci.com/user/languages/minimal-and-generic/ dist: bionic language: minimal services: - docker jobs: include: - script: # Install GraalVM with SDKMAN - curl -s "https://get.sdkman.io" | bash - source "$HOME/.sdkman/bin/sdkman-init.sh" - sdk install java 20.2.0.r11-grl # Check if GraalVM was installed successfully - java -version # Install Maven, that uses GraalVM for later builds - sdk install maven # Show Maven using GraalVM JDK - mvn --version # Install GraalVM Native Image - gu install native-image # Check if Native Image was installed properly - native-image --version # Run GraalVM Native Image compilation of Spring Boot App - ./compile.sh name: "Native Image compile on Travis Host" - script: # Compile with 
Docker - docker build . --tag=spring-boot-graal name: "Native Image compile in Docker on Travis & Push to Heroku Container Registry" ``` ### Tackling 'Error: Image build request failed with exit status 137' with the -J-Xmx parameter [As mentioned in the Spring docs](https://repo.spring.io/milestone/org/springframework/experimental/spring-graalvm-native-docs/0.7.0/spring-graalvm-native-docs-0.7.0.zip!/reference/index.html#_options_enabled_by_default), `spring-graalvm-native` uses the `--no-server` option by default when running Native Image compilations with Spring. But why is this parameter used? See the official docs: https://www.graalvm.org/docs/reference-manual/native-image/ > Another prerequisite to consider is the maximum heap size. Physical memory for running a JVM-based application may be insufficient to build a native image. For server-based image building we allow to use 80% of the reported physical RAM for all servers together, but never more than 14GB per server (for exact details please consult the native-image source code). If you run with --no-server option, you will get the whole 80% of what is reported as physical RAM as the baseline. This mode respects -Xmx arguments additionally. We could somehow leave out the `--no-server` option in order to reduce the amount of memory our Native Image compilation consumes - but there's an open issue in combination with Spring: https://github.com/oracle/graal/issues/1952, which says that images built without `--no-server` are sometimes unreliable. Luckily there's [a hint in this GitHub issue](https://github.com/oracle/graal/issues/920) that we can configure the total amount of memory the `--no-server` option uses with an `Xmx` parameter like `-J-Xmx3G`.
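To make the quoted defaults concrete, here is a back-of-the-envelope helper (my own illustration of the numbers above, not project code) that computes the heap budget `native-image` claims in `--no-server` mode (80% of physical RAM, unless an explicit `-Xmx` value is passed):

```shell
# Sketch (my own illustration, not project code): heap budget that
# native-image claims in --no-server mode, per the GraalVM docs quoted above.
no_server_heap_mb() {
  # $1 = physical RAM in MB, $2 = optional explicit -Xmx value in MB
  if [ -n "${2:-}" ]; then
    echo "$2"            # an explicit -J-Xmx always wins
  else
    echo $(( $1 * 80 / 100 ))
  fi
}

no_server_heap_mb 7680        # Travis Docker service (~7.5 GB RAM): 6144
no_server_heap_mb 7680 4096   # same machine with -J-Xmx4G: 4096
```

This is why an explicit `-J-Xmx` is the lever of choice on constrained CI machines.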
Using that option in our `native-image` command like this: ```shell script time native-image \ -J-Xmx4G \ -H:+TraceClassInitialization \ -H:Name=$ARTIFACT \ -H:+ReportExceptionStackTraces \ -Dspring.graal.remove-unused-autoconfig=true \ -Dspring.graal.remove-yaml-support=true \ -cp $CP $MAINCLASS; ``` we can reliably reduce the amount of memory to 4GB of RAM, which should be enough for TravisCI - since it provides us with more than 6GB when using the Docker service ([see this build for example](https://travis-ci.org/github/jonashackt/spring-boot-graalvm/builds/677157831)). Using the option results in the following output: ``` 08:07:23.999 [ForkJoinPool-2-worker-3] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 4294967296 bytes (maybe) ... [spring-boot-graal:215] (typeflow): 158,492.53 ms, 4.00 GB [spring-boot-graal:215] (objects): 94,986.72 ms, 4.00 GB [spring-boot-graal:215] (features): 104,518.36 ms, 4.00 GB [spring-boot-graal:215] analysis: 368,005.35 ms, 4.00 GB [spring-boot-graal:215] (clinit): 3,107.18 ms, 4.00 GB [spring-boot-graal:215] universe: 12,502.04 ms, 4.00 GB [spring-boot-graal:215] (parse): 22,617.13 ms, 4.00 GB [spring-boot-graal:215] (inline): 10,093.57 ms, 3.49 GB [spring-boot-graal:215] (compile): 82,256.99 ms, 3.59 GB [spring-boot-graal:215] compile: 119,502.78 ms, 3.59 GB [spring-boot-graal:215] image: 12,087.80 ms, 3.59 GB [spring-boot-graal:215] write: 3,573.06 ms, 3.59 GB [spring-boot-graal:215] [total]: 558,194.13 ms, 3.59 GB real 9m22.984s user 24m41.948s sys 2m3.179s ``` The one thing to take into account is that Native Image compilation will be a bit slower now.
So if you run the compilation on your local machine with lots of memory, feel free to delete the `-J-Xmx4G` parameter :) ### Work around the Heroku 512MB RAM cap: Building our Docker image with GitHub Actions ```yaml native-image-compile-in-docker: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Compile Native Image using Docker run: docker build . --tag=registry.heroku.com/spring-boot-graal/web ``` ### Pushing and Releasing our Dockerized Native Spring Boot App on Heroku Container Infrastructure Now we should finally be able to [push the built Docker image into Heroku's Container Registry](https://devcenter.heroku.com/articles/container-registry-and-runtime#using-a-ci-cd-platform), from where we're able to run our Spring Boot Native app later on. Therefore we need to [configure some environment variables in Travis in order to push](https://docs.travis-ci.com/user/docker/#pushing-a-docker-image-to-a-registry) to Heroku's Container Registry inside our TravisCI job's settings: `DOCKER_USERNAME` and `DOCKER_PASSWORD`. The former is your Heroku email, the latter your Heroku API key. Be sure to prevent displaying the values in the build log: ![travis-env-vars-heroku](screenshots/travis-env-vars-heroku.png) With the following configuration inside our [.travis.yml](.travis.yml), we should be able to successfully log in to the Heroku Container Registry: ```yaml - script: # Login into Heroku Container Registry first, so that we can push our Image later - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin registry.heroku.com ``` Now after a successful Docker build, which compiles our Spring Boot App into a native executable, we finally need to push the resulting Docker image into the Heroku Container Registry. Therefore we need to use the correct tag for our Docker image build ([see the docs](https://devcenter.heroku.com/articles/container-registry-and-runtime#pushing-an-existing-image)): ```shell script docker build .
--tag=registry.heroku.com/<app>/<process-type> docker push registry.heroku.com/<app>/<process-type> ``` This means we add the following `docker build` and `docker push` commands to our [.travis.yml](.travis.yml): ```yaml - docker build . --tag=registry.heroku.com/spring-boot-graal/web - docker push registry.heroku.com/spring-boot-graal/web ``` The final step after a successful push is [to release our App on Heroku](https://devcenter.heroku.com/articles/container-registry-and-runtime#releasing-an-image), which has always been the last step to deploy our App on Heroku using Docker [since May 2018](https://devcenter.heroku.com/changelog-items/1426) (before that, a push was all you had to do). There are [two ways to achieve this](https://devcenter.heroku.com/articles/container-registry-and-runtime#releasing-an-image): either through the CLI via `heroku container:release web` or with the API. The first would require us to install the Heroku CLI in Travis, the latter should work out-of-the-box. Therefore let's craft the needed `curl` command: ```shell script curl -X PATCH https://api.heroku.com/apps/spring-boot-graal/formation \ -d '{ "updates": [ { "type": "web", "docker_image": "'"$(docker inspect registry.heroku.com/spring-boot-graal/web --format={{.Id}})"'" }] }' \ -H "Content-Type: application/json" \ -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \ -H "Authorization: Bearer $DOCKER_PASSWORD" ``` This `curl` command is even better than the one documented in [the official Heroku docs](https://devcenter.heroku.com/articles/container-registry-and-runtime#api), since it already incorporates the `docker inspect registry.heroku.com/spring-boot-graal/web --format={{.Id}}` command to retrieve the needed Docker image id and also omits the need to log in to the Heroku CLI beforehand (to create the `~/.netrc` mentioned in the docs), since we simply use `-H "Authorization: Bearer $DOCKER_PASSWORD"` here, where `$DOCKER_PASSWORD` is our Heroku API key again.
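Since the inline JSON quoting is the fiddly part of that command, one option (a sketch with a hypothetical helper name, not from the repo) is to assemble the request body in a small function that can be tested without ever calling the Heroku API:

```shell
# Hypothetical helper (not from the repo): build the JSON body for
# Heroku's PATCH /apps/<app>/formation call from a Docker image id.
release_payload() {
  # $1 = image id, e.g. from: docker inspect <image> --format={{.Id}}
  printf '{ "updates": [ { "type": "web", "docker_image": "%s" } ] }' "$1"
}

release_payload "sha256:0123abcd"
# prints: { "updates": [ { "type": "web", "docker_image": "sha256:0123abcd" } ] }
```

The curl invocation then shrinks to `-d "$(release_payload "$imageId")"`, keeping all quoting in one place.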
The problem with Travis: [it does not understand our nice curl command](https://travis-ci.org/github/jonashackt/spring-boot-graalvm/jobs/679008339), [since it interprets it totally wrong](https://stackoverflow.com/questions/34687610/how-to-properly-use-curl-in-travis-ci-config-file-yaml), even if we mind [the correct multiline usage](https://travis-ci.community/t/yaml-multiline-strings/3914/4). Well, I guess our Java User Group Thüringen speaker Kai Tödter already knew about that restriction of some CI systems and [crafted himself a bash script](https://toedter.com/2018/06/02/heroku-docker-deployment-update/) for exactly this purpose. At that point I created a script called [heroku-release.sh](heroku-release.sh): ```shell script #!/usr/bin/env bash herokuAppName=$1 dockerImageId=$(docker inspect registry.heroku.com/$herokuAppName/web --format={{.Id}}) curl -X PATCH https://api.heroku.com/apps/$herokuAppName/formation \ -d '{ "updates": [ { "type": "web", "docker_image": "'"$dockerImageId"'" }] }' \ -H "Content-Type: application/json" \ -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \ -H "Authorization: Bearer $DOCKER_PASSWORD" ``` Using this script, we finally have our fully working [.travis.yml](.travis.yml): ```yaml dist: bionic language: minimal services: - docker - script: # Login into Heroku Container Registry first, so that we can push our Image later - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin registry.heroku.com # Compile App with Docker - docker build . --tag=registry.heroku.com/spring-boot-graal/web # Push to Heroku Container Registry - docker push registry.heroku.com/spring-boot-graal/web # Release Dockerized Native Spring Boot App on Heroku - ./heroku-release.sh spring-boot-graal ``` That's it!
After a successful TravisCI build, we should be able to see our running Dockerized Spring Boot Native App on Heroku at https://spring-boot-graal.herokuapp.com/hello ![heroku-running-app](screenshots/heroku-running-app.png) You can even use `heroku logs` to see what's happening behind the scenes: ``` $ heroku logs -a spring-boot-graal 2020-04-24T12:02:14.562471+00:00 heroku[web.1]: State changed from down to starting 2020-04-24T12:02:41.564599+00:00 heroku[web.1]: State changed from starting to up 2020-04-24T12:02:41.283549+00:00 app[web.1]: 2020-04-24T12:02:41.283574+00:00 app[web.1]: . ____ _ __ _ _ 2020-04-24T12:02:41.283575+00:00 app[web.1]: /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 2020-04-24T12:02:41.283575+00:00 app[web.1]: ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 2020-04-24T12:02:41.283576+00:00 app[web.1]: \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 2020-04-24T12:02:41.283576+00:00 app[web.1]: ' |____| .__|_| |_|_| |_\__, | / / / / 2020-04-24T12:02:41.283578+00:00 app[web.1]: =========|_|==============|___/=/_/_/_/ 2020-04-24T12:02:41.286498+00:00 app[web.1]: :: Spring Boot :: 2020-04-24T12:02:41.286499+00:00 app[web.1]: 2020-04-24T12:02:41.287774+00:00 app[web.1]: 2020-04-24 12:02:41.287 INFO 3 --- [ main] i.j.s.SpringBootHelloApplication : Starting SpringBootHelloApplication on 1c7f1944-1f01-4284-8931-bc1a0a2d1fa5 with PID 3 (/spring-boot-graal started by u11658 in /) 2020-04-24T12:02:41.287859+00:00 app[web.1]: 2020-04-24 12:02:41.287 INFO 3 --- [ main] i.j.s.SpringBootHelloApplication : No active profile set, falling back to default profiles: default 2020-04-24T12:02:41.425964+00:00 app[web.1]: 2020-04-24 12:02:41.425 WARN 3 --- [ main] io.netty.channel.DefaultChannelId : Failed to find the current process ID from ''; using a random value: -36892848 2020-04-24T12:02:41.427326+00:00 app[web.1]: 2020-04-24 12:02:41.427 INFO 3 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 59884 2020-04-24T12:02:41.430874+00:00 app[web.1]:
2020-04-24 12:02:41.430 INFO 3 --- [ main] i.j.s.SpringBootHelloApplication : Started SpringBootHelloApplication in 0.156 seconds (JVM running for 0.159) ``` ### Pushing and Releasing our Dockerized Native Spring Boot App on Heroku Container Infrastructure using GitHub Actions We should also use GitHub Actions to [push the built Docker image into Heroku's Container Registry](https://devcenter.heroku.com/articles/container-registry-and-runtime#using-a-ci-cd-platform). Therefore we need to configure encrypted secrets in our GitHub repository in order to push to Heroku's Container Registry: `DOCKER_USERNAME` and `DOCKER_PASSWORD`. The former is your Heroku email, the latter your Heroku API key. Be sure to prevent displaying the values in the build log. With the following configuration inside our [.github/workflows/native-image-compile.yml](.github/workflows/native-image-compile.yml), we should be able to successfully log in to the Heroku Container Registry: ```yaml run: | echo ' Login into Heroku Container Registry first, so that we can push our Image later' echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin registry.heroku.com ``` Now after a successful Docker build, which compiles our Spring Boot App into a native executable, we finally need to push the resulting Docker image into the Heroku Container Registry. Therefore we need to use the correct tag for our Docker image build ([see the docs](https://devcenter.heroku.com/articles/container-registry-and-runtime#pushing-an-existing-image)): ```shell script docker build . --tag=registry.heroku.com/<app>/<process-type> docker push registry.heroku.com/<app>/<process-type> ``` This means we add the following commands to our [.github/workflows/native-image-compile.yml](.github/workflows/native-image-compile.yml): ```yaml echo 'Compile Native Image using Docker' docker build .
--tag=registry.heroku.com/spring-boot-graal/web echo 'Push to Heroku Container Registry' docker push registry.heroku.com/spring-boot-graal/web ``` See the paragraph on how to release to Heroku using containers at [Pushing and Releasing our Dockerized Native Spring Boot App on Heroku Container Infrastructure](#pushing-and-releasing-our-dockerized-native-spring-boot-app-on-heroku-container-infrastructure). # Autorelease on Docker Hub with TravisCI & GitHub Actions We could try to __autorelease to Docker Hub on hub.docker.com:__ Therefore head over to the repositories tab in Docker Hub and click `Create Repository`: ![docker-hub-create-repo](screenshots/docker-hub-create-repo.png) As the docs state, there are some config options to [set up automated builds](https://docs.docker.com/docker-hub/builds/). __BUT:__ As the automatic builds feature relies on the Docker Hub build infrastructure, there won't be enough RAM for our builds to succeed! You may try it, but you'll see errors like these at the end: ``` 13:13:26.080 [ForkJoinPool-2-worker-3] DEBUG io.netty.handler.codec.compression.ZlibCodecFactory - -Dio.netty.noJdkZlibEncoder: false # # There is insufficient memory for the Java Runtime Environment to continue. # Native memory allocation (mmap) failed to map 578920448 bytes for committing reserved memory. # An error report file with more information is saved as: # /build/target/native-image/hs_err_pid258.log OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x000000078d96d000, 578920448, 0) failed; error='Not enough space' (errno=12) Error: Image build request failed with exit status 1 ``` Since our TravisCI & GitHub Actions builds are now able to successfully run our GraalVM Native Image compilation in a Docker build, we can live without the automatic builds feature of Docker Hub - and simply push our built image to Docker Hub as well!
Therefore you need to create an Access Token in your Docker Hub account at https://hub.docker.com/settings/security Then head over to your TravisCI & GitHub Actions project settings and add the environment variables `DOCKER_HUB_TOKEN` and `DOCKER_HUB_USERNAME`, as we already did for the Heroku Container Registry. The final step then is to add the correct `docker login` and `docker push` commands to our [.travis.yml](.travis.yml) and [.github/workflows/native-image-compile.yml](.github/workflows/native-image-compile.yml): ```yaml # Push to Docker Hub also, since automatic builds there don't have enough RAM to do a docker build - echo "$DOCKER_HUB_TOKEN" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin - docker tag registry.heroku.com/spring-boot-graal/web jonashackt/spring-boot-graalvm:latest - docker push jonashackt/spring-boot-graalvm:latest ``` Be sure to also tag your image correctly according to your created Docker Hub repository. Finally, we should see our Docker images released on https://hub.docker.com/r/jonashackt/spring-boot-graalvm and can run this app simply by executing: ``` docker run -e "PORT=8087" -p 8087:8087 jonashackt/spring-boot-graalvm:latest ``` This pulls the latest `jonashackt/spring-boot-graalvm` image and runs our app locally.
# Upgrade to spring-native (from spring-graalvm-native) & spring-aot-maven-plugin & GraalVM 21.3 Current docs: https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/index.html#overview https://spring.io/blog/2021/03/11/announcing-spring-native-beta ### spring-graalvm-native -> spring-native Switch from `spring-graalvm-native` to `spring-native`: ```xml <spring-graalvm-native.version>0.8.5</spring-graalvm-native.version> <dependency> <groupId>org.springframework.experimental</groupId> <artifactId>spring-graalvm-native</artifactId> <version>${spring-graalvm-native.version}</version> </dependency> to <spring-native.version>0.10.5</spring-native.version> <dependency> <groupId>org.springframework.experimental</groupId> <artifactId>spring-native</artifactId> <version>${spring-native.version}</version> </dependency> ``` ### Spring Boot Version <=> spring-native Version <=> GraalVM version <=> Java version https://github.com/spring-projects-experimental/spring-native/milestones?state=closed https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/index.html#_validate_spring_boot_version > Spring Native 0.10.5 only supports Spring Boot 2.5.6, so change the version if necessary. https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/index.html#_freeze_graalvm_version Install the matching GraalVM version with SDKMAN: ```shell sdk install java 21.2.0.r11-grl ``` This will also configure the correct Maven version. 
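Since a mismatched GraalVM release is an easy way to break the build, a small guard (hypothetical, not part of the repo) can check the `native-image --version` output against the release spring-native expects:

```shell
# Hypothetical CI guard (not from the repo): verify the installed GraalVM
# matches the major.minor release that the chosen spring-native version expects.
graal_matches() {
  # $1 = output of `native-image --version`, $2 = required major.minor
  case "$1" in
    *"GraalVM $2."*) return 0 ;;
    *)               return 1 ;;
  esac
}

if graal_matches "$(native-image --version 2>/dev/null)" "21.2"; then
  echo "GraalVM version looks good"
else
  echo "WARN: expected GraalVM 21.2.x, try: sdk install java 21.2.0.r11-grl"
fi
```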
Run ```shell $ native-image --version GraalVM 21.2.0 Java 11 CE (Java Version 11.0.12+6-jvmci-21.2-b08) $ java -version openjdk version "11.0.12" 2021-07-20 OpenJDK Runtime Environment GraalVM CE 21.2.0 (build 11.0.12+6-jvmci-21.2-b08) OpenJDK 64-Bit Server VM GraalVM CE 21.2.0 (build 11.0.12+6-jvmci-21.2-b08, mixed mode, sharing) $ mvn --version Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739) Maven home: /Users/jonashecht/.sdkman/candidates/maven/current Java version: 11.0.12, vendor: GraalVM Community, runtime: /Users/jonashecht/.sdkman/candidates/java/21.2.0.r11-grl Default locale: de_DE, platform encoding: UTF-8 OS name: "mac os x", version: "11.5", arch: "x86_64", family: "mac" ``` Also use the matching version (see https://github.com/graalvm/container/pkgs/container/graalvm-ce) inside your [Dockerfile](Dockerfile) (if you don't use Buildpacks): ```dockerfile FROM ghcr.io/graalvm/graalvm-ce:ol7-java11-21.2.0 ``` and inside your CI system like GitHub Actions [.github/workflows/native-image-compile.yml](.github/workflows/native-image-compile.yml): ```yaml - name: Install GraalVM with SDKMAN run: | curl -s "https://get.sdkman.io" | bash source "$HOME/.sdkman/bin/sdkman-init.sh" sdk install java 21.2.0.r11-grl java -version ``` ### Enable native image support via spring-boot-maven-plugin https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/index.html#_enable_native_image_support Enhance `spring-boot-maven-plugin` buildpacks configuration & `${repackage.classifier}`: ```xml <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> to <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <configuration> <classifier>${repackage.classifier}</classifier> <image> <builder>paketobuildpacks/builder:tiny</builder> <env> <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE> </env> </image> </configuration> </plugin> 
``` ### spring-context-indexer --> spring-aot-maven-plugin https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/index.html#_add_the_spring_aot_plugin From `spring-context-indexer` to new Spring ahead-of-time (AOT) Maven build plugin `spring-aot-maven-plugin`: ```xml <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-indexer</artifactId> </dependency> to <plugin> <groupId>org.springframework.experimental</groupId> <artifactId>spring-aot-maven-plugin</artifactId> <version>${spring-native.version}</version> <executions> <execution> <id>test-generate</id> <goals> <goal>test-generate</goal> </goals> </execution> <execution> <id>generate</id> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> </plugins> </build> ``` ### native-image-maven-plugin --> native-maven-plugin Inside the profile `native` move plugin `org.graalvm.nativeimage.native-image-maven-plugin` to new `org.graalvm.buildtools.native-maven-plugin`: ```xml <native-image-maven-plugin.version>20.3.2</native-image-maven-plugin.version> <profiles> <profile> <id>native</id> <build> <plugins> <plugin> <groupId>org.graalvm.nativeimage</groupId> <artifactId>native-image-maven-plugin</artifactId> <version>${native-image-maven-plugin.version}</version> <configuration> <buildArgs>-J-Xmx4G -H:+ReportExceptionStackTraces -Dspring.native.remove-unused-autoconfig=true -Dspring.native.remove-yaml-support=true</buildArgs> <imageName>${project.artifactId}</imageName> </configuration> <executions> <execution> <goals> <goal>native-image</goal> </goals> <phase>package</phase> </execution> </executions> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </profile> </profiles> to <native-buildtools.version>0.9.4</native-buildtools.version> <profiles> <profile> <id>native</id> <properties> <repackage.classifier>exec</repackage.classifier> </properties> <dependencies> 
<dependency> <groupId>org.graalvm.buildtools</groupId> <artifactId>junit-platform-native</artifactId> <version>${native-buildtools.version}</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.graalvm.buildtools</groupId> <artifactId>native-maven-plugin</artifactId> <version>${native-buildtools.version}</version> <executions> <execution> <id>test-native</id> <phase>test</phase> <goals> <goal>test</goal> </goals> </execution> <execution> <id>build-native</id> <phase>package</phase> <goals> <goal>build</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> ``` # Links ### Spring Current docs: https://repo.spring.io/milestone/org/springframework/experimental/spring-graalvm-native-docs/0.7.0/spring-graalvm-native-docs-0.7.0.zip!/reference/index.html https://github.com/spring-projects/spring-framework/wiki/GraalVM-native-image-support https://www.infoq.com/presentations/spring-boot-graalvm/ https://github.com/spring-projects/spring-framework/issues/21529 https://spring.io/blog/2020/04/09/spring-graal-native-0-6-0-released https://spring.io/blog/2020/04/16/spring-tips-the-graalvm-native-image-builder-feature https://spring.io/blog/2020/06/10/the-path-towards-spring-boot-native-applications ##### 0.8.3 Spring Boot 2.4.0 Release + Oracle GraalVM 20.3.x compatibility: https://spring.io/blog/2020/11/23/spring-native-for-graalvm-0-8-3-available-now No `-H:+TraceClassInitialization` as simple boolean anymore: https://github.com/quarkusio/quarkus/issues/12434 & https://github.com/oracle/graal/commit/8c210f7fdbba5045bfbe14b6870f98ebbff6eed7 With GraalVM 20.3.x the official Docker image moved from Docker Hub to GitHub Packages: https://github.com/orgs/graalvm/packages/container/package/graalvm-ce ### Stackoverflow https://stackoverflow.com/questions/50911552/graalvm-and-spring-applications https://stackoverflow.com/questions/58465833/graalvm-with-native-image-compilation-in-travis-ci 
https://stackoverflow.com/questions/61302412/how-to-configure-the-port-of-a-spring-boot-app-thats-natively-compiled-by-graal ### GraalVM & Oracle https://blog.softwaremill.com/graalvm-installation-and-setup-on-macos-294dd1d23ca2 https://github.com/orgs/graalvm/packages/container/package/graalvm-ce https://www.graalvm.org/docs/reference-manual/native-image/ https://medium.com/graalvm/graalvm-20-1-7ce7e89f066b https://medium.com/graalvm/updates-on-class-initialization-in-graalvm-native-image-generation-c61faca461f7 ### Others https://e.printstacktrace.blog/building-java-and-maven-docker-images-using-parallelized-jenkins-pipeline-and-sdkman/ https://medium.com/analytics-vidhya/maybe-native-executable-in-quarkus-is-not-for-you-but-it-is-awesome-967588e80a4 https://quarkus.io/guides/building-native-image
1
joolu/ddd-sample
An SVN import of the Domain Driven Design (DDD) example project hosted on http://dddsample.sourceforge.net/
null
null
1
lynx-r/tictactoe-microservices-example
An example of a Spring Cloud Microservices application based on books (see the Links section)
eureka hystrix jwt microservices-application reactive-applications reactive-microservices ribbon spring-boot spring-boot-admin spring-cloud spring-config-server spring-gateway spring-microservices zipkin zuul
# An example of a simple microservices application # Run in Postman [![Run in Postman](https://run.pstmn.io/button.svg)](https://app.getpostman.com/run-collection/3d22b6efe9ada28ae2de#?env%5Btictactoe-local%5D=W3sia2V5IjoiSE9TVCIsInZhbHVlIjoiaHR0cDovL2xvY2FsaG9zdDo1NTU1IiwiZGVzY3JpcHRpb24iOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlRPS0VOIiwidmFsdWUiOiJleUpoYkdjaU9pSklVekkxTmlKOS5leUp6ZFdJaU9pSmhaRzFwYmlJc0ltRjFkR2h6SWpvaVVrOU1SVjlCUkUxSlRpeFNUMHhGWDFWVFJWSWlMQ0pwYzNNaU9pSjBhV04wWVdOMGIyVXVZMjl0SWl3aVpYaHdJam94TlRRME1qUXpOREUzZlEuLXF5cWptYlVQN3lTYkZiMGVsUnBxaU0xSmpsUnVUVWRaLXU5MzlkanNmUSIsImRlc2NyaXB0aW9uIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJIT1NUX0NPTkZJRyIsInZhbHVlIjoiaHR0cDovL2xvY2FsaG9zdDo4ODg4IiwiZGVzY3JpcHRpb24iOiIiLCJlbmFibGVkIjp0cnVlfV0=) Or import `tictactoe-postman_collection.json` Services and Credential (**login/password**): admin/admin actuator/actuator # Involved Spring Cloud and other services [Service discovery Eureka](http://localhost:8761) eureka/password [Cloud Config](http://localhost:8888/webapi/default) configuser/123 [Spring Boot Admin](http://localhost:9999/#/applications) admin/adminpassword [Zipkin](http://localhost:9411/zipkin/) zipkin/zipkin # Run and scripts Run `docker-compose` via `gradle plugin`: Before running `docker-compose` copy `tictactoe-shared.env` in `${HOME}/Docker/tictactoe-shared.env`. 
``` ./docker-compose-up.sh ``` Create images via `gradle plugin`: ``` ./spring-create-images.sh ./tictactoe-create-images.sh ``` Push images via `gradle plugin`: ``` ./spring-push-images.sh ./tictactoe-push-images.sh ``` # Spring Cloud Config I use [this](https://github.com/lynx-r/tictactoe-config-repo) git repository to configure the application # Links * [Spring Microservices in Action](https://www.manning.com/books/spring-microservices-in-action) * [Hands-On Spring 5 Security for Reactive Applications](https://www.packtpub.com/application-development/hands-spring-security-5-reactive-applications) * [The code for the book Spring Microservices in Action](https://github.com/carnellj?tab=repositories) * [The code for the book Hands-On Spring 5 Security for Reactive Applications](https://github.com/lynx-r/Hands-On-Spring-Security-5-for-Reactive-Applications)
1
Nepxion/DiscoveryGuide
☀️ Nepxion Discovery Guide is a guide example for Nepxion Discovery: blue-green and gray release, routing, rate limiting, circuit breaking, degradation, isolation, tracing, traffic dyeing, failover and multi-active
apollo blue-green-deployment gray-release nacos opentelemetry sentinel skywalking spring-cloud
![](http://nepxion.gitee.io/discovery/docs/discovery-doc/Banner.png) # Discovery【探索】Cloud-Native Microservices Solution ![Total visits](https://visitor-badge.laobi.icu/badge?page_id=Nepxion&title=total%20visits) [![Total lines](https://tokei.rs/b1/github/Nepxion/Discovery?category=lines)](https://tokei.rs/b1/github/Nepxion/Discovery?category=lines) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg?label=license)](https://github.com/Nepxion/Discovery/blob/6.x.x/LICENSE) [![Maven Central](https://img.shields.io/maven-central/v/com.nepxion/discovery.svg?label=maven)](https://search.maven.org/artifact/com.nepxion/discovery) [![Javadocs](http://www.javadoc.io/badge/com.nepxion/discovery-plugin-framework-starter.svg)](http://www.javadoc.io/doc/com.nepxion/discovery-plugin-framework-starter) [![Build Status](https://github.com/Nepxion/Discovery/workflows/build/badge.svg)](https://github.com/Nepxion/Discovery/actions) [![Codacy Badge](https://app.codacy.com/project/badge/Grade/5c42eb719ef64def9cad773abd877e8b)](https://www.codacy.com/gh/Nepxion/Discovery/dashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=Nepxion/Discovery&amp;utm_campaign=Badge_Grade) [![Stars](https://img.shields.io/github/stars/Nepxion/Discovery.svg?label=Stars&style=flat&logo=GitHub)](https://github.com/Nepxion/Discovery/stargazers) [![Stars](https://gitee.com/Nepxion/Discovery/badge/star.svg?theme=gvp)](https://gitee.com/Nepxion/Discovery/stargazers) [![Wiki](https://badgen.net/badge/icon/wiki?icon=wiki&label=GitHub)](https://github.com/Nepxion/Discovery/wiki) [![Wiki](https://badgen.net/badge/icon/wiki?icon=wiki&label=Gitee)](https://gitee.com/nepxion/Discovery/wikis/pages?sort_id=3993615&doc_id=1124387) [![Discovery PPT](https://img.shields.io/badge/Discovery%20-ppt-brightgreen?logo=Microsoft%20PowerPoint)](http://nepxion.gitee.io/discovery/docs/link-doc/discovery-ppt.html) [![Discovery 
Page](https://img.shields.io/badge/Discovery%20-page-brightgreen?logo=Microsoft%20Edge)](http://nepxion.gitee.io/discovery/) [![Discovery Platform Page](https://img.shields.io/badge/Discovery%20Platform%20-page-brightgreen?logo=Microsoft%20Edge)](http://nepxion.gitee.io/discoveryplatform) [![Polaris Page](https://img.shields.io/badge/Polaris%20-page-brightgreen?logo=Microsoft%20Edge)](http://polaris-paas.gitee.io/polaris-sdk) <a href="https://github.com/Nepxion" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/github.png"></a>&nbsp; <a href="https://gitee.com/Nepxion" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/gitee.png"></a>&nbsp; <a href="https://search.maven.org/search?q=g:com.nepxion" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/maven.png"></a>&nbsp; <a href="http://nepxion.gitee.io/discovery/docs/contact-doc/wechat.jpg" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/wechat.png"></a>&nbsp; <a href="http://nepxion.gitee.io/discovery/docs/contact-doc/dingding.jpg" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/dingding.png"></a>&nbsp; <a href="http://nepxion.gitee.io/discovery/docs/contact-doc/gongzhonghao.jpg" tppabs="#" target="_blank"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/gongzhonghao.png"></a>&nbsp; <a href="mailto:1394997@qq.com" tppabs="#"><img width="25" height="25" src="http://nepxion.gitee.io/discovery/docs/icon-doc/email.png"></a> If you find this framework a valuable reference, please help by giving it a [**Star**] at the top right of the page ## Introduction ### About the author - Founder of the Nepxion open-source community - Producer of the 2020 Alibaba China Cloud Native Summit - Included among related open-source projects by Nacos and Spring Cloud Alibaba in 2020 - Speaker at the 2021 Alibaba Technology Summit in Shanghai - Received venture-capital attention and due diligence from MiraclePlus (chaired by Dr. Qi Lu) in 2021 - Selected as a Gitee Most Valuable Project in 2021 - Co-author of the official Alibaba book "Nacos Architecture & Principles" - Spring Cloud Alibaba Steering 
Committer, Nacos Group Member - Spring Cloud Alibaba, Nacos, Sentinel, OpenTracing Committer & Contributor <img src="http://nepxion.gitee.io/discovery/docs/discovery-doc/CertificateGVP.jpg" width="43%"><img src="http://nepxion.gitee.io/discovery/docs/discovery-doc/AwardNacos1.jpg" width="28%"><img src="http://nepxion.gitee.io/discovery/docs/discovery-doc/AwardSCA1.jpg" width="28%"> ### Commercial cooperation ① Discovery series | Framework | Version | Supported Spring Cloud versions | License | | --- | --- | --- | --- | | Discovery | 1.x.x ~ 6.x.x | Camden ~ Hoxton | Open source, free forever | | DiscoveryX | 7.x.x ~ 10.x.x | 2020 ~ 2023 | Closed source, commercial license | ② Polaris series Polaris is the advanced, customized edition of Discovery; its highlights are - Integrated and customized on top of Nepxion Discovery - Traffic scheduling across multiple clouds, multi-active sites and multiple data centers - Dynamic cross-cloud domain names, cross-environment adaptation - DCN, DSU, SET unitized deployment - Flexible component assembly, configuration shielded from the outside - Minimalist low-code PaaS platform | Framework | Version | Supported Discovery versions | Supported Spring Cloud versions | License | | --- | --- | --- | --- | --- | | Polaris | 1.x.x | 6.x.x | Finchley ~ Hoxton | Closed source, commercial license | | Polaris | 2.x.x | 7.x.x ~ 10.x.x | 2020 ~ 2023 | Closed source, commercial license | Enterprises and users who need the commercial edition: please add WeChat 1394997 to contact the author and discuss cooperation ### Getting started ![](http://nepxion.gitee.io/discovery/docs/discovery-doc/Logo64.png) Discovery【探索】enterprise-grade cloud-native microservices open-source solution ① Quick start - [Quick start (GitHub)](https://github.com/Nepxion/Discovery/wiki) - [Quick start (Gitee)](https://gitee.com/Nepxion/Discovery/wikis/pages) ② Solution - [Solution (WIKI)](http://nepxion.com/discovery) - [Solution (PPT)](http://nepxion.gitee.io/discovery/docs/link-doc/discovery-ppt.html) ③ Best practices - [Best practices (PPT)](http://nepxion.gitee.io/discovery/docs/link-doc/discovery-ppt-1.html) ④ Platform UI - [Platform UI (WIKI)](http://nepxion.com/discovery-platform) ⑤ Framework source - [Framework source (GitHub)](https://github.com/Nepxion/Discovery) - [Framework source (Gitee)](https://gitee.com/Nepxion/Discovery) ⑥ Guide example source - [Guide example source (GitHub)](https://github.com/Nepxion/DiscoveryGuide) - [Guide example source (Gitee)](https://gitee.com/Nepxion/DiscoveryGuide) ⑦ Guide example notes - Spring Cloud Finchley ~ Hoxton versions - [Minimal guide example](https://github.com/Nepxion/DiscoveryGuide/tree/6.x.x-simple), branch 6.x.x-simple - [Minimal domain-gateway deployment guide example](https://github.com/Nepxion/DiscoveryGuide/tree/6.x.x-simple-domain-gateway), branch 6.x.x-simple-domain-gateway - 
[极简版非域网关部署指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/6.x.x-simple-non-domain-gateway),分支为6.x.x-simple-non-domain-gateway - [集成版指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/6.x.x),分支为6.x.x - [高级版指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/6.x.x-complex),分支为6.x.x-complex - Spring Cloud 202x版本 - [极简版指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/master-simple),分支为master-simple - [极简版本地化指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/master-simple-native),分支为master-simple-native - [集成版指南示例](https://github.com/Nepxion/DiscoveryGuide/tree/master),分支为master ![](http://nepxion.gitee.io/discovery/docs/polaris-doc/Logo64.png) Polaris【北极星】企业级云原生微服务商业解决方案 ① 解决方案 - [解决方案WIKI版](http://nepxion.com/polaris) ② 框架源码 - [框架源码Github版](https://github.com/polaris-paas/polaris-sdk) - [框架源码Gitee版](https://gitee.com/polaris-paas/polaris-sdk) ③ 指南示例源码 - [指南示例源码Github版](https://github.com/polaris-paas/polaris-guide) - [指南示例源码Gitee版](https://gitee.com/polaris-paas/polaris-guide) ④ 指南示例说明 - Spring Cloud Finchley ~ Hoxton版本 - [指南示例](https://github.com/polaris-paas/polaris-guide/tree/1.x.x),分支为1.x.x - Spring Cloud 202x版本 - [指南示例](https://github.com/polaris-paas/polaris-guide/tree/master),分支为master ## 请联系我 微信、钉钉、公众号和文档 ![](http://nepxion.gitee.io/discovery/docs/contact-doc/wechat-1.jpg)![](http://nepxion.gitee.io/discovery/docs/contact-doc/dingding-1.jpg)![](http://nepxion.gitee.io/discovery/docs/contact-doc/gongzhonghao-1.jpg)![](http://nepxion.gitee.io/discovery/docs/contact-doc/document-1.jpg) ## Star走势图 [![Stargazers over time](https://starchart.cc/Nepxion/Discovery.svg)](https://starchart.cc/Nepxion/Discovery)
1
pereferrera/trident-lambda-splout
A toy example of a "Lambda architecture" using Storm's Trident as real-time layer and Splout SQL as batch layer.
null
trident-lambda-splout ===================== A toy example of a ["Lambda architecture"](http://www.dzone.com/links/r/big_data_lambda_architecture.html) using Storm's [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial) as real-time layer and [Splout SQL](http://sploutsql.com) as batch layer. The problem =========== We want to implement counting the number of appearances of hashtags in tweets, grouped by date, and serve the data as a remote service, for example to be able to populate timelines in a website / mobile app (e.g. give me the evolution of mentions for hashtag "california" for the past 10 days). The requirements for the solution are: - It must scale (we want to process billions of tweets. Think as if we had access to the Firehose!). - It must be able to serve low-latency requests to potentially a lot of concurrent users asking for timelines. Using Hadoop to store the tweets and a simple Hive query for grouping by hashtag and date seems good enough for calculating the counts. However, we also want to add real-time to the system: we want to have the actual number of appearances for hashtags updated for today in seconds. And we need to put the Hadoop counts in some really fast datastore for being able to query them. The solution ============ The solution proposed is to use a "lambda architecture" and implement a real-time layer using [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial), which is an API on top of Storm that eases building real-time topologies and saving persistent state derived from them. For serving the batch layer we will use [Splout SQL](http://sploutsql.com) which is a high-performance SQL read-only data store that can pull and serve datasets from Hadoop very efficiently. Splout is fast like [ElephantDB](https://github.com/nathanmarz/elephantdb) but it also allows us to execute SQL queries. 
Using SQL for serving the batch layer is convenient as we might want to break down the counts by hour, day, week, or any arbitrary date period. We will also use [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial) to implement the remote service using its DRPC capabilities. [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial) itself will query both the batch layer and the real-time layer and merge the results. This is, conceptually, how the overall architecture looks: ![alt text](https://raw.github.com/pereferrera/trident-lambda-splout/master/TridentSploutArch-medium.png "Trident-Lambda-Splout Hashtag Counts Architecture") How to try it ============= 1) Hadoop Hadoop is a key component of this "lambda architecture" example so you must have it installed and its services must be running. $HADOOP_HOME needs to be defined for Splout SQL. 2) Batch layer For trying out this example you must first download Splout SQL and execute a one-node cluster locally. You can download a distribution (.tar.gz file) from Maven Central: http://search.maven.org/#browse%7C-1223220252 After uncompressing it: bin/splout-service.sh qnode start bin/splout-service.sh dnode start Should bring up a one-node local cluster at: http://localhost:4412 When that is finished you can load a toy dataset of hourly hashtag counts. You can use the data in this "trident-lambda-splout" repo under the "sample-hashtags" folder. 
Let's call $TRIDENT_LAMBDA_SPLOUT_HOME the path where you have cloned this repo, then, from $SPLOUT_HOME (the path where you uncompressed Splout) you just execute: hadoop fs -put $TRIDENT_LAMBDA_SPLOUT_HOME/sample-hashtags sample-hashtags hadoop jar splout-hadoop-*-hadoop.jar simple-generate -i sample-hashtags -o out-hashtags -pby hashtag -p 2 -s "label:string,date:string,count:int,hashtag:string" --index "hashtag,date" -t hashtags -tb hashtags hadoop jar splout-hadoop-*-hadoop.jar deploy -q http://localhost:4412 -root out-hashtags -ts hashtags After these three statements you will have the data indexed, partitioned and loaded into [Splout SQL](http://sploutsql.com). You can check it by looking for Tablespace "hashtags" at the management webapp: http://localhost:4412 3) Real-time layer Execute the class in this repo called "LambdaHashTagsTopology" from your favorite IDE. This class will: - Start a dummy cyclic input Spout that always emits the same two tweets. - Start a Trident topology that counts hashtags by date in real-time. - Start a DRPC server that accepts a hashtag as argument and queries both Splout (batch layer) and Trident (real-time layer) and merges the results. If you want to execute it from the command line you can use maven as follows: mvn clean install mvn dependency:copy-dependencies mvn exec:exec -Dexec.executable="java" -Dexec.args="-cp target/classes:target/dependency/* com.datasalt.trident.LambdaHashTagsTopology" You should see something like this: ... 
Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19}]] Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19,"20130123":76}]] Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19,"20130123":136}]] Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19,"20130123":192}]] Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19,"20130123":232}]] Result for hashtag 'california' -> [[{"20091022":115,"20091023":115,"20091024":158,"20091025":19,"20130123":286}]] ... The first four dates come from the batch layer (Splout SQL) whereas the last date (whose count is being incremented in real-time) comes from the real-time layer. The merging has been done with Trident at the DRPC service. Conclusions =========== We have shown a simple toy example of a "lambda architecture" that provides timelines for mentions of hashtags in tweets. If you look at the code, you will notice it is actually quite easy and straightforward to implement the real-time part of this system using [Trident](https://github.com/nathanmarz/storm/wiki/Trident-tutorial), mainly because of its high-level constructs (a-la-Cascading, each(), groupBy(), etc.) and its wrappers around memory state. There is quite an interesting amount of information on how to handle state properly with Trident here: https://github.com/nathanmarz/storm/wiki/Trident-state . We have also seen [Splout SQL](http://sploutsql.com), a database that integrates tightly with Hadoop and provides real low-latency lookups over its data, being the perfect solution for serving a batch layer in a highly concurrent website or mobile application. For this example to be complete we have to clarify some things: - We didn't implement actually crawling the Tweets, parsing them and feeding them into Storm. 
You would usually do that through a messaging / queue system such as Kestrel (see https://github.com/nathanmarz/storm-kestrel). Creating a scalable fetcher for Twitter that also outputs the Tweets to Hadoop's HDFS is too complex and out of scope for this example. - We used a dataset of hourly counts that was already calculated by Hadoop but we didn't show how to do that. This part is quite straightforward and you will find plenty of examples on the web on how to perform simple aggregation tasks using Hadoop, Pig, Hive, Cascading or even a lower-level API such as [Pangool](http://pangool.net/). - We didn't talk about a "master coordinator" which is quite an important part of the architecture. This coordinator would be in charge of triggering the Hadoop aggregation task, and triggering the generation and deploy tasks of [Splout SQL](http://sploutsql.com) we saw above. One important thing to keep in mind is that batch always overrides real-time in this example so the coordinator and the fetcher must make sure that Hadoop only processes complete-hour data. Incomplete hours should only be handled by the real-time layer. - The real-time layer should expire old data from time to time in order to be efficient (e.g. keep only a rolling view of one week). Keeping a rolling one-week view would mean that the batch layer could potentially be executed only once per week. For another toy example on Storm (even though it is a bit old), see: http://www.datasalt.com/2012/01/real-time-feed-processing-with-storm/
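The "batch always overrides real-time" rule described above can be sketched in plain Java. The class and method names below are hypothetical — in the repository the merge actually happens inside a Trident DRPC topology, not in a standalone class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the merge rule: real-time counts are taken as a base, and any
// date also present in the batch layer is overwritten by the batch value,
// because batch always overrides real-time in this design.
public class HashtagCountMerger {

    public static Map<String, Integer> merge(Map<String, Integer> batchCounts,
                                             Map<String, Integer> realTimeCounts) {
        Map<String, Integer> merged = new LinkedHashMap<>(realTimeCounts);
        merged.putAll(batchCounts); // batch wins on overlapping dates
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Integer> batch = new LinkedHashMap<>();
        batch.put("20091022", 115);
        batch.put("20091023", 115);

        Map<String, Integer> realTime = new LinkedHashMap<>();
        realTime.put("20091023", 3);   // stale partial count, overridden by batch
        realTime.put("20130123", 286); // today's count, only known to real-time

        System.out.println(merge(batch, realTime));
    }
}
```

This is why the coordinator must ensure Hadoop only processes complete hours: for any date both layers know about, the real-time value is simply discarded.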
1
aspear/izpack5-example-installer
This is a complete example of an izpack 5.x based installer that uses Maven to build.
null
null
1
ThomasVitale/spring-cloud-gateway-resilience-security-observability
Example with Spring Boot 3 focused on resilience, security and observability. It uses Spring Cloud Gateway, Spring Security and Spring Cloud Circuit Breaker.
grafana grafana-loki grafana-tempo keycloak microservices opentelemetry prometheus redis spring-boot spring-cloud spring-cloud-gateway spring-security
# Spring Cloud Gateway - Resilience, Security, and Observability Do you want to use a microservices architecture? Are you looking for a solution to manage access to single services from clients? How can you ensure resilience and security for your entire system? Spring Cloud Gateway is a project based on Reactor, Spring WebFlux, and Spring Boot which provides an effective way to route traffic to your APIs and address cross-cutting concerns. In this session, I'll show you how to configure an API gateway to route traffic to your microservices architecture and implement solutions to improve the resilience of your system with patterns like circuit breakers, retries, fallbacks, and rate limiters using Spring Cloud Circuit Breaker and Resilience4J. Since the gateway is the entry point of your system, it’s also an excellent candidate to implement security concerns like user authentication. I'll show you how to do that with Spring Security, OAuth2, and OpenID Connect, relying on Spring Redis Reactive to manage sessions. Finally, I'll show you how to improve the observability of your system using Spring Boot Actuator and Spring Cloud Sleuth and relying on the Grafana stack. ## Stack * Java 17 * Spring Boot 3 * Grafana OSS ## Usage You can use Docker Compose to set up the entire system, including applications, data services, and the Grafana observability stack. First, package both the Edge Service and Book Service application as container images leveraging the Cloud Native Buildpacks integration provided by Spring Boot. For each application, run the following task: ```bash ./gradlew bootBuildImage ``` Then, from the project root folder, run Docker Compose. ```bash docker-compose up -d ``` The Edge Service application is exposed on port 9000 while Book Service on port 9001. The applications require authentication through OAuth2/OpenID Connect. You can log in as Isabelle (isabelle/password) or Bjorn (bjorn/password). 
## Observability Stack Both Spring Boot applications are observable, as any cloud native application should be. Prometheus metrics are backed by Spring Boot Actuator and Micrometer Metrics. Distributed tracing is backed by OpenTelemetry and Micrometer Tracing. **Grafana** lets you query and visualize logs, metrics, and traces from your applications. After running the Docker Compose configuration as explained in the previous section, you can access Grafana on port 3000. It already provides dashboards to visualize metrics from Spring Boot, Spring Cloud Gateway, and Spring Cloud Circuit Breaker. In the "Explore" panel, you can query logs from Loki, metrics from Prometheus, and traces from Tempo. **Loki** is a log aggregation system part of the Grafana observability stack. "It's like Prometheus, but for logs." Logs are available for inspection from Grafana. **Tempo** is a distributed tracing backend part of the Grafana observability stack. The Spring Boot applications send traces to Tempo, which makes them available for inspection from Grafana. The traces follow the OpenTelemetry format and protocol. **Prometheus** is a monitoring system part of the Grafana observability stack. It scrapes the metrics endpoints exposed by the Spring Boot applications (`/actuator/prometheus`). Metrics are available for inspection and dashboarding from Grafana.
1
isilher/red-vs-blue
Example React Native white label by config
null
# RED vs BLUE _React Native white label by config_ ## Context This demo React Native application showcases a simple white label implementation by environment configuration. This means: - One codebase. - Centralised config for Android, iOS and JS variables. - No flavors, no extra schemes, no extra targets. - No hassle. Just run: ``` shell yarn ios:red yarn ios:blue ``` To get: ![screenshot](screenshot_ios.png) ``` shell yarn android:red yarn android:blue ``` To get: ![screenshot](screenshot_android.png) ## Approach The demo setup was achieved using the following steps. 1. React native init. 1. Add and configure [react-native-ultimate-config](https://github.com/maxkomarychev/react-native-ultimate-config). (Their comprehensive documentation is amazing 🤩) 1. Set up `.env` files for each white label. I chose RED (`.env.red`) and BLUE (`.env.blue`). 1. Set the environment variables for the app name and unique identifier and icon names. 1. (optional) Set up asset folders for each white label and add [a hook script](.rnucrc.js) to copy the content into the `res` and `xcimages` folders when switching env files. 1. (optional) Add yarn scripts to switch env file and run the correct build. For a more detailed explanation you can check the commits of this project. They follow the above steps, starting with a common asset pool and then extracting them into separate folders as suggested in step 5. ## When to choose this approach - When you need to run out a few white label versions of your RN app. - When you want to keep the codebase as consistent as possible between those white label versions. ## When NOT to choose this approach When you want the white label versions to have differences beyond theme variables and feature flags. For example when different white labels will have different dependencies, you may be better off splitting into XCode targets/schemes and Android flavors. This is outside the scope of my research. 
There is [experimental support](https://github.com/maxkomarychev/react-native-ultimate-config/blob/master/docs/cookbook.md) for flavors and schemes in react-native-ultimate-config, but since these functionalities are intertwined within the build process it will probably never be completely supported. In my personal opinion you would probably be better off splitting your project into separate apps that use common functionality through shared libraries in those cases. ## Gotchas - When setting a dynamic android `applicationId` you must supply it to `react-native run-android` or it cannot auto-launch the app after building. I chose to add this into the yarn scripts for convenience.
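For illustration, one of the env files from step 3 of the approach above might look roughly like this. The variable names are hypothetical — react-native-ultimate-config simply passes through whatever keys you define to Android, iOS and JS:

```shell
# .env.red — hypothetical white-label configuration (key names are examples,
# not the ones this demo actually uses)
APP_NAME="RED"
APP_ID="com.example.red"
PRIMARY_COLOR="#FF0000"
WELCOME_MESSAGE="Welcome to RED"
```

A matching `.env.blue` would define the same keys with the other brand's values, and the yarn scripts switch between them before building.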
1
java-crypto/cross_platform_crypto
Example codes for cryptographic exchange between several platforms (Java, PHP, C#, Javascript, NodeJs, node-forge, Python, Go and Dart)
aes crypto-js cryptography csharp dart go java javascript keyexchange node-js php python rsa
null
1
oldratlee/io-api
📐 generic API design example by I/O, the demo implementation of https://dzone.com/articles/generic-inputoutput-api-java
api api-design demo design generic io io-api java
# 📐 `Java`的通用`IO API`设计 <p align="center"> <a href="https://github.com/oldratlee/io-api/actions/workflows/ci.yaml"><img src="https://img.shields.io/github/actions/workflow/status/oldratlee/io-api/ci.yaml?branch=main&logo=github&logoColor=white" alt="Github Workflow Build Status"></a> <a href="https://app.codecov.io/gh/oldratlee/io-api/tree/main"><img src="https://img.shields.io/codecov/c/github/oldratlee/io-api/main?logo=codecov&logoColor=white" alt="Codecov"></a> <a href="https://openjdk.java.net/"><img src="https://img.shields.io/badge/Java-8+-339933?logo=openjdk&logoColor=white" alt="Java support"></a> <a href="https://www.apache.org/licenses/LICENSE-2.0.html"><img src="https://img.shields.io/github/license/oldratlee/io-api?color=4D7A97&logo=apache" alt="License"></a> <a href="https://github.com/oldratlee/io-api/stargazers"><img src="https://img.shields.io/github/stars/oldratlee/io-api?style=flat" alt="GitHub Stars"></a> <a href="https://github.com/oldratlee/io-api/fork"><img src="https://img.shields.io/github/forks/oldratlee/io-api?style=flat" alt="GitHub Forks"></a> <a href="https://github.com/oldratlee/io-api/issues"><img src="https://img.shields.io/github/issues/oldratlee/io-api" alt="GitHub Issues"></a> <a href="https://github.com/oldratlee/io-api"><img src="https://img.shields.io/github/repo-size/oldratlee/io-api" alt="GitHub repo size"></a> <a href="https://gitpod.io/#https://github.com/oldratlee/io-api"><img src="https://img.shields.io/badge/Gitpod-ready to code-339933?label=gitpod&logo=gitpod&logoColor=white" alt="gitpod: Ready to Code"></a> </p> [:book: English Documentation](README-EN.md) | :book: 中文文档 <a href="#dummy"><img src="https://user-images.githubusercontent.com/1063891/234197656-c664c069-01db-4883-9031-9800644ec9ac.jpg" width="50%" align="right" /></a> ------------------------------ <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - 
[包的功能](#%E5%8C%85%E7%9A%84%E5%8A%9F%E8%83%BD) - [更多信息](#%E6%9B%B4%E5%A4%9A%E4%BF%A1%E6%81%AF) - [API设计的进一步学习资料](#api%E8%AE%BE%E8%AE%A1%E7%9A%84%E8%BF%9B%E4%B8%80%E6%AD%A5%E5%AD%A6%E4%B9%A0%E8%B5%84%E6%96%99) - [简单资料](#%E7%AE%80%E5%8D%95%E8%B5%84%E6%96%99) - [系统书籍](#%E7%B3%BB%E7%BB%9F%E4%B9%A6%E7%B1%8D) <!-- END doctoc generated TOC please keep comment here to allow auto update --> ------------------------------ [Java的通用I/O API](https://github.com/oldratlee/translations/blob/master/generic-io-api-in-java-and-api-design/README.md)(by _Rickard Öberg_)中给出了一个通用`Java` `IO API`设计,并且有`API`的`Demo`代码。更重要的是给出了这个`API`设计本身的步骤和过程,这让`API`设计有些条理。文中示范了从 普通简单实现 整理成 正交分解、可复用、可扩展、高性能、无错误的`API`设计 的过程,这个过程是很值得理解和学习! 设计偏向是艺术,一个赏心悦目的设计,尤其是`API`设计,旁人看来多是妙手偶得的感觉,如果能有些章可循真是一件美事。 在艺术工作中,真的艺术性工作量也只是一部分,而给出 _**方法**_ 以 _**减少艺术工作之中艺术性工作量**_ 的人是 **大师**。 ❤️ 原文中只给出设计的 - 发展思路 - 关键接口 - 典型的使用方式 没有给出可运行的实现及其连接的细节,看起来可能比较费力,因为设计细致分解后抽象度高而不容易理解。 为了大家和自己更深入有效地学习,需要: 1. 给出这个通用`IO API`的可运行的`Demo`实现。 这个工程即是本人的可运行的`Demo`实现。 当然个人力荐你先自己实现练习一下,这样比直接看我的实现,在学习上会有效得多! 1. 写了一篇分析总结。 本人的分析总结:[用Java I/O API设计练习的分析和总结](docs/java-api-design-exercise.md)。这个你可以直接看,以更高效方便地理解这个`API`的设计。 > PS: > > 上面2件事其实是份自学的家庭作业哦~ :laughing: > 在阿里中间件团队的时候(2011年),[@_ShawnQianx_ 大大](http://weibo.com/shawnqianx)看到这篇文章时,给组员布置的家庭作业~ :bowtie: > > @_ShawnQianx_ 对这篇文章及作者的评论: > > 设计时,一要分解好系统,二是多个组件拼回来还是系统预期的样子,二步都做好是难度所在。这个人分析和把控的功力很好! 
## 包的功能 - 包`com.oldratlee.io.core` 核心接口 - 包`com.oldratlee.io.core.filter` 实现的`Filter`功能的类 - 包`com.oldratlee.io.utils` 工具类 - 包`com.oldratlee.io.demo` Demo示例的`Main`类 ## 更多信息 - 个人在组内分享时的PPT:[API设计实例分析](docs/ApiDesignSampleStudy.pptx) - 本人对这篇博文的译文:[【译】Java的通用I/O API](https://github.com/oldratlee/translations/tree/master/generic-io-api-in-java-and-api-design/README.md) - 问题交流: https://github.com/oldratlee/io-api/issues ## API设计的进一步学习资料 ### 简单资料 - How to Design a Good API and Why it Matters(by Joshua Bloch) 【[本地下载](docs/How-to-Design-a-Good-API-and-Why-it-Matters-by-Joshua-Bloch.pdf)】 <http://lcsd05.cs.tamu.edu/slides/keynote.pdf> - Google Search <http://www.google.com.hk/search?&q=api+design> ### 系统书籍 - The Little Manual of API Design 【[本地下载](docs/The-Little-Manual-of-API-Design.pdf)】 <http://chaos.troll.no/~shausman/api-design/api-design.pdf> - [《软件框架设计的艺术》](http://book.douban.com/subject/6003832/) | 英文原版[Practical API Design: Confessions of a Java Framework Architect](http://www.amazon.com/Practical-API-Design-Confessions-Framework/dp/1430243171) - [Contributing to Eclipse中文版](https://book.douban.com/subject/1219945/) | 英文原版[Contributing to Eclipse : Principles, Patterns, and Plug-Ins](https://book.douban.com/subject/1610318/) - [.NET设计规范 : NET约定、惯用法与模式](http://book.douban.com/subject/4805165/) | 英文原版[Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)](http://www.amazon.com/Framework-Design-Guidelines-Conventions-Libraries/dp/0321545613)
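As a rough flavor of what such a generic I/O design looks like, here is a heavily simplified sketch of a generic sender/receiver pair with a composable filter. The interface and class names below are illustrative only — they are not the interfaces used in this repository or in the original article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A heavily simplified sketch of a generic I/O design: a Sender pushes items
// of type T into a Receiver, and a filter is just a Receiver that wraps
// another Receiver (decoration), so filters compose freely.
interface Receiver<T> {
    void receive(T item);
}

interface Sender<T> {
    void sendTo(Receiver<T> receiver);
}

class FilterReceiver<T> implements Receiver<T> {
    private final Predicate<T> predicate;
    private final Receiver<T> next;

    FilterReceiver(Predicate<T> predicate, Receiver<T> next) {
        this.predicate = predicate;
        this.next = next;
    }

    @Override
    public void receive(T item) {
        if (predicate.test(item)) {
            next.receive(item); // forward only matching items
        }
    }
}

public class GenericIoSketch {
    public static void main(String[] args) {
        Sender<String> lines = receiver -> {
            receiver.receive("keep this");
            receiver.receive("drop that");
        };
        List<String> collected = new ArrayList<>();
        lines.sendTo(new FilterReceiver<>(s -> s.startsWith("keep"), collected::add));
        System.out.println(collected);
    }
}
```

The point of the exercise in the article is how this kind of decomposition is arrived at — orthogonal interfaces, reusable filters, no coupling between producers and consumers — rather than the specific interfaces themselves.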
1
minimaldevelop/libgdx-scene2d-demo-game
Example game using libgdx with scene2d, actors, box2d and other cool stuff. In this game you are falling man that avoid platforms.
null
null
1
eiceblue/Spire.Office-for-Java
A collection of examples that shows you how to use Spire.Office for Java to create, convert and manipulate Word, PowerPoint & PDF documents, and generate and scan 1D & 2D barcodes.
barcode-generator barcode-scanner java-library processing-office-document processing-pdf-document
# Spire.Office-for-Java-Java Libraries for Processing MS Office Documents and PDF [![Foo](https://i.imgur.com/yRDvquw.png)](https://www.e-iceblue.com/Introduce/office-for-java.html) [Product Page](https://www.e-iceblue.com/Introduce/office-for-java.html) | [Tutorials](https://www.e-iceblue.com/Tutorials.html) | [Examples](https://github.com/eiceblue) | [Forum](https://www.e-iceblue.com/forum/) | [Customized Demo](https://www.e-iceblue.com/freedemo.html) | [Temporary License](https://www.e-iceblue.com/TemLicense.html) [Spire.Office for Java](https://www.e-iceblue.com/Introduce/office-for-java.html) is a **combination of Enterprise-Level Office Java APIs** offered by E-iceblue. It includes the most recent versions of Spire.Doc, Spire.XLS, Spire.Presentation, Spire.PDF, and Spire.Barcode. Using Spire.Office for Java, developers can **open**, **create**, **modify**, **convert**, **print**, and **view** **MS Word**, **Excel**, and **PowerPoint** documents, **PDF** documents and many other format files. ### Spire.Doc for Java A professional [Word Java library](https://www.e-iceblue.com/Introduce/doc-for-java.html) designed to [create](https://www.e-iceblue.com/Tutorials/Java/Spire.Doc-for-Java/Program-Guide/Document-Operation/Create-Word-Document-in-Java.html), read, [write](https://www.e-iceblue.com/Tutorials/Java/Spire.Doc-for-Java/Program-Guide/Document-Operation/Create-Word-Document-in-Java.html), [convert](https://www.e-iceblue.com/Tutorials/Java/Spire.Doc-for-Java/Program-Guide/Conversion/Convert-Word-to-PDF-in-Java.html) and [print Word](https://www.e-iceblue.com/Tutorials/Java/Spire.Doc-for-Java/Program-Guide/Print/Print-Word-Document-in-Java.html) document files in any Java applications with fast and high-quality performance. 
### Spire.XLS for Java A professional [Excel Java library](https://www.e-iceblue.com/Introduce/xls-for-java.html) that can be used to [create](https://www.e-iceblue.com/Tutorials/Java/Spire.XLS-for-Java/Program-Guide/Document-Operation/Create-Excel-File-in-Java.html), read, [write](https://www.e-iceblue.com/Tutorials/Java/Spire.XLS-for-Java/Program-Guide/Document-Operation/Create-Excel-File-in-Java.html), [convert](https://www.e-iceblue.com/Tutorials/Java/Spire.XLS-for-Java/Program-Guide/Conversion/Java-convert-Excel-to-PDF.html) and [print Excel](https://www.e-iceblue.com/Tutorials/Java/Spire.XLS-for-Java/Program-Guide/Print/Print-Excel-Documents-in-Java.html) files in any type of Java applications. ### Spire.Presentation for Java A professional [PowerPoint® compatible library](https://www.e-iceblue.com/Introduce/presentation-for-java.html) that enables developers to [create](https://www.e-iceblue.com/Tutorials/Java/Spire.Presentation-for-Java/Program-Guide/Document-Operation/Operate-the-presentation-slide-on-Java-applications.html), read, [modify](https://www.e-iceblue.com/Tutorials/Java/Spire.Presentation-for-Java/Program-Guide/Document-Operation/Operate-the-presentation-slide-on-Java-applications.html), [convert](https://www.e-iceblue.com/Tutorials/Java/Spire.Presentation-for-Java/Program-Guide/Conversion/Convert-PowerPoint-to-PDF-in-Java.html) and [print PowerPoint](https://www.e-iceblue.com/Tutorials/Java/Spire.Presentation-for-Java/Program-Guide/Print/Java-print-a-PowerPoint-document.html) documents in any Java application. 
### Spire.PDF for Java A professional [PDF Java library](https://www.e-iceblue.com/Introduce/pdf-for-java.html) that enables developers to [create](https://www.e-iceblue.com/Tutorials/Spire.PDF-for-JAVA/Spire.PDF-Program-Guide-JAVA/Document-Operation/Create-a-PDF-Document-in-Java.html), edit, [convert](https://www.e-iceblue.com/Tutorials/Java/Spire.PDF-for-Java/Program-Guide/Conversion/Convert-PDF-to-Word-in-Java.html) and [print PDF](https://www.e-iceblue.com/Tutorials/Java/Spire.PDF-for-Java/Program-Guide/Print/How-to-print-PDF-document-in-Java.html) files in a Java application without any external dependencies. ### Spire.Barcode for Java A professional [barcode library](https://www.e-iceblue.com/Introduce/barcode-for-java.html) designed for Java developers to [generate](https://www.e-iceblue.com/Tutorials/Spire.Barcode-for-JAVA/Getting-Started/How-to-Create-Barcode-Using-Spire.Barcode-for-Java.html), [read and scan 1D & 2D barcodes](https://www.e-iceblue.com/Tutorials/Spire.Barcode-for-JAVA/Scan-Barcode-in-Java.html) in Java applications. [Product Page](https://www.e-iceblue.com/Introduce/office-for-java.html) | [Tutorials](https://www.e-iceblue.com/Tutorials.html) | [Examples](https://github.com/eiceblue) | [Forum](https://www.e-iceblue.com/forum/) | [Customized Demo](https://www.e-iceblue.com/freedemo.html) | [Temporary License](https://www.e-iceblue.com/TemLicense.html)
0
IvanWooll/FloatingLabelValidator
A small library including an example app which uses the 'floating label' pattern to show form validation
null
# FloatingLabelValidator A small library including an example app demonstrating a concept of combining the 'floating label' pattern with form validation. [Youtube video demo](https://youtu.be/9O6cJpybySg) --- This is more a proof of concept than a full blown library but if you want to use it all the files you need are in the lib folder. <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity" android:orientation="vertical"> <com.ivanwooll.floatinglabelvalidator.lib.FloatingLabelTextView android:layout_width="match_parent" android:layout_height="wrap_content" app:hint="alpha" app:allowEmpty="false" app:validatorType="alpha" /> <com.ivanwooll.floatinglabelvalidator.lib.FloatingLabelTextView android:layout_width="match_parent" android:layout_height="wrap_content" app:hint="numeric" app:allowEmpty="false" app:validatorType="numeric" /> <com.ivanwooll.floatinglabelvalidator.lib.FloatingLabelTextView android:layout_width="match_parent" android:layout_height="wrap_content" app:hint="alpha numeric" app:allowEmpty="false" app:validatorType="alphaNumeric" /> <com.ivanwooll.floatinglabelvalidator.lib.FloatingLabelTextView android:layout_width="match_parent" android:layout_height="wrap_content" app:hint="email" app:allowEmpty="false" app:validatorType="email" /> <com.ivanwooll.floatinglabelvalidator.lib.FloatingLabelTextView android:layout_width="match_parent" android:layout_height="wrap_content" app:hint="phone number" app:allowEmpty="false" app:validatorType="phone" /> </LinearLayout>
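For reference, the `validatorType` values in the layout above could plausibly be backed by patterns like these. The regular expressions are illustrative guesses at the semantics — the library's actual validation logic lives in its `lib` folder:

```java
import java.util.regex.Pattern;

// Hypothetical patterns matching the validatorType attributes in the layout:
// alpha, numeric, alphaNumeric, email, phone. Not the library's actual code.
public class ValidatorTypes {
    public static final Pattern ALPHA = Pattern.compile("[a-zA-Z]+");
    public static final Pattern NUMERIC = Pattern.compile("\\d+");
    public static final Pattern ALPHA_NUMERIC = Pattern.compile("[a-zA-Z0-9]+");
    public static final Pattern EMAIL = Pattern.compile("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
    public static final Pattern PHONE = Pattern.compile("\\+?[0-9 ()-]{7,}");

    public static boolean isValid(Pattern type, String input) {
        return type.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid(EMAIL, "user@example.com")); // valid email
        System.out.println(isValid(NUMERIC, "abc"));            // not numeric
    }
}
```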
1
android10/DynamicProxy_Java_Sample
This is an example written in Java that demonstrates how to implement a simple dynamic proxy for intercepting method calls.
null
Dynamic Proxy Java Sample ================== This is an example written in Java that demonstrates how to implement a simple dynamic proxy for intercepting method calls. Proxy Pattern (from Wikipedia) ----------------- In computer programming, the proxy pattern is a software design pattern. A proxy, in its most general form, is a class functioning as an interface to something else. The proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate. A well-known example of the proxy pattern is a reference counting pointer object. In situations where multiple copies of a complex object must exist, the proxy pattern can be adapted to incorporate the flyweight pattern in order to reduce the application's memory footprint. Typically, one instance of the complex object and multiple proxy objects are created, all of which contain a reference to the single original complex object. Any operations performed on the proxies are forwarded to the original object. Once all instances of the proxy are out of scope, the complex object's memory may be deallocated. License -------- Copyright 2014 Fernando Cejas Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
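The technique this sample demonstrates — intercepting method calls via `java.lang.reflect.Proxy` — boils down to something like the following minimal sketch (illustrative names; this is not the repository's code):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DynamicProxySketch {

    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        Greeter real = name -> "Hello, " + name;

        // The InvocationHandler intercepts every call made through the proxy;
        // it can run code before/after, then delegates to the real object.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("intercepted: " + method.getName());
            return method.invoke(real, methodArgs);
        };

        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);

        System.out.println(proxied.greet("world"));
    }
}
```

All calls through `proxied` now pass through the handler first, which is the basis for cross-cutting concerns such as logging, timing, or caching.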
1
jreznot/intellij-selenide-example
Example of the project created by Selenium UI Testing plugin for IntelliJ IDEA
allure-report intellij-idea selenide selenium ui-testing ui-tests
# intellij-selenide-example Example of a project created by Selenium UI Testing plugin for IntelliJ IDEA. Includes: - Gradle with JUnit 5 - Selenide tests - Allure listener setup - Browsers.json file for Selenoid [Read more about Selenium plugin for IntelliJ IDEA](https://blog.jetbrains.com/idea/2020/03/intellij-idea-2020-1-selenium-support/) <a href="https://blog.jetbrains.com/idea/2020/03/intellij-idea-2020-1-selenium-support/"><img src="https://raw.githubusercontent.com/jreznot/intellij-selenide-example/main/img/idea-install.png" alt="Install"/></a>
1
traex/SlidingMenuExample
Example for one of my tutorials at http://blog.robinchutaux.com/blog/how-to-create-a-menu-like-hello-sms/
null
SlidingMenuExample ================== ![SlidingMenuExample](https://github.com/traex/SlidingMenuExample/blob/master/header.png) This is an example of [How to create a menu like Hello SMS](http://blog.robinchutaux.com/blog/how-to-create-a-menu-like-hello-sms/) [HelloSMS](https://hellotext.com) is an awesome app with a design concept that I like very much! So I wanted to know how to create the same menu, and after some digging I found a way to do it. It was a little bit difficult, so I explained [here](http://blog.robinchutaux.com/blog/how-to-create-a-menu-like-hello-sms/) how to achieve the same menu with interaction.
1
gauravgyal/MVVM-LIveData-ViewModel
This is a basic example which gives an idea of MVVM Architecture along with LiveData and ViewModel Architecture components.
null
A sample project which illustrates the use of the MVVM architecture along with the LiveData and ViewModel Architecture Components. It displays a list of news items in a RecyclerView; clicking any item opens the news article in a WebView. https://android.jlelse.eu/android-architecture-pattern-components-mvvm-livedata-viewmodel-lifecycle-544e84e85177?source=friends_link&sk=a009475da1cf62720891f4526eb8ec05 ![whatsapp image 2017-12-30 at 12 32 22 pm](https://user-images.githubusercontent.com/14937553/34452211-6d54872a-ed60-11e7-9dcb-1ce503fcedcb.jpeg) ![whatsapp image 2017-12-30 at 12 32 22 pm 1](https://user-images.githubusercontent.com/14937553/34452214-797701c2-ed60-11e7-8471-8254f6ec97a7.jpeg) ![whatsapp image 2017-12-30 at 12 32 22 pm 2](https://user-images.githubusercontent.com/14937553/34452215-7b3dc216-ed60-11e7-82c6-48a0c05f2df8.jpeg)
1
Azure-Samples/hdinsight-kafka-java-get-started
Basic example of using Java to create a producer and consumer that work with Kafka on HDInsight. Also a demonstration of the streaming api.
null
--- page_type: sample languages: - java products: - azure - azure-hdinsight description: "The examples in this repository demonstrate how to use the Kafka Consumer, Producer, and Streaming APIs with a Kafka on HDInsight cluster." urlFragment: hdinsight-kafka-java-get-started --- # Java-based example of using the Kafka Consumer, Producer, and Streaming APIs The examples in this repository demonstrate how to use the Kafka Consumer, Producer, and Streaming APIs with a Kafka on HDInsight cluster. There are two projects included in this repository: * Producer-Consumer: This contains a producer and consumer that use a Kafka topic named `test`. * Streaming: This contains an application that uses the Kafka Streaming API (in Kafka 0.10.0 or higher) to read data from the `test` topic, split the data into words, and write a count of words into the `wordcounts` topic. NOTE: Both projects assume Kafka 0.10.0, which is available with Kafka on HDInsight cluster version 3.6. ## Producer and Consumer To run the consumer and producer example, use the following steps: 1. Fork/clone the repository to your development environment. 2. Install Java JDK 8 or higher. This was tested with Oracle Java 8, but should work under OpenJDK as well. 3. Install [Maven](http://maven.apache.org/). 4. Assuming Java and Maven are both on the path and JAVA_HOME is configured correctly, use the following commands to build the consumer and producer example: cd Producer-Consumer mvn clean package A file named `kafka-producer-consumer-1.0-SNAPSHOT.jar` is now available in the `target` directory. 5. Use SCP to upload the file to the Kafka cluster: scp ./target/kafka-producer-consumer-1.0-SNAPSHOT.jar SSHUSER@CLUSTERNAME-ssh.azurehdinsight.net:kafka-producer-consumer.jar Replace **SSHUSER** with the SSH user for your cluster, and replace **CLUSTERNAME** with the name of your cluster. When prompted, enter the password for the SSH user. 6. 
Use SSH to connect to the cluster: ssh USERNAME@CLUSTERNAME 7. Use the following commands in the SSH session to get the Zookeeper hosts and Kafka brokers for the cluster. You need this information when working with Kafka. Note that JQ is also installed, as it makes it easier to parse the JSON returned from Ambari. Replace __PASSWORD__ with the login (admin) password for the cluster. Replace __KAFKANAME__ with the name of the Kafka on HDInsight cluster. sudo apt -y install jq export KAFKAZKHOSTS=`curl -sS -u admin:$PASSWORD -G https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2` export KAFKABROKERS=`curl -sS -u admin:$PASSWORD -G https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2` 8. Use the following to verify that the environment variables have been correctly populated: echo '$KAFKAZKHOSTS='$KAFKAZKHOSTS echo '$KAFKABROKERS='$KAFKABROKERS The following is an example of the contents of `$KAFKAZKHOSTS`: zk0-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,zk2-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181 The following is an example of the contents of `$KAFKABROKERS`: wn1-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,wn0-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092 NOTE: This information may change as you perform scaling operations on the cluster, as this adds and removes worker nodes. You should always retrieve the Zookeeper and Broker information before working with Kafka. IMPORTANT: You don't have to provide all broker or Zookeeper nodes. A connection to one broker or Zookeeper node can be used to learn about the others. In this example, the list of hosts is trimmed to two entries. 9. 
This example uses a topic named `test`. Use the following to create this topic: /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic test --zookeeper $KAFKAZKHOSTS 10. Use the producer-consumer example to write records to the topic: java -jar kafka-producer-consumer.jar producer test $KAFKABROKERS A counter displays how many records have been written. 11. Use the producer-consumer to read the records that were just written: java -jar kafka-producer-consumer.jar consumer test $KAFKABROKERS This returns a list of the random sentences, along with a count of how many are read. ## Streaming NOTE: The streaming example expects that you have already set up the `test` topic from the previous section. 1. On your development environment, change to the `Streaming` directory and use the following to create a jar for this project: mvn clean package 2. Use SCP to copy the `kafka-streaming-1.0-SNAPSHOT.jar` file to your HDInsight cluster: scp ./target/kafka-streaming-1.0-SNAPSHOT.jar SSHUSER@CLUSTERNAME-ssh.azurehdinsight.net:kafka-streaming.jar Replace **SSHUSER** with the SSH user for your cluster, and replace **CLUSTERNAME** with the name of your cluster. When prompted, enter the password for the SSH user. 3. Once the file has been uploaded, return to the SSH connection to your HDInsight cluster and use the following commands to create the `wordcounts` and `wordcount-example-Counts-changelog` topics: /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic wordcounts --zookeeper $KAFKAZKHOSTS /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 8 --topic wordcount-example-Counts-changelog --zookeeper $KAFKAZKHOSTS 4. Use the following command to start the streaming process in the background: java -jar kafka-streaming.jar $KAFKABROKERS 2>/dev/null & 5. 
While it is running, use the producer to send messages to the `test` topic: java -jar kafka-producer-consumer.jar producer test $KAFKABROKERS &>/dev/null & 6. Use the following to view the output that is written to the `wordcounts` topic: /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic wordcounts --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer NOTE: You have to tell the consumer to print the key (which contains the word value) and the deserializer to use for the key and value in order to view the data. The output is similar to the following: dwarfs 13635 ago 13664 snow 13636 dwarfs 13636 ago 13665 a 13803 ago 13666 a 13804 ago 13667 ago 13668 jumped 13640 jumped 13641 a 13805 snow 13637 7. Use __Ctrl + C__ to exit the consumer, then use the `fg` command to bring the streaming background task to the foreground. Use __Ctrl + C__ to exit it also.
1
nicksieger/refactoring-to-rails
Example of refactoring a Spring/Hibernate application to Rails
null
null
1
ilya40umov/spring-batch-quartz-example
An example which is meant to show how Spring Batch and Quartz can address the key issues of batch processing and job scheduling in a clustered environment.
null
spring-batch-quartz-example =========================== ## Summary ## An example which is meant to show how Spring Batch and Quartz can address the key issues of batch processing and job scheduling in a clustered environment. ## Used Technologies ## Spring Framework (IoC, JDBC, Transactions, etc.), Quartz Scheduler, Spring Batch, MySQL. ## Project details ## ### How to run ### The application contains two "main" classes: 1) org.sbq.batch.mains.ActivityEmulator does exactly what it's supposed to: it emulates some user activity. This part is only needed to keep adding new data to the DB. You should run only one instance of this class (meaning that this part does not deal with clustering). 2) org.sbq.batch.mains.SchedulerRunner is meant to be run in multiple instances concurrently in order to simulate a bunch of nodes in a cluster. ### Simulated environment ### The example is meant to test the following environment: several servers (at least 2 nodes) running in a cluster against an RDBMS (hopefully clustered), which have to perform certain batch tasks periodically with fail-over, work distribution, etc. 
### Implemented Jobs ### <b>CalculateEventMetricsScheduledJob:</b> calculates the number of occurrences of each type of event since the last job run and updates the site statistics entry (which holds the site's metrics since its start); triggered every 5 minutes; saves where it finished (the time up to which it has processed); misfire policy 'FireOnceAndProceed', meaning that only one call to the job is needed for any number of misfires. <b>CalculateOnlineMetricsScheduledJob:</b> calculates the total number of users online and the number of users jogging, chatting, dancing and idle at a certain point in time; triggered every 15 seconds; uses the ScheduledFireTime from Quartz to identify the point for which it should calculate the metrics; misfire policy 'IgnoreMisfire', meaning that all missed executions will be fired as soon as Quartz identifies them (so that the job can catch up); this job randomly (in ~1/3 of cases) throws an exception (TransientException) in order to emulate network issues. ## Addressed Scenarios ## ### Scheduler: No single point of failure ### <b>Use case:</b> Make sure that if one node goes down, the scheduled tasks are still being executed by the rest of the nodes. <b>How supported/implemented:</b> Quartz should be running on each machine in the cluster. Each Quartz instance should be configured to work with a DB-backed JobStore, and clustering should be enabled in the Quartz properties. When at least 1 node with Quartz is up, the scheduled tasks will keep being executed (guaranteed by the Quartz architecture). <b>Steps to verify:</b> Run init.sql. Start one instance of ActivityEmulator (optional). Start several instances of SchedulerRunner. Watch them executing jobs. Kill some of them. See how the load is spread between the nodes which are left running. ### Scheduler: Work distribution ### <b>Use case:</b> Make sure that the tasks are getting distributed among nodes in the cluster. (This is important because after a certain point one node won't be able to handle all the tasks.) 
<b>How supported/implemented:</b> Quartz with a DB JobStore performs work distribution automatically. <b>Steps to verify:</b> Run init.sql. Start one instance of ActivityEmulator (optional). Start several instances of SchedulerRunner. Looking at the log file on each instance of SchedulerRunner, verify that the tasks are executed on each node. (The distribution is not guaranteed to be even.) ### Scheduler: Misfire Support ### <b>Use case:</b> Make sure that if all nodes go down and, after a while, at least one is back online, all missed job executions (for the particular jobs which are sensitive to misfires) are invoked. <b>How supported/implemented:</b> Quartz with a DB JobStore performs detection of misfired jobs automatically upon startup of the first node in the cluster. <b>Steps to verify:</b> Run init.sql. Start one instance of ActivityEmulator (optional). Start several instances of SchedulerRunner. Stop all instances of SchedulerRunner. Wait for some time. Start at least one instance of SchedulerRunner. See how misfired executions are detected and executed. ### Scheduler: Task Recovery ### <b>Use case:</b> Make sure that if a node executing a certain job goes down, the job is automatically repeated/restarted. <b>How supported/implemented:</b> This use case is tricky because a server crash is likely to leave the job in an unknown state (especially if it writes data into non-transactional storage like Mongo). For now I assume the simplest use case, where the job just has to be restarted and we can ignore the possibility of data collisions. Using the requestRecovery feature from Quartz and a SYNCHRONOUS executor (which uses a Quartz thread for performing batch processing), we can rely on Quartz to identify crashed jobs and re-invoke them on a different node (or on the same one, if it's up and the first to identify the problem). 
<b>NOTE:</b> I think that a smoother transition for job recovery can be made by storing the job state in the ExecutionContext, which will be picked up by Spring Batch when you create a new execution for the same job instance. <b>Steps to verify:</b> Run init.sql. Start one instance of ActivityEmulator (optional). Start several instances of SchedulerRunner. Look at the logs, find out which SchedulerRunner is running LongRunningBatchScheduledJob, and kill it. See how, after a while, another node logs the message that it has picked up the job (this can also be verified in the DB by looking at the executions table). ### Spring Batch: Retries Support ### <b>Use case:</b> Retry a job if it fails due to a transient problem (such as a network connectivity issue, or the DB being down for a couple of minutes). <b>How supported/implemented:</b> Spring Batch provides RetryTemplate and RetryOperationsInterceptor for this purpose, which allow you to specify the number of retries, the back-off policy, and the types of exceptions considered retryable. <b>Steps to verify:</b> Run init.sql. Start one instance of ActivityEmulator (optional). Start several instances of SchedulerRunner. In the logs you should see "calculateOnlineMetrics() - TRANSIENT EXCEPTION...", which indicates that an exception was thrown but the Service method was retried by the RetryOperationsInterceptor. ### General: Monitoring ### <b>Use case:</b> There should be an easy way to get the following info at any point in time: the list of all jobs which are being executed at the moment, the history of all job executions (with parameters and execution results, success/failure), and the list of all scheduled jobs (e.g. the next time a particular job runs, etc.). <b>How supported/implemented:</b> In fact all this information can be obtained from the Quartz and Spring Batch abstractions in Java code. In some cases you can look into the DB and find out the status of running jobs, history, etc. There is also the Spring Batch Admin web app, which can be used for this purpose. 
<b>Steps to verify:</b> see the 'How supported/implemented' section. ## Un-Addressed Scenarios ## ### General: Execution Management ### <b>Q:</b> How do I manually re-execute a particular job (with given parameters) if it fails completely (i.e. no luck after N auto-retries)? <b>A:</b> Not implemented at the moment. In fact we should consider using JMS to deliver a command to the cluster of batch processing nodes. A JMS listener would then trigger the specified Spring Batch job. ### General: Graceful Halt ### <b>Q:</b> How can I signal to all nodes to stop, so that I can deploy a new version of the software, do maintenance, etc.? <b>A:</b> I think this should also be done via a JMS message (sent to a topic!). Upon receiving the message, each node should: a) stop Quartz, b) wait for all jobs which don't support re-start, c) stop all jobs which support re-start (the jobs which can save the point where they left off and resume from that point). Also see http://numberformat.wordpress.com/tag/batch/ for some info on graceful stop.
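The retry behaviour described under "Spring Batch: Retries Support" can be illustrated without any Spring dependencies. The sketch below is plain stdlib Java that mimics what a configured RetryTemplate does — retry an operation on a transient exception with a fixed back-off — and is a conceptual illustration, not the project's code:

```java
import java.util.concurrent.Callable;

// Thrown by operations that failed for a temporary reason (network blip, DB restart, ...).
class TransientException extends RuntimeException {
    TransientException(String msg) { super(msg); }
}

public class RetryDemo {
    /**
     * Invoke the operation, retrying up to maxAttempts times on TransientException
     * with a fixed back-off between attempts. Conceptually what a RetryTemplate
     * configured with a SimpleRetryPolicy and FixedBackOffPolicy does.
     */
    static <T> T withRetries(Callable<T> operation, int maxAttempts, long backOffMillis) throws Exception {
        TransientException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (TransientException e) {
                last = e;                    // retryable: remember and try again
                Thread.sleep(backOffMillis); // fixed back-off before the next attempt
            }
        }
        throw last; // retries exhausted: propagate the last transient failure
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // fail twice, then succeed
        String result = withRetries(() -> {
            if (failures[0]-- > 0) throw new TransientException("network issue");
            return "metrics calculated";
        }, 5, 10L);
        System.out.println(result); // prints "metrics calculated"
    }
}
```

In the real project this logic lives in the interceptor configuration, so the Service methods stay free of retry boilerplate.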
1
aws-samples/designing-cloud-native-microservices-on-aws
Introduce a fluent way to design cloud native microservices via EventStorming workshop, this is a hands-on workshop. Contains such topics: DDD, Event storming, Specification by example. Including the AWS product : Serverless Lambda , DynamoDB, Fargate, CloudWatch.
aws cloudnative ddd dynamodb ecr ecs eventbridge eventstorming fargate lambda microservices serverless
# Designing Cloud Native Microservices on AWS (via DDD/EventStormingWorkshop) ![](docs/img/coffee.jpg) _Picture license-free from [Pexels](https://www.pexels.com/photo/background-beverage-breakfast-brown-414645/)_ Building software is hard. Understanding the business needs of the software is even harder. In almost every software development project, there will always be some form of gap between the requirements of the business users and the actual implementation. As a developer, knowing how to narrow this gap can help you go a long way to building applications that are relevant for the users. Using a Domain Driven Design approach, delivered via Event Storming, it can help to reduce the time it takes for everyone in the project team to understand a business domain model. > Theory and Practice: Learning in the Real world cases **Go through all of the learning journey, develop-->build-->deploy artifacts on AWS** ![](docs/img/Coffeeshop-architecture.png) ## Table of Contents - [00 - Event Storming](#eventstorming) - [What is Event Storming?](#what-is-event-storming) - [Whom is it for?](#whom-is-it-for) - [Event Storming Terms](#event-storming-terms) - [Event Storming Benefits](#event-storming-benefits) - [Event Storming Applications](#event-storming-applications) - [01 - Hands-on: Events exploring](docs/01-hands-on-events-exploring/README.md) - [02 - Cafe business scenario](docs/02-coffee-shop-scenario/README.md) - [03 - Roles, Commands, and Events Mapping](docs/03-roles-commands-events-mapping/README.md) - [Key Business events in the coffeeshop](docs/03-roles-commands-events-mapping/README.md#key-business-events-in-the-coffeeshop) - [Commands and Events mapping](docs/03-roles-commands-events-mapping/README.md#commands-and-events-mapping) - [Roles](docs/03-roles-commands-events-mapping/README.md#roles) - [Exceptions or risky events](docs/03-roles-commands-events-mapping/README.md#exceptions-or-risky-events) - [Re-think solutions to serve risky 
events](docs/03-roles-commands-events-mapping/README.md#re-think-solutions-to-serve-risky-events) - [Aggregate](docs/03-roles-commands-events-mapping/README.md#aggregate) - [Bounded Context forming up](docs/03-roles-commands-events-mapping/README.md#bounded-context-forming-up) - [04 - Modeling and Development](docs/04-modeling-and-development/README.md) - [Specification by Example](docs/04-modeling-and-development/README.md#specification-by-example) - [TDD within Unit Test environment](docs/04-modeling-and-development/README.md#tdd-within-unit-test-environment) - [Generate unit test code skeleton](docs/04-modeling-and-development/README.md#generate-unit-test-code-skeleton) - [Implement Domain Model from code Skeleton](docs/04-modeling-and-development/README.md#implement-domain-model-from-code-skeleton) - [Design each Microservices in Port-adapter concept](docs/04-modeling-and-development/README.md#design-each-microservices-in-port-adapter-concept) - [05 - Deploy Applications by AWS CDK](docs/05-deploy-applications-by-cdk/README.md) <!--- - [05 - Domain Driven Design Tactical design pattern guidance](05-ddd-tactical-design-pattern) - [06 - Actual Implementation](06-actual-implementation) - [07 - Infrastructure as Code by CDK](07-iaac-cdk) - [08 - Deploy Serverless application](08-deploy-serverless-app) - [09 - Deploy Containerized application](09-deploy-containerized-app) - [10 - Build up CI/CD pipeline](10-build-up-cicd-pipeline) ---> # Event Storming ![image](docs/img/problemsolving.png) ## What is Event Storming? Event Storming is a **rapid**, **lightweight**, and often under-appreciated group modeling technique invented by Alberto Brandolini, that is **intense**, **fun**, and **useful** to **accelerate** project teams. It is typically offered as an interactive **workshop** and it is a synthesis of facilitated group learning practices from Gamestorming, leveraging on the principles of Domain Driven Design (DDD). 
You can apply it to practically any technical or business domain, especially those that are large, complex, or both. ## Whom is it for? Event Storming isn't just for the software development team. In fact, it is recommended to invite all the stakeholders, such as developers, domain experts, and business decision makers, to join the Event Storming workshop to collect viewpoints from every participant. ## Event Storming Terms ![Event Storming](https://storage.googleapis.com/xebia-blog/1/2018/10/From-EventStorming-to-CoDDDing-New-frame-3.jpg) > Reference from Kenny Bass - https://storage.googleapis.com/xebia-blog/1/2018/10/From-EventStorming-to-CoDDDing-New-frame-3.jpg Take a look at this diagram; there are a few colored sticky notes, each with a different intention: * **Domain Events** (Orange sticky note) - Describes *what happened*. Represents facts that happened in a specific business context, written in the past tense * **Actions** aka Command (Blue sticky note) - Describes an *action* that caused the domain event. It is a request or intention, raised by a role, a timer, or an external system * **Information** (Green sticky note) - Describes the *supporting information* required to help make a decision to raise a command * **Consistent Business Rules** aka Aggregate (Yellow sticky note) * Groups of Events or Actions that represent a specific business capability * Has the responsibility to accept or fulfill the intention of a command * Should be small in scope * Communicates via eventual consistency * **Eventually Consistent Business Rules** aka Policy (Lilac sticky note) * Represents a process or business rule. Can come from external regulations and restrictions, e.g. account login success/fail process logic. ## Event Storming Benefits Business requirements can be very complex. It is often hard to find a fluent way to help the Product Owner and development teams collaborate effectively. Event Storming is designed to be **efficient** and **fun**. 
By bringing the key stakeholders into the same room, the process becomes: - **Efficient:** Everyone coming together in the same room can make decisions and sort out differences quickly. To create a comprehensive business domain model, what used to take many weeks of email, phone call, or meeting exchanges can be reduced to a single workshop. - **Simple:** Event Storming encourages the use of a "ubiquitous language" that both the technical and non-technical stakeholders can understand. - **Fun:** Domain modeling is fun! Stakeholders get hands-on experience with domain modeling, in which everyone can participate and interact with each other. It also provides more opportunities to exchange ideas and improve mindsharing, from various perspectives across multiple roles. - **Effective:** Stakeholders are encouraged to think not about the data model, but about the business domain. This puts customers first and, working backwards from there, achieves an outcome that is more relevant. - **Insightful:** Event Storming generates conversations. This helps stakeholders to understand the entire business process better and to form a more holistic view from various perspectives. ## Event Storming Applications There are many useful applications of Event Storming. The most obvious time to use it is at a project's inception, so the team can start with a common understanding of the domain model. Some other reasons include: * Discovering complexity early on, finding missing concepts, understanding the business process; * Modelling or solving a specific problem in detail; * Learning how to model and think about modelling. Event Storming can also help to identify key views for your user interface, which can jump-start Site Mapping or Wireframing. Let's get started with a quick example to demonstrate how to run a simple Event Storming. [Next: 01 Hands-on Events Exploring >](docs/01-hands-on-events-exploring/README.md) ## License This library is licensed under the MIT-0 License. 
See the LICENSE file.
0
naftulikay/commons-daemon-example
An example project using the Apache Commons Daemon library to run a Java system service.
null
commons-daemon-example ====================== Version: ${project.version} A fairly simple project to demonstrate the awesomeness of the Apache Commons Daemon project. This project demonstrates a complete application which can be run in the foreground, or (more importantly) as a background daemon service using `jsvc`.
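As context for how `jsvc` drives such a service: Commons Daemon defines an init/start/stop/destroy lifecycle. The sketch below is deliberately self-contained — it defines its own look-alike interface rather than importing `org.apache.commons.daemon.Daemon` (whose real `init` also receives a `DaemonContext`) — so treat it as an approximation of the lifecycle, not the library's API:

```java
// Self-contained sketch mirroring the lifecycle that Commons Daemon's
// Daemon interface defines; names here are illustrative.
interface SimpleDaemon {
    void init() throws Exception;   // read config, acquire resources (jsvc runs this privileged)
    void start() throws Exception;  // begin doing work (after jsvc drops privileges)
    void stop() throws Exception;   // stop doing work; may be followed by start() again
    void destroy();                 // release resources for good
}

class EchoDaemon implements SimpleDaemon {
    final StringBuilder log = new StringBuilder();
    public void init()    { log.append("init;"); }
    public void start()   { log.append("start;"); }
    public void stop()    { log.append("stop;"); }
    public void destroy() { log.append("destroy;"); }
}

public class DaemonLifecycleDemo {
    public static void main(String[] args) throws Exception {
        EchoDaemon d = new EchoDaemon();
        // jsvc drives this same sequence on a real Daemon implementation:
        d.init(); d.start(); d.stop(); d.destroy();
        System.out.println(d.log); // prints "init;start;stop;destroy;"
    }
}
```

Running in the foreground is then just a matter of calling the same lifecycle methods from your own `main`, which is what makes the one codebase usable both ways.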
1
andbed/clean-architecture
An example of the application built around clean architecture principles as defined by Uncle Bob.
null
Clean-architecture ================== An example of applying "The Clean Architecture" principles as defined by Uncle Bob: http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html This is only to illustrate key concepts. Version 1.1 Next TODOs: * Add HSQLDB * Finish configuring app as a working REST service (Spring based) * Add security * Add validation * Add another delivery mechanism * Implement some real functionality for a couple of use cases Changelog: * version 1.0 - prepared for Confitura 2014, contained only basic project structure * version 1.1 - added stand-alone controller tests and Spring DI support
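As a rough illustration of the Dependency Rule at the heart of The Clean Architecture — inner-circle use cases depend only on interfaces that outer layers implement — consider this minimal sketch. All names are invented for illustration and are not taken from the repository:

```java
// Persistence boundary, owned by the use-case layer: the inner circle
// defines the interface, the outer circle implements it.
interface UserGateway {
    String findNameById(int id);
}

// Pure application logic: no framework, no database, easy to unit-test.
class GreetUserUseCase {
    private final UserGateway gateway;
    GreetUserUseCase(UserGateway gateway) { this.gateway = gateway; }
    String execute(int userId) {
        return "Hello, " + gateway.findNameById(userId);
    }
}

// An outer-layer adapter; a real one could be backed by HSQLDB, REST, etc.
class InMemoryUserGateway implements UserGateway {
    public String findNameById(int id) { return id == 1 ? "Alice" : "unknown"; }
}

public class CleanArchitectureDemo {
    public static void main(String[] args) {
        GreetUserUseCase useCase = new GreetUserUseCase(new InMemoryUserGateway());
        System.out.println(useCase.execute(1)); // prints "Hello, Alice"
    }
}
```

Swapping the delivery mechanism (REST controller, CLI) or the persistence adapter never touches the use case, which is the point of the architecture.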
1
sureshg/java-rust-ffi
🍋 FFI example for accessing Rust lang dynamic libraries from java
java jvm rust rust-ffi
# Java-Rust-FFI FFI example for accessing Rust dynamic libraries from Java
1
digitalgust/java2llvm
An Example shown convert java class bytecode to llvm ir , then compile llvm ir to standalong executable file .
null
# java2llvm An example project showing how to convert Java class bytecode to LLVM IR assembly and then compile it to a standalone executable file. This project is based on [class2ir](https://github.com/MParygin/class2ir), which targeted an old LLVM version. I've changed the instruction syntax, reimplemented the translator in stack mode to fix a branching problem, and repaired some bugs. There is a similar project, [tinyj2c](https://github.com/digitalgust/tinyj2c), which converts Java bytecode to C source and implements more of Java's functionality. ### Currently: Generates CentOS_x64 and macOS executable files that say "Hello world". There are two implementations in the project, in the branches "emu stack ver" and "register ver"; "register_ver" is the fastest but may have problems with static branch analysis, while "emu_stack_ver" is slower but has no branching problem. ### Make: 1. Enter the directory java2llvm/ 2. Run ***mac_build.sh*** or ***centos_build.sh***; you will then find a.out there. ### Requirements: Java 1.8 * CentOS: CentOS 7.0 x86_64, llvm-as / llc / clang 5.0, make * macOS: macOS 10.15, Xcode with CLI tools 11.0 ### Known issues: * No GC. * Some Java instructions may not work. * The behavior of some Java instructions is simplified, e.g. invokevirtual. * Object memory allocation is simplified, e.g. fields of the parent class are NOT inherited. ### Changelog: * Added base classes java.lang.*, java.io.PrintStream * Added String to handle text output and StringBuilder to handle string concatenation * Traced the instruction flow to fix a register variable scope bug ============== ## class2ir readme This project is a compiler from class files (Java bytecode) to LL files (LLVM IR assembly). The resulting files can be compiled by llvm-as into standalone binary ELF files. Features: * No JDK, no JVM * Linux x86_64 target arch * Extremely small size (~10-20 kB for an ordinary program) * Uses glibc * Uses clang for system objects At this moment the project is in active development; many things do not work.
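For reference, a minimal input program of the kind such a converter handles might look like this; the bytecode noted in the comment is what a stack-mode translator would see for `add` and would map to LLVM IR by emulating the JVM operand stack:

```java
// The compiled bytecode for add() is just: iload_0, iload_1, iadd, ireturn.
// A stack-mode translator replays those instructions against a modeled
// operand stack to emit the corresponding LLVM IR.
public class Add {
    static int add(int a, int b) {
        return a + b;
    }
    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```

Programs like this (plus the "Hello world" path through java.io.PrintStream) are roughly the subset the project supports today.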
1
levinotik/ReusableAsyncTask
An example of using the Observer pattern to implement a reusable AsyncTask
null
null
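The README for this repository is empty, so based only on its description, here is a generic plain-Java sketch of the Observer-pattern idea it names: a task publishes its result to whatever listeners are currently attached, so a listener (such as an Activity) can detach and re-attach without the task holding a stale reference. None of these names come from the repository:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Observer interface: whoever wants the result registers one of these.
interface TaskListener<T> {
    void onResult(T result);
}

// Publishes its result to the currently attached listeners and caches it,
// so listeners attached after completion still receive the value.
class ObservableTask<T> {
    private final List<TaskListener<T>> listeners = new CopyOnWriteArrayList<>();
    private T result;
    private boolean done = false;

    void addListener(TaskListener<T> l) {
        listeners.add(l);
        if (done) l.onResult(result); // replay for late subscribers
    }
    void removeListener(TaskListener<T> l) { listeners.remove(l); }

    void complete(T value) {
        result = value;
        done = true;
        for (TaskListener<T> l : listeners) l.onResult(value);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        ObservableTask<String> task = new ObservableTask<>();
        task.addListener(r -> System.out.println("early: " + r));
        task.complete("42");
        task.addListener(r -> System.out.println("late: " + r)); // still sees "42"
    }
}
```

On Android the detach/re-attach calls would typically live in the lifecycle callbacks, which is what makes the background task reusable across configuration changes.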
1
krugloid/todo-jedux
A simple Todo app example using Jedux for building app architecture and Anvil for reactive views
null
# todo-jedux A simple Todo app example using [Jedux](https://github.com/trikita/jedux) for building app architecture and [Anvil](https://github.com/zserge/anvil) for reactive views ![screencap](https://github.com/krugloid/todo-jedux/blob/master/screencap.gif)
1
danramirez/wearable-listview-example
Simple wearable listview example, with custom rows and center position animations
null
# Wearable ListView Example Simple example showing how to use WearableListView on Android Wear. Implements the OnCenterProximityListener interface to change a row's style when it is focused in the center of the list ![androidwear-capture](images/listview.gif) # Icons by Norbert Kucsera You can download his icon collection from The Noun Project website [https://thenounproject.com/idiotbox/collection/sports/](https://thenounproject.com/idiotbox/collection/sports/)
0
jonashackt/spring-boot-buildpack
Example project showing how to use Buildpacks.io together with Spring Boot & it's layered jar feature
null
# spring-boot-buildpack [![Build Status](https://github.com/jonashackt/spring-boot-buildpack/workflows/build/badge.svg)](https://github.com/jonashackt/spring-boot-buildpack/actions) [![License](http://img.shields.io/:license-mit-blue.svg)](https://github.com/jonashackt/spring-boot-buildpack/blob/master/LICENSE) [![renovateenabled](https://img.shields.io/badge/renovate-enabled-yellow)](https://renovatebot.com) [![versionspringboot](https://img.shields.io/badge/dynamic/xml?color=brightgreen&url=https://raw.githubusercontent.com/jonashackt/spring-boot-buildpack/main/pom.xml&query=%2F%2A%5Blocal-name%28%29%3D%27project%27%5D%2F%2A%5Blocal-name%28%29%3D%27parent%27%5D%2F%2A%5Blocal-name%28%29%3D%27version%27%5D&label=springboot)](https://github.com/spring-projects/spring-boot) [![Pushed to Docker Hub](https://img.shields.io/badge/docker_hub-released-blue.svg?logo=docker)](https://hub.docker.com/r/jonashackt/spring-boot-buildpack) Example project showing how to use Buildpacks.io together with Spring Boot &amp; it's layered jar feature [![asciicast](https://asciinema.org/a/368329.svg)](https://asciinema.org/a/368329) I was really inspired to get to know the concept of buildpacks after attending this year's Spring One 2020 - and especially the talk by https://twitter.com/nebhale : https://www.youtube.com/watch?v=44n_MtsggnI ## Table of Contents * [Spring Boot & Cloud Native Build Packs?](#spring-boot--cloud-native-build-packs) * [Step by step...](#step-by-step) * ["dive" into the Containers](#dive-into-the-containers) * [Paketo pack CLI](#paketo-pack-cli) * [Why are the Spring Boot & Paketo images 40 years old?](#why-are-the-spring-boot--paketo-images-40-years-old) * [Layered jars](#layered-jars) * [Using Layered jars inside Dockerfiles](#using-layered-jars-inside-dockerfiles) * [Buildpacks with layered jars](#buildpacks-with-layered-jars) * [Doing a Buildpack build on TravisCI](#doing-a-buildpack-build-on-travisci) ### Spring Boot & Cloud Native Build Packs? 
__Buildpacks?__ * Invented by Heroku (2011) > Buildpacks were first conceived by Heroku in 2011. Since then, they have been adopted by Cloud Foundry (Pivotal) and other PaaS such as Google App Engine, Gitlab, Knative, Deis, Dokku, and Drie. __Cloud Native Buildpacks & Paketo__ * today: [CNCF Incubation project](https://www.cncf.io/blog/2020/11/18/toc-approves-cloud-native-buildpacks-from-sandbox-to-incubation/) > Specification for turning applications into Docker images: [buildpacks.io](https://buildpacks.io/) Paketo.io is an implementation for major languages (Java, Go, .NET, Node.js, Ruby, PHP...) --> [paketo.io](https://paketo.io/) Similar to tools like Jib https://github.com/GoogleContainerTools/jib, ko https://github.com/google/ko, and Bazel (https://bazel.build/) __Maven/Gradle Plugin to use Paketo Buildpacks__ The build-image plugin takes care of doing the Paketo build. From Spring Boot 2.3.x on, simply run it with: ```shell script mvn spring-boot:build-image ``` ### Step by step... Always start at [start.spring.io](start.spring.io) :) ![start-spring-io](screenshots/start-spring-io.png) Implement your app (e.g. building a reactive web app using Spring WebFlux). Then run the build with: ```shell script mvn spring-boot:build-image ``` This will do a "normal" Maven build of your Spring Boot app, but will also build a container image: ```shell script $ mvn spring-boot:build-image ... 
[INFO] --- spring-boot-maven-plugin:2.4.0-M4:build-image (default-cli) @ spring-boot-buildpack --- [INFO] Building image 'docker.io/library/spring-boot-buildpack:0.0.1-SNAPSHOT' [INFO] [INFO] > Pulling builder image 'docker.io/paketobuildpacks/builder:base' 100% [INFO] > Pulled builder image 'paketobuildpacks/builder@sha256:00a9c25f8f994c1a044fa772f7e9314fe5d90d329b40f51426e1dafadbfa5ac8' [INFO] > Pulling run image 'docker.io/paketobuildpacks/run:base-cnb' 100% [INFO] > Pulled run image 'paketobuildpacks/run@sha256:21c1fb65033ae5a765a1fb44bfefdea37024ceac86ac6098202b891d27b8671f' [INFO] > Executing lifecycle version v0.9.2 [INFO] > Using build cache volume 'pack-cache-604f3372716a.build' [INFO] [INFO] > Running creator [INFO] [creator] ===> DETECTING [INFO] [creator] 5 of 17 buildpacks participating [INFO] [creator] paketo-buildpacks/bellsoft-liberica 4.0.0 [INFO] [creator] paketo-buildpacks/executable-jar 3.1.1 [INFO] [creator] paketo-buildpacks/apache-tomcat 2.3.0 [INFO] [creator] paketo-buildpacks/dist-zip 2.2.0 [INFO] [creator] paketo-buildpacks/spring-boot 3.2.1 [INFO] [creator] ===> ANALYZING [INFO] [creator] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jre" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jvmkill" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:helper" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:java-security-properties" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/executable-jar:class-path" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/spring-boot:spring-cloud-bindings" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/spring-boot:web-application-type" from app image [INFO] [creator] Restoring metadata for "paketo-buildpacks/spring-boot:helper" from app image [INFO] [creator] ===> RESTORING [INFO] [creator] ===> 
BUILDING [INFO] [creator] [INFO] [creator] Paketo BellSoft Liberica Buildpack 4.0.0 [INFO] [creator] https://github.com/paketo-buildpacks/bellsoft-liberica [INFO] [creator] Build Configuration: [INFO] [creator] $BP_JVM_VERSION 11.* the Java version [INFO] [creator] Launch Configuration: [INFO] [creator] $BPL_JVM_HEAD_ROOM 0 the headroom in memory calculation [INFO] [creator] $BPL_JVM_LOADED_CLASS_COUNT 35% of classes the number of loaded classes in memory calculation [INFO] [creator] $BPL_JVM_THREAD_COUNT 250 the number of threads in memory calculation [INFO] [creator] $JAVA_TOOL_OPTIONS the JVM launch flags [INFO] [creator] BellSoft Liberica JRE 11.0.8: Reusing cached layer [INFO] [creator] Launch Helper: Reusing cached layer [INFO] [creator] JVMKill Agent 1.16.0: Reusing cached layer [INFO] [creator] Java Security Properties: Reusing cached layer [INFO] [creator] [INFO] [creator] Paketo Executable JAR Buildpack 3.1.1 [INFO] [creator] https://github.com/paketo-buildpacks/executable-jar [INFO] [creator] Process types: [INFO] [creator] executable-jar: java org.springframework.boot.loader.JarLauncher [INFO] [creator] task: java org.springframework.boot.loader.JarLauncher [INFO] [creator] web: java org.springframework.boot.loader.JarLauncher [INFO] [creator] [INFO] [creator] Paketo Spring Boot Buildpack 3.2.1 [INFO] [creator] https://github.com/paketo-buildpacks/spring-boot [INFO] [creator] Creating slices from layers index [INFO] [creator] dependencies [INFO] [creator] spring-boot-loader [INFO] [creator] snapshot-dependencies [INFO] [creator] application [INFO] [creator] Launch Helper: Reusing cached layer [INFO] [creator] Web Application Type: Reusing cached layer [INFO] [creator] Spring Cloud Bindings 1.6.0: Reusing cached layer [INFO] [creator] 4 application slices [INFO] [creator] Image labels: [INFO] [creator] org.opencontainers.image.title [INFO] [creator] org.opencontainers.image.version [INFO] [creator] 
org.springframework.boot.spring-configuration-metadata.json [INFO] [creator] org.springframework.boot.version [INFO] [creator] ===> EXPORTING [INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:helper' [INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties' [INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre' [INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill' [INFO] [creator] Reusing layer 'paketo-buildpacks/executable-jar:class-path' [INFO] [creator] Reusing layer 'paketo-buildpacks/spring-boot:helper' [INFO] [creator] Reusing layer 'paketo-buildpacks/spring-boot:spring-cloud-bindings' [INFO] [creator] Reusing layer 'paketo-buildpacks/spring-boot:web-application-type' [INFO] [creator] Reusing 5/5 app layer(s) [INFO] [creator] Reusing layer 'launcher' [INFO] [creator] Reusing layer 'config' [INFO] [creator] Adding label 'io.buildpacks.lifecycle.metadata' [INFO] [creator] Adding label 'io.buildpacks.build.metadata' [INFO] [creator] Adding label 'io.buildpacks.project.metadata' [INFO] [creator] Adding label 'org.opencontainers.image.title' [INFO] [creator] Adding label 'org.opencontainers.image.version' [INFO] [creator] Adding label 'org.springframework.boot.spring-configuration-metadata.json' [INFO] [creator] Adding label 'org.springframework.boot.version' [INFO] [creator] *** Images (408f3d59f38e): [INFO] [creator] docker.io/library/spring-boot-buildpack:0.0.1-SNAPSHOT [INFO] [INFO] Successfully built image 'docker.io/library/spring-boot-buildpack:0.0.1-SNAPSHOT' [INFO] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 20.009 s [INFO] Finished at: 2020-10-27T14:29:46+01:00 [INFO] ------------------------------------------------------------------------ ``` As you can see in the first phases `DETECTING` and `ANALYZING`, 
the build process analyses the given application and identifies multiple build packs that are needed to successfully package the application into a Docker image:

```
[INFO]     [creator]     ===> ANALYZING
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jre" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/bellsoft-liberica:jvmkill" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/bellsoft-liberica:helper" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/bellsoft-liberica:java-security-properties" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/executable-jar:class-path" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/spring-boot:spring-cloud-bindings" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/spring-boot:web-application-type" from app image
[INFO]     [creator]     Restoring metadata for "paketo-buildpacks/spring-boot:helper" from app image
```

For example there's `paketo-buildpacks/bellsoft-liberica:jre` to bring in a JRE, since we have a Java app here. And there's also `paketo-buildpacks/executable-jar` since the resulting application is an executable jar. Also there are a few `paketo-buildpacks/spring-boot-x` build packs because we have a Spring Boot application.
Now simply run your Dockerized app via

```
docker run -p 8080:8080 spring-boot-buildpack
```

### "dive" into the Containers

Let's use the great Container introspection tool [dive](https://github.com/wagoodman/dive) to gain an insight into the built Docker image.

Install it with `brew install dive` on a Mac (or see https://github.com/wagoodman/dive#installation)

Using dive we see a whole lot of Docker image layers containing all the different paketo layers:

![dive-container-layers-without-layered-jars-feature](screenshots/dive-container-layers-without-layered-jars-feature.png)

If you want dive to always hide file attributes and unmodified files of each layer by default, for an easier overview of what's going on inside the layers, you can have a look at https://github.com/wagoodman/dive#ui-configuration or simply create a `.dive.yaml` inside your home directory. Here's my `.dive.yaml` for convenience:

```yaml
diff:
  # You can change the default files shown in the filetree (right pane). All diff types are shown by default.
  hide:
    - unmodified
filetree:
  # Show the file attributes next to the filetree
  show-attributes: false
```

Btw. it's also the dive configuration https://twitter.com/nebhale uses in his SpringOne 2020 talk - and it took me a while to get that one right :)

### Paketo pack CLI

Use Paketo without the Maven/Gradle build plugin directly through the CLI. You need to [install pack CLI](https://buildpacks.io/docs/tools/pack/#pack-cli) first.
On a Mac simply use brew:

```
brew install buildpacks/tap/pack
```

Then choose a Paketo builder:

```
$ pack suggest-builders
Suggested builders:
	Google:               gcr.io/buildpacks/builder:v1        Ubuntu 18 base image with buildpacks for .NET, Go, Java, Node.js, and Python
	Heroku:               heroku/buildpacks:18                heroku-18 base image with buildpacks for Ruby, Java, Node.js, Python, Golang, & PHP
	Paketo Buildpacks:    paketobuildpacks/builder:base       Ubuntu bionic base image with buildpacks for Java, NodeJS and Golang
	Paketo Buildpacks:    paketobuildpacks/builder:full       Ubuntu bionic base image with buildpacks for Java, .NET, NodeJS, Golang, PHP, HTTPD and NGINX
	Paketo Buildpacks:    paketobuildpacks/builder:tiny       Tiny base image (bionic build image, distroless run image) with buildpacks for Golang

Tip: Learn more about a specific builder with:
	pack inspect-builder <builder-image>
```

Now directly use Paketo with the pack CLI:

```
pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base
```

This will do exactly the same build which was run via the Spring Boot Maven build-image plugin behind the scenes (but maybe in more beautiful color):

[![asciicast](https://asciinema.org/a/368331.svg)](https://asciinema.org/a/368331)

Now simply use Docker to run the resulting image:

```
docker run -p 8080:8080 spring-boot-buildpack
```

and access your app on http://localhost:8080/hello

### Why are the Spring Boot & Paketo images 40 years old?
As you may have noticed, the resulting images have a really old timestamp:

```shell script
gcr.io/paketo-buildpacks/builder    base-platform-api-0.3   914aba170326   40 years ago   654MB
paketobuildpacks/builder            <none>                  914aba170326   40 years ago   654MB
spring-boot-buildpack-gcr-builder   latest                  6c7a74899b13   40 years ago   462MB
pack.local/builder/axczkudrjk       latest                  69aeed7ad644   40 years ago   654MB
spring-boot-buildpack               latest                  b529a37599a6   40 years ago   259MB
jonashackt/spring-boot-buildpack    latest                  a9ccbb57fffd   40 years ago   259MB
paketobuildpacks/builder            base                    1435430a71b7   40 years ago   558MB
```

Why is that? Because of reproducible builds (see https://reproducible-builds.org/ for more info). There's a great post about the why available here: https://medium.com/buildpacks/time-travel-with-pack-e0efd8bf05db (Thanks to coldfinger for [clarifying this one on StackOverflow](https://stackoverflow.com/a/62866908/4964553)!)

Long story short: Without the fixed timestamp the hashes of the Docker images would differ every time you issued a build (even if only by seconds) - and then it wouldn't be clear if anything changed.

### Layered jars

From Spring Boot 2.3 on there's also [a built-in feature called layered jars](https://spring.io/blog/2020/08/14/creating-efficient-docker-images-with-spring-boot-2-3).

Before looking into the layered jars feature, we should bring a standard Spring Boot jar layout to our minds. Simply unzip `spring-boot-buildpack-0.0.1-SNAPSHOT.jar` to see what's inside:

![jar-layout](screenshots/jar-layout.png)

You can see `BOOT-INF`, `META-INF` and `org` directories - where `BOOT-INF/classes` contains our application classes and `BOOT-INF/lib` contains all application dependencies. The directory `org/springframework/boot/loader` contains all Spring Boot magic classes that are needed to create the executable Boot app. So nothing new here for the moment.
[While using Spring Boot 2.3.x we need to activate this feature](https://docs.spring.io/spring-boot/docs/2.3.1.RELEASE/maven-plugin/reference/html/#repackage-layers) by simply configuring our `spring-boot-maven-plugin`:

```
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <layers>
                    <enabled>true</enabled>
                </layers>
            </configuration>
        </plugin>
    </plugins>
</build>
```

[From Spring Boot 2.4.x Milestones (and GA) on, you don't even need to configure it, since it's the default behavior then](https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/maven-plugin/reference/html/#repackage-layers):

> The repackaged jar includes the layers.idx file by default.

Now run a fresh

```
mvn clean package
```

After that, our jar file's `BOOT-INF` directory contains a new `layers.idx` file:

```
- "dependencies":
  - "BOOT-INF/lib/"
- "spring-boot-loader":
  - "org/"
- "snapshot-dependencies":
- "application":
  - "BOOT-INF/classes/"
  - "BOOT-INF/classpath.idx"
  - "BOOT-INF/layers.idx"
  - "META-INF/"
```

As you can see, the main thing about this is to assign our directories to layers and define an order for them! Our dependencies make up the first layer, since they are likely to not change that often. The second layer contains all Spring Boot loader classes and also shouldn't change all too much. Our SNAPSHOT dependencies then make for a more variable part and create the 3rd layer. Finally our application's class files and so on are likely to change a lot! So they reside in the last layer.

In order to view the layers, there's a new command line option (or system property) `-Djarmode=layertools` for us.
Simply `cd` into the `target` directory and run:

```
$ java -Djarmode=layertools -jar spring-boot-buildpack-0.0.1-SNAPSHOT.jar list
dependencies
spring-boot-loader
snapshot-dependencies
application
```

To extract each layer, we can use the same command line option with the `extract` command:

```
$ java -Djarmode=layertools -jar spring-boot-buildpack-0.0.1-SNAPSHOT.jar extract
```

Now inside the `target` directory you should find 4 more folders, which represent the separate layers:

![extracted-jar-layers](screenshots/extracted-jar-layers.png)

### Using Layered jars inside Dockerfiles

All those directories could be used to create a separate layer inside a Docker image e.g. by using the `COPY` command. Phil Webb [outlined this in his spring.io post](https://spring.io/blog/2020/01/27/creating-docker-images-with-spring-boot-2-3-0-m1) already, where he crafts a `Dockerfile` that runs the `java -Djarmode=layertools -jar` command in the first build container and then uses the extracted directories to create separate Docker layers from them:

```dockerfile
FROM adoptopenjdk:11-jre-hotspot as builder
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract

FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```

You can run the Docker build if you want using the [DockerfileThatsNotNeededUsingBuildpacks](DockerfileThatsNotNeededUsingBuildpacks) via:

```
docker build . --tag spring-boot-layered --file DockerfileThatsNotNeededUsingBuildpack
```

And inside the output you'll see the separate layers being created:

```
...
Step 8/12 : COPY --from=builder application/dependencies/ ./
 ---> 88bb8adaaca6
Step 9/12 : COPY --from=builder application/spring-boot-loader/ ./
 ---> 3922891db128
Step 10/12 : COPY --from=builder application/snapshot-dependencies/ ./
 ---> f139bcf5babb
Step 11/12 : COPY --from=builder application/application/ ./
 ---> 5d02393d4fe2
...
```

We can even further examine the created Docker image with `dive`:

```
dive spring-boot-layered
```

It was really cool for me to see that one in action!

![dive-docker-image-with-layeres](screenshots/dive-docker-image-with-layeres.png)

### Buildpacks with layered jars

Now running our build pack powered Maven build again should show a new part `Creating slices from layers index` inside the `Paketo Spring Boot Buildpack` output:

```
$ mvn spring-boot:build-image
...
[INFO]     [creator]     Paketo Spring Boot Buildpack 3.2.1
[INFO]     [creator]     https://github.com/paketo-buildpacks/spring-boot
[INFO]     [creator]     Creating slices from layers index
[INFO]     [creator]     dependencies
[INFO]     [creator]     spring-boot-loader
[INFO]     [creator]     snapshot-dependencies
[INFO]     [creator]     application
[INFO]     [creator]     Launch Helper: Reusing cached layer
...
```

> Oh, I found a bug https://github.com/paketo-buildpacks/spring-boot/issues/1 , which comes from a change in the buildpacks/lifecycle umbrella project: https://github.com/buildpacks/lifecycle/issues/455

Bug was fixed already :) So we could move on! After doing our build pack powered build, at the end of the log you should find the latest image id like `*** Images (4c26dc7b3fa3)`:

```
...
*** Images (4c26dc7b3fa3):
      spring-boot-buildpack

Reusing cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
Adding cache layer 'paketo-buildpacks/maven:application'
Adding cache layer 'paketo-buildpacks/maven:cache'
Reusing cache layer 'paketo-buildpacks/maven:maven'
Successfully built image spring-boot-buildpack
```

Now let's dive into the built image and watch out for our layers inside it:

![dive-container-layers-paketo-using-layered-jars-feature](screenshots/dive-container-layers-paketo-using-layered-jars-feature.png)

### Doing a Buildpack build on TravisCI

Let's have a look into this project's [.travis.yml](.travis.yml):

```yaml
language: java
jdk:
  - openjdk11

cache:
  directories:
    - $HOME/.m2

services:
  - docker

jobs:
  include:
    - script:
        - mvn clean spring-boot:build-image
      name: "Build Spring Boot app with build-image Maven plugin"
    - script:
        # Install pack CLI via homebrew. See https://buildpacks.io/docs/tools/pack/#pack-cli
        - (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.14.2/pack-v0.14.2-linux.tgz" | sudo tar -C /usr/local/bin/ --no-same-owner -xzv pack)
        # Build app with pack CLI
        - pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base
        # Push to Docker Hub also
        - echo "$DOCKER_HUB_TOKEN" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
        - docker tag spring-boot-buildpack jonashackt/spring-boot-buildpack:latest
        - docker push jonashackt/spring-boot-buildpack:latest
      name: "Build Spring Boot app with Paketo.io pack CLI"
```

I wanted to have both possible build options covered - the first uses the Maven plugin with `mvn clean spring-boot:build-image`. The second installs `pack CLI` and builds the application using it.
Also the resulting image is pushed to DockerHub at https://hub.docker.com/r/jonashackt/spring-boot-buildpack

### Building GraalVM Native Images from Spring Boot Apps using Buildpacks

There's a new Maven goal in town to use Buildpacks to create Native Images (see https://github.com/jonashackt/spring-boot-graalvm)

```shell script
mvn springboot:native
```

### Links

Spring One 2020 talk by https://twitter.com/nebhale : https://www.youtube.com/watch?v=44n_MtsggnI

https://spring.io/blog/2020/01/27/creating-docker-images-with-spring-boot-2-3-0-m1

https://spring.io/blog/2020/08/14/creating-efficient-docker-images-with-spring-boot-2-3

https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/maven-plugin/reference/html/#repackage-layers

https://www.baeldung.com/spring-boot-docker-images

https://github.com/paketo-buildpacks/spring-boot

pack CLI needs a local Docker container runtime - the build is not done via `docker build`, but by the lifecycle, see https://github.com/buildpacks/pack/issues/564#issuecomment-610172880

https://www.redhat.com/en/blog/why-red-hat-investing-cri-o-and-podman

## Advanced Topics

#### Passing Runtime Environment Variables

JAVA_TOOL_OPTS https://stackoverflow.com/questions/64964709/how-to-pass-flags-to-java-process-in-docker-contatiner-built-by-cloud-native-bui/65142031#65142031

#### Bindings

Configure JDK uri of the bellsoft-liberica buildpack: https://stackoverflow.com/questions/65212231/cloud-native-buildpacks-paketo-with-java-spring-boot-how-to-configure-different

Configure uri of spring-cloud-bindings jar: https://stackoverflow.com/questions/65118519/spring-boot-gradle-bootbuildimage-task-with-private-repo

Bindings with spring-boot-maven-plugin https://stackoverflow.com/questions/65078636/how-to-configure-buildpack-bindings-with-the-spring-boot-maven-plugin/65195715#65195715

#### K8s

Skaffold: https://skaffold.dev/docs/pipeline-stages/builders/buildpacks/
https://stackoverflow.com/questions/64843991/how-do-i-use-spring-boot-maven-plugin-build-image-with-skaffold-and-dekorate

#### Buildpacks with Spring Boot < 2.3

https://stackoverflow.com/questions/64061096/using-cloud-native-buildpacks-with-spring-boot-2-3/65142343#65142343

#### Azure

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-tasks-pack-build

#### Google Cloud

https://cloud.google.com/blog/products/containers-kubernetes/google-cloud-now-supports-buildpacks
1
sameershukla/JavaFPLearning
The Repository is a compendium of Java-based Functional Programming examples aimed at enhancing your comprehension of the concepts and facilitating your eventual implementation of them. The examples in the repo doesn't contains the examples of how to use obvious map, filter, reduce, instead it focuses on writing efficient functional code in Java
null
# Functional Programming In Java

<img src="https://axisapplications.com/wp-content/uploads/2019/02/functionalprogramming_icon-300x300.png" width="300">

# Overview

The Repository is a compendium of Java-based Functional Programming examples aimed at enhancing your comprehension of the concepts and facilitating your eventual implementation of them. The examples in this repo don't cover the obvious map, filter, and reduce usage; instead, the focus is on writing efficient functional code.

Throughout this course, you will gain a solid understanding of Functional programming concepts, starting from the fundamentals and progressing to more advanced topics. You will learn how to write Higher Order Functions in Java and how to leverage Function Chaining to produce elegant and efficient code. Additionally, you will explore Function Currying, Partial Functions, and Monads. One noteworthy aspect of this course is that it includes a variety of practical examples, which will be incredibly beneficial for your learning experience.

# What you'll learn and how the code is structured.

The repository contains examples that demonstrate the principles of writing elegant functional code. The repo so far has 2 packages, "basic" and "problems". One should start exploring from the "basic" package, which has several sub-packages covering each FP concept. One should start learning in this order:

### basics

- Composition
  - basic
  - advance
- hof
- currying
- types
- utils

### problems

This package contains Pipeline examples, which should be looked into once the basics are in place. This is WIP (Work In Progress, more and more examples will be added going forward).

### The "composition.basic" and "composition.advance" packages in Java for Function Chaining

The "composition.basic" package provides a comprehensive guide for chaining regular functions, covering examples for String manipulation, file reading, and handling functions with multiple parameters.
Here is the suggested order for learning these examples:

**StringFunctionPipeline**: Creating a pipeline of functions to manipulate strings.

**BiFunctionPipeline**: Chaining BiFunctions and returning Tuples, utilizing the Tuple class.

**TriFunctionPipeline**: Handling functions with three parameters using TriFunction.

**UserManagementService**: Demonstrating function chaining in a Spring Boot application.

### The "composition.advance" package is dedicated to showcasing how to chain objects of the java.util.function.Function interface.

Here is the suggested order for learning these examples:

**FunctionExample**: Demonstrating Functions as first-class citizens in Java.

**StringFunctionPipeline**: Creating a pipeline of Functions to manipulate strings.

**FunctionCompositionExample**: Understanding the difference between 'compose' and 'andThen' functions.

### The "basics.hof" package contains Higher Order Function examples, which are functions that either take other functions as arguments or return functions as results.

Here is the suggested order for learning these examples:

**StringComparatorHof**: Example demonstrating passing a Comparator Function to a Function that compares Strings.

**ConsoleFormatterHof**: Example demonstrating a function that takes 2 functions as params.

### The "currying" package is dedicated to showcasing Function Currying and Partial Applied Functions (PAF)

The "functional.currying" package provides examples of Currying, which is a technique for transforming a function that takes multiple arguments into a sequence of functions that each take a single argument. Here is the suggested order:

**CurriedCreateUser**: Showcases a simple example of Function Currying

**CurriedEmailComposer**: Showcases composing an EmailId using Currying

**TriFunctionCurrying**: Slightly advanced example of Currying.
**PartialApplicationEndpoint**: Demonstrates an example of a Partially Applied Function (PAF)

**PartialFunctionApplicationExample**: MOST IMPORTANT EXAMPLE OF CURRYING AND PAF.

### The "types" package contains types. There are 2 Types covered, Tuple and Unit.

**Tuple**: In functional programming, a tuple is an ordered collection of elements of different types.

**Unit**: Unit is a class that represents the absence of a value. It is used to indicate that a function returns no useful value, similar to the void type.

### The utils package is a work in progress: a collection of user-defined utility methods. Very useful, by the way.

# What are Functions

In computer programming, a function is a self-contained block of code that performs a specific task. Functions take input, called arguments or parameters, and can return output values, allowing them to be used as building blocks for larger programs.

![img.png](images/function1.png)

# Understanding Functions in Functional Programming

Functions are a key concept in functional programming, and are used to express computations and transformations on data. In functional programming, functions are treated as first-class citizens, meaning that they can be passed around as values, stored in variables or data structures, and returned as results from other functions.

Functions in functional programming are typically pure functions, which means that they don't have any side effects, and their output is solely determined by their input. This makes them very predictable and easy to reason about, since their behavior doesn't depend on any external state or context.

One way to perceive java.util.function.Function is as follows:

![img.png](images/function.png)

BiFunction can be perceived as:

![img.png](images/bifunc.png)

# Lambda Expression

In Java, a lambda expression is a type of anonymous function that can be used to represent a block of code that can be passed as an argument to a method or stored in a variable.
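To make the "anonymous function" idea concrete, here is a minimal sketch comparing a pre-Java-8 anonymous inner class with the equivalent lambda expression (the class and field names are illustrative, not code from this repo):

```java
import java.util.function.Function;

public class LambdaVsAnonymous {

    // Pre-Java-8 style: an anonymous inner class implementing the
    // Function interface explicitly
    static final Function<Integer, Integer> DOUBLE_OLD = new Function<Integer, Integer>() {
        @Override
        public Integer apply(Integer x) {
            return x * 2;
        }
    };

    // Java 8 lambda expression: assigned to the same functional
    // interface, same behavior, far less ceremony
    static final Function<Integer, Integer> DOUBLE_NEW = x -> x * 2;

    public static void main(String[] args) {
        System.out.println(DOUBLE_OLD.apply(21)); // prints 42
        System.out.println(DOUBLE_NEW.apply(21)); // prints 42
    }
}
```

Both variables hold an object implementing the same single abstract method, which is exactly why a lambda is only valid where a functional interface is expected.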
When a Java compiler encounters a lambda expression in the source code, it performs several steps to detect and process it:

##### **Parsing**: The Java compiler parses the lambda expression to determine its syntax and identify the variables that are used in the expression.

##### **Type Inference**: The compiler infers the types of the lambda parameters based on the context in which the lambda expression is used.

##### **Creation of a Functional Interface**: A lambda expression is only valid if it can be assigned to a functional interface. A functional interface is an interface with a single abstract method. If the lambda expression matches the signature of the functional interface, the compiler creates an instance of that interface and assigns the lambda expression to it.

##### **Compilation**: Finally, the compiler compiles the lambda expression and generates bytecode that can be executed by the Java Virtual Machine (JVM).

During compilation, the lambda expression is translated into a class file that implements the functional interface. The class file contains a method that implements the lambda expression, as well as any captured variables and their values. When the lambda expression is executed, the JVM creates an instance of this class and invokes the method on that instance.

# How Lambda Expressions are handled by the JVM

When a lambda expression is encountered in Java code, the JVM uses the invokedynamic instruction to create an instance of a functional interface that represents the lambda. Here's how the process works in more detail:

1. The Java compiler generates an instance of a functional interface that corresponds to the lambda expression. For example, if the lambda expression is of the form x -> x * 2, the compiler generates an instance of the java.util.function.IntUnaryOperator interface.

2. The invokedynamic instruction is used to create a CallSite object, which is responsible for the dynamic invocation of the lambda expression.
The CallSite object is associated with the lambda expression and the functional interface instance generated in step 1.

3. When the lambda expression is invoked, the JVM uses the CallSite object to dynamically bind the lambda expression to the appropriate method in the functional interface.

4. The JVM then invokes the method on the functional interface instance using the appropriate arguments, and returns the result to the calling code.

Overall, the use of invokedynamic and functional interfaces enables efficient implementation of lambda expressions in Java, allowing for concise and expressive code. The dynamic binding of the lambda expression to the appropriate method in the functional interface allows for greater flexibility in code composition and enables more powerful abstractions in Java programming.

# Function Chaining

In order to create a chain of functions, it is essential to first instantiate a Function or BiFunction object. This marks the beginning of the pipeline. For example, you could create a Function<String, String> object named "pipeline" using the "createReader" method of the "FileOps" class. Once you have created the "pipeline" object, you can use the "andThen" method to chain subsequent functions together. Each function in the chain will take the output of the preceding function as input. Eventually, the final function in the chain should return a String value.

```
Function<String, String> pipeline = FileOps::createReader;
// the output of createReader becomes the input of the next chained function,
// and so on - eventually the chain should return a String
pipeline = pipeline.andThen(nextFunction);
```

more details can be found here: https://www.c-sharpcorner.com/article/creating-function-pipelines-in-java/

# Function Chaining Use Cases

**Data processing**: Function chaining can be used to perform a series of data transformations on a collection or stream of data, such as filtering, mapping, sorting, and reducing.
By chaining these operations together, you can create a data processing pipeline that is both efficient and easy to read and maintain. **Input validation**: Function chaining can be used to validate user input by applying a series of validation rules to the input data. Each function in the chain can check a specific aspect of the input, such as its length, format, or range, and return an error message if the input fails the validation. **Logging and debugging**: Function chaining can be used to create a chain of logging or debugging statements that track the execution of a program or a specific code path. Each function in the chain can output a specific piece of information, such as the input or output data, the execution time, or the error message, and pass the output to the next function in the chain. **Configuration and setup**: Function chaining can be used to configure and set up a complex system or application. Each function in the chain can perform a specific configuration task, such as initializing a database connection, setting up a network connection, or loading a configuration file, and pass the configuration data to the next function in the chain. In general, function chaining can be used in any situation where you need to compose a series of related functions to perform a specific task or data transformation. By chaining the functions together, you can create a powerful and flexible pipeline that is both easy to use and easy to maintain. # Function Chaining Methods and Interfaces **apply Function**: The 'apply()' method takes an input and returns a result. It is used to apply a function to an argument and compute a result ``` Function<Integer, Integer> doubleFunction = x -> x * 2; Integer result = doubleFunction.apply(5); // result is 10 ``` In this example, we have created a function that doubles its input and applied it to the integer 5. The apply() method takes the integer 5 as an argument and returns the result 10. 
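Because `apply()` is just an ordinary method call on an object, Function values can also be stored in data structures and looked up like any other value. A small sketch of that idea (the map of named operations is illustrative, not code from this repo):

```java
import java.util.Map;
import java.util.function.Function;

public class ApplyDemo {

    // Functions are first-class values: here they live inside a Map
    static final Map<String, Function<Integer, Integer>> OPS = Map.of(
            "double", x -> x * 2,
            "square", x -> x * x,
            "negate", x -> -x
    );

    // Look an operation up by name and invoke it via apply()
    static int run(String name, int input) {
        return OPS.get(name).apply(input);
    }

    public static void main(String[] args) {
        System.out.println(run("double", 5)); // prints 10
        System.out.println(run("square", 5)); // prints 25
    }
}
```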
Remember: apply() returns a result.

**andThen Function:** The Function interface's "andThen" method takes two functions and applies them in succession, using the output of the first function as the input to the second. This chaining results in a new function that combines the behavior of both functions in a single transformation. Here's an example:

![img.png](images/andThen.png)

```
Function<Integer, Integer> addOne = x -> x + 1;
Function<Integer, Integer> doubleIt = x -> x * 2;
Function<Integer, Integer> addOneAndDoubleIt = addOne.andThen(doubleIt);
System.out.println(addOneAndDoubleIt.apply(5)); // Output: 12
```

**compose Function:** In contrast to the "andThen" method, the "compose" method applies the first function to the output of the second function. That is, the second function is applied to the input, and then the first function is applied to the output of the second. This results in a chain of functions where the output of the second function becomes the input of the first.
Here's an example:

![img.png](images/compose.png)

```
Function<Integer, Integer> addOne = x -> x + 1;
Function<Integer, Integer> doubleIt = x -> x * 2;
Function<Integer, Integer> addOneAfterDoubleIt = addOne.compose(doubleIt);
System.out.println(addOneAfterDoubleIt.apply(5)); // Output: 11
```

**BiFunction Interface:** A BiFunction<A, B, R> can be represented in curried form as Function<A, Function<B, R>>:

```
/**
 * The input is a Function<String, Function<String, String>>.
 * The input to the outer function is a String and its output is a second function.
 * The input to the second function is a String and its output is a String,
 * hence apply(..).apply(..) returns a String.
 */
// (s1) -> (s2) -> s1 + s2
private static String function(Function<String, Function<String, String>> f) {
    return f.apply("Hello").apply("World");
}

// (s1, s2) -> s1 + s2
private static String biFunc(BiFunction<String, String, String> b) {
    return b.apply("Hello", "World");
}
```

**TriFunction:** If we require more than two parameters to be passed to a function, such as three parameters, we encounter a problem: there is no TriFunction interface available in Java. There are two potential solutions. The first is to create our own TriFunction interface, which would resemble something like this (note that andThen must apply `after` to the result of this function):

```
@FunctionalInterface
public interface TriFunction<A, B, C, R> {
    R apply(A a, B b, C c);

    default <V> TriFunction<A, B, C, V> andThen(Function<? super R, ? extends V> after) {
        return (A a, B b, C c) -> after.apply(apply(a, b, c));
    }
}
```

If we need to pass more than three parameters to a function, defining ever-larger interfaces becomes unwieldy. In such cases, currying can be the most effective approach: by breaking the function down into a series of nested functions, each taking one argument, we get a more flexible and reusable solution.
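As a sketch of that currying approach (the names `sum3` and `applySum` are illustrative, not from the article), a three-argument function can be expressed as nested single-argument functions, with no TriFunction interface needed:

```java
import java.util.function.Function;

class CurriedTriFunction {
    // Equivalent of a TriFunction<Integer, Integer, Integer, Integer>,
    // expressed as three nested single-argument functions.
    static final Function<Integer, Function<Integer, Function<Integer, Integer>>> sum3 =
            a -> b -> c -> a + b + c;

    // Convenience wrapper that consumes all three arguments at once.
    static int applySum(int a, int b, int c) {
        return sum3.apply(a).apply(b).apply(c);
    }

    public static void main(String[] args) {
        System.out.println(applySum(1, 2, 3)); // prints 6
    }
}
```

Partial application falls out for free here: `sum3.apply(1)` fixes the first argument and yields a curried two-argument function.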
# Higher Order Function

A higher-order function is a function that can take one or more functions as arguments and/or return a function as its result. This allows for more flexible and reusable code, as functions can be passed around like any other value. We are familiar with the `map()` and `filter()` methods, which take a Function and a Predicate as arguments, respectively.

![img.png](images/hof.png)

Higher-order functions are an important and powerful concept in functional programming, providing several benefits, including:

**Code reuse**: Higher-order functions allow you to abstract away common patterns of code, such as iterating over a collection, filtering elements, or mapping values. Once you have written a higher-order function for a particular pattern, you can reuse it with different functions to achieve different behaviors.

**Code clarity**: Higher-order functions can make code more concise and easier to read by removing unnecessary boilerplate code. By passing a function as an argument to another function, you can define behavior in a clear and declarative way.

**Flexibility**: Higher-order functions allow you to write more flexible code that can adapt to different situations. By allowing functions to be passed as arguments or returned as results, higher-order functions can be used to create generic and reusable code that can be adapted to different contexts.

**Separation of concerns**: Higher-order functions can help to separate concerns by allowing you to define specific behavior in separate functions, which can then be combined and reused as necessary. This can make code more modular and easier to test.

**Function composition and chaining**: Higher-order functions can be used to compose more complex functions by combining simpler functions in a logical and reusable way. This can make code more expressive and easier to reason about.
#### Example of HOF that takes a Function as an argument

```
public static void applyMultiplyFunction(Integer[] numbers, Function<Integer, Integer> f) {
    for (int i = 0; i < numbers.length; i++) {
        numbers[i] = f.apply(numbers[i]);
    }
}

public static void main(String[] args) {
    Integer[] numbers = {1, 2, 3, 4, 5};
    Function<Integer, Integer> multiply = x -> x * 2;
    applyMultiplyFunction(numbers, multiply);
    System.out.println(Arrays.toString(numbers));
}
```

In the above example, we have created a higher-order function "applyMultiplyFunction" that takes an array of integers and a function as arguments, and applies the function to each element of the array. This demonstrates how higher-order functions can make our code more modular and reusable, by abstracting away the details of how a function is applied to an array or collection.

#### Example of HOF that returns a Function as its result

```
private static Function<String, String> prefix = str -> str;

private static Function<String, String> suffix(String str) {
    return suffix -> str + " " + suffix;
}

public static void main(String[] args) {
    Function<String, Function<String, String>> namePipeline = name -> prefix.andThen(suffix(name));
    System.out.println(namePipeline.apply("John").apply("Doe"));
}
```

The example above defines a function "prefix" and a method "suffix" that returns a function; each resulting function takes a string input and produces a string output. The two are then combined into a pipeline, allowing their outputs to be concatenated.

# Currying and Partial Functions

https://www.c-sharpcorner.com/article/exploring-the-benefits-of-function-currying-in-java-understanding-the-concept/

Function currying is a technique that breaks down a function taking multiple arguments into a series of functions that each take a single argument.
In other words, it transforms a function that takes multiple arguments into a chain of functions that each take a single argument and return a new function, until all the original arguments are consumed. Java's Function interface supports the closely related idea of composition through its "andThen" and "compose" methods: they enable the creation of a sequence of functions where the output of one function is used as the input of another. By chaining functions together in this way, it is possible to create a pipeline of transformations that can be applied to data in a flexible and modular way.

Currying has several benefits, including making it easier to reuse and compose functions, and enabling functions to be partially applied with some of their arguments fixed at runtime. This can lead to more modular, maintainable code and can simplify the development process. However, it's important to use currying judiciously and to avoid creating overly complex function chains that are difficult to reason about.

Although Java doesn't have built-in syntax for function currying, it is still possible to implement curried functions using functional interfaces and lambda expressions. However, the resulting type signatures can be verbose and difficult to read. For example, a function that takes two integer parameters and returns an integer would, in curried form, be declared as Function<Integer, Function<Integer, Integer>>, which can be challenging to interpret at first glance.
The code below shows what currying looks like:

```
// Curried function returning another function: currying with 2 params
Function<Integer, Function<Integer, Integer>> add = (param1) -> (param2) -> param1 + param2;

// Curried function returning another function: currying with 3 params
Function<String, Function<String, Function<String, String>>> curry = (f) -> (s) -> (t) -> f + " " + s + " " + t;
System.out.println(curry.apply("Java").apply("Programming").apply("Language"));
// prints: Java Programming Language
```

Several Java 8 methods, such as map in Optional or Stream, expect a Function as their argument. If we have a BiFunction, we can curry it and pass it directly to these methods without the need for a wrapper function or lambda expression.

Partial functions, on the other hand, are functions that are defined only for certain input values, and undefined for all other values. (A related notion is *partial application*: a function with a subset of another function's arguments already fixed.) Partial functions have several advantages:

**Increased expressiveness:** Partial functions allow for more expressive code by letting you express functions that are not defined for all possible inputs. This can make your code more concise and easier to read.

**Improved error handling:** When a function is defined for all possible inputs, it can be difficult to detect errors. With a partial function, you can explicitly specify when an input is not valid, which can make error handling more robust.

**Better separation of concerns:** Partial functions can help you separate your code into smaller, more manageable pieces by allowing you to define functions that only operate on certain subsets of the input domain. This can make your code more modular and easier to maintain.

**More efficient algorithms:** In some cases, a partial function can be computed more efficiently than a total function.
For example, if you only need to compute a function for a subset of the input domain, you can avoid computing unnecessary values.

**Improved type safety:** By using partial functions, you can make your code more type safe by explicitly defining the input and output types for each subset of the input domain. This can help prevent type errors and make your code more reliable.

# Currying Use Cases

**Configuring database access:** When connecting to a database, it's often necessary to provide credentials and other configuration parameters. Using a curried function to handle the configuration allows for greater flexibility and reuse of code. For example, we can create a function that takes the database URL as input and returns another function that takes the username and password as input and returns a database connection.

**Filtering and sorting data:** When working with large datasets, it can be useful to create functions that filter and sort the data according to specific criteria. Using function currying to create reusable filters and sorters allows for greater flexibility and efficiency in data processing. For example, we can create a function that takes a list of strings as input and returns another function that filters the list to include only strings that contain a specific substring.

**Event-driven programming:** In event-driven programming, it's common to register listeners or callbacks that are triggered when specific events occur. Using function currying to register listeners and callbacks can make the code more flexible and easier to maintain. For example, we can create a function that takes an event type as input and returns another function that takes a listener function as input and registers the listener for that event type.

**Service composition:** In a service-oriented architecture, it's common to compose services by chaining together functions that handle different aspects of the service. Using function currying to chain together functions allows for greater flexibility and adaptability in service composition. For example, we can create a function that takes a URL as input and returns another function that fetches data from that URL, then another function that processes the data, and so on.

# Monads

A monad is a type that encapsulates a value and provides a way to chain operations on that value in a composable way. Monads allow developers to abstract away common patterns of code and provide a consistent interface for working with different types of data.

There are several popular monads in Java, including Optional, Stream, and CompletableFuture. The Optional monad is used to represent a value that may or may not be present, and provides methods for safely accessing and manipulating the value. The Stream monad is used to represent a sequence of values that can be transformed and filtered in a composable way. The CompletableFuture monad is used to represent a future result that can be processed in a non-blocking way, and provides methods for combining and transforming the results of multiple futures.

In addition to these built-in monads, it's also possible to create custom monads in Java using libraries such as the Functional Java library or the Vavr library. These libraries provide abstractions for working with monads and other functional programming concepts in a more idiomatic and expressive way.

Overall, monads provide a powerful tool for writing composable and modular code in Java, and can help developers write cleaner, more concise, and more maintainable code. While the most commonly used monads in functional programming languages such as Haskell, Scala, and F# usually have the map, filter, and flatMap operations, it's not necessarily the case that all monads must have these operations. In fact, there are many different types of monads with different sets of operations.
The essential feature of a monad is that it provides a way to chain computations in a composable way. The specific operations that a monad provides depend on the specific use case and the types of data that the monad is designed to work with. For example, the Maybe monad in Haskell provides map and flatMap operations, but does not have a filter operation. The State monad in Haskell provides get and put operations for working with stateful computations, but does not have map, filter, or flatMap operations in the traditional sense.

In general, a monad can have any number of operations that are tailored to its specific use case. The key is that the operations must satisfy the three monad laws: the identity law, the associativity law, and the compatibility law. As long as these laws are satisfied, the operations can be used to chain computations in a composable way, regardless of whether they are map, filter, flatMap, or some other set of operations.

# How to create our own Monad

Writing your own monad can be a bit involved, but it's definitely possible. The steps involved in creating a monad are:

1. Define the type that the monad will encapsulate. This can be any type of data, such as a list, a tree, a stream, or a result.
2. Define the operations that the monad will provide. The most common operations are map, flatMap, and unit (or return). map applies a function to the value inside the monad and returns a new monad with the transformed value. flatMap applies a function that returns another monad to the value inside the monad and returns a flattened monad. unit (or return) creates a new monad with a given value.
3. Implement the monad operations using the underlying data structure. The operations must satisfy the three monad laws: the identity law, the associativity law, and the compatibility law. These laws ensure that the operations can be used to chain computations in a composable way.
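Following those steps, a minimal sketch of a custom monad might look like the hypothetical `Box` type below, which simply wraps one value (an illustrative type, not taken from any library):

```java
import java.util.function.Function;

// A minimal monad: wraps a single value and chains computations on it.
class Box<T> {
    private final T value;

    private Box(T value) { this.value = value; }

    // unit (a.k.a. return): lift a plain value into the monad
    public static <T> Box<T> unit(T value) { return new Box<>(value); }

    // map: transform the wrapped value
    public <R> Box<R> map(Function<? super T, ? extends R> f) {
        return unit(f.apply(value));
    }

    // flatMap: apply a function that itself returns a Box, without double-wrapping
    public <R> Box<R> flatMap(Function<? super T, Box<R>> f) {
        return f.apply(value);
    }

    public T get() { return value; }

    public static void main(String[] args) {
        String result = Box.unit(5)
                .map(x -> x * 2)
                .flatMap(x -> Box.unit("value=" + x))
                .get();
        System.out.println(result); // prints value=10
    }
}
```

One can check, for instance, that `Box.unit(x).flatMap(f)` produces the same result as `f.apply(x)`, in line with the identity law discussed next.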
# The 3 Monad Laws in detail

**Identity law:** The unit operation must be an identity function for the flatMap operation. That is, m.flatMap(unit) is equivalent to m for any monad m. This means that when you apply the flatMap operation to a monad and pass in the unit operation, you should get the same monad back. In other words, the unit operation should not change the value of the monad.

**Associativity law:** The flatMap operation must be associative. That is, m.flatMap(f).flatMap(g) is equivalent to m.flatMap(x -> f.apply(x).flatMap(g)) for any monad m and functions f and g. This means that when you apply the flatMap operation to a monad twice, with two different functions, the order in which you apply the functions should not matter. The result should be the same, regardless of whether you apply the first function first or the second function first.

**Compatibility law:** The map operation must be compatible with the flatMap operation. That is, m.map(f) is equivalent to m.flatMap(x -> unit(f.apply(x))) for any monad m and function f. This means that when you apply the map operation to a monad, the result should be the same as if you apply the flatMap operation with a function that returns a new monad holding the result of applying the function to the value inside the original monad.

@author: Sameer Shukla
0
tideworks/arvo2parquet
Example program that writes Parquet formatted data to plain files (i.e., not Hadoop hdfs); Parquet is a columnar storage format.
null
# avro2parquet - write Parquet to plain files (i.e., not Hadoop hdfs)

Based on example code snippet `ParquetReaderWriterWithAvro.java` located on github at:

&nbsp;&nbsp;&nbsp;&nbsp;[**MaxNevermind/Hadoop-snippets**](https://github.com/MaxNevermind/Hadoop-snippets/blob/master/src/main/java/org/maxkons/hadoop_snippets/parquet/ParquetReaderWriterWithAvro.java)

Original example code author: **Max Konstantinov** [MaxNevermind](https://github.com/MaxNevermind)

Extensively refactored by: **Roger Voss** [roger-dv](https://github.com/roger-dv), Tideworks Technology, May 2018

## IMPLEMENTATION NOTES:

- The original example wrote 2 Avro dummy test data items to a Parquet file.
- The refactored implementation uses an iteration loop to write a default of 10 Avro dummy test data items, and will accept a count passed as a command line argument.
- The test data strings are now generated by the RandomString class to a size of 64 characters.
- Still uses the original avroToParquet.avsc schema by which to describe the Avro dummy test data.
- The most significant enhancement is where the code now calls these two methods:
  * `nioPathToOutputFile()`
  * `nioPathToInputFile()`
  + `nioPathToOutputFile()` accepts a Java nio `Path` to a standard file system file path and returns an `org.apache.parquet.io.OutputFile` (which is accepted by the `AvroParquetWriter` builder).
  + `nioPathToInputFile()` accepts a Java nio `Path` to a standard file system file path and returns an `org.apache.parquet.io.InputFile` (which is accepted by the `AvroParquetReader` builder).

  These methods provide implementations of the `OutputFile` and `InputFile` adaptors that make it possible to write Avro data to a Parquet formatted file residing in the conventional file system (i.e., a plain file system instead of the Hadoop hdfs file system) and then read it back. The use case is a big data solution stack that is not predicated on Hadoop and hdfs.
- It is an easy matter to adapt this approach to work with JSON input data: just synthesize an appropriate Avro schema to describe the JSON data, put the JSON data into an Avro `GenericData.Record`, and write it out.

## NOTES ON BUILDING AND RUNNING PROGRAM:

- Build: `mvn install`
- The **`HADOOP_HOME`** environment variable should be defined to prevent an exception from being thrown. The code will continue to execute properly regardless, but defining it squelches the exception. This comes from down in the bowels of the Hadoop/Parquet library implementation, not from the application code.
- The **`HOME`** environment variable may be defined. The program will look for logback.xml there and will write the Parquet file it generates there. Otherwise the program will use the current working directory.
- In `logback.xml`, the filters on the `ConsoleAppender` and `RollingFileAppender` should be adjusted to modify the verbosity level of logging. The defaults are set to `INFO` level. The intent is to allow, say, setting the file appender to `DEBUG` while the console is set to `INFO`.
- The only command line argument accepted is the number of iterations of writing Avro records; the default is 10.
- The shell script `run.sh` can be used to invoke the program from the Maven `target/` directory.
- Logging will go into a `logs/` directory as the file `avro2parquet.log`.
1
lwluc/camunda-ddd-and-clean-architecture
An example to show how you could use clean architecture and DDD elements with Camunda.
null
# Camunda DDD and Clean Architecture

An example to show how you could use clean architecture and DDD, and their advantages, with Camunda. I also wrote a blog post to show how clean architecture could help you update to Camunda Platform 8 without touching your domain-centered code: [How Clean Architecture helps you migrating Camunda Platform 7 to 8](https://www.novatec-gmbh.de/en/blog/how-clean-architecture-helps-you-migrating-camunda-platform-7-to-8/).

## 🚀Features

The [BPMN process](assets/processes/loan_agreement.png), which starts a [second process](assets/processes/cross_selling_recommendation.png) via message correlation, represents a tiny business process just to demonstrate the architecture.

### 🛫Start the process

With the following POST request, you can start the process:

```sh
curl --request POST \
  --url http://localhost:8080/loan/agreement/1 \
  --header 'Content-Type: application/json' \
  --data '{
    "customerNumber": "A-11",
    "name": "Tester",
    "mailAddress": "tester@web.io",
    "amount": 1100
}'
```

Using the admin user (`username: admin` and `password: pw`) you can log in to the Camunda Cockpit.

### ↔️Migration to Camunda Platform 8

All necessary changes to upgrade from Camunda Platform 7 to 8 are shown in [this pull request](https://github.com/lwluc/camunda-ddd-and-clean-architecture/pull/1). The only changed files are all placed in the adapter layer, so the domain does not need to be touched to change a framework.

## 🏗Architecture

The following sections explain some aspects of the advantages of Domain-driven Design (DDD) and clean architecture. Flexibility around your domain (e.g., switching from Camunda 7 to Camunda 8) is the main focus I want to show you in this little example.

### Clean Architecture

A software architecture that integrates changeability into its package and class structure makes it easy to switch or migrate a framework, so the migration from Camunda Platform 7 to 8 is much less painful with a good architecture.
![DDD-Clean-Architecture](assets/architecture/camunda-ddd-and-clean-architecture-rings.png)

*The layers of clean architecture (based on Clean Architecture by Robert C. Martin)*

Robert C. Martin describes architectural guidelines in his book "[Clean Architecture](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html)", which should allow independence of frameworks, databases, user interfaces (UI) and other technologies. In his opinion, clean architecture ensures the testability of business rules by its design.

The image above displays layers as concentric circles wrapping each other. Each layer represents a different part of the software. The center of the circle represents "policies", and thus your business rules and domain knowledge. The outer circles are "mechanisms" supporting our domain center. Besides the layers, the arrows show the dependency rule: only inward-pointing dependencies! To reach the goals of clean architecture, the domain code must not have any dependencies pointing outwards. Instead, all dependencies point towards the domain code.

The essential aspect of your business domain is placed in the core of the architecture: the entities. They are only accessed by the surrounding layer: the use cases. Services in a classic layered architecture correspond to use cases in a clean architecture, but these services should be more fine-grained so that they have only one responsibility. You do not want one big service implementing all your business use cases. Supporting components, such as persistence or user interfaces, are placed around the core (your entities and use cases).

#### Building Block View

The [building block view](https://docs.arc42.org/section-5/) image below shows a stereotypical static decomposition of a system using Clean Architecture into building blocks as well as their dependencies.
![Clean-Architecture-Building-Block-View](assets/architecture/clean_architecture_building_blocks.svg)

*Building Block View of clean architecture*

#### Runtime View

The [runtime view](https://docs.arc42.org/section-6/) image below describes concrete behavior and interactions of the stereotypical building blocks of a system using Clean Architecture.

![Clean-Architecture-Runtime-Block-View](assets/architecture/clean_architecture_runtime_view.svg)

*Runtime View of clean architecture*

#### Dependency Inversion Principle

When the dependency rule is applied, the domain has no knowledge about how you persist your data or how you display it in any client. The domain should not contain any framework code (arguably excepting Dependency Injection). As already mentioned, you can use the Dependency Inversion Principle (DIP) to apply the dependency rule of clean architecture. The DIP tells you to reverse the direction of a dependency. You may be thinking of the Inversion of Control (IoC) design pattern, which is not the same as DIP, although they fit well together. If you want to know the exact differences, I recommend reading Martin Fowler's article "[DIP in the Wild](https://martinfowler.com/articles/dipInTheWild.html#YouMeanDependencyInversionRight)" (in short: "[...] IoC is about direction, and DIP is about shape.").

The following figure shows an example of how the DIP works.

![With and without Dependency Inversion Principle](assets/architecture/dependency-inversion-principle.png)

*The Dependency Inversion Principle (DIP)*

Imagine having a service (DomainService in the image) which starts a Camunda process. To isolate your service (your business logic) from the framework, you could create another service using the Camunda Java API to start a process instance. The left frame of the image shows this scenario without applying the DIP: the domain service calls the ProcessEngineService directly. So what's the problem?
Starting a process is a core aspect of our domain, so we want to pull it into our domain. By doing so, we break the rule of keeping our domain framework-agnostic. We can fix this by placing an interface in our domain core instead of the concrete implementation, and placing the actual implementation outside of our domain layer; et voilà, we apply the DIP. Combining the DIP with the [Ports and Adapters](http://alistair.cockburn.us/Hexagonal+architecture) architecture (which clean architecture emerged from), we get the picture shown below.

![DIP, Ports and Adapter, Clean Architecture](assets/architecture/dependency-inversion-principle-rings-in-out.png)

*Clean architecture DIP and ports and adapters*

Separating our ports / use cases and adapters that drive our application (input-ports) or are driven by our application (output-ports) helps us to structure our code even more and to keep the boundaries clearer.

#### Mapping between layers

The following image shows how the layers interact with the domain object with and without mapping. Without mapping, you miss the biggest advantage of clean architecture: decoupling your domain core from the outer (infrastructure) layers. If you do not map between your inner and outer layers, you are not isolated: if a third-party system changes its data model, your domain model needs to change as well. To prevent dependence on external influencing factors and to promote independence and decoupling, it is necessary to map between the layers. The input and output ports (use case layer) act as gatekeepers into your domain core; they define how to communicate and interact with your application. They provide a clear API, and by mapping into your domain you keep it independent of any framework or technology model changes.

![With and without mapping between layers](assets/architecture/mapping-between-layers.png)

*With and without mapping between layers*

The frame explaining the mapping approach is just one possible way of mapping.
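As an illustrative sketch of such boundary mapping (the class names here are hypothetical and not taken from this repository), an adapter might translate between a persistence entity and the framework-free domain model like this:

```java
// Domain model (inner layer): framework-free and guards its invariants.
final class LoanAgreement {
    private final String customerNumber;
    private final long amount;

    LoanAgreement(String customerNumber, long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        this.customerNumber = customerNumber;
        this.amount = amount;
    }

    String customerNumber() { return customerNumber; }
    long amount() { return amount; }
}

// Persistence model (outer layer): its shape follows the database/framework.
class LoanAgreementEntity {
    String customerNumber;
    long amount;
}

// Mapper at the adapter boundary: the only place that knows both shapes.
class LoanAgreementMapper {
    static LoanAgreement toDomain(LoanAgreementEntity e) {
        return new LoanAgreement(e.customerNumber, e.amount);
    }

    static LoanAgreementEntity toEntity(LoanAgreement a) {
        LoanAgreementEntity e = new LoanAgreementEntity();
        e.customerNumber = a.customerNumber();
        e.amount = a.amount();
        return e;
    }

    public static void main(String[] args) {
        LoanAgreementEntity e = new LoanAgreementEntity();
        e.customerNumber = "A-11";
        e.amount = 1100;
        System.out.println(toDomain(e).customerNumber()); // prints A-11
    }
}
```

If the entity's shape changes, only the mapper changes; the domain model and its invariants stay untouched.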
If you want to know more about the different stages of mapping, take a look at Tom Hombergs' book "[Get Your Hands Dirty on Clean Architecture](https://leanpub.com/get-your-hands-dirty-on-clean-architecture)"; he explains them pretty well. In conclusion, mapping can be used to achieve greater decoupling. On the other hand, mapping between each layer can produce a lot of boilerplate code, which might be overkill depending on your use case and the goals you are working towards.

### Domain-driven Design

Clean architecture as an architecture style combines perfectly with Domain-driven Design (DDD), because both completely focus on your domain core (entities and use cases). DDD is focused on your business domain, and that focus is supported by the goal of clean architecture: keeping the domain free of any framework or technologies. E.g., your domain does not care how to persist something; it just tells the outgoing port to save it. The implementation of the port (placed in the adapter layer) decides whether to use, e.g., relational or non-relational databases.

Besides the matching goals of DDD and clean architecture, DDD helps you build complex designs around your domain, e.g. by building immutable objects that know all about their invariants, which helps you structure your code even more. DDD elements like Aggregate, Entity, and ValueObject can be found in our [domain-primitives](https://github.com/domain-primitives/domain-primitives-java) library. Structuring your code functionally and bringing more context to your objects with, e.g., Value Objects not only helps keep your code expressive, it also helps keep it close to your business, just like your BPMN model.

## 🙏🏼Credits

Thanks to [Matthias Eschhold](https://github.com/MatthiasEschhold) for the passionate discussion around DDD and clean architecture.
Matthias published a nice blog series: [Clean Architecture and flexibility patterns](https://github.com/MatthiasEschhold/clean-architecture-and-flexibility-patterns)

## 📨Contact

If you have any questions or ideas, feel free to contact me or create an [issue](https://github.com/lwluc/camunda-ddd-and-clean-architecture/issues).
1
rpereira-dev/CubeEngine
A Desktop Voxel rendering engine (window, 3D Voxels, GUI's, audio lib) + a game implementation example
null
# VoxelEngine

-----------------------------------------------------------------------

A game engine for voxel games, using OpenGL, GLFW, OpenAL, Netty

**ABANDONED**

-----------------------------------------------------------------------

## DEMO VIDEOS

https://www.youtube.com/playlist?list=PLTsKtD9K5K8nkeK2MzVr3JFv4ofuJTugb

## HOW TO USE ##

- >> git clone https://github.com/toss-dev/VoxelEngine.git
- >> cd VoxelEngine
- >> ./gradlew eclipse

Then in Eclipse, import the project. (You can also import the gradle project directly if you are using the Eclipse plugin.)

You now have 3 projects:

- VoxelEngine is the core engine
- POT is a game implementation
- Mod sample : a mod example, which will be imported by POT on launch

## TECHNICAL PART:

Below are explanations of how things are implemented, more or less deeply. They give a good overview of the engine, and may allow one to find a wrong implementation, or something that should have been thought out differently.

# Face system

Front: x- ; Back: x+
Left: z- ; Right: z+
Bot(tom): y- ; Top: y+

# Terrains

Terrains are chunks of the world, 16x16x16 blocks each. They contain two 16x16x16 arrays (one-dimensional, for optimisation): one for block light values (1 byte per block) and one for block ids (2 bytes per block). If the light value array is null, it means no nearby blocks are emitting light. If the block id array is empty, it means the terrain is entirely air. (These null values are a key point of the memory management.) Moreover, each terrain has a list of 'BlockInstance' objects, which contain... block instances (see below).

# Terrain meshes

They are basically one gl vao and one gl vbo. Various meshing algorithms could be implemented.
Currently, I'm using the 'greedy' meshing algorithm (see https://0fps.net/2012/06/30/meshing-in-a-minecraft-game/). When building a mesh, the Mesher iterates through all the terrain's blocks and gets each one's 'BlockRenderer'. A 'BlockRenderer' is an interface which pushes a specific block's vertices to the vertex stack when the mesher meshes it.

# Block

A block has a unique instance which is created on engine initialisation. This instance stores all the block data; only its id is stored in the terrain, as said before.

# BlockInstance

If a block needs its own instance (parameters), you can override the function 'Block.createBlockInstance()'. When it returns a non-null 'BlockInstance', the new BlockInstance will be updated on every terrain update. E.g., liquid blocks need some parameters (liquid level, color...), and so have a BlockInstance.

# Water

As explained above, every water block has its own instance. Each instance has an amount of liquid, between MIN_LIQUID_AMOUNT and MAX_LIQUID_AMOUNT. The block is rendered as a simple block, but with its vertices translated up or down depending on the instance's amount of liquid. (This is done quite quickly in the meshing algorithm.)

The flow update algorithm (for each liquid block instance): the more often it is called, the faster and smoother the water will flow, and so the more realistic the flowing effect will look, but it is a fairly costly call as it requires rebuilding a terrain mesh.
(So for now it is called roughly every 4 frames.)

Algorithm:

    var amount
    if amount < MIN_LIQUID_AMOUNT:
        disperseWater() (remove block)
    else if the block under this instance is air:
        make the instance 'fall', reducing this block's height by 1
    else if the block under is liquid:
        transfer as much liquid as possible from the current instance to the instance under
        if the current instance still has some water:
            transfer as much liquid as possible to the neighbor liquid blocks (x+1, z), (x, z+1), (x-1, z), (x, z-1)

# Light

The light system is based on the SOA implementation (https://www.seedofandromeda.com/blogs/29-fast-flood-fill-lighting-in-a-blocky-voxel-game-pt-1). It is a flood-fill algorithm applied on byte arrays. A distinction is made between sunlight (ambient lighting) and block lights.

# Particles

I implemented two types of particles, which both behave like point particles CPU-side. They have a health value which decreases on update, and the particle is deleted when this value reaches 0. Each particle has a scale, position, velocity, and color value. They are updated once in the rendering thread, before each frame.

# Billboarded particles

They are billboarded rectangles which always face the camera, GPU-side. I use a geometry shader and a single draw call (so I can render them depending on the distance from the camera). Each BillboardedParticle has a TextureAtlas reference. This object is basically a GLTexture where the image is a texture atlas. On particle updates and rendering, I use a single integer to know which texture of the atlas should be used, and I interpolate it with the next texture of the atlas, creating a nice smooth effect when the texture updates.

# Cube particles

These particles are rendered as cubes, using a single draw call (via glDrawArraysInstanced()). A VBO is filled with 21 floats per cube: its transformation matrix (16 floats), its color (4 floats), and its health (1 float).
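Returning to the Light section above, the flood-fill propagation can be sketched in plain Java. This is a hedged sketch, not the engine's code: it uses a small 2D grid instead of the 16x16x16 byte arrays, and the names (`FloodLight`, `propagate`) are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FloodLight {
    // Hedged 2D sketch of the flood-fill lighting idea (the engine works on
    // 16x16x16 byte arrays); class and method names are illustrative only.
    static final int SIZE = 8;

    // Propagates light from (sx, sy), decreasing by one per step.
    // Cells holding -1 are opaque blocks and stop the propagation.
    static void propagate(int[][] grid, int sx, int sy, int value) {
        Deque<int[]> queue = new ArrayDeque<>();
        grid[sy][sx] = value;
        queue.add(new int[] { sx, sy });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int light = grid[p[1]][p[0]];
            for (int[] d : dirs) {
                int nx = p[0] + d[0], ny = p[1] + d[1];
                if (nx < 0 || ny < 0 || nx >= SIZE || ny >= SIZE) continue;
                // spread only into transparent cells that are darker than light - 1
                if (grid[ny][nx] >= 0 && grid[ny][nx] < light - 1) {
                    grid[ny][nx] = light - 1;
                    queue.add(new int[] { nx, ny });
                }
            }
        }
    }

    public static void main(String[] args) {
        int[][] grid = new int[SIZE][SIZE];
        grid[3][3] = -1; // one opaque block
        propagate(grid, 0, 0, 7);
        System.out.println(grid[0][1]); // neighbour of the source: 7 - 1 = 6
    }
}
```

The SOA write-up linked above extends the same idea with a second queue for light removal.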
# Sky, environment

The sky currently uses an untextured skydome, colored using 3D noise. I'm looking for a nicer solution (a sky plane + volumetric clouds should be better).

- Supports fog (density, opacity, color)
- Supports day/night cycles
- Climatic effects can be simulated using particles (dust, pollen, rain...)

# Models

The 3D model implementation is based on a skeleton system. The mesh vertex format is: (x, y, z, ux, uy, vx, vy, nx, ny, nz, b1, b2, b3, w1, w2, w3) ('position', 'texture coordinates', 'normal', 'bones', 'weights'). Each bone 'bn' corresponds to the ID of a bone in the skeleton, and the corresponding weight 'wn' is a factor for how much that bone's transformation affects this vertex. (So when animating, the mesh stays static, the bones are transformed, and then we apply the same transformations to the vertices using these weights.)

- A 'Model' contains all the data of a model: basically a 'ModelSkeleton', a 'ModelMesh', a 'List<ModelAnimation>', and a 'List<ModelSkin>'
- 'ModelMesh': contains the mesh (VAO/VBO) of the whole model
- 'ModelSkin': simply has a name and a texture
- 'ModelSkeletonAnimation': represents a single animation for a skeleton (e.g. dance, run, ...). It has a name and a list of key frames for transformed bones at given times.

The ModelRenderer is only able to render 'ModelInstance' objects, which are instances of a Model. This 'instance' system allows hundreds of instances of the same model without performance loss. You should consider binding your model to a specific Entity, using the ModelManager and EntityManager on program initialization (so when this Entity spawns, a new 'ModelInstance' is automatically bound to it, and the whole rendering process does the job independently). However, you can also dynamically bind an Entity and a Model, by creating a new 'ModelInstance' and adding it to the ModelManager. (You should really not be doing this.)

# Entities

An entity is a (moving) object of the world.
It has a unique world id (set when added to a world), a vec3 position, a rotation, vec3 velocities, and vec3 accelerations. It follows the physics rules we apply to it (gravity, friction...). It may have AI. It may have a model, in which case it has its own ModelInstance.

# World

A world is basically a set of Terrains. Terrains are stored in a HashMap, where the key is 3 integers representing the terrain index; the hash function is optimized for this (see 'Vector3i.hashCode()'). Terrains which need to be updated are also kept in another list (they are the 'loaded terrains'). Entities are stored within multiple data stores to make their rendering and updates faster. See 'EntityStorage.java'.

# Events

See EventManager.java. A simple event system. One can register an event or an event callback via the EventManager.

# Mod loader

It loads every jar file in the folders './mods', './mod', './plugins', './plugin', and if one has a class which inherits from 'Mod.class' and has a valid 'ModInfo' annotation, it loads it into the engine. Pretty easy, isn't it?

# Rendering pipeline

The game is rendered in two passes: the world, and the GUIs.

• The World: The WorldRenderer contains several 'Renderer' objects, which render parts of the world. They are allocated when they are needed and deallocated properly when they aren't needed anymore. For example, there is the ModelRenderer (which renders every entity's model), the SkyRenderer (which renders the sky), the TerrainRenderer... Any modder can register new world renderers if needed. The world is rendered to its own FBO and texture. It can be displayed using a 'GuiTexture', for example.

• Gui Renderer

# GUI System

The engine implements its own GUI system and a preset of GUI objects. The system is based on a parent/child hierarchy (coordinates are always relative to the parent). Each object has a weight, which determines its layer when rendering. Events are handled properly.
A new font can easily be added to the game (one line of code) by using the hiero jar file (a font generator); see the 'com.grillecube.renderer.gui.font' package.

# Sounds

A custom set of objects and functions is implemented for 3D sounds (using OpenAL, thanks to the LWJGL bindings).

# Assets management

Each mod/project should have its own assets, stored in a zip file. This file has to be registered on program initialisation. It will then be unzipped, and missing/modified files will be re-extracted (so we ensure data is always present and not corrupted).
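To illustrate the World section's terrain storage (a HashMap keyed by 3 integers, cf. 'Vector3i.hashCode()'), here is a hedged sketch; `TerrainIndex` and the generic `Objects.hash` are illustrative stand-ins for the engine's optimized key class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class TerrainIndex {
    // Hedged stand-in for the engine's 3-integer terrain key ('Vector3i'):
    // equals/hashCode over (x, y, z) let a HashMap find a terrain by index.
    final int x, y, z;

    TerrainIndex(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TerrainIndex)) return false;
        TerrainIndex t = (TerrainIndex) o;
        return x == t.x && y == t.y && z == t.z;
    }

    @Override
    public int hashCode() {
        // the engine uses a hash optimized for terrain indices; this is the generic form
        return Objects.hash(x, y, z);
    }

    public static void main(String[] args) {
        Map<TerrainIndex, String> terrains = new HashMap<>();
        terrains.put(new TerrainIndex(1, 0, -2), "terrain A");
        System.out.println(terrains.get(new TerrainIndex(1, 0, -2))); // prints terrain A
    }
}
```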
1
PavelMal/selenide-example
Example of using Selenide for UI Autotests
null
This is an example of using the Selenide framework, showcasing tests built with the Page Object pattern. The project includes basic tests for searching, clearing the search, and displaying pop-up suggestions, as well as attempting to log in with incorrect data.

You can run tests in parallel by adding the param `-Psuite="Parallel"`; you can also configure the thread count by adding the param `-Pthreads="2"` to the arguments. Without additional params, tests start with the default configuration (a `single` thread with a `single` suite).

You can also add the param `-PbaseUrl` to change the start URL, for example:
- `-PbaseUrl="google.com"`

To see a test report after running the tests, follow these instructions:
- Run the tests (when they finish, you will see a new 'build' folder inside the project; inside the 'build' folder you will find the 'allure-results' folder)
- Change your directory to 'build' and run `allure serve` on the command line (the report will be generated automatically)

Screenshots are automatically attached to failed steps ![img.png](src/test/resources/screenshot/failed.png)
1
mzubal/spring-boot-monolith
Simple example of monolithic spring-boot app with components isolated using Java visibility modifiers.
null
# Simple Spring-boot Monolith

This repository represents a very simple proof of concept of creating a *monolithic* *spring-boot* project with *separation* of internal components via Java APIs (hiding their internals from others). This is intentionally the simplest way of doing this, using Java visibility modifiers (and verifying this works well with Spring and other libraries), but there might be better choices for bigger projects (like maven/gradle modules or microservices). The APIs are very loosely coupled, and it would be quite easy to change e.g. the persistence for each "service" or transform this to use modules (you can check how that looks e.g. in [spring-gradle-kotlin-multimodule](https://github.com/mzubal/spring-gradle-kotlin-multimodule)) or microservices.

The project uses *spring-data* as the persistence layer for all the "services". It also takes advantage of *ObjectMapper* and *Lombok* to reduce the boilerplate needed.

The package structure of the project is as follows, with each "service" residing in its own package:

![packages](doc/packages.png)
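The isolation idea can be sketched in a few lines of plain Java. This is a hedged illustration with hypothetical names (`OrderService`, `OrderServiceImpl`), not code from the repository: other packages depend on the API type only, never on the implementation. (In a single file for brevity the interface is also package-private; in the real project the interface would be `public` and the implementation class package-private.)

```java
// Hypothetical component sketch: the API type is what other packages see; the
// implementation class has no modifier, i.e. it is package-private and
// invisible outside its package. Spring can still instantiate and inject it.
interface OrderService {
    String placeOrder(String item);
}

class OrderServiceImpl implements OrderService { // package-private internals
    @Override
    public String placeOrder(String item) {
        return "order placed: " + item;
    }
}

public class VisibilityDemo {
    public static void main(String[] args) {
        // In the real app the wiring is done by Spring; here we wire by hand.
        OrderService service = new OrderServiceImpl();
        System.out.println(service.placeOrder("book")); // prints: order placed: book
    }
}
```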
1
camunda-community-hub/Make-Rest-Calls-From-Camunda-7-Example
This is an example application which demonstrates the main ways in which a REST call can be made from a Camunda BPMN process.
null
# Make REST calls from Camunda Platform 7 Example

This document outlines an example project and four ways to make REST calls from Camunda Platform 7 to external systems. Each method has its pros and cons, but this document aims to provide a better understanding of which method may be the best option for a given requirement and user.

![Video Tutorial Badge](https://img.shields.io/badge/Tutorial%20Reference%20Project-Tutorials%20for%20getting%20started%20with%20Camunda-%2338A3E1) <img src="https://img.shields.io/badge/Camunda%20DevRel%20Project-Created%20by%20the%20Camunda%20Developer%20Relations%20team-0Ba7B9">

This document outlines implementation of the following:

* [Java Delegates](https://docs.camunda.org/manual/latest/user-guide/process-engine/delegation-code/)
  * Where the call is made within a local Java class, called by the engine.
* [External Task](https://docs.camunda.org/manual/latest/user-guide/process-engine/external-tasks/)
  * Where the call is made by an external piece of software, running independently from the engine.
* [Connectors](https://docs.camunda.org/manual/latest/user-guide/process-engine/connectors/)
  * Where the call is made by the engine, using properties added directly to the XML of the process model.
* [Script](https://docs.camunda.org/manual/latest/user-guide/process-engine/scripting/)
  * Where the call is made by the engine, executing a script. The script can be added directly to the XML or be maintained as an external resource.

Within a process, the first three implementation methods can implement a BPMN [Service Task](https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/service-task/). The best practice guide provides a good [overview of the different options for the implementation of service tasks](https://camunda.com/best-practices/invoking-services-from-the-process/). Script can be implemented within a [Script Task](https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/script-task/).
Additionally, [Task Listeners](https://docs.camunda.org/manual/latest/user-guide/process-engine/delegation-code/#execution-listener) and [Execution Listeners](https://docs.camunda.org/manual/latest/user-guide/process-engine/delegation-code/#execution-listener) can implement Java Delegates and Script. This project uses three Service Tasks and one Script Task to outline the different implementation methods. The following image outlines the process: ![Process](./img/process.png) :exclamation: **Important:** If a REST call returns a `2xx` status code, this indicates a successful call. However, it should not be assumed other codes such as `5xx` or `4xx` will automatically lead to an error within the process. Instead, this can be implemented and handled depending on the requirements for each project. For more information, read the [explanation of the implementations](#explanation-of-the-implementations) section below. ## Run the project The project contains a **Camunda Spring Boot** application and a [JavaScript External Task Worker](https://github.com/camunda/camunda-external-task-client-js). To run the project and start a process instance in Tasklist, follow the steps below: 1. Download the project and open the **Camunda Spring Boot** application in your IDE using Java 11. 2. Start the application. 3. Start a process instance of the process in Tasklist using the predefined start form, or start the instance via REST. 4. If you [start the instance via REST](https://docs.camunda.org/manual/latest/reference/rest/process-definition/post-start-process-instance/) make sure the necessary variables are included. Example for the Request body: ```Json { "variables": { "repoName" : { "value" : "Name of Repo", "type": "String" }, "repoOwner" : { "value" : "Name of Repo Owner", "type": "String" } }, "businessKey" : "myBusinessKey" } ``` 5. Navigate to the folder of the **Javascript External Task** client. 
Make sure you have npm installed and install the needed packages:
```
npm update
```
6. Run the worker.
```
node service.js
```

## Explanation of the implementations

This section provides more information on each implementation. It also shows how the response code and the information from the response body can be used differently. In this project, a response code that is anything except `200` will create an incident. The information from the response body is either used to complete the task, or used to throw a BPMN error.

No matter how you implement the call, there are three main options as to how a Service Task can behave:

* Complete the task successfully (use the information from the REST call to route the further process.)
* Throw a [BPMN error](https://docs.camunda.org/manual/latest/reference/bpmn20/events/error-events/) (information gained from the REST call leads to an error that should be handled within the logic of the process.)
* Create an [incident](https://docs.camunda.org/manual/latest/user-guide/process-engine/incidents/) (information from the REST call leads to a failure and creates an incident in the workflow engine. This will require an administrator to resolve.)

### Java Delegate

A **Java Delegate** is called during the execution and must implement the **JavaDelegate** interface. In this example, the Java class is deployed to the Camunda engine.

:file_folder: **Note:** The explanation divides the code into different pieces to outline the different concepts. The full class can be found within [this project](Make-Rest-Calls-From-Camunda-Example/CamundaApllication/src/main/java/com/example/workflow/FindGitHubRepo.java).

The first part of the class gets the process variables `repoOwner` and `repoName`.
These variables are used to perform the REST call within the Java code:

```java
@Named
public class FindGitHubRepo implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {

        String repoOwner = (String) execution.getVariable("repoOwner");
        String repoName = (String) execution.getVariable("repoName");

        HttpResponse<String> response = get("https://api.github.com/repos/" + repoOwner + "/" + repoName);
```

The `get` method called in the above snippet simply implements the REST call using the Java 11 HTTP client:

```Java
public HttpResponse<String> get(String uri) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(uri))
            .build();
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
    return response;
}
```

#### Create an incident

The code example then checks the status code. If the response code is not `200`, an exception will be thrown. Within the code, this exception is not handled; therefore, it will lead to an incident within the Camunda Platform engine.

```java
if (response.statusCode() != 200) {
    // create an incident if the status code is not 200
    throw new Exception("Error from REST call, Response code: " + response.statusCode());
}
```

:exclamation: **Important:** Ensure you understand [transactions](https://docs.camunda.org/manual/latest/user-guide/process-engine/transactions-in-processes/) within the Camunda workflow engine. Depending on where your last transaction in the process is, the state of the process will roll back to the last transaction. Note that this may not be the service task where the error occurred. You can set transactions manually by using the **async before and after** flag in the modeler.
![Process](./img/async.png)

#### Throw a BPMN error

If the response code is `200`, the example code reads the response body and parses it for the `has_downloads` entry. If the value is `false`, the code throws a BPMN error. This error can then be handled by the logic of the process:

```java
} else {
    //getStatusText
    String body = response.body();
    JSONObject obj = new JSONObject(body);
    //parse for downloads
    Boolean downloads = obj.getBoolean("has_downloads");
    if (!downloads) {
        // Throw BPMN error
        throw new BpmnError("NO_DOWNLOAD_OPTION", "Repo can't be downloaded");
    }
```

#### Complete the task

If the two prior checks trigger neither the exception nor the BPMN error, the example code parses the response body for the number of forks. This sets the variable on the process and completes the task.

```java
    } else {
        //parse for forks
        String forks = obj.getString("forks");
        int forksAsNumber = Integer.parseInt(forks);
        //Set variables to the process
        execution.setVariable("forks", forksAsNumber);
    }
```

### External Task

An external task is written to a list. An external task worker then fetches and locks it using Camunda's REST API. External task workers are deployed independently from the engine and are therefore language-independent. There is a [list with available clients in different languages](https://github.com/camunda/awesome-camunda-external-clients) you may refer to. This project uses the JavaScript external task client.

:file_folder: **Note:** The explanation divides the code into different pieces to outline the different concepts. The full code example can be found within [this project](Make-Rest-Calls-From-Camunda-Example/SearchContributorService/service.js).

With the External Task client, it is possible to get variables from the process.
These variables are then used to make a REST call in JavaScript code:

```javascript
client.subscribe("searchContributors", async function({ task, taskService }) {
  const repoName = task.variables.get("repoName");
  const repoOwner = task.variables.get("repoOwner");
  const url = "https://api.github.com/repos/" + repoOwner + "/" + repoName + "/contributors"
  console.log(url)
  try {
    const contributors = await fetch(url)
```

#### Create an incident

Again, the response code is checked. If it does not equal `200`, an error is thrown. Later in the JavaScript code, the error is handled and the External Task client sends back a failure to the workflow engine. This failure will then create an incident. If the response code equals `200`, the response body is returned:

```javascript
    .then(function(response) {
      if (!response.ok) {
        // throw Exception;
        var e = new Error("HTTP status " + response.status); // e.name is 'Error'
        e.name = 'RestError';
        throw e;
      }
      return response.json();
    })
...
  // Handle the exception and create an incident in the workflow engine
  } catch (e) {
    await taskService.handleFailure(task, {
      errorMessage: e.name,
      errorDetails: e.message,
      retries: 0,
      retryTimeout: 1000
    });
  }
```

#### Throw a BPMN error

Next, the information from the response body is used. The example uses the number of contributors.
If the number is less than or equal to **one**, the external task worker sends back a BPMN error:

```javascript
var numberContributors = Object.keys(contributors).length;
if (numberContributors <= 1) {
  const processVariables = new Variables();
  processVariables.set("errorMessage", "Sorry the repo has just one or none contributor, look for another one");
  await taskService.handleBpmnError(task, "NO_CONTRIBUTORS", "The repo has no contributors", processVariables);
}
```

#### Complete the task

If there is more than one contributor, the External Task worker uses the client method to complete the task and send back variables to the process:

```javascript
//Complete Task
const processVariables = new Variables();
processVariables.set("contributors", numberContributors);
await taskService.complete(task, processVariables)
```

### Connectors

The two previous examples implemented the REST call within their code. The benefit is that everything is together in one place and the user can easily handle the different outcomes. **Connectors** within Camunda provide an API for simple HTTP and SOAP connections; there is no need to implement the REST call in code. However, handling the different outcomes is more complicated and requires Script.

:exclamation: **Important:** Be aware of the other two options. Depending on how complicated the REST call and the processing of the response become, the other options above may be more suitable.

To use **Connectors** and the HTTP client, the connect dependency and the HTTP client must be added to the POM file:

```xml
<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine-plugin-connect</artifactId>
</dependency>
<dependency>
  <groupId>org.camunda.connect</groupId>
  <artifactId>camunda-connect-http-client</artifactId>
</dependency>
```

To parse the response body, it may be helpful to include the [Spin Plugin](https://docs.camunda.org/manual/latest/reference/spin/) as well. To add Spin and JSON, add the dependencies to the POM file.
```xml
<dependency>
  <groupId>org.camunda.bpm</groupId>
  <artifactId>camunda-engine-plugin-spin</artifactId>
</dependency>
<dependency>
  <groupId>org.camunda.spin</groupId>
  <artifactId>camunda-spin-dataformat-json-jackson</artifactId>
</dependency>
```

This [project](https://github.com/rob2universe/camunda-http-connector-example) shows a simple use case for **Connectors**. To configure the **Connector**, the **Input Parameters** must be set. To configure a **GET** call, the method, the URL, and the headers must be set:

![Output](./img/input.png)

The configuration of the **Connector** allows setting output variables:

![Output](./img/output.png)

The response code can be accessed easily with the Expression:

```
${statusCode}
```

To parse the body, we use an Expression in combination with Spin:

```
${S(response).prop("health_percentage")}
```

#### Create an incident

Similar to the examples above, the **Connector** should create an incident if the response code is not `200`.

:exclamation: **Important:** Ensure you understand [transactions](https://docs.camunda.org/manual/latest/user-guide/process-engine/transactions-in-processes/) within the Camunda workflow engine. Depending on where your last transaction in the process is, the state of the process will roll back to the last transaction, which might not be the service task where the error occurred. You can set transactions manually by using the **async before and after** flag in the modeler.

![Async](./img/async.png)

To observe the incident at the right place in this project, the **async before** flag is set. This also ensures the transaction isn't rolled back too far. The response code is defined as an output variable. To check the value of the variable, the project uses [Script](https://docs.camunda.org/manual/latest/user-guide/process-engine/scripting/). Script can be used in various places; for example, directly when defining the output variable of a **Connector**.
This example uses Script as an [Execution Listener](https://docs.camunda.org/manual/latest/user-guide/process-engine/scripting/#use-scripts-as-execution-listeners) at the end of the **Connector**.

![ExecutionListenerForResponseCode](./img/ExecutionListener1.png)

If the response code does not equal `200`, the Script throws an error. As the error is not handled, this leads to an incident within the workflow engine.

:bangbang: **Cases to consider:** In this project, the incident will be created earlier: if the response is not `200`, the response body will look different, and therefore the Expression used to store the health percentage, ```${S(response).prop("health_percentage")}```, fails. The accompanying incident message won't give details about the failed REST call; rather, it states that the evaluation of the expression has failed. This outlines one of the challenges of working with **Connectors**. If the evaluation of the response code and the different variables is handled within the same code, it is easier to maintain the outcome and the logic.

#### Throw a BPMN error

The example uses the gained information about the health percentage. An **Execution Listener** at the end of the connector is used to evaluate the value of the variable.

```javascript
health = execution.getVariable("healthPercentage");
if (health < 70) {
  execution.setVariable("healthPercentage", health);
  throw new org.camunda.bpm.engine.delegate.BpmnError("error-not-healthy");
} else {
  execution.setVariable("healthPercentage", health);
}
```

#### Complete the task

With a **Connector**, the task completes after calling the REST endpoint.
#### Logging Connectors

To get more output about the traffic with the called service, you can use this logging configuration in the `application.yaml` file:

```yaml
logging:
  level:
    '[org.camunda.bpm.connect]': DEBUG
    '[org.apache.http.headers]': DEBUG
    '[org.apache.http.wire]': DEBUG
    '[connectjar.org.apache.http.headers]': DEBUG
    '[connectjar.org.apache.http.wire]': DEBUG
```

The levels are preconfigured to `INFO` in the `application.yaml` of this project.

### Script Task

Scripting can be used in a variety of places within the process, including dedicated script tasks. In some distributions of Camunda the Groovy engine is already included. For Spring Boot it needs to be added as a dependency.

```XML
<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>3.0.8</version>
  <type>pom</type>
</dependency>
```

JavaScript is part of the Java Runtime (JRE) until version 15; the Nashorn JavaScript engine is removed in Java 15. To use another scripting language that is compatible with JSR-223, the respective jar file has to be added to the classpath.

:exclamation: **Important:** Using a script to make a REST call is not that common within Camunda.

This example uses Groovy to make the REST call. Within the script, it is possible to get variables from the process. These variables are then used to make the REST call in the Groovy script:

```java
def repoOwner = execution.getVariable("repoOwner")
def repoName = execution.getVariable("repoName")

RESTClient client = new RESTClient("https://api.github.com/")
def path = "/repos/" + repoOwner + "/" + repoName + "/languages"
def response
```

The script then uses the information from the response to set a variable to true if the response body contains a certain language:

```java
if (response.contentAsString.contains("Java")) {
    java = true;
    programingLanguages = programingLanguages + " Java "
}
```

#### Create an incident

Within the script, an uncaught exception will create an incident in the engine.
In this example we catch any exception from the RESTClient, print the exception to the console and then throw a new exception, which is not caught and creates the incident.

```java
catch (RESTClientException e) {
    println(e)
    throw new Exception(e)
}
```

#### Throw a BPMN error

BPMN errors can be thrown based on the response from the REST call. The example script throws a BPMN error as soon as Scala is included in the response body:

```java
if (response.contentAsString.contains("Scala")) {
    scala = true;
    programingLanguages = programingLanguages + " Scala "
    throw new org.camunda.bpm.engine.delegate.BpmnError("error-scala-detected");
}
```

#### Complete the task

If the try block runs successfully, without the if statement for Scala being executed, the task completes and the variables are set back on the process instance.

```java
//return variables
execution.setVariable("programingLanguages", programingLanguages)
execution.setVariable("java", java)
execution.setVariable("javaScript", javaScript)
execution.setVariable("python", python)
execution.setVariable("ruby", ruby)
execution.setVariable("closure", closure)
```
1
tringuyenhoaiphuong/DrawerOnTopActionBar
This example demonstrates how to make NavigationDrawer on top of Actionbar. Remember to add appcompat_v7 project to its libraries before compiling.
null
DrawerOnTopActionBar
===================

This example demonstrates how to make a NavigationDrawer on top of the ActionBar. Remember to add the appcompat_v7 project to its libraries before compiling.
1
yrojha4ever/JavaStud
Official, Main: This is Core/Advance java example series project. It help to learn java step by step using pdf tutorial provided here and corresponding demo project for the eclipse. Tag: Java Student, Java Stud, Stud Java, StudJava, Java Teachers, Studs Quick Start Guide, Studs Java, Object Oriented Programming, Core Java, Java SE, Java EE, Java Enterprise Edition, Java Blog, Java Articles, Java Web, JSP, Servlet, Maven, Spring, Hibernate, Spring-boot, Spring MVC Web, Angular JS, Angular 2, Java Security, Java CRUD, Java Login Example, File Handling, Multi threading, exception handling, Collection classes, Swing, Database, Date Time, Joda Time, JPA.
angular angularjs corejava hibernate java java-enterprise-edition java-guru java-students java-tutorial java-tutorials jsp maven object-oriented-programming servlet spring spring-boot stud-java stud-projects student swing
# JavaStud This is java tutorial example series. Visit http://yro-tech.blogspot.com/ blog for more resources. [1.Introduction:](https://drive.google.com/open?id=0B3_WIs_SGCRzbDdKbTVoZHZUMGs)<br/> [2.OOP:](https://drive.google.com/open?id=0B3_WIs_SGCRzZHk2MmNsVkxqa1U)<br/> [3.Exception Handling,Inner Class, Date Time, Joda time, Reflection:](https://drive.google.com/open?id=0B3_WIs_SGCRzdkk1WGpGSGxMdU0)<br/> [4.Multithreading, IO, Serialization:](https://drive.google.com/open?id=0B3_WIs_SGCRzTl9GbFZSdmZabE0)<br/> [5.Collections, Java Generics:](https://drive.google.com/open?id=0B3_WIs_SGCRzVDg0MV9qQmVjajQ)<br/> [6. JDBC ](https://drive.google.com/open?id=0B3_WIs_SGCRzU1Z2NUhaSkdXUE0)<br/> [7. Swing ](https://drive.google.com/open?id=0B3_WIs_SGCRzRFVEdzV3ekNNMWM) &nbsp;&nbsp;&nbsp;[(Project here..)](https://github.com/yrojha4ever/StudManagProj)<br/> [8. Java EE/Servlet](https://drive.google.com/open?id=0B3_WIs_SGCRzcG1rQVRabTJSVG8) &nbsp;&nbsp;/&nbsp; [JSP](https://drive.google.com/open?id=0B3_WIs_SGCRzRnIySktZZTlsT2M) &nbsp;&nbsp;:wavy_dash:[Web Project(Jsp/Servlet)](https://github.com/yrojha4ever/JavaStudWeb)<br/> [10. Maven ](https://drive.google.com/open?id=0B3_WIs_SGCRzaHpnR0VvdkNlVWc)<br/> [11. Hibernate ](https://drive.google.com/open?id=0B3_WIs_SGCRzRnlsYkhLTW1QTlk) &nbsp;&nbsp;&nbsp;[(Hibernate Project Here...)](https://github.com/yrojha4ever/JavaStudHibernate)<br/> [11. Spring AOP `Dependency Injection` Project..](https://github.com/yrojha4ever/JavaStudSpringDI)<br/> [12. Spring Web:](https://drive.google.com/open?id=0B3_WIs_SGCRzMExNRlFJN24yT3c) &nbsp;&nbsp;&nbsp;[(Spring Web Mvc/Hibernate Project...)](https://github.com/yrojha4ever/JavaStudSpringMVCWeb) &nbsp;&nbsp;&nbsp;Spring boot version: [springmvcweb-boot](https://github.com/yrojha4ever/springmvcweb-boot)<br/> ### Some Extra course along with Spring MVC WEB! 
12.1 [Angular JS APP](https://github.com/yrojha4ever/angularapp)<br/> 12.2 [Design Pattern] ``` Factory Pattern Singleton Pattern MVC Pattern Builder Pattern Decorator Pattern ``` 12.3 [Java - 8] ``` Lambda Expressions Default methods Functional interfaces Streams Filters Date-Time Package ``` 12.4 [JUnit] ``` Assertions assertThat Execution order @Test @BeforeClass @AfterClass @Before @After ``` ## :coffee: NEXT :fast_forward: [Industrial Java Development With Advance Web(Training)](IndustrialJava.MD) :coffee: ## If this repository helps you in any way, show your love :heart: by putting a :star: on this project :v: ``` --------------------------------------------- Assignment 1: http://www.ctae.ac.in/images/editorFiles/file/Lab%20Solutions%20of%20CSE_IT/java.pdf 1. Write a program to find the sum and average of N numbers using command line arguments. 2. Write a program to find the sum and average of N numbers input by the user (using the Scanner class). 3. Write a program to demonstrate type casting of all types of data in Java. 4. Write a program to demonstrate boxing/un-boxing of all types of data in Java. 5. Write a program to test a prime number. 6. Write a program to calculate Simple Interest input by the user. Simple Interest = P*T*R/100 7. Write a program to display the following string in the console: <!DOCTYPE html> <html lang="en-US"> <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script> <body> <div ng-app=""> <p>Name : <input type="text" ng-model="name"></p> <h1>Hello '{{name}}'</h1> </div> </body> </html> --------------------------------------------- ``` ``` Assignment 2: Src: http://www.scribd.com/doc/68627280/Java-Lab-Test-Final-Questions#scribd 1. Write a program with different methods to convert Fahrenheit to Celsius and Celsius to Fahrenheit. 2. Write a program to take input from the user and calculate sales tax. 3. Write a program to take input from the user and calculate an interest rate (10%) for giving a loan. 
4. Write a program that reads in the radius and length of a cylinder and computes its volume. 5. Write a program that converts pounds into kg. The program prompts the user to enter a number of pounds, converts it to kg and displays the result [1 pound is 0.454 kg]. 6. Write a program that reads an integer 0 to 1000 and adds all the digits in the integer. [For example: 911 (input) -> 11 (result)]. 7. Write a program that converts a lowercase letter to an uppercase letter. [hint: int offset = 'a' - 'A'; char uppercase = (char) (lowercase - offset)] 8. Write a program that receives an ASCII code (int between 0 - 128) and displays its character [example: 97 (input) -> a (output)]. 9. Write a program that reads the following information -> employee's name, number of hours worked in a week, hourly pay rate, tax (20%) withholding from the user and prints a payroll statement with the employee's details. 10. Write a program that reads in an investment amount, annual interest rate and number of years. Display the future investment value of the person. 11. Write a program that calculates the energy needed to heat water from an initial temperature to a final temperature. Your program should prompt the user to enter the amount of water in kg and the initial and final temperature of the water. 12. Write a program that displays the following table (5 in a row): a b pow(a,b) a b pow(a,b) --------------------------------------------------------------- 1 2 1 6 7 ______ 2 3 8 7 8 ______ 3 4 81 8 9 ______ 4 5 ______ 9 10 ______ 5 6 ______ 10 11 ______ 13. Write a program to calculate leap years. 14. Write a program that reads an integer and checks whether it is even, odd or a prime number; print the same as output. 15. Write a program which sorts given numbers, which are provided by the user. 16. Write a program to display the multiplication table for a number given by the user. 17. Write a program to calculate the GCD and LCM of the given input from the user in different methods. 
18. Write a program that prompts the user to enter the number of students and each student's name and score. Finally, display the student with the highest score. 19. Write a program that displays all numbers from 100 to 1000, ten per line, that are divisible by 5 and 6. 20. Use a while loop to find the smallest integer 'n' such that n squared (n^2) is greater than 12000. 21. Write a program that displays all leap years, ten per line, in the twenty-first century (2001 to 2100). 22. Write a program that prompts the user to enter a decimal integer and displays its corresponding binary value, both with and without a built-in class. 23. Write a program to display an integer in reverse order [example: 1345 (input) -> 5431 (output)]. 24. Write a program that prompts the user to enter a decimal integer and displays its corresponding hexadecimal value. 25. Write a program to find if the given input is a palindrome or not. Also include a different method to see if the number is prime or not. 26. Write a program which reads the scores, finds the best score, finally assigns grades to the students and prints the students' details in order. (Use an array.) 27. Write a program to count the number of letters in a given array, with and without built-in functions. 28. Write a program which prompts for a set of elements and searches for a given key element. 29. Write a program to sort given array elements using insertion sort. 30. Write a program that reads eleven numbers from the user, computes their average, finds out how many numbers are below the average and displays duplicate numbers. 31. Write two overloaded methods that return the average of an array with the following headers: public static int average(int[] array); and public static double average(double[] array). 32. Write a program which reads two matrices (2D), adds them and displays the output on screen. 
33. Write a program with different methods to do these array operations -> sort an array and search for an element inside it; determine the upper bound of a 2D array. 34. Write a program with different methods to do these array operations -> sort an array and insert an element inside it; determine the upper bound of a 2D array. 35. Write a program with different methods to do these array operations -> reverse an array; search for the minimum and the maximum element in an array. 36. Write a program with different methods to do these array operations -> compare two arrays and display whether they are equal or not. 37. Write a program to use methods for calculating the Fibonacci series using recursion. 38. Write a program to use recursion for calculating the factorial of a number. 39. Write a program to demonstrate various arithmetic and assignment operations, right shift and left shift. 40. Write a program to accept a string on the command line and print the same; also count how many characters are in the given string. 41. Write a program to demonstrate various relational and logical operations. 42. Write a program using different methods to demonstrate the conditional operator and do type conversions from -> double to float, short to int, double to int, char to int, int to short. 43. Write a program using different methods with different control flow constructs (for, switch, while) to check whether an alphabet is a vowel or not. 44. Write an exception which catches if the value is very small (e.g. < 0.01). 45. Write a program which catches different exceptions: 1. divide by zero, 2. array index out of bounds, 3. wrong data type. 46. Write a program to catch wrong input from a command line argument. 47. A simple applet to display the message HELLO WORLD on the applet window. 48. Do the above program by passing parameters and align the message. 49. Your own exception -> if a number is too small (declare variables as float, do any arithmetic operations; if the output is less than 0.01 then throw an exception). 50. 
Write a Java program which catches exceptions for divide by zero, array index error and wrong data type. 51. Write a Java program to catch wrong input from the command line (declare an int variable but input a different type). 52. Display "Hello World" on the applet window. 53. Pass parameters and display a string on the applet screen. 54. Align the output "Hello World" to the rightmost corner, bottom, middle, leftmost corner, or anywhere on the applet screen. 55. Write a program to calculate the value of X = sin x + cos x + tan x using multiple threads. 56. Define a thread using the Thread class to generate the factorials of the first 20 natural numbers. Create an instance of this thread and then activate it. 57. Write a program to generate the square roots of the first 30 natural numbers using the Runnable interface. 58. Define a thread to display the odd-numbered elements in an array of size 50. Define another thread to display the even-numbered elements in another array of size 50. Create instances of the above threads and run them. 59. Write a program that executes three threads. The first thread displays Good morning every second, the second thread displays Hello every two seconds, and the third thread displays Welcome every three seconds. Create the three threads by extending the Thread class. ``` ### Java soft installation ``` 32 bit/64 bit: 1. Install JDK 1.8 2. Install 7zip software 3. spring-tool-suite: > create C:\java-ws folder > copy zip file "spring-tool-suite-3.8.1" into C:\java-ws folder > extract it there using 7zip To run spring-tool-suite: > go to sts-bundle > sts-3.8.1.RELEASE > STS.exe > it will prompt the "select workspace" dialog # give the path of your workspace: C:\java-ws # tick the checkbox (default) # OK ``` ### Install Maven on Mac/Linux: First install Homebrew, which I recommend for these kinds of utilities: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` 
Then you just install Maven using: `brew install maven` <br/> PS: If you get a 404 error, try doing a `brew update` just before. *Install Maven on Linux Ubuntu:* `sudo apt-get install maven` ### How to install Lombok on Mac: ``` Rename the file "STS.EXE" to "sts.exe" under ../sts-bundle/sts.app/Contents/MacOS/. Then run java -jar lombok.jar and select the STS.ini file under ../sts-bundle/sts.app/Contents/Eclipse ``` [Assignments](https://github.com/yrojha4ever/JavaStud/blob/master/Assignments.MD) <br/> Some good links: <br/> [Design Patterns for Humans](https://github.com/kamranahmedse/design-patterns-for-humans) [Java Exercism](http://exercism.io/languages/java/about) [How to Conduct a Good Programming Interview](http://www.lihaoyi.com/post/HowtoconductagoodProgrammingInterview.html)
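As a quick illustration, exercise 5 of Assignment 1 above (test a prime number) could be solved roughly like this. This is only a minimal sketch, not an official solution; the class name `PrimeTest` is made up for the example.

```java
// Sketch for exercise 5: test whether a number is prime.
// Trial division up to sqrt(n) is enough for small inputs.
public class PrimeTest {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Demonstrate with a few sample values instead of user input.
        for (int n : new int[] {2, 9, 17, 91}) {
            System.out.println(n + " prime? " + isPrime(n));
        }
    }
}
```

Swapping the fixed array for a `Scanner` read makes it match the exercise wording exactly.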
1
MinecraftU/copper-mod
Minecraft mod intended as an example for teaching
null
null
1
kevinthecity/DialerAnimation
Two different examples of how one could architect the Lollipop Dialer Animation
null
# DialerAnimation Two different examples of how one could architect the Lollipop Dialer Animation Deep link to the java source - https://github.com/kevinthecity/DialerAnimation/tree/master/app/src/main/java/com/tumblr/myapplication
0
gabrieldim/Resource-Description-Framework
Resource Description Framework aka RDF examples written in Java
java pretty-rdf-xml rdf turtle xml
# Resource Description Framework also known as RDF Used syntaxes for developing this repository: - RDF/XML - N-TRIPLES - TURTLE - pretty RDF/XML
0
tunjos/RxJava2-RxMarbles-Samples
RxJava 2 RxMarbles Samples
example examples java learning-rxjava rxjava rxjava2 sample samples tutorial tutorials
RxJava 2 RxMarbles Samples ============== This repository contains RxJava 2 implementations of the sample operators found in the [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles). Please download the app for a more interactive tutorial. ### Running Simply import the project using intelliJ IDEA and run the corresponding run configurations. ### Awesome Links [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles) [ReactiveX](http://reactivex.io/) [ReactiveX Operators](http://reactivex.io/documentation/operators.html) [RxMarbles](http://rxmarbles.com/) [RxJava Wiki](https://github.com/ReactiveX/RxJava/wiki) ### Dependencies [RxJava2](https://github.com/ReactiveX/RxJava) [RxJava2Extensions](https://github.com/akarnokd/RxJava2Extensions) ## [Transforming](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Transforming/src/Main.java) <img src="Transforming/operators/map.png" width="250"> <img src="Transforming/operators/flatMap.png" width="250"> <img src="Transforming/operators/buffer.png" width="250"> <img src="Transforming/operators/groupBy.png" width="250"> <img src="Transforming/operators/scan.png" width="250"> ## [Filtering](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Filtering/src/Main.java) <img src="Filtering/operators/debounce.png" width="250"> <img src="Filtering/operators/distinct.png" width="250"> <img src="Filtering/operators/distinctUntilChanged.png" width="250"> <img src="Filtering/operators/elementAt.png" width="250"> <img src="Filtering/operators/filter.png" width="250"> <img src="Filtering/operators/first.png" width="250"> <img src="Filtering/operators/last.png" width="250"> <img src="Filtering/operators/skip.png" width="250"> <img src="Filtering/operators/skipLast.png" width="250"> <img src="Filtering/operators/take.png" width="250"> <img src="Filtering/operators/takeLast.png" width="250"> <img 
src="Filtering/operators/ignoreElements.png" width="250"> ## [Combining](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Combining/src/Main.java) <img src="Combining/operators/startWith.png" width="250"> <img src="Combining/operators/amb.png" width="250"> <img src="Combining/operators/combineLatest.png" width="250"> <img src="Combining/operators/concat.png" width="250"> <img src="Combining/operators/merge.png" width="250"> <img src="Combining/operators/sequenceEqual.png" width="250"> <img src="Combining/operators/zip.png" width="250"> ## [Error Handling](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/ErrorHandling/src/Main.java) <img src="ErrorHandling/operators/onErrorReturn.png" width="250"> <img src="ErrorHandling/operators/onErrorResumeNext.png" width="250"> ## [Conditional](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Conditional/src/Main.java) <img src="Conditional/operators/all.png" width="250"> <img src="Conditional/operators/contains.png" width="250"> <img src="Conditional/operators/skipWhile.png" width="250"> <img src="Conditional/operators/skipUntil.png" width="250"> <img src="Conditional/operators/takeWhile.png" width="250"> <img src="Conditional/operators/takeUntil.png" width="250"> ## [Math](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Math/src/Main.java) <img src="Math/operators/average.png" width="250"> <img src="Math/operators/sum.png" width="250"> <img src="Math/operators/reduce.png" width="250"> <img src="Math/operators/count.png" width="250"> NB -------- All screenshots taken directly from the [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles). License -------- Copyright 2017 Tunji Olu-Taiwo Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
UrsZeidler/shr5rcp
The shadowrun 5 rich client platform is a model-driven project for managing shadowrun resources. A shadowrun 5 character generator for example ...
character-generator emf java rcp shadowrun
shr5rcp ===================== ![logo](de.urszeidler.shr5.product/icons/shrImage_6_128.png) 1. Overview This is a program to manage resources and characters for the role-playing game Shadowrun 5.0. It is dedicated to creating and managing characters, providing game master aid and reporting. Refer to [releases](https://github.com/UrsZeidler/shr5rcp/releases) to download the software. For now it contains : * several character generators * [shadowrun 5 character generator aka priority system](https://github.com/UrsZeidler/shr5rcp/wiki/shr5-core-rule-generator) after the core rule book * [karma based- aka buy points](https://github.com/UrsZeidler/shr5rcp/wiki/karma-generator) character generator * [sum to ten]() * [life module system](https://github.com/UrsZeidler/shr5rcp/wiki/lifemodule-generator) * [freestyle](https://github.com/UrsZeidler/shr5rcp/wiki/freestyle) * [a runtime](https://github.com/UrsZeidler/shr5rcp/wiki/script-runtime) A script and combat view let you use all the stuff, make tests, manage combat turns, etc. The players can now collaborate by impersonating their character instances; check out the [web-app](https://github.com/UrsZeidler/shr5rcp/wiki/script-webapp). 
* [editors](https://github.com/UrsZeidler/shr5rcp/wiki/editing) for the resources objects and [wizards](https://github.com/UrsZeidler/shr5rcp/wiki/createItemWizard) to create new ones * a simple character sheet * [grunt editor](https://github.com/UrsZeidler/shr5rcp/wiki/generators#grunts) and sheet * various [sheets](https://github.com/UrsZeidler/shr5rcp/wiki/m2t) for player and game master A small german instruction : [kurz übersicht](https://github.com/UrsZeidler/shr5rcp/wiki/kurz-%C3%BCbersicht) see the upcoming and past changes : [changelog](https://github.com/UrsZeidler/shr5rcp/wiki/release-notes) |using the application | player | runtime and game master| | --- | --- | ---| |<ul><li>[starting](https://github.com/UrsZeidler/shr5rcp/wiki/Installation-and-starting)</li><li>[Using the application](https://github.com/UrsZeidler/shr5rcp/wiki/Using%20the%20application)</li><li>[editors](https://github.com/UrsZeidler/shr5rcp/wiki/editing)</li><li>[generator wizard](https://github.com/UrsZeidler/shr5rcp/wiki/characterbuilding-perspective#character-generator-wizard)</li><li>[importing exporting.](https://github.com/UrsZeidler/shr5rcp/wiki/importing-exporting) </li><li>[source book view](https://github.com/UrsZeidler/shr5rcp/wiki/sourceBookView) </li><li>[update-site](https://github.com/UrsZeidler/shr5rcp/wiki/update-site) </li></ul> | <ul><li>[character generators](https://github.com/UrsZeidler/shr5rcp/wiki/generators)</li><li>[character building](https://github.com/UrsZeidler/shr5rcp/wiki/characterbuilding-perspective)</li><li> [character diary](https://github.com/UrsZeidler/shr5rcp/wiki/character-diary)</li><li> [manage money](https://github.com/UrsZeidler/shr5rcp/wiki/CredstickTransactions)</li><li> [web app player](https://github.com/UrsZeidler/shr5rcp/wiki/script-webapp-player)</li></ul> | <ul><li>[script editing](https://github.com/UrsZeidler/shr5rcp/wiki/script-editing)</li><li>[runtime](https://github.com/UrsZeidler/shr5rcp/wiki/script-runtime)</li><li>[web 
app](https://github.com/UrsZeidler/shr5rcp/wiki/script-webapp)</li><li>[quick combat](https://github.com/UrsZeidler/shr5rcp/wiki/script-quick-combat)</li></ul> | or the [FAQ](https://github.com/UrsZeidler/shr5rcp/wiki/faq) [![Build Status](https://buildhive.cloudbees.com/job/UrsZeidler/job/shr5rcp/badge/icon)](https://buildhive.cloudbees.com/job/UrsZeidler/job/shr5rcp/) [![simple instruction](http://img.youtube.com/vi/wQCnu3sj0RA/0.jpg)](http://www.youtube.com/watch?v=wQCnu3sj0RA) [![create a grunt group instruction](http://img.youtube.com/vi/Q0AX250K9CE/0.jpg)](http://www.youtube.com/watch?v=Q0AX250K9CE) 2. Motivation As I have played Shadowrun for a very long time, I started with an editor in Delphi for Shadowrun 2.0; many years later I started a technology study with EMF for the 3.01 version. As my group has now moved to 5.0 and the Chummer project doesn't work well on Linux, I have moved some of the model to 5.0. This is a kind of technology study, working with a model-driven approach. Find out more in the [wiki](https://github.com/UrsZeidler/shr5rcp/wiki). 
and check out the [contributing](https://github.com/UrsZeidler/shr5rcp/wiki/Building%20and%20development#contributing) section If you are not part of the github community you could use other channels to make contact: * diaspora: [shr5rcp@pod.geraspora.de](https://pod.geraspora.de/people/94e9fef074180132e8774860008dbc6c), a Community-run, Distributed [Social-network](https://joindiaspora.com/) * email: shr5rcp@urszeidler.de * twitter: [@shr5rcp](https://twitter.com/shr5rcp) another way to support is via bitcoin: * **12Fgh416ogMiJsYnXyeYqaEGyEQHcVFSsJ** for main support like webspace and service * **1B9nJ2eXZJz1UKaVQTeMHriD1LBQsd4CvS** for arts and graphics * **18BbgDaurToEu6sou5dmBzNUNWAhTH25Tg** for localization and spell checking * **1JcJK1nLkAZkr4jid6N44L2bYQ7rxPSVYZ** tip the coder as we could use the bitcoin to pay people for the tedious spell-checking jobs, pay for webspace or services, or pay for some artwork. 3. Installation As the software is an Eclipse product, you need to unpack it and start the exe; you will need Java installed. * [Installation and starting](https://github.com/UrsZeidler/shr5rcp/wiki/Installation-and-starting) * [faq](https://github.com/UrsZeidler/shr5rcp/wiki/faq) 4. Technical stuff This project is based on the EMF modeling framework, the EMF Client Platform and EMF Forms. [development](https://github.com/UrsZeidler/shr5rcp/wiki/Building-and-development) License ------- The code is published under the terms of the [Eclipse Public License, version 1.0](http://www.eclipse.org/legal/epl-v10.html). <a href="http://with-eclipse.github.io/" target="_blank"> <img alt="with-Eclipse logo" src="http://with-eclipse.github.io/with-eclipse-0.jpg" /></a>
1
jwboardman/whirlpool
Example Microservices using Kafka 3.3.1 and WebSockets with Netty 5.0.0-alpha2 with both React/Typescript and HTML/JavaScript UIs
null
# TL;DR - clone repo - `./maclocal_run.sh` or `./linuxlocal_run.sh` or `wsllocal_run.sh` - when that finishes - `cd src/main/ui` - `yarn start` # Recommended Development Setup - Mac OSX, Ubuntu 20.04, or Windows WSL Ubuntu - JDK 8 - Maven 3 ## Notes - I'm using Java 8, Maven 3.8.6, Kafka 3.3.1, and Netty 5.0.0-alpha2. The script will auto-install (and remove!) Zk/Kafka version 3.0.0, so if you have an existing installation, save it or don't use the script! - localhost is used to bypass Kafka 3.3.1's desire to look up your external hostname - No database or security has been included because this is an example. - Use any username you like, and the password doesn't matter. Note that logging in multiple times with the same username is not allowed due to the simplistic "session" support with no true users or security present. It would not take a lot of work to add true sessions and allow multiple logins using the same username, with updates for a user sent to all the websockets that the user currently has open. Logging in with unique usernames on multiple browsers or tabs is not only allowed, it is coded for (users each have their own subscriptions) and encouraged. ## Prerequisites - MacOS make sure you have JAVA_HOME set - Linux use set-alternatives to make Java 8 the default - make sure your default Java is version 8. There was only so much time I had, so updating the code to a more recent Java version just wasn't going to happen ## Help installing Maven for Linux - `sudo apt install maven`. Note that this will also install OpenJDK 11. ## Help installing Java for Linux - `sudo apt install openjdk-8-jdk` - `sudo update-alternatives --config java`. This works nicely on WSL. For my Ubuntu VM I had to use - `sudo apt-get install galternatives` ## Install/Build/Start Zookeeper, Kafka, Services, and Server - For this script, do NOT click out of your terminal window until the WhirlpoolServer tab starts. Otherwise the script will act like it worked, but will actually fail. 
- Run `./maclocal_run.sh` (or `sudo ./linuxlocal_run.sh` or `sudo ./wsllocal_run.sh`) - `NOTE`: This will `REMOVE ANY EXISTING KAFKA INSTALLATION` located at /Applications/kafka and /Applications/kafka_2.13-3.0.0 (or /opt/kafka and /opt/kafka_2.13-3.0.0 for linux) along with `ALL` data in /tmp/zookeeper and /tmp/kafka-logs! - This will download (if it isn't already) version 3.0.0 of Kafka (with Scala 2.13) that includes Zookeeper, install them, and configure them, and starts them. It will then kick off the maven build that compiles and builds runnable deployed targets. Finally, it starts the services, and finally the server. ## Stop - This script shuts down everything and closes all except the left most tab. - Run `./maclocal_kill.sh` (or `sudo ./linuxlocal_kill.sh` or `sudo ./wsllocal_kill.sh`) - This will kill the services and server, then shut down Kafka, then shutdown Zookeeper. `NOTE`: It will also `REMOVE` the Kafka logs and Zookeeper data in /tmp/zookeeper and /tmp/kafka-logs!!! ## About the React UI Here's a screenshot: ![Whirlpool Screen Shot](https://github.com/jwboardman/whirlpool/blob/master/whirlpool_react_ui.png?raw=true "Whirlpool") To get the React site running: - from the root whirlpool directory, `cd src/main/ui` - if you don't have Node installed, use nvm to easily control Node versions - Mac - install Homebrew `ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` - install nvm `brew install nvm` - Linux - `curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash` - expect to see some errors, but they don't keep it from working - restart terminal - install Node `nvm install 16.17.0` - install yarn `npm install --global yarn` - install node modules `yarn install` - start dev server `yarn start` ## React UI usage - To add a stock symbol, click anywhere on the "Stocks +" header, enter the ticker symbol (i.e. "GOOG") and click the Add button. 
The reason you can click on the text or the + is for accessibility - the + by itself is really too small of a target. To remove it, click the trash can icon next to the stock. - To add a website to test whether it is up or down, click anywhere on the "Up Down +" header, type in the fully-qualified URL (i.e. http://facebook.com) and click the Add button. To remove it, click the trash can icon next to the URL. - To add a weather check, click anywhere on the "Weather +" header, type the zip code (i.e. "10001") and click the Add button. To remove it, click the trash can icon next to the weather. ## NOTE - Be patient after adding a new subscription. I set the timers at 30 seconds so I'm not hitting any sites too often. If you're really impatient, you can refresh the page after adding a subscription, which will end the WebSocket, use the cookie to re-login, then cause an out-of-band refresh command to be sent, which should get the data fairly quickly. - Subscriptions survive page refresh (with the same userid) because they are stored with each service in memory. A "real" system would of course use a database. Logging out cleans up all subscriptions for a user. A cookie is set upon login so reloading the page automatically logs you back in. The cookie is expired upon logout. ## About the old Vanilla JS UI The previous UI still works. Here's a screenshot: ![Whirlpool Screen Shot](https://github.com/jwboardman/whirlpool/blob/master/whirlpool.png?raw=true "Whirlpool") - To add a stock symbol, type it in (i.e. "GOOG") and click the A button under "Stock". To remove it, click the X. - To add a website to test whether it is up or down, type in the fully-qualified URL (i.e. http://facebook.com) and click the A button under "UpDown". To remove it, click the X. - To add a weather check, type the city,state in (i.e. "chicago,il") and click the A button under "City,State". To remove it, click the X. 
## Ports/Logs - http://localhost:3000/ - the React app - http://localhost:8080/ - the old vanilla JS UI
1
vitrivr/cineast
Cineast is a multi-feature content-based multimedia retrieval engine. It is capable of retrieving images, audio and video sequences as well as 3D models based on edge or color sketches, textual descriptions and example objects.
3d audio images java oas retrieval video
[![vitrivr - cineast](https://img.shields.io/static/v1?label=vitrivr&message=cineast&color=blue&logo=github)](https://github.com/vitrivr/cineast) [![GitHub release](https://img.shields.io/github/release/vitrivr/cineast?include_prereleases=&sort=semver&color=2ea44f)](https://github.com/vitrivr/cineast/releases/) [![License](https://img.shields.io/badge/License-MIT-blueviolet)](LICENSE) [![swagger-editor](https://img.shields.io/badge/open--API-in--editor-green.svg?style=flat&label=Open-Api%20(Release))](https://editor.swagger.io/?url=https://raw.githubusercontent.com/vitrivr/cineast/master/docs/openapi.json) [![swagger-editor](https://img.shields.io/badge/open--API-in--editor-green.svg?style=flat&label=Open-Api%20(Dev))](https://editor.swagger.io/?url=https://raw.githubusercontent.com/vitrivr/cineast/dev/docs/openapi.json) [![Java CI with Gradle](https://github.com/vitrivr/cineast/workflows/Java%20CI%20with%20Gradle/badge.svg)](https://github.com/vitrivr/cineast/actions?query=workflow:"Java+CI+with+Gradle") # Cineast Cineast is a multi-feature content-based multimedia retrieval engine. It is capable of retrieving images, audio- and video sequences as well as 3d models based on edge or color sketches, sketch-based motion queries and example objects. Cineast is written in Java and uses [CottontailDB](https://github.com/vitrivr/cottontaildb) as a storage backend. ## Building Cineast Cineast can be built using [Gradle](https://gradle.org/). It needs Java 17+. Building and running it is as easy as ``` git clone https://github.com/vitrivr/cineast.git cd cineast ./gradlew getExternalFiles cineast-runtime:shadowJar java -jar cineast-runtime/build/libs/cineast-runtime-x.x-all.jar cineast.json ``` For more setup information, consult our [Wiki](https://github.com/vitrivr/cineast/wiki) ## Docker image There is a Docker image available [on Docker Hub](https://hub.docker.com/r/vitrivr/cineast). 
You can run the CLI with: ``` docker run vitrivr/cineast cli cineast.json help ``` To change the configuration you can use a bind mount, e.g. to run the API server with custom configuration file cineast.json in the current directory: ``` docker run -v "$PWD"/cineast.json:/opt/cineast/cineast.json:ro,Z vitrivr/cineast api cineast.json ``` ## Generate OpenApi Specification If you need to rebuild the OpenApi Specification (OAS), there is a gradle task for this purpose: ``` ./gradlew -PcineastConfig=<path/to/your/config> generateOpenApiSpecs ``` You can omit `-PcineastConfig`, then the default config (`cineast.json`) is used. The generated OAS is stored at `docs/openapi.json` ## Prerequisites ### System dependencies * git * JDK 17 or higher ### 3D rendering For 3D rendering (required in order to support 3D models) you either need a video card or Mesa 3D. The JOGL library supports both. Rendering on Headless devices has been successfully tested with Xvfb. The following steps are required to enable 3D rendering support on a headless device without video card (Ubuntu 16.04.1 LTS) 1. Install Mesa 3D (should come pre-installed on Ubuntu). Check with `dpkg -l | grep mesa` 2. Install Xvfb: ``` $> sudo apt-get install xvfb ``` 3. Start a new screen: ``` $> sudo Xvfb :1 -ac -screen 0 1024x768x24 & ``` 4. Using the new screen, start Cineast: ``` $> DISPLAY=:1 java -jar cineast.jar -3d ``` The -3d option will perform a 3D test. If it succeeds, cineast should generate a PNG image depicting two coloured triangles on a black background. ### Versioning Cineast uses [semantic versioning](https://semver.org). See [the releases page](https://github.com/vitrivr/cineast/releases). ### Code Style Cineast primarily uses the [Google Java Styleguide](https://google.github.io/styleguide/javaguide.html). 
Please use the file supplied in the `docs/` folder. To automatically apply the styleguide in [IntelliJ IDEA](https://www.jetbrains.com/idea/), go to _File_ -> _Settings_ -> _Editor_ -> _Code Style_ -> _Java_ and import the supplied file via the gear icon. You can also use [Eclipse](https://www.eclipse.org/) for development and use Google's [styleguide for Eclipse](https://github.com/google/styleguide/blob/gh-pages/eclipse-java-google-style.xml).
1
ahmontero/wifi-direct-demo
This project is based on the original demo from Google. The original example shows how to send an image file from the client to the group owner. With this version, I try to show how to send data between two devices (whether or not they are the group owner).
null
null
1
MrPand-21/HRMS_Backend
This project is an example of human resources management system (HRMS) and can be used as backend
hrms human-resources-management-system java mernis postgres postgresql postgresql-database spring-boot springboot
<div align="center"> <a href="https://github.com/MrPand-21/HRMS_Backend/graphs/contributors"><img src="https://img.shields.io/github/contributors/MrPand-21/HRMS_Backend.svg?style=for-the-badge"></a> <a href="https://github.com/MrPand-21/HRMS_Backend/network/members"><img src="https://img.shields.io/github/forks/MrPand-21/HRMS_Backend.svg?style=for-the-badge"></a> <a href="https://github.com/MrPand-21/HRMS_Backend/stargazers"><img src="https://img.shields.io/github/stars/MrPand-21/HRMS_Backend.svg?style=for-the-badge"></a> <br/> <br/> <a href="https://github.com/MrPand-21/HRMS_Backend"> <img src="https://github.com/MrPand-21/MrPand-21/blob/main/HRMS.png" height="160" alt="HRMS"> </a> <h3>HRMS_Backend</h3> <p align="center"> <a href="#about-the-project">About</a> • <a href="#usage">How To Use</a> • <a href="#installation">Installation</a> • <a href="#credits">Credits</a> • <a href="#reference-documentation">Related</a> • <a href="https://github.com/MrPand-21/HRMS_Backend/issues">Report Bug</a> • <a href="https://github.com/MrPand-21/HRMS_Backend/issues">Request Feature</a> </p> <h4 align="center">N-Layer Architecture human resources management system project built with <a href="https://www.java.com/" target="_blank">Java</a>. </h4> </div> # About the Project [![Java](https://img.shields.io/badge/Java-ED8B00?style=for-the-badge&logo=java&logoColor=white)](https://www.java.com/) [![Spring](https://img.shields.io/badge/Spring-6DB33F?style=for-the-badge&logo=spring&logoColor=white)](https://spring.io/) [![PostgreSQL](https://img.shields.io/badge/PostgreSQL-316192?style=for-the-badge&logo=postgresql&logoColor=white)](https://www.postgresql.org/) [![forthebadge](http://forthebadge.com/images/badges/built-with-love.svg)](http://forthebadge.com) N-Layer Architecture human resources management system project. Using this project, you can create `admins`, `job seekers`, `jobs` and `employers`. 
Along with these, you can find the <a href="">frontend</a> of this project built with React in my profile, or you can use this as an API through the `swagger-ui` interface. ## Getting Started These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. ### Prerequisites The things you need before installing the software. * You should download <a href="https://www.postgresql.org/download/"> `PostgreSQL`</a> on your device. * You should download <a href="https://www.oracle.com/java/technologies/sdk-downloads.html">`Java Sdk`</a> on your device. * Lastly, don't forget to download an IDE and the plugins you need to write code. ### Installation A step-by-step guide that will tell you how to get the development environment up and running: 1. Clone the repository ```bash # Clone this repository $ git clone https://github.com/MrPand-21/HRMS_Backend.git # Go into the repository $ cd HRMS_Backend ``` Open this in a terminal or use your IDE! 2. Create the PostgreSQL connection (if you don't use PostgreSQL, execute the script in your own database) ```bash # open the PostgreSQL script $ nano `PostgreDatabaseSQL` ``` Here, copy the SQL script, or open it from GitHub and copy it from there. Then open PostgreSQL's `pgAdmin4`, and under Databases (Servers/Your Server/Databases/) create a database and give it a name (Default = HRMS). Then find the CREATE Script option (double-click the database). It will open a `Query Editor`; here, paste the SQL script we copied from the `PostgreDatabaseSQL` file and execute the script. Don't forget to refresh the database to see the tables. 3. Configure the connection. You can open application.properties in a terminal: ```bash #open in terminal $ nano src/main/resources/application.properties ``` or you can open it directly in your code editor. Then, configure your credentials in the application.properties file. 4. 
Create Mernis Connection (Optional) **This project was created in Turkey, which is why it uses the Mernis system to check whether a person is a Turkish citizen. If you don't want to use it, you can delete the `mernisService` folder and delete `mernisServiceAdapter.java` and related files.** To use the Mernis service, you can use the <a href="https://easywsdl.com/WsdlGenerator">WSDL Generator</a>. If you use IntelliJ IDEA, follow these instructions: 1. In the IntelliJ IDEA interface open Plugins > Marketplace, type EasyWSDL and install the plugin (here, you may need to restart the IDE). 2. Open the project again 3. Double-click the `mernisService` folder, press the `EasyWSDL - Update web service` option, and it will regenerate all files. 4. Close the `mernisService` folder, go to the `MernisServiceAdapter.java` file in core/utilities/adapters/adapters/concretes and change the imports, replacing the old imports with the new file imports, and there you are! If you use Eclipse, follow these instructions: 1. Double-click the `mernisService` folder and press Add > Connected Service. As the URL, paste https://tckimlik.nvi.gov.tr/service/kpspublic.asmx and press the Finish button. 2. Close the `mernisService` folder, go to the `MernisServiceAdapter.java` file in core/utilities/adapters/adapters/concretes and change the imports, replacing the old imports with the new file imports, and there you are! ## Usage - Deployment - Server Now, you can start the project in HRMSApplication.java. Since we use Spring Boot, you may want to go to the Spring Boot interface. * Live: http://localhost:8080/swagger-ui.html Once you open this link after running the application, you can use it. That's it! # Controllers In the swagger-ui panel, there are 10 different controllers which you can use. They are: 1. Activation Panel Controller 2. Basic Error Controller 3. City Controller 4. Employer Controller 5. Images Controller 6. Job Controller 7. Job Position Controller 8. Job Seekers Controller 9. Work Places Controller 10. 
Work Times Controller Please refer to the API documentation [![API Documentation](https://img.shields.io/badge/Swagger-85EA2D?style=for-the-badge&logo=swagger&logoColor=black)](https://github.com/ahmet-cetinkaya/hrms-project-backend/blob/master/APIDocumentation.md) ## Contributing Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. Right now there is only one branch, which is master. 1. Fork the Project 2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 4. Push to the Branch (`git push origin feature/AmazingFeature`) 5. Open a Pull Request ### Reference Documentation For further reference, please consider the following sections: * [Official Apache Maven documentation](https://maven.apache.org/guides/index.html) * [Spring Boot Maven Plugin Reference Guide](https://docs.spring.io/spring-boot/docs/2.4.5/maven-plugin/reference/html/) * [Create an OCI image](https://docs.spring.io/spring-boot/docs/2.4.5/maven-plugin/reference/html/#build-image) * [Spring Boot DevTools](https://docs.spring.io/spring-boot/docs/2.4.5/reference/htmlsingle/#using-boot-devtools) * [Spring Web](https://docs.spring.io/spring-boot/docs/2.4.5/reference/htmlsingle/#boot-features-developing-web-applications) * [Spring Data JPA](https://docs.spring.io/spring-boot/docs/2.4.5/reference/htmlsingle/#boot-features-jpa-and-spring-data) * [Spring Native Reference Guide](https://docs.spring.io/spring-native/docs/current/reference/htmlsingle/) * [Spring Configuration Processor](https://docs.spring.io/spring-boot/docs/2.4.5/reference/htmlsingle/#configuration-metadata-annotation-processor) ### Guides The following guides illustrate how to use some features concretely: * [Building a RESTful Web Service](https://spring.io/guides/gs/rest-service/) * [Serving Web Content with Spring 
MVC](https://spring.io/guides/gs/serving-web-content/) * [Building REST services with Spring](https://spring.io/guides/tutorials/bookmarks/) * [Accessing Data with JPA](https://spring.io/guides/gs/accessing-data-jpa/) * [Installing and Using EasyWSDL in IntelliJ IDEA from scratch](https://github.com/torukobyte/JavaCampHomework/tree/master/MernisServiceEkleme) ### Additional Links These additional references should also help you: * [Configure the Spring AOT Plugin](https://docs.spring.io/spring-native/docs/0.9.2/reference/htmlsingle/#spring-aot-maven) ### Credits Carabelli - Engin Demiroğ
1
opensourceBIM/IfcValidator
Checking IFC models on the quality of the data. Implemented parts of the Dutch Rijksgebouwendienst "BIM norm" as an example.
null
IfcValidator ========== This is a BIMserver plugin that checks Ifc2x3tc1 models for common requirements. > Note: This plugin needs BIMserver 1.5 The plugin generates [Extended Data](https://github.com/opensourceBIM/BIMserver/wiki/Extended-Data) according to the following [Validation Report](https://github.com/opensourceBIM/BIMserver-Repository/wiki/Validation-Report) format. For now, this plugin only implements checks that can be done with only the IFC file, so it will not read external files for cross-checking data. ## TODO - Implement for IFC4 as well ## Checks A list of checks that have been identified by asking people from the building industry and reading Dutch "norm" documents that seem computer checkable. | Check | Implemented | Part of | | ------------- | ------------- | ------ | | Exactly 1 IfcProject | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcProject has at least one representation where the TrueNorth attribute has been set | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcProject has a length unit set | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Length unit is either in Meters or Millimeters | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcProject has an area unit | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Area unit is in m2 | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcProject has a volume unit | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Volume unit is in m3 | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Exactly 1 IfcSite | 
![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm1.1 | | [Dutch]Kadastrale aanduidingen | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 2.2.7.2 | | IfcSite has lattitude | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcSite has longitude | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | IfcSite has elevation | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Has at least one IfcBuilding | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Has at least one IfcBuildingStorey | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 2.2.7.4 | | Building storeys naming according to RVB_BIM_Norm | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | [link]Building storeys with increasing numbers have increased center | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | All objects must be hierarchically structured to be in a building storey | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/accept.png) | RVB_BIM_Norm 1.1 | | Use a special "cube" to identify the origin of the model | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | | | [Check whether the right Ifc entitities have been used based on geometric ratios](#geometric-ratios) | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | | | No use of IfcProxy | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | | Every object should have some kind of identification | 
![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | | No 2 objects can be modelled the same, be at the same place or represent the same thing | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | | Clash detection | ![](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/exclamation.png) | ## Further explanations ### Geometric ratios This check uses the geometry of an object to check whether the right IFC type has been used. For example, IfcSlab objects are usually flat surfaces that are much wider than they are high. Columns are usually slender compared to their length etc... To be able to do these kinds of comparisons it would be very useful to have oriented bounding boxes available (those are not available in BIMserver at the moment). ## Example Screenshot from a validation report generated by this plugin, shown in BIMvie.ws ![alt text](https://github.com/opensourceBIM/IfcValidator/blob/master/docs/img/screenshot.png "screenshot")
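The geometric-ratio check described above can be sketched roughly as follows. This is a hedged illustration only: the 5x thresholds, the dimensions, and the `classify` helper are invented for this sketch and are not part of the plugin.

```shell
# Roughly classify an axis-aligned bounding box (width depth height,
# integers in a consistent unit such as mm) by its aspect ratios.
classify() {
  w=$1; d=$2; h=$3
  if [ "$w" -gt $((5 * h)) ] && [ "$d" -gt $((5 * h)) ]; then
    echo "slab-like"      # flat and wide: a plausible IfcSlab
  elif [ "$h" -gt $((5 * w)) ] && [ "$h" -gt $((5 * d)) ]; then
    echo "column-like"    # slender and tall: a plausible IfcColumn
  else
    echo "unclassified"
  fi
}
classify 4000 3000 200   # → slab-like
classify 300 300 2800    # → column-like
```

With oriented bounding boxes (not yet available in BIMserver), the same ratio tests would also work for rotated objects.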
0
GoogleCloudPlatform/cloud-build-samples
Code snippets used in Cloud Build documentation
samples
# code-examples This repository contains code examples used in the official Cloud Build [documentation](https://cloud.google.com/build/docs/). ## Contributing changes ### New samples - New samples are accepted at reviewer discretion. - New samples should come with a `README` either linking to documentation or explaining how to use the sample directly. If the `README` is excessively long, the instructions should be moved to a tutorial and linked from the `README`. - The configuration file (e.g. `cloudbuild.yaml`) should be able to run as part of automated testing. This means that in general, the file should contain substitutions rather than dummy values that the user has to manually replace. _Some exceptions to this rule may be allowed with sufficient justification_. - Sample should be readable. Use clear comments, descriptive variable names, and try to avoid excessively long lines. ### Bug Fixes - Bug fixes are welcome, either as pull requests or as GitHub issues. See [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute.
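To make the substitution guideline above concrete, here is a hedged sketch of a config that uses a user-defined substitution instead of a hard-coded dummy value. The `_GREETING` key and the `/tmp` path are invented for illustration and are not taken from this repository.

```shell
# Write a minimal, hypothetical cloudbuild.yaml that uses a user-defined
# substitution (_GREETING) rather than a value the user must hand-edit.
cat > /tmp/cloudbuild.yaml <<'EOF'
steps:
- name: 'ubuntu'
  args: ['echo', '${_GREETING}']
substitutions:
  _GREETING: 'hello from Cloud Build'
EOF
# Automated testing can then override the default without editing the file:
#   gcloud builds submit --config /tmp/cloudbuild.yaml \
#     --substitutions=_GREETING="ci run"
```

Because the substitution carries a default, the file runs unmodified in automated testing yet stays configurable at submit time.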
0
openjfx/samples
JavaFX samples to run with different options and build tools
documentation eclipse examples gradle ide intellij java java-11 java-12 javafx javafx-11 javafx-12 maven modular netbeans non-modular openjfx
OpenJFX Docs Samples === Description --- This repository contains a collection of HelloFX samples. Each one is a very simple HelloWorld sample created with JavaFX that can be run with different options and build tools. The related documentation for each sample can be found [here](https://openjfx.io/openjfx-docs/). For more information go to https://openjfx.io. Content --- * [HelloFX samples](#HelloFX-Samples) * [Command Line](#Command-Line) - [_Modular samples_](#CLI-Modular-Samples) - [_Non-modular samples_](#CLI-Non-Modular-Samples) * [IDEs](#IDEs) - [IntelliJ](#IntelliJ) [_Modular samples_](#IntelliJ-Modular-Samples) [_Non-modular samples_](#IntelliJ-Non-Modular-Samples) - [NetBeans](#NetBeans) [_Modular samples_](#NetBeans-Modular-Samples) [_Non-modular samples_](#NetBeans-Non-Modular-Samples) - [Eclipse](#Eclipse) [_Modular samples_](#Eclipse-Modular-Samples) [_Non-modular samples_](#Eclipse-Non-Modular-Samples) - [Visual Studio Code](#VSCode) [_Modular samples_](#VSCode-Modular-Samples) [_Non-modular samples_](#VSCode-Non-Modular-Samples) * [License](#License) * [Contributing](#Contributing) HelloFX samples<a name="HelloFX-Samples" /> --- Contains samples of a simple HelloFX class that can be run from command line, with or without build tools. Build Tool | Sample | Description ---------- | ------ | ----------- None | [HelloFX project](HelloFX/CLI) | Simple HelloFX class to run on command line. Maven | [HelloFX project](HelloFX/Maven) | Simple HelloFX class to run with Maven. Gradle | [HelloFX project](HelloFX/Gradle) | Simple HelloFX class to run with Gradle. Command Line<a name="Command-Line" /> --- Contains samples of modular and non-modular projects that can be run from command line, with or without build tools. ### _Modular samples_<a name="CLI-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- None | [HelloFX project](CommandLine/Modular/CLI) | Modular project to run on command line. 
Maven | [HelloFX project](CommandLine/Modular/Maven) | Modular project to run with Maven. Gradle | [HelloFX project](CommandLine/Modular/Gradle) | Modular project to run with Gradle. ### _Non-modular samples_<a name="CLI-Non-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- None | [HelloFX project](CommandLine/Non-modular/CLI) | Non-modular project to run on command line. Maven | [HelloFX project](CommandLine/Non-modular/Maven) | Non-modular project to run with Maven. Gradle | [HelloFX project](CommandLine/Non-modular/Gradle) | Non-modular project to run with Gradle. IDEs<a name="IDEs" /> --- Contains samples of modular and non-modular projects that can be run from an IDE, with or without build tools. ### IntelliJ<a name="IntelliJ" /> #### _Modular samples_<a name="IntelliJ-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/IntelliJ/Modular/Java) | Modular project to run from IntelliJ. Maven | [HelloFX project](IDE/IntelliJ/Modular/Maven) | Modular project to run from IntelliJ, with Maven. Gradle | [HelloFX project](IDE/IntelliJ/Modular/Gradle) | Modular project to run from IntelliJ, with Gradle. #### _Non-modular samples_<a name="IntelliJ-Non-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/IntelliJ/Non-Modular/Java) | Non-modular project to run from IntelliJ. Maven | [HelloFX project](IDE/IntelliJ/Non-Modular/Maven) | Non-modular project to run from IntelliJ, with Maven. Gradle | [HelloFX project](IDE/IntelliJ/Non-Modular/Gradle) | Non-modular project to run from IntelliJ, with Gradle. ### NetBeans<a name="NetBeans" /> #### _Modular samples_<a name="NetBeans-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/NetBeans/Modular/Java) | Modular project to run from NetBeans. 
Maven | [HelloFX project](IDE/NetBeans/Modular/Maven) | Modular project to run from NetBeans, with Maven. Gradle | [HelloFX project](IDE/NetBeans/Modular/Gradle) | Modular project to run from NetBeans, with Gradle. #### _Non-modular samples_<a name="NetBeans-Non-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/NetBeans/Non-Modular/Java) | Non-modular project to run from NetBeans. Maven | [HelloFX project](IDE/NetBeans/Non-Modular/Maven) | Non-modular project to run from NetBeans, with Maven. Gradle | [HelloFX project](IDE/NetBeans/Non-Modular/Gradle) | Non-modular project to run from NetBeans, with Gradle. ### Eclipse<a name="Eclipse" /> #### _Modular samples_<a name="Eclipse-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/Eclipse/Modular/Java) | Modular project to run from Eclipse. Maven | [HelloFX project](IDE/Eclipse/Modular/Maven) | Modular project to run from Eclipse, with Maven. Gradle | [HelloFX project](IDE/Eclipse/Modular/Gradle) | Modular project to run from Eclipse, with Gradle. #### _Non-modular samples_<a name="Eclipse-Non-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/Eclipse/Non-Modular/Java) | Non-modular project to run from Eclipse. Maven | [HelloFX project](IDE/Eclipse/Non-Modular/Maven) | Non-modular project to run from Eclipse, with Maven. Gradle | [HelloFX project](IDE/Eclipse/Non-Modular/Gradle) | Non-modular project to run from Eclipse, with Gradle. ### Visual Studio Code<a name="VSCode" /> #### _Modular samples_<a name="VSCode-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Maven | [HelloFX project](IDE/VSCode/Modular/Maven) | Modular project to run from Visual Studio Code, with Maven. 
Gradle | [HelloFX project](IDE/VSCode/Modular/Gradle) | Modular project to run from Visual Studio Code, with Gradle. #### _Non-modular samples_<a name="VSCode-Non-Modular-Samples" /> Build Tool | Sample | Description ---------- | ------ | ----------- Java | [HelloFX project](IDE/VSCode/Non-Modular/Java) | Non-modular project to run from Visual Studio Code. Maven | [HelloFX project](IDE/VSCode/Non-Modular/Maven) | Non-modular project to run from Visual Studio Code, with Maven. Gradle | [HelloFX project](IDE/VSCode/Non-Modular/Gradle) | Non-modular project to run from Visual Studio Code, with Gradle. License<a name="License" /> --- This project is licensed under [BSD 3-Clause](LICENSE). Contributing<a name="Contributing" /> --- This project welcomes all types of contributions and suggestions. We encourage you to report issues, create suggestions and submit pull requests. Contributions can be submitted via [pull requests](https://github.com/openjfx/samples/pulls/), providing you have signed the [Gluon Individual Contributor License Agreement (CLA)](https://docs.google.com/forms/d/16aoFTmzs8lZTfiyrEm8YgMqMYaGQl0J8wA0VJE2LCCY). Please go through the [list of issues](https://github.com/openjfx/samples/issues) to make sure that you are not duplicating an issue.
0
mybatis/jpetstore-6
A web application built on top of MyBatis 3, Spring 3 and Stripes
java samples
MyBatis JPetStore ================= [![Java CI](https://github.com/mybatis/jpetstore-6/actions/workflows/ci.yaml/badge.svg)](https://github.com/mybatis/jpetstore-6/actions/workflows/ci.yaml) [![Container Support](https://github.com/mybatis/jpetstore-6/actions/workflows/support.yaml/badge.svg)](https://github.com/mybatis/jpetstore-6/actions/workflows/support.yaml) [![Coverage Status](https://coveralls.io/repos/github/mybatis/jpetstore-6/badge.svg?branch=master)](https://coveralls.io/github/mybatis/jpetstore-6?branch=master) [![License](https://img.shields.io/:license-apache-brightgreen.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) ![mybatis-jpetstore](https://mybatis.org/images/mybatis-logo.png) JPetStore 6 is a full web application built on top of MyBatis 3, Spring 5 and Stripes. Essentials ---------- * [See the docs](http://www.mybatis.org/jpetstore-6) ## Other versions that you may want to know about - JPetstore on top of Spring, Spring MVC, MyBatis 3, and Spring Security https://github.com/making/spring-jpetstore - JPetstore with Vaadin and Spring Boot with Java Config https://github.com/igor-baiborodine/jpetstore-6-vaadin-spring-boot - JPetstore on MyBatis Spring Boot Starter https://github.com/kazuki43zoo/mybatis-spring-boot-jpetstore ## Run on Application Server Running JPetStore sample under Tomcat (using the [cargo-maven2-plugin](https://codehaus-cargo.github.io/cargo/Maven2+plugin.html)). 
- Clone this repository ``` $ git clone https://github.com/mybatis/jpetstore-6.git ``` - Build the war file ``` $ cd jpetstore-6 $ ./mvnw clean package ``` - Start up the Tomcat server and deploy the web application ``` $ ./mvnw cargo:run -P tomcat90 ``` > Note: > > We provide Maven profiles per application server as follows: > > | Profile | Description | > | -------------- | ----------- | > | tomcat90 | Running under Tomcat 9.0 | > | tomcat85 | Running under Tomcat 8.5 | > | tomee80 | Running under TomEE 8.0 (Java EE 8) | > | tomee71 | Running under TomEE 7.1 (Java EE 7) | > | wildfly26 | Running under WildFly 26 (Java EE 8) | > | wildfly13 | Running under WildFly 13 (Java EE 7) | > | liberty-ee8 | Running under WebSphere Liberty (Java EE 8) | > | liberty-ee7 | Running under WebSphere Liberty (Java EE 7) | > | jetty | Running under Jetty 9 | > | glassfish5 | Running under GlassFish 5 (Java EE 8) | > | glassfish4 | Running under GlassFish 4 (Java EE 7) | > | resin | Running under Resin 4 | - Run the application in a browser http://localhost:8080/jpetstore/ - Press Ctrl-C to stop the server. ## Run on Docker ``` docker build . -t jpetstore docker run -p 8080:8080 jpetstore ``` or with Docker Compose: ``` docker compose up -d ``` ## Try integration tests Perform integration tests for screen transitions. ``` $ ./mvnw clean verify -P tomcat90 ```
0
android/testing-samples
A collection of samples demonstrating different frameworks and techniques for automated testing
null
Android testing samples =================================== A collection of samples demonstrating different frameworks and techniques for automated testing. ### Espresso Samples **[BasicSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/BasicSample)** - Basic Espresso sample **[CustomMatcherSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/CustomMatcherSample)** - Shows how to extend Espresso to match the *hint* property of an EditText **[DataAdapterSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/DataAdapterSample)** - Showcases the `onData()` entry point for Espresso, for lists and AdapterViews **[FragmentScenarioSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/FragmentScenarioSample)** - Basic usage of `FragmentScenario` with Espresso. **[IdlingResourceSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/IdlingResourceSample)** - Synchronization with background jobs **[IntentsBasicSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/IntentsBasicSample)** - Basic usage of `intended()` and `intending()` **[IntentsAdvancedSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/IntentsAdvancedSample)** - Simulates a user fetching a bitmap using the camera **[MultiWindowSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/MultiWindowSample)** - Shows how to point Espresso to different windows **[RecyclerViewSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/RecyclerViewSample)** - RecyclerView actions for Espresso **[ScreenshotSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/ScreenshotSample)** - Screenshot capturing and saving using Espresso and androidx.test.core APIs **[WebBasicSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/WebBasicSample)** - Use Espresso-web 
to interact with WebViews **[BasicSampleBundled](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/BasicSampleBundled)** - Basic sample for Eclipse and other IDEs **[MultiProcessSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/MultiProcessSample)** - Showcases how to use multiprocess Espresso. ### UiAutomator Sample **[BasicSample](https://github.com/googlesamples/android-testing/tree/main/ui/uiautomator/BasicSample)** - Basic UI Automator sample ### AndroidJUnitRunner Sample **[AndroidJunitRunnerSample](https://github.com/googlesamples/android-testing/tree/main/runner/AndroidJunitRunnerSample)** - Showcases test annotations, parameterized tests and test suite creation ### JUnit4 Rules Sample All previous samples use `ActivityTestRule` or `IntentsTestRule`, but there's one specific to `ServiceTestRule`: **[BasicSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/BasicSample)** - Simple usage of `ActivityTestRule` **[IntentsBasicSample](https://github.com/googlesamples/android-testing/blob/main/ui/espresso/IntentsBasicSample)** - Simple usage of `IntentsTestRule` **[ServiceTestRuleSample](https://github.com/googlesamples/android-testing/tree/main/integration/ServiceTestRuleSample)** - Simple usage of `ServiceTestRule` Prerequisites -------------- - Android SDK v28 - Android Build Tools v28.0.3 Getting Started --------------- These samples use the Gradle build system. To build a project, enter the project directory and use the `./gradlew assemble` command or use "Import Project" in Android Studio. - Use `./gradlew connectedAndroidTest` to run the tests on a connected emulator or device. - Use `./gradlew test` to run the unit tests on your local host. There is a top-level `build.gradle` file if you want to build and test all samples from the root directory. This is mostly helpful for building on a CI (Continuous Integration) server. 
AndroidX Test Library --------------- Many of these samples use the AndroidX Test Library. Visit the [Testing site on developer.android.com](https://developer.android.com/training/testing) for more information. Experimental Bazel Support -------------------------- [![Build status](https://badge.buildkite.com/18dda320b265e9a8f20cb6141b1e80ca58fb62bdb443e527be.svg)](https://buildkite.com/bazel/android-testing) Some of these samples can be tested with [Bazel](https://bazel.build) on Linux. These samples contain a `BUILD.bazel` file, which is similar to a `build.gradle` file. The external dependencies are defined in the top-level `WORKSPACE` file. This is an __experimental__ feature. To run the tests, please install the latest version of Bazel (0.12.0 or later) by following the [instructions on the Bazel website](https://docs.bazel.build/versions/master/install-ubuntu.html). ### Bazel commands ``` # Clone the repository if you haven't. $ git clone https://github.com/google/android-testing $ cd android-testing # Edit the path to your local SDK at the top of the WORKSPACE file $ $EDITOR WORKSPACE # Test everything in a headless mode (no graphical display) $ bazel test //... --config=headless # Test a single test, e.g. 
ui/espresso/BasicSample/BUILD.bazel $ bazel test //ui/uiautomator/BasicSample:BasicSampleInstrumentationTest_21_x86 --config=headless # Query for all android_instrumentation_test targets $ bazel query 'kind(android_instrumentation_test, //...)' //ui/uiautomator/BasicSample:BasicSampleInstrumentationTest_23_x86 //ui/uiautomator/BasicSample:BasicSampleInstrumentationTest_22_x86 //ui/uiautomator/BasicSample:BasicSampleInstrumentationTest_21_x86 //ui/uiautomator/BasicSample:BasicSampleInstrumentationTest_19_x86 //ui/espresso/RecyclerViewSample:RecyclerViewSampleInstrumentationTest_23_x86 //ui/espresso/RecyclerViewSample:RecyclerViewSampleInstrumentationTest_22_x86 //ui/espresso/RecyclerViewSample:RecyclerViewSampleInstrumentationTest_21_x86 //ui/espresso/RecyclerViewSample:RecyclerViewSampleInstrumentationTest_19_x86 //ui/espresso/MultiWindowSample:MultiWindowSampleInstrumentationTest_23_x86 //ui/espresso/MultiWindowSample:MultiWindowSampleInstrumentationTest_22_x86 ... # Test everything with GUI enabled $ bazel test //... --config=gui # Test with a local device or emulator. Ensure that `adb devices` lists the device. $ bazel test //... --config=local_device # If multiple devices are connected, add --device_serial_number=$identifier where $identifier is the name of the device in `adb devices` $ bazel test //... --config=local_device --test_arg=--device_serial_number=$identifier ``` For more information, check out the documentation for [Android Instrumentation Tests in Bazel](https://docs.bazel.build/versions/master/android-instrumentation-test.html). You may also want to check out [Building an Android App with Bazel](https://docs.bazel.build/versions/master/tutorial/android-app.html), and the list of [Android Rules](https://docs.bazel.build/versions/master/be/android.html) in the Bazel Build Encyclopedia. Known issues: * Building of APKs is supported on Linux, Mac and Windows, but testing is only supported on Linux. 
* `android_instrumentation_test.target_device` attribute still needs to be specified even if `--config=local_device` is used. * If using a local device or emulator, the APKs are not uninstalled automatically after the test. Use this command to remove the packages: * `adb shell pm list packages com.example.android.testing | cut -d ':' -f 2 | tr -d '\r' | xargs -L1 -t adb uninstall` Please file Bazel related issues against the [Bazel](https://github.com/bazelbuild/bazel) repository instead of this repository. Support ------- - Google+ Community: https://plus.google.com/communities/105153134372062985968 - Stack Overflow: http://stackoverflow.com/questions/tagged/android-testing If you've found an error in this sample, please file an issue: https://github.com/googlesamples/android-testing Patches are encouraged, and may be submitted by forking this project and submitting a pull request through GitHub. Please see CONTRIBUTING.md for more details. License ------- Copyright 2015 The Android Open Source Project, Inc. Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
ihrupin/samples
samples
null
### This repo contains sample source code for my blog [WWW.HRUPIN.COM](http://www.hrupin.com)
0
netgloo/spring-boot-samples
Spring Boot samples by Netgloo
java samples spring-boot
null
0
youtube/api-samples
Code samples for YouTube APIs, including the YouTube Data API, YouTube Analytics API, and YouTube Live Streaming API. The repo contains language-specific directories that contain the samples.
null
api-samples =========== Code samples for YouTube APIs, including the YouTube Data API, YouTube Analytics API, and YouTube Live Streaming API. The repo contains language-specific directories that contain the samples.
0
spring-projects/spring-integration-samples
You are looking for examples, code snippets, sample applications for Spring Integration? This is the place.
null
Spring Integration Samples ========================== [![Revved up by Develocity](https://img.shields.io/badge/Revved%20up%20by-Develocity-06A0CE?logo=Gradle&labelColor=02303A)](https://ge.spring.io/scans?search.rootProjectNames=spring-integration-samples) Welcome to the **Spring Integration Samples** repository, which provides **50+ samples** to help you learn [Spring Integration][]. To simplify your experience, the *Spring Integration* samples are split into 5 distinct categories: * Basic * Intermediate * Advanced * Applications * DSL Inside of each category you'll find a **README.md** file, which will contain a more detailed description of that category. Each sample also comes with its own **README.md** file explaining further details, e.g. how to run the respective sample. For additional samples, please also check out the [Spring Integration Extensions][] project, as it also provides numerous samples. *Happy Integration!* # Note This (main) branch requires Spring Integration 6.0 or above. For samples running against earlier versions of Spring Integration, use the __5.5.x__ and other branches. The project now requires Java 17 or above. To open the project in the IDE, use its `import` option against a `build.gradle` file from the root project directory. ## Related GitHub projects * [Spring Integration][] * [Spring Integration Extensions][] ## Community Sample Projects * [Xavier Padró][] # Categories Below is a short description of each category. ## DSL This directory holds demos/samples for Spring Integration 4.0 Java Configuration as well as the Java DSL Extension. ## Basic This is a good place to get started. The samples here are technically motivated and demonstrate the bare minimum with regard to configuration and code to help you get introduced to the basic concepts, API and configuration of Spring Integration. 
For example, if you are looking for an answer on how to wire a **Service Activator** to a **Channel** or how to apply a **Gateway** to your message exchange or how to get started with using the **MAIL** or **XML** module, this would be the right place to find a relevant sample. The bottom line is that this is a good starting point. * **amqp** - Demonstrates the functionality of the various **AMQP Adapters** * **barrier** - Shows how to suspend a thread until some asynchronous event occurs * **control-bus** - Demonstrates the functionality of the **Control Bus** * **enricher** - This sample demonstrates how the Enricher components can be used * **feed** - Demonstrates the functionality of the **Feed Adapter** (RSS/ATOM) * **file** - Demonstrates aspects of the various File Adapters (e.g. **File Inbound/Outbound Channel Adapters**, file **polling**) * **ftp** - Demonstrates the **FTP support** available with Spring Integration * **helloworld** - Very simple starting example illustrating a basic message flow (using **Channel**, **ServiceActivator**, **QueueChannel**) * **http** - Demonstrates request/reply communication when using a pair of **HTTP Inbound/Outbound gateways** * **jdbc** - Illustrates the usage of the Jdbc Adapters, including object persistence and retrieval * **jms** - Demonstrates **JMS** support available with Spring Integration * **jmx** - Demonstrates **JMX** support using a **JMX Attribute Polling Channel** and **JMX Operation Invoking Channel Adapter** * **jpa** - Shows the usage of the JPA Components * **mail** - Example showing **IMAP** and **POP3** support * **mqtt** - Demonstrates the functionality of inbound and outbound **MQTT Adapters** * **mongodb** - Shows how to persist a Message payload to a **MongoDb** document store and how to read documents from **MongoDb** * **oddeven** - Example combining the functionality of **Inbound Channel Adapter**, **Filter**, **Router** and **Poller** * **quote** - Example demoing core EIP support using 
**Channel Adapter (Inbound and Stdout)**, **Poller** with Interval Triggers, **Service Activator** * **sftp** - Demonstrating SFTP support using **SFTP Inbound / Outbound Channel Adapters** * **tcp-amqp** - Demonstrates basic functionality of bridging the **Spring Integration TCP Adapters** with **Spring Integration AMQP Adapters** * **tcp-broadcast** - Demonstrates broadcasting a message to multiple connected TCP clients. * **tcp-client-server** - Demonstrates socket communication using **TcpOutboundGateway**, **TcpInboundGateway** and also uses a **Gateway** and a **Service Activator** * **tcp-with-headers** - Demonstrates sending headers along with the payload over TCP using JSON. * **testing-examples** - A series of test cases that show techniques to **test** Spring Integration applications. * **twitter** - Illustrates Twitter support using the **Twitter Inbound Channel Adapter**, **Twitter Inbound Search Channel Adapter**, **Twitter Outbound Channel Adapter** * **ws-inbound-gateway** - Example showing basic functionality of the **Web Service Gateway** * **ws-outbound-gateway** - Shows outbound web services support using the **Web Service Outbound Gateway**, **Content Enricher**, Composed Message Processor (**Chain**) * **xml** - Example demonstrates various aspects of the **Xml** support using an **XPath Splitter**, **XPath Router**, **XSLT Transformer** as well as **XPath Expression** support * **xmpp** - Show the support for [**XMPP**](https://en.wikipedia.org/wiki/Extensible_Messaging_and_Presence_Protocol) (formerly known as Jabber) using e.g. GoogleTalk ## Intermediate This category targets developers who are already more familiar with the Spring Integration framework (past getting started), but need some more guidance while resolving more advanced technical problems that you have to deal with when switching to a Messaging architecture. 
For example, if you are looking for an answer on how to handle errors in various scenarios, or how to properly configure an **Aggregator** for the situations where some messages might not ever arrive for aggregation, or any other issue that goes beyond a basic understanding and configuration of a particular component to address "what else you can do?" types of problems, this would be the right place to find relevant examples. * **async-gateway** - Usage example of an asynchronous **Gateway** * **dynamic-poller** - Example shows usage of a **Poller** with a custom **Trigger** to change polling periods at runtime * **errorhandling** - Demonstrates basic **Error Handling** capabilities of Spring Integration * **file-processing** - Sample demonstrates how to wire a message flow to process files either sequentially (maintain the order) or concurrently (no order). * **mail-attachments** - Demonstrates the processing of email attachments * **monitoring** The project used in the *[Spring Integration Management and Monitoring Webinar](https://www.springsource.org/node/3598)* Also available on the *[SpringSourceDev YouTube Channel](https://www.youtube.com/SpringSourceDev)* * **multipart-http** - Demonstrates the sending of HTTP multipart requests using Spring's **RestTemplate** and a Spring Integration **Http Outbound Gateway** * **rest-http** - This sample demonstrates how to send an HTTP request to a Spring Integration's HTTP service while utilizing Spring Integration's new HTTP Path usage. This sample also uses Spring Security for HTTP Basic authentication. With HTTP Path facility, the client program can send requests with URL Variables. 
* **retry-and-more** Provides samples showing the application of MessageHandler Advice Chains to endpoints - retry, circuit breaker, expression evaluating * **splitter-aggregator-reaper** A demonstration of implementing the Splitter and Aggregator *Enterprise Integration Patterns* (EIP) together. This sample also provides a concrete example of a [message store reaper][] in action. * **stored-procedures-derby** Provides an example of the stored procedure Outbound Gateway using *[Apache Derby](https://db.apache.org/derby/)* * **stored-procedures-ms** Provides an example of the stored procedure Outbound Gateway using *Microsoft SQL Server* * **stored-procedures-oracle** Provides an example of the stored procedure Outbound Gateway using *ORACLE XE* * **stored-procedures-postgresql** Provides an example of the stored procedure Outbound Gateway using *[PostgreSQL](https://www.postgresql.org/)* * **tcp-async-bi-directional** - Demonstrates the use of *Collaborating Channel Adapters* for arbitrary async messaging (not request/reply) between peers. * **tcp-client-server-multiplex** - Demonstrates the use of *Collaborating Channel Adapters* with multiple in-flight requests/responses over a single connection. * **travel** - More sophisticated example showing the retrieval of weather (SOAP Web Service) and traffic (HTTP Service) reports using real services * **tx-synch** Provides a sample demonstrating the use of transaction synchronization, renaming an input file to a different filename, depending on whether the transaction commits, or rolls back. ## Advanced This category targets advanced developers who are quite familiar with Spring Integration but are looking to address a specific custom need by extending the Spring Integration public API. 
For example, if you are looking for samples showing how to implement a custom **Channel** or **Consumer** (event-based or polling-based), or you are trying to figure out what is the most appropriate way to implement a custom **BeanParser** on top of the Spring Integration BeanParser hierarchy when implementing a custom namespace, this would be the right place to look. Here you can also find samples that will help you with adapter development. Spring Integration comes with an extensive library of adapters that allow you to connect remote systems with the Spring Integration messaging framework. However you might have a need to integrate with a system for which the core framework does not provide an adapter, so you have to implement your own. This category would include samples showing you how to implement various adapters. * **advanced-testing-examples** - Example test cases that show advanced techniques to test Spring Integration applications * **dynamic-ftp** - Demonstrates one technique for sending files to dynamic destinations. * **dynamic-tcp-client** - Demonstrates a technique for dynamically creating TCP clients. ## Applications This category targets developers and architects who have a good understanding of Message-Driven architecture and Enterprise Integration Patterns, and have an above average understanding of Spring and Spring integration and who are looking for samples that address a particular business problem. In other words, the emphasis of samples in this category is '**business use cases**' and how they can be solved via a Messaging architecture and Spring Integration in particular. For example, if you are interested to see how a Loan Broker process or Travel Agent process could be implemented and automated via Spring Integration, this would be the right place to find these types of samples. 
* **cafe** - Emulates a simple operation of a coffee shop combining various Spring Integration adapters (Including **Router** and **Splitter**) see [Appendix A of the reference documentation](https://docs.spring.io/spring-integration/docs/current/reference/html/#samples) for more details. Implementations are provided for: - AMQP - JMS - In memory channels * **cafe-scripted** - Scripted implementation of the classic **cafe** sample application. Supports **JavaScript**, **Groovy**, **Ruby**, and **Python**. * **loan-broker** - Simulates a simple banking application (Uses **Gateway**, **Chain**, **Header Enricher**, **Recipient List Router**, **Aggregator**) see [Appendix A of the reference documentation](https://docs.spring.io/spring-integration/docs/current/reference/html/#samples) for more details * **loanshark** This extension to the loan broker sample shows how to exchange messages between Spring Integration applications (and other technologies) using **UDP**. * **file-split-ftp** - Reads a file; splits into 3 based on contents; sends files over ftp; sends email with results. # Contributing See the [Spring Integration Contributor Guidelines](https://github.com/spring-projects/spring-integration/blob/master/CONTRIBUTING.adoc) for information about how to contribute to this repository. # Resources For more information, please visit the Spring Integration website at: [https://projects.spring.io/spring-integration/](https://projects.spring.io/spring-integration/) [Spring Integration]: https://github.com/spring-projects/spring-integration [Spring Integration Extensions]: https://github.com/spring-projects/spring-integration-extensions [message store reaper]: https://docs.spring.io/spring-integration/api/org/springframework/integration/store/MessageGroupStoreReaper.html [Xavier Padró]: https://github.com/xpadro/spring-integration
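The Basic category above describes wiring a **Service Activator** to a **Channel**. Conceptually, a message channel is a queue and a service activator is a consumer that invokes a service for each message; here is a plain-Java sketch of just that pattern (it deliberately uses no Spring Integration APIs, and the class name is hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A message channel is conceptually a queue; a service activator is a
// consumer that invokes a service method for each message on that channel.
public class HelloWorldFlowSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inputChannel = new LinkedBlockingQueue<>();

        inputChannel.put("World"); // a producer sends a message to the channel

        String payload = inputChannel.take();   // the activator receives it...
        System.out.println("Hello " + payload); // ...and invokes the "service"
    }
}
```

The actual **helloworld** sample wires the same roles declaratively through Spring Integration configuration instead of hand-written queue handling.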
1
GoogleCloudPlatform/java-docs-samples
Java and Kotlin Code samples used on cloud.google.com
appengine auth automl cdn java kotlin samples translate video vision
# Google Cloud Platform Java Samples [![Build Status][java-11-badge]][java-11-link] [![Build Status][java-17-badge]][java-17-link] [![Build Status][java-21-badge]][java-21-link] <a href="https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/GoogleCloudPlatform/java-docs-samples&page=editor&open_in_editor=README.md"> <img alt="Open in Cloud Shell" src ="http://gstatic.com/cloudssh/images/open-btn.png"></a> This repository holds sample code written in Java that demonstrates the [Google Cloud Platform](https://cloud.google.com/docs/). Some samples have accompanying guides on <cloud.google.com>. See respective README files for details. ## Google Cloud Samples To browse ready to use code samples check [Google Cloud Samples](https://cloud.google.com/docs/samples?l=java). ## Set Up 1. [Set up your Java Development Environment](https://cloud.google.com/java/docs/setup) 1. Clone this repository: git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git 1. Obtain authentication credentials. Create local credentials by running the following command and following the oauth2 flow (read more about the command [here][auth_command]): gcloud auth application-default login Or manually set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to a service account key JSON file path. Learn more at [Setting Up Authentication for Server to Server Production Applications][ADC]. *Note:* Application Default Credentials is able to implicitly find the credentials as long as the application is running on Compute Engine, Kubernetes Engine, App Engine, or Cloud Functions. ## Contributing * See the [Contributing Guide](CONTRIBUTING.md) ## Licensing * See [LICENSE](LICENSE) ## Supported Java runtimes Every submitted change has to pass all checks that run on the testing environments with Java 11 and Java 17 runtimes before merging the change to the main branch. 
We run periodic checks on the environments with Java 8 and Java 21 runtimes but we don't enforce passing these tests at the moment. Because Java 8 is a [supported Java runtime][supported_runtimes] in Google Cloud, please configure your code sample to build with Java 8. In exceptional cases, configure your code sample to build with Java 11. [supported_runtimes]: https://cloud.google.com/java/docs/supported-java-versions ## Source Code Headers Every file containing source code must include copyright and license information. This includes any JS/CSS files that you might be serving out to browsers. (This is to help well-intentioned people avoid accidental copying that doesn't comply with the license.) Apache header: Copyright 2022 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
[ADC]: https://developers.google.com/identity/protocols/application-default-credentials [auth_command]: https://cloud.google.com/sdk/gcloud/reference/beta/auth/application-default/login [java-11-badge]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-11.svg [java-11-link]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-11.html [java-17-badge]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-17.svg [java-17-link]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-17.html [java-21-badge]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-21.svg [java-21-link]: https://storage.googleapis.com/cloud-devrel-kokoro-resources/java/badges/java-docs-samples-21.html Java is a registered trademark of Oracle and/or its affiliates.
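The Java 8 build guidance above typically translates into compiler settings in a sample's `pom.xml`; a minimal sketch (these are standard Maven compiler properties, not values taken from this repository):

```xml
<!-- Pin the sample to a Java 8 source/target level. -->
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```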
0
baomidou/mybatis-plus-samples
MyBatis-Plus Samples
null
# MyBatis-Plus Samples [Contribute code](https://github.com/baomidou/mybatis-plus-samples) [Enterprise edition: Mybatis-Mate advanced features](https://gitee.com/baomidou/mybatis-mate-examples) This project contains the official MyBatis-Plus samples, structured as follows: - mybatis-plus-sample-quickstart: quick start sample - mybatis-plus-sample-quickstart-springmvc: quick start sample (Spring MVC version) - mybatis-plus-sample-reduce-springmvc: sample that drops the default mapper class (Spring MVC version) - mybatis-plus-sample-generator: code generator sample - mybatis-plus-sample-crud: complete CRUD sample - mybatis-plus-sample-ddl-mysql: automatic SQL script maintenance sample - mybatis-plus-sample-wrapper: condition wrapper (query builder) sample - mybatis-plus-sample-pagination: pagination sample - mybatis-plus-sample-active-record: ActiveRecord sample - mybatis-plus-sample-sequence: Sequence sample - mybatis-plus-sample-execution-analysis: SQL execution analysis sample - mybatis-plus-sample-performance-analysis: performance analysis sample - mybatis-plus-sample-optimistic-locker: optimistic locking sample - mybatis-plus-sample-sql-injector: custom global operation (SQL injector) sample - mybatis-plus-sample-auto-fill-metainfo: common field auto-fill sample - mybatis-plus-sample-logic-delete: logical delete sample - mybatis-plus-sample-multi-datasource: multiple data source sample - mybatis-plus-sample-enum: enum injection sample - mybatis-plus-sample-dynamic-tablename: dynamic table name sample - mybatis-plus-sample-tenant: multi-tenancy sample - mybatis-plus-sample-typehandler: type handler sample, e.g. converting a JSON column to an object - mybatis-plus-sample-deluxe: comprehensive sample (combines pagination, logical delete, custom global operations and most other commonly used features in one complete example) - mybatis-plus-sample-assembly: separate packaging sample - mybatis-plus-sample-resultmap: resultMap usage sample - mybatis-plus-sample-id-generator: custom ID generator sample - mybatis-plus-sample-id-string: string ID generation sample - mybatis-plus-sample-no-spring: sample without Spring - mybatis-plus-sample-pagehelper: pagination with PageHelper - mybatis-plus-sample-association: table join query sample - mybatis-plus-sample-jonb: PostgreSQL jsonb column sample ![WeChat wx153666](https://images.gitee.com/uploads/images/2021/0903/235825_2d017339_12260.jpeg) - add the author on WeChat to join the user group
0
apache/incubator-seata-samples
Apache Seata(incubating) Samples for Java
null
# Samples code specification ## Directory Structure The first and second levels are directories. Top level: seata-samples Second level: at-sample, tcc-sample, saga-sample, xa-sample Third level: the concrete sample, named according to the conventions below. ## Naming Name samples after the frameworks they combine: spring-nacos-seata, springboot-nacos-zk-seata ... ## Dependencies pom: The dependencies of each sample should be independent and should not rely on the dependencies of the parent pom of seata-samples. # Samples transaction model https://seata.apache.org/docs/user/quickstart/ ## Start sequence 1. account 2. storage 3. order 4. business
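The three-level layout described above can be sketched as a tree (the concrete sample names are illustrative):

```
seata-samples/                  # top level
├── at-sample/                  # second level: one directory per transaction mode
│   └── spring-nacos-seata/     # third level: sample named after the frameworks combined
├── tcc-sample/
├── saga-sample/
└── xa-sample/
```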
0
javaee-samples/javaee7-samples
Java EE 7 Samples
cdi glassfish jaspic java javaee javaee7 jaxrs jboss jsf junit liberty payara servlet tomcat tomee
# Java EE 7 Samples # This workspace consists of Java EE 7 Samples and unit tests. They are categorized in different directories, one for each Technology/JSR. Some samples/tests have documentation; otherwise, read the code. The [Java EE 7 Essentials](http://www.amazon.com/Java-EE-Essentials-Arun-Gupta/dp/1449370179/) book refers to most of these samples and provides an explanation. Feel free to add docs and send a pull request. ## How to run? ## Samples are tested on Payara, GlassFish, Wildfly and more using the Arquillian ecosystem. In this video, @radcortez gives brief instructions on how to clone, build, import and run the samples on your local machine: https://www.youtube.com/watch?v=BB4b-Yz9cF0 Only one container profile can be active at a given time, otherwise there will be dependency conflicts. There are 16 available container profiles, for 6 different servers: * Payara and GlassFish * ``payara-ci-managed`` This profile will install a Payara server and start up the server per sample. Useful for CI servers. The Payara version that's used can be set via the ``payara.version`` property. This is the default profile and does not have to be specified explicitly. * ``payara-embedded`` This profile uses the Payara embedded server and runs in the same JVM as the TestClass. Useful for development, but has the downside of server startup per sample. * ``payara-remote`` This profile requires you to start up a Payara server outside of the build. Each sample will then reuse this instance to run the tests. Useful for development to avoid the server start up cost per sample. For some tests, this profile supports setting the location where Payara is installed via the ``glassfishRemote_gfHome`` system property. E.g. ``-DglassfishRemote_gfHome=/opt/payara171`` This is used for sending asadmin commands to create container resources, such as users in an identity store. 
* ``glassfish-embedded`` This profile uses the GlassFish embedded server and runs in the same JVM as the TestClass. Useful for development, but has the downside of server startup per sample. * ``glassfish-remote`` This profile requires you to start up a GlassFish server outside of the build. Each sample will then reuse this instance to run the tests. Useful for development to avoid the server start up cost per sample. This profile supports for some tests to set the location where GlassFish is installed via the ``glassfishRemote_gfHome`` system property. E.g. ``-DglassfishRemote_gfHome=/opt/glassfish41`` This is used for sending asadmin commands to create container resources, such as users in an identity store. * WildFly * ``wildfly-ci-managed`` This profile will install a Wildfly server and start up the server per sample. Useful for CI servers. The WildFly version that's used can be set via the ``wildfly.version`` property. * ``wildfly-embedded`` This profile is almost identical to wildfly-ci-managed. It will install the same Wildfly server and start up that server per sample again, but instead uses the Arquillian embedded connector to run it in the same JVM. Useful for CI servers. The WildFly version that's used can be set via the ``wildfly.version`` property. * ``wildfly-remote`` This profile requires you to start up a Wildfly server outside of the build. Each sample will then reuse this instance to run the tests. Useful for development to avoid the server start up cost per sample. * ``wildfly-swarm`` This profile uses WildFly Swarm, which allows building uberjars that contain just enough of the WildFly application server. Here, the parts of WildFly that are included are selected based on inspecting the application and looking for the Java EE APIs that are actually used. The WildFly Swarm version that's used can be set via the ``wildfly.swarm.version`` property. 
* TomEE * ``tomee-ci-managed`` This profile will install a TomEE server and start up that server per sample. Useful for CI servers. This profile cannot connect to a running server. Note that the version of TomEE to be used has to be present in an available maven repository. The defaults in this profile assume that the arquillian adapter and the TomEE server have the same version. E.g both 7.0.0. To use a TomEE server that's not available in maven central, one way to use it for the samples is to install it in a local .m2 as follows: Clone TomEE repo: ``git clone https://github.com/apache/tomee`` ``cd tomee`` Switch to the desired version if needed, then build and install in .m2: ``mvn clean install -pl tomee/apache-tomee -am -Dmaven.test.skip=true`` ``mvn clean install -pl arquillian -amd -Dmaven.test.skip=true`` Make sure the version that's installed (see pom.xml in TomEE project) matches the ``tomee.version`` in the properties section in the root pom.xml of the samples project. * ``tomee-embedded`` This profile uses the TomEE embedded server and runs in the same JVM as the TestClass. * Liberty * ``liberty-managed`` This profile will start up the Liberty server per sample, and optionally connects to a running server that you can start up outside of the build (with the restriction that this server has to run on the host as where the tests are run using the same user). To connect to a running server the ``org.jboss.arquillian.container.was.wlp_managed_8_5.allowConnectingToRunningServer`` system property has to be set to true. E.g. ``-Dorg.jboss.arquillian.container.was.wlp_managed_8_5.allowConnectingToRunningServer=true`` This profile requires you to set the location where Liberty is installed via the ``libertyManagedArquillian_wlpHome`` system property. E.g. ``-DlibertyManagedArquillian_wlpHome=/opt/wlp`` This profile also requires the localConnector feature to be configured in server.xml, and if all tests are to be run the javaee-7.0 feature E.g. 
```xml <featureManager> <feature>javaee-7.0</feature> <feature>localConnector-1.0</feature> </featureManager> ``` For older versions of Liberty (pre 16.0.0.0) for the JASPIC tests to even be attempted to be executed a cheat is needed that creates a group in Liberty's internal user registry: ```xml <basicRegistry id="basic"> <group name="architect"/> </basicRegistry> ``` This cheat is not needed for the latest versions of Liberty (16.0.0.0/2016.7 and later) * ``liberty-ci-managed`` This profile will download and install a Liberty server and start up the server per sample. Useful for CI servers. Note, this is not a real embedded server, but a regular server. It's now called "embedded" because no separate install is needed as it's downloaded automatically. * Weblogic * ``weblogic-remote`` This profile requires you to start up a WebLogic server outside of the build. Each sample will then reuse this instance to run the tests. This profile requires you to set the location where WebLogic is installed via the ``weblogicRemoteArquillian_wlHome`` system property. E.g. ``-DweblogicRemoteArquillian_wlHome=/opt/wls12210`` The default username/password are assumed to be "admin" and "admin007" respectively. This can be changed using the ``weblogicRemoteArquillian_adminUserName`` and ``weblogicRemoteArquillian_adminPassword`` system properties. E.g. ``-DweblogicRemoteArquillian_adminUserName=myuser`` ``-DweblogicRemoteArquillian_adminPassword=mypassword`` * Tomcat * ``tomcat-remote`` This profile requires you to start up a plain Tomcat (8.5 or 9) server outside of the build. Each sample will then reuse this instance to run the tests. Tomcat supports samples that make use of Servlet, JSP, Expression Language (EL), WebSocket and JASPIC. This profile requires you to enable JMX in Tomcat. 
This can be done by adding the following to ``[tomcat home]/bin/catalina.sh``: ``` JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=8089 -Dcom.sun.management.jmxremote=true " JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=false " JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false" JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=localhost " ``` This profile also requires you to set a username (``tomcat``) and password (``manager``) for the management application in ``tomcat-users.xml``. See the file ``test-utils/src/main/resources/tomcat-users.xml`` in this repository for a full example. Be aware that this should *only* be done for a Tomcat instance that's used exclusively for testing, as the above will make the Tomcat installation **totally insecure!** * ``tomcat-ci-managed`` This profile will install a Tomcat server and start up the server per sample. Useful for CI servers. The Tomcat version that's used can be set via the ``tomcat.version`` property. The containers that download and install a server (the \*-ci-managed profiles) allow you to override the version used, e.g.: * `-Dpayara.version=4.1.1.163` This will change the version from the current one (e.g. 4.1.1.171.1) to 4.1.1.163 for Payara testing purposes. * `-Dglassfish.version=4.1` This will change the version from the current one (e.g. 4.1.1) to 4.1 for GlassFish testing purposes. * `-Dwildfly.version=8.1.0.Final` This will change the version from the current one (e.g. 10.1.0.Final) to 8.1.0.Final for WildFly. **To run them in the console do**: 1. In the terminal, ``mvn test -fae`` at the top-level directory to start the tests for the default profile. When developing and running them from the IDE, remember to activate the profile before running the test. To learn more about Arquillian please refer to the [Arquillian Guides](http://arquillian.org/guides/) **To run only a subset of the tests, do the following at the top-level directory**: 1. 
Install top level dependencies: ``mvn clean install -pl "test-utils,util" -am`` 1. cd into desired module, e.g.: ``cd jaspic`` 1. Run tests against desired server, e.g.: ``mvn clean test -P liberty-ci-managed`` ## How to contribute ## With your help we can improve this set of samples, learn from each other and grow the community full of passionate people who care about the technology, innovation and code quality. Every contribution matters! There is just a bunch of things you should keep in mind before sending a pull request, so we can easily get all the new things incorporated into the master branch. Standard tests are jUnit based - for example [this commit](servlet/servlet-filters/src/test/java/org/javaee7/servlet/filters/FilterServletTest.java). Test classes naming must comply with surefire naming standards `**/*Test.java`, `**/*Test*.java` or `**/*TestCase.java`. For the sake of clarity and consistency, and to minimize the upfront complexity, we prefer standard jUnit tests using Java, with as additional helpers HtmlUnit, Hamcrest and of course Arquillian. Please don't use alternatives for these technologies. If any new dependency has to be introduced into this project it should provide something that's not covered by these existing dependencies. ### Some coding principles ### * When creating new source file do not put (or copy) any license header, as we use top-level license (MIT) for each and every file in this repository. * Please follow JBoss Community code formatting profile as defined in the [jboss/ide-config](https://github.com/jboss/ide-config#readme) repository. The details are explained there, as well as configurations for Eclipse, IntelliJ and NetBeans. ### Small Git tips ### * Make sure your [fork](https://help.github.com/articles/fork-a-repo) is always up-to-date. Simply run ``git pull upstream master`` and you are ready to hack. * When developing new features please create a feature branch so that we incorporate your changes smoothly. 
It's also convenient for you, as you can work on a few things in parallel ;) To create a feature branch and switch to it in one swoop, use ``git checkout -b my_new_cool_feature``. That's it! Welcome to the community! ## CI Job ## CI jobs are executed by [Travis](https://travis-ci.org/javaee-samples/javaee7-samples). Note that by the very nature of the samples provided here, it's perfectly normal that not all tests pass. A failure normally indicates a bug in the server on which the samples are executed. If you think it's really the test that's faulty, then please submit an issue or provide a PR with a fix. ## Run each sample in Docker * Install the Docker client from http://boot2docker.io * Build the sample that you want to run as ``mvn clean package -DskipTests`` For example: ``mvn -f jaxrs/jaxrs-client/pom.xml clean package -DskipTests`` * Change the second line in ``Dockerfile`` to specify the location of the generated WAR file * Run boot2docker, build the image with ``docker build -t javaee7-sample .``, and start it with ``docker run -it -p 80:8080 javaee7-sample`` * In a different shell, find out the IP address of the running container as: ``boot2docker ip`` * Access the sample as http://IP_ADDRESS:80/jaxrs-client/webresources/persons. The exact URL would differ based upon the sample.
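The ``catalina.sh`` JMX settings shown earlier in this README (port 8089 on localhost, SSL and authentication disabled) correspond to a standard RMI service URL. As a minimal sketch (the class and helper below are illustrative, not part of this project), this is how a JMX client written in Java would address that endpoint:

```java
import javax.management.remote.JMXServiceURL;

public class JmxUrlCheck {
    // Builds the service URL matching the catalina.sh settings above:
    // plain RMI on the given host/port, no SSL, no authentication.
    static JMXServiceURL tomcatJmxUrl(String host, int port) {
        try {
            return new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        } catch (java.net.MalformedURLException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // With the Tomcat instance above running, this URL can be passed to
        // JMXConnectorFactory.connect(...) or used from JConsole/VisualVM.
        System.out.println(tomcatJmxUrl("localhost", 8089));
    }
}
```

The same endpoint can also be reached from JConsole's remote-connection dialog by entering ``localhost:8089`` directly.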
0
microservices-security-in-action/samples
Microservices Security in Action Book Samples
null
# Microservices Security In Action **By [Prabath Siriwardena](https://twitter.com/prabath) and [Nuwan Dias](https://twitter.com/nuwandias)** <img src="cover.jpeg" style="float: left; width: 100%" /> [Amazon](https://www.amazon.com/Microservices-Security-Action-Prabath-Siriwardena/dp/1617295957/) | [Manning](https://www.manning.com/books/microservices-security-in-action) | [YouTube](https://www.youtube.com/channel/UCoEOYnrqEcANUgbcG-BuhSA) | [Slack](https://bit.ly/microservices-security) | [Notes](notes.md) | [Supplementary Readings](supplementary-readings.md) **NOTE: While writing the book we wanted to focus mostly on the concepts, as the concrete technologies used to implement them are constantly changing, and we wanted to keep the examples as simple as possible. So we decided to use Spring Boot to implement the OAuth 2.0 authorization server used in the samples of the book. In practice, however, you may use Keycloak, Auth0, Okta, WSO2, and so on as your authorization server.** **Spring Boot has deprecated the AuthorizationServerConfigurerAdapter, ClientDetailsServiceConfigurer, and AuthorizationServerSecurityConfigurer classes, which we used to implement the authorization server. We will update this in the next edition of the book, and will update the GitHub project even before that.
However, we expect this will not be much of a distraction, because readers are not expected to implement an authorization server themselves.** ## PART 1 OVERVIEW ### 1 ■ Microservices security landscape ### 2 ■ [First steps in securing microservices](chapter02) ## PART 2 EDGE SECURITY ### 3 ■ [Securing north/south traffic with an API gateway](chapter03) ### 4 ■ [Accessing a secured microservice via a single-page application](chapter04) ### 5 ■ [Engaging throttling, monitoring, and access control](chapter05) ## PART 3 SERVICE-TO-SERVICE COMMUNICATIONS ### 6 ■ [Securing east/west traffic with certificates](chapter06) ### 7 ■ [Securing east/west traffic with JWT](chapter07) ### 8 ■ [Securing east/west traffic over gRPC](chapter08) ### 9 ■ [Securing reactive microservices](chapter09) ## PART 4 SECURE DEPLOYMENT ### 10 ■ [Conquering container security with Docker](chapter10) ### 11 ■ [Securing microservices on Kubernetes](chapter11) ### 12 ■ [Securing microservices with Istio service mesh](chapter12) ## PART 5 SECURE DEVELOPMENT ### 13 ■ [Secure coding practices and automation](chaper13) ## APPENDICES ### A ■ OAuth 2.0 and OpenID Connect ### B ■ [JSON Web Token](appendix-b) ### C ■ Single-page application architecture ### D ■ Observability in a microservices deployment ### E ■ [Docker fundamentals](appendix-e) ### F ■ [Open Policy Agent](appendix-f) ### G ■ Creating a certificate authority and related keys with OpenSSL ### H ■ Secure Production Identity Framework for Everyone ### I ■ [gRPC fundamentals](appendix-i) ### J ■ [Kubernetes fundamentals](appendix-j) ### K ■ [Service mesh and Istio fundamentals](appendix-k)
0
junit-team/junit5-samples
Collection of sample applications using JUnit 5.
null
# JUnit 5 Samples [![ci-badge]][ci-actions] Welcome to _JUnit 5 Samples_, a collection of sample applications and extensions using JUnit Jupiter, JUnit Vintage, and the JUnit Platform on various build systems. CI builds for sample projects are performed by [GitHub Actions][ci-actions]. Using JDK 11+'s `jshell` tool, you may build all samples via the `build-all-samples.jsh` script. ## Jupiter Starter Samples _Basic setups showing how to get started with JUnit Jupiter._ ### Jupiter on Ant ![badge-jdk-8] ![badge-tool-ant] ![badge-junit-jupiter] The [junit5-jupiter-starter-ant] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Ant build system. ### Jupiter on Gradle ![badge-jdk-8] ![badge-tool-gradle] ![badge-junit-jupiter] The [junit5-jupiter-starter-gradle] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Gradle build system. ### Jupiter on Gradle using Kotlin ![badge-jdk-8] ![badge-tool-gradle] ![badge-junit-jupiter] The [junit5-jupiter-starter-gradle-kotlin] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Gradle build system and the Kotlin programming language. ### Jupiter on Gradle using Groovy ![badge-jdk-8] ![badge-tool-gradle] ![badge-junit-jupiter] The [junit5-jupiter-starter-gradle-groovy] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Gradle build system and the Groovy programming language. ### Jupiter on Maven ![badge-jdk-8] ![badge-tool-maven] ![badge-junit-jupiter] The [junit5-jupiter-starter-maven] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Maven build system. 
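As a sketch of what that bare-minimum Maven configuration involves (the version below is illustrative; the sample's own pom.xml is authoritative for the pinned one), the essential piece is the single ``junit-jupiter`` aggregate dependency, which Maven Surefire 2.22.0+ picks up automatically:

```xml
<!-- Illustrative only: check the sample's pom.xml for the current version -->
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.8.2</version>
    <scope>test</scope>
</dependency>
```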
### Jupiter on Maven using Kotlin ![badge-jdk-8] ![badge-tool-maven] ![badge-junit-jupiter] The [junit5-jupiter-starter-maven-kotlin] sample demonstrates the bare minimum configuration for getting started with a JUnit Jupiter project using the Maven build system and the Kotlin programming language. ### Jupiter on Bazel ![badge-jdk-8] ![badge-tool-bazel] ![badge-junit-jupiter] The [junit5-jupiter-starter-bazel] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using the Bazel build system. ### Jupiter on sbt ![badge-jdk-8] ![badge-tool-sbt] ![badge-junit-jupiter] The [junit5-jupiter-starter-sbt] sample demonstrates the bare minimum configuration for getting started with JUnit Jupiter using sbt and the Scala programming language. ## Jupiter Feature Samples _Extending JUnit Jupiter using its `Extension` API._ ### Sample Extensions ![badge-jdk-8] ![badge-tool-gradle] ![badge-junit-jupiter] The [junit5-jupiter-extensions] sample demonstrates how one can implement custom JUnit Jupiter extensions and use them in tests. ## Migration Samples _More complex setups showing how to integrate various parts of "JUnit 5", including a possible migration path for JUnit 3 or 4 based projects._ ### Gradle Migration ![badge-jdk-8] ![badge-tool-gradle] ![badge-junit-platform] ![badge-junit-jupiter] ![badge-junit-vintage] The [junit5-migration-gradle] sample demonstrates how to set up a Gradle project using the JUnit Platform, JUnit Jupiter, and JUnit Vintage. ### Maven Migration ![badge-jdk-8] ![badge-tool-maven] ![badge-junit-platform] ![badge-junit-jupiter] ![badge-junit-vintage] The [junit5-migration-maven] sample demonstrates how to set up a Maven project using the JUnit Platform, JUnit Jupiter, and JUnit Vintage. ## Platform Samples _Showing basic features of the JUnit Platform._ ### Multiple Engines ![badge-jdk-11] ![badge-tool-gradle] ![badge-junit-platform] ![badge-junit-jupiter] ![badge-junit-vintage] ... 
The [junit5-multiple-engines] sample demonstrates how to set up a Gradle project using the JUnit Platform with various [TestEngine][guide-custom-engine] implementations. ### Living in the Modular World ![badge-jdk-11] ![badge-tool-console] ![badge-junit-platform] The [junit5-modular-world] sample demonstrates how to test code organized in modules. This sample also demonstrates how to implement a custom [TestEngine][guide-custom-engine] for the JUnit Platform using the Java Platform Module System. [junit5-jupiter-extensions]: junit5-jupiter-extensions [junit5-jupiter-starter-ant]: junit5-jupiter-starter-ant [junit5-jupiter-starter-gradle]: junit5-jupiter-starter-gradle [junit5-jupiter-starter-gradle-groovy]: junit5-jupiter-starter-gradle-groovy [junit5-jupiter-starter-gradle-kotlin]: junit5-jupiter-starter-gradle-kotlin [junit5-jupiter-starter-maven]: junit5-jupiter-starter-maven [junit5-jupiter-starter-maven-kotlin]: junit5-jupiter-starter-maven-kotlin [junit5-jupiter-starter-bazel]: junit5-jupiter-starter-bazel [junit5-jupiter-starter-sbt]: junit5-jupiter-starter-sbt [junit5-migration-gradle]: junit5-migration-gradle [junit5-migration-maven]: junit5-migration-maven [junit5-multiple-engines]: junit5-multiple-engines [junit5-modular-world]: junit5-modular-world [badge-jdk-8]: https://img.shields.io/badge/jdk-8-lightgray.svg "JDK-8" [badge-jdk-11]: https://img.shields.io/badge/jdk-11-red.svg "JDK-11 or higher" [badge-tool-ant]: https://img.shields.io/badge/tool-ant-10f0f0.svg "Ant" [badge-tool-gradle]: https://img.shields.io/badge/tool-gradle-blue.svg "Gradle wrapper included" [badge-tool-maven]: https://img.shields.io/badge/tool-maven-0440af.svg "Maven wrapper included" [badge-tool-bazel]: https://img.shields.io/badge/tool-bazel-43a047.svg "Bazel" [badge-tool-sbt]: https://img.shields.io/badge/tool-sbt-43a047.svg "SBT" [badge-tool-console]: https://img.shields.io/badge/tool-console-022077.svg "Command line tools" [badge-junit-platform]: 
https://img.shields.io/badge/junit-platform-brightgreen.svg "JUnit Platform" [badge-junit-jupiter]: https://img.shields.io/badge/junit-jupiter-green.svg "JUnit Jupiter Engine" [badge-junit-vintage]: https://img.shields.io/badge/junit-vintage-yellowgreen.svg "JUnit Vintage Engine" [ci-badge]:https://github.com/junit-team/junit5-samples/workflows/Build%20all%20samples/badge.svg "CI build status" [ci-actions]: https://github.com/junit-team/junit5-samples/actions [guide-custom-engine]: http://junit.org/junit5/docs/current/user-guide/#launcher-api-engines-custom "Plugging in Your Own Test Engine"
1
rengwuxian/RxJavaSamples
RxJava 2 和 Retrofit 结合使用的几个最常见使用方式举例
null
The sample code has officially switched to RxJava 2 ================ > If you need the old RxJava 1 code, click [here](https://github.com/rengwuxian/RxJavaSamples/tree/1.x) ### Project overview Examples of the most common ways to use RxJava 2 in combination with Retrofit. 1. **Basic usage** Implements the most basic network request and result handling. ![screenshot_1](./images/screenshot_1.png) 2. **Transformation (map)** Converts the returned data into a format that is easier to work with before handing it to the Observer. ![screenshot_2](./images/screenshot_2.png) 3. **Combining (zip)** Merges data fetched in parallel from different endpoints and then processes it together. ![screenshot_3](./images/screenshot_3.png) 4. **One-time token** For endpoints that require fetching a token first, flatMap() chains the token request and the actual data request into a single fluent stream, with no nested Callback structures. ![screenshot_4](./images/screenshot_4.png) 5. **Reusable token** For a token that is not one-time (i.e., it can be reused), the token is saved after retrieval and reused, and retryWhen() automatically re-fetches it when it expires, making token acquisition fully transparent and simplifying development. ![screenshot_5](./images/screenshot_5.png) 6. **Caching** Uses BehaviorSubject to cache data. ![screenshot_6](./images/screenshot_6.png) ### APK download [RxJavaSamples_2.0.apk](https://github.com/rengwuxian/RxJavaSamples/releases/download/2.0/RxJavaSamples_2.0.apk)
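The one-time-token case (number 4 above) is the key pattern: flatMap() turns "request a token, then use it" into a flat chain rather than nested callbacks. As a rough analogy using only the JDK (this is not the sample's RxJava code; the two endpoint stubs are invented for illustration), CompletableFuture.thenCompose plays the same role as flatMap():

```java
import java.util.concurrent.CompletableFuture;

public class TokenChain {
    // Stand-in for the token endpoint (a stub, not the sample's real Retrofit call).
    static CompletableFuture<String> requestToken() {
        return CompletableFuture.supplyAsync(() -> "token-42");
    }

    // Stand-in for the data endpoint that requires a token.
    static CompletableFuture<String> requestData(String token) {
        return CompletableFuture.supplyAsync(() -> "data-for-" + token);
    }

    // thenCompose chains the two async calls the way RxJava's flatMap does,
    // so no Callback is nested inside another Callback.
    static String fetch() {
        return requestToken().thenCompose(TokenChain::requestData).join();
    }

    public static void main(String[] args) {
        System.out.println(fetch());
    }
}
```

In the sample itself the two stubs are Retrofit calls returning Observables, and flatMap() takes the place of thenCompose().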
0
pavansolapure/opencodez-samples
The code repository for Opencodez. Here at this git, we are trying to share code for Java, Python, PHP, Oracle and any new technology we try our hands and feel it can add value.
null
# www.opencodez.com - code samples This is where we keep all of our code samples. Please feel free to download, try and suggest any changes or enhancements. Below is a link to each of the articles associated with the code. This will help you understand the samples and give you a chance to express your thoughts. # Java 2-way-ssl-authentication https://www.opencodez.com/java/implement-2-way-authentication-using-ssl.htm actuators https://www.opencodez.com/java/spring-boot-actuators.htm ad-ldap-demo https://www.opencodez.com/java/configure-ldap-authentication-using-spring-boot.htm apache-cxf-ws https://www.opencodez.com/java/soap-web-services-with-apache-cxf-spring-boot.htm currency-conversion https://www.opencodez.com/java/currency-conversion-app-spring-boot-java.htm device-detection https://www.opencodez.com/java/device-detection-using-spring-mobile.htm encrypted-mail https://www.opencodez.com/java/send-encrypted-email-using-java.htm generic-csv-reader https://www.opencodez.com/java/generic-csv-file-reader-in-java.htm java-deep-learning https://www.opencodez.com/java/deeplearaning4j.htm kafka-demo https://www.opencodez.com/java/using-apache-kafka-java.htm mail-demo https://www.opencodez.com/java/java-mail-framework-using-spring-boot.htm microservices https://www.opencodez.com/java/microservices-action-spring-boot.htm multi-db https://www.opencodez.com/java/connecting-multiple-databases-spring-data-jpa.htm quartz-demo https://www.opencodez.com/java/quartz-scheduler-with-spring-boot.htm restful-demo https://www.opencodez.com/java/building-restful-web-services-using-java.htm spring-boot-datatable https://www.opencodez.com/java/datatable-with-spring-boot.htm spring-kafka-demo https://www.opencodez.com/java/how-to-build-message-driven-application-using-kafka-and-spring-boot.htm web-app-starter https://www.opencodez.com/java/java-web-application-starter-template-using-spring-boot.htm zip-test https://www.opencodez.com/java/zip-file-using-java.htm # Python 
keras-text-classification https://www.opencodez.com/python/text-classification-using-keras.htm # Oracle Oracle https://www.opencodez.com/oracle/oracle-job-scheduler-guide-examples-part-1.htm
0
sanaulla123/samples
Samples used in my blog posts
null
## Related Blog Posts - [JDK 12 – JEP 325 Switch Expressions](https://sanaulla.info/2019/04/13/jdk-12-switch-expressions/) - [How to create a QR Code SVG using Zxing and JFreeSVG in Java?](https://sanaulla.info/2019/04/12/how-to-create-a-qr-code-svg-using-zxing-and-jfreesvg-in-java/) - [Launch Single-File Source-Code Programs in JDK 11](https://sanaulla.info/2018/07/05/launch-single-file-source-code-programs-in-jdk-11/) - [Java 10 – JEP 286: Local-Variable Type Inference](https://sanaulla.info/2018/03/04/java-10-jep-286-local-variable-type-inference/) - [Moving From Mustache.js and jQuery to Vuejs for Client Side View Management Reactively](https://sanaulla.info/2017/12/06/moving-from-mustache-js-to-vuejs-for-client-side-view-management/)
0
lenve/javaboy-code-samples
公众号【江南一点雨】文章案例汇总,技术文章请戳这里----->
null
Hello everyone! Scan the QR code to add me on WeChat (WeChat ID: **a_java_boy2**), add the note "SpringBoot", and join the group discussion. ![WeChat ID: a_java_boy2](https://user-images.githubusercontent.com/6023444/75459026-ba70d500-59b9-11ea-8cbd-3d5889f356c4.png) ## About this repository Songge has long shared original technical tutorials on the WeChat public account **江南一点雨**, but the code was never managed in one place. Recently some readers suggested organizing the article code for easy reference, so this repository was created to share the sample code for the articles on the **江南一点雨** account. If you find this repository useful, remember to give it a star. ## Code index |code-samples|Corresponding article| |:---|:---| |spring-boot-starter-custom|[Hand-rolling a Spring Boot Starter and demystifying the black magic of auto-configuration!](https://mp.weixin.qq.com/s/tKr_shLQnvcQADr4mvcU3A)| |api.js|[Spring Boot + Vue front-end/back-end separated development: wrapping and configuring front-end network requests](https://mp.weixin.qq.com/s/K8ANNmm6ZrP2xMyK6LGZ_g)| |javassm|[This time, no web.xml at all: building an SSM environment in pure Java](https://mp.weixin.qq.com/s/NC_0oaeBzRjCB34U_ZWxIQ)| |properties|[It's time to fully understand Spring Boot's application.properties configuration file!](https://mp.weixin.qq.com/s/cUhzpo8zkQq09d8S4WkAsw)| |redis|[Working with Redis in Spring Boot: all three approaches explained!](https://mp.weixin.qq.com/s/cgDtmjPWTdh44bSlLC0Qsw)| |sessionshare|[Session sharing in Spring Boot with a single dependency, the simplest solution there is!](https://mp.weixin.qq.com/s/xs67SzSkMLz6-HgZVxTDFw)| |restful|[Building a RESTful application in Spring Boot with 10 lines of code](https://mp.weixin.qq.com/s/7uO87SOu93XH2Y3iWxWicg)| |shiro|[Integrating Shiro with Spring Boot: both approaches summarized!](https://mp.weixin.qq.com/s/JU_-gn-yZ4VJJXTZvo7nZQ)| |quartz|[Two ways to implement scheduled tasks in Spring Boot!](https://mp.weixin.qq.com/s/_20RYBkjKrB4tdpXI3hBOA)| |ehcache|[Another kind of cache: integrating Ehcache with Spring Boot](https://mp.weixin.qq.com/s/i9a3VOf_GMN_UBQ-8tKi3A)| |thymeleaf|[Minimalist Spring Boot integration with Thymeleaf page templates](https://mp.weixin.qq.com/s/7tgiuFceyZPHBZcLnPmkfw)| |exception|[Patterns for custom exception handling in Spring Boot!](https://mp.weixin.qq.com/s/w26MvCWQ1RO4CUJrfXi5AA)| |jdbctemplate|[Spring Boot data persistence with JdbcTemplate](https://mp.weixin.qq.com/s/X4-e1cf3uZafg8XtMJeo_Q)| |jdbcmulti|[Multiple data source configuration in Spring Boot with JdbcTemplate](https://mp.weixin.qq.com/s/7po83-CAoryo1eglumW42Q)| |mybatis|[The simplest tutorial on integrating MyBatis with Spring Boot](https://mp.weixin.qq.com/s/HOnX2XRDWrQ9oOKLo1ueKw)| |mybatismulti|[Minimalist Spring Boot integration with MyBatis and multiple data sources](https://mp.weixin.qq.com/s/9YXwk2-4zIq60WFuy6nXdw)| |jpa|[It's time to learn about integrating Jpa with Spring Boot](http://www.javaboy.org/2019/0407/springboot-jpa.html)| |jpamulti|[Spring Boot with Jpa and multiple data sources](http://www.javaboy.org/2019/0407/springboot-jpa-multi.html)| |jwt|[Solid stuff: learn to use JWT in Spring Security through a single example!](https://mp.weixin.qq.com/s/riyFQSrkQBQBCyomE__fLA)| |security_json|[Logging in to Spring Security with JSON-format data](https://mp.weixin.qq.com/s/X1t-VCxzxIcQKOAu-pJrdw)| |swagger2|[Integrating Swagger2 with Spring Boot, so you never have to maintain API docs by hand again!](https://mp.weixin.qq.com/s/iTsTqEeqT9K84S091ycdog)| |cors|[Solving cross-origin problems in Spring Boot with CORS](https://mp.weixin.qq.com/s/ASEJwiswLu1UCRE-e2twYQ)| |freemarker|[Integrating Freemarker with Spring Boot: how did 50-plus lines of configuration disappear?](https://mp.weixin.qq.com/s/zXwAy1dMlITqHOdBNeZEKg)| |mail|[Five ways to send email in Spring Boot!](https://mp.weixin.qq.com/s/8UiEMpono-hUrRVwvDjUgA)| |docker|[One-click deployment of Spring Boot to a remote Docker container, that slick!](https://mp.weixin.qq.com/s/vSCQLvQBYMYoPhdlO2v3XA)| |https|[Is HTTPS support in Spring Boot really that hard?](https://mp.weixin.qq.com/s/WOmOXN_IK0IMjL0_hlAOFA)| |jwt-demo|[Stateless login with Spring Security and Jwt](https://mp.weixin.qq.com/s/Sn59dxwtsEWoj2wdynQuRQ)| |docker-jib|[Stop deploying Spring Boot with a Dockerfile! Two steps and you're done!](https://mp.weixin.qq.com/s/ZqWktjLUOzHNKOGE6BfHRA)| ## About the author Hi everyone, I'm 江南一点雨 (Songge). I hold a bachelor's degree in management, taught myself Java programming in college, have dabbled in everything from mobile to front end to back end, and now focus on Java microservices. I'm a CSDN blog expert, a Huawei Cloud expert, the author of *Spring Boot + Vue Full-Stack Development in Action*, and I run the WeChat public account **江南一点雨**, which focuses on sharing Spring Boot, microservices, and front-end/back-end separation techniques. You're welcome to follow it! Follow the account and reply "2TB" to receive the Java materials carefully prepared for you! ### My WeChat public account ![](http://www.javaboy.org/images/sb/javaboy.jpg) ### My sites - Personal site: http://www.javaboy.org - GitHub: https://github.com/lenve - CSDN: http://wangsong.blog.csdn.net - SegmentFault: https://segmentfault.com/u/lenve - Cnblogs: https://www.cnblogs.com/lenve - Juejin: https://juejin.im/user/57d679af0bd1d000585012a7
0
spring-projects/spring-security-samples
null
null
null
0
agoncal/agoncal-sample-jaxrs
Samples about JAX-RS
jax-rs samples
# Samples - JAX-RS ## Purpose of this sample Several samples about JAX-RS : * 01-Testing : Simple integration test # Licensing <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-sa/3.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. <div class="footer"> <span class="footerTitle"><span class="uc">a</span>ntonio <span class="uc">g</span>oncalves</span> </div>
0
habuma/spring-in-action-5-samples
Home for example code from Spring in Action 5.
null
null
1
mercyblitz/thinking-in-spring-boot-samples
小马哥书籍《Spring Boot 编程思想》示例工程
book code-project examples java mercyblitz samples spring spring-boot thinking-in-spring-boot
# Thinking in Spring Boot (《Spring Boot 编程思想》) > In memory of my late grandmother, Xie Houqun (解厚群) The full title of this book is *Thinking in Spring Boot* (《Spring Boot 编程思想》). It takes Spring Boot 2.0 as the main thread of discussion, while its scope covers all Spring Boot 1.x versions as well as the associated Spring Framework versions, and it is committed to: - Scenario analysis: mastering technology selection - Systematic study: refusing to stop at the surface - Attention to specifications: understanding development trends - Source-code walkthroughs: understanding the design thinking - Hands-on practice: consolidating what you have learned ## Preface - [*Core volume*](https://mercyblitz.github.io/books/thinking-in-spring-boot/core/preface/) ([on pre-sale…](https://item.jd.com/12570242.html)) - *Operations volume* (about to be finished…) - *Web volume* (in progress…) ## Front matter - [Content overview](https://mercyblitz.github.io/books/thinking-in-spring-boot/overview/) - [Version scope](https://mercyblitz.github.io/books/thinking-in-spring-boot/version/) - [Conventions](https://mercyblitz.github.io/books/thinking-in-spring-boot/conventions/) - [Companion videos](https://mercyblitz.github.io/books/thinking-in-spring-boot/videos/) - [Sample projects](https://mercyblitz.github.io/books/thinking-in-spring-boot/samples/) - [Errata](https://mercyblitz.github.io/books/thinking-in-spring-boot/revision/) - [Charity fund flow](https://mercyblitz.github.io/books/thinking-in-spring-boot/donate/) ## [About me](https://mercyblitz.github.io/books/thinking-in-spring-boot/about/) "Who am I?" is a fine philosophical question. In the community, people affectionately call me "小马哥"; I do charity work, and I also do business. In open source I appear in many projects as `mercyblitz`; "mercy" fits my character, and "blitz" describes my style. Many friends have kindly praised my past talks, yet "the assent of a thousand men is worth less than the honest dissent of one", and I often wonder whether it was the platform's backing or whether I personally can really stand the test. So I chose to hide my real name, hoping to hear more honest voices, even though in the internet age a little homework will uncover almost anyone's personal details. Admittedly, my employer and job title would inevitably create a "stage effect", which would both go against the original intent of writing this book and blur the focus of the discussion. So this book will neither show that information nor engage in any "cult of personality". Its value should lie in spreading knowledge; its merits and flaws are for you, the readers, to judge. ## Community - WeChat public account: 次灵均阁 ![WeChat public account QR code](https://mercyblitz.github.io/books/thinking-in-spring-boot/assets/my_mp_qrcode.jpg) - [Zhishi Xingqiu](https://t.zsxq.com/72rj2rr): ![小马哥's Java planet](https://mercyblitz.github.io/books/thinking-in-spring-boot/assets/my_java_planet.png) - [Github](http://github.com/mercyblitz): [http://github.com/mercyblitz](http://github.com/mercyblitz) > For more about me, [search Google for `mercyblitz`](https://www.google.com/search?q=mercyblitz)
0
spring-cloud/spring-cloud-stream-samples
Samples for Spring Cloud Stream
null
null
0
googleworkspace/java-samples
☕ Java samples for Google Workspace APIs.
google-workspace gsuite java samples
# Google Workspace Java Samples A collection of samples that demonstrate how to call Google Workspace APIs in Java. ## APIs ### Admin SDK - [Alert Center Quickstart](https://developers.google.com/admin-sdk/alertcenter/quickstart/java) - [Directory Quickstart](https://developers.google.com/admin-sdk/directory/v1/quickstart/java) - [Reports Quickstart](https://developers.google.com/admin-sdk/reports/v1/quickstart/java) - [Reseller Quickstart](https://developers.google.com/admin-sdk/reseller/v1/quickstart/java) ### Apps Script - [Quickstart](https://developers.google.com/apps-script/api/quickstart/java) ### Calendar - [Quickstart](https://developers.google.com/calendar/quickstart/java) - [Sync Tokens and Etags](calendar/sync) - [Command-line sample](https://github.com/google/google-api-java-client-samples/tree/master/calendar-cmdline-sample) ### Classroom - [Quickstart](https://developers.google.com/classroom/quickstart/java) ### Docs - [Quickstart](https://developers.google.com/docs/api/quickstart/java) ### Drive V3 - [Quickstart](https://developers.google.com/drive/v3/web/quickstart/java) - [Command-line sample](https://github.com/google/google-api-java-client-samples/tree/master/drive-cmdline-sample) ### Gmail - [Quickstart](https://developers.google.com/gmail/api/quickstart/java) ### People - [Quickstart](https://developers.google.com/people/quickstart/java) ### Sheets - [Quickstart](https://developers.google.com/sheets/api/quickstart/java) ### Slides - [Quickstart](https://developers.google.com/slides/quickstart/java) ### Tasks - [Quickstart](https://developers.google.com/google-apps/tasks/quickstart/java) ### Vault - [Quickstart](https://developers.google.com/vault/quickstart/java)
0
udacity/ud862-samples
null
null
# ud862-samples This repository holds the code samples written for ud862 - Android Design for Developers.
0
devunwired/accessory-samples
Talk to Your Toaster Accessory Sample Code
null
Android Accessory Examples -------------------------- This repository contains sample applications demonstrating many of the techniques for doing custom accessory development discussed in "Talk to Your Toaster". Examples include: - Bluetooth RFCOMM - Bluetooth Headset Profile - USB Open Accessory Protocol - USB Host Protocol The sample code is licensed under the MIT Open Source License
1
vinsguru/vinsguru-blog-code-samples
null
null
vinsguru-blog-code-samples
0
JetBrains/intellij-sdk-code-samples
Mirror of the IntelliJ SDK Docs Code Samples
intellij intellij-idea intellij-platform intellij-plugin intellij-plugins intellij-sdk jetbrains jetbrains-plugin jetbrains-plugins
# IntelliJ Platform SDK Code Samples [![official JetBrains project](https://jb.gg/badges/official-flat-square.svg)][jb:github] [![JetBrains IntelliJ Platform SDK Docs](https://jb.gg/badges/docs.svg?style=flat-square)][jb:docs] [![X Follow](https://img.shields.io/badge/follow-%40JBPlatform-1DA1F2?logo=x)][jb:x] [![Build](https://img.shields.io/github/actions/workflow/status/JetBrains/intellij-sdk-docs/code-samples.yml?branch=main&style=flat-square)][gh:workflow-code-samples] [![Slack](https://img.shields.io/badge/Slack-%23intellij--platform-blue?style=flat-square&logo=Slack)][jb:slack] Learn how to build plugins using IntelliJ Platform SDK for the [JetBrains products][jb:products] by experimenting with our code samples. These samples show you how features work and help you jumpstart your plugins. > [!TIP] > To start a new plugin project, consider using [IntelliJ Platform Plugin Template][gh:template] which offers a pure boilerplate template to make it easier to create a new plugin project. > > The code samples can also be found in the [IntelliJ SDK Code Samples](https://github.com/JetBrains/intellij-sdk-code-samples) mirror repository. To learn more, browse [available Extension Points][docs:eps], explore Extension Point usages in open-source plugins using [IntelliJ Platform Explorer](https://jb.gg/ipe) and learn how to [Explore the IntelliJ Platform API][docs:explore-api]. ## Target Platform All Code Samples target the latest GA platform release. Previous releases are made available via [tags](https://github.com/JetBrains/intellij-sdk-code-samples/tags). ## Structure Code Samples depend on the [IntelliJ Platform SDK][docs] and [Gradle][docs:gradle] as a build system. The main plugin definition file is stored in the `plugin.xml` file, which is created according to the [Plugin Configuration File documentation][docs:plugin.xml]. It describes definitions of the actions, extensions, or listeners provided by the plugin. 
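As a rough sketch of the shape such a `plugin.xml` descriptor takes (all ids, names, and classes below are invented for illustration; each code sample's own descriptor is authoritative):

```xml
<!-- Illustrative descriptor only; ids, names, and classes are invented -->
<idea-plugin>
  <id>org.example.demo</id>
  <name>Demo Plugin</name>
  <vendor>Example Vendor</vendor>
  <depends>com.intellij.modules.platform</depends>
  <actions>
    <!-- Registers an action and adds it to the Tools menu -->
    <action id="org.example.demo.HelloAction"
            class="org.example.demo.HelloAction"
            text="Say Hello">
      <add-to-group group-id="ToolsMenu" anchor="last"/>
    </action>
  </actions>
</idea-plugin>
```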
## Code Samples Please see [Code Samples][docs:code-samples] topic on how to import and run code samples. In the following table, you may find all available samples provided in the separated directories as stand-alone projects available for running with the Gradle [`runIde`](tools_gradle_intellij_plugin.md#tasks-runide) task. | Code Sample | Description | |-----------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Action Basics](./action_basics) | Action and Action Group patterns implementation, adds entries to the Tools menu. | | [Comparing References Inspection](./comparing_string_references_inspection) | Local Inspection Tool, adds entries to **Settings &#124; Editor &#124; Inspections &#124; Java &#124; Probable Bugs**. | | [Conditional Operator Intention](./conditional_operator_intention) | Intention action, suggests converting a ternary operator into an `if` block and adds entry to **Settings &#124; Editor &#124; Intentions &#124; SDK Intentions**. | | [Editor Basics](./editor_basics) | Basic Editor APIs example with editor popup menu with extra actions. | | [Framework Basics](./framework_basics) | Basic *SDK Demo Framework* support added to the **File &#124; New &#124; Project &#124; Java** wizard. | | [Kotlin Demo](./kotlin_demo) | Kotlin example extending the *Main Menu* with a **Greeting** menu group. | | [Live Templates](./live_templates) | Live templates for Markdown language, adds an entry to the **Settings &#124; Editor &#124; Live Templates** dialog. | | [Max Opened Projects](./max_opened_projects) | Application services and listeners, shows warning dialog when more than 3 open projects are opened. | | [Module](./module) | *SDK Demo Module* module type added to the **File &#124; New &#124; Project...** wizard. 
| | [Product Specific - PyCharm Sample](./product_specific/pycharm_basics) | Plugin project configuration for the PyCharm IDE. | | [Project Model](./project_model) | Interacts with the project model, adds menu items to **Tools** and **Editor Context** menus. | | [Project View Pane](./project_view_pane) | Project View Pane listing only image files. | | [Project Wizard](./project_wizard) | Project Wizard example with demo steps. | | [PSI Demo](./psi_demo) | PSI Navigation features presentation. | | [Run Configuration](./run_configuration) | Run configuration implementation with factory, options and UI. | | [Settings](./settings) | Custom settings panel, adds a settings panel to the **Settings** panel under **Tools**. | | [Simple Language Plugin](./simple_language_plugin) | Custom language support, defines a new *Simple language* with syntax highlighting, annotations, code completion, and other features. | | [Theme Basics](./theme_basics) | Sample *theme* plugin with basic interface modifications. | | [Tool Window](./tool_window) | Custom Tool Window example plugin. | | [Tree Structure Provider](./tree_structure_provider) | Tree Structure Provider showing only plain text files. 
| [gh:workflow-code-samples]: https://github.com/JetBrains/intellij-sdk-docs/actions/workflows/code-samples.yml [gh:template]: https://github.com/JetBrains/intellij-platform-plugin-template [jb:github]: https://github.com/JetBrains/.github/blob/main/profile/README.md [jb:docs]: https://plugins.jetbrains.com/docs/intellij/ [jb:products]: https://www.jetbrains.com/products.html [jb:slack]: https://plugins.jetbrains.com/slack [jb:x]: https://x.com/JBPlatform [docs]: https://plugins.jetbrains.com/docs/intellij/ [docs:code-samples]: https://plugins.jetbrains.com/docs/intellij/code-samples.html [docs:eps]: https://plugins.jetbrains.com/docs/intellij/extension-point-list.html [docs:gradle]: https://plugins.jetbrains.com/docs/intellij/developing-plugins.html [docs:plugin.xml]: https://plugins.jetbrains.com/docs/intellij/plugin-configuration-file.html [docs:explore-api]: https://plugins.jetbrains.com/docs/intellij/explore-api.html
0
IBMStreams/samples
This repository contains open-source sample applications for IBM Streams.
database geofence geofencing hdfs healthcare ibm-streams samples stream-processing text-analytics timeseries
## README -- IBMStreams samples

This repository contains sample applications for IBM Streams.

### [Search the samples using the samples catalog](http://ibmstreams.github.io/samples)

You can also [Contribute a sample](https://github.com/IBMStreams/samples/wiki/Adding-a-sample-to-the-catalog-and-repo)

To learn more about Streams:
* [IBM Streams on Github](http://ibmstreams.github.io)
* [Streams Community](https://ibm.biz/streams-community)

## About this repository

Each branch in the repo is related and serves a different purpose.

- `main` branch: has the actual code samples, organized by feature or task, e.g. Read data, Analyze data, etc.
- `gh-pages` branch: has the code for the catalog, ibmstreams.github.io/samples/ (AngularJS code).
- `fileserver` branch: has the code for the file server. The file server is used by the catalog to provide a zip archive of each individual sample.

### Adding a new sample to the catalog

You need to install Jekyll.

Background info:
- Each individual sample has a `catalog.json` that describes the sample and is used by the gh-pages branch to display the catalog.
- `extSamples.json` contains information about additional samples that are listed on the website but hosted in other repositories.

To add a sample to the catalog:
- If the sample is going to be in this repo:
  - Add the sample folder to the tree in the appropriate folder, e.g. if it is a Database sample, add it to ReadAndStore data.
  - Add a `catalog.json` file to the top-level folder of the new sample; use any of the existing `catalog.json` files as an example.
- If it is going to be hosted in another repo:
  - Edit the `extSamples.json` file to add the information about the sample.
  - Update the categories array in the JSON file based on the categories listed in categoryKey.txt.
- Commit your changes to the main branch.
- Run the updateCatalogDB.sh script. This script will:
  - Run the `generate-catalog.py` script, which combines the `extSamples.json` file with the entry from each `catalog.json` in the repo to create a master JSON file listing every entry in the catalog.
  - Switch to the `gh-pages` branch and start a local copy of the server at 127.0.0.1:4000/.
- Visit 127.0.0.1:4000/ to see the catalog and verify the new sample is listed.
- Commit and push the changes to the gh-pages branch.

*Note*: Run the data/main.json file through a JSON minimizer before committing it to remove whitespace.
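The workflow above revolves around each sample's `catalog.json`. The README does not show that file's schema, so the snippet below is only an illustrative sketch: every field name and value here is an assumption, not the actual format consumed by `generate-catalog.py`.

```json
{
  "name": "MyNewSample",
  "description": "Hypothetical sample that reads tuples from a file and filters them",
  "categories": ["Database"],
  "language": "SPL",
  "author": "your-github-handle"
}
```

Before committing, compare against any existing `catalog.json` in the repo for the real field names.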
1
THEONE10211024/RxJavaSamples
A collection of common RxJava usage scenarios; concise, classic, easy-to-understand examples...
null
# RxJavaSamples

A collection of common RxJava usage scenarios; concise, classic, easy-to-understand examples. The examples under samples are covered in my blog; see [here](http://blog.csdn.net/theone10211024/article/details/50435325) for more details.

## (Not-so-)famous libraries

* [RxJava](https://github.com/ReactiveX/RxJava): needs no introduction, the origin of the whole Rx family.
* [RxAndroid](https://github.com/ReactiveX/RxAndroid): led by JakeWharton, it brings RxJava to Android. Not many examples yet, but under active development. You can get a first impression [here](http://blog.csdn.net/lzyzsd/article/details/45033611).
* [RxBinding](https://github.com/JakeWharton/RxBinding): a JakeWharton project that exposes Android event bindings such as OnClickListener, TextWatcher and scroll-change events in RxJava form. Very rich, and it even supports Kotlin!
* [RxKotlin](https://github.com/ReactiveX/RxKotlin): one more powerful tool for Kotlin on its way to replacing Java!
* [RxRelay](https://github.com/JakeWharton/RxRelay): yet another JakeWharton project!

## Projects & examples

* [RxJava-Android-Samples](https://github.com/kaushikgopal/RxJava-Android-Samples): covers a range of RxJava use cases. Classic and easy to follow, though not exhaustive; most of the examples in this project come from it, with thanks!
* [Awesome-RxJava](https://github.com/lzyzsd/Awesome-RxJava): collects many classic blog posts, tutorials, translations and apps. Want to get started? The articles in there are all you need!
* [RxDocs](https://github.com/mcxiaoke/RxDocs): the Chinese translation project for the Rx and RxJava documentation; lots of learning material!
* [RengwuxianRxjava](https://github.com/androidmalin/RengwuxianRxjava): the examples from rengwuxian's [classic article](http://gank.io/post/560e15be2dca930e00da1083).
* [RxBlur](https://github.com/SmartDengg/RxBlur): a simple use case of applying Gaussian blur with RxJava.
* [Meizhi](https://github.com/drakeet/Meizhi): a project built on RxJava & Retrofit.
* [RexWeather](https://github.com/vyshane/rex-weather): a small weather-forecast app built on RxJava & Retrofit.
* [Android-ReactiveLocation](https://github.com/mcharmas/Android-ReactiveLocation)
* [reark](https://github.com/reark/reark)
* [RxPermissions](https://github.com/tbruyelle/RxPermissions)
* [rxfilewatcher](https://github.com/helmbold/rxfilewatcher)
* [RxLifecycle](https://github.com/trello/RxLifecycle): strictly scopes subscriptions so that an Activity/Fragment is not leaked because a subscription was never cancelled.
* [rxloader](https://github.com/evant/rxloader)
* [ReactiveNetwork](https://github.com/pwittchen/ReactiveNetwork): uses RxJava to observe network connectivity and WiFi signal-strength changes.
* [frodo](https://github.com/android10/frodo)
* [rxjava-multiple-sources-sample](https://github.com/dlew/rxjava-multiple-sources-sample): a case study of multi-level caching with RxJava.
* [rx-preferences](https://github.com/f2prateek/rx-preferences): Android's SharedPreferences, implemented with RxJava.
* [RxCache](https://github.com/VictorAlbertos/RxCache): a caching library tailored for Android and Java development.

## Blog posts

* [RxJava explained for Android developers](http://gank.io/post/560e15be2dca930e00da1083): rengwuxian's masterpiece; explains RxJava in a very accessible way.
* [Probably the most complete summary of RxJava use cases in the eastern hemisphere](http://blog.csdn.net/theone10211024/article/details/50435325): a summary focused on Android usage scenarios. Mostly common, classic examples showing how RxJava addresses Android pain points!
* [RxJava made approachable](http://blog.csdn.net/lzyzsd/article/details/41833541): the first article of a four-part series; a well-done translation!
* [A grand RxJava collection](http://www.jcodecraeer.com/a/anzhuokaifa/androidkaifa/2015/0430/2815.html): whatever you are looking for is probably here.
* [RxBus](http://nerds.weddingpartyapp.com/tech/2014/12/24/implementing-an-event-bus-with-rxjava-rxbus/): the almighty RxJava can even serve as an EventBus? See how it was implemented!
* [When the Avengers meet Dagger2, RxJava and Retrofit](http://blog.csdn.net/handsome_926/article/details/49176227): RxJava + Dagger2 + Retrofit! Shows you what a clean architecture looks like!
* [Advanced RxJava](http://akarnokd.blogspot.com/): a blog series; a [Chinese translation of the series](http://blog.piasy.com/AdvancedRxJava/) is also available.

## Websites

* [ReactiveX](http://reactivex.io): the official ReactiveX site; not knowing it is like learning Android without knowing the Android Developer site.
* [RxMarbles](http://rxmarbles.com): intuitive, fun marble diagrams that make the programming model much easier to grasp!

## Books

* [RxJava Essentials (English edition)](http://download.csdn.net/detail/theone10211024/9394367): fairly detailed; well suited for getting started with RxJava.
* [RxJava Essentials (Chinese edition)](http://download.csdn.net/detail/theone10211024/9394379): the Chinese translation of RxJava Essentials.

##### Finally, RxJava is still not widely used in China, and I hope you can help promote it! If you know more good articles, sites or projects, please embrace the programmer's most distinctive trait, the open-source spirit, so that more peers can discover, learn from and join in!
0
liferay/liferay-blade-samples
null
null
null
0
graphql-java-kickstart/samples
Samples for using the graphql java kickstart projects
null
# GraphQL Java Kickstart Samples [![Discuss on GitHub](https://img.shields.io/badge/GitHub-discuss-orange)](https://github.com/graphql-java-kickstart/samples/discussions) ## We are looking for contributors! Are you interested in improving our documentation, working on the codebase, reviewing PRs? [Reach out to us on Discussions](https://github.com/graphql-java-kickstart/samples/discussions) and join the team! Samples for using the GraphQL Java Kickstart projects
0
Mike-bel/MDStudySamples
Material Design Widgets Study Samples
null
# MDStudySamples

Systematic study samples for Android Material Design.

Design guidelines:
- [Official Google site](https://material.google.com/)
- [Chinese translation on Jikexueyuan](http://wiki.jikexueyuan.com/project/material-design/)

## TabLayout

Blog post: [Android TabLayout: build a swipeable tab page in minutes](http://www.jianshu.com/p/39a66373498c)

## Snackbar

Blog post: [Android: when to use a Dialog, a Snackbar, or a Toast](http://www.jianshu.com/p/9eb3b17b0e77)

## FloatingActionButton

Blog post: [Android FloatingActionButton: don't offer too many primary actions, one is enough](http://www.jianshu.com/p/5328b2eee827)

## AppBarLayout & CoordinatorLayout

- [Android: a first look at AppBarLayout and CoordinatorLayout](http://www.jianshu.com/p/ab04627cce58)
- [Android: a detailed analysis of AppBarLayout's five ScrollFlags](http://www.jianshu.com/p/7caa5f4f49bd)
- [Android CoordinatorLayout case study, part 1](http://www.jianshu.com/p/4b0f3c80ebc9)
- [Android CoordinatorLayout case study, part 2](http://www.jianshu.com/p/360fd368936d)
- [Android: a step-by-step analysis of CoordinatorLayout.Behavior](http://www.jianshu.com/p/8396b74de317)

## TODO

Material Design login pages:
- [fanrunqi/MaterialLogin](https://github.com/fanrunqi/MaterialLogin)
- [shem8/MaterialLogin](https://github.com/shem8/MaterialLogin)

BottomNavigation:
- [Ashok-Varma/BottomNavigation](https://github.com/Ashok-Varma/BottomNavigation)
- [aurelhubert/ahbottomnavigation](https://github.com/aurelhubert/ahbottomnavigation)

### Follow my WeChat official account: 安卓笔记侠

----
![](http://ocq7gtgqu.bkt.clouddn.com/NiaoTechNiaoTech_QRcode.jpg)
0
lenve/spring-security-samples
null
null
### Learn Spring Security with 松哥 (Songge)

Scan the QR code to add me on WeChat (WeChat ID: **a_java_boy2**), mention "SpringSecurity" in the friend request, and join the discussion group.

![WeChat ID: a_java_boy2](https://user-images.githubusercontent.com/6023444/75459026-ba70d500-59b9-11ea-8cbd-3d5889f356c4.png)

Scan the code to follow the WeChat official account 【江南一点雨】 and reply **ss** to receive the complete Spring Security article series:

![](https://open.weixin.qq.com/qr/code?username=a_javaboy)

#### Some demos in this repository have companion videos

Video: [https://www.bilibili.com/video/BV1xA411h7o3/](https://www.bilibili.com/video/BV1xA411h7o3/)

Follow me on Bilibili to receive the latest videos as they are released.

#### Every demo in this repository has a companion article (the linked articles are in Chinese)

|Demo|Article|
|:---|:---|
|form-login|[松哥手把手带你入门 Spring Security,别再问密码怎么解密了](https://mp.weixin.qq.com/s/Q0GkUb1Nt6ynV22LFHuQrQ)|
|form-login-2|[手把手教你定制 Spring Security 中的表单登录](https://mp.weixin.qq.com/s/kHJRKwH-WUx-JEeaQMa7jw)|
|form-login-3|[Spring Security 做前后端分离,咱就别做页面跳转了!统统 JSON 交互](https://mp.weixin.qq.com/s/Xzt9ymff0DCbAQbklHOxpQ)|
|form-login-4|[Spring Security 中的授权操作原来这么简单](https://mp.weixin.qq.com/s/BKAYXMaBBs0VrKZtzorn-w)|
|form-login-5|[Spring Security 如何将用户数据存入数据库?](https://mp.weixin.qq.com/s/EurEXmU0M9AKuUs4Jh_V5g)|
|withjpa|[Spring Security+Spring Data Jpa 强强联手,安全管理只有更简单!](https://mp.weixin.qq.com/s/VWJvINbi1DB3fF-Mcx7mGg)|
|rememberme|[Spring Boot + Spring Security 实现自动登录功能](https://mp.weixin.qq.com/s/aSsGNBSWMTsAEXjn9wQnYQ)|
|rememberme-persis|[Spring Boot 自动登录,安全风险要怎么控制?](https://mp.weixin.qq.com/s/T6_PBRzIADE71af3yoKB6g)|
|verifycode|[SpringSecurity 自定义认证逻辑的两种方式(高级玩法)](https://mp.weixin.qq.com/s/LeiwIJVevaU5C1Fn5nNEeg)|
|verifycode-2|[Spring Security 中如何快速查看登录用户 IP 地址等信息?](https://mp.weixin.qq.com/s/pSX9XnPNQPyLWGc6oWR3hA)|
|session-1|[Spring Security 自动踢掉前一个登录用户,一个配置搞定!](https://mp.weixin.qq.com/s/9f2e4Ua2_fxEd-S9Y7DDtA)|
|session-2|[Spring Boot + Vue 前后端分离项目,如何踢掉已登录用户?](https://mp.weixin.qq.com/s/nfqFDaLDH8UJVx7mqqgHmQ)|
|stricthttpfirewall|[Spring Security 自带防火墙!你都不知道自己的系统有多安全!](https://mp.weixin.qq.com/s/Fuu9rKoOvSyuvCKSyh6dUQ)|
|session-3|[什么是会话固定攻击?Spring Boot 中要如何防御会话固定攻击?](https://mp.weixin.qq.com/s/9SaNvVfiivWUIAe6OZgpZQ)|
|session-4|[集群化部署,Spring Security 要如何处理 session 共享?](https://mp.weixin.qq.com/s/EAacxjaNg8BJRkTkGFLLpQ)|
|csrf-1|[松哥手把手教你在 SpringBoot 中防御 CSRF 攻击!so easy!](https://mp.weixin.qq.com/s/TDm8ljxbHpMqteucfmeccA)|
|csrf-2|[松哥手把手教你在 SpringBoot 中防御 CSRF 攻击!so easy!](https://mp.weixin.qq.com/s/TDm8ljxbHpMqteucfmeccA)|
|csrf-3|[松哥手把手教你在 SpringBoot 中防御 CSRF 攻击!so easy!](https://mp.weixin.qq.com/s/TDm8ljxbHpMqteucfmeccA)|
|-|[要学就学透彻!Spring Security 中 CSRF 防御源码解析](https://mp.weixin.qq.com/s/evyI9wo30JI78_ZYGaKFRg)|
|securitydemo|[Spring Boot 中密码加密的两种姿势!](https://mp.weixin.qq.com/s/jBoU5j4YChDVwyX22SbtgA)|
|-|[Spring Security 要怎么学?为什么一定要成体系的学习?](https://mp.weixin.qq.com/s/6YgV5z3Bbd7wKP5TwI1TQg)|
|-|[Spring Security 两种资源放行策略,千万别用错了!](https://mp.weixin.qq.com/s/He_K-a-1JpQHZBPEUo2uZw)|
|client1, client2|[松哥手把手教你入门 Spring Boot + CAS 单点登录](https://mp.weixin.qq.com/s/BzdCr_rgLttRlq_SWQuG0Q)|
|client1, client2|[Spring Boot 实现单点登录的第三种方案!](https://mp.weixin.qq.com/s/YQMHms9BiVaTgYct4wk2cw)|
|client1, client2|[Spring Boot+CAS 单点登录,如何对接数据库?](https://mp.weixin.qq.com/s/dWwscpe9okkJjkGQMWVYvA)|
|-|[Spring Boot+CAS 默认登录页面太丑了,怎么办?](https://mp.weixin.qq.com/s/uWib5RtH1aEC7p5VK4S1Ig)|
|swagger-jwt|[用 Swagger 测试接口,怎么在请求头中携带 Token?](https://mp.weixin.qq.com/s/LvMOKeX09liTvujZi6XO3Q)|
|cors-1, cors-2|[Spring Boot 中三种跨域场景总结](https://mp.weixin.qq.com/s/FaSfN31z4BlxPI2QOa8t1Q)|
|httpbasic|[Spring Boot 中如何实现 HTTP 认证?](https://mp.weixin.qq.com/s/7T5XA1zxQkhMvffMxHKVvQ)|
|authorize|[Spring Security 中的四种权限控制方式](https://mp.weixin.qq.com/s/7cm99q5ZM4qUkekx0Xa0YQ)|
|authorize-2|[Spring Security 多种加密方案共存,老破旧系统整合利器!](https://mp.weixin.qq.com/s/_iau1jsnc50vs794_ib0IA)|
|-|[神奇!自己 new 出来的对象一样也可以被 Spring 容器管理!](https://mp.weixin.qq.com/s/edtYkmgx_SnYoqsy-yFmsQ)|
|-|[Spring Security 配置中的 and 到底该怎么理解?](https://mp.weixin.qq.com/s/42-rjiZShvZXYM_ULQt0YQ)|
|exception|[一文搞定 Spring Security 异常处理机制!](https://mp.weixin.qq.com/s/f1teXTEuDR7S0j_Ml2qL8g)|
|-|[写了这么多年代码,这样的登录方式还是头一回见!](https://mp.weixin.qq.com/s/dm2SmUzb7vQZA3C0NFp86A)|
|authenticationmanager|[Spring Security 竟然可以同时存在多个过滤器链?](https://mp.weixin.qq.com/s/S_maV7OvvfmYUO53AgCu0g)|
|multiusers|[Spring Security 可以同时对接多个用户表?](https://mp.weixin.qq.com/s/sF4vPZQv7rtBYhBhmONJ5w)|
|-|[在 Spring Security 中,我就想从子线程获取用户登录信息,怎么办?](https://mp.weixin.qq.com/s/4dcQ6lohB3sEcnkAXxdZwg)|
|-|[深入理解 FilterChainProxy【源码篇】](https://mp.weixin.qq.com/s/EZsChg5YG0TBadU4q_CAnA)|
|-|[深入理解 SecurityConfigurer 【源码篇】](https://mp.weixin.qq.com/s/PWIM9jgEB-F-m4Ove470wg)|
|-|[深入理解 HttpSecurity【源码篇】](https://mp.weixin.qq.com/s/Kk5c5pK5_LFcCpcnIr2VrA)|
|-|[深入理解 AuthenticationManagerBuilder 【源码篇】](https://mp.weixin.qq.com/s/kB4m0YJas9LHuNT8JH5ZmQ)|
|authenticationmanager|[花式玩 Spring Security ,这样的用户定义方式你可能没见过!](https://mp.weixin.qq.com/s/LBgZu-mBifPG_-azCG7Lcg)|
|-|[深入理解 WebSecurityConfigurerAdapter【源码篇】](https://mp.weixin.qq.com/s/vP-QGm9GNxMInIeGSZvWwQ)|
|-|[盘点 Spring Security 框架中的八大经典设计模式](https://mp.weixin.qq.com/s/d2o9QpK1EfBMRR8zfHhv2g)|
|-|[Spring Security 初始化流程梳理](https://mp.weixin.qq.com/s/D0weIKPto4lcuwl9DQpmvQ)|
|-|[为什么你使用的 Spring Security OAuth 过期了?松哥来和大家捋一捋!](https://mp.weixin.qq.com/s/0WOefpO6-aYSIRNiNnijyg)|
|-|[一个诡异的登录问题](https://mp.weixin.qq.com/s/-kDQbP1htEfn_8n7ZfKqmA)|
|-|[什么是计时攻击?Spring Boot 中该如何防御?](https://mp.weixin.qq.com/s/9yK32E44UnIep-5iX0s2Ow)|
|voter|[Spring Security 中如何让上级拥有下级的所有权限?](https://mp.weixin.qq.com/s/1ZWyD41R827FhghiCHY-Sw)|
|voter|[Spring Security 权限管理的投票器与表决机制](https://mp.weixin.qq.com/s/sU97RQjQq2-XXQt49LkSeQ)|
|-|[Spring Security 中的 hasRole 和 hasAuthority 有区别吗?](https://mp.weixin.qq.com/s/GTNOa2k9_n_H0w24upClRw)|
|-|[Spring Security 中如何细化权限粒度?](https://mp.weixin.qq.com/s/Q9lfrJ3iioUpYEO9elCtsw)|
|acls|[一个案例演示 Spring Security 中粒度超细的权限控制!](https://mp.weixin.qq.com/s/98CIxDVrhAiE14EXvgf7Vw)|
|-|[Spring Security 中最流行的权限管理模型!](https://mp.weixin.qq.com/s/B03cDEE1i3gT0yDhIxGATw)|
|based_on_url|[我又发现 Spring Security 中一个小秘密!](https://mp.weixin.qq.com/s/FzNpjyjbi6bQEEkUDRsnXA)|
|-|[聊一个 GitHub 上开源的 RBAC 权限管理系统,很6!](https://mp.weixin.qq.com/s/K5YDMBREzObp2yrmLtGdjw)|
|-|[RBAC 案例解读【2】](https://mp.weixin.qq.com/s/obLTxCddhJ0kqTqoVN-Axg)|
0
g00glen00b/spring-samples
A series of examples used to demonstrate certain features of Spring.
spring spring-boot spring-cloud spring-cloud-netflix spring-security
# Example applications using Spring boot

This is a collection of small applications demonstrating certain features of Spring boot. Most of these are covered as well in [my blog posts](https://dimitri.codes/tag/spring-boot/).

## Contents

| Blog post | GitHub |
| --------- | ------ |
| [Writing your first Spring webapp with Spring Boot](https://dimitri.codes/spring-webapp/) | [spring-boot-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-webapp) |
| [JPA made easy with Spring data's repositories](https://dimitri.codes/spring-data-jpa/) | [spring-boot-jpa-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-webapp) |
| [Handling errors with Spring MVC](https://dimitri.codes/handling-errors-with-spring-mvc/) | [spring-boot-jpa-error-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-error-webapp) |
| [Using Docker containers for your Spring boot applications](https://dimitri.codes/docker-spring-boot/) | [spring-boot-jpa-docker-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-docker-webapp) |
| [Take your Spring apps to the cloud with Bluemix and Docker](https://dimitri.codes/docker-containers-on-bluemix/) | [spring-boot-jpa-docker-bluemix-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-docker-bluemix-webapp) |
| [Internationalization (i18n) with Spring](https://dimitri.codes/spring-internationalization-i18n/) | [spring-boot-i18n-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-i18n-webapp) |
| [Handling forms with Spring Web and JSR-303](https://dimitri.codes/spring-form-validation/) | [spring-boot-form](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-form) |
| [Producing REST API's with Spring](https://dimitri.codes/producing-rest-apis-with-spring/) | [spring-boot-rest](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-rest) |
| [Consuming REST API's with Spring](https://dimitri.codes/consuming-rest-apis-with-spring/) | [spring-boot-rest](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-rest) |
| [Documenting your REST API with Swagger and Springfox](https://dimitri.codes/documenting-rest-api-swagger-springfox/) | [spring-boot-swagger](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-swagger) |
| [Exploring contract first options with Swagger](https://dimitri.codes/exploring-contract-first-options-swagger/) | [spring-boot-swagger](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-swagger) |
| [Using Ehcache 3 with Spring boot](https://dimitri.codes/spring-boot-cache-ehcache/) | [spring-boot-ehcache](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-ehcache) |
| [Using the Netflix stack with Spring boot: Eureka](https://dimitri.codes/using-the-netflix-stack-with-spring-boot-eureka/) | [spring-boot-eureka](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-eureka) |
| [Using the Netflix stack with Spring boot: Ribbon](https://dimitri.codes/using-netflix-stack-spring-boot-ribbon/) | [spring-boot-eureka](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-eureka) |
| [Using the Netflix stack with Spring boot: Hystrix](https://dimitri.codes/spring-boot-netflix-hystrix/) | [spring-boot-hystrix](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-hystrix) |
| [Mapping with Dozer](https://dimitri.codes/mapping-with-dozer/) | [spring-boot-jpa-dozer-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-dozer-webapp) |
| [Mapping beans with MapStruct](https://dimitri.codes/mapstruct/) | [spring-boot-jpa-mapstruct-webapp](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-jpa-mapstruct-webapp) |
| [Getting started with Spring boot 2.0](https://dimitri.codes/getting-started-spring-boot-2/) | [spring-boot-2-web-crawler](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-2-web-crawler) |
| [Validating the input of your REST API with Spring](https://dimitri.codes/validating-the-input-of-your-rest-api-with-spring) | [spring-boot-validation](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-validation) |
| [Working with Spring boot and GraphQL](https://dimitri.codes/graphql-spring-boot) | [spring-boot-graphql](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-graphql) |
| [Indexing documents with Spring batch](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-solr-batch) | [spring-boot-solr-batch](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-solr-batch) |
| [Generating documentation for your REST API with Spring REST Docs](https://dimitri.codes/spring-rest-docs) | [spring-boot-restdocs](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-restdocs) |
| [Generating documentation for your REST API with Spring and Swagger](https://dimitri.codes/generating-static-documentation-swagger) | [spring-boot-swagger-static-docs](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-swagger-static-docs) |
| [Reactive relational databases with R2DBC and Spring](https://dimitri.codes/reactive-relational-databases-r2dbc-spring) | [spring-boot-r2dbc](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-r2dbc) |
| [Reactive streams over the network with RSocket](https://dimitri.codes/reactive-streams-rsocket) | [spring-boot-rsocket](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-rsocket) |
| [Battle of the Spring REST clients: RestTemplate, WebClient or RestClient?](https://dimitri.codes/resttemplate-or-webclient) | [spring-boot-restclient](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-restclient) |
| [Getting started with htmx and Spring Boot](https://dimitri.codes/spring-boot-htmx-intro) | [spring-boot-htmx](https://github.com/g00glen00b/spring-samples/tree/master/spring-boot-htmx) |
0
kunal-kushwaha/DSA-Bootcamp-Java
This repository consists of the code samples, assignments, and notes for the Java data structures & algorithms + interview preparation bootcamp of WeMakeDevs.
algorithms competitive-programming data-structures faang-interview faang-preparation faang-questions google-interview interview-preparation java leetcode leetcode-java leetcode-solutions math
# DSA + Interview preparation bootcamp - [Join Replit](http://join.replit.com/kunal-kushwaha) - Subscribe to the [YouTube channel](https://www.youtube.com/KunalKushwaha?sub_confirmation=1) - [Lectures](https://www.youtube.com/playlist?list=PL9gnSGHSqcnr_DxHsP7AW9ftq0AtAyYqJ) - [Course website](https://wemakedevs.org/courses/dsa) - [Assignments](https://github.com/kunal-kushwaha/DSA-Bootcamp-Java/tree/main/assignments) (solutions can be found on LeetCode) - [Connect with me](http://kunalkushwaha.com)
0
gupaoedu-tom/spring5-samples
Companion code samples for the book 《Spring5核心原理与30个类手写实战》
null
# 《Spring5核心原理与30个类手写实战》("Spring 5 Core Principles and 30 Hand-Written Classes in Practice")

### Over 10,000 copies sold in 4 months, a fixture on the bestseller lists

### Buy on JD: [https://item.jd.com/12650902.html](https://item.jd.com/12650902.html)

### Buy on Dangdang: [http://product.dangdang.com/27905338.html](http://product.dangdang.com/27905338.html)

Ten years of Spring research and insight poured into one book.

★ The book covers almost every problem you may encounter in Spring applications: core principles (IoC, DI, AOP, MVC), high-fidelity hand-written re-implementations, data access, and more.

★ It analyzes the principles and new features of Spring 5 in depth, deriving Spring's design step by step from environment setup, top-level structural design, data access, and so on.

★ With this book you can:
- Read the source code without getting lost, and easily find the entry points.
- Systematically study the design philosophy and solve problems more efficiently.
- Build architectural thinking and self-driven learning ability.
- Distill the essence of Spring's core ideas into 30 classes while keeping the functionality working.
- Look at classic, high-frequency Spring interview questions from a different angle.
- Learn about the new features of Spring 5.
- Lay the groundwork for a deeper understanding of Spring Boot.

# About me

### Why do I say I come from the art world?

I have loved calligraphy and painting since childhood and have a pair of hands good at both, with long, fair fingers, so my former stage name was "玉手藝人" (roughly, "artisan with jade hands"). In middle school I won first prize in a city-level calligraphy contest, first prize in the school art contest, and second prize in the school essay contest. I served as publicity director of the student union, in charge of editing, layout and design for the campus blackboard bulletin and school publications. After starting work in 2008, I did furniture modeling and graphic design, and personally designed the Gupao Academy logo. After becoming an instructor I took the English name "Tom", a homophone of my surname; in the community I am jokingly known as the best calligrapher among programmers and the best programmer among calligraphers.

### My technical career

My IT career really began in 2009. Before that I worked on UI design and front-end web pages; only in 2009 did I truly start Java back-end development, and I want to thank all the colleagues and teachers who helped me get into programming. From 2010 to 2014 I worked as team lead, project manager, architect and technical director, and formed my own views on many open-source frameworks. I habitually use visual thinking to understand the abstract world. For example: seeing binary 0 and 1, I think of yin and yang, the "two forms" in the I Ching; seeing RGB color values, I think of the three primary colors of refracted sunlight from art theory; watching dinner being cooked after work, I think of the template method pattern; seeing student, senior and charity cards on the bus, I think of the strategy pattern; and so on. Much of this book is infused with that kind of visual thinking.

### Why write a book?

I did not originally plan to write a book; it grew out of my students' requests. They felt that my learning methods, ways of thinking and teaching style are intuitive and easy to absorb, but video reaches a limited audience, so they suggested putting this experience into a printed book to bring greater value to the community. I would like to thank editor Dong Ying and the team at the Publishing House of Electronics Industry for reviewing and correcting this book, my wife for her silent support through countless late working nights, and the students of Gupao Academy for their valuable feedback on the content.

# Companion resource downloads

1. [一步一步手绘Spring DI运行时序图.jpg](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/%E4%B8%80%E6%AD%A5%E4%B8%80%E6%AD%A5%E6%89%8B%E7%BB%98Spring%20DI%E8%BF%90%E8%A1%8C%E6%97%B6%E5%BA%8F%E5%9B%BE.jpg) (hand-drawn Spring DI runtime sequence diagram)
2. [一步一步手绘Spring IOC运行时序图.jpg](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/%E4%B8%80%E6%AD%A5%E4%B8%80%E6%AD%A5%E6%89%8B%E7%BB%98Spring%20IOC%E8%BF%90%E8%A1%8C%E6%97%B6%E5%BA%8F%E5%9B%BE.jpg) (hand-drawn Spring IOC runtime sequence diagram)
3. [一步一步手绘Spring AOP运行时序图.jpg](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/%E4%B8%80%E6%AD%A5%E4%B8%80%E6%AD%A5%E6%89%8B%E7%BB%98Spring%20AOP%E8%BF%90%E8%A1%8C%E6%97%B6%E5%BA%8F%E5%9B%BE.jpg) (hand-drawn Spring AOP runtime sequence diagram)
4. [一步一步手绘Spring MVC运行时序图.jpg](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/%E4%B8%80%E6%AD%A5%E4%B8%80%E6%AD%A5%E6%89%8B%E7%BB%98Spring%20MVC%E8%BF%90%E8%A1%8C%E6%97%B6%E5%BA%8F%E5%9B%BE.jpg) (hand-drawn Spring MVC runtime sequence diagram)
5. [SpringMVC核心组件关系图.png](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/SpringMVC%E6%A0%B8%E5%BF%83%E7%BB%84%E4%BB%B6%E5%85%B3%E7%B3%BB%E5%9B%BE.png) (Spring MVC core component relationship diagram)
6. [SpringJDBC异常结构图.png](https://github.com/gupaoedu-tom/resouce/blob/master/spring5/SpringJDBC%E5%BC%82%E5%B8%B8%E7%BB%93%E6%9E%84%E5%9B%BE.png) (Spring JDBC exception hierarchy diagram)

# Technical exchange

![Follow the WeChat official account "Tom弹架构"](https://user-images.githubusercontent.com/54272541/139790847-bbcccc8c-cd5e-4bdc-9b1b-bc923a9d4d99.png)

### Also recommended: 《[Netty 4核心原理与手写RPC框架实战](https://github.com/gupaoedu-tom/netty4-samples)》, which sold over 10,000 copies within 3 months of release!

### Also recommended: 《[设计模式就该这样学](https://github.com/gupaoedu-tom/design-samples)》, now available for pre-order!
0