full_name stringlengths 7 104 | description stringlengths 4 725 ⌀ | topics stringlengths 3 468 ⌀ | readme stringlengths 13 565k ⌀ | label int64 0 1 |
|---|---|---|---|---|
muhammedsedef/Kafka-Example | Kafka Example | couchbase docker kafka kafka-ui postgresql spring-boot wiremock zookeeper | # Kafka-Example
In this project:
- There are 3 microservices. When a user is created via user-service, that service inserts a record into the user table (PostgreSQL)
and produces an event to the **user_service.user_created.0** topic.
- Notification-consumer-service listens to the **user_service.user_created.0** topic and simulates sending a
notification for each event it consumes. After the notification is sent successfully, the service inserts a record into the Couchbase
notification bucket.
- User-address-service also listens to the user_service.user_created.0 topic and consumes its events. Based on the user's address text,
it inserts a record into the user-address table (PostgreSQL).
## System Architecture

## Requirements
- [Java 11 JDK](https://www.oracle.com/tr/java/technologies/javase/jdk11-archive-downloads.html)
- [Docker](https://www.docker.com/products/docker-desktop/)
- [Data Grip](https://www.jetbrains.com/datagrip/download/#section=mac) or any other database GUIs
- [Postman](https://www.postman.com/downloads/)
## Setup
- Before running the project, start Docker Desktop
- Once Docker is up, run docker-compose.yml (you can find it in the infra-setup folder)
- After running the docker-compose.yml file, Docker Desktop should look like this:

- Continue with the DB Connection part
## DB Connection
# PostgreSQL Connection:
url: jdbc:postgresql://localhost:5432/kafka_example
username: example
password: example
* After you successfully connect to the database, you will see something like this in DataGrip:

# Couchbase Connection & Settings
- #### Open http://localhost:8091/ on your browser

username: Administrator
password: 123456
- #### Open the Buckets tab and click **ADD BUCKET**

- #### Enter the bucket name; in this example the bucket name is **"notification"**

- #### After that open the query tab on the left side

- #### Run the following two queries separately to create indexes on the bucket
1) CREATE PRIMARY INDEX `idx_default_primary_notification` ON `notification`
2) CREATE INDEX `id` ON `notification`(`id`)
- #### Open the Security tab on the left side and click **ADD USER**

- #### After clicking Add User, a popup appears; fill in the information like this:
Username: admin
Full Name: admin
Password: 123456
Verify Password: 123456

# Running
- #### Run each project's application class from your IDE.
- #### After all 3 applications are running, check the PostgreSQL DB in your GUI; you should see 2 newly created tables named **user** and **user_address**

- #### Check whether the topic was created at http://localhost:9090/ (Kafka UI)

- #### If you have come this far without any problems, open Postman and try a sample request.
- #### You can import the Postman collection shared in the postman_collection folder

- #### After executing the POST endpoint and getting a 200 success response in Postman, you can see the producer and consumer logs in your running terminals, and you can also check that the records are in your databases


- #### As you can see, our records were successfully inserted into our databases
- #### To test a batch request, use the MOCK DATA.json file in the postman folder: open the Postman runner, select that JSON file, and run.


# Topic Partition Settings
- #### To increase the topic's partition count, open http://localhost:9090/ (kafka-ui) and open the topic settings.

- #### As you can see, I set the partition count to 4
- #### After setting the partition count to 4, the consumers will rebalance: there are now 4 partitions, so the consumers bind to all of them

- #### If you want to run one more consumer app follow these steps:



- #### Now you can run one more consumer app; it will start on a random free port on your computer because server.port is set to 0 in application.yml

- #### As you can see, we have 4 partitions and 4 running consumer apps (user-address-service), so each consumer app binds to 1 partition of the user_service.user_created.0 topic
- #### You can run the batch request in Postman again and easily see each consumer app consume the events of its corresponding partition
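The key-to-partition mapping that makes this work can be sketched in plain Java. Note this is an illustrative sketch, not Kafka's real default partitioner (which uses murmur2 over the key bytes); the class and method names here are my own:

```java
// Illustrative sketch of keyed partitioning: records with the same key
// always land on the same partition, so each of the 4 consumers in the
// group reads a stable subset of keys.
public class PartitionSketch {
    // Map a record key to one of numPartitions partitions.
    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit instead of Math.abs (which overflows for Integer.MIN_VALUE).
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("user-42 -> partition " + partitionFor("user-42", 4));
    }
}
```

Because the mapping is deterministic, increasing the partition count changes which partition existing keys map to; this is one reason partition counts are usually chosen up front.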
| 1 |
bertilmuth/poem-hexagon | A simple example for a hexagonal architecture. | clean-architecture event-driven hexagonal-architecture java message-driven | # introduction
[](https://gitter.im/requirementsascode/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
A simple example for a clean hexagonal architecture.
It contains a use case model and command handlers that control the flow of the application.
The main class is [poem.simple.Main](https://github.com/bertilmuth/poem-hexagon/blob/master/src/main/java/poem/simple/Main.java).
This example is inspired by a [talk](https://www.youtube.com/watch?v=th4AgBcrEHA) by A. Cockburn and T. Pierrain on hexagonal architecture.
A [blog article](https://dev.to/bertilmuth/implementing-a-hexagonal-architecture-1kgf) describes the details.
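The core idea of the hexagonal style can be sketched in a few lines. This is a generic illustration with made-up names, not the actual ports of the poem-hexagon repo: the use case depends only on an interface (a port) that the domain defines, and an adapter implements it from the outside.

```java
import java.util.List;

// Minimal port/adapter sketch (names are illustrative, not from poem-hexagon).
public class HexagonSketch {
    // Driven port: the domain declares what it needs, not how it's provided.
    interface ObtainPoems {
        List<String> poemsOfLanguage(String language);
    }

    // Use case: pure domain logic, depends only on the port.
    static class DisplayRandomPoem {
        private final ObtainPoems poems;
        DisplayRandomPoem(ObtainPoems poems) { this.poems = poems; }
        String firstPoem(String language) {
            List<String> all = poems.poemsOfLanguage(language);
            return all.isEmpty() ? "" : all.get(0);
        }
    }

    // Adapter: one possible implementation, swappable without touching the domain.
    static class HardcodedPoems implements ObtainPoems {
        public List<String> poemsOfLanguage(String language) {
            return List.of("Roses are red");
        }
    }

    public static void main(String[] args) {
        System.out.println(new DisplayRandomPoem(new HardcodedPoems()).firstPoem("en"));
    }
}
```

Swapping `HardcodedPoems` for a file- or web-backed adapter requires no change to `DisplayRandomPoem`, which is the point of the architecture.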
| 1 |
phantasmicmeans/springboot-microservice-with-spring-cloud-netflix | msa backend service example with springboot REST API | eureka-client msa rest-api service spring-boot springcloud | [](http://hits.dwyl.io/phantasmicmeans/springboot-microservice-with-spring-cloud-netflix)
Spring Boot Microservice with Spring Cloud Netflix
==============
*by S.M.Lee*

> **NOTE**
> - This chapter builds Notice Service, one of the services in the MSA.
> - Notice Service is a simple REST API server that uses several Spring Cloud Netflix components.
> - Notice Service is implemented as a Spring Boot project. The generated JAR file is run as a Docker container.
> - With classic Spring, a dependency tool such as Maven or Gradle builds a WAR file that is deployed to a WAS such as Tomcat
to run the web application. Spring Boot instead embeds Tomcat in the JAR, so simply building and running the JAR starts the web application.
> - The service accesses its DB (MySQL 5.6) through a JPA repository.
## Service Description ##
**Project directory tree**
```
.
├── Dockerfile
├── mvnw
├── mvnw.cmd
├── pom.xml
├── src/main/java/com/example/demo
| | ├── AlarmServiceApplication.java
| | ├── domain
| | | └── Notice.java
| | ├── repository
| | │ └── NoticeRepository.java
| | ├── rest
| | │ └── NoticeController.java
| | ├── service
| | ├── NoticeService.java
| | └── NoticeServiceImpl.java
│ └── resources
│ ├── application.yml
│ └── bootstrap.yml
└── target
├── classes
├── notice-service-0.1.0.jar
├── notice-service.0.1.0.jar.original
├── generated-sources ...
```
**The service is a client of the Eureka server, the "Service Register & Discovery" server.**
Before going further, you need to understand Eureka. Hystrix is covered in the next chapter, but an understanding of Eureka is essential.
If your only goal is to build a plain REST API server, you can skip ahead.
> - *Understanding Netflix Eureka => https://github.com/phantasmicmeans/Spring-Cloud-Netflix-Eureka-Tutorial/*
> - *Understanding Hystrix => https://github.com/phantasmicmeans/Spring-Cloud-Netflix-Hystrix/*
> - *Service Registration and Discovery => https://spring.io/guides/gs/service-registration-and-discovery/*
> - *Service Discovery: Eureka Clients => https://cloud.spring.io/spring-cloud-netflix/multi/multi__service_discovery_eureka_clients.html*
Reading all of the references above first will make this tutorial go smoothly.
In short, a microservice built as a Eureka client sends its meta-data (host, port, address, and so on) to the Eureka server (registry). Eureka clients can then use the registry information to communicate with each other.
Each Eureka client also sends a heartbeat to the Eureka server to signal that it is alive. If the Eureka server does not receive a heartbeat within a set interval, it removes that client's information from the registry.
A Eureka client registers its hostname in the registry, which acts like DNS; later, Netflix's API gateway can combine Ribbon + Hystrix + Eureka for convenient dynamic routing.
With that big picture in mind, let's build the server and turn it into a Eureka client.
## 1. Dependency ##
To make the service a Eureka client, add the spring-cloud-starter-netflix-eureka-client dependency. Add the hystrix dependency for Hystrix support, and also add the dockerfile-maven-plugin.
**pom.xml**
```xml
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.1.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<spring-cloud.version>Finchley.M9</spring-cloud.version>
<docker.image.prefix>phantasmicmeans</docker.image.prefix>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.21</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.3.6</version>
<configuration>
<repository>${docker.image.prefix}/${project.artifactId}</repository>
<buildArgs>
<JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
</buildArgs>
</configuration>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
```
## 2. Configuration ##
In a Spring Cloud application, bootstrap.yml is loaded before application.yml. Here bootstrap.yml holds the DB connection settings, while application.yml holds the application's port and the Eureka server instance information.
**1. bootstrap.yml**
```yml
spring:
application:
name: notice-service
jpa:
hibernate:
ddl-auto: update
show_sql: true
use_sql_comments: true
format_sql: true
datasource:
url: jdbc:mysql://{Your_MYSQL_Server_Address}:3306/notice
username: {MYSQL_ID}
password: {MYSQL_PASSWORD}
driver-class-name: com.mysql.jdbc.Driver
hikari:
maximum-pool-size: 2
```
Enter your MySQL server address in spring.datasource.url, and fill in the username and password as well.
**2. application.yml**
```yml
server:
port: 8763
eureka:
client:
healthcheck: true
fetch-registry: true
serviceUrl:
defaultZone: ${vcap.services.eureka-service.credentials.uri:http://{Your-Eureka-server-Address}:8761}/eureka/
instance:
preferIpAddress: true
```
Add your Eureka server address to eureka.client.serviceUrl.defaultZone as shown above.
* eureka.client.fetch-registry - whether the client fetches information about the other Eureka clients in the registry. Set this to true!
* eureka.client.serviceUrl.defaultZone - the official Spring Cloud Netflix documentation describes it as: "defaultZone" is a magic string fallback value that provides the service URL for any client that does not express a preference (in other words, it is a useful default). For now, just use it as-is.
* eureka.instance.preferIpAddress - when a Eureka client registers itself in the Eureka registry, it registers with eureka.instance.hostname. In some cases, however, the IP address is more useful than the hostname; here we will use the IP address.
* eureka.instance.hostname - if Java cannot resolve the hostname, the IP address is sent to the Eureka registry instead. (To avoid this, set eureka.instance.hostname={your_hostname} explicitly, or use eureka.instance.hostname=${HOST_NAME} to supply the hostname via an environment variable at run time.)
* eureka.instance.instanceId - the example above does not register an instanceId. The default is ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instance_id:${server.port}}}. The Eureka server can use it to distinguish different clients of the same service (application).
*Reference*
* When Eureka clients register with the Eureka registry, they appear as follows. (The screenshot below is just an example.)

The screenshot shows Application, AMIs, Availability Zones, and Status.
* The services listed under Application are each Eureka client's spring.application.name.
* Status shows whether each service is currently Up or Down.
One more thing to know is the list to the right of Status: it contains each Eureka client's eureka.instance.instanceId value.
To make this concrete, look at Notice-Service.
Notice-Service has 3 clients in the Up state:
* notice-service:7c09a271351a998027f0d1e2c72148e5
* notice-service:14d5f9837de754b077a6b58b7e159827
* notice-service:7c6d41264f2f71925591bbc07cfe51ec
These 3 clients share spring.application.name=notice-service, but each has a different eureka.instance.instanceId.
In other words, when several clients register with the same spring.application.name, the Eureka registry tells them apart by eureka.instance.instanceId. Keep this in mind; we will use it later for dynamic routing.
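If you prefer readable, guaranteed-unique instance ids, you can set the property explicitly. A hedged sketch (the exact placeholder you combine with the name is a matter of taste, and key naming can vary slightly across Spring Cloud versions):

```yml
eureka:
  instance:
    # Illustrative: combine the service name with a random value so that
    # multiple instances of the same application stay distinguishable.
    instance-id: ${spring.application.name}:${random.value}
```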
## 3. EurekaClient ##
```java
@SpringBootApplication
@EnableEurekaClient
public class AlarmServiceApplication {
public static void main(String[] args) {
SpringApplication.run(AlarmServiceApplication.class, args);
}
}
```
The dependencies were configured earlier, so all that remains is adding the @EnableEurekaClient annotation to the main class.
Now let's turn this Eureka client into a REST API server.
## 4. Building the REST API Server ##
**REST API**
METHOD | PATH | DESCRIPTION
------------|-----|------------
GET | /notice/ | return all notices
GET | /notice/{receiver_ID} | return the notices for the given receiver
GET | /notice/latest/{receiver_ID} | return the 10 most recent notices for the given receiver
GET | /notice/previous/{receiver_ID}/{id} | return the 10 notices for the given receiver that precede {id}
POST | /notice/ | insert a notice
**Table(table name = notice) description**
| Field | Type | Null | Key | Default | Extra |
|--------------|-------------|------|-----|---------|----------------|
| id | int(11) | NO | PRI | NULL | auto_increment |
| receiver_id | varchar(20) | NO | | NULL | |
| sender_id | varchar(20) | NO | | NULL | |
| article_id | int(11) | NO | | NULL | |
We will access the DB with JPA, so let's take a quick look at what JPA is.
(Set up the DB yourself.)
**What is JPA?**
JPA is the standard ORM technology of the Java ecosystem. The Java Persistence API (JPA) provides a standard ORM for accessing relational databases and replaces the Entity Beans of old EJB. Hibernate and OpenJPA are implementations; JPA is the standard interface they implement.
**What is ORM?**
ORM maps objects to a relational database. MyBatis and similar tools widely used with classic Spring are not ORMs; they map and execute SQL queries.
With Spring Data JPA, you can develop the application by accessing the DB from an object point of view.
**Why use JPA?**
1. Productivity => developers no longer write repetitive SQL and CRUD code by hand.
2. Performance => caching avoids executing the same SQL repeatedly.
3. Standard => knowing the standard makes it easy to switch to another implementation.
**JPA Annotations**
Annotation | DESCRIPTION
----------|------------
@Entity | marks a class as an entity.
@Table | (name = "table name"), specifies the table to map.
@Id | maps a field of the entity class to the table's primary key.
@Column | maps a field to a column.
The @Column annotation is not strictly required; by default, a member variable is mapped to the DB column with the same name.
Likewise, @Table by default maps an @Entity class to the DB table whose name matches the class name.
## 4.1 JPA Entity ##
Spring Boot encourages accessing data through JPA. Let's use it to build the REST API server.
The class below is the JPA entity.
**Notice.java**
```java
@Entity
public class Notice {
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
private int id;
private String receiver_id;
private String sender_id;
private int article_id;
protected Notice() {}
public Notice(final int id, final String receiver_id, final String sender_id,final int article_id)
{
this.id=id; // note: the original excerpt silently dropped the id argument
this.receiver_id=receiver_id;
this.sender_id=sender_id;
this.article_id=article_id;
}
public int getId()
{
return id;
}
public String getReceiver_id()
{
return receiver_id;
}
public String getSender_id()
{
return sender_id;
}
public void setReceiver_id(String receiver_id)
{
this.receiver_id=receiver_id;
}
public void setSender_id(String sender_id)
{
this.sender_id=sender_id;
}
public int getArticle_id()
{
return article_id;
}
public void setArticle_id(int article_id)
{
this.article_id=article_id;
}
@Override
public String toString()
{
return String.format("Notice [id = %d ,receiver_id = '%s', sender_id = '%s', article_id = %d] ", id, receiver_id, sender_id, article_id);
}
}
```
## 4.2 Repository ##
Once the entity class exists, create a repository interface. Spring provides the
CrudRepository interface, which supports basic insert, delete, and update operations for an entity.
**NoticeRepository.java**
```java
public interface NoticeRepository extends CrudRepository<Notice, Integer>{ // ID type matches Notice's int id
@Query("SELECT n FROM Notice n WHERE receiver_id=:receiver_id ORDER BY id DESC")
List<Notice> findNoticeByReceiverId(@Param("receiver_id") String receiver_id);
@Query("SELECT n FROM Notice n WHERE receiver_id=:receiver_id ORDER BY id DESC")
List<Notice> findLatestNoticeByReceiverId(@Param("receiver_id") String receiver_id, Pageable pageable);
@Query("SELECT n FROM Notice n WHERE n.receiver_id=:receiver_id AND n.id < :id ORDER BY n.id DESC")
List<Notice> findPreviousNoticeByReceiverId(@Param("receiver_id")String receiver_id, @Param("id") int id, Pageable pageable);
}
```
The code above is the repository for the Notice entity. Besides the basic CRUD operations, the methods the author needed are declared with @Query so they can be used like plain SQL.
CrudRepository also provides many other methods such as findAll(), findAllById(), and findById(). Be sure to consult the following reference for details.
* Interface CrudRepository<T,ID> => https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/CrudRepository.html
## 5. Service ##
Now it is time to create the service interface we actually need.
```
├── rest
│ ├── NoticeController.java
├── service
├── NoticeService.java
└── NoticeServiceImpl.java
```
First create NoticeService.java and NoticeServiceImpl.java. NoticeService is an interface, and NoticeServiceImpl.java implements it. The implementations use the methods of NoticeRepository.
**NoticeService.java**
```java
public interface NoticeService {
List<Notice> findAllNotice();
List<Notice> findAllNoticeByReceiverId(String receiver_id);
List<Notice> findLatestNoticeByReceiverId(String receiver_id);
List<Notice> findPreviousNoticeByReceiverId(String receiver_id, int id);
Notice saveNotice(Notice notice);
}
```
**NoticeServiceImpl.java (excerpt)**
```java
@Service("noticeService")
public class NoticeServiceImpl implements NoticeService{
private final Logger logger = LoggerFactory.getLogger(this.getClass());
@Autowired
private NoticeRepository noticeRepository;
@Override
public List<Notice> findAllNotice()
{
Optional<Iterable<Notice>> maybeNoticeIter = Optional.ofNullable(noticeRepository.findAll());
return Lists.newArrayList(maybeNoticeIter.get());
}
@Override
public List<Notice> findAllNoticeByReceiverId(String receiver_id)
{
Optional<List<Notice>> maybeNotice =
Optional.ofNullable(noticeRepository.findNoticeByReceiverId(receiver_id));
return maybeNotice.get();
}
@Override
public List<Notice> findLatestNoticeByReceiverId(String receiver_id)
{
Optional<List<Notice>> maybeLatestNotice=
Optional.ofNullable(noticeRepository.findLatestNoticeByReceiverId(receiver_id, PageRequest.of(0, 10)));
return maybeLatestNotice.get();
}
```
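One thing worth flagging in the excerpt above: `Optional.ofNullable(x).get()` throws `NoSuchElementException` whenever `x` is null, which defeats the purpose of wrapping it in an Optional. A plain-Java sketch of the safer `orElse` pattern (illustrative, outside Spring; the names are mine):

```java
import java.util.Collections;
import java.util.List;
import java.util.Optional;

// Illustrative sketch: Optional.ofNullable(x).get() throws if x is null,
// while orElse supplies a safe fallback value instead.
public class OptionalSketch {
    static List<String> safeList(List<String> maybeNull) {
        return Optional.ofNullable(maybeNull).orElse(Collections.emptyList());
    }

    public static void main(String[] args) {
        System.out.println(safeList(null).size()); // 0, instead of an exception
    }
}
```

Returning an empty list for "no results" also lets callers iterate without null checks.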
## 6. Rest Controller
Now let's build the controller. Create a separate rest package and define the RestControllers there.
```
├── rest
│ ├── NoticeController.java
```
Create a REST controller by adding the @RestController annotation.
(Applying Hystrix methods comes in the next step; here we only build the REST API.)
**NoticeController.java (excerpt)**
```java
@RestController
@CrossOrigin(origins="*")
public class NoticeController {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
public static List<Notice> Temp;
@Autowired
private NoticeService noticeService;
@Autowired
private DiscoveryClient discoveryClient;
@RequestMapping(value = "/notice", method=RequestMethod.GET)
public ResponseEntity<List<Notice>> getAllNotice(){
try{
Optional<List<Notice>> maybeAllStory = Optional.ofNullable(noticeService.findAllNotice());
return new ResponseEntity<List<Notice>>(maybeAllStory.get(), HttpStatus.OK);
}catch(Exception e)
{
return new ResponseEntity<List<Notice>>(HttpStatus.NOT_FOUND);
}
}
@RequestMapping(value="/notice/{receiver_id}", method = RequestMethod.GET)
public ResponseEntity<List<Notice>> getAllNoticeByReceiverId(@PathVariable("receiver_id") final String receiver_id)
{
try {
Optional<List<Notice>> maybeSelectedNotice =
Optional.of(noticeService.findAllNoticeByReceiverId(receiver_id));
return new ResponseEntity<List<Notice>>(maybeSelectedNotice.get(), HttpStatus.OK);
}catch(Exception e)
{
return new ResponseEntity<List<Notice>>(HttpStatus.NOT_FOUND);
}
}
```
## 7. Maven Packaging
You can use the Maven installed on the host OS, or the Maven wrapper bundled with the Spring Boot application.
(The Maven wrapper works on different operating systems such as Linux, OSX, Windows, and Solaris, so when Jenkins later builds several services there is no need to align their Maven versions.)
*A Quick Guide to Maven Wrapper => http://www.baeldung.com/maven-wrapper*
**a. Using the host OS's Maven**
```bash
[sangmin@Mint-SM] ~/springcloud-service $ mvn package
```
**b. Using the Maven wrapper**
```bash
[sangmin@Mint-SM] ~/springcloud-service $ ./mvnw package
```
## 8. Execute Spring Boot Application
Let's verify that the REST API server was built correctly.
```bash
[sangmin@Mint-SM] ~/springcloud-service $ java -jar target/{your_application_name}.jar
```
Check through the Eureka dashboard that the client registered correctly.
Check your Eureka dashboard:
* http://{Your-Eureka-Server-Address}:8761
* http://{Your-Eureka-Server-Address}:8761/eureka/apps
It may take a little while for the client to register with the Eureka server.
## 9. Dockerizing
It is time to build a Docker image of the Eureka client. First, package the JAR, then write a Dockerfile.
> - $ mvn package
**Dockerfile**
```
FROM openjdk:8-jdk-alpine
VOLUME /tmp
#ARG JAR_FILE
#ADD ${JAR_FILE} app.jar
#To build the image with dockerfile-maven-plugin, comment out the ADD line below and uncomment the two lines above.
ADD ./target/notice-service-0.0.1.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
```
Once the Dockerfile is written, build the image.
**a. With dockerfile-maven-plugin**
```bash
[sangmin@Mint-SM] ~/springcloud-service $ ./mvnw dockerfile:build
```
**b. With the docker CLI**
```bash
[sangmin@Mint-SM] ~/springcloud-service $ docker build -t {your_docker_id}/notice-service:latest .
```
Then confirm that the Docker image was created successfully:
```bash
[sangmin@Mint-SM] ~/springcloud-service $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
phantasmicmeans/notice-service latest 4b79d6a1ed24 2 weeks ago 146MB
openjdk 8-jdk-alpine 224765a6bdbe 5 months ago 102MB
```
## 10. Run Docker Container
Now that the Docker image has been created, run it.
```bash
[sangmin@Mint-SM] ~ $ docker run -it -p 8763:8763 phantasmicmeans/notice-service:latest
```
Now check through the Eureka dashboard that the client started correctly.
## Conclusion
We have now built a microservice as a simple REST API server and configured it as a Eureka client. In the next chapter we will apply Hystrix to this Eureka-client microservice.
| 1 |
AdamBien/javaee8-mvc-sample | Java EE 8 MVC (JSR-371) Example | null | # javaee8-mvc-sample
Java EE 8 MVC [JSR-371](https://mvc-spec.java.net) example based on JAX-RS, EJBs and JSPs.
Currently this sample requires a dependency to the Java EE 8 MVC Reference Implementation [ozark](https://ozark.java.net) (see [pom](https://github.com/AdamBien/javaee8-mvc-sample/blob/master/time/pom.xml)) and GlassFish daily [build](http://dlc.sun.com.edgesuite.net/glassfish/4.1/nightly/glassfish-4.1-b13-03_16_2015.zip)
# Installation
Either build the project from sources:
`git clone https://github.com/AdamBien/javaee8-mvc-sample/` and open it with NetBeans, then just “Run” it on GlassFish daily [build](http://dlc.sun.com.edgesuite.net/glassfish/4.1/nightly/glassfish-4.1-b13-03_16_2015.zip)
or deploy the war:
Drop the [time.war](https://github.com/AdamBien/javaee8-mvc-sample/releases/download/v0.0.1/time.war) to glassfish/domains/domain1/autodeploy/ and point the browser to: [http://localhost:8080/time/views/time](http://localhost:8080/time/views/time).
| 1 |
mckeeh3/akka-java-cluster-sharding | Akka Java cluster sharding example | null | ## Akka Java Cluster Sharding Example
### Introduction
This is a Java, Maven, Akka project that demonstrates how to setup an
[Akka Cluster](https://doc.akka.io/docs/akka/current/index-cluster.html)
with an example implementation of
[Cluster Sharding](https://doc.akka.io/docs/akka/current/cluster-sharding.html).
This project is one in a series of projects that starts with a simple Akka Cluster project and progressively builds up to examples of event sourcing and command query responsibility segregation.
The project series is composed of the following projects:
* [akka-java-cluster](https://github.com/mckeeh3/akka-java-cluster)
* [akka-java-cluster-aware](https://github.com/mckeeh3/akka-java-cluster-aware)
* [akka-java-cluster-singleton](https://github.com/mckeeh3/akka-java-cluster-singleton)
* [akka-java-cluster-sharding](https://github.com/mckeeh3/akka-java-cluster-sharding) (this project)
* [akka-java-cluster-persistence](https://github.com/mckeeh3/akka-java-cluster-persistence)
* [akka-java-cluster-persistence-query](https://github.com/mckeeh3/akka-java-cluster-persistence-query)
Each project can be cloned, built, and runs independently of the other projects.
This project contains an example implementation of cluster sharding. Here we will focus on the implementation details in this project. Please see the
[Akka documentation](https://doc.akka.io/docs/akka/current/cluster-sharding.html)
for a more detailed discussion about cluster sharding.
### What is cluster sharding
According to the [Akka documentation](https://doc.akka.io/docs/akka/current/cluster-sharding.html#introduction),
"*Cluster sharding is useful when you need to distribute actors across several nodes in the cluster and want to be able to interact with them using their logical identifier, but without having to care about their physical location in the cluster, which might also change over time.*"
The common usage for cluster sharding is to distribute and engage with individual actors across the cluster. Each of these distributed actors is used to handle messages that are intended for a specific entity. Each entity represents a thing, such as a bank account or a shopping cart. Entities each have a unique identifier, such as an account or shopping cart identifier.
In this example project, each entity represents a simple identifier and value. In a real application, entities represent real things, such as bank accounts. Each entity handles incoming messages. These messages are either commands, which are requests to change the state of the entity, or queries, which are requests to retrieve entity information.
Two actors are used to simulate clients that are sending messages to entities. The `EntityCommandActor` and the `EntityQueryActor` randomly generate messages to specific entities. These two actors are used to simulate incoming service requests. In a real implementation, the service would receive incoming messages, for example from an HTTP request, and forward those messages to specific entities to handle the request messages.
The process of forwarding these messages to the right entities, which could be distributed across multiple JVMs running in a cluster, is handled by cluster sharding. To send a message to an entity the sender simply sends the message to a shard region actor. The shard region actor is responsible for forwarding the message to the correct entity actor. The actual mechanics of this process is described in the
[How it works](https://doc.akka.io/docs/akka/current/cluster-sharding.html#how-it-works)
section of the cluster sharding documentation.

<center>Figure 1, Visualization of cluster sharding</center><br/>
The visualization in Figure 1 shows an example of cluster sharding. The blue leaf actors represent the entity actors; each entity actor represents the state of an entity. The green circles that connect to the entity circles represent the running shard actors. In the example system there are 15 shards configured. The shards connect to the orange shard region actors. These orange circles also represent other actors, such as the entity command and query actors, as well as the root of the actor system on each cluster node.
### How it works
The `Runner` class contains the `main` method. The `main` method starts one or more Akka systems and in each actor system it starts instances of multiple actors.
The arguments passed to the main method are expected to be zero or more port numbers. These port numbers will be used to start cluster nodes, one for each specified port.
If no ports are specified a default is used to start three JVMs using ports 2551, 2552, and 0 respectively.
~~~java
if (args.length == 0) {
startupClusterNodes(Arrays.asList("2551", "2552", "0"));
} else {
startupClusterNodes(Arrays.asList(args));
}
~~~
Multiple actor systems may be started in a single JVM. However, the typical use case is that a single actor system is started per JVM. One way to think of an
[actor system](https://doc.akka.io/docs/akka/current/general/actor-systems.html)
is that they are supercharged thread pools.
The `startupClusterNodes` method is called with the list of specified port numbers. Each port is used to start an actor system and then start up various actors that will run in the demonstration.
The most notable actor in this cluster sharding example is the `shardRegion` actor.
~~~java
ActorRef shardingRegion = setupClusterSharding(actorSystem);
~~~
This actor is instantiated in the `setupClusterSharding` method.
~~~java
private static ActorRef setupClusterSharding(ActorSystem actorSystem) {
ClusterShardingSettings settings = ClusterShardingSettings.create(actorSystem);
return ClusterSharding.get(actorSystem).start(
"entity",
EntityActor.props(),
settings,
EntityMessage.messageExtractor()
);
}
~~~
This method uses the `ClusterSharding` static `get` method to create an instance of a single shard region actor per actor system. More details on how the shard region actors are used is described above. The `get` method is used to create a shard region actor passing it the code to be used to create an instance of an entity actor (`EntityActor.props()`) and the code used to extract entity and shard identifiers from messages that are sent to entity actors (`EntityMessage.messageExtractor()`).
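The id-to-shard mapping that a message extractor performs can be sketched in plain Java. This is an illustrative sketch in the spirit of Akka's hash-based extractors, not the project's actual `EntityMessage` code; the names and shard count here are assumptions:

```java
// Pure-Java sketch of the idea behind a message extractor: derive a stable
// shard id from an entity id, so messages for the same entity always route
// to the same shard (and therefore the same entity actor instance).
public class ShardIdSketch {
    static final int NUMBER_OF_SHARDS = 15; // matches the count mentioned for Figure 1

    static String shardIdFor(String entityId) {
        // Mask the sign bit so the modulus is always non-negative.
        return String.valueOf((entityId.hashCode() & 0x7fffffff) % NUMBER_OF_SHARDS);
    }

    public static void main(String[] args) {
        System.out.println("entity-42 -> shard " + shardIdFor("entity-42"));
    }
}
```

Because the mapping depends only on the entity id, every node in the cluster computes the same shard for a given entity without any coordination.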
~~~java
actorSystem.actorOf(EntityCommandActor.props(shardingRegion), "entityCommand");
actorSystem.actorOf(EntityQueryActor.props(shardingRegion), "entityQuery");
~~~
The `shardRegion` actor reference is passed as a constructor argument to the `EntityCommandActor` and the `EntityQueryActor`. These generate simulated random message traffic, they use the `shardRegion` actor ref to send messages to specific entity actors.
~~~java
shardRegion.tell(command(), self());
~~~
The `shardRegion` actor handles the heavy lifting of routing each message to the correct entity actor.
Entity actors have an interesting life-cycle. When messages are sent to a shard region actor, it routes the message to a shard actor that is responsible for the specific entity as defined by the message entity identifier.
The shard region actor is responsible for routing entity messages to the specific shard actors, which may involve other cluster-sharding internal actors, and may include forwarding the message from one cluster node to another.
When a shard actor receives an incoming entity message, it checks whether the entity actor instance exists. If it does not, an instance of the entity actor is created, and the message is forwarded to the newly started instance. If the entity actor instance already exists, the message is forwarded from the shard actor to that specific entity actor instance.
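This create-on-first-message behavior can be sketched outside Akka with a map and `computeIfAbsent`. An illustrative sketch only (names are mine; real shards also handle passivation and remembering entities):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a shard's lazy entity creation: an "entity"
// (here just a StringBuilder standing in for actor state) is created
// only when its first message arrives, then reused afterwards.
public class ShardSketch {
    private final Map<String, StringBuilder> entities = new HashMap<>();

    String deliver(String entityId, String message) {
        StringBuilder entity = entities.computeIfAbsent(entityId, id -> new StringBuilder());
        entity.append(message);
        return entity.toString();
    }

    int startedEntities() {
        return entities.size();
    }

    public static void main(String[] args) {
        ShardSketch shard = new ShardSketch();
        shard.deliver("e1", "a");
        shard.deliver("e1", "b");
        System.out.println(shard.startedEntities()); // e1 was created exactly once
    }
}
```

The inverse of creation is passivation, shown in `EntityActor` below: an idle entity asks its parent shard to stop it, and the next message for that id simply recreates it.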
Here is the source code of our example entity actor.
~~~java
package cluster.sharding;
import akka.actor.AbstractLoggingActor;
import akka.actor.PoisonPill;
import akka.actor.Props;
import akka.actor.ReceiveTimeout;
import akka.cluster.sharding.ShardRegion;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import java.util.concurrent.TimeUnit;
class EntityActor extends AbstractLoggingActor {
private Entity entity;
private final FiniteDuration receiveTimeout = Duration.create(60, TimeUnit.SECONDS);
@Override
public Receive createReceive() {
return receiveBuilder()
.match(EntityMessage.Command.class, this::command)
.match(EntityMessage.Query.class, this::query)
.matchEquals(ReceiveTimeout.getInstance(), t -> passivate())
.build();
}
private void command(EntityMessage.Command command) {
log().info("{} <- {}", command, sender());
if (entity == null) {
entity = command.entity;
final EntityMessage.CommandAck commandAck = EntityMessage.CommandAck.ackInit(command);
log().info("{}, {} -> {}", commandAck, command, sender());
sender().tell(commandAck, self());
} else {
entity.value = command.entity.value;
final EntityMessage.CommandAck commandAck = EntityMessage.CommandAck.ackUpdate(command);
log().info("{}, {} -> {}", commandAck, command, sender());
sender().tell(commandAck, self());
}
}
private void query(EntityMessage.Query query) {
log().info("{} <- {}", query, sender());
if (entity == null) {
final EntityMessage.QueryAckNotFound queryAck = EntityMessage.QueryAckNotFound.ack(query);
log().info("{} -> {}", queryAck, sender());
sender().tell(queryAck, self());
} else {
final EntityMessage.QueryAck queryAck = EntityMessage.QueryAck.ack(query, entity);
log().info("{} -> {}", queryAck, sender());
sender().tell(queryAck, self());
}
}
private void passivate() {
context().parent().tell(new ShardRegion.Passivate(PoisonPill.getInstance()), self());
}
@Override
public void preStart() {
log().info("Start");
context().setReceiveTimeout(receiveTimeout);
}
@Override
public void postStop() {
log().info("Stop {}", entity == null ? "(not initialized)" : entity.id);
}
static Props props() {
return Props.create(EntityActor.class);
}
}
~~~
Entity actors are typically set up to shut themselves down when they stop receiving messages.
~~~java
@Override
public void preStart() {
log().info("Start");
context().setReceiveTimeout(receiveTimeout);
}
~~~
The timeout period is set via a call to the `setReceiveTimeout(...)` method. Whenever the entity actor receives a message, the timeout timer is reset.
~~~java
@Override
public Receive createReceive() {
return receiveBuilder()
.match(EntityMessage.Command.class, this::command)
.match(EntityMessage.Query.class, this::query)
.matchEquals(ReceiveTimeout.getInstance(), t -> passivate())
.build();
}
~~~
If no message is received before the timeout period expires, the entity actor is sent a `ReceiveTimeout` message. In our example entity actor, a receive timeout message triggers a call to a method called `passivate()`.
~~~java
private void passivate() {
context().parent().tell(new ShardRegion.Passivate(PoisonPill.getInstance()), self());
}
~~~
In the `passivate()` method a message is sent to the entity actor's parent actor, which is the shard actor, asking it to trigger a shutdown of this entity actor.
### Installation
~~~bash
git clone https://github.com/mckeeh3/akka-java-cluster-sharding.git
cd akka-java-cluster-sharding
mvn clean package
~~~
The Maven command builds the project and creates a self-contained runnable JAR.
### Run a cluster (Mac, Linux)
The project contains a set of scripts that can be used to start and stop individual cluster nodes or start and stop a cluster of nodes.
The main script `./akka` is provided to run a cluster of nodes or start and stop individual nodes.
Use `./akka node start [1-9] | stop` to start and stop individual nodes and `./akka cluster start [1-9] | stop` to start and stop a cluster of nodes.
The `cluster` and `node` start options will start Akka nodes on ports 2551 through 2559.
Both `stdout` and `stderr` output are sent to a file in the `/tmp` directory, using the file naming convention `/tmp/<project-dir-name>-N.log`.
Start node 1 on port 2551 and node 2 on port 2552.
~~~bash
./akka node start 1
./akka node start 2
~~~
Stop node 3 on port 2553.
~~~bash
./akka node stop 3
~~~
Start a cluster of four nodes on ports 2551, 2552, 2553, and 2554.
~~~bash
./akka cluster start 4
~~~
Stop all currently running cluster nodes.
~~~bash
./akka cluster stop
~~~
You can use the `./akka cluster start [1-9]` script to start multiple nodes and then use `./akka node start [1-9]` and `./akka node stop [1-9]`
to start and stop individual nodes.
Use the `./akka node tail [1-9]` command to `tail -f` a log file for nodes 1 through 9.
The `./akka cluster status` command displays the status of a currently running cluster in JSON format using the
[Akka Management](https://developer.lightbend.com/docs/akka-management/current/index.html)
extension
[Cluster Http Management](https://developer.lightbend.com/docs/akka-management/current/cluster-http-management.html).
### Run a cluster (Windows, command line)
The following Maven command runs a single JVM with 3 Akka actor systems on ports 2551, 2552, and a randomly selected port.
~~~~bash
mvn exec:java
~~~~
Use CTRL-C to stop.
To run on specific ports, use the following `-D` option to pass in command line arguments.
~~~~bash
mvn exec:java -Dexec.args="2551"
~~~~
Running with no arguments is equivalent to the following.
~~~~bash
mvn exec:java -Dexec.args="2551 2552 0"
~~~~
A common way to run tests is to start single JVMs in multiple command windows. This simulates running a multi-node Akka cluster.
For example, run the following 4 commands in 4 command windows.
~~~~bash
mvn exec:java -Dexec.args="2551" > /tmp/$(basename $PWD)-1.log
~~~~
~~~~bash
mvn exec:java -Dexec.args="2552" > /tmp/$(basename $PWD)-2.log
~~~~
~~~~bash
mvn exec:java -Dexec.args="0" > /tmp/$(basename $PWD)-3.log
~~~~
~~~~bash
mvn exec:java -Dexec.args="0" > /tmp/$(basename $PWD)-4.log
~~~~
This runs a 4-node Akka cluster, starting 2 nodes on ports 2551 and 2552, which are the cluster seed nodes as configured in the `application.conf` file, and 2 nodes on randomly selected port numbers.
The optional redirect `> /tmp/$(basename $PWD)-4.log` is an example of pushing the log output to filenames based on the project directory name.
For convenience, in a Linux command shell define the following aliases.
~~~~bash
alias p1='cd ~/akka-java/akka-java-cluster'
alias p2='cd ~/akka-java/akka-java-cluster-aware'
alias p3='cd ~/akka-java/akka-java-cluster-singleton'
alias p4='cd ~/akka-java/akka-java-cluster-sharding'
alias p5='cd ~/akka-java/akka-java-cluster-persistence'
alias p6='cd ~/akka-java/akka-java-cluster-persistence-query'
alias m1='clear ; mvn exec:java -Dexec.args="2551" > /tmp/$(basename $PWD)-1.log'
alias m2='clear ; mvn exec:java -Dexec.args="2552" > /tmp/$(basename $PWD)-2.log'
alias m3='clear ; mvn exec:java -Dexec.args="0" > /tmp/$(basename $PWD)-3.log'
alias m4='clear ; mvn exec:java -Dexec.args="0" > /tmp/$(basename $PWD)-4.log'
~~~~
The p1-6 alias commands are shortcuts for cd'ing into one of the six project directories.
The m1-4 alias commands start an Akka node on the appropriate port. `stdout` is also redirected to the /tmp directory.
| 1 |
openpreserve/format-corpus | An openly-licensed corpus of small example files, covering a wide range of formats and creation tools. | null | format-corpus
=============
An openly-licensed corpus of small example files, covering a wide range of formats and creation tools.
All items, apart from the source code under 'tools', are CC0 licensed unless otherwise stated. The source code is Apache 2.0 licensed unless otherwise stated.
A recent summary of the contents of the repository can be found [here](http://www.opf-labs.org/format-corpus/tools/coverage/reports/).
How to Contribute
=================
See http://wiki.curatecamp.org/index.php/Collecting_format_ID_test_files for more information.
See [metadata-template.ext.md](https://github.com/openplanets/format-corpus/blob/master/metadata-template.ext.md) for a simple per-file metadata template.
Pooled Signatures
=================
As well as pooling example files, we also pool format signatures:
* Tika signatures staged here: https://github.com/openplanets/format-corpus/tree/master/tools/fidget/src/main/resources/tika-bl-staging
* Tika signatures later merged [here](https://github.com/openplanets/format-corpus/blob/master/tools/fidget/src/main/resources/org/apache/tika/mime/custom-mimetypes.xml)
* DROID signatures go [here](https://github.com/openplanets/format-corpus/tree/master/tools/fidget/src/main/resources/droid).
More details here: http://wiki.curatecamp.org/index.php/Improving_format_ID_coverage
| 1 |
politrons/RPC_reactive | Examples and explanations of how RPC systems works. | finagle grpc java reactive-programming thrift | Author Pablo Picouto García

## Reactive RPC
Here we cover, with some examples and explanations, how the most famous RPC frameworks, such as [gRPC](https://grpc.io/docs/quickstart/) and
[Thrift](https://thrift.apache.org/), work.
### gRPC
##### Simple gRPC

An example of how gRPC works between client-server
* [client](src/main/java/com/politrons/grpc/simple/RpcClient.java)
* [Service](src/main/java/com/politrons/grpc/simple/RpcServiceImpl.java)
* [proto](src/main/proto/rpc_contract.proto)
##### Reactive

An example of how to use streams gRPC between client-server
* [client](src/main/java/com/politrons/grpc/reactive/ReactiveClient.java)
* [service](src/main/java/com/politrons/grpc/reactive/ReactiveServiceImpl.java)
* [proto](src/main/proto/rpc_reactive.proto)
##### Configuration
Once your contracts (proto) are ready, you need to generate the classes that will
be used for the communication between client and server.
In these examples we decided to use the Maven plugin.
The plugin to add to your pom is:
```
<plugin>
<groupId>org.xolstice.maven.plugins</groupId>
<artifactId>protobuf-maven-plugin</artifactId>
<version>0.5.0</version>
<configuration>
<protocArtifact>
com.google.protobuf:protoc:3.3.0:exe:${os.detected.classifier}
</protocArtifact>
<pluginId>grpc-java</pluginId>
<pluginArtifact>
io.grpc:protoc-gen-grpc-java:1.4.0:exe:${os.detected.classifier}
</pluginArtifact>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>compile-custom</goal>
</goals>
</execution>
</executions>
</plugin>
```
### Thrift

An example of how thrift RPC works between client-server
* [client](src/main/scala/finagle/thrift/rpc/ThriftRPCClient.scala)
* [Service](src/main/scala/finagle/thrift/rpc/ThriftRPCServer.scala)
* [thrift](src/main/scala/finagle/thrift/idl/finagle_scrooge.thrift)
##### Configuration
Just like with gRPC, once your contracts (thrift) are ready, you need to generate the classes that will
be used for the communication between client and server.
In these examples we decided to use the Twitter Scrooge Maven plugin.
The plugin to add to your pom is:
```
<plugin>
<groupId>com.twitter</groupId>
<artifactId>scrooge-maven-plugin</artifactId>
<version>18.2.0</version>
<configuration>
<thriftSourceRoot>src/main/scala/finagle/thrift/idl/</thriftSourceRoot>
<thriftNamespaceMappings>
<thriftNamespaceMapping>
<from>finagle.thrift.idl</from>
<to>finagle.thrift</to>
</thriftNamespaceMapping>
</thriftNamespaceMappings>
<language>scala</language> <!-- default is scala -->
<thriftOpts>
<!-- add other Scrooge command line options using thriftOpts -->
<thriftOpt>--finagle</thriftOpt>
</thriftOpts>
<!-- tell scrooge to not to build the extracted thrift files (defaults to true) -->
<buildExtractedThrift>false</buildExtractedThrift>
</configuration>
<executions>
<execution>
<id>thrift-sources</id>
<phase>generate-sources</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
<execution>
<id>thrift-test-sources</id>
<phase>generate-test-sources</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
```
### Avro

An example of how avro encoder/decoder works between client-server
* [encoder](src/main/java/com/politrons/avro/SerializeAvro.java)
* [decoder](src/main/java/com/politrons/avro/DeserializeAvro.java)
* [avro](src/main/avro/person.avsc)
An example of how avro RPC works between client-server
* [client](src/main/java/com/politrons/avro/rpc/ClientAvroRPC.java)
* [Service](src/main/java/com/politrons/avro/rpc/ServerAvroRPC.java)
* [avro](src/main/avro/avro_rpc.avpr)
##### Configuration
Just like with gRPC, once your contracts (avro) are ready, you need to generate the classes that will
be used for the communication between client and server.
In these examples we use the `avro-maven-plugin`.
The plugin to add to your pom is:
```
<plugin>
<groupId>org.apache.avro</groupId>
<artifactId>avro-maven-plugin</artifactId>
<version>1.8.2</version>
<executions>
<execution>
<phase>generate-sources</phase>
<goals>
<goal>schema</goal>
<goal>protocol</goal>
<goal>idl-protocol</goal>
</goals>
<configuration>
<sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
<outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
```
## Benchmarks

For this benchmark we made 1000 requests, with a JSON body for REST and proto/thrift payloads for RPC.
* [Rest](src/main/scala/benchmark) Http finagle client against Grizzly server.
* [Rest](src/main/scala/benchmark) Http finagle client against Finagle server.
* [gRPC](src/main/java/com/politrons/grpc/benchmark/regular) using standard implementation.
* [gRPC Reactive](src/main/java/com/politrons/grpc/benchmark/reactive) using reactive StreamObserver.
* [Thrift RPC](src/main/scala/finagle/thrift/rpc) using Apache thrift.
* [Avro RPC](src/main/java/com/politrons/avro/rpc) Using Apache Avro.
##### Results

| 0 |
Bernardo-MG/spring-ws-security-soap-example | An example showing how to set up secured SOAP web services in Spring. | spring-ws ws-security wss4j xwss | null | 1 |
sorakylin/code_demo | Demo/code/example using some java framework(eg SSM, SpringBoot, SpringCloud). | null | 使用各类主流框架、以及中间件整合之类的代码演示,注释详细
---
[](https://mit-license.org/)
<br>
<br>
### Demo List
- [sc-demo-alibaba](https://github.com/skypyb/code_demo/tree/master/sc-demo-alibaba): Examples for the full Spring Cloud Alibaba stack: service governance, circuit breaking, communication, gateway, rate limiting, authentication, etc.
- [spring-security-demo](https://github.com/skypyb/code_demo/tree/master/spring-security-demo): (SQL scripts included) A standard RBAC permission design, with permission checks based on dynamic database queries (*at interface granularity, i.e. Request URL + Request Method*) and a JWT-based authentication/authorization flow, built with SpringBoot + SpringSecurity + MyBatis + jjwt
- [dubbo-springboot](https://github.com/skypyb/code_demo/tree/master/dubbo-springboot): A demo of integrating Dubbo with SpringBoot, using Zookeeper for service governance
- [sc-demo-microservice](https://github.com/skypyb/code_demo/tree/master/sc-demo-microservice): The full Spring Cloud Netflix stack, covering Zuul, Eureka, Ribbon, Hystrix, Feign and other components
- [ssm-backstage](https://github.com/skypyb/code_demo/tree/master/ssm-backstage): A simple back-office project built with Spring + SpringMVC + MyBatis annotation-driven development; MyBatis one-to-one/one-to-many mappings; the front end uses Thymeleaf + Layui
- [WebSocket](https://github.com/skypyb/code_demo/tree/master/WebSocket): A WebSocket application built with SpringBoot, covering both the traditional Tomcat approach and the `spring-boot-starter-websocket` approach
- [Cache-SpringBoot](https://github.com/skypyb/code_demo/tree/master/Cache-SpringBoot): SpringBoot's built-in cache implementation, with usage of the @CacheConfig, @CacheEvict, @CachePut and @Cacheable annotations
- [RabbitMQ-SpringBoot](https://github.com/skypyb/code_demo/tree/master/RabbitMQ-SpringBoot): SpringBoot integration with RabbitMQ; message sending/receiving and dead-letter queues
- [Event-Springboot](https://github.com/skypyb/code_demo/tree/master/Event-Springboot): Asynchronous event-driven (publish/listen) programming with SpringBoot; an AsyncConfigurer configuration demo; usage examples for ApplicationEvent, ApplicationListener, @EventListener and related interfaces/annotations
| 0 |
polkadot-java/api | Java APIs around Polkadot and any Substrate-based chain RPC calls. It is dynamically generated based on what the Substrate runtime provides in terms of metadata.Full documentation & examples available. | null | # Polkadot/Substrate Java Api
This library provides a Java wrapper around all the methods exposed by a Polkadot/Substrate network client and defines all the types exposed by a node.
- [packages](https://github.com/polkadot-java/api/tree/master/packages) -- Polkadot/substrate api Java implementation.
- [examples](https://github.com/polkadot-java/api/tree/master/examples) -- Demo projects (Gradle).
- [examples_runnable](https://github.com/polkadot-java/api/tree/master/examples_runnable) -- Demo executable JARs.
## JDK
Java 1.8
## Based JS code version
The Java version is based on JS commit [ff25a85ac20687de241a84e0f3ebab4c2920df7e](https://github.com/polkadot-js/api/commit/ff25a85ac20687de241a84e0f3ebab4c2920df7e).
## Substrate version
The working substrate version is 1.0.0-41ccb19c-x86_64-macos.
Newer Substrate versions may not be supported.
## Overview
The API is split up into a number of internal packages
- [@polkadot/api](packages/src/main/java/org/polkadot/api/) The API library, providing both Promise and RxJS Observable-based interfaces. This is the main user-facing entry point.
- [@polkadot/rpc](packages/src/main/java/org/polkadot/rpc/) RPC library.
- [@polkadot/type](packages/src/main/java/org/polkadot/type/) Basic types such as extrinsics and storage.
- [@polkadot/types](packages/src/main/java/org/polkadot/types/) Codecs for all Polkadot primitives.
## Document
* See the generated JavaDoc in /doc folder. Or visit the [document site](https://polkadot-java.github.io/)
* To generate the JavaDoc yourself, see `gendoc.sh` in the root folder
* To understand how the system works, you may refer to [Substrate](https://github.com/paritytech/substrate) and [Polkadot Network](https://polkadot.network/)
## Integrate the API into your projects
The project uses [Gradle](https://gradle.org/) as build tool. You need to install Gradle.
### Build the library with Gradle then link to the JAR
1. `git clone https://github.com/polkadot-java/api.git`
2. `cd api`
3. `gradle build`
4. Get the JARs in folder `build/libs/`
5. Add the JARs into your projects.
### Link to the source code directly
1. `git clone https://github.com/polkadot-java/api.git`
2. Import the gradle project in folder api/packages to your workspace.
3. Add links or dependencies in the IDE. This is different in different IDEs, please reference to your IDE document.
## How to build and use the sr25519 JNI
1. See polkadot-java/sr25519/readme.md to compile the sr25519 library (Rust and C++).
2. See polkadot-java/sr25519/cpp/compile.sh how to compile the JNI shared library.
3. See polkadot-java/sr25519/libs/readme.md how to use the JNI in the Java API.
## How to run examples
1. Install substrate local node:
`https://github.com/paritytech/substrate`
2. Running the samples:
There are several runnable samples. To run them, go to the folder for the latest date under `examples_runnable` (such as `examples_runnable/20190525`), then run each shell script.
3. To change the Substrate address, change the `endPoint` variable in each demo main file.
| 0 |
jobrunr/example-spring | An example on how to integrate JobRunr with Spring | null | # JobRunr example
This repository shows an advanced example of how to integrate JobRunr with [spring.io](https://spring.io/). In this example, Jobs are created via a web frontend (the `webapp` module) and processed in a different JVM (the `processingapp` module).
An easier example using [spring.io](https://spring.io/) can be found [here](https://github.com/jobrunr/example-java-mag)
## About this project
This project exists out of 3 modules:
- **core**: this project contains [MyService](core/src/main/java/org/jobrunr/examples/services/MyService.java), a simple spring service with two example methods which you want to run in a backgroundserver.
- **processingapp**: this app is a Spring Console application and runs indefinitely. It polls for new background jobs
and processes them when they arrive. It contains only two classes:
- the [JobServerApplication](processingapp/src/main/java/org/jobrunr/examples/processingapp/JobServerApplication.java) which is empty 🙂
- the [JobServerConfiguration](processingapp/src/main/java/org/jobrunr/examples/processingapp/JobServerConfiguration.java) which starts the H2 Database (in server mode) and contains the database information
> the thing to note here is the [`application.properties`](processingapp/src/main/resources/application.properties) where the server and the dashboard are enabled
- **webapp**: this is a Spring Rest Webapp that enqueues new background jobs. It contains a simple `RestController`
called [JobController](webapp/src/main/java/org/jobrunr/examples/webapp/api/JobController.java) which contains some
methods (= endpoints) to enqueue jobs.
## How to run this project:
- clone the project and open it in your favorite IDE that supports gradle
- First, run the main method from
the [JobServerApplication](processingapp/src/main/java/org/jobrunr/examples/processingapp/JobServerApplication.java)
in the `processingapp` module and also keep it running
- Run the main method from the [WebApplication](webapp/src/main/java/org/jobrunr/examples/webapp/WebApplication.java) in the `webapp` module and keep it running
- Open your favorite browser:
- Navigate to the JobRunr dashboard located at http://localhost:8000/dashboard. This is running within
the [JobServerApplication](processingapp/src/main/java/org/jobrunr/examples/processingapp/JobServerApplication.java).
- To enqueue a simple job, open a new tab and go to http://localhost:8080/jobs/ and take it from there.
- Visit the dashboard again and see the jobs being processed!
| 1 |
gshaw-pivotal/spring-hexagonal-example | An example of implementing hexagonal design in a spring-boot application (A work in progress) | null | # Hexagonal (Port and Adapter) Example #
A spring-boot based example of hexagonal design (also known as the ports and adapters design).
Through the use of ports, contracts between the various modules can be set up, allowing the modules to be easily replaced with other implementations. The only condition: the module must conform to the specified contract.
Thus, with a hexagonal design, the current database adapter module (a simple in-memory implementation) can be swapped out for a JPA repository, a flat file, or something else, and as long as it conforms to the contract (aka port), no other module (especially the domain module) needs to know or care.
Similarly, the name verifier adapter can be swapped out for a real implementation that would communicate with a third party application without the rest of our application ever knowing. Again as long as the current implementation conforms to the contract (in this case the NameVerifierService) no one will ever know the difference.
Also, the rest api adapter (which as the name implies uses HTTP REST) could be replaced by a SOAP based api and again the domain (and other modules) would not know nor need to care as long as the new api passed along the expected objects as specified in the ports (AddUserService and GetUserService).
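The swap-ability described above can be sketched in a few lines of plain Java. The names below (`UserRepositoryPort`, `InMemoryUserRepository`, `UserService`) are invented for this illustration and are not the repository's actual classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Port: the contract the domain depends on. The domain never sees the adapter.
interface UserRepositoryPort {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// Adapter: a trivial in-memory implementation. Swapping in a JPA or
// flat-file adapter requires no change to the domain code below.
class InMemoryUserRepository implements UserRepositoryPort {
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String name) { store.put(id, name); }
    public Optional<String> findName(String id) { return Optional.ofNullable(store.get(id)); }
}

// Domain service: written purely against the port.
class UserService {
    private final UserRepositoryPort repository;
    UserService(UserRepositoryPort repository) { this.repository = repository; }
    void addUser(String id, String name) { repository.save(id, name); }
    String getUser(String id) { return repository.findName(id).orElse("unknown"); }
}
```

Because `UserService` only knows the port, replacing `InMemoryUserRepository` with, say, a JPA-backed adapter is a one-line change at wiring time.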
## Getting Started ##
```
git clone https://github.com/gshaw-pivotal/spring-hexagonal-example.git
```
## Resources on Hexagonal / Ports and Adapters ##
The following are some resources that explain the hexagonal design / pattern
- [Hexagonal Architecture](http://alistair.cockburn.us/Hexagonal+architecture)
- [Ports-And-Adapters / Hexagonal Architecture](http://www.dossier-andreas.net/software_architecture/ports_and_adapters.html)
| 1 |
indrabasak/spring-loadtime-weaving-example | Spring Boot Load-Time Weaving Example with AspectJ | aspectj spring-aop spring-boot | [![Build Status][travis-badge]][travis-badge-url]
[![Quality Gate][sonarqube-badge]][sonarqube-badge-url]
[![Technical debt ratio][technical-debt-ratio-badge]][technical-debt-ratio-badge-url]
[![Coverage][coverage-badge]][coverage-badge-url]

Spring Boot Load-Time Weaving Example with AspectJ
===============================================================
This is an example of Spring Boot load time weaving with AspectJ. It's the
continuation of the previous [Spring Boot source weaving example](https://github.com/indrabasak/spring-source-weaving-example).
### Load Time Weaving
The load-time weaving is a type of binary weaving where compiled Java classes
are taken as an input at runtime instead of compile time. The classes
are weaved as they are loaded by the Java Virtual Machine (JVM).
The load-time weaving process weaves classes with the help of Java Agent. A Java Agent
intercepts the classes while they are being loaded by the JVM. The intercepted
classes are instrumented (bytecode is modified) by the agent based on
the AspectJ definitions contained in a meta file named `aop.xml`. The `aop.xml`
file should be in the classpath in order to be picked up by the agent.

### When do you need load-time weaving?
Load-time weaving is useful when aspects are needed only at certain times, for example when monitoring application performance or investigating thread deadlocks. This way you can keep your application source code free of aspect-related code.
This example consists of the following:
1. A `CustomAnnotation` annotation used to mark the methods to be intercepted.
```java
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface CustomAnnotation {
String description() default "";
}
```
1. A `CustomAnnotationAspect` aspect to intercept any method marked with
`@CustomAnnotation`. It prints out the name of the intercepted class and method.
**Note:** Unlike the previous [source weaving example](https://github.com/indrabasak/spring-source-weaving-example),
this `CustomAnnotationAspect` aspect does not have the Spring `@Component`
annotation since it's not going to be deployed as a bean.
```java
@Aspect
public class CustomAnnotationAspect {
private static final Logger
log = LoggerFactory.getLogger(CustomAnnotationAspect.class);
@Before("@annotation(anno) && execution(* *(..))")
public void inspectMethod(JoinPoint jp, CustomAnnotation anno) {
log.info(
"Entering CustomAnnotationAspect.inspectMethod() in class "
+ jp.getSignature().getDeclaringTypeName()
+ " - method: " + jp.getSignature().getName()
+ " description: " + anno.description());
}
}
```
1. The `BookService` class is the example where the `@CustomAnnotation` is used.
The **private** method `validateRequest` is called from the `create` method. The
`create` method is annotated with Spring's `@Transactional` annotation.
```java
@Service
public class BookService {
private static final Logger
log = LoggerFactory.getLogger(BookService.class);
private BookRepository repository;
@Autowired
public BookService(BookRepository repository) {
this.repository = repository;
}
@Transactional
public Book create(BookRequest request) {
Book entity = validateRequest(request);
return repository.save(entity);
}
public Book read(UUID id) {
return repository.getOne(id);
}
@CustomAnnotation(description = "Validates book request.")
private Book validateRequest(BookRequest request) {
log.info("Validating book request!");
Assert.notNull(request, "Book request cannot be empty!");
Assert.notNull(request.getTitle(), "Book title cannot be missing!");
Assert.notNull(request.getAuthor(), "Book author cannot be missing!");
Book entity = new Book();
entity.setTitle(request.getTitle());
entity.setAuthor(request.getAuthor());
return entity;
}
}
```
### Aspect Filter Examples
This example also includes a couple of examples of how to apply an aspect conditionally,
based on either the calling method's annotation or its name.
1. Filter by the calling method being tagged with a certain annotation:
```java
@Aspect
public class FilterCallerAnnotationAspect {
private static final Logger
log = LoggerFactory.getLogger(FilterCallerAnnotationAspect.class);
@Before("call(* com.basaki.service.UselessService.sayHello(..))" +
" && cflow(@annotation(trx))")
public void inspectMethod(JoinPoint jp,
JoinPoint.EnclosingStaticPart esjp, Transactional trx) {
log.info(
"Entering FilterCallerAnnotationAspect.inspectMethod() in class "
+ jp.getSignature().getDeclaringTypeName()
+ " - method: " + jp.getSignature().getName());
}
}
```
This aspect will only be applied when the `sayHello` method of the `UselessService`
class is called from methods annotated with Spring's `@Transactional` annotation.
1. Filter by the calling method's name:
```java
@Aspect
public class FilterCallerMethodAspect {
private static final Logger
log = LoggerFactory.getLogger(FilterCallerMethodAspect.class);
@Before("call(* com.basaki.service.UselessService.sayHello(..))" +
" && cflow(execution(* com.basaki.service.BookService.read(..)))")
public void inspectMethod(JoinPoint jp,
JoinPoint.EnclosingStaticPart esjp) {
log.info(
"Entering FilterCallerMethodAspect.inspectMethod() in class "
+ jp.getSignature().getDeclaringTypeName()
+ " - method: " + jp.getSignature().getName());
}
}
```
This aspect will only be applied when the `sayHello` method of the `UselessService`
class is called from the `read` method of `BookService`.
### Dependency Requirements
#### AspectJ Runtime Library
Annotations such as `@Aspect`, `@Pointcut`, and `@Before` are in `aspectjrt.jar`.
The `aspectjrt.jar` must be in the classpath regardless of whether
the aspects in the code are compiled with `ajc` or `javac`.
```xml
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<version>1.8.13</version>
</dependency>
```
#### AspectJ Weaving Library
The `aspectjweaver.jar` contains the AspectJ weaving classes. The weaver is
responsible for mapping crosscutting elements to Java constructs.
```xml
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
<version>1.8.13</version>
</dependency>
```
#### AspectJ Weaver Configuration
The load-time waving is configured using a file named `aop.xml`. The `aop.xml`
is made available to the classpath by placing it in the `META-INF` directory
under the `resources` folder.
The `aop.xml` contains the following sections:
1. **Aspect** section defines all the aspects that are to be used
in the weaving process. In our example, there is only one aspect, i.e.,
`CustomAnnotationAspect`.
2. **Weaver** section defines all the classes (e.g., `com.basaki.service.*`)
that are to be woven.
It should also include the packages where the aspects are defined
(e.g., `com.basaki.aspect.*`).
It also specifies other weaving options, e.g., `verbose`, `showWeaveInfo`, etc.
```xml
<aspectj>
    <aspects>
        <aspect name="com.basaki.aspect.CustomAnnotationAspect"/>
    </aspects>
    <weaver options="-verbose -showWeaveInfo">
        <include within="com.basaki.service.*"/>
        <include within="com.basaki.aspect.*"/>
    </weaver>
</aspectj>
```
#### Maven Surefire Plugin
The `maven-surefire-plugin` is only needed if you run the Spring Boot
application from an IDE (e.g., IntelliJ). It's required to add the
`-javaagent` JVM argument.
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.20.1</version>
<configuration>
<argLine>-javaagent:"${settings.localRepository}"/org/aspectj/aspectjweaver/1.8.13/aspectjweaver-1.8.13.jar</argLine>
<useSystemClassLoader>true</useSystemClassLoader>
<forkMode>always</forkMode>
</configuration>
</plugin>
```
### Build
To build the JAR, execute the following command from the parent directory:
```
mvn clean install
```
### Run
You need to use the `-javaagent:` JVM argument whenever you run the
executable Spring Boot jar.
Here is the command to run the application:
```
java -javaagent:lib/aspectjweaver-1.8.13.jar -jar spring-loadtime-weaving-example-1.0.0.jar
```
In the command shown above, it's expected that the `aspectjweaver.jar`
is located in the `lib` directory.
### Usage
Once the application starts up at port `8080`, you can access the swagger UI at
`http://localhost:8080/swagger-ui.html`. From the UI, you can create and retrieve
book entities.
Once you create a book entity, you should notice the following message on the
terminal:
```
2018-02-09 17:11:38.022 INFO 51061 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 25 ms
2018-02-09 17:11:38.193 INFO 51061 --- [nio-8080-exec-1] c.basaki.aspect.CustomAnnotationAspect : Entering CustomAnnotationAspect.inspectMethod() in class com.basaki.service.BookService - method: validateRequest description: Validates book request.
2018-02-09 17:11:38.194 INFO 51061 --- [nio-8080-exec-1] com.basaki.service.BookService : Validating book request!
```
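The `description: Validates book request.` text in the log line above comes from a value on a custom annotation that the aspect reads at the intercepted join point. As a hedged, framework-free illustration (the annotation and class names below are hypothetical stand-ins, not this project's actual types), here is the underlying mechanism: reading an annotation's `description` attribute via reflection, which is what the advice does before logging:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationDemo {

    // Hypothetical stand-in for this project's custom annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Inspect {
        String description();
    }

    public static class BookService {
        @Inspect(description = "Validates book request.")
        public void validateRequest() { /* business logic */ }
    }

    // Mimics what the advice does: look up the annotation on the
    // intercepted method and return its description attribute.
    public static String describe(Class<?> type, String methodName) {
        try {
            Method m = type.getMethod(methodName);
            Inspect ann = m.getAnnotation(Inspect.class);
            return ann == null ? null : ann.description();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        // prints: description: Validates book request.
        System.out.println("description: "
                + describe(BookService.class, "validateRequest"));
    }
}
```

The real aspect gets the same information from the AspectJ join point rather than raw reflection, but the annotation lookup is the same idea.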
[travis-badge]: https://travis-ci.org/indrabasak/spring-loadtime-weaving-example.svg?branch=master
[travis-badge-url]: https://travis-ci.org/indrabasak/spring-loadtime-weaving-example/
[sonarqube-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-loadtime-weaving-example&metric=alert_status
[sonarqube-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-loadtime-weaving-example
[technical-debt-ratio-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-loadtime-weaving-example&metric=sqale_index
[technical-debt-ratio-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-loadtime-weaving-example
[coverage-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-loadtime-weaving-example&metric=coverage
[coverage-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-loadtime-weaving-example
| 1 |
mitchtabian/Retrofit-Caching-Example | An example of how to use Retrofit2 to cache HTTP responses | android-caching cache cache-control okhttp3 retrofit2 | null | 1 |
Gary111/MaskedEditText | Example of formatting card number / date / cvc during entering text | null | # MaskedEditText
Simple example of how to format card number / date / CVC while entering text
![Example Masked EditText][1]
[1]: https://github.com/Gary111/MaskedEditText/blob/master/screens/demo.gif
| 1 |
simple-elf/github-allure-history | Example of using Allure Report on GitHub Actions | null | # github-allure-history
Example project using GitHub Actions for Allure report with history on GitHub Pages
You can see [Allure Report](https://simple-elf.github.io/github-allure-history/) on GitHub Pages
## GitHub Actions
Learn how to use GitHub Actions on [official docs](https://docs.github.com/en/actions)
Here are my tips:
1. You need to enable actions in '/settings/actions', choosing 'Enable local and third party Actions for this repository'
2. Create a workflow '*.yml' file in the '.github/workflows' directory. Example workflow [allure-history.yml](https://github.com/simple-elf/github-allure-history/blob/master/.github/workflows/allure-history.yml)
3. This workflow uses some GitHub Actions, especially 'allure-report-action'. You can see more about this action on [Marketplace](https://github.com/marketplace/actions/allure-report-with-history)
## GitHub Pages
Learn how to use GitHub Pages on [official docs](https://docs.github.com/en/github/working-with-github-pages)
Here are my tips:
1. Go to your repository '/settings', scroll down to 'GitHub Pages' section
2. By default, 'Source' is set to 'None'
3. Set it to 'gh-pages' branch and '/root' folder
4. If you don't have a 'gh-pages' branch yet, you can't select it. Run the workflow at least once, and then you will have this branch.
5. After changing settings you can see URL link in 'GitHub Pages' section like this 'Your site is published at ...'
6. Copy this link to repository details (in 'About' section on repo main page) in WebSite field
## Allure Report with history on GitHub Pages
Here is how this works:
1. Step 'Get Allure history' in the workflow fetches the previous 'gh-pages' branch state (there is no error if it doesn't exist yet)
2. Step 'Allure Report action':
1. Creates a temp folder 'allure-history' with a copy of all 'gh-pages' branch files (previous reports)
2. Copies the 'last-history' folder from the 'gh-pages' branch into 'allure-results' of the current build
3. Generates the report with the Allure command line
4. Copies the 'history' folder of the current Allure report to the 'last-history' folder for the next build
5. Copies the current Allure report to a folder named after the current build number
6. Creates an 'index.html' file in the root of 'allure-history' that redirects to the folder with the current build number
3. Step 'Deploy report to Github Pages' does a 'git push' of the 'allure-history' folder to the 'gh-pages' branch
4. Then the GitHub Pages deploy magic happens: open the root GitHub Pages link and you will always be redirected to the latest Allure report | 1 |
lykhonis/MediaCodec | Example of how to use MediaCodec to encode/decode and pass samples as byte array | null | # MediaCodec
This example showcases use cases of MediaCodec. It can be valuable for applications that do encoding, decoding, and transferring of samples in H.264 (for example) over a network, etc.
Example contains:
- Creating surface and associated canvas to draw onto
- Binding surface to encoder to produce H.264 samples
- Creating surface view and binding decoder to it
- Configuring decoder to accept H.264 and draw onto surface view
| 1 |
aws-samples/aws-greengrass-lambda-functions | Example local Lambda functions that can be used with AWS Greengrass and the AWS Greengrass Provisioner. | aws-greengrass greengrass | ## AWS Greengrass Lambda Functions
Example local Lambda functions that can be used with AWS Greengrass and the AWS Greengrass Provisioner. This repo contains
the functions and the deployment configurations to launch those functions in different configurations.
## News
2020-01-27 - Minor changes to the role naming scheme may cause issues with existing deployments. If you are experiencing issues with permissions you can either switch to the new naming scheme (e.g. `Greengrass_CoreRole`, `Greengrass_ServiceRole`, and `Greengrass_LambdaRole`) or you can update the deployments.defaults.conf file to use the older names.
## How do I launch these functions with the provisioner?
Step 1: Clone this repo
Step 2: [Read the provisioner command-line examples](https://github.com/awslabs/aws-greengrass-provisioner/blob/master/docs/CommandLine.md)
## Using Java?
Check out the [Cloud Device Driver framework](https://gitpitch.com/aws-samples/aws-greengrass-lambda-functions/master?p=presentations/cloud-device-driver-framework-for-java). It is a framework that simplifies writing Greengrass Lambda functions in Java. [You can look at the code as well](foundation/CDDBaselineJava).
## Current function list
- Python
- [BenchmarkPython](functions/BenchmarkPython) - a naive benchmark that creates a pinned function that sends messages to itself
- [HTTPPython](functions/HTTPPython) - sends HTTP requests from the core to any address (local network or otherwise), triggered by MQTT messages from the cloud
- [HelloWorldPython3](functions/HelloWorldPython3) - Hello, World in Python 3
- [HelloWorldPythonWithCloudFormation](functions/HelloWorldPythonWithCloudFormation) - Hello, World in Python with a CloudFormation template that demonstrates how to build republish rules that the provisioner can launch automatically
- [LiFXPython](functions/LiFXPython) - control LiFX bulbs
- [SocketServerPython](functions/SocketServerPython) - an example of how to listen on a socket in Python and relay the inbound TCP messages to the cloud via MQTT
- [RaspberryPiGpioPython3](functions/RaspberryPiGpioPython3) - Event driven GPIO handler for the Raspberry Pi (no polling)
- [LatencyTesterPython3](functions/LatencyTesterPython3) - Sends ping requests to a fixed list of hosts and publishes the round trip ICMP ping time via MQTT
- [CloudWatchMetricHandlerPython3](functions/CloudWatchMetricHandlerPython3) - Sends latency information to AWS as CloudWatch Metric values (used with LatencyTesterPython3)
- [SecretsManagerPython3](functions/SecretsManagerPython3) - Retrieves a secret from Secrets Manager and publishes the value on a topic for testing purposes
- [MqttClientPython3](functions/MqttClientPython3) - Connects to an MQTT broker as a client and relays messages from that broker to Greengrass
- NodeJS
- [HelloWorldNode](functions/HelloWorldNode) - Hello, World in Node
- [HTTPNode](functions/HTTPNode) - sends HTTP requests from the core to any address (local network or otherwise), triggered by MQTT messages from the cloud
- [WebServerNode](functions/WebServerNode) - an example of how to create an Express web server in a pinned Lambda function
- Java with Cloud Device Driver framework
- [CDDSkeletonJava](functions/CDDSkeletonJava) - shows how the Java Cloud Device Driver framework can be used
- [CDDDMIJava](functions/CDDDMIJava) - relays Desktop Management Interface (DMI) information to the cloud when requested via MQTT
- [CDDBenchmarkJava](functions/CDDBenchmarkJava) - a naive Java benchmark that creates a pinned function that sends messages to itself
- [CDDSenseHatJava](functions/CDDSenseHatJava) - shows how to control a SenseHat display on a Raspberry Pi
- [CDDDockerJava](functions/CDDDockerJava) - shows how to control Docker with Greengrass
- [CDDLatencyDashboard](functions/CDDLatencyDashboard) - a Vaadin-based dashboard to show latency information (used with LatencyTesterPython3)
- [CDDMdnsServiceResolverJava](functions/CDDMdnsServiceResolverJava) - resolves mDNS service discovery broadcasts and publishes the discovery information to other functions
- C
- [ARM32SampleC](functions/ARM32SampleC) - Hello World in C for ARM32 architectures
- [X86_64SampleC](functions/X86_64SampleC) - Hello World in C for X86_64 architectures
- Greengrass Provisioner functionality examples
- [Reusing functions from other groups (benchmark example)](deployments/benchmark-reuse.conf) - shows how to reuse existing functions in the Greengrass Provisioner by using the tilde `~` wildcard
- [Launching an nginx proxy on ARM Greengrass cores with the Greengrass Docker connector](deployments/arm-nginx.conf) - **ARM only!** shows how to use the Greengrass Docker connector in a deployment to launch nginx
- [Launching WordPress on X86 Greengrass cores with the Greengrass Docker connector](deployments/x86-wordpress.conf) - **X86 only!** shows how to use the Greengrass Docker connector in a deployment to launch WordPress
- [Sharing files between two functions in Python 3](deployments/python3-shared-file.conf) - shows how to share a file between two functions through the host. Each function randomly writes a value to a file that the other function can read, then the other function picks up the value, publishes it to the core, and deletes the shared file so it can be created again.
## License Summary
This sample code is made available under a modified MIT license. See the LICENSE file.
| 1 |
djangofan/selenium-gradle-example | A Selenium2 example using the Gradle build system | null | # Info
This is a Java project that can be used as a template (or archetype) to start a WebDriver web browser testing project. I chose to simplify and implement using just WebDriver and Gradle.<br/>
Presentation is for "Portland Selenium Bootcamp 2013".
[PDX-SE Meetup Group](http://www.meetup.com/pdx-se/events/125285182/)
Special thanks to the creator of Gradle, for having some good examples.
[Ken Sipe](https://github.com/kensipe/gradle-samples)
NOTE: Keep in mind that these examples use multiple WebDriver instances, which may not be a normal design
pattern, especially in frameworks that are limited to one WebDriver instance, such as SauceLabs.
# Versions
Version 1.0 - March 16th, 2013
Version 2.0 - September 6th, 2013
# Project Layout
Gradle Project Root
+--- Project ':sub-project'
+--- Project ':commonlib'
+--- Project ':etsy'
+--- Project ':parallelwindows'
+--- ...
# Overview
1. Project "sub-project" is a project you add yourself, if you
want.
2. Project "etsy" is a RemoteWebDriver JUnit test-suite using
a local Grid server that is capable of running multiple
threads of single-window web browser tests.
3. Project "parallelwindows" is a test of a multi-window and
multi-threaded run using a static local website.
4. Project "commonlib" is a sub-project containing methods
shared between projects.
# SubProjects
Links to example sub-projects that belong to this project:
[ParallelWebDriver](https://github.com/djangofan/selenium-gradle-example/tree/master/parallelwindows)
[Etsy](https://github.com/djangofan/selenium-gradle-example/tree/master/etsy)
# Quick Start
Normally, this project would be run through the Gradle plugin for Eclipse IDE, but I have tried to make it easier
by including a method to run dynamically and directly from the .zip distribution on the command line.
To try this project without requiring a Java IDE, just make sure you download Gradle 1.7+, configure your
GRADLE_HOME environment variable, add %GRADLE_HOME%\bin to your PATH, and then download the .zip distribution
of this project, unzip it, and run the included <b>runProjectMenu.bat</b> script.
# Implemented Features
<table>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
<tr>
<th>JUnit based</th>
<td>For use ONLY with JUnit 4.11 or higher because of the usage of the parameterized capability of JUnit.
This dependency is configured by the Gradle build script.</td>
</tr>
<tr>
<th>Parallel runner<br/>using JUnit</th>
<td>A parallel runner using the Gradle maxParallelForks method.</td>
</tr>
<tr>
<th>Native automation support</th>
<td>For use with Sikuli 1.0.1 or higher to test native elements that WebDriver "Action" is unable to
control. This dependency is configured in the Gradle build script. If you implement this however, you
may not be able to use the remote webdriver option in your project.</td>
</tr>
<tr>
<th>Uses RemoteWebDriver<br/>JSON Hub Server</th>
<td>I have included an implementation of a WebDriverServer class that starts a RemoteWebDriver JSON
Hub server instance in the BeforeClass method of tests. This server is a static member of the utility
class that the tests extend.</td>
</tr>
<tr>
<th>Parameterized data <br/>driven capability</th>
<td>Unit tests are parameterized from a csv file. Can also load tests from XML, XLS, a database, etc.</td>
</tr>
<tr>
<th>Logging and Reporting</th>
<td>Logs test output to console and to a file using SLF4j/LogBack API, and configured by a <b>logback.xml</b>
file. Will generate reports of JUnit test results at <b>build/reports/test/index.html</b> . Will place a
junit.log file at <b>build/logs/junit.log</b> .</td>
</tr>
<tr>
<th>Page Object design <br/>pattern</th>
<td>Uses the WebDriver "page object" design pattern, enhanced by the Selenium "LoadableComponent"
extendable class.</td>
</tr>
<tr>
<th>Fluent API design<br/>pattern</th>
<td>Implemented examples of the <i>Fluent API</i> design pattern while retaining capability of
the traditional page object pattern.</td>
</tr>
<tr>
<th>Multi-project build<br/>configuration</th>
<td>Implemented multiple project build. The root project has a subproject called "core" and all
subprojects of "core" inherit classes from it.</td>
</tr>
<tr>
<th>Run Options</th>
<td>You have three different options for running the tests: via the Gradle GUI, via your IDE Gradle
plugin, or via Gradle command line. To run with the JUnit runner in your IDE, you would need to manually
export your project as a normal Java project, because this template does not support that.</td>
</tr>
<tr>
<th>Core utility package</th>
<td>All projects inherit from a "core" project that contains classes where you can store methods
that all of your projects can share between them.</td>
</tr>
</table>
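The "Parameterized data driven capability" row above loads JUnit 4 parameters from a CSV file. As a minimal, framework-free sketch of that idea (the class name, file contents, and comment convention here are illustrative, not taken from this project), each non-comment CSV line becomes one `Object[]` of test arguments, which is the shape a JUnit 4 `@Parameters` method is expected to return:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CsvParams {

    // Turn CSV lines into the List<Object[]> shape that a JUnit 4
    // @Parameters method returns; each row feeds one test invocation.
    public static List<Object[]> parse(List<String> csvLines) {
        List<Object[]> params = new ArrayList<>();
        for (String line : csvLines) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue; // skip blanks/comments
            params.add(line.split("\\s*,\\s*"));
        }
        return params;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
                "# searchTerm, expectedTitle",
                "gradle, Gradle Build Tool",
                "selenium, Selenium");
        for (Object[] row : parse(lines)) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```

Swapping the hard-coded list for `Files.readAllLines(...)` gives the file-driven variant; the same approach extends to XML, XLS, or a database as the table above notes.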
# Un-implemented Features
<table>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
<tr>
<th>Gradle Wrapper</th>
<td>Did not choose to implement the Gradle wrapper because I believe that downloading Gradle and
configuring GRADLE_HOME and PATH are easy enough. Also, a manual setup of Gradle gives us more
control using a batch script. Also, the development IDE is usually configured to use the
statically defined Gradle home.</td>
</tr>
<tr>
<th>Jar executable option</th>
<td>Creates an uberJar of all projects and subprojects that can be run by double-clicking
the .jar file. If you don't have the file association supporting it, we include a
jarAssociation.bat file to setup the file association on your Windows system. I was planning
to implement this but currently having trouble getting it to work.</td>
</tr>
</table>
# Configuration And Setup
#### Eclipse
To get it working on a regular Eclipse Kepler (2.9.0) or later, follow these steps:
1. Using the "Eclipse Marketplace" settings panel under the
Eclipse "Help" menu, install the Gradle tooling
functionality. You can do it through the "Install New
Software" menu, but it isn't recommended. If Market is
missing from your Eclipse, then add the repo:
http://download.eclipse.org/releases/kepler
and then install the "market" and restart Eclipse.
2. Download the .zip archive of this GitHub project
distribution and unzip it to your workspace. An example:
"C:\Eclipse32\workspace\selenium-gradle-example\" .
3. Use the Eclipse "Import" function from the Eclipse "File
menu" to import a "Project" of type "Gradle".
4. Browse using the import wizard to your projects "root"
directory. Then click the "Build model" button.
5. Check all checkboxes . You could also choose to add all
to your "working set" if you like but it isn't required.
6. Rebuild the dependencies by right clicking on the project
and then choose Gradle-->Refresh All Dependencies
7. Right click on your project and choose "Run As-->External
Tools Configuration". Configure a new "clean" and "build"
configuration for running a sub-project (or whatever tasks
you want to execute).
8. Optionally, you can run this project on the command line
with something like "gradle etsy:clean etsy:runTask --info"
and it will execute the project unit tests. Also, this
project provides a .bat batch script that does this and
provides a menu of other actions you can execute, including
running the "Gradle GUI".
#### IntelliJ-IDEA
The required Gradle functionality is already built into IntelliJ-IDEA 12.1+ . I think using IDEA is more difficult
but go ahead if you are familiar with it.
#### Notes
Website of this project:<br/>
http://djangofan.github.com/selenium-gradle-example/<br/>
<br/>
# FAQ
1. If the intellisense in Eclipse doesn't work, make sure you
have added all the .class directories to your Eclipse project
classpath. (See the included .classpath file.)
2. I use "GitHub GUI" to sync my local project repo to GitHub.
If you fork my project, I would recommend doing it this way
unless you are a Git expert and prefer another way.
| 1 |
ricardozanini/soccer-stats | Soccer Stats is an example application to be used as a proof of concept for a presentation at Ansible Meetup in São Paulo | ansible jenkins jenkins-pipeline spring-boot | # Soccer Stats
Soccer Stats is an example application to be used as a proof of concept for a presentation at [Ansible Meetup in São Paulo](https://www.meetup.com/Ansible-Sao-Paulo/events/243212921/).
## Prerequisites
* JDK 1.8
* Maven 3.3+
## Environment
It's a sample REST API built upon the Spring REST framework. The database is based on data gathered from the 2015/2016 season of the Italian national soccer championship.
During the Spring context bootstrap, a temporary database is created using H2, with data imported from a spreadsheet.
## Installation
Just run `mvn clean package` in the project directory and you're ready to go.
## Using
Bring the application up by running `java -jar soccer-stats-X.X.X.jar`, where `X.X.X` is the project's version.
After startup, the endpoint should be available at `http://localhost:8080/matches/{team_name}`, where `{team_name}` must be an Italian team name like `juventus`, `milan`, `udinese`, and so on.
To bring up a specific match, try the endpoint `http://localhost:8080/matches/{home_team_name}/{visitor_team_name}`, replacing the param vars with the match you'd like to see, for example:
[http://localhost:8080/matches/juventus/milan](http://localhost:8080/matches/juventus/milan)
## Credits
[Football-Data](http://www.football-data.co.uk/) for providing the data used for this lab. | 1 |
MetaArivu/Kafka-quickstart | Kafka Examples focusing on Producer, Consumer, KStreams, KTable, Global KTable using Spring, Kafka Cluster Setup & Monitoring. Implementing Event Sourcing and CQRS Design Pattern using Kafka | cqrs event-sourcing global-ktable kafka kafka-admin kafka-consumer kafka-producer kafka-prometheus kafka-ssl kstream ktable spring-kafka spring-kafka-test | # Kafka Examples Using Spring
This tutorial focuses on different features of Kafka
## 1: [Kafka Setup](https://github.com/MetaArivu/spring-kaka-examples/tree/main/01-kafka-setup)
- Kafka Setup
- Kafka SSL Configuration
## 2: [Kafka Producer](https://github.com/MetaArivu/spring-kaka-examples/tree/main/02-spring-kafka-producer)
This demo focuses on the following features
- 1 Asynchronous Simple Event Publisher
- 2 Event Publisher with key; this makes sure events with the same key go to the same partition
- 3 Event Publisher with callback method
- 4 Publish events with headers
- 5 Publish events in a synchronous way
- 6 How to use embedded Kafka for unit testing
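Feature 2 above relies on Kafka's default partitioner: records that share a key hash to the same partition, which preserves per-key ordering. Here is a simplified, self-contained sketch of that idea (note: Kafka's real default partitioner applies murmur2 to the serialized key bytes, not Java's `hashCode`, so this is only an illustration of the principle):

```java
public class KeyPartitioning {

    // Simplified stand-in for Kafka's default partitioner. The real one
    // uses murmur2 on the serialized key bytes; the key property is the
    // same: a deterministic hash modulo the partition count.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-42", 6);
        int p2 = partitionFor("order-42", 6);
        // The same key always lands on the same partition.
        System.out.println(p1 + " == " + p2);
    }
}
```

Because the mapping is deterministic, all events for a given key are consumed in the order they were produced, which is what makes key-based publishing useful.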
## 3: [Kafka Consumer](https://github.com/MetaArivu/spring-kaka-examples/tree/main/03-spring-kafka-consumer)
This demo focuses on the following features
- 1 Simple event consumer
- 2 Event consumer with ConsumerRecord; this gives you more information about the message, like key, partition, offset, etc.
- 3 Event consumer with headers; this gives you all the standard and custom header information
- 4 How to handle exceptions in a generic way
- 5 Manual acknowledgement
- 6 Concurrent Message Listener
- 7 Retry
## 4: [Schema Registry & AVRO](https://github.com/MetaArivu/spring-kaka-examples/tree/main/04-schema-registry-with-avro)
This section focuses on how to enable usage of the Confluent Schema Registry and the Avro serialization format in your Spring Boot applications.
## 5: KStream & KTable
<img width="1009" alt="Screen Shot 2021-11-01 at 11 42 07 PM" src="https://user-images.githubusercontent.com/23295769/139720133-89848b21-2197-427a-b82a-01425ca1ed83.png">
## 5.1: [KStream](https://github.com/MetaArivu/spring-kaka-examples/tree/main/05-kafka-streams-demo)
This section focuses on how to use KStream
- 1 Working with KStream
- 2 Implementing Exactly Once Pattern
- 3 Handling Business Error
- 4 Branching
- 5 Reducing
- 6 Aggregation
- 7 Joining KStreams and Global Table
## 5.2: [KTable](https://github.com/MetaArivu/spring-kaka-examples/tree/main/06-kafka-ktable-demo)
This section focuses on how to use KTable
- 1 Working with Ktable
- 2 Aggregation
- 3 Reducing
- 4 Global Table
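A KTable can be pictured as a changelog-backed map: each incoming record upserts the latest value for its key, and aggregations downstream only ever see the current value per key. The following is a conceptual plain-Java sketch of that upsert semantics (deliberately not the Kafka Streams API — just the mental model behind it):

```java
import java.util.HashMap;
import java.util.Map;

public class KTableSketch {

    private final Map<String, Long> store = new HashMap<>();

    // Each record upserts the latest value for its key, like a KTable
    // materialized from a compacted changelog topic.
    public void apply(String key, long value) {
        store.put(key, value);
    }

    public Long latest(String key) {
        return store.get(key);
    }

    public static void main(String[] args) {
        KTableSketch table = new KTableSketch();
        table.apply("item-1", 3);
        table.apply("item-1", 5); // newer record replaces the old value
        System.out.println(table.latest("item-1")); // prints 5
    }
}
```

This is also why KTable joins behave like lookups against current state, while a KStream sees every individual event.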
## 6: [CQRS and EventSourcing](https://github.com/MetaArivu/spring-kaka-examples/tree/main/07-shopping-cart-cqrs-es)
This section focuses on implementing Event Sourcing & CQRS using Kafka KStream & KTable. Here we have built a small Shopping Cart Service.

## 7: [Kafka Cluster Setup & Monitoring](https://github.com/MetaArivu/Kafka-examples/tree/main/08-cluster-setup)
This section focuses on Kafka cluster setup. In this demo we will set up a cluster of 3 ZooKeeper nodes and 3 brokers.

<img width="1676" alt="Screen Shot 2021-11-03 at 3 58 02 PM" src="https://user-images.githubusercontent.com/23295769/140044753-02b47885-1340-49a3-80b2-d8f37a9bb132.png">
<img width="1626" alt="Screen Shot 2021-11-03 at 1 20 14 PM" src="https://user-images.githubusercontent.com/23295769/140025386-8eac06d9-b45d-4666-a038-e376b569b0da.png">
## License
Copyright © [MetaMagic Global Inc](http://www.metamagicglobal.com/), 2021-22. All rights reserved.
Licensed under the Apache 2.0 License.
**Enjoy!**
| 0 |
IMS94/spring-boot-jwt-authorization | Example project to do role based access control (RBAC) using Spring Boot and JWT | authorization jwt jwt-authentication rbac rest-api role-based-access-control roles security single-page-app spring-boot spring-security | # Role Based Access Control (RBAC) with Spring Boot and JWT
This repo hosts the source code for the article [**Role Based Access Control (RBAC) with Spring Boot and JWT**](https://medium.com/geekculture/role-based-access-control-rbac-with-spring-boot-and-jwt-bc20a8c51c15?source=github_source).
This example project demonstrates how to use Spring Boot's built-in OAuth2 Resource Server to authenticate and
authorize REST APIs with JWT. First, we have enabled **JWT authentication**, and second, we have introduced
**Role Based Access Control (RBAC)** by mapping a roles claim in the JWT to granted authorities in Spring Security.
Furthermore, it provides a "/login" endpoint to generate and issue JWTs upon
successful login by the users.
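Mapping the JWT's roles claim to granted authorities essentially boils down to prefixing each role string and handing the resulting list to the security context. As a framework-free sketch of that mapping (in the real project Spring Security's JWT support performs this conversion; the `ROLE_` prefix shown here is Spring Security's naming convention):

```java
import java.util.List;
import java.util.stream.Collectors;

public class RoleMapper {

    // Convert the raw roles claim from the JWT payload into
    // "ROLE_"-prefixed authority strings, Spring Security's convention
    // for role-based checks like hasRole("ADMIN").
    public static List<String> toAuthorities(List<String> rolesClaim) {
        return rolesClaim.stream()
                .map(role -> "ROLE_" + role.toUpperCase())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(toAuthorities(List.of("admin", "user")));
    }
}
```

Once the authorities carry the `ROLE_` prefix, endpoint rules such as `hasRole("ADMIN")` match against them without further configuration.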
This approach is ideal to be used as the
**backend for a single page application (SPA)** written using a frontend framework like
ReactJS, Angular, etc...
## Solution Overview

## Role Based Access Control
An example of role based access control.

## JWT Authentication Overview

## Getting Started
- Use `mvn clean install` in the project root directory to build the project.
- Run the main class, `com.example.springboot.jwt.JwtApplication` to start the application.
## Endpoints
- `/login` -> Public endpoint which returns a signed JWT for valid user credentials (username/password)
- `/products` -> Contains several endpoints to add and remove product entities. Protected by JWT authentication and
authorized based on role.
| 1 |
ekoontz/jaas_and_kerberos | Example code: using JAAS (Java Authentication and Authorization Service) and Kerberos | null | # Introduction
This is a set of example code to explain how to use Kerberos with the
JAAS (Java Authentication and Authorization Service) API. The source
code is split into two classes, `KerberizedServer.java` and
`Client.java`. Running `make test` will compile and run them both, at
which time they will set up an authenticated context between them and
print some debugging information.
# Standard Sockets and NIO (Java's New IO)
There are two versions of the server: `KerberizedServer.java`, which
uses the traditional blocking network sockets API, and
`KerberizedServerNIO.java`, which uses NIO. The standard sockets
version is much shorter and easier to understand, so start with
that. However, most real-world Java development seems to use NIO, so I
made a version that uses that. There is also a newer framework called
[Netty](http://jboss.org/netty), built on top of NIO, which I'll be
looking into at some point, and porting the server code to.
# Acknowledgements and Related Work
## [Client/Server Hello World in Kerberos, Java and GSS](http://thejavamonkey.blogspot.com/2008/04/clientserver-hello-world-in-kerberos.html)
A search returned this guide by a blogger who goes by the name Java
Monkey. Java Monkey's approach was exactly what I was looking for:
a "Hello World" application, documented and commented clearly, that
can be studied and understood quickly.
The only drawback, from my point of view, is that it uses the
filesystem for client-server communication: the client logs into
Kerberos and generates a ticket which it writes to a key file. The
server then uses this file to authenticate the client.
However, I wanted to have a Kerberos "Hello World" where the client
and server communicate via a network socket, so that's why I've
created this github repository.
## [Rox Java NIO Tutorial](http://rox-xmlrpc.sourceforge.net/niotut/)
A good guide to learning Java NIO by James Greenfield, as part of his
RoX (RPC over XML) project. Based on his helpful information, I was
able to write a NIO version of my example code after starting with a
traditional sockets implementation.
## [Sun/Oracle official JAAS tutorials](http://java.sun.com/j2se/1.5.0/docs/guide/security/jgss/tutorials/index.html)
I recommend you look at my example code and then only afterwards look
at the Sun/Oracle materials. I found them too complex for tutorial
purposes; they are better as a reference source.
Commenting on these official JAAS documentation articles, Java Monkey writes:
> the code is a socket based client/server which is not useful at all,
> as only a lunatic would be writing their own server communications
> layer in these days of NIO and SOA.
I partially agree with him here. The main problem, from my experience,
of the Sun Tutorial is that it doesn't actually work: the code they
supply simply doesn't function as-is. Also, it uses byte arrays with
an array-length prefix to communicate between the client and server,
which is unnecessarily low-level. I improved this by using standard
sockets but used `Data`{`Input`/`Output`}`Streams` instead of byte
arrays.
One disadvantage of NIO compared to traditional sockets is the API
complexity: compare `KerberizedServer.java` with
`KerberizedServerNIO.java`: the latter is twice as long. (Although,
`KerberizedServer.java` as written, does not handle more than
one client, whereas `KerberizedServerNIO.java` does, so it's not a
fair comparison).
Another disadvantage of using NIO, however, is that you can't use
`Data`{`Input`/`Output`}`Streams`, unfortunately, as far as I can
tell; I would like to be wrong about that.
# Prerequisites
## Kerberos server and client tools
If you're using Debian, install:
* krb5-admin-server
* krb5-kdc
## JDK
This code was tested with the Sun JDK version 1.6.0_22.
## GNU Make
I used GNU Make for development and testing rather than Apache Ant for
simplicity, but an Ant build.xml or a Maven pom.xml would be good to have here.
# Setup Kerberos Server Infrastructure
## Choose realm name
In my documentation and example configuration files, I use
`FOOFERS.ORG` as my Kerberos realm, and `debian64-3` as the host
running the Kerberos services. Change these based on your preference.
## Edit /etc/krb5.conf
[libdefaults]
default_realm = FOOFERS.ORG
[realms]
FOOFERS.ORG = {
kdc = debian64-3
admin_server = debian64-3
}
[domain_realm]
.foofers.org = FOOFERS.ORG
foofers.org = FOOFERS.ORG
## Choose principal names.
Choose a principal name for your example client and one for your
example server. Below I use `testclient` for the client, and
`testserver` for the server.
## Add principals using kadmin.local
### Add server principal
We will use `testserver` as the name of the server principal that
`KerberizedServer` and `KerberizedServerNio` uses. It's conventional
to use keytab files, rather than passwords, as Kerberos credentials
for server daemons. This allows a server process to start itself
without manual intervention : no password need be supplied; the server
process simply reads the keytab file and uses this to authenticate with the KDC.
We will therefore use `kadmin.local` to add the server principals
using the `-randkey` option to specify that we don't want to use a
password for server authentication.
On the host on which the KDC runs, do:
# kadmin.local
kadmin.local: addprinc -randkey testserver/HOSTNAME
kadmin.local: ktadd -k /tmp/testserver.keytab testserver/HOSTNAME
Entry for principal testserver/HOSTNAME with kvno 2, encryption type AES-256 CTS mode with 96-bit SHA-1 HMAC added to keytab WRFILE:/tmp/testserver.keytab.
Entry for principal testserver/HOSTNAME with kvno 2, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/tmp/testserver.keytab.
kadmin.local: (Ctrl-D)
# scp /tmp/testserver.keytab user@HOSTNAME:~/jaas_and_kerberos
Where `user` is the user who will run `KerberizedServer` and
`KerberizedServerNio`, and `HOSTNAME` is the host on which you will
run them.
You should add a principal entry for each network interface that your
server will use - otherwise client authentication to
`KerberizedServer` and `KerberizedServerNio` may fail, and you may see
errors in your /var/log/auth.log like :
Dec 14 14:09:17 debian64-3 krb5kdc[4177]: TGS_REQ (6 etypes {3 1 23 16 17 18}) 192.168.56.1: UNKNOWN_SERVER: authtime 0, testclient@FOOFERS.ORG for testserver/192.168.0.100@FOOFERS.ORG, Server not found in Kerberos database
To fix this, I added (using `ktadd` as above) a principal for `testserver/192.168.0.100`.
### Add client principal
On the host on which the KDC runs, do:
# kadmin.local
kadmin.local: addprinc testclient
Enter password for principal "testclient@FOOFERS.ORG":
Re-enter password for principal "testclient@FOOFERS.ORG":
See `client.properties` in this directory, which is also shown
below. This assumes you used `clientpassword` as the password in
`kadmin.local` above.
client.principal.name=testclient
client.password=clientpassword
service.principal.name=testserver
# Test Kerberos Server Infrastructure
## Test server authentication with `kinit -k -t testserver.keytab`
(these options mean: use the keytab for authentication rather than
asking for a password).
ekoontz@ekoontz:~/jaas$ kinit -k -t testserver.keytab testserver/192.168.0.100
ekoontz@ekoontz:~/jaas$
## Test client authentication with `kinit testclient`
ekoontz@ekoontz:~/jaas$ cat client.properties
client.principal.name=testclient
client.password=clientpassword
service.principal.name=testserver
ekoontz@ekoontz:~/jaas$ kinit testclient
Please enter the password for testclient@FOOFERS.ORG:
ekoontz@ekoontz:~/jaas$
# Compile Java example code
Run `make compile`
# Runtime configuration of Java example code
## Server principal in jaas.conf.
See `jaas.conf` in this directory, which is also shown below. Change
`HOSTNAME` to the host that `KerberizedServer` and
`KerberizedServerNio` will run on. Note that we use a single entry
(`KerberizedServer`) for both `KerberizedServer` and
`KerberizedServerNio`.
Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false;
};
KerberizedServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="testserver.keytab"
useTicketCache=false
storeKey=true
principal="testserver/HOSTNAME";
};
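The same `KerberizedServer` entry can also be built programmatically with the JDK's `Configuration` API. The sketch below is hypothetical (the demo itself reads `jaas.conf` via the `java.security.auth.login.config` system property, not this class); note that `useKeyTab` must be `true` for the module to read credentials from the keytab.

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import java.util.HashMap;
import java.util.Map;

// Hypothetical programmatic equivalent of the KerberizedServer jaas.conf entry.
public class JaasConfigSketch {
    public static Configuration serverConfig(final String principal) {
        return new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                if (!"KerberizedServer".equals(name)) {
                    return null; // unknown entry name, as with a missing jaas.conf section
                }
                Map<String, String> opts = new HashMap<>();
                opts.put("useKeyTab", "true");           // must be true to read the keytab
                opts.put("keyTab", "testserver.keytab");
                opts.put("useTicketCache", "false");
                opts.put("storeKey", "true");            // keep the key for server-side GSS accept
                opts.put("principal", principal);
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        "com.sun.security.auth.module.Krb5LoginModule",
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        opts)
                };
            }
        };
    }
}
```

A `LoginContext` constructed with this `Configuration` and the entry name `KerberizedServer` would then behave like one driven by the file shown above.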
# Test
Running `make test` will start up the example server and run the client
against it. You may run the client against the same server afterwards
by doing `make test_client`. You can kill an existing server process
by doing `make killserver`.
| 1 |
inazaruk/map-fragment | An example of how one can use MapActivity as a fragment. | null | map-fragment
============
An example of how one can use MapActivity as a fragment.
All code in this repository is under [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html) (unless otherwise is stated in file header). | 1 |
konmik/Dagger2Example | This example is for Snorkeling with Dagger 2" article" | null | # Dagger2Example
This example is for [Snorkeling with Dagger 2](http://konmik.github.io/snorkeling-with-dagger-2.html) article.
| 1 |
zupzup/react-native-ethereum | Example code for creating an ETH wallet with react-native | null | # react-native-ethereum-wallet
It doesn't look like much. There is a Button and if you click it, it shows you the balance of a hardcoded address.
The application also loads an account from a hardcoded json address file and prints that address on clicking the button.
This example is just a simple proof of concept, so the code is neither pretty nor intended to be re-used anywhere else.
Run with `npm start`. Run on android with `npm run android`. After running, it takes some time to connect to nodes and sync with the main ethereum network - before it is connected to any peers, the button does not work.
This project was bootstrapped with [Create React Native App](https://github.com/react-community/create-react-native-app).
| 1 |
stevehanson/spring-mvc-validation | Spring MVC validation example using JSR-303 annotations and custom validation annotations | null | Spring MVC Validation
=====================
Spring MVC validation example using JSR-303 annotations and custom validation annotations
This repo is a companion to my [Spring MVC Form Validation Tutorial](http://codetutr.com/2013/05/28/spring-mvc-form-validation/)
| 1 |
indrabasak/jpa-postgres-jsonb | Postgres JPA Example with Enum and JSONB column type. | jpa postgres-enum postgres-jpa postgres-jsonb spring-boot | [![Build Status][travis-badge]][travis-badge-url]

JPA PostgreSQL Spring Service Example with JSONB Column Type and Query By Example
=================================================================================
This is a [**Spring Boot**](https://projects.spring.io/spring-boot/) based microservice example backed by
[**PostgreSQL**](https://www.postgresql.org/) database. This examples shows how to do the following:
* Use `DBCP datasource` with Java configuration.
* Use `Custom Repository` to expose `entity manager`.
* Insert `UUID` field in Postgres database and generate `UUID` index.
* Convert Java `Enum` to Postgres `Enum` type.
* Convert Java `Object` to Postgres `JSONB` type.
* Use [`JPA Query by Example`](https://github.com/spring-projects/spring-data-commons/blob/master/src/main/asciidoc/query-by-example.adoc)
* Use [`Dozer`](http://dozer.sourceforge.net/) Java Bean mapper.
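To make the Query by Example bullet concrete: the idea is that any non-null field on a probe object becomes a filter predicate. The toy, pure-Java sketch below illustrates only the matching concept; Spring Data's real implementation translates the probe into a JPA criteria query instead.

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Toy illustration of Query by Example: non-null probe fields become predicates.
public class QbeSketch {
    public static class Book {
        public String title;
        public String genre;
        public Book(String title, String genre) { this.title = title; this.genre = genre; }
    }

    // A candidate matches when every non-null field of the probe is equal to it.
    public static boolean matches(Book candidate, Book probe) {
        try {
            for (Field f : Book.class.getDeclaredFields()) {
                Object wanted = f.get(probe);
                if (wanted != null && !wanted.equals(f.get(candidate))) {
                    return false;
                }
            }
            return true;
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static List<Book> findByExample(List<Book> all, Book probe) {
        List<Book> result = new ArrayList<>();
        for (Book b : all) {
            if (matches(b, probe)) {
                result.add(b);
            }
        }
        return result;
    }
}
```

With Spring Data the equivalent is roughly `bookRepository.findAll(Example.of(probe))`, with `ExampleMatcher` controlling case sensitivity and string matching.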
### PostgreSQL Assumptions
* You have a PostgreSQL database server running on your `localhost` and in port `5432`.
* You have a database named `postgres` running on the server
* The server has a user named `postgres` with password `postgres`.
* If any of these assumptions does not hold, change the `spring.datasource` properties in the `application.yml` file.
### Create Database Entities
Execute the `create-db.sql` script under `resources` directory on your PostgreSQL server either using PostgreSQL administration and management tools, [pgAdmin](https://www.pgadmin.org/),
or from the PostgreSQL interactive terminal program, called `psql`.
### Build
Execute the following command from the parent directory:
```
mvn clean install
```
### Start the Service
The main entry point of the `jpa-postgres-jsonb` example is the `com.basaki.example.postgres.jsonb.boot.BookApplication` class.
You can start the application from an IDE by starting the `BookApplication` class.
```
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.5.RELEASE)
...
2017-03-27 23:09:46.905 INFO 44570 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2017-03-27 23:09:46.911 INFO 44570 --- [ main] c.b.e.postgres.boot.BookApplication : Started BookApplication in 7.003 seconds (JVM running for 7.422)
```
The application starts up at port `8080`.
### Accessing Swagger
On your browser, navigate to `http://localhost:8080/` to view the Swagger.

Click the `Show/Hide` link to view all the operations exposed by Book API.
#### POST Example
Once expanded, create a new Book entry by clicking `POST` and entering the following JSON snippet in the `request` field and click `Try it out!`.

Here is the response you get back.

#### GET Example
To view all books, click `GET`, enter `title`, `author`, `genre`, or any combination of them, and click `Try it out!`.
The `title` and `author` parameters are case-insensitive. This is an example of Query by Example.
Here is the response you get back:

#### GET Example by Author
To view all books by author, click `GET`, enter the author's `first name`, `last name`, or both, and click `Try it out!`.
The `first name` and `last name` parameters are case-insensitive and don't have to be complete names. This is a native query on the JSON object.
Here is the response you get back:

Here is the response you get back.

[travis-badge]: https://travis-ci.org/indrabasak/jpa-postgres-jsonb.svg?branch=master
[travis-badge-url]: https://travis-ci.org/indrabasak/jpa-postgres-jsonb/ | 1 |
rharter/CompoundViews | Example app to demonstrate compound views. | null | CompoundViews
=============
Example app to demonstrate compound views.
| 1 |
kikovalle/PLGSharepointRestAPI-java | Easy to use wrapper for the Sharepoint Rest API v1. Even if this is not a full implementation it covers most common use cases and provides examples to extending this API. | null | # PLGSharepointRestAPI-java
Easy to use wrapper for the Sharepoint Rest API v1. Even if this is not a full implementation it covers most common use cases and provides examples to extending this API.
I decided to share this project because one of the most frustrating issues I've ever faced was integrating with SharePoint Online without the chance of using the .NET framework. I found several Java APIs that gave me headaches when I tried to use them. This API is a really easy-to-use one that covers the most frequent operations I needed while integrating with SharePoint. After a lot of research I finally got this working, so I shared it.
If you find this useful and it saves you some time, remember that you can support me so I can devote more time to completing this project and preparing other useful projects that I hope will save time and effort for someone out there.
<a href="https://www.buymeacoffee.com/kikovalle" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-blue.png" alt="Buy Me A Coffee" style="height: 51px !important;width: 217px !important;" ></a>
This is a Maven project that uses the Spring RestTemplate to communicate with the server, but it can be used in a non-Spring application, as you'll see in the examples I provide.
The API has finally been released to the Maven Central repository (https://s01.oss.sonatype.org/#view-repositories;releases~browsestorage~io/github/kikovalle/com/panxoloto/sharepoint/rest/PLGSharepointRestAPI), so it is now possible to include the dependency in an easier way. Simply add this to your pom.xml, replacing the 1.0.3 version with the latest version of the API.
<dependency>
<groupId>io.github.kikovalle.com.panxoloto.sharepoint.rest</groupId>
<artifactId>PLGSharepointRestAPI</artifactId>
<version>1.0.3</version>
</dependency>
As this is a Maven project, you have to clone this repo and build it with Maven. You can modify the pom.xml to include any distribution-management repository that you use in your company, so you can make use of the library in any other Java project.
mvn clean install
Once the project is built, you can include the dependency in any other project as follows:
<dependency>
<groupId>io.github.kikovalle.com.panxoloto.sharepoint.rest</groupId>
<artifactId>PLGSharepointRestAPI</artifactId>
<version>1.0.3</version>
</dependency>
Once this is done you can test this simple examples to perform actions in your sharepoint sites.
The first step is to instantiate the API client. For this you need a SharePoint user email, a password, a domain, and a SharePoint site URI:
String user = "userwithrights@contososharepoint.com";
String passwd = "userpasswordforthesharepointsite";
String domain = "contoso.sharepoint.com";
String spSiteUrl = "/sites/yoursiteorsubsitepath";
<b>Get all lists of a site</b>
// Initialize the API
PLGSharepointClient wrapper = new PLGSharepointClient(user, passwd, domain, spSiteUrl);
try {
JSONObject result = wrapper.getAllLists("{}");
System.out.println(result);
} catch (Exception e) {
e.printStackTrace();
}
<b>Get a list by list title</b>
PLGSharepointClient wrapper = new PLGSharepointClient(user, passwd, domain, spSiteUrl);
try {
JSONObject result = wrapper.getListByTitle("MySharepointList", "{}");
System.out.println(result);
} catch (Exception e) {
e.printStackTrace();
}
<b>Get items of a list</b>
PLGSharepointClient wrapper = new PLGSharepointClient(user, passwd, domain, spSiteUrl);
try {
// Propertyfieldname is a column name in sharepoint site, and value is the searched value, see SP Rest API to know how to filter a list.
String queryStr = "$filter=PropertyFieldName eq 'PropertyFieldValue'";
JSONObject result = wrapper.getListItems("MySharepointList", "{}", queryStr);
System.out.println(result);
} catch (Exception e) {
e.printStackTrace();
}
<b>Get a folder by server relative URL</b>
PLGSharepointClient wrapper = new PLGSharepointClient(user, passwd, domain, spSiteUrl);
try {
JSONObject result = wrapper.getFolderByRelativeUrl("/sites/mysite/FolderName", "{}");
System.out.println(result);
} catch (Exception e) {
e.printStackTrace();
}
<b>Create a folder</b>
PLGSharepointClient wrapper = new PLGSharepointClient(user, passwd, domain, spSiteUrl);
try {
// payload is a JSON object holding metadata properties to associate with the folder; in this example we set Title
JSONObject payload = new JSONObject();
payload.put("Title","Document Title set with the API");
JSONObject result = wrapper.createFolder("/sites/mysite/parentfolderwheretocreatenew", "newfoldername", payload);
System.out.println(result);
} catch (Exception e) {
e.printStackTrace();
}
Other actions you can perform with this API are the following
<ol>
<li>Remove a folder</li>
<li>Upload a file</li>
<li>Remove a file</li>
<li>Move a folder</li>
<li>Move a file</li>
<li>Update file metadata</li>
<li>Break folder role inheritance</li>
<li>Update folder properties</li>
<li>Grant user permissions on a folder (yet to implement file permissions control)</li>
<li>Remove user permissions on a folder (yet to implement file permissions control)</li>
</ol>
If you find this project useful you can buy me a coffee to support this initiative
<a href="https://www.buymeacoffee.com/kikovalle" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-blue.png" alt="Buy Me A Coffee" style="height: 51px !important;width: 217px !important;" ></a>
| 0 |
kaiwaehner/ksql-fork-with-deep-learning-function | Deep Learning UDF for KSQL, the Streaming SQL Engine for Apache Kafka with Elasticsearch Sink Example | deep-learning h2o kafka kafka-ecosystem ksql ksql-server ksql-udf stream udf | #  Deep Learning UDF for KSQL, the Streaming SQL for Apache Kafka
<span style="color:red">*Important: This is a fork of the KSQL project to demonstrate how to build a User-Defined Function (UDF). The project adds an H2O Deep Learning model.*</span>
For the most up-to-date version, documentation and examples of KSQL, please go to [Confluent's official KSQL Github repository](https://github.com/confluentinc/ksql).
<span style="color:red">*Update July 2018: KSQL now has official support for UDFs. This makes it much easier to implement UDFs. I built an updated example here: [KSQL UDF with Deep Learning using MQTT Proxy for Sensor Analytics](https://github.com/kaiwaehner/ksql-udf-deep-learning-mqtt-iot)... Also check out the Confluent Documentation for more information about the new UDF / UDAF features in [KSQL Custom Function Reference UDF / UDAF](https://docs.confluent.io/current/ksql/docs/udf.html)*</span>
## Use Case: Continuous Health Checks with Anomaly Detection
The following example leverages a pre-trained analytic model within a KSQL UDF for continuous stream processing in real time to do health checks and alerting in case of risk. The Kafka ecosystem is used for model serving, monitoring and alerting.

### Deep Learning with an H2O Autoencoder for Sensor Analytics
Each row (i.e. message input from the sensor to Kafka) represents a single heartbeat and contains over 200 columns with numbers.
The [User-Defined KSQL Function ‘AnomalyKudf’ applies an H2O Neural Network](https://github.com/kaiwaehner/ksql/blob/4.0.x/ksql-engine/src/main/java/io/confluent/ksql/function/udf/ml/AnomalyKudf.java). The class creates a new object instance of the Deep Learning model and applies it to the incoming sensor messages for detection of anomalies in real time.
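The scoring idea behind the autoencoder is reconstruction error: the network reconstructs its 200+ input columns, and the mean squared difference between input and reconstruction is the anomaly score. The toy sketch below shows only that arithmetic; the demo's real score comes from the embedded H2O model, not this.

```java
// Toy sketch of an autoencoder anomaly score: mean squared reconstruction error.
public class ReconstructionErrorSketch {
    public static double score(double[] input, double[] reconstruction) {
        if (input.length != reconstruction.length) {
            throw new IllegalArgumentException("vectors must have the same length");
        }
        double sum = 0.0;
        for (int i = 0; i < input.length; i++) {
            double diff = input[i] - reconstruction[i];
            sum += diff * diff;
        }
        return sum / input.length; // higher score = worse reconstruction = more anomalous
    }
}
```

A heartbeat resembling the training data reconstructs well (low score), while an anomalous one reconstructs poorly (high score), which is what the threshold filter on the `Anomaly` column later keys on.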
## Slides
See https://speakerdeck.com/rmoff/processing-iot-data-with-apache-kafka-ksql-and-machine-learning
## Demo script
See [demo.adoc](demo.adoc)
## Quick Start for KSQL Machine Learning UDF
How do you test this implementation? The analytic model and its dependency are already included in this project. You just have to start a Kafka broker (including Zookeeper) and the KSQL server to send input streams for model inference. Here are the steps...
### Build
UDFs currently require rebuilding the KSQL project to include the new function.
However, the Maven build for KSQL is already done in this project. If you want to change the UDF logic or add your own models, then you need to rebuild the project.
mvn -DskipTests=true -Dcheckstyle.skip=true clean package
### Set up the infrastructure
The Confluent CLI is the easiest way to start a new cluster; set it up as described at https://github.com/confluentinc/confluent-cli
Alternatively, you can use an existing Kafka cluster with default broker URL (or reconfigure the ksql-server.properties file to point to your existing Kafka cluster URL).
Start Kafka (also starts Zookeeper as dependency):
confluent start kafka
Start Kafka Connect (not needed for KSQL, but used for integrating with Elastic for the demo):
confluent start connect
Start the KSQL Server:
bin/ksql-server-start config/ksql-server.properties
Start the KSQL CLI (or alternatively use the KSQL UI):
bin/ksql http://localhost:8088
The following creates topics and test data manually so that you can follow each step. See below for steps on generating random test data continually
This example uses [kafkacat](https://github.com/edenhill/kafkacat/), an open-source command-line tool for easily interacting with Kafka.
Create a Kafka Topic for this demo:
kafka-topics \
--zookeeper localhost:2181 \
--create \
--topic HealthSensorInputTopic \
--partitions 1 \
--replication-factor 1
In KSQL, create STREAM and SELECT Queries:
CREATE STREAM healthsensor (eventid integer, sensorinput varchar) WITH (kafka_topic='HealthSensorInputTopic', value_format='DELIMITED');
CREATE STREAM SENSOR_RAW WITH (VALUE_FORMAT='AVRO') AS SELECT * FROM HEALTHSENSOR;
SHOW STREAMS;
DESCRIBE healthsensor;
SELECT eventid, anomaly(SENSORINPUT) from healthsensor;
Send a sample message (returns a prediction of 1.2104138026620321):
echo -e "99999,2.10# 2.13# 2.19# 2.28# 2.44# 2.62# 2.80# 3.04# 3.36# 3.69# 3.97# 4.24# 4.53#4.80# 5.02# 5.21# 5.40# 5.57# 5.71# 5.79# 5.86# 5.92# 5.98# 6.02# 6.06# 6.08# 6.14# 6.18# 6.22# 6.27#6.32# 6.35# 6.38# 6.45# 6.49# 6.53# 6.57# 6.64# 6.70# 6.73# 6.78# 6.83# 6.88# 6.92# 6.94# 6.98# 7.01#7.03# 7.05# 7.06# 7.07# 7.08# 7.06# 7.04# 7.03# 6.99# 6.94# 6.88# 6.83# 6.77# 6.69# 6.60# 6.53# 6.45#6.36# 6.27# 6.19# 6.11# 6.03# 5.94# 5.88# 5.81# 5.75# 5.68# 5.62# 5.61# 5.54# 5.49# 5.45# 5.42# 5.38#5.34# 5.31# 5.30# 5.29# 5.26# 5.23# 5.23# 5.22# 5.20# 5.19# 5.18# 5.19# 5.17# 5.15# 5.14# 5.17# 5.16#5.15# 5.15# 5.15# 5.14# 5.14# 5.14# 5.15# 5.14# 5.14# 5.13# 5.15# 5.15# 5.15# 5.14# 5.16# 5.15# 5.15#5.14# 5.14# 5.15# 5.15# 5.14# 5.13# 5.14# 5.14# 5.11# 5.12# 5.12# 5.12# 5.09# 5.09# 5.09# 5.10# 5.08# 5.08# 5.08# 5.08# 5.06# 5.05# 5.06# 5.07# 5.05# 5.03# 5.03# 5.04# 5.03# 5.01# 5.01# 5.02# 5.01# 5.01#5.00# 5.00# 5.02# 5.01# 4.98# 5.00# 5.00# 5.00# 4.99# 5.00# 5.01# 5.02# 5.01# 5.03# 5.03# 5.02# 5.02#5.04# 5.04# 5.04# 5.02# 5.02# 5.01# 4.99# 4.98# 4.96# 4.96# 4.96# 4.94# 4.93# 4.93# 4.93# 4.93# 4.93# 5.02# 5.27# 5.80# 5.94# 5.58# 5.39# 5.32# 5.25# 5.21# 5.13# 4.97# 4.71# 4.39# 4.05# 3.69# 3.32# 3.05#2.99# 2.74# 2.61# 2.47# 2.35# 2.26# 2.20# 2.15# 2.10# 2.08" | kafkacat -b localhost:9092 -P -t HealthSensorInputTopic
Create derived stream in KSQL:
CREATE STREAM AnomalyDetection WITH (VALUE_FORMAT='AVRO') AS \
SELECT eventid, sensorinput, \
CAST (anomaly(sensorinput) AS DOUBLE) as Anomaly \
FROM healthsensor;
Now create a filter so that you only get specific messages (could be alerts):
CREATE STREAM AnomalyDetectionBreach AS \
SELECT * FROM AnomalyDetection \
WHERE Anomaly >1.3;
SELECT * FROM AnomalyDetection;
SELECT * FROM AnomalyDetectionBreach;
Send another test message. This one returns a prediction of 1.4191201699929437:
echo -e "33333, 6.90#6.89#6.86#6.82#6.78#6.73#6.64#6.57#6.50#6.41#6.31#6.22#6.13#6.04#5.93#5.85#5.77#5.72#5.65#5.57#5.53#5.48#5.42#5.38#5.35#5.34#5.30#5.27#5.25#5.26#5.24#5.21#5.22#5.22#5.22#5.20#5.19#5.20#5.20#5.18#5.19#5.19#5.18#5.15#5.13#5.10#5.07#5.03#4.99#5.00#5.01#5.06#5.14#5.31#5.52#5.72#5.88#6.09#6.36#6.63#6.86#7.10#7.34#7.53#7.63#7.64#7.60#7.38#6.87#6.06#5.34#5.03#4.95#4.84#4.69#4.65#4.54#4.49#4.46#4.43#4.38#4.33#4.31#4.28#4.26#4.21#4.19#4.18#4.15#4.12#4.09#4.08#4.07#4.03#4.01#4.00#3.97#3.94#3.90#3.90#3.89#3.85#3.81#3.81#3.79#3.77#3.74#3.72#3.71#3.70#3.67#3.66#3.68#3.67#3.66#3.67#3.69#3.71#3.72#3.75#3.80#3.85#3.89#3.95#4.03#4.06#4.18#4.25#4.36#4.45#4.54#4.60#4.68#4.76#4.83#4.86#4.91#4.95#4.97#4.98#5.00#5.04#5.04#5.05#5.03#5.06#5.07#5.06#5.05#5.06#5.07#5.07#5.06#5.06#5.07#5.07#5.06#5.07#5.07#5.08#5.06#5.06#5.08#5.09#5.09#5.10#5.11#5.11#5.10#5.10#5.11#5.12#5.10#5.06#5.07#5.06#5.05#5.02#5.02#5.02#5.01#4.99#4.98#5.00#5.00#5.00#5.02#5.03#5.03#5.01#5.01#5.03#5.04#5.02#5.01#5.02#5.04#5.02#5.02#5.03#5.04#5.03#5.03#5.02#5.04#5.04#5.03#5.03#5.05#5.04" | kafkacat -b localhost:9092 -P -t HealthSensorInputTopic
Inspect the resulting Kafka topics. One with all scored events:
$ kafkacat -b localhost:9092 -C -t ANOMALYDETECTION
99999,1.2104138026620321
% Reached end of topic ANOMALYDETECTION [1] at offset 0
% Reached end of topic ANOMALYDETECTION [2] at offset 0
33333,1.4191201699929437
% Reached end of topic ANOMALYDETECTION [3] at offset 1
% Reached end of topic ANOMALYDETECTION [0] at offset 1
One with just those that breach an alert:
$ kafkacat -b localhost:9092 -C -t ANOMALYDETECTIONWITHFILTER
% Reached end of topic ANOMALYDETECTIONWITHFILTER [0] at offset 0
% Reached end of topic ANOMALYDETECTIONWITHFILTER [1] at offset 0
33333,1.4191201699929437
% Reached end of topic ANOMALYDETECTIONWITHFILTER [3] at offset 0
% Reached end of topic ANOMALYDETECTIONWITHFILTER [2] at offset 1
### Replaying sample test data
Taking an input file of readings only, this will add a sequence number:
awk '{gsub(/\,/,"#");print NR","$0}' ecg_discord_test.csv > ecg_discord_test.msgs
Play data into Kafka:
kafkacat -b localhost:9092 -P -t HealthSensorInputTopic -l ecg_discord_test.msgs
Generates all readings with same/close timestamp though. To spread out over time, use `pv` to throttle to a given bytes/sec throughput:
cat ecg_discord_test.msgs | pv -q -L 1000| kafkacat -b localhost:9092 -P -t HealthSensorInputTopic
Run continually:
cd test-data
./stream_loop_of_test_data_into_kafka.sh
### Generating random test data
./bin/ksql-datagen schema=EcdSensorData.avro format=delimited topic=HealthSensorInputTopic key=eventid maxInterval=2000
This uses the ksql-datagen tool (part of the KSQL project) to generate test data. While it provides random data, it's not very realistic compared to real-world data, since it is truly random rather than following a realistic pattern.
### Change anomaly threshold
TERMINATE CSAS_ANOMALYDETECTIONBREACH;
DROP STREAM ANOMALYDETECTIONBREACH;
CREATE STREAM AnomalyDetectionBreach AS \
SELECT * FROM AnomalyDetection \
WHERE Anomaly >4;
## Stream to Elasticsearch
Create a Kafka Connect sink to stream all scored events to Elasticsearch:
curl -X "POST" "http://localhost:8083/connectors/" \
-H "Content-Type: application/json" \
-d '{
"name": "es_sink_raw_events",
"config": {
"topics": "SENSOR_RAW",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"key.ignore": "true",
"schema.ignore": "true",
        "type.name": "kafkaconnect",
"topic.index.map": "SENSOR_RAW:healthsensorinput_raw",
"connection.url": "http://localhost:9200",
"transforms": "ExtractTimestamp",
"transforms.ExtractTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.ExtractTimestamp.timestamp.field" : "EXTRACT_TS"
}
}'
curl -X "POST" "http://localhost:8083/connectors/" \
-H "Content-Type: application/json" \
-d '{
"name": "es_sink_anomaly",
"config": {
"topics": "ANOMALYDETECTION",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"key.ignore": "true",
"schema.ignore": "true",
        "type.name": "kafkaconnect",
"topic.index.map": "ANOMALYDETECTION:healthsensorinput_scored",
"connection.url": "http://localhost:9200",
"transforms": "ExtractTimestamp",
"transforms.ExtractTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.ExtractTimestamp.timestamp.field" : "EXTRACT_TS"
}
}'
Create a Kafka Connect sink to stream all events that breach an alert threshold to Elasticsearch:
curl -X "POST" "http://localhost:8083/connectors/" \
-H "Content-Type: application/json" \
-d '{
"name": "es_sink_anomaly_alerts",
"config": {
"topics": "ANOMALYDETECTIONBREACH",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"key.ignore": "true",
"schema.ignore": "true",
        "type.name": "kafkaconnect",
"topic.index.map": "ANOMALYDETECTIONBREACH:healthsensorinput_alerts",
"connection.url": "http://localhost:9200",
"transforms": "ExtractTimestamp",
"transforms.ExtractTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.ExtractTimestamp.timestamp.field" : "EXTRACT_TS"
}
}'
## Visualisation

## Monitoring
Optionally, start the Confluent Control Center :
confluent start control-center
Once started, go to http://localhost:9021/monitoring/streams/ to monitor the pipelines you have built

# Join the Confluent Community
Whether you need help, want to contribute, or are just looking for the latest news around the Apache Kafka ecosystem and Confluent, you can find out how to [connect with your fellow Confluent community members here](https://www.confluent.io/contact-us-thank-you/).
* Ask a question in the #ksql channel in Confluent's public [Confluent Community Slack](https://slackpass.io/confluentcommunity). Account registration is free and self-service.
* Join the [Confluent Google group](https://groups.google.com/forum/#!forum/confluent-platform).
If you have feedback regarding the Kafka ecosystem and Machine Learning, feel free to contact me directly via LinkedIn, Twitter or Email. Also check out my other [Kafka-ML Github project](https://github.com/kaiwaehner/kafka-streams-machine-learning-examples) where I leverage Kafka's Streams API to apply analytic models trained with H2O, TensorFlow and DeepLearning4j.
## Next Steps (hopefully) coming soon:
- Real demo sensor data (i.e. a continuous stream)
- Integration with Kafka Connect
- More business logic and different analytic models in the UDF
# Contributing
Contributions to the code, examples, documentation, etc, are very much appreciated.
- Report issues and bugs directly in [this GitHub project](https://github.com/kaiwaehner/ksql/issues).
# License
The project is licensed under the Apache License, version 2.0.
*Apache, Apache Kafka, Kafka, and associated open source project names are trademarks of the [Apache Software Foundation](https://www.apache.org/).*
| 1 |
BankID/SampleCode | Official BankID example code to generate the BankID demo site | null | # BankID sample code
BankID is the largest eID in Sweden, with more than 8.4 million users and over 6,000 connected businesses and authorities. Our solution has revolutionized everyday life in Sweden and lays the foundation for a modern and accessible society.
## Demo site
To help you integrate BankID in a correct, secure and user-friendly way, we have created a demo site, where you can test the digital identification flow and the digital signature flow.
<img src="https://www.bankid.com/assets/bankid/img/github_demo.png" />
[Visit the demo site](https://www.bankid.com/demo)
## Sample code
Here we provide the code used to create the demo site. The code languages used are:
* Frontend: React
* Backend: Java
## Help for a full integration
To integrate the BankID infrastructure, it is necessary to set up frontend and backend as well as to have a SSL-certificate. For help with your full integration, check out our [developer section](https://www.bankid.com/en/utvecklare/guider) on our website.
## Disclaimer and terms of use
We, Finansiell ID-teknik BID AB, are not responsible for the correctness, nor the usage, of the code provided. You must always test your integration thoroughly, and you are responsible for ensuring it works in your environment.
You may not use the name of our company or brand without written consent.
---
## More info
[Readme for the backend](/server/README.md)
[Readme for the frontend](/client/README.md)
| 1 |
thheller/reagent-react-native | Example App using reagent with react-native via shadow-cljs | null | ```
$ npm install && cd react-native && yarn install
$ shadow-cljs watch app
;; wait for first compile to finish or metro gets confused
$ cd react-native
$ npm start
;; and
$ npm run android
;; production build
$ shadow-cljs release app
;; Create Android release
$ cd react-native/android
$ ./gradlew assembleRelease
;; APK should appear at android/app/build/outputs/apk/release
;; installs in Android as "AwesomeProject"
$ adb install -r react-native/android/app/build/outputs/apk/release/app-release.apk
```
## Notes
The `react-native` folder was generated by calling `react-native init AwesomeProject` and renaming the folder.
The `:app` build will create a `react-native/app/index.js`. In `release` mode that is the only file needed. In dev mode the `app` directory will contain many more `.js` files.
`:init-fn` is called after all files are loaded and in the case of `expo` must render something synchronously as it will otherwise complain about a missing root component.
| 1 |
schordas/SchematicPlanets | An example implementation of the Schematic content provider library. | null | # SchematicPlanets
An example implementation of the Schematic content provider library.
This app also demonstrates an approach to using RecyclerView (with a CursorLoader) and Floating Action Buttons.
Enjoy!
| 1 |
cdk8s-team/cdk8s-examples | null | null | # cdk8s-examples | 0 |
petros94/smart-home-websockets | Websocket client-server example app, with ActiveMQ message broker | null | # Smart Home demo application using Spring Boot, Websockets and ActiveMQ
Websocket client-server example app, with ActiveMQ message broker.
This is the code repo for the DZone articles:
* Part I: https://dzone.com/articles/full-duplex-scalable-client-server-communication-u
* Part II: https://dzone.com/articles/full-duplex-scalable-client-server-communication-2
## Description
In our scenario, all the smart devices have a persistent connection to a server. The server is responsible for sending commands to specific devices, such as turning on the living room lights, or enabling the alarm. It can also receive information from devices. For example there can be a temperature sensor that takes readings every minute, or an oven that sends alerts if the temperature is too high. Finally the server may also issue commands to all devices, such as turn on/off.

Each microservice (MS) is written in Java 11, using the Spring Boot framework. The communication with the clients is handled by the Device Management MS. The Control MS exposes the REST API, and communicates with the Device Mgmt MS using an Active MQ Artemis message broker. For incoming traffic routing, service discovery and load balancing we are using Spring Cloud Gateway and Eureka.
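Conceptually, the Device Management MS keeps one live connection per device and forwards commands by destination id. The toy sketch below shows just that dispatch idea in plain Java; the real service uses WebSocket sessions and the ActiveMQ broker, not an in-memory map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Toy sketch of the Device Management role: one live "connection" (here just a
// callback) per device id, with commands routed by destination.
public class DeviceRegistrySketch {
    private final Map<String, Consumer<String>> sessions = new ConcurrentHashMap<>();

    // Called when a device establishes its persistent connection.
    public void register(String deviceId, Consumer<String> send) {
        sessions.put(deviceId, send);
    }

    // Deliver a command to one device; returns false if it is not connected.
    public boolean dispatch(String deviceId, String command) {
        Consumer<String> send = sessions.get(deviceId);
        if (send == null) {
            return false;
        }
        send.accept(command);
        return true;
    }

    // Models the "turn all devices on/off" case from the scenario above.
    public void broadcast(String command) {
        sessions.values().forEach(s -> s.accept(command));
    }
}
```

A command whose `destination` matches a registered device id is delivered over that device's connection; anything else is reported as undeliverable.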
## Prerequisites
* Java 11 or above
* Docker
## How to install
From root directory type:
> mvn package -DskipTests
To build the docker images run:
> docker-compose build
## How to run
To run with Docker, after creating the images, navigate to the root directory and type:
> docker-compose up
This will bring up the server-side part (including ActiveMQ) and one client
## How to test
* Monitor the services discovered by Eureka by visiting: http://localhost:8761
* The ActiveMQ Artemis broker comes with a management page running at: http://localhost:8161
* You can send POST requests to http://localhost:8000/control-service/device
```
curl --location --request POST 'http://localhost:8000/control-service/device' \
--header 'Content-Type: application/json' \
--data-raw '{
"destination": "lights_living_room",
"command": "turn_on",
"args": {
}
}'
```
Then you can monitor the device-client / device-management service logs to see the transmitted messages
## Locust scripts
To be added soon
## Contact details
Feel free to contact us for any questions or suggestions at: kmandalas@gmail.com or submit a github issue.
| 1 |
witgo/CRF | CRF is a Java implementation of Conditional Random Fields, an algorithm for learning from labeled sequences of examples. It also includes an implementation of Maximum Entropy learning. | null | null | 0 |
chrisjenx/StaggeredGridView | Based of the google staggeredgridview that has been hidden from the ACL, this is an example based off of that. | null | StaggeredGridView
=================
## This is just a demo
If you want a better implementation of this, either wait for Google to release it,
or try https://github.com/maurycyw/StaggeredGridView
### About
Based on the Google StaggeredGridView that has been hidden from the ACL, this is an example built on top of it, with support for a scroll listener.
| 1 |
lidimayra/from-rails-to-spring-boot | A quick guide for developers migrating from Rails to Spring Boot with examples | java mvc rails ruby ruby-on-rails spring spring-boot web | # From Rails to Spring Boot
Like Rails, Spring Boot also follows _Convention over Configuration_ principles.
This repository's goal is to focus on similarities and differences between both
frameworks in order to provide a quick guide for developers that are migrating
from one to another.
Contributions are welcome!
[Pre-requisite](#pre-requisite)\
[Maven installation](#maven-installation)\
[Spring Boot installation](#spring-boot-cli-installation)\
[App Initialization](#app-initialization)\
[Controllers & Views](#controllers-and-views)\
[Project Structure](#project-structure)\
[RESTful routes](#restful-routes)\
[From Rails Models to Spring Entities](#from-rails-models-to-spring-entities)\
[Performing a creation through a web interface](#performing-a-creation-through-a-web-interface)\
[Displaying a collection of data](#displaying-a-collection-of-data)\
[Editing and Updating data](#editing-and-updating-data)\
[Showing a Resource](#showing-a-resource)\
[Destroying a Resource](#destroying-a-resource)
## Pre-requisite
[Java Development Kit 8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
## Maven installation
### On Ubuntu
```
sudo apt update
sudo apt install maven
```
### On Mac OS (with Homebrew)
```
brew update
brew install maven
```
## Spring Boot CLI installation
### On Ubuntu (with SDKMAN)
```
curl "https://get.sdkman.io" | bash
source ~/.sdkman/bin/sdkman-init.sh
sdk install springboot
```
### On Mac (with Homebrew)
```
brew tap pivotal/tap
brew install springboot
```
## App Initialization
Once Spring Boot CLI is installed, we can use the `spring init` command to start a
new Spring Boot project (just like we would do with `rails new`):
```
# rails new <app_name>
spring init <app_name> -d=web,data-jpa,h2,thymeleaf
```
`-d` allows us to specify dependencies we want to set up. In this example we're
using the ones that are aimed at a basic web project:
- [web](https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-web):
Build web, including RESTful, applications using Spring MVC. Uses Apache Tomcat as the default embedded container.
- [data-jpa](https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-data-jpa):
Persist data in SQL stores with Java Persistence API using Spring Data and Hibernate.
- [h2](https://mvnrepository.com/artifact/com.h2database/h2): Provides a fast
in-memory database that supports the JDBC API, with a small
(2 MB) footprint. Supports embedded and server modes as well as a browser-based
console application.
- [thymeleaf](https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-thymeleaf): Server-side Java template engine
[Example of Spring Boot
initialization](https://github.com/lidimayra/from-rails-to-spring-boot/commit/310ae4766254c3b18c6fe144cf7eacee49dcc515).
Note that a class named `DemoApplication.java` was created in
`src/main/java/com/example/<app_name>/` ([Example](https://github.com/lidimayra/from-rails-to-spring-boot/blob/310ae4766254c3b18c6fe144cf7eacee49dcc515/myapp/src/main/java/com/example/myapp/DemoApplication.java))
By default, Spring uses [Maven](https://maven.apache.org/) as the project
management tool. After running the command above, dependencies can be found in
`pom.xml` file, at the root directory.
Install dependencies specified in `pom.xml` by using Maven:
```
# bundle install
mvn clean install
```
Start the server using `spring-boot:run`, a goal provided by the Spring Boot Maven
plugin:
```
# rails s
mvn spring-boot:run
```
Now the application can be accessed at http://localhost:8080/. At this point, an
error page will be rendered, as no controllers have been defined so far.
## Controllers and views
In Spring Boot, there is no such thing as the rails generators. Also, there
is no file like _routes.rb_, where all routes are specified in a single place.
Write the controller inside `<app_name>/src/main/java/<package_name>`:
```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
@Controller
public class FooController {
@GetMapping("/foo")
public String index() {
return "bar";
}
}
```
The `@GetMapping` annotation ensures that GET requests performed to `/foo` will be
mapped to the method declared right after it (routes are defined alongside their
handler methods; there is no central file like Rails' routes.rb).
Because of Thymeleaf, by returning the String "bar", the application will look
for an HTML file of the same name in `src/main/resources/templates/`.
Create the following page:
_bar.html_
```html
<p>FooBar</p>
```
[Example](https://github.com/lidimayra/from-rails-to-spring-boot/commit/13d195c)
Now, if we run the application with `mvn spring-boot:run` command and access
it at `http://localhost:8080/foo`, we'll see the _bar.html_ page being rendered.
## Project Structure
At this point, we have the initial structure of a Maven project.
- Main application code is placed in
[src/main/java/](https://github.com/lidimayra/from-rails-to-spring-boot/tree/13d195c/myapp/src/main/java)
- Resources are placed in [src/main/resources](https://github.com/lidimayra/from-rails-to-spring-boot/tree/13d195c/myapp/src/main/resources)
- Tests code is placed in
[src/test/java](https://github.com/lidimayra/from-rails-to-spring-boot/tree/310ae47/myapp/src/test/java)
In the root directory, we have the pom file:
[pom.xml](https://github.com/lidimayra/from-rails-to-spring-boot/blob/47070ef50056a763fdfeba46a8c8da2034de6118/myapp/pom.xml).
This is the Maven build specification. Like in Rails Gemfile, it contains the
project's dependencies declarations.
## RESTful routes
Let's say we want to build a blog containing the seven RESTful actions (index,
new, create, show, edit, update and destroy) for the posts path. In Rails, we could
achieve that by defining `resources :posts` in the `routes.rb` file.
As mentioned previously, Spring Boot does not have a central point where
all routes are specified. Those are defined in the controllers instead.
We've already seen an example using the `@GetMapping` annotation to demonstrate the
definition of a route that uses the `GET` method. Similarly, Spring provides four
other built-in annotations for handling the remaining HTTP request methods:
`@PostMapping`, `@PutMapping`, `@DeleteMapping` and `@PatchMapping`.
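To make the mapping concrete, a controller covering the seven RESTful actions could be sketched as below. This skeleton is illustrative only (the class name and the elided method bodies are placeholders, not the project's actual code — see the linked commit for that):

```java
@Controller
public class PostController {
    @GetMapping("/posts")           public String index() { ... }    // list all posts
    @GetMapping("/posts/new")       public String newPost() { ... }  // render creation form
    @PostMapping("/posts")          public String create() { ... }   // create a post
    @GetMapping("/posts/{id}")      public String show() { ... }     // show one post
    @GetMapping("/posts/{id}/edit") public String edit() { ... }     // render edit form
    @PatchMapping("/posts/{id}")    public String update() { ... }   // update a post
    @DeleteMapping("/posts/{id}")   public String destroy() { ... }  // delete a post
}
```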
An example of these concepts applied to the blog posts can be found [here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/101611c).
## From Rails Models to Spring Entities
In order to represent a Post at the application level, we'll need to define it
as a Spring JPA entity (very similar to the way it would be done with a model
in Rails).
```java
@Entity // Designate it as a JPA Entity
public class Post {
@Id // Mark id field as the entity's identity
@GeneratedValue(strategy = GenerationType.AUTO) // Value will be automatically provided
private Long id;
private String title;
private String content;
public Long getId() { ... }
public void setId(Long id) { ... }
public String getTitle() { ... }
public void setTitle(String title) { ... }
public String getContent() { ... }
public void setContent(String content) { ... }
}
```
Spring Data JPA provides some built-in methods to manipulate common data
persistence operations through the usage of repositories in a way that's very
similar to Rails' ActiveRecord. So, to work with Post data, a PostRepository must
be implemented as well:
```java
public interface PostRepository extends JpaRepository<Post, Long> {
}
```
The JpaRepository interface takes two type parameters, in this scenario `Post` and
`Long`: `Post` because it is the entity that will be managed, and `Long` because
that's the type of `Post`'s identity (ID).
This interface will be automatically implemented at runtime.
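To see how the two type parameters flow into the inherited methods, here is a plain-Java, in-memory stand-in for the kind of implementation Spring generates at runtime. Everything here (`MiniRepository`, `InMemoryPostRepository`, the cut-down `Post`) is illustrative, not Spring's actual API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

// A Post cut down to id and title for brevity (illustrative, not the project's entity)
class Post {
    Long id;
    String title;
    Post(String title) { this.title = title; }
}

// Same shape as JpaRepository<T, ID>: the entity type and its id type
// flow into every inherited method signature
interface MiniRepository<T, ID> {
    T save(T entity);
    Optional<T> findById(ID id);
    List<T> findAll();
}

// In-memory stand-in for the implementation Spring generates at runtime
class InMemoryPostRepository implements MiniRepository<Post, Long> {
    private final Map<Long, Post> store = new LinkedHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    public Post save(Post post) {
        if (post.id == null) post.id = sequence.incrementAndGet(); // roughly GenerationType.AUTO
        store.put(post.id, post);
        return post;
    }

    public Optional<Post> findById(Long id) { return Optional.ofNullable(store.get(id)); }

    public List<Post> findAll() { return new ArrayList<>(store.values()); }
}

public class RepoDemo {
    public static void main(String[] args) {
        InMemoryPostRepository posts = new InMemoryPostRepository();
        Post saved = posts.save(new Post("Hello"));
        if (!posts.findById(saved.id).isPresent()) throw new AssertionError();
        System.out.println("saved post with id " + saved.id);
    }
}
```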
The whole example can be found
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/e755e5a).
## Performing a creation through a web interface
Next step is adding a form to submit posts to the blog.
At this point, we already have the
[templates/blog/new.html](https://github.com/lidimayra/from-rails-to-spring-boot/blob/101611c7a5c5321169e492ed19381df5c1b12c76/myapp/src/main/resources/templates/blog/new.html)
file containing a single line in it.
Using Thymeleaf, we can do that with the following approach:
```html
<!DOCTYPE html SYSTEM
"http://www.thymeleaf.org/dtd/xhtml1-strict-thymeleaf-4.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<body>
<p>New Post</p>
<form method="POST" action="/posts">
<label for="title">Title:</label>
<input type="text" name="title" size="50"></input><br/>
<label for="content">Content:</label><br/>
<textarea name="content" cols="80" rows="5"></textarea>
<br/>
<input type="submit"></input>
</form>
</body>
</html>
```
Then `BlogController` must be adjusted so that when a POST request to `/posts`
is performed, the submitted params are used to create the new post.
```java
@Controller
public class BlogController {
@Autowired
private PostRepository postRepository;
@GetMapping("/posts")
public String listPosts() { ... }
@PostMapping("/posts")
public String createPost(Post post) {
postRepository.save(post); // Use JPA repository built-in method.
return "redirect:/posts"; // redirect user to /posts page.
}
}
```
See the whole implementation
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/b7301838feb251851874fc72704e0100d2e8fa0e#diff-926ef30f0a8789410c4e35200aacb000).
## Displaying a collection of data
We'll make changes to the `/posts` page so it lists all posts
recorded in the database.
The `BlogController` method associated with this route needs to be adjusted
to make this data available to the view:
```java
@GetMapping("/posts")
public String listPosts(Model model) {
List<Post> posts = postRepository.findAll();
model.addAttribute("posts", posts);
return "blog/index";
}
```
In Spring, Models are used to hold application data and make it available to the
view (like instance variables in Rails). In this example, we're adding the list
of posts to a key named `posts`, so we can access it from the template.
The following code must be added to
[templates/blog/index.html](https://github.com/lidimayra/from-rails-to-spring-boot/blob/101611c7/myapp/src/main/resources/templates/blog/index.html):
```html
<!DOCTYPE html SYSTEM "http://www.thymeleaf.org/dtd/xhtml1-strict-thymeleaf-4.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<h1>Blog</h1>
<dl th:each="post : ${posts}">
<dt>
<span th:text="${post.title}">Title</span>
</dt>
<dd>
<span th:text="${post.content}">Content</span>
</dd>
</dl>
<a th:href="@{/posts/new}">Submit a new post</a>
```
See the implementation
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/b96ce2c).
Now, accessing the application at http://localhost:8080/posts, it is possible to
list and submit posts using the features implemented so far. A similar approach
can be applied to implement the other actions.
## Editing and Updating data
Now we want to enable editing/updating functionalities.
Following changes must be made to `editPost()` method in `BlogController`:
```java
@GetMapping("/posts/{postId}/edit")
public String editPost(@PathVariable("postId") long id, Model model) {
    Post post = postRepository.findById(id)
            .orElseThrow(() -> new IllegalArgumentException("Invalid Post Id:" + id)); // Ensure post exists before rendering edit form
    model.addAttribute("post", post); // enable post to be consumed by edit template
    return "blog/edit"; // render edit template
}
```
Note that the `id` parameter has a `@PathVariable` annotation. This annotation
indicates that the param receives a value embedded in the path. In this case, the
`id` param will hold the value passed as `postId` when performing a request to
`/posts/{postId}/edit`, just like we would get by reading `params[:postId]` in Rails.
Then, we must implement the edit form:
```html
<!DOCTYPE html SYSTEM "http://www.thymeleaf.org/dtd/xhtml1-strict-thymeleaf-4.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<body>
<p>Edit Post</p>
<form th:method="post"
th:action="@{/posts/{id}(id=${post.id})}"
th:object="${post}">
<input type="hidden" name="_method" value="patch" />
<label for="title">Title:</label>
<input type="text" name="title" size="50" th:field="${post.title}"></input>
<br/>
<label for="content">Content:</label>
<br/>
<textarea name="content" cols="80" rows="5" th:field="${post.content}"></textarea>
<br/>
<input type="submit"></input>
</form>
</body>
</html>
```
This is enough to render an edit form. Thanks to Thymeleaf we can use `th:field`
to map Post fields and provide a pre-populated form to the final user. At
this point, the edit form can be accessed at
`http://localhost:8080/posts/<post_id>/edit`.
However, as the update behavior hasn't been implemented yet, it is still pointless
to submit this form.
In order to implement it, the following changes are required in the
`BlogController`:
```java
@PatchMapping("/posts/{postId}")
public String updatePost(@PathVariable("postId") long id, Model model, Post post) {
Post recordedPost = postRepository.findById(id)
.orElseThrow(() -> new IllegalArgumentException("Invalid Post Id:" + id));
recordedPost.setTitle(post.getTitle());
recordedPost.setContent(post.getContent());
postRepository.save(recordedPost);
model.addAttribute("posts", postRepository.findAll());
return "blog/index";
}
```
After these changes, posts are ready to be edited through the UI. An edit link
can also be added to `posts/index` to enable edit form to be easily accessed:
```html
<a th:href="@{/posts/{id}/edit(id=${post.id})}">Edit</a>
```
This implementation can be seen
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/2960884).
## Showing a Resource
Given what we've done so far, there is nothing new in implementing the feature
responsible for showing a resource.
Changes to be performed to the controller:
```java
@GetMapping("/posts/{postId}")
public String showPost(@PathVariable("postId") long id, Model model) {
Post post = postRepository.findById(id)
.orElseThrow(() -> new IllegalArgumentException("Invalid Post Id:" + id));
model.addAttribute("post", post);
return "blog/show";
}
```
And a simple template to display title and content for a single post:
```html
<!DOCTYPE html SYSTEM "http://www.thymeleaf.org/dtd/xhtml1-strict-thymeleaf-4.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<body>
<h1 th:text="${post.title}"></h1>
<hr>
<p th:text="${post.content}"></p>
<p><a th:href="@{/posts/{id}/edit(id=${post.id})}">Edit</a></p>
<hr>
<a th:href="@{/posts/}">Go back to posts</a>
</body>
</html>
```
These changes enable post details to be available at `http://localhost:8080/posts/<post_id>`.
We can also add a link at posts index to allow direct access to show:
```html
<a th:href="@{/posts/{id}/(id=${post.id})}">Show</a>
```
The implementation can be seen
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/dd98c1d).
## Destroying a Resource
Now, we'll add the feature to remove a post.
In `BlogController`:
```java
@GetMapping("/posts/{postId}/delete")
public String deletePost(@PathVariable("postId") long id, Model model) {
Post recordedPost = postRepository.findById(id)
.orElseThrow(() -> new IllegalArgumentException("Invalid Post Id:" + id));
postRepository.delete(recordedPost);
model.addAttribute("posts", postRepository.findAll());
return "blog/index";
}
```
Note that we're using the GET method here. That's because in this example our
app is a monolith and HTML forms only support GET and POST. In order to
keep things simple and avoid adding a form with a hidden field to
handle the verb (like we did when updating), the deletion is exposed as a GET.
If this were an API, `@DeleteMapping` would be the ideal option.
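For reference, the form-based alternative mentioned above would look much like the edit form: a POST form carrying a hidden `_method` field set to `delete`. This is a sketch assuming the same hidden-field convention used in the update section; it would also require a matching `@DeleteMapping` handler in the controller:

```html
<form th:method="post" th:action="@{/posts/{id}(id=${post.id})}">
    <input type="hidden" name="_method" value="delete" />
    <input type="submit" value="Delete" />
</form>
```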
And then we can add a link to delete in index page:
```html
<a th:href="@{/posts/{id}/delete(id=${post.id})}">Delete</a>
```
Now it is possible to access http://localhost:8080/posts and delete each post
by using the delete link displayed below it.
The implementation can be found
[here](https://github.com/lidimayra/from-rails-to-spring-boot/commit/cb38bf7).
| 0 |
cassiomolin/log-aggregation-spring-boot-elastic-stack | Example on how to use Elastic Stack with Docker to collect, process, store, index and visualize logs of Spring Boot microservices. | null | ## Log aggregation with Spring Boot, Elastic Stack and Docker
In a microservices architecture, a single business operation might trigger a chain of downstream microservice calls, which can be pretty challenging to debug. Things, however, can be easier when the logs of all microservices are centralized and each log event contains details that allow us to trace the interactions between the applications.
This project demonstrates how to use Elastic Stack along with Docker to collect, process, store, index and visualize logs of Spring Boot microservices.
##### Table of contents
- [What is Elastic Stack?](#what-is-elastic-stack)
- [Elasticsearch](#elasticsearch)
- [Kibana](#kibana)
- [Beats](#beats)
- [Logstash](#logstash)
- [Putting the pieces together](#putting-the-pieces-together)
- [Logs as streams of events](#logs-as-streams-of-events)
- [Logging with Logback and SLF4J](#logging-with-logback-and-slf4j)
- [Enhancing log events with tracing details](#enhancing-log-events-with-tracing-details)
- [Logging in JSON format](#logging-in-json-format)
- [Running on Docker](#running-on-docker)
- [Example](#example)
- [Building the applications and creating Docker images](#building-the-applications-and-creating-docker-images)
- [Spinning up the containers](#spinning-up-the-containers)
- [Visualizing logs in Kibana](#visualizing-logs-in-kibana)
## What is Elastic Stack?
Elastic Stack is a group of open source applications from Elastic designed to take data from any source and in any format and then search, analyze, and visualize that data in real time. It was formerly known as [_ELK Stack_][elk-stack], in which the letters in the name stood for the applications in the group: [_Elasticsearch_][elasticsearch], [_Logstash_][logstash] and [_Kibana_][kibana]. A fourth application, [_Beats_][beats], was subsequently added to the stack, rendering the potential acronym to be unpronounceable. So ELK Stack became Elastic Stack.
So let's have a quick look at each component of Elastic Stack.
### Elasticsearch
[Elasticsearch][elasticsearch] is a real-time, distributed storage, JSON-based search, and analytics engine designed for horizontal scalability, maximum reliability, and easy management. It can be used for many purposes, but one context where it excels is indexing streams of semi-structured data, such as logs or decoded network packets.
### Kibana
[Kibana][kibana] is an open source analytics and visualization platform designed to work with Elasticsearch. Kibana can be used to search, view, and interact with data stored in Elasticsearch indices, allowing advanced data analysis and visualizing data in a variety of charts, tables, and maps.
### Beats
[Beats][beats] are open source data shippers that can be installed as agents on servers to send operational data directly to Elasticsearch or via Logstash, where it can be further processed and enhanced. There's a number of Beats for different purposes:
- [Filebeat][filebeat]: Log files
- [Metricbeat][metricbeat]: Metrics
- [Packetbeat][packetbeat]: Network data
- [Heartbeat][heartbeat]: Uptime monitoring
- And [more][beats].
As we intend to ship log files, [Filebeat][filebeat] will be our choice.
### Logstash
[Logstash][logstash] is a powerful tool that integrates with a wide variety of deployments. It offers a large selection of plugins to help you parse, enrich, transform, and buffer data from a variety of sources. If the data requires additional processing that is not available in Beats, then Logstash can be added to the deployment.
### Putting the pieces together
The following illustration shows how the components of Elastic Stack interact with each other:
![Elastic Stack][img.elastic-stack]
In a few words:
- Filebeat collects data from the log files and sends it to Logstash.
- Logstash enhances the data and sends it to Elasticsearch.
- Elasticsearch stores and indexes the data.
- Kibana displays the data stored in Elasticsearch.
## Logs as streams of events
The [Twelve-Factor App methodology][12factor], a set of best practices for building _software as a service_ applications, defines logs as _a stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services_ which _provide visibility into the behavior of a running app_. This set of best practices recommends that [logs should be treated as _event streams_][12factor.logs]:
> A twelve-factor app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to `stdout`. During local development, the developer will view this stream in the foreground of their terminal to observe the app’s behavior.
>
> In staging or production deploys, each process’ stream will be captured by the execution environment, collated together with all other streams from the app, and routed to one or more final destinations for viewing and long-term archival. These archival destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
With that in mind, the log event stream for an application can be routed to a file, or watched via realtime `tail` in a terminal or, preferably, sent to a log indexing and analysis system such as Elastic Stack.
## Logging with Logback and SLF4J
When creating Spring Boot applications that depend on the `spring-boot-starter-web` artifact, [Logback][logback] is pulled in as a transitive dependency and used as the default logging system. Logback is a mature and flexible logging system that can be used directly or, preferably, through [SLF4J][slf4j].
SLF4J is a logging facade, or abstraction, for various logging frameworks. For logging with SLF4J, we first have to obtain a [`Logger`][org.slf4j.Logger] instance using [`LoggerFactory`][org.slf4j.LoggerFactory], as shown below:
```java
public class Example {
final Logger log = LoggerFactory.getLogger(Example.class);
}
```
To be less verbose and avoid repeating ourselves in every class we want to log from, we can use [Lombok][lombok]. It provides the [`@Slf4j`][lombok.slf4j] annotation for generating the logger field for us. The class shown above is equivalent to the class shown below:
```java
@Slf4j
public class Example {
}
```
Once we get the logger instance, we can perform logging:
```java
log.trace("Logging at TRACE level");
log.debug("Logging at DEBUG level");
log.info("Logging at INFO level");
log.warn("Logging at WARN level");
log.error("Logging at ERROR level");
```
Parameterized messages with the `{}` syntax can also be used. This approach is preferable to string concatenation, as it doesn't incur the cost of constructing the message when the log level is disabled:
```java
log.debug("Found {} results", list.size());
```
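The saving is easy to see with a tiny plain-Java stand-in (illustrative only, not SLF4J's internals): the message is only assembled after the level check passes.

```java
// Plain-Java stand-in illustrating why "Found {} results" is cheaper than
// concatenation when the level is disabled: the message is only built
// after the level check. (Illustrative only -- not SLF4J's implementation.)
class MiniLogger {
    boolean debugEnabled = false;
    int messagesBuilt = 0;

    String format(String template, Object arg) {
        messagesBuilt++;                          // count message constructions
        return template.replace("{}", String.valueOf(arg));
    }

    void debug(String template, Object arg) {
        if (!debugEnabled) return;                // bail out before formatting
        System.out.println(format(template, arg));
    }
}

public class LogCostDemo {
    public static void main(String[] args) {
        MiniLogger log = new MiniLogger();
        log.debug("Found {} results", 42);        // DEBUG off: nothing is built
        if (log.messagesBuilt != 0) throw new AssertionError();
        log.debugEnabled = true;
        log.debug("Found {} results", 42);        // DEBUG on: formats and prints
        if (log.messagesBuilt != 1) throw new AssertionError();
    }
}
```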
In Spring Boot applications, Logback can be [configured][spring-boot.configure-logback] in the `logback-spring.xml` file, located under the `resources` folder. In this configuration file, we can take advantage of Spring profiles and the templating features provided by Spring Boot.
### Enhancing log events with tracing details
In a microservices architecture, a single business operation might trigger a chain of downstream microservice calls and such interactions between the services can be challenging to debug. To make things easier, we can use [Spring Cloud Sleuth][spring-cloud-sleuth] to enhance the application logs with tracing details.
Spring Cloud Sleuth is a distributed tracing solution for Spring Cloud and it adds a _trace id_ and a _span id_ to the logs:
- The _span_ represents a basic unit of work, for example sending an HTTP request.
- The _trace_ contains a set of spans, forming a tree-like structure. The trace id will remain the same as one microservice calls the next.
With this information, when visualizing the logs, we'll be able to get all events for a given trace or span id, providing visibility into the behavior of the chain of interactions between the services.
Once the Spring Cloud Sleuth dependency is added to the classpath, all interactions with the downstream services will be instrumented automatically and the trace and span ids will be added to the SLF4J's [Mapped Diagnostic Context][slf4j.mdc] (MDC), which will be included in the logs.
```xml
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-sleuth</artifactId>
<version>${spring-cloud-sleuth.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
</dependencies>
```
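Conceptually, the MDC is a per-thread map of diagnostic values that every log line can read. A minimal plain-Java stand-in (`MiniMdc` and `MdcDemo` are illustrative names, not SLF4J's actual classes) looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Per-thread map of diagnostic values, in the spirit of SLF4J's MDC
class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CTX.get().put(key, value); }
    static String get(String key) { return CTX.get().get(key); }
    static void clear() { CTX.get().clear(); }
}

public class MdcDemo {
    // every log line is enriched with whatever the current thread stored
    static String log(String message) {
        return String.format("traceId=%s spanId=%s %s",
                MiniMdc.get("traceId"), MiniMdc.get("spanId"), message);
    }

    public static void main(String[] args) {
        MiniMdc.put("traceId", "c52d9ff782fa8f6e");
        MiniMdc.put("spanId", "c52d9ff782fa8f6e");
        System.out.println(log("Finding details of post with id 1"));
        MiniMdc.clear();
    }
}
```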
### Logging in JSON format
Logback, by default, will produce logs in plain text. But as we intend our log events to be indexed in Elasticsearch, which stores JSON documents, it would be a good idea to produce log events in JSON format instead of having to parse plain text log events in Logstash.
To accomplish it, we can use the [Logstash Logback Encoder][logstash-logback-encoder], which provides Logback encoders, layouts, and appenders to log in JSON. The Logstash Logback Encoder was originally written to support output in Logstash's JSON format, but has evolved into a general-purpose, highly configurable, structured logging mechanism for JSON and other data formats.
And, instead of managing log files directly, our microservices can log to the standard output using the `ConsoleAppender`. As the microservices will run in Docker containers, we can leave the responsibility of writing the log files to Docker. We will see more details about Docker in the next section.
For a simple and quick configuration, we could use `LogstashEncoder`, which comes with a [pre-defined set of providers][logstash-logback-encoder.standard-fields]:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProperty scope="context" name="application_name" source="spring.application.name"/>
<appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
<appender-ref ref="jsonConsoleAppender"/>
</root>
</configuration>
```
The above configuration will produce the following log output (just bear in mind that the actual output is a single line, but it's been formatted below for better visualization):
```json
{
"@timestamp": "2019-06-29T23:01:38.967+01:00",
"@version": "1",
"message": "Finding details of post with id 1",
"logger_name": "com.cassiomolin.logaggregation.post.service.PostService",
"thread_name": "http-nio-8001-exec-3",
"level": "INFO",
"level_value": 20000,
"application_name": "post-service",
"traceId": "c52d9ff782fa8f6e",
"spanId": "c52d9ff782fa8f6e",
"spanExportable": "false",
"X-Span-Export": "false",
"X-B3-SpanId": "c52d9ff782fa8f6e",
"X-B3-TraceId": "c52d9ff782fa8f6e"
}
```
This encoder includes the values stored in the MDC by default. When Spring Cloud Sleuth is on the classpath, the following properties will be added to the MDC and will be logged: `traceId`, `spanId`, `spanExportable`, `X-Span-Export`, `X-B3-SpanId` and `X-B3-TraceId`.
If we need more flexibility in the JSON format and in the data included in the logs, we can use `LoggingEventCompositeJsonEncoder`. The composite encoder has no providers configured by default, so we must add the [providers][logstash-logback-encoder.providers-for-loggingevents] we want in order to customize the output:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProperty scope="context" name="application_name" source="spring.application.name"/>
<appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>UTC</timeZone>
</timestamp>
<version/>
<logLevel/>
<message/>
<loggerName/>
<threadName/>
<context/>
<pattern>
<omitEmptyFields>true</omitEmptyFields>
<pattern>
{
"trace": {
"trace_id": "%mdc{X-B3-TraceId}",
"span_id": "%mdc{X-B3-SpanId}",
"parent_span_id": "%mdc{X-B3-ParentSpanId}",
"exportable": "%mdc{X-Span-Export}"
}
}
</pattern>
</pattern>
<mdc>
<excludeMdcKeyName>traceId</excludeMdcKeyName>
<excludeMdcKeyName>spanId</excludeMdcKeyName>
<excludeMdcKeyName>parentId</excludeMdcKeyName>
<excludeMdcKeyName>spanExportable</excludeMdcKeyName>
<excludeMdcKeyName>X-B3-TraceId</excludeMdcKeyName>
<excludeMdcKeyName>X-B3-SpanId</excludeMdcKeyName>
<excludeMdcKeyName>X-B3-ParentSpanId</excludeMdcKeyName>
<excludeMdcKeyName>X-Span-Export</excludeMdcKeyName>
</mdc>
<stackTrace/>
</providers>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="jsonConsoleAppender"/>
</root>
</configuration>
```
Find below a sample of the log output for the above configuration. Again, the actual output is a single line, but it's been formatted for better visualization:
```json
{
"@timestamp": "2019-06-29T22:01:38.967Z",
"@version": "1",
"level": "INFO",
"message": "Finding details of post with id 1",
"logger_name": "com.cassiomolin.logaggregation.post.service.PostService",
"thread_name": "http-nio-8001-exec-3",
"application_name": "post-service",
"trace": {
"trace_id": "c52d9ff782fa8f6e",
"span_id": "c52d9ff782fa8f6e",
"exportable": "false"
}
}
```
## Running on Docker
We'll run Elastic Stack applications along with our Spring Boot microservices in [Docker][docker] containers:
![Docker containers][img.elastic-stack-docker]
As we will have multiple containers, we will use [Docker Compose][docker-compose] to manage them. With Compose, an application's services are configured in a YAML file. Then, with a single command, we create and start all the services from that configuration. Pretty cool stuff!
Have a look at how the services are defined and configured in [`docker-compose.yml`][repo.docker-compose.yml]. What's important to highlight is the fact that _labels_ have been added to some services. Labels are simply metadata that only has meaning for whoever uses it. Let's have a quick look at the labels defined for the services:
- `collect_logs_with_filebeat`: When set to `true`, indicates that Filebeat should collect the logs produced by the Docker container.
- `decode_log_event_to_json_object`: Filebeat collects and stores the log event as a string in the `message` property of a JSON document. If the events are logged as JSON (which is the case when using the appenders defined above), the value of this label can be set to `true` to indicate that Filebeat should decode the JSON string stored in the `message` property to an actual JSON object.
Both post and comment services will produce logs to the standard output (`stdout`). By default, Docker captures the standard output (and standard error) of all your containers, and writes them to files in JSON format, using the `json-file` driver. The logs files are stored in the `/var/lib/docker/containers` directory and each log file contains information about only one container.
When applications run on containers, they become moving targets to the monitoring system. So we'll use the [autodiscover][filebeat.autodiscover] feature from Filebeat, which allows it to track the containers and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running. So, in the [`filebeat.docker.yml`][repo.filebeat.docker.yml] file, Filebeat is configured to:
- Autodiscover the Docker containers that have the label `collect_logs_with_filebeat` set to `true`
- Collect logs from the containers that have been discovered
- Decode the `message` field to a JSON object when the log event was produced by a container that has the label `decode_log_event_to_json_object` set to `true`
- Send the log events to Logstash which runs on the port `5044`
```yaml
filebeat.autodiscover:
providers:
- type: docker
labels.dedot: true
templates:
- condition:
contains:
container.labels.collect_logs_with_filebeat: "true"
config:
- type: container
format: docker
paths:
- "/var/lib/docker/containers/${data.docker.container.id}/*.log"
processors:
- decode_json_fields:
when.equals:
docker.container.labels.decode_log_event_to_json_object: "true"
fields: ["message"]
target: ""
overwrite_keys: true
output.logstash:
hosts: "logstash:5044"
```
The above configuration uses a single processor. If needed, we could add more processors, which would be _chained_ and executed in the order they are defined in the configuration file. Each processor receives an event, applies a defined action to it, and the processed event becomes the input of the next processor until the end of the chain.
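For example, a second processor could be appended to the list — here a sketch using Filebeat's `drop_fields` processor; the dropped field is chosen arbitrarily for illustration:

```yaml
processors:
  - decode_json_fields:
      when.equals:
        docker.container.labels.decode_log_event_to_json_object: "true"
      fields: ["message"]
      target: ""
      overwrite_keys: true
  # A hypothetical second processor in the chain: drop a field we don't need downstream.
  - drop_fields:
      fields: ["docker.container.labels.collect_logs_with_filebeat"]
```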
Once the log event is collected and processed by Filebeat, it is sent to Logstash, which provides a rich set of plugins for further processing the events.
The Logstash pipeline has two required elements, `input` and `output`, and one optional element, `filter`. The [input plugins][logstash.input-plugins] consume data from a source, the [filter plugins][logstash.filter-plugins] modify the data as we specify, and the [output plugins][logstash.output-plugins] write the data to a destination.
![Logstash pipeline][img.logstash-pipeline]
In the [`logstash.conf`][repo.logstash.conf] file, Logstash is configured to:
- Receive events coming from Beats in the port `5044`
- Process the events by adding the tag `logstash_filter_applied`
- Send the processed events to Elasticsearch which runs on the port `9200`
```
input {
beats {
port => 5044
}
}
filter {
mutate {
add_tag => [ "logstash_filter_applied" ]
}
}
output {
elasticsearch {
hosts => "elasticsearch:9200"
}
}
```
Elasticsearch will store and index the log events and, finally, we will be able to visualize the logs in Kibana, which exposes a UI in the port `5601`.
## Example
For this example, let's consider we are creating a blog engine and we have the following microservices:
- _Post service_: Manages details related to posts.
- _Comment service_: Manages details related to the comments of each post.
Each microservice is a Spring Boot application exposing an HTTP API. As we intend to focus on _log aggregation_, let's keep the services architecture simple: one service will simply invoke the other service directly.
And, for demonstration purposes, all data handled by the services is stored in memory and only `GET` requests are supported. When a representation of a post is requested, the post service will perform a `GET` request to the comment service to get a representation of the comments for that post. The post service will aggregate the results and return a representation of the post with comments to the client.
![Post and comment services][img.services]
Let's see how to build the source code, spin up the Docker containers, produce some log data and then visualize the logs in Kibana.
Before starting, ensure you have at least Java 11, Maven 3.x and Docker set up. Then clone the [repository][repo] from GitHub:
```bash
git clone https://github.com/cassiomolin/log-aggregation-spring-boot-elastic-stack.git
```
### Building the applications and creating Docker images
Both post and comment services use the [`dockerfile-maven`][dockerfile-maven] plugin from Spotify to integrate the Docker build process with the Maven build process. So when we build a Spring Boot artifact, we'll also build a Docker image for it. For more details, check the `Dockerfile` and the `pom.xml` of each service.
To build the Spring Boot applications and their Docker images:
- Change to the `comment-service` folder: `cd comment-service`
- Build the application and create a Docker image: `mvn clean install`
- Change back to the parent folder: `cd ..`
- Change to the `post-service` folder: `cd post-service`
- Build the application and create a Docker image: `mvn clean install`
- Change back to the parent folder: `cd ..`
### Spinning up the containers
In the root folder of our project, where the `docker-compose.yml` resides, spin up the Docker containers by running `docker-compose up`.
### Visualizing logs in Kibana
- Open Kibana in your favourite browser: `http://localhost:5601`. When attempting to access Kibana while it's starting, a message saying that Kibana is not ready yet will be displayed in the browser. Enhance your calm, give it a minute or two and then you are good to go.
- The first time you access Kibana, a welcome page will be displayed. Kibana comes with sample data in case we want to play with it. To explore the data generated by our applications, click the _Explore on my own_ link.
![Welcome page][img.screenshot-01]
- On the left hand side, click the _Discover_ icon.
![Home][img.screenshot-02]
- Kibana uses _index patterns_ for retrieving data from Elasticsearch. As it's the first time we are using Kibana, we must create an index pattern to explore our data. You should see an index that has been created by Logstash. To create a pattern matching the Logstash indexes, enter `logstash-*` and then click the _Next step_ button.
![Creating index pattern][img.screenshot-03]
- Then pick a field for filtering the data by time. Choose `@timestamp` and click the _Create index pattern_ button.
![Picking a field for filtering data by time][img.screenshot-04]
- The index pattern will be created. Click the _Discover_ icon again and the log events produced during the startup of both post and comment services will be shown:
![Viewing the log events][img.screenshot-05]
- To filter log events from the post service, for example, enter `application_name : "post-service"` in the search box. Click the _Update_ button and now you'll see log events from the post service only.
![Filtering logs by application name][img.screenshot-06]
- Clear the filter input and click the _Update_ button to view all logs.
- Perform a `GET` request to `http://localhost:8001/posts/1` to generate some log data. Wait a few seconds and then click the _Refresh_ button. You will be able to see logs from the requests. The logs will contain tracing details, such as _trace.trace_id_ and _trace.span_id_.
- On the left-hand side, there's a list of fields available. Hover over the list of fields and an _Add_ button will be shown for each field. Add a few fields such as `application_name`, `trace.trace_id`, `trace.span_id` and `message`.
- Now let's see how to trace a request. Pick a trace id from the logs and, in the filter box, input `trace.trace_id: "<value>"` where `<value>` is the trace id you want to use as filter criteria. Then click the _Update_ button and you will be able to see the logs that match that trace id.
- As can be seen in the image below, the trace id is the same for the entire operation, which started in the post service. And the log events resulting from the call to the comment service have been assigned a different span id.
![Filtering logs by trace id][img.screenshot-07]
To stop the containers, use `docker-compose down`. It's important to highlight that both the Elasticsearch indices and the Filebeat tracking data are stored in the host, under the `./elasticsearch/data` and `./filebeat/data` folders. This means that, even if you destroy the containers, the data will not be lost.
[img.services]: /misc/img/diagrams/services.png
[img.elastic-stack]: /misc/img/diagrams/elastic-stack.png
[img.elastic-stack-docker]: /misc/img/diagrams/services-and-elastic-stack.png
[img.logstash-pipeline]: /misc/img/diagrams/logstash-pipeline.png
[img.screenshot-01]: /misc/img/screenshots/01.png
[img.screenshot-02]: /misc/img/screenshots/02.png
[img.screenshot-03]: /misc/img/screenshots/03.png
[img.screenshot-04]: /misc/img/screenshots/04.png
[img.screenshot-05]: /misc/img/screenshots/05.png
[img.screenshot-06]: /misc/img/screenshots/06.png
[img.screenshot-07]: /misc/img/screenshots/07.png
[12factor]: https://12factor.net
[12factor.logs]: https://12factor.net/logs
[spring-boot.configure-logback]: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-logging.html#howto-configure-logback-for-logging
[spring-cloud-sleuth]: https://spring.io/projects/spring-cloud-sleuth
[dockerfile-maven]: https://github.com/spotify/dockerfile-maven
[slf4j]: https://www.slf4j.org/
[slf4j.manual]: https://www.slf4j.org/manual.html
[logback]: https://logback.qos.ch/
[logstash-logback-encoder]: https://github.com/logstash/logstash-logback-encoder
[logstash-logback-encoder.standard-fields]: https://github.com/logstash/logstash-logback-encoder#standard-fields
[logstash-logback-encoder.providers-for-loggingevents]: https://github.com/logstash/logstash-logback-encoder#providers-for-loggingevents
[repo]: https://github.com/cassiomolin/log-aggregation-spring-boot-elastic-stack
[repo.docker-compose.yml]: https://github.com/cassiomolin/log-aggregation-spring-boot-elastic-stack/blob/master/docker-compose.yml
[repo.logstash.conf]: https://github.com/cassiomolin/log-aggregation-spring-boot-elastic-stack/blob/master/logstash/pipeline/logstash.conf
[repo.filebeat.docker.yml]: https://github.com/cassiomolin/log-aggregation-spring-boot-elastic-stack/blob/master/filebeat/filebeat.docker.yml
[elk-stack]: https://www.elastic.co/elk-stack
[elasticsearch]: https://www.elastic.co/products/elasticsearch
[logstash]: https://www.elastic.co/products/logstash
[logstash.input-plugins]: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
[logstash.filter-plugins]: https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
[logstash.output-plugins]: https://www.elastic.co/guide/en/logstash/current/output-plugins.html
[kibana]: https://www.elastic.co/products/kibana
[beats]: https://www.elastic.co/products/beats
[filebeat]: https://www.elastic.co/products/beats/filebeat
[filebeat.autodiscover]: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html
[metricbeat]: https://www.elastic.co/products/beats/metricbeat
[packetbeat]: https://www.elastic.co/products/beats/packetbeat
[heartbeat]: https://www.elastic.co/products/beats/heartbeat
[docker]: https://docs.docker.com/
[docker.json-file-logging-driver]: https://docs.docker.com/config/containers/logging/json-file/
[docker-compose]: https://docs.docker.com/compose/
[slf4j.mdc]: https://www.slf4j.org/manual.html#mdc
[lombok]: https://projectlombok.org/
[lombok.slf4j]: https://projectlombok.org/api/lombok/extern/slf4j/Slf4j.html
[org.slf4j.Logger]: https://www.slf4j.org/api/org/slf4j/Logger.html
[org.slf4j.LoggerFactory]: https://www.slf4j.org/api/org/slf4j/LoggerFactory.html | 1 |
phishman3579/Bitcoin | An example Bitcoin implementation which can be used to learn about Bitcoin/Blockchain. This implementations is for educational use only! | blockchain java | # Bitcoin
An example Bitcoin implementation which can be used to learn about Bitcoin/Blockchain. This implementation is for educational use only.
# Overview
## Wallet
The Wallet is how peers interact with the Bitcoin peer-to-peer network. The Wallet generates a public key and a private key, which it uses to sign each Transaction. The public key is the send-to address used by the Bitcoin network. Each Wallet has the ability to send coins from your account to another account, and it also has the ability to confirm Transactions (except its own) which it receives from the Bitcoin peer-to-peer network.
```
Wallet {
sendCoin(entity, value); // Creates a new Transaction
    handleTransaction(Transaction); // Receives an unconfirmed Transaction
handleConfirmation(Transaction); // Receives a confirmed Transaction and adds to blockchain
}
```
## Transaction
Transactions are just a collection of input transactions, output transactions, a value, and a signature.
```
Transaction {
byte[] header;
Transaction[] inputs;
Transaction[] outputs;
long value;
byte[] signature;
}
```
See the [Transaction Class](https://github.com/phishman3579/Bitcoin/blob/master/src/com/jwetherell/bitcoin/data_model/Transaction.java) for reference.
#### The Wallet also has a number of Transaction rules:
* Once a Transaction has been used as an input, it cannot be used again.
* All inputs on a Transaction have to be completely consumed.
Note: To send a Bitcoin transaction, you have to already own Bitcoin. An initial Bitcoin is usually acquired by trading something for a number of Bitcoins. One caveat of having to own a Bitcoin to make a transaction is the very first transaction. The first transaction is called the genesis transaction; it is the only transaction which does not need input transactions.
### An example Transaction
If Justin wants to send 6 coins to George:
Ledger:
| Justin's unused Transactions | George's unused Transaction |
| ----------------------------- | ----------------------------- |
| Transaction #1 : 5 Coins | |
| Transaction #2 : 3 Coins | |
| Transaction #3 : 7 Coins | |
```
Aggregate Transaction #4 {
byte[] header "6 coins for George and 2 coins to Justin"
Transaction[] input { Transaction #1, Transaction #2 }
Transaction[] output { Transaction #5, Transaction #6 }
    long value 0
byte[] signature "Justin's signature based on the Header"
}
```
Note: The 'value' on the Aggregate Transaction (#4) is a reward for anyone who confirms the Transaction. The higher the reward, the better chance the Transaction will be processed quicker.
```
Transaction #5 {
byte[] header "2 coins to Justin"
Transaction[] input { Transaction #1, Transaction #2 }
Transaction[] output { }
    long value 2
byte[] signature "Justin's signature based on the Header"
}
Transaction #6 {
byte[] header "6 coins for George"
Transaction[] input { Transaction #1, Transaction #2 }
Transaction[] output { }
    long value 6
byte[] signature "Justin's signature based on the Header"
}
```
The Aggregate Transaction (#4) will remove Transaction #1 and #2 from Justin's unused Transactions. Since the total of all inputs is 8 coins, which is 2 more than what Justin wants to send to George, the output will contain a Transaction which sends 2 coins back to Justin.
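The change computation described above is simple arithmetic; here is a toy sketch of it (illustrative only, not the repository's actual code — class and method names are made up):

```java
// Illustrative only: a toy version of the change computation described above.
public class ChangeSketch {

    // Sum the selected input transactions and return the change owed back to the sender.
    static long change(long[] inputValues, long amountToSend) {
        long total = 0;
        for (long v : inputValues) total += v;
        if (total < amountToSend) throw new IllegalArgumentException("insufficient inputs");
        return total - amountToSend;
    }

    public static void main(String[] args) {
        // Transactions #1 (5 coins) and #2 (3 coins) fund a 6-coin payment: 2 coins come back.
        System.out.println(change(new long[] {5, 3}, 6)); // prints 2
    }
}
```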
The Wallet will use its private key to sign the Header of the Aggregate Transaction (#4) and it will also sign each of the output Transactions (#5 & #6). It will then send Transaction #4 to the Bitcoin network for confirmation.
Each peer on the Bitcoin network will receive the Transaction and try to confirm it.
To confirm a Transaction, a Peer will:
* Check the Signature of Transaction against the public key of the sender.
If it passes:
* Send the confirmed Transaction to the Bitcoin network.
## Block
The confirmed Transaction (#4) is added to a pool of confirmed Transactions. Peers (also called Miners) will gather confirmed Transactions from the pool and put them into a Block. A Block contains a number of confirmed Transactions, the Miner's signature, and a couple of other fields used for "Proof of work" processing.
```
Block {
Transaction[] transactions
int nonce
int zeros
byte[] previousHash
byte[] nextHash
byte[] signature
}
```
See the [Block Class](https://github.com/phishman3579/Bitcoin/blob/master/src/com/jwetherell/bitcoin/data_model/Block.java) for reference.
Miners will create a single 'block hash' from all the confirmed Transactions in the Block. They will then go through the process of "Proof of work". The goal of the "Proof of work" is to create a hash which begins with a given number of zeros (see the 'zeros' field). "Proof of work" is designed to be processor intensive, which adds randomness to the time it takes to process a Block. A Miner will take the 'block hash' and append a random integer (called a 'nonce') to it. It will then create a new hash from 'block hash + nonce' and see if it satisfies the "Proof of work"; this process repeats until it finds a 'nonce' which satisfies the "Proof of work".
See the [Proof of work](https://github.com/phishman3579/Bitcoin/blob/master/src/com/jwetherell/bitcoin/ProofOfWork.java) for reference.
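The search loop can be sketched as follows (a simplified illustration, not the repository's ProofOfWork class — names and the leading-hex-zeros criterion are assumptions for the demo):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative only: brute-force a nonce until SHA-256(blockHash + nonce)
// starts with the required number of hex zeros.
public class ProofOfWorkSketch {

    static long mine(String blockHash, int zeros) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        String target = "0".repeat(zeros);
        for (long nonce = 0; ; nonce++) {
            byte[] digest = sha.digest((blockHash + nonce).getBytes(StandardCharsets.UTF_8));
            if (toHex(digest).startsWith(target)) return nonce; // "Proof of work" satisfied
        }
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Low difficulty (2 leading zeros) so the demo finishes instantly.
        System.out.println("nonce = " + mine("example-block-hash", 2));
    }
}
```

Raising `zeros` makes each additional zero roughly 16 times harder to find, which is how difficulty is tuned.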
Once a Miner finds a 'nonce' which satisfies the "Proof of work", it will:
* Create another hash (see 'nextHash') using the Blockchain's current hash (see 'previousHash') and the 'block hash'
* Send the Block to the Bitcoin network.
```
Block #1 {
Transaction[] transactions { Transaction #4 }
int nonce 453;
int zeros 3;
byte[] previousHash "Blockchain hash #1";
byte[] nextHash "Blockchain hash #2";
byte[] signature "Miner's signature";
}
```
Peers on the Bitcoin network will receive the Block and start confirming it.
To confirm the Block, a Peer will:
* Make sure the 'nonce' satisfies the "Proof of work"
* Check the Block's signature
* Check the signature of each Transaction in the Block.
If everything passes:
* Add the Block to its Blockchain.
* Send the confirmed Block to the Bitcoin network
## Blockchain
The Blockchain is a simple structure which contains a list of confirmed Blocks, a list of Transactions in chronological order, a list of unused Transactions, and the current hash.
Note: all transactions in the same block are said to have happened at the same time.
```
Blockchain {
List<Block> blockchain
List<Transactions> transactions
List<Transaction> unused
byte[] currentHash
}
```
See the [Blockchain](https://github.com/phishman3579/Bitcoin/blob/master/src/com/jwetherell/bitcoin/BlockChain.java) for reference.
When the Peer adds the Block to the Blockchain, the Blockchain will:
* Check to see if the 'previousHash' from the Block matches its 'currentHash'
* Check to see if the input Transactions from all the Transactions in the Block are 'unused'
If everything passes:
* The Block is added to the 'blockchain' list
* The Transaction is added to the 'transactions' list
* All 'input' transactions are removed from the 'unused' list
* All the 'output' transactions are added to the 'unused' list
* The 'currentHash' is updated to 'nextHash' from the current Block.
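The bookkeeping steps above can be sketched as follows (an illustrative toy, not the repository's BlockChain class — all names are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a toy of the Blockchain bookkeeping described above.
public class BlockchainSketch {

    static class Tx {
        List<Tx> inputs = new ArrayList<>();
        List<Tx> outputs = new ArrayList<>();
    }

    static class Block {
        List<Tx> transactions = new ArrayList<>();
        String previousHash;
        String nextHash;
    }

    List<Block> blockchain = new ArrayList<>();
    List<Tx> transactions = new ArrayList<>();
    List<Tx> unused = new ArrayList<>();
    String currentHash;

    boolean addBlock(Block b) {
        if (!b.previousHash.equals(currentHash)) return false; // hashes must chain
        for (Tx t : b.transactions)
            if (!unused.containsAll(t.inputs)) return false;   // inputs must be unspent
        blockchain.add(b);
        for (Tx t : b.transactions) {
            transactions.add(t);
            unused.removeAll(t.inputs);                        // spend the inputs
            unused.addAll(t.outputs);                          // outputs become spendable
        }
        currentHash = b.nextHash;                              // advance the chain hash
        return true;
    }

    public static void main(String[] args) {
        BlockchainSketch chain = new BlockchainSketch();
        chain.currentHash = "Blockchain hash #1";

        Block b = new Block();
        b.previousHash = "Blockchain hash #1";
        b.nextHash = "Blockchain hash #2";
        b.transactions.add(new Tx()); // a transaction with no inputs (like the genesis one)

        System.out.println(chain.addBlock(b));  // prints true
        System.out.println(chain.currentHash);  // prints Blockchain hash #2
    }
}
```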
```
Blockchain {
List<Block> blockchain { Block #0 }
List<Transactions> transactions { Transaction #0 }
List<Transaction> unused { Transaction #1, Transaction #2, Transaction #3 }
byte[] currentHash "Blockchain hash #1"
}
```
The updated Blockchain:
```
Blockchain {
List<Block> blockchain { Block #0, Block #1 };
List<Transactions> transactions { Transaction #0, Transaction #4 }
List<Transaction> unused { Transaction #3, Transaction #5, Transaction #6 }
byte[] currentHash "Blockchain hash #2"
}
```
Ledger:
| Justin's unused Transactions | George's unused Transaction |
| ----------------------------- | ----------------------------- |
| Transaction #3 : 7 Coins | Transaction #6 : 6 Coins |
| Transaction #5 : 2 Coins | |
| | |
Based on [1](http://www.michaelnielsen.org/ddi/how-the-bitcoin-protocol-actually-works/) and [2](http://www.imponderablethings.com/2013/07/how-bitcoin-works-under-hood.html)
Also see the [original paper](https://bitcoin.org/bitcoin.pdf)
| 1 |
AndroidExamples/SwipeRefreshLayout-ListViewExample | Example SwipeRefreshLayout with ListView-EmptyView combination. | null | SwipeRefreshLayout-ListViewExample
==================================
SwipeRefreshLayout example to animate the refreshing of a ListView.
V. 0.1: Support for emptyView
| 1 |
jaxio/jpa-query-by-example | The JPA Query by Example framework is used by projects generated by Celerio. | null | ## JPA Query by Example framework
[](https://travis-ci.org/jaxio/jpa-query-by-example)
Query By Example for JPA was originally inspired by [Hibernate Example criterion](https://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/querycriteria.html#querycriteria-examples). But since Hibernate's Example is not part of JPA 2, we have created our own API, using JPA 2 only.
## How To Use QBE
We do not cover here QBE implementation details, instead we explain how to use the Query By Example API.
JPA Query by Example is available on Maven central repository:
```xml
<dependency>
<groupId>com.jaxio</groupId>
<artifactId>jpa-querybyexample</artifactId>
<version>1.0.1</version>
</dependency>
```
#### Resources
* Take a look directly at the [QBE junit tests](https://github.com/jaxio/jpa-query-by-example/blob/master/src/test/java/demo), they are almost self-explanatory.
* Use Celerio to generate an advanced CRUD application that leverages this QBE API. See [Celerio](http://www.jaxio.com/documentation/celerio/installation.html)
* [Watch a demo of an application generated by Celerio](https://www.facebook.com/video/video.php?v=524162864265905¬if_t=video_processed)
### Simple Query By Example
In its simplest form, Query By Example allows you to construct a query from a given entity instance.
Let's assume we have an [Account entity](https://github.com/jaxio/jpa-query-by-example/blob/master/src/test/java/demo/Account.java)
having a `lastName` property and that we want to query all accounts whose last name matches 'Jagger'.
Using QBE, constructing the query is as simple as setting the lastName:
```java
Account example = new Account();
example.setLastName("Jagger");
List<Account> result = accountRepository.find(example);
```
At the SQL level, the resulting query looks like this:
```sql
select
-- skip other fields for clarity
account0_.LAST_NAME as LAST9_3_,
from
Account account0_
where
account0_.LAST_NAME=?
```
The [AccountRepository](https://github.com/jaxio/jpa-query-by-example/blob/master/src/test/java/demo/AccountRepository.java) extends a [GenericRepository](https://github.com/jaxio/jpa-query-by-example/blob/master/src/main/java/com/jaxio/jpa/querybyexample/GenericRepository.java).
#### Case sensitivity, order by
The first query above involves a String. Let's change it to make it case insensitive.
Our `Account` entity does not carry case sensitivity meta information. For this reason, we require some extra parameters
for case sensitivity, but also ordering, etc.
The number of parameters can grow quickly, so we have grouped them in the
[SearchParameters](https://github.com/jaxio/jpa-query-by-example/blob/master/src/main/java/com/jaxio/jpa/querybyexample/SearchParameters.java) class
which can be passed as a parameter to the accountRepository's methods.
Let's make the first query above `case insensitive` and let's add an `ORDER BY`.
```java
Account example = new Account();
example.setLastName("Jagger");
SearchParameters sp = new SearchParameters().caseInsensitive().orderBy(OrderByDirection.ASC, Account_.lastName);
List<Account> result = accountRepository.find(example, sp);
```
Note the usage of the [Account_](https://github.com/jaxio/jpa-query-by-example/blob/master/src/test/java/demo/Account_.java) static metamodel, which helps you keep your query-related Java code strongly typed.
At the SQL level, the resulting FROM clause now looks like this:
```sql
from
ACCOUNT account0_
where
lower(account0_.LAST_NAME)=?
order by
account0_.LAST_NAME asc
```
#### Pagination
In most web applications we need to paginate the query results in order to save resources. In the query below, we retrieve only the 3rd page (we assume a page lists 25 rows): the first result is the 50th element and we retrieve at most 25 elements.
```java
Account example = new Account();
example.setLastName("Jagger");
SearchParameters sp = new SearchParameters().orderBy(OrderByDirection.ASC, Account_.lastName) //
.first(50).maxResults(25);
List<Account> result = accountRepository.find(example, sp);
```
At the SQL level, the resulting FROM clause now looks like this (we use H2 database):
```sql
from
ACCOUNT account0_
where
account0_.LAST_NAME=?
order by
account0_.LAST_NAME asc limit ? offset ?
```
#### LIKE and String
For strings, you can globally control whether a `LIKE` should be used and where the `%` wildcard should be placed. For example, adding :
```java
example.setLastName("Jag");
SearchParameters sp = new SearchParameters().startingLike();
```
to our example above would result in
```sql
account0_.LAST_NAME LIKE 'Jag%'
```
#### Multiple criteria
Until now, we have worked only with one property, lastName, but we can set other properties, for example:
```java
Account example = new Account();
example.setLastName("Jag");
example.setBirthDate(new Date());
SearchParameters sp = new SearchParameters().orderBy(OrderByDirection.ASC, Account_.lastName).startingLike();
List<Account> result = accountRepository.find(example, sp);
```
By default, the FROM clause uses an `AND` predicate.
```sql
from
ACCOUNT account0_
where
account0_.BIRTH_DATE=?
and (
account0_.LAST_NAME like ?
)
order by
account0_.LAST_NAME asc
```
To use `OR` instead, use `.orMode()`, as follows:
```java
SearchParameters sp = new SearchParameters().orMode().orderBy(OrderByDirection.ASC, Account_.lastName).startingLike();
```
And this time we get:
```sql
where
account0_.LAST_NAME like ?
or account0_.BIRTH_DATE=?
order by
account0_.LAST_NAME asc
```
#### Is that all ?
Not really, we have just scratched the surface. For the moment, we have covered only rather simple queries.
While simplicity is key, it is often not sufficient. What about date or number range queries ? What about associated entities ? etc.
### Beyond Query By Example
#### Mixing Query by Example and Range Query.
Now, let's imagine that you also want to restrict the query above to all accounts having their date of birth between 1940 and 1945 inclusive.
Of course, the entity does not have the appropriate property (from & to).
For this reason, we introduce an additional
[Range](https://github.com/jaxio/jpa-query-by-example/blob/master/src/main/java/com/jaxio/jpa/querybyexample/Range.java)
parameter.
Here is an example:
```java
Account example = new Account();
example.setLastName("Jagger");
Calendar from = Calendar.getInstance();
from.set(1940, 0, 1);
Calendar to = Calendar.getInstance();
to.set(1945, 11, 31);
Range<Account, Date> birthDateRange = Range.newRange(Account_.birthDate);
birthDateRange.from(from.getTime()).to(to.getTime());
SearchParameters sp = new SearchParameters().range(birthDateRange);
List<Account> result = accountRepository.find(example, sp);
```
Note that you can add ranges of any type: Integer, Long, LocalDate (joda time), BigDecimal, etc...
This code ultimately leads to the following `FROM` clause:
```sql
from
ACCOUNT account0_
where
(
account0_.BIRTH_DATE between ? and ?
)
and account0_.LAST_NAME=?
```
Here is a variation of the same example (depending on need, taste and color :-):
```java
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
Date from = dateFormat.parse("1920-12-01");
Date to = dateFormat.parse("1974-12-01");
SearchParameters sp = new SearchParameters().range(from, to, Account_.birthDate);
List<Account> accountList = accountRepository.find(sp);
```
#### Query all string properties in a OR clause
To find all entities having at least one of their String property matching a given value, use the `searchPattern` method.
Here is an example:
```java
SearchParameters sp = new SearchParameters().searchMode(SearchMode.STARTING_LIKE).searchPattern("Jag");
List<Account> result = accountRepository.find(sp);
```
The FROM clause now includes all string columns:
```sql
from
ACCOUNT account0_
where
    account0_.LAST_NAME like ?
or account0_.USERNAME like ?
```
#### Property Selector
In order to construct an `OR` clause for a given property, we use the `PropertySelector` class.
Here is an example:
```java
PropertySelector<Account, String> lastNameSelector = PropertySelector.newPropertySelector(Account_.lastName);
lastNameSelector.setSelected(Arrays.asList("Jagger", "Richards", "Jones", "Watts", "taylor", "Wyman", "Wood"));
SearchParameters sp = new SearchParameters().property(lastNameSelector);
List<Account> result = accountRepository.find(sp);
```
Here is the corresponding FROM clause:
```sql
from
ACCOUNT account0_
where
account0_.LAST_NAME='Jagger'
or account0_.LAST_NAME='Richards'
or account0_.LAST_NAME='Jones'
or account0_.LAST_NAME='Watts'
or account0_.LAST_NAME='Taylor'
or account0_.LAST_NAME='Wyman'
or account0_.LAST_NAME='Wood'
```
Note that if you use JSF2 with PrimeFaces, you can directly pass a `PropertySelector` to a multiple autoComplete component's value property.
This way, the autoComplete component fills the PropertySelector. Here is how:
```xml
<p:autoComplete ... multiple="true" value="#{accountSearchForm.lastNameSelector.selected}" ... />
```
Here is a snapshot:

PrimeFaces uses the `setSelected(List<String> selection)` method to fill the lastNameSelector.
#### Mix it all
Remember, you can mix everything we have seen so far.
A single query can combine multiple ranges, multiple property selectors, multiple properties set on the example entity, etc.
This gives you great power ;-)
#### Query By Example on association
The `Account` entity has a `@ManyToOne` association with the `Address` entity.
Here is how we can retrieve all accounts pointing to an Address having its `city` property set to "Paris":
```java
Account example = new Account();
example.setHomeAddress(new Address());
example.getHomeAddress().setCity("Paris");
List<Account> result = accountRepository.find(example);
Assert.assertThat(result.size(), is(2));
```
The FROM clause uses a JOIN:
```sql
from
ACCOUNT account0_ cross
join
ADDRESS address1_
where
account0_.ADDRESS_ID=address1_.ID
and address1_.CITY='Paris'
```
Enjoy!
## License
The JPA Query By Example Framework is released under version 2.0 of the [Apache License][].
[Apache License]: http://www.apache.org/licenses/LICENSE-2.0
| 1 |
FISCO-BCOS/spring-boot-starter | An example to help users use java-sdk(master branch) and web3sdk(master-web3sdk branch) with Spring Boot | null | # spring-boot-starter
This sample project demonstrates calling smart contracts with the Java SDK + Gradle + Spring Boot.
## Prerequisites
Set up a FISCO BCOS single-group blockchain (Air version); for the detailed steps, [see here](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/tutorial/air/build_chain.html).
## Download spring-boot-starter and copy the certificates
```shell
git clone https://github.com/FISCO-BCOS/spring-boot-starter.git
```
Enter the spring-boot-starter project:
```shell
cd spring-boot-starter
```
Copy the certificates into the src/main/resources/conf directory.
## Configure the node connection
Edit application.properties, which contains the following settings:
```properties
### Java sdk configuration
cryptoMaterial.certPath=conf
network.peers[0]=127.0.0.1:20200
#network.peers[1]=127.0.0.1:20201
### System configuration
system.groupId=group0
system.hexPrivateKey=
### Springboot configuration
server.port=8080
```
Notes:
- The Java SDK configuration section is the same as described in the [Java SDK](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/sdk/java_sdk/config.html) documentation. For this example, you need to:
  - Replace network.peers with the actual listening addresses of your chain nodes.
  - Set cryptoMaterial.certPath to conf.
- In the System configuration section, you need to configure:
  - system.hexPrivateKey, the hex-encoded plaintext private key, which can be generated by running `keyGeneration` in Demos.java (file path: src/test/java/org/example/demo/Demos.java). It may be left empty, in which case the system generates a random private key.
  - system.groupId, the target group, which defaults to group0.
The code of Demos.java is as follows (**refer to the latest file in the project**):
```java
package org.example.demo;
import java.util.Arrays;
import org.example.demo.constants.ContractConstants;
import org.fisco.bcos.sdk.client.Client;
import org.fisco.bcos.sdk.crypto.keypair.CryptoKeyPair;
import org.fisco.bcos.sdk.crypto.keypair.ECDSAKeyPair;
import org.fisco.bcos.sdk.crypto.keypair.SM2KeyPair;
import org.fisco.bcos.sdk.model.TransactionReceipt;
import org.fisco.bcos.sdk.transaction.manager.AssembleTransactionProcessor;
import org.fisco.bcos.sdk.transaction.manager.TransactionProcessorFactory;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
@SpringBootTest
@RunWith(SpringRunner.class)
public class Demos {
@Autowired private Client client;
@Test
public void keyGeneration() throws Exception {
// ECDSA key generation
CryptoKeyPair ecdsaKeyPair = new ECDSAKeyPair().generateKeyPair();
System.out.println("ecdsa private key :" + ecdsaKeyPair.getHexPrivateKey());
System.out.println("ecdsa public key :" + ecdsaKeyPair.getHexPublicKey());
System.out.println("ecdsa address :" + ecdsaKeyPair.getAddress());
// SM2 key generation
CryptoKeyPair sm2KeyPair = new SM2KeyPair().generateKeyPair();
System.out.println("sm2 private key :" + sm2KeyPair.getHexPrivateKey());
System.out.println("sm2 public key :" + sm2KeyPair.getHexPublicKey());
System.out.println("sm2 address :" + sm2KeyPair.getAddress());
}
@Test
public void deploy() throws Exception {
AssembleTransactionProcessor txProcessor =
TransactionProcessorFactory.createAssembleTransactionProcessor(
client, client.getCryptoSuite().getCryptoKeyPair());
String abi = ContractConstants.HelloWorldAbi;
String bin = ContractConstants.HelloWorldBinary;
TransactionReceipt receipt =
txProcessor.deployAndGetResponse(abi, bin, Arrays.asList()).getTransactionReceipt();
if (receipt.isStatusOK()) {
System.out.println("Contract Address:" + receipt.getContractAddress());
} else {
System.out.println("Status code:" + receipt.getStatus() + "-" + receipt.getStatusMsg());
}
}
}
```
## Build and run
You can run the project directly in IDEA, or build an executable jar and run that. Taking the jar approach as an example:
```shell
cd spring-boot-starter
bash gradlew bootJar
cd dist
```
This generates spring-boot-starter-exec.jar in the dist directory; run the jar with:
```shell
java -jar spring-boot-starter-exec.jar
```
You can then call the exposed endpoints.
set example:
```shell
curl http://127.0.0.1:8080/hello/set?n=hello
```
Example response (the transaction hash):
```shell
0x1c8b283daef12b38632e8a6b8fe4d798e053feb5128d9eaf2be77c324645763b
```
get example:
```shell
curl http://127.0.0.1:8080/hello/get
```
Example response:
```json
["hello"]
```
## Join our community
The **FISCO BCOS open-source community** is an active open-source community in China that provides long-term support and help to institutional and individual developers. Thousands of technology enthusiasts from many industries are already studying and using FISCO BCOS. If you are interested in FISCO BCOS open-source technology and applications, join the community for more support and help.

## Related links
- FISCO BCOS: [FISCO BCOS documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/introduction.html).
- Java SDK: [Java SDK documentation](https://fisco-bcos-doc.readthedocs.io/zh_CN/latest/docs/develop/sdk/java_sdk/index.html).
- Spring Boot documentation: [Spring Boot](https://spring.io/guides/gs/spring-boot/).
- Maven project example: [maven example](https://github.com/FISCO-BCOS/spring-boot-crud). | 1
ddd-by-examples/cinema | Cinema playground - example repo from reserving seats with different rules | null | # Cinema
Example repo for reserving seats in a cinema for different events with different rules
# Training material
The code is not finished; I use it for refactoring/DDD classes
# Diagram of packages/modules

| 1 |
berndruecker/customer-onboarding-camunda-8-springboot | A simple onboarding process example using BPMN, Camunda Cloud, Java, Spring Boot and REST | null | # Customer Onboarding Process
*Process solution example for customer onboarding as used in the OReilly book [Practical Process Automation](https://processautomationbook.com/).*

This following stack is used:
* Camunda Platform 8
* Java 17
* Spring Boot 3
# Intro
This simple onboarding process is meant to help you get started with process automation, workflow engines, and BPMN.
The process model contains three tasks:
* A service task that executes Java Code to score customers (using the stateless Camunda DMN engine)
* A user task so that humans can approve customer orders (or not)
* A service task that executes glue code to call the REST API of a CRM system
The process solution is a Maven project and contains:
* The onboarding process model as BPMN
* Source code to provide a REST endpoint for clients
* Java code to do the customer scoring
* Glue code to implement the REST call to the CRM system
* Fake for CRM system providing a REST API that can be called (to allow running this example self-contained)
# How To Run
<a href="http://www.youtube.com/watch?feature=player_embedded&v=QUB0dSBBMPM" target="_blank"><img src="http://img.youtube.com/vi/QUB0dSBBMPM/0.jpg" alt="Walkthrough" width="240" height="180" border="10" /></a>
## Create Camunda Platform 8 Cluster
The easiest way to try out Camunda is to create a cluster in the SaaS environment:
* Login to https://camunda.io/ (you can create an account on the fly)
* Create a new cluster
* Create a new set of API client credentials
* Copy the client credentials into `src/main/resources/application.properties`
## Run Spring Boot Java Application
The application will deploy the process model during startup
`mvn package exec:java`
## Play
You can easily use the application by requesting a new customer onboarding with a PUT REST request:
`curl -X PUT http://localhost:8080/customer`
You can now see the process instance in Camunda Operate - linked via the Cloud Console.
You can work on the user task using Camunda Tasklist, also linked via the Cloud Console.
# Extended Process
There is also an extended process model that adds some more tasks in the process:

You can find that in another repository on GitHub: https://github.com/berndruecker/customer-onboarding-camundacloud-springboot-extended | 1 |
bezkoder/spring-boot-login-mongodb | Spring Boot & MongoDB Login and Registration example with JWT, Spring Security, Spring Data MongoDB | authentication authorization cookie http-cookies jwt jwt-auth jwt-authentication jwt-token login mongodb mongodb-database registration spring spring-boot spring-security spring-security-jwt token-based-authentication | # Spring Boot Login and Registration example with MongoDB
Build Spring Boot authentication with an HttpOnly cookie, JWT, Spring Security, and Spring Data MongoDB. You'll learn:
- The appropriate flow for user login and registration with JWT
- Spring Boot REST API architecture with Spring Security
- How to configure Spring Security to work with JWT
- How to define data models and associations for user login and registration
- How to get and generate cookies for the token
- How to use Spring Data MongoDB to interact with a MongoDB database
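To illustrate the HttpOnly-cookie idea used throughout this tutorial, here is a minimal, self-contained sketch using only the JDK's `java.net.HttpCookie`. This is not the tutorial's actual code: the cookie name `bezkoder-jwt`, the path, and the max age are hypothetical values, and a real Spring app would set the cookie via Spring's `ResponseCookie` instead.

```java
import java.net.HttpCookie;

public class JwtCookieSketch {

    // Build a Set-Cookie header value carrying the JWT, marked HttpOnly so
    // that browser JavaScript cannot read it (mitigating XSS token theft).
    public static String buildSetCookieHeader(String jwt) {
        HttpCookie cookie = new HttpCookie("bezkoder-jwt", jwt); // hypothetical name
        cookie.setHttpOnly(true);        // not readable from document.cookie
        cookie.setPath("/api");          // only sent to API endpoints
        cookie.setMaxAge(24 * 60 * 60);  // 24 hours, in seconds
        return cookie.getName() + "=" + cookie.getValue()
                + "; Path=" + cookie.getPath()
                + "; Max-Age=" + cookie.getMaxAge()
                + (cookie.isHttpOnly() ? "; HttpOnly" : "");
    }

    public static void main(String[] args) {
        System.out.println(buildSetCookieHeader("eyJhbGciOi.fake.payload"));
    }
}
```

Because the token travels only in the cookie header, the front-end frameworks listed below never handle the raw JWT directly.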
## User Registration, Login and Authorization process.

## Spring Boot Rest API Architecture with Spring Security
You can have an overview of our Spring Boot Server with the diagram below:

For more detail, please visit:
> [Spring Boot Login and Registration example with MongoDB](https://www.bezkoder.com/spring-boot-mongodb-login-example/)
Working with Front-end:
> [Angular 12](https://www.bezkoder.com/angular-12-jwt-auth-httponly-cookie/) / [Angular 13](https://www.bezkoder.com/angular-13-jwt-auth-httponly-cookie/) / [Angular 14](https://www.bezkoder.com/angular-14-jwt-auth/) / [Angular 15](https://www.bezkoder.com/angular-15-jwt-auth/) / [Angular 16](https://www.bezkoder.com/angular-16-jwt-auth/) / [Angular 17](https://www.bezkoder.com/angular-17-jwt-auth/)
> [React](https://www.bezkoder.com/react-login-example-jwt-hooks/) / [React Redux](https://www.bezkoder.com/redux-toolkit-auth/)
More Practice:
> [Spring Boot with MongoDB CRUD example using Spring Data](https://www.bezkoder.com/spring-boot-mongodb-crud/)
> [Spring Boot MongoDB Pagination & Filter example](https://www.bezkoder.com/spring-boot-mongodb-pagination/)
> [Spring Boot + GraphQL + MongoDB example](https://www.bezkoder.com/spring-boot-graphql-mongodb-example-graphql-java/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Rest Controller Unit Test with @WebMvcTest](https://www.bezkoder.com/spring-boot-webmvctest/)
> Validation: [Spring Boot Validate Request Body](https://www.bezkoder.com/spring-boot-validate-request-body/)
> Documentation: [Spring Boot and Swagger 3 example](https://www.bezkoder.com/spring-boot-swagger-3/)
> Caching: [Spring Boot Redis Cache example](https://www.bezkoder.com/spring-boot-redis-cache-example/)
Fullstack:
> [Vue.js + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-vue-mongodb/)
> [Angular 8 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-spring-boot-mongodb/)
> [Angular 10 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-10-spring-boot-mongodb/)
> [Angular 11 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-11-spring-boot-mongodb/)
> [Angular 12 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-12-spring-boot-mongodb/)
> [Angular 13 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-13-spring-boot-mongodb/)
> [Angular 14 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-14-mongodb/)
> [Angular 15 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-15-mongodb/)
> [Angular 16 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-16-mongodb/)
> [Angular 17 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-17-mongodb/)
> [React + Spring Boot + MongoDB example](https://www.bezkoder.com/react-spring-boot-mongodb/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://www.bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React with Spring Boot Rest API](https://www.bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue with Spring Boot Rest API](https://www.bezkoder.com/integrate-vue-spring-boot/)
## Run Spring Boot application
```
mvn spring-boot:run
```
| 1 |
mouse0w0/forge-mixin-example | An example for using Mixin in Minecraft Forge 1.12.2 & 1.8.9 | minecraft minecraft-forge minecraft-forge-mod minecraft-mod mixin-framework mixins | # forge-mixin-example
An example for using Mixin in Minecraft Forge
| 1 |
aws-samples/aws-quarkus-demo | Quarkus example projects for Amazon ECS and Amazon EKS with AWS Fargate and AWS Lambda | amazon-ecs aws-fargate aws-lambda cdk eks quarkus quarkusio sam | # Quarkus example projects for Amazon ECS with AWS Fargate, Amazon EKS with AWS Fargate, and AWS Lambda
This repository contains different examples how [Quarkus](https://quarkus.io) can be used in combination with different AWS services:
* [Amazon ECS](https://aws.amazon.com/ecs/) with [AWS Fargate](https://aws.amazon.com/fargate/)
* [Amazon EKS](https://aws.amazon.com/eks/) with [AWS Fargate](https://aws.amazon.com/fargate/)
* [AWS Lambda](https://aws.amazon.com/lambda/)
Quarkus is "`A Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards.`"
In the examples in this repository, two different approaches are used: a JVM-based build (with an uber-jar) and a native image created using SubstrateVM.
For the [containers example](fargate) we use Amazon ECS and Amazon EKS with AWS Fargate as base infrastructure which is created using [AWS CDK](https://github.com/aws/aws-cdk). AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers; this way you no longer have to provision, configure, or scale groups of virtual machines to run containers.
The [second example](lambda) uses AWS Lambda and [AWS SAM](https://github.com/awslabs/serverless-application-model). SAM is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings.
## Contributing
Please create a new GitHub issue for any feature requests, bugs, or documentation improvements.
Where possible, please also submit a pull request for the change.
| 1 |
fqdeng/algorithm | leetcode some example code | null | # A collection of common Java interview algorithm problems
## How to use this repository
* Summaries and tips are written in this README; most of the details and comments live next to the code, following the principle that knowledge should stay as close to the code as possible.
* Package names contain the problem names, and every class is named Solution for LeetCode compatibility; usually a package contains a single Solution class.
* It is recommended to clone the repository and read it in IDEA together with this README.
## Thoughts on algorithm interviews
* Having read many solution write-ups recently, I feel the big Chinese tech companies have caught Silicon Valley's whiteboard-coding fever and piled on algorithm interview questions. Here I share some of my problem-solving experience. To be clear, I am a beginner myself: my algorithm skills are limited to easy-to-medium topics such as binary search, quicksort, graph search, spanning trees, and skip lists, which is just enough to handle routine interview questions.
## What to aim for when preparing for algorithm interviews
* For Java developer positions requiring 3-5 years of experience, algorithm interviews mostly focus on applying and understanding the basic data structures: ArrayList, LinkedList, Stack, Queue, Map. I do not recommend reading the JDK source code for this; the real implementations contain many production refinements and a great deal of detail. For example, in JDK 8 a HashMap bucket turns its linked list into a red-black tree once it reaches 8 hash collisions. At the theory-learning stage you should understand not only separate chaining (head-inserted linked lists) but also the other ways of resolving hash collisions. Here I recommend reading "Algorithms, 4th Edition", which introduces the basic data structures and algorithms in detail and focuses on application and understanding, unlike "Introduction to Algorithms" with its lengthy mathematical proofs. From an engineer's point of view, once you have grasped the principles and implementation ideas of an algorithm, being able to apply it fluently to solve problems is enough; deriving formal correctness proofs on a mathematical basis is not, in my opinion, a skill every engineer must master.
* LeetCode practice should lean toward easy and medium problems; they appear most often, so try to cover them as completely as possible.
* To be added
## Basics and small tips
* Know basic complexity analysis and understand O(N), O(N^2), O(logN), and so on.
* Recursion is hard to write and easy to get wrong. You must master the three essentials of writing correct recursion: 1. a recursion must have a termination condition; for example in DFS there is always a node whose children are null, and that is the exit of the recursive call; 2. after each recursive call, the problem's solution space must keep shrinking; 3. a parent call's solution space must not overlap with the problems of its child calls (this is hard to explain clearly in text; a video may follow).
* Binary search has a template you can memorize; if you are unsure, compare the target against the elements at both the mid and low pointers.
* Some problems have a fixed solving pattern: once you master a DFS and BFS template, you can use it to solve a large number of graph-related problems, possibly adding pruning or deduplication with a HashMap on top.
* Many algorithms evolved from naive ones. For example, KMP substring search, and Rabin-Karp for the same task, both evolved from the naive double-loop comparison of strings. Before learning any algorithm, first look at its naive counterpart; it deepens your understanding of the algorithm and data structure.
* Whiteboard and hand-written code must above all be understandable. A famous view is that code is written for people to read and only incidentally executed by machines. In an algorithm interview, if the interviewer has no special requirements on time or space complexity, write the solution that is easiest to explain and implement. If your code cannot be shown correct with a simple test case, the interview will suffer; in a few dozen minutes it is hard to write a show-off implementation.
* Always handle dirty cases first: defensive programming, never trusting the correctness of any submitted parameters. This way of thinking will benefit you for life.
* You should usually write a small-scale test case to validate your approach.
* To be added
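The naive double-loop substring search mentioned in the tips above, which both KMP and Rabin-Karp refine, can be sketched as follows (my own illustration, not code from the repository):

```java
// Naive substring search: for every start position in the text, compare
// the pattern character by character. O(n*m) in the worst case; KMP and
// Rabin-Karp improve on exactly this double loop.
public class NaiveSearch {
    public static int indexOf(String text, String pattern) {
        if (text == null || pattern == null) return -1; // dirty cases first
        int n = text.length(), m = pattern.length();
        for (int i = 0; i + m <= n; i++) {
            int j = 0;
            while (j < m && text.charAt(i + j) == pattern.charAt(j)) j++;
            if (j == m) return i; // full match starting at index i
        }
        return -1; // no match
    }

    public static void main(String[] args) {
        System.out.println(NaiveSearch.indexOf("hello world", "world")); // 6
    }
}
```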
## Common templates
* To write correct code on a whiteboard, it is best to memorize a few templates together with the ideas behind each algorithm.
* Binary search template
```java
public class BinarySearch { // binary search template
public int search(int[] nums, int target) {
int left = 0;
int right = nums.length - 1;
while (left <= right) {
int mid = left + ((right - left) >> 1);
if (nums[mid] == target){
return mid;
}
// If you can't remember this exactly in an interview, also compare the target with nums[left]
else if (nums[mid] > target) {
right = mid - 1;
} else {
left = mid + 1;
}
}
return -1;
}
}
//
//Loop condition: left <= right
//Midpoint: mid = left + ((right - left) >> 1)
//Left boundary update: left = mid + 1
//Right boundary update: right = mid - 1
//Return value: mid / -1
```
* BFS (breadth-first search) template
```java
public class Solution {
class TreeNode {
public int val;
public TreeNode(int val) {
this.val = val;
}
public TreeNode left;
public TreeNode right;
}
// 0 <- 1st: poll 0 from the queue, then enqueue its child 1
// / \
// 1 null <- 2nd: poll 1 from the queue, then enqueue its children 3 and 4
// / \
// 3 4 <- 3rd: poll 3 and 4; they have no children, so the queue is empty and we return count = 3
// null -> layer
// The idea is simple: BFS is a level-order traversal, so each loop over the queue takes exactly the nodes of one level; the same holds for n-ary trees, not just binary trees
// Computing the maximum number of levels of a binary tree is a classic BFS template problem
public int calculateBinaryTreeLayer(TreeNode node) {
if (node == null) {
return 0;
}
//BFS always starts by enqueuing the root node
Queue<TreeNode> queue = new LinkedList<>();
queue.add(node);
int count = 0;
while (!queue.isEmpty()) {
List<TreeNode> polls = new ArrayList<>();
//Take out all nodes of the current level; the same idea applies to n-ary trees
while (!queue.isEmpty()) {
polls.add(queue.poll());
}
//Push every node's children back into the queue
polls.forEach(poll -> {
if (poll.right != null) {
queue.add(poll.right);
}
if (poll.left != null) {
queue.add(poll.left);
}
});
//count one more level
count++;
}
return count;
}
/**
* not full binary tree
*/
@Test
public void testCalculateBinaryTreeTestCase2() {
TreeNode top = new TreeNode(0);
top.left = new TreeNode(1);
top.right = new TreeNode(2);
top.left.left = new TreeNode(3);
top.left.right = new TreeNode(4);
Assert.assertEquals(3, calculateBinaryTreeLayer(top));
}
/**
* full binary tree
*/
@Test
public void testCalculateBinaryTreeTestCase1() {
// 0
// / \
// 1 2
// / \ / \
// 3 4 5 6
// null -> layer
TreeNode top = new TreeNode(0);
top.left = new TreeNode(1);
top.right = new TreeNode(2);
top.left.left = new TreeNode(3);
top.left.right = new TreeNode(4);
top.right.left = new TreeNode(5);
top.right.right = new TreeNode(6);
Assert.assertEquals(3, calculateBinaryTreeLayer(top));
}
}
```
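As a companion to the BFS template above, here is a minimal DFS sketch for the same maximum-depth problem (my own addition, not from the repository). It follows the three recursion essentials listed earlier: the null node is the termination condition, and each call shrinks the problem to a strictly smaller subtree.

```java
public class DfsDepth {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // DFS template: a null node is the recursion exit; each call
    // recurses into a smaller subtree, so the solution space shrinks.
    public static int maxDepth(TreeNode node) {
        if (node == null) return 0; // termination condition
        return 1 + Math.max(maxDepth(node.left), maxDepth(node.right));
    }

    public static void main(String[] args) {
        TreeNode top = new TreeNode(0);
        top.left = new TreeNode(1);
        top.left.left = new TreeNode(3);
        System.out.println(maxDepth(top)); // 3
    }
}
```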
## To be continued...
| 1 |
inovex/tango-ar-navigation-example | Small example implementation of an augmented reality path finding navigation using Project Tango | null | # Project Tango AR Navigation Example
This is a small example implementation of an augmented reality path
finding navigation using Project Tango.
* walkable floor plan is tracked inside a [quadtree](https://de.wikipedia.org/wiki/Quadtree)
* navigation through the quadtree using [A*](https://de.wikipedia.org/wiki/A*-Algorithmus) with euclidean heuristic
* the floor plan is shown in a top view and can be rotated and scaled with multitouch gestures

### Development
* Missing assets can be installed by `./gradlew installAssets`
* This project still depends on the android gradle plugin `1.5.0` due to the missing sdkmanager plugin release
| 1 |
crctraining/customers-accounts-and-money-transfers | One of the example applications used in Chris Richardson's microservices training class | null | # Microservices application: Customers, accounts and money transfers
This is the source code for a simple banking application.
The application architecture is based on [microservices](http://microservices.io/patterns/microservices.html), [Event Sourcing](http://microservices.io/patterns/data/event-sourcing.html) and [CQRS](http://microservices.io/patterns/data/cqrs.html).
It is built using Spring Cloud, Spring Boot and the [Eventuate platform](http://eventuate.io/).
The application is used in hands-on labs that are part of a [microservices services class](http://www.chrisrichardson.net/training.html) that is taught by [Chris Richardson](http://www.chrisrichardson.net/about.html).
| 1 |
DrFair/ExampleMod | An example mod for Necesse. | null | An example mod for Necesse.
Check out the [modding wiki page](https://necessewiki.com/Modding) for more. | 1 |
jbossdemocentral/brms-coolstore-demo | Retail online web shopping cart example, the Cool Store demo leverages JBoss BRMS, JBoss Developer Studio, and Vaadin UI framework. | null | JBoss BRMS Suite Cool Store Demo
================================
This is a retail web store demo where you will find rules, decision tables, events, and a ruleflow
that is leveraged by a web application. The web application is a WAR built using the JBoss BRMS
generated project as a dependency, providing an example project showing how developers can focus on the
application code while the business analysts can focus on rules, events, and ruleflows in the
JBoss BRMS product web based dashboard.
This demo is self contained, it uses a custom maven settings to deploy all built JBoss BRMS knowledge artifacts
into an external maven repository (not your local repository), in /tmp/maven-repo.
There are three options available to you for using this demo: local install, OpenShift (including OpenShift Online and Red Hat CDK with OpenShift Enterprise), and Docker.
Software
--------
The following software is required to run this demo:
- [JBoss EAP 7.0 installer](https://developers.redhat.com/download-manager/file/jboss-eap-7.0.0-installer.jar)
- [JBoss BRMS 6.4.0.GA deployable for EAP 7](https://developers.redhat.com/download-manager/content/origin/files/sha256/14/148eb9be40833d5da00bb6108cbed1852924135d25ceb6c601c62ba43f99f372/jboss-brms-6.4.0.GA-deployable-eap7.x.zip)
- Git client
- Maven 3.2+
- [7-Zip](http://www.7-zip.org/download.html) (Windows only): to overcome the Windows 260 character path length limit, we need 7-Zip to unzip the BRMS deployable.
Option 1 - Install on your machine
----------------------------------
1. [Download and unzip.](https://github.com/jbossdemocentral/brms-coolstore-demo/archive/master.zip)
2. Add products to installs directory.
3. Run 'init.sh' or 'init.bat' file. 'init.bat' must be run with Administrative privileges.
4. Start JBoss BRMS Server by running ./target/jboss-eap-7.0/bin/standalone.sh
5. Login to http://localhost:8080/business-central
```
- login for admin and analyst roles (u:brmsAdmin / p:jbossbrms1!)
```
6. Build and deploy project. To do this, click on 'Authoring -> Project Authoring', this will open the 'Project Authoring' view. Click on the 'Open Project Editor' button, which opens the project editor. Now, click on the 'Build -> Build & Deploy' button, which can be found on the right-hand side of the project editor window.
7. Open shopping cart and demo away (http://localhost:8080/brms-coolstore-demo)
Option 2 - Install on OpenShift
-------------------------------
Running this demo in a container on any OpenShift Container Platform is [available at Red Hat Demo Central](https://github.com/redhatdemocentral/rhcs-coolstore-demo).
Option 3 - Run in Docker
-----------------------------------------
The following steps can be used to configure and run the demo in a container
1. [Download and unzip.](https://github.com/jbossdemocentral/brms-coolstore-demo/archive/master.zip)
2. Add the EAP installer and BPM Suite deployable to installs directory.
3. Run the 'init-docker.sh' or 'init-docker.ps1' file.
4. Start the container: `docker run -it -p 8080:8080 -p 9990:9990 jbossdemocentral/brms-coolstore-demo`
5. Login to http://<DOCKER_HOST>:8080/business-central
```
- login for admin and analyst roles (u:brmsAdmin / p:jbossbrms1!)
```
6. Open shopping cart and demo away (http://<DOCKER_HOST>:8080/brms-coolstore-demo)
Additional information can be found in the jbossdemocentral container [developer repository](https://github.com/jbossdemocentral/docker-developer)
Notes
-----
The web application (shopping cart) is built during demo installation against a provided coolstore project jar, version 2.0.0. When you open the project you will find the version is also set to 2.0.0. You can run the web application as is, but if you build and deploy another version 2.0.0 to your maven repository it will find duplicate rules. To demo, deploy a new version of the coolstore project by bumping the version number on each build and deploy, and note the KieScanner picking up the new version within 10 seconds of a deployment. For example: start the project, bump the version to 3.0.0, build and deploy, open the web application, and watch the KieScanner in the server logs pick up version 3.0.0. Then change a shipping rule value in the decision table, save, bump the project version to 4.0.0, build and deploy, and watch the KieScanner pick up version 4.0.0; on its next run the web application will use the new shipping values.
Supporting Articles
-------------------
- [JBoss BRMS Cool Store UI gets Vaadin facelift](http://www.schabell.org/2016/01/jboss-brms-coolstore-ui-vaadin-facelift.html)
- [7 Steps to Your First Rules with JBoss BRMS Starter Kit](http://www.schabell.org/2015/08/7-steps-first-rules-jboss-brms-starter-kit.html)
- [3 shockingly easy ways into JBoss rules, events, planning & BPM](http://www.schabell.org/2015/01/3-shockingly-easy-ways-into-jboss-brms-bpmsuite.html)
- [Jump Start Your Rules, Events, Planning and BPM Today](http://www.schabell.org/2014/12/jump-start-rules-events-planning-bpm-today.html)
- [4 Foolproof Tips Get You Started With JBoss BRMS 6.0.3](http://www.schabell.org/2014/10/4-foolproof-tips-get-started-jboss-brms-603.html)
- [How to Use Rules and Events to Drive JBoss BRMS Cool Store for xPaaS](http://www.schabell.org/2014/08/how-to-use-rules-events-drive-jboss-brms-coolstore-xpaas.html)
- [Red Hat JBoss BRMS - all product demos updated for version 6.0.2.GA release](http://www.schabell.org/2014/07/redhat-jboss-brms-product-demos-6.0.2-updated.html)
- [Red Hat JBoss BRMS 6 - Demo Cool Store Dynamic Rule Updates (video)] (http://www.schabell.org/2014/05/redhat-jboss-brms6-demo-coolstore-dynamic-rule-updates.html)
- [Red Hat JBoss BRMS 6 - The New Cool Store Demo] (http://www.schabell.org/2014/03/redhat-jboss-brms-v6-coolstore-demo.html)
- [JBoss BRMS Cool Store Demo updated with EAP 6.1.1] (http://www.schabell.org/2013/09/jboss-brms-coolstore-demo-updated-eap-611.html)
- [A shopping cart example in the Cool Store Demo] (http://www.schabell.org/2013/04/jboss-brms-coolstore-demo.html)
- [Cool Store installation video part I] (http://www.schabell.org/2013/05/jboss-brms-coolstore-demo-video-partI.html)
- [Cool Store CEP and Rules video part II] (http://www.schabell.org/2013/05/jboss-brms-coolstore-demo-video-partII.html)
- [Cool Store BPM and Decision Tables video part III] (http://www.schabell.org/2013/05/jboss-brms-coolstore-demo-video-partIII.html)
Released versions
-----------------
See the tagged releases for the following versions of the product:
- v3.8 JBoss BRMS 6.4.0.GA on JBoss EAP 7.0.0.GA with cool store installed and RH CDK on OSE Cloud install option.
- v3.7 JBoss BRMS 6.3.0 on JBoss EAP 6.4.7 with cool store installed and RH CDK on OSE Cloud install option.
- v3.6 JBoss BRMS 6.2.0-BZ-1299002 on JBoss EAP 6.4.4 with cool store installed and RH CDK on OSE Cloud install option.
- v3.5 JBoss BRMS 6.2.0-BZ-1299002 on JBoss EAP 6.4.4 with cool store installed.
- v3.4 JBoss BRMS 6.2.0, JBoss EAP 6.4.4 and OSE aligned containerization.
- v3.3 JBoss BRMS 6.2.0, JBoss EAP 6.4.4 and cool store installed, UI updated to Vaadin 7.6.0.
- v3.2 JBoss BRMS 6.2.0, JBoss EAP 6.4.4 and cool store installed, UI updated to Vaadin 7.
- v3.1 JBoss BRMS 6.2.0, JBoss EAP 6.4.4 and cool store installed.
- v3.0 JBoss BRMS 6.1.1 (patch update applied) with cool store installed and Albert Wong updates for JBDS project importing.
- v2.9 JBoss BRMS 6.1.1 (patch update applied) with cool store installed.
- v2.8 JBoss BRMS 6.1 with cool store installed.
- v2.7 JBoss BRMS 6.0.3 installer with cool store configured to scan external maven repository.
- v2.6 JBoss BRMS 6.0.3 installer with cool store updated so that project unit tests running again.
- v2.5 JBoss BRMS 6.0.3 with optional containerized installation.
- v2.4 moved to JBoss Demo Central, with updated windows init.bat support and one click install button.
- v2.3 JBoss BRMS 6.0.3 installer with cool store demo installed.
- v2.2 JBoss BPM Suite 6.0.2, JBoss EAP 6.1.1, cool store demo installed.
- v2.1 JBoss BPM Suite 6.0.1, JBoss EAP 6.1.1, cool store demo installed.
- v2.0 JBoss BPM Suite 6.0.0, JBoss EAP 6.1.1, cool store demo installed.
- v1.4 is BRMS 5.3.1 deployable, running on JBoss EAP 6.1.1, integrated BRMS maven repo into project so no longer need to add to
personal settings configuration which fully automates project build.
- v1.3 is BRMS 5.3.1 deployable, running on JBoss EAP 6.1.1, and added Forge Laptop Sticker to store.
- v1.2 is BRMS 5.3.1 deployable, running on JBoss EAP 6.1, mavenized using JBoss repo.
- v1.1 new welcome screen and doc fixes.
- v1.0 is BRMS 5.3.1 deployable, running on JBoss EAP 6.

[](https://vimeo.com/ericschabell/brms-coolstore-demo)
[](http://vimeo.com/ericschabell/bpmpaas-brms-coolstore-demo)


| 0 |
kbastani/cloud-native-microservice-strangler-example | Spring Cloud example of a cloud native strangler pattern for integrating microservices with legacy applications | null | # Microservices: Cloud Native Legacy Strangler Example
This reference application is a Spring Cloud example of implementing a cloud-native [Strangler Pattern](http://www.martinfowler.com/bliki/StranglerApplication.html) using microservices. The project is intended to demonstrate techniques for integrating a microservice architecture with legacy applications in an existing SOA. This reference architecture implements a hybrid cloud architecture that uses best practices for developing _Netflix-like_ microservices using Spring Cloud.
* Cloud Native Microservices
* Uses best practices for cloud native applications
* OAuth2 User Authentication
* Netflix OSS / Spring Cloud Netflix
* Configuration Server
* Service Discovery
* Circuit Breakers
* API Gateway / Micro-proxy
* Legacy Edge Gateway
* Legacy application integration layer
* Adapter for legacy systems to consume microservices
* Lazy Migration of Legacy Data
* Microservice facades integrate domain data from legacy applications
* Database records are siphoned away from legacy databases
* Datasource routing enables legacy systems to use microservices as the system of record
* Strangler Event Architecture
* Asset capture strategy uses events to guarantee single system of record for resources
* Durable mirroring of updates back to legacy system
## Architecture Diagram

## Overview
This reference application is based on both common and novel design patterns for building a cloud-native hybrid architecture with both legacy applications and microservices. The reference project includes the following applications.
* Legacy Applications
* Customer Service
* Legacy Edge Service
* Microservices
* Discovery Service
* Edge Service
* Config Service
* User Service
* Profile Service
* Profile Web
## Legacy Database Strangulation
When building microservices, the general approach is to take existing monoliths and to decompose their components into microservices. Instead of migrating all legacy applications at once, we can allow an organic process of decomposition to drive the birth of new cloud-native applications that strangle data away from shared databases used by legacy applications. The _cloud-native strangler pattern_ focuses on the complete replacement of a monolith's database access over a period of time.
In this approach microservices will be transitioned to become the system of record for domain data used by strangled legacy applications. The process of performing an on-demand migration of data out of a shared database will require that only one system of record exists at any one time. To solve this, a _Legacy Edge_ application acts as an API gateway to allow legacy applications to talk to new microservices.
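The routing idea described above can be sketched as a toy, in-memory example (all names here are hypothetical; the real project wires this through the Legacy Edge and Spring Cloud, not a single class): on first access a record is siphoned out of the legacy store, after which the microservice store is the single system of record.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of strangler datasource routing with lazy migration:
// once a record has been captured, all further reads are routed away
// from the legacy database so only one system of record exists.
public class StranglerRouter {
    private final Map<String, String> legacyDb = new HashMap<>();
    private final Map<String, String> microserviceDb = new HashMap<>();

    public StranglerRouter() {
        legacyDb.put("profile-1", "legacy record"); // seed the legacy store
    }

    // Lazy migration: siphon the record out of the legacy database on
    // first access ("asset capture"), then serve it from the microservice.
    public String read(String id) {
        if (microserviceDb.containsKey(id)) {
            return "microservice:" + microserviceDb.get(id);
        }
        String legacy = legacyDb.remove(id); // capture the asset
        if (legacy == null) return null;     // unknown record
        microserviceDb.put(id, legacy);      // microservice is now the record
        return "microservice:" + legacy;
    }

    public static void main(String[] args) {
        StranglerRouter router = new StranglerRouter();
        System.out.println(router.read("profile-1")); // first read migrates
        System.out.println(router.read("profile-1")); // second read is served directly
    }
}
```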
## Usage
There are two ways to run the reference application, with either Docker Compose or Cloud Foundry, the latter of which can be installed on a development machine using [PCF Dev](https://docs.pivotal.io/pcf-dev/). Since the distributed application is designed to be cloud-native, there is a lot to be gained from understanding how to deploy the example using Cloud Foundry.
### Docker Compose
To run the example using Docker Compose, a `run.sh` script is provided which will orchestrate the startup of each application. Since the example will run 8 applications and multiple backing services, it's necessary to have at least 9GB of memory allocated to Docker.
WARNING: The `run.sh` script is designed to use Docker Machine, so if you're using Docker for Mac, you'll need to modify the `run.sh` script by setting `DOCKER_IP` to `localhost`.
### Cloud Foundry
To run the example using Cloud Foundry, a `deploy.sh` script is provided which will orchestrate the deployment of each application to a simulated cloud-native environment. If you have enough resources available, you can deploy the example on [Pivotal Web Service](http://run.pivotal.io). If you're new to Cloud Foundry, it's highly recommended that you go with the PCF Dev approach, which you can install by following the directions at https://docs.pivotal.io/pcf-dev/.
When you have a CF environment to deploy the example, go ahead and run the `deploy.sh` script in the parent directory of the project. The bash script is commented enough for most to understand the steps of the deployment. Each Cloud Foundry deployment manifest is located in the directory of the application in example project, named `manifest.yml`. The deployment process will deploy the Spring Cloud backing services first, and afterward, each microservice will be deployed one by one until each application is running.
## License
This project is an open source product licensed under GPLv3.
| 1 |
FreedomChuks/NavigationUiExample- | example of working with Navigation Components | null | # NavigationUiExample-
It contains Android Navigation component examples that show how to use the Navigation component in an Android app with fragments and actions as destinations. For details about the Android Navigation component and an explanation of this project, see https://medium.com/@freedom.chuks7/how-to-use-jet-pack-components-bottomnavigationview-with-navigation-ui-19fb120e3fb9

| 1 |
dbleicher/recyclerview-grid-quickreturn | An example of implementing QuickReturn on a RecyclerView using a StaggeredGridLayoutManager to display CardViews inside a SwipeRefreshLayout | null | # recyclerview-grid-quickreturn
An example of implementing QuickReturn on a RecyclerView
using a StaggeredGridLayoutManager to display CardViews inside a SwipeRefreshLayout.
This is an example project that shows one approach (probably not the best one!) of
implementing the QuickReturn pattern with a RecyclerView that uses the
StaggeredGridLayoutManager. For grins, the whole thing also supports Pull-to-Refresh
using a SwipeRefreshLayout. By the way, the cells within the layout are CardViews, and use
the `card_view:cardUseCompatPadding="true"` attribute to display properly on Lollipop.
The idea is that the QR view should not cover the top items in the RV. This is now accomplished with a custom
(but trivial) ItemDecoration. As a result, the adapter and layoutmanager are plain vanilla. I'm sure there's
a better way, so if you find it, please explain it to me! :-)
Here's what it looks like with expandable cells: https://www.youtube.com/watch?v=ulf4v3Qzn4o
Here's an older animation:

## Changes
1. (2015-1-2) Added TargetedSwipeRefreshLayout (TSRL) to permit multiple views within the SwipeRefreshLayout.
2. (2015-1-2) Removed topview detection from onScrollListener (no longer needed with TSRL).
3. (2014-11-14) Modified the spacing hack to use an ItemDecorator (cleaner approach).
## TODO
1. (2014-11-14) ~~It's been suggested that I look into using ItemDecoration instead of adjusting cell margins during onBind.~~
2. (2015-03-18) Fixed using recyclerview-v7 R22 ~~Need to address issue of inserting items at top of grid.~~
## License
```
Copyright 2014-2015 David Bleicher
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
| 1 |
berndruecker/flowing-retail-old | REPLACED BY MORE CURRENT VERSION OF THIS EXAMPLE: https://github.com/berndruecker/flowing-retail | null | # Flowing retail sample application
This sample application showcases *concepts and alternatives* to implement
* a simple order application
in the context of
* Domain Driven Design (DDD)
* Event Driven Architecture (EDA)
* Microservices (µS)
Key facts:
* Written in Java
* As simple as possible to show concepts, not built for production usage. Hint: we know some parts of the code skip well-known best practices and patterns, but we focused on making the code easy to understand. For example, we prefer to duplicate code if this means you have to read one class less to understand what a component is doing.
# Links
* Introduction blog post by Bernd Rücker: https://blog.bernd-ruecker.com/flowing-retail-demonstrating-aspects-of-microservices-events-and-their-flow-with-concrete-source-7f3abdd40e53
# Overview and architecture
Flowing retail simulates a very simple order processing system. The business logic is separated into the following microservices:

* The core domains communicate via messages with each other.
* Messages might contain *events* or *commands*.
This is the stable nucleus for flowing retail.
## Alternatives
Now there are a couple of options you can choose of when running / inspecting the example.
### Channel technology
You can choose between:
* [Apache Kafka](http://kafka.apache.org/) as event bus (option ```kafka```, *default*).
* [RabbitMQ](https://www.rabbitmq.com/) as AMQP messaging system (option ```rabbit```).
### Long running processes
In order to support [long running processes](xxx) there are multiple options, which are very interesting to compare:
* Domain entities store state (option ```entity```)
* [Camunda](http://camunda.org/) workflow engine orchestrates using BPMN models (option ```camunda```, *default*)
* [Camunda](http://camunda.org/) workflow engine orchestrates using a technical DSL (option ```camunda-dsl```)
Note that every component does its own parts of the overall order process. As an example this is illustrated using BPMN and showing the Order and Payment Service with their processes:

# Run the application
* Download or clone the source code
* Run a full maven build
```
mvn install
```
* Start all components in one Java process
* Channel (e.g. Kafka which also requires Zookeeper)
* All microservices
```
mvn -f starter exec:java
```
If you want to select options you can also do so:
```
mvn -f starter exec:java -Dexec.args="rabbit camunda-dsl"
```
You can also import the projects into your favorite IDE and start the following class yourself:
```
starter/io.flowing.retail.command.SimpleStarter
```
* Now you can place an order via [http://localhost:8085](http://localhost:8085)
* You can inspect all events going on via [http://localhost:8086](http://localhost:8086)
# TODO ZONE
## Using Kafka
* Can be started built in, but you can also install and run yourself
* Port = default = ##
When installed yourself, create topic *"flowing-retail"*
```
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic flowing-retail
```
You can query all topics by:
```
kafka-topics.sh --list --zookeeper localhost:2181
```
## Using RabbitMQ
* Must be installed and started yourself
* Port = default = ##
## Using Camunda
You can inspect what's going on using Cockpit:
* Download Camunda Distribution of your choice
* Configure Datasource to connect to: jdbc:h2:tcp://localhost:8092/mem:camunda
* In Tomcat distribution this is configured in server/apache-tomcat-8.0.24/conf/server.xml
* In Wildfly distribution this is configured in server/wildfly-10.1.0.Final/standalone/configuration/standalone.xml
* Best: Do not start job executor
* Run it and you can use cockpit normally
If you want to restart microservices and keep Cockpit running, you have to make sure your JDBC connection pool destroys stale connections. In Wildfly you can add a validation to your datasource, so the config will look like this:
```
<datasource jndi-name="java:jboss/datasources/ProcessEngine" pool-name="ProcessEngine" ...>
<connection-url>jdbc:h2:tcp://localhost:8092/mem:camunda</connection-url>
<driver>h2</driver>
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<security>
<user-name>sa</user-name>
<password>sa</password>
</security>
<validation>
<check-valid-connection-sql>select 1</check-valid-connection-sql>
<validate-on-match>false</validate-on-match>
</validation>
</datasource>
```
| 0 |
humank/ddd-practitioners-ref | EventStorming workshop, this is a hands-on workshop. Contains such topics: DDD, Event storming, Specification by example. Including the AWS product : Serverless Lambda , DynamoDB, Fargate, CloudWatch. | aws container ddd ecs eventbridge eventstorming fargate lambda microservices serverless | # Domain-Driven Design Practitioners Reference
## Under Construction -
**Appreciate having your support on building this workshop, hope this workshop is useful & meaningful for you. In order to well organize all of the contents for DDD practitioners reference, the workshop will be refactored to cover wider topics with a more complex business scenario sample. Plan to release new content before the end of 2021, stay tuned.**
In this ddd-practitioners-reference, you will learn more than the classic Domain-Driven Design strategic design and tactical design patterns. DDD is a good way to approach decision makers and align business goals among diverse stakeholders in different business units.
Learning a series of methodologies on its own can get boring and drain the passion from an unlimited learning journey, so I'll walk you through a business case to practice the following methodologies/approaches and get familiar with using DDD to design solutions.
So, this guide will cover the topics below, which are what I learned from worldwide communities (ddd_eu, virtualddd) and the experience of awesome DDD practitioners.
Outline:
- A sample business story - Trip Service
- Awareness of the business context by Wardley Maps
- Knowing your key stakeholders - Impact Mapping
- EventStorming
- Bounded Context Canvas - created by Nick Tune
- Aggregate Design Canvas (*) - created by Kacper Gunia
- Aggregate Canvas (*) - created by Arthur Chang
- Example Mapping
- Specification by Example
- Implement DDD Tactical pattern in Clean Architecture with Spring boot framework - refer to awesome-trip
- Integrate with AWS cloud native offerings
-----
## Table of Contents
- [00 - Event Storming](#eventstorming)
- [What is Event Storming?](#what-is-event-storming)
- [Whom is it for?](#whom-is-it-for)
- [Event Storming Terms](#event-storming-terms)
- [Event Storming Benefits](#event-storming-benefits)
- [Event Storming Applications](#event-storming-applications)
- [01 - Hands-on: Events exploring](docs/01-hands-on-events-exploring/README.md)
- [02 - Cafe business scenario](docs/02-coffee-shop-scenario/README.md)
- [03 - Roles, Commands, and Events Mapping](docs/03-roles-commands-events-mapping/README.md)
- [Key Business events in the coffeeshop](docs/03-roles-commands-events-mapping/README.md#key-business-events-in-the-coffeeshop)
- [Commands and Events mapping](docs/03-roles-commands-events-mapping/README.md#commands-and-events-mapping)
- [Roles](docs/03-roles-commands-events-mapping/README.md#roles)
- [Exceptions or risky events](docs/03-roles-commands-events-mapping/README.md#exceptions-or-risky-events)
- [Re-think solutions to serve risky events](docs/03-roles-commands-events-mapping/README.md#re-think-solutions-to-serve-risky-events)
- [Aggregate](docs/03-roles-commands-events-mapping/README.md#aggregate)
- [Bounded Context forming up](docs/03-roles-commands-events-mapping/README.md#bounded-context-forming-up)
- [04 - Modeling and Development](docs/04-modeling-and-development/README.md)
- [Specification by Example](docs/04-modeling-and-development/README.md#specification-by-example)
- [TDD within Unit Test environment](docs/04-modeling-and-development/README.md#tdd-within-unit-test-environment)
- [Generate unit test code skeleton](docs/04-modeling-and-development/README.md#generate-unit-test-code-skeleton)
- [Implement Domain Model from code Skeleton](docs/04-modeling-and-development/README.md#implement-domain-model-from-code-skeleton)
- [Design each Microservices in Port-adapter concept](docs/04-modeling-and-development/README.md#design-each-microservices-in-port-adapter-concept)
- [05 - Deploy Applications by AWS CDK](docs/05-deploy-applications-by-cdk/README.md)
<!---
- [05 - Domain Driven Design Tactical design pattern guidance](05-ddd-tactical-design-pattern)
- [06 - Actual Implementation](06-actual-implementation)
- [07 - Infrastructure as Code by CDK](07-iaac-cdk)
- [08 - Deploy Serverless application](08-deploy-serverless-app)
- [09 - Deploy Containerized application](09-deploy-containerized-app)
- [10 - Build up CI/CD pipeline](10-build-up-cicd-pipeline)
--->
# Event Storming

## What is Event Storming?
Event Storming is a **rapid**, **lightweight**, and often under-appreciated group modeling technique that is **intense**, **fun**, and **useful** to **accelerate** project teams. It is typically offered as an interactive **workshop** and it is a synthesis of facilitated group learning practices from Gamestorming, leveraging on the principles of Domain Driven Design (DDD).
You can apply it practically on any technical or business domain, especially those that are large, complex, or both.
## Whom is it for?
Event Storming isn't limited to just the software development team. In fact, it is recommended to invite all the stakeholders, such as developers, domain experts, business decision makers, etc., to join the Event Storming workshop to collect viewpoints from each participant.
## Event Storming Terms

> Reference from Kenny Bass - https://storage.googleapis.com/xebia-blog/1/2018/10/From-EventStorming-to-CoDDDing-New-frame-3.jpg
Take a look at this diagram; there are a few colored sticky notes, each with a different intention:
* **Domain Events** (Orange sticky note) - Describes *what happened*. Represent facts that happened in a specific business context, written in past tense
* **Actions** aka Command (Blue sticky note) - Describes an *action* that caused the domain event. It is a request or intention, raised by a role or time or external system
* **Information** (Green sticky note) - Describes the *supporting information* required to help make a decision to raise a command
* **Consistent Business Rules** aka Aggregate (Yellow sticky note)
* Groups of Events or Actions that represent a specific business capability
* Has the responsibility to accept or fulfill the intention of command
* Should be in small scope
* And communicated by eventual consistency
* **Eventual Consistent Business rules** aka Policy (Lilac sticky note)
* Represents a process or business rules. Can come from external regulation and restrictions e.g. account login success/fail process logic.
## Event Storming Benefits
Business requirements can be very complex. It is often hard to find a fluent way to help the Product Owner and Development teams to collaborate effectively. Event storming is designed to be **efficient** and **fun**. By bringing key stakeholder into the same room, the process becomes:
- **Efficient:** Everyone coming together in the same room can make decisions and sort out differences quickly. To create a comprehensive business domain model, what used to take many weeks of email, phone call or meeting exchanges can be reduced to a single workshop.
- **Simple:** Event Storming encourages the use of "Ubiquitous language" that both the technical and non-technical stakeholders can understand.
- **Fun:** Domain modeling is fun! Stakeholders get hands-on experience to domain modeling which everyone can participate and interact with each other. It also provides more opportunities to exchange ideas and improve mindsharing, from various perspective across multiple roles.
- **Effective:** Stakeholders are encouraged not to think about the data model, but about the business domain. This puts customers first and working backwards from there, achieves an outcome that is more relevant.
- **Insightful:** Event Storming generate conversations. This helps stakeholders to understand the entire business process better and help to have a more holistic view from various perspective.
## Event Storming Applications
There are many useful applications of Event Storming. The most obvious time to use event storming is at a project's inception, so the team can start with a common understanding of the domain model. Some other reasons include:
* Discovering complexity early on, finding missing concepts, understanding the business process;
* Modelling or solving a specific problem in detail;
* Learning how to model and think about modelling.
Event Storming can also help to identify key views for your user interface, which can jump start Site Mapping or Wireframing.
Let's get started with a quick example to demonstrate how to run a simple Event Storming.
[Next: 01 Hands-on Events Exploring >](docs/01-hands-on-events-exploring/README.md)
| 0 |
AlexZhukovich/MultipleRowLayoutsRecyclerView | This short example shows how to use different row layouts for RecyclerView | null | # MultipleRowLayoutsRecyclerView
This short example shows how to use different row layouts for RecyclerView
<img src="https://github.com/AlexZhukovich/MultipleRowLayoutsRecyclerView/blob/master/screen/logo.png" width="720px" height="450px" />
If you want to use different types of row layouts, you must implement the following method in your adapter:
```java
@Override
public int getItemViewType(int position) {
...
}
```
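As a framework-free sketch of the dispatch logic that `getItemViewType` typically implements, the following plain-Java snippet maps each position in a data set to a layout type constant. The constants, the `items` list, and the "header lines start with `#`" convention here are hypothetical, for illustration only — they are not part of this project's source:

```java
import java.util.List;

public class ViewTypeDispatch {
    // Hypothetical layout type constants; in a real adapter each one maps
    // to a different row layout inflated in onCreateViewHolder.
    static final int TYPE_HEADER = 0;
    static final int TYPE_ITEM = 1;

    // Mirrors what getItemViewType(position) would return: a type constant
    // derived from the data at that position.
    static int itemViewType(List<String> items, int position) {
        return items.get(position).startsWith("#") ? TYPE_HEADER : TYPE_ITEM;
    }
}
```

RecyclerView then passes the returned type to `onCreateViewHolder`, where you would inflate the matching row layout for that type.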
Article: http://alexzh.com/tutorials/multiple-row-layouts-using-recyclerview/
| 1 |
wx-chevalier/Java-Notes | :books: Java Notes & Examples. | 语法基础、数据结构、工程实践、设计模式、并发编程、JVM、Scala | clojure java | [![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![License][license-shield]][license-url]
<!-- PROJECT LOGO -->
<br />
<p align="center">
<a href="https://github.com/wx-chevalier/Java-Notes">
<img src="https://assets.ng-tech.icu/item/header.svg" alt="Logo" style="width: 100vw;height: 400px" />
</a>
<p align="center">
<a href="https://ng-tech.icu/books/Java-Notes"><strong>Read Online >> </strong></a>
<br />
<br />
<a href="https://github.com/wx-chevalier">Code Examples</a>
·
<a href="https://github.com/wx-chevalier/Awesome-Lists">Reference Materials</a>
</p>
</p>
<!-- ABOUT THE PROJECT -->
# Java Series | Java Development Fundamentals and Engineering Practice

Java is a high-level programming language released by Sun Microsystems in May 1995. Java was born in the late 1990s, just as the Internet was taking off. Enterprise application development at the time faced several problems: first, in heterogeneous environments dominated by IBM, Sun, and HP UNIX servers and mainframes, applications written in C/C++ and other languages were difficult to port across platforms; second, network applications based on CGI and similar technologies fell short in both development efficiency and functionality; third, C/C++, the mainstream languages of the day, had a high barrier to entry, were error-prone, and demanded a great deal of experience. Java was simple to learn, safe and reliable, and promised "write once, run anywhere"; together with Applet, Servlet, and JSP technologies, it solved these pain points and met the requirements of Internet programming and operations of the era, so it quickly stood out as the Internet grew and has held a mainstream position ever since.
Few technologies in computer science can rival Java's influence. Its role in the early days of the Web helped shape the modern form of the Internet, on both the client and the server side. Its innovative features advanced the art and science of programming and set new standards for language design. The forward-looking culture that grew up around Java has kept it vibrant and able to adapt to the often rapid changes in computing. In short: Java is not only one of the world's most important computer languages, but also a revolutionary way of programming that has changed the world along the way. Although Java is a language often associated with Internet programming, it is by no means limited to that — Java is a powerful, full-featured, general-purpose programming language. If you are new to programming, Java is an excellent language to learn, and being able to program in Java is essential for today's professional programmers.
For any programming language to win the recognition of users and developers, it has to solve real pain points in application development and operations. Java has stayed evergreen by keeping pace with the times on a foundation of unified, open standards. Beyond being a programming language, Java is also a runtime. To run on the widest possible range of platforms and environments, unified language and virtual machine standards were established with vendors and organizations from the very beginning, and concrete implementations are certified against those standards through the TCK, guaranteeing compatibility across JDKs from any vendor and sparing Java the kind of fragmentation problems seen in UNIX systems. Openness is the other cornerstone of Java's longevity: its evolution has always been coordinated and driven by a community of vendors and users, with major features and issues decided through the JCP process, which has ensured the growth and vitality of the Java ecosystem. In turn, the active community and ecosystem have advanced Java itself: some of Java's features and libraries came directly from community projects, such as JSR 166 util.concurrent introduced in JDK 5 and the new Java date and time APIs introduced in JDK 8. Many important projects under development, such as Amber, Valhalla, and Loom, also answer strong community demand and actively incorporate community feedback in their iterations.

# About
## Links
- https://crossoverjie.top/JCSprout/#/
## Copyright & More | Further Reading
All of the author's articles follow the [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.zh); reposting is welcome as long as the copyright is respected. You can also visit the [NGTE Books](https://ng-tech.icu/books-gallery/) home page to browse book lists covering many categories, including knowledge systems, programming languages, software engineering, patterns and architecture, Web and front-end, server-side development practice and engineering architecture, distributed infrastructure, artificial intelligence and deep learning, and product operations and entrepreneurship:
[](https://ng-tech.icu/books-gallery/)
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/wx-chevalier/Java-Notes.svg?style=flat-square
[contributors-url]: https://github.com/wx-chevalier/Java-Notes/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/wx-chevalier/Java-Notes.svg?style=flat-square
[forks-url]: https://github.com/wx-chevalier/Java-Notes/network/members
[stars-shield]: https://img.shields.io/github/stars/wx-chevalier/Java-Notes.svg?style=flat-square
[stars-url]: https://github.com/wx-chevalier/Java-Notes/stargazers
[issues-shield]: https://img.shields.io/github/issues/wx-chevalier/Java-Notes.svg?style=flat-square
[issues-url]: https://github.com/wx-chevalier/Java-Notes/issues
[license-shield]: https://img.shields.io/github/license/wx-chevalier/Java-Notes.svg?style=flat-square
[license-url]: https://github.com/wx-chevalier/Java-Notes/blob/master/LICENSE.txt
| 0 |
UniversalRobots/Universal_Robots_ExternalControl_URCap | Example implementation of how to use ROS driver on-demand in a URCap. | ros ros-industrial urcaps | # URCaps External Control
The External Control URCap is the user interface for the Universal Robots [ROS](https://github.com/UniversalRobots/Universal_Robots_ROS_Driver), [ROS2](https://github.com/UniversalRobots/Universal_Robots_ROS2_Driver) and [Isaac SDK](https://github.com/UniversalRobots/Universal_Robots_Isaac_Driver) driver, as well as the [Universal Robots Client Library](https://github.com/UniversalRobots/Universal_Robots_Client_Library) used by the drivers.
It supports the Universal Robots CB3 and e-Series robots.
## Prerequisites
As this URCap is using swing to implement the user interface, the URCap library in version 1.3.0 or
higher is required. Therefore the minimal PolyScope versions are 3.7 and 5.1.
## Usage
* In the _Installation_ tab of Polyscope:
* Adjust the IP address of your robot (this step might be unnecessary in simulation).
* On the remote PC:
* Launch the suitable _launch_ file for UR3/UR5/UR10 and CB3/e-series.
* In the _Program_ tab of Polyscope:
* Add this URcap to a program by selecting it from the side menu under the tab _URcap_.
* Execute the program by pressing the _play_ button in the _Program_ tab of Polyscope.
### Multiple URCap nodes
To use this URCap node multiple times in a UR program, the control script is divided into two
scripts. After receiving the script, it is divided into a header part and a control loop part. The
header part consists of all the function definitions. The header is only inserted once in the
program, while the control loop is inserted for each URCap node in the program tree.
To be able to distinguish between header and control loop, the header part of the script should be
encapsulated in:
```bash
# HEADER_BEGIN
Here goes the header code
# HEADER_END
# NODE_CONTROL_LOOP_BEGINS
Here goes the control loop code
# NODE_CONTROL_LOOP_ENDS
```
If it is not possible to find either `# HEADER_BEGIN` or `# HEADER_END`, the script will not be
divided into two scripts and it will not be possible to have multiple URCap nodes in one program.
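The splitting described above can be sketched in plain Java. Note that `ScriptSplitter` and its marker handling are a hypothetical illustration of the idea, not the URCap's actual implementation:

```java
public class ScriptSplitter {
    // Split a received program script into the header part (function
    // definitions, inserted once) and the control loop part (inserted
    // once per URCap node in the program tree).
    public static String[] split(String script) {
        int headerBegin = script.indexOf("# HEADER_BEGIN");
        int headerEnd = script.indexOf("# HEADER_END");
        if (headerBegin < 0 || headerEnd < 0) {
            // Markers missing: the script cannot be divided, so keep it
            // whole as a single control loop (only one node possible).
            return new String[] { "", script };
        }
        String header = script
                .substring(headerBegin + "# HEADER_BEGIN".length(), headerEnd)
                .trim();
        String controlLoop = script
                .substring(headerEnd + "# HEADER_END".length())
                .trim();
        return new String[] { header, controlLoop };
    }
}
```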
## Acknowledgment
Developed in collaboration between:
[<img height="60" alt="Universal Robots A/S" src="doc/resources/ur_logo.jpg">](https://www.universal-robots.com/) and
[<img height="60" alt="FZI Research Center for Information Technology" src="doc/resources/fzi-logo_transparenz.png">](https://www.fzi.de).
<!--
ROSIN acknowledgement from the ROSIN press kit
@ https://github.com/rosin-project/press_kit
-->
<a href="http://rosin-project.eu">
<img src="http://rosin-project.eu/wp-content/uploads/rosin_ack_logo_wide.png"
alt="rosin_logo" height="60" >
</a>
Supported by ROSIN - ROS-Industrial Quality-Assured Robot Software Components.
More information: <a href="http://rosin-project.eu">rosin-project.eu</a>
<img src="http://rosin-project.eu/wp-content/uploads/rosin_eu_flag.jpg"
alt="eu_flag" height="45" align="left" >
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement no. 732287.
| 1 |
mat3e/3pigs-ddd | DDD & Clean Architecture on the example of The Three Little Pigs | null | # _The Three Little Pigs_ with DDD and clean architecture
[My tech talk](https://github.com/mat3e/talks/tree/master/docs/3pigs), fairy tale
sources: [1](https://www.gillbooks.ie/AcuCustom/Sitename/DAM/101/WWSI_OM_0902.pdf), [2](http://www.hellokids.com/c_14958/reading-learning/stories-for-children/animal-stories-for-kids/the-three-little-pigs), [3](https://sacred-texts.com/neu/eng/eft/eft15.htm), [4](https://americanliterature.com/childrens-stories/the-three-little-pigs).
> Java, Groovy + Spock, Kotlin, Maven, Spring
The main focus should be on _The Three Little Pigs_, but to show an alternative, more pragmatic approach there is also
_Little Red Riding Hood_ module, utilizing package-private access more, close to what's presented
in [this great tech talk by Jakub Nabrdalik](https://www.youtube.com/watch?v=KrLFs6f2bOA).
## Web app
App starts as an ordinary web app for
```properties
spring.main.web-application-type=servlet
```
### _The Three Little Pigs_
Available operations:
1. Build house: `POST` `localhost:8080/houses`
```json
{
"owner": "VERY_LAZY"
}
```
Possible values for `owner`:
* `VERY_LAZY`
* `LAZY`
* `NOT_LAZY`
* `NOT_LAZY_ANYMORE`
2. Verify the house state: `GET` `localhost:8080/houses/{id}`
3. Blow house down: `DELETE` `localhost:8080/houses/{id}`
There is a dedicated [Postman](https://www.postman.com/) collection with all these operations already
defined: `pigs3/adapters/src/main/resources/3Pigs.postman_collection.json`.
### _Little Red Riding Hood_
There is a dedicated [Postman](https://www.postman.com/) collection with all the operations ready to
use: `redhood/src/main/resources/RedHood.postman_collection.json`.
## Console app
When
```properties
spring.main.web-application-type=none
```
app prints the whole _The Three Little Pigs_ story in the console.
---
## EventStorming
An example session with myself, for _The Three Little Pigs_. There was no need to run Process Level ES as we already had
a single BoundedContext, and it was doable to jump straight into Design Level.
For the second module I also ran Big Picture ES, which helped me realize that the main things are interactions and the wolf's
intentions.
### Big Picture

### Design Level - commands, rules & actors

### Design Level - naming aggregates

## Possible improvements
* `House` could have mechanisms for rebuilding
* Story can be extended - currently there is nothing about wolf climbing through the chimney and pigs lighting the fire
* New `House` method, e.g. `litFire`
* New `BigBadWolfService` method, e.g. `comeDownTheChimneyOf`
* Event, e.g. `WolfStartedClimbing` instead of `WolfResignedFromAttacking`, new event from `House`,
e.g. `WolfEscaped` (when burns in the fireplace)
* `WolfStartedClimbing` should call both `litFire` and `comeDownTheChimneyOf` in a proper order
* `WolfEscaped` should result in knowledge sharing
* Full Event Sourcing - `House` can be built just from events, no snapshots in the current form
* Rely fully on `DomainEventPublisher` - although `@DomainEvents` annotation looks
nice, it relies on `ApplicationEventPublisher` which has known limitations, especially without additional tooling
like [Spring Modulith](https://spring.io/blog/2022/10/21/introducing-spring-modulith)
* Instead of "technical" packages corresponding with Clean Architecture and DDD, there could be more process-oriented
packages, like `building` (or `constructing`) and `destroying`, where the domain logic would live, similar to what's presented
in [this great tech talk by Victor Rentea](https://www.youtube.com/watch?v=H7HWOlANX78)
| 1 |
noelportugal/GlassBeacon | A Google Glass GDK example to find iBeacons (using Estimote beacons). | null | GlassBeacon
===========
A Google Glass GDK example to find iBeacons (using Estimote beacons).

| 1 |
nisrulz/FirebaseExample | :fire: Simplistic example app demonstrating using latest Firebase features. Checkout branches for each feature. | null | null | 1 |
AlmasB/JavaFX11-example | An example that shows how to use JavaFX 11 with Java 11 | javafx javafx-11 | # JavaFX11-example
An example that shows how to use JavaFX 11 with Java 11
| 1 |
SwerveDriveSpecialties/Do-not-use-Example-Swerve-unmaintained | Example code for a swerve drivetrain using the SDS Mk2 swerve modules with NEO motors | null | # Example Swerve Project
When using Swerve Drive Specialties MK2 modules this template code will provide a quick and simple way to get your robot driving.
## Electrical Hardware Setup
1. A navX should be plugged into the roboRIO MXP port.
2. Steering encoders (analog US digital MA3) are connected to the roboRIO analog input ports.
3. Spark Max motor controllers for the drive motors are:
1. powered using 40 Amp PDP ports and breakers
2. controlled with CAN Bus
3. set to brushless brake mode (blinking cyan LED status when disabled)
4. Spark Max motor controllers for the steering motors are:
1. powered with either size PDP port. We recommend connecting them to the small ports with 30 Amp breakers
2. controlled with CAN Bus
3. set to brushless brake mode (blinking cyan LED status when disabled)
The following port mapping is recommended
1. Front Left Module
1. Drive Motor Controller – CAN ID 1
2. Steering Motor Controller – CAN ID 2
3. Steering Encoder – Analog input 0
2. Front Right Module
1. Drive Motor Controller – CAN ID 3
2. Steering Motor Controller – CAN ID 4
3. Steering Encoder – Analog input 1
3. Back Left Module
1. Drive Motor Controller – CAN ID 5
2. Steering Motor Controller – CAN ID 6
3. Steering Encoder – Analog input 2
4. Back Right Module
1. Drive Motor Controller – CAN ID 7
2. Steering Motor Controller – CAN ID 8
3. Steering Encoder – Analog input 3
## Default Control Setup
By default, the robot is set up to be controlled by an Xbox One controller. However, any Xbox-style controller should work.
The left stick controls the translational movement of the robot using field-oriented control.
The right stick controls the rotational movement of the robot. Pushing the stick right should make the robot
rotate clockwise, while pushing it left should make the robot rotate counter-clockwise.
The back button on the controller re-zeroes the robot's gyroscope. By default, the direction the robot is
facing when turned on is the forwards direction, but this can be changed by re-zeroing the gyroscope.
## Configure For Your Robot
1. Set your team number using the WPILib extension's "Set Team Number" action.
2. In the `RobotMap` class:
1. If needed, change the values to match the ports and CAN IDs on your robot.
3. In the `DrivetrainSubsystem` class:
1. Set the `TRACKWIDTH` and `WHEELBASE` to your robot's trackwidth and wheelbase.
2. Set all of the `*_ANGLE_OFFSET` constants to `-Math.toRadians(0.0)`.
4. Deploy the code to your robot.
> NOTE: The robot isn't drivable quite yet, we still have to setup the module offsets
5. Turn the robot on its side and align all the wheels so they are facing in the forwards direction.
> NOTE: The wheels will be pointed forwards (not backwards) when the modules are turned so the large bevel gears are towards the right side of the robot. When aligning the wheels they must be as straight as possible. It is recommended to use a long straight edge, such as a piece of 2x1, in order to make the wheels straight.
6. Record the angles of each module using the angle put onto Shuffleboard. The values are named
`Front Left Module Angle`, `Front Right Module Angle`, etc.
7. Set the values of the `*_ANGLE_OFFSET` to `-Math.toRadians(<the angle you recorded>)`
> NOTE: All angles must be in degrees.
8. Re-deploy and try to drive the robot forwards. All the wheels should stay parallel to each other. If not, go back to
   step 3.
9. Make sure all the wheels are spinning in the correct direction. If not, add 180 degrees to the offset of each wheel
   that is spinning in the incorrect direction, i.e. `-Math.toRadians(<angle> + 180.0)`.
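The offset arithmetic in steps 7–9 can be captured in a small helper. This is a hypothetical convenience sketch, not part of the actual `Mk2SwerveModuleBuilder` API:

```java
public class OffsetMath {
    // Convert a module angle recorded from Shuffleboard (in degrees) into
    // the radian offset constant, optionally flipping the wheel direction
    // by adding 180 degrees (step 9).
    static double angleOffset(double recordedDegrees, boolean flipDirection) {
        double degrees = recordedDegrees + (flipDirection ? 180.0 : 0.0);
        return -Math.toRadians(degrees);
    }
}
```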
### Optional Steps
#### Changing Controller Setup
To invert the controller sticks or modify the control mapping modify the `DriveCommand` class.
#### Using Different Types of Motors
While the default hardware setup uses NEOs & Spark MAXs to control the module, teams may desire to use different motors
to control their modules. The new `Mk2SwerveModuleBuilder` class supports any combination of NEOs, CIMs, or Mini CIMs
using either CAN or PWM.
##### Example 1
Angle motor: NEO controlled by a Spark MAX over CAN
Drive motor: NEO controlled by a Spark MAX over CAN
```java
SwerveModule module = new Mk2SwerveModuleBuilder(new Vector2(5.0, 5.0))
.angleEncoder(new AnalogInput(0), -Math.toRadians(254.16))
.angleMotor(new CANSparkMax(1, CANSparkMaxLowLevel.MotorType.kBrushless),
Mk2SwerveModuleBuilder.MotorType.NEO)
.driveMotor(new CANSparkMax(2, CANSparkMaxLowLevel.MotorType.kBrushless),
Mk2SwerveModuleBuilder.MotorType.NEO)
.build();
```
##### Example 2
Angle motor: NEO controlled by a Spark MAX over PWM with custom PID constants
Drive motor: NEO controlled by a Spark MAX over CAN
```java
SwerveModule module = new Mk2SwerveModuleBuilder(new Vector2(5.0, 5.0))
.angleEncoder(new AnalogInput(0), -Math.toRadians(330.148))
.angleMotor(new Spark(4), new PidConstants(1.0, 0.0, 0.001))
.driveMotor(new CANSparkMax(4, CANSparkMaxLowLevel.MotorType.kBrushless),
Mk2SwerveModuleBuilder.MotorType.NEO)
.build();
```
##### Example 3
Angle motor: Mini CIM controlled by a Talon SRX over CAN
Drive motor: CIM controlled by a Talon SRX over CAN
```java
SwerveModule module = new Mk2SwerveModuleBuilder(new Vector2(5.0, 5.0))
.angleEncoder(new AnalogInput(0), -Math.toRadians(118.1114))
.angleMotor(new TalonSRX(4), Mk2SwerveModuleBuilder.MotorType.MINI_CIM)
.driveMotor(new TalonSRX(5), Mk2SwerveModuleBuilder.MotorType.CIM)
.build();
``` | 1 |
sassoftware/enlighten-integration | Example code and materials that illustrate techniques for integrating SAS with popular open source analytics technologies like Python and R. | null | # enlighten-integration
Example code and materials that illustrate techniques for integrating SAS with
popular open source analytics technologies like Python and R.
See individual subdirectories for specific examples and instructions.
Contributors include:
Patrick Hall, Radhikha Myneni, Ruiwen Zhang, and Tim Haley
## Contents
The example materials in this repository use two approaches to call functionality in other languages from a SAS® session.
* The Base SAS® Java Object
* SAS/IML® Integration with R
### The Base SAS Java Object

The [Base SAS Java Object](http://support.sas.com/documentation/cdl/en/lrcon/68089/HTML/default/viewer.htm#n0swy2q7eouj2fn11g1o28q57v4u.htm) is a SAS DATA step component that enables Java objects to be instantiated, object methods to be called, and results to be returned from Java to SAS. The [SASJavaExec.java](https://github.com/sassoftware/enlighten-integration/blob/master/SAS_Base_OpenSrcIntegration/src/dev/SASJavaExec.java) class in this repository is designed to call executable files from SAS with command line arguments and reports STDOUT and STDERR back to the SAS log. Example materials in this repository use the SASJavaExec.java class to call R and Python scripts, but the class could be used to call other types of executables as well.
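The README does not reproduce `SASJavaExec.java` itself; as a rough, hypothetical sketch of such an executable runner built on `ProcessBuilder` (class and method names here are illustrative, not the actual repository code):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only -- the real SASJavaExec.java may differ.
public class ExecRunner {

    // Assemble the command list from an executable name and its arguments.
    static List<String> buildCommand(String executable, String... args) {
        List<String> command = new ArrayList<>();
        command.add(executable);
        command.addAll(Arrays.asList(args));
        return command;
    }

    // Run the command, echo its combined STDOUT/STDERR, and return the exit
    // code so a caller (such as the SAS Java Object) can surface it in a log.
    static int run(String executable, String... args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(buildCommand(executable, args));
        pb.redirectErrorStream(true); // merge STDERR into STDOUT
        Process process = pb.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        return process.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Build (but do not run) a sample command line for an R or Python script.
        System.out.println(buildCommand("python", "score.py", "input.csv"));
        // → [python, score.py, input.csv]
    }
}
```

The same pattern works for any command-line executable, which is why the class can call R and Python scripts alike.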
#### Using the Base SAS Java Object to call R or Python
[SAS_Base_OpenSrcIntegration](https://github.com/sassoftware/enlighten-integration/tree/master/SAS_Base_OpenSrcIntegration)
* White Paper on [Open Source Integration using the Base SAS Java Object](https://github.com/sassoftware/enlighten-integration/blob/master/SAS_Base_OpenSrcIntegration/SAS_Base_OpenSrcIntegration.pdf)
* White Paper on [Connecting Java to SAS Data Sets](http://support.sas.com/resources/papers/proceedings12/008-2012.pdf)
#### Using Python inside of SAS® Enterprise Miner™
[SAS_EM_PythonIntegration](https://github.com/sassoftware/enlighten-integration/tree/master/SAS_EM_PythonIntegration)
* SAS Communities Tip: [How to execute a Python script in SAS Enterprise Miner](https://communities.sas.com/t5/SAS-Communities-Library/Tip-How-to-execute-a-Python-script-in-SAS-Enterprise-Miner/tac-p/223765)
* Video on [How to Execute a Python Script in SAS Enterprise Miner](http://www.sas.com/apps/webnet/video-sharing.html?player=brightcove&width=640&height=360&autoStart=true&playerID=1873162645001&playerKey=AQ~~,AAABs_kuvqE~,9q03viSCCi8Qu-ec7KH7e-bapzBTKVDB&videoPlayer=4283224315001&emptyPage=false)
### SAS/IML Integration with R

[SAS/IML Integration with R](https://support.sas.com/documentation/cdl/en/imlug/68150/HTML/default/viewer.htm#imlug_r_toc.htm) enables data to be transferred between SAS and R and makes it possible to call R-language statements from within a SAS session. One important consideration with R is its ability, much like SAS's, to generate [Predictive Model Markup Language (PMML)](http://dmg.org/pmml/v4-2-1/GeneralStructure.html) to encapsulate the logic of predictive models in a portable format. The example materials in this repository use R to train models and PMML as a portable deployment mechanism. The Enterprise Miner Open Source Integration node is based on the same technologies.
#### Using SAS/IML to call R and create PMML
[SAS_IML_PmmlIntegration](https://github.com/sassoftware/enlighten-integration/tree/master/SAS_IML_PmmlIntegration)
* Video on the [Enterprise Miner Open Source Integration node](http://www.sas.com/apps/webnet/video-sharing.html?player=brightcove&width=640&height=360&autoStart=true&playerID=1873162645001&playerKey=AQ~~,AAABs_kuvqE~,9q03viSCCi8Qu-ec7KH7e-bapzBTKVDB&videoPlayer=3939327608001&emptyPage=false)
* Video on [Calling R from SAS/IML](https://www.youtube.com/watch?v=rUaTTre24kI)
#### Using R to create PMML and importing PMML into SAS Enterprise Miner
[SAS_EM_PmmlIntegration](https://github.com/sassoftware/enlighten-integration/tree/master/SAS_EM_PmmlIntegration)
## Other Integration Approaches
* [SAS Kernel for Jupyter Notebooks](https://github.com/sassoftware/sas_kernel)
* [Calling SAS from R or Python using SAS® BI Web Services](http://blogs.sas.com/content/subconsciousmusings/2015/10/13/how-analytical-web-services-can-help-scale-your-machine-learning/)
* [The LUA Procedure](http://support.sas.com/documentation/cdl/en/proc/68954/HTML/default/viewer.htm#n1csk38ocks0rgn1rr8d302ofqgs.htm)
* [The GROOVY Procedure](http://support.sas.com/documentation/cdl/en/proc/68954/HTML/default/viewer.htm#p1x8agymll9gten1ocziihptcjzj.htm)
* [The JSON Procedure](http://support.sas.com/documentation/cdl/en/proc/68954/HTML/default/viewer.htm#p06hstivs0b3hsn1cb4zclxukkut.htm) | 1 |
saturnism/spring-cloud-gcp-guestbook | null | appengine appengine-java distributed-tracing docker examples gcp gcp-spanner gcp-sql gcp-storage kubernetes microservices-architecture ml mysql spring spring-boot spring-cloud spring-cloud-gcp workshop | This is not an official Google project.
This repository contains example code for the Spring Cloud GCP lab.
The instructions are in [bit.ly/spring-gcp-lab](http://bit.ly/spring-gcp-lab)
| 0 |
subhashlamba/spring-boot-microservice-example | Spring boot microservice example with Eureka Server + Eureka Client + Spring Cloud API Gateway + OAuth2.0 + Circuit Breaker + Resilience4J + FeignClient + RestTemplate | api-gateway eureka-client eureka-server feign-client oath2 oauth2-client oauth2-server resilience4j resttemplate spring-boot-microservice spring-oauth2 zipkin-server | null | 1 |
nfrankel/jvm-controller | Example on how to write a kubernetes controller in Java | null | null | 1 |
anatawa12/ForgeGradle-example | example project of fork of ForgeGradle 1.2 made by anatawa12 | null | # anatawa12's ForgeGradle 1.2 fork for Gradle 4.4.1+ - example project
This is an example mod using the [fork of ForgeGradle-1.2 made by anatawa12](https://github.com/anatawa12/ForgeGradle-1.2).
This fork supports Gradle 4.4.1 and later. This example project uses Gradle 5.6.4.
## How to use this example project
You can download this example project from [here](https://github.com/anatawa12/ForgeGradle-example/archive/master.zip), or use it as a template on Github.
This project can be used as a replacement for Forge's 1.7.10 MDK.
## How to replace ForgeGradle 1.2 with anatawa12's fork
Although this example project has some differences to Forge's 1.7.10 MDK, anatawa12's fork of ForgeGradle 1.2 can be used by most projects with only minimal changes to their Gradle build script.
Here is a list of changes to Forge's 1.7.10 MDK Gradle build script, to replace the official ForgeGradle 1.2 plugin with the fork. These changes are likely to work with most projects based on Forge's 1.7.10 MDK.
In the repositories block of the buildscript section, switch the Forge Maven repository to use HTTPS instead of HTTP:
```diff
repositories {
mavenCentral()
maven {
name = "forge"
- url = "http://files.minecraftforge.net/maven"
+ url = "https://maven.minecraftforge.net/"
}
```
Also in the dependencies block of the buildscript section, change the dependency on Forge's official ForgeGradle 1.2 to the fork:
```diff
dependencies {
- classpath 'net.minecraftforge.gradle:ForgeGradle:1.2-SNAPSHOT'
+ classpath ('com.anatawa12.forge:ForgeGradle:1.2-1.1.+') {
+ changing = true
+ }
}
```
The Gradle wrapper should also be changed to use Gradle 4.4.1 or higher. <!--Currently, the plugin [does not support Gradle 6.x](https://github.com/anatawa12/ForgeGradle-1.2/issues/9), although this may change in the future. As such, the latest version of Gradle this plugin supports is Gradle 5.6.4.-->
| 1 |
swtestacademy/javafxexample | This is the example of this article http://www.swtestacademy.com/database-operations-javafx | null | # javafxexample
This is the example of these articles:
- http://www.swtestacademy.com/database-operations-javafx
- https://www.swtestacademy.com/database-operations-javafx/
| 1 |
JohnathanMarkSmith/springmvc-resttemplate-test | This is a Quick Example of Spring RESTTemplate Doing a POST to Spring MVC RESTful service | null | ### Using Spring RESTTemplate to Post Objects to RESTful web services with Spring's Java Configuration (JavaConfig) style with Maven, JUnit, Log4J
In this example I am going to show you how to post data to a RESTful web service in Java using Spring, Spring Java Configuration, and more.
### Web Service Code
Let's take a quick look at the Spring MVC Web Service code on the server:
@Controller
@RequestMapping("/api")
class JSonController
{
private static final Logger logger = LoggerFactory.getLogger(JSonController.class);
@RequestMapping(value = "/{id}", method = RequestMethod.POST)
@ResponseBody
public User updateCustomer(@PathVariable("id") String id, @RequestBody User user) {
logger.debug("I am in the controller and got ID: " + id.toString());
logger.debug("I am in the controller and got user name: " + user.toString());
return new User("NEW123", "NEW SMITH");
}
As you can see from the code above, the web service expects an ID and a User object to be passed in; it then creates a new User object and sends it back to the client.
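The `User` class itself is not shown in this README; a minimal sketch of what such a JSON-friendly POJO could look like (field names are inferred from the snippets in this README, so the real class may differ):

```java
// Hypothetical sketch of the User object exchanged with the service.
// A no-arg constructor plus getters/setters is what Jackson needs to
// (de)serialize it as JSON.
public class User {
    private String user;
    private String name;

    public User() {
        // required by Jackson for JSON deserialization
    }

    public User(String user, String name) {
        this.user = user;
        this.name = name;
    }

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "User{user='" + user + "', name='" + name + "'}";
    }
}
```

Any bean shaped like this will work with the message converters registered on the RestTemplate in the client code below.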
### Time For The Client Code
As you can see from the client code below, we use Spring's RestTemplate to post a User object to the web server and get one back.
@PropertySource("classpath:application.properties")
public class Main
{
/**
* Setting up logger
*/
private static final Logger LOGGER = getLogger(Main.class);
public static void main(String[] args) throws IOException
{
LOGGER.debug("Starting REST Client!!!!");
/**
*
* This is going to setup the REST server configuration in the applicationContext
* you can see that I am using the new Spring's Java Configuration style and not some OLD XML file
*
*/
ApplicationContext context = new AnnotationConfigApplicationContext(RESTConfiguration.class);
/**
*
* We now get a RESTServer bean from the ApplicationContext which has all the data we need to
* log into the REST service with.
*
*/
RESTServer mRESTServer = context.getBean(RESTServer.class);
/**
*
* Setting up data to be sent to REST service
*
*/
Map<String, String> vars = new HashMap<String, String>();
vars.put("id", "JS01");
/**
*
* Doing the REST call and then displaying the data/user object
*
*/
try
{
/*
This is code to post and return a user object
*/
RestTemplate rt = new RestTemplate();
rt.getMessageConverters().add(new MappingJacksonHttpMessageConverter());
rt.getMessageConverters().add(new StringHttpMessageConverter());
            String uri = "http://" + mRESTServer.getHost() + ":8080/springmvc-resttemplate-test/api/{id}";
User u = new User();
u.setName("Johnathan M Smith");
u.setUser("JS01");
User returns = rt.postForObject(uri, u, User.class, vars);
            LOGGER.debug("Returned user: " + returns.toString());
}
catch (HttpClientErrorException e)
{
/**
*
* If we get a HTTP Exception display the error message
*/
LOGGER.error("error: " + e.getResponseBodyAsString());
ObjectMapper mapper = new ObjectMapper();
ErrorHolder eh = mapper.readValue(e.getResponseBodyAsString(), ErrorHolder.class);
LOGGER.error("error: " + eh.getErrorMessage());
}
catch(Exception e)
{
LOGGER.error("error: " + e.getMessage());
}
}
}
You can see from the above code how easy it is to use RestTemplate to post data to a web service.
It also shows how easy it is to use Spring's Java Configuration (JavaConfig) style instead of XML; the days of configuring Spring with XML files are over.
### Where Can I Get The Source Code
You can check out the project from GitHub.
git clone git@github.com:JohnathanMarkSmith/springmvc-resttemplate-test.git
    cd springmvc-resttemplate-test
If you have any questions please email me at john@johnathanmarksmith.com | 1 |
robmelfi/21-points-react | This application refers to the book “The JHipster Mini-Book by Matt Raible”. In this application the examples of the book are made using React instead of Angular. | bootstrap4 health java jhipster maven npm react spring-boot webpack | # 21-Points Health (React Version)
This application refers to the book [“The JHipster Mini-Book by Matt Raible”](http://www.jhipster-book.com). In this application the examples of the book are made using React instead of Angular.
This is the [Angular Web App](https://www.21-points.com) version.
Demo of dev [master](https://twentyone-points-react-dev.herokuapp.com)
Demo of [21 Points Health v1.0.0 (React)](https://twentyone-points-react.herokuapp.com)
Note that it takes +30 seconds to wake up in Heroku free account.
This application was generated using JHipster 5.4.1, you can find documentation and help at [https://www.jhipster.tech/documentation-archive/v5.4.1](https://www.jhipster.tech/documentation-archive/v5.4.1).
## Development
Before you can build this project, you must install and configure the following dependencies on your machine:
1. [Node.js][]: We use Node to run a development web server and build the project.
Depending on your system, you can install Node either from source or as a pre-packaged bundle.
After installing Node, you should be able to run the following command to install development tools.
You will only need to run this command when dependencies change in [package.json](package.json).
npm install
We use npm scripts and [Webpack][] as our build system.
Run the following commands in two separate terminals to create a blissful development experience where your browser
auto-refreshes when files change on your hard drive.
./mvnw
npm start
Npm is also used to manage CSS and JavaScript dependencies used in this application. You can upgrade dependencies by
specifying a newer version in [package.json](package.json). You can also run `npm update` and `npm install` to manage dependencies.
Add the `help` flag on any command to see how you can use it. For example, `npm help update`.
The `npm run` command will list all of the scripts available to run for this project.
### Service workers
Service workers are commented out by default; to enable them, please uncomment the following code.
* The service worker registering script in index.html
```html
<script>
if ('serviceWorker' in navigator) {
navigator.serviceWorker
.register('./service-worker.js')
.then(function() { console.log('Service Worker Registered'); });
}
</script>
```
Note: workbox creates the respective service worker and dynamically generates the `service-worker.js`
### Managing dependencies
For example, to add [Leaflet][] library as a runtime dependency of your application, you would run following command:
npm install --save --save-exact leaflet
To benefit from TypeScript type definitions from [DefinitelyTyped][] repository in development, you would run following command:
npm install --save-dev --save-exact @types/leaflet
Then you would import the JS and CSS files specified in library's installation instructions so that [Webpack][] knows about them:
Note: there are still a few other things to do for Leaflet that we won't detail here.
For further instructions on how to develop with JHipster, have a look at [Using JHipster in development][].
## Building for production
To optimize the TwentyOnePointsReact application for production, run:
./mvnw -Pprod clean package
This will concatenate and minify the client CSS and JavaScript files. It will also modify `index.html` so it references these new files.
To ensure everything worked, run:
java -jar target/*.war
Then navigate to [http://localhost:8080](http://localhost:8080) in your browser.
Refer to [Using JHipster in production][] for more details.
## Testing
To launch your application's tests, run:
./mvnw clean test
### Client tests
Unit tests are run by [Jest][] and written with [Jasmine][]. They're located in [src/test/javascript/](src/test/javascript/) and can be run with:
npm test
UI end-to-end tests are powered by [Protractor][], which is built on top of WebDriverJS. They're located in [src/test/javascript/e2e](src/test/javascript/e2e)
and can be run by starting Spring Boot in one terminal (`./mvnw spring-boot:run`) and running the tests (`npm run e2e`) in a second one.
For more information, refer to the [Running tests page][].
### Code quality
Sonar is used to analyse code quality. You can start a local Sonar server (accessible on http://localhost:9001) with:
```
docker-compose -f src/main/docker/sonar.yml up -d
```
Then, run a Sonar analysis:
```
./mvnw -Pprod clean test sonar:sonar
```
For more information, refer to the [Code quality page][].
## Using Docker to simplify development (optional)
You can use Docker to improve your JHipster development experience. A number of docker-compose configurations are available in the [src/main/docker](src/main/docker) folder to launch required third party services.
For example, to start a mysql database in a docker container, run:
docker-compose -f src/main/docker/mysql.yml up -d
To stop it and remove the container, run:
docker-compose -f src/main/docker/mysql.yml down
You can also fully dockerize your application and all the services that it depends on.
To achieve this, first build a docker image of your app by running:
./mvnw package -Pprod jib:dockerBuild
Then run:
docker-compose -f src/main/docker/app.yml up -d
For more information refer to [Using Docker and Docker-Compose][], this page also contains information on the docker-compose sub-generator (`jhipster docker-compose`), which is able to generate docker configurations for one or several JHipster applications.
## Continuous Integration (optional)
To configure CI for your project, run the ci-cd sub-generator (`jhipster ci-cd`), this will let you generate configuration files for a number of Continuous Integration systems. Consult the [Setting up Continuous Integration][] page for more information.
[JHipster Homepage and latest documentation]: https://www.jhipster.tech
[JHipster 5.4.1 archive]: https://www.jhipster.tech/documentation-archive/v5.4.1
[Using JHipster in development]: https://www.jhipster.tech/documentation-archive/v5.4.1/development/
[Using Docker and Docker-Compose]: https://www.jhipster.tech/documentation-archive/v5.4.1/docker-compose
[Using JHipster in production]: https://www.jhipster.tech/documentation-archive/v5.4.1/production/
[Running tests page]: https://www.jhipster.tech/documentation-archive/v5.4.1/running-tests/
[Code quality page]: https://www.jhipster.tech/documentation-archive/v5.4.1/code-quality/
[Setting up Continuous Integration]: https://www.jhipster.tech/documentation-archive/v5.4.1/setting-up-ci/
[Node.js]: https://nodejs.org/
[Yarn]: https://yarnpkg.org/
[Webpack]: https://webpack.github.io/
[Angular CLI]: https://cli.angular.io/
[BrowserSync]: http://www.browsersync.io/
[Jest]: https://facebook.github.io/jest/
[Jasmine]: http://jasmine.github.io/2.0/introduction.html
[Protractor]: https://angular.github.io/protractor/
[Leaflet]: http://leafletjs.com/
[DefinitelyTyped]: http://definitelytyped.org/
| 0 |
HubertWo/java-stream-kata | Java Stream Code Kata. ☕️ 🤺 Collection of small tasks with detailed answers in form of unit tests. | by example examples java kata katas learn learning-by-doing stream | null | 0 |
Zhuinden/realm-book-example | This is an example rewrite of AndroidHive's messy tutorial, accompanying the following article on Realm. | null | # realm-book-example
This is a rewrite of a ["Realm tutorial" on Android Hive](http://www.androidhive.info/2016/05/android-working-with-realm-database-replacing-sqlite-core-data). Unfortunately the tutorial is extremely outdated (uses 0.82.1 even though the version 3.5.0 is out!), the code is unstructured (Realm transactions inside a click listener inside a dialog created in a long click listener); and it also misuses Realm quite heavily:
- using `begin/commitTransaction()` instead of `executeTransaction()`
- calling `refresh()` even though the Realm instance is freshly open
- the transactions are all done on the UI thread
- the Realm instance is never closed
It also uses outdated practices or is just not up-to-date information:
- `refresh()` doesn't even exist anymore, and even when it did, in this use-case it was not needed
- uses a Migration to pre-populate the database, even though `initialData()` exists now
- claims that `null` support for primitives isn't in, even though it was added in 0.83.0
- the code relies on `commitTransaction()` immediately updating the `RealmResults<T>` and calling `adapter.notifyDataSetChanged()` manually, but that's not the case since 0.89.0 which means you need to add a change listener to the `RealmResults<T>` (which `RealmRecyclerViewAdapter` does for you automatically)
------------------------------
So with that in mind, this repository shows how to do these things right:
- uses `executeTransactionAsync()` on the UI thread
- uses `initialData()` to prepopulate the Realm
- uses `RealmManager` class (a bit stub-like because I'll have to make its content not static later) to manage number of open activities
- uses retained fragment to count open activity
- uses retained fragment to store presenter (oh, it actually has a "presenter" instead of just throwing everything in `OnClickListener`s)
- does not use `Application` subclass explicitly because of [Firebase Crash Reporting](https://firebase.google.com/docs/crash/android) for example creating multiple Application instances
- uses `RealmRecyclerViewAdapter` with asynchronous query
So yeah, this is the interesting class:
``` java
public class RealmManager {
static Realm realm;
static RealmConfiguration realmConfiguration;
public static void initializeRealmConfig(Context appContext) {
if(realmConfiguration == null) {
setRealmConfiguration(new RealmConfiguration.Builder(appContext).initialData(new RealmInitialData())
.deleteRealmIfMigrationNeeded()
.build());
}
}
public static void setRealmConfiguration(RealmConfiguration realmConfiguration) {
RealmManager.realmConfiguration = realmConfiguration;
Realm.setDefaultConfiguration(realmConfiguration);
}
private static int activityCount = 0;
public static Realm getRealm() {
return realm;
}
public static void incrementCount() {
if(activityCount == 0) {
if(realm != null) {
if(!realm.isClosed()) {
realm.close();
realm = null;
}
}
realm = Realm.getDefaultInstance();
}
activityCount++;
}
public static void decrementCount() {
activityCount--;
if(activityCount <= 0) {
activityCount = 0;
realm.close();
Realm.compactRealm(realmConfiguration);
realm = null;
}
}
}
```
Which has its `RealmConfiguration` initialized in `Activity.onCreate()`, and the Realm instance itself is opened with `RealmManager.incrementCount()` from the retained fragment's constructor.
``` java
public class BooksScopeListener extends Fragment {
BooksPresenter booksPresenter;
public BooksScopeListener() {
setRetainInstance(true);
RealmManager.incrementCount();
booksPresenter = new BooksPresenter();
}
@Override
public void onDestroy() {
RealmManager.decrementCount();
super.onDestroy();
}
public BooksPresenter getPresenter() {
return booksPresenter;
}
}
```
Which is created in the Activity.
``` java
@Override
protected void onCreate(Bundle savedInstanceState) {
RealmManager.initializeRealmConfig(getApplicationContext());
super.onCreate(savedInstanceState);
BooksScopeListener fragment = (BooksScopeListener) getSupportFragmentManager().findFragmentByTag("SCOPE_LISTENER");
if(fragment == null) {
fragment = new BooksScopeListener();
getSupportFragmentManager().beginTransaction().add(fragment, "SCOPE_LISTENER").commit();
}
realm = RealmManager.getRealm();
booksPresenter = fragment.getPresenter();
```
The adapter is set up like this
``` java
recycler.setAdapter(new BooksAdapter(this, realm.where(Book.class).findAllAsync(), booksPresenter));
```
Where the adapter is a proper `RealmRecyclerViewAdapter`:
``` java
public class BooksAdapter extends RealmRecyclerViewAdapter<Book, BooksAdapter.BookViewHolder> {
```
And the writes are from the UI thread to a background thread using `executeTransactionAsync()`, found in the presenter.
``` java
Realm realm = RealmManager.getRealm();
realm.executeTransactionAsync(new Realm.Transaction() {
    @Override
    public void execute(Realm backgroundRealm) {
        // write to the Realm here; this runs on a background thread
    }
});
```
| 1 |
osgi/osgi.enroute | The OSGi enRoute project provides a programming model of OSGi applications. This project contains bundles providing the API for the OSGi enRoute base profile and bundles for the OSGi enRoute project. The base profile establishes a runtime that contains a minimal set of services that can be used as a base for applications. The bundles are simple implementations that can be used to run enRoute for smaller applications and provide an example how to implement it more thoroughly. There are also examples in this repo. | java osgi-applications osgi-enroute | <h1><img src="http://enroute.osgi.org/img/enroute-logo-64.png" witdh=40px style="float:left;margin: 0 1em 1em 0;width:40px">
OSGi enRoute</h1>
Interested in developing agile & maintainable Enterprise or highly distributed IoT solutions? In either case, [OSGi enRoute](http://enroute.osgi.org) provides a simple on-ramp for developing such modular distributed applications and exploring the power of OSGi.
Based upon the latest OSGi best practices and R7 Specifications, the enRoute tutorials start with a comprehensive hands-on introduction to Declarative Services, and then progresses to explore OSGi's unique and powerful approaches to Microservices & Reactive Systems.
This repository contains the code for the enRoute tutorials, and also defines useful OSGi repositories for the OSGi R7 reference implementations. You can use these repositories directly in your own OSGi applications, or as a template for creating your own personalised OSGi application runtime.
## Contributing
Want to contribute to osgi.enroute? See [CONTRIBUTING.md](CONTRIBUTING.md) for information on building, testing and contributing changes.
## License
The contents of this repository are made available to the public under the terms of the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Bundles may depend on non Apache Licensed code.
| 1 |
tspannhw/nifi-tensorflow-processor | Example Tensorflow Processor using Java API for Apache NiFi 1.2 - 1.9.1+ | apache-nifi deep-learning inception java nifi nifi-processor tensorflow tensorflow-processor | # nifi-tensorflow-processor
Example Tensorflow Processor using Java API for Apache NiFi 1.2+
An example using the out-of-the-box TensorFlow Java example with NiFi
Article detailing creation, building and usage
https://community.hortonworks.com/content/kbentry/116803/building-a-custom-processor-in-apache-nifi-12-for.html
Currently only simple classification models are supported. The model directory should contain a `graph.pb` file with a TensorFlow graph and a `label.txt` file listing the labels of the graph's output classes by index. The model must also have a variable called `input`, which expects an image tensor, and an `output` variable, which stores a tensor of label indices and probabilities.
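As a hypothetical illustration of that output contract (names and shapes here are assumptions, not the processor's actual API), turning the graph's class probabilities plus the `label.txt` entries into top-N results could look like this:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch: map output probabilities to the labels loaded from
// label.txt (one label per output index) and pick the N most likely classes.
public class TopLabels {

    static List<String> topN(float[] probabilities, List<String> labels, int n) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < probabilities.length; i++) {
            indices.add(i);
        }
        // Sort label indices by descending probability.
        indices.sort(Comparator.comparingDouble(i -> -probabilities[i]));
        List<String> result = new ArrayList<>();
        for (int i = 0; i < Math.min(n, indices.size()); i++) {
            int idx = indices.get(i);
            result.add(labels.get(idx) + " (" + probabilities[idx] + ")");
        }
        return result;
    }
}
```

This is the kind of post-processing that produces the "Top 5" attributes mentioned below.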
This is a clean update for TensorFlow 1.6. It takes a flow file containing an image (JPG, PNG, or GIF).
Updated TensorFlowProcessor to TF 1.6. Added more tests. More cleanup. Top 5 returned in clean naming.
Install to /usr/hdf/current/nifi/lib/
| 1 |
szaza/tensorflow-example-java | This is a Tensorflow Java example application what uses YOLOv2 model and Gradle for build and dependency management. | example java tensorflow yolo | # TensorFlow Java example with YOLOv2 built by Gradle
TensorFlow Java API is a new opportunity to use TensorFlow from Java applications.
On the [official TensorFlow site](https://www.tensorflow.org/install/install_java) you can find a description about the
Java API usage with Maven using an Inception model. This sample shows you how to use TensorFlow from Java programs, with Gradle as the build and
dependency management tool. In my sample code I used YOLO version 2 to detect and classify objects.
### How it works?
<table>
<tr>
<td><img src="https://github.com/szaza/tensorflow-java-yolo/blob/master/src/main/resources/image/cow-and-bird.jpg" title="tensorflow java api cow and bird" alt="tensorflow java example cow and bird" width="500"/></td>
<td><img src="https://github.com/szaza/tensorflow-java-yolo/blob/master/sample/cow-and-bird.jpg" title="tensorflow java example" alt="tensorflow java example" width="500"/></td>
</tr>
<tr>
<td>Input image</td>
<td>Bird and cow detected by YOLO using TensorFlow Java API</td>
</tr>
<tr>
<td><img src="https://github.com/szaza/tensorflow-java-yolo/blob/master/src/main/resources/image/eagle.jpg" title="tensorflow yolo java eagle" alt="tensorflow yolo java eagle" width="500"/></td>
<td><img src="https://github.com/szaza/tensorflow-java-yolo/blob/master/sample/eagle.jpg" title="tensoflow java yolo sample" alt="java tensorflow yolo sample" width="500"/></td>
</tr>
<tr>
<td>Input image</td>
<td>Bird detected by YOLO using TensorFlow Java API</td>
</tr>
</table>
### Compile and run
**Preconditions:**
- Java JDK 1.8 or greater;
- TensorFlow 1.6 or greater;
- Git version control system;
**Strongly recommended to install:**
- nVidia CUDA Toolkit 8.0 or higher version;
- nVidia cuDNN GPU accelerated deep learning framework;
**Download the frozen graphs**
Before compiling the application you have to create/download some graph definition files. To try out the application you
can use my frozen graphs, which are trained on the Pascal VOC dataset with 20 classes. You can download them from my
google drive [here](https://drive.google.com/open?id=1GfS1Yle7Xari1tRUEi2EDYedFteAOaoN). Place these files under the
`src/main/resources/YOLO` directory.
Please make sure that you've set properly the *GRAPH_FILE* and *LABEL_FILE* variables in the [Configuration](https://github.com/szaza/tensorflow-java-yolo/blob/master/src/main/java/edu/ml/tensorflow/Config.java) file.
**Compile the source by using Gradle**
By default it runs on CPU. If you want to run this program with GPU support please add this line to the `build.gradle` file: <br/>
`compile group: 'org.tensorflow', name: 'libtensorflow_jni_gpu', version: '1.6.0'`
Specify the path for the image in the [Main](https://github.com/szaza/tensorflow-java-yolo/blob/master/src/main/java/edu/ml/tensorflow/Main.java) class (for sure it can be modified to read from the command line arguments).<br/>
Compile the code with the following command: `./gradlew clean build`
**Run the application**
Type the `./gradlew run` command in the command line window and hit enter. You are done!
The output is printed out with the LogBack logging framework so, it looks like:
`INFO edu.ml.tensorflow.ObjectDetector - Object: cow - confidence: 0.8864294` <br/>
`INFO edu.ml.tensorflow.ObjectDetector - Object: bird - confidence: 0.64604723`
**Note**
If you would like to create a client-server architecture with Spring Framework check this project: [TensorFlow Java tutorial with Spring](https://sites.google.com/view/tensorflow-example-java-api/tensorflow-java-api-with-spring-framework).
**FAQ**
Is it much slower than the TensorFlow Python or TensorFlow C++ API? <br/>
No, because it communicates through the Java Native Interface (JNI).
## News about YoloV3 support
The current solution doesn't support the YoloV3 model and, unfortunately, I do not have time to implement it myself; however, I would be very happy to help with the implementation and to review a PR adding this feature.
For this reason I've started a new branch here: https://github.com/szaza/tensorflow-java-examples-spring/tree/feature/add-yolov3-support; if you are interested in this feature and would like to be a collaborator, please add a comment to this thread: https://github.com/szaza/tensorflow-java-examples-spring/issues/2;
Many thanks for any support!
| 1 |
IMS94/javacv-cnn-example | A example to demonstrate the usage of JavaCV and CNN for gender and age recognition | age age-recognition cnn gender javacv javacv-cnn opencv | # JavaCV CNN (Convolutional Neural Networks) Example for Age and Gender Recognition
A sample repository to demonstrate the usage of JavaCV and CNNs for gender and age recognition. **Please refer to [Age and gender recognition with JavaCV and CNN](https://medium.com/@Imesha94/age-and-gender-recognition-with-javacv-and-cnn-fdebb3d436c0) for the step-by-step guide.**
This repository has made use of CNNs trained by [Gil Levi and Tal Hassner in 2015](http://www.openu.ac.il/home/hassner/projects/cnn_agegender).
This simple program is capable of detecting human faces and predicting the gender and age of the detected face.
## Building project
In order to build this project, run a `mvn clean install` at the project root.
| 1 |
bezkoder/spring-security-refresh-token-jwt | Spring Security Refresh Token using JWT in Spring Boot example with HttpOnly Cookie - Expire and Renew JWT Token | authentication authorization jwt jwt-auth jwt-authentication jwt-authorization jwt-refresh-token jwt-token jwt-tokens refresh-token refresh-tokens refreshtoken spring spring-boot spring-security spring-security-jwt springboot springsecurity | # Spring Security Refresh Token with JWT in Spring Boot example
Build a JWT Refresh Token flow with Spring Security in a Spring Boot application. You will learn how to expire the JWT access token and then renew it with a Refresh Token stored in an HttpOnly Cookie.
The instruction can be found at:
[Spring Security Refresh Token with JWT](https://www.bezkoder.com/spring-security-refresh-token/)
## User Registration, User Login and Authorization process.
The diagram shows the flow of how we implement the User Registration, User Login and Authorization process.

And this is for Refresh Token:

## Configure Spring Datasource, JPA, App properties
Open `src/main/resources/application.properties`
```properties
spring.datasource.url= jdbc:mysql://localhost:3306/testdb?useSSL=false
spring.datasource.username= root
spring.datasource.password= 123456
spring.jpa.properties.hibernate.dialect= org.hibernate.dialect.MySQLDialect
spring.jpa.hibernate.ddl-auto= update
# App Properties
bezkoder.app.jwtSecret= bezKoderSecretKey
bezkoder.app.jwtExpirationMs= 3600000
bezkoder.app.jwtRefreshExpirationMs= 86400000
```
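For reference, the two `ExpirationMs` properties above are durations in milliseconds; a quick sanity check of what they mean:

```java
// Both expiration properties in application.properties are in milliseconds.
public class TokenDurations {
    public static void main(String[] args) {
        long jwtExpirationMs = 3_600_000L;      // bezkoder.app.jwtExpirationMs
        long refreshExpirationMs = 86_400_000L; // bezkoder.app.jwtRefreshExpirationMs

        System.out.println(jwtExpirationMs / (1000 * 60));       // 60 -> access token lives 60 minutes
        System.out.println(refreshExpirationMs / (1000 * 3600)); // 24 -> refresh token lives 24 hours
    }
}
```

So the access token expires after one hour, while the refresh token remains valid for a full day.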
## Run Spring Boot application
```
mvn spring-boot:run
```
## Run the following SQL insert statements
```
INSERT INTO roles(name) VALUES('ROLE_USER');
INSERT INTO roles(name) VALUES('ROLE_MODERATOR');
INSERT INTO roles(name) VALUES('ROLE_ADMIN');
```
Related Posts:
> [Spring Boot, Spring Security: JWT Authentication & Authorization example](https://www.bezkoder.com/spring-boot-security-login-jwt/)
> [For MySQL/PostgreSQL](https://www.bezkoder.com/spring-boot-login-example-mysql/)
> [For MongoDB](https://www.bezkoder.com/spring-boot-mongodb-login-example/)
## More Practice:
> [Spring Boot File upload example with Multipart File](https://bezkoder.com/spring-boot-file-upload/)
> [Exception handling: @RestControllerAdvice example in Spring Boot](https://bezkoder.com/spring-boot-restcontrolleradvice/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Rest Controller Unit Test with @WebMvcTest](https://www.bezkoder.com/spring-boot-webmvctest/)
> [Spring Boot Pagination & Sorting example](https://www.bezkoder.com/spring-boot-pagination-sorting-example/)
> Validation: [Spring Boot Validate Request Body](https://www.bezkoder.com/spring-boot-validate-request-body/)
> Documentation: [Spring Boot and Swagger 3 example](https://www.bezkoder.com/spring-boot-swagger-3/)
> Caching: [Spring Boot Redis Cache example](https://www.bezkoder.com/spring-boot-redis-cache-example/)
Associations:
> [Spring Boot One To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-one-to-many/)
> [Spring Boot Many To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-many-to-many/)
> [JPA One To One example with Spring Boot](https://www.bezkoder.com/jpa-one-to-one/)
Deployment:
> [Deploy Spring Boot App on AWS – Elastic Beanstalk](https://www.bezkoder.com/deploy-spring-boot-aws-eb/)
> [Docker Compose Spring Boot and MySQL example](https://www.bezkoder.com/docker-compose-spring-boot-mysql/)
## Fullstack Authentication
> [Spring Boot + Vue.js JWT Authentication](https://bezkoder.com/spring-boot-vue-js-authentication-jwt-spring-security/)
> [Spring Boot + Angular 8 JWT Authentication](https://bezkoder.com/angular-spring-boot-jwt-auth/)
> [Spring Boot + Angular 10 JWT Authentication](https://bezkoder.com/angular-10-spring-boot-jwt-auth/)
> [Spring Boot + Angular 11 JWT Authentication](https://bezkoder.com/angular-11-spring-boot-jwt-auth/)
> [Spring Boot + Angular 12 JWT Authentication](https://www.bezkoder.com/angular-12-spring-boot-jwt-auth/)
> [Spring Boot + Angular 13 JWT Authentication](https://www.bezkoder.com/angular-13-spring-boot-jwt-auth/)
> [Spring Boot + Angular 14 JWT Authentication](https://www.bezkoder.com/angular-14-spring-boot-jwt-auth/)
> [Spring Boot + Angular 15 JWT Authentication](https://www.bezkoder.com/angular-15-spring-boot-jwt-auth/)
> [Spring Boot + Angular 16 JWT Authentication](https://www.bezkoder.com/angular-16-spring-boot-jwt-auth/)
> [Spring Boot + Angular 17 JWT Authentication](https://www.bezkoder.com/angular-17-spring-boot-jwt-auth/)
> [Spring Boot + React JWT Authentication](https://bezkoder.com/spring-boot-react-jwt-auth/)
## Fullstack CRUD App
> [Vue.js + Spring Boot + H2 Embedded database example](https://www.bezkoder.com/spring-boot-vue-js-crud-example/)
> [Vue.js + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-vue-js-mysql/)
> [Vue.js + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-vue-js-postgresql/)
> [Angular 8 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + MySQL example](https://bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-spring-boot-postgresql/)
> [Angular 10 + Spring Boot + MySQL example](https://bezkoder.com/angular-10-spring-boot-crud/)
> [Angular 10 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-10-spring-boot-postgresql/)
> [Angular 11 + Spring Boot + MySQL example](https://bezkoder.com/angular-11-spring-boot-crud/)
> [Angular 11 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-11-spring-boot-postgresql/)
> [Angular 12 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-12-spring-boot-crud/)
> [Angular 12 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-12-spring-boot-mysql/)
> [Angular 12 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-12-spring-boot-postgresql/)
> [Angular 13 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-13-crud/)
> [Angular 13 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-13-mysql/)
> [Angular 13 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-13-postgresql/)
> [Angular 14 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-14-crud/)
> [Angular 14 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-14-mysql/)
> [Angular 14 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-14-postgresql/)
> [Angular 15 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-15-crud/)
> [Angular 15 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-15-mysql/)
> [Angular 15 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-15-postgresql/)
> [Angular 15 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-15-mongodb/)
> [Angular 16 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-16-crud/)
> [Angular 16 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-16-mysql/)
> [Angular 16 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-16-postgresql/)
> [Angular 16 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-16-mongodb/)
> [Angular 17 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-17-crud/)
> [Angular 17 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-17-mysql/)
> [Angular 17 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-17-postgresql/)
> [Angular 17 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-17-mongodb/)
> [React + Spring Boot + MySQL example](https://bezkoder.com/react-spring-boot-crud/)
> [React + Spring Boot + PostgreSQL example](https://bezkoder.com/spring-boot-react-postgresql/)
> [React + Spring Boot + MongoDB example](https://bezkoder.com/react-spring-boot-mongodb/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React.js with Spring Boot Rest API](https://bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue.js with Spring Boot Rest API](https://bezkoder.com/integrate-vue-spring-boot/)
| 1 |
daemontus/VuforiaLibGDX | Example of Vuforia and LibGDX integration for 3D model rendering | null | ## Deprecated
This repo is deprecated: it does not work with the latest Vuforia SDK and I don't have time to fix it (I am not doing AR any more). The repo has been updated to the latest SDK (as of July 2018), but the app sometimes crashes (a race condition depending on whether Vuforia or LibGDX initialises first, which seems to require some architectural changes compared to previous versions) and the model is not rendered (although the transform matrix seems to be computed correctly).
I'll try to fix it if I find some time in the future, but for now, consider it dead. I'm happy to give maintainer rights to anyone interested in keeping this project alive.
If you wish to see the version working with older Vuforia SDK, see [here](https://github.com/daemontus/VuforiaLibGDX/tree/2fecef3c2d4699f8dcc9c2813a232f369e640013).
# VuforiaLibGDX
Example of Vuforia and LibGDX integration for 3D model rendering in augmented reality.
For a more detailed explanation, see this [article](https://treeset.wordpress.com/2016/06/12/vuforia-and-libgdx-3d-model-renderer/).
Note: The app will freeze for a few seconds after start up while loading the 3D model, do not panic :)
##### If you are interested in older versions of Vuforia/LibGDX, check out [this branch](https://github.com/daemontus/VuforiaLibGDX/tree/old).

| 1 |
mkuthan/example-ddd-cqrs-server | Example DDD/CQRS based on Implementing Domain Driven Design book written by Vaughn Vernon | cqrs ddd spring | [](https://travis-ci.org/mkuthan/example-ddd-cqrs-server)
[Presentation](https://docs.google.com/presentation/d/1PlKF4OW5ARqUbqSUwL4D1syEwxw-PmX4KLJObPeYQyI/pub?start=false&loop=false&delayms=3000)
| 1 |
sadra/SOLID | S.O.L.I.D Principles Example | java solid solid-principles | # SOLID
S.O.L.I.D Principles Example
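As a flavor of what the principles look like in code, here is a tiny, hypothetical illustration (not code from this repo) of one of the five, Dependency Inversion: the high-level `report` logic depends on a `Printer` abstraction rather than on any concrete printer.

```java
// Illustrative sketch of the "D" in SOLID (Dependency Inversion), invented for
// this README: high-level code depends on an abstraction, not a concrete class.
interface Printer {
    String print(String text);
}

class ConsolePrinter implements Printer {
    public String print(String text) { return "console: " + text; }
}

public class SolidSketch {
    // High-level policy: unaware of which concrete Printer it receives
    static String report(Printer printer) {
        return printer.print("quarterly report");
    }

    public static void main(String[] args) {
        System.out.println(report(new ConsolePrinter())); // console: quarterly report
    }
}
```

Swapping `ConsolePrinter` for, say, a file-backed printer requires no change to `report`, which is the point of the principle.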
| 1 |
kowalcj0/cucumber-testng-parallel-selenium | An example project that shows how to run Cucumber tests in multiple browsers simultaneously using Selenium and TestNG | null | This example project is based on a few other projects:
* [Cucumber-JVM-Parallel](https://github.com/tristanmccarthy/Cucumber-JVM-Parallel)
* [java-parallel](https://github.com/cucumber/cucumber-jvm/tree/java-parallel-example/examples/java-parallel)
* [java-webbit-websockets-selenium](https://github.com/cucumber/cucumber-jvm/tree/java-parallel-example/examples/java-webbit-websockets-selenium)
It allows you to run Cucumber features (tests/scenarios) in multiple browsers simultaneously using Selenium (WebDriver) and TestNG.
## Running features in IDE
Tested in IntelliJ Idea 13.1.1
To run all features from the IDE in Firefox only, simply right-click on one of the files:
* cucumber.examples.java.testNG.runners.RunCukesTestInChrome
* cucumber.examples.java.testNG.runners.RunCukesTestInFirefox
And choose "Run ..."
(Yes, choosing RunCukesTestInChrome will also run tests in FF!)
To run all features simultaneously in both browsers (Chrome and Firefox), right-click on one of the files:
* src/test/resources/TestNGRunTestsLocally.xml
* src/test/resources/TestNGRunTestsRemotely.xml
And choose "Run ..."
To run just one selected feature, change the feature name in the class below:
cucumber.examples.java.testNG.runners.RunSingleFeature
And, as in the previous example, right-click on this class and choose "Run ..."
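The actual suite definitions live in the two `TestNGRunTests*.xml` files above; as a rough sketch, a TestNG suite that runs the two runner classes in parallel typically looks something like this (only the class names are taken from this README, everything else is illustrative):

```xml
<!-- Hypothetical parallel suite: one <test> per browser, run concurrently.
     The real configuration is in src/test/resources/TestNGRunTestsLocally.xml. -->
<suite name="Parallel cross-browser suite" parallel="tests" thread-count="2">
  <test name="Chrome">
    <classes>
      <class name="cucumber.examples.java.testNG.runners.RunCukesTestInChrome"/>
    </classes>
  </test>
  <test name="Firefox">
    <classes>
      <class name="cucumber.examples.java.testNG.runners.RunCukesTestInFirefox"/>
    </classes>
  </test>
</suite>
```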
## Running features from CLI
Run tests using local browsers:
mvn clean install
Run tests using browsers running on remote nodes:
mvn clean install -P runTestsRemotely
## Viewing the results
All Cucumber reports [html, json, xml, js] are in: target/cucumber-report
## How to download WebDriver binaries automatically
This project uses Mark Collin's "selenium-standalone-server-plugin", a Maven plugin that can download
WebDriver binaries automatically.
Once you configure the plugin to your liking, then:
mvn clean install -P downloadDriverBinaries
The pom.xml is currently configured to download only a Chrome driver binary for 64bit Linux OSes.
If you can't download the desired driver binary, check whether its URL and checksum specified in:
src/main/resources/RepositoryMapForMavenWebDriverBinaryDownloaderPlugin.xml
are correct. If not, modify this file accordingly.
## Jenkins configuration
I'll add a tutorial later
### tools that need to be installed on the Jenkins Host machine
maven 2/3
### List of useful plugins
AnsiColor
Cucumber json test reporting.
cucumber-perf
cucumber-reports
GIT client plugin
GIT plugin
Hudson Locks and Latches plugin
Maven Integration plugin
SSH Credentials Plugin
TestNG Results Plugin
Xvfb plugin | 1 |
abhioncbr/Kafka-Message-Server | Example application based on the Apache Kafka framework to show its usage as a distributed message server. Exploring this sample application helps users understand how good and easy Apache Kafka is to use. | null | Kafka-Message-Server Example Application
========================================
Apache Kafka is yet another precious gem from the Apache Software Foundation. Kafka was originally developed at LinkedIn and later became an Apache project. Apache Kafka is a distributed publish-subscribe messaging system. Kafka differs from traditional messaging systems in that it is designed as a distributed system, persists messages on disk and supports multiple subscribers.
Kafka-Message-Server is a sample application demonstrating Kafka usage as a message server. Please follow the instructions below to make productive use of the sample application.
1) Download the Apache Kafka version 0.8.0 zip file from the Kafka download page and extract it.
2) There is no need to set up Hadoop or ZooKeeper on your system. You can use the ZooKeeper startup script present in the bin folder of Kafka.
3) To run the sample application, copy 'kafka-message-server-example-0.8.0.jar' into the Kafka folder where 'kafka_2.8.0-0.8.0.jar' is present. The sample application depends on 'commons-cli-1.1.jar', so copy 'commons-cli-1.1.jar' into the 'libs' folder of Apache Kafka.
4) Copy the following scripts from the 'Kafka-Message-Server-Example/config' folder into the 'bin' folder of Kafka:
a) java-mail-content-producer.sh
b) java-mail-consumer-demo.sh
c) java-mail-producer-consumer-demo.sh
d) java-mail-producer-demo.sh
Give execution permission to the scripts using the chmod command, e.g. `chmod +x bin/java-mail-*.sh`.
5) Copy 'commons-cli-1.1.jar' in to the Kafka 'libs' folder.
6) Start the ZooKeeper server using command - bin/zookeeper-server-start.sh config/zookeeper.properties
7) Start Kafka server using command - bin/kafka-server-start.sh config/server.properties
8) Start mail content creation program using command - bin/java-mail-content-producer.sh -path [directory-path]
9) Start message server mail producer using command - bin/java-mail-producer-demo.sh -path [same directory path given above] -topic [topic name]
10) Start message server mail consumer using command - bin/java-mail-consumer-demo.sh -topic [same topic name given above]
| 1 |
suikki/simpleSDL | A simple crossplatform libSDL cmake build environment example/test. | null |
# simpleSDL
A simple cross-platform libSDL CMake build environment example/test. This is still a
work in progress and is missing support for iOS builds.
Tested to build and run successfully with:
- Android: gradle + cmake
- Windows: mingw-w64
- Windows: Visual Studio 2015
- Linux (Ubuntu): GCC
- Mac: (Just building a simple executable, no bundle)
- Emscripten
Building on Android
-------------------
You need to have the NDK and CMake packages installed in the Android SDK
(https://developer.android.com/studio/projects/add-native-code.html)
1. Copy/clone `SDL` to the `contrib/` directory
1. run `gradlew assemble` in `platforms/android`
or
Open the project in Android Studio and build using the IDE. NOTE: Make sure
to open the `platforms/android/` dir. Android Studio can also
open the root dir, but it is not recognized as an Android project.
The included Android Gradle CMake project is pretty much what Android Studio
generates when you create a new empty app with native CMake support, just
pointing to the CMakeLists.txt in the project root.
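For orientation, a root CMakeLists.txt for a simple SDL project of this shape might look roughly like the following sketch (the target name, source list and SDL link targets are assumptions, not the project's actual file):

```cmake
cmake_minimum_required(VERSION 3.6)
project(simpleSDL)

# Build SDL2 from the copy cloned into contrib/ (see the Android instructions above)
add_subdirectory(contrib/SDL)

# A single executable target; source file name is illustrative
add_executable(simpleSDL src/main.cpp)
target_link_libraries(simpleSDL SDL2-static SDL2main)
```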
> NOTE:
>
> Currently the SDL2 Android Java code is included in this project. This is not a very good system
> as it easily leads to the SDL Java and native code being out of sync (i.e. code from different versions of SDL).
> You should replace the Java sources from the version of SDL you are using to make sure they are from the same version.
Todo
----
- Nicer way to include SDL in an Android project. [A missing Android feature](https://issuetracker.google.com/issues/37134163) is needed to include a
prebuilt native library with headers in an .aar package.
- iOS build
- Add instructions how to build on all platforms
| 0 |
damienbeaufils/spring-data-jpa-encryption-example | An example of how to encrypt and decrypt entity fields with JPA converters and Spring Data JPA | null | # spring-data-jpa-encryption-example
[](https://travis-ci.org/damienbeaufils/spring-data-jpa-encryption-example)
An example of how to encrypt and decrypt entity fields with JPA converters and Spring Data JPA.
See [blog post](https://damienbeaufils.dev/blog/how-to-properly-encrypt-data-using-jpa-converters-and-spring-data-jpa/).
## Requirements
Java 11
## How is encryption enabled
### Entity
There is a `User` entity which has the fields `id`, `firstName`, `lastName`, `email`, `birthDate` and `creationDate`.
All fields except `id` are encrypted in the database using the AES algorithm.
### Repository
There is a simple `UserRepository` which extends Spring Data `JpaRepository`.
### Converters
Encryption is enabled on fields using different JPA converters: `StringCryptoConverter`, `LocalDateCryptoConverter` and `LocalDateTimeCryptoConverter`.
This is verified with `UserRepositoryTest` integration test.
All converters are unit tested.
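As a flavor of how such a converter works, here is a minimal, hypothetical sketch of the idea behind `StringCryptoConverter`. It is not the project's actual class: the real converter would implement `jakarta.persistence.AttributeConverter<String, String>`, be annotated with `@Converter`, read its key from the `example.database.encryption.key` property rather than hard-coding one, and use a properly configured cipher mode instead of the JDK default used here for brevity.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of an AES string converter (NOT the project's actual code).
public class StringCryptoConverterSketch {

    // Placeholder for the configured example.database.encryption.key property
    private static final String KEY = "0123456789abcdef"; // 16 bytes -> AES-128

    // JPA calls this before writing the attribute to its database column
    public static String convertToDatabaseColumn(String attribute) {
        byte[] encrypted = apply(Cipher.ENCRYPT_MODE, attribute.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(encrypted);
    }

    // JPA calls this after reading the column, before populating the entity field
    public static String convertToEntityAttribute(String dbData) {
        byte[] decrypted = apply(Cipher.DECRYPT_MODE, Base64.getDecoder().decode(dbData));
        return new String(decrypted, StandardCharsets.UTF_8);
    }

    private static byte[] apply(int mode, byte[] input) {
        try {
            Cipher cipher = Cipher.getInstance("AES"); // JDK default transformation, for brevity only
            cipher.init(mode, new SecretKeySpec(KEY.getBytes(StandardCharsets.UTF_8), "AES"));
            return cipher.doFinal(input);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String stored = convertToDatabaseColumn("alice@example.com");
        System.out.println(convertToEntityAttribute(stored)); // round-trips to the original value
    }
}
```

Because the converter is applied transparently by JPA, repository code reads and writes plain strings while the column stores only ciphertext.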
### Encryption key
Encryption key is empty by default (see `example.database.encryption.key` configuration key in `application.yml`).
You have to provide an encryption key in the configuration, or specify it on the command line when running the application.
## Run tests
```
./gradlew check
```
| 1 |
thoersch/spring-boot-rest-api-seed | A seed and example project for a RESTful api using Springboot, Jersey, Hibernate and Jackson | null | ## Description
This is a seed and example project for building a RESTful API using the following technologies:
* Java 8
* Spring Boot
* Jersey
* Hibernate
* Jackson
* Spring DI
* Postgresql
## Install Postgresql
The seed project uses PostgreSQL 9.3+, which can be installed quite easily on Mac, Linux or Windows by following a [guide](https://www.codefellows.org/blog/three-battle-tested-ways-to-install-postgresql)
## Create the database user
```
CREATE ROLE "SpringBootUser" LOGIN
ENCRYPTED PASSWORD 'md513445691374efba1aaee7b0912e63af3'
SUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION;
```
## Create the database
```
CREATE DATABASE "SpringBootRestApi"
WITH ENCODING='UTF8'
OWNER="SpringBootUser"
CONNECTION LIMIT=-1;
```
## Build the project
```
mvn clean install
```
## Run the migrations
```
mvn liquibase:update -P local
```
## Running the API
Start the service by running the following command:
```
java -jar target/spring-boot-rest-api-seed-1.0-SNAPSHOT.jar
```
You can now test the service by consuming the API on port 8080. Some routes you can try in your browser (GET requests):
* http://127.0.0.1:8080/users
* http://127.0.0.1:8080/users/1
* http://127.0.0.1:8080/books
* http://127.0.0.1:8080/books/2
* http://127.0.0.1:8080/users/1/books
You can add new content by posting payloads like below:
```
POST 127.0.0.1:8080/users
Content-Type: application/json
{
"firstName": "you",
"lastName": "here",
"emailAddress": "you.here@example.com",
"profilePicture": "yourface.png"
}
```
## License
Copyright © 2014 Tyler Hoersch
Distributed under the Eclipse Public License either version 1.0 or (at
your option) any later version.
| 1 |
gregwhitaker/gradle-monorepo-example | Example of building projects in a monorepo using Gradle Composite Builds | gradle gradle-build gradle-compositebuild gradle-java monorepo | # gradle-monorepo-example
An example of building projects in a monorepo using [Gradle Composite Builds](https://docs.gradle.org/current/userguide/composite_builds.html).
## Repository Structure
The repository contains four projects each with their own Gradle configurations.
Projects A, B, and C have dependencies on one another:
[project-a] -- DEPENDS --> [project-b] -- DEPENDS --> [project-c]
Project D has no dependencies on the other projects:
[project-d]
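The dependency substitution that makes this work is driven by Gradle's `includeBuild` mechanism. A sketch of what project-a's `settings.gradle` plausibly contains (the actual file is not shown in this README, so treat the details as illustrative):

```groovy
// settings.gradle for project-a (illustrative -- the real file may differ).
// Each includeBuild line tells Gradle to substitute the locally built project
// for the 'example.gcb.gregwhitaker:project-b' / ':project-c' module dependencies,
// which is what the "Registering project ... in composite build" log lines show.
rootProject.name = 'project-a'

includeBuild '../project-b'
includeBuild '../project-c'
```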
## Running the Example
Follow the steps below to run the example:
### Build Project A
Run the following commands to build [project-a](project-a):
1. Change the working directory to [project-a](project-a):
cd project-a
2. Run the following command to generate classes for the project:
./gradlew classes --info
Notice in the command output that [project-b](project-b) and [project-c](project-c) were also configured and built,
but [project-d](project-d) was neither configured nor built:
> Configure project :project-b
Evaluating project ':project-b' using build file '/Users/greg/workspace/gradle-compositebuild-example/project-b/build.gradle'.
Registering project ':project-b' in composite build. Will substitute for module 'example.gcb.gregwhitaker:project-b'.
[composite-build] Configuring build: /Users/greg/workspace/gradle-compositebuild-example/project-c
> Configure project :project-c
Evaluating project ':project-c' using build file '/Users/greg/workspace/gradle-compositebuild-example/project-c/build.gradle'.
Registering project ':project-c' in composite build. Will substitute for module 'example.gcb.gregwhitaker:project-c'.
> Configure project :
Evaluating root project 'project-a' using build file '/Users/greg/workspace/gradle-compositebuild-example/project-a/build.gradle'.
All projects evaluated.
Selected primary task 'classes' from project :
Found project 'project :project-b' as substitute for module 'example.gcb.gregwhitaker:project-b'.
Selected primary task ':jar' from project :
Found project 'project :project-c' as substitute for module 'example.gcb.gregwhitaker:project-c'.
Selected primary task ':jar' from project :
Executing project-b tasks [:jar]
Executing project-c tasks [:jar]
Tasks to be executed: [task ':compileJava', task ':processResources', task ':classes']
:project-b:processResources (Thread[Task worker for ':project-b',5,main]) started.
:project-c:compileJava (Thread[Task worker for ':project-c' Thread 3,5,main]) started.
:processResources (Thread[Task worker for ':' Thread 4,5,main]) started.
> Task :project-b:processResources NO-SOURCE
Skipping task ':project-b:processResources' as it has no source files and no previous output files.
:project-b:processResources (Thread[Task worker for ':project-b',5,main]) completed. Took 0.005 secs.
> Task :processResources NO-SOURCE
Skipping task ':processResources' as it has no source files and no previous output files.
:processResources (Thread[Task worker for ':' Thread 4,5,main]) completed. Took 0.004 secs.
> Task :project-c:compileJava UP-TO-DATE
Skipping task ':project-c:compileJava' as it is up-to-date.
:project-c:compileJava (Thread[Task worker for ':project-c' Thread 3,5,main]) completed. Took 0.007 secs.
:project-c:processResources (Thread[Task worker for ':project-c' Thread 3,5,main]) started.
> Task :project-c:processResources NO-SOURCE
Skipping task ':project-c:processResources' as it has no source files and no previous output files.
:project-c:processResources (Thread[Task worker for ':project-c' Thread 3,5,main]) completed. Took 0.0 secs.
:project-c:classes (Thread[Task worker for ':project-c' Thread 3,5,main]) started.
> Task :project-c:classes UP-TO-DATE
Skipping task ':project-c:classes' as it has no actions.
:project-c:classes (Thread[Task worker for ':project-c' Thread 3,5,main]) completed. Took 0.0 secs.
:project-c:jar (Thread[Task worker for ':project-c' Thread 3,5,main]) started.
> Task :project-c:jar UP-TO-DATE
Skipping task ':project-c:jar' as it is up-to-date.
:project-c:jar (Thread[Task worker for ':project-c' Thread 3,5,main]) completed. Took 0.002 secs.
:project-b:compileJava (Thread[Task worker for ':project-b' Thread 2,5,main]) started.
> Task :project-b:compileJava UP-TO-DATE
Skipping task ':project-b:compileJava' as it is up-to-date.
:project-b:compileJava (Thread[Task worker for ':project-b' Thread 2,5,main]) completed. Took 0.005 secs.
:project-b:classes (Thread[Task worker for ':project-b' Thread 2,5,main]) started.
> Task :project-b:classes UP-TO-DATE
Skipping task ':project-b:classes' as it has no actions.
:project-b:classes (Thread[Task worker for ':project-b' Thread 2,5,main]) completed. Took 0.0 secs.
:project-b:jar (Thread[Task worker for ':project-b' Thread 2,5,main]) started.
> Task :project-b:jar UP-TO-DATE
Skipping task ':project-b:jar' as it is up-to-date.
:project-b:jar (Thread[Task worker for ':project-b' Thread 2,5,main]) completed. Took 0.002 secs.
:compileJava (Thread[Task worker for ':' Thread 4,5,main]) started.
> Task :compileJava UP-TO-DATE
Skipping task ':compileJava' as it is up-to-date.
:compileJava (Thread[Task worker for ':' Thread 4,5,main]) completed. Took 0.004 secs.
:classes (Thread[Task worker for ':' Thread 4,5,main]) started.
> Task :classes UP-TO-DATE
Skipping task ':classes' as it has no actions.
:classes (Thread[Task worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
BUILD SUCCESSFUL in 0s
### Build Project D
Run the following commands to build [project-d](project-d):
1. Change the working directory to [project-d](project-d):
cd project-d
2. Run the following command to generate classes for the project:
./gradlew classes --info
Notice that only [project-d](project-d) was configured and built:
> Configure project :
Evaluating root project 'project-d' using build file '/Users/greg/workspace/gradle-compositebuild-example/project-d/build.gradle'.
All projects evaluated.
Selected primary task 'classes' from project :
Tasks to be executed: [task ':compileJava', task ':processResources', task ':classes']
:compileJava (Thread[Task worker for ':',5,main]) started.
> Task :compileJava UP-TO-DATE
Skipping task ':compileJava' as it is up-to-date.
:compileJava (Thread[Task worker for ':',5,main]) completed. Took 0.006 secs.
:processResources (Thread[Task worker for ':',5,main]) started.
> Task :processResources NO-SOURCE
Skipping task ':processResources' as it has no source files and no previous output files.
:processResources (Thread[Task worker for ':',5,main]) completed. Took 0.0 secs.
:classes (Thread[Task worker for ':',5,main]) started.
> Task :classes UP-TO-DATE
Skipping task ':classes' as it has no actions.
:classes (Thread[Task worker for ':',5,main]) completed. Took 0.0 secs.
BUILD SUCCESSFUL in 0s
## Working in IntelliJ
IntelliJ supports Gradle Composite Builds and will automatically open any included builds for a project.
To see this in action, open [project-a](project-a) in your IntelliJ IDE. You will notice that IntelliJ automatically
opens [project-b](project-b) because it is a dependency of the current project.

## Bugs and Feedback
For bugs, questions, and discussions please use the [Github Issues](https://github.com/gregwhitaker/gradle-monorepo-example/issues).
## License
MIT License
Copyright (c) 2019 Greg Whitaker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| 1 |
mitchtabian/Bound-Services-with-MVVM | A simple example of how to bind an activity to a service while using MVVM | android-bound-service android-mvvm android-mvvm-architecture android-services | # Bound-Services-with-MVVM
A simple example of how to bind a service while using MVVM
<a href="https://codingwithmitch.com/blog/bound-services-on-android/" target="_blank">Read the blog post</a>
Or
<a href="https://www.youtube.com/watch?v=_xNkVNaC9AI" target="_blank">Watch the video</a>
##### Note: The structure of this project goes against the recommendation of a Google developer. See <a href="https://medium.com/androiddevelopers/viewmodels-and-livedata-patterns-antipatterns-21efaef74a54" target="_blank">this post</a> for more information.
| 1 |
berndruecker/ticket-booking-camunda-8 | A ticket booking example using Camunda Cloud, RabbitMQ, REST and two sample apps (Java Spring Boot and NodeJS) | null | # Ticket Booking Example

A ticket booking example using
* Camunda Platform 8,
* RabbitMQ,
* Java Spring Boot App
* NodeJS App

# How To Run
<a href="http://www.youtube.com/watch?feature=player_embedded&v=m3MYuRKLZa8" target="_blank"><img src="http://img.youtube.com/vi/m3MYuRKLZa8/0.jpg" alt="Walkthrough" width="240" height="180" border="10" /></a>
## Run RabbitMQ locally
```
docker run -p 15672:15672 -p 5672:5672 rabbitmq:3-management
```
* http://localhost:15672/#/queues/
* User: guest
* Password: guest
## Create Camunda Platform 8 SaaS Cluster
* Login to https://camunda.io/
* Create a new cluster
* When the new cluster appears in the console, create a new set of API client credentials.
* Copy the client credentials into
* Java App `booking-service-java/src/main/resources/application.properties`
* Node App `fake-services-nodejs/.env`
## Run NodeJs Fake Services
If you want to understand the code, please have a look into this get started tutorial: https://github.com/camunda/camunda-platform-get-started/tree/main/nodejs
```
cd fake-services-nodejs
npm update
ts-node src/app.ts
```
## Run Java Ticket Booking Service
If you want to understand the code, please have a look into this documentation: https://github.com/camunda/camunda-platform-get-started/tree/main/spring
```
mvn package exec:java -f booking-service-java
```
## Test
```
curl -i -X PUT http://localhost:8080/ticket
```
Simulate failures by:
```
curl -i -X PUT "http://localhost:8080/ticket?simulateBookingFailure=seats"
curl -i -X PUT "http://localhost:8080/ticket?simulateBookingFailure=ticket"
```
| 1 |
yrizk/FragmentAnimations | Shared Element Animation Transition example adapted for Fragments. | null | FragmentAnimations
==================
Shared Element Animation Transition example adapted for Fragments.
DevBytes, hosted by Chet Haase, included a tutorial about overriding the standard window manager at this link: https://www.youtube.com/watch?v=CPxkoe2MraA.
yrizk adapted this example to play a shared element animation between two fragments within the same activity using the ViewPropertyAnimator API, which has been available since API 12.
Special thanks to pabloff9 for some updates; definitely was not expecting that. If you feel so inclined, feel free to contribute.
Let me warn you, the .gitignore is in bad shape (nudge nudge).
| 1 |
rdblue/parquet-avro-protobuf | Example: Convert Protobuf to Parquet using parquet-avro and avro-protobuf | null | ## Converting Protobuf to Parquet via Avro
### Why?
This example shows how to convert a Protobuf file to a Parquet file using
Parquet's Avro object model and Avro's support for protobuf objects. Parquet
has a module to work directly with Protobuf objects, but this isn't always a
good option when writing data for other readers, like Hive.
The reason is that Parquet and Protobuf use the same schema definitions. Both
support required, optional, and repeated data fields and use repeated to encode
arrays. The mapping from Protobuf to Parquet is always 1-to-1.
Other object models, like Avro, allow arrays to be null or to contain null
elements and have an annotation, [LIST][list-annotation-docs], for encoding
these more complicated structures in Parquet's schema format using extra hidden
layers. More object models use this structure than bare repeated fields, so it
is desirable to use it when converting.
The easiest way to use the complex LIST structure for protobuf data is to write
using parquet-avro and use Avro's support for Protobuf objects, avro-protobuf.
[list-annotation-docs]: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
### Code
Conversion is done in the [`writeProtobufToParquetAvro` method][write-proto-method].
The first step is to get a handle to Avro's Protobuf object model using
`ProtobufData.get()`.
```Java
ProtobufData model = ProtobufData.get();
```
The Protobuf object model is used to convert the Protobuf data class,
`ExampleMessage`, into an Avro schema.
```Java
Schema schema = model.getSchema(ExampleMessage.class);
```
Then, the Protobuf object model is passed to the builder when creating a
`ParquetWriter`.
```Java
ParquetWriter<ExampleMessage> parquetWriter = AvroParquetWriter
.<ExampleMessage>builder(new Path(parquetFile))
.withDataModel(model) // use the protobuf data model
.withSchema(schema) // Avro schema for the protobuf data
.build();
```
Once the parquet-avro writer is configured to use Avro's protobuf support, it
is able to write protobuf messages to the outgoing Parquet file.
```Java
ExampleMessage m;
while ((m = ExampleMessage.parseDelimitedFrom(protoStream)) != null) {
  parquetWriter.write(m);
}
parquetWriter.close(); // flush buffered row groups and write the Parquet footer
```
[write-proto-method]: https://github.com/rdblue/parquet-avro-protobuf/blob/master/src/main/java/com/example/ProtobufToParquet.java#L59
### Result
After running the example, you will end up with `example.parquet` in the temp directory.
Using `parquet-tools` to view the schema shows the correct 3-level list
representation.
```
message com.example.Example$.ExampleMessage {
required int64 id;
required group strings (LIST) {
repeated group list {
required binary element (UTF8);
}
}
}
```
The original protobuf schema did not include the LIST annotation or the
additional levels needed for compatibility.
```
message ExampleMessage {
required int64 id = 1;
repeated string strings = 2;
}
```
The data looks like this when converted to JSON:
```
{"id": 0, "strings": ["a", "b", "c"]}
{"id": 1, "strings": ["b", "c", "d"]}
{"id": 2, "strings": ["c", "d", "e"]}
{"id": 3, "strings": ["d", "e", "f"]}
{"id": 4, "strings": ["e", "f", "g"]}
```
| 0 |
RoaringCatGames/libgdx-ashley-box2d-example | An example game using Box2d with Ashley ECS | null | # Example libGDX using Box2D and Ashley ECS
This is a starter project available to get up and running with libgdx using Box2D in an Ashley ECS based structure. The code is very basic, and the game does very little at this time. This example will give you a starting point with:
1. Configured texture-packer module that is wired into the gradle system so textures are re-packed when you run (see core/build.gradle)
2. Simple Texture, Transform, Animation, State, and Body components.
3. RenderingSystem that accounts for world units to pixels and Z indexing (based heavily on the ashley-superjumper example)
4. AnimationSystem that supports animations for different States.
5. PhysicsSystem that will run the World.step() and update TransformComponents for any physics Entities.
6. A basic splash screen that will show the splash image and a progress bar based on the AssetManager loading progress.
7. An Asset utility class to setup your AssetManager loading, and retrieval of assets in one location (you'll want to add code to this file to expose more assets)
8. A crude Screen dispatching pattern that can allow each screen to signal when it has ended and needs to move to the next. You can implement your own IScreenDispatcher or modify the existing one to do more complex screen swapping.
9. Added the required `gdx.reflect.include` options for Ashley Components and Systems so that the GWT Html build works.
# Running
The project was created with the libGDX [setup application](https://libgdx.badlogicgames.com/download.html), and is [gradle](https://docs.gradle.org/current/release-notes) based. You can import it into your IDE of choice as a Gradle project.
From the command line you can run the project:
git clone https://github.com/RoaringCatGames/libgdx-ashley-box2d-example.git
cd libgdx-ashley-box2d-example
./gradlew texture-packer:run desktop:run
NOTE: You only need to run the ```texture-packer:run``` target once after pulling down the project. If you add/update art assets in the texture-packer project, you'll need to re-run the ```texture-packer:run``` target
| 1 |
arhohuttunen/spring-boot-hexagonal-architecture | This is the repository containing an example application for my blog post about Hexagonal Architecture with Spring Boot. | null | # Hexagonal Architecture With Spring Boot

This is the repository containing an example application for my blog post about [Hexagonal Architecture With Spring Boot](https://www.arhohuttunen.com/hexagonal-architecture-spring-boot/).
| 1 |
adamsp/FragmentStatePagerIssueExample | Example of an issue around state restoration for fragments in a FragmentStatePagerAdapter | null | null | 1 |
SapienLLCdev/Cthulhu | Arduino Library and Example code for using the Cthulhu Shield | null | ## Cthulhu_Shield_Arduino

This is a library for using the Cthulhu Shield sensory substitution/augmentation development kit created by [Sapien LLC](http://sapienllc.com/).
If you like what you see here, you can purchase a Cthulhu Shield [here!](https://sapienllc.com/shop/)
The Cthulhu Shield is an open-source Arduino Uno and Arduino Mega compatible sensory substitution/sensory augmentation device. It uses an 18-electrode grid to tactilely display signals on the tongue. The electrodes on the array can be activated with patterns of electrical pulses to depolarize nerve membranes in the tongue to create different types of touch sensations. You can use these touch sensations to draw shapes or simple images on the tongue, feel different sound frequencies, or receive turn-by-turn directions with your tongue.
Additionally, the Cthulhu Shield can sense whether or not your tongue is in contact with different electrodes using capacitive sensing. You can use the Cthulhu Shield to send keystrokes to your computer, control the cursor, or even control a mobility device.
In this repository, we have provided a number of example projects to get you started with your Cthulhu, but we encourage you to experiment and build your own senses!
There are some awesome uses of sensory substitution already out there. For more information on these uses, or sensory substitution or augmentation in general, take a look at the links below.
* [Brainport](https://www.youtube.com/watch?v=OKd56D2mvN0)
* [Wikipedia](https://en.wikipedia.org/wiki/Sensory_substitution)
* [Hear with your tongue](https://source.colostate.edu/words-mouth-csu-device-lets-hear-tongue/)
* [TedTalk](https://www.ted.com/talks/david_eagleman_can_we_create_new_senses_for_humans?language=en)
# Repository Contents:
* [Bare Minimum](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/BareMinimum)
* [Serial Input](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/SerialInput)
* [Accelerometer Example](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/accelerometer_Cthulhu_example)
* [GPS Directions](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/directions_example)
* [Make Patterns](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/make_patterns)
* [Thermal Camera](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/mega_heat_cam_with_shield)
* [Tactile Button](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/tactile_button_example)
* [Tactile Cursor](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/tactile_cursor)
* [Leonardo Tactile Cursor](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/Leonardo_tactile_cursor)
* [Tactile Keypad](https://github.com/SapienLLCdev/Cthulhu/tree/master/examples/tactile_keypad)
* [Cthulhu Camera Demo](https://github.com/SapienLLCdev/Cthulhu/tree/master/Android%20Examples/CthulhuCameraDemo)
# Cthulhu Shield Schematic:
.
# How to Use the Cthulhu Shield:
**Power:**
The Cthulhu Shield is made to be powered by plugging it directly into an Arduino Uno or Mega, and connecting the Arduino to a USB cable attached to a properly grounded computer, smartphone, or external battery pack. Users should not power their Arduino/Cthulhu Shield system with an AC wall adaptor, as a small number of these adaptors are not properly grounded or are otherwise unsafe and can cause injury or death if used with any electronic device.
**Input:**
When used with an Arduino Uno, the Cthulhu Shield can receive information directly from the USB port connected to a computer or smartphone, or via broken-out serial pins (RX, TX), which can be used to communicate with other Arduinos/microcontrollers or embedded devices such as Bluetooth modules or sensors with serial outputs. With an Arduino Mega, the extra IO pins can be leveraged to receive digital or analog signals that can be used to change the tactile output of the Cthulhu Shield.
**Output:**
When used with an Arduino Uno, the Cthulhu Shield can send information directly via the USB port connected to a computer or smartphone, or via broken-out serial pins (RX, TX), which can be used to communicate with other Arduinos/microcontrollers or embedded devices such as Bluetooth modules or sensors with serial outputs. With an Arduino Mega, the extra IO pins can be leveraged to send digital or analog signals to external devices.
**Electrode Control:**
Different types of sensations can be created on different electrodes and tongue locations by changing the pattern of pulses generated on each electrode. This can be done with the Cthulhu.UpdateStimuli() function. A user changes values in six (6) arrays of eighteen (18) elements each. The values in these arrays correspond to whether one of the 18 electrodes is on or off, and what type of pulse, and pattern of pulses, is created on each electrode. Changing these patterns changes the type and quality of the sensation perceived by the user. This library was adapted from [work by Kurt Kaczmarek](https://www.sciencedirect.com/science/article/pii/S1026309811001702) and altered for the needs of our early research at Sapien LLC.
**Tongue Position Sensing:**
During electrotactile stimulation with the Cthulhu Shield, the Arduino can quickly sense the electrical potential on a given electrode, which changes if the tongue is in contact with the electrode or not. Placing your tongue on certain electrodes but not others, or swiping your tongue across different electrodes, can be detected by the Arduino, which can then send serial information (or with minor hacking, keystrokes and HID signals) to a computer or smartphone via the USB port, or external Bluetooth modules.
Currently, tongue-position sensing is supported only on ADC-enabled Arduino pins (A0-A5 on the Uno and Mega). Position sensing on digital-only pins should be possible and may be implemented in the near future.
# How to Use this Repository:
If you are new to Github, Sparkfun has created an [excellent Github tutorial](https://learn.sparkfun.com/tutorials/using-github/all). If you want to get up and running quickly, just take a look at the [Download ZIP](https://learn.sparkfun.com/tutorials/using-github/all#download-zip) section.
Similarly, if you are new to Arduino, Sparkfun once again has a great [guide to installing Arduino Libraries](https://learn.sparkfun.com/tutorials/installing-an-arduino-library). After downloading the .ZIP file of the library you want from Github, follow the instructions above to integrate it with the Arduino IDE.
Please see the README files in the examples linked above for more information on their specific implementations!
| 1 |
bennylut/hello-angular2-universal-j2v8 | A simple example of using angular-universal with java backend using J2V8 | null | # Angular-Universal with Java backend using J2V8
This repository contains a simple (and very initial) example of using Angular Universal with a Java backend via J2V8.
## Features:
- Rendering with Angular Universal from Java using J2V8
- Serving both the application and other REST endpoints from Java using sparkjava
- Basic live-reload support for the universal server build
- Linux x64 only
- Multi-node (each in its own thread) rendering
- Java-level cache for increasing performance (using the cache assumes that the render function is pure)
- The renderer itself (not including the usage example) depends on J2V8 and Guava
## TODO:
- Fetch J2V8 from maven central
- Support other platforms
- Performance tests
- Documentation and code-cleanup
- Organize project structure
- Implement a more complex client side application
- ~~Check if can model the bootstrap configuration object in java in order to remove the need for server.js completely~~
- ~~Cleanup and improvement of the JavaEngine (currently almost blindly based on the expressEngine)~~
- ~~Remove the direct gson dependency~~
- ~~Expose the cache through configuration~~
- ~~Make the configuration object api fluid~~
## Requirements
- x64 Linux (tested on ubuntu 16.04)
- Java 8
- Maven
- NPM
## Running Instructions
1. Clone the repository
2. Install node dependencies (`npm install`)
3. Build the java server(`mvn clean package`)
4. Build&Watch angular-universal + angular client side code (`npm start`)
5. Execute the java server (`mvn -e exec:java -Dexec.mainClass="hello.ngu.j2v8.Server"`)
6. Open your browser on `http://localhost:3000/app/`
## Links
- [Angular-Universal](https://github.com/angular/universal)
- [J2V8](https://github.com/eclipsesource/J2V8)
- [Spark-Java](http://sparkjava.com/)
- [Angular 2 Universal starter kit](https://github.com/angular/universal-starter)
### Thanks and credits
- The Angular-Universal-related files are based on Angular 2 Universal starter kit with very small modifications
- Special thanks to @irbull for the great J2V8 library and his help.
| 1 |