full_name stringlengths 7 104 | description stringlengths 4 725 ⌀ | topics stringlengths 3 468 ⌀ | readme stringlengths 13 565k ⌀ | label int64 0 1 |
|---|---|---|---|---|
sdaschner/coffee | Yet another coffee shop example project | istio javaee kubernetes microprofile | null | 1 |
gradle/gradle-build-scan-quickstart | An example project to experience the Build Scan® service of Develocity with Gradle builds. | null | # Build Scan® quickstart
This is an example project that you can use to experience the [Build Scan® service of Develocity][gradle.com].
It is a small Java project that has the [Develocity Gradle Plugin][manual] already applied.
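For reference, applying the plugin in your own build typically looks something like the following in `settings.gradle`. The plugin id, version, and DSL names vary by Develocity release, so treat this as an illustrative sketch rather than this project's actual configuration:

```groovy
// Illustrative only - plugin id/version and DSL depend on the Develocity release in use.
plugins {
    id 'com.gradle.develocity' version '3.17'
}

develocity {
    buildScan {
        // Publishing requires agreeing to the terms of use; this quickstart
        // instead prompts for agreement interactively on the command line.
        termsOfUseUrl = 'https://gradle.com/help/legal-terms-of-use'
        termsOfUseAgree = 'yes'
    }
}
```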
## Create a Build Scan®
Follow these simple steps to create and publish a Build Scan® on [scans.gradle.com][scans.gradle.com]:
1. Clone this project
1. Run `./gradlew build --scan`
1. Agree to the [Terms of Service][terms-of-service] on the command line
The build should end with something similar to:
Publishing build scan...
https://gradle.com/s/ria2s2x5oaazq
Follow the green link shown at the end of the build to view your Build Scan® on [scans.gradle.com][scans.gradle.com].
Note: If you run a build without the `--scan` flag, no Build Scan® will be created and
no information will be sent.
## Experiment with Build Scans
Create different kinds of Build Scans by locally modifying this quickstart project. Here are some ideas:
- Edit `src/main/java/example/Example.java` to introduce compile errors
- Edit `src/test/java/example/ExampleTest.java` to introduce test failures
- Add more dependencies, more plugins, and more projects
Alternatively, enable one of your own builds to produce Build Scans by following the [step-by-step instructions][scans.gradle.com].
## Learn more
Read the [Develocity Gradle Plugin User Manual][manual] to learn more about the Build Scan® service of Develocity and the Develocity Gradle Plugin.
## Need help?
Talk to us on the [Gradle forum][gradle-forum].
If you are completely new to the Gradle Build Tool, start [here][gradle-download].
## License
The Build Scan® quickstart project is open-source software released under the [Apache 2.0 License][apache-license].
[apache-license]: https://www.apache.org/licenses/LICENSE-2.0.html
[gradle-download]: https://gradle.org/install/
[manual]: https://docs.gradle.com/enterprise/gradle-plugin/
[gradle.com]: https://www.gradle.com
[terms-of-service]: https://gradle.com/terms-of-service
[scans.gradle.com]: https://scans.gradle.com/
[gradle-forum]: https://discuss.gradle.org/c/help-discuss/scans | 1 |
quux00/hive-json-schema | Tool to generate a Hive schema from a JSON example doc | null | # Overview
The best tool for using JSON docs with Hive is [rcongiu's openx Hive-JSON-Serde](https://github.com/rcongiu/Hive-JSON-Serde). When using that JSON SerDe, you define your Hive schema based on the contents of the JSON.
Hive schemas understand arrays, maps and structs. You can map a JSON array to a Hive array and a JSON "object" to either a Hive map or struct. I prefer to map JSON objects to structs.
This tool will take a curated JSON document and generate the Hive schema (CREATE TABLE statement) for use with the openx Hive-JSON-Serde. I say "curated" because you should ensure that every possible key is present (with some arbitrary value of the right data type) and that all arrays have at least one entry.
If the curated JSON example you provide has more than one entry in an array, *only the first one will be examined*, so you should ensure that it has all the fields.
For more information on using the openx Hive-JSON-SerDe, see my [blog post entry](http://thornydev.blogspot.com/2013/07/querying-json-records-via-hive.html).
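The recursive inference described above can be sketched in a few lines. This is a hypothetical illustration, not the tool's actual source (`HiveTypeSketch` and its method names are invented here): JSON objects become Hive structs, and arrays take their element type from the first entry only, which is exactly why the curated doc needs a complete first element. Parsed JSON is represented with plain stdlib collections to keep the sketch dependency-free.

```java
import java.util.*;

public class HiveTypeSketch {
    // Infer a Hive type string from a "parsed JSON" value
    // (Map = JSON object, List = JSON array, scalars otherwise).
    public static String hiveType(Object v) {
        if (v instanceof Map<?, ?> m) {
            // JSON object -> struct<key:type, ...>; keys lowercased and sorted,
            // since Hive column/field names are case-insensitive
            StringJoiner sj = new StringJoiner(", ", "struct<", ">");
            for (Map.Entry<?, ?> e : new TreeMap<>(m).entrySet()) {
                sj.add(e.getKey().toString().toLowerCase() + ":" + hiveType(e.getValue()));
            }
            return sj.toString();
        }
        if (v instanceof List<?> l) {
            // JSON array -> array<type of FIRST entry>; later entries are ignored,
            // mirroring the behavior called out above
            return "array<" + hiveType(l.get(0)) + ">";
        }
        if (v instanceof Boolean) return "boolean";
        if (v instanceof Integer) return "int";
        if (v instanceof Long) return "bigint";
        if (v instanceof Double || v instanceof Float) return "double";
        return "string";
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("wibble", "123");
        doc.put("wobble", List.of(Map.of("entry", 1)));
        System.out.println(hiveType(doc));
        // prints: struct<wibble:string, wobble:array<struct<entry:int>>>
    }
}
```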
# Build
mvn package
Creates `json-hive-schema-1.0.jar` and `json-hive-schema-1.0-jar-with-dependencies.jar` in the `target` directory.
# Usage
#### with the non-executable jar
java -cp target/json-hive-schema-1.0.jar net.thornydev.JsonHiveSchema file.json
# optionally specify the name of the table
java -cp target/json-hive-schema-1.0.jar net.thornydev.JsonHiveSchema file.json my_table_name
#### with the executable jar
java -jar target/json-hive-schema-1.0-jar-with-dependencies.jar file.json
java -jar target/json-hive-schema-1.0-jar-with-dependencies.jar file.json my_table_name
Both print the Hive schema to stdout.
#### Example:
Suppose I have the JSON document:
{
"description": "my doc",
"foo": {
"bar": "baz",
"quux": "revlos",
"level1" : {
"l2string": "l2val",
"l2struct": {
"level3": "l3val"
}
}
},
"wibble": "123",
"wobble": [
{
"entry": 1,
"EntryDetails": {
"details1": "lazybones",
"details2": 414
}
},
{
"entry": 2,
"EntryDetails": {
"details1": "entry 123"
}
}
]
}
I recommend distilling it down to a doc with a single entry in each array and all possible keys filled in - the values don't matter as long as they are present and a type can be determined.
So for the curated version of the JSON I've removed one of the entries from the "wobble" array and ensured that the remaining one has all the fields:
{
"description": "my doc",
"foo": {
"bar": "baz",
"quux": "revlos",
"level1" : {
"l2string": "l2val",
"l2struct": {
"level3": "l3val"
}
}
},
"wibble": "123",
"wobble": [
{
"entry": 1,
"EntryDetails": {
"details1": "lazybones",
"details2": 414
}
}
]
}
Now generate the schema:
$ java -jar target/json-hive-schema-1.0-jar-with-dependencies.jar in.json TopQuark
CREATE TABLE TopQuark (
description string,
foo struct<bar:string, level1:struct<l2string:string, l2struct:struct<level3:string>>, quux:string>,
wibble string,
wobble array<struct<entry:int, entrydetails:struct<details1:string, details2:int>>>)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';
You can then load your data into Hive and run queries like this:
hive > select wobble.entry, wobble.EntryDetails.details1, wobble.EntryDetails[0].details2 from TopQuark;
entry details1 details2
[1,2] ["lazybones","entry 123"] 414
Time taken: 15.665 seconds
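As the output above shows, selecting fields through the `wobble` array yields arrays of values. A common follow-up (standard Hive, not part of this tool) is to flatten the array with `LATERAL VIEW explode` so each struct becomes its own row:

```sql
-- Flatten the wobble array: one row per array element
SELECT w.entry, w.entrydetails.details1, w.entrydetails.details2
FROM TopQuark
LATERAL VIEW explode(wobble) exploded AS w;
```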
| 1 |
mark-watson/java_practical_semantic_web | Code examples for my book Practical Semantic Web Programming (Java | Scala | null | 0 |
ozlerhakan/java9-module-examples | a list of Java 9 module samples to dive into the modular world | java java9 java9-jigsaw jigsaw modularity module serviceloader | null | 0 |
link-intersystems/blog | Example code and projects used in our blogs. | null | Travis CI
=========
[](https://travis-ci.org/link-intersystems/blog)
blog
====
This repository contains example code and projects used in our blogs:
- [Clean Architecture Example In Pure Java](https://github.com/link-intersystems/clean-architecture-example)
- [Anemic vs. Rich Domain Models](http://www.link-intersystems.com/blog/2011/10/01/anemic-vs-rich-domain-models/)
- [The MVC pattern implemented with java swing](http://www.link-intersystems.com/blog/2013/07/20/the-mvc-pattern-implemented-with-java-swing/)
- [A plug-in architecture implemented with java](https://www.link-intersystems.com/blog/2016/01/02/a-plug-in-architecture-implemented-with-java/)
- [Separation of api and implementation](http://www.link-intersystems.com/blog/2012/02/26/separation-of-api-and-implementation/)
- [Custom swing component renderers](http://www.link-intersystems.com/blog/2014/10/19/custom-swing-component-renderers/)
- [Singleton implementation pitfalls](http://www.link-intersystems.com/blog/2015/05/01/singleton-implementation-pitfalls/)
| 1 |
gorbin/ASNETutorial | Simple example project for https://github.com/gorbin/ASNE library | null | ASNETutorial [](https://android-arsenal.com/details/3/921) [](http://www.codeproject.com/Articles/815900/Android-social-network-integration)
============

Simple example project for https://github.com/gorbin/ASNE library
Today, social network integration is common practice in Android applications: it lets users log in to your app easily and share their actions. There are many ways to do it - usually developers add the native SDK or use the API of each network, which provides login via the installed social network application or native dialogs. You have to spend time and nerves learning and using the different social network SDKs.
What if you need to add one more social network to your application? Sometimes you have to reorganize or redo all your integrations. This leads to the idea of creating and implementing a common interface for all social networks. Fortunately, there is an open source modular library, [ASNE](https://github.com/gorbin/ASNE), that lets you choose the social networks you need and provides the full SDK and a common interface for the most often used requests (login, share, friends list, etc.). It saves your time and simplifies adding other networks in the future. Moreover, you can easily add any other social network as a new module, the same way it's done in the existing modules.
In this tutorial you will learn how to easily integrate Facebook, Twitter and LinkedIn into an Android application using [ASNE modules](https://github.com/gorbin/ASNE). This is a very basic tutorial covering login, sharing a link and showing the friends list.
## Registering apps - getting keys for your application
In order to use social networks in your application you need keys to make API calls. So register a new application with each social network and get the keys. These short tutorials show how:
- [Facebook](https://github.com/gorbin/ASNE/wiki/Create-Facebook-App)
- [Twitter](https://github.com/gorbin/ASNE/wiki/Create-Twitter-App)
- [LinkedIn](https://github.com/gorbin/ASNE/wiki/Create-LinkedIn-App)
To continue you will need:
- Facebook App ID
- Twitter consumer key and consumer secret
- LinkedIn consumer key and consumer secret
## Integrating Facebook, Twitter and LinkedIn into your application
1. Create new Project in Android Studio
2. Let's save our social network keys in `values/strings.xml`
**strings.xml**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/res/values/strings.xml))
```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
<string name="app_name">ASNE-tutorial</string>
<string name="facebook_app_id">
1646388738920557
</string>
<string name="twitter_consumer_key">
BBQAUAVKYzmYtvEcNhUEvGiKd
</string>
<string name="twitter_consumer_secret">
byZzHPxE1tkGmnPEj5zUyc7MG464Q1LgNRcwbBJV1Ap86575os
</string>
<string name="linkedin_consumer_key">
75ubsp337ll7sf
</string>
<string name="linkedin_consumer_secret">
8DVk4hi3wvEyzjbh
</string>
</resources>
```
3. Add permissions and meta-data - open the `AndroidManifest.xml` file, add uses-permission entries for INTERNET and ACCESS_NETWORK_STATE, and add the meta-data for Facebook (the app ID key)
**AndroidManifest.xml**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/AndroidManifest.xml))
```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="asne_tutorial.githubgorbin.com.asne_tutorial" >
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name=".MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<meta-data
android:name="com.facebook.sdk.ApplicationId"
android:value="@string/facebook_app_id"/>
</application>
</manifest>
```
4. Set dependencies for [asne-modules](https://github.com/gorbin/ASNE):
Open _Project Structure_ => choose your module and open _Dependencies_ => _Add new library dependency_

Then search for `asne` and add **asne-facebook, asne-twitter, asne-linkedin**

or just add them manually to `build.gradle`
**build.gradle**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/build.gradle))
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 19
buildToolsVersion '20.0.0'
defaultConfig {
applicationId "asne_tutorial.githubgorbin.com.asne_tutorial"
minSdkVersion 10
targetSdkVersion 19
versionCode 1
versionName "1.0"
}
buildTypes {
release {
runProguard false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support:appcompat-v7:20.0.0'
compile 'com.github.asne:asne-facebook:0.3.1'
compile 'com.github.asne:asne-linkedin:0.3.1'
compile 'com.github.asne:asne-twitter:0.3.1'
}
```
5. Let's create some layouts
Just login buttons in the main fragment
**main_fragment.xml**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/res/layout/main_fragment.xml))
```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#FFCCCCCC">
<Button
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="Login via Facebook"
android:id="@+id/facebook"
android:layout_gravity="center_horizontal"
android:background="#3b5998"
android:layout_margin="8dp"
android:textColor="#ffffffff" />
<Button
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="Login via Twitter"
android:id="@+id/twitter"
android:layout_gravity="center_horizontal"
android:background="#55ACEE"
android:layout_margin="8dp"
android:textColor="#ffffffff"/>
<Button
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="Login via LinkedIn"
android:id="@+id/linkedin"
android:layout_gravity="center_horizontal"
android:background="#287bbc"
android:layout_margin="8dp"
android:textColor="#ffffffff"/>
</LinearLayout>
```
Create simple profile card for user
**profile_fragment.xml**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/res/layout/profile_fragment.xml))
```xml
<?xml version="1.0" encoding="utf-8"?>
<ScrollView
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="@color/grey_light">
<RelativeLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:layout_margin="8dp"
android:id="@+id/frame"
android:background="@color/dark">
<RelativeLayout
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:layout_margin="3dp"
android:id="@+id/card"
android:background="#FFFFFF">
<ImageView
android:layout_width="100dp"
android:layout_height="100dp"
android:id="@+id/imageView"
android:layout_margin="8dp"
android:padding="2dp"
android:background="@color/grey_light"
android:layout_alignParentTop="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:src="@drawable/user"
android:adjustViewBounds="true"
android:cropToPadding="true"
android:scaleType="centerCrop"/>
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAppearance="?android:attr/textAppearanceLarge"
android:text="NoName"
android:maxLines="3"
android:singleLine="false"
android:id="@+id/name"
android:padding="8dp"
android:layout_alignTop="@+id/imageView"
android:layout_toRightOf="@+id/imageView"
android:layout_toEndOf="@+id/imageView"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="null"
android:maxLines="3"
android:singleLine="false"
android:id="@+id/id"
android:padding="8dp"
android:layout_below="@+id/name"
android:layout_alignLeft="@+id/name"
android:layout_alignStart="@+id/name" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text=""
android:id="@+id/info"
android:padding="8dp"
android:layout_marginBottom="4dp"
android:layout_below="@+id/imageView"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true" />
</RelativeLayout>
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/buttonLayout"
android:layout_below="@+id/card"
android:layout_alignParentLeft="true"
android:layout_alignParentRight="true"
android:gravity="center"
android:background="@color/grey_light">
<Button
android:layout_width="match_parent"
android:layout_height="match_parent"
android:text="Friends"
android:id="@+id/friends"
android:padding="8dp"
android:background="@color/dark"
android:layout_marginRight="1dp"
android:layout_weight="1"
android:textColor="#ffffffff"/>
<Button
android:layout_width="match_parent"
android:layout_height="match_parent"
android:text="Share"
android:id="@+id/share"
android:padding="8dp"
android:background="@color/dark"
android:layout_weight="1"
android:textColor="#ffffffff"/>
</LinearLayout>
</RelativeLayout>
</ScrollView>
```
and save the social network colors in
**color.xml**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/res/values/colors.xml))
```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="grey_light">#FFCCCCCC</color>
<color name="dark">#4b4b4b</color>
<color name="facebook">#3b5998</color>
<color name="twitter">#55ACEE</color>
<color name="linkedin">#287bbc</color>
</resources>
```
6. Let's set up `MainActivity.java`. We need to override the `onActivityResult` method to catch responses after requesting login
**MainActivity.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/MainActivity.java))
```java
public static final String SOCIAL_NETWORK_TAG = "SocialIntegrationMain.SOCIAL_NETWORK_TAG";
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
Fragment fragment = getSupportFragmentManager().findFragmentByTag(SOCIAL_NETWORK_TAG);
if (fragment != null) {
fragment.onActivityResult(requestCode, resultCode, data);
}
}
```
After login, each social network's login form sends a result to `onActivityResult`; we catch it and pass it to our `SocialNetworkManager`, which delivers it to the right `SocialNetwork`
7. Create `MainFragment.java` and begin a transaction for this fragment in `MainActivity.java`
**MainActivity.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/MainActivity.java))
```java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (savedInstanceState == null) {
getSupportFragmentManager().beginTransaction()
.add(R.id.container, new MainFragment())
.commit();
}
}
```
8. Integrating any social network is simple:
* Get `SocialNetworkManager`
```java
mSocialNetworkManager = (SocialNetworkManager) getFragmentManager().findFragmentByTag(MainActivity.SOCIAL_NETWORK_TAG);
```
* Get the keys from `strings.xml` - note that the Facebook appId was already used in `AndroidManifest.xml`
```java
String TWITTER_CONSUMER_KEY = getActivity().getString(R.string.twitter_consumer_key);
String TWITTER_CONSUMER_SECRET = getActivity().getString(R.string.twitter_consumer_secret);
String TWITTER_CALLBACK_URL = "oauth://ASNE";
String LINKEDIN_CONSUMER_KEY = getActivity().getString(R.string.linkedin_consumer_key);
String LINKEDIN_CONSUMER_SECRET = getActivity().getString(R.string.linkedin_consumer_secret);
String LINKEDIN_CALLBACK_URL = "https://asneTutorial";
```
* Create chosen `SocialNetworks` with permissions
```java
ArrayList<String> fbScope = new ArrayList<String>();
fbScope.addAll(Arrays.asList("public_profile", "email", "user_friends"));
FacebookSocialNetwork fbNetwork = new FacebookSocialNetwork(this, fbScope);
// permissions for twitter in developer twitter console
TwitterSocialNetwork twNetwork = new TwitterSocialNetwork(this, TWITTER_CONSUMER_KEY, TWITTER_CONSUMER_SECRET, TWITTER_CALLBACK_URL);
String linkedInScope = "r_basicprofile+r_fullprofile+rw_nus+r_network+w_messages+r_emailaddress+r_contactinfo";
LinkedInSocialNetwork liNetwork = new LinkedInSocialNetwork(this, LINKEDIN_CONSUMER_KEY, LINKEDIN_CONSUMER_SECRET, LINKEDIN_CALLBACK_URL, linkedInScope);
```
* If `SocialNetworkManager` is null, create it and add the `SocialNetwork`s to it
```java
mSocialNetworkManager = new SocialNetworkManager();
mSocialNetworkManager.addSocialNetwork(fbNetwork);
mSocialNetworkManager.addSocialNetwork(twNetwork);
mSocialNetworkManager.addSocialNetwork(liNetwork);
//Initiate every network from mSocialNetworkManager
getFragmentManager().beginTransaction().add(mSocialNetworkManager, MainActivity.SOCIAL_NETWORK_TAG).commit();
mSocialNetworkManager.setOnInitializationCompleteListener(this);
```
don't forget to implement `SocialNetworkManager.OnInitializationCompleteListener`
* If `SocialNetworkManager` already exists - e.g. it was initialized in another fragment - get all the initialized social networks and add the necessary listeners to them
```java
if(!mSocialNetworkManager.getInitializedSocialNetworks().isEmpty()) {
List<SocialNetwork> socialNetworks = mSocialNetworkManager.getInitializedSocialNetworks();
for (SocialNetwork socialNetwork : socialNetworks) {
socialNetwork.setOnLoginCompleteListener(this);
}
}
```
don't forget to implement `OnLoginCompleteListener`
* Now we need to catch the callback fired after the `SocialNetwork`s are initialized
```java
@Override
public void onSocialNetworkManagerInitialized() {
for (SocialNetwork socialNetwork : mSocialNetworkManager.getInitializedSocialNetworks()) {
socialNetwork.setOnLoginCompleteListener(this);
initSocialNetwork(socialNetwork);
}
}
```
don't forget to implement `OnLoginCompleteListener`
The full `onCreateView` and `onSocialNetworkManagerInitialized` from MainFragment, initializing the networks and setting listeners on the buttons:
**MainFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/MainFragment.java))
```java
public static SocialNetworkManager mSocialNetworkManager;
/**
* SocialNetwork Ids in ASNE:
* 1 - Twitter
* 2 - LinkedIn
* 3 - Google Plus
* 4 - Facebook
* 5 - Vkontakte
* 6 - Odnoklassniki
* 7 - Instagram
*/
public static final int TWITTER = 1;
public static final int LINKEDIN = 2;
public static final int FACEBOOK = 4;
private Button facebook;
private Button twitter;
private Button linkedin;
public MainFragment() {
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View rootView = inflater.inflate(R.layout.main_fragment, container, false);
((MainActivity)getActivity()).getSupportActionBar().setTitle(R.string.app_name);
// init buttons and set Listener
facebook = (Button) rootView.findViewById(R.id.facebook);
facebook.setOnClickListener(loginClick);
twitter = (Button) rootView.findViewById(R.id.twitter);
twitter.setOnClickListener(loginClick);
linkedin = (Button) rootView.findViewById(R.id.linkedin);
linkedin.setOnClickListener(loginClick);
//Get Keys for initiate SocialNetworks
String TWITTER_CONSUMER_KEY = getActivity().getString(R.string.twitter_consumer_key);
String TWITTER_CONSUMER_SECRET = getActivity().getString(R.string.twitter_consumer_secret);
String LINKEDIN_CONSUMER_KEY = getActivity().getString(R.string.linkedin_consumer_key);
String LINKEDIN_CONSUMER_SECRET = getActivity().getString(R.string.linkedin_consumer_secret);
//Chose permissions
ArrayList<String> fbScope = new ArrayList<String>();
fbScope.addAll(Arrays.asList("public_profile", "email", "user_friends"));
String linkedInScope = "r_basicprofile+rw_nus+r_network+w_messages";
//Use manager to manage SocialNetworks
mSocialNetworkManager = (SocialNetworkManager) getFragmentManager().findFragmentByTag(SOCIAL_NETWORK_TAG);
//Check if manager exist
if (mSocialNetworkManager == null) {
mSocialNetworkManager = new SocialNetworkManager();
//Init and add to manager FacebookSocialNetwork
FacebookSocialNetwork fbNetwork = new FacebookSocialNetwork(this, fbScope);
mSocialNetworkManager.addSocialNetwork(fbNetwork);
//Init and add to manager TwitterSocialNetwork
TwitterSocialNetwork twNetwork = new TwitterSocialNetwork(this, TWITTER_CONSUMER_KEY, TWITTER_CONSUMER_SECRET);
mSocialNetworkManager.addSocialNetwork(twNetwork);
//Init and add to manager LinkedInSocialNetwork
LinkedInSocialNetwork liNetwork = new LinkedInSocialNetwork(this, LINKEDIN_CONSUMER_KEY, LINKEDIN_CONSUMER_SECRET, linkedInScope);
mSocialNetworkManager.addSocialNetwork(liNetwork);
//Initiate every network from mSocialNetworkManager
getFragmentManager().beginTransaction().add(mSocialNetworkManager, SOCIAL_NETWORK_TAG).commit();
mSocialNetworkManager.setOnInitializationCompleteListener(this);
} else {
//if manager exist - get and setup login only for initialized SocialNetworks
if(!mSocialNetworkManager.getInitializedSocialNetworks().isEmpty()) {
List<SocialNetwork> socialNetworks = mSocialNetworkManager.getInitializedSocialNetworks();
for (SocialNetwork socialNetwork : socialNetworks) {
socialNetwork.setOnLoginCompleteListener(this);
initSocialNetwork(socialNetwork);
}
}
}
return rootView;
}
private void initSocialNetwork(SocialNetwork socialNetwork){
if(socialNetwork.isConnected()){
switch (socialNetwork.getID()){
case FACEBOOK:
facebook.setText("Show Facebook profile");
break;
case TWITTER:
twitter.setText("Show Twitter profile");
break;
case LINKEDIN:
linkedin.setText("Show LinkedIn profile");
break;
}
}
}
@Override
public void onSocialNetworkManagerInitialized() {
//when init SocialNetworks - get and setup login only for initialized SocialNetworks
for (SocialNetwork socialNetwork : mSocialNetworkManager.getInitializedSocialNetworks()) {
socialNetwork.setOnLoginCompleteListener(this);
initSocialNetwork(socialNetwork);
}
}
```

9. Request login for every social networks
```java
SocialNetwork socialNetwork = mSocialNetworkManager.getSocialNetwork(networkId);
socialNetwork.requestLogin();
```
The full `OnClickListener` loginClick checks whether the social network is connected and, if it is, shows `ProfileFragment.java` on click
**MainFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/MainFragment.java))
```java
private View.OnClickListener loginClick = new View.OnClickListener() {
@Override
public void onClick(View view) {
int networkId = 0;
switch (view.getId()){
case R.id.facebook:
networkId = FACEBOOK;
break;
case R.id.twitter:
networkId = TWITTER;
break;
case R.id.linkedin:
networkId = LINKEDIN;
break;
}
SocialNetwork socialNetwork = mSocialNetworkManager.getSocialNetwork(networkId);
if(!socialNetwork.isConnected()) {
if(networkId != 0) {
socialNetwork.requestLogin();
MainActivity.showProgress(socialNetwork, "Loading social person");
} else {
Toast.makeText(getActivity(), "Wrong networkId", Toast.LENGTH_LONG).show();
}
} else {
startProfile(socialNetwork.getID());
}
}
};
```
10. After the social network's login form we get a callback, `onLoginSuccess(int networkId)` or `onError(int networkId, String requestID, String errorMessage, Object data)` - let's show the profile on success and a Toast on error
**MainFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/MainFragment.java))
```java
@Override
public void onLoginSuccess(int networkId) {
MainActivity.hideProgress();
startProfile(networkId);
}
@Override
public void onError(int networkId, String requestID, String errorMessage, Object data) {
MainActivity.hideProgress();
Toast.makeText(getActivity(), "ERROR: " + errorMessage, Toast.LENGTH_LONG).show();
}
private void startProfile(int networkId){
ProfileFragment profile = ProfileFragment.newInstannce(networkId);
getActivity().getSupportFragmentManager().beginTransaction()
.addToBackStack("profile")
.replace(R.id.container, profile)
.commit();
}
```
11. In `ProfileFragment.java` get networkId from `MainFragment.java`
**ProfileFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/ProfileFragment.java))
```java
public static ProfileFragment newInstannce(int id) {
ProfileFragment fragment = new ProfileFragment();
Bundle args = new Bundle();
args.putInt(NETWORK_ID, id);
fragment.setArguments(args);
return fragment;
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
networkId = getArguments().containsKey(NETWORK_ID) ? getArguments().getInt(NETWORK_ID) : 0;
// ... inflate profile_fragment.xml, bind the views and return the root view
}
```
12. Now, via the `networkId`, we can get the social network and request the current user's profile:
```java
socialNetwork = MainFragment.mSocialNetworkManager.getSocialNetwork(networkId);
socialNetwork.setOnRequestCurrentPersonCompleteListener(this);
socialNetwork.requestCurrentPerson();
```
don't forget to implement `OnRequestSocialPersonCompleteListener`
13. After the request completes we can use the `SocialPerson` data to fill our profile view
**ProfileFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/ProfileFragment.java))
```java
@Override
public void onRequestSocialPersonSuccess(int i, SocialPerson socialPerson) {
MainActivity.hideProgress();
name.setText(socialPerson.name);
id.setText(socialPerson.id);
String socialPersonString = socialPerson.toString();
String infoString = socialPersonString.substring(socialPersonString.indexOf("{")+1, socialPersonString.lastIndexOf("}"));
info.setText(infoString.replace(", ", "\n"));
Picasso.with(getActivity())
.load(socialPerson.avatarURL)
.into(photo);
}
@Override
public void onError(int networkId, String requestID, String errorMessage, Object data) {
MainActivity.hideProgress();
Toast.makeText(getActivity(), "ERROR: " + errorMessage, Toast.LENGTH_LONG).show();
}
```

14. For logout you just need to use
```java
socialNetwork.logout();
getActivity().getSupportFragmentManager().popBackStack();
```
15. Truly, that's all - we have integrated Facebook, Twitter and LinkedIn and fetched the user profile. You can add other social networks, like Instagram or Google Plus, just by adding a dependency for them and adding them to `SocialNetworkManager` as in step 8:
```java
GooglePlusSocialNetwork gpNetwork = new GooglePlusSocialNetwork(this);
mSocialNetworkManager.addSocialNetwork(gpNetwork);
InstagramSocialNetwork instagramNetwork = new InstagramSocialNetwork(this, INSTAGRAM_CLIENT_KEY, INSTAGRAM_CLIENT_SECRET, instagramScope);
mSocialNetworkManager.addSocialNetwork(instagramNetwork);
```
And of course you can use any of the other requests with them as well.
16. In this tutorial we make two more requests: **Share link** and **Get user friends list**
Let's **share** a simple link via a social network:
* Setup share button
```java
share = (Button) rootView.findViewById(R.id.share);
share.setOnClickListener(shareClick);
```
* To share, we fill a bundle and request posting the link
```java
Bundle postParams = new Bundle();
postParams.putString(SocialNetwork.BUNDLE_LINK, link);
socialNetwork.requestPostLink(postParams, message, postingComplete);
```
* And of course some actions to callback
```java
private OnPostingCompleteListener postingComplete = new OnPostingCompleteListener() {
@Override
public void onPostSuccessfully(int socialNetworkID) {
Toast.makeText(getActivity(), "Sent", Toast.LENGTH_LONG).show();
}
@Override
public void onError(int socialNetworkID, String requestID, String errorMessage, Object data) {
Toast.makeText(getActivity(), "Error while sending: " + errorMessage, Toast.LENGTH_LONG).show();
}
};
```
* So `OnClickListener` shareClick is
**ProfileFragment.java**(full [source](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/ProfileFragment.java))
```java
private View.OnClickListener shareClick = new View.OnClickListener() {
@Override
public void onClick(View view) {
AlertDialog.Builder ad = alertDialogInit("Would you like to post Link:", link);
ad.setPositiveButton("Post link", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
if(networkId != MainFragment.TWITTER){
Bundle postParams = new Bundle();
postParams.putString(SocialNetwork.BUNDLE_LINK, link);
socialNetwork.requestPostLink(postParams, message, postingComplete);
} else {
socialNetwork.requestPostMessage(message + " " + link, postingComplete);
}
}
});
ad.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int i) {
dialog.cancel();
}
});
ad.setOnCancelListener(new DialogInterface.OnCancelListener() {
public void onCancel(DialogInterface dialog) {
dialog.cancel();
}
});
ad.create().show();
}
};
private AlertDialog.Builder alertDialogInit(String title, String message){
AlertDialog.Builder ad = new AlertDialog.Builder(getActivity());
ad.setTitle(title);
ad.setMessage(message);
ad.setCancelable(true);
return ad;
}
```

Here we build a standard alert dialog to notify the user that we want to share a link. In the positive button handler we check whether the network is Twitter (the Twitter API has no method to post a link, but we can post a message composed of the message plus the link).
Let's get the **friends list** via a social network:
* Get social network id
* Get the `SocialNetwork` from the id and request the friends list
```java
SocialNetwork socialNetwork = MainFragment.mSocialNetworkManager.getSocialNetwork(networkId);
socialNetwork.setOnRequestGetFriendsCompleteListener(this);
socialNetwork.requestGetFriends();
```
Don't forget to implement `OnRequestGetFriendsCompleteListener`.
* Get response
```java
@Override
public void OnGetFriendsIdComplete(int id, String[] friendsID) {
((MainActivity)getActivity()).getSupportActionBar().setTitle(friendsID.length + " Friends");
}
@Override
public void OnGetFriendsComplete(int networkID, ArrayList<SocialPerson> socialPersons) {
MainActivity.hideProgress();
FriendsListAdapter adapter = new FriendsListAdapter(getActivity(), socialPersons, networkID);
listView.setAdapter(adapter);
}
@Override
public void onError(int networkId, String requestID, String errorMessage, Object data) {
MainActivity.hideProgress();
Toast.makeText(getActivity(), "ERROR: " + errorMessage, Toast.LENGTH_LONG).show();
}
```

More details can be found in [**FriendsFragment.java**](https://github.com/gorbin/ASNETutorial/blob/master/app/src/main/java/com/github/gorbin/asnetutorial/FriendsFragment.java)
## Conclusion
Using ASNE modules you can easily and quickly integrate popular social networks and use common requests in your app. Of course, the library has [more methods](https://github.com/gorbin/ASNE/wiki/SocialNetwork-methods) which you can use in your application. And in case you want to call social network methods from the SDK or API directly, you can easily get access tokens or instances of the main objects in your app.
This is a simple tutorial demo; if you need something more complex, [check the ASNE demo app](https://github.com/gorbin/ASNE)
[Codeproject article](http://www.codeproject.com/Articles/815900/Android-social-network-integration)
Source code:
[Zip](https://github.com/gorbin/ASNETutorial/archive/master.zip)
| 1 |
anvil-ui/anvil-examples | Short and descriptive examples for the Anvil MVC framework | null | # Anvil samples
[Anvil][1] is a tiny reactive UI library for Android. Inspired by [React][2]
and [Mithril][3], it brings declarative data binding, unidirectional data flow
and componentization, and other features that make your code cleaner
and easier to maintain.
This repository contains small examples of how Anvil can be used.
## Example projects
* [Hello][4] - simple static layout with a classical text message
- how to start Anvil project
- how to write layouts without XML
* [Counter][5] - simple click counter
- how to bind variables to views
- how to bind event listeners to views
- how easy it is to keep the UI in sync with data (automatic rendering)
* [Login form][6] - two input fields, push button and some logic behind them
- how to use text watcher bindings
- how to use Java 8 method references as event bindings
* [Item picker][7] - animated item picker component with next/prev buttons
- how to use animations
- how to use states
- how to use currentView() to get access to the real View object
- how to use Java8 lambdas in Anvil
* [Currency exchange app][8] - fetches latest currency rates from the backend, calculates converted values as you type.
- how to separate model logic from the view logic
- how to separate view styling from view hierarchy
- how to bind adapters
- how to get two-directional data binding for text input
* [Countdone clone][9] (current Anvil example) - pomodoro-like app: define how long the task should take and see if you finish it in time
- how to use backstack having just one activity
- how to save component state
- how to use custom fonts and icon fonts
* [Todo app][11] - classical MVC example: add tasks, check tasks, remove checked tasks
- how to use list adapters
- how the same app would look with [Java 7][10], [Java 8][11] and [Kotlin][12]
[1]: https://github.com/zserge/anvil/
[2]: http://facebook.github.io/react/
[3]: http://mithril.js.org/
[4]: ./hello/
[5]: ./counter/
[6]: ./login/
[7]: ./anim-picker/
[8]: ./currency/
[9]: ./countdone/
[10]: ./todo/
[11]: ./todo-java8/
[12]: ./todo-kotlin/
| 0 |
springapidev/java-certification | The examples are based on possible Java certification questions and will also improve your Java skills | java | # java-certification
The examples are based on possible Java certification questions and will also improve your Java skills.
| 0 |
kpbird/fused-location-provider-example | Fused Location Provider Example | null | fused-location-provider-example
===============================
Fused Location Provider Example
| 1 |
fernandospr/spring-jetty-example | Spring MVC 4 example application | null | Spring MVC Embedded Jetty Example
=================================
Basic Spring MVC 4 application using embedded Jetty 9 server. No-xml configuration.
Includes API REST service, Freemarker and JSP examples.
Integration with Mongo DB.
Requirements
------------
* [Java Platform (JDK) 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html)
* [Apache Maven 3.x](http://maven.apache.org/)
| 1 |
matsim-org/matsim-maas | This project contains a collection of examples to run (Autonomous) Mobility as a Service in MATSim. | null |
# MATSim – Mobility as a Service
[](https://travis-ci.org/matsim-org/matsim-maas)
This project contains a collection of examples to simulate Mobility as a Service (MaaS) and Mobility on Demand (MoD) in MATSim. All services may be simulated with a driver or using Autonomous Vehicles (AVs). The basic framework for these services is provided by the [Dynamic Vehicle Routing Problem (DVRP)](https://github.com/matsim-org/matsim/tree/master/contribs/dvrp), [Autonomous Vehicles](https://github.com/matsim-org/matsim/tree/master/contribs/av), [Taxi](https://github.com/matsim-org/matsim/tree/master/contribs/taxi) and [Demand Responsive Transport (DRT)](https://github.com/matsim-org/matsim/tree/master/contribs/drt) extensions. This means vehicles will be dispatched on-line while the MATSim mobility simulation is running.

The main goal of the code in this repository is to provide examples of different usage scenarios for MaaS / AV services and make them easy to access in one single place, while the actual optimizer code remains in the MATSim contributions. All the examples run on the current MATSim snapshot, as there are continuous improvements to the functionality.
## Functionality
The following extensions might be of particular interest:
### Taxi
The centralized dispatch of taxi services may be simulated using different algorithms, which may be set in the config file. An overview of different taxi dispatch strategies and their performance is provided in:
*M. Maciejewski; J. Bischoff, & K. Nagel* **An Assignment-Based Approach to Efficient Real-Time City-Scale Taxi Dispatching**, IEEE Intelligent Systems, 2016, 31, 68-77 [Available here](http://svn.vsp.tu-berlin.de/repos/public-svn/publications/vspwp/2016/16-12/)
### Robotaxi / Shared Autonomous Vehicles
Shared Autonomous Vehicles provide the capability to simulate large fleets of automated taxi vehicles. This is done by combining a fast, rule-based dispatch algorithm with the possibility to adjust the consumed road capacity of Autonomous Vehicles.
The algorithm and results are presented in
*J. Bischoff, M. Maciejewski* **Simulation of city-wide replacement of private cars with autonomous taxis in Berlin**, Procedia Computer Science, 2016, 83, https://doi.org/10.1016/j.procs.2016.04.121
The effects of different road capacity use are described in
*M. Maciejewski, J. Bischoff* **Congestion Effects of Autonomous Taxi Fleets**; Transport, 2017, 0, 1-10, [Full text available here](http://dx.doi.org/10.14279/depositonce-7693)
### Demand Responsive Transport (DRT)
Demand Responsive Transport allows the pooling of several passengers into a single vehicle. Several constraints, such as maximum travel times and maximum waiting times, can be taken into account when organizing the vehicle dispatch.
Please find a full documentation [here](drt.md).
### Autonomous Vehicles
All MaaS extensions may be simulated with and without drivers. Arguably, the biggest influences on AV operations are road capacity and pricing.
* Road capacity can be influenced using the *AVCapacityModule*
* Pricing of MaaS modes can be influenced using Standard MATSim scoring parameters.
## Common infrastructure
With DVRP being the common base to all the modules described here, there is some common infrastructure all of the MaaS modules share:
* The *DVRP config* group. In this, both the leg mode of an agent using MaaS (such as "taxi") and the network mode vehicles use (such as "car") can be defined. Furthermore, the on-line travel time calculator can be adapted, if required.
* The *Vehicles Container*. This is a file containing information about the fleet used by any MaaS extension. Fleet vehicle files can be created using the *CreateFleetVehicles* script.
## Test scenarios
The [scenarios](scenarios/) folder contains several test scenarios. These are roughly derived from existing MATSim scenarios, but often depict only the excerpt with relevance to MaaS of the scenario.
## How to use
1) Check out this project using your favorite git client or just download as a zip. As for the latter, you can download:
- the [development version](https://github.com/matsim-org/matsim-maas/archive/master.zip), which is running using the latest MATSim development snapshot
- one of the [releases](https://github.com/matsim-org/matsim-maas/releases), which is running using the official MATSim releases
Using the latest release will give you relatively stable results, whereas using master will provide more features, though some of them are not thoroughly tested.
2) Import the folder as a new Maven project to Eclipse (Import --> Maven --> Existing project) or intelliJ (New --> Module from existing sources --> Select the folder --> Maven)
3) Run the example classes and start editing them according to your taste. You can also run `RunMaasGui` to launch a simple GUI application for running MaaS simulations.
| 0 |
imotov/elasticsearch-native-script-example | Example of Now Deprecated Native Script Plugin for Elasticsearch | null | null | 1 |
akka/alpakka-samples | Example projects building Reactive Integrations using Alpakka | akka akka-streams alpakka reactive-streams | # Alpakka samples
[Alpakka documentation](https://docs.akka.io/docs/alpakka/)
Akka is licensed under the Business Source License 1.1, please see the [Akka License FAQ](https://www.lightbend.com/akka/license-faq). | 1 |
mkuthan/example-spring | Example Spring project | bdd ddd spring tdd | null | 1 |
erikrozendaal/cqrs-lottery | Java example Domain-Driven-Design Command-Query Responsibility Separation | null | null | 1 |
bezkoder/spring-boot-security-jwt-auth-mongodb | Build Spring Boot MongoDB JWT Authentication & Authorization example with Spring Security, Spring Data | authentication authorization jwt-authentication mongodb rest-api restful-api spring-boot spring-data spring-data-mongodb spring-jwt-authentication spring-security | # Spring Boot, Spring Security, MongoDB - JWT Authentication & Authorization example
- Appropriate Flow for User Signup & User Login with JWT Authentication
- Spring Boot Application Architecture with Spring Security
- How to configure Spring Security to work with JWT
- How to define Data Models and association for Authentication and Authorization
- Way to use Spring Data MongoDB to interact with MongoDB Database
## User Registration, Login and Authorization process.

## Spring Boot Rest API Architecture with Spring Security
You can have an overview of our Spring Boot Server with the diagram below:

For more detail, please visit:
> [Spring Boot, MongoDB: JWT Authentication with Spring Security](https://bezkoder.com/spring-boot-jwt-auth-mongodb/)
> [Using HttpOnly Cookie](https://www.bezkoder.com/spring-boot-mongodb-login-example/)
Working with Front-end:
> [Vue](https://www.bezkoder.com/jwt-vue-vuex-authentication/)
> [Angular 8](https://www.bezkoder.com/angular-jwt-authentication/) / [Angular 10](https://www.bezkoder.com/angular-10-jwt-auth/) / [Angular 11](https://www.bezkoder.com/angular-11-jwt-auth/) / [Angular 12](https://www.bezkoder.com/angular-12-jwt-auth/) / [Angular 13](https://www.bezkoder.com/angular-13-jwt-auth/)
> [React](https://www.bezkoder.com/react-jwt-auth/) / [React Redux](https://www.bezkoder.com/react-redux-jwt-auth/)
More Practice:
> [Spring Boot with MongoDB CRUD example using Spring Data](https://www.bezkoder.com/spring-boot-mongodb-crud/)
> [Spring Boot MongoDB Pagination & Filter example](https://www.bezkoder.com/spring-boot-mongodb-pagination/)
> [Spring Boot + GraphQL + MongoDB example](https://www.bezkoder.com/spring-boot-graphql-mongodb-example-graphql-java/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Rest Controller Unit Test with @WebMvcTest](https://www.bezkoder.com/spring-boot-webmvctest/)
> Validation: [Spring Boot Validate Request Body](https://www.bezkoder.com/spring-boot-validate-request-body/)
> Documentation: [Spring Boot and Swagger 3 example](https://www.bezkoder.com/spring-boot-swagger-3/)
> Caching: [Spring Boot Redis Cache example](https://www.bezkoder.com/spring-boot-redis-cache-example/)
Fullstack:
> [Vue.js + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-vue-mongodb/)
> [Angular 8 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-spring-boot-mongodb/)
> [Angular 10 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-10-spring-boot-mongodb/)
> [Angular 11 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-11-spring-boot-mongodb/)
> [Angular 12 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-12-spring-boot-mongodb/)
> [Angular 13 + Spring Boot + MongoDB example](https://www.bezkoder.com/angular-13-spring-boot-mongodb/)
> [Angular 14 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-14-mongodb/)
> [Angular 15 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-15-mongodb/)
> [Angular 16 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-16-mongodb/)
> [React + Spring Boot + MongoDB example](https://www.bezkoder.com/react-spring-boot-mongodb/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://www.bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React with Spring Boot Rest API](https://www.bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue with Spring Boot Rest API](https://www.bezkoder.com/integrate-vue-spring-boot/)
## Run Spring Boot application
```
mvn spring-boot:run
```
| 1 |
damico/OpenPgp-BounceCastle-Example | This is an OpenPgp + BounceCastle, Java Example, for education. | null | OpenPgp-BounceCastle-Example
============================
This is an OpenPGP + BouncyCastle Java example, for education. Check the 3 test methods inside the class org.jdamico.bc.openpgp.tests.TestBCOpenPGP:
* genKeyPair()
* encrypt()
* decrypt()
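For orientation, the general shape of `genKeyPair()` can be sketched with the standard JCA API, which BouncyCastle plugs into as a security provider. This is only an illustrative sketch using plain RSA via the default provider, not the tutorial's actual OpenPGP code (the class and method names here are our own):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class KeyPairSketch {

    // Generate an RSA key pair. With BouncyCastle registered you could request
    // the "BC" provider instead and wrap the result in OpenPGP key classes.
    public static KeyPair generate(int bits) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(bits);
            return gen.generateKeyPair();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("RSA is required by every JRE", e);
        }
    }

    public static void main(String[] args) {
        KeyPair pair = generate(2048);
        System.out.println("Algorithm: " + pair.getPrivate().getAlgorithm());
    }
}
```

The `encrypt()` and `decrypt()` test methods then use such a key pair through BouncyCastle's OpenPGP classes; see the test class above for the real code.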
------
This source code was based on the examples found inside the BouncyCastle API and on the demonstration found here: http://sloanseaman.com/wordpress/2011/08/11/pgp-encryptiondecryption-in-java/
| 0 |
crnk-project/crnk-example | example application from crnk | null | # crnk example application
[](https://travis-ci.org/crnk-project/crnk-example)
[](https://gitter.im/crnk-io/Lobby)
[](https://github.com/crnk-project/crnk-framework/blob/master/LICENSE.txt)
This is a Spring-based example application showcasing the use of [Crnk](https://github.com/crnk-project/crnk-framework).
Further smaller example applications integrating into various frameworks can be found at
[crnk-integration-examples](https://github.com/crnk-project/crnk-framework/tree/master/crnk-integration-examples).
*WARNING: this example project is still in development and subject to various improvements, see roadmap*
## Requirements
Crnk requires Java 8 or later.
## Licensing
Crnk is licensed under the Apache License, Version 2.0.
You can grab a copy of the license at http://www.apache.org/licenses/LICENSE-2.0.
## Building from Source
Crnk makes use of Gradle for its build. To build the project run
gradlew build
## Running the application
In order to run this example do:
gradlew run
or
docker run --name=crnk -p 8080:8080 crnk/example
The JSON API endpoint will be available at:
http://localhost:8080/api/
Some further URLs to play around that show the power of Crnk:
http://127.0.0.1:8080/api/movie
http://127.0.0.1:8080/api/movie/44cda6d4-1118-3600-9cab-da760bfd678c
http://127.0.0.1:8080/api/movie/44cda6d4-1118-3600-9cab-da760bfd678c/project
http://127.0.0.1:8080/api/movie/44cda6d4-1118-3600-9cab-da760bfd678c/relationships/project
http://127.0.0.1:8080/api/movie?sort=-name
http://127.0.0.1:8080/api/movie?sort=-id,name
http://127.0.0.1:8080/api/movie?sort=id&page[offset]=0&page[limit]=2
http://127.0.0.1:8080/api/movie?filter[name]=Iron Man
http://127.0.0.1:8080/api/movie?filter[name][EQ]=Iron Man
http://127.0.0.1:8080/api/movie?filter[name][LIKE]=Iron
http://127.0.0.1:8080/api/schedule
http://127.0.0.1:8080/api/meta/resource
http://127.0.0.1:8080/api/vote?fields=name // demos fields set & performance issues
http://127.0.0.1:8080/api/secrets // demos error
http://127.0.0.1:8080/api/facet?filter[resourceType]=movie // get movie facets
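The query parameters above are plain URL parameters, so they can also be assembled programmatically. A small JDK-only sketch (the helper name is ours, not part of Crnk; a Crnk client would normally express this through a `QuerySpec` instead):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CrnkUrlSketch {

    // Build e.g. http://127.0.0.1:8080/api/movie?filter%5Bname%5D%5BLIKE%5D=Iron
    public static String filterUrl(String base, String attribute, String operator, String value) {
        String key = "filter[" + attribute + "][" + operator + "]";
        return base + "?" + URLEncoder.encode(key, StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(filterUrl("http://127.0.0.1:8080/api/movie", "name", "LIKE", "Iron"));
    }
}
```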
## IDE
Make sure to enable annotation processing support when using IntelliJ IDEA. Otherwise it will
not be able to find the generated sources from the Crnk annotation processor (type-safe Query objects).
## Build Setup
The project makes use of https://github.com/rmee/gradle-plugins/ for the build setup.
- if no JAVA_HOME is configured (recommended), a suitable JDK will be downloaded automatically
by the `jdk-bootstrap` plugin.
- `src/main/helm` holds a Helm chart to deploy to Kubernetes.
- Deployment to Kubernetes is triggered by the `deploy` task. All the deployment is confined
to Docker images and a project-specific home directory located in `build/home`. No installation
of any tooling necessary thanks to the plugins in use. Further wrapper scripts like `./kubectl`
allow to use this deployment setup from a shell (GitBash, Linux, etc.). For deployment
`CRNK_GCLOUD_REGION`, `CRNK_GCLOUD_PROJECT`, `CRNK_GCLOUD_CLUSTER` environment variables must
be set and credentials be available in `crnk-example-service/secrets/gcloud.key`.
## Pointers
The `crnk-example-service` project showcases:
- Use of Lombok to avoid getter/setter boiler-plate.
- Integration of Crnk into Spring Boot
- `io.crnk:crnk-format-plain-json` has been applied for a slightly simplified version of JSON:API without
`attributes`, `relationships`, `includes` section.
- A simple in-memory repository with `ScreeningRepository` that keeps all resources in a map.
- Exposing entities with crnk-jpa using `MovieRepository`, `PersonRepository`, etc. extending `JpaEntityRepositoryBase`.
Behind the scenes the `QuerySpec` is translated to an efficient JPA Criteria query.
- A manually written repository with `VoteRepository`. It makes use of Thread.sleep to simulate heavy work.
- A custom exception is introduced with `CustomExceptionMapper` that is mapped to a JSON API error and HTTP status code.
- using `@JsonApiRelationId` with `ScreeningRepository` to
handle use cases where the related ID is easy to get, which in turn allows
to omit having to implement relationship repositories.
- implementing a relationship repository with `AttributeChangeRelationshipRepository`.
- implementing a multi-valued nested resource with `Secret` and its identifier `SecretId`.
- implementing a single-valued nested resource with `SecreeningStatus`. It shares the same ID as the screening itself
and makes use of `SerializeType.EAGER` to directly be shown with the screening (see
http://127.0.0.1:8080/api/screening)
- introducing new relationships to existing resources
without touching those resources with `AttributeChangeFieldProvider`.
- `PersonEntity` as dynamic resource by annotating a `Map`-based field with `@JsonAnyGetter` and `@JsonAnySetter`
- `SecurityConfiguration` performs a OAuth setup with GitHub as provider.
`LoginRepository` gives access to information about the logged-in user through http://localhost:8080/api/user.
*Enable spring security in the `application.yaml`* to make use of the security features.
*Security is disabled by default* to facilitate playing with the example app.
The security setup is still work in progress.
- `CSRF` (resp. `XSRF` in Angular terminology) protection through `SpringSecurityConfiguration`.
- `ExampleSecurityConfigurer` to setup role-based access control.
- `ScheduleDecoratorFactory` to intercept and modify requests to repositories.
- The documentation gets generated to `build/asciidoc` upon executing `gradlew build`. Have a look at the
`build.gradle` file and the capturing based on `AsciidocCaptureModule` within the test cases.
- Support for facetted search by applying `crnk-data-facets`. `MovieEntity.year` has been marked as facetted with `@Facet`.
See `http://127.0.0.1:8080/api/facet?filter[resourceType]=movie`.
- `MovieRepository` provides an interface for the `MovieRepositoryImpl` which allows type-safe access
to movie result lists in `CrnkClient`.
- `MovieRepository` makes use of `HasMoreResourcesMetaInformation` through a custom `MovieList` type. This
triggers the use of a previous/next paging strategy, rather than always computing the total count
in a second, potentially expensive query.
The `TestDataLoader` will automatically setup some test data upon start.
The project itself makes use of an number of third-party plugins to bootstrap a JDK, build Helm packages and
allow a Kubernetes installation to Google Cloud. For more information see https://github.com/rmee/gradle-plugins/.
Feedback and PRs are very welcome!
## Links
* [Homepage](http://www.crnk.io)
* [Documentation](http://www.crnk.io/releases/stable/documentation/)
* [Source code](https://github.com/crnk-project/crnk-example/)
* [Issue tracker](https://github.com/crnk-project/crnk-example/issues)
* [Forum](https://gitter.im/crnk-io/Lobby)
* [Build](https://travis-ci.org/crnk-project/crnk-example/)
| 1 |
infinum/Dagger-2-Example | Dagger 2 example project | null | # Dagger 2 example
An example project showing how to use Dagger 2.
It uses [Pokeapi](http://pokeapi.co/docs/) to show a list of pokemons with details.
# Credits
Maintained and sponsored by
[Infinum](http://www.infinum.co).
<img src="https://infinum.co/infinum.png" width="264">
| 1 |
soyjuanmalopez/clean-architecture | A example of clean architecture in Java 8 and Spring Boot 2.0 | actors arquitecture clean clean-arch clean-architecture clean-code hibernate java-8 lombok maven multiproject spring spring-boot template video | Clean Architecture example
Multiproject Maven.
Explanation
https://medium.com/swlh/clean-architecture-java-spring-fea51e26e00
My other articles
Resilience in Java applications
https://medium.com/swlh/future-proofing-your-java-applications-understanding-the-power-of-resilience-patterns-with-cfbafdcfdc86
Concurrency in Java applications with examples
https://medium.com/swlh/conquering-concurrency-in-spring-boot-strategies-and-solutions-152f41dd9005
*Compile* </br>
mvn clean install
*Run* </br>
mvn spring-boot:run
| 1 |
mschwartau/keycloak-custom-protocol-mapper-example | An example for building custom keycloak protocol mappers | null | # Keycloak custom protocol mapper example / customize JWT tokens
By default, [Keycloak](https://www.keycloak.org/) writes a lot of things into the [JWT tokens](https://tools.ietf.org/html/rfc7519),
e.g. the preferred username. If that is not enough, a lot of additional built in protocol mappers can be added to customize
the [JWT token](https://tools.ietf.org/html/rfc7519) created by [Keycloak](https://www.keycloak.org/) even further. They can be added in the client
section via the mappers tab (see the [documentation](https://www.keycloak.org/docs/latest/server_admin/index.html#_protocol-mappers)). But sometimes the built-in protocol mappers are not enough. If this is the case, a custom protocol mapper can be added to [Keycloak](https://www.keycloak.org/) via a (not yet)
official [service provider API](https://www.baeldung.com/java-spi). This project shows how this can be done.
## Entrypoints into this project
1. [data-setup](data-setup): Project to configure [Keycloak](https://www.keycloak.org/) via its REST API. Configures a realm so that it uses the example
protocol mapper. Contains a [main method](data-setup/src/main/java/hamburg/schwartau/datasetup/bootstrap/DataSetupMain.java) which can be executed against a
running [Keycloak](https://www.keycloak.org/) instance. Doesn't need to be executed manually because it's executed automatically by
the `docker-entrypoint.sh` during startup.
2. [protocol-mapper](protocol-mapper): Contains the protocol mapper code. The resulting jar file will be deployed to [Keycloak](https://www.keycloak.org/). I
tried to explain things needed in comments in the [protocol-mapper project](protocol-mapper)
3. [Dockerfile](Dockerfile): Adds the jar file containing the [protocol mapper](protocol-mapper/src/main/java/hamburg/schwartau/HelloWorldMapper.java), created
by the [protocol-mapper project](protocol-mapper), to the keycloak instance.
## Try it out
To try it out do the following things:
### Configuration of Keycloak
1. If you have already started this project and changed something, execute `docker-compose down -v` so
that the volumes and so on are destroyed. Otherwise the old Keycloak in-memory
database might be reused or you might not see your changed data.
2. Build and start Keycloak using Docker: `docker-compose up --build`.
3. After the keycloak has been started, the [main class `DataSetupMain`](data-setup/src/main/java/hamburg/schwartau/datasetup/bootstrap/DataSetupMain.java) in
our [data-setup](data-setup) module should be started automatically by the `docker-entrypoint.sh` in the Dockerfile and should add some example data to the
keycloak instance. You should see the message `The data has been imported` in the console if it has been executed successfully.
4. Now you can open the [Keycloak admin console](http://localhost:11080/auth/admin/) and login with username / password: admin / password.
This initial password for the admin user was configured in our [docker-compose](docker-compose.yml) file.
5. You should see that the master realm and an example realm, which was added automatically by the [data-setup](data-setup) module, currently exist. For this example
realm the [hello world mapper](protocol-mapper/src/main/java/hamburg/schwartau/HelloWorldMapper.java) is
configured (in [clients=>example-realm-client=>Client scopes=>dedicated](http://localhost:11080/auth/admin/master/console/#/example-realm/clients/example-realm-client/clientScopes/dedicated)): 
Now [Keycloak](https://www.keycloak.org/) is configured. As a next step we want to check the token.
### Checking the access token
To check the token, we need to login. To get the tokens using the direct flow (not recommended for production usage, just for easy demo purposes. See
this [page](https://auth0.com/docs/api-auth/which-oauth-flow-to-use)) execute the following curl command:
curl -d 'client_id=example-realm-client' -d 'username=jdoe' -d 'password=password' -d 'grant_type=password' 'http://localhost:11080/auth/realms/example-realm/protocol/openid-connect/token'
Note that using the direct flow is only possible because we configured keycloak to allow it in
the [`RealmSetup` class](data-setup/src/main/java/hamburg/schwartau/datasetup/bootstrap/RealmSetup.java).
Response should be like:
{
"access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJYbl9PXzN6VHJpSjBzOE5RUzlpMVpBcF9pZVN2YXRwOHRIWmtpTGNwM1RrIn0.eyJleHAiOjE2NzExMzMzMjMsImlhdCI6MTY3MTEzMzAyMywianRpIjoiYTcwYjA4NjQtNmI3Mi00MjljLTliMDEtZWIzNzBhMTE5YTgzIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDoxMTA4MC9hdXRoL3JlYWxtcy9leGFtcGxlLXJlYWxtIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjkxMmNkZmJhLWNlNGQtNDgzMS04NjA3LWQzM2VmOTkzOTdmYyIsInR5cCI6IkJlYXJlciIsImF6cCI6ImV4YW1wbGUtcmVhbG0tY2xpZW50Iiwic2Vzc2lvbl9zdGF0ZSI6ImRlMzljN2M2LTM0ZWMtNGM4MC1iZTM2LWIyODE4YTkxMjMyYyIsImFjciI6IjEiLCJyZWFsbV9hY2Nlc3MiOnsicm9sZXMiOlsiZGVmYXVsdC1yb2xlcy1leGFtcGxlLXJlYWxtIiwib2ZmbGluZV9hY2Nlc3MiLCJ1bWFfYXV0aG9yaXphdGlvbiJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsInNpZCI6ImRlMzljN2M2LTM0ZWMtNGM4MC1iZTM2LWIyODE4YTkxMjMyYyIsImVtYWlsX3ZlcmlmaWVkIjpmYWxzZSwibmFtZSI6IkpvaG4gRG9lIiwiZ3JvdXBzIjpbXSwicHJlZmVycmVkX3VzZXJuYW1lIjoiamRvZSIsImdpdmVuX25hbWUiOiJKb2huIiwiZmFtaWx5X25hbWUiOiJEb2UiLCJleGFtcGxlIjp7Im1lc3NhZ2UiOiJoZWxsbyB3b3JsZCJ9fQ.wZI33cy6X2yxnsz1HeU3snrPi8xg1Pq8TiNIxPfP-RLtPQm5-3of9kTFXNvtZkA2Om3rzlI_NfyYy8eq4VArujVvvkKx5oxGZ0Q9Tv6LU0ufS4YfW0t0oAbEdNmONBXUszcl_HKX_5Pnvbs7DwR04ErAmzguECnky9hdYy0nJREnfrTwr6Ss270H8HaQ-DJ1T4x-iFzuwRkQZTg_PUfRxts0tjsIRehFPxadLujj4ZpsguvfXqCD11Gb4a2xXSm6S2iDP8sa_zwaWCbRDraBUCcEy192hADDNVDBQPYgUe-0Sj7z_mPNviEiMagAmBFCj8W-czkEWwnX_WodeVThWA",
"expires_in": 300,
"refresh_expires_in": 1800,
"refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICIzODE5NTdiZi1jMzI0LTQ3M2UtOTA4MS1lN2MxODVmMzllYjUifQ.eyJleHAiOjE2NzExMzQ4MjMsImlhdCI6MTY3MTEzMzAyMywianRpIjoiYWM4Njk0NzktYzY2Yi00YWIwLWIzYzQtZDc1ZjU0NWZmOTk3IiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDoxMTA4MC9hdXRoL3JlYWxtcy9leGFtcGxlLXJlYWxtIiwiYXVkIjoiaHR0cDovL2xvY2FsaG9zdDoxMTA4MC9hdXRoL3JlYWxtcy9leGFtcGxlLXJlYWxtIiwic3ViIjoiOTEyY2RmYmEtY2U0ZC00ODMxLTg2MDctZDMzZWY5OTM5N2ZjIiwidHlwIjoiUmVmcmVzaCIsImF6cCI6ImV4YW1wbGUtcmVhbG0tY2xpZW50Iiwic2Vzc2lvbl9zdGF0ZSI6ImRlMzljN2M2LTM0ZWMtNGM4MC1iZTM2LWIyODE4YTkxMjMyYyIsInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsInNpZCI6ImRlMzljN2M2LTM0ZWMtNGM4MC1iZTM2LWIyODE4YTkxMjMyYyJ9.AKWuXIuq__KzZC32GrGlhbDe_gZkyQsqKRSIDBKSgJQ",
"token_type": "Bearer",
"not-before-policy": 0,
"session_state": "de39c7c6-34ec-4c80-be36-b2818a91232c",
"scope": "email profile"
}
Then copy the `access_token` value and decode it, e.g. by using [jwt.io](https://jwt.io/). You'll
get something like the following:
```json
{
  "exp": 1671133323,
  "iat": 1671133023,
  "jti": "a70b0864-6b72-429c-9b01-eb370a119a83",
  "iss": "http://localhost:11080/auth/realms/example-realm",
  "aud": "account",
  "sub": "912cdfba-ce4d-4831-8607-d33ef99397fc",
  "typ": "Bearer",
  "azp": "example-realm-client",
  "session_state": "de39c7c6-34ec-4c80-be36-b2818a91232c",
  "acr": "1",
  "realm_access": {
    "roles": [
      "default-roles-example-realm",
      "offline_access",
      "uma_authorization"
    ]
  },
  "resource_access": {
    "account": {
      "roles": [
        "manage-account",
        "manage-account-links",
        "view-profile"
      ]
    }
  },
  "scope": "email profile",
  "sid": "de39c7c6-34ec-4c80-be36-b2818a91232c",
  "email_verified": false,
  "name": "John Doe",
  "groups": [],
  "preferred_username": "jdoe",
  "given_name": "John",
  "family_name": "Doe",
  "example": {
    "message": "hello world"
  }
}
```
The value of our own [Hello World Token mapper](protocol-mapper/src/main/java/hamburg/schwartau/HelloWorldMapper.java) was added to the token, as you can see from the message 'hello world' in the `example.message` field.
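The decoding step can also be done in a few lines of plain Java: the claims are simply the Base64URL-encoded middle segment of the token. A minimal sketch (the class name is hypothetical, and no signature verification is performed, so use it for inspection only):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decodes the payload (claims) segment of a JWT without verifying the signature.
// For illustration only -- in a real application, verify the signature first.
public class JwtPayloadDecoder {

    public static String decodePayload(String jwt) {
        String[] segments = jwt.split("\\.");            // header.payload.signature
        byte[] json = Base64.getUrlDecoder().decode(segments[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    // Builds an unsigned dummy token from a JSON payload, for demonstration.
    public static String sampleToken(String payloadJson) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + ".";
    }

    public static void main(String[] args) {
        String token = sampleToken("{\"example\":{\"message\":\"hello world\"}}");
        System.out.println(decodePayload(token));        // prints {"example":{"message":"hello world"}}
    }
}
```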
## Acknowledgements
- Examples for [Keycloak](https://www.keycloak.org/): https://github.com/keycloak/keycloak/tree/master/examples
- I got the idea for how to add a custom protocol mapper to [Keycloak](https://www.keycloak.org/) from
this [jboss mailing list entry](http://lists.jboss.org/pipermail/keycloak-user/2016-February/004891.html)
## Links
- To use Keycloak with an Angular app, I found this example app to be helpful: https://github.com/manfredsteyer/angular-oauth2-oidc
- Login page for users: [http://localhost:11080/auth/realms/example-realm/account](http://localhost:11080/auth/realms/example-realm/account)
| 1 |
joshlong-attic/spring-and-kafka | Example code for my blog introducing Spring and Apache Kafka | null | # Apache Kafka and Spring Integration, Spring XD, and the Lattice Distributed Runtime
Applications generate more data than ever before, and a huge part of the challenge - before it can even be analyzed - is accommodating the load in the first place. [Apache Kafka](http://kafka.apache.org) meets this challenge. It was originally designed at LinkedIn and subsequently open-sourced in 2011. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. The design is heavily influenced by transaction logs. It is a messaging system, similar to traditional messaging systems like RabbitMQ, ActiveMQ, MQSeries, but it's ideal for log aggregation, persistent messaging, fast (_hundreds_ of megabytes per second!) reads and writes, and can accommodate numerous clients. Naturally, this makes it _perfect_ for cloud-scale architectures!
Kafka [powers many large production systems](https://cwiki.apache.org/confluence/display/KAFKA/Powered+By). LinkedIn uses it for activity data and operational metrics to power the LinkedIn news feed, and LinkedIn Today, as well as offline analytics going into Hadoop. Twitter uses it as part of their stream-processing infrastructure. Kafka powers online-to-online and online-to-offline messaging at Foursquare. It is used to integrate Foursquare monitoring and production systems with Hadoop-based offline infrastructures. Square uses Kafka as a bus to move all system events through Square's various data centers. This includes metrics, logs, custom events, and so on. On the consumer side, it outputs into Splunk, Graphite, or Esper-like real-time alerting. Netflix uses it for 300-600BN messages per day. It's also used by Airbnb, Mozilla, Goldman Sachs, Tumblr, Yahoo, PayPal, Coursera, Urban Airship, Hotels.com, and a seemingly endless list of other big-web stars. Clearly, it's earning its keep in some powerful systems!
## Installing Apache Kafka
There are many different ways to get Apache Kafka installed. If you're on OS X and you're using Homebrew, it can be as simple as `brew install kafka`. You can also [download the latest distribution from Apache](http://kafka.apache.org/downloads.html). I downloaded `kafka_2.10-0.8.2.1.tgz` and unzipped it; inside you'll find a distribution of [Apache Zookeeper](https://zookeeper.apache.org/) bundled with Kafka, so nothing else is required. I installed Apache Kafka in my `$HOME` directory, under another directory, `bin`, then created an environment variable, `KAFKA_HOME`, that points to `$HOME/bin/kafka`.
Start Apache Zookeeper first, pointing it at the configuration properties file it requires:
```
$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
```
The Apache Kafka distribution comes with default configuration files for both Zookeeper and Kafka, which makes getting started easy. In more advanced use cases you will need to customize these files.
Then start Apache Kafka. It too requires a configuration file, like this:
```
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
```
The `server.properties` file contains, among other things, default values for where to connect to Apache Zookeeper (`zookeeper.connect`), how much data should be sent across sockets, how many partitions there are by default, and the broker ID (`broker.id` - which must be unique across a cluster).
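For reference, the key entries in a default `server.properties` look roughly like this (treat the values as illustrative — the exact contents vary by Kafka version):

```properties
# Unique ID of this broker within the cluster
broker.id=0
# Port the broker listens on
port=9092
# Where the commit log segments are stored on disk
log.dirs=/tmp/kafka-logs
# Default number of partitions per topic
num.partitions=1
# Zookeeper connection string
zookeeper.connect=localhost:2181
```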
There are other scripts in the same directory that can be used to send and receive dummy data, very handy in establishing that everything's up and running!
Now that Apache Kafka is up and running, let's look at working with Apache Kafka from our application.
## Some High Level Concepts..
A Kafka _broker_ cluster consists of one or more servers where each may have one or more broker processes running. Apache Kafka is designed to be highly available; there are no _master_ nodes. All nodes are interchangeable. Data is replicated from one node to another to ensure that it is still available in the event of a failure.
In Kafka, a _topic_ is a category, similar to a JMS destination or to both an AMQP exchange and queue. Topics are partitioned, and the choice of which of a topic's partitions a message is sent to is made by the message producer. Each message in a partition is assigned a unique sequential ID, its _offset_. More partitions allow greater parallelism for consumption, but also result in more files across the brokers.
_Producers_ send messages to Apache Kafka broker topics and specify the partition to use for every message they produce. Message production may be synchronous or asynchronous. Producers also specify what sort of replication guarantees they want.
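The key-based partition choice can be pictured with a simplified sketch. Kafka's real default partitioner is more involved than this — the code below only illustrates the hash-modulo idea:

```java
// Simplified illustration of key-based partitioning: messages with the same
// key always land in the same partition, which preserves per-key ordering.
public class SimplePartitioner {

    public static int partitionFor(String key, int numPartitions) {
        // Math.abs alone is unsafe for Integer.MIN_VALUE, so mask the sign bit instead.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("user-42 -> partition " + partitionFor("user-42", 4));
        // The same key always maps to the same partition:
        System.out.println(partitionFor("user-42", 4) == partitionFor("user-42", 4)); // prints true
    }
}
```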
_Consumers_ listen for messages on topics and process the feed of published messages. As you'd expect if you've used other messaging systems, this is usually (and usefully!) asynchronous.
Like [Spring XD](http://spring.io/projects/spring-xd) and numerous other distributed systems, Apache Kafka uses Apache Zookeeper to coordinate cluster information. Apache Zookeeper provides a shared hierarchical namespace (whose entries are called _znodes_) that nodes can use to understand cluster topology and availability (yet another reason that [Spring Cloud](https://github.com/spring-cloud/spring-cloud-zookeeper) has forthcoming support for it..).
Zookeeper is very present in your interactions with Apache Kafka. Apache Kafka has, for example, two different APIs for acting as a consumer. The higher level API is simpler to get started with and it handles all the nuances of handling partitioning and so on. It will need a reference to a Zookeeper instance to keep the coordination state.
Let's now turn to using Apache Kafka with Spring.
## Using Apache Kafka with Spring Integration
The recently released Spring Integration Kafka 1.1 adapter is very powerful, and provides inbound adapters for working with both the lower-level Apache Kafka API and the higher-level API.
The adapter, currently, is XML-configuration first, though work is already underway on a Spring Integration Java configuration DSL for the adapter and milestones are available. We'll look at both here, now.
To make all these examples work, I added the [libs-milestone-local Maven repository](http://repo.spring.io/simple/libs-milestone-local) and used the following dependencies:
- org.apache.kafka:kafka_2.10:0.8.1.1
- org.springframework.boot:spring-boot-starter-integration:1.2.3.RELEASE
- org.springframework.boot:spring-boot-starter:1.2.3.RELEASE
- org.springframework.integration:spring-integration-kafka:1.1.1.RELEASE
- org.springframework.integration:spring-integration-java-dsl:1.1.0.M1
### Using the Spring Integration Apache Kafka with the Spring Integration XML DSL
First, let's look at how to use the Spring Integration outbound adapter to send `Message<T>` instances from a Spring Integration flow to an external Apache Kafka instance. The example is fairly straightforward: a Spring Integration `channel` named `inputToKafka` acts as a conduit that forwards `Message<T>` messages to the outbound adapter, `kafkaOutboundChannelAdapter`. The adapter itself can take its configuration from the defaults specified in the `kafka:producer-context` element or from adapter-local configuration overrides. There may be one or many configurations in a given `kafka:producer-context` element.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
<int:channel id="inputToKafka">
<int:queue/>
</int:channel>
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter"
kafka-producer-context-ref="kafkaProducerContext"
channel="inputToKafka">
<int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
</int-kafka:outbound-channel-adapter>
<task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>
<int-kafka:producer-context id="kafkaProducerContext">
<int-kafka:producer-configurations>
<int-kafka:producer-configuration broker-list="localhost:9092"
topic="event-stream"
compression-codec="default"/>
</int-kafka:producer-configurations>
</int-kafka:producer-context>
</beans>
```
Here's the Java code from a Spring Boot application to trigger message sends using the outbound adapter by sending messages into the incoming `inputToKafka` `MessageChannel`.
```java
package xml;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.DependsOn;
import org.springframework.context.annotation.ImportResource;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
@SpringBootApplication
@EnableIntegration
@ImportResource("/xml/outbound-kafka-integration.xml")
public class DemoApplication {
private Log log = LogFactory.getLog(getClass());
@Bean
@DependsOn("kafkaOutboundChannelAdapter")
CommandLineRunner kickOff(@Qualifier("inputToKafka") MessageChannel in) {
return args -> {
for (int i = 0; i < 1000; i++) {
in.send(new GenericMessage<>("#" + i));
log.info("sending message #" + i);
}
};
}
public static void main(String args[]) {
SpringApplication.run(DemoApplication.class, args);
}
}
```
### Using the New Apache Kafka Spring Integration Java Configuration DSL
Shortly after the Spring Integration 1.1 release, Spring Integration rockstar [Artem Bilan](https://spring.io/team/artembilan) got to work [on adding a Spring Integration Java Configuration DSL analog](http://repo.spring.io/simple/libs-milestone-local/org/springframework/integration/spring-integration-java-dsl/1.1.0.M1/) and the result is a thing of beauty! It's not yet GA (you need to add the `libs-milestone` repository for now), but I encourage you to try it out and kick the tires. It's working well for me and the Spring Integration team are always keen on getting early feedback whenever possible! Here's an example that demonstrates both sending messages and consuming them from two different `IntegrationFlow`s. The producer is similar to the example XML above.
New in this example is the polling consumer. It is batch-centric, and will pull down all the messages it sees at a fixed interval. In our code, the message received is a map whose keys are the topics and whose values are maps from partition ID to the batch of records read (in this case, up to 10). There is a `MessageListenerContainer`-based alternative that processes messages as they arrive.
```java
package jc;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.SourcePollingChannelAdapterSpec;
import org.springframework.integration.dsl.kafka.Kafka;
import org.springframework.integration.dsl.kafka.KafkaHighLevelConsumerMessageSourceSpec;
import org.springframework.integration.dsl.kafka.KafkaProducerMessageHandlerSpec;
import org.springframework.integration.dsl.support.Consumer;
import org.springframework.integration.kafka.support.ZookeeperConnect;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.Map;
/**
* Demonstrates using the Spring Integration Apache Kafka Java Configuration DSL.
* Thanks to Spring Integration ninja <a href="http://spring.io/team/artembilan">Artem Bilan</a>
* for getting the Java Configuration DSL working so quickly!
*
* @author Josh Long
*/
@EnableIntegration
@SpringBootApplication
public class DemoApplication {
public static final String TEST_TOPIC_ID = "event-stream";
@Component
public static class KafkaConfig {
@Value("${kafka.topic:" + TEST_TOPIC_ID + "}")
private String topic;
@Value("${kafka.address:localhost:9092}")
private String brokerAddress;
@Value("${zookeeper.address:localhost:2181}")
private String zookeeperAddress;
KafkaConfig() {
}
public KafkaConfig(String t, String b, String zk) {
this.topic = t;
this.brokerAddress = b;
this.zookeeperAddress = zk;
}
public String getTopic() {
return topic;
}
public String getBrokerAddress() {
return brokerAddress;
}
public String getZookeeperAddress() {
return zookeeperAddress;
}
}
@Configuration
public static class ProducerConfiguration {
@Autowired
private KafkaConfig kafkaConfig;
private static final String OUTBOUND_ID = "outbound";
private Log log = LogFactory.getLog(getClass());
@Bean
@DependsOn(OUTBOUND_ID)
CommandLineRunner kickOff(
@Qualifier(OUTBOUND_ID + ".input") MessageChannel in) {
return args -> {
for (int i = 0; i < 1000; i++) {
in.send(new GenericMessage<>("#" + i));
log.info("sending message #" + i);
}
};
}
@Bean(name = OUTBOUND_ID)
IntegrationFlow producer() {
log.info("starting producer flow..");
return flowDefinition -> {
Consumer<KafkaProducerMessageHandlerSpec.ProducerMetadataSpec> spec =
(KafkaProducerMessageHandlerSpec.ProducerMetadataSpec metadata)->
metadata.async(true)
.batchNumMessages(10)
.valueClassType(String.class)
.<String>valueEncoder(String::getBytes);
KafkaProducerMessageHandlerSpec messageHandlerSpec =
Kafka.outboundChannelAdapter(
props -> props.put("queue.buffering.max.ms", "15000"))
.messageKey(m -> m.getHeaders().get(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER))
.addProducer(this.kafkaConfig.getTopic(),
this.kafkaConfig.getBrokerAddress(), spec);
flowDefinition
.handle(messageHandlerSpec);
};
}
}
@Configuration
public static class ConsumerConfiguration {
@Autowired
private KafkaConfig kafkaConfig;
private Log log = LogFactory.getLog(getClass());
@Bean
IntegrationFlow consumer() {
log.info("starting consumer..");
KafkaHighLevelConsumerMessageSourceSpec messageSourceSpec = Kafka.inboundChannelAdapter(
new ZookeeperConnect(this.kafkaConfig.getZookeeperAddress()))
.consumerProperties(props ->
props.put("auto.offset.reset", "smallest")
.put("auto.commit.interval.ms", "100"))
.addConsumer("myGroup", metadata -> metadata.consumerTimeout(100)
.topicStreamMap(m -> m.put(this.kafkaConfig.getTopic(), 1))
.maxMessages(10)
.valueDecoder(String::new));
Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer = e -> e.poller(p -> p.fixedDelay(100));
return IntegrationFlows
.from(messageSourceSpec, endpointConfigurer)
.<Map<String, List<String>>>handle((payload, headers) -> {
payload.entrySet().forEach(e -> log.info(e.getKey() + '=' + e.getValue()));
return null;
})
.get();
}
}
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
```
The example makes heavy use of Java 8 lambdas.
The producer spends a bit of time establishing how many messages will be sent in a single send operation, how keys and values are encoded (Kafka only knows about `byte[]` arrays, after all) and whether messages should be sent synchronously or asynchronously. In the next line, we configure the outbound adapter itself and then define an `IntegrationFlow` such that all messages get sent out via the Kafka outbound adapter.
The consumer spends a bit of time establishing which Zookeeper instance to connect to, how many messages to receive in a batch (10), etc. Once the message batches are received, they're handed to the `handle` method, where I've passed in a lambda that enumerates the payload's body and prints it out. Nothing fancy.
## Using Apache Kafka with Spring XD
Apache Kafka is a message bus and it can be very powerful when used as an integration bus. However, it really comes into its own because it's fast enough and scalable enough to route big data through processing pipelines. And if you're doing data processing, you really want [Spring XD](http://projects.spring.io/spring-xd/)! Spring XD makes it dead simple to use Apache Kafka (the support is built on the Apache Kafka Spring Integration adapter!) in complex stream-processing pipelines. Apache Kafka is exposed as a Spring XD _source_ - where data comes from - and a _sink_ - where data goes to.
<img src ="http://projects.spring.io/spring-xd/img/spring-xd-unified-platform-for-big-data.png" />
Spring XD exposes a super convenient DSL for creating `bash`-like pipes-and-filter flows. Spring XD is a centralized runtime that manages, scales, and monitors data processing jobs. It builds on top of Spring Integration, Spring Batch, Spring Data and Spring for Hadoop to be a one-stop data-processing shop. Spring XD Jobs read data from _sources_, run them through processing components that may count, filter, enrich or transform the data, and then write them to sinks.
Spring Integration and Spring XD ninja [Marius Bogoevici](https://twitter.com/mariusbogoevici), who did a lot of the recent work in the Spring Integration and Spring XD implementation of Apache Kafka, put together a really nice example demonstrating [how to get a full Spring XD and Kafka flow working](https://github.com/spring-projects/spring-xd-samples/tree/master/kafka-source). The `README` walks you through getting Apache Kafka, Spring XD and the requisite topics all set up. The essence, however, is using the Spring XD shell and the shell DSL to compose a stream. Spring XD components are named components that are pre-configured but have lots of parameters that you can override with `--..` arguments via the XD shell and DSL. (That DSL, by the way, is written by the amazing [Andy Clement](https://spring.io/team/aclement) of Spring Expression Language fame!) Here's an example that configures a stream to read data from an Apache Kafka source and then write the messages to a component called `log`, which is a sink. `log`, in this case, could be syslogd, Splunk, HDFS, etc.
```bash
xd> stream create kafka-source-test --definition "kafka --zkconnect=localhost:2181 --topic=event-stream | log" --deploy
```
And that's it! Naturally, this is just a taste of Spring XD, but hopefully you'll agree the possibilities are tantalizing.
## Deploying a Kafka Server with Lattice and Docker
It's easy to get an example Kafka installation set up using [Lattice](http://lattice.cf), a distributed runtime that supports, among other container formats, the very popular Docker image format. [There's a Docker image provided by Spotify that sets up a co-located Zookeeper and Kafka image](https://github.com/spotify/docker-kafka). You can easily deploy it to a Lattice cluster, as follows:
```bash
ltc create --run-as-root m-kafka spotify/kafka
```
From there, you can easily scale the Apache Kafka instances and even more easily still consume Apache Kafka from your cloud-based services.
## Next Steps
You can find the code [for this blog on my GitHub account](https://github.com/joshlong/spring-and-kafka).
We've only scratched the surface!
If you want to learn more (and why wouldn't you?), then be sure to check out Marius Bogoevici and Dr. Mark Pollack's upcoming [webinar on Reactive data-pipelines using Spring XD and Apache Kafka](https://spring.io/blog/2015/03/17/webinar-reactive-data-pipelines-with-spring-xd-and-kafka) where they'll demonstrate how easy it can be to use RxJava, Spring XD and Apache Kafka!
| 1 |
Jaouan/Article-Details-Transition-Example | It's just an example of material transition. | null | Android - Article details transition example
========
[Android Arsenal](https://android-arsenal.com/details/3/4114)
It's just some examples of material transitions.
Pop from top|Pop from item
-------------|-------------
*(demo animation)* | *(demo animation)*
References
========
- "Pop from top" transition is inspired (more or less) by [Ivan Bjelajac's transition](http://www.materialup.com/posts/article-details-transition).
- The project uses [ButterKnife](http://jakewharton.github.io/butterknife/).
License
========
[Apache License Version 2.0](LICENSE)
| 1 |
khanhnguyenj/huongdanjava.com | All example projects from https://huongdanjava.com. | null | # huongdanjava.com | 1 |
JonathanM2ndoza/Hexagonal-Architecture-DDD | Example of Hexagonal Architecture and DDD | hexagonal-architecture | # Hexagonal-Architecture-DDD
Ports and Adapters, also known as Hexagonal Architecture, is a popular architectural pattern invented by Alistair Cockburn in 2005.
This is an example of how to use Hexagonal Architecture and the basics of Domain-Driven Design (DDD).
The example is built with Spring Boot, MongoDB, and PostgreSQL.
## Domain Driven Design (DDD)
Domain-Driven Design is an approach to software development that centers the development on programming a domain model that has a rich understanding of the processes and rules of a domain.
Bounded Context is a central pattern in Domain-Driven Design. It is the focus of DDD's strategic design section which is all about dealing with large models and teams. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.

*Reference:*
- https://martinfowler.com/tags/domain%20driven%20design.html
## Hexagonal Architecture
The hexagonal architecture, or ports and adapters architecture, is an architectural pattern used in software design. It aims at creating loosely coupled application components that can be easily connected to their software environment by means of ports and adapters. This makes components exchangeable at any level and facilitates test automation.
The business logic interacts with other components through ports and adapters. This way, we can change the underlying technologies without having to modify the application core.
**The hexagonal architecture is based on three principles and techniques:**
1. Explicitly separate Application, Domain, and Infrastructure
2. Dependencies are going from Application and Infrastructure to the Domain
3. We isolate the boundaries by using Ports and Adapters
Note: The words Application, Domain and Infrastructure do not come from the original article but from the frequent use of hexagonal architecture by Domain-Driven Design practitioners.

**Note: A port in Java is an interface. An adapter is one implementation of that interface.**
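As a minimal illustration of that idea (all class names here are hypothetical, not taken from the example project): the domain owns the port, and the infrastructure supplies an adapter.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain layer: a port is just an interface owned by the domain.
interface CustomerRepository {                                    // port
    Optional<String> findNameById(String id);
}

// Infrastructure layer: an adapter implements the port. An in-memory map
// stands in here for MongoDB/PostgreSQL; swapping the adapter never
// touches the domain code.
class InMemoryCustomerRepository implements CustomerRepository {  // adapter
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String name) { store.put(id, name); }
    public Optional<String> findNameById(String id) { return Optional.ofNullable(store.get(id)); }
}

// The domain service depends only on the port, never on a concrete adapter.
public class CustomerService {
    private final CustomerRepository repository;
    public CustomerService(CustomerRepository repository) { this.repository = repository; }

    public String greet(String id) {
        return repository.findNameById(id).map(n -> "Hello, " + n).orElse("Unknown customer");
    }

    public static void main(String[] args) {
        InMemoryCustomerRepository repo = new InMemoryCustomerRepository();
        repo.save("1", "Alice");
        System.out.println(new CustomerService(repo).greet("1")); // prints Hello, Alice
    }
}
```

In the example project the adapters would be backed by MongoDB or PostgreSQL (e.g. injected by Spring Boot); the in-memory one keeps the sketch self-contained and shows why tests can substitute adapters freely.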
### Domain Layer, in the center
- The domain layer represents the inside of the application and provides ports to interact with application use cases (business logic).
- This is the part that we want to isolate from both left and right sides. It contains all the code that concerns and implements business logic (use cases).
- Because domain objects have no dependencies on other layers of the application, changes in other layers don’t affect them.
### Application Layer, on the left
- The application layer provides different adapters for outside entities to interact with the domain through the port.
- This is the side through which the user or external programs will interact with the application. It contains the code that allows these interactions. Typically, your user interface code, your HTTP routes for an API, your JSON serializations to programs that consume your application are here.
### Infrastructure Layer, on the right
- Provide adapters and server-side logic to interact with the application from the right side. Server-side entities, such as a database or other run-time devices, use these adapters to interact with the domain.
- It contains essential infrastructure details such as the code that interacts with your database, makes calls to the file system, or code that handles HTTP calls to other applications on which you depend for example.
*Reference:*
- https://en.wikipedia.org/wiki/Hexagonal_architecture_(software)
- https://dzone.com/articles/hexagonal-architecture-in-java-2
- https://blog.octo.com/en/hexagonal-architecture-three-principles-and-an-implementation-example/#principles
## Microservices Architecture

In our example, we will use the basic architecture above without an API Gateway. Customer, Product, and Order do not necessarily have to be in different databases; it depends on the bounded context. The main objective is to highlight the use of Hexagonal Architecture in the microservices code.
All microservices are implemented with Spring Boot, however microservices can be implemented with different technologies.
| 1 |
redhat-cop/businessautomation-cop | All examples related to business automation processes such as jbpm, drools, dmn, optaplanner, cloud native kogito(quarkus), quickstart, pipelines, runtimes, etc. | buisness-automation businessautomation-cop dmn dmn-examples drools drools-example jbpm jbpm-example jbpm-process jbpm-springboot kogito kogito-example kogito-quickstart optaplanner quarkus rhdm rhpam rhpam-setup rhpam-springboot rhpam7 |
# Business Automation Community of Practice
Business Automation, or Red Hat Process Automation Manager, is a portfolio that includes a web tool (Workbench/Business Central) for authoring and managing rules and building business processes.
This repository is meant to help users of the Red Hat Process Automation Manager portfolio get started building applications with Red Hat Process Automation Manager (PAM) and Red Hat Decision Manager (DM) that run on different runtimes such as JBoss Enterprise Application Platform, Spring Boot, and cloud-native Kogito (the Quarkus-based implementation of jBPM and Drools).
# Red Hat Process Automation Manager (PAM)
PAM is a platform for developing business process management (BPM),
business rules management (BRM), Decision Model and Notation (DMN), and business resource optimization and complex event processing (CEP).
# Red Hat Decision Manager (DM)
DM is a platform for developing and authoring business rules management (BRM), business resource optimization and complex event processing (CEP).
## Decision Model and Notation (DMN)
It is a standard established by the Object Management Group (OMG) for describing and modelling operational decisions.
## Business Process Management (BPM)
It's a tool to help model, analyze, measure, improve, and automate business processes and decisions, based on the jBPM framework.
## Business rules management (BRM)
It provides a core Business Rules Engine (BRE) and full runtime support for Decision Model and Notation (DMN) models.
## Cloud Native - Kogito
Kogito is a framework that compiles your business processes and business rules in a cloud-native approach, taking advantage of the latest technologies (Quarkus, Knative, etc.) to deliver amazingly fast boot times and instant scaling on orchestration platforms like Kubernetes.
## What's In This Repo?
This repo contains process automation (jBPM) and business rules (Drools and DMN) related quickstarts of several different flavours.
| 0 |
srecon/the-apache-ignite-book | All code samples, scripts and more in-depth examples for The Apache Ignite Book. Include Apache Ignite 2.6 or above | bigdata distributed-database gridgain hadoop hibernate hibernate-ogm hive ignite in-memory-caching in-memory-computations in-memory-database java memoization nosql-database spark spring-data sql streaming streaming-data | # The Apache Ignite Book
<a href="http://leanpub.com/ignitebook"><img src="https://github.com/srecon/the-apache-ignite-book/blob/master/3D_mini.png" alt="he Apache Ignite Book" height="256px" align="right"></a>
This is the code repository (code samples, scripts and more in-depth examples) for [The Apache Ignite Book](https://leanpub.com/ignitebook).
> [!IMPORTANT]
> Note that, updated examples with Apache Ignite version 2.14.x are located on **chapters-java11x** folder.
> Folder **chapters** supports older Ignite version like 2.6.0.
> Use the following JVM options to run the examples on JVM 17 or later, for instance,
>
> java --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-opens=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-opens=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED -jar ./target/HelloIgnite-runnable.jar
## Naming conventions
Each chapter in the book has a corresponding folder within the repository. Each folder contains a set of files or folders related to the sections of that chapter. For example, the listing for the _memoization_ section is in the folder _chapters/chapter-5/memoization_.
## What is this book about?
Apache Ignite is one of the most widely used open-source memory-centric distributed database, caching, and processing platforms. It allows users to employ the platform as an in-memory computing framework or as a fully functional persistent data store with SQL and ACID transaction support. On the other hand, Apache Ignite can be used for accelerating existing relational and NoSQL databases, processing events and streaming data, or developing microservices in a fault-tolerant fashion.
This book addresses anyone interested in learning in-memory computing and distributed databases. It is intended to give someone with little to no experience of Apache Ignite an opportunity to learn how to use the platform effectively from scratch, taking a practical, hands-on approach.
This book covers the following exciting features:
* Apache Ignite architecture in depth, including data distribution techniques (DHT), Rendezvous hashing, durable memory architecture, various cluster topologies, Ignite native persistence, Baseline topology, and much more.
* Apache Ignite proven use cases as a memory-centric distributed database, caching, and computing platform.
* Getting started with Apache Ignite using different tools and techniques.
* Caching strategies by example, and how to use Apache Ignite to improve application performance, including Hibernate L2 cache, MyBatis, memoization, and web session clustering.
* Using Spring Data with Apache Ignite to develop high-performance web applications.
* Ignite query capabilities (SQL, API, text, and scan queries) in depth.
* Using Spark RDDs and DataFrames to improve performance when processing fast data.
* Developing and executing distributed computations in a parallel fashion to gain high performance, low latency, and linear scalability.
* Developing distributed microservices in a fault-tolerant fashion.
* Processing events and streaming data for IoT projects, and integrating Apache Ignite with other frameworks such as Kafka, Storm, Camel, etc.
* Configuring, managing, and monitoring an Ignite cluster with built-in and third-party tools.
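One of the caching strategies listed above, memoization, can be sketched in a few lines of plain Java. This is not the book's Ignite-backed version, which would store results in an Ignite cache so they are shared across the cluster; the class and method names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Plain-Java memoization sketch: cache the results of an expensive function
// so repeated calls with the same argument are served from the cache. The
// book's Ignite-backed variant would replace the local ConcurrentHashMap
// with a cluster-wide Ignite cache; names here are illustrative.
class Memoizer {
    private final Map<Long, Long> cache = new ConcurrentHashMap<>();

    long memoized(long n, Function<Long, Long> expensive) {
        // computeIfAbsent invokes the function only on a cache miss
        return cache.computeIfAbsent(n, expensive);
    }

    public static void main(String[] args) {
        Memoizer m = new Memoizer();
        // A deliberately "expensive" computation: sum of 0..n
        Function<Long, Long> sum = n -> {
            long s = 0;
            for (long i = 0; i <= n; i++) s += i;
            return s;
        };
        System.out.println(m.memoized(100L, sum)); // computed once: 5050
        System.out.println(m.memoized(100L, sum)); // served from the cache: 5050
    }
}
```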
If you feel this book is for you, get your [copy](https://leanpub.com/ignitebook) today!
> [!TIP]
> If you are not sure whether this book is for you, I suggest you read the sample chapter, which is available in different formats including [HTML](https://leanpub.com/ignitebook/read_sample). Anyway, I encourage you to try it out; if you don't like the book, you can always ask for a 100% refund within 45 days.
## Build and install
Run the following command from the **chapters-java11x** directory
```
mvn clean install
```
We recommend a workstation with the following configurations for working with the repository:
| № | Name | Value |
|---|--------------|--------------------------------------------------------------|
| 1 | JDK | Oracle/Open JDK 11.x and above |
| 2 | OS | Linux, macOS (10.8.3 and above), Windows Vista SP2 and above |
| 3 | Network | No restriction |
| 4 | RAM | Minimum 4 GB of RAM |
| 5 | CPU | Minimum 2 cores |
| 6 | IDE | Eclipse, IntelliJ IDEA, NetBeans or JDeveloper |
| 7 | Apache Maven | Version 3.6.3 or above |
## Conventions
The code will look like the following:
```java
public class MySuperExtractor implements StreamSingleTupleExtractor<SinkRecord, String, String> {
@Override public Map.Entry<String, String> extract(SinkRecord msg) {
String[] parts = ((String)msg.value()).split("_");
return new AbstractMap.SimpleEntry<String, String>(parts[1], parts[2]+":"+parts[3]);
}
}
```
Any command-line input or output is written as follows:
```
[2018-12-30 15:39:04,479] INFO Kafka version : 2.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-12-30 15:39:04,479] INFO Kafka commitId : 3402a8361b734732 (org.apache.kafka.common.utils.AppInfoParser)
[2018-12-30 15:39:04,480] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
## This GitHub repository contains the following examples:
- Ignite example in pure Java
- Ignite example in Spring
- Ignite thin client example
- Ignite isolated cluster example
- Ignite with Hibernate example
- Ignite with MyBatis example
- Memoization example in Ignite
- Ignite web session clustering example
- Ignite SQL examples
- Ignite Query API example
- Ignite text query example
- Ignite distributed SQL JOINs example
- Ignite Spring data example
- Ignite compute grid examples
- Microservices with Ignite examples
- Ignite camel integration example
- Ignite flume integration example
- Ignite kafka integration example
- Ignite Storm integration example
- Ignite Spark RDD example
- Ignite Spark Data frame example
- Ignite Zookeeper discovery example
- Ignite Baseline by examples
- Ignite monitoring by VisualVM/Grafana example
- and much more ...
| 0 |
ITHit/WebDAVServerSamplesJava | WebDAV server examples in Java based on IT Hit WebDAV Server Library for Java | amazon-s3 java kotlin ms-ofba oracle samples server spring spring-boot springboot sql webdav |
<h1>WebDAV Server Examples, Java</h1>
<div class="description"><p style="line-height: 22px; font-size: 15px; font-weight: normal;">IT Hit WebDAV Server Library for Java is provided with several examples that demonstrate how to build a WebDAV server with SQL back-end or with file system storage. You can adapt these samples to utilize almost any back-end storage including storing data in CMS/DMS/CRM, Azure or Amazon storage.</p>
<p style="line-height: 22px; font-size: 15px; font-weight: normal;">A sample HTML page included with the samples demonstrates how to use the <a title="IT Hit WebDAV Ajax Library" href="https://www.webdavsystem.com/ajax/" target="_blank">IT Hit WebDAV Ajax Library</a> to open documents from a web page for editing, list documents and navigate the folder structure, as well as build search capabilities.</p>
<h2>Online Demo Server</h2>
<p style="line-height: 22px; font-size: 15px; font-weight: normal;"><a title="https://www.WebDAVServer.com" href="https://www.WebDAVServer.com" target="_blank">https://www.WebDAVServer.com</a></p>
<h2> Requirements</h2>
<p style="line-height: 22px; font-size: 15px; font-weight: normal;">The samples are tested with <strong><span>Java 1.8</span></strong> in the following environments:</p>
<ul>
<li style="margin-bottom: 16px;">Tomcat 7 or later</li>
<li style="margin-bottom: 16px;">Glassfish 4.1.1 or later</li>
<li style="margin-bottom: 16px;">JBoss Wildfly 9 or later or respective EAP</li>
<li style="margin-bottom: 16px;">WebLogic 12c or later</li>
<li style="margin-bottom: 16px;">WebSphere 8.5.5.11 or later</li>
<li style="margin-bottom: 16px;">Jetty 9.3.13 or later</li>
</ul>
<h2>Full-text Search and indexing</h2>
<p style="line-height: 22px; font-size: 15px; font-weight: normal;">The samples are provided with full-text search and indexing based on Apache Lucene as the indexing engine and Apache Tika as the content analysis toolkit.</p>
<p style="line-height: 22px; font-size: 15px; font-weight: normal;">The server implementation searches both file names and file content, including the content of Microsoft Office documents as well as any other documents whose format is supported by Apache Tika, such as LibreOffice, OpenOffice, PDF, etc.</p></div>
<ul class="list">
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/springboot3fsstorage">
<h2>Spring Boot WebDAV Server Example with File System Back-end, Java</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/springboot3fsstorage">
<p>
This sample provides a WebDAV server running on the Spring Boot framework with files being stored in the file system. The WebDAV requests are processed in a dedicated context, while the rest of the website processes regular HTTP requests, serving web <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/springbootoraclestorage">
<h2>Spring Boot WebDAV Server Example with Oracle Back-end, Java</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/springbootoraclestorage">
<p>
This sample provides a WebDAV server running on the Spring Boot framework. All data including file content, document structure, and custom attributes are stored in the Oracle database. The IT Hit WebDAV Ajax Library is used to display and browse serv <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/springboots3storage">
<h2>Spring Boot WebDAV Server Example with Amazon S3 Back-end, Java</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/springboots3storage">
<p>
This sample is a fully functional Class 2 WebDAV server that runs on the Spring Boot framework and stores all data in the Amazon S3 bucket. The WebDAV requests are processed on a /DAV/ context, while the rest of the website processes regular HTTP req <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/oraclestorage">
<h2>WebDAV Server Example with Oracle Back-end, Java</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/oraclestorage">
<p>
The sample provides Class 2 WebDAV server implementation that can be hosted in Apache Tomcat, GlassFish, JBoss, WebLogic, WebSphere or other compliant application server. All data including file content, documents structure and custom attributes is s <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/filesystemstorage">
<h2>WebDAV Server Example with File System Back-end, Java and Kotlin</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/filesystemstorage">
<p>
This sample is a fully functional Class 2 WebDAV server that stores all data in the file system. It utilizes file system Extended Attributes (in case of Linux and macOS) or Alternate Data Streams (in case of Windows/NTFS) to store locks and custom pr <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/collectionsync">
<h2>WebDAV Server Example with Collection Synchronization Support</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/jakarta/collectionsync">
<p>
This sample is a fully functional Class 2 WebDAV server with collection synchronization support (RFC 6578) that stores all data in the file system. This sample is similar to what is provided by the Java demo WebDAV server at: https://webdavserver.com <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/android/androidfsstorage">
<h2>Java WebDAV Server Example for Android</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/android/androidfsstorage">
<p>
This sample is a Class 2 WebDAV server that runs on Android. It uses modified NanoHTTPD as an application server and publishes files from a mobile application folder or from media folder. Locks and properties in SQLite database.
To see the documents <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/deltav">
<h2>WebDAV Server Example with Versioning, Java</h2>
</a>
<a href="https://github.com/ITHit/WebDAVServerSamplesJava/tree/master/Java/javax/deltav">
<p>
The sample provides DeltaV WebDAV server implementation that can be hosted in Apache Tomcat, GlassFish, JBoss, WebLogic or WebSphere. The data is stored in Oracle database. The IT Hit WebDAV Ajax Library is used to display and browse server content o <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://www.webdavsystem.com/javaserver/server_examples/running_webdav_samples/">
<h2>Running the WebDAV Samples</h2>
</a>
<a href="https://www.webdavsystem.com/javaserver/server_examples/running_webdav_samples/">
<p>
Once your sample is configured and running you will see the following web page (note that the port that the sample is using may be different from the one on the screenshots):
This web page is a MyCustomHandlerPage.html included in each sample and <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://www.webdavsystem.com/javaserver/server_examples/search/">
<h2>Configuring Full-Text Search for Files Stored in File System or in Oracle Database</h2>
</a>
<a href="https://www.webdavsystem.com/javaserver/server_examples/search/">
<p>
The samples provided with SDK use Apache Lucene as indexing engine and Apache Tika as content analysis toolkit.
The server implementation searches both file names and file content including content of Microsoft Office documents as well as any other <span>...</span>
</p>
</a>
</li>
<li>
<a class="link-header" href="https://www.webdavsystem.com/javaserver/server_examples/troubleshooting/">
<h2>WebDAV Server Samples Problems and Troubleshooting</h2>
</a>
<a href="https://www.webdavsystem.com/javaserver/server_examples/troubleshooting/">
<p>
Examining Logs
If things are not going as planned and you run into issues the first place to look would be the log file &lt;Your Tomcat location&gt;\Tomcat x.x\logs\localhost.xxxx-xx-xx.log . The logs will reflect as to what is going on and it will <span>...</span>
</p>
</a>
</li>
</ul>
| 0 |
mphasize/FullDome | Tools and Examples | null | FullDome: Tools and Examples
============================
This is a collection of tools and code I came across or I modified for displaying interactive content in a fulldome environment.
I do this as part of a class I'm taking called "Immersive Data Visualisation" at the FH Potsdam.
[Workspace](http://incom.org/workspace/2755)
Tools:
======
Processing / FullDomeTemplate
---
Amazing Tool by Christopher Warnow! Warps your Processing Sketch into a dome master with some really nice OpenGL magic.
For [more details and instructions](https://github.com/mphasize/FullDome/tree/master/Processing/FullDomeTemplate) on usage (german only so far...) click the link and scroll down to README...
Processing / SimulationTemplate
---
The SimulationTemplate creates a movable Dome and projects your interactive Sketches into it.
For [more details and instructions](https://github.com/mphasize/FullDome/tree/master/Processing/SimulationTemplate) on usage (german only so far...) click the link and scroll down to README...
| 0 |
viglucci/app-jcef-example | Example application for using Java Chrome Embedded Framework | null | # app-jcef-example
Example application for using Java Chrome Embedded Framework
## Prerequisites
### JCEF
A build of JCEF must be available via your PATH environment variable.
https://bitbucket.org/chromiumembedded/java-cef/wiki/BranchesAndBuilding
Add the `some_directory/jcef_build/native/Release` directory to your PATH environment variable.
### Java JDK
Java 1.8
## Maven Build
1. `mvn clean package`
## IntelliJ IDEA Build
Setup:
1. File -> Project Structure -> Artifacts
2. Green + -> Jar -> from module with dependencies
3. Select SimpleFrameExample
Build:
1. Build -> Build Artifacts
## Run from run.bat
1. From IntelliJ project explorer -> right click run.bat -> Run 'run'
| 1 |
alexaverbuch/akka_chat_java | Akka chat example using Java API | null | null | 1 |
ewolff/microservice-istio | Example for a microservices system based in Kubernetes and the service mesh Istio | null | Microservice Istio Sample
=====================
[German guide to running the example](WIE-LAUFEN.md)
This demo uses [Kubernetes](https://kubernetes.io/) as the Docker
environment. Kubernetes also supports service discovery and load
balancing. An Apache httpd reverse proxy routes the calls to the
services.
The demo also uses [Istio](https://istio.io/) for features like
monitoring, tracing, fault injection, and circuit breaking.
This project creates a complete microservice demo system in Docker
containers. The services are implemented in Java using Spring Boot and
Spring Cloud.
It uses three microservices:
- `Order` to accept orders.
- `Shipping` to ship the orders.
- `Invoicing` to issue invoices.
How to run
---------
See [How to run](HOW-TO-RUN.md).
Remarks on the Code
-------------------
The microservices are:
- [microservice-istio-order](microservice-istio-demo/microservice-istio-order) to create the orders
- [microserivce-istio-shipping](microservice-istio-demo/microservice-istio-shipping) for the shipping
- [microservice-istio-invoicing](microservice-istio-demo/microservice-istio-invoicing) for the invoices
The microservices have a Java main application in `src/test/java` to
run them stand-alone. microservice-demo-shipping and
microservice-demo-invoicing both use a stub of the
order service for the tests.
The data of an order is copied, including the data of the customer
and the items. So if a customer or item changes in the order system,
this does not influence existing shipments and invoices. It would be
odd if a change to a price also changed existing invoices. Also,
only the information needed for the shipment and the invoice is
copied over to the other systems.
The job to poll the order feed is run every 30 seconds.
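A fixed-rate polling job like this can be sketched with the plain JDK. The demo itself wires its polling up inside the Spring Boot services; the class and method names below are illustrative, not taken from the demo code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a fixed-rate feed poller using only the JDK: run a poll task
// immediately, then again at a fixed period (30 seconds in the demo).
class OrderFeedPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Run pollOnce immediately, then again every periodSeconds seconds.
    void start(Runnable pollOnce, long periodSeconds) {
        scheduler.scheduleAtFixedRate(pollOnce, 0, periodSeconds, TimeUnit.SECONDS);
    }

    void stop() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        OrderFeedPoller poller = new OrderFeedPoller();
        poller.start(() -> System.out.println("polling order feed..."), 30);
        Thread.sleep(200); // let the first (immediate) poll run
        poller.stop();
    }
}
```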
| 1 |
oktadev/jhipster-microservices-example | JHipster Microservices Example using Spring Cloud, Spring Boot, Angular, Docker, and Kubernetes | angular docker google-cloud jhipster jhipster-microservices kubernetes minikube spring-boot spring-cloud webpack | # JHipster Microservices Example
> A microservice architecture created with JHipster. Uses Spring Cloud, Spring Boot, Angular, and MongoDB for a simple blog/store applications.
Please read [Develop and Deploy Microservices with JHipster](https://developer.okta.com/blog/2017/06/20/develop-microservices-with-jhipster) to see how this example was created.
**Prerequisites:** [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html), [Node.js](https://nodejs.org/) 6.11, [Yarn](https://yarnpkg.com/lang/en/docs/install/), and [Docker](https://docs.docker.com/engine/installation/).
**NOTE:** If you're not on Mac or Windows, you may need to [install Docker Compose](https://docs.docker.com/compose/install/) as well.
> [Okta](https://developer.okta.com/) has Authentication and User Management APIs that reduce development time with instant-on, scalable user infrastructure. Okta's intuitive API and expert support make it easy for developers to authenticate, manage and secure users and roles in any application.
* [Getting Started](#getting-started)
* [Links](#links)
* [Help](#help)
* [License](#license)
## Getting Started
To install this example application, run the following commands:
```bash
git clone https://github.com/oktadeveloper/jhipster-microservices-example.git
cd jhipster-microservices-example
```
1. Start the registry by running `./mvnw -Pprod` in the `registry` directory.
2. Install dependencies in the `blog` directory, build the UI, and run the Spring Boot app.
```
yarn
./mvnw
```
3. Start MongoDB using Docker Compose in the `store` directory.
```bash
docker-compose -f src/main/docker/mongodb.yml up
```
4. Install dependencies in the `store` directory, build the UI, and run the Spring Boot app.
```
yarn
./mvnw
```
You should be able to see the `blog` app at <http://localhost:8080> and edit products (from the `store` app).
### Run with Docker Compose
You can use Docker Compose to start everything if you don't want to start applications manually with Maven.
1. Make sure Docker is running.
2. Build Docker images for the `blog` and `store` applications by running the following command in both directories.
```
./mvnw package -Pprod docker:build
```
3. Open a terminal, navigate to the `docker` directory of this project, and run the following command. If you have a lot
of RAM on your machine, you might want to adjust Docker's default setting (2 GB).
```
docker-compose up -d
```
TIP: Remove `-d` from the end of the command above if you want to see logs from all containers in the current window.
4. Use [Kitematic](https://kitematic.com/) to view the ports and logs for the services deployed.
To create activity in JHipster Console's charts, you run the Gatling tests in the `blog` and `store` projects.
```bash
./mvnw gatling:execute
```
To remove all Docker containers, run the following commands or do it manually using Kitematic.
```bash
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
```
To find what's running on a port on macOS, use `sudo lsof -i :9092 # checks port 9092`.
### Run with Kubernetes and Minikube
1. Install [kubectl](https://kubernetes.io/docs/tasks/kubectl/install/), [VirtualBox](https://www.virtualbox.org/wiki/Downloads), and [Minikube](https://github.com/kubernetes/minikube/releases).
2. Start Minikube using `minikube start`.
3. To be able to work with the docker daemon, make sure Docker is running, then run the following command in your terminal:
```bash
eval $(minikube docker-env)
```
4. Create Docker images of the `blog` and `store` applications:
```bash
./mvnw package -Pprod docker:build
```
5. Navigate to the `kubernetes` directory in your terminal and re-generate the files so they match your Docker repository name.
```
jhipster kubernetes
```
Follow the instructions for tagging and pushing the Docker images.
```bash
docker image tag blog {yourRepoName}/blog
docker push {yourRepoName}/blog
docker image tag store {yourRepoName}/store
docker push {yourRepoName}/store
```
6. Use `kubectl` to deploy to Minikube.
```
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
```
The deployment process can take several minutes to complete. Run `minikube dashboard` to see the deployed containers.
You can also run `kubectl get po -o wide --watch` to see the status of each pod.
7. Run `minikube service blog` to view the blog application. You should be able to log in and add blogs, entries, and products.
To remove all deployed containers, run the following command:
kubectl delete deployment --all
To stop Minikube, run `minikube stop`.
**NOTE:** If you run `minikube delete` and have trouble running `minikube start` afterward, run `rm -rf ~/.minikube`.
See [this issue](https://github.com/kubernetes/minikube/issues/290) for more information.
### Google Cloud
1. Create a Google Cloud project at [console.cloud.google.com](https://console.cloud.google.com/).
2. Navigate to <https://console.cloud.google.com/kubernetes/list> to initialize the Container Engine for your project.
3. Install [Google Cloud SDK](https://cloud.google.com/sdk/) and set project using:
gcloud config set project <project-name>
4. Create a cluster:
gcloud container clusters create <cluster-name> --machine-type=n1-standard-2 --scopes cloud-platform --zone us-west1-a
To see a list of possible zones, run `gcloud compute zones list`.
5. Push the `blog` and `store` docker images to [Docker Hub](https://hub.docker.com/). You will need to create an account
and run `docker login` to push your images. The images can be run from any directory.
```bash
docker image tag blog mraible/blog
docker push mraible/blog
docker image tag store mraible/store
docker push mraible/store
```
6. Run `kubectl` commands to deploy.
```bash
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
```
7. Use port-forwarding to see the registry app locally.
kubectl port-forward jhipster-registry-0 8761:8761
8. Run `kubectl svc blog` to view the blog application on Google Cloud.
9. Scale microservice apps as needed with `kubectl`:
kubectl scale --replicas=3 deployment/store
To see a screencast of this process, [watch this YouTube video](https://youtu.be/dgVQOYEwleA).
### AWS
If you know how to deploy this architecture to AWS, I'd love to hear about it! I [tried in anger](https://groups.google.com/forum/#!msg/jhipster-dev/NNA3TScENVE/WmbG2Qt_AwAJ), but ultimately failed.
## Links
This example uses [JHipster](https://www.jhipster.tech), an awesome project that allows you to generate a microservices architecture with [Spring Boot](https://projects.spring.io/spring-boot/). See [Develop a Microservices Architecture with OAuth 2.0 and JHipster](https://developer.okta.com/blog/2018/03/01/develop-microservices-jhipster-oauth) for an example that uses OAuth and Okta.
## Help
Please post any questions as comments on the [blog post](https://developer.okta.com/blog/2018/03/01/develop-microservices-jhipster-oauth), or visit our [Okta Developer Forums](https://devforum.okta.com/). You can also email developers@okta.com if you would like to create a support ticket.
## License
Apache 2.0, see [LICENSE](LICENSE).
| 1 |
brainhubeu/react-native-opencv-tutorial | 👩🏫Fully working example of the OpenCV library used together with React Native | camera computer-vision java javascript jest objective-c objective-c-plus-plus opencv react react-native react-native-camera reactnative | <br/>
<h1 align="center">
react-native-opencv-tutorial
</h1>
<p align="center">
A fully working example of the OpenCV library used together with React Native.
</p>
<p align="center">
<strong>
<a href="https://brainhub.eu/blog/opencv-react-native-image-processing/">Blog post</a> |
<a href="https://brainhub.eu/contact/">Hire us</a>
</strong>
</p>
<div align="center">
[](https://github.com/brainhubeu/react-native-opencv-tutorial/blob/master/LICENSE.MD)
[](http://makeapullrequest.com)
</div>
## What this tutorial is about
This tutorial shows how to use React Native together with OpenCV for image processing. This example uses native Java and Objective-C bindings for OpenCV. In this example we use the device's camera to take a photo and detect whether the taken photo is clear or blurred.
## Demo
The examples below show the situation right after taking a photo. The first one shows what happens if we take a blurry photo and the second one is the situation after we took a clear photo and are able to proceed with it to do whatever we want.


## Blog post
https://brainhub.eu/blog/opencv-react-native-image-processing/
## Prerequisites
1. XCode
2. Android Studio
## How to run the project
1. Clone the repository.
2. `cd cloned/repository/path`
3. `npm i` or `yarn`
4. `react-native link`
5. Run `./downloadAndInsertOpenCV.sh`.
6. Download manually the Android pack from https://opencv.org/releases.html (version 3.4.1).
7. Unzip the package.
8. Import OpenCV to Android Studio, From File -> New -> Import Module, choose sdk/java folder in the unzipped opencv archive.
9. Update build.gradle under imported OpenCV module to update 4 fields to match your project's `build.gradle`<br/>
a) compileSdkVersion<br/>
b) buildToolsVersion<br/>
c) minSdkVersion<br/>
d) targetSdkVersion.
10. Add the module dependency via Application -> Module Settings, and select the Dependencies tab. Click the + icon at the bottom, choose Module Dependency and select the imported OpenCV module. For Android Studio v1.2.2, to access Module Settings: in the project view, right-click the dependent module -> Open Module Settings.
11. `react-native run-ios` or `react-native run-android`.
### Additional notes
In case of any `downloadAndInsertOpenCV.sh` script-related errors, please check the paths inside this file and change them if they do not match yours.
If this script does not run at all since it has no permissions, run `chmod 777 downloadAndInsertOpenCV.sh`.
If you do not have `React Native` installed, type `npm i -g react-native-cli` in the terminal.
### License
reactNativeOpencvTutorial is copyright © 2018-2020 [Brainhub](https://brainhub.eu/?utm_source=github) It is free software, and may be redistributed under the terms specified in the [license](LICENSE.MD).
### About
reactNativeOpencvTutorial is maintained by the Brainhub development team. It is funded by Brainhub, and the names and logos for Brainhub are trademarks of Brainhub Sp. z o.o. You can check other open-source projects supported/developed by our teammates here.
[](https://brainhub.eu/?utm_source=github)
We love open-source JavaScript software! See our other projects or hire us to build your next web, desktop and mobile application with JavaScript.
| 1 |
juhahinkula/StudentListFinal | Spring Boot CRUD example with Spring security | crud spring-boot spring-data-jpa spring-security | # StudentList
Simple CRUD application made with Spring Boot
- Spring Boot
- Spring Security
- Thymeleaf
- H2 database
- Bootstrap
Usage:<br>
1) Clone the project <br>```git clone https://github.com/juhahinkula/StudentListFinal.git```<br>
2) Run the following command in a terminal window (in the project directory):<br>
```./mvnw spring-boot:run```<br>
3) Navigate to localhost:8080<br>
If using Eclipse, you can also run the project in the following way:<br>
1) Eclipse: File -> Import -> Maven -> Existing Maven Projects<br>
2) Run<br>
3) Navigate to localhost:8080<br>
Application contains two demo users: <br>
user/user (role=USER) <br>
admin/admin (role=ADMIN)<br>
## Screenshot

| 1 |
raycad/stream-processing | Stream processing guidelines and examples using Apache Flink and Apache Spark | apache-flink apache-spark batch-processing data-analysis streaming | ## 1. Big Data Analysis Overview

## 2. Batch Processing vs Stream Processing
### 2.1. Batch Processing
In batch processing, newly arriving data elements are collected into a group. The whole group is then processed at a future time (as a batch, hence the term “batch processing”). Exactly when each group is processed can be determined in a number of ways–for example, it can be based on a scheduled time interval (e.g. every five minutes, process whatever new data has been collected) or on some triggered condition (e.g. process the group as soon as it contains five data elements or as soon as it has more than 1MB of data).

**Micro-Batch** is frequently used to describe scenarios where batches are small and/or processed at small intervals. Even though processing may happen as often as once every few minutes, data is still processed a batch at a time.
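The trigger conditions described above can be sketched in a few lines; this is a minimal illustration, not code from any real framework, and the class name and thresholds are illustrative (five elements and 1 MB, the two example conditions mentioned in the text).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a micro-batch collector: buffer incoming records and flush the
// whole group when either trigger fires, i.e. five elements collected or
// more than 1 MB of data buffered.
class MicroBatcher {
    static final int MAX_ELEMENTS = 5;        // "five data elements"
    static final long MAX_BYTES = 1_000_000;  // "more than 1MB of data"

    private final List<String> buffer = new ArrayList<>();
    private long bufferedBytes = 0;
    private int flushes = 0;

    void add(String rec) {
        buffer.add(rec);
        bufferedBytes += rec.getBytes().length;
        if (buffer.size() >= MAX_ELEMENTS || bufferedBytes > MAX_BYTES) {
            flush();
        }
    }

    // Process the whole group at once, then start a new batch.
    private void flush() {
        flushes++;
        buffer.clear();
        bufferedBytes = 0;
    }

    int flushCount() { return flushes; }

    public static void main(String[] args) {
        MicroBatcher b = new MicroBatcher();
        for (int i = 0; i < 12; i++) b.add("record-" + i);
        System.out.println(b.flushCount()); // 12 records, 5 per batch: 2 full batches
    }
}
```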
### 2.2. Stream Processing
In stream processing, each new piece of data is processed when it arrives. Unlike batch processing, there is no waiting until the next batch processing interval and data is processed as individual pieces rather than being processed a batch at a time.

**Use cases:**
* Algorithmic Trading, Stock Market Surveillance
* Monitoring a production line
* Intrusion, Surveillance and Fraud Detection ( e.g. Uber)
* Predictive Maintenance, (e.g. Machine Learning Techniques for Predictive Maintenance)
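The per-record model described above can be sketched as follows: each event is processed the moment it arrives, here keeping a running average and flagging outliers in the spirit of the fraud-detection use case. The class name, the 3x threshold, and the values are all illustrative.

```java
// Per-record stream processing sketch: every event is handled immediately on
// arrival, updating state (a running average) and emitting an alert when a
// value looks anomalous. The 3x threshold is an arbitrary illustration.
class RunningAverage {
    private double sum = 0;
    private long count = 0;

    // Process one event as it arrives; return true if it exceeds 3x the
    // average of everything seen so far.
    boolean onEvent(double value) {
        boolean anomalous = count > 0 && value > 3 * average();
        sum += value;
        count++;
        return anomalous;
    }

    double average() { return count == 0 ? 0 : sum / count; }

    public static void main(String[] args) {
        RunningAverage avg = new RunningAverage();
        for (double e : new double[] {10, 12, 11, 95, 10}) {
            if (avg.onEvent(e)) {
                System.out.println("anomaly: " + e); // fires for 95.0
            }
        }
    }
}
```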
<h3>Batch Processing vs Stream Processing</h3>
| | Batch Processing | Stream Processing |
|-|-|-|
| **Data Scope** | Queries or processing over all or most of the data in the dataset | Queries or processing over data within a rolling time window, or on just the most recent data record |
| **Data Size** | Large batches of data | Individual records or micro batches consisting of a few records |
| **Performance** | Latencies in minutes to hours | Requires latency in the order of seconds or milliseconds |
| **Analyses** | Complex analytics | Simple response functions, aggregates, and rolling metrics |
## 3. Apache Flink
### 3.1. Apache Flink Ecosystem

#### 3.1.1. Storage / Streaming
Flink doesn’t ship with a storage system; it is just a computation engine. Flink can read and write data from different storage systems and can consume data from streaming systems. Below is the list of storage/streaming systems from which Flink can read and write data:
* **HDFS**: Hadoop Distributed File System
* **Local-FS**: Local File System
* **S3**: Simple Storage Service from Amazon
* **HBase**: NoSQL Database in Hadoop ecosystem
* **MongoDB**: NoSQL Database
* **RDBMS**: Any relational database
* **Kafka**: Distributed messaging Queue
* **RabbitMQ**: Messaging Queue
* **Flume**: Data Collection and Aggregation Tool
#### 3.1.2. Deploy
Flink can be deployed in the following modes:
* **Local mode**: On a single node, in a single JVM
* **Cluster**: On a multi-node cluster, with one of the following resource managers:
  * **Standalone**: The default resource manager, shipped with Flink
  * **YARN**: A very popular resource manager, part of Hadoop
  * **Mesos**: A generalized resource manager
* **Cloud**: On Amazon or Google cloud
#### 3.1.3. Distributed Streaming Dataflow
It is also called the kernel of Apache Flink. This is the core layer of Flink, which provides distributed processing, fault tolerance, reliability, and native iterative processing capability.
#### 3.1.4. APIs and Library
* __DataSet API__\
It allows the user to apply operations like map, filter, join, group, etc. on datasets. It is mainly used for distributed batch processing.
* __DataStream API__\
It handles continuous streams of data. To process live data streams it provides various operations like map, filter, update states, window, aggregate, etc. It can consume data from various streaming sources and write the data to different sinks.
* __Table__\
It enables users to perform ad-hoc analysis using an SQL-like expression language for relational stream and batch processing. It can be embedded in the DataSet and DataStream APIs.
* __Gelly__\
It is the graph processing engine which allows users to run a set of operations to create, transform and process graphs.
* __FlinkML__\
It is the machine learning library which provides intuitive APIs and efficient algorithms to handle machine learning applications.
### 3.2. Apache Flink Features

**1. Fast Speed**\
Flink processes data at lightning-fast speed (hence it is also called the 4G of Big Data).
**2. Stream Processing**\
Flink is a true stream processing engine.
**3. In-memory Computation**\
Data is kept in random access memory (RAM) instead of slow disk drives and is processed in parallel. This improves performance by orders of magnitude by keeping the data in memory.
**4. Ease of Use**\
Flink's APIs are designed to cover all the common operations, so programmers can use them efficiently.
**5. Broad integration**\
Flink can be integrated with various storage systems to process their data, and it can be deployed with various resource management tools. It can also be integrated with several BI tools for reporting.
**6. Deployment**\
It can be deployed through Mesos, Hadoop via YARN, or Flink's own cluster manager or cloud (Amazon, Google cloud).
**7. Scalable**\
Flink is highly scalable. With increasing requirements, we can scale the Flink cluster.
**8. Fault Tolerance**\
Failure of hardware, node, software or a process doesn’t affect the cluster.
### 3.3. Apache Flink Architecture
Apache Flink is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
* **Program**\
It is a piece of code, which you run on the **Flink Cluster**.
* **Client**\
It is responsible for taking the code (program), constructing the job dataflow graph, and passing it to the **JobManager**. It also retrieves the job results.
* **JobManager**\
Also called **masters**, they coordinate the distributed execution: they schedule tasks, coordinate checkpoints, coordinate recovery on failures, etc. After receiving the job dataflow graph from the Client, the JobManager creates the execution graph, assigns the job to **TaskManagers** in the cluster, and supervises its execution.
There is always at least one **JobManager**. A high-availability setup will have multiple **JobManagers**, one of which is always the leader while the others are standby.
* **TaskManager**\
Also called **workers** or **slaves**, they execute the tasks (or, more specifically, the subtasks of a dataflow) assigned by the **JobManager**, and buffer and exchange the data streams. All **TaskManagers** run their tasks in separate slots at the specified parallelism, and each reports the status of its tasks to the **JobManager**.


### 3.4. Apache Flink Programs and Dataflows
The basic building blocks of Flink programs are streams and transformations.
Conceptually a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result.
When executed, Flink programs are mapped to streaming dataflows, consisting of streams and transformation operators. Each dataflow starts with one or more sources and ends in one or more sinks. The dataflows resemble arbitrary directed acyclic graphs (DAGs). Although special forms of cycles are permitted via iteration constructs, for the most part we will gloss over this for simplicity.

 |
### 3.5. Data Source
* Sources are where the program reads its input from.
* A source is attached to the program by using `StreamExecutionEnvironment.addSource(sourceFunction)`.
* Flink comes with a number of pre-implemented source functions, but you can always write your own custom sources by implementing `SourceFunction` for non-parallel sources, or by implementing the `ParallelSourceFunction` interface or extending `RichParallelSourceFunction` for parallel sources.
There are several predefined stream sources accessible from the `StreamExecutionEnvironment`:
**1. File-based:**
* `readTextFile(path)` - Reads text files, i.e. files that respect the `TextInputFormat` specification, line by line, and returns them as Strings.
* `readFile(fileInputFormat, path)` - Reads (once) files as dictated by the specified file input format.
* `readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo)` - This is the method called internally by the two previous ones. It reads files in the path based on the given `fileInputFormat`. Depending on the provided `watchType`, this source may periodically monitor (every `interval` ms) the path for new data (`FileProcessingMode.PROCESS_CONTINUOUSLY`), or process the data currently in the path once and exit (`FileProcessingMode.PROCESS_ONCE`). Using the `pathFilter`, the user can further exclude files from being processed.
**2. Socket-based:**
* `socketTextStream` - Reads from a socket. Elements can be separated by a delimiter.
**3. Collection-based:**
* `fromCollection(Collection)` - Creates a data stream from the given Java `java.util.Collection`. All elements in the collection must be of the same type.
* `fromCollection(Iterator, Class)` - Creates a data stream from an iterator. The class specifies the data type of the elements returned by the iterator.
* `fromElements(T ...)` - Creates a data stream from the given sequence of objects. All objects must be of the same type.
* `fromParallelCollection(SplittableIterator, Class)` - Creates a data stream from an iterator, in parallel. The class specifies the data type of the elements returned by the iterator.
* `generateSequence(from, to)` - Generates the sequence of numbers in the given interval, in parallel.
**4. Custom:**
* `addSource` - Attaches a new source function. For example, to read from Apache Kafka you can use `addSource(new FlinkKafkaConsumer08<>(...))`. See connectors for more details.
### 3.6. Collection Data Sources
Flink provides special data sources which are backed by Java collections to ease testing. Once a program has been tested, the sources and sinks can be easily replaced by sources and sinks that read from / write to external systems.
```java
final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
// Create a DataStream from a list of elements
DataStream<Integer> myInts = env.fromElements(1, 2, 3, 4, 5);
// Create a DataStream from any Java collection
List<Tuple2<String, Integer>> data = ...
DataStream<Tuple2<String, Integer>> myTuples = env.fromCollection(data);
// Create a DataStream from an Iterator
Iterator<Long> longIt = ...
DataStream<Long> myLongs = env.fromCollection(longIt, Long.class);
```
**Note**: Currently, the collection data source requires that data types and iterators implement `Serializable`. Furthermore, collection data sources cannot be executed in parallel (parallelism = 1).
### 3.7. DataStream Transformations
See the following links:\
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/index.html
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/dataset_transformations.html
**DataStream Transformations Functions**
Most transformations require user-defined functions. This section lists the different ways they can be specified.
**1. Implementing an interface**
The most basic way is to implement one of the provided interfaces:
```java
class MyMapFunction implements MapFunction<String, Integer> {
    public Integer map(String value) {
        return Integer.parseInt(value);
    }
}

data.map(new MyMapFunction());
```
**2. Anonymous classes**
You can pass a function as an anonymous class:
```java
data.map(new MapFunction<String, Integer>() {
    public Integer map(String value) {
        return Integer.parseInt(value);
    }
});
```
**3. Java 8 Lambdas**
Flink also supports Java 8 Lambdas in the Java API.
```java
data.filter(s -> s.startsWith("http://"));
data.reduce((i1, i2) -> i1 + i2);
```
### 3.8. Data Sink
Data sinks consume DataStreams and forward them to files, sockets, external systems, or print them. Flink comes with a variety of built-in output formats that are encapsulated behind operations on the DataStreams:
* `writeAsText()` / `TextOutputFormat` - Writes elements line-wise as Strings. The Strings are obtained by calling the `toString()` method of each element.
* `writeAsCsv(...)` / `CsvOutputFormat` - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the `toString()` method of the objects.
* `print()` / `printToErr()` - Prints the `toString()` value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.
* `writeUsingOutputFormat()` / `FileOutputFormat` - Method and base class for custom file outputs. Supports custom object-to-bytes conversion.
* `writeToSocket` - Writes elements to a socket according to a `SerializationSchema`.
* `addSink` - Invokes a custom sink function. Flink comes bundled with connectors to other systems (such as Apache Kafka) that are implemented as sink functions.
### 3.9. Iterations
Iterative streaming programs implement a step function and embed it into an `IterativeStream`. As a DataStream program may never finish, there is no maximum number of iterations. Instead, you need to specify which part of the stream is fed back to the iteration and which part is forwarded downstream, using a split transformation or a filter. Here, we show an example using filters. First, we define an `IterativeStream`:
```java
IterativeStream<Integer> iteration = input.iterate();
```
Then, we specify the logic that will be executed inside the loop using a series of transformations (here a simple map transformation)
```java
DataStream<Integer> iterationBody = iteration.map(/* this is executed many times */);
```
To close an iteration and define the iteration tail, call the closeWith(feedbackStream) method of the IterativeStream. The DataStream given to the closeWith function will be fed back to the iteration head. A common pattern is to use a filter to separate the part of the stream that is fed back, and the part of the stream which is propagated forward. These filters can, e.g., define the “termination” logic, where an element is allowed to propagate downstream rather than being fed back.
```java
iteration.closeWith(iterationBody.filter(/* one part of the stream */));
DataStream<Integer> output = iterationBody.filter(/* some other part of the stream */);
```
**See more**: https://ci.apache.org/projects/flink/flink-docs-stable/dev/datastream_api.html
### 3.10. Apache Flink Build/Run/Test
**1. Build Flink from source code**
```
$ git clone https://github.com/apache/flink.git
$ cd flink
$ git checkout release-1.8.1
$ mvn clean install -Pinclude-kinesis -Pinclude-hadoop -DskipTests
```
**2. Start Flink Cluster**
https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/deployment/cluster_setup.html
```
$ cd [flink]/build-target
# Change the Flink master IP by editing the jobmanager.rpc.address value
$ nano conf/flink-conf.yaml
# Configure flink slave (worker) nodes
$ nano conf/slaves
# Start the cluster
$ bin/start-cluster.sh
# Then type the passwords of the slave nodes so the master can start them...
```
**[NOTE]:** The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
Check the cluster dashboard by accessing `http://<flink-master-ip>:8081`.
**Test a Flink Job**
First, run an input stream server using Netcat:
```
# -l: listen (server mode)
$ nc -l 9000
```
To test the connection with a Netcat client:
```
$ nc localhost 9000
```
**If you want to test streaming a large amount of data, you can run the Python TCP server in the scripts folder:**
```bash
$ python [stream-processing]/src/scripts/tcp_server.py
```
Then you can type sentences in the terminal to send data to the Flink job. There are two ways to submit a Flink job:
- Submit a new job from the dashboard (see the example below, Index 2.3)
- Manually running job from terminal
```
$ ./bin/flink run [flink-word-count-path]/SocketWindowWordCount.jar --hostname <netcat-host-ip> --port 9000
```
**3. Flink Cluster**

**4. Flink Cluster Dashboards**

**5. Task Managers**

**6. Job Submission**

**7. Job Results**

**8. Stop Flink Cluster**
```
$ ./bin/stop-cluster.sh
```
## 4. Apache Spark
### 4.1. Apache Spark Features
**Apache Spark** is an open source cluster computing framework for real-time data processing. The main feature of Apache Spark is its in-memory cluster computing that increases the processing speed of an application. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance. It is designed to cover a wide range of workloads such as batch applications, iterative algorithms, interactive queries, and streaming.

**1. Speed**
Spark runs up to 100 times faster than Hadoop MapReduce for large-scale data processing. It also achieves this speed through controlled partitioning.
**2. Near real-time Processing**
It offers near real-time computation & low latency because of in-memory computation.
**3. In-memory Computation**
Data is kept in random access memory (RAM) instead of slow disk drives and is processed in parallel. This improves performance by orders of magnitude by keeping the data in memory.
**4. Ease of Use**
Spark has easy-to-use APIs for operating on large datasets. This includes a collection of over 100 operators for transforming data and familiar data frame APIs for manipulating semi-structured data.
**5. Unified Engine**
Spark comes packaged with higher-level libraries, including support for SQL queries, streaming data, machine learning and graph processing. These standard libraries increase developer productivity and can be seamlessly combined to create complex workflows.
**6. Deployment**
It can be deployed through Mesos, Hadoop via YARN, or Spark’s own cluster manager.
**7. Polyglot**
Spark provides high-level APIs in Java, Scala, Python, and R. Spark code can be written in any of these four languages. It also provides a shell in Scala and Python.
**8. Fault Tolerance**
Upon the failure of a worker node, the lost partitions of an RDD can be re-computed from the original data using the lineage of operations. Thus, lost data is easily recovered.
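The lineage idea can be illustrated with a toy plain-Java sketch (this is an analogy, not the Spark API): a derived partition is rebuilt by replaying the recorded transformations over the original data.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LineageSketch {
    // Rebuild a derived partition by replaying its lineage (the ordered
    // list of transformations) over the original data.
    static List<Integer> replay(List<Integer> original, List<Function<Integer, Integer>> lineage) {
        return original.stream().map(x -> {
            Integer v = x;
            for (Function<Integer, Integer> f : lineage) v = f.apply(v);
            return v;
        }).collect(Collectors.toList());
    }

    static List<Integer> demo() {
        // Lineage: first add 1, then multiply by 2.
        List<Function<Integer, Integer>> lineage = List.of(x -> x + 1, x -> x * 2);
        return replay(List.of(1, 2, 3), lineage);  // [4, 6, 8]
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because each RDD only records how it was derived, recovery costs no replication of the data itself.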
### 4.2. Apache Spark Ecosystem

**1. Spark Core**
Spark Core is the base engine for large-scale parallel and distributed data processing. It is responsible for memory management and fault recovery, scheduling, distributing and monitoring jobs on a cluster & interacting with storage systems.
**2. Spark Streaming**
Spark Streaming is the component of Spark which is used to process real-time streaming data. Thus, it is a useful addition to the core Spark API. It enables high-throughput and fault-tolerant stream processing of live data streams.
**3. Spark SQL**
Spark SQL is a new module in Spark which integrates relational processing with Spark’s functional programming API. It supports querying data either via SQL or via the Hive Query Language. For those of you familiar with RDBMS, Spark SQL will be an easy transition from your earlier tools where you can extend the boundaries of traditional relational data processing.
**4. GraphX**
GraphX is the Spark API for graphs and graph-parallel computation. At a high level, it extends the Spark RDD abstraction by introducing the Resilient Distributed Property Graph (a directed multigraph with properties attached to each vertex and edge).
**5. MLlib (Machine Learning)**
MLlib stands for Machine Learning Library. Spark MLlib is used to perform machine learning in Apache Spark.
### 4.3. Apache Spark Architecture
| Term | Meaning |
|-|-|
| Application | User program built on Spark. Consists of a driver program and executors on the cluster.|
| Application jar | A jar containing the user's Spark application. In some cases users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries, however, these will be added at runtime.|
| Driver program | The process running the main() function of the application and creating the SparkContext.|
| Cluster manager | An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN).|
| Deploy mode | Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside of the cluster. In "client" mode, the submitter launches the driver outside of the cluster.|
| Worker node | Any node that can run application code in the cluster.|
| Executor | A process launched for an application on a worker node, that runs tasks and keeps data in memory or disk storage across them. Each application has its own executors.|
| Task | A unit of work that will be sent to one executor.|
| Job | A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs.|
| Stage | Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.|

**1. Spark Driver**
* Separate process to execute user applications
* Creates SparkContext to schedule jobs execution and negotiate with cluster manager
**2. Executors**
* Run tasks scheduled by driver
* Store computation results in memory, on disk or off-heap
* Interact with storage systems
**3. Cluster Manager**
* Mesos
* YARN
* Spark Standalone
**4. SparkContext**
* Represents the connection to a Spark cluster, and can be used to create RDDs, accumulators and broadcast variables on that cluster
**5. DAGScheduler**
* Computes a DAG of stages for each job and submits them to TaskScheduler
* Determines preferred locations for tasks (based on cache status or shuffle files locations) and finds minimum schedule to run the jobs
**6. TaskScheduler**
* Responsible for sending tasks to the cluster, running them, retrying if there are failures, and mitigating stragglers
**7. SchedulerBackend**
* Backend interface for scheduling systems that allows plugging in different implementations(Mesos, YARN, Standalone, local)
**8. BlockManager**
* Provides interfaces for putting and retrieving blocks both locally and remotely into various stores (memory, disk, and off-heap)

### 4.4. Resilient Distributed Dataset (RDD)
RDDs are the building blocks of any Spark application. RDD stands for:
* Resilient: Fault tolerant and is capable of rebuilding data on failure
* Distributed: Distributed data among the multiple nodes in a cluster
* Dataset: Collection of partitioned data with values, e.g. tuples or other objects

**1. In-memory Computation**
* RDDs store intermediate data/results in RAM instead of on disk.
**2. Lazy Evaluations**
* All transformations are lazy: they do not compute their results right away.
* Apache Spark computes transformations only when an action requires a result for the driver program.
**3. Immutability**
* Once an RDD is created it cannot be changed; RDDs are a read-only abstraction.
* An RDD can be transformed into another RDD using transformations like map, filter, join and cogroup.
* The immutable nature of RDDs helps Spark maintain a high level of consistency.
**4. Partitioned**
* Partitions are basic units of parallelism in Apache Spark
* RDDs are collection of partitions.
* Each partition is one logical division of data, which is immutable in nature. New partitions are created through transformations on existing partitions.
* Partitions of an RDD are distributed across all the nodes in a network.
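The role of partitions as units of parallelism can be sketched in plain Java (an illustration only, not the Spark API): the dataset is divided into partitions, each partition is processed independently, and the partial results are combined.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {
    // Split a dataset into n partitions by round-robin assignment.
    static <T> List<List<T>> partition(List<T> data, int n) {
        List<List<T>> parts = new ArrayList<>();
        for (int i = 0; i < n; i++) parts.add(new ArrayList<>());
        for (int i = 0; i < data.size(); i++) parts.get(i % n).add(data.get(i));
        return parts;
    }

    // Sum each partition independently (conceptually, on different nodes),
    // then combine the partial sums.
    static int parallelSum(List<Integer> data, int n) {
        return partition(data, n).parallelStream()
                .mapToInt(p -> p.stream().mapToInt(Integer::intValue).sum())
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(List.of(1, 2, 3, 4, 5, 6), 2));  // 21
    }
}
```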
**5. Parallel**
**6. Persistence**
* RDD persistence enables fast computations.
* Users can mark an RDD for reuse and choose whether to store it in memory or on disk.
**7. Fault Tolerance**
* Spark RDD has the capability to operate on data and recover losses after a failure occurs.
* It rebuilds lost data on failure using lineage: each RDD remembers how it was created from other datasets, so it can recreate itself.
### 4.5. RDD Workflow

There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared file system, HDFS, HBase, etc.
### 4.6. RDD Operations
RDDs support two types of operations:
* Transformations: functions that take an RDD as input and produce one or more RDDs as output.
* Actions: operations that return a final result to the driver program after running a computation on the dataset.
**E.g:**
* map is a transformation that passes each dataset element through a function and returns a new RDD representing the results.
* reduce is an action that aggregates all the elements of the RDD using some function and returns the final result to the driver program (although there is also a parallel reduceByKey that returns a distributed dataset).
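The same map-then-reduce shape can be mimicked with plain `java.util.stream` (as an analogy only; Java Streams run on one JVM, unlike RDDs):

```java
import java.util.List;

public class MapReduceAnalogy {
    // "map" plays the transformation: each element passes through a function.
    // "reduce" plays the action: it aggregates everything into one result
    // that is returned to the caller (the "driver").
    static int sumOfSquares(List<Integer> data) {
        return data.stream()
                .map(x -> x * x)          // transformation: 1, 4, 9, 16
                .reduce(0, Integer::sum); // action: 30
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(List.of(1, 2, 3, 4)));  // 30
    }
}
```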
**1. Transformations**
* Apply user function to every element in a partition (or to the whole partition).
* Apply aggregation function to the whole dataset (groupBy, sortBy).
* Introduce dependencies between RDDs to form DAG.
* Provide functionality for repartitioning (repartition, partitionBy).
**2. Actions**
* Trigger job execution.
* Used to materialize computation results.
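Laziness is observable even in plain Java Streams, which follow the same model: intermediate operations only record work, and nothing executes until a terminal operation runs. This sketch (not Spark code) counts how many times the mapping function actually fires:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazySketch {
    static int[] demo() {
        AtomicInteger calls = new AtomicInteger();
        Stream<Integer> pipeline = List.of(1, 2, 3).stream()
                .map(x -> { calls.incrementAndGet(); return x * 10; });  // recorded, not run
        int callsBeforeAction = calls.get();            // 0: nothing has executed yet
        int result = pipeline.reduce(0, Integer::sum);  // terminal op triggers execution
        return new int[] { callsBeforeAction, calls.get(), result };  // {0, 3, 60}
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]);  // 0 3 60
    }
}
```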
### 4.7. Directed Acyclic Graph (DAG)
**Directed**: Each edge is directed from one node to another. This creates a sequence, i.e. each node is linked from earlier to later in the appropriate order.
**Acyclic**: There is no cycle or loop; once a transformation takes place, the flow cannot return to an earlier position.
**Graph**: From graph theory, a combination of vertices and edges; the pattern of connections in sequence forms the graph.
**Directed Acyclic Graph** is an arrangement of edges and vertices. In this graph, vertices indicate RDDs and edges refer to the operations applied on the RDD. According to its name, it flows in one direction from earlier to later in the sequence. When we call an action, the created DAG is submitted to DAG Scheduler. That further divides the graph into the stages of the jobs.
* The DAG scheduler divides operators into stages of tasks. A stage is comprised of tasks based on partitions of the input data. The DAG scheduler pipelines operators together; for example, many map operators can be scheduled in a single stage. The final result of the DAG scheduler is a set of stages.
* The stages are passed on to the task scheduler. The task scheduler launches tasks via the cluster manager (Spark Standalone/YARN/Mesos). The task scheduler does not know about the dependencies among stages.
* The workers execute the tasks on the slaves.
### 4.8. Internal Job Execution In Spark

**STEP 1**: The client submits the Spark user application code. When an application is submitted, the driver implicitly converts the user code containing transformations and actions into a logical directed acyclic graph (DAG). At this stage, it also performs optimizations such as pipelining transformations.
**STEP 2**: Next, it converts the logical graph (DAG) into a physical execution plan with many stages. After converting into a physical execution plan, it creates physical execution units called tasks under each stage. The tasks are then bundled and sent to the cluster.
**STEP 3**: Now the driver talks to the cluster manager and negotiates resources. The cluster manager launches executors on worker nodes on behalf of the driver. At this point, the driver sends tasks to the executors based on data placement. When executors start, they register themselves with the driver, so the driver has a complete view of the executors that are executing the tasks.
**STEP 4**: During execution, the driver program monitors the set of running executors. The driver also schedules future tasks based on data placement.
**Example**
```scala
val input = sc.textFile("log.txt")
val splitLines = input.map(line => line.split(" "))
  .map(words => (words(0), 1))
  .reduceByKey((a, b) => a + b)
```

## 5. Data Processing Technologies Comparison
| | Apache Hadoop | Apache Spark | Apache Flink |
|-|-|-|-|
| Year of Origin| 2005| 2009| 2009|
| Place of Origin| MapReduce (Google) Hadoop (Yahoo)| University of California, Berkeley| Technical University of Berlin|
| Data Processing Engine| Batch| Micro-Batch| Stream|
| Processing Speed| Slower than Spark and Flink| 100x faster than Hadoop| Faster than Spark|
| Data Transfer| Batch| Micro-Batch| Pipelined and Batch|
| Programming Languages| Java, C, C++, Ruby, Groovy, Perl, Python| Java, Scala, Python and R| Java and Scala|
| Programming Model| MapReduce| Resilient Distributed Datasets (RDD)| Cyclic Dataflows|
| Memory Management| Disk Based| JVM Managed| Automatic Memory Management. It has its own memory management system, separate from Java’s garbage collector|
| Latency| Low| Medium| Low|
| Throughput| Medium| High| High|
| Optimization| Manual. Jobs have to be manually optimized| Manual. Jobs have to be manually optimized| Automatic. Flink jobs are automatically optimized. Flink comes with an optimizer that is independent of the actual programming interface|
| Duplicate Elimination| NA| Spark processes every record exactly once, eliminating duplicates| Flink processes every record exactly once, eliminating duplicates|
| Windows Criteria| NA| Spark has time-based window criteria| Flink has record-based, time-based, or custom user-defined window criteria|
| API| Low-level| High-level| High-level|
| Streaming Support| NA| Spark Streaming| Flink Streaming|
| SQL Support| Hive, Impala| SparkSQL| Table API and SQL|
| Graph Support| NA| GraphX| Gelly|
| Machine Learning Support| NA| SparkML| FlinkML|
## 6. References
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/index.html
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/dataset_transformations.html
https://cwiki.apache.org/confluence/display/FLINK/Data+exchange+between+tasks
https://www.infoq.com/articles/machine-learning-techniques-predictive-maintenance
https://www.edureka.co/blog/spark-architecture
https://data-flair.training/blogs/spark-tutorial
https://data-flair.training/blogs/spark-in-memory-computing
https://medium.com/stream-processing/what-is-stream-processing-1eadfca11b97 | 0 |
teslacoil/Example_NovaTheme | Documented example theme for Nova Launcher | null | Example Nova Launcher Theme
===========================
Nova Launcher is the highly customizable launcher for Android. Part of this customization comes from developers such as yourself.
This document covers the theme format for Nova Launcher.
Most launchers, including Nova Launcher, support a superset of the theme format from Go Launcher.
However, this document and project are specific to Nova Launcher. This document will note things that are not supported by other launchers.
This project is a sample theme for Nova Launcher that covers:
* AndroidManifest
* Automatic App Icon Theming
* Manual App Icon Theming
* Dock Background Theming
* Wallpapers
AndroidManifest
---------------
Nova Launcher identifies themes by searching for activities that can respond to the `com.novalauncher.THEME` intent.
This is done by adding the following below an `<activity>` tag in your `AndroidManifest.xml`:

    <intent-filter>
        <action android:name="com.novalauncher.THEME" />
    </intent-filter>
Automatic App Icon Theming
--------------------------
Applying an icon theme (Nova Settings > Look and Feel > Icon Theme) will replace app icons with the icons specified in the theme.
Optionally, new app icons can be automatically generated using a background, foreground, scale and mask.
Configuration for this is done in the theme's `res/xml/appfilter.xml` .
### Replacement Icons as drawables
Replacing a specific app icon with a custom drawable included in your theme is done via:

    <item component="ComponentInfo{com.android.chrome/com.google.android.apps.chrome.Main}" drawable="ic_browser_green" />
With system apps, different devices or ROMs may use different component names for the same component. For example, the dialer app on Nexus devices is `com.android.contacts/.activities.DialtactsActivity`, but on HTC devices it is `com.android.htccontacts/.DialerTabActivity`. Nova includes an internal database of these for most devices, allowing the theme to apply an icon to the system's phone app by specifying a single keyword rather than each individual activity for every device.
**Note**: Only non-Play Store system apps are included in this; third-party apps or Play Store apps such as Chrome will not be included and must be themed manually.
The keywords supported are:
* `:BROWSER`
* `:CALCULATOR`
* `:CALENDAR`
* `:CAMERA`
* `:CLOCK`
* `:CONTACTS`
* `:EMAIL`
* `:GALLERY`
* `:PHONE`
* `:SMS`
Additionally Nova's app drawer icon can be themed with `:LAUNCHER_ACTION_APP_DRAWER` and `:LAUNCHER_ACTION_APP_DRAWER_NIGHT`. The night version is used when in night mode and the user has Night mode > Drawer icon enabled in Nova Settings.
A full example is:

    <item component=":SMS" drawable="ic_sms_green" />
**Note** Other launchers do not support these system app keywords and will ignore them.
### Identifying activity ComponentNames
Nova Launcher includes an option to export a full set of activity names and their original icons. This can serve as a starting point for your theme. You can find this at `Nova Settings > Long-press Volume down for Labs > Debug > Export Icons`
A zip will be created at `/sdcard/novaIconExport.zip` which contains a complete `res/xml/appfilter.xml` file as well as the original icons at the highest density they are available (in the appropriate drawable directory).
Nova also has an option to help find individual component names. Enable Nova Settings > Long-press volume down for Labs > Debug > Show Component in Edit dialog. Then either drag an app from the drawer to the Edit option, or long-press on a desktop icon and select Edit. The component name will be listed at the bottom of the dialog.
### Generating Replacement Icons
For apps that do not have a drawable replacement, one can be generated by specifying parameters in the `appfilter.xml`.
The four parameters are:
#### iconback

    <iconback img1="ic_back1" img2="ic_back2" img3="ic_back3" />
The background to be drawn behind the original icon. If multiple images are specified (as above `img1`, `img2` and `img3`) then one will be chosen randomly.
#### iconupon

    <iconupon img1="ic_foreground1" img2="ic_foreground2" />
The foreground to be drawn on top of the original icon. If multiple images are specified then one will be chosen randomly.
#### iconmask

    <iconmask img1="ic_mask1" />
A mask to apply to the original icon, allowing it to be reshaped. Black opaque pixels in the mask will be erased, while transparent pixels are unchanged.
If multiple images are specified then one will be chosen randomly.
#### scale
<scale factor=".75" />
The scale the original icon should be drawn at.
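Putting the four parameters together, a complete `appfilter.xml` might look like the sketch below. The `:SMS` item and parameter names come from the examples above; the `<resources>` root element and drawable names are illustrative assumptions.

```xml
<resources>
    <!-- Drawn behind original icons; one back is chosen at random -->
    <iconback img1="ic_back1" img2="ic_back2" />
    <!-- Mask that reshapes the original icon -->
    <iconmask img1="ic_mask1" />
    <!-- Foreground drawn on top of the original icon -->
    <iconupon img1="ic_foreground1" />
    <!-- Shrink original icons to 75% so they fit inside the back -->
    <scale factor=".75" />
    <!-- Explicit replacements like this still take precedence -->
    <item component=":SMS" drawable="ic_sms_green" />
</resources>
```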
Manual Icon Theming
-------------------
Nova Launcher allows users to manually select a replacement icon for an app, shortcut, or folder. To allow users to select one of your icons, specify them in `res/xml/drawable.xml`. Each icon is listed as follows:
<item drawable="ic_jellybean" />
The order you list the icons in will be the order that they appear in the icon picker.
You may optionally break the icons into categories by adding dividers:
<category title="Games" />
Nova Launcher supports resource identifiers, for example for localization or compile-time error checking:
<item drawable="@drawable/ic_jellybean" />
<category title="@string/games" />
**Note** Other launchers do not support category dividers or resource identifiers.
Dock Backgrounds
----------------
Nova Launcher allows the user to build a custom dock background based on an image picked from a theme.
There is no fixed size for a dock background because Nova Launcher supports devices of many different screen sizes and aspect ratios, as well as different orientations on the same device. Instead of trying to stretch an image to fit, patterns are used to fill the appropriate amount of space in any configuration.
Patterns are specified in `res/xml/theme_patterns.xml` and point to a drawable that is designed to be repeated.
These patterns can be either full color or grayscale; setting `canColor="true"` allows the user to specify a color.
Otherwise, the format is identical to `res/xml/drawable.xml`.
<item drawable="@drawable/pattern_checkerboard" canColor="true" />
<item drawable="@drawable/pattern_colors" canColor="false" />
**Note** Other launchers do not support patterns for dock backgrounds and instead stretch and distort a single image. This is a poor user experience, especially on tablets, but Nova Launcher remains backwards compatible with this approach. Legacy dock backgrounds can be specified in a `string-array` named `dock_backgroundlist`, which is also used by other launchers.
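For reference, a legacy `dock_backgroundlist` declaration is sketched below, assuming it lives in `res/values/arrays.xml`. The drawable names are illustrative, and the exact item format may vary between launchers.

```xml
<!-- res/values/arrays.xml — illustrative legacy dock background list -->
<resources>
    <string-array name="dock_backgroundlist">
        <item>dock_background_dark</item>
        <item>dock_background_light</item>
    </string-array>
</resources>
```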
Wallpapers
----------
To add your wallpapers to Nova Launcher's wallpaper picker, specify them in `res/xml/theme_wallpapers.xml`. The format is identical to `res/xml/drawable.xml`.
<item drawable="@drawable/wallpaper_red" />
| 1 |
Nukkit/ExamplePlugin | Example Nukkit plugin, showing the API | null | # ExamplePlugin
Example Nukkit plugin, showing the API
| 1 |
gtiwari333/spring-boot-web-application-sample | Real World Spring Boot Web Application Example with tons of ready to use features | archunit gatling java java-web jms keycloak mapstruct seed-project selenide selenium skeleton-application spock spring spring-boot spring-mvc spring-security thymeleaf webjars | ### A Spring Boot Web Application Sample with tons of ready-to-use features. This can be used as a starter for bigger projects.
#### Variations
- Simpler version without KeyCloak and multi-modules is on separate project https://github.com/gtiwari333/spring-boot-blog-app
- Microservice example that uses Spring Cloud features(discovery, gateway, config server etc) is on separate project https://github.com/gtiwari333/spring-boot-microservice-example-java
### App Architecture:
[](https://lucid.app/documents/view/fa076c6e-86d3-412b-a9bc-1996dca86a1e)
#### Included Features/Samples
MicroService:
[//]: # (- Spring micrometer based tracing with zipkin)
- Exposing and implementing Open Feign clients
- Spring Cloud Contract (WIP)
Spring MVC:
- Public and internal pages
- MVC with thymeleaf templating
- Live update of thymeleaf templates for local development
- HTML fragments, reusable pagination component using Thymeleaf parameterized fragments
- webjar - bootstrap4 + jquery
- Custom Error page
- Request logger filter
- Swagger API Docs with UI ( http://localhost:8081/swagger-ui.html)
- @RestControllerAdvice, @ControllerAdvice demo
- CRUD UI + File upload/download
- favicon handler
Security:
- Account management with KeyCloak
- Spring Security
- User/User_Authority entity and repository/services
- login, logout, home pages based on user role
- Domain object Access security check on update/delete using custom PermissionEvaluator
- private pages based on user roles
- public home page -- view all notes by all
- Limit max number of record in a paged request
Persistence/Search:
- Data JPA with User/Authority/Note/ReceivedFile entities, example of EntityGraph
- MySQL or any other SQL db can be configured for prod/docker etc profiles
- (in old code) H2 db for local, Console enabled for local ( http://localhost:8081/h2-console/, db url: jdbc:h2:mem:testdb, username: sa)
- jOOQ integration with code generation based on JPA entity
- Liquibase database migration
Test:
- Unit/integration with JUnit 5, Mockito and Spring Test
- Tests with Spock Framework (Groovy 4, Spock 2)
- e2e with Selenide and fixtures; default data generated using Spring
- Load test with Gatling/Scala
- Architecture tests using ArchUnit
- file upload/download e2e test with Selenide
- TestContainers to perform realistic integration test
- Reset DB and cache between tests
- Assert expected query count during integration tests
Misc:
- Code Generation: lombok, mapstruct
- Message Queue using ActiveMQ Artemis
- Approval/flagging api - message based
- Nested comment
- Cache implemented
- Zipkin tracing
- Websocket implemented to show article/comment review status/notifications..
Future: do more stuff
- CQRS with event store/streaming
- Spring Cloud Contract integration (WIP)
- Docker-compose deploy/kubernetes
- Visitors log - IP, browser, etc
- Centralized error reporting
- Geo-Spatial query for visitors
- Grafana Dashboard, @Timed and more ...
- logback LevelChangePropagator integration
- logback error email
- logback rolling policy
- Integrate Markdown editor for writing notes
- rate limit by IP on public API ( article api )
- Fetch user's avatar
- UI improvement
- S3 file upload, test with localstack TestContainers
- nested comment query/performance fix
- Signup UI
- vendor neutral security with OIDC
- JfrUnit ( WIP )
### Requirements
- JDK 17+
- Lombok configured on IDE
- http://ganeshtiwaridotcomdotnp.blogspot.com/2016/03/configuring-lombok-on-intellij.html
- For eclipse, download the lombok jar, run it, and point to eclipse installation
- Maven
- Docker
- Make sure docker is started and running
- Run `$ sudo chmod 666 /var/run/docker.sock` if you get an error like "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (Details: [13] Permission denied)"
#### How to Run
It contains the following applications:
- main-app
- email-service (optional)
- report-service (optional)
- trend-service (optional)
- content-checker (optional)
**Note**: you will need to create a database named `seedapp` in your MySQL server.
Option 1 - run with manually started KeyCloak, ActiveMQ and MySQL servers
- Run ```mvn clean install``` at root
- Run ```docker-compose -f config/docker-compose.yml up``` at root to start docker containers
- Go to main-app folder and run ```mvn``` to start the application
Option 2 - automatically start KeyCloak, ActiveMQ and MySQL using TestContainer while application is starting
- Run ```mvn clean install``` at root
- Go to main-app folder and run ```mvn -Pdev,withTestContainer``` to start the application
Option 3 - run from IDE
- Import the project into your IDE, compile the full project, and run `Application.java` in the main-app module
- Update the run configuration to run the maven goal `wro4j:run` in Before Launch; it should come after 'Build'
## Run Tests (use ./mvnw instead of mvn if you want to use maven wrapper)
Tests use TestContainers, which requires Docker to be installed locally.
##### Running full tests
`mvn clean verify`
##### Running unit tests only (it uses maven surefire plugin)
`mvn compiler:testCompile resources:testResources surefire:test`
##### Running integration tests only (it uses maven-failsafe-plugin)
`mvn compiler:testCompile resources:testResources failsafe:integration-test`
## Code Quality
##### The `error-prone` checks run at compile time.
##### The `modernizer`, `checkstyle` and `spotbugs` plugins run as part of the maven `test-compile` lifecycle phase. Use `mvn spotbugs:gui` to view the SpotBugs report in a GUI.
##### SonarQube scan
Run sonarqube server using docker
`docker run -e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true -p 9000:9000 sonarqube:latest`
Perform scan:
`mvn sonar:sonar`
or, with credentials: `mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin`
View Reports in SonarQube web ui:
- visit http://localhost:9000
- the default login and password are `admin`; you will be asked to change the password after logging in with the default username/password
- (optional) change sonarqube admin password without logging
in: `curl -u admin:admin -X POST "http://localhost:9000/api/users/change_password?login=admin&previousPassword=admin&password=NEW_PASSWORD"`
- if you change the password, make sure to update `-Dsonar.password=admin` the next time you run the sonar scan
### Dependency vulnerability scan
Owasp dependency check plugin is configured. Run `mvn dependency-check:check` to run scan and
open `dependency-check-report.html` from target to see the report.
## Run Tests Faster by using parallel maven build
`mvn -T 5 clean package`
Once the application starts, open `http://localhost:8081` in your browser. The default username/passwords are listed in `gt.app.Application.initData`, which are:
- system/pass
- user1/pass
- user2/pass
#### Screenshots:
#### Public View

#### Read Article with nested comment/discussion

#### Logged in Feed View

#### Logged in User's Article List View

#### Admin User's Review Page to approve/disapprove flagged posts

#### Review Page

#### New Article

#### Dependency/plugin version checker
- `mvn versions:display-dependency-updates`
- `mvn versions:display-plugin-updates`
| 1 |
Microservice-API-Patterns/LakesideMutual | Example Application for Microservice API Patterns (MAP) and other patterns (DDD, PoEAA, EIP) | null | #  Lakeside Mutual
Lakeside Mutual is a fictitious insurance company which serves as a sample application to demonstrate microservices and domain-driven design. The company provides several digital services to its customers and its employees. [Microservice API Patterns (MAP)](https://microservice-api-patterns.org/) are applied in the application backends (see [MAP.md](./MAP.md)).
## Architecture Overview
The following diagram shows an overview of the core components that are the building blocks for the services Lakeside Mutual provides to its customers and its employees:

The following sections contain a short description of each service:
- **[Customer Core](customer-core)**
The Customer Core backend is a [Spring Boot](https://projects.spring.io/spring-boot/) application that manages the personal data about
individual customers. It provides this data to the other backend services through an HTTP resource API.
- **[Customer Self-Service Backend](customer-self-service-backend)**
The Customer Self-Service backend is a [Spring Boot](https://projects.spring.io/spring-boot/) application that
provides an HTTP resource API for the Customer Self-Service frontend.
- **[Customer Self-Service Frontend](customer-self-service-frontend)**
The Customer Self-Service frontend is a [React](https://reactjs.org/) application that allows users to register themselves, view their current insurance policy and change their address.
- **[Customer Management Backend](customer-management-backend)**
The Customer Management backend is a [Spring Boot](https://projects.spring.io/spring-boot/) application that
provides an HTTP resource API for the Customer Management frontend and the Customer Self-Service frontend. In addition, [WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) are used to implement the chat feature to deliver chat messages in realtime between the callcenter agent using the Customer Management frontend and the Customer logged into the Self-Service frontend.
- **[Customer Management Frontend](customer-management-frontend)**
The Customer Management frontend is a [React](https://reactjs.org/) application that allows Customer-Service operators to interact with customers and help them resolve issues related to Lakeside Mutual's insurance products.
- **[Policy Management Backend](policy-management-backend)**
The Policy Management backend is a [Spring Boot](https://projects.spring.io/spring-boot/) application that provides an HTTP resource API for the Customer Self-Service frontend and the Policy Management frontend. It also sends a message (via [ActiveMQ](http://activemq.apache.org/) messaging) to the Risk Management Server whenever an insurance policy is created / updated.
- **[Policy Management Frontend](policy-management-frontend)**
The Policy Management frontend is a [Vue.js](https://vuejs.org/) application that allows Lakeside Mutual employees to view and manage the insurance policies of individual customers.
- **[Risk Management Server](risk-management-server)**
The Risk-Management server is a [Node.js](https://nodejs.org) application that gathers data about customers / policies and can generate a customer data report on demand.
- **[Risk Management Client](risk-management-client)**
The Risk-Management client is a command-line tool built with [Node.js](https://nodejs.org). It allows the
professionals of Lakeside Mutual to periodically download a customer data report which helps them during risk assessment.
- **[Eureka Server](eureka-server)**
[Eureka Server](https://spring.io/guides/gs/service-registration-and-discovery/#initial) provides a service registry. It is a regular Spring Boot application to which all other Spring services can connect to access other services. For example, the Customer Self-Service Backend uses Eureka to connect to the Customer Core. Usage of Eureka is optional.
- **[Spring Boot Admin](spring-boot-admin)**
[Spring Boot Admin](https://github.com/codecentric/spring-boot-admin) is open-source software for managing and monitoring Spring Boot applications. It *is* a Spring Boot application too. Usage within the Lakeside Mutual services is optional and only included for convenience, with all security disabled.
The backends use Domain-Driven Design (DDD) to structure their domain (business) logic and their service-internal logical layers. To do so, they use marker interfaces defined in this [Domain-Driven Design Library](https://github.com/Microservice-API-Patterns/DDD-Library).
To learn more about individual components, please have a look at the README file in the corresponding subfolder.
## Getting started
Detailed setup instructions can be found in each application's README file. To conveniently start all applications, the `run_all_applications` scripts can be used. Alternatively, to start a minimal subset of applications, i.e., the Customer Management Applications and the Customer Core, use the `run_customer_management_applications` script.
1. Make sure you have [Java 8 or higher](https://adoptium.net/) installed.
1. Install [Node](https://nodejs.org/en/). Version 12 or later is required. You can check the currently installed version by running `node --version`.
1. Install Python. We don't use Python ourselves, but some Node.js packages require native addons that are built using node-gyp, which requires Python. See the [node-gyp README for details on which Python version to install](https://github.com/nodejs/node-gyp#on-unix).
1. Install Maven (see [https://maven.apache.org](https://maven.apache.org) for installation instructions).
1. Run the `run_all_applications` script suitable for your platform. Note that the frontend applications might be running before the backends are ready. In that case, just reload the page in the browser.
If the script exits, one of the applications could not be started. For troubleshooting, we recommend starting the applications individually. Note that you don't need to start all applications; the overview diagram above can be used to figure out the dependencies of each service.
The following table lists all the ports that have to be free for each component to work correctly. If you need to change any of these ports, please
consult the README of the corresponding component:
| Component | Ports |
| ---------- | ----- |
| [Customer Self-Service Backend](customer-self-service-backend) | 8080 (HTTP resource API) |
| [Policy Management Backend](policy-management-backend) | 8090 (HTTP resource API)<br/>61613 (ActiveMQ broker)<br/>61616 (ActiveMQ broker) |
| [Customer Management Backend](customer-management-backend) | 8100 (HTTP resource API) |
| [Customer Core](customer-core) | 8110 (HTTP resource API) |
| [Customer Self-Service Frontend](customer-self-service-frontend) | 3000 (Web server) |
| [Policy Management Frontend](policy-management-frontend) | 3010 (Web server) |
| [Customer Management Frontend](customer-management-frontend) | 3020 (Web server) |
| [Risk Management Server](risk-management-server) | 50051 (gRPC server) |
| [Risk Management Client](risk-management-client) | - (CLI Client) |
| [Eureka Server](eureka-server) | 8761 (Admin web frontend) |
| [Spring Boot Admin](spring-boot-admin) | 9000 (Web server) |
## Docker
All projects come with Dockerfiles that can be used to run the services as Docker containers. The [docker-compose.yml](./docker-compose.yml) builds and starts all applications in a single command. See the [docker-compose.yml](./docker-compose.yml) for more information. Note that building the images takes some time (without caches, our most recent build took 6 minutes on a development machine).
## Data Stores
Each backend service has its own data store. The Spring-JPA-based applications all use the H2 relational database. By default, all data is lost on restart; please see the individual README files to enable durable persistence. The backend services also include the H2 Console to browse the database. It can be found at `/console`; for example, for the Customer Core, the address is [http://localhost:8110/console](http://localhost:8110/console).
## Frequently Asked Questions and Troubleshooting
See our [FAQ](./FAQ.md) for more information on running the applications and the [IDE instructions](./IDE_INSTRUCTIONS.md) page to get started with IntelliJ IDEA, Eclipse and Visual Studio Code.
## License
This project is made available under the Eclipse Public License v 2.0. See the [LICENSE](LICENSE.md) file for the full license.
| 1 |
JavaZakariae/Spring5Certification | Spring Certification: This repository contains my examples and some best references to prepare the Spring 5 certification | aop certification ioc-framework pluralsight-courses spring spring-boot spring-data spring-data-jpa spring-jdbc spring-mvc spring-security springboot springframework springmvc | 

# Preparation for the Spring 5 Certification
- [Preparation for the Spring 5 Certification](#preparation-for-the-spring-5-certification)
- [Introduction](#introduction)
- [The best resources to follow](#the-best-resources-to-follow)
- [Books](#books)
- [Youtube channel](#youtube-channel)
- [Pluralsight courses(paid but worth it)](#pluralsight-coursespaid-but-worth-it)
- [AOP](#aop)
- [Spring JDBC](#spring-jdbc)
- [Spring data jpa](#spring-data-jpa)
- [Spring Boot](#spring-boot)
- [Stackoverflow questions](#stackoverflow-questions)
- [Other resources](#other-resources)
- [Personal opinion](#personal-opinion)
## Introduction
This repository contains my code examples for the preparation of the certification. I passed the certification on the 20th of July 2019 with a score of 88%. This repository contains some good references to master the fundamentals of the framework, so it's not only for certification purposes.
The certification questions are based on the official [study-guide](https://d1fto35gcfffzn.cloudfront.net/academy/Spring-Professional-Certification-Study-Guide.pdf), so I recommend focusing on those questions. In my opinion, if you can easily answer those questions, you will pass the certification.
To prepare for my certification, I read a few books and a lot of articles, and I watched many Pluralsight courses and some Youtube videos on specific subjects.
To help you prepare for the certification, I recommend learning from the following resources; some are not free, but they are worth it.
## The best resources to follow
[Core Spring 5 Certification in Detail](https://leanpub.com/corespring5certificationindetail) by [Ivan Krizsan
](https://leanpub.com/u/ivan-krizsan)
This book is a must for everyone willing to pass the certification, but it should be checked only as a last step before taking the exam. For newcomers to the framework, I don't recommend beginning with it, because it is just a summary and contains answers that are mainly helpful for passing the certification.
[ivankrizsan's blog](https://www.ivankrizsan.se/)
## Books
[Spring in Action: Covers Spring 4](https://www.amazon.com/Spring-Action-Covers-4/dp/161729120X/ref=sr_1_2?keywords=spring+in+action&qid=1570962746&sr=8-2):
I don't recommend the fifth edition; the 4th edition is a good start for everyone willing to understand the fundamentals of the framework, like Dependency Injection and Aspect-Oriented Programming. It is not mandatory to read every chapter. It was the first resource I checked; I didn't read the full book, only a few chapters, let's say 50% of the book.
[Spring 5 Design Patterns](https://www.amazon.com/Spring-Design-Patterns-application-development/dp/1788299450/ref=sr_1_1?crid=ZRVPY8S85GBD&keywords=spring+design+patterns&qid=1570962674&sprefix=spring+design+pa%2Caps%2C216&sr=8-1) :
A really good book that helped me to dig deeper into the framework. It will help you to understand how the Spring Framework is using some design patterns internally.
[Spring Boot in Action](https://www.amazon.com/Spring-Boot-Action-Craig-Walls/dp/1617292540/ref=sr_1_2?qid=1570963413&refinements=p_27%3ACraig+Walls&s=books&sr=1-2&text=Craig+Walls):
For a deeper understanding of the Spring Boot module. The book is not big, but it gives more details about the Spring Boot modules. Apart from that, I watched many Youtube videos to get a better understanding of Spring Boot; I recommend checking some talks given by [Stéphane Nicoll](https://www.youtube.com/results?search_query=spring+boot+stephane+nicol) about Spring Boot.
## Youtube channel
[Laurentiu Spilca](https://www.youtube.com/channel/UC0z3MpVGrpSZzClXrYcZBfw):
Laurentiu Spilca made a playlist on Spring Fundamentals; you will find very good content on DI, AOP, Transactions, Data, Rest, Actuator and many more. Recently he made a playlist about Spring Security.
## Pluralsight courses(paid but worth it)
#### AOP
[Aspect Oriented Programming](https://app.pluralsight.com/library/courses/aspect-oriented-programming-spring-aspectj/table-of-contents)
#### Spring JDBC
[Building Applications Using Spring JDBC
](https://app.pluralsight.com/library/courses/building-applications-spring-jdbc/table-of-contents)
#### Spring data jpa
[Getting Started with Spring Data JPA](https://app.pluralsight.com/library/courses/spring-data-jpa-getting-started/table-of-contents)
#### Spring Boot
[Creating Your First Spring Boot Application](https://app.pluralsight.com/library/courses/spring-boot-first-application/table-of-contents)
[Spring Boot: Efficient Development, Configuration, and Deployment](https://app.pluralsight.com/library/courses/spring-boot-efficient-development-configuration-deployment/table-of-contents)
## Stackoverflow questions
Here are some responses to some very important questions:
- [java - @RequestBody and @ResponseBody annotations in Spring - Stack Overflow](https://stackoverflow.com/questions/11291933/requestbody-and-responsebody-annotations-in-spring).
- [java - ApplicationContext and ServletContext - Stack Overflow](https://stackoverflow.com/questions/31931848/applicationcontext-and-servletcontext).
- [Differences between Abstract Factory Pattern and Factory Method - Stack Overflow](https://stackoverflow.com/questions/5739611/differences-between-abstract-factory-pattern-and-factory-method).
- [java - BeanFactoryPostProcessor and BeanPostProcessor in lifecycle events - Stack Overflow](https://stackoverflow.com/questions/30455536/beanfactorypostprocessor-and-beanpostprocessor-in-lifecycle-events).
- [java - Difference between <context:annotation-config> vs <context:component-scan> - Stack Overflow](https://stackoverflow.com/questions/7414794/difference-between-contextannotation-config-vs-contextcomponent-scan).
- [java - Difference between applicationContext.xml and spring-servlet.xml in Spring Framework - Stack Overflow](https://stackoverflow.com/questions/3652090/difference-between-applicationcontext-xml-and-spring-servlet-xml-in-spring-frame).
- [java - Difference between Interceptor and Filter in Spring MVC - Stack Overflow](https://stackoverflow.com/questions/35856454/difference-between-interceptor-and-filter-in-spring-mvc/35856496).
- [java - Hibernate SessionFactory vs. EntityManagerFactory - Stack Overflow](https://stackoverflow.com/questions/5640778/hibernate-sessionfactory-vs-entitymanagerfactory).
- [java - How do I update an entity using spring-data-jpa? - Stack Overflow](https://stackoverflow.com/questions/11881479/how-do-i-update-an-entity-using-spring-data-jpa).
- [java - How do servlets work? Instantiation, sessions, shared variables and multithreading - Stack Overflow](https://stackoverflow.com/questions/3106452/how-do-servlets-work-instantiation-sessions-shared-variables-and-multithreadi).
- [java - How to accept Date params in a GET request to Spring MVC Controller? - Stack Overflow](https://stackoverflow.com/questions/15164864/how-to-accept-date-params-in-a-get-request-to-spring-mvc-controller).
- [java - How to access a value defined in the application.properties file in Spring Boot - Stack Overflow](https://stackoverflow.com/questions/30528255/how-to-access-a-value-defined-in-the-application-properties-file-in-spring-boot).
- [java - PUT request in Spring MVC - Stack Overflow](https://stackoverflow.com/questions/35878351/put-request-in-spring-mvc).
- [java - Spring MVC: How to perform validation? - Stack Overflow](https://stackoverflow.com/questions/12146298/spring-mvc-how-to-perform-validation).
- [java - What are the possible values of the #Hibernate hbm2ddl.auto configuration and what do they do - Stack Overflow](https://stackoverflow.com/questions/438146/what-are-the-possible-values-of-the-hibernate-hbm2ddl-auto-configuration-and-wh).
- [java - What is @ModelAttribute in Spring MVC? - Stack Overflow](https://stackoverflow.com/questions/3423262/what-is-modelattribute-in-spring-mvc).
- [java - What is a NoSuchBeanDefinitionException and how do I fix it? - Stack Overflow](https://stackoverflow.com/questions/39173982/what-is-a-nosuchbeandefinitionexception-and-how-do-i-fix-it).
- [java - What's the difference between ResponseEntity and HttpEntity in Spring_ - Stack Overflow](https://stackoverflow.com/questions/42829823/whats-the-difference-between-responseentity-and-httpentity-in-spring).
- [Spring JdbcTemplate execute vs update - Stack Overflow](https://stackoverflow.com/questions/39454507/spring-jdbctemplate-execute-vs-update).
- [spring - How to implement RowMapper using java lambda expression - Stack Overflow](https://stackoverflow.com/questions/41923360/how-to-implement-rowmapper-using-java-lambda-expression).
- [spring boot - Difference between using MockMvc with SpringBootTest and Using WebMvcTest - Stack Overflow](https://stackoverflow.com/questions/39865596/difference-between-using-mockmvc-with-springboottest-and-using-webmvctest).
- [What is the difference between BeanPostProcessor and init/destroy method in Spring? - Stack Overflow](https://stackoverflow.com/questions/9862127/what-is-the-difference-between-beanpostprocessor-and-init-destroy-method-in-spri).
- [Spring Boot - Loading Initial Data - Stack Overflow](https://stackoverflow.com/questions/38040572/spring-boot-loading-initial-data).
- [java - BeanPostProcessor confusion - Stack Overflow](https://stackoverflow.com/questions/9761839/beanpostprocessor-confusion?rq=1).
- [Using Spring ResponseEntity to Manipulate the HTTP Response](https://www.baeldung.com/spring-response-entity).
- [Spring MVC and the @ModelAttribute Annotation | Baeldung](https://www.baeldung.com/spring-mvc-and-the-modelattribute-annotation).
- [Spring AOP AspectJ @AfterThrowing Example](https://howtodoinjava.com/spring-aop/aspectj-afterthrowing-annotation-example).
- [Difference between getOne and findById in Spring Data JPA?](https://www.javacodemonk.com/difference-between-getone-and-findbyid-in-spring-data-jpa-3a96c3ff).
- [Difference Between BeanFactory and ApplicationContext in Spring](https://dzone.com/articles/difference-between-beanfactory-and-applicationcont).
## Other resources
[Git repository](https://github.com/vshemyako/spring-certification-5.0): This repository contains answers to the official study-guide questions.
[Git repository](https://github.com/LinnykOleh/Spring): This second repository contains answers to the official study-guide questions as well as explanations of the theoretical notions covered in the certification.
[Git repository](https://github.com/vojtechruz/spring-core-cert-notes-4.2): Although this third repository is about the Spring 4 certification, it contains very good content and explanations about the Spring ecosystem.
## Personal opinion
Sometimes, even if the resources are valuable, it can be hard to understand some subjects; from my personal experience, it helps to review the prerequisites of a complicated subject first.
It also depends on your background in Spring and the design patterns used by the framework.
In parallel with that, I read a lot of answers on Stackoverflow; just google what you don't understand.
| 0 |
theautonomy/bouncycastle-gpg-example | Bouncy Castle OpenGPG encryption and decryption example | null | ## Introduction
This is an example of using Bouncy Castle's OpenPGP utility to encrypt
and decrypt files.
This project is a refactoring of the Bouncy Castle example which you can
find [here](http://www.java2s.com/Open-Source/Java-Document/Security/Bouncy-Castle/org/bouncycastle/openpgp/examples/KeyBasedLargeFileProcessor.java.htm).
## Code snippet to encrypt a file without signing
BCPGPEncryptor encryptor = new BCPGPEncryptor();
encryptor.setArmored(false);
encryptor.setCheckIntegrity(true);
encryptor.setPublicKeyFilePath("./test.gpg.pub");
encryptor.encryptFile("./test.txt", "./test.txt.enc");
## Code snippet to decrypt a file without verifying the signature
BCPGPDecryptor decryptor = new BCPGPDecryptor();
decryptor.setPrivateKeyFilePath("test.gpg.prv");
decryptor.setPassword("password");
decryptor.decryptFile("test.txt.enc", "test.txt.dec");
## Code snippet to encrypt and sign a file
BCPGPEncryptor encryptor = new BCPGPEncryptor();
encryptor.setArmored(false);
encryptor.setCheckIntegrity(true);
encryptor.setPublicKeyFilePath("./test.gpg.pub");
encryptor.setSigning(true);
encryptor.setSigningPrivateKeyFilePath("wahaha.gpg.prv");
encryptor.setSigningPrivateKeyPassword("password");
encryptor.encryptFile("./test.txt", "./test.txt.signed.enc");
## Code snippet to decrypt a file and verify the signature
BCPGPDecryptor decryptor = new BCPGPDecryptor();
decryptor.setPrivateKeyFilePath("test.gpg.prv");
decryptor.setPassword("password");
decryptor.setSigned(true);
decryptor.setSigningPublicKeyFilePath("wahaha.gpg.pub");
// this file is encrypted with weili's public key and signed using wahaha's private key
decryptor.decryptFile("test.txt.signed.enc", "test.txt.signed.dec");
## Try it
This project contains a test pgp public and private key which are included for test
purposes so that you can try it out right away. You can run the following mvn command
from the command line:
>mvn exec:java -Dexec.mainClass=com.test.pgp.bc.BCPGPTest
## Note
If you get the error "java.security.InvalidKeyException: Illegal key size", you may need to install
the unrestricted policy files for the JVM you are using. See details [here](http://www.bouncycastle.org/wiki/display/JA1/Frequently+Asked+Questions).
| 1 |
PacktPublishing/Java-9-Programming-By-Example | Java 9 Programming By Example published by Packt | null |
# Java 9 Programming By Example
This is the code repository for [Java 9 Programming By Example](https://www.packtpub.com/application-development/java-9-programming-example?utm_source=github&utm_medium=repository&utm_campaign=9781786468284), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.
## About the Book
This book gets you started with essential software development easily and quickly, guiding you through Java’s different facets. By adopting this approach, you can bridge the gap between learning and doing immediately. You will learn the new features of Java 9 quickly and experience a simple and powerful approach to software development. You will be able to use the Java runtime tools, understand the Java environment, and create Java programs.
We then cover further simple examples to build your foundation before diving into some complex data structure problems that will solidify your Java 9 skills. With a special focus on modularity and HTTP 2.0, this book will guide you toward getting employed as a top-notch Java developer.
By the end of the book, you will have a firm foundation to continue your journey towards becoming a professional Java developer.
## Instructions and Navigation
All of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.
The code will look like the following:
```
package packt.java9.by.example.ch03;
public interface Sort {
void sort(SortableCollection collection);
}
```
To immerse into the content of this book and to soak up most of the skills and knowledge, we assume that you already have some experience with programming. We do not assume too much but hope that you already know what a variable is, that computers have memory, disk, network interfaces, and what they generally are.
In addition to these basic skills, there are some technical requirements to try out the code and the examples of the book. You need a computer—something that is available today and can run Windows, Linux, or OSX. You need an operating system and, probably, that is all you need to pay for. All other tools and services that you will need are available as open source and free of charge. Some of them are also available as commercial products with an extended feature set, but for the scope of this book, starting to learn Java 9 programming, those features are not needed. Java, a development environment, build tools, and all other software components we use are open source.
## Related Products
* [Java 9 with JShell](https://www.packtpub.com/application-development/java-9-jshell?utm_source=github&utm_medium=repository&utm_campaign=9781787282841)
* [Java Data Science Cookbook](https://www.packtpub.com/big-data-and-business-intelligence/java-data-science-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781787122536)
* [Java Hibernate Cookbook](https://www.packtpub.com/application-development/java-hibernate-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781784391904)
### Suggestions and Feedback
[Click here](https://docs.google.com/forms/d/e/1FAIpQLSe5qwunkGf6PUvzPirPDtuy1Du5Rlzew23UBp2S-P3wB-GcwQ/viewform) if you have any feedback or suggestions.
### Download a free PDF
<i>If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.<br>Simply click on the link to claim your free PDF.</i>
<p align="center"> <a href="https://packt.link/free-ebook/9781786468284">https://packt.link/free-ebook/9781786468284 </a> </p> | 1 |
ttulka/ddd-example-ecommerce-microservices | Domain-driven design microservices example | architecture ddd docker domain-driven-design event-driven example kubernetes microservices oop soa spring-boot |
# DDD Microservices Example Project in Java: eCommerce
The purpose of this project is to provide a sample implementation of an e-commerce product following **Domain-Driven Design (DDD)** and **Service-Oriented Architecture (SOA)** principles.
The programming language is Java, with heavy use of Spring Boot, Docker, and Kubernetes.
## Purpose of the Project
This repository focuses mostly on cross-cutting, infrastructure and deployment concerns.
For the domain and application concepts see the [original repository](https://github.com/ttulka/ddd-example-ecommerce).
## Monolith vs Microservices
Both monolithic and microservices deployments are implemented.
To run the monolithic application:
```sh
./gradlew :application:bootRun
```
To set up and run microservices, see the [Docker](#docker-containers) and [Kubernetes](#kubernetes) sections.
Read more about monoliths vs microservices at https://blog.ttulka.com/good-and-bad-monolith
## Message Broker
A simple **Redis** instance can be used as the message broker with the Spring profile `redis`:
```sh
docker run --rm --name redis-broker -p 6379:6379 -d redis:6 redis-server
./gradlew :application:bootRun --args='--spring.profiles.active=redis'
```
Alternatively, **RabbitMQ** can be used as the message broker with the Spring profile `rabbitmq`:
```sh
docker run --rm --name rabbitmq-broker -p 5672:5672 -d rabbitmq:3
./gradlew :application:bootRun --args='--spring.profiles.active=rabbitmq'
```
When neither the `redis` nor the `rabbitmq` profile is active, the system falls back to Spring application events as the default messaging mechanism.
### Messaging Integration
To make the code independent of a concrete messaging implementation and easy to use, Spring application events are used for the internal communication.
In practice, this means that messages are published via the `EventPublisher` abstraction and consumed via Spring's `@EventListener`.
To make this work, the external messages are re-sent as Spring application events under the hood.
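As a rough illustration of this pattern (the class and method names below are simplified stand-ins, not the repository's actual API), an in-process event bus lets domain code publish events without knowing whether they came from Redis, RabbitMQ, or plain application events:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified sketch of the EventPublisher/@EventListener idea described above.
// Names are illustrative only; the real project delegates this to Spring.
public class SimpleEventBus {

    private final List<Consumer<Object>> listeners = new ArrayList<>();

    // Roughly what registering an @EventListener method achieves.
    public void subscribe(Consumer<Object> listener) {
        listeners.add(listener);
    }

    // Domain code depends only on this method, not on a concrete broker.
    public void publish(Object event) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event);
        }
    }

    public static void main(String[] args) {
        SimpleEventBus bus = new SimpleEventBus();
        List<Object> received = new ArrayList<>();
        bus.subscribe(received::add);
        // A message arriving from an external broker would be re-sent like this:
        bus.publish("OrderPlaced");
        System.out.println(received); // prints [OrderPlaced]
    }
}
```

The broker adapters then only need to call `publish` when an external message arrives, which keeps the listeners identical across all messaging profiles.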
## Database
The whole system uses one externalized database with particular tables owned exclusively by services.
In a real-world system this separation would be further implemented by separate schemas/namespaces.
A **PostgreSQL** instance can be used as the database:
```sh
docker run --rm --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=secret -d postgres:13
```
Start the application with Spring profile `postgres`:
```sh
./gradlew :application:bootRun --args='--spring.profiles.active=postgres'
```
When the `postgres` profile is not active, the system falls back to H2 as the default database.
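For reference, the `postgres` profile boils down to standard Spring Boot datasource settings. A minimal sketch of such a configuration (the values mirror the `docker run` command above; the repository's actual property file may differ):

```yaml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/postgres
    username: postgres
    password: secret
```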
## Gradle Build
The project is a Gradle multi-project build; all sub-projects can be built with a single command:
```sh
./gradlew clean build
```
## Docker Containers
Build an image per microservice via Gradle Spring Boot plugin:
```sh
./gradlew bootBuildImage
```
To run the containers:
```sh
docker container run --rm -p 8080:8001 ttulka/ecommerce-catalog-service
docker container run --rm -p 8080:8002 ttulka/ecommerce-order-service
docker container run --rm -p 8080:8003 ttulka/ecommerce-cart-service
docker container run --rm -p 8080:8004 ttulka/ecommerce-payment-service
docker container run --rm -p 8080:8005 ttulka/ecommerce-delivery-service
docker container run --rm -p 8080:8006 ttulka/ecommerce-dispatching-service
docker container run --rm -p 8080:8007 ttulka/ecommerce-warehouse-service
docker container run --rm -p 8080:8000 ttulka/ecommerce-portal-service
```
Active profiles can be set as follows:
```sh
docker container run --rm -e "SPRING_PROFILES_ACTIVE=redis,postgres" -p 8080:8001 ttulka/ecommerce-catalog-service
```
### Docker-Compose
Build NGINX reverse proxy image:
```sh
docker build -t ttulka/ecommerce-reverseproxy reverseproxy
```
Start the entire microservices stack:
```sh
docker-compose up
```
Access the Postgres database and initialize some data:
```sh
docker exec -it <containerID> psql -U postgres postgres
```
```sql
INSERT INTO categories VALUES
('C1', 'books', 'Books'),
('C2', 'games', 'Games');
INSERT INTO products VALUES
('P1', 'Domain-Driven Design', 'by Eric Evans', 45.00),
('P2', 'Object Thinking', 'by David West', 35.00),
('P3', 'Chess', 'Classic game.', 3.20);
INSERT INTO products_in_categories VALUES
('P1', 'C1'),
('P2', 'C1'),
('P3', 'C2');
INSERT INTO products_in_stock VALUES
('P1', 5),
('P2', 0),
('P3', 1);
```
The NGINX reverse proxy serves as a simple API gateway:
```sh
curl localhost:8080/catalog/products
curl localhost:8080/warehouse/stock/5
```
## Kubernetes
For development with Minikube, run the following commands so that locally built Docker images are used:
```sh
minikube start
eval $(minikube docker-env)
```
Afterwards, build the Docker images again for Minikube's Docker daemon:
```sh
./gradlew bootBuildImage
```
Create deployments:
```sh
kubectl apply -f 1-infrastructure.k8s.yml
kubectl apply -f 2-backend-services.k8s.yml
kubectl apply -f 3-frontend-portal.k8s.yml
kubectl apply -f 4-api-gateway.k8s.yml
```
Set up port forwarding to access the cluster from your local network:
```sh
kubectl port-forward service/reverseproxy 8080:8080
```
Alternatively, you can create an Ingress:
```sh
minikube addons enable ingress
kubectl apply -f 5-ingress.k8s.yml
# get the ingress address
kubectl get ingress ecommerce-ingress
# add the address into hosts
sudo cp /etc/hosts hosts.bak
echo -e '\n<ingress-address> ecommerce.local' | sudo tee -a /etc/hosts
# access the application in browser: http://ecommerce.local
```
| 1 |
piomin/sample-quarkus-applications | Example application built using Quarkus framework | graphql h2-database jaxrs openapi3 panache quarkus quarkus-hibernate-orm quarkus-kotlin quarkus-maven quarkus-panache quarkus-resteasy smallrye swagger | null | 1 |
minwan1/spring-security-oauth2-example | :dart: This is spring-oauth2 example | null | [](https://travis-ci.com/minwan1/spring-security-oauth2-example)
[](https://coveralls.io/github/minwan1/spring-security-oauth2-example)
# Spring-Security-OAuth2-example
This is an example implementation of Spring Security OAuth2.
# Development Environment
* Spring Boot 1.5.9
* Java 8
* Mockito
* Spring REST
* JPA
# Documentation
1. [Step 1: What are OAuth1 and OAuth2?](https://github.com/minwan1/spring-security-oauth2-example/blob/master/docs/step-1%3Aoauth1%2Coauth2%EB%9E%80.md)
2. [Step 2: Implementing spring-security-oauth2 (examples using in-memory and JDBC approaches)](https://github.com/minwan1/spring-security-oauth2-example/blob/master/docs/step-2%3Aspring-oauth2-%EA%B5%AC%ED%98%84.md)
# Branches
* [example-1: OAuth2 implementation using the in-memory approach](https://github.com/minwan1/spring-security-oauth2/tree/example-1)
* [example-2: OAuth2 implementation using the JDBC approach](https://github.com/minwan1/spring-security-oauth2/tree/example-2)
# Run
```
$ mvn spring-boot:run
```
# How to Test
```
Spring-oauth/src/test/java/com/example/oauth/OauthApplicationTests.java
```
| 1 |
agiledon/payroll-ddd | DDD Example based on Scenario-Driven Design and Test-Driven Design | null | null | 1 |
Jaouan/Sending-Animation-Example | It's just an example of sending animation. | null | Android - Article sending animation example
========
It's just an example of sending animation.

References
========
- The project uses [ButterKnife](http://jakewharton.github.io/butterknife/).
License
========
[Apache License Version 2.0](LICENSE) | 1 |
damienbeaufils/spring-boot-clean-architecture-demo | An example of clean architecture implementation with Spring Boot | null | # spring-boot-clean-architecture-demo
[](https://travis-ci.org/damienbeaufils/spring-boot-clean-architecture-demo)
An example of clean architecture with Spring Boot
## Foreword
This application is designed using a [Clean Architecture pattern](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html) (also known as [Hexagonal Architecture](http://www.maximecolin.fr/uploads/2015/11/56570243d02c0_hexagonal-architecture.png)).
Therefore, [SOLID principles](https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)) are used in the code, especially the [Dependency Inversion Principle](https://en.wikipedia.org/wiki/Dependency_inversion_principle) (not to be confused with the classic dependency injection in Spring, for example).
Concretely, there are 3 main packages: `domain`, `use_cases` and `infrastructure`. These packages have to respect these rules:
- `domain` contains the business code and its logic, and has no outward dependency: nor on frameworks (Hibernate for example), nor on `use_cases` or `infrastructure` packages.
- `use_cases` is like a conductor: it depends only on the `domain` package to execute business logic. `use_cases` should not have any dependencies on `infrastructure`.
- `infrastructure` contains all the technical details, configuration, implementations (database, web services, etc.), and must not contain any business logic. `infrastructure` has dependencies on `domain`, `use_cases` and frameworks.
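As a minimal sketch of these rules (the class names below are invented for illustration and are not this demo's actual classes), the domain owns the repository interface while the infrastructure supplies the implementation, so the source-code dependency points inward:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative-only sketch of the three-package dependency rule.
public class CleanArchitectureSketch {

    // domain: a business entity and an outward-facing port, no framework imports.
    static class Customer {
        final String id;
        final String name;
        Customer(String id, String name) { this.id = id; this.name = name; }
    }

    interface CustomerRepository {            // port owned by the domain
        Optional<Customer> findById(String id);
    }

    // use_cases: orchestrates business logic through the port only.
    static class GetCustomerNameUseCase {
        private final CustomerRepository repository;
        GetCustomerNameUseCase(CustomerRepository repository) { this.repository = repository; }
        String execute(String id) {
            return repository.findById(id).map(c -> c.name).orElse("unknown");
        }
    }

    // infrastructure: a technical detail implementing the domain's port
    // (in the real application this would be a Hibernate/JPA adapter).
    static class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<String, Customer> store = new HashMap<>();
        public Optional<Customer> findById(String id) { return Optional.ofNullable(store.get(id)); }
        void save(Customer customer) { store.put(customer.id, customer); }
    }

    public static void main(String[] args) {
        InMemoryCustomerRepository repository = new InMemoryCustomerRepository();
        repository.save(new Customer("42", "Ada"));
        System.out.println(new GetCustomerNameUseCase(repository).execute("42")); // prints Ada
    }
}
```

Swapping the in-memory adapter for a JPA one changes nothing in `domain` or `use_cases`, which is the Dependency Inversion Principle at work.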
## Install
```
./gradlew assemble
```
## Test
```
./gradlew check
```
## Mutation testing
```
./gradlew pitest
```
## Run
```
./gradlew bootRun
```
| 1 |
ssalevan/cc-helloworld | CommonCrawl Hello World example | null | null | 1 |
ajbkr/HTML5-Cross-Platform-Game-Development-Using-Phaser-3 | Reworked examples from Emanuele Feronato's HTML5 Cross Platform Game Development Using Phaser 3 (Babel, webpack) | null | HTML5 Cross Platform Game Development Using Phaser 3
====================================================
Reworked examples from Emanuele Feronato's [HTML5 Cross Platform Game
Development Using Phaser
3](http://phaser.io/shop/books/phaser3-cross-platform-games).
I have modified some of the source code to use ES6/7 and have added linting
using [JavaScript Standard Style](https://standardjs.com/).
I have also added Babel, webpack, et al. for bundling.
Android/Cordova Build
---------------------
Ensure `cordova` has been installed from npm:
```
$ npm install cordova -g # may need sudo
```
To build for Android:
```
$ cd 028 # or 029
$ npm run build
$ npm run copy-dist
$ cd ../cordovafolder
$ cordova prepare
```
| 0 |
kousen/mockitobook | Code examples for the book Mockito Made Clear, from Pragmatic Programmers | null | # mockitobook
Code examples for the book _Mockito Made Clear_,
published by Pragmatic Programmers.
See [the book page](https://pragprog.com/titles/mockito/mockito-made-clear/) for more information.

You can check out the whole GitHub Action at [diagram.yml](/.github/workflows/diagram.yml). Notice that we're excluding the `ignore` and `.github` folders, using the `excluded_paths` config.
## Running the code
To run the code, use the included Gradle wrapper (the `gradlew` scripts for Un*x and Windows) and execute either the `build` or `test` tasks. You can also run individual tests via Gradle, or just load them into your preferred IDE.
This project uses [Gradle version catalogs](https://docs.gradle.org/current/userguide/platforms.html#sub:central-declaration-of-dependencies), which require Gradle 7.4 or higher. The included wrapper is higher than that. The dependency versions are inside the `libs.versions.toml` file in the `gradle` directory, which are used inside `build.gradle`.
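For illustration, a version-catalog entry for Mockito in `gradle/libs.versions.toml` might look like the following sketch (the coordinates and version shown are assumptions, not necessarily the book's actual entries):

```toml
[versions]
mockito = "4.8.0"

[libraries]
mockito-core = { module = "org.mockito:mockito-core", version.ref = "mockito" }
```

In `build.gradle` the library is then referenced by type-safe alias, e.g. `testImplementation libs.mockito.core`.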
See also the Mockito play list at my [YouTube channel](https://www.youtube.com/@talesfromthejarside?sub_confirmation=1). | 0 |
msfroh/lucene-university | Self-contained worked examples of Apache Lucene features and functionality | null | # Self-contained Lucene examples
This repository contains some examples of [Apache Lucene](https://lucene.apache.org/) features with verbose explanations
as code comments written in Markdown.
The goal is to provide code samples that can be used in a few ways:
1. Read the source code. The comments should make what's going on pretty clear.
2. Open a code sample in your IDE and step through it with a debugger. Follow along with the comments as you go. Make
changes to the code and see what happens. (Some examples include suggested changes.)
3. Read the code and documentation as a web page generated with [Docco](https://ashkenas.com/docco/) over at
https://msfroh.github.io/lucene-university/docs/SimpleSearch.html. (Go to the "Jump to..." box in the top-right to load
other examples.) This should feel kind of like reading a book.
## Getting started
This repository currently depends on a snapshot of Lucene 10, which requires JDK 17 or higher.
You can clone the repository and build the examples with:
```
git clone https://github.com/msfroh/lucene-university.git
cd lucene-university
./gradlew build
```
Using IntelliJ, you can use "File -> New -> Project from Existing Sources..." and point it to the location where the
code was cloned. Select "Import Project from Existing Model" and choose "Gradle" (assuming you have the Gradle plugin
installed). If you run into errors regarding class file versions, you may need to go to "File -> Project Structure..."
to make sure that you have selected the correct JDK (17 or higher) and set an appropriate language level.
## Contributing
Contributions are welcome! Check the [GitHub issues](https://github.com/msfroh/lucene-university/issues) for requests
and suggestions for material to cover. If there is something else you think could use a worked example, feel free to
directly open a pull request with an example or create an issue requesting one.
Code examples should satisfy the following:
1. Each source file should be self-contained and should only import Lucene and Java classes. The example class should not inherit from
anything else. If you need a small helper class, make it a `private static` inner class.
2. Each example class should have a `public static void main` method that clearly walks through the steps to demonstrate the given feature.
3. Each example should start with a comment with a large header (`// # This is title text`), and a summary explaining what the example
is about, before the `package` declaration.
## License
All code in this repository is licensed under the Apache License, Version 2.0. See the LICENSE file in the root of the repository for the
full text of the license.
| 0 |
pauldragoslav/Spring-boot-Banking | Example project demonstrating the use of Spring-boot in a banking microservice | null | # Spring-boot Banking
Example project demonstrating the use of Java and Spring-boot to build a microservice to be used by an online bank
## Running locally
```
./mvnw clean install -DskipTests=true
```
```
java -jar target/Banking-0.0.1.jar
```
## Running on Docker
```
docker build -t "spring-boot:banking" .
```
```
docker run -p 8080:8080 spring-boot:banking
```
## Testing
Import the Postman collection file into the application or copy the request body from there
### How to test
1. Create account
> Use the create account API to create an account by providing a `bankName` and an `ownerName`
>

> Make sure to write down the `sortCode` and the `accountNumber` to proceed with other APIs
2. Deposit Cash
>Use the noted `accountNumber` as `targetAccountNo` and provide an amount greater than zero to deposit cash into an account

3. Check Balance
>Use the noted `accountNumber` and `sortCode` to check the account balance

4. Withdraw Cash
>Use the noted `accountNumber`, `sortCode`, and an `amount` greater than zero to withdraw cash from an account

5. Check Balance again to verify withdrawal

### Extensions
1. Use of persisted database
2. Use of asynchronous programming backed by message queue for transactions
3. Others mentioned throughout the code | 1 |
studerw/activiti-example | Activiti Workflow example using Spring MVC | activiti-workflow activity alfresco java spring-mvc workflow | # Activiti Workflow Example
Activiti Workflow example using **Spring MVC 4**, **Activiti Workflow 5**, and an embedded **Tomcat Servlet**.
__________________________________________

## Running
To run you need Maven:
```bash
mvn tomcat7:run
```
Use `Ctrl-c` to stop the app.
Access the site:
[http://127.0.0.1:9090/activiti-example/]([http://127.0.0.1:9090/activiti-example/])
* Log in as any of the users - the password is the same as the username (e.g. kermit/kermit).
* Create a document and submit for approval. Take notice for which group (engineering, sales, management, etc.) the document was created.
* Then log out and log in as another user in the same group. You can
view the list of users by clicking the 'users' button on the bottom right of the form.
* View your tasks and see that there is a new document waiting to be approved. Approve or deny. Log back in as the original author.
* Next modify the workflow for a group by adding additional approval steps, using different users and/or groups if desired. Create new document(s) under the modified group(s).
## Building
To build:
```bash
mvn clean install
```
| 1 |
caseyscarborough/spring-redis-caching-example | An example of caching in Spring using Redis. | null | # Spring Redis Caching Example
This repository contains an example of caching data in Spring using Redis.
## Running the Application
The application can be run by executing the following:
```bash
mvn clean jetty:run
```
Then navigate to [localhost:8080](http://localhost:8080) in your browser. | 1 |
utopia-group/regel | REGEL: Regular Expression Generation from Examples and Language | null | # REGEL: Regular Expression Generation from Examples and Language
This is the code repository for the paper ["Multi-modal Synthesis of Regular Expressions"](https://arxiv.org/abs/1908.03316).
## Prerequisites
Before running the code to parse your own language or reproduce the experimental results, please first set up the [Sempre](https://github.com/percyliang/sempre) tool following the instructions below:
```shell
cd sempre
./pull-dependencies core
./pull-dependencies corenlp
./pull-dependencies freebase
./pull-dependencies tables
```
This repository also requires the following:
- [Z3](https://github.com/Z3Prover/z3). Make sure you have Z3 installed with the Java binding.
- `ant` to compile the Java files.
- `python3` 3.7
- `java` 1.8.0
Alternatively, you can use the included Dockerfile to build a Docker image:
```shell
docker build -t regel:v1 .
```
## Benchmarks
### Existing Benchmarks
This repository includes two benchmarks domain:
- StackOverflow (`$benchmark_domain = "so"`)
- DeepRegex (`$benchmark_domain = "deepregex"`).
The benchmarks (including natural language, examples and ground truth) are under `exp/$benchmark_domain/benchmark`.
We include the set of sketches we used under `exp/$benchmark_domain/sketch`
### Generate your own benchmarks
To generate your own benchmarks, you need to do the following:
#### Prepare a benchmark file
Create a new folder `exp/$your_benchmark_domain`.
Inside `exp/$your_benchmark_domain/benchmark`, create benchmark files that look like the following:
```
// natural language
# natural language description goes here
// examples
# write example in the format "$example_string$",$sign$ where sign can be
# 1) + to indicate it's a positive example
# 2) - to indicate it's a negative example
// gt
# ground truth in the dsl
```
Sample benchmarks can be found under the `so` and `deepregex` datasets.
#### Prepare a test/train set file for parser
The procedure for creating such a file is described under "Train Sketch Parser" -> "Prepare Train Set File". The procedure for creating a test set file is the same as for a train set file, except that you fill the `Sketch` field with `null`.
## Sketch Generation
**Note:** if you only run the `so` or `deepregex` benchmarks, you don't need to generate sketches unless you have trained a new model.
Once you have a set of benchmarks to generate sketches for, run the following script:
```shell
python parse_benchmark.py --benchmark $your_benchmark_domain --model_dir $trained_model --max_sketch $number_of_sketch_generated_per_benchmark
```
The generated sketches will be put under `exp/$your_benchmark_domain/sketch`
##### `$trained_model`
We provide pre-trained models for the two benchmark datasets:
`so`: `pretrained_models/pretrained_so`
`deepregex`:`pretrained_models/pretrained_turk`
## Sketch Completion
To get the instantiations of the sketches that satisfy the given examples, invoke the following command:
```shell
python exp.py --benchmark $your_benchmark_domain --log_name $log_folder_name --sketch $sketch_folder_name --mem_max $max_memeory_allowed --synth_mode $synthesis_mode --processnum $number_of_process_allowed --timeout $timeout_for_each_benchmark
```
##### `$synthesis_mode`
The synthesizer can be run in the following modes:
`1`: enables all Regel functionalities
`2`: enables Regel with pruning using over- and under-approximation only
`4`: enumeration with no pruning techniques
`5`: runs Regel with no sketches
##### `$number_of_process_allowed`
Regel tries to instantiate multiple sketches in parallel. Set `number_of_process_allowed = 1` if you want to disable parallelism. Otherwise, set this parameter to a value greater than `1`. The default value is `5`.
#### Output Processing
The script writes to `$log_folder_name`, where each file in the folder corresponds to the output of a single benchmark. To process the output in batch and generate a CSV file, invoke the following script:
```shell
python process_output.py --log_folder $log_folder_name --log_path $path_to_log_folder --output_name $output_file_name
```
The output file will be inside the log folder as `$output_file_name.csv`
## Interactive Mode
We also provide a way to run Regel interactively (i.e., users can provide additional examples to refine the synthesis results).
### Run Interactive Mode with Benchmark Set
```shell
python interactive.py --run_mode 1 --benchmark $your_benchmark_domain --synth_mode $synthesis_mode --process_num $number_of_process_allowed --mem_max $max_memory_allowed --top $top_k_results_allowed --timeout $timeout_for_each_benchmark --max_iter $max_iter --save_history $save_history
```
##### `$top_k_results_allowed`
In interactive Regel, we only show the user the first k finished sketch results.
##### `$save_history`
Interactive Regel allows you to stop working at any point and continue from where you left off last time. Set this to `True` if you want to enable this functionality.
##### `$max_iter`
The maximum number of interactions allowed for each benchmark.
#### Additional examples
For benchmark domains `so` and `deepregex`, we provide a set of additional examples to further refine the results automatically. These additional examples are stored in `interactive/$your_benchmark_domain/examples_cache`. All the furture addtional examples you entered will also stored inside this directory.
#### The workflow of the interactive script:
For each benchmark:
1. Regel reads the benchmarks and sketches and invokes the synthesizer
2. Rank the outputs and get the `$top_k_results_allowed` results
3. Check if any of the results returned matches the ground truth
4. If one matches the ground truth, the script automatically goes to the next benchmark
5. If none matches the ground truth, the script first looks for an example in the `examples_cache` that matches the synthesized regex but not the ground truth regex (this will be a negative example) or matches the ground truth regex but not the synthesized regex (this will be a positive example) and uses this example to refine the synthesis
6. If no example in the `examples_cache` matches these criteria, the script will ask the user to enter two additional examples and indicate whether they are positive or negative examples
7. Regel will run again using the updated examples
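Step 5 boils down to finding a string on which the synthesized regex and the ground truth disagree. A standalone sketch of that classification (illustrative only; this is not REGEL's actual code) using `java.util.regex`:

```java
import java.util.regex.Pattern;

// Illustrative sketch of step 5: decide whether a cached string can serve as a
// distinguishing (positive or negative) example for refinement.
public class DistinguishingExample {

    // Returns "+" if the string should be added as a positive example,
    // "-" for a negative example, or null if it does not distinguish the regexes.
    static String classify(String s, Pattern synthesized, Pattern groundTruth) {
        boolean bySynth = synthesized.matcher(s).matches();
        boolean byTruth = groundTruth.matcher(s).matches();
        if (bySynth && !byTruth) return "-";  // synthesized over-approximates
        if (!bySynth && byTruth) return "+";  // synthesized under-approximates
        return null;                          // both agree: not useful
    }

    public static void main(String[] args) {
        Pattern synthesized = Pattern.compile("[a-z]+");   // candidate result
        Pattern groundTruth = Pattern.compile("[a-z]{3}"); // intended regex
        System.out.println(classify("abcd", synthesized, groundTruth)); // prints -
        System.out.println(classify("abc", synthesized, groundTruth));  // prints null
    }
}
```

A `-` result means the synthesized regex over-approximates the intent, a `+` result means it under-approximates, and `null` means the cached string cannot distinguish the two.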
### Run Interactive Mode with Arbituary Natural Language and Examples ("Customize" Mode)
```shell
python interactive.py --run_mode 0 --synth_mode $synthesis_mode --process_num $number_of_process_allowed --mem_max $max_memory_allowed --top $top_k_results_allowed --timeout $timeout_for_each_benchmark --max_iter $max_iter --skecth_num $number_of_sketch_per_benchmark --save_history $save_history
```
#### The workflow of the interactive script (customize mode):
1. Enter a file name for your benchmark
2. Enter the natural language
3. Enter the examples
4. Indicate whether each example entered is a positive or negative example: use `+` to indicate a positive example, and `-` to indicate a negative example
The benchmark file created will be saved at `exp/customize/benchmark/$benchmark_file_name`
5. Regel will generate `$number_of_sketch_per_benchmark` sketches.
The sketch file created will be saved at `exp/customize/sketch/$benchmark_file_name`
6. Regel will run the synthesizer and return either `$top_k_results_allowed` results or indicate that it timed out.
7. Regel will ask you whether any correct regex was returned. Enter `y` if there is one, then enter the index of the correct regex.
8. If you enter `n` for the last question, Regel will ask you to enter two additional examples to disambiguate the regexes
9. Enter the examples in the same way as you entered the initial examples. After getting the additional examples, Regel will rerun the synthesizer and return the updated synthesis result.
### Output
The output of execution is saved at `interactive/$your_benchmark_domain/logs/$synthesis_mode/raw_output.csv`.
If you use `customize` mode, `$your_benchmark_domain = 'customize'`
## Train Sketch Parser
We have provided a pretrained model, ready to use, in `sempre/pretrained`. We also provide an example procedure for training a model on the **StackOverflow** dataset.
**Prepare Train Set File**
We show an example train set file in `data/so.raw.txt`. It is a TSV file with three fields (`ID`, `NL`, and `Sketch`). Please prepare your train set file in this format, and put it in the `sempre/dataset/` directory with the name `*dataset-name*.raw.txt`.
**Train Parse Script**
To train a model, call
`python py_scripts/train.py *dataset-name* *model-dir*`
E.g. `python py_scripts/train.py so models/so`
This command will preprocess the training data to fit the form required by `Sempre` (the processed data will be stored in `regex/data/so`) and train a model that will be saved in the `*model-dir*` directory.
**Parse Script**
To parse a set of language descriptions using trained model, call
`python pyscripts/parse.py *dataset-name* *model-dir* *topk*`, where `*topk*` is the desired number of sketches.
The parsed sketches will be in `outputs/*dataset-name*`, where each file contains the sketches for a single benchmark.
| 0 |
jakubnabrdalik/architecture-guild | An example of an Architecture Guild repository | null | null | 1 |
pavelfomin/spring-boot-rest-example | Spring boot example with REST and spring data JPA | null | # Spring boot example with REST and spring data JPA
See [micronaut-rest-example](https://github.com/pavelfomin/micronaut-rest-example) for `Micronaut` implementation.
### Running tests
* Maven: `./mvnw clean test`
* Gradle: `./gradlew clean test`
### Endpoints
| Method | URL | Description |
| ------ | --- | ---------- |
| GET |/actuator/info | info / heartbeat - provided by boot |
| GET |/actuator/health| application health - provided by boot |
| GET |/v2/api-docs | swagger json |
| GET |/swagger-ui.html| swagger html |
| GET |/v1/person/{id}| get person by id |
| GET |/v1/persons | get N persons with an offset|
| PUT |/v1/person | add / update person|
### Change maven version
`mvn -N io.takari:maven:wrapper -Dmaven=3.8.4` | 1 |
mraible/jhipster5-demo | Get Started with JHipster 5 Tutorial and Example | angular java jhipster jwt-authentication spring-boot typescript webpack | # blog
This application was generated using JHipster 5.0.1, you can find documentation and help at [https://www.jhipster.tech/documentation-archive/v5.0.1](https://www.jhipster.tech/documentation-archive/v5.0.1).
## Development
Before you can build this project, you must install and configure the following dependencies on your machine:
1. [Node.js][]: We use Node to run a development web server and build the project.
Depending on your system, you can install Node either from source or as a pre-packaged bundle.
2. [Yarn][]: We use Yarn to manage Node dependencies.
Depending on your system, you can install Yarn either from source or as a pre-packaged bundle.
After installing Node, you should be able to run the following command to install development tools.
You will only need to run this command when dependencies change in [package.json](package.json).
yarn install
We use yarn scripts and [Webpack][] as our build system.
Run the following commands in two separate terminals to create a blissful development experience where your browser
auto-refreshes when files change on your hard drive.
./mvnw
yarn start
[Yarn][] is also used to manage CSS and JavaScript dependencies used in this application. You can upgrade dependencies by
specifying a newer version in [package.json](package.json). You can also run `yarn update` and `yarn install` to manage dependencies.
Add the `help` flag on any command to see how you can use it. For example, `yarn help update`.
The `yarn run` command will list all of the scripts available to run for this project.
### Service workers
Service workers are commented out by default; to enable them, uncomment the following code.
* The service worker registration script in `index.html`
```html
<script>
if ('serviceWorker' in navigator) {
navigator.serviceWorker
.register('./service-worker.js')
.then(function() { console.log('Service Worker Registered'); });
}
</script>
```
Note: Workbox creates the respective service worker and dynamically generates the `service-worker.js` file.
### Managing dependencies
For example, to add the [Leaflet][] library as a runtime dependency of your application, you would run the following command:
yarn add --exact leaflet
To benefit from TypeScript type definitions from the [DefinitelyTyped][] repository in development, you would run the following command:
yarn add --dev --exact @types/leaflet
Then you would import the JS and CSS files specified in the library's installation instructions so that [Webpack][] knows about them:
Edit [src/main/webapp/app/vendor.ts](src/main/webapp/app/vendor.ts) file:
~~~
import 'leaflet/dist/leaflet.js';
~~~
Edit [src/main/webapp/content/css/vendor.css](src/main/webapp/content/css/vendor.css) file:
~~~
@import '~leaflet/dist/leaflet.css';
~~~
Note: there are still a few other things remaining to do for Leaflet that we won't detail here.
For further instructions on how to develop with JHipster, have a look at [Using JHipster in development][].
### Using angular-cli
You can also use [Angular CLI][] to generate some custom client code.
For example, the following command:
ng generate component my-component
will generate a few files:
create src/main/webapp/app/my-component/my-component.component.html
create src/main/webapp/app/my-component/my-component.component.ts
update src/main/webapp/app/app.module.ts
## Building for production
To optimize the blog application for production, run:
./mvnw -Pprod clean package
This will concatenate and minify the client CSS and JavaScript files. It will also modify `index.html` so it references these new files.
To ensure everything worked, run:
java -jar target/*.war
Then navigate to [http://localhost:8080](http://localhost:8080) in your browser.
Refer to [Using JHipster in production][] for more details.
## Testing
To launch your application's tests, run:
./mvnw clean test
### Client tests
Unit tests are run by [Jest][] and written with [Jasmine][]. They're located in [src/test/javascript/](src/test/javascript/) and can be run with:
yarn test
UI end-to-end tests are powered by [Protractor][], which is built on top of WebDriverJS. They're located in [src/test/javascript/e2e](src/test/javascript/e2e)
and can be run by starting Spring Boot in one terminal (`./mvnw spring-boot:run`) and running the tests (`yarn run e2e`) in a second one.
### Other tests
Performance tests are run by [Gatling][] and written in Scala. They're located in [src/test/gatling](src/test/gatling).
To use those tests, you must install Gatling from [https://gatling.io/](https://gatling.io/).
For more information, refer to the [Running tests page][].
## Using Docker to simplify development (optional)
You can use Docker to improve your JHipster development experience. A number of docker-compose configurations are available in the [src/main/docker](src/main/docker) folder to launch required third-party services.
For example, to start a PostgreSQL database in a Docker container, run:
docker-compose -f src/main/docker/postgresql.yml up -d
To stop it and remove the container, run:
docker-compose -f src/main/docker/postgresql.yml down
You can also fully dockerize your application and all the services that it depends on.
To achieve this, first build a docker image of your app by running:
./mvnw verify -Pprod dockerfile:build dockerfile:tag@version dockerfile:tag@commit
Then run:
docker-compose -f src/main/docker/app.yml up -d
For more information, refer to [Using Docker and Docker-Compose][]; this page also contains information on the docker-compose sub-generator (`jhipster docker-compose`), which is able to generate docker configurations for one or several JHipster applications.
## Continuous Integration (optional)
To configure CI for your project, run the ci-cd sub-generator (`jhipster ci-cd`); this will let you generate configuration files for a number of Continuous Integration systems. Consult the [Setting up Continuous Integration][] page for more information.
[JHipster Homepage and latest documentation]: https://www.jhipster.tech
[JHipster 5.0.1 archive]: https://www.jhipster.tech/documentation-archive/v5.0.1
[Using JHipster in development]: https://www.jhipster.tech/documentation-archive/v5.0.1/development/
[Using Docker and Docker-Compose]: https://www.jhipster.tech/documentation-archive/v5.0.1/docker-compose
[Using JHipster in production]: https://www.jhipster.tech/documentation-archive/v5.0.1/production/
[Running tests page]: https://www.jhipster.tech/documentation-archive/v5.0.1/running-tests/
[Setting up Continuous Integration]: https://www.jhipster.tech/documentation-archive/v5.0.1/setting-up-ci/
[Gatling]: http://gatling.io/
[Node.js]: https://nodejs.org/
[Yarn]: https://yarnpkg.org/
[Webpack]: https://webpack.github.io/
[Angular CLI]: https://cli.angular.io/
[BrowserSync]: http://www.browsersync.io/
[Jest]: https://facebook.github.io/jest/
[Jasmine]: http://jasmine.github.io/2.0/introduction.html
[Protractor]: https://angular.github.io/protractor/
[Leaflet]: http://leafletjs.com/
[DefinitelyTyped]: http://definitelytyped.org/
| 1 |
RayRoestenburg/AkkaExamples | Akka examples | null | null | 0 |
tehmou/RxJava-code-examples | Simple code examples in forms of JUnit tests to illustrate functional reactive programming with RxJava | null | # Running the project
This project contains only JUnit tests. First install Gradle (mine was
1.8) and then run the tests from the project root with
`gradle test`
| 0 |
cdietrich/xtext-languageserver-example | An Example for an Xtext Language Server | null | # Xtext Visual Studio Code Example
This is an [example](https://github.com/xtext/xtext-languageserver-example/blob/master/vscode-extension-self-contained/README.md) showing the Visual Studio Code integration of Xtext using the Microsoft Language Server Protocol.

## Quickstart
Requires Visual Studio Code (VS Code) with version 1.4.0 or greater to be on the path as `code` and Java 8+ available as `java`.
- Run `./gradlew startCode`
This will start VS Code and after a few seconds load the `demo` folder of this repository.
## Project Structure
- `vscode-extension` (node based VS Code extension to run with a separate server using socket)
- `vscode-extension-self-contained` (node based VS Code extension to run with an embedded server using process io)
- `org.xtext.example.mydsl` (contains the dsl)
- `org.xtext.example.mydsl.ide` (contains the dsl specific customizations of the Xtext language server)
- `org.xtext.example.mydsl.tests`
## Building in Details
1. Make sure that `java -version` is executable and pointing to a Java 8+ JDK.
2. Type `code`. If the command is not known, open VS Code and select *View / Command Palette*. Enter `code` and choose the option to install `code` on the path.
1. Run `./gradlew startCode` to build the DSL and the VS Code extensions.
### Scenario 1 (embedded server)
1. Install the self-contained extension into VS Code using
`code --install-extension vscode-extension-self-contained/build/vscode/vscode-extension-self-contained-0.0.1.vsix`
2. Run a second instance of vscode on the demo folder `code demo`
### Scenario 2 (client-only with separate server process)
1. Run `./gradlew run` or launch RunServer from Eclipse.
2. Open `vscode-extension` in VS Code and press `F5` to launch a new editor (you may need to do Debug -> Start Debugging initially).
1. Open folder `demo` in the new editor.
### Build VS Code Extension Packages manually (without Gradle)
```
npm install -g vsce
cd vscode-extension
vsce package
cd ../vscode-extension-self-contained
vsce package
```
### Hints
For other Xtext/VS Code versions, please also check the other branches, which target newer/older Xtext versions and correspondingly support newer/older VS Code versions.
Atom language client is dead. We plan to update to a fork. See https://github.com/itemis/xtext-languageserver-example/issues/73 | 1 |
steveonjava/JavaFX-Spring | Example application demonstrating integration of JavaFX and Spring technologies on the client and server | null | JavaFX-Spring
=============
Example application demonstrating integration of JavaFX and Spring technologies on the client and server. For more details about the technologies used, and a detailed look at the code, please refer to the following 3-part blog series:
* [JavaFX in Spring Day 1 - Application Initialization](http://steveonjava.com/javafx-and-spring-day-1)
* [JavaFX in Spring Day 2 - Configuration and FXML](http://steveonjava.com/javafx-in-spring-day-2)
* [JavaFX in Spring Day 3 - Authentication and Authorization](http://steveonjava.com/javafx-in-spring-day-3)
To run this example, you will need to build and run the server and client projects individually using maven. You can either do this via an IDE or from the command line.
To start with, please make sure you have the following prerequisites:
* Maven (3.x or higher)
* JDK 7 (update 4 or higher)
The command line steps to get this up and running are:
cd server
mvn jetty:run
cd ..
cd client
mvn compile exec:java
If it doesn't work, make sure that Maven is running the right version of Java by calling `mvn -version`. If you are still having trouble, check out the blogs mentioned above, and post a comment if your issue is not resolved. | 1
perslab/depict | DEPICT code, instructions and an example | null | # Dependencies
* Mac OS X, or UNIX operating system (Microsoft Windows is not supported)
* Java SE 6 (or higher)
* [Java.com](https://www.java.com/en/download/)
* Python version 2.7 (Python version 3 or higher is not supported)
* [Python.org](https://www.python.org/downloads/)
* PIP (used to install Python libraries)
* `sudo easy_install pip`
* Python intervaltree library
* `sudo pip install intervaltree`
* Pandas (version 0.15.2 or higher)
* `sudo pip install pandas`
* PLINK version 1.9 (August 1 release or newer)
* [PLINK version 1.9](https://www.cog-genomics.org/plink2/)
# DEPICT
The following description explains how to download DEPICT, test run it on example files and how to run it on your GWAS summary statistics.
## Download DEPICT
Download the compressed [DEPICT version 1 rel194](https://drive.google.com/file/d/0B3TrbUOwncN-NG9nTFdDbC1rNXc/view?usp=sharing&resourcekey=0-Jts51YpkcZcGpvsOP29Kfw) files and unzip the archive to where you would like the DEPICT tool to live on your system. Note that when using DEPICT you can write your analysis files to a different folder. Be sure that you meet all the dependencies described above. If you run DEPICT at the Broad Institute, see the [section below](#depict_at_broad).
## Test run DEPICT
The following steps outline how to test run DEPICT on LDL cholesterol GWAS summary statistics from [Teslovich, Nature 2010](http://www.nature.com/nature/journal/v466/n7307/full/nature09270.html). This example is available in both the 1000 Genomes Project pilot phase DEPICT version and the 1000 Genomes Project phase 3 DEPICT version.
1. Edit `DEPICT/example/ldl_teslovich_nature2010.cfg`
* Point `plink_executable` to where the PLINK executable (version 1.9 or higher) is on your system (e.g. `/usr/bin/plink`)
2. Run DEPICT on the LDL summary statistics
* E.g. `./src/python/depict.py example/ldl_teslovich_nature2010.cfg`
3. Investigate the results (see the [Wiki](https://github.com/perslab/DEPICT/wiki) for a description of the output format).
* DEPICT loci `ldl_teslovich_nature2010_loci.txt`
* Gene prioritization results `ldl_teslovich_nature2010_geneprioritization.txt`
* Gene set enrichment results `ldl_teslovich_nature2010_genesetenrichment.txt`
* Tissue enrichment results `ldl_teslovich_nature2010_tissueenrichment.txt`
## <a name="depict_your_gwas"></a>Run DEPICT based on your GWAS
The following steps allow you to run DEPICT on your GWAS summary statistics. We advise you to run the above LDL cholesterol example before this point to make sure that you meet all the necessary dependencies to run DEPICT.
1. Make sure that you use hg19 genomic SNP positions
2. Make an 'analysis folder' in which your trait-specific DEPICT analysis will be stored
3. Copy the template config file from `src/python/template.cfg` to your analysis folder and give the config file a more meaningful name
4. Edit your config file
* Point `analysis_path` to your analysis folder. This is the directory to which output files will be written
* Point `gwas_summary_statistics_file` to your GWAS summary statistics file. This file can be either in plain text or gzip format (i.e. having the .gz extension)
* Specify the GWAS association p value cutoff (`association_pvalue_cutoff`). We recommend using `5e-8` or `1e-5`
* Specify the label, which DEPICT uses to name all output files (`label_for_output_files`)
* Specify the name of the association p value column in your GWAS summary statistics file (`pvalue_col_name`)
* Specify the name of the marker column (`marker_col_name`). Format: `<chr:pos>`, i.e. '6:2321'. If this column does not exist, `chr_col_name` and `pos_col_name` will be used; in that case, leave this field empty
* Specify the name of the chromosome column (`chr_col_name`). Leave empty if the above `marker_col_name` is set
* Specify the name of the position column (`pos_col_name`). Leave empty if the above `marker_col_name` is set. Please make sure that your SNP positions use human genome build GRCh37 (hg19)
* Specify the separator used in the GWAS summary statistics file (`separator`). Options are
* `tab`
* `comma`
* `semicolon`
* `space`
* Point `plink_executable` to where PLINK 1.9 executable (August 1 release or newer) is on your system (e.g. `/usr/bin/plink`)
* If you are using genotype data other than the data shipped with DEPICT, then point `genotype_data_plink_prefix` to where your PLINK binary format 1000 Genomes Project genotype files are on your system. Specify the entire path of the filenames except the extension
5. Run DEPICT
* `<path to DEPICT>/src/python/depict.py <path to your config file>`
6. Investigate the results which have been written to your analysis folder. See the [Wiki](https://github.com/perslab/DEPICT/wiki) for details on the output format
* Associated loci in file ending with `_loci.txt`
* Gene prioritization results in file ending with `_geneprioritization.txt`
* Gene set enrichment results in file ending with `_genesetenrichment.txt`
* Tissue enrichment results in file ending with `_tissueenrichment.txt`
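Putting the fields from step 4 together, a filled-in config file might look like the sketch below. All values are hypothetical placeholders, and the exact key/value syntax should be taken from the shipped template `src/python/template.cfg`:

```ini
# Hypothetical DEPICT config sketch -- keys are the fields described above,
# values are placeholders; adapt the paths to your own system.
analysis_path: /home/user/mygwas_analysis
gwas_summary_statistics_file: /home/user/mygwas_analysis/mygwas_summary.txt.gz
association_pvalue_cutoff: 5e-8
label_for_output_files: mygwas
pvalue_col_name: P
marker_col_name: MarkerName
chr_col_name:
pos_col_name:
separator: tab
plink_executable: /usr/bin/plink
```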
## <a name="depict_at_broad"></a>DEPICT at the Broad Institute
### Run the LDL example
1. Copy the example config file `/cvar/jhlab/tp/depict/example/ldl_teslovich_nature2010.cfg` to your working directory and change `analysis_path` to that directory
2. Run DEPICT using
`qsub -e err -o out -cwd -l h_vmem=12g /cvar/jhlab/tp/depict/src/python/broad_run.sh python /cvar/jhlab/tp/depict/src/python/depict.py <your modified config file>.cfg`
### Run DEPICT on own GWAS
1. Follow the above [steps 1-4](#depict_your_gwas)
2. Run DEPICT using
``` bash
use UGER
qsub -e err -o out -cwd -l m_mem_free=2.5g -pe smp 6 /cvar/jhlab/tp/depict/src/python/broad_run.sh python /cvar/jhlab/tp/DEPICT/src/python/depict.py <your modified config file>.cfg
```
Be aware that DEPICT needs at least 14 GB of memory in total if you modify the memory used per slot/thread.
# Troubleshooting
Please send the log file (ending with `_log.txt`) with a brief description of the problem to Tune H Pers (tunepers@broadinstitute.org).
The overall version of DEPICT follows the DEPICT publications. The current version is `v1` from [Pers, Nature Communications, 2015](http://www.nature.com/ncomms/2015/150119/ncomms6890/full/ncomms6890.html), and the release follows the number of commits of the DEPICT git repository (`git log --pretty=format:'' | wc -l`). The latest 1000 Genomes Project pilot phase DEPICT version is `rel138`; the latest 1000 Genomes Project phase 3 version is `rel137`.
# How to cite
[Pers, Nature Communications 2015](http://www.ncbi.nlm.nih.gov/pubmed/25597830)
[1000 Genomes Project](http://www.ncbi.nlm.nih.gov/pubmed/20981092), because DEPICT makes extensive use of their data.
# Data used in these examples
LDL GWAS [summary statistics](http://csg.sph.umich.edu/abecasis/public/lipids2010/) from [Teslovich, Nature 2010](http://www.nature.com/nature/journal/v466/n7307/full/nature09270.html) are used as input in this example. We included all SNPs with P < 5e-8 and manually added chromosome and position columns (hg19/GRCh37).
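The filtering step described above (keeping only SNPs with P < 5e-8) can be sketched as follows. The column names (`SNP`, `CHR`, `POS`, `P`) and the inline file are hypothetical; real summary-statistics files use whatever column names you configure in the DEPICT config:

```python
import csv
import io

# Hypothetical tab-separated summary statistics (column names are assumptions)
raw = """SNP\tCHR\tPOS\tP
rs1\t1\t1000\t3e-9
rs2\t2\t2000\t0.04
rs3\t6\t2321\t1e-10
"""

GENOME_WIDE = 5e-8  # conventional genome-wide significance threshold


def filter_significant(text, threshold=GENOME_WIDE):
    """Keep only rows whose association p-value is below the threshold."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [row for row in reader if float(row["P"]) < threshold]


hits = filter_significant(raw)
print([row["SNP"] for row in hits])  # → ['rs1', 'rs3']
```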
1000 Genomes Consortium pilot release and phase 3 release data are used in DEPICT. Please remember to cite [their paper](http://www.nature.com/nature/journal/v467/n7319/full/nature09534.html) in case you use our tool.
| 1 |
yidongnan/spring-boot-grpc-example | spring-boot-grpc-example | grpc spring-boot spring-boot-grpc | # Spring Boot Grpc Example
| 0 |
MaLeLabTs/RegexGenerator | This project contains the source code of a tool for generating regular expressions for text extraction: 1. automatically, 2. based only on examples of the desired behavior, 3. without any external hint about how the target regex should look like | null | # RegexGenerator
This project contains the source code of a tool for generating regular expressions for text extraction and classification (flagging):
1. automatically,
2. based only on examples of the desired behavior,
3. without any external hint about how the target regex should look like.
An online, interactive version of this engine is accessible at: [http://regex.inginf.units.it/](http://regex.inginf.units.it/)
RegexGenerator was developed at the [Machine Learning Lab, University of Trieste, Italy](http://machinelearning.inginf.units.it).
The provided engine is a development release (1) that implements the algorithms published in our articles (2):
* Bartoli, De Lorenzo, Medvet, Tarlao, Inference of Regular Expressions for Text Extraction from Examples, IEEE Transactions on Knowledge and Data Engineering, 2016
* Bartoli, De Lorenzo, Medvet, Tarlao, Can a machine replace humans in building regular expressions? A case study, IEEE Intelligent Systems, 2016
* Bartoli, De Lorenzo, Medvet, Tarlao, Virgolin, Evolutionary Learning of Syntax Patterns for Genic Interaction Extraction, ACM Genetic and Evolutionary Computation Conference (GECCO), 2015, Madrid (Spain)
More details about the project can be found on [Machine Learning Lab news pages](http://machinelearning.inginf.units.it/news/newregexgeneratortoolonline).
We hope that you find this code instructive and useful for your research or study activity.
If you use our code in your research, please cite our work and share back your enhancements, fixes, and modifications.
## Project Structure
The RegexGenerator project is organized in three NetBeans Java subprojects:
* ConsoleRegexTurtle: cli frontend for the GP engine
* MaleRegexTurtle: provides the regular expression tree representation
* Random Regex Turtle: GP search engine
## Other Links
[Twitter account](https://twitter.com/MaleLabTs) of Machine Learning Lab
RegexGenerator [wiki](https://github.com/MaLeLabTs/RegexGenerator/wiki) with installation walkthrough and guide
---
(1) This is a development version branch which *slightly* differs from the cited works.
(2) BibTeX format:
@article{bartoli2016inference,
author={A. Bartoli and A. De Lorenzo and E. Medvet and F. Tarlao},
journal={IEEE Transactions on Knowledge and Data Engineering},
title={Inference of Regular Expressions for Text Extraction from Examples},
year={2016},
volume={28},
number={5},
pages={1217-1230},
doi={10.1109/TKDE.2016.2515587},
ISSN={1041-4347},
month={May},
}
@inproceedings{bartoli2015evolutionary,
title={Evolutionary Learning of Syntax Patterns for Genic Interaction Extraction},
author={Bartoli, Alberto and De Lorenzo, Andrea and Medvet, Eric and
Tarlao, Fabiano and Virgolin, Marco},
booktitle={Proceedings of the 2015 on Genetic and Evolutionary Computation Conference},
pages={1183--1190},
year={2015},
organization={ACM}
}
@article{bartoli2016can,
title={Can a machine replace humans in building regular expressions? A case study},
author={Bartoli, Alberto and De Lorenzo, Andrea and Medvet, Eric and Tarlao, Fabiano},
journal={IEEE Intelligent Systems},
volume={31},
number={6},
pages={15--21},
year={2016},
publisher={IEEE}
}
| 0 |
mbode/flink-prometheus-example | Example setup to demonstrate Prometheus integration of Apache Flink | flink prometheus | [](https://github.com/mbode/flink-prometheus-example/actions)
[](https://codecov.io/gh/mbode/flink-prometheus-example)
[](https://github.com/apache/flink/releases/tag/release-1.19.0)
[](https://github.com/prometheus/prometheus/releases/tag/v2.37.1)
This repository contains the live demo to my talk _Monitoring Flink with Prometheus_, which I have given at:
* [Flink Forward Berlin 2018](https://berlin-2018.flink-forward.org/conference-program/#monitoring-flink-with-prometheus), _2018-09-04_ (:video_camera: [Video](https://www.youtube.com/watch?v=vesj-ghLimA) :page_facing_up: [Slides](https://www.slideshare.net/MaximilianBode1/monitoring-flink-with-prometheus))
* [Spark & Hadoop User Group Munich](https://www.meetup.com/de-DE/Hadoop-User-Group-Munich/events/252393503/), _2018-09-26_
The blog post [Flink and Prometheus: Cloud-native monitoring of streaming applications](https://flink.apache.org/features/2019/03/11/prometheus-monitoring.html) explains how to run the demo yourself.
## Getting Started
### Startup
```
./gradlew composeUp
```
### Web UIs
- [Flink JobManager](http://localhost:8081/#/overview)
- [Prometheus](http://localhost:9090/graph)
- [Grafana](http://localhost:3000) (credentials _admin:flink_)
- Prometheus endpoints
- [Job Manager](http://localhost:9249/metrics)
- [Task Manager 1](http://localhost:9250/metrics)
- [Task Manager 2](http://localhost:9251/metrics)
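Given the three Prometheus endpoints listed above, a minimal scrape configuration could look like the following sketch (the job name and interval are assumptions; the repository ships its own provisioned config, which is the source of truth):

```yaml
# prometheus.yml (sketch) -- scrape the JobManager and both TaskManagers
scrape_configs:
  - job_name: 'flink'
    scrape_interval: 15s
    static_configs:
      - targets:
          - 'localhost:9249'  # Job Manager
          - 'localhost:9250'  # Task Manager 1
          - 'localhost:9251'  # Task Manager 2
```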
## Built With
- [Apache Flink](https://flink.apache.org)
- [Prometheus](https://prometheus.io)
- [Grafana](https://grafana.com)
- [docker-compose](https://docs.docker.com/compose/) – provisioning of the test environment
- [Gradle](https://gradle.org) with [kotlin-dsl](https://github.com/gradle/kotlin-dsl)
- [gradle-testsets-plugin](https://github.com/unbroken-dome/gradle-testsets-plugin)
- [shadow](https://github.com/johnrengelman/shadow)
- [spotless](https://github.com/diffplug/spotless/tree/master/plugin-gradle)
- [spotbugs](https://github.com/spotbugs/spotbugs-gradle-plugin)
- [gradle-docker-compose-plugin](https://github.com/avast/gradle-docker-compose-plugin)
- [gradle-versions-plugin](https://github.com/ben-manes/gradle-versions-plugin)
## Development
typical tasks:
- verify: `./gradlew check`
- integration tests: `./gradlew integrationTest`
- list outdated dependencies: `./gradlew dependencyUpdates`
- update gradle: `./gradlew wrapper --gradle-version=<x.y>` (twice)
| 1 |
eliast/dropwizard-guice-example | A complete dropwizard example for using Guice | null | dropwizard-guice-example
========================
A complete dropwizard example for using Guice.
```
mvn clean package
java -jar target/hello-guice-*-SNAPSHOT.jar server hello-world.yml
```
| 1 |
Leandros/ActionBar-with-Tabs | Example for ActionBar with Tabs | null | Example Android App for ActionBar with Tabs layout
==================================================
Visit: http://arvid-g.de
The original article to this example: http://arvid-g.de/12/android-4-actionbar-with-tabs-example
| 1 |
GoogTech/design-patterns-in-java | :coffee: 📖 使用通俗易懂的案例,类图,及配套学习笔记来详解 Java 的二十三种设计模式 ! | classdiagram examples java-design-patterns jdk11 learning-notes maven | ### 配套博客学习笔记 : https://goog.tech/blog/tags/design-and-pattern
> 参考书籍( 推荐 ) : `《Java设计模式 - 刘伟》`,`《图解设计模式 - [日]结城浩》`
### 创建型模式
:heavy_check_mark: `简单工厂模式( Simple Factor Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/06/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BSimple-Factory-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/simple_factory_pattern)
:heavy_check_mark: `工厂方法模式( Factory Method Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/05/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BFactory-Method-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/factory_method_pattern)
:heavy_check_mark: `抽象工厂模式( Abstract Factroy Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/07/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BAbstract-Factory-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/abstract_factory_pattern)
:heavy_check_mark: `建造者模式( Builder Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/17/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BBuilder-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/builder_pattern)
:heavy_check_mark: `单例模式( Singleton Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/06/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BSingleton-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/singleton_pattern)
:heavy_multiplication_x: `原型模式( Prototype Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
### 结构型模式
:heavy_check_mark: `适配器模式( Adapter Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/03/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BAdapter-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/adapter_pattern)
:heavy_check_mark: `代理模式( Proxy Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/25/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BProxy-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/froxy_pattern)
:heavy_check_mark: `组合模式( Composite Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/11/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BComposite-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/composite_pattern)
:heavy_check_mark: `装饰模式( Decorator Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/08/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BDecorator-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/decorator_pattern)
:heavy_check_mark: `外观模式( Facade Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/12/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BFacade-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/facade_pattern)
:heavy_multiplication_x: `桥接模式( Bridge Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `享元模式( Flyweight Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
### 行为型模式
:heavy_check_mark: `命令模式( Command Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/20/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BCommand-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/command_pattern)
:heavy_check_mark: `迭代器模式( Iterator Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/02/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BIterator-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/iterator_pattern)
:heavy_check_mark: `模板方法模式( Template Method Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/04/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BTemplate-Method-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/template_method_pattern)
:heavy_check_mark: `观察者模式( Observer Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/09/28/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BObserver-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/observer_pattern)
:heavy_multiplication_x: `中介者模式( Mediator Pattern )`
> :memo: [学习笔记](https://goog.tech/blog/2019/10/10/Java%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F%E4%B9%8BMediator-Pattern/) ,[示例程序](https://github.com/GoogTech/design-patterns-in-java/tree/master/design-patterns/src/main/java/pers/huangyuhui/mediator_pattern)
:heavy_multiplication_x: `职责链模式( Chain of Responsibility Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `解释器模式( Interpreter Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `备忘录模式( Memento Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `状态模式( State Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `策略模式( Strategy Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
:heavy_multiplication_x: `访问者模式( Visitor Pattern )`
> :memo: [学习笔记updating](demo) ,[示例程序updating](demo)
| 0 |
asbnotebook/spring-boot | List of example codes related to spring boot. More details are available at : https://asbnotebook.com | null | This is a list of Spring Boot example code which is posted on https://asbnotebook.com.
| 1 |
ricardojlrufino/eclipse-cdt-standalone-astparser | Example using the Eclipse CDT Parser API | null | eclipse-cdt-standalone-astparser
========
Example of Using the Eclipse CDT Parser API
Another usage sample is: https://github.com/ricardojlrufino/cplus-libparser
Dependencies
====
*already included (but you can grab the new versions)*
* org.eclipse.cdt.core_5.6.0.201402142303.jar
* org.eclipse.equinox.common_3.6.200.v20130402-1505.jar
Preview
====
 | 1 |
plaa/mongo-spark | Example application on how to use mongo-hadoop connector with Spark | null | mongo-spark
===========
Example application on how to use [mongo-hadoop][1] connector with [Apache Spark][2].
Read more details at http://codeforhire.com/2014/02/18/using-spark-with-mongodb/
[1]: https://github.com/mongodb/mongo-hadoop
[2]: https://spark.incubator.apache.org/
Prerequisites
-------------
* MongoDB installed and running on localhost
* Scala 2.10 and SBT installed
Running
-------
Import data into the database, run either `JavaWordCount` or `ScalaWordCount` and print the results.
mongoimport -d beowulf -c input beowulf.json
sbt 'run-main JavaWordCount'
sbt 'run-main ScalaWordCount'
mongo beowulf --eval 'printjson(db.output.find().toArray())' | less
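For orientation, the Scala job follows the usual mongo-hadoop + Spark pattern sketched below. The class names and the `mongo.input.uri`/`mongo.output.uri` settings come from the mongo-hadoop connector, but the document field name (`text`) and other details are assumptions — see the actual `ScalaWordCount` source in this repository for the real code.

```scala
import com.mongodb.hadoop.{MongoInputFormat, MongoOutputFormat}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.bson.{BSONObject, BasicBSONObject}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCountSketch"))

    // Read from beowulf.input and write to beowulf.output, as in the README
    val config = new Configuration()
    config.set("mongo.input.uri", "mongodb://localhost:27017/beowulf.input")
    config.set("mongo.output.uri", "mongodb://localhost:27017/beowulf.output")

    // Each MongoDB document arrives as an (id, BSONObject) pair
    val docs = sc.newAPIHadoopRDD(config, classOf[MongoInputFormat],
      classOf[Object], classOf[BSONObject])

    docs
      .flatMap { case (_, doc) =>
        // The "text" field is an assumption about the imported documents
        doc.get("text").toString.toLowerCase.split("""\s+""")
      }
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .map { case (word, count) =>
        val result = new BasicBSONObject()
        result.put("word", word)
        result.put("count", count)
        (null, result)
      }
      // The path argument is unused when MongoOutputFormat writes to mongo.output.uri
      .saveAsNewAPIHadoopFile("file:///unused", classOf[Any], classOf[Any],
        classOf[MongoOutputFormat[Any, Any]], config)
  }
}
```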
License
-------
The code itself is released to the public domain according to the [Creative Commons CC0][3].
The example files are based on [Beowulf][4] from Project Gutenberg and are under its corresponding license.
[3]: http://creativecommons.org/publicdomain/zero/1.0/
[4]: http://www.gutenberg.org/ebooks/981
| 1 |
allure-examples/junit4-java-maven | Example of Allure Report usage with JUnit 4, Java and Maven | allure allure-report example java junit junit4 jupiter maven | # Allure Example
> Example of Allure Report usage with JUnit 4, Java and Maven
<!--<img src="https://allurereport.org/public/img/allure-report.svg" alt="Allure Report logo" style="float: right" />-->
- Learn more about Allure Report at https://allurereport.org
- 📚 [Documentation](https://allurereport.org/docs/) – discover official documentation for Allure Report
- ❓ [Questions and Support](https://github.com/orgs/allure-framework/discussions/categories/questions-support) – get help from the team and community
- 📢 [Official announcements](https://github.com/orgs/allure-framework/discussions/categories/announcements) – stay in touch with the latest updates
- 💬 [General Discussion ](https://github.com/orgs/allure-framework/discussions/categories/general-discussion) – engage in casual conversations, share insights and ideas with the community
---
The generated report is available here: [https://allure-examples.github.io/junit4-java-maven](https://allure-examples.github.io/junit4-java-maven/)
| 1 |
Dinnerbone/BukkitFullOfMoon | Custom generator example for Bukkit | null | null | 1 |
k33ptoo/Drapo-Dashboard-JavaFX | A JavaFX example based on Drapo's dashboard an inspiration ui. | javafx javafx-application javafx-desktop-apps javafx-gui | # Drapo-Dashboard-JavaFX
A JavaFX example based on Drapo's dashboard, an inspirational UI.
Clone and do your thing.

Check out - https://dribbble.com/shots/3750197-Drapo-Pro-s-dashboard?utm_source=Pinterest_Shot&utm_campaign=drapo&utm_content=Drapo%20Pro%27s%20dashboard&utm_medium=Social_Share
Catch me outside.
* https://www.facebook.com/keeptoo.ui.ux/
* https://www.youtube.com/KeepToo
| 1 |
HMS-Core/hms-ml-demo | HMS ML Demo provides an example of integrating Huawei ML Kit service into applications. This example demonstrates how to integrate services provided by ML Kit, such as face detection, text recognition, image segmentation, asr, and tts. | asr audio-file-transcription bank-card-recognition classification deep-learning document face-detection face-recognition hms huawei id-card-recognition image-segmentation kotlin-android language-detection machine-learning object-detection-and-tracking ocr text-to-speech text-translation tts-android | # hms-ml-demo
[](https://developer.huawei.com/consumer/en/doc/development/hiai-Guides/service-introduction-0000001050040017)
English | [中文](https://github.com/HMS-Core/hms-ml-demo/blob/master/README_ZH.md)
## Introduction
This project includes multiple demo apps that showcase the capabilities of the [HUAWEI ML Kit](https://developer.huawei.com/consumer/en/doc/development/hiai-Guides/service-introduction-0000001050040017).
There are 2 main directories:
- [MLKit-Sample](https://github.com/HMS-Core/hms-ml-demo/tree/master/MLKit-Sample) contains a scenario-based demo. To directly download and install Android binaries, scan the QR codes [here](https://developer.huawei.com/consumer/en/doc/development/hiai-Examples/sample-code-0000001050265470)
- [ApplicationCases](https://github.com/HMS-Core/hms-ml-demo/tree/master/ApplicationCases) contains various application cases
## Technical Support
If you are still evaluating HMS Core, obtain the latest information about HMS Core and share your insights with other developers at [Reddit](https://www.reddit.com/r/HuaweiDevelopers/).
- To resolve development issues, please go to [Stack Overflow](https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Votes). You can ask questions below the `huawei-mobile-services` tag, and Huawei R&D experts can solve your problem online on a one-to-one basis.
- To join the developer discussion, please visit [Huawei Developer Forum](https://forums.developer.huawei.com/forumPortal/en/forum/hms-core).
If you have problems using the sample code, submit [issues](https://github.com/HMS-Core/hms-ml-demo/issues) and [pull requests](https://github.com/HMS-Core/hms-ml-demo/pulls) to the repository.
| 1 |
albertattard/java-fork-join-example | Java Creed - Java Fork Join Example | null | Java 7 introduced a new type of `ExecutorService` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ExecutorService.html)) called **Fork/Join Framework** ([Tutorial](https://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html)), which excels in handling recursive algorithms. Different from other implementations of the `ExecutorService`, the Fork/Join Framework uses a work-stealing algorithm ([Paper](http://gee.cs.oswego.edu/dl/papers/fj.pdf)), which maximises thread utilisation and provides a simpler way to deal with tasks which spawn other tasks (referred to as _subtasks_).
All code listed below is available at: [https://github.com/javacreed/java-fork-join-example](https://github.com/javacreed/java-fork-join-example). Most of the examples will not contain the whole code and may omit fragments which are not relevant to the example being discussed. The readers can download or view all code from the above link.
This article provides a brief description of what is referred to as traditional executor service (so to called them) and how these work. It then introduces the Fork/Join Framework and describes how this differentiates from the traditional executor service. The third section in this article shows a practical example of the Fork/Join Framework and demonstrates most of its main components.
## Executor Service
A bank or post office branch has several counters from which customers are served. When a counter finishes with the current customer, the next customer in the queue takes their place and the person behind the counter, also referred to as the _employee_, starts serving the new customer. An employee serves only one customer at a given point in time, and the customers in the queue must wait for their turn. Furthermore, the employees are very patient and will never ask their customers to leave or step aside, even if these are waiting for something else to happen. The following image shows a simple view of the customers waiting and the employees serving the customers at the head of the queue.

Something similar happens in a multithreaded program where the `Thread`s ([Java Doc](http://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html)) represents the employees and the tasks to be carried out are the customers. The following image is identical to the above, with just the labels updated to use the programming terminology.

This should help you relate these two aspects and better visualise the scenario being discussed.
Most of the thread pools ([Tutorial](https://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html)) and executor services work in this manner. A `Thread` is assigned a _task_ and will only move to the next _task_ once the one at hand is finished. Tasks can take quite a long time to finish and may block waiting for something else to happen. This works well in many cases, but fails badly with problems that need to be solved recursively.
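The one-task-at-a-time behaviour described above can be sketched with a plain fixed thread pool. This is an illustrative toy (the class name and output are my own, not from the article's example code): two threads play the role of the employees and the submitted tasks play the role of the customers.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of a traditional executor service: two threads (the employees)
// share a single queue of tasks (the customers), and each thread runs one
// task to completion before taking the next one from the queue.
public class CountersDemo {
    public static void main(final String[] args) throws InterruptedException {
        final ExecutorService counters = Executors.newFixedThreadPool(2);
        for (int i = 1; i <= 5; i++) {
            final int customer = i;
            counters.submit(() -> System.out.println("Serving customer " + customer));
        }
        counters.shutdown();                            // accept no new customers
        counters.awaitTermination(1, TimeUnit.MINUTES); // wait for the queue to drain
    }
}
```

Each of the five tasks is served by whichever of the two threads becomes free first, so the output order is not guaranteed.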
Let us use the same analogy as before, with the customers waiting in the queue. Say that _Customer 1_, who is being served by _Employee 1_, needs some information from _Customer 6_, who is not yet in the queue. _Customer 1_ calls their friend, _Customer 6_, and waits for them to come to the bank. In the meantime, _Customer 1_ stays at the counter, occupying _Employee 1_. As mentioned before, the employees are very patient and will never send a customer back to the queue or ask them to step aside until all of their dependencies are resolved. _Customer 6_ arrives and queues as shown below.

With _Customer 1_ still occupying an employee, and, for the sake of this argument, the other customers _Customer 2_ and _Customer 3_ doing the same (that is, waiting for someone who is still in the queue), we have a deadlock. All employees are occupied by customers that are waiting for something to happen, and therefore the employees will never be free to serve the other customers.
In this example we saw a weakness of the traditional executor services when dealing with tasks which in turn depend on other tasks created by them (referred to as subtasks). This is very common in recursive algorithms such as the Towers of Hanoi ([Wiki](http://en.wikipedia.org/wiki/Tower_of_Hanoi)) or exploring a tree-like data structure (calculating the total size of a directory). The Fork/Join Framework was designed to address such problems, as we will see in the following section. Later on in this article we will also see an example of the problem discussed in this section.
## Fork/Join Framework
The main weakness of the traditional executor service implementations when dealing with tasks which in turn depend on other subtasks is that a thread is not able to put a task back in the queue, or to the side, and then serve/execute a new task. The Fork/Join Framework addresses this limitation by introducing another layer between the tasks and the threads executing them, which allows the threads to put blocked tasks aside and deal with them once all their dependencies are executed. In other words, if _Task 1_ depends on _Task 6_, which was itself created by _Task 1_, then _Task 1_ is placed aside and is only completed once _Task 6_ is executed. This frees the thread from _Task 1_ and allows it to execute other tasks, something which is not possible with the traditional executor service implementations.
This is achieved by the use of _fork_ and _join_ operations provided by the framework (hence the name Fork/Join). _Task 1_ forks _Task 6_ and then joins it to wait for the result. The fork operation puts _Task 6_ on the queue while the join operation allows _Thread 1_ to put _Task 1_ on the side until _Task 6_ completes. This is how the fork/join works, fork pushes new things to the queue while the join causes the current task to be sided until it can proceed, thus blocking no threads.
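To make the fork and join operations concrete, here is a minimal, hypothetical `RecursiveTask` (not part of the article's example code) that computes Fibonacci numbers: `fork()` pushes a subtask onto the current thread's queue, while `join()` sets the current task aside until that subtask completes, without blocking the worker thread.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical illustration of fork/join: one half of the problem is forked
// as a subtask, the other half is computed directly, and join() gathers the
// forked result.
public class Fibonacci extends RecursiveTask<Integer> {

    private final int n;

    public Fibonacci(final int n) {
        this.n = n;
    }

    @Override
    protected Integer compute() {
        if (n <= 1) {
            return n; // small enough to solve directly
        }
        final Fibonacci left = new Fibonacci(n - 1);
        left.fork();                          // push the subtask on this thread's queue
        final Fibonacci right = new Fibonacci(n - 2);
        return right.compute() + left.join(); // compute one half, wait for the other
    }

    public static void main(final String[] args) {
        final ForkJoinPool pool = new ForkJoinPool();
        System.out.println(pool.invoke(new Fibonacci(10))); // prints 55
        pool.shutdown();
    }
}
```

Note the common idiom of forking only one subtask and computing the other directly, which saves one queue operation per split.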
The Fork/Join Framework makes use of a special kind of thread pool called `ForkJoinPool` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html)), which differentiates it from the rest. `ForkJoinPool` implements a work-stealing algorithm and can execute `ForkJoinTask` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinTask.html)) objects. The `ForkJoinPool` maintains a number of threads, typically based on the number of available CPUs. Each thread has its own special kind of queue, a `Deque` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/Deque.html)), where all its tasks are placed. This is quite an important point to understand: the threads do not share a common queue; each thread has its own queue, as shown next.

The above image illustrates another queue that each thread has (lower part of the image). This queue, so to call it, allows the threads to put aside tasks which are blocked waiting for something else to happen. In other words, if the current task cannot proceed (as it performs a _join_ on a subtask), then it is placed on this queue until all of its dependencies are ready.
New tasks are added to the thread's queue (using the _fork_ operation) and each thread always processes the last task added to its queue. This is quite important. If the queue of a thread has two tasks, the last task added to the queue is processed first. This is referred to as last in first out, LIFO ([Wiki](http://en.wikipedia.org/wiki/LIFO_%28computing%29)).

In the above image, _Thread 1_ has two tasks in its queue, where _Task 1_ was added to the queue before _Task 2_. Therefore, _Task 2_ will be executed first by _Thread 1_, which then executes _Task 1_. Any idle thread can take tasks from the other threads' queues if available, that is, work-stealing. A thread will always steal the oldest tasks from some other thread's queue, as shown in the following image.

As shown in the above image, _Thread 2_ stole the oldest task, _Task 1_, from _Thread 1_. As a rule of thumb, threads will always attempt to steal from their neighbouring thread to minimise the contention that may be created during the work stealing.
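The two orderings just described can be modelled with a plain `Deque`. This is a toy model only, not the real `ForkJoinPool` internals: the owner thread pushes and pops at one end (LIFO), while a thief polls from the opposite end (FIFO), so the owner gets the newest task and the thief the oldest.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of a work-stealing queue: push/pop work on the head of the
// deque (LIFO, used by the owner thread), while pollLast() takes from the
// tail (FIFO, used by a stealing thread).
public class WorkStealingDemo {
    public static void main(final String[] args) {
        final Deque<String> queue = new ArrayDeque<>();
        queue.push("Task 1"); // oldest task
        queue.push("Task 2"); // newest task

        final String owner = queue.pop();      // owner takes the newest: Task 2
        final String thief = queue.pollLast(); // thief steals the oldest: Task 1

        System.out.println(owner); // prints Task 2
        System.out.println(thief); // prints Task 1
    }
}
```

Taking from opposite ends means the owner and a thief rarely contend for the same task, which is one reason this design scales well.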
The order in which tasks are executed and stolen is quite important. Ideally, work-stealing does not happen a lot, as it has a cost. When a task is moved from one thread to another, the context related to this task needs to be moved from one thread's stack to another. The threads may be running on different CPUs (and the Fork/Join Framework spreads the work across all CPUs), and moving a task's context from one CPU to another can be even slower. Therefore, the Fork/Join Framework minimises this, as described next.
A recursive algorithm starts with a large problem and applies a divide-and-conquer technique to break down the problem into smaller parts, until these are small enough to be solved directly. The first task added to the queue is the largest task. The first task will break the problem into a set of smaller tasks, which tasks are added to the queue as shown next.

_Task 1_ represents our problem, which is divided into two tasks, _Task 2_ is small enough to solve as is, but _Task 3_ needs to be divided further. Tasks _Task 4_ and _Task 5_ are small enough and these require no further splitting. This represents a typical recursive algorithm which can be split into smaller parts and then aggregates the results when ready. A practical example of such algorithm is calculating the size of a directory. We know that the size of a directory is equal to the size of its files.

Therefore the size of _Dir 1_ is equal to the size of _File 2_ plus the size of _Dir 3_. Since _Dir 3_ is a directory, its size is equal to the size of its content. In other words, the size of _Dir 3_ is equal to the size of _File 4_ plus the size of _File 5_.
Let us see how this is executed. We start with one task, that is, to compute the size of directory as shown in the following image.

_Thread 1_ will take _Task 1_ which tasks forks two other subtasks. These tasks are added to the queue of _Thread 1_ as shown in the next image.

_Task 1_ is waiting for the subtasks, _Task 2_ and _Task 3_ to finish, thus is pushed aside which frees _Thread 1_. To use better terminology, _Task 1_ joins the subtasks _Task 2_ and _Task 3_. _Thread 1_ starts executing _Task 3_ (the last task added to its queue), while _Thread 2_ steals _Task 2_.

Note that _Thread 1_ has already started processing its second task, while _Thread 3_ is still idle. As we will see later on, the threads do not perform the same amount of work, and the first thread will always produce more than the last thread. _Task 3_ forks two more subtasks, which are added to the queue of the thread that is executing it. Therefore, two more tasks are added to _Thread 1_'s queue. _Thread 2_, done with _Task 2_, steals another task, as shown next.

In the above example, we saw that _Thread 3_ never executed a task. This is because we only have very little subtasks. Once _Task 4_ and _Task 5_ are ready, their results are used to compute _Task 3_ and then _Task 1_.
As hinted before, the work is not evenly distributed among threads. The following chart shows how the work is distributed amongst threads when calculating the size of a reasonably large directory.

In the above example four threads were used. As expected, Thread 1 performs almost 40% of the work while Thread 4 (the last thread) performs slightly more than 5% of the work. This is another important principle to understand: the Fork/Join Framework does not distribute work evenly amongst threads and tries to minimise the number of threads utilised. The second thread will only take work from the first thread if the latter is not coping. As mentioned before, moving tasks between threads has a cost, which the framework tries to minimise.
This section described in some detail how the Fork/Join Framework works and how threads steal work from other threads' queues. In the following sections we will see several practical examples of the Fork/Join Framework and analyse the obtained results.
## Calculate Directory Total Size
To demonstrate the use of the Fork/Join Framework, we will calculate the size of a directory, a problem which can be solved recursively. The size of a file can be determined by the method `length()` ([Java Doc](http://docs.oracle.com/javase/7/docs/api/java/io/File.html#length%28%29)). The size of a directory is equal to the sum of the sizes of all its files.
We will use several approaches to calculate the size of a directory, some of which will use the Fork/Join Framework, and we will analyse the obtained results in each case.
## Using a Single Thread (no-concurrency)
The first example does not make use of threads and simply defines the algorithm to be used.
```java
package com.javacreed.examples.concurrency.part1;
import java.io.File;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class DirSize {
private static final Logger LOGGER = LoggerFactory.getLogger(DirSize.class);
public static long sizeOf(final File file) {
DirSize.LOGGER.debug("Computing size of: {}", file);
long size = 0;
/* Ignore files which are not files and dirs */
if (file.isFile()) {
size = file.length();
} else {
final File[] children = file.listFiles();
if (children != null) {
for (final File child : children) {
size += DirSize.sizeOf(child);
}
}
}
return size;
}
}
```
The class `DirSize` has a single method called `sizeOf()`, which takes a `File` ([Java Doc](http://docs.oracle.com/javase/7/docs/api/java/io/File.html)) instance as its argument. If this instance is a file, the method returns the file's length; otherwise, if it is a directory, the method calls `sizeOf()` recursively for each of the files within the directory and returns the total size.
The following example shows how to run this example, using the file path as defined by `FilePath.TEST_DIR` constant.
```java
package com.javacreed.examples.concurrency.part1;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.javacreed.examples.concurrency.utils.FilePath;
public class Example1 {
private static final Logger LOGGER = LoggerFactory.getLogger(Example1.class);
public static void main(final String[] args) {
final long start = System.nanoTime();
final long size = DirSize.sizeOf(FilePath.TEST_DIR);
final long taken = System.nanoTime() - start;
Example1.LOGGER.debug("Size of '{}': {} bytes (in {} nano)", FilePath.TEST_DIR, size, taken);
}
}
```
The above example will compute the size of the directory and will print all visited files before printing the total size. The following fragment only shows the last line, which is the size of the test directory (`C:\Test`) together with the time taken to compute it.
```
...
16:55:38.045 [main] INFO Example1.java:38 - Size of 'C:\Test\': 113463195117 bytes (in 4503253988 nano)
```
To disable the logs for each file visited, simply change the log level to `INFO` (in the `log4j.properties`) and the logs will only show the final results.
```
log4j.rootCategory=warn, R
log4j.logger.com.javacreed=info, stdout
```
Please note that the logging only makes things slower. In fact, if you run the example without logs (or with the log level set to `INFO`), the size of the directory is computed much faster.
In order to obtain a more reliable result, we will run the same test several times and return the average time taken as shown next.
```java
package com.javacreed.examples.concurrency.part1;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.javacreed.examples.concurrency.utils.FilePath;
import com.javacreed.examples.concurrency.utils.Results;
public class Example2 {
private static final Logger LOGGER = LoggerFactory.getLogger(Example2.class);
public static void main(final String[] args) {
final Results results = new Results();
for (int i = 0; i < 5; i++) {
results.startTime();
final long size = DirSize.sizeOf(FilePath.TEST_DIR);
final long taken = results.endTime();
Example2.LOGGER.info("Size of '{}': {} bytes (in {} nano)", FilePath.TEST_DIR, size, taken);
}
final long takenInNano = results.getAverageTime();
Example2.LOGGER.info("Average: {} nano ({} seconds)", takenInNano, TimeUnit.NANOSECONDS.toSeconds(takenInNano));
}
}
```
The same test is executed five times and the average result is printed last as shown next.
```
16:58:00.496 [main] INFO Example2.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 4266090211 nano)
16:58:04.728 [main] INFO Example2.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 4228931534 nano)
16:58:08.947 [main] INFO Example2.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 4224277634 nano)
16:58:13.205 [main] INFO Example2.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 4253856753 nano)
16:58:17.439 [main] INFO Example2.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 4235732903 nano)
16:58:17.439 [main] INFO Example2.java:46 - Average: 4241777807 nano (4 seconds)
```
### RecursiveTask
The Fork/Join Framework provides two types of tasks, the `RecursiveTask` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveTask.html)) and the `RecursiveAction` ([Java Doc](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveAction.html)). In this section we will only talk about the `RecursiveTask`. The `RecursiveAction` is discussed later on.
A `RecursiveTask` is a task that returns a value when executed, that is, the result of its computation. In our case, the task returns the size of the file or directory it represents. The class `DirSize` was modified to make use of `RecursiveTask` as shown next.
```java
package com.javacreed.examples.concurrency.part2;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class DirSize {
private static final Logger LOGGER = LoggerFactory.getLogger(DirSize.class);
private static class SizeOfFileTask extends RecursiveTask<Long> {
private static final long serialVersionUID = -196522408291343951L;
private final File file;
public SizeOfFileTask(final File file) {
this.file = Objects.requireNonNull(file);
}
@Override
protected Long compute() {
DirSize.LOGGER.debug("Computing size of: {}", file);
if (file.isFile()) {
return file.length();
}
final List<SizeOfFileTask> tasks = new ArrayList<>();
final File[] children = file.listFiles();
if (children != null) {
for (final File child : children) {
final SizeOfFileTask task = new SizeOfFileTask(child);
task.fork();
tasks.add(task);
}
}
long size = 0;
for (final SizeOfFileTask task : tasks) {
size += task.join();
}
return size;
}
}
public static long sizeOf(final File file) {
final ForkJoinPool pool = new ForkJoinPool();
try {
return pool.invoke(new SizeOfFileTask(file));
} finally {
pool.shutdown();
}
}
private DirSize() {}
}
```
Let us break this class into smaller parts and describe each separately.
1. The class has a private constructor because it is not meant to be instantiated. All its methods are static and can be called against the class directly, so the constructor is made `private` to prevent anyone from creating objects of this type.
```java
private DirSize() {}
```
1. The method `sizeOf()` does not compute the size of the file or directory itself. Instead, it creates an instance of `ForkJoinPool`, starts the computational process, waits for the directory size to be computed, and finally shuts down the pool before returning.
```java
public static long sizeOf(final File file) {
final ForkJoinPool pool = new ForkJoinPool();
try {
return pool.invoke(new SizeOfFileTask(file));
} finally {
pool.shutdown();
}
}
```
The threads created by the `ForkJoinPool` are daemon threads by default. Some articles advise that shutting this pool down is unnecessary, since its threads will not prevent the VM from shutting down. With that said, I recommend shutting down and disposing of objects properly when they are no longer needed; otherwise these daemon threads may be left idle for a long time even when no longer required.
1. The method `sizeOf()` creates an instance of `SizeOfFileTask`, which extends `RecursiveTask<Long>`. Therefore the `invoke()` method returns the value computed by this task.
```java
return pool.invoke(new SizeOfFileTask(file));
```
Note that the above code will block until the size of the directory is computed. In other words the above code will wait for the task (and all the subtasks) to finish working before continuing.
1. The class `SizeOfFileTask` is an inner class within the `DirSize` class.
```java
private static class SizeOfFileTask extends RecursiveTask<Long> {
private static final long serialVersionUID = -196522408291343951L;
private final File file;
public SizeOfFileTask(final File file) {
this.file = Objects.requireNonNull(file);
}
@Override
protected Long compute() {
/* Removed for brevity */
}
}
```
Its sole constructor takes the file (which can be a directory) whose size is to be computed; this file cannot be `null`. The `compute()` method, discussed next, is responsible for carrying out the work of this task, in this case computing the size of the file or directory.
1. The `compute()` method determines whether the file passed to its constructor is a file or directory and acts accordingly.
```java
@Override
protected Long compute() {
DirSize.LOGGER.debug("Computing size of: {}", file);
if (file.isFile()) {
return file.length();
}
final List<SizeOfFileTask> tasks = new ArrayList<>();
final File[] children = file.listFiles();
if (children != null) {
for (final File child : children) {
final SizeOfFileTask task = new SizeOfFileTask(child);
task.fork();
tasks.add(task);
}
}
long size = 0;
for (final SizeOfFileTask task : tasks) {
size += task.join();
}
return size;
}
```
If the file is a file, then the method simply returns its size as shown next.
```java
if (file.isFile()) {
return file.length();
}
```
Otherwise, if the file is a directory, it lists all its sub-files and creates a new instance of `SizeOfFileTask` for each of these sub-files.
```java
final List<SizeOfFileTask> tasks = new ArrayList<>();
final File[] children = file.listFiles();
if (children != null) {
for (final File child : children) {
final SizeOfFileTask task = new SizeOfFileTask(child);
task.fork();
tasks.add(task);
}
}
```
For each instance of the created `SizeOfFileTask`, the `fork()` method is called. The `fork()` method causes the new instance of `SizeOfFileTask` to be added to this thread's queue. All created instances of `SizeOfFileTask` are saved in a list called `tasks`. Finally, once all tasks are forked, we need to wait for them to finish and sum up their values.
```java
long size = 0;
for (final SizeOfFileTask task : tasks) {
size += task.join();
}
return size;
```
This is done by the `join()` method, which forces this task to stop, step aside if need be, and wait for the subtask to finish. The values returned by all subtasks are added to the variable `size`, which is returned as the size of this directory.
Admittedly, the Fork/Join Framework version is more complex than the simpler version which does not use multithreading. This is a fair point, but the simpler version is more than twice as slow: the Fork/Join example took on average about two seconds to compute the size, while the non-threaded version took over four seconds on average, as shown next.
```
16:59:19.557 [main] INFO Example3.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 2218013380 nano)
16:59:21.506 [main] INFO Example3.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 1939781438 nano)
16:59:23.505 [main] INFO Example3.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 2004837684 nano)
16:59:25.363 [main] INFO Example3.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 1856820890 nano)
16:59:27.149 [main] INFO Example3.java:42 - Size of 'C:\Test\': 113463195117 bytes (in 1782364124 nano)
16:59:27.149 [main] INFO Example3.java:46 - Average: 1960363503 nano (1 seconds)
```
In this section we saw how multithreading helps us improve the performance of our program. In the next section we will see how inappropriate use of multithreading can make things worse.
### ExecutorService
In the previous example we saw how concurrency improved the performance of our algorithm. When misused, multithreading can provide poor results as we will see in this section. We will try to solve this problem using a traditional executor service.
**Please note that the code shown in this section is broken and does not work. It will hang forever and is only included for demonstration purpose.**
The class `DirSize` was modified to work with `ExecutorService` and `Callable` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Callable.html)).
```java
package com.javacreed.examples.concurrency.part3;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* This example is broken and suffers from deadlock and is only included for documentation purpose.
*
* @author Albert Attard
*/
public class DirSize {
private static class SizeOfFileCallable implements Callable<Long> {
private final File file;
private final ExecutorService executor;
public SizeOfFileCallable(final File file, final ExecutorService executor) {
this.file = Objects.requireNonNull(file);
this.executor = Objects.requireNonNull(executor);
}
@Override
public Long call() throws Exception {
DirSize.LOGGER.debug("Computing size of: {}", file);
long size = 0;
if (file.isFile()) {
size = file.length();
} else {
final List<Future<Long>> futures = new ArrayList<>();
for (final File child : file.listFiles()) {
futures.add(executor.submit(new SizeOfFileCallable(child, executor)));
}
for (final Future<Long> future : futures) {
size += future.get();
}
}
return size;
}
}
public static long sizeOf(final File file) {
final int threads = Runtime.getRuntime().availableProcessors();
DirSize.LOGGER.debug("Creating executor with {} threads", threads);
final ExecutorService executor = Executors.newFixedThreadPool(threads);
try {
return executor.submit(new SizeOfFileCallable(file, executor)).get();
} catch (final Exception e) {
throw new RuntimeException("Failed to calculate the dir size", e);
} finally {
executor.shutdown();
}
}
private static final Logger LOGGER = LoggerFactory.getLogger(DirSize.class);
private DirSize() {}
}
```
The idea is very much the same as before. The inner class `SizeOfFileCallable` implements `Callable<Long>` and delegates the computation of its subtasks to the instance of `ExecutorService` passed to its constructor. This is not required when dealing with `RecursiveTask`, as the latter automatically adds its subtasks to the thread's queue for execution.
We will not go through this in more detail to keep this article focused on the Fork/Join Framework. As mentioned already, this method blocks once all threads are occupied as shown next.
```
17:22:39.216 [main] DEBUG DirSize.java:78 - Creating executor with 4 threads
17:22:39.222 [pool-1-thread-1] DEBUG DirSize.java:56 - Computing size of: C:\Test\
17:22:39.223 [pool-1-thread-2] DEBUG DirSize.java:56 - Computing size of: C:\Test\Dir 1
17:22:39.223 [pool-1-thread-4] DEBUG DirSize.java:56 - Computing size of: C:\Test\Dir 2
17:22:39.223 [pool-1-thread-3] DEBUG DirSize.java:56 - Computing size of: C:\Test\Dir 3
```
This example was executed on a Core i5 computer, which has four available processors (as indicated by `Runtime.getRuntime().availableProcessors()` [Java Doc](http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html#availableProcessors())). Once all four threads are occupied, this approach blocks forever, as we saw in the bank branch example at the beginning of this article. All threads are occupied and thus cannot be used to process the remaining subtasks. One could suggest using more threads, and while that may seem to be a solution, the Fork/Join Framework solved the same problem using only four threads. Furthermore, threads are not cheap, and one should not simply spawn thousands of threads because an inappropriate technique was chosen.
While the term multithreading is overused in the programming community, the choice of multithreading technique is important, as some options simply do not work in certain scenarios, as we saw above.
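For contrast, the same computation can be expressed as a `RecursiveTask<Long>`, so that waiting threads help execute queued subtasks instead of blocking. The following stand-alone sketch is illustrative only: the class name, the `main()` demo, and the temporary-directory setup are my own, not taken from the article's listings.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class DirSizeTask extends RecursiveTask<Long> {

    private final File file;

    public DirSizeTask(final File file) {
        this.file = file;
    }

    @Override
    protected Long compute() {
        if (file.isFile()) {
            return file.length();
        }
        long size = 0;
        final File[] children = file.listFiles();
        if (children != null) {
            final List<DirSizeTask> tasks = new ArrayList<>();
            for (final File child : children) {
                final DirSizeTask task = new DirSizeTask(child);
                // fork() queues the subtask on the worker's deque instead of
                // tying up a whole pool thread, which is what deadlocked the
                // fixed-thread-pool version.
                task.fork();
                tasks.add(task);
            }
            for (final DirSizeTask task : tasks) {
                // join() does not simply block: a fork/join worker waiting on
                // a subtask will execute other queued work in the meantime.
                size += task.join();
            }
        }
        return size;
    }

    public static long sizeOf(final File file) {
        final ForkJoinPool pool = new ForkJoinPool();
        try {
            return pool.invoke(new DirSizeTask(file));
        } finally {
            pool.shutdown();
        }
    }

    public static void main(final String[] args) throws IOException {
        // Build a tiny directory tree so the example is self-contained.
        final File root = Files.createTempDirectory("dirsize").toFile();
        Files.write(new File(root, "a.bin").toPath(), new byte[10]);
        final File sub = new File(root, "sub");
        sub.mkdir();
        Files.write(new File(sub, "b.bin").toPath(), new byte[32]);
        System.out.println(sizeOf(root)); // prints 42
    }
}
```

Because joining workers keep stealing queued tasks, four threads suffice here where the fixed thread pool above starved.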
### RecursiveAction
The Fork/Join Framework supports two types of tasks. The second type is the `RecursiveAction`. These tasks are not meant to return anything; they are ideal for cases where you want to perform an action, such as deleting a file, without returning a result. In general you cannot delete a non-empty directory: first you need to delete all of its contents. In this case a `RecursiveAction` can be used, where each action either deletes a file, or first deletes all of the directory's contents and then deletes the directory itself.
Following is the final example we have in this article. It shows the modified version of the `DirSize`, which makes use of the `SizeOfFileAction` inner class to compute the size of the directory.
```java
package com.javacreed.examples.concurrency.part4;

import java.io.File;
import java.util.Objects;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicLong;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DirSize {

    private static class SizeOfFileAction extends RecursiveAction {

        private static final long serialVersionUID = -196522408291343951L;

        private final File file;
        private final AtomicLong sizeAccumulator;

        public SizeOfFileAction(final File file, final AtomicLong sizeAccumulator) {
            this.file = Objects.requireNonNull(file);
            this.sizeAccumulator = Objects.requireNonNull(sizeAccumulator);
        }

        @Override
        protected void compute() {
            DirSize.LOGGER.debug("Computing size of: {}", file);
            if (file.isFile()) {
                sizeAccumulator.addAndGet(file.length());
            } else {
                final File[] children = file.listFiles();
                if (children != null) {
                    for (final File child : children) {
                        ForkJoinTask.invokeAll(new SizeOfFileAction(child, sizeAccumulator));
                    }
                }
            }
        }
    }

    public static long sizeOf(final File file) {
        final ForkJoinPool pool = new ForkJoinPool();
        try {
            final AtomicLong sizeAccumulator = new AtomicLong();
            pool.invoke(new SizeOfFileAction(file, sizeAccumulator));
            return sizeAccumulator.get();
        } finally {
            pool.shutdown();
        }
    }

    private static final Logger LOGGER = LoggerFactory.getLogger(DirSize.class);

    private DirSize() {}
}
```
This class is very similar to its predecessors. The main difference lies in the way the final value (the size of the file or directory) is returned. Remember that a `RecursiveAction` cannot return a value. Instead, all tasks share a common counter of type `AtomicLong`, and each task increments this common counter instead of returning the size of the file.
Let us break this class into smaller parts and go through each part individually. We will skip the parts that were already explained, so as not to repeat ourselves.
1. The method `sizeOf()` makes use of the `ForkJoinPool` as before. The common counter, named `sizeAccumulator`, is also initialised in this method and passed to the first task. This instance is shared by all subtasks, and each of them increments its value.
```java
public static long sizeOf(final File file) {
    final ForkJoinPool pool = new ForkJoinPool();
    try {
        final AtomicLong sizeAccumulator = new AtomicLong();
        pool.invoke(new SizeOfFileAction(file, sizeAccumulator));
        return sizeAccumulator.get();
    } finally {
        pool.shutdown();
    }
}
```
Like before, this method blocks until all subtasks are done, after which it returns the total size.
1. The inner class `SizeOfFileAction` extends `RecursiveAction` and its constructor takes two arguments.
```java
private static class SizeOfFileAction extends RecursiveAction {

    private static final long serialVersionUID = -196522408291343951L;

    private final File file;
    private final AtomicLong sizeAccumulator;

    public SizeOfFileAction(final File file, final AtomicLong sizeAccumulator) {
        this.file = Objects.requireNonNull(file);
        this.sizeAccumulator = Objects.requireNonNull(sizeAccumulator);
    }

    @Override
    protected void compute() {
        /* Removed for brevity */
    }
}
```
The first argument is the file (or directory) whose size will be computed. The second argument is the shared counter.
1. The `compute()` method is slightly simpler here, as it does not have to wait for the subtasks. If the given file is a file, it increments the common counter (referred to as `sizeAccumulator`). Otherwise, if the file is a directory, it forks a new instance of `SizeOfFileAction` for each child file.
```java
protected void compute() {
    DirSize.LOGGER.debug("Computing size of: {}", file);
    if (file.isFile()) {
        sizeAccumulator.addAndGet(file.length());
    } else {
        final File[] children = file.listFiles();
        if (children != null) {
            for (final File child : children) {
                ForkJoinTask.invokeAll(new SizeOfFileAction(child, sizeAccumulator));
            }
        }
    }
}
```
In this case the method `invokeAll()` ([Java Doc](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinTask.html#invokeAll(java.util.concurrent.ForkJoinTask...))) is used to fork the tasks.
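Note that `invokeAll()` also accepts multiple tasks or a collection, forking and joining them together, whereas calling it with a single task inside a loop joins each child before the next one is submitted. The following self-contained sketch shows the batch form; its class names and the toy `AddAction` task are my own, for illustration only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicLong;

public class InvokeAllDemo {

    private static class AddAction extends RecursiveAction {
        private static final long serialVersionUID = 1L;

        private final long value;
        private final AtomicLong total;

        AddAction(final long value, final AtomicLong total) {
            this.value = value;
            this.total = total;
        }

        @Override
        protected void compute() {
            total.addAndGet(value);
        }
    }

    public static void main(final String[] args) {
        final AtomicLong total = new AtomicLong();
        final ForkJoinPool pool = new ForkJoinPool();
        pool.invoke(new RecursiveAction() {
            private static final long serialVersionUID = 1L;

            @Override
            protected void compute() {
                // invokeAll(Collection) forks every task and joins them all,
                // so sibling tasks can run in parallel, unlike invoking them
                // one at a time in a loop.
                final List<AddAction> actions = new ArrayList<>();
                for (long i = 1; i <= 4; i++) {
                    actions.add(new AddAction(i, total));
                }
                ForkJoinTask.invokeAll(actions);
            }
        });
        pool.shutdown();
        System.out.println(total.get()); // prints 10
    }
}
```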
This approach takes approximately 11 seconds to complete, making it the slowest of all three, as shown next.
```
19:04:39.925 [main] INFO Example5.java:40 - Size of 'C:\Test': 113463195117 bytes (in 11445506093 nano)
19:04:51.433 [main] INFO Example5.java:40 - Size of 'C:\Test': 113463195117 bytes (in 11504270600 nano)
19:05:02.876 [main] INFO Example5.java:40 - Size of 'C:\Test': 113463195117 bytes (in 11442215513 nano)
19:05:15.661 [main] INFO Example5.java:40 - Size of 'C:\Test': 113463195117 bytes (in 12784006599 nano)
19:05:27.089 [main] INFO Example5.java:40 - Size of 'C:\Test': 113463195117 bytes (in 11428115064 nano)
19:05:27.226 [main] INFO Example5.java:44 - Average: 11720822773 nano (11 seconds)
```
This may come as a surprise to many. How is this possible, when multiple threads were used? This is a common misconception: multithreading does not guarantee better performance. In this case we have a design flaw which we should ideally avoid. The common counter named `sizeAccumulator` is shared between all threads and thus causes contention between them. This actually defeats the purpose of the divide-and-conquer technique, as a bottleneck is created.
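If a shared counter must stay, one mitigation, assuming Java 8 or later is available, is `java.util.concurrent.atomic.LongAdder`, which spreads concurrent increments across internal cells and only sums them on demand, so writers contend far less than on a single `AtomicLong`. This stand-alone sketch is not part of the original example:

```java
import java.util.concurrent.atomic.LongAdder;

public class CounterContention {

    public static void main(final String[] args) throws InterruptedException {
        // LongAdder is optimised for write-heavy shared counters: increments
        // hit different internal cells, and sum() combines them at the end.
        final LongAdder counter = new LongAdder();

        final Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter.increment();
            }
        };

        final Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(work);
            threads[i].start();
        }
        for (final Thread thread : threads) {
            thread.join();
        }

        System.out.println(counter.sum()); // prints 4000000
    }
}
```

The cleaner structural fix, though, is to avoid the shared counter altogether and have each task return its partial result, as the `RecursiveTask` approach does.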
## Conclusion
This article provided a detailed explanation of the Fork/Join Framework and how it can be used. It walked through a practical example and compared several approaches. The Fork/Join Framework is ideal for recursive algorithms, but it does not necessarily distribute the load amongst the threads evenly. Tasks and subtasks should not block on anything other than `join()`, and should delegate work using `fork()`. Avoid any blocking IO operations within tasks, and minimise shared mutable state, as modifying shared variables has a negative effect on the overall performance.
| 1 |
lisawray/fontbinding | A full example of custom fonts in XML using data binding and including font caching. | null | ## [Deprecated]
Fonts in XML are now supported by the Android support library as of 26.0, including in styles and themes. I recommend using the support library and IDE integration for all your modern font needs!
https://developer.android.com/guide/topics/ui/look-and-feel/fonts-in-xml.html#using-support-lib
# fontbinding
Easy custom fonts in XML using [data binding](http://developer.android.com/tools/data-binding/guide.html).
No setup required, no extra Java code, and no custom views.
```xml
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:font="@{`alegreya`}"
/>
```
<img src="https://raw.githubusercontent.com/lisawray/fontbinding/master/screenshot_land.png" alt="Drawing" height="400px"/>
This example includes a simple font cache that automatically loads names from your `assets/fonts` folder and lazy-loads typefaces. Just drag and drop font files and use them in XML by their normal or lowercase filenames (e.g. "Roboto-Italic" or "roboto-italic" for `Roboto-Italic.otf`). That's it!
### Data Binding
Make sure to use the data binding framework to inflate your layout.
```java
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
DataBindingUtil.setContentView(this, R.layout.activity_main);
}
}
```
More about data binding: http://developer.android.com/tools/data-binding/guide.html
### Custom Naming
You can set custom names for your fonts, but you don't have to.
```java
FontCache.getInstance().addFont("alegreya", "Alegreya-Regular.ttf");
```
## Note:
It's not currently possible to define custom attributes in styles using data binding. If you require this, check out [Calligraphy](https://github.com/chrisjenx/Calligraphy) by Chris Jenx.
| 1 |
neo4j-examples/movies-java-bolt | Neo4j Movies Example application with SparkJava backend using the neo4j-java-driver | bolt cypher graph graph-database java movies-api neo4j | null | 1 |
iriusrisk/zap-webdriver | Example security tests using Selenium WebDriver and OWASP ZAP | null | zap-webdriver
=============
Example security tests using JUnit, Selenium WebDriver and OWASP ZAP to test the Bodgeit store (https://code.google.com/p/bodgeit/)
The tests use selenium to navigate and login to the app, then spider the content with ZAP and perform a security scan using ZAP's scanner. Tests pass or fail based on vulnerabilities found.
Getting started
===============
1. Download and start the [bodgeit store](https://code.google.com/p/bodgeit/) on port 8080
2. Download and start [OWASP ZAP](https://code.google.com/p/zaproxy/wiki/Downloads?tm=2) at least version 2.4
3. In the ZAP Options change the local proxy port to 8888
4. Download this repository
5. Look through the src/test/java/net/continuumsecurity/ZapScanTest class and check that the static fields match your setup. In particular, change the CHROME_DRIVER_PATH to point to the chrome driver instance appropriate for your platform, the driver/ directory contains versions for Linux, Mac and Windows.
6. Run: mvn test
Details
=======
The Selenium steps to navigate the application and submit forms is contained in the MyAppNavigation class. The JUnit testing steps are defined in ZapScanTest.
Keeping these two aspects separate makes test maintenance easier. If your testing team already has Selenium code to perform navigation (e.g. Page Objects), you can then drop those in to the MyAppNavigation class.
The ZapScanTest class should be regarded as a starting point for your own test cases and it makes some wild assumptions about which alerts to ignore. If you're going to use these tests as
part of a Continuous Integration/Continuous Delivery process then please make sure that the build will fail for important security vulnerabilities.
For a more comprehensive security testing framework with security requirements specified in plain English and many more pre-written tests, consider the [BDD-Security framework](http://www.continuumsecurity.net/bdd-intro.html) instead.
| 1 |
cstew/CustomBehavior | An example of a custom CoordinatorLayout Behavior | null | Two examples of custom `Behavior`s with `FloatingActionButton` and `CoordinatorLayout`.
Shrink and Rotate

| 1 |
oktadev/auth0-java-oauth-examples | null | null | null | 0 |
m-cakir/bubble-sheet-multiple-choice-scanner | Bubble sheet multiple choice scanner example with OpenCV | bubble-sheet image-processing java multiple-choice opencv | # Bubble Sheet Multiple Choice Scanner
Bubble sheet multiple choice scanner example with OpenCV Java (opencv-3.4.0). Not ready for production usage yet.
## Install
Download OpenCV from [official site](https://opencv.org/releases.html). Then add library to project and set VM options as following.
```
// native library path
-Djava.library.path=/opencv/build/lib
```
### Intellij
```
File > Project Structure (Ctrl + Alt + Shift + S) > Libraries > + (Alt + Insert) > Select OpenCV jar file
Run/Debug Configuration -> Application -> VM options
```
## Steps
* Dilate source image for better recognition
* Transform to Grayscale format
* Threshold operation (for recognizing mask/conjunction with bitwise_and)
* Blur filter
* Canny edge algorithm
* Adaptive Thresh (for find main wrapper rectangle & bubbles)
* Recognize main wrapper rectangle according to hierarchy
* Find bubbles with estimated ratio (~17/15.5)
* Sort bubbles by coordinate points
* Recognize which option is filled or empty with bitwise_and and countNonZero
## Sources
* Pdf - [Bubble Sheet Form](sources/bubble-sheet.pdf)
* Inputs - Example Sheet [1](sources/sheet-1.jpg) & [2](sources/sheet-2.jpg)
* Outputs for Sheet [1](sources/result-sheet-1.png) & [2](sources/result-sheet-2.png)
## Running
Run the "main" method of Main class.
```
public static void main(String[] args) throws Exception {
sout("...started");
(1) Mat source = Imgcodecs.imread(getSource("sheet-1.jpg"));
Scanner scanner = new Scanner(source, 20);
(2) scanner.setLogging(true);
scanner.scan();
sout("...finished");
}
```
(1) Change the source file name.
(2) If logging is:
* enabled, you can see the processing flow and some detailed logs.
* disabled, you see only the output/result file.
## Output (for sheet-2)
```
...started
*************************************
*************************************
answer is ....
*************************************
*************************************
1. A
2. D
3. B
4. EMPTY/INVALID
5. D
6. A
7. D
8. C
9. A
10. EMPTY/INVALID
11. B
12. A
13. D
14. EMPTY/INVALID
15. B
16. EMPTY/INVALID
17. EMPTY/INVALID
18. C
19. EMPTY/INVALID
20. D
...finished
```

| 1 |
unclebob/Episode-10-ExpenseReport | The Expense Report example from cleancoders.com episode 10 | null | null | 1 |
mikesmullin/Assembly | Various x86/64 Assembly examples for learning | null | # Assembly
Follow along as we learn Assembly Language (ASM). Including x86, x86_64 architecture,
machine language, JVM bytecode, and fundamentals of hardware.
# My Book
- Thorough notes and links on x86/64 Machine Code Assembly / Disassembly
https://gist.github.com/mikesmullin/6259449
# Examples:
- `bootloader/` - on BIOS, boot, and operating system assembly
- `windows/` - on Windows process assembly (incl. OpenGL)
- `linux_gdb/` - on Linux assembly and Asm/Disasm/Debug tooling
# Related:
- [mikesmullin/OperatingSystem](https://github.com/mikesmullin/OperatingSystem) repo | 0 |
mrthetkhine/designpattern | Design pattern code example in Java | design pattern programming | Implementation of the major design patterns from the GoF book
Design pattern code example in Java
| 1 |
bekkopen/jetty-pkg | Embed-your-webapp into jetty7 example | null | Usage
=====
* Add your war file artifact to the Maven <code>pom.xml</code>
* Build it <code>mvn clean install</code>
* Run it <code>java -jar yourWebApp-version.jar start</code>
Configuration
=============
Create a secret (i.e. one per environment):
<pre>
:➜ md5sum yourWarFile.war
eb27fb2e61ed603363461b3b4e37e0a0 yourWarFile.war
</pre>
Create a configuration file:
<pre>
:➜ cat > /etc/bekkopen/appname.properties
jetty.contextPath=/appname
jetty.port=7000
jetty.workDir=/var/apps/appname/
jetty.secret=eb27fb2e61ed603363461b3b4e37e0a0
[ctrl+d]
</pre>
Start it with a configuration file (default: CWD/jetty.properties):
<pre>
:➜ java -Dconfig=/etc/bekkopen/appname.properties -jar appname-1.0.0rc0.jar
</pre>
Override individual properties:
<pre>
:➜ java -Djetty.port=7001 -jar appname-1.0.0rc0.jar
</pre>
(I didn't bother implementing combinations of system properties and resource properties; we use Constretto in our own launcher, for example.)
| 1 |
smithy-lang/smithy-examples | A collection of examples to help users get up and running with Smithy | api aws build-tool codegen examples smithy smithy-models | # Smithy Examples
[](https://github.com/smithy-lang/smithy-examples/actions/workflows/integ.yml)
This repository contains a range of examples to help you get up and running with [Smithy](https://smithy.io).
*Note*: You will need the [Smithy CLI](https://smithy.io/2.0/guides/smithy-cli/index.html) installed to use the examples in this
repository as templates.
If you do not have the CLI installed, follow [this guide](https://smithy.io/2.0/guides/smithy-cli/index.html) to install it now.
### What is Smithy
Smithy is an interface definition language and set of tools that allows developers to build clients and servers in
multiple languages. A Smithy model enables API providers to generate clients and servers in various programming languages,
API documentation, test automation, and example code.
## Examples
- [Quick Start](quickstart-examples) - Build the Smithy [quick start example](https://smithy.io/2.0/quickstart.html).
- [Conversion](conversion-examples) - Convert Smithy models to other formats (such as OpenAPI) and vice versa
- [Custom Traits](custom-trait-examples) - Create custom Smithy [traits](https://smithy.io/2.0/spec/model.html#traits) to use for defining custom model metadata.
- [Projections](projection-examples) - Using Smithy [projections](https://smithy.io/2.0/guides/building-models/build-config.html#projections) to create different views of
your model for specific consumers.
- [Shared Models](shared-model-examples) - Create a package of common Smithy shapes that can be shared between Smithy models.
- [Linting and Validation](linting-and-validation-examples) - Use linters and validators to ensure APIs adhere to best practices and standards.
## Contributing
Contributions are welcome. Please read the [contribution guidelines](CONTRIBUTING.md) first.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the MIT-0 License.
| 0 |
sysco-middleware/kafka-testing | Test examples of kafka-clients: unit, integration, end-to-end | embedded-kafka embeddedsinglenodekafkacluster kafka-streams kafka-streams-test-utils kafka-testing | null | 0 |
lemonlabs/u2020-mortar | [DEPRECATED] Port of Jake Wharton's U2020 sample app with use of Mortar & Flow + couple more examples | null | U+2020 mortar
============
Disclaimer: Mortar & Flow have evolved since this project was created. It will be updated when we've completely decided what to do with the libraries.
Port of Jake Wharton's U+2020 sample app with use of Mortar & Flow + couple more examples.
This is more of a kitchen sink of things we are trying with Mortar&Flow. Use as a base for a project at your own risk. Ideas and PRs are welcome.
* [Jake Wharton's U2020](https://github.com/JakeWharton/u2020)
* [Square's Mortar](https://github.com/square/mortar)
* [Square's Flow](https://github.com/square/flow)
| 1 |
networknt/light-example-4j | Example APIs or services to demo all feature of the light-4j framework | null | # light-example-4j
Example APIs to demo all feature of the light-4j and frameworks built on top of light-4j.
[Stack Overflow](https://stackoverflow.com/questions/tagged/light-4j) |
[Google Group](https://groups.google.com/forum/#!forum/light-4j) |
[Gitter Chat](https://gitter.im/networknt/light-4j) |
[Subreddit](https://www.reddit.com/r/lightapi/) |
[Youtube Channel](https://www.youtube.com/channel/UCHCRMWJVXw8iB7zKxF55Byw) |
[Documentation](https://doc.networknt.com) |
[Contribution Guide](https://doc.networknt.com/contribute/) |
| 1 |
frenmanoj/bookstore | A complete example for Spring MVC + Maven + Hibernate CRUD operation | null | # bookstore
A complete example for Spring MVC + Maven + Hibernate CRUD operation
# Running the Application
+ Open the Command Prompt
+ Go to the root project directory ( bookstore )
+ Run the following maven command to download all dependent JARs.
```
mvn eclipse:clean eclipse:eclipse
```
+ Run Tomcat server
```
mvn clean tomcat7:run
```
+ Go to the browser and enter the following URL:
```
http://localhost:8080/bookstore/book/
```
The port number might be different in your case. Please have a look at the tomcat log in console for that.
# Blog Reference:
[https://shrestha-manoj.blogspot.com/2014/05/spring-mvc-maven-hibernate-crud-example.html](https://shrestha-manoj.blogspot.com/2014/05/spring-mvc-maven-hibernate-crud-example.html)
| 1 |
hdiv/hdiv-spring-mvc-showcase | Spring MVC and Hdiv example application | null | Hdiv: Application Self-Protection
=================================
Sample application showing the integration between Spring MVC and Hdiv.
How to build the application
============================
Clone this repo and build war file (you'll need Git and Maven installed):
git clone git://github.com/hdiv/hdiv-spring-mvc-showcase.git
cd hdiv-spring-mvc-showcase
mvn package
mvn tomcat7:run
Open [http://localhost:8080/hdiv-spring-mvc-showcase](http://localhost:8080/hdiv-spring-mvc-showcase) in your favorite browser.
| 1 |
headius/indy_deep_dive | Examples from my invokedynamic deep dive" talk" | null | null | 0 |
oktadev/auth0-full-stack-java-example | 🔥 Full Stack Java Example | auth0 fullstack fullstack-java java oidc react spring-boot | # Full Stack Java Example with JHipster (React + Spring Boot) 🤓
This example app shows you how to create a slick-looking, full-stack, secure application using React, Spring Boot, and JHipster.
Please read the following blog posts to learn more:
- [Full Stack Java with React, Spring Boot, and JHipster][blog] to see how this app was created.
- [Introducing Spring Native for JHipster: Serverless Full-Stack Made Easy][blog-spring-native] to convert this app to an executable with Spring Native.
- [Use GitHub Actions to Build GraalVM Native Images][blog-github-graalvm] to automate your GraalVM builds.
**Prerequisites:**
- [Node.js 14+](https://nodejs.org/)
- [Java 11+](https://sdkman.io)
- [Docker Compose](https://docs.docker.com/compose/install/)
- An [Auth0 Account](https://auth0.com/signup)
> [Auth0](https://auth0.com) is an easy to implement, adaptable authentication and authorization platform. Basically, we make your login box awesome.
- [Getting Started](#getting-started)
- [Links](#links)
- [Help](#help)
- [License](#license)
## Getting Started
To install this example, clone it.
```
git clone https://github.com/oktadev/auth0-full-stack-java-example.git
cd auth0-full-stack-java-example
```
Create a `.auth0.env` file in the root of the project, and fill it with the code below to override the default OIDC settings:
```shell
export SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=https://<your-auth0-domain>/
export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID=<your-client-id>
export SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET=<your-client-secret>
export JHIPSTER_SECURITY_OAUTH2_AUDIENCE=https://<your-auth0-domain>/api/v2/
```
You'll need to create a new web application in Auth0 and fill in the `<...>` placeholders before this works.
### Create an OpenID Connect App on Auth0
Log in to your Auth0 account (or [sign up](https://auth0.com/signup) if you don't have an account). You should have a unique domain like `dev-xxx.eu.auth0.com`.
Press the **Create Application** button in [Applications section](https://manage.auth0.com/#/applications). Use a name like `JHipster Baby!`, select `Regular Web Applications`, and click **Create**.
Switch to the **Settings** tab and configure your application settings:
- Allowed Callback URLs: `http://localhost:8080/login/oauth2/code/oidc`
- Allowed Logout URLs: `http://localhost:8080/`
Scroll to the bottom and click **Save Changes**.
In the [roles](https://manage.auth0.com/#/roles) section, create new roles named `ROLE_ADMIN` and `ROLE_USER`.
Create a new user account in the [users](https://manage.auth0.com/#/users) section. Click on the **Role** tab to assign the roles you just created to the new account.
_Make sure your new user's email is verified before attempting to log in!_
Next, head to **Auth Pipeline** > **Rules** > **Create**. Select the `Empty rule` template. Provide a meaningful name like `Group claims` and replace the Script content with the following.
```js
function(user, context, callback) {
user.preferred_username = user.email;
const roles = (context.authorization || {}).roles;
function prepareCustomClaimKey(claim) {
return `https://www.jhipster.tech/${claim}`;
}
const rolesClaim = prepareCustomClaimKey('roles');
if (context.idToken) {
context.idToken[rolesClaim] = roles;
}
if (context.accessToken) {
context.accessToken[rolesClaim] = roles;
}
callback(null, user, context);
}
```
This code is adding the user's roles to a custom claim (prefixed with `https://www.jhipster.tech/roles`). This claim is mapped to Spring Security authorities in `SecurityUtils.java`.
Click **Save changes** to continue.
**NOTE**: Want to have all these steps automated for you? Vote for [this issue](https://github.com/auth0/auth0-cli/issues/351) in the Auth0 CLI project.
### Run Your JHipster App with Auth0
Set your Auth0 properties in `.auth0.env`, and start the app.
```shell
source .auth0.env
./mvnw
```
_Voilà_ - your full stack app is using Auth0! Open your favorite browser to `http://localhost:8080` and sign-in.
## Links
This example uses the following open source libraries:
- [JHipster](https://www.jhipster.tech)
- [Spring Boot](https://spring.io/projects/spring-boot)
- [Spring Security](https://spring.io/projects/spring-security)
## Help
Please post any questions as comments on the [blog post][blog].
## License
Apache 2.0, see [LICENSE](LICENSE).
[blog]: https://auth0.com/blog/full-stack-java-with-react-spring-boot-and-jhipster/
[blog-spring-native]: https://developer.okta.com/blog/2022/03/03/spring-native-jhipster
[blog-github-graalvm]: https://developer.okta.com/blog/2022/04/22/github-actions-graalvm
| 1 |
gpcodervn/Design-Pattern-Tutorial | This project includes all examples about 23 design patterns of GoF with some other patterns in software development | null | # Design-Pattern-Tutorial
This project includes all examples about 23 design patterns of GoF with some other patterns in software development
## What are Design Patterns?
A design pattern is a technique in object-oriented programming. It is quite important, and any programmer who wants to be good should know it. Design patterns are used frequently in OOP languages. They provide you with "design templates": solutions for common, frequently encountered problems in programming. You may be able to come up with your own way to solve a problem you face, but it may not be optimal. Design patterns help you solve problems in the most optimal way, providing you with proven solutions in OOP programming.
Design patterns are not tied to any specific language. They can be implemented in most programming languages, for example Java, C#, even JavaScript, or any other programming language.
Each pattern describes a problem that occurs over and over again, and presents the core of the solution to that problem, in a way that you can use it a million times over without having to think.
— Christopher Alexander —
## Classification of Design Patterns
In 1994, the four authors Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides published a book titled Design Patterns – Elements of Reusable Object-Oriented Software, which was the origin of the design pattern concept in software development.
The four authors above are widely known as the Gang of Four.
In their view, design patterns are mainly based on the following principles of object-oriented design.
Program to an interface, not an implementation.
Favor object composition over inheritance.
The design pattern catalogue currently has 23 patterns, defined in the book "Design Patterns: Elements of Reusable Object-Oriented Software" and divided into 3 groups:
Creational Patterns (5 patterns): Factory Method, Abstract Factory, Builder, Prototype, Singleton. Design patterns of this type provide a solution for creating objects while hiding the creation logic, instead of creating objects directly with the new operator. This makes the program more flexible in deciding which objects need to be created in a given situation.
Structural Patterns (7 patterns): Adapter, Bridge, Composite, Decorator, Facade, Flyweight and Proxy. Design patterns of this type concern classes and the components of objects. They are used to establish and define relationships between objects.
Behavioral Patterns (11 patterns): Interpreter, Template Method, Chain of Responsibility, Command, Iterator, Mediator, Memento, Observer, State, Strategy and Visitor. This group is used to implement the behaviour of objects and the communication between objects.
### The Creational group
- Java Design Pattern Tutorial – Singleton
- Java Design Pattern Tutorial – Factory Method
- Java Design Pattern Tutorial – Abstract Factory
- Java Design Pattern Tutorial – Builder
- Java Design Pattern Tutorial – Prototype
- Java Design Pattern Tutorial – Object Pool
### The Structural group
- Java Design Pattern Tutorial – Adapter
- Java Design Pattern Tutorial – Bridge
- Java Design Pattern Tutorial – Composite
- Java Design Pattern Tutorial – Decorator
- Java Design Pattern Tutorial – Facade
- Java Design Pattern Tutorial – Flyweight
- Java Design Pattern Tutorial – Proxy
### The Behavioral group
- Java Design Pattern Tutorial – Chain of Responsibility
- Java Design Pattern Tutorial – Command
- Java Design Pattern Tutorial – Interpreter
- Java Design Pattern Tutorial – Iterator
- Java Design Pattern Tutorial – Mediator
- Java Design Pattern Tutorial – Memento
- Java Design Pattern Tutorial – Observer
- Java Design Pattern Tutorial – State
- Java Design Pattern Tutorial – Strategy
- Java Design Pattern Tutorial – Template Method
- Java Design Pattern Tutorial – Visitor
Refer: https://gpcoder.com/4164-gioi-thieu-design-patterns/
| 0 |
asanchezyu/RetrofitSoapSample | Retrofit with SOAP services Example. ( WSDL, SOAP, converter, SimpleXml,...) | null | null | 0 |