| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:37:51.966283
| 2024-06-11T06:05:34
|
2345473725
|
{
"authors": [
"iwasakims",
"sekikn"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3511",
"repo": "apache/bigtop",
"url": "https://github.com/apache/bigtop/pull/1280"
}
|
gharchive/pull-request
|
BIGTOP-4126. Remove obsolete test resources for Hue.
https://issues.apache.org/jira/browse/BIGTOP-4126
Obsolete test resources for Hue cause a compilation error. This breaks mvn deploy in the release process.
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy:[40,23] 1. ERROR in /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy (at line 40)
Shell sh = new Shell();
^
Groovy:expecting ']', found ';' @ line 40, column 25.
[ERROR] /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy:[48,1] 2. ERROR in /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy (at line 48)
sh.exec("curl -m 60 --data '${creds}' ${loginURL}");
^
Groovy:unexpected token: sh @ line 48, column 5.
...
+1, I made sure that the mvn commands described in the release process, run before deploying artifacts, succeed with this PR. Thanks @iwasakims.
cherry-picked this to master branch.
|
2025-04-01T06:37:52.037348
| 2021-02-03T22:48:12
|
800762691
|
{
"authors": [
"BuildStream-Migration-Bot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3512",
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1038"
}
|
gharchive/issue
|
Move source cache to proto based service
See original issue on GitLab
In GitLab by [Gitlab user @raoul].hidalgocharman on May 29, 2019, 13:48
Background
The source cache should move towards using a protocol buffer based format similar to the new artifact service architecture (overall plan described in #909). The reference service should be kept for now to allow it to be used with older buildstream clients.
Task description
[x] Design new source protos. This should be similar to the artifact protos and may include metadata such as the sources' provenance data.
[x] Implement new SourceCacheService that uses this.
[x] Use new SourceCacheService in SourceCache for pulling and pushing sources.
Acceptance Criteria
All old source cache tests pass using new source protos.
In GitLab by [Gitlab user @raoul].hidalgocharman on May 29, 2019, 13:48
mentioned in merge request !1362
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 19, 2019, 12:11
mentioned in merge request !1410
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 21, 2019, 14:13
assigned to [Gitlab user @raoul].hidalgocharman
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 8bbea0cc9d1c07dbcf7bba563082c31694d92220
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 35214d0d87c759788a1be22d5e9b77ddecf9806e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 8e264f093816a63f77e52d58332e8b32d713eb92
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 55bf188cb681ca0683dda3c82202ee21369cb47d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 12:16
marked the task Design new source protos. This should be similar to artifact protos and may include metadata such as the sources provenance data. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 12:16
marked the task Implement new SourceCacheService that uses this. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit c9f8581531a0d583ef3cb21519aefb9e1ba66bd4
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit ef712320ebd39f2259b90ce7cd3f7d9ffdb28a5c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit 544c02367784e2e401760bf171d8b61ae8d3959a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 8c9df8706969b5346b404c1581299fbcf4e3676e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 4533989e94d68b80ea1dfd7a59dd7177417f91e7
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 0d6a1e9fa891e27883689bd65a8f7b46e991162c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit <PHONE_NUMBER>1de343b499b597c751667fc4d6a763
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 12874bd9a67492a6d58d5aadd9ed8d5b737a1e0a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 15:50
mentioned in commit 4dc530b338a4e434bd315d5f2d4102b563f49c77
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 4032c81019e8dd69ca541e87f99f484ffde3db52
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 357ab273a01711a592b0ca8de99b501e2c1cb62e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 78d567dddea3c28f4e737db00a092ae8a82c405e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 1581ff9cd6f6d8a2ce32d61446de300d85bae4e8
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit a8e04529d62f5c7c9894c45b83330823d4eb6bba
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:42
mentioned in commit 6c540804827d18deb562afb3da19f4039a25c81a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:42
mentioned in commit 0397b66e9bda61035253fa718eff59538a7d211d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:48
mentioned in merge request !1435
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:55
mentioned in commit c76881a12a4dc10194ea3999f6d7d76181c34244
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:55
marked the task Use new SourceCacheService in SourceCache for pulling and pushing sources. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:30
mentioned in commit 5ce4cc7499d8eab81f0fa5f9f3e93249d443605f
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:30
mentioned in commit 6b42c2da0cc248ef0d0819c892feba6d10ca19f1
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit abea6490cb6036b1fa9c898879e5e20956007393
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit caba7d3a59ed24f1edf86366b8a7cf3675b9c1fc
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit b5e84029b07e95b281dfcc0da73f274e1b55b1c3
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit 47ace4076c34f43b9c92ebc01ef3f354b501f5d2
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit 4e2c3b28d89ded00f62e226ee5a3db8a96530c73
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit f9ffbfe9af2bb8563af54b4b7ccc98f5292da030
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit ac7a02fbef5fea5f8ace2357111e3e99794fb515
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit b80260e694c9ac88bd45b79319e10e8c82f7c84f
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit 3f3a29fed5e3dac3634208bf7b724314cfb2896d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 9497b02f9eae40e9dca97c1b325cc6ca6a52fe81
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 2a58f66d09600bdc79d6d25d771274bcc1544434
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 6b13a80e5f4e9792768b04e45266b74944daa643
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit eee1512447206b28d62b7bce38e86d96e6c19a3c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit e41517f305c23d5efce24ae4a91326fc81d93a32
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 82de872b3ec4eba90e8c022476bf760f350b81b2
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 5fc9ba356b967100260bbfc2aedfa19a193fbf86
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 2259e0453aedfa35f1a7539c19a629302fa9fd82
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 3cd1e50a38595da169e10f9c7f6b5912cb745264
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 14be3c3070f06f3d8194daa8474c035b5776fbc3
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 3063d1df3dba3ceab7ffa9fc7227cf4102d13c3d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 97858e21acf8497543e94f5338b035b12ff1ca1c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit ec012e5cd21ce3e0840ad54cd0becd2cb21eb889
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 0d69662b5d4711098d0bbf6f15b5c0f7da8098a7
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 5dc76fdb7833fa71d89c2f56a4be098cecf7495b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit a4e8907c60bffcb51c4057327a9300938df87956
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 5158c9fce3dae4f88f20a1e6bf48a0d426ac2036
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit c12587ce8e1ef32f1b66ff0399d5b76650f57c90
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit ab06c00ded23866ec88ce9132fe3000c9bfe823b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit d06bacd228313b943224abb92435915b3a177d23
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 46418bf79a740fb6c906962e52c243243d2849bb
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 8d7afd7514f701870ed8333b822a2bb2682ef0de
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 1fb4716df7caae754501717ad519f5f1e5268a7c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:47
mentioned in commit 2d48dc85f2814d6d2b04e48917c09a415cdbd540
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 15:44
mentioned in commit b15b32376f6abe1735233bd5d85c42fd1ad5a703
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit bb2cf18be0aef7d6e394a0c6ff6d83eac737c60b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit 6b6e04ddb1c03f683e3e5591f057207d1303e6b6
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit d493682609f8f96ed127f4083bad42fa2fabb250
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit c20eac1e7ac80e1dc36b23b04affacfbe2cca338
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit d61e21448953942ce90d457dad7189c0dda61bc7
In GitLab by [Gitlab user @marge-bot123] on Jul 8, 2019, 12:17
closed via merge request !1435
In GitLab by [Gitlab user @marge-bot123] on Jul 8, 2019, 12:17
mentioned in commit cf0516ba92fb4a220ae0086c411314cec4974df5
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit d1be05c771bdbe054f01eaaea977d4b20e401354
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit dc689098d164510eb22820776f9c8cf1cf1fd642
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit 38f7bffd87ffd901e316a7966fefc9d189658e19
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit c02c0170058f36eff722bb341edbed3feae18145
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit 107bae99f159d22cbedd155c5db9255782fad3c0
|
2025-04-01T06:37:52.045863
| 2021-02-03T23:54:26
|
800801003
|
{
"authors": [
"BuildStream-Migration-Bot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3513",
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1062"
}
|
gharchive/issue
|
Add API for composing lists in yaml nodes
See original issue on GitLab
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:14
Background
As part of #1061 we need to perform composition between lists like so:
# bar.bst
kind: autotools
dependencies:
  (>):
  - foo-lib.bst

# project.conf
elements:
  autotools:
    dependencies:
    - autotools.bst
These lists are expected to be composed into this:
- autotools.bst
- foo-lib.bst
This means we need to do something along the lines of:
default_deps = _yaml.node_get(project_conf, list, "dependencies")
deps = _yaml.node_get(element, list, "dependencies")
_yaml.composite(default_deps, deps)
But this will not work, because _yaml.composite() will only deal with _yaml.Nodes.
Task description
We should add some form of an API to allow doing this - I can see either of these things working:
_yaml.composite() learns to deal with plain lists - the problem here is that we'd struggle to provide provenance data, and the behavior of composite(list, list) isn't obvious (although !1601 will probably make that "safe append", at least for dependencies).
_yaml.get_node() returns proper _yaml.Nodes when type=_yaml.Node for lists - this feels a bit more reasonable, but I'm likely overlooking something :)
Acceptance Criteria
We should be able to compose lists without creating naughty synthetic nodes.
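For illustration, a minimal plain-Python sketch of the composition semantics being asked for; the function and the '(>)' handling below are a model of the behavior, not BuildStream's _yaml API:

def composite_list(base, overlay):
    # Compose `overlay` onto `base`, honoring a '(>)' append directive.
    # '(>)' means: append these items to whatever the base provides.
    if isinstance(overlay, dict) and "(>)" in overlay:
        return list(base) + list(overlay["(>)"])
    # A plain list simply replaces the base outright.
    return list(overlay)

default_deps = ["autotools.bst"]          # from project.conf
element_deps = {"(>)": ["foo-lib.bst"]}   # from bar.bst
print(composite_list(default_deps, element_deps))
# -> ['autotools.bst', 'foo-lib.bst']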
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:15
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:15
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:16
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:18
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 12:02
changed the description
|
2025-04-01T06:37:52.059776
| 2021-01-01T05:18:57
|
777202019
|
{
"authors": [
"BuildStream-Migration-Bot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3514",
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1351"
}
|
gharchive/issue
|
Checkout needs sandbox even when run with --no-integration
See original issue on GitLab
In GitLab by [Gitlab user @willsalmon] on Jul 7, 2020, 11:27
Summary
Checkout needs a sandbox even when run with --no-integration.
This means that if a project was built in CI on a different arch or with RE, and the RE bots are busy, artefacts cannot be checked out even if they are just a tar/docker-image/single file.
We could do it with an approach like https://gitlab.com/BuildStream/buildstream/-/merge_requests/1983 or we could tweak how the sandbox is invoked so that it does not need a real sandbox.
In GitLab by [Gitlab user @cs-shadow] on Jul 14, 2020, 22:02
I'm not sure if I follow. This is already possible, and precisely the reason we have the dummy sandbox (SandboxDummy). This should automatically get created when buildbox-run isn't available for whatever reason.
If this is not happening, this would be a bug in BuildStream. If so, please share more details about it, and how to reproduce it.
In GitLab by [Gitlab user @willsalmon] on Jul 15, 2020, 10:51
For this case buildbox-run is available but does not support the target arch.
The element was made using remote execution or CI with a different arch. By setting the sandbox arch you can get the right cache key and pull down the artefact. But when you try to check it out, the sandbox does not create a dummy but complains that the arch is not supported for a full sandbox.
This makes sense when --no-integration is not used but does not make sense if --no-integration is specified.
In GitLab by [Gitlab user @cs-shadow] on Jul 15, 2020, 11:03
Thanks. I think the fix in that case should be to ensure that we do use the dummy sandbox in such code paths, rather than circumventing that in places other than the sandbox module. This keeps all related logic in one place and avoids unnecessary forks in the code.
In GitLab by [Gitlab user @cs-shadow] on Jul 15, 2020, 11:04
mentioned in merge request !1983
In GitLab by [Gitlab user @willsalmon] on Jul 15, 2020, 13:49
The issue that I had was that, AFAICT, we pick which sandbox to use for the run really early, at the platform level, so you lose the chance to fall back at the point where we actually invoke it. I'm not sure if that's true but that's what it looked like when I looked. Don't know if [Gitlab user @cs-shadow] or [Gitlab user @juergbi] can point me in the right direction for how to fix this sensibly.
|
2025-04-01T06:37:52.067836
| 2021-11-25T16:22:44
|
1063769227
|
{
"authors": [
"httpsOmkar",
"mgubaidullin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3515",
"repo": "apache/camel-karavan",
"url": "https://github.com/apache/camel-karavan/pull/129"
}
|
gharchive/pull-request
|
removed deprecated version of copy to clipboard method to clipboard.writeText
Thanks for the copyToClipboard refactoring.
Could you please remove the changes to the render() method from this PR. They make the code cleaner but they are not related to the PR (removing the deprecated version of the copy to clipboard method). It is better to make them in a separate one.
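For context, a minimal sketch of the refactoring this PR describes: the deprecated document.execCommand('copy') path replaced by the asynchronous Clipboard API (the helper below is illustrative, not Karavan's actual code):

async function copyToClipboard(text: string): Promise<void> {
  if (navigator.clipboard) {
    // Modern path: secure contexts expose the async Clipboard API.
    await navigator.clipboard.writeText(text);
    return;
  }
  // Legacy fallback: select a temporary textarea and copy from it.
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.select();
  document.execCommand('copy'); // deprecated, kept only as a fallback
  document.body.removeChild(textarea);
}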
|
2025-04-01T06:37:52.069261
| 2020-10-21T20:33:29
|
726843647
|
{
"authors": [
"JiriOndrusek",
"ppalaga"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3516",
"repo": "apache/camel-quarkus",
"url": "https://github.com/apache/camel-quarkus/issues/1941"
}
|
gharchive/issue
|
Camel Avro RPC component native support
This one is for the camel-avro-rpc component. We already have the avro dataformat in native: https://github.com/apache/camel-quarkus/issues/782
Please, assign to me.
|
2025-04-01T06:37:52.070487
| 2023-08-06T08:02:36
|
1838100706
|
{
"authors": [
"orpiske"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3517",
"repo": "apache/camel",
"url": "https://github.com/apache/camel/pull/11012"
}
|
gharchive/pull-request
|
(chores) camel-optaplanner: reduce the time spent trying to solve the problems
Signed-off-by: Otavio R. Piske <EMAIL_ADDRESS>
Before: Total time: 14:04 min
After: Total time: 01:14 min
|
2025-04-01T06:37:52.076687
| 2017-07-03T14:22:55
|
240187817
|
{
"authors": [
"CarbonDataQA",
"chenliang613",
"jackylk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3518",
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/1129"
}
|
gharchive/pull-request
|
[CARBONDATA-1259] CompareTest improvement
changes:
- check query result details, report an error if the result is not the same
- add support for comparison with ORC files
- add decimal data type
Build Success with Spark 1.6, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/294/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/2878/
retest this please
Build Success with Spark 1.6, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/314/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/2900/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/2902/
Build Success with Spark 1.6, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/316/
Build Success with Spark 1.6, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/317/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/2903/
LGTM
|
2025-04-01T06:37:52.109662
| 2017-12-19T10:01:28
|
283167863
|
{
"authors": [
"CarbonDataQA",
"ravipesala",
"xuchuanyin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3519",
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/1678"
}
|
gharchive/pull-request
|
[CARBONDATA-1903] Fix code issues in carbondata
Be sure to do all of the following checklist to help us incorporate
your contribution quickly and easily:
[x] Any interfaces changed?
No
[x] Any backward compatibility impacted?
No
[x] Document update required?
No
[x] Testing done
Please provide details on
- Whether new unit test cases have been added or why no new tests are required?
No, only fixed code related issues.
- How it is tested? Please attach test report.
Tested in local machine
- Is it a performance related change? Please attach the performance test report.
No, only fixed code related issues.
- Any additional information to help reviewers in testing this change.
No
[x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
Not related
Modification
Remove unused code like FileUtil
Fix/Optimize code issues in carbondata
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2126/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/902/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2412/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2437/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2153/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/924/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2162/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/933/
retest this please
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2241/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1017/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2499/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2242/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1018/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2246/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1023/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2505/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2251/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1028/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2507/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2514/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2262/
retest this please
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1040/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2285/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1069/
retest this please
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2291/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1075/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1078/
retest this please
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2307/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1091/
retest this please
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1110/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2330/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2352/
retest this please
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1135/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2355/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1137/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1139/
SDV Build Fail , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/2568/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1143/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2373/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2404/
Build Failed with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1181/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/2435/
Build Success with Spark 2.2.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/1211/
|
2025-04-01T06:37:52.128057
| 2018-05-06T07:51:06
|
320571195
|
{
"authors": [
"CarbonDataQA",
"rahulforallp",
"ravipesala",
"sgururajshetty",
"xubo245"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3520",
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/2274"
}
|
gharchive/pull-request
|
[CARBONDATA-2440] doc updated to set the property for SDK user
[x] Any interfaces changed? NO
[x] Any backward compatibility impacted? No
[x] Document update required? ==> Yes
[x] Testing done ==> All UT and SDV success reports are enough.
[x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/4745/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/5667/
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4507/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/4746/
retest this please
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/5671/
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4511/
LGTM
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4686/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/5842/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/4887/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/5883/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/4918/
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4731/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/4919/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/5885/
@xubo245 review comments resolved .
please import CarbonProperties before using it.
@xubo245 import done
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/5016/
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4844/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/6003/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/6014/
Build Failed with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4856/
Build Success with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/6022/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/5029/
Build Success with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4863/
SDV Build Success , Please check CI http://<IP_ADDRESS>:8080/job/ApacheSDVTests/5036/
Build Failed with Spark 2.2.1, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder/4874/
Build Failed with Spark 2.1.0, Please check CI http://<IP_ADDRESS>:8080/job/ApacheCarbonPRBuilder1/6033/
retest this please
LGTM
|
2025-04-01T06:37:52.146927
| 2016-01-20T13:57:59
|
127687962
|
{
"authors": [
"MJJoyce",
"OCWJenkins",
"Omkar20895",
"lewismc"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3521",
"repo": "apache/climate",
"url": "https://github.com/apache/climate/pull/276"
}
|
gharchive/pull-request
|
CLIMATE-379 - Allows dataset customisation
This patch allows the user to customise the name of the local dataset that is being uploaded in the web-app.
Can one of the admins verify this patch?
hey @lewismc @MJJoyce, please have a look at this patch.
I am +1, any comments @MJJoyce ?
:+1:
Please commit @Omkar20895
I haven't been granted write access to the repository yet, right?
No you have, the canonical source is here:
https://git-wip-us.apache.org/repos/asf/climate.git
The GitHub code is merely a mirror and is not the canonical source, as it is not hosted at the Apache Software Foundation. The link above is hosted at the ASF and is therefore canonical.
Thanks
Lewis
|
2025-04-01T06:37:52.157650
| 2020-10-28T10:55:38
|
731333422
|
{
"authors": [
"blueorangutan",
"rhtyd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3522",
"repo": "apache/cloudstack-primate",
"url": "https://github.com/apache/cloudstack-primate/pull/841"
}
|
gharchive/pull-request
|
packaging: enforce new min. CloudStack version 4.15 starting GA/1.0
There are many changes, including API changes in upstream master/4.15
which make it challenging to maintain backward compatibility of Primate
with older versions of CloudStack. Therefore we need to ensure that the
rpm and deb Primate pkgs require CloudStack 4.15 as minimum version.
This would still leave some flexibility for advanced users of archive
builds (which adds risks that some features don't work with 4.14 or
older versions).
Following this we need to update https://github.com/apache/cloudstack-documentation/pull/150 as well wrt the min. version Primate will support and installation instructions. By default, we'll ship primate with every cloudstack repo so users won't need to setup the repo themselves (the other way is for cloudstack-management to install the repo config automatically).
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build primate packages. I'll keep you posted as I make progress.
Packaging result: :heavy_check_mark:centos :heavy_check_mark:debian :heavy_check_mark:archive.
QA: http://primate-qa.cloudstack.cloud:8080/client/pr/841 (JID-3629)
|
2025-04-01T06:37:52.167024
| 2019-07-02T17:20:43
|
463346204
|
{
"authors": [
"DaanHoogland",
"mhp0rtal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3523",
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/3459"
}
|
gharchive/issue
|
Misuses of cryptographic APIs
Hi
The following lines have cryptographic API misuses:
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
File name => utils/src/main/java/com/cloud/utils/nio/Link.java: Line number => 371: API name => KeyStore: Second parameter should never be of type java.lang.String.
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 30: API name => MessageDigest: Unexpected call to method <java.security.MessageDigest: byte[] digest()> on object of type java.security.MessageDigest. Expect a call to one of the following methods: <java.security.MessageDigest: void update(byte[])>, <java.security.MessageDigest: void update(byte[],int,int)>, <java.security.MessageDigest: byte[] digest(byte[])>, <java.security.MessageDigest: void update(java.nio.ByteBuffer)>, <java.security.MessageDigest: void update(byte)>
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 37: API name => MessageDigest:
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 52: API name => MessageDigest: Unexpected call to method reset on object of type java.security.MessageDigest. Expect a call to one of the following methods: digest, update
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 81: API name => Cipher:
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 67: API name => MessageDigest: First parameter (with value "MD5") should be any of {SHA-256, SHA-384, SHA-512}
File name => utils/src/main/java/com/cloud/utils/EncryptionUtil.java: Line number => 63: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/SwiftUtil.java: Line number => 234: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/SwiftUtil.java: Line number => 234: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 72: API name => KeyStore: Unexpected call to method store on object of type java.security.KeyStore. Expect a call to one of the following methods: getKey, getEntry
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 117: API name => KeyStore: Unexpected call to method store on object of type java.security.KeyStore. Expect a call to one of the following methods: getKey, getEntry
File name => utils/src/main/java/com/cloud/utils/EncryptionUtil.java: Line number => 63: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 79: API name => Cipher: First parameter (with value "RSA/None/PKCS1Padding") should be any of RSA/{Empty String, ECB}
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 99: API name => KeyStore: Second parameter should never be of type java.lang.String.
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 81: API name => Cipher:
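The most concrete of these findings is the hard-coded "MD5" in SSHKeysHelper; a minimal, self-contained Java sketch of the kind of change that finding calls for (illustrative only, not CloudStack's actual code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class FingerprintExample {
    // Hash a public key with SHA-256 instead of the flagged "MD5".
    static String fingerprint(byte[] publicKey) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(publicKey); // update + digest in one call
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(fingerprint("ssh-rsa AAAA...".getBytes(StandardCharsets.UTF_8)));
    }
}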
@mhp0rtal can you give exploits for any of those issues?
Can you also please give a version on which these apply, as the first three do not show code matching the message:
1: File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
line 71 is an empty line
2: File name => utils/src/main/java/com/cloud/utils/nio/Link.java: Line number => 371: API name => KeyStore:Second parameter should never be of type java.lang.String.
call on line 371 has only one parameter
3: File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 30: API name => MessageDigest:Unexpected call to method <java.security.MessageDigest: byte[] digest()> on object of type java.security.MessageDigest. Expect a call to one of the following methods <java.security.MessageDigest: void update(byte[])>,<java.security.MessageDigest: void update(byte[],int,int)>,<java.security.MessageDigest: byte[] digest(byte[])>,<java.security.MessageDigest: void update(java.nio.ByteBuffer)>,<java.security.MessageDigest: void update(byte)>
line 30 is empty
I stopped checking there but I propose you debug your tool of investigation.
I'm closing this issue but if you feel it is still valid, please add needed extra info and reopen.
|
2025-04-01T06:37:52.173459
| 2022-04-22T03:07:09
|
1211758533
|
{
"authors": [
"CKrieger2020",
"DaanHoogland",
"nxsbi",
"rohityadavcloud"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3524",
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/6298"
}
|
gharchive/issue
|
Place VMs on the same network as the CloudStack management server
ISSUE TYPE
Other
COMPONENT NAME
UI, IP
CLOUDSTACK VERSION
<IP_ADDRESS>
SUMMARY
I am trying to find a method of placing newly deployed VMs on the same network as my management server, but so far it only allows me to create guest VMs behind a NAT with a private IP address. I need to create multiple VMs hosting services that can be routed to an external DNS. My concern is that the NAT will make routing to these machines impossible.
I have tried changing the network offering on my guest network from "offering for isolated network with NAT service enabled" to "offering for isolated network with no NAT service enabled", but it gives me this error:
"can't upgrade from network offering edad787d-baa0-4ca8-a67b-bd288adc6d37 to 052cee24-272b-44f1-a4df-4b5da7a30744; check logs for more information"
I would choose this during network deployment, but I do not have the option to choose "offering for isolated network with no NAT service enabled" until after deployment.
I would appreciate any advice you can give to help guide me from here as I am new to cloudstack
EXPECTED RESULTS
Place all deployed VMs on the same IP range as the management console itself.
ACTUAL RESULTS
Forced to deploy VMs behind NAT per the default configuration
@CKrieger2020 this is certainly a new use case. After a quick read I think you might want to investigate going to IPv6.
Another possible way to go is to deploy in a shared network.
@CKrieger2020 - have you tried to create a new Network or create an L2 network (assuming your CS instance is on a separate L2 network from what is available in CS for users to choose from)?
In my home lab I have been able to create instances using L2 networks which talk to my other computers on the home network.
It's possible to do this by deploying VMs on an L2 network whose vlan is vlan://untagged; essentially you'll be on the same network as your host/mgmt network.
@CKrieger2020 can you check the above suggestion and re-open the issue to discuss more.
To discuss further questions, you can raise them on the users@ and/or dev@ ML https://cloudstack.apache.org/mailing-lists.html
|
2025-04-01T06:37:52.189814
| 2023-02-07T21:53:57
|
1575078517
|
{
"authors": [
"Atiqul-Islam",
"DaanHoogland",
"Pearl1594",
"weizhouapache"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3525",
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/7178"
}
|
gharchive/issue
|
Issue while setting up CloudStack Advanced Zone with security groups
ISSUE TYPE
Bug Report
COMPONENT NAME
Advanced Zone with Security Groups setup
CLOUDSTACK VERSION
4.17.2
CONFIGURATION
Zone:
IPV4 DNS: <IP_ADDRESS>
Internal DNS: <IP_ADDRESS>
Pysical Network 1:
Management Traffic: cloudbr0
Guest Traffic: cloudbr1
Pod:
Gateway: <IP_ADDRESS>
Netmask: <IP_ADDRESS>
IP Range: <IP_ADDRESS> to <IP_ADDRESS>
Guest Traffic:
Gateway: <IP_ADDRESS>
Netmask: <IP_ADDRESS>
IP Range <IP_ADDRESS> to <IP_ADDRESS>
Host:
IP: <IP_ADDRESS>
User: root
Password: password
Tag: h1
OS / ENVIRONMENT
Ubuntu 22.04 server with two network bridge cloudbr0 and cloudbr1
SUMMARY
Apache CloudStack v4.17.2
I am trying to set up a CloudStack Advanced Zone with security groups.
I have two network bridges cloudbr0 (<IP_ADDRESS>/16) and cloudbr1 (<IP_ADDRESS>/16). I am using cloudbr0 for Management Network and cloudbr1 for the Guest Network.
However, the zone creation keeps failing when adding the host, with the error message - failed to add host as resource already exists as LibvirtComputingResource.
For some reason it seems like CloudStack is trying to add the same host twice.
STEPS TO REPRODUCE
Configuring a CloudStack Advanced Zone with security groups on Ubuntu 22.04 server
EXPECTED RESULTS
Successfully create an Advanced Zone with security groups.
ACTUAL RESULTS
Host setup fails with the following error:
Could not add host at [http://<IP_ADDRESS>] with zone [1], pod [1] and cluster [1] due to: [ can't setup agent, due to com.cloud.utils.exception.CloudRuntimeException: Skipping host <IP_ADDRESS> because 2f02300b-d9bf-3229-acb8-21054c500f47 is already in the database for resource 2f02300b-d9bf-3229-acb8-21054c500f47-LibvirtComputingResource with ID 86f5dcd2-9d6e-444e-b0df-e0dcb1509699 - Skipping host <IP_ADDRESS> because 2f02300b-d9bf-3229-acb8-21054c500f47 is already in the database for resource 2f02300b-d9bf-3229-acb8-21054c500f47-LibvirtComputingResource with ID 86f5dcd2-9d6e-444e-b0df-e0dcb1509699].
Was there an issue during zone creation after the host addition step, maybe while setting up the stores? I had faced a similar issue in the past, wherein, if the zone creation fails at any point and we are prompted to rectify the issue and then restart the zone creation workflow, it attempts to re-add the host. Can you check the database to see if an entry already exists in the host table and, if it does, delete it and restart the zone creation process.
@Atiqul-Islam
Can you upload the full management server log ?
@Pearl1594 I am installing CloudStack on a fresh Ubuntu Server, there was no host created before the zone creation.
@weizhouapache
Management Server Log
@Atiqul-Islam
it looks like you are using one server as both the management server and the cloudstack agent.
From the log, the host was added twice and of course it failed at the 2nd attempt. Everything else looks good.
@weizhouapache
Why was the host added twice is it because I am using the same server as both management and agent?
I didn't do manually anything to create a host, I just started cloudstack and tried setting up the advanced zone with security group. Thats where I configured the host. During the process of creating the zone it seemed like cloudstack was trying to add the same zone twice.
@Atiqul-Islam
I just wanted to confirm your configurations.
I will try to reproduce the issue.
@weizhouapache
Really appreciate the help.
We are testing out CloudStack as it is part of our stack for our next generation of software and systems. So far been stuck in that roadblock for a while. Any help is greatly appreciated.
@Atiqul-Islam no problem.
It seems like a minor issue for you, I think.
The zone has been created successfully, and the system VMs are Running now that you enabled the zone, right?
@weizhouapache
System VMs are up and running after I enabled the Zone. However, it seems like the zone network might not be properly configured. Some component of the Zone could be in a bad state, as there was no Virtual Router created for the guest network.
I am also getting the following error when I am trying to add an Ubuntu 20.04 iso.
Unable to resolve releases.ubuntu.com
I did check the bare metal system running the management server and the host can ping releases.ubuntu.com
@Atiqul-Islam I have checked your log. It seems everything went smoothly, except for the extra step of adding the host again when everything was already done. I think you can ignore the error.
For the DNS issue, you need to log into the Secondary Storage VM (a.k.a. SSVM) and check whether the domain can be resolved. You might need to update the DNS and internal DNS in the zone configuration.
@weizhouapache
I am unable to get into the SSVM console. When I try to get into the console using the GUI, it cannot load the page. In addition, where do I find the login credentials for the SSVM?
Also shouldn't there be a virtual router created as well for the gateway of the guest network?
@Atiqul-Islam sorry for the late response.
You can ssh into system VMs and virtual routers from the KVM host:
ssh -p 3922 -i /root/.ssh/id_rsa.cloud 169.254.x.x
or "virsh console s-xx-VM"
The credential is root/password.
The virtual router will be created when a VM is created, I think.
@Atiqul-Islam I am closing this issue. Please reopen or create a new one if you think that is invalid.
|
2025-04-01T06:37:52.196677
| 2024-02-29T19:53:45
|
2161994075
|
{
"authors": [
"BryanMLima",
"DaanHoogland",
"winterhazel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3526",
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/8730"
}
|
gharchive/issue
|
[UI] Storage menu not showing even with API permissions
ISSUE TYPE
Bug Report
COMPONENT NAME
UI
CLOUDSTACK VERSION
Main
SUMMARY
As reported in https://github.com/apache/cloudstack/pull/8713#issuecomment-1969866705, the Storage submenu in the sidebar is not displayed to users when they do not have permission to the API listVolumesMetrics. However, roles can have permissions to other APIs, such as listBackups and listSnapshots, in which case the submenu should be displayed. This scenario is probably not exclusive to the Storage menu.
STEPS TO REPRODUCE
Create a role with permission to allow the APIs listBackups and listSnapshots and deny the API listVolumesMetrics. The UI dashboard will not show the Storage menu in the sidebar.
EXPECTED RESULTS
The UI should show the Storage submenu along with its Backups and Snapshots submenus.
ACTUAL RESULTS
The submenu Storage is not displayed, even though the role has permission to list snapshots and backups.
@DaanHoogland @winterhazel @sureshanaparti, I am probably overthinking this scenario, however, the permission property in the JS component (storage.js) works like an AND operator. Maybe it could function like an OR operator as well; what do you guys think?
This was exactly my thought when I learned of it, @BryanMLima. Let's try.
Hey everyone,
I've implemented @BryanMLima's idea in #8978. The filtering still works as an AND operator for routes that correspond to a page; however, I have changed it so that routes corresponding to sections are shown if the user has access to any of their pages.
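A minimal sketch of that AND-for-pages, OR-for-sections rule (TypeScript with illustrative names, not the actual UI code):

interface Route {
  name: string;
  permission: string[]; // APIs this page itself requires
  children?: Route[];   // pages under a section
}

function canSee(route: Route, allowed: Set<string>): boolean {
  if (route.children && route.children.length > 0) {
    // Section: visible if any child page is visible (OR).
    return route.children.some(child => canSee(child, allowed));
  }
  // Page: visible only if every required API is allowed (AND).
  return route.permission.every(api => allowed.has(api));
}

// Role from the reproduction steps: snapshots/backups allowed, listVolumesMetrics denied.
const storage: Route = {
  name: 'storage',
  permission: ['listVolumesMetrics'],
  children: [
    { name: 'snapshots', permission: ['listSnapshots'] },
    { name: 'backups', permission: ['listBackups'] },
  ],
};
console.log(canSee(storage, new Set(['listSnapshots', 'listBackups']))); // true -> Storage menu shown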
Closing this as this was addressed in PR #8978.
|
2025-04-01T06:37:52.372583
| 2017-04-14T11:27:52
|
221792509
|
{
"authors": [
"blueorangutan",
"borisstoyanov",
"cloudmonger",
"ravening",
"remibergsma",
"rhtyd",
"ustcweizhou",
"weizhouapache",
"wido"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3527",
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/pull/2046"
}
|
gharchive/pull-request
|
CLOUDSTACK-7958: Add configuration for limit to CIDRs for Admin API calls
The global setting 'management.admin.cidr' is set to <IP_ADDRESS>/0,::/0
by default to preserve the current behavior and thus allow API calls
for Admin accounts from all IPv4 and IPv6 subnets.
Users can set it to a comma-separated list of IPv4/IPv6 subnets to
restrict API calls for Admin accounts to certain parts of their network(s).
This is to improve security. Should an attacker steal the Access/Secret key
of an Admin account, he/she still needs to be in a subnet from which Admin
accounts are allowed to perform API calls.
This is a good security measure for APIs which are connected to the public internet.
This PR also includes a commit to clean up and improve NetUtils.
No existing methods have been altered. That has been verified by adding additional Unit Tests for this.
@DaanHoogland: I improved the logging as you suggested/requested.
A TRACE for every request and WARN when a request is denied. Tried this locally:
2017-04-14 15:45:58,901 WARN [c.c.a.ApiServlet] (catalina-exec-17:ctx-5955fcab ctx-c572b42e) (logid:7b251506) Request by accountId 2 was denied since <IP_ADDRESS> does not match <IP_ADDRESS>/8,::1/128
In this case only localhost (IPv4/IPv6) is allowed to perform requests.
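For illustration, a minimal, self-contained sketch of the kind of IPv4 CIDR check this setting implies (plain Java with example addresses; not the NetUtils code this PR actually touches):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrCheckExample {
    // True if the IPv4 address falls inside the a.b.c.d/len subnet.
    static boolean inCidr(String address, String cidr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        return (toInt(address) & mask) == (toInt(parts[0]) & mask);
    }

    static int toInt(String address) throws UnknownHostException {
        byte[] b = InetAddress.getByName(address).getAddress();
        return ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16)
             | ((b[2] & 0xff) << 8) | (b[3] & 0xff);
    }

    public static void main(String[] args) throws UnknownHostException {
        String allowed = "127.0.0.0/8"; // only loopback allowed, as in the log above
        System.out.println(inCidr("127.0.0.1", allowed));   // true  -> request accepted
        System.out.println(inCidr("192.168.1.5", allowed)); // false -> denied with a WARN
    }
}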
@PaulAngus This is what we talked about in Prague. Mind taking a look?
Nice @wido, will give it a go soon!
Thanks @remibergsma. Added the return statement.
Thinking about it. Does a 401 sound good? Or should we maybe use a 403 Forbidden?
@wido I'm playing a bit with it, also because currently CloudMonkey displays a clear message but the UI will simply freeze and do nothing. Only when you look at the underlying API calls will you see why it isn't working. I'm testing to see whether the http status code makes any difference or whether we need to handle it in the UI somewhere.
We should also think about the order: first alert on the CIDR check and then check user/pass (as it is now) or the other way around.
Also noticed it doesn't work with spaces before/after commas so we might want to add a .replaceAll("\\s","") or similar.
@wido very nice feature.
It would also be nice if this could be added as a domain or account setting.
@remibergsma: Yes, I am aware of that UI problem. Not sure how to fix it.
After thinking about it, I went for '403 Forbidden' and also stripped whitespace from the config key.
I think the order is OK right now, unless there are other opinions?
@ustcweizhou: That is a lot more difficult than a global value, isn't it? Since you have to query it every time.
Or should configkey allow this very easily?
@wido I think we need to do the check on two places, also on the login() method. That makes sure we don't issue a session key when user/pass are OK but we still reject it based on the CIDR. In my testing that also fixes the UI issue. There are two ways to authenticate so that makes sense I'd say. It'll then also work with authentication plugins, such as LDAP/AD.
Switching the scope of the config is easy, but indeed you'll be querying it on every API call. That does have the benefit you don't need to restart the mgt server when you make a change, but the downside is also obvious. One way to resolve it, is to make a global config setting that switches the feature on/off (and that config is loaded at bootstrap) so you can opt-in for the more heavy checks.
I'll play a bit more with it tonight.
@remibergsma: I pulled your code, thanks! It now works per account @ustcweizhou
How does this look?
Good one @remibergsma. I reverted that piece and also the baremetal refusal of users.
There were some conflicts after changes were made in master. Fixed those.
As this one is merge ready, can it go into master now?
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-839
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-844
@wido can you check/fix build failures?
@rhtyd: I went through the logs, but there is nothing that I can find that points to this PR.
Everything seems to pass.
@wido it's a build failure issue, please see travis (job#1) failure:
[[1;34mINFO[m] Compiling 45 source files to /home/travis/build/apache/cloudstack/vmware-base/target/classes
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] COMPILATION ERROR :
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] /home/travis/build/apache/cloudstack/vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java:[1456,20] error: cannot find symbol
[[1;34mINFO[m] 1 error
...
[[1;34mINFO[m] Apache CloudStack VMware Base ...................... [1;31mFAILURE[m [ 4.785 s]
Fixed @rhtyd
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:52 min (Wall Clock)
[INFO] Finished at: 2017-07-24T14:07:11+02:00
[INFO] Final Memory: 118M/1939M
[INFO] ------------------------------------------------------------------------
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-856
@wido still failing, please do a clean rebuild, check Travis?
ACS CI BVT Run
Summary:
Build Number 1006
Hypervisor xenserver
NetworkType Advanced
Passed=102
Failed=11
Skipped=12
Link to logs Folder (search by build_no): https://www.dropbox.com/sh/r2si930m8xxzavs/AAAzNrnoF1fC3auFrvsKo_8-a?dl=0
Failed tests:
test_scale_vm.py
ContextSuite context=TestScaleVm>:setup Failing since 33 runs
test_loadbalance.py
test_01_create_lb_rule_src_nat Failed
test_02_create_lb_rule_non_nat Failed
test_non_contigiousvlan.py
test_extendPhysicalNetworkVlan Failed
test_deploy_vm_iso.py
test_deploy_vm_from_iso Failing since 63 runs
test_volumes.py
test_06_download_detached_volume Failing since 3 runs
test_vm_life_cycle.py
test_10_attachAndDetach_iso Failing since 63 runs
test_routers_network_ops.py
test_01_isolate_network_FW_PF_default_routes_egress_true Failing since 96 runs
test_02_isolate_network_FW_PF_default_routes_egress_false Failing since 96 runs
test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failing since 94 runs
test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failing since 94 runs
Skipped tests:
test_vm_nic_adapter_vmxnet3
test_01_verify_libvirt
test_02_verify_libvirt_after_restart
test_03_verify_libvirt_attach_disk
test_04_verify_guest_lspci
test_05_change_vm_ostype_restart
test_06_verify_guest_lspci_again
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm
Passed test suits:
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_vm_snapshots.py
test_over_provisioning.py
test_global_settings.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_login.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_metrics_api.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_disk_offerings.py
I tried again @rhtyd
wido@wido-laptop:~/repos/cloudstack$ mvn -T2C clean install
This works! All unit tests also pass. No build failure either:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 06:30 min (Wall Clock)
[INFO] Finished at: 2017-07-25T09:27:50+02:00
[INFO] Final Memory: 119M/1556M
[INFO] ------------------------------------------------------------------------
@wido make sure you're not breaking noredist builds as well, see Travis and try to get it green;
[[1;34mINFO[m] [1m--- [0;32mmaven-compiler-plugin:3.2:compile[m [1m(default-compile)[m @ [36mcloud-plugin-hypervisor-vmware[0;1m ---[m
[[1;34mINFO[m] Changes detected - recompiling the module!
[[1;34mINFO[m] Compiling 51 source files to /home/travis/build/apache/cloudstack/plugins/hypervisors/vmware/target/classes
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] COMPILATION ERROR :
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] /home/travis/build/apache/cloudstack/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[534,35] error: cannot find symbol
[[1;34mINFO[m] 1 error
Aha @rhtyd!
I've fixed that
ACS CI BVT Run
Summary:
Build Number 1014
Hypervisor xenserver
NetworkType Advanced
Passed=102
Failed=9
Skipped=12
Link to logs Folder (search by build_no): https://www.dropbox.com/sh/r2si930m8xxzavs/AAAzNrnoF1fC3auFrvsKo_8-a?dl=0
Failed tests:
test_vm_snapshots.py
test_change_service_offering_for_vm_with_snapshots Failed
test_deploy_vm_iso.py
test_deploy_vm_from_iso Failing since 69 runs
test_list_ids_parameter.py
ContextSuite context=TestListIdsParams>:setup Failing since 45 runs
test_volumes.py
test_06_download_detached_volume Failed
test_vm_life_cycle.py
test_10_attachAndDetach_iso Failing since 69 runs
test_routers_network_ops.py
test_01_isolate_network_FW_PF_default_routes_egress_true Failing since 102 runs
test_02_isolate_network_FW_PF_default_routes_egress_false Failing since 102 runs
test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failing since 100 runs
test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failing since 100 runs
Skipped tests:
test_vm_nic_adapter_vmxnet3
test_01_verify_libvirt
test_02_verify_libvirt_after_restart
test_03_verify_libvirt_attach_disk
test_04_verify_guest_lspci
test_05_change_vm_ostype_restart
test_06_verify_guest_lspci_again
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm
Passed test suits:
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_metrics_api.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_disk_offerings.py
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-867
@blueorangutan test
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
I've tried several times now, but the environment fails to come up; perhaps @borisstoyanov can take over testing...
21:38:41 FAILED - RETRYING: TASK: get wait for state of system VMs to be Running (1 retries left).
21:38:47 fatal: [pr2046-t1277-kvm-centos7-mgmt1]: FAILED! => {"attempts": 200, "changed": true, "cmd": "cloudmonkey list systemvms | jq '.systemvm[]| select(.systemvmtype==\"consoleproxy\")|.state'", "delta": "0:00:00.186658", "end": "2017-07-28 20:38:19.617884", "failed": true, "rc": 0, "start": "2017-07-28 20:38:19.431226", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
21:38:47
21:38:47 cmd: cloudmonkey list systemvms | jq '.systemvm[]| select(.systemvmtype=="consoleproxy")|.state'
21:38:47
21:38:47 start: 2017-07-28 20:38:19.431226
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1018
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. Do you think there are changes that might affect systemvm agents/patching? /cc @borisstoyanov
FYI, we haven't seen this in other master PRs, so could be related to these changes...
@wido can you fix the conflicts?
@rhtyd Fixed!
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1427
I will fix the conflicts asap
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1500
very good !
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1505
@wido can you resolve the conflicts, thanks.
I will in a few days.
Can we merge this one afterwards? I keep resolving conflicts which happen since other code is merged ;)
Sure, ping me @wido
@wido sorry to bring up the old pr but how can I configure under this account level?
I logged as in a regular user with "Domain admin" role but i dont see any settings tab under the account
@ravening please have a look at #4339
|
2025-04-01T06:37:52.377609
| 2018-11-08T17:39:40
|
378843692
|
{
"authors": [
"drajakumar",
"garydgregory"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3528",
"repo": "apache/commons-collections",
"url": "https://github.com/apache/commons-collections/pull/57"
}
|
gharchive/pull-request
|
COLLECTIONS-701 SetUniqueList.add() crashes due to infinite recursion…
… when it receives itself
Hi @drajakumar ,
I'm not sure this patch makes sense. Take a look at org.apache.commons.collections4.list.Collections701Test: For ArrayList and HashSet, adding a collection to itself is fine.
In this patch, the argument is not only silently ignored, but the behavior is not even documented. Whatever we do, we really need to document anything that deviates from the standard JRE List contract.
IMO, the fix should be such that a SetUniqueList behaves like an ArrayList and HashSet: it just works.
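Spelled out, the expectation is roughly this (a sketch; Collections701Test itself is not reproduced here):
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.collections4.list.SetUniqueList;

// ArrayList happily accepts itself as an element, so SetUniqueList should too,
// instead of recursing until a StackOverflowError
List<Object> plain = new ArrayList<>();
plain.add(plain);                                               // works

SetUniqueList<Object> unique = SetUniqueList.setUniqueList(new ArrayList<>());
unique.add(unique);                                             // the reported crash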
You did not have to close the PR, I was hoping you would provide a more complete solution ;-)
@garydgregory can you kindly check the new fix, thank you!
@garydgregory can you kindly check the new fix, thank you!
|
2025-04-01T06:37:52.390974
| 2016-10-22T11:58:46
|
184626396
|
{
"authors": [
"PascalSchumacher",
"Xaerxess",
"coveralls",
"kinow"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3529",
"repo": "apache/commons-lang",
"url": "https://github.com/apache/commons-lang/pull/199"
}
|
gharchive/pull-request
|
LANG-1258: Add ArrayUtils#toStringArray(Object[]) method
patch supplied by IG
Coverage increased (+0.04%) to 93.581% when pulling bfc8805ac898cb4d7a8ca45ab1d5106bc2b63342 on PascalSchumacher:ArrayUtils#toStringArray into 91d6bd74fa358fdc8d7cb7681c76c509fd9a8e7d on apache:master.
Coverage increased (+0.04%) to 93.581% when pulling bfc8805ac898cb4d7a8ca45ab1d5106bc2b63342 on PascalSchumacher:ArrayUtils#toStringArray into 91d6bd74fa358fdc8d7cb7681c76c509fd9a8e7d on apache:master.
Patch looks good. I wonder if the inline if will raise a warning in checkstyle. Other than that, +1 :D
I added null-element handling in a PR to this branch, because in the current state an NPE would be thrown if the array contains null.
I have updated the pull request with @Xaerxess changes.
Coverage increased (+0.01%) to 93.551% when pulling bf978e7ec7a1bb3d1d671331383619d81bae95ed on PascalSchumacher:ArrayUtils#toStringArray into 91d6bd74fa358fdc8d7cb7681c76c509fd9a8e7d on apache:master.
Sure, consistency in the API is key. But stringIfNull is used when the array itself is null, not an element, so it wouldn't be consistent with the API.
@Xaerxess Sorry for the confusion, I wasn't talking about toString, but about the toPrimitive methods e.g.
https://github.com/apache/commons-lang/blob/96c8ea2fb3719e2f6e3d7a4d7b46718f26515a86/src/main/java/org/apache/commons/lang3/ArrayUtils.java#L4497
As these also convert the type of the array I think they are the most similar existing methods (compared to the new method in this pull request).
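For reference, a rough sketch of the two variants being discussed (the code that was merged may differ in details):
// basic variant: convert every element with toString()
public static String[] toStringArray(final Object[] array) {
    if (array == null) {
        return null;
    } else if (array.length == 0) {
        return ArrayUtils.EMPTY_STRING_ARRAY;
    }
    final String[] result = new String[array.length];
    for (int i = 0; i < array.length; i++) {
        result[i] = array[i].toString();   // throws NPE on null elements, hence the overload below
    }
    return result;
}

// overload with null-element handling, analogous to the toPrimitive(..., valueForNull) methods
public static String[] toStringArray(final Object[] array, final String valueForNullElements) {
    if (array == null) {
        return null;
    } else if (array.length == 0) {
        return ArrayUtils.EMPTY_STRING_ARRAY;
    }
    final String[] result = new String[array.length];
    for (int i = 0; i < array.length; i++) {
        final Object object = array[i];
        result[i] = (object == null ? valueForNullElements : object.toString());
    }
    return result;
}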
Coverage increased (+0.008%) to 93.563% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.03%) to 93.588% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.03%) to 93.588% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.008%) to 93.563% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.04%) to 93.595% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Merged: https://github.com/apache/commons-lang/commit/8d95ae41975a2307501aa0f4a7eba296c59edce9 and https://github.com/apache/commons-lang/commit/8d601ab71228f7c3dff950540e7ee6e4043e9053
Thanks everybody!
|
2025-04-01T06:37:52.425978
| 2018-10-16T02:39:15
|
370416045
|
{
"authors": [
"adadgio",
"albertleao",
"breautek"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3530",
"repo": "apache/cordova-plugin-wkwebview-engine",
"url": "https://github.com/apache/cordova-plugin-wkwebview-engine/issues/60"
}
|
gharchive/issue
|
WKWebview Won't Load
I receive the following error when I have wk webview installed:
Failed to load resource: file:///var/containers/Bundle/Application/39F29CF9-4A2F-4A20-A296-0CAE487974B3/my.app/www/plugins/cordova-plugin-wkwebview-engine/src/www/ios/ios-wkwebview.js The requested URL was not found on this server.
Failed to load resource: file:///var/containers/Bundle/Application/39F29CF9-4A2F-4A20-A296-0CAE487974B3/my.app/www/plugins/cordova-plugin-wkwebview-engine/src/www/ios/ios-wkwebview-exec.js The requested URL was not found on this server.
Error: Module cordova-plugin-wkwebview-engine.ios-wkwebview-exec does not exist.
I have the following in my config.xml
<feature name="CDVWKWebViewEngine">
<param name="ios-package" value="CDVWKWebViewEngine" />
</feature>
<preference name="CordovaWebViewEngine" value="CDVWKWebViewEngine" />
I noticed when I run cordova build ios, the 2 js files in the plugins folder get created, then deleted. Not sure why. Every other plugin is working fine.
I had the same issue and solved it by installing: cordova plugin add cordova-plugin-wkwebview-engine. Apparently this plugin needs to be installed for the fix to work.
Don't forget to add ALL of this in your config:
..........[other code]
Closing, the solution is posted above.
|
2025-04-01T06:37:52.430336
| 2021-09-14T21:40:25
|
996464124
|
{
"authors": [
"corentin-begne",
"timbru31"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3531",
"repo": "apache/cordova-windows",
"url": "https://github.com/apache/cordova-windows/issues/394"
}
|
gharchive/issue
|
Microsoft Surface hybrid touch events
Bug Report
Problem
Touch events are not fired correctly
What is expected to happen?
Moving camera in webgl using touch on screen
What does actually happen?
Not any touch events are fired, only mouse and mouse down, few mouse move then nothing, no mouse up.
If I touch without moving mouse up is fired.
Information
Same site on web work correctly with the same device.
Command or Code
Environment, Platform, Device
Microsoft Surface hybrid computer
Version information
Cordova 10.0.0
cordova-windows 8.0.0-dev
Windows 10
Checklist
[x] I searched for existing GitHub issues
[x] I updated all Cordova tooling to most recent version
[x] I included all the necessary information above
We are archiving this repository following Apache Cordova's Deprecation Policy. We will not continue to work on this repository. Therefore all issues and pull requests are being closed. Thanks for your contribution.
|
2025-04-01T06:37:52.440773
| 2019-06-26T15:04:39
|
461026797
|
{
"authors": [
"modul",
"rnewson",
"wohali"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3532",
"repo": "apache/couchdb",
"url": "https://github.com/apache/couchdb/issues/2061"
}
|
gharchive/issue
|
Changes feed with selector filter doesnt show deleted docs
Description
Deleted documents are not included in the changes feed when filtering with _selector. It does work while filtering by doc_ids, which is not applicable in my scenario.
Steps to Reproduce
# Create document
curl -H 'Content-Type: application/json' -X PUT "localhost:5984/storage/123456" -d '{"name": "Document Name"}'
# Start changes feed
curl -H 'Content-Type: application/json' -X POST "localhost:5984/storage/_changes?feed=longpoll&filter=_selector&since=now" -d '{"selector": {"name": "Document Name"}}'
# Remove document (in another shell, of course)
curl -H 'Content-Type: application/json' -X DELETE "localhost:5984/storage/123456?rev=<LAST-REV>"
Expected Behaviour
The request to _changes should terminate with a results object for that particular document, including deleted=true.
Your Environment
CouchDB running from (official) docker image:
{
"couchdb": "Welcome",
"features": [
"pluggable-storage-engines",
"scheduler"
],
"git_sha": "c298091a4",
"uuid": "54b4e44520a6fc9996a7eb635783fa96",
"vendor": {
"name": "The Apache Software Foundation"
},
"version": "2.3.1"
}
In your third step you remove the 'name' property of the document, and thus it no longer matches your selector.
The DELETE method performs a PUT preserving only _id, _rev and _deleted (set to true).
Instead do a PUT, keeping the "name" field and adding "_deleted":true
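For example, a tombstone that keeps the selector matching could look like this (same placeholder values as in the reproduction above; a sketch, not tested against the reported setup):
# Delete by PUTting a tombstone that still carries the field the selector matches on
curl -H 'Content-Type: application/json' -X PUT "localhost:5984/storage/123456?rev=<LAST-REV>" -d '{"name": "Document Name", "_deleted": true}'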
Hi there,
This is not a CouchDB bug. GitHub is for actual CouchDB bugs only.
If you are looking for general support with using CouchDB, please try one of these other options:
The user mailing list. Signup instructions are here
The Slack/IRC chat room. Joining instructions are here
Well, that does make sense. I would suggest mentioning this in the docs even though it is self-explanatory.
Thanks for the quick response and sorry for opening this as a bug.
|
2025-04-01T06:37:52.443162
| 2017-07-18T19:01:43
|
243818280
|
{
"authors": [
"iilyak",
"nickva"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3533",
"repo": "apache/couchdb",
"url": "https://github.com/apache/couchdb/pull/694"
}
|
gharchive/pull-request
|
Remove get_details replicator job gen_server call
This was used from a test only and it wasn't reliable. Because the replicator
job delays initialization, the State would be either #rep_state{} or #rep{}. If the
replication job hasn't finished initializing, then the state would be #rep{} and a
call like get_details, which matches the state against #rep_state{}, would fail with
a badmatch error.
As seen in issue #686
So remove get_details call and let the test rely on task polling as all other
tests do.
@nickva: While on it could you fix a compile warning on line 100?
couchdb/src/couch_replicator/test/couch_replicator_compact_tests.erl:100: Warning: variable 'RepId' is unused
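The usual fix for that warning is to drop the binding or mark it as intentionally unused; a minimal sketch (the real binding on that line may look different, element/2 just stands in for whatever produced RepId):
-module(example).
-export([demo/1]).

%% prefixing the name with an underscore tells the compiler the value is intentionally ignored
demo(Rep) ->
    _RepId = element(1, Rep),
    ok.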
|
2025-04-01T06:37:52.444916
| 2024-08-28T23:43:27
|
2493152583
|
{
"authors": [
"andygrove",
"psvri"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3534",
"repo": "apache/datafusion-comet",
"url": "https://github.com/apache/datafusion-comet/issues/882"
}
|
gharchive/issue
|
Implement native parsing of CSV files
What is the problem the feature request solves?
We can probably accelerate reading of CSV files by continuing to use JVM Spark to read bytes from disk but then parse the CSV in native code.
Describe the potential solution
No response
Additional context
No response
Hello.
I would like to start working on this.
|
2025-04-01T06:37:52.449539
| 2024-05-17T19:32:52
|
2303436574
|
{
"authors": [
"alamb",
"jayzhan211"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3535",
"repo": "apache/datafusion",
"url": "https://github.com/apache/datafusion/issues/10565"
}
|
gharchive/issue
|
Using Expr::field panics
Describe the bug
After https://github.com/apache/datafusion/pull/10375 was merged, Expr::field panics if you use it (as we did in influxdb): DataFusion panics when you try to execute the expression
To Reproduce
Try to evaluate an expression like col("props").field("a")
Here is a full reproducer in the sql_integration test:
(venv) andrewlamb@Andrews-MacBook-Pro:~/Software/datafusion$ git diff
diff --git a/datafusion/core/tests/expr_api/mod.rs b/datafusion/core/tests/expr_api/mod.rs
index d7e839824..d4141a836 100644
--- a/datafusion/core/tests/expr_api/mod.rs
+++ b/datafusion/core/tests/expr_api/mod.rs
@@ -58,6 +58,25 @@ fn test_eq_with_coercion() {
);
}
+
+#[test]
+fn test_expr_field() {
+ // currently panics with
+ // Internal("NamedStructField should be rewritten in OperatorToFunction")
+ evaluate_expr_test(
+ col("props").field("a"),
+ vec![
+ "+------------+",
+ "| expr |",
+ "+------------+",
+ "| 2021-02-01 |",
+ "| 2021-02-02 |",
+ "| 2021-02-03 |",
+ "+------------+",
+ ],
+ );
+}
+
Expected behavior
Ideally the test should pass and Expr::field would continue to work
We could also potentially remove Expr::field but I think that would be less user friendly
Additional context
I am pretty sure Expr::field is widely used, so I think we should continue to support it if possible
I wonder if we could have Expr::field call get_field if the core functions feature was enabled and panic otherwise 🤔
That would be easy to use for most people and backwards compatible
I can fix this along with #10374
Thank you @jayzhan211 🙏 -- I will review it now
|
2025-04-01T06:37:52.453684
| 2022-04-26T07:08:02
|
1215501455
|
{
"authors": [
"XuXuClassMate",
"labbomb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3536",
"repo": "apache/dolphinscheduler",
"url": "https://github.com/apache/dolphinscheduler/issues/9780"
}
|
gharchive/issue
|
[Bug] [Next-UI][V1.0.0-Beta] Local log file cache is not cleared
Search before asking
[X] I had searched in the issues and found no similar issues.
What happened
View the first component log, when viewing the second component log, it is displayed as the first component log
What you expected to happen
After viewing the first component's log, viewing the second component's log should display the second component's log
How to reproduce
View the first component log, when viewing the second component log, it is displayed as the first component log
Anything else
No response
Version
3.0.0-alpha
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
i will fix it
|
2025-04-01T06:37:52.466567
| 2023-12-02T03:59:12
|
2021842446
|
{
"authors": [
"codecov-commenter",
"davidzollo",
"fuchanghai",
"xujiaqiang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3537",
"repo": "apache/dolphinscheduler",
"url": "https://github.com/apache/dolphinscheduler/pull/15260"
}
|
gharchive/pull-request
|
[Feature-15260][dolphinscheduler-datasource-hana] add hana related dependencies
Purpose of the pull request
Brief change log
Verify this pull request
This pull request is code cleanup without any test coverage.
(or)
This pull request is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(or)
If your pull request contains an incompatible change, you should also add it to docs/docs/en/guide/upgrede/incompatible.md
[Feature][dolphinscheduler-datasource-hana] add hana related dependencies #15259
same with #15127
This feature was developed by me, but not in the PR I submitted, so I may be clearer on how to modify it. If you need to merge #15127, please merge it into the dev branch as soon as possible. My #15146 requires this modification.
cc @caishunfeng
test use in the wrong place
Codecov Report
Attention: 4 lines in your changes are missing coverage. Please review.
Comparison is base (0c470ff) 38.19% compared to head (ad073a9) 38.16%.
:exclamation: Current head ad073a9 differs from pull request most recent head 98670be. Consider uploading reports for the commit 98670be to get more accurate results
Files
Patch %
Lines
.../plugin/datasource/hana/HanaDataSourceChannel.java
0.00%
2 Missing :warning:
...in/datasource/hana/HanaPooledDataSourceClient.java
0.00%
2 Missing :warning:
Additional details and impacted files
@@ Coverage Diff @@
## dev #15260 +/- ##
============================================
- Coverage 38.19% 38.16% -0.03%
+ Complexity 4673 4671 -2
============================================
Files 1278 1285 +7
Lines 44482 44463 -19
Branches 4783 4770 -13
============================================
- Hits 16988 16968 -20
- Misses 25632 25633 +1
Partials 1862 1862
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:37:52.470088
| 2022-08-26T09:59:58
|
1352043753
|
{
"authors": [
"GDragon97",
"stalary"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3538",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/issues/12110"
}
|
gharchive/issue
|
[Feature] Mysql subtable with binlog model merge one table
Search before asking
[X] I had searched in the issues and found no similar issues.
Description
I want to merge MySQL sub-tables into one table and be able to update it by a unique key
Use case
No response
Related issues
No response
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
You can use the Aggregate key model with replace_if_not_null to implement a multi-table merge.
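For illustration, a sketch of such a table (table and column names are made up, not from this issue):
-- rows from several MySQL sub-tables collapse on the same key, and REPLACE_IF_NOT_NULL
-- lets later loads update only the columns they actually carry
CREATE TABLE merged_orders (
    order_id   BIGINT,
    user_name  VARCHAR(64)   REPLACE_IF_NOT_NULL NULL,
    amount     DECIMAL(18,2) REPLACE_IF_NOT_NULL NULL,
    updated_at DATETIME      REPLACE_IF_NOT_NULL NULL
)
AGGREGATE KEY(order_id)
DISTRIBUTED BY HASH(order_id) BUCKETS 8
PROPERTIES ("replication_num" = "1");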
|
2025-04-01T06:37:52.474279
| 2023-01-11T09:33:37
|
1528711666
|
{
"authors": [
"hello-stephen",
"zbtzbtzbt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3539",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/15824"
}
|
gharchive/pull-request
|
Enhancement delete skiplist for duplicate table in memtable
Proposed changes
Delete the skiplist for duplicate-key tables during data load.
Problem summary
When loading data, rows are inserted into a skiplist in the memtable; for each row the skiplist has to find the insert position and keep the data ordered, which costs O(log(n)) per row. I don't think that is a good approach.
There are two ways to insert data into the memtable:
insert into a skiplist: O(log(n)) per row; when a flush is needed, the data is already sorted
insert into a block (append only): O(1) per row; when a flush is needed, sort once
This PR implements way 2 for duplicate-key tables: append data to a block and sort it when a flush is needed (see the sketch below).
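A toy illustration of way 2 (plain C++, not Doris' actual MemTable code; Row and RowKeyLess are placeholders for the engine's row type and key comparator):
#include <algorithm>
#include <vector>

// way 2: append rows in O(1) and sort once at flush time, instead of paying
// O(log n) per row to keep a skiplist ordered on the write path
struct AppendOnlyMemTable {
    std::vector<Row> rows;                                   // Row is a placeholder row type

    void insert(const Row& r) { rows.push_back(r); }         // no ordering work on the hot path

    std::vector<Row> flush() {
        std::sort(rows.begin(), rows.end(), RowKeyLess{});   // one O(n log n) sort before writing the segment
        return std::move(rows);
    }
};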
I don't know how good or bad it turned out.
This PR needs to use the community's testing framework,
so I am submitting it first to see the result.
TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 36.64 seconds
load time: 517 seconds
storage size:<PHONE_NUMBER>2 Bytes
https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/tmp/20230112052016_clickbench_pr_78264.html
|
2025-04-01T06:37:52.480694
| 2023-02-06T13:57:48
|
1572616887
|
{
"authors": [
"BiteTheDDDDt",
"hello-stephen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3540",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/16451"
}
|
gharchive/pull-request
|
Chore make compile option work on C objects && some refactor of cmakelists
Proposed changes
make compile option work on C objects && some refactor of cmakelists
Problem summary
Describe your changes.
Checklist(Required)
Does it affect the original behavior:
[ ] Yes
[ ] No
[ ] I don't know
Has unit tests been added:
[ ] Yes
[ ] No
[ ] No Need
Has document been added or modified:
[ ] Yes
[ ] No
[ ] No Need
Does it need to update dependencies:
[ ] Yes
[ ] No
Are there any changes that cannot be rolled back:
[ ] Yes (If Yes, please explain WHY)
[ ] No
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 33.6 seconds
load time: 467 seconds
storage size:<PHONE_NUMBER>4 Bytes
https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/tmp/20230206145530_clickbench_pr_90979.html
|
2025-04-01T06:37:52.482554
| 2023-06-06T11:53:42
|
1743703918
|
{
"authors": [
"sohardforaname"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3541",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/20516"
}
|
gharchive/pull-request
|
Fix add sync after insert into table for nereids_p0
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
|
2025-04-01T06:37:52.484245
| 2023-07-18T08:44:14
|
1809460622
|
{
"authors": [
"BiteTheDDDDt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3542",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/21923"
}
|
gharchive/pull-request
|
Bug fix ScannerContext is done make query failed
Proposed changes
fix the issue where "ScannerContext is done" makes the query fail
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
|
2025-04-01T06:37:52.485871
| 2023-10-26T12:04:53
|
1963399845
|
{
"authors": [
"airborne12"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3543",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/25972"
}
|
gharchive/pull-request
|
[Fix](inverted index) reorder ConjunctionQuery deconstruct order
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
|
2025-04-01T06:37:52.489109
| 2023-11-05T11:56:42
|
1977755754
|
{
"authors": [
"Bears0haunt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3544",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/26433"
}
|
gharchive/pull-request
|
cases Add backup & restore test case of dup table
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
|
2025-04-01T06:37:52.491014
| 2023-12-20T08:01:14
|
2050044875
|
{
"authors": [
"dataroaring",
"platoneko"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3545",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/28719"
}
|
gharchive/pull-request
|
enhance Reduce log in tablet meta
Proposed changes
Reduce log in tablet meta
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
run buildall
|
2025-04-01T06:37:52.493660
| 2024-07-31T05:23:35
|
2439155617
|
{
"authors": [
"GoGoWen",
"morrySnow"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3546",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/38565"
}
|
gharchive/pull-request
|
Fix query not hit partition in original planner
Proposed changes
Fix the issue that a query does not hit the partition in the original planner, which causes severe performance degradation.
This issue seems to have been introduced by https://github.com/apache/doris/pull/21533
run buildall
run p0
run buildall
run p0
run buildall
run buildall
could u submit a pr to master?
|
2025-04-01T06:37:52.495901
| 2024-08-12T12:21:37
|
2460848782
|
{
"authors": [
"qidaye"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3547",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/39248"
}
|
gharchive/pull-request
|
[fix](inverted index)Add exception check when write bkd index
Proposed changes
We are not catching the exception when adding values in bkd_writer; if an error is thrown, the BE will run into a segmentation fault.
So we add the exception check here to avoid the coredump.
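The guard is roughly of this shape (illustrative only; the real writer call and error type differ):
// wrap the write so an index error surfaces as a Status instead of an uncaught
// exception that takes the whole BE process down
try {
    _bkd_writer->add(offset, value);   // placeholder call; the actual writer API differs
} catch (const std::exception& e) {
    return Status::InternalError("write bkd index failed: {}", e.what());
}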
run buildall
|
2025-04-01T06:37:52.497232
| 2024-09-12T08:09:23
|
2521667620
|
{
"authors": [
"w41ter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3548",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/40734"
}
|
gharchive/pull-request
|
fix Fix atomic restore with existing replicas
create replicas with base tablet and schema hash
ignore storage medium when creating replicas with the base tablet
The atomic restore is introduced in #40353.
run buildall
|
2025-04-01T06:37:52.499155
| 2024-11-04T03:39:32
|
2631749548
|
{
"authors": [
"TangSiyang2001"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3549",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/43181"
}
|
gharchive/pull-request
|
fix Remove "UNIQUE KEY (k1)" case for test_dynamic_partition_mod_distribution_key (#41002)
Proposed changes
pick: #41002
Remove "UNIQUE KEY (k1)" case, because for unique table hash column must be key column, but for that historical bugs, this case will fail if adding k2 unique key.
Seperate a p0 suite from docker suite because docker suite will not be triggered in community doris p0 CI.
run buildall
|
2025-04-01T06:37:52.507561
| 2024-12-26T04:05:47
|
2759246453
|
{
"authors": [
"Jibing-Li",
"Thearas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3550",
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/45996"
}
|
gharchive/pull-request
|
fix Fix bug of PR 44905
What problem does this PR solve?
The lock object in PasswordPolicy is written to disk; when a user upgrades from an older version, this lock will be null, which makes it impossible to connect to Doris.
The code causing this issue, in PasswordPolicy:
@SerializedName(value = "lock")
private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
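One way to avoid the null lock (a sketch only, not necessarily the merged fix) is to stop persisting it and re-create it lazily after deserialization:
// sketch: keep the lock out of the persisted image and tolerate old images where it is null
private transient ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

private ReentrantReadWriteLock lock() {
    if (lock == null) {   // object was read from an older metadata image
        lock = new ReentrantReadWriteLock();
    }
    return lock;
}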
Issue Number: close #xxx
Related PR: #xxx
Problem Summary:
Release note
None
Check List (For Author)
Test
[ ] Regression test
[ ] Unit Test
[ ] Manual test (add detailed scripts or steps below)
[x] No need to test or manual test. Explain why:
[ ] This is a refactor/code format and no logic has been changed.
[ ] Previous test can cover this change.
[ ] No code files have been changed.
[ ] Other reason
Behavior changed:
[ ] No.
[ ] Yes.
Does this need documentation?
[ ] No.
[ ] Yes.
Check List (For Reviewer who merge this PR)
[ ] Confirm the release note
[ ] Confirm test cases
[ ] Confirm document
[ ] Add branch pick label
Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.
Please clearly describe your PR:
What problem was fixed (it's best to include specific error reporting information). How it was fixed.
Which behaviors were modified. What was the previous behavior, what is it now, why was it modified, and what possible impacts might there be.
What features were added. Why was this function added?
Which code was refactored and why was this part of the code refactored?
Which functions were optimized and what is the difference before and after the optimization?
run buildall
|
2025-04-01T06:37:52.508976
| 2020-01-03T13:21:14
|
544990773
|
{
"authors": [
"arina-ielchiieva",
"vvysotskyi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3551",
"repo": "apache/drill",
"url": "https://github.com/apache/drill/pull/1950"
}
|
gharchive/pull-request
|
DRILL-7461: Do not pass ClassNotFoundException into SQLNonTransientConnectionException cause when checking that Drill is run in embedded mode
For the problem description, please refer to DRILL-7461.
+1, LGTM
|
2025-04-01T06:37:52.511980
| 2022-06-18T01:42:03
|
1275630033
|
{
"authors": [
"abhishekagarwal87",
"paul-rogers"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3552",
"repo": "apache/druid",
"url": "https://github.com/apache/druid/issues/12676"
}
|
gharchive/issue
|
License check fails intermitently
This build and this build both failed during the license check, when writing reports. The build phase completes properly. However, the step to write reports repeatedly fails.
This PR is pretty simple: it is just a repackaging of code that was in a previous PR that passed the license checks. It seems that the license check is just flaky.
As a side note: it seems impossible to run the rat check on a development machine? Rat complains about hundreds of Git, Eclipse and derived files when run with the same Maven command line as reported in the build.
Errors from the build:
[INFO] Building druid-s3-extensions 0.24.0-SNAPSHOT [18/69]
[INFO] --------------------------------[ jar ]---------------------------------
...
[INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, generated: 0, approved: 48 licenses.
...
[INFO] druid-s3-extensions ................................ SUCCESS [ 0.073 s]
[INFO] druid-kinesis-indexing-service ..................... SUCCESS [ 0.059 s]
[INFO] druid-azure-extensions ............................. SUCCESS [ 0.651 s]
[INFO] druid-google-extensions ............................ SUCCESS [ 0.075 s]
[INFO] druid-hdfs-storage ................................. SUCCESS [ 0.074 s]
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
...
Generating dependency reports
Generating report for /home/travis/build/apache/druid
Generating report for /home/travis/build/apache/druid/extensions-core/s3-extensions
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/s3-extensions
Generating report for /home/travis/build/apache/druid/extensions-core/testing-tools
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/testing-tools
Generating report for /home/travis/build/apache/druid/extensions-core/kinesis-indexing-service
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/kinesis-indexing-service
Generating report for /home/travis/build/apache/druid/extensions-core/mysql-metadata-storage
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/mysql-metadata-storage
Generating report for /home/travis/build/apache/druid/extensions-core/simple-client-sslcontext
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/simple-client-sslcontext
Rat has to be run on a clean checkout, yes. I have seen this one before. One thing we would like to do first is to surface this encountered error; it will need some changes in the python file.
|
2025-04-01T06:37:52.532398
| 2021-04-19T23:10:43
|
862081183
|
{
"authors": [
"a2l007",
"capistrant",
"kfaraz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3553",
"repo": "apache/druid",
"url": "https://github.com/apache/druid/pull/11135"
}
|
gharchive/pull-request
|
Create dynamic config that can limit number of non-primary replicants loaded per coordination cycle
Start Release Notes
Adds new Dynamic Coordinator Config maxNonPrimaryReplicantsToLoad with default value of Integer.MAX_VALUE. This configuration can be used to set a hard upper limit on the number of non-primary replicants that will be loaded in a single Druid Coordinator execution cycle. The default value will mimic the behavior that exists today.
Example usage: If you set this configuration to 1000, the Coordinator duty RunRules will load a maximum of 1000 non-primary replicants in each RunRules execution. Meaning if you ingested 2000 segments with a replication factor of 2, the coordinator would load 2000 primary replicants and 1000 non-primary replicants on the first RunRules execution. Then the next RunRules execution, the last 1000 non-primary replicants will be loaded.
End Release Notes
Description
Add a new dynamic configuration to the coordinator that gives an operator the power to set a hard limit for the number of non-primary segment replicas that are loaded during a single execution of RunRules#run. This allows the operator to limit the amount of work loading non-primary replicas that RunRules will execute in a single run. An example of a reason to use a non-default value for this new config is if the operator wants to ensure that major events such as historical service(s) leaving the cluster, large ingestion jobs, etc. do not cause an abnormally long RunRules execution compared to the cluster's baseline runtime.
Example
cluster: 3 historical servers in _default_tier with 18k segments per server. Each segment belongs to a datasource that has the load rule "LoadForever 2 replicas on _default_tier". The cluster load status is 100% loaded.
Event: 1 historical drops out of the cluster.
Today: The coordinator will load all 18k segments that are now under-replicated in a single execution of RunRules (as long as Throttling limits are not hit and there is capacity)
My change: The coordinator can load a limited number of these under-replicated segments IF the operator has tuned the new dynamic config down from its default. For instance, the operator could say that it is 2k. Meaning it would take at least 9 coordination cycles to fully replicate the segments that were on the recently downed host.
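For instance, an operator could cap it through the coordinator dynamic config endpoint (the value here is arbitrary; in practice you would post the full dynamic config or use the web console, since unspecified fields may fall back to defaults):
curl -X POST -H 'Content-Type: application/json' http://COORDINATOR_HOST:8081/druid/coordinator/v1/config -d '{"maxNonPrimaryReplicantsToLoad": 2000}'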
Why
Operators need to balance lots of competing needs. Having the cluster fully replicated is great for HA. But if an event causes the coordinator to take 20 minutes to fully replicate because it has to load thousands of replicas, we sacrifice the timeliness of loading newly ingested segments that were inserted into the metastore after this long coordination cycle started. Maybe the operator cares more about that fresh data timeliness than the replication status, so they change the new config to a value that causes RunRules to take less time but require more execution cycles to bring the data back to full replication.
Really what the change aims to do is give an operator more flexibility. As written the default would give the operator the exact same functionality that they see today.
Design
I folded this new configuration and feature into ReplicationThrottler. That is essentially what it is doing, just in a new way compared to the current ReplicationThrottler functionality.
Key changed/added classes in this PR
CoordinatorDynamicConfig
ReplicationThrottler
RunRules
LoadRule
This PR has:
[x] been self-reviewed.
[x] added documentation for new or modified features or behaviors.
[x] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
[x] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
[x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
[x] been tested in a test Druid cluster.
Thanks for the PR! This config should come in handy to reduce coordinator churn in case historicals fall out of the cluster. Have you thought about configuring maxNonPrimaryReplicantsToLoad specific to a tier instead of a global property?
Also could you please add some docs related to this property to the configuration docs?
I added the missing docs.
I had not thought about making this a per-tier setting. I'm coming at it from the angle of an operator not caring if the non-primary replicants are in tier X, Y, or Z, but rather just wanting to make sure the coordinator never spends too much time loading these segments and not doing its other jobs, mainly discovering and loading newly ingested segments.
https://github.com/apache/druid/blob/master/server/src/main/java/org/apache/druid/server/coordinator/CoordinatorDynamicConfig.java#L141
This PR has a similar issue that resulted in this block of code. I think I will do the same solution for now, but long term it would be cool if this had a more elegant solution.
@a2l007 are you okay with merge this week now that the issue for pursuing a cleaner configuration strategy is created?
@capistrant Yup, LGTM. Thanks!
@capistrant , I was taking a look at the maxNonPrimaryReplicantsToLoad config but I couldn't really distinguish it from replicationThrottleLimit.
I see that you have made a similar observation here:
I folded this new configuration and feature into ReplicationThrottler. That is essentially what it is doing, just in a new way compared to the current ReplicationThrottler functionality.
Could you please help me understand the difference between the two? In which case would we want to tune this config rather than tuning the replicationThrottleLimit itself?
@capistrant , I was taking a look at the maxNonPrimaryReplicantsToLoad config but I couldn't really distinguish it from replicationThrottleLimit.
I see that you have made a similar observation here:
I folded this new configuration and feature into ReplicationThrottler. That is essentially what it is doing, just in a new way compared to the current ReplicationThrottler functionality.
Could you please help me understand the difference between the two? In which case would we want to tune this config rather than tuning the replicationThrottleLimit itself?
My observation is that maxNonPrimaryReplicantsToLoad is a new way of throttling replication, not that it is doing the same thing as replicationThrottleLimit.
replicationThrottleLimit is a limit on the number of in-progress replica loads at any one time during RunRules. We track the in-progress loads in a list. Items are removed from said list when a LoadQueuePeon issues a callback to remove them on completion of the load.
maxNonPrimaryReplicantsToLoad is a hard limit on the number of replica loads during RunRules. Once it is hit, no more non-primary replicas are created for the rest of RunRules.
You'd want to tune maxNonPrimaryReplicantsToLoad if you want to put an upper bound on the work to load non-primary replicas done by the coordinator per execution of RunRules. The reason we use it at my org is that we want the coordinator to avoid "putting its head in the sand" and loading replicas for an undesirable amount of time instead of finishing its duties and refreshing its metadata. An example of an "undesirable amount of work" is if a Historical drops out of the cluster momentarily while the Coordinator is refreshing its SegmentReplicantLookup. The coordinator all of a sudden thinks X segments are under-replicated. But if the Historical is coming back online (say after a restart to deploy new configs), we don't want the Coordinator to spin and load those X segments when it could just finish its duties and notice that the segments are not under-replicated anymore.
I'm not aware of reasons for using replicationThrottleLimit. It didn't meet my org's needs for throttling replication, which is why I introduced the new config. I guess it is a way to avoid flooding the cluster with replica loads? My clusters have actually tuned that value up to avoid hitting it at the low default that exists. We don't care about the number of in-flight loads, we just care about limiting the total number of replica loads per RunRules execution.
Let me know if that clarification is still not making sense.
Thanks for the explanation, @capistrant !
I completely agree with your opinion that coordinator should not get stuck in a single run and should always keep moving, thereby refreshing its metadata snapshot. I suppose the other open PR from you is in the same vein.
I also think replicationThrottleLimit should probably have done this in the first place, as it was trying to solve the same problem that you describe. Putting the limit on the number of replica loads "currently in progress" is not a very good safeguard to achieve this.
Thanks for adding this config, as I am sure it must come in handy for proper coordinator management.
|
2025-04-01T06:37:52.773785
| 2021-11-18T09:20:20
|
1057119697
|
{
"authors": [
"AlbumenJ",
"LMDreamFree",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3554",
"repo": "apache/dubbo",
"url": "https://github.com/apache/dubbo/pull/9297"
}
|
gharchive/pull-request
|
try not to instantiate tool classes
What is the purpose of the change
Brief changelog
Verifying this change
Checklist
[x] Make sure there is a GitHub_issue field for the change (usually before you start working on it). Trivial changes like typos do not require a GitHub issue. Your pull request should address just this issue, without pulling in other changes - one PR resolves one issue.
[ ] Each commit in the pull request should have a meaningful subject line and body.
[ ] Write a pull request description that is detailed enough to understand what the pull request does, how, and why.
[ ] Check if is necessary to patch to Dubbo 3 if you are work on Dubbo 2.7
[ ] Write necessary unit-test to verify your logic correction, more mock a little better when cross module dependency exist. If the new feature or significant change is committed, please remember to add sample in dubbo samples project.
[ ] Add some description to dubbo-website project if you are requesting to add a feature.
[ ] GitHub Actions works fine on your own branch.
[ ] If this contribution is large, please follow the Software Donation Guide.
What is your purpose of this pr?
Codecov Report
Merging #9297 (170bef0) into 3.0 (3469842) will decrease coverage by 0.09%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## 3.0 #9297 +/- ##
============================================
- Coverage 64.69% 64.60% -0.10%
- Complexity 328 329 +1
============================================
Files 1206 1206
Lines 51849 51849
Branches 7717 7692 -25
============================================
- Hits 33544 33497 -47
- Misses 14688 14725 +37
- Partials 3617 3627 +10
Impacted Files
Coverage Δ
...dubbo/config/spring/util/DubboAnnotationUtils.java
48.27% <0.00%> (ø)
...he/dubbo/config/spring/util/SpringCompatUtils.java
28.26% <0.00%> (ø)
...dubbo/common/status/support/LoadStatusChecker.java
46.15% <0.00%> (-15.39%)
:arrow_down:
...ache/dubbo/remoting/transport/AbstractChannel.java
75.00% <0.00%> (-12.50%)
:arrow_down:
...ian2/dubbo/AbstractHessian2FactoryInitializer.java
50.00% <0.00%> (-11.12%)
:arrow_down:
.../apache/dubbo/remoting/transport/AbstractPeer.java
63.04% <0.00%> (-8.70%)
:arrow_down:
.../common/threadpool/serial/SerializingExecutor.java
70.37% <0.00%> (-7.41%)
:arrow_down:
...ng/transport/dispatcher/all/AllChannelHandler.java
62.06% <0.00%> (-6.90%)
:arrow_down:
.../org/apache/dubbo/rpc/protocol/tri/WriteQueue.java
68.75% <0.00%> (-6.25%)
:arrow_down:
...pache/dubbo/remoting/transport/AbstractServer.java
57.14% <0.00%> (-4.29%)
:arrow_down:
... and 25 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3469842...170bef0. Read the comment docs.
|
2025-04-01T06:37:52.780779
| 2022-04-06T23:24:50
|
1195299338
|
{
"authors": [
"Maneesh43",
"jiawulin001"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3555",
"repo": "apache/echarts",
"url": "https://github.com/apache/echarts/issues/16843"
}
|
gharchive/issue
|
[Feature] Ability to add icons/images in chart title.
What problem does this feature solve?
Current API doesn't support adding icons/images to chart title.
What does the proposed API look like?
title:{
image:"imagepath"
}
You can add images to title with textStyle.rich. Please refer to documentation and examples
Here's an example I made for you.
Code sample
var ROOT_PATH =
'https://cdn.jsdelivr.net/gh/apache/echarts-website@asf-site/examples';
const Sunny = ROOT_PATH + '/data/asset/img/weather/sunny_128.png';
option = {
xAxis: {
type: 'category',
data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
},
yAxis: {
type: 'value'
},
title: {
text: '{a|}Sunny Day!',
textStyle: {
color: 'black',
rich: {
a: {
backgroundColor: {
image: Sunny
},
height: 40,
width: 50
}
}
}
},
series: [
{
data: [120, 200, 150, 80, 70, 110, 130],
type: 'bar',
showBackground: true,
backgroundStyle: {
color: 'rgba(180, 180, 180, 0.2)'
}
}
]
};
Thank you!
|
2025-04-01T06:37:52.786148
| 2023-02-23T19:59:41
|
1597453222
|
{
"authors": [
"Ovilia",
"amit-unravel",
"gl260",
"hanshupe007",
"ianschmitz",
"psychopathh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3556",
"repo": "apache/echarts",
"url": "https://github.com/apache/echarts/issues/18306"
}
|
gharchive/issue
|
[Bug] Overflow truncate breaks axis label click target position
Version
5.4.1
Link to Minimal Reproduction
No response
Steps to Reproduce
Paste the following in the echarts example:
option = {
xAxis: {
axisLabel: {
// BUG: Overflow: "truncate" breaks the click target on axis label
overflow: 'truncate',
width: 80
},
triggerEvent: true,
type: 'category',
data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
},
yAxis: {
axisLabel: {
// BUG: Overflow: "truncate" breaks the click target on axis label
overflow: 'truncate',
width: 80
},
triggerEvent: true,
type: 'value'
},
series: [
{
data: [120, 200, 150, 80, 70, 110, 130],
type: 'bar',
showBackground: true,
backgroundStyle: {
color: 'rgba(180, 180, 180, 0.2)'
}
}
]
};
Current Behavior
Hovering the mouse over an axis label should produce a click target aligned with the axis label text; instead, the click target is offset to where the mouse pointer is shown in the screenshot (screenshot not included here).
Expected Behavior
Click target is aligned with axis label text.
Environment
- OS:macOS Monterey
- Browser: Chrome 109
- Framework:
Any additional comments?
Looks to be related to https://github.com/apache/echarts/issues/17343 possibly?
This seems to be a bug. If you are interested in making a pull request, it can help you fix this problem quicker. Please checkout the wiki to learn more.
Is there any fix planned for this issue? Experiencing the same behavior, click event is not aligned with the text.
I added an empty rich: {} to the axisLabel and everything worked as expected
I added an empty rich: {} to the axisLabel and everything worked as expected
Awesome!
I added an empty rich: {} to the axisLabel and everything worked as expected
This hack worked for me.
|
2025-04-01T06:37:52.790725
| 2023-12-05T05:49:25
|
2025336731
|
{
"authors": [
"wizardzhang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3557",
"repo": "apache/eventmesh",
"url": "https://github.com/apache/eventmesh/pull/4603"
}
|
gharchive/pull-request
|
[ISSUE #4602] when wechat send message api response errcode is not zero, the wechat sink connector does not throw IllegalAccessException
Fixes #4602
Motivation
Fix a bug when the WeChat API response indicates failure.
Modifications
rename org.apache.eventmesh.connector.wechat.sink.connector.TemplateMessageResponse#errocode
to org.apache.eventmesh.connector.wechat.sink.connector.TemplateMessageResponse#errcode
add abnormal test case
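A minimal sketch of the intended check, shown below (the helper class and method are hypothetical; IllegalAccessException follows the PR title):

```java
// Hypothetical helper illustrating the fix: surface a non-zero errcode
// from WeChat's template-message API as an exception instead of ignoring it.
public final class WeChatResponseValidator {

    private WeChatResponseValidator() {
    }

    // WeChat reports success with errcode == 0; anything else is a failure.
    public static void validate(int errcode, String errmsg) throws IllegalAccessException {
        if (errcode != 0) {
            throw new IllegalAccessException(
                    "WeChat send message API failed, errcode=" + errcode + ", errmsg=" + errmsg);
        }
    }
}
```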
You can add this Sink Connector to the list in this document. https://github.com/apache/eventmesh/tree/master/eventmesh-connectors#connector-status
Already done, please review.
Finally, are you willing to write a document for your connector (#4601 (comment)) to facilitate user understanding and use? If you are willing, you can do it in this PR or in a new PR in the future.
I prefer to do this in a new PR.
@pandaapo hello, I found that some commits in this PR used the wrong email, so I want to close this PR and create a new one.
|
2025-04-01T06:37:52.794185
| 2024-06-17T02:54:32
|
2356233712
|
{
"authors": [
"Jiabao-Sun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3558",
"repo": "apache/flink-connector-mongodb",
"url": "https://github.com/apache/flink-connector-mongodb/pull/36"
}
|
gharchive/pull-request
|
[FLINK-35623] Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/
Hi @GOODBOY008, could you help review this?
Hi @yux, could you help review this?
|
2025-04-01T06:37:52.799983
| 2019-11-22T07:21:22
|
527031733
|
{
"authors": [
"leonardBang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3559",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/10289"
}
|
gharchive/pull-request
|
[FLINK-14924][table sql / api] CsvTableSource can not config empty column as null
CsvTableSource cannot be configured to treat empty columns as null.
What is the purpose of the change
This pull request adds an option to treat empty columns as null for CsvTableSource.
Brief change log
update file org.apache.flink.table.sources.CsvTableSource.java
Verifying this change
Add ITCases to test CsvTableSource:
org.apache.flink.table.runtime.batch.sql.TableSourceITCase.scala
org.apache.flink.table.runtime.utils.CommonTestData.scala
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): ( no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
I looked up the CSV descriptors; we have two descriptors, named Csv and OldCsv.
Only OldCsv calls CsvTableSource, and OldCsv is deprecated. Csv has another mechanism, RuntimeConverter, and supports a null-literal property which interprets a literal string (i.e. "null" or "N/A") as a null value.
So I think we do not need to add this feature now.
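For reference, a rough sketch of that null-literal mechanism with the 1.9-era Csv descriptor (method names are assumed from that API and may differ by version):

```java
import org.apache.flink.table.descriptors.Csv;

public class CsvNullLiteralSketch {
    public static void main(String[] args) {
        // Sketch only: the newer Csv descriptor interprets a configured
        // literal (here the empty string) as SQL NULL at parse time, so
        // empty columns become NULLs without touching CsvTableSource.
        Csv format = new Csv()
                .fieldDelimiter(',')
                .nullLiteral("");
        System.out.println(format);
    }
}
```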
@KurtYoung
I have updated the PR. Could you take another look?
|
2025-04-01T06:37:52.807431
| 2020-05-27T08:54:07
|
625508259
|
{
"authors": [
"SteNicholas",
"zentol"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3560",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/12354"
}
|
gharchive/pull-request
|
[FLINK-17572][runtime] Remove checkpoint alignment buffered metric from webui
What is the purpose of the change
After avoiding caching buffers for blocked input channels before barrier alignment, the runtime never caches buffers during checkpoint barrier alignment; therefore the checkpoint alignment buffered metric would always be 0, so it should be removed directly from CheckpointStatistics, CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics.
Brief change log
Remove alignmentBuffered attribute in CheckpointStatistics , CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics
Remove alignment_buffered in Checkpoint Detail from job-checkpoints.component.html.
Remove alignment_buffered column in document of /jobs/:jobid/checkpoints rest interface.
Verifying this change
Modify the test objects created by CheckpointStatistics, CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics in CheckpointingStatisticsTest, TaskCheckpointStatisticsTest and TaskCheckpointStatisticsWithSubtaskDetailsTest.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
This breaks the REST API contracts; see the JIRA for details.
|
2025-04-01T06:37:52.814486
| 2020-06-07T14:06:38
|
633444633
|
{
"authors": [
"zhijiangW"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3561",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/12511"
}
|
gharchive/pull-request
|
[FLINK-18063][checkpointing] Fix the race condition for aborting current checkpoint in CheckpointBarrierUnaligner
What is the purpose of the change
There are three aborting scenarios which might encounter a race condition:
1. CheckpointBarrierUnaligner#processCancellationBarrier
2. CheckpointBarrierUnaligner#processEndOfPartition
3. AlternatingCheckpointBarrierHandler#processBarrier
They only consider aborting the pending checkpoint triggered by #processBarrier from the task thread. Actually, the checkpoint might also be triggered by #notifyBarrierReceived from the netty thread in a race condition, so we should also handle that case properly and abort it.
Brief change log
Fix the process of AlternatingCheckpointBarrierHandler#processBarrier
Fix the process of CheckpointBarrierUnaligner#processEndOfPartition to abort checkpoint properly
Fix the process of CheckpointBarrierUnaligner#processCancellationBarrier to abort checkpoint properly
Verifying this change
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierAfterNotifyBarrierReceived
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierAfterProcessBarrier
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierBeforeProcessAndReceiveBarrier
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
Cherry-pick it to master from #12406 which was reviewed and approved before.
The failing e2e test is the known StreamingKafkaITCase issue, so I am ignoring it and merging.
|
2025-04-01T06:37:52.831737
| 2021-06-24T08:39:15
|
928988649
|
{
"authors": [
"Tartarus0zm",
"zentol"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3562",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/16275"
}
|
gharchive/pull-request
|
[FLINK-20518][rest] Add decoding characters for MessageQueryParameter
What is the purpose of the change
Add decoding characters for rest service
Brief change log
Add decoding characters for rest service
Verifying this change
no
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not documented)
@zentol please take a look, if you have time, thanks
As I said in the ticket, the server side decodes characters just fine. The UI isn't encoding special characters; that's the issue.
Maybe I didn't understand what you mean correctly?
Do you think that special characters should be encoded on the UI side?
Instead of decoding it on the server again?
yes and yes. I did a manual check when reviewing https://github.com/apache/flink/pull/13514 and confirmed that our REST API handles escaped special characters fine. So this is purely a UI issue.
@zentol
I added some logs on the UI and on the REST server, and then I found:
the UI sends 0.GroupWindowAggregate(window=[TumblingGroupWindow(%27w$__rowtime__60000)]__properti.watermarkLatency
but what the REST server receives in RouterHandler is 0.GroupWindowAggregate(window=%5BTumblingGroupWindow(%252527w$__rowtime__60000)%5D__properti.watermarkLatency, which has been encoded 3 times.
QueryStringDecoder decodes only once, so this is what happened.
Is there anything else you want me to look into?
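For illustration, the mismatch can be reproduced with plain JDK calls; this sketch shows why a server that decodes exactly once cannot recover a value that was encoded several times along the way:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoubleEncodingDemo {
    public static void main(String[] args) {
        String metric = "window=[TumblingGroupWindow('w$)]";
        // Encode once (what the client should do), then twice more to mimic
        // extra escaping added by intermediate layers such as a proxy.
        String once = URLEncoder.encode(metric, StandardCharsets.UTF_8);
        String thrice = URLEncoder.encode(
                URLEncoder.encode(once, StandardCharsets.UTF_8), StandardCharsets.UTF_8);
        // Decoding exactly once recovers the original only for the
        // singly-encoded value; the triple-encoded one stays escaped.
        System.out.println(URLDecoder.decode(once, StandardCharsets.UTF_8));
        System.out.println(URLDecoder.decode(thrice, StandardCharsets.UTF_8));
    }
}
```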
All I know is that if you send a request, with encoding applied once, things work fine.
@zentol I found the root cause! We are using flink on yarn, RM will escape the special characters again!!
So what is your suggestion to solve this problem?
We are using flink on yarn, RM will escape the special characters again!!
YARN making things complicated again... 😢
Earlier you said that the requests are encoded 3 times; The UI does it once (does it?), and the RM does it (I assume) once. Any idea where the third one comes from?
So what is your suggestion to solve this problem?
hmm...it seems a bit arbitrary to stack a fixed number of decode calls; what if yet another middle-layer gets added between the UI and rest API, things could break at any time. Are there any downsides to decoding too often? As in, we loop the decoding until nothing changes anymore (although that also feels just wrong...).
Earlier you said that the requests are encoded 3 times; The UI does it once (does it?), and the RM does it (I assume) once. Any idea where the third one comes from?
I don’t know how many encodes will be done in RM.
The 3 times mentioned before are based on the need to decode 3 times.
hmm...it seems a bit arbitrary to stack a fixed number of decode calls; what if yet another middle-layer gets added between the UI and rest API, things could break at any time. Are there any downsides to decoding too often? As in, we loop the decoding until nothing changes anymore (although that also feels just wrong...).
Could we do it like #13514?
Handle special characters such as single quotes.
@zentol What do you think of this?
Could we do it like #13514?
Handle special characters such as single quotes.
I don't understand what you are asking/suggesting, please elaborate.
@zentol We could add handling of single quotes in MetricQueryService#replaceInvalidChars to avoid them.
Decoding multiple times is not the best solution.
I agree, but I have outlined in #13514 why replacing more characters is not a good option as well.
|
2025-04-01T06:37:52.842039
| 2022-02-10T05:44:20
|
1129485720
|
{
"authors": [
"MrWhiteSike"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3563",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/18698"
}
|
gharchive/pull-request
|
[FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.
What is the purpose of the change
Translate the "FileSystem" page of "DataStream Connectors" documentation into Chinese.
Brief change log
Translate the datastream filesystem.md page into Chinese.
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
This change is a trivial documentation-only change without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable)
Hi, @Thesharing @RocMarshal , May I get your help to review it? Thanks.
@flinkbot run Azure
|
2025-04-01T06:37:52.846459
| 2022-08-05T08:49:40
|
1329644627
|
{
"authors": [
"JasonLeeCoding"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3564",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/20466"
}
|
gharchive/pull-request
|
[FLINK-28837][chinese-translation] Translate "Hybrid Source" page of …
What is the purpose of the change
Translate "Hybrid Source" page of "DataStream Connectors" into Chinese
Brief change log
Translate "Hybrid Source" page of "DataStream Connectors" into Chinese
Verifying this change
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not documented)
@flinkbot run azure
@flinkbot run azure
@wuchong please help review when you are free, thanks.
|
2025-04-01T06:37:52.853527
| 2023-01-02T08:09:41
|
1516131539
|
{
"authors": [
"link3280"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3565",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/21581"
}
|
gharchive/pull-request
|
[FLINK-30538][SQL gateway/client] Improve error handling of stop job operation
What is the purpose of the change
Currently, the stop-job operation produces verbose error messages and doesn't handle exceptions in stop-without-savepoint gracefully.
This PR fixes the problem.
Brief change log
Wrap simple cancel with try-catch.
Wait for simple cancel Acknowledge before returning 'OK'.
Simplify exception message for stop job operations.
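A simplified sketch of the cancel path described above (the wrapper class and error handling are illustrative, not the gateway's actual code):

```java
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.ClusterClient;

public class StopJobSketch {
    // "Wrap simple cancel with try-catch" and "wait for the Acknowledge
    // before returning OK": block on the cancel future, then answer OK.
    public static String stopJob(ClusterClient<?> client, JobID jobId) {
        try {
            client.cancel(jobId).get(); // wait for the cluster's Acknowledge
            return "OK";
        } catch (Exception e) {
            throw new RuntimeException("Failed to stop job " + jobId, e);
        }
    }
}
```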
Verifying this change
Please make sure both new and modified tests in this PR follows the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): no
The public API, i.e., is any changed class annotated with @Public(Evolving): no
The serializers: no
The runtime per-record code paths (performance sensitive): no
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
The S3 file system connector: no
Documentation
Does this pull request introduce a new feature? no
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
@flinkbot run azure
@flinkbot run azure
Please kindly take a look @fsk119
CI ran into https://issues.apache.org/jira/browse/FLINK-30328. Re-run CI.
@flinkbot run azure
@fsk119 CI turned green. Please kindly take a look at your convenience.
ping @fsk119 . It should be a quick one :)
ping @fsk119
|
2025-04-01T06:37:52.859217
| 2024-04-19T02:11:46
|
2251933405
|
{
"authors": [
"jiangxin369"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3566",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/24683"
}
|
gharchive/pull-request
|
[FLINK-35166][runtime] Make the SortBufferAccumulator use more buffers when the parallelism is small
What is the purpose of the change
Improve the performance of hybrid shuffle when enable memory decoupling and meantime the parallelism is small.
Brief change log
Make the SortBufferAccumulator use more buffers when the parallelism is small
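A hypothetical sketch of such a sizing rule (the constant, names, and formula are illustrative, not the actual implementation):

```java
public class BufferCountSketch {
    // Illustrative only: give the sort-based accumulator a reasonable floor
    // of buffers at low parallelism instead of scaling purely with the
    // number of subpartitions, capped by what is actually available.
    static int numBuffersToUse(int numSubpartitions, int numAvailableBuffers) {
        int minBuffersForSorting = 512; // hypothetical floor
        return Math.min(numAvailableBuffers,
                Math.max(numSubpartitions + 1, minBuffersForSorting));
    }
}
```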
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable)
Is the TPC-DS performance regression gone after the fix?
@reswqa The regression still exists because we replace the HashBufferAccumulator with the SortBufferAccumulator when the decoupling is enabled and the parallelism is less than 512, but this PR reduces the regression. According to the previous discussion, the regression is acceptable if the feature is enabled.
|
2025-04-01T06:37:52.867797
| 2024-11-12T17:45:56
|
2652904072
|
{
"authors": [
"davidradl",
"g-s-eire"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3567",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/25649"
}
|
gharchive/pull-request
|
Upgrade com.squareup.okio:okio
What is the purpose of the change
Upgrade the com.squareup.okio:okio dependency.
Brief change log
Bump the version of the com.squareup.okio:okio dependency.
Verifying this change
Please make sure both new and modified tests in this PR follow the conventions for tests defined in our code quality guide.
This change is a trivial dependency version bump without any test coverage.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
Reviewed by Chi on 21/11/24. Asked submitter questions
Please could you raise a Jira detailing the reason you want to upgrade this component (e.g. is there a particular bug that this would fix)
|
2025-04-01T06:37:52.869522
| 2016-10-28T09:41:37
|
185880994
|
{
"authors": [
"chermenin",
"zentol"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3568",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/2709"
}
|
gharchive/pull-request
|
[FLINK-4631] Avoided NPE in OneInputStreamTask.
Added an additional condition to check for a possible NPE. This PR solves FLINK-4631.
+1 to merge
merging
|
2025-04-01T06:37:52.870624
| 2017-05-10T08:21:29
|
227601507
|
{
"authors": [
"fhueske",
"twalthr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3569",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/3862"
}
|
gharchive/pull-request
|
[FLINK-6483] [table] Support time materialization
This PR adds support for time materialization. It also fixes several bugs related to time handling in the Table API & SQL.
Thanks for the update @twalthr!
Looks very good. Will merge this.
|
2025-04-01T06:37:52.875990
| 2017-06-28T04:46:43
|
239051744
|
{
"authors": [
"tillrohrmann",
"zjureel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3570",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/4204"
}
|
gharchive/pull-request
|
[FLINK-6522] Add ZooKeeper cleanup logic to ZooKeeperHaServices
Thanks for contributing to Apache Flink. Before you open your pull request, please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your pull request. For more information and/or questions please refer to the How To Contribute guide.
In addition to going through the list, please provide a meaningful description of your changes.
[ ] General
The pull request references the related JIRA issue ("[FLINK-XXX] Jira title text")
The pull request addresses only one issue
Each commit in the PR has a meaningful commit message (including the JIRA id)
[ ] Documentation
Documentation has been added for new functionality
Old documentation affected by the pull request has been updated
JavaDoc for public methods has been added
[ ] Tests & Build
Functionality added by the pull request is covered by tests
mvn clean verify has been executed successfully locally or a Travis build has passed
Hi @tillrohrmann , I have created this PR for issue FLINK-6522. Could you please have a look when you're free, thanks
@tillrohrmann Thank you for your review. I used the prefix as the name of the sub-directory, and added a test case for FileSystemStateStorageHelper#closeAndCleanupAllData. I have also fixed the problem you mentioned, thanks.
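For context, the cleanup described here boils down to a recursive delete of the cluster's HA znodes. A minimal sketch with Apache Curator, assuming an illustrative path layout:

```java
import org.apache.curator.framework.CuratorFramework;

public class ZkCleanupSketch {
    // Sketch of closeAndCleanupAllData(): recursively delete the HA
    // znodes; "/flink/cluster-id" is an illustrative path, not Flink's
    // actual layout.
    static void cleanup(CuratorFramework client) throws Exception {
        client.delete()
                .deletingChildrenIfNeeded()
                .forPath("/flink/cluster-id");
    }
}
```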
Solved by FLINK-11336.
|
2025-04-01T06:37:52.969547
| 2017-12-09T02:29:29
|
280668091
|
{
"authors": [
"fhueske",
"xccui"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3571",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/5140"
}
|
gharchive/pull-request
|
[FLINK-7797] [table] Add support for windowed outer joins for streaming tables
What is the purpose of the change
This PR adds support for windowed outer joins for streaming tables.
Brief change log
Adjusts the plan translation logic to accept stream window outer join.
Attaches an ever-emitted flag to each row. When a row is removed from the cache (or detected as not cached), a null-padding join result will be emitted if necessary.
Adds a custom JoinAwareCollector to track whether there's a successfully joined result for both sides in each join loop.
Adds table/SQL translation tests, and also join integration tests. Since the runtime logic is built on the existing window inner join, no new harness tests are added.
Updates the SQL/Table API docs.
Verifying this change
This PR can be verified by the cases added in JoinTest and JoinITCase.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (yes)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (yes)
If yes, how is the feature documented? (remove the restriction notes)
Thanks for the PR @xccui.
I'll try to have a look at it sometime this week.
Best, Fabian
|
2025-04-01T06:37:52.977138
| 2018-10-22T13:04:57
|
372512734
|
{
"authors": [
"Clarkkkkk",
"StefanRRichter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3572",
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/6898"
}
|
gharchive/pull-request
|
[FLINK-10431] Extraction of scheduling-related code from SlotPool into preliminary Scheduler
What is the purpose of the change
This PR extracts the scheduling related code (e.g. slot sharing logic) from to slot pool into a preliminary version of a future scheduler component. Our primary goal is fixing the scheduling logic for local recovery. Changes in this PR open up potential for more code cleanups (e.g. removing all scheduling concerns from the slot pool, removing ProviderAndOwner, moving away from some CompletableFuture return types, etc). This cleanup and some test rewrites will happen in a followup PR.
Brief change log
SlotPool is no longer a RpcEndpoint, we need to take care that all state modification happens in the component's main thread now.
Introduced SlotInfo and moving the slot sharing code into a scheduler component. Slot pool code can now deal with single slot requests. The pattern of interaction is more explicit, we have 3 main new methods: getAvailableSlotsInformation to list available slots, allocateAvailableSlot to allocated a listed / available slot, requestNewAllocatedSlot to request a new slot from the resoure manager. The old codepaths currently still co-exist in the slot pool and will be removed in followup work.
Introduce creating a collection of all previous allocations through ExecutionGraph::computeAllPriorAllocationIds. This serves as basis to compute a "blacklist" of allocation ids that we use to fix the scheduling of local recovery.
Provide an improved version of the scheduling for local recovery, that uses a blacklist.
Verifying this change
This change is already covered by existing tests, but we still need to rewrite tests for the slot pool and add more additional tests in followup work.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable)
Hi @StefanRRichter, I am just wondering why SlotPool is no longer an RpcEndpoint?
@Clarkkkkk background is that it makes things easier; otherwise you have concurrency between two components that want to interact in transactional ways: if the scheduler runs in a different thread than the slot pool, there can be concurrent modifications to the slot pool (e.g. slots added/removed) between the scheduler asking for the available slots and the scheduler requesting an available slot. All of this has to be resolved, and it becomes harder to understand and reason about the code. This can be avoided if the scheduler and slot pool run in the same thread, and we are also aiming at having all modifications to the execution graph in the same thread as well. The threading model would then be that blocking or expensive operations run in their own thread so that the main thread is never blocked, but the results are always synced back to a main thread that runs all the modifications in the scheduler, slot pool, execution graph, etc.
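A generic sketch of that threading model (the executors and names are illustrative, not Flink's actual classes):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public class MainThreadModelSketch {
    // Expensive or blocking work runs on an I/O pool, but every state
    // mutation is handed back to the single main-thread executor, so the
    // scheduler, slot pool and execution graph never race with each other.
    static void runExpensiveThenSync(Executor ioExecutor, Executor mainThreadExecutor) {
        CompletableFuture
                .supplyAsync(MainThreadModelSketch::expensiveComputation, ioExecutor)
                .thenAcceptAsync(MainThreadModelSketch::applyToState, mainThreadExecutor);
    }

    static String expensiveComputation() {
        return "result";
    }

    static void applyToState(String result) {
        // mutate scheduler/slot pool/execution graph state, single-threaded
    }
}
```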
Closed because there is an updated version of this PR in #7662.
|
2025-04-01T06:37:52.979704
| 2016-11-18T15:03:19
|
190340170
|
{
"authors": [
"bessbd",
"peterableda"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3573",
"repo": "apache/flume",
"url": "https://github.com/apache/flume/pull/87"
}
|
gharchive/pull-request
|
Remove hostHeader = hostname property from Host interceptor example
We are overriding the host header name from host to hostname in the example usage section. Because of this example, users override the header name too but still use the %{host} substitution shown in the HDFS Sink section, which won't work for them.
This change removes this config line.
+1, LGTM
I'll leave some time for others to review this, then commit it if nobody disagrees.
I'm about to commit this.
@peterableda : thank you for the patch!
|
2025-04-01T06:37:52.994123
| 2019-09-24T08:02:00
|
497524222
|
{
"authors": [
"bschuchardt",
"mhansonp",
"mkevo",
"pivotal-jbarrett"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3574",
"repo": "apache/geode",
"url": "https://github.com/apache/geode/pull/4085"
}
|
gharchive/pull-request
|
GEODE-6927 make getThreadOwnedConnection code thread safe
Thank you for submitting a contribution to Apache Geode.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
[x] Has your PR been rebased against the latest commit within the target branch (typically develop)?
[x] Is your initial contribution a single, squashed commit?
[x] Does gradlew build run cleanly?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
Note:
Please ensure that once the PR is submitted, check Concourse for build issues and
submit an update to your PR as soon as possible. If you need help, please send an
email to <EMAIL_ADDRESS>.
Instead of adding synchronizations and null checks everywhere let's just make that field final and change the close() method to clear it.
I think there is still a race, though, in what to do with calls to this method that run concurrently with close. The map would be cleared by close but then updated by this method. Having little background in what this is tracking, I can't say if this is good or bad; it is simply a case the original code seemed to try to avoid.
I don't think we should be using the state of a collection as an indication of whether the service is open or closed. There should be other state that we use to avoid creating a new connection and putting it in the connection map if ConnectionTable is closed.
Couldn't agree more! If you think it's safe for connection references that race into the map on close to sit there, then this is an easy fix. A simple check of whether the collection is closed after the put completes could accomplish this. The close itself would empty the map, and any race on that operation would get cleared up on the back end after the put by checking for the closed state and removing the entry just put.
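A minimal sketch of that put-then-check pattern (class and field names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConnectionTableSketch {
    // The map is final, so readers never see null; close() clears it and
    // flips a flag instead of nulling the field out.
    private final Map<Long, Object> threadConnectionMap = new ConcurrentHashMap<>();
    private volatile boolean closed;

    void putThreadConnection(long threadId, Object connection) {
        threadConnectionMap.put(threadId, connection);
        // Resolve the race with close(): if the table closed while we were
        // inserting, remove the entry we just put so it cannot leak.
        if (closed) {
            threadConnectionMap.remove(threadId, connection);
        }
    }

    void close() {
        closed = true;
        threadConnectionMap.clear();
    }
}
```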
@bschuchardt @pivotal-jbarrett Are there still concerns with this PR?
My concern over the threadConnectionMap null checks hasn't been addressed. I've been fighting against the nulling-out of instance variables like this forever. It's always causing NPEs for unsuspecting programmers who don't recognize that this anti-pattern is being used. The instance variable ought to be "final" and some other state should be added and consulted to see if the connection table has been closed.
Hi @bschuchardt ,
I think the best way is to make threadConnectionMap final and change close() to iterate over the map, close all connections, and clear the map. That way, we don't need these null checks.
Copying the map to a local variable and then executing operations on it isn't good either, as computeIfAbsent() can still throw an NPE: we can't know whether someone deleted it, since we would be checking our local copy.
Tnx @bschuchardt! :)
|
2025-04-01T06:37:52.996657
| 2021-04-03T22:06:49
|
849748989
|
{
"authors": [
"danielsun1106",
"eric-milles"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3575",
"repo": "apache/groovy",
"url": "https://github.com/apache/groovy/pull/1541"
}
|
gharchive/pull-request
|
GROOVY-8983: STC: support "Type[] array = collectionOfTypeOrSubtype"
https://issues.apache.org/jira/browse/GROOVY-8983
NOTE: GenericsUtils.parameterizeType("List<? super Type>","Collection") returns Collection<Type> and not Collection<? super Type>. I attempted to address this, but was not successful. This should probably be fixed at some point because it breaks the semantics of "? super Type".
Merged. Thanks!
|
2025-04-01T06:37:52.999742
| 2019-10-18T14:41:25
|
509125099
|
{
"authors": [
"liuxunorg",
"yuanzac"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3576",
"repo": "apache/hadoop-submarine",
"url": "https://github.com/apache/hadoop-submarine/pull/56"
}
|
gharchive/pull-request
|
[SUBMARINE-248]. Add websocket interface to submarine workbench server.
What is this PR for?
Add a WebSocket interface to the submarine workbench server so that the frontend and backend can have bidirectional communication.
What type of PR is it?
Feature
What is the Jira issue?
https://issues.apache.org/jira/browse/SUBMARINE-248
How should this be tested?
https://travis-ci.org/yuanzac/hadoop-submarine/builds/599666968
Questions:
Does the licenses files need update? No
Is there breaking changes for older versions? No
Does this needs documentation? No
Thanks @liuxunorg and @jiwq for the review~
Will merge if no more comments
|
2025-04-01T06:37:53.022674
| 2019-07-03T21:17:13
|
463957156
|
{
"authors": [
"bgaborg",
"hadoop-yetus",
"mackrorysd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3577",
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/1054"
}
|
gharchive/pull-request
|
HADOOP-16409. Allow authoritative mode on non-qualified paths.
This addresses whitespace nits from Gabor's review of https://github.com/apache/hadoop/pull/1043, and allows non-qualified paths to be specified in the config.
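A minimal sketch of the qualification step this enables, using Hadoop's Path API (the helper itself is illustrative):

```java
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class QualifyPathSketch {
    // Qualify a possibly relative, non-qualified configured path against
    // the filesystem URI and working directory before comparing it with
    // fully qualified paths.
    static Path qualify(String configured, URI fsUri, Path workingDir) {
        return new Path(configured).makeQualified(fsUri, workingDir);
    }
}
```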
:confetti_ball: +1 overall
| Vote | Subsystem | Runtime | Comment |
|:----:|-----------|--------:|---------|
| 0 | reexec | 31 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
| | _ trunk Compile Tests _ | | |
| +1 | mvninstall | 1085 | trunk passed |
| +1 | compile | 30 | trunk passed |
| +1 | checkstyle | 23 | trunk passed |
| +1 | mvnsite | 41 | trunk passed |
| +1 | shadedclient | 684 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 28 | trunk passed |
| 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 55 | trunk passed |
| | _ Patch Compile Tests _ | | |
| +1 | mvninstall | 31 | the patch passed |
| +1 | compile | 27 | the patch passed |
| +1 | javac | 27 | the patch passed |
| -0 | checkstyle | 16 | hadoop-tools/hadoop-aws: The patch generated 1 new + 40 unchanged - 0 fixed = 41 total (was 40) |
| +1 | mvnsite | 32 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 707 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 20 | the patch passed |
| +1 | findbugs | 56 | the patch passed |
| | _ Other Tests _ | | |
| +1 | unit | 285 | hadoop-aws in the patch passed. |
| +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
| | | 3267 | |

| Subsystem | Report/Notes |
|-----------|--------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1054 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux fff5b7977c40 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 8965ddc |
| Default Java | 1.8.0_212 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1054/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
This message was automatically generated.
Test result against ireland: 4 known testMRJob failures, no others.
+1 on this.
|
2025-04-01T06:37:53.101301
| 2022-03-31T23:55:34
|
1189017299
|
{
"authors": [
"hadoop-yetus",
"virajith",
"xinglin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3578",
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/4128"
}
|
gharchive/pull-request
|
Hadoop-18169. getDelegationTokens in ViewFs should also fetch the token from fallback FS
Description of PR
Cherry-pick of 15a5ea2c955a7d1b89aea0cb127727a57db76c76 from trunk to branch-2.10; also created the TestViewFsLinkFallback.java file with one test case included.
All other test cases in TestViewFsLinkFallback.java from trunk are removed, as the implementation of InternalDirOfViewFs (createInternal function) is out of date and these test cases won't pass. Leave the fix and the inclusion of these other unit tests as a future pull request.
How was this patch tested?
mvn test -Dtest="TestViewFsLinkFallback"
@omalley,
Here is the backport for DelegationToken for 2.10. Could you take a look? Thanks,
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|-----------|--------:|---------|---------|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | _ branch-2.10 Compile Tests _ | | | |
| +0 :ok: | mvndep | 2m 21s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 12m 12s | | branch-2.10 passed |
| +1 :green_heart: | compile | 13m 8s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | compile | 10m 45s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | checkstyle | 1m 50s | | branch-2.10 passed |
| +1 :green_heart: | mvnsite | 2m 30s | | branch-2.10 passed |
| +1 :green_heart: | javadoc | 2m 43s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 2s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| -1 :x: | spotbugs | 2m 2s | /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. |
| -1 :x: | spotbugs | 2m 31s | /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 37s | | the patch passed |
| +1 :green_heart: | compile | 12m 21s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javac | 12m 21s | | the patch passed |
| +1 :green_heart: | compile | 10m 46s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | javac | 10m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 50s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 40s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 4s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | spotbugs | 4m 51s | | the patch passed |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 8m 24s | /patch-unit-hadoop-common-project_hadoop-common.txt | hadoop-common in the patch passed. |
| -1 :x: | unit | 62m 56s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. |
| | | 169m 34s | | |

| Reason | Tests |
|--------|-------|
| Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.fs.sftp.TestSFTPFileSystem |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |

| Subsystem | Report/Notes |
|-----------|--------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4128 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 2d6802dd4aa1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-2.10 / 88a556b79fdc6952d10a6c771f7b436349830a5c |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/testReport/ |
| Max. process+thread count | 2709 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
This change alone seems fine. It would be good and helpful for other reviewers to identify the differences in ViewFs that explain why the rest of the tests couldn't be backported, but that's more of a nice-to-have, not required to ship it.
@mccormickt12 thanks for taking a quick review.
The create() method in InternalDirOfViewFs diverged between trunk and branch-2.10. In trunk, when a fallbackFS is configured, we can create a file; in branch-2.10 it does not check whether a fallbackFS exists and simply throws a "read-only fs" exception, thus failing these unit tests.
The InternalDirOfViewFs class in trunk has two more members than in branch-2.10. Without access to fsState, we cannot check whether a fallbackFS is set or not. It does not seem trivial to bring InternalDirOfViewFs in branch-2.10 in sync with trunk, so I'll leave that as a separate patch for later.
private final boolean showMountLinksAsSymlinks;
private InodeTree<FileSystem> fsState;
Thanks for backporting this @xinglin and explaining what's not present in 2.10. As the rest of the functionality doesn't exist, I am good with having the required test backported. Can you please rebase your branch and push it again so Yetus gives a positive run? I am +1 on merging this PR after that.
:broken_heart: -1 overall
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|-----------|--------:|---------|---------|
| +0 :ok: | reexec | 8m 12s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | _ branch-2.10 Compile Tests _ | | | |
| +0 :ok: | mvndep | 4m 3s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 13m 58s | | branch-2.10 passed |
| +1 :green_heart: | compile | 13m 12s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | compile | 10m 50s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | checkstyle | 2m 6s | | branch-2.10 passed |
| +1 :green_heart: | mvnsite | 2m 36s | | branch-2.10 passed |
| +1 :green_heart: | javadoc | 2m 47s | | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 9s | | branch-2.10 passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| -1 :x: | spotbugs | 2m 28s | /branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html | hadoop-common-project/hadoop-common in branch-2.10 has 2 extant spotbugs warnings. |
| -1 :x: | spotbugs | 2m 33s | /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 1 extant spotbugs warnings. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 38s | | the patch passed |
| +1 :green_heart: | compile | 12m 23s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javac | 12m 23s | | the patch passed |
| +1 :green_heart: | compile | 10m 46s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | javac | 10m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 2s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 39s | | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 |
| +1 :green_heart: | javadoc | 2m 10s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| +1 :green_heart: | spotbugs | 4m 56s | | the patch passed |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 8m 30s | /patch-unit-hadoop-common-project_hadoop-common.txt | hadoop-common in the patch passed. |
| -1 :x: | unit | 63m 37s | /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 183m 18s | | |

| Reason | Tests |
|--------|-------|
| Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.fs.sftp.TestSFTPFileSystem |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
| | hadoop.hdfs.TestDataTransferKeepalive |

| Subsystem | Report/Notes |
|-----------|--------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4128 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux c9dcbe54b6da 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-2.10 / 17b677c42e1dcd5b4236389e1e6735133f698c7e |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/testReport/ |
| Max. process+thread count | 2364 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4128/2/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
@virajith I rebased last night, but I guess we are seeing the same set of unit test failures. These tests have been failing before and don't seem to be related to our patch. Please see another example, where we see the same set of unit test failures for another backport to branch-2.10. I think we can commit this PR.
https://github.com/apache/hadoop/pull/4124
The failures in Yetus in the last run are unrelated to the changes in this PR. I will be merging this. Thanks for the backport @xinglin !
|
2025-04-01T06:37:53.202241
| 2023-07-18T10:30:54
|
1809651422
|
{
"authors": [
"Hexiaoqiao",
"hadoop-yetus",
"tomscut",
"zhangshuyan0"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3579",
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/5854"
}
|
gharchive/pull-request
|
HDFS-17094. EC: Fix bug in block recovery when there are stale datanodes.
Description of PR
When a block recovery occurs, RecoveryTaskStriped in the datanode expects rBlock.getLocations() and rBlock.getBlockIndices() to be in one-to-one correspondence. However, if there are locations in the stale state when the NameNode handles a heartbeat, this correspondence is disrupted: there is no stale location in recoveryLocations, but the block indices array is still complete (i.e. it contains the indices of all the locations).
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1720-L1724
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1754-L1757
This causes BlockRecoveryWorker.RecoveryTaskStriped#recover() to generate a wrong internal block ID, so the corresponding datanode cannot find the replica, thus making the recovery process fail.
https://github.com/apache/hadoop/blob/c44823dadb73a3033f515329f70b2e3126fcb7be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java#L407-L416
This bug needs to be fixed.
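The idea behind the fix can be sketched as follows (a minimal, hypothetical Java sketch, not the actual Hadoop patch; all names are illustrative): whenever stale locations are dropped from recoveryLocations, the corresponding entries must also be dropped from the block indices array, so that locations[i] still maps to indices[i].
import java.util.ArrayList;
import java.util.List;
// Hypothetical sketch: keep EC block indices aligned with the locations
// that survive the stale-node filter. Names are illustrative only.
final class RecoveryLocationFilter {
  static final class Result {
    final List<String> locations;   // datanode IDs kept for recovery
    final byte[] blockIndices;      // EC block index for each kept location
    Result(List<String> locations, byte[] blockIndices) {
      this.locations = locations;
      this.blockIndices = blockIndices;
    }
  }
  /**
   * Filters out stale locations while dropping the matching entries from the
   * indices array, so locations.get(i) always corresponds to blockIndices[i].
   */
  static Result filterStale(String[] allLocations, byte[] allIndices, boolean[] isStale) {
    List<String> keptLocations = new ArrayList<>();
    List<Byte> keptIndices = new ArrayList<>();
    for (int i = 0; i < allLocations.length; i++) {
      if (!isStale[i]) {
        keptLocations.add(allLocations[i]);
        keptIndices.add(allIndices[i]);   // drop the index together with its location
      }
    }
    byte[] indices = new byte[keptIndices.size()];
    for (int i = 0; i < indices.length; i++) {
      indices[i] = keptIndices.get(i);
    }
    return new Result(keptLocations, indices);
  }
}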
How was this patch tested?
Add a new unit test.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 43s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 :green_heart:
mvninstall
45m 26s
trunk passed
+1 :green_heart:
compile
1m 24s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
1m 19s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
checkstyle
1m 16s
trunk passed
+1 :green_heart:
mvnsite
1m 31s
trunk passed
+1 :green_heart:
javadoc
1m 12s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
1m 39s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
3m 20s
trunk passed
+1 :green_heart:
shadedclient
35m 43s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
1m 13s
the patch passed
+1 :green_heart:
compile
1m 16s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
1m 16s
the patch passed
+1 :green_heart:
compile
1m 11s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
javac
1m 11s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 2s
the patch passed
+1 :green_heart:
mvnsite
1m 19s
the patch passed
+1 :green_heart:
javadoc
0m 56s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
1m 30s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
3m 13s
the patch passed
+1 :green_heart:
shadedclient
35m 58s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
214m 54s
hadoop-hdfs in the patch passed.
+1 :green_heart:
asflicense
0m 55s
The patch does not generate ASF License warnings.
357m 49s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5854
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname
Linux f7f9bf4d0ae1 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 5b72c1d7c5ef527c328245874ab5f5d7ab86e9ab
Default Java
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/testReport/
Max. process+thread count
3374 (vs. ulimit of 5500)
modules
C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/1/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@Hexiaoqiao @tomscut Thanks for your review. I've updated this PR according to the suggestions. Please take a look, thanks again.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 42s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 :green_heart:
mvninstall
52m 58s
trunk passed
+1 :green_heart:
compile
1m 42s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
1m 29s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
checkstyle
1m 23s
trunk passed
+1 :green_heart:
mvnsite
1m 40s
trunk passed
+1 :green_heart:
javadoc
1m 21s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
2m 0s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
4m 5s
trunk passed
-1 :x:
shadedclient
42m 13s
branch has errors when building and testing our client artifacts.
_ Patch Compile Tests _
-1 :x:
mvninstall
0m 23s
/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
hadoop-hdfs in the patch failed.
+1 :green_heart:
compile
1m 32s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
1m 32s
the patch passed
+1 :green_heart:
compile
1m 26s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
javac
1m 26s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 13s
the patch passed
+1 :green_heart:
mvnsite
1m 30s
the patch passed
+1 :green_heart:
javadoc
1m 6s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
1m 36s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
3m 48s
the patch passed
+1 :green_heart:
shadedclient
36m 38s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
223m 42s
hadoop-hdfs in the patch passed.
+1 :green_heart:
asflicense
0m 58s
The patch does not generate ASF License warnings.
382m 58s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5854
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname
Linux 79c529060291 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 446ddffc53cb891e0a410bd76a6864666f22ff11
Default Java
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/testReport/
Max. process+thread count
3594 (vs. ulimit of 5500)
modules
C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/2/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 42s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+1 :green_heart:
mvninstall
49m 27s
trunk passed
+1 :green_heart:
compile
1m 27s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
compile
1m 21s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
checkstyle
1m 14s
trunk passed
+1 :green_heart:
mvnsite
1m 29s
trunk passed
+1 :green_heart:
javadoc
1m 12s
trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
1m 38s
trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
3m 20s
trunk passed
+1 :green_heart:
shadedclient
35m 43s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 :green_heart:
mvninstall
1m 16s
the patch passed
+1 :green_heart:
compile
1m 12s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javac
1m 12s
the patch passed
+1 :green_heart:
compile
1m 11s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
javac
1m 11s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 1s
the patch passed
+1 :green_heart:
mvnsite
1m 16s
the patch passed
+1 :green_heart:
javadoc
0m 57s
the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 :green_heart:
javadoc
1m 31s
the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
+1 :green_heart:
spotbugs
3m 14s
the patch passed
+1 :green_heart:
shadedclient
36m 2s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
215m 51s
hadoop-hdfs in the patch passed.
+1 :green_heart:
asflicense
0m 58s
The patch does not generate ASF License warnings.
362m 17s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/5854
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname
Linux 3913e9b84c85 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 446ddffc53cb891e0a410bd76a6864666f22ff11
Default Java
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/testReport/
Max. process+thread count
3028 (vs. ulimit of 5500)
modules
C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5854/3/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@zhangshuyan0 Could you please backport this to branch-3.3? Thanks!
@zhangshuyan0 Could you please backport this to branch-3.3? Thanks!
Ok, I'll do this later.
@tomscut This PR can be cherry-picked to branch-3.3 smoothly. Please cherry-pick it directly if you think the fix is also needed on branch-3.3, rather than submitting another PR. Thanks.
@tomscut This PR can be cherry-picked to branch-3.3 smoothly. Please cherry-pick it directly if you think the fix is also needed on branch-3.3, rather than submitting another PR. Thanks.
OK, I have backported it to branch-3.3. I thought it would be safer to trigger Jenkins, but for this PR it's really not necessary. Thank you for your advice.
|
2025-04-01T06:37:53.450590
| 2023-10-14T01:59:03
|
1942869576
|
{
"authors": [
"hadoop-yetus",
"slfan1989"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3580",
"repo": "apache/hadoop",
"url": "https://github.com/apache/hadoop/pull/6189"
}
|
gharchive/pull-request
|
YARN-11592. Add timeout to GPGUtils#invokeRMWebService.
Description of PR
JIRA: YARN-11592. Add timeout to GPGUtils#invokeRMWebService.
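A minimal sketch of the general approach, assuming a Jersey 1.x client like the one GPGUtils uses (class, method wiring, and parameter names below are illustrative, not the actual patch): give the client explicit connect and read timeouts so a hung ResourceManager endpoint cannot block the Global Policy Generator indefinitely.
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
// Hypothetical helper showing the timeout idea; the real GPGUtils code differs.
public final class TimedRmWebServiceClient {
  public static ClientResponse get(String url, int connectTimeoutMs, int readTimeoutMs) {
    Client client = Client.create();
    // Without these, a stalled RM web endpoint can hang the caller forever.
    client.setConnectTimeout(connectTimeoutMs);
    client.setReadTimeout(readTimeoutMs);
    try {
      return client.resource(url)
          .accept("application/xml")
          .get(ClientResponse.class);
    } finally {
      client.destroy();
    }
  }
}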
How was this patch tested?
For code changes:
[ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
11m 33s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 1 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 19s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 22s
trunk passed
+1 :green_heart:
compile
4m 30s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 3s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 11s
trunk passed
+1 :green_heart:
javadoc
2m 13s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 7s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
3m 34s
trunk passed
+1 :green_heart:
shadedclient
20m 48s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 9s
the patch passed
+1 :green_heart:
compile
4m 0s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
4m 0s
the patch passed
+1 :green_heart:
compile
3m 50s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 50s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 6s
the patch passed
+1 :green_heart:
mvnsite
1m 54s
the patch passed
+1 :green_heart:
javadoc
1m 56s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
1m 53s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
3m 43s
the patch passed
+1 :green_heart:
shadedclient
21m 7s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 55s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 49s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
0m 55s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
141m 23s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 4b7c9e4364ed 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 3f3f91ce98cd23e9a14a7af041e635179617c5a8
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/testReport/
Max. process+thread count
553 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/1/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 59s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
19m 51s
trunk passed
+1 :green_heart:
compile
4m 29s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 54s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 24s
trunk passed
+1 :green_heart:
shadedclient
20m 48s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 26s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 27s
the patch passed
+1 :green_heart:
compile
3m 54s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 54s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
-0 :warning:
checkstyle
1m 8s
/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged - 0 fixed = 165 total (was 164)
+1 :green_heart:
mvnsite
2m 32s
the patch passed
+1 :green_heart:
javadoc
2m 31s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 27s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 40s
the patch passed
+1 :green_heart:
shadedclient
21m 22s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 57s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 50s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 55s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
137m 2s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 4db31f3a3fa2 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 2bac4178db80535ab2b252690b6bbf535f52d067
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/testReport/
Max. process+thread count
617 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/2/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 5s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 15s
trunk passed
+1 :green_heart:
compile
4m 32s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 25s
trunk passed
+1 :green_heart:
shadedclient
21m 9s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 29s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 56s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 56s
the patch passed
+1 :green_heart:
blanks
0m 1s
The patch has no blanks issues.
-0 :warning:
checkstyle
1m 8s
/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged - 0 fixed = 165 total (was 164)
+1 :green_heart:
mvnsite
2m 28s
the patch passed
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 21s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 41s
the patch passed
+1 :green_heart:
shadedclient
21m 12s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 49s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 55s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
137m 56s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux af5599409b7d 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 16d99187211c620092dc2aaa593e196bb94b3359
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/testReport/
Max. process+thread count
755 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/3/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 27s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 4s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
19m 59s
trunk passed
+1 :green_heart:
compile
4m 30s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 5s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 16s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 25s
trunk passed
+1 :green_heart:
shadedclient
21m 6s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 52s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 52s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 4s
the patch passed
+1 :green_heart:
mvnsite
2m 30s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 33s
the patch passed
+1 :green_heart:
shadedclient
21m 9s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 36s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 56s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 47s
The patch does not generate ASF License warnings.
136m 30s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 36fc2cc59e18 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / c38f58c72e9d4bc15ba8236c956169669a5d0bbd
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/testReport/
Max. process+thread count
613 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/4/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 26s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 1s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 53s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
22m 9s
trunk passed
+1 :green_heart:
compile
5m 11s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 24s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 40s
trunk passed
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 35s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 9s
trunk passed
+1 :green_heart:
shadedclient
25m 50s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 17s
the patch passed
+1 :green_heart:
compile
3m 54s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 54s
the patch passed
+1 :green_heart:
compile
4m 30s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
4m 30s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 15s
the patch passed
+1 :green_heart:
mvnsite
2m 17s
the patch passed
+1 :green_heart:
javadoc
2m 32s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 23s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 44s
the patch passed
+1 :green_heart:
shadedclient
23m 40s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 52s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
-1 :x:
unit
0m 34s
/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
hadoop-yarn-client in the patch failed.
+1 :green_heart:
unit
0m 57s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 48s
The patch does not generate ASF License warnings.
147m 23s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 602f21487dad 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / c38f58c72e9d4bc15ba8236c956169669a5d0bbd
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/testReport/
Max. process+thread count
554 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6189/1/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 24s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 0s
codespell was not available.
+0 :ok:
detsecrets
0m 0s
detect-secrets was not available.
+0 :ok:
xmllint
0m 0s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
15m 39s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 10s
trunk passed
+1 :green_heart:
compile
4m 29s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 1s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 13s
trunk passed
+1 :green_heart:
mvnsite
2m 50s
trunk passed
+1 :green_heart:
javadoc
2m 52s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 45s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 26s
trunk passed
+1 :green_heart:
shadedclient
21m 5s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 26s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 58s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 58s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 5s
the patch passed
+1 :green_heart:
mvnsite
2m 30s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 26s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 35s
the patch passed
+1 :green_heart:
shadedclient
20m 57s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 48s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
26m 6s
hadoop-yarn-client in the patch passed.
+1 :green_heart:
unit
0m 58s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 50s
The patch does not generate ASF License warnings.
162m 39s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux 740c2c885900 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 0836b2b76db98ef0b298bc2580e71b278aa46cc2
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/testReport/
Max. process+thread count
577 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/5/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@goiri Can you help review this PR? Thank you very much!
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Logfile
Comment
+0 :ok:
reexec
0m 25s
Docker mode activated.
_ Prechecks _
+1 :green_heart:
dupname
0m 0s
No case conflicting files found.
+0 :ok:
codespell
0m 1s
codespell was not available.
+0 :ok:
detsecrets
0m 1s
detect-secrets was not available.
+0 :ok:
xmllint
0m 1s
xmllint was not available.
+1 :green_heart:
@author
0m 0s
The patch does not contain any @author tags.
+1 :green_heart:
test4tests
0m 0s
The patch appears to include 2 new or modified test files.
_ trunk Compile Tests _
+0 :ok:
mvndep
16m 13s
Maven dependency ordering for branch
+1 :green_heart:
mvninstall
20m 16s
trunk passed
+1 :green_heart:
compile
4m 39s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
compile
4m 0s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
checkstyle
1m 12s
trunk passed
+1 :green_heart:
mvnsite
2m 51s
trunk passed
+1 :green_heart:
javadoc
2m 51s
trunk passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 44s
trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 23s
trunk passed
+1 :green_heart:
shadedclient
21m 13s
branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 :ok:
mvndep
0m 25s
Maven dependency ordering for patch
+1 :green_heart:
mvninstall
1m 28s
the patch passed
+1 :green_heart:
compile
3m 53s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javac
3m 53s
the patch passed
+1 :green_heart:
compile
3m 57s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
javac
3m 57s
the patch passed
+1 :green_heart:
blanks
0m 0s
The patch has no blanks issues.
+1 :green_heart:
checkstyle
1m 6s
the patch passed
+1 :green_heart:
mvnsite
2m 32s
the patch passed
+1 :green_heart:
javadoc
2m 30s
the patch passed with JDK Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04
+1 :green_heart:
javadoc
2m 27s
the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 :green_heart:
spotbugs
4m 33s
the patch passed
+1 :green_heart:
shadedclient
20m 56s
patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 :green_heart:
unit
0m 56s
hadoop-yarn-api in the patch passed.
+1 :green_heart:
unit
4m 49s
hadoop-yarn-common in the patch passed.
+1 :green_heart:
unit
26m 10s
hadoop-yarn-client in the patch passed.
+1 :green_heart:
unit
0m 59s
hadoop-yarn-server-globalpolicygenerator in the patch passed.
+1 :green_heart:
asflicense
0m 51s
The patch does not generate ASF License warnings.
163m 39s
Subsystem
Report/Notes
Docker
ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hadoop/pull/6189
Optional Tests
dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint
uname
Linux a7bba7fed213 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool
maven
Personality
dev-support/bin/hadoop.sh
git revision
trunk / 6256a3c5e19afd9e1afc1f8b7e4d236db5aaca86
Default Java
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-<IP_ADDRESS>+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/testReport/
Max. process+thread count
726 (vs. ulimit of 5500)
modules
C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-yarn
Console output
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6189/6/console
versions
git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by
Apache Yetus 0.14.0 https://yetus.apache.org
This message was automatically generated.
@goiri Thank you very much for your help in reviewing the code!
|
2025-04-01T06:37:53.602043
| 2019-08-13T10:23:24
|
480077860
|
{
"authors": [
"Apache-HBase",
"Apache9",
"openinx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3581",
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/486"
}
|
gharchive/pull-request
|
HBASE-22810 Initialize a separate ThreadPoolExecutor for taking/restoring snapshot
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
38
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
337
master passed
+1
compile
54
master passed
+1
checkstyle
80
master passed
+1
shadedjars
274
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
37
master passed
0
spotbugs
254
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
250
master passed
_ Patch Compile Tests _
+1
mvninstall
304
the patch passed
+1
compile
56
the patch passed
+1
javac
56
the patch passed
-1
checkstyle
77
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
274
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
946
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
33
the patch passed
+1
findbugs
255
the patch passed
_ Other Tests _
+1
unit
6636
hbase-server in the patch passed.
+1
asflicense
27
The patch does not generate ASF License warnings.
10077
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 4d977315dc04 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 8c1edb3bba
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/artifact/out/diff-checkstyle-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/testReport/
Max. process+thread count
5094 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/1/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
46
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
331
master passed
+1
compile
52
master passed
+1
checkstyle
75
master passed
+1
shadedjars
275
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
36
master passed
0
spotbugs
256
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
253
master passed
_ Patch Compile Tests _
+1
mvninstall
301
the patch passed
+1
compile
53
the patch passed
+1
javac
53
the patch passed
-1
checkstyle
75
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
1
The patch has no whitespace issues.
+1
shadedjars
265
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
970
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
33
the patch passed
+1
findbugs
249
the patch passed
_ Other Tests _
-1
unit
6854
hbase-server in the patch failed.
+1
asflicense
32
The patch does not generate ASF License warnings.
10271
Reason
Tests
Failed junit tests
hadoop.hbase.master.assignment.TestOpenRegionProcedureHang
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 499a55e83057 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / e69af5affe
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/testReport/
Max. process+thread count
4524 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/2/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
37
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
+1
mvninstall
323
master passed
+1
compile
53
master passed
+1
checkstyle
74
master passed
+1
shadedjars
261
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
38
master passed
0
spotbugs
207
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
206
master passed
_ Patch Compile Tests _
+1
mvninstall
297
the patch passed
+1
compile
52
the patch passed
+1
javac
52
the patch passed
-1
checkstyle
72
hbase-server: The patch generated 4 new + 167 unchanged - 2 fixed = 171 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
263
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
905
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
32
the patch passed
+1
findbugs
243
the patch passed
_ Other Tests _
+1
unit
6706
hbase-server in the patch passed.
+1
asflicense
24
The patch does not generate ASF License warnings.
9937
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux aec1cc3557ba 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 27ed2ac071
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/artifact/out/diff-checkstyle-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/testReport/
Max. process+thread count
4995 (vs. ulimit of 10000)
modules
C: hbase-server U: hbase-server
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/3/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
84
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
40
Maven dependency ordering for branch
+1
mvninstall
455
master passed
+1
compile
96
master passed
+1
checkstyle
136
master passed
+1
shadedjars
376
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
64
master passed
0
spotbugs
308
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
366
master passed
_ Patch Compile Tests _
0
mvndep
17
Maven dependency ordering for patch
+1
mvninstall
413
the patch passed
+1
compile
100
the patch passed
+1
javac
100
the patch passed
-1
checkstyle
104
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
371
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
1282
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
65
the patch passed
+1
findbugs
393
the patch passed
_ Other Tests _
+1
unit
204
hbase-common in the patch passed.
-1
unit
13817
hbase-server in the patch failed.
+1
asflicense
66
The patch does not generate ASF License warnings.
18990
Reason
Tests
Failed junit tests
hadoop.hbase.util.TestFromClientSide3WoUnsafe
hadoop.hbase.client.TestFromClientSide3
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux cd6837ae2514 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 27ed2ac071
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/testReport/
Max. process+thread count
4725 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/4/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
38
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
39
Maven dependency ordering for branch
+1
mvninstall
340
master passed
+1
compile
74
master passed
+1
checkstyle
93
master passed
+1
shadedjars
263
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
54
master passed
0
spotbugs
257
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
305
master passed
_ Patch Compile Tests _
0
mvndep
14
Maven dependency ordering for patch
+1
mvninstall
286
the patch passed
+1
compile
71
the patch passed
+1
javac
71
the patch passed
-1
checkstyle
69
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
264
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
893
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
55
the patch passed
+1
findbugs
310
the patch passed
_ Other Tests _
+1
unit
174
hbase-common in the patch passed.
-1
unit
6870
hbase-server in the patch failed.
+1
asflicense
38
The patch does not generate ASF License warnings.
10643
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 2b37abec7345 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / 53db390f60
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/testReport/
Max. process+thread count
4656 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/5/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
But please fix the checkstyle issues before merging...
I don't think the checkstyle warning is a real issue; the '(' is preceded by a whitespace because we want to keep the code aligned as before:
/**
* Messages originating from Client to Master.<br>
* C_M_CREATE_TABLE<br>
* Client asking Master to create a table.
*/
C_M_CREATE_TABLE (47, ExecutorType.MASTER_TABLE_OPERATIONS),
/**
* Messages originating from Client to Master.<br>
* C_M_SNAPSHOT_TABLE<br>
* Client asking Master to snapshot an offline table.
*/
C_M_SNAPSHOT_TABLE (48, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),
/**
* Messages originating from Client to Master.<br>
* C_M_RESTORE_SNAPSHOT<br>
* Client asking Master to restore a snapshot.
*/
C_M_RESTORE_SNAPSHOT (49, ExecutorType.MASTER_SNAPSHOT_OPERATIONS),
:broken_heart: -1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
58
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
+1
test4tests
0
The patch appears to include 1 new or modified test files.
_ master Compile Tests _
0
mvndep
39
Maven dependency ordering for branch
+1
mvninstall
426
master passed
+1
compile
100
master passed
+1
checkstyle
139
master passed
+1
shadedjars
343
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
70
master passed
0
spotbugs
327
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
390
master passed
_ Patch Compile Tests _
0
mvndep
18
Maven dependency ordering for patch
+1
mvninstall
405
the patch passed
+1
compile
107
the patch passed
+1
javac
107
the patch passed
-1
checkstyle
104
hbase-server: The patch generated 3 new + 162 unchanged - 7 fixed = 165 total (was 169)
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
359
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
1294
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
70
the patch passed
+1
findbugs
432
the patch passed
_ Other Tests _
+1
unit
217
hbase-common in the patch passed.
-1
unit
19484
hbase-server in the patch failed.
+1
asflicense
58
The patch does not generate ASF License warnings.
24660
Reason
Tests
Failed junit tests
hadoop.hbase.client.TestFromClientSide
hadoop.hbase.replication.TestReplicationDisableInactivePeer
hadoop.hbase.snapshot.TestFlushSnapshotFromClient
hadoop.hbase.client.TestFromClientSide3
hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs
hadoop.hbase.replication.TestReplicationSmallTests
hadoop.hbase.master.TestAssignmentManagerMetrics
hadoop.hbase.replication.TestReplicationSmallTestsSync
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/486
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux 8e8f7bc61373 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-486/out/precommit/personality/provided.sh
git revision
master / d9d5f69fc6
Default Java
1.8.0_181
checkstyle
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/diff-checkstyle-hbase-server.txt
unit
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/artifact/out/patch-unit-hbase-server.txt
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/testReport/
Max. process+thread count
4829 (vs. ulimit of 10000)
modules
C: hbase-common hbase-server U: .
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-486/6/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
|
2025-04-01T06:37:53.716679
| 2019-08-13T17:41:50
|
480281328
|
{
"authors": [
"Apache-HBase",
"jatsakthi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3582",
"repo": "apache/hbase",
"url": "https://github.com/apache/hbase/pull/489"
}
|
gharchive/pull-request
|
HBASE-22845 Revert MetaTableAccessor#makePutFromTableState access to …
…public
HBCK2 is dependent on it
:confetti_ball: +1 overall
Vote
Subsystem
Runtime
Comment
0
reexec
62
Docker mode activated.
_ Prechecks _
+1
dupname
0
No case conflicting files found.
+1
hbaseanti
0
Patch does not have any anti-patterns.
+1
@author
0
The patch does not contain any @author tags.
-0
test4tests
0
The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ master Compile Tests _
+1
mvninstall
387
master passed
+1
compile
23
master passed
+1
checkstyle
30
master passed
+1
shadedjars
269
branch has no errors when building our shaded downstream artifacts.
+1
javadoc
24
master passed
0
spotbugs
72
Used deprecated FindBugs config; considering switching to SpotBugs.
+1
findbugs
69
master passed
_ Patch Compile Tests _
+1
mvninstall
294
the patch passed
+1
compile
24
the patch passed
+1
javac
24
the patch passed
+1
checkstyle
28
the patch passed
+1
whitespace
0
The patch has no whitespace issues.
+1
shadedjars
263
patch has no errors when building our shaded downstream artifacts.
+1
hadoopcheck
955
Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
+1
javadoc
21
the patch passed
+1
findbugs
77
the patch passed
_ Other Tests _
+1
unit
108
hbase-client in the patch passed.
+1
asflicense
11
The patch does not generate ASF License warnings.
3044
Subsystem
Report/Notes
Docker
Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/artifact/out/Dockerfile
GITHUB PR
https://github.com/apache/hbase/pull/489
Optional Tests
dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile
uname
Linux cb8f6c7cb6e6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux
Build tool
maven
Personality
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-489/out/precommit/personality/provided.sh
git revision
master / 8c1edb3bba
Default Java
1.8.0_181
Test Results
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/testReport/
Max. process+thread count
291 (vs. ulimit of 10000)
modules
C: hbase-client U: hbase-client
Console output
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-489/1/console
versions
git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11
Powered by
Apache Yetus 0.10.0 http://yetus.apache.org
This message was automatically generated.
|
2025-04-01T06:37:53.737222
| 2022-11-08T12:17:43
|
1440089068
|
{
"authors": [
"devaspatikrishnatri"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3583",
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/3740"
}
|
gharchive/pull-request
|
HIVE-26656: Remove hsqldb dependency in Hive due to CVE-2022-41853
What changes were proposed in this pull request?
Remove the hsqldb dependency from Hive.
Why are the changes needed?
To fix CVE-2022-41853.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Manually tested.
@saihemanth-cloudera please see this
|
2025-04-01T06:37:53.740410
| 2023-04-13T10:40:30
|
1666166720
|
{
"authors": [
"subhasisgorai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3584",
"repo": "apache/hive",
"url": "https://github.com/apache/hive/pull/4230"
}
|
gharchive/pull-request
|
HIVE-26930: Support for increased retention of Notification Logs and Change Manager entries
What changes does this PR contain?
To support Planned/Unplanned Failover, we need the capability to increase the retention period for both the Notification Logs and Change Manager entries until the reverse replication (i.e., the Optimized Bootstrap) completes successfully. A database-level property 'repl.db.under.failover.sync.pending' was introduced to signify this state. This PR contains the changes for:
selective deletion of notification events that are not relevant to the database(s) in failover
skipping the CM clearer thread execution until the time the Optimized Bootstrap is not done
Why this change is needed?
The change is needed to make the Optimized Bootstrap and Point-in-time consistency possible. If the relevant Notification logs and Change Manager entries are not retained, we can't perform the Optimized Bootstrap.
Were the changes tested?
I have included relevant unit tests in this PR, and will also perform manual verification after deploying the changes on a cluster.
Does this PR introduce any user-facing change?
No
Closing this PR, will raise another.
|
2025-04-01T06:37:53.742472
| 2023-02-09T14:22:18
|
1577983566
|
{
"authors": [
"hansva"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3585",
"repo": "apache/hop",
"url": "https://github.com/apache/hop/issues/2291"
}
|
gharchive/issue
|
[Task]: update MSSQL JDBC Driver
What needs to happen?
Update the version of the MSSQL JDBC driver included in the Hop installation.
https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc
Issue Priority
Priority: 2
Issue Component
Component: Database
already done with #2445
|
2025-04-01T06:37:53.743637
| 2024-01-07T13:33:38
|
2069132555
|
{
"authors": [
"phax"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3586",
"repo": "apache/httpcomponents-client",
"url": "https://github.com/apache/httpcomponents-client/pull/533"
}
|
gharchive/pull-request
|
5.3.x - HTTPCLIENT-2314
Make sure an UnknownHostException is thrown, even if a custom DnsResolver implementation is used.
Fixes the issue of HTTPCLIENT-2314
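For illustration, a minimal sketch of a custom DnsResolver wired into a 5.x client; the builder/interface names below reflect my understanding of the HttpClient 5.x API, and the "only localhost" filtering logic is purely an assumption to show where the UnknownHostException originates:

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.apache.hc.client5.http.DnsResolver;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManagerBuilder;
import org.apache.hc.client5.http.io.HttpClientConnectionManager;

public class CustomDnsResolverExample {

    // A custom resolver that only answers for localhost and throws
    // UnknownHostException for everything else; with this fix, that
    // exception should surface to the caller instead of being lost.
    static final DnsResolver ONLY_LOCALHOST = new DnsResolver() {
        @Override
        public InetAddress[] resolve(final String host) throws UnknownHostException {
            if ("localhost".equalsIgnoreCase(host)) {
                return new InetAddress[] { InetAddress.getLoopbackAddress() };
            }
            throw new UnknownHostException(host + " is not resolvable by this resolver");
        }

        @Override
        public String resolveCanonicalHostname(final String host) throws UnknownHostException {
            return host;
        }
    };

    public static void main(String[] args) throws Exception {
        HttpClientConnectionManager cm = PoolingHttpClientConnectionManagerBuilder.create()
                .setDnsResolver(ONLY_LOCALHOST)
                .build();
        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build()) {
            // Requests to any host other than localhost are expected to fail
            // with an UnknownHostException propagated by the client.
        }
    }
}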
@ok2c Thanks for the review. Done
|
2025-04-01T06:37:53.791398
| 2024-04-12T00:25:32
|
2238815549
|
{
"authors": [
"danny0405",
"the-other-tim-brown"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3587",
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/11001"
}
|
gharchive/pull-request
|
[HUDI-7576] Improve efficiency of getRelativePartitionPath, reduce computation of partitionPath in AbstractTableFileSystemView
Change Logs
Improve the efficiency of getRelativePartitionPath by reducing the number of operations on the path object that are required to get the final result
Reduce the number of times a partitionPath is computed by supplying a partition path argument where possible in the AbstractFileSystemView
Impact
Reduces overhead of building FSViews with large numbers of files
Risk level (write none, low medium or high below)
None
Documentation Update
Describe any necessary documentation update if there is any new feature, config, or user-facing change. If not, put "none".
The config description must be updated if new configs are added or the default value of the configs are changed
Any new feature or user-facing change requires updating the Hudi website. Please create a Jira ticket, attach the
ticket number here and follow the instruction to make
changes to the website.
Contributor's checklist
[ ] Read through contributor's guide
[ ] Change Logs and Impact were stated clearly
[ ] Adequate tests were added if applicable
[ ] CI passed
@the-other-tim-brown Can you fix the Azure CI failure?
@danny0405 error is:
TestUpsertPartitioner.testUpsertPartitionerWithSmallFileHandlingPickingMultipleCandidates:470 expected: <[BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-1, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-2, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-3, partitionPath=2016/03/15}]> but was: <[BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-3, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-2, partitionPath=2016/03/15}, BucketInfo {bucketType=UPDATE, fileIdPrefix=fg-1, partitionPath=2016/03/15}]>
I'll put up a separate minor pr to make the ordering deterministic for small file handling
@danny0405 https://github.com/apache/hudi/pull/11008
@danny0405 can you take another look when you get a chance? I have updated a few spots in the code
|
2025-04-01T06:37:53.795401
| 2021-12-20T06:38:46
|
1084417784
|
{
"authors": [
"nsivabalan",
"scxwhite"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3588",
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/4400"
}
|
gharchive/pull-request
|
[HUDI-3069] compact improve
Brief change log
Improve compaction performance.
I found that when the compaction plan is generated, the delta log files under each file group are arranged in the natural (ascending) order of instant time. In the majority of cases the latest data is in the latest delta log file, so sorting the files from largest to smallest instant time largely avoids rewriting data during compaction and therefore shortens the compaction time.
In addition, when reading a delta log file we compare its records with the data in the external spillable map. If the old record is selected, there is no need to rewrite that record in the external spillable map; rewriting data wastes a lot of resources when the map has spilled to disk.
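As a rough illustration of the ordering idea only (the type and accessor names below are placeholders, not Hudi's actual API):

import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: order delta log files newest-instant-first so the latest
// version of a record is seen first and older copies need not be rewritten
// into the spillable map.
final class LogFileOrdering {
    static void sortNewestFirst(List<DeltaLogFile> logFiles) {
        logFiles.sort(Comparator.comparing(DeltaLogFile::getInstantTime).reversed());
    }

    interface DeltaLogFile {
        String getInstantTime();
    }
}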
This pull request is already covered by existing tests, such as (please describe tests).
Committer checklist
[*] Has a corresponding JIRA in PR title & commit()
[*] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
@yihua : Can you follow up on the review please.
|
2025-04-01T06:37:53.797208
| 2022-04-30T11:57:19
|
1221813896
|
{
"authors": [
"codope"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3589",
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/5476"
}
|
gharchive/pull-request
|
[HUDI-3931][DOCS] Guide to setup async metadata indexing
@nsivabalan @bhasudha @xushiyan Added this guide to go under Services tab. We can land this doc. I'll update the blog #5449 with more design elements. That can go after multi-modal index blog.
|
2025-04-01T06:37:53.800610
| 2022-07-05T05:45:53
|
1293816447
|
{
"authors": [
"yihua"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3590",
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/6043"
}
|
gharchive/pull-request
|
[HUDI-4360] Fix HoodieDropPartitionsTool based on refactored meta sync
What is the purpose of the pull request
This PR fixes HoodieDropPartitionsTool based on refactored meta sync and the failed Java CI on master.
Brief change log
Fix the usage of old configs and APIs in HoodieDropPartitionsTool.
Verify this pull request
This pull request is a trivial rework / code cleanup without any test coverage.
Committer checklist
[ ] Has a corresponding JIRA in PR title & commit
[ ] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
The changes are limited to the HoodieDropPartitionsTool only and should not affect others. Java CI passes. Landing this to fix the master soon.
|
2025-04-01T06:37:53.808540
| 2023-10-20T01:46:16
|
1953339575
|
{
"authors": [
"yihua"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3591",
"repo": "apache/hudi",
"url": "https://github.com/apache/hudi/pull/9894"
}
|
gharchive/pull-request
|
[HUDI-6798] Add record merging mode and implement event-time ordering in the new file group reader
Change Logs
This PR adds a new table config hoodie.record.merge.mode to control the record merging mode and behavior in the new file group reader (HoodieFileGroupReader) and implements event-time ordering in it. The table config hoodie.record.merge.mode is going to be the single config that determines how the record merging happens in release 1.0 and beyond. Detailed changes include:
Adds RecordMergeMode to define three merging mode:
OVERWRITE_WITH_LATEST: using transaction time to merge records, i.e., the record from later transaction overwrites the earlier record with the same key. This corresponds to the behavior of existing payload class OverwriteWithLatestAvroPayload.
EVENT_TIME_ORDERING: using event time as the ordering to merge records, i.e., the record with the larger event time overwrites the record with the smaller event time on the same key, regardless of transaction time. The event time or preCombine field needs to be specified by the user. This corresponds to the behavior of existing payload class DefaultHoodieRecordPayload.
CUSTOM: using custom merging logic specified by the user. When a user specifies a custom record merger strategy or payload class with Avro record merger, this is going to be specified so the record merging follows user-defined logic as before.
As of now, setting hoodie.record.merge.mode is not mandatory (HUDI-7850 as a follow-up to make it mandatory in release 1.0). This PR adds the inference logic based on the payload class name, payload type, and record merger strategy in HoodieTableMetaClient to properly set hoodie.record.merge.mode in the table config.
Adds merging logic of OVERWRITE_WITH_LATEST and EVENT_TIME_ORDERING in HoodieBaseFileGroupRecordBuffer that do not have to go through the record merger APIs to simplify the implementation (opening up for further optimization when possible). As a fallback, user can always set CUSTOM as the record merge mode to leverage payload class or record merger implementation for transaction or event time-based merging.
Adds a custom compareTo API in HoodieBaseFileGroupRecordBuffer to compare ordering field values of different types, due to an issue around ordering values in the delete records (HUDI-7848).
Adjusts tests to cover the new record merge modes.
New unit and functional tests are added around the new logic. Existing unit and functional tests using file group readers on Spark cover all different merging modes.
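For illustration only, a hedged sketch of how the new table config might be set on a Spark datasource write; the merge-mode key and value come from this PR's description, while the remaining options are standard Hudi write options used here as assumptions:

import org.apache.spark.sql.DataFrame

// Hedged sketch: write a Hudi table using the new merge-mode table config.
def writeWithEventTimeOrdering(df: DataFrame, basePath: String): Unit = {
  df.write.format("hudi")
    .option("hoodie.table.name", "trips")
    .option("hoodie.datasource.write.recordkey.field", "uuid")
    .option("hoodie.datasource.write.precombine.field", "ts")   // event-time / ordering field
    .option("hoodie.record.merge.mode", "EVENT_TIME_ORDERING")  // new config from this PR
    .mode("append")
    .save(basePath)
}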
Impact
Add record merging mode and implement event-time ordering in the new file group reader
Risk level
medium
Documentation Update
HUDI-7842 to update the docs on the website
Contributor's checklist
[ ] Read through contributor's guide
[ ] Change Logs and Impact were stated clearly
[ ] Adequate tests were added if applicable
[ ] CI passed
CI is green.
|
2025-04-01T06:37:53.825584
| 2024-10-16T20:34:32
|
2592998408
|
{
"authors": [
"dwilson1988",
"loicalleyne",
"zeroshade"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3592",
"repo": "apache/iceberg-go",
"url": "https://github.com/apache/iceberg-go/pull/176"
}
|
gharchive/pull-request
|
IO Implementation using Go CDK
Extends PR #111
Implements #92. The Go CDK has well-maintained implementations for accessing object stores such as S3, Azure, and GCS via an io/fs.Fs-like interface. However, their file interface doesn't support the io.ReaderAt interface or the Seek() function that Iceberg-Go requires for files. Furthermore, the File components are private, so we copied the wrappers and implemented the remaining functions inside Iceberg-Go directly.
In addition, we add support for S3 Read IO using the CDK, providing the option to choose between the existing and new implementation using an extra property.
GCS connection options can be passed in properties map.
@dwilson1988 I saw your note about wanting to work on the CDK features, if you're able to provide some feedback that would be great.
@loicalleyne - happy to take a look. We use this internally in some of our software with Parquet and implemented a ReaderAt. I'll do a more thorough review when I get a chance, but my first thought was to leave it completely separate from the blob.Bucket implementation and let the Create/New funcs simply accept a *blob.Bucket and leave the rest as an exercise to the user. This keeps it more or less completely isolated from the implementation. Thoughts on this direction?
My goal today was just to "get something on paper" to move this forward since the other PR has been stalled since July, I used the other PR as a starting point so I mostly followed the existing patterns. Very open to moving things around if it makes sense. Do you have any idea how your idea would work with the interfaces defined in io.go?
Understood! I'll dig into your last question and get back to you.
Okay, played around a bit and here's where my head is at.
The main reason I'd like to isolate the creation of a *blob.Bucket is that I've found the particular implementation of bucket access can get tricky, and rather than support it in this package for all situations, we can support the most common usage in io.LoadFS/inferFileIOFromSchema and change io.CreateBlobFileIO to accept a *url.URL and a *blob.Bucket. This enables a user to open a bucket with whatever implementation they choose (GCS, Azure, S3, MinIO, Mem, FileSystem, etc.) and there's less code here to maintain.
What I came up with is changing CreateBlobFileIO to:
// CreateBlobFileIO creates a new BlobFileIO instance
func CreateBlobFileIO(parsed *url.URL, bucket *blob.Bucket) (*BlobFileIO, error) {
ctx := context.Background()
return &BlobFileIO{Bucket: bucket, ctx: ctx, opts: &blob.ReaderOptions{}, prefix: parsed.Host + parsed.Path}, nil
}
The URL is still critical there, but now we don't have to concern ourselves with credentials to open the bucket except for in LoadFS.
Thoughts on this?
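For context, a hedged sketch of how a caller might use that proposed signature; gocloud.dev's blob.OpenBucket and the s3blob driver are real, but the bucket URL, prefix, and error handling here are just placeholders:

package main

import (
	"context"
	"log"
	"net/url"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/s3blob" // registers the s3:// scheme; swap for fileblob/memblob in tests
)

func main() {
	ctx := context.Background()

	// Table location: host is the bucket, path becomes the prefix (placeholder values).
	parsed, err := url.Parse("s3://warehouse/default/my_table")
	if err != nil {
		log.Fatal(err)
	}

	// Open the bucket however the caller prefers; credentials/config stay out of iceberg-go.
	bucket, err := blob.OpenBucket(ctx, "s3://"+parsed.Host+"?region=us-east-1")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// CreateBlobFileIO is the proposed constructor from the comment above.
	fileIO, err := CreateBlobFileIO(parsed, bucket)
	if err != nil {
		log.Fatal(err)
	}
	_ = fileIO
}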
@dwilson1988
Sounds good, I've made the changes, please take a look.
@loicalleyne is this still on your radar?
hi @dwilson1988
yes, I'm wrapping up some work on another project and will be jumping back on this in a day or two.
Cool - just checking. I'll be patient. 🙂
@dwilson1988 made the suggested changes, there's a deprecation warning on the S3 config EndpointResolver methods that I haven't had time to look into, maybe you could take a look?
Hi @dwilson1988, do you think you'll have time to take a look at this?
Hi @dwilson1988, do you think you'll have time to take a look at this?
I opened a PR on your branch earlier today
@zeroshade hoping you can review when you've got time.
I should be able to give this a review tomorrow or Friday. In the meantime can you resolve the conflict in the go.mod? Thanks!
@loicalleyne looks like the integration tests are failing, unable to read the manifest files from the minio instance.
I did some debugging by copying some of the test scenarios into a regular Go program (if anyone can tell me how to run Delve in VsCode on a test that uses testify please let me know), running the docker compose file and manually running the commands in iceberg-go\.github\workflows\go-integration.yml (note: to point to the local Minio in Docker I had to run export AWS_S3_ENDPOINT=http://<IP_ADDRESS>:9000).
It seems there's something wrong with the bucket prefix and how it interacts with subsequent calls, the prefix is assigned here
ie. it's trying to HEAD object
default/test_null_nan/metadata/00000-770ce240-af4c-49dd-bae9-6871f55f8be1.metadata.jsonwarehouse/default/test_null_nan/metadata/snap-2616202072048292962-1-6c011b0d-0f2a-4b62-bc17-158f94b1c470.avro
Unfortunately I don't have time to investigate any further right now, @dwilson1988 if you've seen this before please let me know.
I've been able to replicate and debug the issue myself locally. Aside from needing to make a bunch of changes to fix the prefix, bucket and key strings, I was still unable to get gocloud.dev/blob/s3blob to find the file appropriately. I followed it down to the call to clientV2.GetObject and the s3v2.GetObjectInput has all the correct values: Bucket: "warehouse", Key: "default/test_all_types/....." etc. and yet minio still reports a 404. So I'm not sure what's going on.
I'll try poking at this tomorrow a bit more and see if i can make a small mainprog that is able to use s3blob to access a file from minio locally as a place to start.
Then I suspect it might be the s3ForcePathStyle option referred to here. It affected Minio in particular once they moved to s3 V2.
@loicalleyne I haven't dug too far into the blob code, is it a relatively easy fix to handle that s3ForcePathStyle?
My understanding is that it's just another property to pass in props. Would also have to add it as a recognized property/constant in io/s3.go I should think.
s3.UsePathStyle:
// Allows you to enable the client to use path-style addressing, i.e.,
// https://s3.amazonaws.com/BUCKET/KEY . By default, the S3 client will use virtual
// hosted bucket addressing when possible ( https://bucket.s3.amazonaws.com/KEY ).
UsePathStyle bool
@loicalleyne can you take a look at the latest changes I made here?
Is it intended to not provide the choice between virtual hosted bucket addressing and path-style addressing?
LGTM otherwise - the tests are passing :)
@loicalleyne following pyiceberg's example, I've added an option to force virtual addressing. That work for you?
LGTM 👍
@dwilson1988 When you get a chance, can you take a look at the changes I made here. I liked your thought on isolating things, but there was still a bunch of specific options for particular bucket types that needed to get accounted for as the options are not always passed via URL due to how Iceberg config properties work.
So I'd like your thoughts or comments on what I ultimately came up with to simplify what @loicalleyne already had while solving the failing tests and whether it fits what you were thinking and using internally. Once this is merged, I'd definitely greatly appreciate contributions for Azure as you said :)
@zeroshade - I'll take a look this weekend!
|
2025-04-01T06:37:53.833841
| 2024-04-10T08:35:21
|
2235050946
|
{
"authors": [
"Fokko",
"MehulBatra",
"chinmay-bhat",
"swapdewalkar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3593",
"repo": "apache/iceberg-python",
"url": "https://github.com/apache/iceberg-python/issues/595"
}
|
gharchive/issue
|
Implement caching of manifest-files
Feature Request / Improvement
We currently loop over the manifests of a snapshot often just once. But now that we're compounding operations (DELETE+APPEND), there is a fair chance that we read a manifest more than once. The spec states that manifest files are immutable, which means that we can cache them locally using a method annotated with lru_cache.
I am trying to working on this, is it possible to assign it to me?
@swapdewalkar Thanks for picking this up! I've just assigned it to you
Hi @swapdewalkar I wanted to check in and see if you have any updates on this task. If you need any assistance or if there are any obstacles, please let me know—I will be happy to help!
Hi, can we increase the scope of this issue to cache/store all_manifests, data_manifests & delete_manifests? Or do I create a new issue for this? This feature would be useful for tasks like Incremental Scans (Append, Changelog, etc) where we frequently access manifest files. I imagine this to be similar to the java implementation.
Also, since @swapdewalkar hasn't responded and if they do not have the time/bandwidth for the issue, I'm happy to give this a shot! :)
@chinmay-bhat I think we can generalize this quite easily, since from the spec:
Once written, data and metadata files are immutable until they are deleted.
I think we could go as easy to have a lru-cache based on the path to the metadata to cache it :)
Thanks @Fokko for the quick response.
based on the path to the metadata to cache it
I'm not clear on this. Are you saying we can simply add lru_cache to def manifests(self, io: FileIO) in class Snapshot? And then whenever we need data manifests or delete manifests, we iterate over the cached manifests? Wouldn't it be better to cache those too, since as you said, the files are immutable?
For ex:
@lru_cache
def manifests(self, io: FileIO):
    ......
@lru_cache
def data_manifests(self, io: FileIO):
    return [manifest_file for manifest_file in self.manifests(io) if manifest_file.content == ManifestContent.DATA]
@chinmay-bhat I don't think it is as easy as that. We should ensure that the manifest_list path is part of the cache. We could share the cache between calls, since if you do subsequent queries, and the snapshot hasn't been updated, this would speed up the call quite a bit.
We could also make the FileIO part of the caching key. I don't think that's strictly required, but if something changed in the FileIO we might want to invalidate the cache; I'm open to arguments here.
Thank you for clarifying! Here's how I imagine manifests() would look like :)
@lru_cache()
def manifests(self, manifest_location: str) -> List[ManifestFile]:
if manifest_location is not None:
file = load_file_io().new_input(manifest_location)
return list(read_manifest_list(file))
return []
When we call snapshot.manifests(snapshot.manifest_list), if manifest_list is the same, we simply query the cached files. But if the snapshot is updated, manifest_list is also updated, and calling manifests() triggers a re-read of manifest files.
Is this similar to what you have in mind?
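If it helps, a hedged sketch of a module-level cache keyed on the manifest-list path; the import paths and helper names (FileIO, read_manifest_list, new_input) are my assumptions about the pyiceberg internals and may not match exactly:

from functools import lru_cache
from typing import List, Tuple

# Assumed pyiceberg internals; exact import paths may differ.
from pyiceberg.io import FileIO
from pyiceberg.manifest import ManifestFile, read_manifest_list


@lru_cache(maxsize=128)
def _cached_manifests(io: FileIO, manifest_list: str) -> Tuple[ManifestFile, ...]:
    """Read and cache the manifest files of one immutable manifest list."""
    return tuple(read_manifest_list(io.new_input(manifest_list)))


def manifests(io: FileIO, manifest_list: str) -> List[ManifestFile]:
    # Manifest lists are immutable, so the same path always yields the same result.
    return list(_cached_manifests(io, manifest_list))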
|
2025-04-01T06:37:53.835504
| 2024-11-27T13:55:19
|
2698638819
|
{
"authors": [
"Fokko",
"jonathanc-n"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3594",
"repo": "apache/iceberg-rust",
"url": "https://github.com/apache/iceberg-rust/issues/726"
}
|
gharchive/issue
|
Extend the DataFileWriterBuilder tests
In data_file_writer we write out a schema but don't have any tests for that. I think it would be good to write out a schema to validate that the field IDs are there (they are, I checked by hand), and also add a test where we write a DataFile that has a partition.
@Fokko I would like to try working on this, may I be assigned this?
|
2025-04-01T06:37:53.846158
| 2016-09-04T09:46:27
|
174940547
|
{
"authors": [
"alexvanboxel",
"bolkedebruin"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3595",
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/1781"
}
|
gharchive/pull-request
|
[AIRFLOW-467] Allow defining project_id in BigQueryHook
Dear Airflow Maintainers,
Please accept this PR that addresses the following issues:
https://issues.apache.org/jira/browse/AIRFLOW-467
Testing Done:
Unit tests are added including backward compatibility tests
Awesome
|
2025-04-01T06:37:53.851826
| 2018-01-18T12:57:17
|
289619431
|
{
"authors": [
"r39132",
"topedmaria"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3596",
"repo": "apache/incubator-airflow",
"url": "https://github.com/apache/incubator-airflow/pull/2953"
}
|
gharchive/pull-request
|
Update README.md
Make sure you have checked all steps below.
JIRA
[ ] My PR addresses the following Airflow JIRA issues and references them in the PR title. For example, "[AIRFLOW-XXX] My Airflow PR"
https://issues.apache.org/jira/browse/AIRFLOW-XXX
Description
[ ] Here are some details about my PR, including screenshots of any UI changes:
Tests
[ ] My PR adds the following unit tests OR does not need testing for this extremely good reason:
Commits
[ ] My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
Subject is separated from body by a blank line
Subject is limited to 50 characters
Subject does not end with a period
Subject uses the imperative mood ("add", not "adding")
Body wraps at 72 characters
Body explains "what" and "why", not "how"
[ ] Passes git diff upstream/master -u -- "*.py" | flake8 --diff
I'm closing this as there has been no movement from the submitter.
|
2025-04-01T06:37:53.867601
| 2023-07-05T15:51:09
|
1789841860
|
{
"authors": [
"cfmcgrady",
"pan3793",
"waitinfuture"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3597",
"repo": "apache/incubator-celeborn",
"url": "https://github.com/apache/incubator-celeborn/pull/1683"
}
|
gharchive/pull-request
|
[CELEBORN-769] Change default value of celeborn.client.push.maxReqsIn…
…Flight to 16
What changes were proposed in this pull request?
Change default value of celeborn.client.push.maxReqsInFlight to 16.
Why are the changes needed?
Previous value 4 is too small, 16 is more reasonable.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Pass GA.
cc @AngersZhuuuu @pan3793
changing the key from celeborn.client.push.maxReqsInFlight to celeborn.client.push.maxRequestsInFlight? it's better to avoid using abbreviations in configurations. WDYT? @pan3793 @waitinfuture
@cfmcgrady We use Reqs in several configurations, and "max" also is an abbr of "maximum" :)
@cfmcgrady We use Reqs in several configurations, and "max" also is an abbr of "maximum" :)
ok, Spark also has the key like spark.reducer.maxReqsInFlight
changing the key from celeborn.client.push.maxReqsInFlight to celeborn.client.push.maxRequestsInFlight? it's better to avoid using abbreviations in configurations. WDYT? @pan3793 @waitinfuture
+1 personally, but I think it's OK to use abbreviations, sometimes whole word is too long 😄
|
2025-04-01T06:37:53.873485
| 2019-12-12T05:04:23
|
536759331
|
{
"authors": [
"lenboo",
"qiaozhanwei",
"sunnerrr",
"wuchunfu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3598",
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/1456"
}
|
gharchive/issue
|
[BUG] dolphinscheduler compile failed
Problem description:
I have a problem: I have cloned the latest code from GitHub, but the compilation fails.
Compile command:
mvn -U clean package -Prelease -Dmaven.test.skip=true
The following is the compile error information:
Current version:
1.2.0 dev branch
Compiling environment:
MacOS 10.15.2
Expected results:
Hope to be able to compile successfully in any environment and run it successfully.
Please check README.md; there is a 'how to build' section.
You need to compile the .proto files into Java classes; IDEA has a related plugin for this.
I changed the compile command to mvn clean install -Prelease, but it still failed to compile.
Current version:
1.2.0 dev branch? Do you mean the dev branch or the 1.2.0 branch?
I compiled the dev branch, the 1.2.0 branch and the upcoming Apache 1.2.0 release with no problem.
The current branch is dev.
I can't import the Maven dependencies with the IntelliJ IDEA tool; it reports the error 'Cannot resolve io.grpc:grpc-core:1.9.0'.
|
2025-04-01T06:37:53.878513
| 2021-03-26T05:55:47
|
841616320
|
{
"authors": [
"Joder5",
"chengshiwen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3599",
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/5155"
}
|
gharchive/issue
|
A Linux distribution for docker-compose
As far as I know, the apache/dolphinscheduler image is based on the Alpine image. But when I install Python dependencies, it takes a lot of time, at least an hour. What's more, it needs several other dependencies, for example
apk update && apk add --no-cache gcc g++ python3-dev libffi-dev librdkafka librdkafka-dev mariadb-connector-c-dev musl-dev libxml2-utils libxslt libxslt-dev py3-numpy py3-pandas
But in the end, I failed to install the Python dependencies because of pandas.
So can you build another version based on another Linux distribution, such as slim? (see the size comparison)
Thank you very much !
@Joder5 This is not a problem. On any bare Linux, none of the libraries you mentioned will exist.
In order to improve the update speed, you can set the mirror source of alpine as follows:
# 1. install command/library/software
# If install is slow, you can replace alpine's mirror with aliyun's mirror, for example:
# RUN sed -i "s/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g" /etc/apk/repositories
# RUN sed -i 's/dl-cdn.alpinelinux.org/mirror.tuna.tsinghua.edu.cn/g' /etc/apk/repositories
As for pip, you can also use the mirror source like https://pypi.tuna.tsinghua.edu.cn/simple
pip install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple
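For example, installing pandas via the mirror (the package name is just an illustration):
pip install --no-cache-dir -i https://pypi.tuna.tsinghua.edu.cn/simple pandas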
As for slim, we will consider it later.
Here is a version of the Dockerfile based on debian:slim, which will be added to the repository after further optimization.
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
FROM openjdk:8-jdk-slim
ARG VERSION
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ Asia/Shanghai
ENV LANG C.UTF-8
ENV DOCKER true
# 1. install command/library/software
# If install is slow, you can replace alpine's mirror with aliyun's mirror, for example:
RUN echo \
"deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-updates main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-backports main contrib non-free\n\
deb http://mirrors.tuna.tsinghua.edu.cn/debian-security buster/updates main contrib non-free" > /etc/apt/sources.list
RUN apt-get update && \
apt-get install -y tzdata dos2unix python python3 procps netcat sudo tini postgresql-client && \
echo "Asia/Shanghai" > /etc/timezone && \
rm -f /etc/localtime && \
dpkg-reconfigure tzdata && \
rm -rf /var/lib/apt/lists/* /tmp/*
# 2. add dolphinscheduler
ADD ./apache-dolphinscheduler-incubating-${VERSION}-dolphinscheduler-bin.tar.gz /opt/
RUN ln -s /opt/apache-dolphinscheduler-incubating-${VERSION}-dolphinscheduler-bin /opt/dolphinscheduler
ENV DOLPHINSCHEDULER_HOME /opt/dolphinscheduler
# 3. add configuration and modify permissions and set soft links
COPY ./checkpoint.sh /root/checkpoint.sh
COPY ./startup-init-conf.sh /root/startup-init-conf.sh
COPY ./startup.sh /root/startup.sh
COPY ./conf/dolphinscheduler/*.tpl /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/logback/* /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/env/dolphinscheduler_env.sh /opt/dolphinscheduler/conf/env/
RUN dos2unix /root/checkpoint.sh && \
dos2unix /root/startup-init-conf.sh && \
dos2unix /root/startup.sh && \
dos2unix /opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh && \
dos2unix /opt/dolphinscheduler/script/*.sh && \
dos2unix /opt/dolphinscheduler/bin/*.sh && \
rm -rf /bin/sh && \
ln -s /bin/bash /bin/sh && \
mkdir -p /tmp/xls /usr/lib/jvm && \
ln -sf /usr/local/openjdk-8 /usr/lib/jvm/java-1.8-openjdk && \
echo "Set disable_coredump false" >> /etc/sudo.conf
# 4. expose port
EXPOSE 5678 1234 12345 50051
ENTRYPOINT ["/usr/bin/tini", "--", "/root/startup.sh"]
Closed by #5158.
|
2025-04-01T06:37:53.881164
| 2021-04-01T06:47:20
|
848066617
|
{
"authors": [
"CalvinKirs",
"xingchun-chen",
"zjw-zjw"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3600",
"repo": "apache/incubator-dolphinscheduler",
"url": "https://github.com/apache/incubator-dolphinscheduler/issues/5194"
}
|
gharchive/issue
|
It is running with MapReduce; when I stop the workflow, I expect the MR application to be killed, but it was not. The MR application is still RUNNING, and I don't know why.
Which version of DolphinScheduler:
-[1.3.5]
This is the log
It is the same as #4862
fix by #4936
|
2025-04-01T06:37:53.887193
| 2021-09-07T04:52:54
|
989590768
|
{
"authors": [
"liutang123",
"qzsee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3601",
"repo": "apache/incubator-doris",
"url": "https://github.com/apache/incubator-doris/pull/6580"
}
|
gharchive/pull-request
|
[FOLLOWUP] create table like clause support copy rollup
Proposed changes
for issue #6474
create table test.table1 like test.table with rollup (r1,r2) -- copy some rollup
create table test.table1 like test.table with rollup -- copy all rollup
create table test.table1 like test.table -- only copy base table
Types of changes
What types of changes does your code introduce to Doris?
Put an x in the boxes that apply
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Documentation Update (if none of the other choices apply)
[ ] Code refactor (Modify the code structure, format the code, etc...)
[ ] Optimization. Including functional usability improvements and performance improvements.
[ ] Dependency. Such as changes related to third-party components.
[x] Other.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[x] I have created an issue on (Fix #6474) and described the bug/feature there in detail
[x] Compiling and unit tests pass locally with my changes
[x] I have added tests that prove my fix is effective or that my feature works
[x] If these changes need document changes, I have updated the document
[x] Any dependent changes have been merged
Further comments
If this is a relatively large or complex change, kick off the discussion at<EMAIL_ADDRESS>by explaining why you chose the solution you did and what alternatives you considered, etc...
Please rebase to master
|
2025-04-01T06:37:53.888436
| 2018-11-14T03:40:39
|
380522516
|
{
"authors": [
"jihoonson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3602",
"repo": "apache/incubator-druid",
"url": "https://github.com/apache/incubator-druid/pull/6622"
}
|
gharchive/pull-request
|
Properly reset total size of segmentsToCompact in NewestSegmentFirstIterator
When NewestSegmentFirstIterator searches for segments to compact, it sometimes needs to clear the segments found so far and start again. The total size of those segments also needs to be reset properly.
@gianm thanks for the quick review. Added a test.
|
2025-04-01T06:37:53.889933
| 2018-04-12T12:56:10
|
313715022
|
{
"authors": [
"huangxincheng",
"mercyblitz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3603",
"repo": "apache/incubator-dubbo-spring-boot-project",
"url": "https://github.com/apache/incubator-dubbo-spring-boot-project/issues/105"
}
|
gharchive/issue
|
The dubbo.protocol.port attribute specified in the application.properties file is invalid (it has no effect).
The dubbo-spring-boot-starter version is 0.0.1.
You can use dubbo.protocol.${name}.port instead.
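For example, if the protocol name is dubbo (a hedged illustration; the port value is arbitrary):
dubbo.protocol.dubbo.port=20881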
|
2025-04-01T06:37:53.896324
| 2018-05-24T06:02:58
|
325975446
|
{
"authors": [
"carryxyh",
"chickenlj",
"whanice",
"zonghaishang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3604",
"repo": "apache/incubator-dubbo",
"url": "https://github.com/apache/incubator-dubbo/issues/1841"
}
|
gharchive/issue
|
I am confused by the server and transporter config.
In my understanding:
If we configure transporter="netty4" on the provider side, it will affect the consumer.
So we can use server="netty4" on the provider side instead.
But why does Dubbo check the server key in DubboProtocol.initClient?
A consumer on a lower version (without netty4 support) will fail to start.
And the transporter key does not affect the consumer now.
The transport config will not be overridden.
The configs that can be overridden are things like timeout, retries, loadbalance, actives and so on.
@whanice
If we configure transporter="netty4" on the provider side, it will affect the consumer.
Transporter is a ProtocolConfig attribute; the client does not have a ProtocolConfig concept, therefore the transporter on the provider side will not be passed to the client.
But why does Dubbo check the server key in DubboProtocol.initClient?
Because the value stored under the server key will be passed to the client; therefore, the client will try to take the server value first.
A consumer on a lower version (without netty4 support) will fail to start.
I have tried netty4 as the provider and netty3 as the client, and it works normally. Could you provide a demo program for verification?
Thank you very much for your support of dubbo.
Thanks for your replies, guys.
Surely, the TCP connection should not be tied to the NIO framework; no matter whether the server uses netty3, netty4 or mina, it will not affect the client.
What I care about is migration: a consumer on dubbo-2.5.x does not have the netty4 extension.
When the provider wants to upgrade its Dubbo version to dubbo-2.6.x and use netty4, it will affect the consumer.
So I think changing DubboProtocol.initClient to the following would be better:
// client type setting.
String str = url.getParameter(Constants.CLIENT_KEY, url.getParameter(Constants.TRANSPORTER_KEY, Constants.DEFAULT_REMOTING_CLIENT));
And in the future, I think the server key should not be passed to the consumer.
You are right, I will confirm this question again.
After discussion, we will keep netty as the extension point name, so there will be no problem.
And in the future, I think the server key should not be passed to the consumer.
Agree, and this can be solved with #2030
|
2025-04-01T06:37:53.910733
| 2023-01-06T11:11:24
|
1522398549
|
{
"authors": [
"chenyi19851209",
"jonyangx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3605",
"repo": "apache/incubator-eventmesh",
"url": "https://github.com/apache/incubator-eventmesh/pull/2838"
}
|
gharchive/pull-request
|
fix call the same method twice
Fixes #2670 .
Motivation
Explain the content here.
Explain why you want to make the changes and what problem you're trying to solve.
Modifications
Describe the modifications you've done.
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
If a feature is not applicable for documentation, explain why?
If a feature is not documented yet in this PR, please create a followup issue for adding the documentation
Fix for issue #2670.
Please add the Motivation and Modifications descriptions, and fix the build error. @joaovitoras
|
2025-04-01T06:37:53.919473
| 2016-01-30T00:20:09
|
129918242
|
{
"authors": [
"radarwave",
"randomtask1155",
"yaoj2"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3606",
"repo": "apache/incubator-hawq",
"url": "https://github.com/apache/incubator-hawq/pull/305"
}
|
gharchive/pull-request
|
hawqextract column context does not exist error
When running hawq extract, a Python stack trace is returned because pg_aoseg no longer has a column called content.
[gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations Traceback (most recent call last): File "/usr/local/hawq-master/bin/hawqextract", line 551, in <module> sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in main metadata = extract_metadata(conn, args[0]) File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query return self.conn.query(sql).dictresult() pg.ProgrammingError: ERROR: column "content" does not exist LINE 2: SELECT content, segno as fileno, eof as filesize
LGTM.
Verified that this fixes the yaml file generation.
Would you please file a jira and update the commit message? Like below:
HAWQ-XXX. hawqextract column context does not exist error
BTW, did you test running with the generated yaml file? Thanks.
When attempting to create the Apache JIRA I get a timeout or a null pointer exception. I will try again later.
I only ran the YAML file through yamllint but did not test it in a MapReduce job or anything. I stumbled on the error while working on something else. Also, comparing it to GPDB, the output looks similar.
Fixing the yaml file generation is good enough for this pull request. We can check whether it works with a MapReduce job in a separate jira.
Please update the commit message, then we can merge it in. Thanks.
jira created
https://issues.apache.org/jira/browse/HAWQ-535
You might need to update the git commit message with the jira number, so that when we finish the merge it keeps the original author information.
Thanks, I amended the commit message.
LGTM
Now the fix is in. Thanks.
@randomtask1155 , please close this pull request since it's already been merged. Thanks.
|
2025-04-01T06:37:53.931994
| 2017-02-08T10:30:11
|
206155833
|
{
"authors": [
"amaya382",
"coveralls",
"maropu",
"myui"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3607",
"repo": "apache/incubator-hivemall",
"url": "https://github.com/apache/incubator-hivemall/pull/41"
}
|
gharchive/pull-request
|
[HIVEMALL-54][SPARK] Add an easy-to-use script for spark-shell
What changes were proposed in this pull request?
This pr added a script to automatically download the latest Spark version, compile Hivemall for the version, and invoke spark-shell with the compiled Hivemall binary.
This pr also included a documentation for hivemall-on-spark installation.
What type of PR is it?
Improvement
What is the Jira issue?
https://issues.apache.org/jira/browse/HIVEMALL-54
How was this patch tested?
Manually tested.
Coverage increased (+0.3%) to 36.142% when pulling 5433fd56db138880c9e0ed0f2d20cf7396d4e5e8 on maropu:AddScriptForSparkShell into 85f8e173a2a97005c00b84140f4b9150060c4a56 on apache:master.
@amaya382 @Lewuathe Could you confirm that the updated bin/spark-shell works properly when you have some spare time?
@maropu BTW, better to update incubator-hivemall-site once this PR is merged.
yea, I'll update just after this merged.
@myui 👌
Coverage remained the same at 35.844% when pulling 17f2dcf375a1c4c0ad979c80c6591143aefe8a1b on maropu:AddScriptForSparkShell into 85f8e173a2a97005c00b84140f4b9150060c4a56 on apache:master.
@amaya382 Have you confirmed?
@maropu please merge this PR if he say fine.
Looks mostly good, but I found minor issues not directly related to this PR.
Some declarations in define-all.spark are incorrect and duplicated.
e.g. train_arowh: https://github.com/maropu/incubator-hivemall/blob/AddScriptForSparkShell/resources/ddl/define-all.spark#L32-L36
@amaya382 Can you make prs to fix them?
Merged.
@amaya382 I made a JIRA ticket: https://issues.apache.org/jira/browse/HIVEMALL-65
@maropu okay, I'll do in a few days
Coverage remained the same at 35.814% when pulling ff03adf1de142fb9cdbe373bed718dda2e8e840e on maropu:AddScriptForSparkShell into 247e1aef87ddc789dacfb74e469263e7e2ab603e on apache:master.
@maropu Could you update incubator-hivemall-site? This feature should be documented.
|
2025-04-01T06:37:53.935851
| 2017-06-26T11:54:34
|
238519866
|
{
"authors": [
"coveralls",
"myui"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3608",
"repo": "apache/incubator-hivemall",
"url": "https://github.com/apache/incubator-hivemall/pull/90"
}
|
gharchive/pull-request
|
[HIVEMALL-96-2] Added Geo Spatial UDFs
What changes were proposed in this pull request?
This PR added 5 Geo Spatial UDFs: lat2tiley, lon2tilex, tilex2lon, tileytolat, and haversine_distance.
What type of PR is it?
Feature
What is the Jira issue?
https://issues.apache.org/jira/browse/HIVEMALL-96
How was this patch tested?
Unit tests and manual tests
How to use this feature?
WITH data as (
select 51.51202 as lat, 0.02435 as lon, 17 as zoom
union all
select 51.51202 as lat, 0.02435 as lon, 4 as zoom
union all
select null as lat, 0.02435 as lon, 17 as zoom
)
select
lat, lon, zoom,
tile(lat, lon, zoom) as tile,
(lon2tilex(lon,zoom) + lat2tiley(lat,zoom) * cast(pow(2, zoom) as bigint)) as tile2,
lon2tilex(lon, zoom) as xtile,
lat2tiley(lat, zoom) as ytile,
tiley2lat(lat2tiley(lat, zoom), zoom) as lat2, -- tiley2lat returns center of the tile
tilex2lon(lon2tilex(lon, zoom), zoom) as lon2 -- tilex2lon returns center of the tile
from
data;
select
haversine_distance(35.6833, 139.7667, 34.6603, 135.5232) as km,
haversine_distance(35.6833, 139.7667, 34.6603, 135.5232, true) as mile;
Coverage increased (+0.2%) to 40.19% when pulling 6c391786cf877ef3080db9403192c55700491bae on myui:HIVEMALL-96-2 into c06378a81723e3998f90c08ec7444ead5b6f2263 on apache:master.
|
2025-04-01T06:37:53.958441
| 2022-12-08T11:56:39
|
1484477213
|
{
"authors": [
"turboFei"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3609",
"repo": "apache/incubator-kyuubi",
"url": "https://github.com/apache/incubator-kyuubi/pull/3946"
}
|
gharchive/pull-request
|
Config the rest frontend service max worker thread
Why are the changes needed?
How was this patch tested?
[ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
[ ] Add screenshots for manual tests if appropriate
[ ] Run test locally before make a pull request
thanks, merged to master
|
2025-04-01T06:37:53.959344
| 2022-06-18T10:14:10
|
1275752618
|
{
"authors": [
"Beacontownfc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3610",
"repo": "apache/incubator-linkis-website",
"url": "https://github.com/apache/incubator-linkis-website/issues/354"
}
|
gharchive/issue
|
The Linkis website needs a GitHub Action to check whether the links on the Linkis website are available.
Some links on the Linkis website are broken.
#341 fixes the issue.
|