| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:37:53.999130
| 2017-07-05T06:17:46
|
240548199
|
{
"authors": [
"codecov-io",
"jerryshao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3611",
"repo": "apache/incubator-livy",
"url": "https://github.com/apache/incubator-livy/pull/13"
}
|
gharchive/pull-request
|
Minor. Fix deprecated conf warning log issue
./livy-sshao-server.out.5:17/05/08 16:50:44 WARN LivyConf: The configuration key livy.spark.deployMode has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.spark.deploy-mode instead.
./livy-sshao-server.out.5:17/05/08 16:50:45 WARN LivyConf: The configuration key livy.spark.scalaVersion has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.spark.scala-version instead.
./livy-sshao-server.out.5:17/05/08 16:51:04 WARN RSCConf: The configuration key livy.rsc.driver_class has been deprecated as of Livy 0.4 and may be removed in the future. Please use the new key livy.rsc.driver-class instead.
This warning is logged incorrectly even when the new configuration key is used. This is mainly because the logic in logDeprecationWarning that checks alternative configurations is not correct.
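For context, a minimal, hypothetical sketch (in Python, not Livy's actual Java code) of what a correct alternative-key check could look like: warn about a deprecated key only when it is set and none of its replacement keys is configured.

```python
import logging

# Hypothetical sketch, not Livy's implementation: emit a deprecation warning
# only when the deprecated key is set and none of its alternatives is in use.
def log_deprecation_warning(conf, deprecated_key, alternative_keys):
    if deprecated_key not in conf:
        return  # nothing to warn about
    if any(alt in conf for alt in alternative_keys):
        return  # the new key is already in use, so stay silent
    logging.warning(
        "The configuration key %s has been deprecated. Please use %s instead.",
        deprecated_key, " or ".join(alternative_keys))
```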
Codecov Report
Merging #13 into master will increase coverage by 0.18%.
The diff coverage is 94.73%.
@@ Coverage Diff @@
## master #13 +/- ##
============================================
+ Coverage 70.47% 70.65% +0.18%
- Complexity 729 733 +4
============================================
Files 96 96
Lines 5161 5177 +16
Branches 779 781 +2
============================================
+ Hits 3637 3658 +21
+ Misses 1006 996 -10
- Partials 518 523 +5
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| ...java/org/apache/livy/client/common/ClientConf.java | 99% <94.73%> (-1%) | 44 <6> (+3) |
| rsc/src/main/java/org/apache/livy/rsc/RSCConf.java | 86.73% <0%> (-1.03%) | 7% <0%> (ø) |
| ...in/java/org/apache/livy/rsc/driver/JobWrapper.java | 80.64% <0%> (ø) | 8% <0%> (+1%) :arrow_up: |
| rsc/src/main/java/org/apache/livy/rsc/rpc/Rpc.java | 78.61% <0%> (+0.62%) | 12% <0%> (ø) :arrow_down: |
| ...ain/java/org/apache/livy/rsc/driver/RSCDriver.java | 78.44% <0%> (+0.86%) | 40% <0%> (-1%) :arrow_down: |
| rsc/src/main/java/org/apache/livy/rsc/Utils.java | 85.71% <0%> (+2.38%) | 16% <0%> (ø) :arrow_down: |
| ...in/java/org/apache/livy/rsc/rpc/RpcDispatcher.java | 69.56% <0%> (+3.26%) | 20% <0%> (+1%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 412ccc8...e3a584d. Read the comment docs.
Thanks @ajbozarth , merging to master.
|
2025-04-01T06:37:54.004676
| 2018-10-31T04:16:29
|
375791871
|
{
"authors": [
"NRauschmayr",
"frankfliu",
"larsonwu0220"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3612",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/13053"
}
|
gharchive/issue
|
Something wrong of form .rec of my own data
I am trying to create .rec and .idx files for my own data.
First, run im2rec.py to generate the list:
python im2rec.py --list --recursive --train-ratio 1 [list folder] [images folder]
Second, run im2rec.py to generate .rec and .idx.
python im2rec.py --num-thread 4 [list folder] [images folder]
But when I read the index,
imgrec = mx.recordio.MXIndexedRecordIO(args.idx_path, args.bin_path, 'r')
s = imgrec.read_idx(0)
header, _ = mx.recordio.unpack(s)
The "header" has no label value.
Where is the problem?
Thanks.
@mxnet-label-bot [Question]
Thanks a lot for reporting this issue. I tried it out and you are right, that the header does not have the correct values. Using imgrec.read instead of imgrec.read_idx seems to solve this problem.
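For reference, a minimal sketch of the suggested workaround (file names are placeholders): iterate records sequentially with imgrec.read() and unpack each header to inspect the label.

```python
import mxnet as mx

# Placeholders for the .idx/.rec files produced by im2rec.py
imgrec = mx.recordio.MXIndexedRecordIO('data.idx', 'data.rec', 'r')
while True:
    s = imgrec.read()                # next record in file order
    if s is None:
        break                        # end of file
    header, img = mx.recordio.unpack(s)
    print(header.label)              # label(s) written by im2rec.py
imgrec.close()
```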
Thanks a lot for reporting this issue. Is every label in the file incorrect or just a few?
|
2025-04-01T06:37:54.016221
| 2021-03-22T08:11:05
|
837445662
|
{
"authors": [
"fhieber",
"harupy",
"leezu",
"praneethkv",
"szha"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3613",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/issues/20068"
}
|
gharchive/issue
|
OSError: libopenblas.so.0: cannot open shared object file: No such file or directory in mxnet 1.8.0
Description
mxnet 1.8.0 emits the following error when running import mxnet:
OSError: libopenblas.so.0: cannot open shared object file: No such file or directory
Error Message
(generated from the dockerfile attached in the to-reproduce section)
+ Step 1/3 : FROM python:3.7
---> 7fefbebd95b5
+ Step 2/3 : RUN pip install mxnet
---> Running in fc634966f9aa
Collecting mxnet
Downloading mxnet-1.8.0-py2.py3-none-manylinux2014_x86_64.whl (38.7 MB)
Collecting graphviz<0.9.0,>=0.8.1
Downloading graphviz-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting numpy<2.0.0,>1.16.0
Downloading numpy-1.20.1-cp37-cp37m-manylinux2010_x86_64.whl (15.3 MB)
Collecting requests<3,>=2.20.0
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.4-py2.py3-none-any.whl (153 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests, numpy, graphviz, mxnet
+ Successfully installed certifi-2020.12.5 chardet-4.0.0 graphviz-0.8.4 idna-2.10 mxnet-1.8.0 numpy-1.20.1 requests-2.25.1 urllib3-1.26.4
WARNING: You are using pip version 20.3.1; however, version 21.0.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container fc634966f9aa
---> b1c12d9f4376
+ Step 3/3 : RUN python -c "import mxnet"
---> Running in 8362fc58b280
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/mxnet/__init__.py", line 23, in <module>
from .context import Context, current_context, cpu, gpu, cpu_pinned
File "/usr/local/lib/python3.7/site-packages/mxnet/context.py", line 23, in <module>
from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 351, in <module>
_LIB = _load_lib()
File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 342, in _load_lib
lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
File "/usr/local/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
+ OSError: libopenblas.so.0: cannot open shared object file: No such file or directory
The command '/bin/sh -c python -c "import mxnet"' returned a non-zero code: 1
(key lines are colored green)
To Reproduce
Steps to reproduce
Prepare the following dockerfile:
FROM python:3.7
RUN pip install mxnet
RUN python -c "import mxnet"
Then, run docker build .
What have you tried to solve it?
Environment
We recommend using our script for collecting the diagnostic information with the following command
curl --retry 10 -s https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py | python3
Environment Information
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Stepping: 13
CPU MHz: 2400.000
BogoMIPS: 4800.00
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase bmi1 avx2 bmi2 erms xsaveopt arat
----------Python Info----------
Version : 3.7.9
Compiler : GCC 8.3.0
Build : ('default', 'Nov 18 2020 14:10:47')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 20.3.1
Directory : /usr/local/lib/python3.7/site-packages/pip
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
Platform : Linux-4.19.76-linuxkit-x86_64-with-debian-10.6
system : Linux
node : 9bcb86c4d6cf
release : 4.19.76-linuxkit
version : #1 SMP Tue May 26 11:42:35 UTC 2020
----------Hardware Info----------
machine : x86_64
processor :
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0155 sec, LOAD: 0.9034 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1894 sec, LOAD: 0.2668 sec.
Error open Gluon Tutorial(cn): https://zh.gluon.ai, <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1091)>, DNS finished in 0.3720698356628418 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0193 sec, LOAD: 0.5068 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0474 sec, LOAD: 0.5751 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.01907634735107422 sec.
----------Environment----------
Removing intermediate container 9bcb86c4d6cf
I think this is due to a change in CD such that openblas is no longer statically linked into libmxnet. For now you can install openblas separately.
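Not an official diagnostic, but a quick way to check whether the dynamic loader can see OpenBLAS before importing mxnet:

```python
import ctypes.util

# If this prints None, libopenblas is not installed on the system and
# `import mxnet` will fail with the OSError shown above. Installing an
# OpenBLAS system package (e.g. libopenblas-dev on Debian/Ubuntu) should fix it.
print(ctypes.util.find_library("openblas"))
```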
cc @mseth10 @leezu
@harupy it's unclear what you are doing. You need to provide more details.
We observe the same issue when trying to use pip-installed MXNet 1.8 (ubuntu, CPU): https://github.com/awslabs/sockeye/runs/2161521442?check_suite_focus=true
@mseth10 @access2rohit please take a look why the CD didn't package the libopenblas.so
tools/pip/setup.py includes instructions for copying libopenblas.so in v1.8.x, v1.x, and master. But apparently that didn't work in v1.8.x, potentially due to some missing library file names in the jenkins files?
v1.8.x https://github.com/apache/incubator-mxnet/blob/a0535ddfb0246f53f7b851baf861fc06d3ff48c3/tools/pip/setup.py#L170-L172
v1.x https://github.com/apache/incubator-mxnet/blob/cfa1c890a7ecb8b5e29ff4e90d6784141f09c4cd/tools/pip/setup.py#L164-L166
master https://github.com/apache/incubator-mxnet/blob/4d706e8c19b3354878eda9467b149c0ce1fd6d47/tools/pip/setup.py#L165-L167
However, I noted that v1.8.x also attempts to copy libquadmath, which MUST NOT happen due to license of libquadmath. That should be fixed. Fortunately that code didn't run due to the current bug.
The problem is that https://github.com/apache/incubator-mxnet/pull/19514 is missing on v1.8.x
My container suddenly started failing to build and upon looking, this was the error. I started using previous version which is 1.7.0.post2 and works perfectly
@praneethkv we are working on patching the wheels to fix the problem
|
2025-04-01T06:37:54.022234
| 2018-04-07T05:40:02
|
312172693
|
{
"authors": [
"anirudh2290",
"eric-haibin-lin",
"haojin2"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3614",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/10454"
}
|
gharchive/pull-request
|
Improve row_sparse tutorial
Description
@haojin2
Checklist
Essentials
[ ] Passed code style checking (make lint)
[ ] Changes are complete (i.e. I finished coding on this PR)
[ ] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
[ ] Code is well-documented:
For user-facing API changes, API doc string has been updated.
For new C++ functions in header files, their functionalities and arguments are documented.
For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set and a reference to the original paper if applicable
[ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
Changes
[ ] Feature1, tests, (and when applicable, API doc)
[ ] Feature2, tests, (and when applicable, API doc)
Comments
If this change is a backward incompatible change, why must this change be made.
Interesting edge cases to note here
LGTM!
can you associate this with a jira ?
I don't think a tiny PR like this needs one, but yeah, I can create a JIRA item.
|
2025-04-01T06:37:54.026248
| 2019-02-04T23:39:59
|
406569032
|
{
"authors": [
"ptrendx",
"vandanavk",
"wkcn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3615",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/14066"
}
|
gharchive/pull-request
|
Add dtype visualization to plot_network
Description
Add possibility to print type information alongside shape in mx.vis.plot_network.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[x] Changes are complete (i.e. I finished coding on this PR)
[x] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
[x] Code is well-documented:
For user-facing API changes, API doc string has been updated.
Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
[x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
@mxnet-label-bot add [pr-awaiting-review, Visualization]
@szha Does anything else need to be done with this PR?
Merged. Thanks for your contribution!
|
2025-04-01T06:37:54.033504
| 2019-05-14T21:38:18
|
444137161
|
{
"authors": [
"TaoLv",
"larroy",
"pengzhao-intel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3616",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/14947"
}
|
gharchive/pull-request
|
Silence excessive mkldnn logging output on tests.
Description
Silenced excessive logging output:
http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-cpu/branches/PR-14940/runs/1/nodes/283/steps/749/log/?start=0
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
[x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
[x] Changes are complete (i.e. I finished coding on this PR)
[x] All changes have test coverage:
Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
[x] Code is well-documented:
For user-facing API changes, API doc string has been updated.
For new C++ functions in header files, their functionalities and arguments are documented.
For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set and a reference to the original paper if applicable
Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
[x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
@pengzhao-intel I saw this excessive log output and a 3h timeout on tests; I wanted to see if the PR validation time goes down because of this, though it's probably not related.
@larroy Sorry, I just noticed that you're trying to save CI time. I remember MXNET_MKLDNN_DEBUG is turned on explicitly in CI, so my suggestion might not help in this case.
How about turning it off because we have enough test cases in the CI for MKLDNN now?
Thank you for your improvement. Merge now.
|
2025-04-01T06:37:54.035656
| 2017-11-12T04:25:57
|
273198275
|
{
"authors": [
"burness",
"eric-haibin-lin",
"piiswrong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3617",
"repo": "apache/incubator-mxnet",
"url": "https://github.com/apache/incubator-mxnet/pull/8621"
}
|
gharchive/pull-request
|
fix row_sparse demo tutorials doc
Description
In the row sparse demo tutorial, there is a mistake: x should be 12 and w should be 23.
The CI is a bit brittle. Do you mind syncing with master and triggering the CI again?
Ok! I will try @eric-haibin-lin
why is there no change?
Looks like this is already merged in. Closing
|
2025-04-01T06:37:54.037103
| 2017-09-30T20:44:49
|
261870969
|
{
"authors": [
"matthiasblaesing"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3618",
"repo": "apache/incubator-netbeans",
"url": "https://github.com/apache/incubator-netbeans/pull/30"
}
|
gharchive/pull-request
|
[NETBEANS-54] Module Review defaults
no external library
checked Rat report; unrecognized license headers manually changed
skimmed through the module, did not notice additional problems
Merged into master with 1 positive review. Thank you for reviewing.
|
2025-04-01T06:37:54.042425
| 2019-12-31T22:35:45
|
544282176
|
{
"authors": [
"masayuki2009",
"patacongo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3619",
"repo": "apache/incubator-nuttx",
"url": "https://github.com/apache/incubator-nuttx/pull/21"
}
|
gharchive/pull-request
|
net: tcp: Fix compile error in tcp.h
During build testing for spresense with the latest master, I encountered the following compile error.
tcp/tcp_netpoll.c: In function 'tcp_pollsetup':
./tcp/tcp.h:1191:42: error: expected ')' before ';' token
# define tcp_backlogavailable(c) (false);
^
tcp/tcp_netpoll.c:308:40: note: in expansion of macro 'tcp_backlogavailable'
if (!IOB_QEMPTY(&conn->readahead) || tcp_backlogavailable(conn))
^~~~~~~~~~~~~~~~~~~~
Makefile:102: recipe for target 'tcp_netpoll.o' failed
Actually, line 1191 in tcp/tcp.h was added on 2014-07-06 but has not been used so far.
And line 308 in tcp/tcp_netpoll.c was not called in my environment before the following commit was merged:
commit 90c52e6f8f7efce97ac718c0f98addc13ec880d2
Author: Xiang Xiao <EMAIL_ADDRESS>
Date: Tue Dec 31 09:26:14 2019 -0600
The fix in this PR is very trivial, and I think @xiaoxiang781216 had CONFIG_NET_TCPBACKLOG enabled on his environment, which is why he did not encounter the error.
BTW: CONFIG_NET_TCPBACKLOG probably should always be enabled. Otherwise, you will miss connection requests. That configuration is a candidate to be removed and just have connection backlog support enabled at all times.
BTW: CONFIG_NET_TCPBACKLOG probably should always be enabled. Otherwise, you will miss connection requests. That configuration is a candidate to be removed and just have connection backlog support enabled at all times.
@patacongo Thanks for the comment. I will modify our defconfigs in separate PR later.
@masayuki2009 Removing support for CONFIG_NET_TCPBACKLOG altogether might be a better solution? Anyone else have an opinion to the contrary?
The TCP backlog was conditioned originally only to support super-tiny networking. But with all of the growth in networking, I think super-tiny networking is not really an option and is certainly not advised in this case since the consequences of disabling backlog are so severe. NuttX now really only supports "small" networking, not super-tiny networking.
@patacongo I think that's a good idea, because we will not use NuttX networking in a super-tiny environment. If nobody has an objection to it, please remove support for CONFIG_NET_TCPBACKLOG.
|
2025-04-01T06:37:54.044407
| 2017-11-27T22:22:10
|
277190319
|
{
"authors": [
"mrutkows",
"pritidesai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3620",
"repo": "apache/incubator-openwhisk-wskdeploy",
"url": "https://github.com/apache/incubator-openwhisk-wskdeploy/issues/650"
}
|
gharchive/issue
|
Refactor Whisk Deploy errors into new package to avoid cyclic dependencies
Currently, wskdeployerror.go (and its unit test file) are in the "utils" package; however, if we want to do better unit testing we need to be able to test errors against manifests and the YAML parser. This would mean importing "parsers" into "utils", which leads to cyclic dependency errors in Go.
To overcome this cyclic dependency, we must refactor the error modules into a new "wskdeplyerror" package.
Fixed with wskderrors at https://github.com/apache/incubator-openwhisk-wskdeploy/blob/master/wskderrors/wskdeployerror.go
|
2025-04-01T06:37:54.046643
| 2024-03-14T19:27:35
|
2187118195
|
{
"authors": [
"laglangyue",
"pjfanning"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3621",
"repo": "apache/incubator-pekko",
"url": "https://github.com/apache/incubator-pekko/pull/1192"
}
|
gharchive/pull-request
|
support config for jackson buffer recycler pool
the buffer recycler is an important performance feature in Jackson
Jackson 2.17 also changes the default pool implementation and this has proved an issue - see https://github.com/FasterXML/jackson-module-scala/issues/672
my plan for Pekko is to keep the existing ThreadLocal implementation as the default even if Jackson has a different default
It looks good.
Is it necessary to supplement the documentation?
https://pekko.apache.org/docs/pekko/current/serialization-jackson.html#additional-features
|
2025-04-01T06:37:54.072065
| 2024-02-07T06:26:01
|
2122280836
|
{
"authors": [
"codecov-commenter",
"xingfudeshi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3622",
"repo": "apache/incubator-seata",
"url": "https://github.com/apache/incubator-seata/pull/6342"
}
|
gharchive/pull-request
|
optimize:compatible with integration-tx-api module and spring module
[x] I have registered the PR changes.
Ⅰ. Describe what this PR did
Ⅱ. Does this pull request fix one issue?
fixes #6334
fixes #6335
Ⅲ. Why don't you add test cases (unit test/integration test)?
Ⅳ. Describe how to verify it
Ⅴ. Special notes for reviews
some checkstyle failed.
ok
Codecov Report
Attention: 33 lines in your changes are missing coverage. Please review.
Comparison is base (74785d2) 51.95% compared to head (45b667b) 51.21%.
Report is 6 commits behind head on 2.x.
Additional details and impacted files
@@ Coverage Diff @@
## 2.x #6342 +/- ##
============================================
- Coverage 51.95% 51.21% -0.75%
+ Complexity 5171 5117 -54
============================================
Files 918 921 +3
Lines 32039 32166 +127
Branches 3866 3874 +8
============================================
- Hits 16647 16474 -173
- Misses 13768 14102 +334
+ Partials 1624 1590 -34
| Files | Coverage Δ |
|---|---|
| ...in/java/org/apache/seata/common/DefaultValues.java | 0.00% <ø> (ø) |
| ...che/seata/common/exception/JsonParseException.java | 100.00% <ø> (ø) |
| .../org/apache/seata/common/metadata/ClusterRole.java | 100.00% <ø> (ø) |
| ...ava/org/apache/seata/common/metadata/Metadata.java | 100.00% <ø> (ø) |
| ...apache/seata/common/metadata/MetadataResponse.java | 80.00% <ø> (ø) |
| ...in/java/org/apache/seata/common/metadata/Node.java | 100.00% <ø> (ø) |
| ...java/org/apache/seata/common/util/ConfigTools.java | 100.00% <ø> (ø) |
| ...ava/org/apache/seata/common/util/DurationUtil.java | 83.33% <ø> (ø) |
| ...a/org/apache/seata/common/util/HttpClientUtil.java | 52.45% <ø> (+4.91%) :arrow_up: |
| ...main/java/org/apache/seata/common/util/IOUtil.java | 70.00% <ø> (ø) |
... and 91 more
... and 165 files with indirect coverage changes
Done. @slievrly
|
2025-04-01T06:37:54.077938
| 2018-10-23T07:13:11
|
372841515
|
{
"authors": [
"terrymanu"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3623",
"repo": "apache/incubator-shardingsphere",
"url": "https://github.com/apache/incubator-shardingsphere/issues/1366"
}
|
gharchive/issue
|
Split integrated test cases to a new project
[x] Create new repo named sharding-sphere-integrated-test
[ ] Move JDBC integrated test cases to sharding-sphere-integrated-test
[ ] Add MySQL & PostgreSQL docker image for integrated test cases
[ ] Comb JDBC test cases
[ ] Comb JDBC test cases for sharding-rule(sharding, masterslave) with raw bootstrap
[ ] Comb JDBC test cases for other bootstrap(spring namespace, springboot)
[ ] Comb JDBC test cases for database pool(DBCP, HikariCP, C3P0)
[ ] Comb JDBC test cases for ORM(Spring JDBC Template, JPA, Mybatis, Hibernate)
[ ] Comb Proxy test cases
[ ] Add Sharding-Proxy docker image for integrated test cases
[ ] Comb Proxy test cases for sharding-rule(sharding, masterslave) with raw bootstrap
[ ] Comb Proxy test cases for other bootstrap(spring namespace, springboot)
[ ] Comb Proxy test cases for database pool(DBCP, HikariCP, C3P0)
[ ] Comb Proxy test cases for ORM(Spring JDBC Template, JPA, Mybatis, Hibernate)
[ ] Comb Orchestration test cases
[ ] Add Zookeeper docker image for integrated test cases
[ ] Add Etcd docker image for integrated test cases
[ ] Comb Orchestration test cases for jdbc and proxy
expired
|
2025-04-01T06:37:54.128449
| 2017-07-19T14:40:16
|
244064620
|
{
"authors": [
"toncek87",
"xrmx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3624",
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/issues/3162"
}
|
gharchive/issue
|
Redshift could not connect to the server
I tried to connect to my Redshift database. There are millions of rows. When the query time is longer than one minute, Superset responds with "could not connect to the server". I checked Allow Run Async, but there is another message: "Failed to start remote query on worker. Tell your administrator to verify the availability of the message queue." Could you help me with this issue?
You can start by reading the documentation https://superset.incubator.apache.org/installation.html#sql-lab
thanks, i missed it
I installed Redis
pip3 install redis
dependencies
pip install -U "celery[redis]"
run Redis
redis-server
Paste this code to /usr/local/lib/python3.4/dist-packages/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
HTTP_HEADERS = {
'super': 'header!'
}
# comment the current RESULTS_BACKEND value
#RESULTS_BACKEND = None
# assign a new value to RESULTS_BACKEND
RESULTS_BACKEND = FileSystemCache('/tmp/sqllab_cache', default_timeout=60*24*7)
When i try superset worker
Starting SQL Celery worker.
Traceback (most recent call last):
File "/home/rko/venv/bin/superset", line 15, in <module>
manager.run()
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/rko/venv/lib/python3.4/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/cli.py", line 189, in worker
'broker': config.get('CELERY_CONFIG').BROKER_URL,
AttributeError: 'NoneType' object has no attribute 'BROKER_URL'
Can you give me any tips?
i've installed Redis, run superset worker and run superset server..
SQLlab gives me "Failed to start remote query on worker. Tell your administrator to verify the availability of the message queue."
and console:
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/compiler.py:624: SAWarning: Can't resolve label reference 'changed_on desc'; converting to text() (this warning may be suppressed after 10 occurrences)
util.ellipses_string(element.element))
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/compiler.py:624: SAWarning: Can't resolve label reference 'database_name asc'; converting to text() (this warning may be suppressed after 10 occurrences)
util.ellipses_string(element.element))
2017-07-20 14:15:42,256:INFO:root:Parsing with sqlparse statement SELECT count(uid) as pocet
FROM events2
where event = 'pageview' and time like '%2017-06-30%'
2017-07-20 14:15:42,300:INFO:root:Triggering query_id: 43
/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/sqltypes.py:596: SAWarning: Dialect sqlite+pysqlite does not support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
'storage.' % (dialect.name, dialect.driver))
2017-07-20 14:15:42,980:ERROR:root:[Errno 111] Connection refused
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 36, in call
return self.value
AttributeError: 'ChannelPromise' object has no attribute 'value'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 494, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 187, in _publish
channel = self.channel
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in _get_channel
channel = self._channel = channel()
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 38, in call
value = self.value = self.contract()
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 224, in
channel = ChannelPromise(lambda: connection.default_channel)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 819, in default_channel
self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 414, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 515, in _ensured
reraise_as_library_errors=False,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 405, in ensure_connection
callback)
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 333, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 261, in connect
return self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/superset/views/core.py", line 2011, in sql_json
store_results=not query.select_as_cta)
File "/usr/local/lib/python3.4/dist-packages/celery/app/task.py", line 412, in delay
return self.apply_async(args, kwargs)
File "/usr/local/lib/python3.4/dist-packages/celery/app/task.py", line 535, in apply_async
**options
File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 737, in send_task
amqp.send_task_message(P, name, message, **options)
File "/usr/local/lib/python3.4/dist-packages/celery/app/amqp.py", line 558, in send_task_message
**properties
File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 527, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.4/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 419, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/local/lib/python3.4/dist-packages/vine/five.py", line 178, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 414, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 515, in _ensured
reraise_as_library_errors=False,
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 405, in ensure_connection
callback)
File "/usr/local/lib/python3.4/dist-packages/kombu/utils/functional.py", line 333, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 261, in connect
return self.connection
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 802, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 757, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
conn.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 296, in connect
self.transport.connect()
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 123, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 164, in _connect
self.sock.connect(sa)
kombu.exceptions.OperationalError: [Errno 111] Connection refused
[2017-07-20 14:15:51 +0200] [1746] [INFO] Handling signal: winch
[2017-07-20 14:15:51 +0200] [1746] [INFO] Handling signal: winch
You have misconfigured celery, it's looking for an amqp broker while you said you want to use redis.
i changed that and worker console:
[2017-07-20 14:35:59,067: ERROR/ForkPoolWorker-29] Task superset.sql_lab.get_sql_results[dc953f93-5cf6-4de6-9432-0d09a354ca2e] raised unexpected: Exception("Results backend isn't configured.",)
Traceback (most recent call last):
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 622, in protected_call
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 81, in get_sql_results
handle_error("Results backend isn't configured.")
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 78, in handle_error
raise Exception(query.error_message)
Exception: Results backend isn't configured.
i have this config:
/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
host='localhost', port=6379, key_prefix='superset_results')
You have to add proper quoting to your code excerpts, otherwise it's impossible to help you.
i changed that and worker console:
[2017-07-20 14:35:59,067: ERROR/ForkPoolWorker-29] Task superset.sql_lab.get_sql_results[dc953f93-5cf6-4de6-9432-0d09a354ca2e] raised unexpected: Exception("Results backend isn't configured.",)
Traceback (most recent call last):
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/celery/app/trace.py", line 622, in protected_call
return self.run(*args, **kwargs)
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 81, in get_sql_results
handle_error("Results backend isn't configured.")
File "/home/rko/venv/lib/python3.4/site-packages/superset/sql_lab.py", line 78, in handle_error
raise Exception(query.error_message)
Exception: Results backend isn't configured.
i have this config:
/superset/config.py
Configure the class "CeleryConfig", by adding the URL of your Redis installation (in my case, localhost:6379):
class CeleryConfig(object):
BROKER_URL = 'redis://localhost:6379/'
CELERY_IMPORTS = ('superset.sql_lab', )
CELERY_RESULT_BACKEND = 'redis://localhost:6379/'
CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
CELERY_CONFIG = CeleryConfig
from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
host='localhost', port=6379, key_prefix='superset_results')
is this better?
What does "Results backend isn't configured." mean?
|
2025-04-01T06:37:54.136659
| 2018-08-18T00:44:42
|
351775295
|
{
"authors": [
"codecov-io",
"kristw"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3625",
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/5670"
}
|
gharchive/pull-request
|
Refactor treemap
Decouple the visualization code from slice and formData
Test
Ran a development instance with the code above and verified with production instance that they produce the same results.
@williaster @conglei @graceguo-supercat
Codecov Report
Merging #5670 into master will decrease coverage by 0.03%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #5670 +/- ##
==========================================
- Coverage 63.51% 63.48% -0.04%
==========================================
Files 360 360
Lines 22904 22915 +11
Branches 2551 2551
==========================================
Hits 14548 14548
- Misses 8341 8352 +11
Partials 15 15
| Impacted Files | Coverage Δ |
|---|---|
| superset/assets/src/visualizations/treemap.js | 0% <0%> (ø) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cdd348a...723f82b. Read the comment docs.
|
2025-04-01T06:37:54.143654
| 2018-11-16T21:51:43
|
381773405
|
{
"authors": [
"codecov-io",
"graceguo-supercat",
"mistercrunch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3626",
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/6405"
}
|
gharchive/pull-request
|
[fix] view results in sql lab
Clicking view results from SQL Lab shows JS exceptions.
The exception is from this line:
https://github.com/apache/incubator-superset/blob/69e8df404d46e35bf686cc92992d6e0415172d90/superset/assets/src/SqlLab/components/ExploreResultsButton.jsx#L171
@mistercrunch @michellethomas @kristw
Codecov Report
Merging #6405 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #6405 +/- ##
=======================================
Coverage 77.31% 77.31%
=======================================
Files 67 67
Lines 9581 9581
=======================================
Hits 7408 7408
Misses 2173 2173
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c42bcf8...7cebf7b. Read the comment docs.
LGTM
|
2025-04-01T06:37:54.160631
| 2019-06-06T21:18:40
|
453242667
|
{
"authors": [
"agrawaldevesh",
"codecov-io",
"john-bodley"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3627",
"repo": "apache/incubator-superset",
"url": "https://github.com/apache/incubator-superset/pull/7667"
}
|
gharchive/pull-request
|
[epoch] Remove non-UTC epoch logic
CATEGORY
Choose one
[x] Bug Fix
[ ] Enhancement (new features, refinement)
[ ] Refactor
[ ] Add tests
[ ] Build / Development Environment
[ ] Documentation
SUMMARY
As @agrawaldevesh correctly identified in https://github.com/apache/incubator-superset/pull/6721 previously we were computing the Unix timestamp for the right-hand-side (RHS) of the temporal filter condition using the local time zone as opposed to UTC which is the definition of Unix (or epoch) time.
@agrawaldevesh's change was behind a feature flag and disabled by default however this clearly is a bug and I sense we should remedy the problem by merely replacing the previously incorrect logic. Note I strongly believe users were probably unaware of the issue as Unix timestamps aren't human readable.
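To make the local-time-vs-UTC discrepancy concrete, here is a small standalone Python sketch (not Superset code) showing how interpreting a naive datetime in the local time zone yields a different "epoch" value than the UTC-based definition of Unix time:

```python
from datetime import datetime, timezone

dt = datetime(2019, 6, 6)  # a naive filter boundary, no time zone attached

local_epoch = dt.timestamp()                              # interpreted in local time
utc_epoch = dt.replace(tzinfo=timezone.utc).timestamp()   # interpreted as UTC

# The difference is the local UTC offset in seconds; only utc_epoch matches
# the definition of Unix/epoch time.
print(local_epoch - utc_epoch)
```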
TEST PLAN
CI.
ADDITIONAL INFORMATION
[ ] Has associated issue:
[ ] Changes UI
[ ] Requires DB Migration.
[ ] Confirm DB Migration upgrade and downgrade tested.
[ ] Introduces new feature or API
[ ] Removes existing feature or API
REVIEWERS
to: @agrawaldevesh @betodealmeida @michellethomas @mistercrunch @villebro
https://github.com/apache/incubator-superset/issues/7656
Codecov Report
Merging #7667 into master will increase coverage by <.01%.
The diff coverage is 75%.
@@ Coverage Diff @@
## master #7667 +/- ##
==========================================
+ Coverage 65.57% 65.58% +<.01%
==========================================
Files 435 435
Lines 21754 21749 -5
Branches 2394 2394
==========================================
- Hits 14266 14264 -2
+ Misses 7367 7364 -3
Partials 121 121
| Impacted Files | Coverage Δ |
|---|---|
| superset/config.py | 93.97% <ø> (-0.04%) :arrow_down: |
| superset/connectors/sqla/models.py | 82.39% <75%> (+0.41%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d62c37b...e93f33b. Read the comment docs.
@agrawaldevesh are you onboard with this change?
Go for it! I only introduced the flag since I did not want to break existing use cases. I have no issues with making this the default.
|
2025-04-01T06:37:54.168840
| 2017-04-21T23:09:27
|
223519389
|
{
"authors": [
"akchinSTC",
"dusenberrymw",
"mboehm7"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3628",
"repo": "apache/incubator-systemml",
"url": "https://github.com/apache/incubator-systemml/pull/468"
}
|
gharchive/pull-request
|
[SYSTEMML-1554] IPA Scalar Transient Read Replacement
Currently, during IPA we collect all variables (scalars & matrices)
eligible for propagation across blocks (i.e. not updated in block), and
then propagate only the matrix sizes across the blocks. It seems
plausible that we could also replace all eligible scalar transient reads
with literals based on the variables that have already been collected.
The benefit is that many ops will be able to determine their respective
output sizes during regular compilation, instead of having to wait until
dynamic recompilation, and thus we can reduce the pressure on dynamic
recompilation.
Are there drawbacks to this approach? The use case is that I was seeing a large number of memory warnings while training a convolutional net due to the sizes being unknown during regular compilation, yet the engine only having CP versions of the ops. Additionally, I was running into actual heap space OOM errors for situations that should not run out of memory, and thus I started exploring.
I've attached an example script and the explain plan (hops & runtime) w/ and w/o the IPA scalar replacement to the associated JIRA issue.
cc @mboehm7
Thanks @dusenberrymw, I gave it a try on our ARIMA application testcase, which historically was challenging for scalar propagation. Unfortunately, it failed due to a - probably unrelated - issue in replaceLiteralFullUnaryAggregate. Once I've resolved this, I'll play around with it a bit more.
Once this PR is in, we should also think about the problem of propagating scalars into functions if functions are called once or with consistent scalar inputs.
@mboehm7 Great, interested to see what else is needed for this to be generally applicable. Also, definitely +1 for propagating scalars into functions. In particular, we should allow for the case of functions for which any subset of the inputs are consistent scalars. I.e., a function may have an unknown matrix size as an input, but then have several other scalar inputs that are always consistent.
Thanks, @mboehm7. Looks like it is still failing a test -- org.apache.sysml.test.integration.functions.misc.DataTypeChangeTest#testDataTypeChangeValidate4c. Looking into it, it fails due to trying to cast a Matrix to a Scalar object. At a deeper level, it looks like the propagated variable map is holding onto the "matrix" X, rather than dropping it as it should, since X is turned into a scalar by the call X = foo(X). Interestingly, the FunctionOp for the foo function is marked as having an Unknown datatype and valuetype. That to me seems to be a big issue, but I'm not sure exactly where that is failing. Thoughts? Overall, this seems like a bug that was just hidden before, rather than being newly introduced.
yes this is almost certainly a bug - originally we allowed data type changes in conditional control flow (e.g., if branch assigns a scalar and else branch a matrix), in which case we assign UNKNOWN for subsequent references. However, I modified this years ago because SystemML could not compile valid instructions for these scenarios unless we extend the recompiler to actually update the data type and block sizes there.
By the way, aside from that test, everything else passed.
cc @mboehm7 Can you review this fix for the datatype conversion issue? I'm also waiting for the full testing with Jenkins.
Refer to this link for build results (access rights to CI server needed):
https://sparktc.ibmcloud.com/jenkins/job/SystemML-PullRequestBuilder/1437/
Thanks, @mboehm7. I'll update the docs and merge.
Refer to this link for build results (access rights to CI server needed):
https://sparktc.ibmcloud.com/jenkins/job/SystemML-PullRequestBuilder/1442/
|
2025-04-01T06:37:54.172009
| 2019-07-29T07:06:59
|
473885670
|
{
"authors": [
"YorkShen",
"darkThanBlack"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3629",
"repo": "apache/incubator-weex",
"url": "https://github.com/apache/incubator-weex/issues/2758"
}
|
gharchive/issue
|
[iOS] PR #2394 covered by PR #2520 so release v0.26.0 still have thread issue
#2394
#2520
Try release 0.28
If this still bothers you and you find a solution, you could send us a PR; I am very happy to discuss the implementation details with you or review your PR on the mailing list.
I have a busy schedule and I can't read GitHub issues every day, but I check the mailing list every day. I am sorry if this bothers you.
|
2025-04-01T06:37:54.176179
| 2016-03-22T15:29:03
|
142685794
|
{
"authors": [
"Leemoonsoo",
"jsimsa"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3630",
"repo": "apache/incubator-zeppelin",
"url": "https://github.com/apache/incubator-zeppelin/pull/790"
}
|
gharchive/pull-request
|
[ZEPPELIN-757] Ordering dropdown menu items alphabetically.
What is this PR for?
Fixing documentation.
What type of PR is it?
Documentation
Todos
N/A
What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-757
How should this be tested?
Follow the steps in https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md to build the documentation.
Screenshots (if appropriate)
Questions:
Does the licenses files need update? No
Is there breaking changes for older versions? No
Does this needs documentation? No
Ready for review.
Thanks @jsimsa for the fix. LGTM and merge if there're no more discussions.
|
2025-04-01T06:37:54.183468
| 2021-02-08T21:53:25
|
803999765
|
{
"authors": [
"ope-nz",
"qiaojialin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3631",
"repo": "apache/iotdb",
"url": "https://github.com/apache/iotdb/issues/2663"
}
|
gharchive/issue
|
Constant High CPU Usage
Describe the bug
I have a small IOTDB instance running on Ubuntu. Over time the CPU utilisation is very high even though there are a low number of clients and transactions. The CPU usage appears to climb over time. This system has been running for several weeks without a restart. If I run "ps -ef" I can see that it is IOTDB that is using the CPU constantly.
Below is the CPU usage before and after I ran stop-server/start-server. The CPU dropped from > 80% down to < 10%.
To Reproduce
Steps to reproduce the behavior:
Run IOTDB 11.0 with default settings
Expected behavior
The CPU usage should not be so high.
Screenshots
See above.
Desktop (please complete the following information):
OS: Ubuntu 20.04.1 LTS
Browser Not Applicable
Version 11.0
Additional context
None
I have upgraded to 11.2 so I will monitor and see if the issue persists.
Step 1: Could you upload the logs during the high cpu usage?
Step 2: If possible, could you please use JProfiler to record the CPU when the CPU usage is high, and save it as a .jps snapshot file, then we can see what happens.
Step 3: Some config that may solve the problem:
iotdb-engine.properties
enable_unseq_compaction=false
I have been monitoring the CPU over the past week or so. The image below shows a week's worth of CPU data. Notice that after a few days the CPU is back up high.
There are no errors in the logs - most logging is info level with query response time.
I am going to restart the database with "enable_unseq_compaction=false" and see how that goes.
I am still having the same problem even with "enable_unseq_compaction=false"
I have restarted it to capture the logs.
|
2025-04-01T06:37:54.197102
| 2021-07-22T15:11:46
|
950774670
|
{
"authors": [
"ijuma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3632",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/11108"
}
|
gharchive/pull-request
|
KAFKA-13116: Fix message_format_change_test and compatibility_test_new_broker_test failures
These failures were caused by a46b82bea9abbd08e5. Details for each test:
message_format_change_test: use IBP 2.8 so that we can write in older message
formats.
compatibility_test_new_broker_test_failures: fix down-conversion path to handle
empty record batches correctly. The record scan in the old code ensured that
empty record batches were never down-converted, which hid this bug.
Verified with ducker that some variants of these tests failed without these changes
and passed with them.
Note that the upgrade_test is still failing. It looks like there are multiple causes,
so I left that for another PR.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@hachikuji I addressed your comment, this is ready for another review.
To double-check the local results, I started the branch builder here too https://jenkins.confluent.io/job/system-test-kafka-branch-builder/4620/
Failures are unrelated. Merging to master and cherry-picking to 3.0.
The branch builder system tests passed btw.
|
2025-04-01T06:37:54.202103
| 2022-06-10T10:10:08
|
1267342788
|
{
"authors": [
"divijvaidya",
"ijuma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3633",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/12281"
}
|
gharchive/pull-request
|
KAFKA-13971: Fix atomicity violations caused by improper usage of ConcurrentHashMap - part2
## Problem #1 in DelegatingClassLoader.java
An atomicity violation, for example:
Consider thread T1 reaching line 228; before it executes, control switches to thread T2, which also reaches line 228. Control then switches back to T1, which reaches line 232 and adds a value to the map. T2 then executes line 228 and creates a new map, overwriting the value written by T1, so T1's change is lost. This code change ensures that two threads cannot both initialize the TreeMap; only one of them will.
Problem #2 in RocksDBMetricsRecordingTrigger.java
An atomicity violation, for example:
Consider thread T1 reaching line 40; before it executes, control switches to thread T2, which also reaches line 40. In a serialized execution order thread T2 should have thrown the exception, but here it won't. The code change fixes that.
Note that some other problems associated with use of concurrent hashmap has been fixed in https://github.com/apache/kafka/pull/12277
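To make the two patterns concrete, here is a minimal, illustrative sketch (class and field names are made up, not the actual Kafka code): computeIfAbsent makes the lazy map initialization atomic, and putIfAbsent makes the duplicate check atomic.
```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class AtomicMapPatterns {
    // Problem #1 pattern: lazily create the nested map exactly once per key.
    private final ConcurrentMap<String, SortedMap<String, String>> pluginsByType =
            new ConcurrentHashMap<>();

    SortedMap<String, String> pluginsFor(String type) {
        // computeIfAbsent is atomic: two racing threads cannot both install a fresh
        // TreeMap and silently drop each other's entries. (Writes into the returned
        // TreeMap are assumed to be synchronized by the caller.)
        return pluginsByType.computeIfAbsent(type, t -> new TreeMap<>());
    }

    // Problem #2 pattern: reject duplicate registrations atomically.
    private final ConcurrentMap<String, Runnable> recordersByStore = new ConcurrentHashMap<>();

    void addRecorder(String storeName, Runnable recorder) {
        // putIfAbsent folds the "check" and the "act" into one atomic step, so the
        // exception is thrown exactly as it would be in a serialized execution.
        if (recordersByStore.putIfAbsent(storeName, recorder) != null) {
            throw new IllegalStateException("Recorder already registered for " + storeName);
        }
    }
}
```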
Is the relevant code specified as thread safe?
Thank you for your review @ijuma. I appreciate it. Though, I am afraid I don't understand your question.
Are you asking whether the existing code is supposed to be thread safe?
If yes, for DelegatingClassLoader.java the javadoc for the class mentioned that it is supposed to be thread safe (but it isn't due to the bug that is fixed in this review). For the RocksDBMetricsRecordingTrigger.java, we run a thread periodically from a metric trigger thread pool which reads from the map maintained in the class. At the same time it is possible that another thread is mutating the map during startup/shutdown of rocksDB which may leave the map in inconsistent state. Hence, it's important for this class to be thread safe as well.
Also, note that both the classes in this review use ConcurrentHashMap (albeit incorrectly) to ensure thread safe mutation over the map.
Are you asking whether the changed code is thread safe?
If yes, the change uses atomic operations provided by ConcurrentHashMap to ensure thread safety.
@C0urante please review when you get a chance.
|
2025-04-01T06:37:54.206928
| 2022-11-22T20:27:46
|
1460508948
|
{
"authors": [
"Cerchie",
"ableegoldman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3634",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/12893"
}
|
gharchive/pull-request
|
KAFKA-14260: add synchronized to prefixScan method
As a result of "14260: InMemoryKeyValueStore iterator still throws ConcurrentModificationException", I'm adding synchronized to prefixScan as an alternative to going back to the ConcurrentSkipList.
I've read up on testing multi-threaded behavior and I believe it's best to leave the testing as it is for now as testing whether synchronized works doesn't always work. I did make sure ./gradlew test was green on my branch. Happy to be corrected here.
This is my first PR. As per the guidelines, I affirm that the contribution is my original work and that I license the work to the project under the project's open source license. I see that I also need to make a build trigger request, @ableegoldman I would appreciate one please :)
I do not believe this requires a documentation update as it is just bringing a method up to standard. Again, happy to help out if it turns out otherwise.
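As an illustration of the pattern (this is a toy store, not the actual InMemoryKeyValueStore code), every accessor including the prefix scan takes the same monitor, so iteration can no longer race with a concurrent put:
```java
import java.util.Map;
import java.util.TreeMap;

class TinyInMemoryStore {
    private final TreeMap<String, byte[]> map = new TreeMap<>();

    synchronized void put(String key, byte[] value) {
        map.put(key, value);
    }

    synchronized byte[] get(String key) {
        return map.get(key);
    }

    // The prefix range is materialized under the same lock; returning a copy keeps
    // later writes from throwing ConcurrentModificationException during iteration.
    synchronized Map<String, byte[]> prefixScan(String prefix) {
        return new TreeMap<>(map.subMap(prefix, prefix + Character.MAX_VALUE));
    }
}
```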
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
I see that I also need to make a build trigger request
By the way, this is thankfully no longer the case (it used to be really annoying and only worked like half the time) -- these days the build will run on any PR that's opened, and will rerun each time you push a new commit. I guess the contributing guidelines are out of date so thanks for bringing that up 🙂 I'll update them
Merged to trunk and cherrypicked to 3.4
|
2025-04-01T06:37:54.215742
| 2024-06-16T17:52:49
|
2355896530
|
{
"authors": [
"handfreezer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3635",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/16361"
}
|
gharchive/pull-request
|
KAFKA-16707: Kafka Kraft : using Principal Type in StandardACL in order to defined ACL with a notion of group without rewriting KafkaPrincipal of client by rules
The default StandardAuthorizer in KRaft mode defines a KafkaPrincipal as type=User plus a name, and possibly the special wildcard.
The difficulty with this solution is that we can't define ACLs for a group of KafkaPrincipals.
There is currently a way to do so by defining RULEs that rewrite the KafkaPrincipal name field, BUT to introduce the notion of a group this way, you have to set rules that make you lose the unique part of the KafkaPrincipal name of the connected client.
The concept here, in the StandardAuthorizer of Kafka Kraft, is to add the management of KafkaPrincipal type:
Regex
StartsWith
EndsWith
Contains
(User is still available and keep working as before to avoid any regression/issue with current configurations)
This would be done in the StandardAcl class of metadata/authorizer, and the findResult method of StandardAuthorizerData would delegate the match to the StandardAcl class (for performance reasons: the regex is precompiled in the ACL).
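Roughly, the matching could look like the sketch below (names are illustrative, not the actual StandardAcl code); the regex is compiled once when the ACL entry is built, so repeated authorization checks stay cheap:
```java
import java.util.regex.Pattern;

final class PrincipalMatcher {
    enum MatchType { USER, REGEX, STARTS_WITH, ENDS_WITH, CONTAINS }

    private final MatchType type;
    private final String value;
    private final Pattern compiled; // precompiled once per ACL for performance

    PrincipalMatcher(MatchType type, String value) {
        this.type = type;
        this.value = value;
        this.compiled = (type == MatchType.REGEX) ? Pattern.compile(value) : null;
    }

    boolean matches(String principalName) {
        switch (type) {
            case USER:        return value.equals(principalName) || "*".equals(value);
            case REGEX:       return compiled.matcher(principalName).matches();
            case STARTS_WITH: return principalName.startsWith(value);
            case ENDS_WITH:   return principalName.endsWith(value);
            case CONTAINS:    return principalName.contains(value);
            default:          return false;
        }
    }
}
```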
I added tests in metadata and ran ./gradlew test from kafka:trunk and my fork: no more failed tests on my branch than on kafka:trunk
Committer Checklist (excluded from commit message)
[ x ] Verify design and implementation => thanks to spell checker in gradle process
[ x ] Verify test coverage and CI build status => added a few tests in metadata, and ran gradlew test without more failed tests than kafka:trunk
[ x ] Verify documentation (including upgrade notes): added a few lines to the docs, no upgrade notes as the previous behaviour should still work as before.
Link to the JIRA-16707
Hello, when I run "./gradlew test" on my side from an apache/kafka trunk clone, I get failing tests.
So is there an (easy) way to know which failed test in "continuous-integration/jenkins/pr-merge" I have to look at?
|
2025-04-01T06:37:54.222215
| 2024-09-26T13:29:59
|
2550600816
|
{
"authors": [
"dajac",
"mumrah",
"squah-confluent"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3636",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/17285"
}
|
gharchive/pull-request
|
MINOR: Cache topic resolution in TopicIds set
Looking up topics in a TopicsImage is relatively slow. Cache the results
in TopicIds to improve assignor performance. In benchmarks, we see a
noticeable improvement in performance in the heterogeneous case.
Before
Benchmark (assignmentType) (assignorType) (isRackAware) (memberCount) (partitionsToMemberRatio) (subscriptionType) (topicCount) Mode Cnt Score Error Units
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HOMOGENEOUS 1000 avgt 5 36.400 ± 3.004 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HETEROGENEOUS 1000 avgt 5 158.340 ± 0.825 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HOMOGENEOUS 1000 avgt 5 1.329 ± 0.041 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HETEROGENEOUS 1000 avgt 5 382.901 ± 6.203 ms/op
After
Benchmark (assignmentType) (assignorType) (isRackAware) (memberCount) (partitionsToMemberRatio) (subscriptionType) (topicCount) Mode Cnt Score Error Units
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HOMOGENEOUS 1000 avgt 5 36.465 ± 1.954 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL RANGE false 10000 10 HETEROGENEOUS 1000 avgt 5 114.043 ± 1.424 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HOMOGENEOUS 1000 avgt 5 1.454 ± 0.019 ms/op
ServerSideAssignorBenchmark.doAssignment INCREMENTAL UNIFORM false 10000 10 HETEROGENEOUS 1000 avgt 5 342.840 ± 2.744 ms/op
Based heavily on https://github.com/apache/kafka/pull/16527.
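The idea reduces to memoizing the slow image lookup for the duration of one assignment call, roughly like this sketch (illustrative only; the real TopicIds class wraps a TopicsImage and Uuid keys):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

final class CachingTopicResolver {
    private final Function<String, String> slowLookup;        // e.g. topic id -> name via the image
    private final Map<String, String> cache = new HashMap<>(); // lives only for one assignment call

    CachingTopicResolver(Function<String, String> slowLookup) {
        this.slowLookup = slowLookup;
    }

    String nameForId(String topicId) {
        // computeIfAbsent memoizes the first resolution; later probes hit the plain
        // HashMap, which is cheaper than re-querying the persistent image map.
        return cache.computeIfAbsent(topicId, slowLookup);
    }
}
```
Because the resolver is created per call, nothing is kept in memory after the assignment completes, which matches the lifetime discussed below.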
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@mumrah
Another thing to consider is the lifetime of the cache. Do we really need the ID + name mappings kept in memory forever?
The lifetime of the cache is bound to the call. It is not kept forever.
Does this come down to performance differences between HashMap and PCollectionsImmutableMap?
Yes.
If we decide we really need faster topic ID to name lookups, I would consider adding it to TopicsImage. Managing a cache outside of the image will be a bit difficult.
We could consider this separately. At the moment, we don't really have the time to do it. The current strategy seems to be a good tradeoff at the moment given that it is only bound to the call and not kept forever.
@dajac thanks for the explanation, makes sense. Can we include a javadoc on the class describing the expected lifetime of this class?
@mumrah
Seeing that the cache is not actually used outside of tests and benchmarks, I'm guessing this is still WIP.
It's used in TargetAssignmentBuilder.build(), which is used by the new group coordinator.
I've updated the javadoc to describe the lifetime of the cache.
@mumrah I will merge it. If you have further comments, @squah-confluent can address separately.
|
2025-04-01T06:37:54.225638
| 2017-02-20T02:29:27
|
208769653
|
{
"authors": [
"amethystic",
"ijuma",
"omkreddy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3637",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/2576"
}
|
gharchive/pull-request
|
kafka-4767: KafkaProducer is not joining its IO thread properly
KafkaProducer#close swallows the InterruptedException, which might be acceptable when it's invoked from within the main thread, or when the user is extending Thread and therefore controls all the code higher up the call stack. For other cases, it'd be better to restore the interrupted status after catching the exception.
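A hedged sketch of the pattern under discussion (not the actual KafkaProducer code): when the join is interrupted, restore the interrupt flag and rethrow as an unchecked exception instead of swallowing it.
```java
// Illustrative only: close() joins the background I/O thread and must not
// swallow interruption silently.
void joinIoThread(Thread ioThread, long timeoutMs) {
    try {
        ioThread.join(timeoutMs);
    } catch (InterruptedException e) {
        // Restore the interrupted status so code higher up the call stack can react.
        Thread.currentThread().interrupt();
        // The review below settles on Kafka's InterruptException (unchecked, extends
        // KafkaException, sets the flag in its constructor); a plain RuntimeException
        // stands in for it in this sketch.
        throw new RuntimeException("Interrupted while joining the producer I/O thread", e);
    }
}
```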
@ijuma Please have a review. Thanks.
@ijuma Please have a review of this PR. Thanks.
LGTM
Thanks for the PR. I looked this in more detail and it looks like we eventually throw KafkaException for this case. In the consumer, we throw InterruptException (which is a non-checked version of InterruptedException that inherits from KafkaException). Seems like we should do the same here. That class sets the interrupt in the constructor.
@ijuma Followed the same pattern as how KafkaConsumer#close treats interruption, but also explicitly added an if clause to check for InterruptedException, since firstException would be set to it explicitly in KafkaProducer#close. Please have a review of that. Thanks.
@ijuma Well, I already removed that dead code and also added the code to restore the interruption status. Looks good now?
|
2025-04-01T06:37:54.235622
| 2017-06-29T22:18:01
|
239626978
|
{
"authors": [
"asfgit",
"ijuma",
"vahidhashemian"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3638",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/3460"
}
|
gharchive/pull-request
|
KAFKA-5534: offsetForTimes result should include partitions with no offset
For topics that support timestamp search, if no offset is found for a partition, the partition should still be included in the result with a null offset value. This KafkaConsumer method currently excludes such partitions from the result.
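From the caller's side the change means checking for null values rather than for missing keys; a small usage sketch (the topic name and timestamp are illustrative):
```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

final class TimestampLookup {
    static void printOffsetsFor(Consumer<?, ?> consumer, long timestampMs) {
        Map<TopicPartition, Long> query = new HashMap<>();
        query.put(new TopicPartition("events", 0), timestampMs); // topic name is illustrative

        Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
        result.forEach((tp, offsetAndTs) -> {
            if (offsetAndTs == null) {
                // With this change the partition is still present in the map, just with a null value.
                System.out.println("No offset found for " + tp);
            } else {
                System.out.println(tp + " -> offset " + offsetAndTs.offset());
            }
        });
    }
}
```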
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5806/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5820/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/5937/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/5922/
Test PASSed (JDK 8 and Scala 2.12).
@hachikuji, is this what you had in mind?
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6154/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6138/
Test PASSed (JDK 8 and Scala 2.12).
We should add a note to the upgrade notes and I think we can only merge this in trunk as it does change the behaviour.
Thanks @ijuma. I'll update the upgrade notes with this change. I assume it's this file that needs to be updated.
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6180/
Test PASSed (JDK 7 and Scala 2.11).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6164/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6210/
Test PASSed (JDK 8 and Scala 2.12).
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.11/6226/
Test PASSed (JDK 7 and Scala 2.11).
|
2025-04-01T06:37:54.238252
| 2018-04-18T21:51:39
|
315655169
|
{
"authors": [
"guozhangwang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3639",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/4894"
}
|
gharchive/pull-request
|
MINOR: add window store range query in simple benchmark
There are a couple minor additions in this PR:
add a new test for window store, to range query upon receiving each record.
in the non-windowed state store case, add a get call before the put call.
Enable caching by default to be consistent with other Join / Aggregate cases, where caching is enabled by default.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@mjsax @bbejeck @vvcephei
|
2025-04-01T06:37:54.241020
| 2018-04-27T18:12:10
|
318500968
|
{
"authors": [
"ijuma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3640",
"repo": "apache/kafka",
"url": "https://github.com/apache/kafka/pull/4940"
}
|
gharchive/pull-request
|
Upgrade ZooKeeper to 3.4.12 and Scala to 2.12.6
ZK 3.4.12 fixes the regression that forced us to go back to
3.4.10. Release notes:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310801&version=12342040
Scala 2.12.6 fixes the issue that prevented us from upgrading
to 2.12.5.
Committer Checklist (excluded from commit message)
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation (including upgrade notes)
@junrao trying the ZK upgrade again.
|
2025-04-01T06:37:54.248169
| 2019-11-05T20:55:59
|
518024605
|
{
"authors": [
"jameschen1519",
"risdenk",
"smolnar82"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3641",
"repo": "apache/knox",
"url": "https://github.com/apache/knox/pull/177"
}
|
gharchive/pull-request
|
[WIP] KNOX-2095 - Adding in DefaultDispatch code and tests to handle 504 errors
What changes were proposed in this pull request?
Currently, Knox masks all connection errors as 500 errors, when they may be more accurately described using other error codes, especially 504. A change has been made to return a 504 error in the event of a socket timeout.
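A hedged illustration of the idea (the class and method names here are made up, not the actual DefaultDispatch code): map a socket timeout from the backend to 504 Gateway Timeout instead of a generic 500.
```java
import java.io.IOException;
import java.net.SocketTimeoutException;

final class ErrorMapper {
    static int statusFor(IOException failure) {
        if (failure instanceof SocketTimeoutException) {
            // The backend accepted the connection but did not answer in time.
            return 504; // Gateway Timeout
        }
        return 500; // Internal Server Error for other connection failures
    }
}
```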
How was this patch tested?
Ran ant verify under Knox 1.2.0. The patch was generated against the Knox 1.2.0 branch, and subsequently applied to Knox 1.4.0. Currently ant verify is failing, but these are most likely transient errors as the same errors appear in master. Still running tests.
Hmm. It looks like the check at https://api.travis-ci.org/v3/job/607855611/log.txt failed, but the logs suggest that it may be running plugins that aren't thread safe. Assuming that this is a threading issue, is there any way to rerun the tests, possibly without parallelism?
The error was
[ERROR] Failures:
[ERROR] GatewayCorrelationIdTest.testTestService:209
Expected: is <46>
but: was <45>
Not an error I've seen before. I retriggered the JDK 11 job. We have some flaky tests related to ZK but not that test.
Hi Kevin, sorry for the late followup; was out yesterday. Seems like it failed on gateway-service-remoteconfig again; was it still the testTestService test? I don't have JDK 11 installed on this machine and I'd like to confirm before trying to pursue the error.
(On that note, is there any good way to check the Surefire reports/view build artifacts?)
Grasping at straws here, but looking through the test case at https://github.com/apache/knox/blob/89caa5feeed706abc8d7ce1407830ae00d97d405/gateway-test/src/test/java/org/apache/knox/gateway/GatewayCorrelationIdTest.java, is it possible that the reduced timeout might be causing the issue? I'm not completely sure how the test works, but with the change in this PR, all connection attempts that experience a socket timeout are automatically given a 403, whereas without the change, there would at least be an attempt to contact the failover nodes.
...then again, I suppose this wouldn't explain the successes in JDK8. It's a bit difficult to tell without looking at the reports unfortunately.
@jameschen1519 I don't think the test failures are related to your change.
(On that note, is there any good way to check the Surefire reports/view build artifacts?)
The Travis build details are linked in the pr and then you go to the specific build and build log. That has the same output if you were to run locally.
This PR has not been touched for over a year; closing it.
|
2025-04-01T06:37:54.256113
| 2024-01-31T09:04:35
|
2109613782
|
{
"authors": [
"codecov-commenter",
"ulysses-you",
"wForget"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3642",
"repo": "apache/kyuubi",
"url": "https://github.com/apache/kyuubi/pull/6035"
}
|
gharchive/pull-request
|
[WIP][KYUUBI #6031] Add CollectMetricsPrettyDisplayListener
:mag: Description
Issue References 🔗
This pull request fixes #6031
Describe Your Solution 🔧
Types of changes :bookmark:
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Test Plan 🧪
Behavior Without This Pull Request :coffin:
Behavior With This Pull Request :tada:
Related Unit Tests
Checklist 📝
[ ] This patch was not authored or co-authored using Generative Tooling
Be nice. Be informative.
cc @zhouyifan279
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (208354c) 61.17% compared to head (cd23093) 61.03%.
Additional details and impacted files
@@ Coverage Diff @@
## master #6035 +/- ##
============================================
- Coverage 61.17% 61.03% -0.14%
Complexity 23 23
============================================
Files 623 623
Lines 37144 37144
Branches 5032 5032
============================================
- Hits 22721 22669 -52
- Misses 11979 12018 +39
- Partials 2444 2457 +13
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:37:54.261866
| 2023-01-22T09:35:50
|
1552056991
|
{
"authors": [
"jvz",
"u-ways"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3643",
"repo": "apache/logging-log4j-kotlin",
"url": "https://github.com/apache/logging-log4j-kotlin/pull/27"
}
|
gharchive/pull-request
|
Upgrade kotlin and log4j versions
Hello,
It seems that the Log4j version of the Kotlin API is out of date.
On top of that, the Kotlin version hasn't been updated for the past 2 years.
I have done a code analysis scan and found no incompatibilities; tests are passing, and everything seems to be working as expected.
Reasons to upgrade to 1.8.0 that are relevant:
1.8.0 - Improved kotlin-reflect performance
1.6.0 - Further Improvements to type inference for recursive generic types
1.5.30 - Improvements to type inference for recursive generic types
1.5.0 - Inline classes are released as Stable
And lots of other performance improvements overall...
Changes
Upgrade log4j to 2.19.0
Upgrade Kotlin to 1.8.0
kotlinx.coroutines to 1.6.4
Trivial changes - Feel free to amend as needed. :)
@jvz Can you have a look please?
The Kotlin API is a minimum version requirement. I've been using this library with Kotlin 1.4.x, 1.7.x, and 1.8.x. The Log4j version update is good, though!
I see, that makes sense, thanks for letting me know, I am closing this one and opening a more appropriate PR for Log4j upgrade only.
|
2025-04-01T06:37:54.265649
| 2023-10-04T06:53:36
|
1925525700
|
{
"authors": [
"dungba88",
"gf2121",
"mikemccand"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3644",
"repo": "apache/lucene",
"url": "https://github.com/apache/lucene/issues/12619"
}
|
gharchive/issue
|
Make FST BytesStore grow smoothly
Description
Too bad we don't have a writer that uses tiny (like 8 bytes) block at first, but doubles size for each new block (16 bytes, 32 bytes next, etc.). Then we would naturally use log(size) number of blocks without over-allocating.
But then reading bytes is a bit tricky because we'd need to take discrete log (base 2) of the address. Maybe it wouldn't be so bad -- we could do this with Long.numberOfLeadingZeros maybe? But that's a bigger change ... we can do this separately/later.
From https://github.com/apache/lucene/pull/12604#discussion_r1344639608
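For a feel of the addressing math, here is a toy sketch (assuming block b holds 2^(b+3) bytes, i.e. 8, 16, 32, ...; this is not Lucene code): Long.numberOfLeadingZeros gives the discrete log needed to turn a global byte address into a (block, offset) pair.
```java
final class DoublingBlockAddress {
    // Assumption for this sketch: block b holds 2^(b+3) bytes, so the bytes
    // written before block b total 2^(b+3) - 8.
    static int blockIndex(long address) {
        // floor(log2(address + 8)) - 3, computed via numberOfLeadingZeros
        return 63 - Long.numberOfLeadingZeros(address + 8) - 3;
    }

    static long offsetInBlock(long address) {
        int b = blockIndex(address);
        return address + 8 - (1L << (b + 3));
    }

    public static void main(String[] args) {
        // address 0 -> block 0 offset 0; address 8 -> block 1 offset 0; address 23 -> block 1 offset 15
        System.out.println(blockIndex(0) + "/" + offsetInBlock(0));
        System.out.println(blockIndex(8) + "/" + offsetInBlock(8));
        System.out.println(blockIndex(23) + "/" + offsetInBlock(23));
    }
}
```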
Note that oal.store.ByteBuffersDataOutput takes a different and neat approach to gracefully growing: it picks an initial block size, and appends new blocks as you write bytes, but then if it reaches 100 blocks, it "resizes" itself by doubling the block size and copying over, so that now you have 50 blocks.
So it's still O(N) amortized cost of that doubling/copying with time, and at any given moment you will not be wasting too many %tg of the bytes you've written, except at the start 1 KB block size.
In https://github.com/apache/lucene/pull/12624, I moved the main FST body out of BytesStore into ByteBuffersDataOutput, and BytesStore becomes only a single byte[] for the currently written node so maybe we don't need to do this?
|
2025-04-01T06:37:54.279972
| 2015-09-15T20:03:21
|
106634413
|
{
"authors": [
"hgschmie",
"michael-o"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3645",
"repo": "apache/maven-plugins",
"url": "https://github.com/apache/maven-plugins/pull/60"
}
|
gharchive/pull-request
|
Add new option "failOnWarning".
This option causes the maven-compiler-plugin to treat warnings as errors
and fail accordingly.
Simply adds "-Werror" to the compiler command line. It may be nice to
add this to the plexus-compiler-api proper but as the sonatype repo only
has tags to 2.4 and the current plugin references 2.6 (and I have no
idea where that comes from), I went the easy route. Happy to refactor
it if wanted.
Is this PR still relevant? If so, is there any reason not to pass this via compilerArgs?
Hm. No comment for almost a year and then one comment and closed within 8 days.
Yes, it is still relevant. As an option, I can expose this as a property with
...
<failOnWarning>${failWarningSwitch}</failOnWarning>
...
so this can be overridden from the command line. If it gets added to compilerArgs, there is no way to control this dynamically from the command line.
plexus compiler api 2.8.1 supports failOnWarning directly. I may simply redo this patch to leverage this.
Let's reopen. Can you provide a PR?
|
2025-04-01T06:37:54.379290
| 2019-09-26T14:28:45
|
498922932
|
{
"authors": [
"utzig"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3646",
"repo": "apache/mynewt-core",
"url": "https://github.com/apache/mynewt-core/pull/2017"
}
|
gharchive/pull-request
|
P-NUCLEO-WB55
Slinky works, SPI/I2C/TIM under test...
@kasjer Could you take a look at this again? I think most review issues were tackled, apart from the int vs unsigned for I2C pins and the flash suggestions (FLASH_PAGE_SIZE and removing the _ prefix). I am not sure what your suggestion is for the first one, but the issue spans across families, so I think it would make more sense to do it in another PR that covers them all. For the flash suggestions, I agree, but would rather send a new PR that changes it on every family. Is that OK?
@kasjer All issues addressed.
|
2025-04-01T06:37:54.385259
| 2016-10-11T15:48:01
|
182301536
|
{
"authors": [
"jfrazee",
"mattyb149"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3647",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/1123"
}
|
gharchive/pull-request
|
NIFI-2887 NumberFormatException in HL7ExtractAttributes for repeating segments with order control codes
Fixes NIFI-2887 and adds more test cases
+1 LGTM, ran the unit tests and on a full NiFi, verified the NFE is not presented and the flow file is parsed successfully. Merging to master, thanks!
+1 LGTM, ran the unit tests and with a full NiFi, verified the NFE is not presented and messages are parsed successfully. Merging to master, thanks!
|
2025-04-01T06:37:54.395892
| 2017-09-09T16:58:18
|
256451451
|
{
"authors": [
"mattyb149",
"pvillard31"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3648",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/2138"
}
|
gharchive/pull-request
|
NIFI-4371 - add support for query timeout in Hive processors
Thank you for submitting a contribution to Apache NiFi.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[x] Is your initial contribution a single, squashed commit?
For code changes:
[x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[x] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
The "unit tests" for TestSelectHiveQL use Derby as the database, only to test the functionality of getting the "HiveQL" statement to the database and parsing its results. In that vein, Derby supports setQueryTimeout (DERBY-31), so can we add a unit test that sets the value, to exercise that part of the code?
Thanks for the review @mattyb149 and @joewitt. I updated the property description based on your comments. Regarding the unit test, since I'm using a HiveStatement object in the custom validate method, I'm not sure I can easily test the property (it'll always fail in a default build even with the Derby backend). And, if adding a unit test where I expect the validation to raise an error, this test may not have the expected result if using different profiles in the maven build.
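For context, the JDBC-level hook being discussed boils down to something like this sketch (illustrative only, not the actual processor code; the timeout value would come from the evaluated processor property):
```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

final class TimedQuery {
    static ResultSet run(Connection conn, String hiveQl, int timeoutSeconds) throws Exception {
        Statement stmt = conn.createStatement();
        // java.sql.Statement#setQueryTimeout is the standard hook; Derby honors it in
        // unit tests, while Hive goes through HiveStatement when the driver supports it.
        stmt.setQueryTimeout(timeoutSeconds);
        return stmt.executeQuery(hiveQl);
    }
}
```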
I'm getting NPEs in the unit tests, something weird with MockPropertyValue getting created without "expectExpressions" being set to anything, causing isExpressionLanguagePresent() to throw the NPE
I already noticed this error while working on others PRs (I'm a bit surprised I didn't notice the NPE on this PR...). It's because we're checking if the processor is valid before enabling expression validation (https://github.com/apache/nifi/blob/master/nifi-mock/src/main/java/org/apache/nifi/util/StandardProcessorTestRunner.java#L169). We can't really do it the other way around without changing a lot of things.
I updated the PR to just check if expectExpressions is null and, if yes, return false. This way we can use isExpressionLanguagePresent() in a custom validate method.
I talked to @markap14 about it, perhaps this fix is fine or we can just change it to a boolean, but I'll let him take a look too.
@pvillard31 Mind doing a rebase here, and updating the QUERY_TIMEOUT property to use FlowFile Attribute scope? I pushed up a rebased branch with the additional commit (https://github.com/mattyb149/nifi/commit/40b9d1db89168fac08f343be772516132f1f67c0) but I don't know if you can cherry-pick from there or if you have to do your own rebase, then cherry-pick my additional commit (if you want to use it of course).
Done @mattyb149 - thanks!
Hey @mattyb149 - I believe we added this one for Hive 3 processors but forgot this PR. I know you're not available at the moment, but just a reminder for when you're back ;) (or if someone else wants to merge it in)
finally got time to get back on this one... if you want to have another look @mattyb149
|
2025-04-01T06:37:54.402794
| 2017-11-27T20:32:47
|
277159339
|
{
"authors": [
"aburkard",
"joewitt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3649",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/2299"
}
|
gharchive/pull-request
|
NIFI-4445 Add support for ListS3Version2 API
Thank you for submitting a contribution to Apache NiFi.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[x] Is your initial contribution a single, squashed commit?
For code changes:
[x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[x] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[x] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
@aburkard can you confirm the JIRA you meant this for? NIFI-4445 looks like a typo.
Thanks
@joewitt yep my bad, NIFI-4628 is the right issue.
|
2025-04-01T06:37:54.417609
| 2018-08-02T20:30:23
|
347163237
|
{
"authors": [
"joewitt",
"mcgilman",
"ottobackwards"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3650",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/2933"
}
|
gharchive/pull-request
|
NIFI-5479 Upgraded Jetty. Moved where we unpack bundled deps to so we…
… can avoid a new jetty bug with META-INF loading logic. WIP for testing/eval. Not ready for merge
Thank you for submitting a contribution to Apache NiFi.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
[ ] Is your initial contribution a single, squashed commit?
For code changes:
[ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
2018-08-02 16:20:07,347 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Inject scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Inject.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-2.5.0-b42.jar!/javax/inject/Inject.class
2018-08-02 16:20:07,348 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Named scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Named.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-2.5.0-b42.jar!/javax/inject/Named.class
2018-08-02 16:20:07,348 WARN [NiFi Web Server-21] o.e.jetty.annotations.AnnotationParser javax.inject.Provider scanned from multiple locations: jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/work/jetty/nifi-update-attribute-ui-1.8.0-SNAPSHOT.war/webapp/WEB-INF/lib/javax.inject-1.jar!/javax/inject/Provider.class, jar:file:///Users/jwitt/Development/joewitt-nifi.git/nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-S
2018-08-02 16:20:16,335 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2018-08-02 16:20:16,429 WARN [main] o.e.jetty.annotations.AnnotationParser Unknown asm implementation version, assuming version 393216
Todo:
identify source of and cleanup warnings that now show up on application startup
test/what if one just copied the contents of a new lib dir on top of their old lib dir as an upgrade process...will our work dirs get cleaned and restart properly
add docs in the unpack method to explain why we move META-INF/bundled-dependencies to NAR-INF/bundled-dependencies
this is just a technique to work with older/current nar creation approach.
this is done because jetty's code assumes that META-INF is only in a directory path once or else it fails to find some tlds.
but we want to keep META-INF for things like META-INF/MANIFEST.mf and maven bits.
we might want to just move META-INF/bundled-dependencies to bundled-dependencies. The 'NAR-INF' part is not value add since the nar metadata is in META-INF/MANIFEST.mf and not easily moved due to jar/manifest loading code
file a JIRA to change where we write them in the nar plugin to NAR-INF/bundled-dependencies directly
Test secure/non-secure clusters/etc..
big thanks to @mcgilman for finding the needed dep change in nifi-web-ui and identifying why we needed a workaround for how we extract working dir nar deps due to recent jetty change
Would it help if you would load NAR's without unpacking them to disk?
that would not enable us to work around this issue and does not bring the benefits that led to unpacking in the first place
@joewitt Thanks for the PR! When starting up in secure mode using a configuration that works with current master branch, I received some stack traces regarding the initialization of the SSLContext. There appears to be a runtime difference introduced here that affects the loading of providers.
1305 Caused by: java.security.NoSuchAlgorithmException: no such algorithm: JKS for provider BC
1306 at sun.security.jca.GetInstance.getService(GetInstance.java:87)
1307 at sun.security.jca.GetInstance.getInstance(GetInstance.java:206)
1308 at java.security.Security.getImpl(Security.java:698)
1309 at java.security.KeyStore.getInstance(KeyStore.java:896)
1310 ... 21 common frames omitted
|
2025-04-01T06:37:54.426511
| 2019-05-29T01:12:33
|
449552886
|
{
"authors": [
"alopresto",
"joewitt",
"mcgilman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3651",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/3497"
}
|
gharchive/pull-request
|
NIFI-6323 Changed URLs in XML files to use https:// where possible
Thank you for submitting a contribution to Apache NiFi.
Please provide a short description of the PR here:
Description of PR
This PR changes existing URLs (project description, mailing lists, dependency repositories, and schema references) to use the https:// protocol when possible. It also standardizes the location of the Maven 4.0.0 XML schema descriptor.
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[ ] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes.
For code changes:
[x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[ ] Have you written or updated unit tests to verify your changes?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
Will review...
did full clean build w/contrib check. all looks good and nifi itself still seems good. +1 (assuming gilman also is)
Also +1. Successful build with cleaned mvn repo. Verified standalone and clustered functionality. Will merge.
|
2025-04-01T06:37:54.436449
| 2020-04-03T16:49:06
|
593511945
|
{
"authors": [
"BAGELreflex",
"Zhouhao12345",
"fwolfsjaeger",
"mattyb149",
"pvillard31"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3652",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/4179"
}
|
gharchive/pull-request
|
NIFI-7240: Fixed out-of-order Table Map events in CaptureChangeMySQL
Thank you for submitting a contribution to Apache NiFi.
Please provide a short description of the PR here:
Description of PR
When using triggers to update tables in MySQL, the Table Map events may be out of order with the corresponding Write Rows events for those tables. This PR keeps a temporary map of table IDs to cache keys in order to retrieve the correct table information during Write Rows processing.
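A rough sketch of the bookkeeping described (field and method names are illustrative, not the actual CaptureChangeMySQL code): remember which cache key each table ID was last mapped to, so a Write Rows event still resolves the right table info even when the Table Map events arrive out of order:
```java
import java.util.HashMap;
import java.util.Map;

final class TableMapTracker {
    // tableId -> cache key ("db.table"), refreshed on every Table Map event
    private final Map<Long, String> tableIdToCacheKey = new HashMap<>();
    // cache key -> table/column metadata captured earlier
    private final Map<String, Object> tableInfoCache = new HashMap<>();

    void onTableMapEvent(long tableId, String database, String table, Object tableInfo) {
        String key = database + "." + table;
        tableIdToCacheKey.put(tableId, key);
        tableInfoCache.put(key, tableInfo);
    }

    Object infoForWriteRows(long tableId) {
        // Look up via the remembered key instead of assuming the immediately
        // preceding Table Map event described this table.
        String key = tableIdToCacheKey.get(tableId);
        return key == null ? null : tableInfoCache.get(key);
    }
}
```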
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[x] Has your PR been rebased against the latest commit within the target branch (typically master)?
[x] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes.
For code changes:
[ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[x] Have you written or updated unit tests to verify your changes?
[ ] Have you verified that the full build is successful on both JDK 8 and JDK 11?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
I've successfully compiled it on JDK 8 and I can confirm that it does fix the issue, it would be great if this change could make it into the next stable version.
Hey @mattyb149 - can you rebase against the main branch? Happy to get this in. I recently reviewed another PR related to this processor.
@mattyb149 Are you still following this thread?
There are two more issues opened about this now:
https://issues.apache.org/jira/browse/NIFI-6914
https://issues.apache.org/jira/browse/NIFI-7252
This issue is drastically impacting my organization's ability to utilize NiFi for synchronizing changes from an older legacy system into our new one. Any way this PR can be re-opened or reviewed? @mattyb149 @pvillard31
|
2025-04-01T06:37:54.448955
| 2020-05-26T08:40:43
|
624704761
|
{
"authors": [
"adarmiento",
"axdmoraes",
"pvillard31"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3653",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/4298"
}
|
gharchive/pull-request
|
NIFI-7486 Make InvokeHttp authentication properties able to read from variables.
Description of PR
InvokeHTTP Basic HTTP credentials support variable registry
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
For all changes:
[X] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
[X] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
[X] Has your PR been rebased against the latest commit within the target branch (typically master)?
[X] Is your initial contribution a single, squashed commit? Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not squash or use --force when pushing to allow for clean monitoring of changes.
For code changes:
[X] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
[X] Have you written or updated unit tests to verify your changes?
[ ] Have you verified that the full build is successful on JDK 8?
[ ] Have you verified that the full build is successful on JDK 11?
[ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
[ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
[ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
[ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?
For documentation related changes:
[ ] Have you ensured that format looks appropriate for the output in which it is rendered?
Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
Hey @adarmiento - does it really make sense now that we have the parameters concept in NiFi? Besides, variables should not be used for sensitive properties, so I'd definitely not recommend using a variable on the password property.
Hello @pvillard31, I did not think about it, probably because I was still using old 1.9.2 until now.
It could be a bad security choice then (I noticed that the proxy credentials also allow variables; maybe we should update those two properties for consistency).
Thanks for the tips anyway, I'll keep this in mind for the future
I think it's hard to remove variable support after the fact because of backward compatibility concerns (even though I'd be a +1 for removing variable support on sensitive properties). I think that (to be discussed though) when the community will start working around Apache NiFi 2.x, we would possibly remove the variables to only support parameters.
If you think that a change would still make sense, let me know and we can have a look.
In my case, I'm running NiFi under Kubernetes and the usage of variables in sensitive properties is needed to use environment variables. These types of properties are injected into the container during execution and can then be used in NiFi.
If it's not supported, I need to write the user and password in each processor.
Maybe, in version 2.x, NiFi can have different expression language scopes: one for environment variables and another for variables.
@axdmoraes - thanks for the feedback. How is the flow published in NiFi? As part of the Docker image? via a volume? or from a NiFi Registry instance?
Using Nifi registry instance. We use the same registry for different environments.
In that case, would it be an option to use the CLI or REST API to set the parameter values after the flow has been deployed in NiFi from the NiFi Registry? On my side, with my k8s deployments, I'm doing something like the below
# add the Registry client in NiFi (to adapt for your secured NiFi instances)
curl 'http://nifi:8080/nifi-api/controller/registry-clients' -H 'Content-Type: application/json' --data-binary '{"revision":{"version":0},"component":{"name":"NiFi Registry","uri":"http://nifi-registry:18080/nifi-registry"}}'
# Deploy the flow in NiFi (add the logic to retrieve the bucket/flow/version)
/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-import -u http://nifi:8080 --bucketIdentifier $bucketID --flowIdentifier $flow --flowVersion $version
# Get the parameter context ID
paramContextID=`/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi list-param-contexts -u http://nifi:8080 -ot json | grep -v cli.sh | jq -r '.parameterContexts[].id'`
# Set the parameters values (you can do something dynamic based on your needs)
/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi set-param -u http://nifi:8080 --paramContextId $paramContextID --paramName MY_PARAMETER --paramValue MY_VALUE
# Start the controller services (add your logic to retrieve the PG ID)
/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-enable-services -u http://nifi:8080 --processGroupId $pgid
# Start the process group (add your logic to retrieve the PG ID)
/opt/nifi/nifi-toolkit-1.11.4/bin/cli.sh nifi pg-start -u http://nifi:8080 --processGroupId $pgid
Thanks for the suggestion. I will try. We were using version 1.9.2 and I didn't know about parameters context.
|
2025-04-01T06:37:54.455802
| 2022-09-26T15:10:11
|
1386254686
|
{
"authors": [
"bbende",
"mattyb149",
"tamas-horvath"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3654",
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/6448"
}
|
gharchive/pull-request
|
NIFI-10549: Remove group name wildcard from assembly for nifi-ranger-resources
Summary
NIFI-10549
Tracking
Please complete the following tracking steps prior to pull request creation.
Issue Tracking
[x] Apache NiFi Jira issue created
Pull Request Tracking
[x] Pull Request title starts with Apache NiFi Jira issue number, such as NIFI-00000
[x] Pull Request commit message starts with Apache NiFi Jira issue number, as such NIFI-00000
Pull Request Formatting
[x] Pull Request based on current revision of the main branch
[x] Pull Request refers to a feature branch with one commit containing changes
Verification
Please indicate the verification steps performed prior to pull request creation.
Build
[ ] Build completed using mvn clean install -P contrib-check
[ ] JDK 8
[x] JDK 11
[ ] JDK 17
Licensing
[ ] New dependencies are compatible with the Apache License 2.0 according to the License Policy
[ ] New dependencies are documented in applicable LICENSE and NOTICE files
Documentation
[ ] Documentation formatting appears as expected in rendered files
LGTM
Looks good, will merge
|
2025-04-01T06:37:54.464182
| 2024-10-12T14:15:59
|
2583080286
|
{
"authors": [
"Gary-Hobson",
"cederom",
"nuttxpr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3655",
"repo": "apache/nuttx-apps",
"url": "https://github.com/apache/nuttx-apps/pull/2708"
}
|
gharchive/pull-request
|
nshlib: Add startup process buried points
Summary
nshlib: Add startup process buried points
trace: fix trace dump crash
Impact
Testing
sim
[Experimental Bot, please feedback here]
No, this PR does not fully meet the NuttX requirements.
Missing Information:
Summary:
Why are these changes necessary? Bug fixes? Enhancements?
How do the changes work? Provide specifics about the buried points added and the cause of the trace dump crash and how it was fixed.
Issue References: Link any relevant NuttX or NuttX Apps issues.
Impact:
Address all impact categories (user, build, hardware, documentation, security, compatibility). Even if there's no impact, state "NO" explicitly for each.
Testing:
Insufficient Detail: "sim" isn't descriptive enough. Specify the simulator (e.g., qemu-rv32) and the configuration used.
Missing Logs: Provide actual testing logs from before and after the changes.
Recommendations:
Expand the Summary: Explain the "why" and "how" for each change.
Complete the Impact Assessment: Go through each category and provide a clear "YES" or "NO" along with a brief explanation if necessary.
Provide Detailed Testing Information: Specify the exact simulator, configurations, and include the relevant testing logs.
My only remark is that this is a very useful feature that will probably be developed further in the future, but for quite a big change, a better description in the PR and some additional comments in the commit message would help others who want to use it understand the change better :-)
Is the existing API impacted in any way? Will old code work the same way, or does it need an update?
Is a documentation update required/necessary? Maybe it would be good to provide documentation on how to use the new functionality? Newcomers tend to start at the documentation, so share your inventions there too :-)
If the buffer is too small, sure, we can increase the buffer, but are overflow checks also necessary?
|
2025-04-01T06:37:54.467233
| 2022-12-19T22:13:19
|
1503685014
|
{
"authors": [
"PetervdPerk",
"acassis",
"pkarashchenko",
"xiaoxiang781216"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3656",
"repo": "apache/nuttx",
"url": "https://github.com/apache/nuttx/pull/7933"
}
|
gharchive/pull-request
|
LPC17_40 CAN driver SocketCAN enforce TX fifo behaviour
Summary
The SocketCAN driver expects a FIFO behaviour yet LPC17_40 didn't enforce this.
This change enables prioritization in the transmit function to enforce this.
Impact
Fix transmit behaviour
Testing
Tested on a LPC1768
Could you please squash commits?
And now we have documentation that explains how to do it:
https://nuttx.apache.org/docs/latest/contributing/making-changes.html#how-to-include-the-suggestions-on-your-pull-request
Of course, @PetervdPerk is not a new kid on the block; the idea was to help new contributors.
Let's ignore the broken macOS CI.
|
2025-04-01T06:37:54.474620
| 2021-10-08T02:34:43
|
1020618876
|
{
"authors": [
"dongjoon-hyun",
"guiyanakuang",
"stiga-huang"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3657",
"repo": "apache/orc",
"url": "https://github.com/apache/orc/pull/932"
}
|
gharchive/pull-request
|
ORC-1021: Add -fno-omit-frame-pointer in DEBUG and RELWITHDEBINFO builds
What changes were proposed in this pull request?
This PR adds -fno-omit-frame-pointer gcc option in DEBUG and RELWITHDEBINFO builds, which helps to generate stacktrace in debugging and profiling. Refs:
https://www.brendangregg.com/perf.html#StackTraces
https://issues.apache.org/jira/browse/IMPALA-4132
Why are the changes needed?
Described as above.
How was this patch tested?
Built in ubuntu16.04 with gcc 8.4.0.
+1 LGTM
I backported this to branch-1.7.
|
2025-04-01T06:37:54.476622
| 2020-12-04T02:26:45
|
756744437
|
{
"authors": [
"cku328",
"lamber-ken"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3658",
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/1655"
}
|
gharchive/pull-request
|
HDDS-4549. Fix typos in documents
What changes were proposed in this pull request?
Fix typos in documents
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-4549
How was this patch tested?
No need
@cku328 done, thanks
Thanks @lamber-ken for working on this.
I will merge it later.
|
2025-04-01T06:37:54.483420
| 2021-04-09T13:28:11
|
854526178
|
{
"authors": [
"adoroszlai",
"elek",
"mukul1987"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3659",
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/2140"
}
|
gharchive/pull-request
|
HDDS-5084. Include HISTORY.md/SECURITY.md/CONTRIBUTING.md in the release artifacts.
JIRA: https://issues.apache.org/jira/browse/HDDS-5084
What changes were proposed in this pull request?
During the ozone-1.1.0 vote I realized that HISTORY.md/SECURITY.md/CONTRIBUTING.md files are missing from the bin and src artifacts. I think they include very useful information and would be better to include them in the release artifacts.
How was this patch tested?
mvn clean install -Dmaven.javadoc.skip=true -DskipTests -Psign,dist,src -Dtar -Dgpg.keyname=$CODESIGNINGKEY
cd hadoop-ozone/dist/target/
tar tzf hadoop-ozone-1.1.0-SNAPSHOT.tar.gz
tar tzf hadoop-ozone-1.1.0-src-SNAPSHOT.tar.gz
We should update the History.md, it is quite old right now.
@mukul1987 This is a good suggestion, but I disagree with the other statement:
Including it in the current form in the release doesn't make sense.
It is not going to be part of the 1.1.0 release, so there is plenty of time to update it until the next one.
It's not only about HISTORY.md, but the other two docs as well.
Writing prose for the history doc is quite distinct from updating a script to copy some files. It may very well be updated by other people, not necessarily @elek.
So I think this change is fine in its scope.
Fair point @adoroszlai. Can we please create a follow-up jira and mark it as a blocker for the 1.2.0 release? I feel that if we update the history by the next release, then we should be good.
Thanks for the suggestion @mukul1987, very good point.
It seems to be a small update, so I created the patch itself (please see #2149). And agree: as we have PRs for both problems in our radar we can merge the two PRs in any order.
Thanks Marton. +1 for this patch as well.
I have already added +1 to the other patch.
Thanks for updating the file.
Thanks for the review @mukul1987 @ayushtkn and @adoroszlai
I am merging it after the green build.
|
2025-04-01T06:37:54.489825
| 2022-02-20T20:59:17
|
2368057279
|
{
"authors": [
"asfimport"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3660",
"repo": "apache/parquet-java",
"url": "https://github.com/apache/parquet-java/issues/2670"
}
|
gharchive/issue
|
Bump Thrift to 0.16.0
Thrift 0.16.0 has been released https://github.com/apache/thrift/releases/tag/v0.16.0
Reporter: Vinoo Ganesh / @vinooganesh
Assignee: Vinoo Ganesh / @vinooganesh
Related issues:
Release 1.12.3 (is depended upon by)
Note: This issue was originally created as PARQUET-2128. Please see the migration documentation for further details.
Vinoo Ganesh / @vinooganesh:
Fixed in https://github.com/apache/parquet-mr/pull/948
Steve Loughran / @steveloughran:
homebrew doesn't have anything < 0.18.0, which is java11+ only, so not something parquet can switch to.
which means that we have to stop using homebrew here and take control of our build dependencies ourselves. I've already done that with maven and openjdk as brew is too enthusiastic about breaking my workflow.
None of us can rely on homebrew or use "homebrew doesn't have this" as a reason for reverting a change.
All old thrift releases can be found at https://archive.apache.org/dist/thrift/
|
2025-04-01T06:37:54.506635
| 2023-03-19T19:37:53
|
1631089429
|
{
"authors": [
"codecov-commenter",
"jadami10"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3661",
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/10444"
}
|
gharchive/pull-request
|
optimize queries where lhs and rhs of predicate are equal
This is a minor performance bugfix.
this fixes NullPointerExceptions in existing optimizers when performing WHERE 1=1 queries. These would fail because the filter expression had no function call
I noticed that WHERE 1=1 was not simplified, but WHERE col1>0 AND 1=1 was actually being simplified in the NumericalFilterOptimizer. So I put that part in a separate class to be used more generally for future cases like this
it does a little more work than expected once it sees an AND/OR/NOT expression
something else is converting 1=1 to literal TRUE, but I'm not sure where that is
This adds an IdenticalPredicateFilterOptimizer class that converts WHERE 1=1 or WHERE "colA"!="colA" to TRUE/FALSE respectively
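A minimal sketch of the folding idea, using made-up names rather than Pinot's actual filter-optimizer API: if both sides of an EQ/NOT_EQ predicate are literally identical, the predicate can be replaced by a constant before evaluation.
// Illustrative only; not the real Pinot optimizer classes.
final class IdenticalPredicateSketch {
  enum Op { EQ, NOT_EQ }

  // Returns TRUE for "x = x", FALSE for "x != x", and null when no folding applies.
  static Boolean fold(Op op, String lhs, String rhs) {
    if (!lhs.equals(rhs)) {
      return null; // sides differ, leave the predicate as-is
    }
    return op == Op.EQ ? Boolean.TRUE : Boolean.FALSE;
  }

  public static void main(String[] args) {
    System.out.println(fold(Op.EQ, "1", "1"));           // TRUE  -> WHERE 1 = 1
    System.out.println(fold(Op.NOT_EQ, "colA", "colA")); // FALSE -> WHERE colA != colA
    System.out.println(fold(Op.EQ, "colA", "colB"));     // null  -> no folding
  }
}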
I've added a bunch more test cases, and I've tested manually in the Quickstart app. This is my first contribution to the query parsing part of the code base, so I don't have a great sense of what test coverage looks like. But I imagine between unit and integration tests, this should catch any glaring breaks?
Codecov Report
Merging #10444 (c7c578f) into master (d9c4315) will decrease coverage by 50.31%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## master #10444 +/- ##
=============================================
- Coverage 64.21% 13.90% -50.31%
+ Complexity 6089 237 -5852
=============================================
Files 2007 2009 +2
Lines 109281 109337 +56
Branches 16692 16708 +16
=============================================
- Hits 70177 15208 -54969
- Misses 33993 92897 +58904
+ Partials 5111 1232 -3879
Flag
Coverage Δ
unittests1
?
unittests2
13.90% <0.00%> (-0.02%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
.../pinot/controller/recommender/io/InputManager.java
93.22% <ø> (ø)
...che/pinot/core/query/optimizer/QueryOptimizer.java
0.00% <ø> (-100.00%)
:arrow_down:
...imizer/filter/BaseAndOrBooleanFilterOptimizer.java
0.00% <0.00%> (ø)
.../optimizer/filter/FlattenAndOrFilterOptimizer.java
0.00% <0.00%> (-77.78%)
:arrow_down:
...izer/filter/IdenticalPredicateFilterOptimizer.java
0.00% <0.00%> (ø)
...ery/optimizer/filter/MergeEqInFilterOptimizer.java
0.00% <0.00%> (-92.60%)
:arrow_down:
...ery/optimizer/filter/NumericalFilterOptimizer.java
0.00% <0.00%> (-80.90%)
:arrow_down:
... and 1351 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
Looks good in general, great job!
thank you! i see all checks passed. let me know if you have further comments, though
|
2025-04-01T06:37:54.518353
| 2023-08-09T10:54:02
|
1842980175
|
{
"authors": [
"codecov-commenter",
"gortiz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3662",
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/11303"
}
|
gharchive/pull-request
|
separate tags with commas as indicated in action doc
@erichgess found that https://github.com/apache/pinot/pull/10528 broke the codecov coverage.
In fact coverage was uploaded, but tags were incorrectly configured and therefore they are not uploaded with the expected metadata.
As indicated in https://github.com/codecov/codecov-action, different tags should be separated by commas.
Codecov Report
Merging #11303 (7b23f93) into master (6fa4268) will increase coverage by 0.00%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #11303 +/- ##
=========================================
Coverage 0.11% 0.11%
=========================================
Files 2231 2157 -74
Lines 120139 116982 -3157
Branches 18218 17772 -446
=========================================
Hits 137 137
+ Misses 119982 116825 -3157
Partials 20 20
Flag
Coverage Δ
integration1temurin11
?
integration1temurin17
?
integration1temurin20
?
integration2temurin11
?
integration2temurin17
?
integration2temurin20
?
java-20
0.11% <ø> (?)
temurin
0.11% <ø> (?)
unittests1temurin11
?
unittests1temurin17
?
unittests1temurin20
?
unittests2
0.11% <ø> (?)
unittests2temurin11
?
unittests2temurin17
?
unittests2temurin20
?
Flags with carried forward coverage won't be shown. Click here to find out more.
see 76 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:37:54.533875
| 2023-10-01T16:33:52
|
1920854058
|
{
"authors": [
"abhioncbr",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3663",
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/11721"
}
|
gharchive/pull-request
|
Added UTs for null handling in CaseTransform function.
Added unit test cases for null handling in CaseTransformFunction
Also added a test case for the isNullLiteralTransformation function, as asked in the PR
cc: @shenyu0127 @Jackie-Jiang
Codecov Report
Merging #11721 (6a96f1a) into master (ae16812) will decrease coverage by 48.69%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #11721 +/- ##
=============================================
- Coverage 63.11% 14.42% -48.69%
+ Complexity 1117 201 -916
=============================================
Files 2342 2342
Lines 125802 125800 -2
Branches 19336 19336
=============================================
- Hits 79395 18150 -61245
- Misses 40745 106116 +65371
+ Partials 5662 1534 -4128
Flag
Coverage Δ
integration
?
integration1
?
integration2
?
java-11
14.42% <ø> (-48.64%)
:arrow_down:
java-17
?
java-20
?
temurin
14.42% <ø> (-48.69%)
:arrow_down:
unittests
14.42% <ø> (-48.68%)
:arrow_down:
unittests1
?
unittests2
14.42% <ø> (-0.06%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Files
Coverage Δ
...ator/transform/function/CaseTransformFunction.java
0.00% <ø> (-57.98%)
:arrow_down:
... and 1521 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:37:54.566056
| 2024-09-13T00:28:12
|
2523582643
|
{
"authors": [
"Jackie-Jiang",
"codecov-commenter",
"itschrispeck"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3664",
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/13994"
}
|
gharchive/pull-request
|
Enhance optimizeDictionary to optionally optimize var-width type cols
Changes
Add noDictionaryCardinalityRatioThreshold config.
If populated, and optimizeDictionary is true, then Pinot will override dictionary encoding with raw encoding based on the condition cardinality / numDocs > noDictionaryCardinalityRatioThreshold.
If the new config is omitted, optimizeDictionary behavior is unchanged
Motivation
When storing log data, often columns will contain many repeated values. It's useful to take advantage of Pinot's dictionary encoding which usually provides better storage/query performance for these columns. Dictionary encoding high cardinality columns is cost/storage prohibitive, so we'd like to avoid applying dictionary encoding unless it is safe. Since column cardinality/values can change rapidly we'd like to make these decisions within Pinot itself.
In our experience, cardinality is a good indicator of whether to dictionary or raw encode a col. With a 0.10 threshold (10%), we see roughly 40-60% improvement in storage compared to raw encoding everything.
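A rough sketch of the decision described above, with assumed config values and illustrative names (not Pinot's real SegmentGeneratorConfig API): the override reduces to a per-column cardinality-ratio check.
// Illustrative only; threshold and flag names are assumptions for the example.
final class DictionaryOverrideSketch {
  static final boolean OPTIMIZE_DICTIONARY = true;
  static final double NO_DICT_CARDINALITY_RATIO_THRESHOLD = 0.10; // 10%

  // Returns true when the column should fall back to raw encoding.
  static boolean useRawEncoding(int cardinality, int numDocs) {
    if (!OPTIMIZE_DICTIONARY) {
      return false; // keep whatever the table config says
    }
    return (double) cardinality / numDocs > NO_DICT_CARDINALITY_RATIO_THRESHOLD;
  }

  public static void main(String[] args) {
    System.out.println(useRawEncoding(950_000, 1_000_000)); // high-cardinality column -> raw
    System.out.println(useRawEncoding(5_000, 1_000_000));   // repetitive column -> dictionary
  }
}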
Codecov Report
Attention: Patch coverage is 0% with 26 lines in your changes missing coverage. Please review.
Project coverage is 0.00%. Comparing base (59551e4) to head (8c368f9).
Report is 1029 commits behind head on master.
Files with missing lines
Patch %
Lines
.../segment/index/dictionary/DictionaryIndexType.java
0.00%
12 Missing :warning:
...ocal/segment/index/loader/ForwardIndexHandler.java
0.00%
4 Missing :warning:
...ot/segment/spi/creator/SegmentGeneratorConfig.java
0.00%
4 Missing :warning:
...ment/creator/impl/SegmentColumnarIndexCreator.java
0.00%
3 Missing :warning:
.../apache/pinot/spi/config/table/IndexingConfig.java
0.00%
3 Missing :warning:
:exclamation: There is a different number of reports uploaded between BASE (59551e4) and HEAD (8c368f9). Click for more details.
HEAD has 48 uploads less than BASE
Flag
BASE (59551e4)
HEAD (8c368f9)
integration
7
2
integration2
3
2
temurin
12
2
java-21
7
2
skip-bytebuffers-true
3
1
skip-bytebuffers-false
7
1
unittests
5
0
unittests1
2
0
java-11
5
0
unittests2
3
0
integration1
2
0
custom-integration1
2
0
Additional details and impacted files
@@ Coverage Diff @@
## master #13994 +/- ##
=============================================
- Coverage 61.75% 0.00% -61.76%
=============================================
Files 2436 2514 +78
Lines 133233 139046 +5813
Branches 20636 21371 +735
=============================================
- Hits 82274 0 -82274
- Misses 44911 139046 +94135
+ Partials 6048 0 -6048
Flag
Coverage Δ
custom-integration1
?
integration
0.00% <0.00%> (-0.01%)
:arrow_down:
integration1
?
integration2
0.00% <0.00%> (ø)
java-11
?
java-21
0.00% <0.00%> (-61.63%)
:arrow_down:
skip-bytebuffers-false
0.00% <0.00%> (-61.75%)
:arrow_down:
skip-bytebuffers-true
0.00% <0.00%> (-27.73%)
:arrow_down:
temurin
0.00% <0.00%> (-61.76%)
:arrow_down:
unittests
?
unittests1
?
unittests2
?
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
Certain settings are impossible, e.g. only apply cardinality based optimization and skip optimization for fixed-length type. I'm not too worried about this limitation, but it would be good if we make it possible
If I understand the concern correctly, I think users can set noDictionaryCardinalityRatioThreshold = 0 to effectively skip optimization for fixed-length type?
I'd prefer to merge as is, since providing a way to use the cardinality ratio threshold instead of size ratio threshold means making the old size ratio threshold config optional, which is backwards compatible and could be done in the future if the need is found
Basically we specify:
Size based only for fixed length type
Cardinality based only for var-length type
Let's document this behavior so that users don't expect the wrong type to be applied
|
2025-04-01T06:37:54.587725
| 2022-05-27T12:38:58
|
1250721694
|
{
"authors": [
"codecov-commenter",
"gortiz",
"richardstartin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3665",
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/8790"
}
|
gharchive/pull-request
|
simplify segment pruning
Segment pruning feels over-engineered:
DataSchemaSegmentPruner and ValidSegmentPruner are just one liners which can be applied when necessary
ColumnValueSegmentPruner and SelectionQuerySegmentPruner are mutually exclusive so never both need to run, and these cases can be identified easily by examining the QueryContext
No new segment pruners have been added in a very long time
This leads to inefficiencies like
None of the pruners inline
Lots of lists are created unnecessarily
We end up tracing one liner checks
This PR removes the two trivial pruners and applies them inline within the two remaining pruners. It adds a new method to identify based on the query context whether the pruner should run at all.
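A rough sketch of that idea with invented names (not Pinot's actual SegmentPruner interface): each remaining pruner reports up front whether it applies to the query, so the trivial one-liner checks never run as separately traced passes.
// Illustrative only; the real query context carries much more state.
interface SketchPruner {
  boolean isApplicableTo(SketchQueryContext query);
}

final class SketchQueryContext {
  final boolean hasFilter;
  final boolean isSelectionOnly;

  SketchQueryContext(boolean hasFilter, boolean isSelectionOnly) {
    this.hasFilter = hasFilter;
    this.isSelectionOnly = isSelectionOnly;
  }
}

final class SketchColumnValuePruner implements SketchPruner {
  @Override
  public boolean isApplicableTo(SketchQueryContext query) {
    return query.hasFilter; // only worth running when there is a filter to prune on
  }
}

final class SketchSelectionPruner implements SketchPruner {
  @Override
  public boolean isApplicableTo(SketchQueryContext query) {
    return query.isSelectionOnly; // mutually exclusive with the filter-based pruner
  }
}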
Codecov Report
Merging #8790 (853cdc9) into master (c4549e2) will decrease coverage by 48.69%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## master #8790 +/- ##
=============================================
- Coverage 62.86% 14.17% -48.70%
+ Complexity 4601 168 -4433
=============================================
Files 1690 1688 -2
Lines 89212 89211 -1
Branches 13411 13415 +4
=============================================
- Hits 56082 12642 -43440
- Misses 29079 75623 +46544
+ Partials 4051 946 -3105
Flag
Coverage Δ
unittests1
?
unittests2
14.17% <0.00%> (-0.03%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c4549e2...853cdc9. Read the comment docs.
LGTM
|
2025-04-01T06:37:54.591281
| 2018-10-22T05:47:03
|
372400465
|
{
"authors": [
"Wei-1",
"dszeto"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3666",
"repo": "apache/predictionio",
"url": "https://github.com/apache/predictionio/pull/486"
}
|
gharchive/pull-request
|
[PIO-187] Livedoc with Docker Installation Update
After #462, we have Docker support in the repo.
This PR will modify the corresponding updates in Livedoc.
@marevol, please help to check if I made any mistakes. Thanks!
Thanks for writing this up! One small request: we have to declare that any Docker container images published to Docker Hub are not official ASF releases. Those can only be referred to as convenience binaries. We can say that the Docker build files in our Git repo are official though.
For more information please refer to http://www.apache.org/legal/release-policy.html.
@dszeto, I will modify it for sure. thanks!
LGTM. @marevol do you want to take a second look?
Going to merge this now. Thanks @Wei-1 !
@marevol if you see issues please open a separate ticket.
|
2025-04-01T06:37:54.599891
| 2022-02-25T04:08:25
|
1150019402
|
{
"authors": [
"Jason918",
"codelipenghui",
"michaeljmarshall"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3667",
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/pull/14467"
}
|
gharchive/pull-request
|
[Broker] Fix producerFuture not completed in ServerCnx#handleProducer
Motivation
producerFuture should be completed and removed from producers when exception occurs.
Modifications
Add producerFuture.completeExceptionally
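A hedged sketch of the pattern, with illustrative field and method names rather than the exact ServerCnx members: on failure the pending future must be completed exceptionally and removed, otherwise the client-side create-producer call hangs.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class ProducerFutureSketch {
  private final ConcurrentMap<Long, CompletableFuture<String>> producers = new ConcurrentHashMap<>();

  void handleProducer(long producerId) {
    CompletableFuture<String> producerFuture = new CompletableFuture<>();
    producers.put(producerId, producerFuture);
    try {
      // ... producer creation work that may throw ...
      producerFuture.complete("producer-" + producerId);
    } catch (Exception e) {
      // The fix in words: never leave the future pending on failure.
      producerFuture.completeExceptionally(e);
      producers.remove(producerId, producerFuture);
    }
  }
}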
Verifying this change
[ ] Make sure that the change passes the CI checks.
This change is a trivial rework / code cleanup without any test coverage.
Does this pull request potentially affect one of the following parts:
If yes was chosen, please highlight the changes
Dependencies (does it add or upgrade a dependency): (no)
The public API: (no)
The schema: (no)
The default values of configurations: (no)
The wire protocol: (no)
The rest endpoints: (no)
The admin cli options: (no)
Anything that affects deployment: (no)
Documentation
Check the box below and label this PR (if you have committer privilege).
Need to update docs?
[x] no-need-doc
bug fix.
@codelipenghui - it'd be great to include this in 2.10.0 rc 2, if possible.
@michaeljmarshall Yes, I have cherry-picked to branch-2.10
|
2025-04-01T06:37:54.610539
| 2024-07-29T15:10:35
|
2435688227
|
{
"authors": [
"codecov-commenter",
"nodece"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3668",
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/pull/23093"
}
|
gharchive/pull-request
|
[improve][build] Move docker-push profile to submodule
Motivation
The profile refactoring breaks the pulsar release in the #23091.
Modifications
Move docker-push profile to docker/pulsar and docker/pulsar-all modules
Documentation
[ ] doc
[ ] doc-required
[x] doc-not-needed
[ ] doc-complete
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 73.44%. Comparing base (bbc6224) to head (6022c9e).
Report is 479 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #23093 +/- ##
============================================
- Coverage 73.57% 73.44% -0.14%
- Complexity 32624 33524 +900
============================================
Files 1877 1919 +42
Lines 139502 144087 +4585
Branches 15299 15745 +446
============================================
+ Hits 102638 105824 +3186
- Misses 28908 30145 +1237
- Partials 7956 8118 +162
Flag
Coverage Δ
inttests
27.58% <ø> (+2.99%)
:arrow_up:
systests
24.76% <ø> (+0.43%)
:arrow_up:
unittests
72.51% <ø> (-0.34%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
see 516 files with indirect coverage changes
|
2025-04-01T06:37:54.620822
| 2022-07-31T14:09:54
|
1323472602
|
{
"authors": [
"hzh0425",
"mxsm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3669",
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/issues/4746"
}
|
gharchive/issue
|
[Conf] Add controller start config file
Add a configuration file for starting the controller to the configuration file directory, for the case where the controller is deployed independently.
Well, thanks for your attention, could you please submit a PR to add it?
Well, thanks for your attention, could you please submit a PR to add it?
Yes, I will submit a PR for this
|
2025-04-01T06:37:54.623600
| 2020-03-05T15:26:27
|
576334626
|
{
"authors": [
"coveralls",
"zhangjidi2016"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3670",
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/pull/1824"
}
|
gharchive/pull-request
|
[ISSUE #1770]Add a query message trace command in mqadmin.
What is the purpose of the change
Add a query message trace command in mqadmin.
Brief changelog
Add a query message trace command in mqadmin.
Coverage decreased (-0.09%) to 50.829% when pulling 4fca91fa6f3c91ba0b6c8f13f4fdfcfffb2401d7 on zhangjidi2016:add_query_trace_command into 3974677f04815609951c17059d85d3795eb51247 on apache:develop.
The command result in the console. @zongtanghu @duhenglucky, please help to review it, thanks!
|
2025-04-01T06:37:54.625675
| 2020-05-28T01:14:29
|
626140279
|
{
"authors": [
"HaoTianZhao",
"coveralls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3671",
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/pull/2047"
}
|
gharchive/pull-request
|
Fix 2046
Fix: selectOneMessageQueue keeps selecting the last failed broker.
As you say, if lastBrokerName is null, it will select one MessageQueue, say MessageQueue-a. If some exception occurs while producing to MessageQueue-a, it will always select MessageQueue-a from then on.
Coverage increased (+0.2%) to 51.023% when pulling aea101b6e26f05d3677e5842702d42312537d921 on HaoTianZhao:fix-2046 into 8ef01a6c635f6972847c40d5540b1945180d7cbd on apache:master.
|
2025-04-01T06:37:54.636326
| 2023-04-11T16:43:26
|
1662878073
|
{
"authors": [
"HScarb",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3672",
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/pull/6577"
}
|
gharchive/pull-request
|
[ISSUE #6576] Fix pop lmq message
Make sure set the target branch to develop
What is the purpose of the change
fix #6576
Brief changelog
If pop is used to consume an LMQ message, re-encode the message and set the LMQ's POP_CK, topic, queueId and queueOffset
Verifying this change
Follow this checklist to help us incorporate your contribution quickly and easily. Notice, it would be helpful if you could finish the following 5 checklist(the last one is not necessary)before request the community to review your PR.
[x] Make sure there is a Github issue filed for the change (usually before you start working on it). Trivial changes like typos do not require a Github issue. Your pull request should address just this issue, without pulling in other changes - one PR resolves one issue.
[x] Format the pull request title like [ISSUE #123] Fix UnknownException when host config not exist. Each commit in the pull request should have a meaningful subject line and body.
[x] Write a pull request description that is detailed enough to understand what the pull request does, how, and why.
[x] Write necessary unit-test(over 80% coverage) to verify your logic correction, more mock a little better when cross module dependency exist. If the new feature or significant change is committed, please remember to add integration-test in test module.
[x] Run mvn -B clean apache-rat:check findbugs:findbugs checkstyle:checkstyle to make sure basic checks pass. Run mvn clean install -DskipITs to make sure unit-test pass. Run mvn clean test-compile failsafe:integration-test to make sure integration-test pass.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
Codecov Report
Merging #6577 (b4d9cdf) into develop (f44a1c3) will increase coverage by 0.03%.
The diff coverage is 77.77%.
@@ Coverage Diff @@
## develop #6577 +/- ##
=============================================
+ Coverage 43.08% 43.11% +0.03%
- Complexity 8994 8998 +4
=============================================
Files 1107 1107
Lines 78257 78278 +21
Branches 10201 10203 +2
=============================================
+ Hits 33716 33750 +34
+ Misses 40318 40305 -13
Partials 4223 4223
Impacted Files
Coverage Δ
...rocketmq/broker/processor/PopMessageProcessor.java
39.67% <77.77%> (+2.06%)
:arrow_up:
... and 16 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:37:54.644020
| 2024-09-28T17:22:59
|
2554415918
|
{
"authors": [
"Hisoka-X",
"YuriyGavrilov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3673",
"repo": "apache/seatunnel",
"url": "https://github.com/apache/seatunnel/issues/7766"
}
|
gharchive/issue
|
[Feature][Transformer] Supporting fake data generation in transformer for sensitive data masking options
Search before asking
[X] I had searched in the feature and found no similar feature requirement.
Description
Hi all, following the short discussion I am creating this issue.
https://github.com/apache/seatunnel/discussions/7746
So the idea and goal is to source and sink a complete Postgres database (or similar) to another Postgres database, with data masking or fake-data generation for sensitive attributes. Good to know that there are a lot of fake sources available with random generators, but at this moment I don't know whether that works in a transformer or not. Also some good news: there is dynamic compilation available for some completely custom cases.
What do you think?
Usage Scenario
Some maybe will try to use Transformer in case of masking and fake generation.
The real case is to make data synchronization from prod to test environment with some predefined option by user request
[ ] Support fake data generation in transformer for sensitive data masking options in full DB sync case or partial
Related issues
Supporting fake data generation in transformer
Are you willing to submit a PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
How about supporting a join with a dimension table (a fake source is one type of dimension table)?
I think we can extend this requirement to any source.
e.g.:
join with jdbc
transform {
JoinWithSource {
join_on = "source.id = type_bin.item_id"
source = [
Jdbc {
url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8"
driver = "com.mysql.cj.jdbc.Driver"
connection_check_timeout_sec = 100
user = "root"
password = "123456"
query = "select * from type_bin"
}
]
}
}
or join with fake source
transform {
JoinWithSource {
join_on = "source.id = fake.c_int"
source = [
FakeSource {
row.num = 5
schema {
fields {
c_string = string
c_tinyint = tinyint
c_smallint = smallint
c_int = int
c_bigint = bigint
c_float = float
c_double = double
}
}
}
]
}
}
Then we can use the SQL transform to filter the data you want.
Or join with sql transform
env {
parallelism = 10
job.mode = "BATCH"
}
source {
Jdbc {
url = "jdbc:mysql://localhost/test?serverTimezone=GMT%2b8"
driver = "com.mysql.cj.jdbc.Driver"
connection_check_timeout_sec = 100
user = "root"
password = "123456"
table_path = "testdb.table1"
query = "select * from testdb.table1"
split.size = 10000
}
FakeSource {
row.num = 5
schema {
fields {
c_string = string
c_tinyint = tinyint
c_smallint = smallint
c_int = int
c_bigint = bigint
c_float = float
c_double = double
}
}
}
}
transform {
sql {
query = "select * from table1 join table2 on table1.id = table2.id"
}
}
sink {
Console {}
}
|
2025-04-01T06:37:54.687255
| 2022-10-12T06:34:09
|
1405650260
|
{
"authors": [
"Once2012",
"yu199195"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3674",
"repo": "apache/shenyu",
"url": "https://github.com/apache/shenyu/issues/4072"
}
|
gharchive/issue
|
[BUG] Shenyu-Admin > BasicConfig > Plugin list page 2 cannot display
Is there an existing issue for this?
[X] I have searched the existing issues
Current Behavior
I do not know when it started, but the second page of BasicConfig > Plugin cannot be displayed (empty white page), while the first and third pages are OK (page size is 12).
F12 of Chrome shows this:
Expected Behavior
No response
Steps To Reproduce
No response
Environment
ShenYu version(s):2.5.0
Debug logs
No response
Anything else?
No response
And when I open the Request plugin, no page of Shenyu-Admin can be displayed, so I have to close the Request plugin by modifying the database table (plugin.enable => 0).
The Request plugin is on page 2 of the plugin list, maybe that causes the problem?
The stacktrace of Shenyu-Admin is below:
Can you check whether you executed the right SQL for the relevant plugins?
|
2025-04-01T06:37:54.694622
| 2023-09-01T09:11:41
|
1877052634
|
{
"authors": [
"codecov-commenter",
"xuziyang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3675",
"repo": "apache/shenyu",
"url": "https://github.com/apache/shenyu/pull/5107"
}
|
gharchive/pull-request
|
[type:refactor] Put the creation logic of HttpClient in a separate class
The creation logic of HttpClient is too complicated. A better way than putting it in the HttpClientPluginConfiguration is to write it into a separate class.
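A sketch of the refactor direction only, using the JDK's built-in HttpClient as a stand-in for the reactor-netty client the plugin actually builds: creation details move into a dedicated factory class, and the configuration class simply delegates to it.
import java.net.http.HttpClient;
import java.time.Duration;

// Illustrative only; the real factory wires many more properties from HttpClientProperties.
final class HttpClientFactorySketch {
  private final Duration connectTimeout;

  HttpClientFactorySketch(Duration connectTimeout) {
    this.connectTimeout = connectTimeout;
  }

  // All creation logic lives here instead of inside the @Configuration class.
  HttpClient createInstance() {
    return HttpClient.newBuilder()
        .connectTimeout(connectTimeout)
        .followRedirects(HttpClient.Redirect.NORMAL)
        .build();
  }
}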
Codecov Report
Merging #5107 (d9a6391) into master (74cc301) will decrease coverage by 0.16%.
Report is 1 commits behind head on master.
The diff coverage is 48.05%.
:exclamation: Current head d9a6391 differs from pull request most recent head cb4230e. Consider uploading reports for the commit cb4230e to get more accurate results
@@ Coverage Diff @@
## master #5107 +/- ##
============================================
- Coverage 61.81% 61.65% -0.16%
+ Complexity 8497 8482 -15
============================================
Files 1227 1228 +1
Lines 36963 36958 -5
Branches 3514 3511 -3
============================================
- Hits 22849 22787 -62
- Misses 12156 12216 +60
+ Partials 1958 1955 -3
Files Changed
Coverage Δ
...t/starter/plugin/httpclient/HttpClientFactory.java
47.36% <47.36%> (ø)
...ugin/httpclient/HttpClientPluginConfiguration.java
81.25% <100.00%> (+30.67%)
:arrow_up:
... and 33 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:37:54.698685
| 2022-05-24T08:55:52
|
1246221402
|
{
"authors": [
"alexismanin",
"desruisseaux"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3676",
"repo": "apache/sis",
"url": "https://github.com/apache/sis/pull/27"
}
|
gharchive/pull-request
|
Fix/sql temporal
@desruisseaux I'd like your review please (still cannot assign pull request to you).
PR content:
Added tests for date/time conversion from SQL to Java types.
Changed the output API of date conversion from java.sql.Date to java.time.LocalDate
Fixed timezone management for TIME WITH TIMEZONE and TIMESTAMP WITH TIMEZONE SQL types.
Will apply this pull request with one amendment. This pull request changes the TIMESTAMP_WITH_TIMEZONE mapping from java.time.OffsetDateTime to java.time.Instant. I propose to keep the previous OffsetDateTime. Some searches on the internet suggest that this mapping is part of the JDBC 4.2 specification:
JDBC Maintenance Release 4.2
Using Java 8 Date and Time classes in PostgreSQL
Mapping between PostgreSQL and Java date/time types
|
2025-04-01T06:37:54.703917
| 2021-01-11T03:47:23
|
783043086
|
{
"authors": [
"withyanni",
"wu-sheng"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3677",
"repo": "apache/skywalking",
"url": "https://github.com/apache/skywalking/issues/6166"
}
|
gharchive/issue
|
On the log page of the admin UI, the error page field and call stack reported by client-js cannot be viewed
Please answer these questions before submitting your issue.
Why do you submit this issue?
[ ] Question or discussion
[x] Bug
[ ] Requirement
[ ] Feature or performance improvement
Question
What do you want to know?
Bug
Which version of SkyWalking, OS, and JRE?
SkyWalking client js
SkyWalking v8.3.0 for H2/MySQL/TiDB/InfluxDB/ElasticSearch 7
Which company or project?
What happened?
If possible, provide a way to reproduce the error. e.g. demo application, component version.
The browser console of a Vue-based frontend reported an error, but the admin UI does not seem to display the fields and the call stack correctly.
Could more fields be displayed, as well as the call stack?
Requirement or improvement
Please describe your requirements or improvement suggestions.
Please use English on Github.
Please use English on Github.
|
2025-04-01T06:37:54.714236
| 2021-02-05T10:42:04
|
802056734
|
{
"authors": [
"libinglong",
"wu-sheng"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3678",
"repo": "apache/skywalking",
"url": "https://github.com/apache/skywalking/pull/6330"
}
|
gharchive/pull-request
|
add debug info when type not match in instrumentation process
[x] If this is non-trivial feature, paste the links/URLs to the design doc.
[x] Update the documentation to include this new feature.
[x] Tests(including UT, IT, E2E) are added to verify the new feature.
[x] If it's UI related, attach the screenshots below.
[x] If this pull request closes/resolves/fixes an existing issue, replace the issue number. Closes #.
[x] Update the CHANGES log.
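As a purely illustrative sketch of the kind of debug info the title describes (the agent uses its own ILog abstraction, not java.util.logging, and the real check lives in the instrumentation machinery), a type-mismatch check could log like this:
import java.util.logging.Logger;

final class TypeMismatchDebugSketch {
  private static final Logger LOGGER = Logger.getLogger(TypeMismatchDebugSketch.class.getName());

  // Returns the value cast to the expected type, or null with a debug log when the type does not match.
  static <T> T castOrNull(Object value, Class<T> expected) {
    if (expected.isInstance(value)) {
      return expected.cast(value);
    }
    LOGGER.fine(() -> "type mismatch during instrumentation: expected " + expected.getName()
        + " but got " + (value == null ? "null" : value.getClass().getName()));
    return null;
  }
}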
As a principle, we should not add logs just to help with development. The logs are for production-environment debugging only.
As a principle, we should not add logs just to help with development. The logs are for production-environment debugging only.
I still do not think the log is just for the production env. It's worth it as long as it can help people resolve their problems and save their time. :)
As a principle, we should not add logs just to help with development. The logs are for production-environment debugging only.
I still do not think the log is just for the production env. It's worth it as long as it can help people resolve their problems and save their time. :) And I think adding logs here is cheap.
Then a project has to face countless PRs to add logs, because any line of code could carry a potential risk and people could ask to log anything. Then your system breaks.
Internal systems get logs added randomly; that is a bad thing in itself, but there you only face a limited set of developers, so the damage is controllable. In open source, especially a project like SkyWalking, we have 400+ code contributors, and more potentially. We can't afford to argue with everyone about why this log is acceptable and others are not.
The only easy, clear, and affordable principle is: you need this at runtime, or you are facing an actual requirement.
Then a project has to face countless PRs to add logs, because any line of code could carry a potential risk and people could ask to log anything. Then your system breaks.
Internal systems get logs added randomly; that is a bad thing in itself, but there you only face a limited set of developers, so the damage is controllable. In open source, especially a project like SkyWalking, we have 400+ code contributors, and more potentially. We can't afford to argue with everyone about why this log is acceptable and others are not.
The only easy, clear, and affordable principle is: you need this at runtime, or you are facing an actual requirement.
How many of the 400+ code contributors are agent contributors?
I think this log is important because I have felt the need for it. Actually I rarely log anything in my project. I am not a crazy logger.
The only easy, clear, and affordable principle is: you need this at runtime, or you are facing an actual requirement.
This is an acceptable reason and works for me. :)
Anyway, thank you for spending your energy on this PR. :)
How many of the 400+ code contributors are agent contributors?
Over 70% focused on or worked on the agent, AFAIK. The agent side clearly has more plugins than the server side, and they are easier.
I think this log is important because I have felt the need for it. Actually I rarely log anything in my project. I am not a crazy logger.
You may not be, but how should the community answer the question when other people want to add logs and quote this PR? How could I prove this log is more useful than another one? :) It is hard for the community, and hard for new contributors to understand.
I know where I was wrong.
I should submit a more valuable PR and add this log in passing.
Should we add a troubleshooting section to the plugin development doc?
Should we add a troubleshooting section to the plugin development doc?
That depends on how it would be written. It is not easy to provide that kind of documentation. Usually it is a presentation of showcases, but documentation would require it to read more like a book.
|
2025-04-01T06:37:54.718630
| 2023-11-28T11:25:03
|
2014221476
|
{
"authors": [
"HoustonPutman",
"almogtavor",
"idolaman"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3679",
"repo": "apache/solr-operator",
"url": "https://github.com/apache/solr-operator/issues/661"
}
|
gharchive/issue
|
Adding support to dynamic value in values.yaml
First of all this project is amazing!
I encountered an issue when I tried to use this chart as a dependency of my chart.
In my use case I want to be able to give a dynamic value in values.yaml.
For example, I want the configmap name to be dynamic because I want to deploy multiple Solr clusters in the same k8s environment.
my chart values.yaml
global:
configmap: idolaman
solr:
podOptions:
volumes:
- name: my-configmap
source:
configMap:
name: "{{ .Values.global.configmap }}"
...
my Chart.yaml
name: idolaman-solr
dependencies:
- name: solr
....
A solution might be to change the solr chart to evaluate every value it gets using tpl.
Is this common for other Helm charts?
I understand how it could be useful, but it would add a lot of complexity to the already fairly complex helm chart.
@HoustonPutman I also think that it would be nice and actually I think it's quite common. I've found this article that helps with this issue. In this example, the tpl function is used in the Helm chart templates to allow for dynamic referencing of values. Useful for scenarios where you want to deploy multiple instances of an application (like multiple Solr clusters).
This example aligns with this scenario where we want to dynamically set the configmap name for deploying multiple Solr clusters.
To show the need for the feature in general, I'm referring you to a SO question that requests the same.
|
2025-04-01T06:37:54.723700
| 2022-05-04T20:11:40
|
1225879066
|
{
"authors": [
"atarora",
"epugh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3680",
"repo": "apache/solr",
"url": "https://github.com/apache/solr/pull/835"
}
|
gharchive/pull-request
|
SOLR-16179: Updated the documentation as reported in the ticket, along with the examples
https://issues.apache.org/jira/browse/SOLR-16179
Description
terms.fl can be specified in the query multiple times, which is not clear from the documentation.
Solution
Added the respective documentation with examples in JSON and XML
Tests
Reviewed the created doc with asciidoctor and verified the changes.
Checklist
Please review the following and check all that apply:
[x] I have reviewed the guidelines for How to Contribute and my code conforms to the standards described there to the best of my ability.
[x] I have created a Jira issue and added the issue ID to my pull request title.
[x] I have given Solr maintainers access to contribute to my PR branch. (optional but recommended)
[x] I have developed this patch against the main branch.
[x] I have run ./gradlew check.
[ ] I have added tests for my changes.
[x ] I have added documentation for the Reference Guide
The rest of the Terms page ONLY has xml... And since the xml output and the json output are really just the same, I don't think having both formats helps the readability. I could see a case for just using JSON everywhere????
|
2025-04-01T06:37:54.762843
| 2016-01-02T22:02:07
|
124613653
|
{
"authors": [
"AmplabJenkins",
"JoshRosen",
"SparkQA",
"rxin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3681",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/10558"
}
|
gharchive/pull-request
|
[SPARK-10359][PROJECT-INFRA] Use more random number in dev/test-dependencies.sh; fix version switching
This patch aims to fix another potential source of flakiness in the dev/test-dependencies.sh script.
@pwendell's original patch and my version used $(date +%s | tail -c6) to generate a suffix to use when installing temporary Spark versions into the local Maven cache, but this value only changes once per second and thus is highly collision-prone when concurrent builds launch on AMPLab Jenkins. In order to reduce the potential for conflicts, this patch updates the script to call Python's random number generator instead.
I also fixed a bug in how we captured the original project version; the bug was causing the exit handler code to fail.
/cc @rxin
Test build #48589 has started for PR 10558 at commit 8e86e9c.
Test build #48589 has finished for PR 10558 at commit 8e86e9c.
This patch fails build dependency tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48589/
Test FAILed.
Test failure was due to Python 3.
Test build #48591 has started for PR 10558 at commit 77a23bf.
Test build #48591 has finished for PR 10558 at commit 77a23bf.
This patch fails build dependency tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48591/
Test FAILed.
Merged build finished. Test FAILed.
Test build #48594 has started for PR 10558 at commit 0a6b120.
Test build #2298 has started for PR 10558 at commit 0a6b120.
Test build #48594 has finished for PR 10558 at commit 0a6b120.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48594/
Test PASSed.
Test build #2298 has finished for PR 10558 at commit 0a6b120.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
lgtm
Jenkins, retest this please.
Test build #48595 has started for PR 10558 at commit 0a6b120.
Test build #48595 has finished for PR 10558 at commit 0a6b120.
This patch fails build dependency tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48595/
Test FAILed.
Aha! It looks like the code for resetting the version has a problem:
+ build/mvn --force -q versions:set '-DnewVersion=
`OLD_VERSION` isn't being set properly:
OLD_VERSION='
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Could not find artifact org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in central (https://repo1.maven.org/maven2)
2.0.0-SNAPSHOT'
Pushed a fix for the version issue, so I'm going to run this a few more times then will merge if it's passing.
Test build #48596 has started for PR 10558 at commit ae3d7a3.
Test build #48596 has finished for PR 10558 at commit ae3d7a3.
This patch fails MiMa tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48596/
Test FAILed.
Test build #48599 has started for PR 10558 at commit a2d59e5.
Test build #48599 has finished for PR 10558 at commit a2d59e5.
This patch fails from timeout after a configured wait of `250m`.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48599/
Test FAILed.
Merged build finished. Test FAILed.
Jenkins, retest this please.
Test build #48615 has started for PR 10558 at commit a2d59e5.
Test build #48615 has finished for PR 10558 at commit a2d59e5.
This patch fails PySpark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48615/
Test FAILed.
Jenkins, retest this please.
Test build #48640 has started for PR 10558 at commit a2d59e5.
Test build #48640 has finished for PR 10558 at commit a2d59e5.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48640/
Test PASSed.
Merged build finished. Test PASSed.
Merging now.
|
2025-04-01T06:37:54.779321
| 2016-04-30T10:46:27
|
152023669
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"mengxr",
"yanboliang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3682",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/12813"
}
|
gharchive/pull-request
|
[SPARK-15030] [ML] [SparkR] Support formula in spark.kmeans in SparkR
What changes were proposed in this pull request?
RFormula supports empty response variable like ~ x + y.
Support formula in spark.kmeans in SparkR.
Fix some outdated docs for SparkR.
How was this patch tested?
Unit tests.
Test build #57439 has started for PR 12813 at commit f1ba442.
Test build #57439 has finished for PR 12813 at commit f1ba442.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57439/
Test FAILed.
Merged build finished. Test FAILed.
Jenkins, test this please.
Test build #57442 has started for PR 12813 at commit f1ba442.
Test build #57442 has finished for PR 12813 at commit f1ba442.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57442/
Test FAILed.
Test build #57445 has started for PR 12813 at commit 79d1be4.
Test build #57446 has started for PR 12813 at commit 5bdce92.
Test build #57445 has finished for PR 12813 at commit 79d1be4.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57445/
Test PASSed.
Merged build finished. Test PASSed.
Test build #57446 has finished for PR 12813 at commit 5bdce92.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57446/
Test PASSed.
LGTM. Merged into master. Thanks!
|
2025-04-01T06:37:54.784069
| 2017-04-24T23:04:57
|
223975734
|
{
"authors": [
"AmplabJenkins",
"JoshRosen",
"SparkQA",
"rxin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3683",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/17753"
}
|
gharchive/pull-request
|
[SPARK-20453] Bump master branch version to 2.3.0-SNAPSHOT
This patch bumps the master branch version to 2.3.0-SNAPSHOT.
Test build #76122 has started for PR 17753 at commit 983f746.
Test build #76122 has finished for PR 17753 at commit 983f746.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76122/
Test PASSed.
Merging in master.
|
2025-04-01T06:37:54.813818
| 2017-07-04T01:12:19
|
240289166
|
{
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"SparkQA",
"cloud-fan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3684",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/18521"
}
|
gharchive/pull-request
|
[SPARK-19507][SPARK-21296][PYTHON] Avoid per-record type dispatch in schema verification and improve exception message
What changes were proposed in this pull request?
Context
While reviewing https://github.com/apache/spark/pull/17227, I realised here we type-dispatch per record. The PR itself is fine in terms of performance as is but this prints a prefix, "obj" in exception message as below:
from pyspark.sql.types import *
schema = StructType([StructField('s', IntegerType(), nullable=False)])
spark.createDataFrame([["1"]], schema)
...
TypeError: obj.s: IntegerType can not accept object '1' in type <type 'str'>
I suggested to get rid of this but during investigating this, I realised my approach might bring a performance regression as it is a hot path.
For SPARK-19507 and https://github.com/apache/spark/pull/17227 alone, it needs more changes to cleanly get rid of the prefix, so I decided to fix both issues together.
Proposal
This PR tried to
get rid of per-record type dispatch as we do in many code paths in Scala so that it improves the performance (roughly ~25% improvement) - SPARK-21296
This was tested with a simple code spark.createDataFrame(range(1000000), "int"). However, I am quite sure the actual improvement in practice is larger than this, in particular, when the schema is complicated.
improve error message in exception describing field information as prose - SPARK-19507
How was this patch tested?
Manually tested and unit tests were added in python/pyspark/sql/tests.py.
Benchmark - codes: https://gist.github.com/HyukjinKwon/c3397469c56cb26c2d7dd521ed0bc5a3
Error message - codes: https://gist.github.com/HyukjinKwon/b1b2c7f65865444c4a8836435100e398
Before
Benchmark:
Results: https://gist.github.com/HyukjinKwon/4a291dab45542106301a0c1abcdca924
Error message
Results: https://gist.github.com/HyukjinKwon/57b1916395794ce924faa32b14a3fe19
After
Benchmark
Results: https://gist.github.com/HyukjinKwon/21496feecc4a920e50c4e455f836266e
Error message
Results: https://gist.github.com/HyukjinKwon/7a494e4557fe32a652ce1236e504a395
Closes #17227
Test build #79116 has started for PR 18521 at commit d7f6778.
Test build #79116 has finished for PR 18521 at commit d7f6778.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79116/
Test PASSed.
cc @ueshin and @holdenk who were reviewing it and, @dgingrich the author of that PR.
cc @cloud-fan who I believe reviewed my related few PRs before and @davies who I believe is used to this code path.
Test build #79128 has started for PR 18521 at commit 5b80a8b.
Test build #79128 has finished for PR 18521 at commit 5b80a8b.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79128/
Test PASSed.
Merged build finished. Test PASSed.
Test build #79131 has started for PR 18521 at commit 9ee8d03.
Test build #79134 has started for PR 18521 at commit 420b4bf.
Test build #79131 has finished for PR 18521 at commit 9ee8d03.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
class DataTypeVerificationTests(unittest.TestCase):
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79131/
Test PASSed.
Test build #79134 has finished for PR 18521 at commit 420b4bf.
This patch fails PySpark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79134/
Test FAILed.
Merged build finished. Test FAILed.
Test build #79140 has started for PR 18521 at commit 15c575f.
Test build #79141 has started for PR 18521 at commit 826dcfd.
Test build #79140 has finished for PR 18521 at commit 15c575f.
This patch fails PySpark pip packaging tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79140/
Test FAILed.
Test build #79141 has finished for PR 18521 at commit 826dcfd.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79141/
Test PASSed.
@cloud-fan, I believe it is ready for another look.
LGTM, merging to master!
|
2025-04-01T06:37:54.843526
| 2017-09-13T19:55:43
|
257501816
|
{
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"SparkQA",
"felixcheung",
"goldmedal",
"viirya"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3685",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/19223"
}
|
gharchive/pull-request
|
[SPARK-21513][SQL][FOLLOWUP] Allow UDF to_json support converting MapType to json for PySpark and SparkR
What changes were proposed in this pull request?
In the previous work SPARK-21513, we allowed MapType and ArrayType of MapType to be converted to a JSON string, but only for the Scala API. In this follow-up PR, we make Spark SQL support it for PySpark and SparkR, too. We also fix some small bugs and comments from the previous work in this follow-up PR.
For PySpark
>>> data = [(1, {"name": "Alice"})]
>>> df = spark.createDataFrame(data, ("key", "value"))
>>> df.select(to_json(df.value).alias("json")).collect()
[Row(json=u'{"name":"Alice")']
>>> data = [(1, [{"name": "Alice"}, {"name": "Bob"}])]
>>> df = spark.createDataFrame(data, ("key", "value"))
>>> df.select(to_json(df.value).alias("json")).collect()
[Row(json=u'[{"name":"Alice"},{"name":"Bob"}]')]
For SparkR
# Converts a map into a JSON object
df2 <- sql("SELECT map('name', 'Bob')) as people")
df2 <- mutate(df2, people_json = to_json(df2$people))
# Converts an array of maps into a JSON array
df2 <- sql("SELECT array(map('name', 'Bob'), map('name', 'Alice')) as people")
df2 <- mutate(df2, people_json = to_json(df2$people))
How was this patch tested?
Add unit test cases.
cc @viirya @HyukjinKwon
Can one of the admins verify this patch?
ok to test
Test build #81739 has started for PR 19223 at commit 29e7323.
Test build #81739 has finished for PR 19223 at commit 29e7323.
This patch fails some tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81739/
Test FAILed.
Test build #81761 has started for PR 19223 at commit 158140e.
LGTM except for one comment left.
Test build #81766 has started for PR 19223 at commit af8d941.
Test build #81766 has finished for PR 19223 at commit af8d941.
This patch fails due to an unknown error code, -9.
This patch merges cleanly.
This patch adds no public classes.
Test build #81761 has finished for PR 19223 at commit 158140e.
This patch fails due to an unknown error code, -9.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81766/
Test FAILed.
Merged build finished. Test FAILed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81761/
Test FAILed.
retest this please.
Test build #81769 has started for PR 19223 at commit af8d941.
Test build #81769 has finished for PR 19223 at commit af8d941.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81769/
Test PASSed.
@HyukjinKwon @felixcheung @viirya
I have finished those changes per your suggestions for this PR, and it also passed all tests. Please take a look when you are available. Thanks :)
Test build #81780 has started for PR 19223 at commit 8a3a068.
Test build #81780 has finished for PR 19223 at commit 8a3a068.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81780/
Test FAILed.
Test build #81781 has started for PR 19223 at commit 66bc5b7.
Test build #81781 has finished for PR 19223 at commit 66bc5b7.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81781/
Test PASSed.
LGTM
AppVeyor didn't run on this?
D'oh, yes. I wonder why it was not triggered. I manually triggered via my account:
Build started: [SparkR] ALL
Diff: https://github.com/apache/spark/compare/master...spark-test:8981A5F1-E2DC-4015-8266-12E9ADEE189B
@HyukjinKwon Thanks for triggering AppVeyor. In the normal case, will AppVeyor be triggered automatically?
Yes, when there are some changes in:
https://github.com/apache/spark/blob/828fab03567ecc245a65c4d295a677ce0ba26c19/appveyor.yml#L29-L35
It should run the R tests on Windows via AppVeyor.
ok. I got it. Thanks :)
Looks passed fine. Let me merge this one.
Thanks @felixcheung @HyukjinKwon
Merged to master.
Thanks @HyukjinKwon @felixcheung @viirya
|
2025-04-01T06:37:54.860768
| 2017-11-28T15:17:07
|
277421801
|
{
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"SparkQA",
"james64",
"jerryshao",
"jiangxb1987"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3686",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/19834"
}
|
gharchive/pull-request
|
[SPARK-22585][Core] Path in addJar is not url encoded
What changes were proposed in this pull request?
This updates the behavior of the addJar method of the SparkContext class: if a path without any scheme is passed as input, it is used literally, without URL encoding/decoding it.
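A rough illustration of the intended behavior (this is a sketch, not the actual SparkContext code; the helper name is hypothetical):
import java.net.URI

// A path with a scheme is handled as a URI (and may be decoded);
// a bare path is used literally, with no URL encoding/decoding applied.
def resolveJarPath(path: String): String =
  Option(new URI(path).getScheme) match {
    case Some(_) => new URI(path).getPath  // URI form: take the decoded path component
    case None    => path                   // no scheme: keep the raw path as-is
  }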
How was this patch tested?
A unit test is added for this.
Test build #84262 has started for PR 19834 at commit 1fc5db3.
@srowen Let's continue our discussion here.
So I have removed those three commented lines. Is there anything else to do before merge?
Test build #84266 has started for PR 19834 at commit bd667d9.
Test build #84262 has finished for PR 19834 at commit 1fc5db3.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84262/
Test PASSed.
Test build #84266 has finished for PR 19834 at commit bd667d9.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84266/
Test FAILed.
LGTM.
Another LGTM
retest this please
Test build #84295 has started for PR 19834 at commit bd667d9.
Test build #84295 has finished for PR 19834 at commit bd667d9.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84295/
Test FAILed.
retest this please
Test build #84302 has started for PR 19834 at commit bd667d9.
Test build #84302 has finished for PR 19834 at commit bd667d9.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/84302/
Test PASSed.
Merged to master.
Thanks!
|
2025-04-01T06:37:54.868994
| 2018-08-07T08:04:16
|
348211126
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"kiszk",
"srowen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3687",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/22020"
}
|
gharchive/pull-request
|
[SPARK-25041][build] upgrade genJavaDoc-plugin from 0.10 to 0.11
What changes were proposed in this pull request?
This PR fixes a build error with sbt using Scala 2.12. Since [genJavaDoc-plugin](https://mvnrepository.com/artifact/com.typesafe.genjavadoc/genjavadoc-plugin) 0.10 is not prepared for Scala 2.12.6, a more recent version of genJavaDoc-plugin is necessary.
Version 0.11 of genJavaDoc-plugin is also prepared for Scala 2.11.12.
genJavaDoc-0.10
genJavaDoc-0.11
How was this patch tested?
Manually tested for Scala-2.12.
cc @ueshin @HyukjinKwon @srowen
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/1899/
Test PASSed.
Test build #94356 has started for PR 22020 at commit 1b41ce4.
Test build #94356 has finished for PR 22020 at commit 1b41ce4.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/94356/
Test PASSed.
Merged to master
|
2025-04-01T06:37:54.916588
| 2018-12-30T14:37:37
|
394876854
|
{
"authors": [
"AmplabJenkins",
"Hellsen83",
"HyukjinKwon",
"SparkQA",
"chanansh",
"srowen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3688",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/23414"
}
|
gharchive/pull-request
|
[SPARK-26449][PYTHON] add a transform method to the Dataframe class
What changes were proposed in this pull request?
Added a transform method to the DataFrame class; see https://issues.apache.org/jira/browse/SPARK-26449
How was this patch tested?
Tested manually by injecting the proposed method into the DataFrame class of the current Spark version.
I've tried to compile Spark from scratch and test using ./build/mvn test. However, unrelated tests fail before my change.
Please review http://spark.apache.org/contributing.html before opening a pull request.
Can one of the admins verify this patch?
Can one of the admins verify this patch?
Can one of the admins verify this patch?
ok to test
Test build #100560 has started for PR 23414 at commit def5b2c.
Test build #100560 has finished for PR 23414 at commit def5b2c.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100560/
Test FAILed.
Test build #100562 has started for PR 23414 at commit def5b2c.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6494/
Test PASSed.
Test build #100562 has finished for PR 23414 at commit def5b2c.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100562/
Test FAILed.
Merged build finished. Test FAILed.
Test build #100592 has finished for PR 23414 at commit b370363.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100592/
Test FAILed.
@HyukjinKwon I get the following errors:
[error] running /home/jenkins/workspace/SparkPullRequestBuilder@2/dev/lint-python ; received return code 1
Attempting to post to Github...
> Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100592/
Test FAILed.
Finished: FAILURE
Can you please help?
Looks like it failed for the reasons below.
pycodestyle checks failed:
./python/pyspark/sql/dataframe.py:2048:1: W293 blank line contains whitespace
./python/pyspark/sql/dataframe.py:2064:1: W293 blank line contains whitespace
Added a doctest and removed more blank lines containing whitespace. Please re-test.
Test build #100594 has started for PR 23414 at commit f5aaa1a.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6523/
Test PASSed.
Test build #100594 has finished for PR 23414 at commit f5aaa1a.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100594/
Test FAILed.
removed *args **kwargs (albeit I think they're useful). Please re-test
Test build #100595 has started for PR 23414 at commit 0b1f562.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6524/
Test FAILed.
Test build #100595 has finished for PR 23414 at commit 0b1f562.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100595/
Test FAILed.
@HyukjinKwon I am sorry for being a newbie, but I don't understand the reason for the failure:
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/spark.git +refs/pull/23414/*:refs/remotes/origin/pr/23414/*" returned status code 128:
stdout:
stderr: error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6525/
Test PASSed.
@HyukjinKwon what do you mean when you say the Scala impl has this? I'm missing it.
I don't see the value in this. From the blog post at https://medium.com/@mrpowers/chaining-custom-pyspark-transformations-4f38a8c7ae55 why is ...
actual_df = (source_df
.transform(lambda df: with_greeting(df))
.transform(lambda df: with_something(df, "crazy")))
better than just
actual_df = with_greeting(source_df)
actual_df = with_something(actual_df, "crazy")
The idea is to be able to chain functions easily when you have 10 stages; no need to keep temporary variables.
You can also...
actual_df = source_df
for f in [...]:
actual_df = f(actual_df)
Unless I'm really missing something this doesn't exist for Scala (?) and I can't see adding an API method for this. The small additional maintenance and user cognitive load just doesn't seem to buy much at all.
@srowen the motivation is from this blogpost https://medium.com/@mrpowers/chaining-custom-pyspark-transformations-4f38a8c7ae55
I was referring:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L2497
If it were a new API, I wouldn't encourage adding it, but it already exists. I think we should rather deprecate the Scala-side one if we don't see value in it. Otherwise, I think matching it is fine.
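For reference, a minimal sketch of how the existing Scala-side Dataset.transform is typically used (the helper functions, column names, and values here are illustrative, not from this PR):
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.lit

val spark = SparkSession.builder().master("local[*]").appName("transform-demo").getOrCreate()
import spark.implicits._

// Illustrative helper transformations that each return a new DataFrame
def withGreeting(df: DataFrame): DataFrame = df.withColumn("greeting", lit("hi"))
def withSomething(df: DataFrame, value: String): DataFrame = df.withColumn("something", lit(value))

val sourceDf = Seq(1, 2, 3).toDF("id")

// transform lets the calls chain instead of rebinding intermediate variables
val resultDf = sourceDf
  .transform(withGreeting)
  .transform(df => withSomething(df, "crazy"))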
Oh hm I had never seen that! Yah seems fine for consistency then.
@chanansh, also please fix the PR title to [SPARK-26449][PYTHON] ... so that it automatically links your PR to the JIRA.
Test build #100610 has started for PR 23414 at commit 9919e28.
Test build #100610 has finished for PR 23414 at commit 9919e28.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100610/
Test FAILed.
Test build #100611 has started for PR 23414 at commit e54d2f7.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6535/
Test PASSed.
Test build #100611 has finished for PR 23414 at commit e54d2f7.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100611/
Test FAILed.
Test build #100612 has started for PR 23414 at commit 3d9a751.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/6536/
Test PASSed.
Test build #100612 has finished for PR 23414 at commit 3d9a751.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/100612/
Test PASSed.
Looks fine except https://github.com/apache/spark/pull/23414/files#r244654162
Closing this due to author's inactivity.
sorry, please reopen I will do it.
HS
On Mon, Feb 11, 2019 at 12:10 PM Hyukjin Kwon<EMAIL_ADDRESS>wrote:
Closed #23414 https://github.com/apache/spark/pull/23414.
Just push more commits; I think that reopens it.
is this one still open? I would want to PR basically the same thing.
You can pick up the commits and create a new PR. Looks like the author is inactive.
|
2025-04-01T06:37:54.926246
| 2019-03-02T21:25:33
|
416443895
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"srowen",
"steveloughran"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3689",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/23938"
}
|
gharchive/pull-request
|
[MINOR][DOCS] Clarify that Spark apps should mark Spark as a 'provided' dependency, not package it
What changes were proposed in this pull request?
Spark apps do not need to package Spark. In fact it can cause problems in some cases. Our examples should show depending on Spark as a 'provided' dependency.
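For instance, a minimal build.sbt fragment along these lines (the Spark and Scala versions here are only illustrative):
scalaVersion := "2.11.12"

// Mark Spark as "provided": spark-submit supplies it at runtime,
// so it should not be packaged into the application jar.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.4.0" % "provided"
)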
How was this patch tested?
Doc build
Test build #102943 has started for PR 23938 at commit f8fcc52.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/8412/
Test PASSed.
Test build #102943 has finished for PR 23938 at commit f8fcc52.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/102943/
Test PASSed.
@dongjoon-hyun yeah, that one I wasn't sure about as it's some support code that sounded like it was meant to be bundled in an app. @steveloughran is that correct -- hadoop-cloud should be a compile scope dependency, not provided by the cluster?
you should compile with hadoop-cloud and add those JARs it pulls in to the spark tarball placed on the shared cluster FS for YARN to pick up. Don't know about other deployment engines I'm afraid. The build also adds it to the SPARK_HOME/lib, which gives it to you for spark-standalone during spark submit, either for anything related to JAR upload, or for any store which implements delegation tokens (HADOOP-14456, HADOOP-16068, etc), so it collects the tokens for all stores listed in spark.yarn.hadoopFilesystems.
@steveloughran to be clear do you compile your app, or Spark, with this dependency? it sounds like "Spark" not the app. If so I'll update this further.
sorry, yeah, spark.
Even if the Spark team doesn't redistribute those JARs, it'd be really useful if the release process published the POM. That way, if you want your build to pick up the exact set of dependencies which are in sync with Spark, excluding all the stuff which will cause grief, you'd just add it as a dependency.
Ah OK, on further review @steveloughran, the docs here are saying to include the dependency in your app, which would be the right thing if it is not bundled by Spark, and that's the current state of things for a default cluster. I think that much of the doc is then OK, and shouldn't change to mention provided.
Merged to master/2.4/2.3
|
2025-04-01T06:37:54.954889
| 2019-08-08T08:14:10
|
478317513
|
{
"authors": [
"AmplabJenkins",
"MaxGekk",
"SparkQA",
"cloud-fan",
"dongjoon-hyun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3690",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/25388"
}
|
gharchive/pull-request
|
[SPARK-28656][SQL] Support millennium, century and decade at extract()
What changes were proposed in this pull request?
In the PR, I propose new expressions Millennium, Century and Decade, and support additional parameters of extract() for feature parity with PostgreSQL (https://www.postgresql.org/docs/11/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT):
millennium - the current millennium for a given date (or a timestamp implicitly cast to a date). For example, years in the 1900s are in the second millennium. The third millennium started January 1, 2001.
century - the current century for a given date (or timestamp). The first century starts at 0001-01-01 AD.
decade - the current decade for a given date (or timestamp). Actually, this is the year field divided by 10.
Here are examples:
spark-sql> SELECT EXTRACT(MILLENNIUM FROM DATE '1981-01-19');
2
spark-sql> SELECT EXTRACT(CENTURY FROM DATE '1981-01-19');
20
spark-sql> SELECT EXTRACT(DECADE FROM DATE '1981-01-19');
198
Also the expressions are registered as functions - millennium, century and decade. For example:
spark-sql> SELECT MILLENNIUM('2019-08-08');
3
spark-sql> SELECT CENTURY('2019-08-08');
21
spark-sql> SELECT DECADE('2019-08-08');
201
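The arithmetic behind these fields can be sketched in plain Scala, consistent with the examples above (the helper names are hypothetical, not Spark APIs):
// year is the extracted calendar year (AD) of the date/timestamp
def millennium(year: Int): Int = (year + 999) / 1000  // 1981 -> 2, 2019 -> 3
def century(year: Int): Int = (year + 99) / 100       // 1981 -> 20, 2019 -> 21
def decade(year: Int): Int = year / 10                // 1981 -> 198, 2019 -> 201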
How was this patch tested?
Added new tests to DateExpressionsSuite, DateFunctionsSuite, and uncommented existing tests in pgSQL/date.sql.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13887/
Test PASSed.
Test build #108806 has started for PR 25388 at commit 6755bce.
Can one of the admins verify this patch?
Test build #108806 has finished for PR 25388 at commit 6755bce.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108806/
Test FAILed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13906/
Test PASSed.
Test build #108829 has started for PR 25388 at commit 381f214.
Test build #108829 has finished for PR 25388 at commit 381f214.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108829/
Test PASSed.
Hi, @MaxGekk .
Supporting Extract seems to be enough for PostgreSQL feature parity.
Supporting the following is easy, but I'm not sure about it. In general, I'd recommend not registering these as functions. PMC members may have different opinions.
spark-sql> SELECT MILLENNIUM('2019-08-08');
3
spark-sql> SELECT CENTURY('2019-08-08');
21
spark-sql> SELECT DECADE('2019-08-08');
201
What do you think about (2), which adds these new functions, @gatorsmile and @cloud-fan?
Let's not add builtin functions that only exist in Spark.
Thank you for the decision, @cloud-fan !
Test build #108867 has started for PR 25388 at commit 9d9a0ad.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13940/
Test PASSed.
Test build #108867 has finished for PR 25388 at commit 9d9a0ad.
This patch fails due to an unknown error code, -9.
This patch merges cleanly.
This patch adds the following public classes (experimental):
sealed trait RewritableTransform extends Transform
case class ArrayForAll(
case class DescribeTable(table: NamedRelation, isExtended: Boolean) extends Command
trait V2CreateTablePlan extends LogicalPlan
case class DescribeColumnStatement(
case class DescribeTableStatement(
case class InsertAdaptiveSparkPlan(
case class DescribeTableExec(table: Table, isExtended: Boolean) extends LeafExecNode
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108867/
Test FAILed.
jenkins, retest this, please
Test build #108868 has started for PR 25388 at commit 9d9a0ad.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/13941/
Test PASSed.
Test build #108868 has finished for PR 25388 at commit 9d9a0ad.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
sealed trait RewritableTransform extends Transform
case class ArrayForAll(
case class DescribeTable(table: NamedRelation, isExtended: Boolean) extends Command
trait V2CreateTablePlan extends LogicalPlan
case class DescribeColumnStatement(
case class DescribeTableStatement(
case class InsertAdaptiveSparkPlan(
case class DescribeTableExec(table: Table, isExtended: Boolean) extends LeafExecNode
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/108868/
Test PASSed.
|
2025-04-01T06:37:54.966386
| 2020-08-08T11:22:53
|
675510094
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"dongjoon-hyun",
"maropu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3691",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/29390"
}
|
gharchive/pull-request
|
[SPARK-32564][SQL][TEST][3.0] Inject data statistics to simulate plan generation on actual TPCDS data
What changes were proposed in this pull request?
TPCDSQuerySuite currently computes plans with empty TPCDS tables, then checks if plans can be generated correctly. But, the generated plans can be different from actual ones because the input tables are empty (e.g., the plans always use broadcast-hash joins, but actual ones use sort-merge joins for larger tables). To mitigate the issue, this PR defines data statistics constants extracted from generated TPCDS data in TPCDSTableStats, then injects the statistics via spark.sessionState.catalog.alterTableStats when defining TPCDS tables in TPCDSQuerySuite.
Please see a link below about how to extract the table statistics:
https://gist.github.com/maropu/f553d32c323ee803d39e2f7fa0b5a8c3
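As a rough sketch (assuming a SparkSession named spark; the table name and numbers are illustrative), injecting the statistics can look like this:
import org.apache.spark.sql.catalyst.TableIdentifier
import org.apache.spark.sql.catalyst.catalog.CatalogStatistics

// Statistics constants extracted from generated TPCDS data (values illustrative)
val stats = CatalogStatistics(
  sizeInBytes = BigInt(96252717432L),
  rowCount = Some(BigInt(719384048L)))

// Overwrite the empty test table's statistics so the planner sees realistic sizes
spark.sessionState.catalog.alterTableStats(TableIdentifier("catalog_sales"), Some(stats))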
For example, the generated plans of TPCDS q2 are different with/without this fix:
==== w/ this fix: q2 ====
== Physical Plan ==
* Sort (43)
+- Exchange (42)
+- * Project (41)
+- * SortMergeJoin Inner (40)
:- * Sort (28)
: +- Exchange (27)
: +- * Project (26)
: +- * BroadcastHashJoin Inner BuildRight (25)
: :- * HashAggregate (19)
: : +- Exchange (18)
: : +- * HashAggregate (17)
: : +- * Project (16)
: : +- * BroadcastHashJoin Inner BuildRight (15)
: : :- Union (9)
: : : :- * Project (4)
: : : : +- * Filter (3)
: : : : +- * ColumnarToRow (2)
: : : : +- Scan parquet default.web_sales (1)
: : : +- * Project (8)
: : : +- * Filter (7)
: : : +- * ColumnarToRow (6)
: : : +- Scan parquet default.catalog_sales (5)
: : +- BroadcastExchange (14)
: : +- * Project (13)
: : +- * Filter (12)
: : +- * ColumnarToRow (11)
: : +- Scan parquet default.date_dim (10)
: +- BroadcastExchange (24)
: +- * Project (23)
: +- * Filter (22)
: +- * ColumnarToRow (21)
: +- Scan parquet default.date_dim (20)
+- * Sort (39)
+- Exchange (38)
+- * Project (37)
+- * BroadcastHashJoin Inner BuildRight (36)
:- * HashAggregate (30)
: +- ReusedExchange (29)
+- BroadcastExchange (35)
+- * Project (34)
+- * Filter (33)
+- * ColumnarToRow (32)
+- Scan parquet default.date_dim (31)
==== w/o this fix: q2 ====
== Physical Plan ==
* Sort (40)
+- Exchange (39)
+- * Project (38)
+- * BroadcastHashJoin Inner BuildRight (37)
:- * Project (26)
: +- * BroadcastHashJoin Inner BuildRight (25)
: :- * HashAggregate (19)
: : +- Exchange (18)
: : +- * HashAggregate (17)
: : +- * Project (16)
: : +- * BroadcastHashJoin Inner BuildRight (15)
: : :- Union (9)
: : : :- * Project (4)
: : : : +- * Filter (3)
: : : : +- * ColumnarToRow (2)
: : : : +- Scan parquet default.web_sales (1)
: : : +- * Project (8)
: : : +- * Filter (7)
: : : +- * ColumnarToRow (6)
: : : +- Scan parquet default.catalog_sales (5)
: : +- BroadcastExchange (14)
: : +- * Project (13)
: : +- * Filter (12)
: : +- * ColumnarToRow (11)
: : +- Scan parquet default.date_dim (10)
: +- BroadcastExchange (24)
: +- * Project (23)
: +- * Filter (22)
: +- * ColumnarToRow (21)
: +- Scan parquet default.date_dim (20)
+- BroadcastExchange (36)
+- * Project (35)
+- * BroadcastHashJoin Inner BuildRight (34)
:- * HashAggregate (28)
: +- ReusedExchange (27)
+- BroadcastExchange (33)
+- * Project (32)
+- * Filter (31)
+- * ColumnarToRow (30)
+- Scan parquet default.date_dim (29)
This comes from the @cloud-fan comment: https://github.com/apache/spark/pull/29270#issuecomment-666098964
This is the backport of #29384.
Why are the changes needed?
For better test coverage.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Existing tests.
Test build #127221 has started for PR 29390 at commit 750a632.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/31842/
Test PASSed.
Test build #127221 has finished for PR 29390 at commit 750a632.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/127221/
Test PASSed.
Thanks, @maropu . Merged to branch-3.0.
Thanks a lot, @dongjoon-hyun !
|
2025-04-01T06:37:54.973385
| 2022-05-12T02:58:12
|
1233381140
|
{
"authors": [
"LuciferYang",
"huaxingao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3692",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/36515"
}
|
gharchive/pull-request
|
[SPARK-39156][SQL] Clean up the usage of ParquetLogRedirector in ParquetFileFormat.
What changes were proposed in this pull request?
SPARK-17993 introduced ParquetLogRedirector for Parquet versions < 1.9. PARQUET-305 changed Parquet 1.9 to use slf4j instead of jul. Spark now uses Parquet 1.12.2 and no longer relies on Parquet 1.6, so ParquetLogRedirector is no longer needed; this PR cleans up its usage in ParquetFileFormat.
Why are the changes needed?
Clean up the usage of ParquetLogRedirector in ParquetFileFormat.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Pass GA
Manual test:
Build the Spark client manually before and after this PR
Change the Parquet log4j level to debug:
logger.parquet1.name = org.apache.parquet
logger.parquet1.level = debug
logger.parquet2.name = parquet
logger.parquet2.level = debug
Try to read a Parquet file written with 1.6, for example sql/core/src/test/resources/test-data/dec-in-i32.parquet.
java -jar parquet-tools-1.10.1.jar meta /${basedir}/dec-in-i32.parquet
file: file:/${basedir}/dec-in-i32.parquet
creator: parquet-mr version 1.6.0
extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"i32_dec","type":"decimal(5,2)","nullable":true,"metadata":{}}]}
file schema: spark_schema
--------------------------------------------------------------------------------
i32_dec: OPTIONAL INT32 O:DECIMAL R:0 D:1
row group 1: RC:16 TS:102 OFFSET:4
--------------------------------------------------------------------------------
i32_dec: INT32 GZIP DO:0 FPO:4 SZ:131/102/0.78 VC:16 ENC:RLE,PLAIN_DICTIONARY,BIT_PACKED ST:[no stats for this column]
spark.read.parquet("file://${basedir}/ptable/dec-in-i32.parquet").show()
The log contents before and after this PR are consistent, and the error log mentioned in SPARK-17993 does not appear.
Looks OK. Could you cross link the fix (JIRA) from Parquet side?
Should be PARQUET-305 in Parquet 1.9
hmm... @sunchao any other changes needed?
Thanks! Merged to master.
thanks @huaxingao @sunchao
|
2025-04-01T06:37:54.977513
| 2022-05-26T11:51:13
|
1249472516
|
{
"authors": [
"peter-toth"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3693",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/36687"
}
|
gharchive/pull-request
|
[SPARK-36681][CORE][TESTS][FOLLOW-UP] Handle LinkageError when Snappy native library is not available in low Hadoop versions
What changes were proposed in this pull request?
This is a follow-up to https://github.com/apache/spark/pull/36136 to fix LinkageError handling in FileSuite, to avoid a test suite abort when the Snappy native library is not available in older Hadoop versions:
23:16:22 FileSuite:
23:16:22 org.apache.spark.FileSuite *** ABORTED ***
23:16:22 java.lang.RuntimeException: Unable to load a Suite class that was discovered in the runpath: org.apache.spark.FileSuite
23:16:22 at org.scalatest.tools.DiscoverySuite$.getSuiteInstance(DiscoverySuite.scala:81)
23:16:22 at org.scalatest.tools.DiscoverySuite.$anonfun$nestedSuites$1(DiscoverySuite.scala:38)
23:16:22 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
23:16:22 at scala.collection.Iterator.foreach(Iterator.scala:941)
23:16:22 at scala.collection.Iterator.foreach$(Iterator.scala:941)
23:16:22 at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
23:16:22 at scala.collection.IterableLike.foreach(IterableLike.scala:74)
23:16:22 at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
23:16:22 at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
23:16:22 at scala.collection.TraversableLike.map(TraversableLike.scala:238)
23:16:22 ...
23:16:22 Cause: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
23:16:22 at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
23:16:22 at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
23:16:22 at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
23:16:22 at org.apache.spark.FileSuite.$anonfun$new$12(FileSuite.scala:145)
23:16:22 at scala.util.Try$.apply(Try.scala:213)
23:16:22 at org.apache.spark.FileSuite.<init>(FileSuite.scala:141)
23:16:22 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
23:16:22 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
23:16:22 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
23:16:22 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
Scala's Try can handle only NonFatal throwables.
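As a minimal illustration (the helper is hypothetical, not the actual FileSuite code):
import scala.util.control.NonFatal

// scala.util.Try wraps only NonFatal throwables, so a LinkageError such as the
// UnsatisfiedLinkError above escapes it and aborts the suite. Catching it
// explicitly turns "native codec missing" into a plain boolean instead.
def codecAvailable(check: () => Unit): Boolean =
  try { check(); true }
  catch {
    case _: LinkageError => false  // e.g. UnsatisfiedLinkError from the native loader
    case NonFatal(_)     => false
  }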
Why are the changes needed?
To make the tests robust.
Does this PR introduce any user-facing change?
Nope, this is test-only.
How was this patch tested?
Manual test.
cc @HyukjinKwon, @viirya, @dongjoon-hyun
Thanks all for the review.
|
2025-04-01T06:37:54.980725
| 2022-07-12T23:46:31
|
1302729165
|
{
"authors": [
"HeartSaVioR",
"viirya"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3694",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/37167"
}
|
gharchive/pull-request
|
[SPARK-39748][SQL][FOLLOWUP] Add missing origin logical plan on DataFrame.checkpoint on building LogicalRDD
What changes were proposed in this pull request?
This PR adds the missing origin logical plan when building LogicalRDD in DataFrame.checkpoint, per review comment https://github.com/apache/spark/pull/37161#discussion_r919204026.
Why are the changes needed?
This is a missing spot from the previous PR, which @viirya helped to find.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
N/A
cc. @viirya
lgtm
Thanks! Merging to master.
|
2025-04-01T06:37:54.984338
| 2022-12-30T13:52:41
|
1514539132
|
{
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"mattshma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3695",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/39315"
}
|
gharchive/pull-request
|
[SPARK-41790][SQL] Set TRANSFORM reader and writer's format correctly
What changes were proposed in this pull request?
We get wrong data when TRANSFORM specifies only the reader's or writer's ROW FORMAT DELIMITED; the reason is that the wrong format is currently used to feed/fetch data to/from the running script. We should set the format correctly.
Currently in Spark:
spark-sql> CREATE TABLE t1 (a string, b string);
spark-sql> INSERT OVERWRITE t1 VALUES("1", "2"), ("3", "4");
spark-sql> SELECT TRANSFORM(a, b)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> USING 'cat'
> AS (c)
> FROM t1;
c
spark-sql> SELECT TRANSFORM(a, b)
> USING 'cat'
> AS (c)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> FROM t1;
c
1 23 4
The same sql in hive:
hive> SELECT TRANSFORM(a, b)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> USING 'cat'
> AS (c)
> FROM t1;
c
1,2
3,4
hive> SELECT TRANSFORM(a, b)
> USING 'cat'
> AS (c)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> FROM t1;
c
1 2
3 4
Why are the changes needed?
Fix transform writer format and reader format.
Does this PR introduce any user-facing change?
Previously, when we set TRANSFORM's ROW FORMAT DELIMITED in the SQL, we could get wrong data.
How was this patch tested?
New tests.
Can one of the admins verify this patch?
cc @AngersZhuuuu
Merged to master.
|
2025-04-01T06:37:54.986889
| 2023-02-16T00:11:28
|
1586786247
|
{
"authors": [
"HyukjinKwon",
"dongjoon-hyun",
"rithwik-db"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3696",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/40045"
}
|
gharchive/pull-request
|
[SPARK-41591][PYTHON][FOLLOW-UP] Remove gRPC version check for Distributor
What changes were proposed in this pull request?
Removing a redundant check for whether GPUs exist on the driver node.
Why are the changes needed?
For slightly cleaner code. We could close this PR if we don't need to merge it in.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
As long as normal tests work, we don't expect any other failures.
Yeah, let's close. I don't think this is an issue.
Thank you, @HyukjinKwon and @rithwik-db .
|
2025-04-01T06:37:54.990816
| 2023-03-19T19:40:44
|
1631090329
|
{
"authors": [
"aokolnychyi",
"cloud-fan",
"dongjoon-hyun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3697",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/40478"
}
|
gharchive/pull-request
|
[SPARK-42779][SQL][FOLLOWUP] Allow V2 writes to indicate advisory shuffle partition size
What changes were proposed in this pull request?
This PR addresses non-blocking comments for PR #40421.
Why are the changes needed?
These changes are needed to make sure the new logic only applies in expected cases.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Existing tests.
cc @cloud-fan @dongjoon-hyun
thanks, merging to master!
Thank you, @cloud-fan !
Thanks, @dongjoon-hyun @cloud-fan!
|
2025-04-01T06:37:54.993608
| 2023-04-21T08:47:47
|
1678140768
|
{
"authors": [
"wangyum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3698",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/40897"
}
|
gharchive/pull-request
|
[SPARK-43228][SQL] Join keys also match PartitioningCollection in CoalesceBucketsInJoin
What changes were proposed in this pull request?
This PR updates CoalesceBucketsInJoin.satisfiesOutputPartitioning to support matching PartitioningCollection. A common case is that we add an alias on the join key. For example:
SELECT *
FROM (SELECT /*+ BROADCAST(t3) */ t1.i AS t1i, t1.j AS t1j, t3.*
FROM t1 JOIN t3 ON t1.i = t3.i AND t1.j = t3.j) t
JOIN t2 ON t.t1i = t2.i AND t.t1j = t2.j
The left side outputPartitioning is:
(hashpartitioning(t1i#41, t1j#42, 8) or hashpartitioning(i#46, t1j#42, 8) or hashpartitioning(t1i#41, j#47, 8) or hashpartitioning(i#46, j#47, 8))
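A rough sketch of the extended check (simplified, not the actual CoalesceBucketsInJoin source):
import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, Partitioning, PartitioningCollection}

// A partitioning satisfies the bucket spec if it hashes exactly the join keys into the
// expected number of partitions; a PartitioningCollection matches if any member does.
def satisfiesOutputPartitioning(p: Partitioning, keys: Seq[Expression], numBuckets: Int): Boolean =
  p match {
    case HashPartitioning(exprs, n) =>
      n == numBuckets && exprs.length == keys.length &&
        exprs.zip(keys).forall { case (e, k) => e.semanticEquals(k) }
    case PartitioningCollection(partitionings) =>
      partitionings.exists(part => satisfiesOutputPartitioning(part, keys, numBuckets))
    case _ => false
  }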
Why are the changes needed?
Enhance CoalesceBucketsInJoin to support more cases.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Unit test.
cc @cloud-fan
|
2025-04-01T06:37:54.997029
| 2023-05-11T23:11:27
|
1706686592
|
{
"authors": [
"dongjoon-hyun",
"ueshin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3699",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/41148"
}
|
gharchive/pull-request
|
[SPARK-42945][CONNECT][FOLLOWUP] Disable JVM stack trace by default
What changes were proposed in this pull request?
This is a follow-up of #40575.
Disables JVM stack trace by default.
% ./bin/pyspark --remote local
...
>>> spark.conf.set("spark.sql.ansi.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
>>>
>>> spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
JVM stacktrace:
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
...
Why are the changes needed?
Currently JVM stack trace is enabled by default.
% ./bin/pyspark --remote local
...
>>> spark.conf.set("spark.sql.ansi.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
JVM stacktrace:
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
...
Does this PR introduce any user-facing change?
Users won't see the JVM stack trace by default.
How was this patch tested?
Existing tests.
Merged to master. Thank you, @ueshin and @allisonwang-db .
|
2025-04-01T06:37:55.003734
| 2024-05-26T14:31:51
|
2317741445
|
{
"authors": [
"HyukjinKwon",
"Ngone51"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3700",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/46747"
}
|
gharchive/pull-request
|
[SPARK-48394][3.5][CORE] Cleanup mapIdToMapIndex on mapoutput unregister
This PR backports https://github.com/apache/spark/pull/46706 to branch 3.5.
What changes were proposed in this pull request?
This PR cleans up mapIdToMapIndex when the corresponding mapstatus is unregistered in three places:
removeMapOutput
removeOutputsByFilter
addMapOutput (old mapstatus overwritten)
Why are the changes needed?
There is only one valid mapstatus for the same mapIndex at the same time in Spark. mapIdToMapIndex should also follow the same rule to avoid chaos.
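A self-contained toy model of the invariant (field names simplified; this is not the actual MapOutputTracker code):
import scala.collection.mutable

class ShuffleStatusModel(numMaps: Int) {
  // mapIndex -> currently registered mapId (None once unregistered)
  private val mapStatuses = Array.fill[Option[Long]](numMaps)(None)
  // reverse index that must stay consistent with mapStatuses
  private val mapIdToMapIndex = mutable.Map.empty[Long, Int]

  def addMapOutput(mapIndex: Int, mapId: Long): Unit = {
    mapStatuses(mapIndex).foreach(mapIdToMapIndex.remove) // old status overwritten: drop its stale entry
    mapStatuses(mapIndex) = Some(mapId)
    mapIdToMapIndex(mapId) = mapIndex
  }

  def removeMapOutput(mapIndex: Int): Unit = {
    mapStatuses(mapIndex).foreach(mapIdToMapIndex.remove) // cleanup on unregister
    mapStatuses(mapIndex) = None
  }
}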
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Unit tests.
Was this patch authored or co-authored using generative AI tooling?
No.
https://github.com/apache/spark/pull/46749 should fix the issue in the build in this case.
For now, you could rebase/force push and that should fix up the build
It seems I mistakenly pushed the branch to the apache repo rather than my own repo. I will remove that branch after the PR is merged.
@Ngone51 the build won't trigger if the branch is in apache repo. Let's just open a new PR with your forked repository.
@HyukjinKwon Oh, I see. Thanks for the reminder.
FYI created a new PR (https://github.com/apache/spark/pull/46768) to replace this one.
|
2025-04-01T06:37:55.007699
| 2024-08-15T07:42:50
|
2467580654
|
{
"authors": [
"LuciferYang",
"yaooqinn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3701",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/47767"
}
|
gharchive/pull-request
|
[MINOR][SQL][TESTS] Changes the test:runMain in the code comments to Test/runMain
What changes were proposed in this pull request?
This PR only changes the test:runMain form of the run commands in the code comments to Test/runMain.
Why are the changes needed?
When we run the command from the code comments, we see the following compilation warning:
build/sbt "sql/test:runMain org.apache.spark.sql.execution.benchmark.TopKBenchmark"
[warn] sbt 0.13 shell syntax is deprecated; use slash syntax instead: sql / Test / runMain
The relevant comments should be updated to eliminate the compilation warning when running the command.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Manually run the test using the updated command and check that the corresponding compilation warning is no longer present.
Was this patch authored or co-authored using generative AI tooling?
No
Please add [TEST] to the PR title
done
Merged into master. Thanks @yaooqinn
|
2025-04-01T06:37:55.011377
| 2024-12-23T09:29:32
|
2755613743
|
{
"authors": [
"cloud-fan",
"stefankandic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3702",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/49269"
}
|
gharchive/pull-request
|
[SPARK-50649] Fix inconsistencies with casting between different collations
What changes were proposed in this pull request?
Fixing the inconsistent behavior of casts between different collations. Currently, we are allowed to do casts between them in the DataFrame API but not in the SQL API. I propose allowing casts in SQL as well (we are already allowing them for complex types anyway).
Also, this means changing the behavior of CAST(x AS STRING), which previously did not alter the collation of x and will now change it to the default collation.
Why are the changes needed?
To make collation casts consistent between the DataFrame and SQL APIs.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Added new unit tests for the dataframe API which we didn't have before and also updated the existing tests for the SQL API to match the new behavior.
Was this patch authored or co-authored using generative AI tooling?
No.
@cloud-fan please take a look when you can
The Spark Connect test failure is unrelated and flaky, I'm merging it to master, thanks!
|
2025-04-01T06:37:55.018452
| 2015-04-23T06:59:34
|
70329246
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"liancheng",
"rxin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3703",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/5651"
}
|
gharchive/pull-request
|
[SPARK-7069][SQL] Rename NativeType -> AtomicType.
Also renamed JvmType to InternalType.
Test build #30817 has started for PR 5651 at commit cbd4028.
LGTM pending Jenkins.
Test build #30817 has finished for PR 5651 at commit cbd4028.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
protected[sql] abstract class AtomicType extends DataType
abstract class NumericType extends AtomicType
class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T]
class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T])
class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T]
class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T])
class Encoder[T <: AtomicType](columnType: NativeColumnType[T]) extends compression.Encoder[T]
class Decoder[T <: AtomicType](buffer: ByteBuffer, columnType: NativeColumnType[T])
This patch does not change any dependencies.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/30817/
Test PASSed.
|
2025-04-01T06:37:55.033776
| 2015-09-08T02:18:30
|
105287792
|
{
"authors": [
"AmplabJenkins",
"SparkQA",
"hhbyyh",
"holdenk",
"mengxr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3704",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/8650"
}
|
gharchive/pull-request
|
[SPARK-10482] [ML] Add Python interface for ml.CountVectorizer
jira: https://issues.apache.org/jira/browse/SPARK-10482
Add Python interface for feature transformer: ml.CountVectorizer
Merged build triggered.
Merged build started.
Test build #42112 has started for PR 8650 at commit 0f1fa34.
Test build #42112 has finished for PR 8650 at commit 0f1fa34.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42112/
Test FAILed.
Merged build finished. Test FAILed.
Merged build triggered.
Merged build started.
Test build #42122 has started for PR 8650 at commit d22ba5a.
Test build #42122 has finished for PR 8650 at commit d22ba5a.
This patch fails Python style tests.
This patch merges cleanly.
This patch adds no public classes.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42122/
Test FAILed.
Merged build finished. Test FAILed.
Merged build triggered.
Merged build started.
Test build #42125 has started for PR 8650 at commit dd0e933.
Test build #42125 has finished for PR 8650 at commit dd0e933.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
class CountVectorizer(JavaEstimator, HasInputCol, HasOutputCol):
class CountVectorizerModel(JavaModel):
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42125/
Test PASSed.
LGTM except some minor issues
This seems to do the same work as the outstanding PR https://github.com/apache/spark/pull/8561
@holdenk Yes, I just noticed it. Could you merge some changes in this PR into yours? I think the doctest from @hhbyyh is better and the default values are specified correctly in this PR. I will make a pass after.
@hhbyyh Since this duplicates #8561, do you mind closing this PR? You can check open PRs at https://spark-prs.appspot.com/#mllib.
Ok, I'll merge in the doc tests.
@mengxr Sorry for the extra effort during review.
|
2025-04-01T06:37:55.039107
| 2015-11-20T06:43:55
|
117975649
|
{
"authors": [
"AmplabJenkins",
"JoshRosen",
"SparkQA"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3705",
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/9857"
}
|
gharchive/pull-request
|
[SPARK-11877] Prevent agg. fallback conf. from leaking across test suites
This patch fixes an issue where the spark.sql.TungstenAggregate.testFallbackStartsAt SQLConf setting was not properly reset / cleared at the end of TungstenAggregationQueryWithControlledFallbackSuite. This ended up causing test failures in HiveCompatibilitySuite in Maven builds by causing spilling to occur way too frequently.
This configuration leak was inadvertently introduced during test cleanup in #9618.
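A minimal sketch of the reset pattern the fix relies on (assuming a SparkSession named spark; the value "2" and the surrounding code are illustrative):
val key = "spark.sql.TungstenAggregate.testFallbackStartsAt"
val original = spark.conf.getOption(key)
try {
  spark.conf.set(key, "2")  // force early fallback for this suite only
  // ... run the controlled-fallback aggregation queries ...
} finally {
  original match {
    case Some(v) => spark.conf.set(key, v)  // restore the previous value
    case None    => spark.conf.unset(key)   // or clear it so it cannot leak into other suites
  }
}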
Test build #46402 has started for PR 9857 at commit ffe29f7.
The failing HiveCompatibilitySuite test, mapjoin_mapjoin, has passed in the Maven pull request builder, and the modified TungstenAggregationQueryWithControlledFallbackSuite also passed tests, so I'm going to merge this now so that the overnight Maven builds have the opportunity to exhibit new test failures now that this one has been fixed.
Test build #46402 has finished for PR 9857 at commit ffe29f7.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/46402/
Test FAILed.
|
2025-04-01T06:37:55.056910
| 2015-08-11T21:06:15
|
100412398
|
{
"authors": [
"HeartSaVioR",
"jerrypeng",
"knusbaum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3706",
"repo": "apache/storm",
"url": "https://github.com/apache/storm/pull/675"
}
|
gharchive/pull-request
|
[STORM-949] On the topology summary UI page, added Elapsed time since error column.
Currently, the topology summary UI page tells you the last error and highlights it in red if it happened within the last 30 minutes. However, I think it would be useful to have a column showing the time that has elapsed since the most recent error occurred. I found this useful for monitoring the well-being of my Storm cluster.
Not sure why Travis is failing again. I just built and ran all tests on my local machine and everything was fine.
Hi, there's a storm-hive compilation issue.
Your patch modifies HTML pages, so you don't need to worry about the build. :)
I'll take a look when I have some time. Thanks!
@jerrypeng
I'd like to see a screenshot of the applied change before taking a detailed look. Could you post it as a comment?
Fixed the formatting issues.
I'm more in favor of putting a time/date here than an elapsed time.
The actual description for STORM-949 is "On the topology summary UI page, last shown error should have the time and date".
Yes, but the exact time and date can be found if you drill down to the component. I feel that showing the elapsed time since the error will tell administrators at a finer grain when the error happened.
@HeartSaVioR can you take a look at my pull request again?
@jerrypeng
I'm with @knusbaum. The PR is a bit different from the JIRA title, and I'm also more in favor of putting a time/date.
How about gathering consensus about this feature on the dev mailing list and reflecting the feedback?
Just modified the UI to have the error time shown as a time and date. If you hover over that time, a tooltip will pop up displaying the elapsed time.
The modified version of the UI is what @knusbaum, @d2r, and I suggested, and it seems that there are no other opinions now.
The build failure is not related to this change.
So I'm +1.
|
2025-04-01T06:37:55.061522
| 2021-05-21T16:08:41
|
898200806
|
{
"authors": [
"junlincc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3707",
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/14754"
}
|
gharchive/issue
|
[Explore]'Is temporal' checkbox should take effect
Current behavior:
Only columns that have the 'Is Temporal' box checked in the Edit dataset show up in the Time Column select dropdown, which is the right behavior ✅
What needs to be done:
ICON CHANGE - When the user unchecks 'Is Temporal' on a column that was detected as Time, the column icon should change to "?" or "#"; vice versa, the 🕑 icon should be used
If the column was previously selected as the time column, the time range WHERE clause should be removed automatically from the query
related project
[explore]Can't remove unnecessary Datetime column from time filter
https://user-images.githubusercontent.com/67837651/119166999-2a7ed700-ba14-11eb-9e26-a75d668ef8ec.mov
other related project:
[Explore]Search data panel column by key words
Drag and Drop
Not being able to sort columns in the Edit dataset modal is super annoying.. 🤣
@geido
another related issue:
when clicking "SYNC COLUMNS FROM SOURCE", Is Temporal is not detected
|
2025-04-01T06:37:55.069517
| 2021-06-22T15:28:55
|
927362849
|
{
"authors": [
"amitmiran137",
"laveenamurjani789"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3708",
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/15299"
}
|
gharchive/issue
|
Custom Color Theming Not working for embedded chart
While adding a custom color theme for a dashboard using the config below (where testpart23456789, Info, due, beforeTime, High are label names), the colors are applied only when the charts are viewed in the dashboard or when the dashboard is embedded.
The issue we are facing is that when a chart is explored or embedded on its own, these label colors are overridden by the default theming.
Also, while exploring a chart, these label colors appear when the "Superset Colors" theme is chosen, but after saving, the label colors are again overridden by the default theming.
"label_colors": {
"testpart23456789": "#FFFF00",
"Info": "#FFFF00",
"due": "#8B0000",
"beforeTime": "#008000",
"High": "#8B0000"
}
We got the reference for the above from the FAQ: https://superset.apache.org/docs/frequently-asked-questions ("Is there a way to force the use of specific colors?")
Expected results
When a chart is embedded or explored, the label colors should appear as provided, i.e. they should follow the config below:
"label_colors": {
"testpart23456789": "#FFFF00",
"Info": "#FFFF00",
"due": "#8B0000",
"beforeTime": "#008000",
"High": "#8B0000"
}
Actual results
While adding a custom color theme for a dashboard using the config below (where testpart23456789, Info, due, beforeTime, High are label names), the colors are applied only when the charts are viewed in the dashboard or when the dashboard is embedded.
The issue we are facing is that when a chart is explored or embedded on its own, these label colors are overridden by the default theming.
Also, while exploring a chart, these label colors appear when the "Superset Colors" theme is chosen, but after saving, the label colors are again overridden by the default theming.
Screenshots
How to reproduce the bug
Edit the dashboard to add the above config in the Advanced Dashboard properties and save.
Open the dashboard and explore any chart under it.
You will see that the provided colors are not being applied to the labels.
Environment
(please complete the following information):
superset version: 1.1.0
python version: 3.8.5
node.js version: v14.17.0
Checklist
Make sure to follow these steps before submitting your issue - thank you!
[ ] I have checked the superset logs for python stacktraces and included it here as text if there are any.
[x] I have reproduced the issue with at least the latest released version of superset.
[x] I have checked the issue tracker for the same issue and I haven't found one similar.
This should be solved in 1.4
|
2025-04-01T06:37:55.082055
| 2019-09-18T11:26:55
|
495165654
|
{
"authors": [
"LoveMyBaby",
"capttrousers",
"everton3x",
"hiwaveSupport",
"mechgt",
"sammigachuhi",
"villebro"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3709",
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/8249"
}
|
gharchive/issue
|
Unable to display data table in sqlite?
I have successfully installed Superset in Docker mode, and the example data also loaded successfully. When I add a local SQLite database for testing, SQL Lab can't display the table names.
superset version: 0.28.0
python version: 2.7.5
node.js version: 10.16.3
npm version: 6.9.0
Checklist
[ X] I have checked the superset logs for python stacktraces and included it here as text if there are any.
[ X] I have reproduced the issue with at least the latest released version of superset.
[ X] I have checked the issue tracker for the same issue and I haven't found one similar.
Additional context
No tables are displayed in the main schema; when I execute a SELECT, I get: sqlite error: no such table: xxx
Add datasource:
Test Connection: OK
Expose in SQL Lab: selected
No data table is displayed at the bottom!
View in SQL Lab Editor:
schema: main (only)
See table schema (0 tables in main)
Viewing the database with sqlite3 on CentOS 7: the table is displayed
This issue refers to an old version of Superset (also deprecated version of Python); please try the most recent official release (0.34) and reopen if the issue persists.
I installed Superset as per the tutorial via Docker Compose on Windows 11 and added a SQLite database using the PREVENT_UNSAFE_DB_CONNECTIONS = False directive.
The connection test is successful.
The database is fully functional and accessible.
The superset version is 0.0.0dev (as stated in the About section of the Settings menu).
But the same thing is happening to me:
View In SQL Lab Editor:
schema: main (only)
see table schema (0 in main)
I installed Superset as per the tutorial via Docker Compose on Windows 11 and added a SQLite database using the PREVENT_UNSAFE_DB_CONNECTIONS = False directive.
The connection test is successful.
The database is fully functional and accessible.
The superset version is 0.0.0dev (as stated in the About section of the Settings menu).
But the same thing is happening to me:
View In SQL Lab Editor:
schema: main (only)
see table schema (0 in main)
Can we reopen this?
I cloned the repo just now, got it running with Docker Compose, and followed this comment https://github.com/apache/superset/issues/9748#issuecomment-1124323169 to get the SQLite db added, but I have the same experience: adding the SQLite db file via the path copied into superset_home works with the driver connection string properly formatted, but adding a dataset from the SQLite db results in a single schema, main, and no tables found.
Exact same experience just now as capttrousers. Brand new install, managed to get the SQLite db to connect, but main only and no data :(
I just installed Superset using the manual steps and have it running. I added a sqlite3 db file under database connections, where it showed the connection to be OK with the unsafe flag set to FALSE. I also only see main as the schema under the newly attached database and don't see the table, which exists when I query the db file via the sqlite3 command line.
Infra:
SQLite 3.41.2 2023-03-22 11:56:21 0d1fc92f94cb6b76bffe3ec34d69cffde2924203304e8ffc4155597af0c191da
zlib version 1.2.13
gcc-11.2.0
Loaded your LOCAL configuration at [/home/vibhu/src/talkAItive/superset/superset_config.py]
Python 3.10.13
Flask 2.2.5
Werkzeug 2.3.8
Help please.
The same problem persists even as of May 2024. A solution, please?
|
2025-04-01T06:37:55.096509
| 2019-09-27T09:35:24
|
499341600
|
{
"authors": [
"B-Cheye",
"Nikomahal",
"muneneg",
"rusackas",
"stevensuting",
"syazshafei",
"syazwan0913",
"timurista"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:3710",
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/8314"
}
|
gharchive/issue
|
How to change the Big Number CSS to look like that?
How do I configure it to make the Big Number chart look like that?
Example:
@syazwan0913 you'll probably have to add your own component for this. Superset abstracts the common components to their plugins and exposes them as individual packages. https://github.com/apache-superset/superset-ui-plugins/blob/master/packages/.
If you take a look at the big number chart imported via the MainPresets.js file, you will find it does use a mainColor variable (i.e. the colorPicker) in the linear gradient.
The source code is pretty straightforward if you want to read it here: https://github.com/apache-superset/superset-ui-plugins/blob/484d63993b81d593183f1f1a2b8f9d91aeef310f/packages/superset-ui-legacy-preset-chart-big-number/src/BigNumber/BigNumber.jsx#L207
As you can see, it's tightly coupled with the AreaSeries chart from @data-ui/xy-chart. You might want a new component or a new render section here that just takes a background image. Probably filing an issue / PR and seeing if you can add it to the ui-plugins would be best.
I'd like that feature too :)
@timurista i see. thanks for your guide.
@syazwan0913 You could, in theory, do this with CSS. When you "Edit Dashboard" there's an option to add CSS. If the number and sequence of these Big Number components on your dashboard are static, you can use CSS nth-of-type trickery. I made a quick example to illustrate the case:
Styles applied here are as follows.
Set the background color of each instance.
.superset-legacy-chart-big-number:nth-of-type(1){ background: orange; }
Put the subheader where you want it (you can set the width so it wraps, etc.)
.superset-legacy-chart-big-number .subheader-line { text-align: left; position: absolute; bottom: 10px; left: 10px; }
Add a CSS pseudo element, and give it the icon you want as a background.
.superset-legacy-chart-big-number:nth-of-type(1)::after { content: ''; display: block; height: 60px; width: 60px; background: url(https://image.flaticon.com/icons/png/512/121/121901.png); background-size: contain; position: absolute; bottom: 20px; right: 20px; }
That's super hacky, but might get you the result you want without having to make new components.
@rusackas That is great. I will try it out. Thanks
@rusackas What version of superset do you have installed? I have tried this hack on v0.28.1 and it did not work.
Thanks
I'm not sure what I was running at the time, but I usually run the latest code on master. Not sure where you added the CSS, but just for clarity, do the following (which I did on the example Baby Names dashboard):
click "Edit Dashboard"
click the dropdown arrow at the far right next to "Switch to view mode", and select "Edit CSS" from the dropdown menu.
Paste in this block of CSS:
.superset-legacy-chart-big-number:nth-of-type(1){
background: orange;
}
.superset-legacy-chart-big-number .subheader-line {
text-align: left;
position: absolute;
bottom: 10px;
left: 10px;
}
.superset-legacy-chart-big-number:nth-of-type(1)::after {
content: '';
display: block;
height: 60px;
width: 60px;
background: url(https://image.flaticon.com/icons/png/512/121/121901.png);
background-size: contain;
position: absolute;
bottom: 20px;
right: 20px;
}
You should see the result instantly, but you can close the modal/overlay, and click "Switch to view mode" to finish editing.
The result should look like so:
@muneneg Another hacky way I found out is to use the chart id:
#chart-id-225{background: green;}
#chart-id-226{background: orange;}
#chart-id-227{background: red;}
@B-Cheye How did you manage to make the entire box coloured? With the CSS you have shared above, only the chart area gets coloured, not the dashboard-component. Using your code above results in the image shown.
@stevensuting If you want the whole dashboard-component color to change, then you will need to target the whole dashboard div.
@B-Cheye How do you do that when this is how the CSS is ordered?
Could you share your CSS snippet?
@stevensuting it looks like @B-Cheye's solution references the id attribute on the same line I'd annotated with "But it's here..." in the screenshot. So I don't think it is coloring the entire chart wrapper on the dash, but just the chart area itself. You could get hacky with nth-child/nth-of-type CSS selectors on the dashboard to color the whole wrapper, if your layout is fairly stable. A minimal sketch of that idea is below.
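For illustration only, here is a sketch of what such a selector-based override could look like, pasted into the dashboard's "Edit CSS" panel. The wrapper class name and the nth-of-type indices are assumptions and may differ between Superset versions, so inspect the rendered DOM first and adjust the selectors.
/* Sketch: color the whole chart wrapper, not just the chart area.
   .dashboard-component-chart-holder is an assumed wrapper class name;
   verify it in the browser dev tools before relying on it. */
.dashboard-component-chart-holder:nth-of-type(1) {
  background: orange;
}
.dashboard-component-chart-holder:nth-of-type(2) {
  background: green;
}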
So I am asking in 2024 with v3.1.1: how do we do this with CSS? I particularly want my value to be centered.
|