| id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
84511098 | [LIBCLOUD-716] Fix port type error
Fixes issue: https://issues.apache.org/jira/browse/LIBCLOUD-716
Thanks, the change looks good to me.
It looks like things would only break if the API or auth service was on a non-standard port, right?
On a related note - some tests would also be good.
Thinking more about it - this code has been like that since pretty much the beginning and I worked with many OpenStack installations which ran on a custom port and I never encountered any issues.
Curious if we have introduced a regression somewhere recently or there is something else going on (e.g. some Python versions don't accept a string).
My libcloud runs on the following python version:
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
on an Ubuntu 14.04 system. My devstack is from the latest version.
I am using the following snippet to run openstack operations which fail, as described in the above jira issue link.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
OpenStack = get_driver(Provider.OPENSTACK)
driver = OpenStack('demo', '******',
ex_tenant_name='demo',
ex_force_auth_url='http://1.2.3.4:5000',
ex_force_auth_version='2.0_password')
print driver.list_sizes()
My python2.7 socket.py create_connection complains that port is not an int or str. Looking into this, I can see that it receives a port from unicode type and that is probably the reason it fails.
Adding traces in _tuple_from_url to print netloc and its type, I can see that netloc for the nova service is unicode.
1.2.3.4:5000, <type 'str'>
1.2.3.4:5000, <type 'str'>
1.2.3.4:8774, <type 'unicode'>
From looking into libcloud code it seems to me that port can be a string if it is fetched out from a url (_tuple_from_url).
It seems that libcloud -> openstack communication is done through URLs: connecting to keystone first (passing the keystone URL through ex_force_auth_url), followed by connecting to the proper OpenStack service API (e.g. nova), which is retrieved from the catalog returned by the keystone service and is a URL as well. The latter is parsed as unicode, which leads to a unicode port, which again probably leads to my socket.py failure.
Is that correct?
I would like to raise a question:
Looking into _tuple_from_url, I can see:
if ":" in netloc:
netloc, port = netloc.rsplit(":")
port = port
I am curious whether the 2nd port assignment is somewhat redundant.
Perhaps port handling could be made consistent by setting it as an integer here, since I can see in _tuple_from_url that port is otherwise set to the integers 80 or 443 (if not set before).
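The suggested normalization can be sketched as follows. This is an illustrative re-implementation, not libcloud's actual _tuple_from_url (the thread used Python 2's urlparse module; Python 3's urllib is used here for the sketch):

```python
from urllib.parse import urlparse  # the thread used Python 2's urlparse module


def tuple_from_url(url):
    """Illustrative sketch of the proposed fix -- NOT libcloud's actual
    _tuple_from_url.  The point is normalizing port to int right where
    the URL is split, so socket.create_connection() never sees a
    string/unicode port."""
    parsed = urlparse(url)
    netloc = parsed.netloc
    if ":" in netloc:
        netloc, port = netloc.rsplit(":", 1)
        port = int(port)  # the one-line change discussed above
    else:
        port = 443 if parsed.scheme == "https" else 80
    return netloc, port
```

With this normalization, a catalog URL such as http://1.2.3.4:8774 yields an int port regardless of whether the URL string was str or unicode to begin with.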
This one-line change fixed my issue, and I had to adapt only three OpenStack tests (committed here) for the unit tests to pass.
Hi,
I noticed someone else reported the exact issue I have and suggested a very similar solution:
http://mail-archives.apache.org/mod_mbox/libcloud-dev/201504.mbox/<CAFvoK-0B_PXYL7RNJ2_+pc4wSYMN2TJizzueXLJgL7r9DG3iQw@mail.gmail.com>
@aviweit Thanks for the additional details and clarification.
I added some additional casting to int inside the connect method to make sure port is always an int there and merged PR into trunk. Thanks.
The merged fix works for me. I tested with libcloud managing OpenStack, Amazon and Softlayer.
I am closing this PR. Thanks.
Great. Thanks.
| gharchive/pull-request | 2015-06-03T11:37:56 | 2025-04-01T04:33:29.479075 | {
"authors": [
"Kami",
"aviweit"
],
"repo": "apache/libcloud",
"url": "https://github.com/apache/libcloud/pull/533",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1762954673 | Explanation of the overall steps for creating a new UDF
Explanation of the overall steps for creating a new UDF
to Preview the docs
The UDF usage introduction is placed here as a standalone document.
| gharchive/pull-request | 2023-06-19T07:48:57 | 2025-04-01T04:33:29.481304 | {
"authors": [
"ahaoyao",
"casionone"
],
"repo": "apache/linkis-website",
"url": "https://github.com/apache/linkis-website/pull/718",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1574148196 | Build reproducibility on Windows
Jar files generated on Windows do not match those published on Maven Central.
The problem is caused by line endings of resources (in version 0.2.0 just log4j-changelog.xsd), which are checked out as CRLF on Windows and LF on UNIX.
Isn't this usually handled with git config autocrlf?
@garydgregory, locally yes, but the value of core.autocrlf is not inherited when you clone a repository.
I believe that we should add an appropriate .gitattributes configuration.
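A minimal .gitattributes along these lines might look like the sketch below. This assumes LF is the desired ending for all text files; the .xsd pattern comes from the resource mentioned above:

```
# Normalize all text files to LF in the working tree, regardless of the
# user's core.autocrlf setting, so checkouts are byte-identical across
# platforms and builds stay reproducible
*       text=auto eol=lf
*.xsd   text eol=lf
```

Unlike core.autocrlf, these attributes are committed with the repository, so every clone gets the same line-ending behavior.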
| gharchive/issue | 2023-02-07T11:20:25 | 2025-04-01T04:33:29.483074 | {
"authors": [
"garydgregory",
"ppkarwasz"
],
"repo": "apache/logging-log4j-tools",
"url": "https://github.com/apache/logging-log4j-tools/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
870078218 | LUCENE-9943 DOC: Fix spelling (camelCase it like GitHub)
Description
Please provide a short description of the changes you're making with this pull request.
[x] Docs have been updated
Solution
Please provide a short description of the approach taken to implement your solution.
docs update => spelling: github
Tests
Please describe the tests you've developed or run to confirm this patch implements the feature or solves the problem.
Checklist
Please review the following and check all that apply:
[x] I have reviewed the guidelines for How to Contribute and my code conforms to the standards described there to the best of my ability.
[x] I have created a Jira issue and added the issue ID to my pull request title.
[x] I have given Lucene maintainers access to contribute to my PR branch. (optional but recommended)
[x] I have developed this patch against the main branch.
[x] I have run ./gradlew check.
[x] I have added tests for my changes.
Other information:
Signed-off-by: Ayushman Singh Chauhan ascb508@gmail.com
Thanks!
| gharchive/pull-request | 2021-04-28T15:29:33 | 2025-04-01T04:33:29.487948 | {
"authors": [
"ayushman17"
],
"repo": "apache/lucene",
"url": "https://github.com/apache/lucene/pull/111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1236036281 | Path problems in git-for-windows bash
I am currently evaluating mvnd on Windows and found a strange path problem.
When executing mvn clean verify from CMD, PowerShell, or git-for-windows-bash, then I can build my sample project just fine.
When executing mvnd clean verify from CMD or PowerShell, then I still can build my sample project just fine (and, BTW, much faster).
But when executing mvnd clean verify from git-for-windows bash, then apparently the buildnumber-maven-plugin is unable to find the git binary (which is particularly weird, as this is git-for-windows bash!):
[ERROR] Failed to execute goal org.codehaus.mojo:buildnumber-maven-plugin:1.4:create (default) on project jakarta.ws.rs-api: Cannot get the branch information from the scm repository :
[ERROR] Exception while executing SCM command.: Error while executing command. Error while executing process. Cannot run program "git" (in directory "C:\Users\markus\git\jaxrs-api-master\jaxrs-api"): CreateProcess error=2, System cannot find the specified file
Is this a bug of mvnd or is there something the installation instructions are missing?
Do you have a simple project to reproduce the problem ?
| gharchive/issue | 2022-05-14T15:56:13 | 2025-04-01T04:33:29.490723 | {
"authors": [
"gnodet",
"mkarg"
],
"repo": "apache/maven-mvnd",
"url": "https://github.com/apache/maven-mvnd/issues/651",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
327910937 | FaultDomain, conventions for additional hierarchy.
Added notes from the "Convention for Additional Hierarchy" of the original design doc: https://docs.google.com/document/d/1gEugdkLRbBsqsiFv3urRPRNrHwUC-i1HwfFfHR_MvC8/edit#heading=h.emfys1xszpir
xref https://issues.apache.org/jira/browse/MESOS-8967
@jdef would you like to keep this PR open? Doing some grooming today.
Yes, please keep this open.
| gharchive/pull-request | 2018-05-30T21:46:03 | 2025-04-01T04:33:29.495710 | {
"authors": [
"greggomann",
"jdef"
],
"repo": "apache/mesos",
"url": "https://github.com/apache/mesos/pull/294",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
553368413 | Issue in init and deInit of BLE and understanding of code
Sometimes I am having this problem after I call nimble_port_stop(); nimble_port_deinit();
assertion "ret == pdPASS" failed: file "F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/src/npl_os_freertos.c", line 291, function: npl_freertos_sem_release
I am using esp-idfv3.3.1 with ESP32-WROOM
and tested with both nimble-1.1.0-idfv3.3 and nimble-1.2.0-idf
here is full assertion log
assertion "ret == pdPASS" failed: file "F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/src/npl_o
s_freertos.c", line 291, function: npl_freertos_sem_release
abort() was called at PC 0x4010fd17 on core 0
0x4010fd17: __assert_func at /Users/ivan/e/newlib_xtensa-2.2.0-bin/newlib_xtensa-2.2.0/xtensa-esp32-elf/newlib/libc/stdlib/../../../.././newlib/libc/stdlib/assert.c:63 (discriminator 8)
ELF file SHA256: a82d01ccf69a5c9f5068f08aad415292749fc0a546dfafccaf1963fd72c2cfba
Backtrace: 0x4008e978:0x3fff0c80 0x4008ebc5:0x3fff0ca0 0x4010fd17:0x3fff0cc0 0x4017db73:0x3fff0cf0 0x4017cd76:0x3fff0d20 0x4017590a:0x3fff0d40 0x40175922:0x3fff0d60 0x4017596e:0x3fff0d80 0x4017059e:0x3fff0da0 0x4017142c:0x3fff0dc0 0x40171454:0x3fff0e20 0x40176da2:0x3fff0
e80 0x40176dd2:0x3fff0eb0 0x40173d4d:0x3fff0ed0 0x4017cdce:0x3fff0ef0 0x40137ab8:0x3fff0f10 0x40097cc9:0x3fff0f30
0x4008e978: invoke_abort at F:/msys32/home/umeri/esp/esp-idf/components/esp32/panic.c:715
0x4008ebc5: abort at F:/msys32/home/umeri/esp/esp-idf/components/esp32/panic.c:715
0x4010fd17: __assert_func at /Users/ivan/e/newlib_xtensa-2.2.0-bin/newlib_xtensa-2.2.0/xtensa-esp32-elf/newlib/libc/stdlib/../../../.././newlib/libc/stdlib/assert.c:63 (discriminator 8)
0x4017db73: npl_freertos_sem_release at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/src/npl_os_freertos.c:369
0x4017cd76: ble_npl_sem_release at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/include/nimble/nimble_npl_os.h:202
(inlined by) ble_hs_stop_cb at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/nimble/src/nimble_port.c:94
0x4017590a: ble_hs_stop_done at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_hs_stop.c:57 (discriminator 3)
0x40175922: ble_hs_stop_terminate_next_conn at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_hs_stop.c:75
0x4017596e: ble_hs_stop_gap_event at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_hs_stop.c:117
0x4017059e: ble_gap_event_listener_call at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_gap.c:1682
0x4017142c: ble_gap_conn_broken at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_gap.c:1682
0x40171454: ble_gap_rx_disconn_complete at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_gap.c:1682
0x40176da2: ble_hs_hci_evt_disconn_complete at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_hs_hci_evt.c:167
0x40176dd2: ble_hs_hci_evt_process at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/nimble/host/src/ble_hs_hci_evt.c:800
0x40173d4d: ble_hs_event_rx_hci_ev at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/include/nimble/nimble_npl_os.h:109
0x4017cdce: ble_npl_event_run at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/npl/freertos/include/nimble/nimble_npl_os.h:121
(inlined by) nimble_port_run at F:/msys32/home/umeri/esp/esp-idf/components/nimble/nimble/porting/nimble/src/nimble_port.c:81
0x40137ab8: bleprph_host_task at F:/msys32/home/umeri/esp/cowlar-finger-print-scanner-esp32/components/iot-core/ble/app_ble.c:579
0x40097cc9: vPortTaskWrapper at F:/msys32/home/umeri/esp/esp-idf/components/freertos/port.c:403
I have a problem understanding this part of the code:
this function never returns because SLIST_FOREACH keeps calling the listener callback function, and I could not find the code which removes the callback function from the slist
/**
* Called when a stop procedure has completed.
*/
static void
ble_hs_stop_done(int status)
{
struct ble_hs_stop_listener_slist slist;
struct ble_hs_stop_listener *listener;
ble_hs_lock();
ble_gap_event_listener_unregister(&ble_hs_stop_gap_listener);
slist = ble_hs_stop_listeners;
SLIST_INIT(&ble_hs_stop_listeners);
ble_hs_enabled_state = BLE_HS_ENABLED_STATE_OFF;
ble_hs_unlock();
SLIST_FOREACH(listener, &slist, link) {
listener->fn(status, listener->arg);
}
}
and sometime later, in nimble_port.c at line 127, ble_npl_sem_deinit(&ble_hs_stop_sem); deinitializes the semaphore, and this function
ble_hs_stop_cb(int status, void *arg)
{
ble_npl_sem_release(&ble_hs_stop_sem);
}
is still trying to release it, and then the RTOS raises the assert
I'm having a similar issue when trying to deinit on esp32. I've traced it to ble_npl_sem_pend() inside the nimble_port_stop() function and it never gets past that. Were you able to resolve it?
Has this been resolved? I'm on ESP32 and nimble_port_deinit(); seems to hang. It seems like basic functionality to be able to start/stop the BTLE.
| gharchive/issue | 2020-01-22T07:52:31 | 2025-04-01T04:33:29.508692 | {
"authors": [
"felixcollins",
"tennten10",
"umer-ilyas"
],
"repo": "apache/mynewt-nimble",
"url": "https://github.com/apache/mynewt-nimble/issues/736",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2344605297 | NIFI-13373: Adding support for banner text
NIFI-13373:
Adding support for banner text.
will review
| gharchive/pull-request | 2024-06-10T18:46:30 | 2025-04-01T04:33:29.528191 | {
"authors": [
"mcgilman",
"rfellows"
],
"repo": "apache/nifi",
"url": "https://github.com/apache/nifi/pull/8947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2202583955 | Some errors is reported when the sqlite database is compiled
Hello, when I get the latest code (apps and nuttx) from the repository and then add SQLite after selecting the sim/nsh configuration, it produces a lot of errors during compilation, such as assert, unknown type name 'off64_t', etc.
My computer runs Ubuntu 20.04. I hope to get help solving these problems.
@laoniaokkk please share your .config
config.txt
@acassis Hello,this is my .config, I just use 'sim/nsh' and select database as:
[] SQLITE library
(3.45.1) SQLITE version
[] SQLite cmd line tool
(8192) SQLite3 cmd line tool stack size
Thank you @laoniaokkk ! Using your config I got the same issue. Seems like HOST_ARM64 is not supported yet.
Then I decided to change it to HOST_X86_64 and got some errors and fixed them.
However, SQLite didn't work when I ran "nsh> sqlite3". Please find the files I modified to get it compiling.
@yinshengkai @xiaoxiang781216 I appreciate if you guys could include an example using the SIM or better yet some real board.
config_sim_sqlite.txt
Makefile.txt
sqlite3.c.txt
@Gary-Hobson will provide patch soon.
Compilation error problems can be solved by adding this patch: https://github.com/apache/nuttx/pull/12303
The following patch adds an example of sqlite in sim: https://github.com/apache/nuttx/pull/12305
Since hostfs does not support FIOC_FILEPATH, it cannot currently be used in hostfs
Steps for usage:
./tools/configure.sh sim:sqlite
make -j
./nuttx
nsh> cd tmp
nsh> ls
/tmp:
nsh> sqlite3 test.db
SQLite version 3.45.1 2024-01-30 16:01:20
Enter ".help" for usage hints.
sqlite> CREATE TABLE COMPANY(
(x1...> ID INT PRIMARY KEY NOT NULL,
(x1...> NAME TEXT NOT NULL,
(x1...> AGE INT NOT NULL,
(x1...> ADDRESS CHAR(50),
(x1...> SALARY REAL
(x1...> );
sqlite> .schema COMPANY
CREATE TABLE COMPANY(
ID INT PRIMARY KEY NOT NULL,
NAME TEXT NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR(50),
SALARY REAL
);
sqlite> .quit
sqlite>
nsh>
nsh> ls -l
/tmp:
-rwxrwxrwx 12288 test.db
| gharchive/issue | 2024-03-22T14:07:32 | 2025-04-01T04:33:29.534662 | {
"authors": [
"Gary-Hobson",
"acassis",
"laoniaokkk",
"xiaoxiang781216"
],
"repo": "apache/nuttx-apps",
"url": "https://github.com/apache/nuttx-apps/issues/2336",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2436646150 | nsh_dd: fix lseek return check.
As was merged in commit 1852731df87e5895e299b74aabdb1984b7e3f795 on dd_main.c: lseek returns -1 on error.
Should be consistent in nsh_ddcmd.c and nsh_main.c.
Summary
Impact
Testing
@dry-75 could you rebase the patch to the latest mainline? ci broken is fixed by https://github.com/apache/nuttx-apps/pull/2455.
@xiaoxiang781216 rebased.
| gharchive/pull-request | 2024-07-30T00:32:33 | 2025-04-01T04:33:29.537026 | {
"authors": [
"dry-75",
"xiaoxiang781216"
],
"repo": "apache/nuttx-apps",
"url": "https://github.com/apache/nuttx-apps/pull/2456",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2566245338 | [BUG] flock unlocking issues
Description / Steps to reproduce the issue
The file lock flock interface (and the fs_lock.c implementation generally; I suppose the issue is valid for direct fcntl locks as well) does not seem to handle file unlocking correctly. A simple scenario where two threads open and access the file and both attempt to lock it with flock(fd, LOCK_EX); ends with the following result:
thread 1 locks file and accesses it
thread 2 locks file and blocks
thread 1 unlocks file
thread 2 should access the file, but it stays blocked
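On a regular POSIX host the same four-step scenario behaves as expected; the small Python sketch below (illustrative only, not part of NuttX) encodes it, with thread 2 unblocking as soon as thread 1 releases the lock:

```python
import fcntl
import os
import tempfile
import threading


def expected_flock_behavior():
    """Reproduces the four-step scenario above on a POSIX host:
    thread 2's flock(LOCK_EX) must unblock once thread 1 unlocks."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    order = []

    f1 = open(path, "r+")
    f2 = open(path, "r+")           # separate open() -> separate lock owner
    fcntl.flock(f1, fcntl.LOCK_EX)  # step 1: thread 1 locks the file

    def thread2():
        fcntl.flock(f2, fcntl.LOCK_EX)  # step 2: blocks while thread 1 holds it
        order.append("t2-locked")       # step 4: proceeds after the unlock
        fcntl.flock(f2, fcntl.LOCK_UN)

    t = threading.Thread(target=thread2)
    t.start()
    order.append("t1-unlock")
    fcntl.flock(f1, fcntl.LOCK_UN)  # step 3: thread 1 unlocks
    t.join(timeout=5)               # thread 2 must not stay blocked
    f1.close(); f2.close(); os.unlink(path)
    return order
```

The resulting order is "t1-unlock" followed by "t2-locked"; in the NuttX case described here, the second step never completes.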
So far I have figured out two issues. One is that the lock pid is obtained incorrectly if the lock is taken from a POSIX thread created by the main process instead of a completely separate process. Imho the task id (gettid()) should be used instead of the pid (getpid()) to ensure every thread is treated independently and locking between threads is also possible. The following diff solves this.
diff --git a/fs/vfs/fs_lock.c b/fs/vfs/fs_lock.c
index 944b8a5ea1..45755c6c8f 100644
--- a/fs/vfs/fs_lock.c
+++ b/fs/vfs/fs_lock.c
@@ -670,12 +686,11 @@ int file_setlk(FAR struct file *filep, FAR struct flock *flock,
if (ret < 0)
{
goto out_free;
}
- request.l_pid = getpid();
-
+ request.l_pid = gettid();
But the main issue I am facing is with the unlocking, as described above. From my tests it seems the lock from thread 1 is unlocked successfully, deleted from the list, and the semaphore is posted, which releases the second thread. At this moment, the second thread jumps to the retry label (see this line) and takes list_for_every_entry(), which searches through the list of active locks. This should be empty now, as we released the only held lock. But for some reason the list returns an existing lock and goes to file_lock_is_conflict(). But the data in the returned lock are not valid! Or at least it seems to be that way; I get a pid value of something like 541213036, so it seems we are accessing a bad part of the memory.
I have tested this while using SmartFS file system and NOR flash, but I suppose this is reproducible on other file systems as well as the issue seems to be in the common part of locking infrastructure or list implementation. One more change is required for SmartFS, I have not committed it yet to the mainline:
diff --git a/fs/smartfs/smartfs_smart.c b/fs/smartfs/smartfs_smart.c
index d41476065a..bc4e6477cd 100644
--- a/fs/smartfs/smartfs_smart.c
+++ b/fs/smartfs/smartfs_smart.c
@@ -1035,7 +1035,7 @@ static int smartfs_ioctl(FAR struct file *filep, int cmd, unsigned long arg)
}
break;
default:
- ret = -ENOSYS;
+ ret = -ENOTTY;
break;
}
I have CONFIG_FS_LOCK_BUCKET_SIZE=8, I suppose this is the only configuration needed.
On which OS does this issue occur?
[OS: Linux]
What is the version of your OS?
Ubuntu 22.04.5, 6.8.0-45-generic
NuttX Version
master
Issue Architecture
[Arch: arm]
Issue Area
[Area: File System]
Verification
[X] I have verified before submitting the report.
Pinging the flock implementation author @crafcat7, any ideas?
But the main issue I am facing is with the unlocking as described above. From my tests it seems the lock from thread 1 is unlocked successfully, deleted from the list and semaphore is posted, which releases the second thread. At this moment, the second thread jumps to retry label (see this line) and takes list_for_every_entry() which search through the list of active locks. This should be empty now as we released the only held lock. But for some reason the list returns an existing lock and goes to file_lock_is_conflict(). But the data in the returned lock are not valid! Or at least is seems to be that way, I get pid value of something like 541213036, so it seems we are accessing a bad part of the memory.
Hi, Thanks for providing the steps to reproduce the issue, which has been fixed in the commit https://github.com/apache/nuttx/pull/13826
So far I have figured out two issues. One is the incorrect obtain of lock pid if the lock is taken from a POSIX thread created from the main process instead of a completely separate process. Imho task id (gettid()) should be used instead of pid (getpid()) to ensure every thread is treated independently and locking between threads is also possible. The following diff solves this.
Regarding the design that gettid / getpid should be used in file locks, I expect the implementation to be consistent with that in Linux. In the file lock implementation in Linux, I read that they use groupid instead of threadid if I understand correctly (refer to https://github.com/torvalds/linux/blob/27cc6fdf720183dce1dbd293483ec5a9cb6b595e/fs/locks.c#L528-L533)
Hi, Thanks for providing the steps to reproduce the issue, which has been fixed in the PR #13826
Thanks for the quick fix!
Regarding the design that gettid / getpid should be used in file locks, I expect the implementation to be consistent with that in Linux. In the file lock implementation in Linux, I read that they use groupid instead of threadid if I understand correctly (refer to https://github.com/torvalds/linux/blob/27cc6fdf720183dce1dbd293483ec5a9cb6b595e/fs/locks.c#L528-L533)
Hmm, I have tested the same app against Linux (kernel version 6.8.0-45) and thread locking works there. But yes, from the code you sent it seems they are using thread group ID, which should be the same as what we get from getpid() in NuttX.
| gharchive/issue | 2024-10-04T12:21:10 | 2025-04-01T04:33:29.548767 | {
"authors": [
"crafcat7",
"michallenc"
],
"repo": "apache/nuttx",
"url": "https://github.com/apache/nuttx/issues/13821",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1850900886 | sim: Remove unnecessary configurations
Summary
LIBUV will auto select CONFIG_PSEUDOFS_SOFTLINKS and CONFIG_SCHED_HAVE_PARENT
Impact
Testing
/usr/bin/ld: nuttx.rel: in function `uv__fs_readlink': /github/workspace/sources/apps/system/libuv/libuv/src/unix/fs.c:743: undefined reference to `NXreadlink'
This error is due to the missing commit https://github.com/apache/nuttx-apps/pull/1908.
They're loop dependent, and this commit needs to merge first.
Please find other defconfig which use LIBUV and fix the similar issue in this patch too.
| gharchive/pull-request | 2023-08-15T04:49:16 | 2025-04-01T04:33:29.552426 | {
"authors": [
"Zhangshoukui",
"xiaoxiang781216"
],
"repo": "apache/nuttx",
"url": "https://github.com/apache/nuttx/pull/10228",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2577396093 | net/local: fix the return address is incorrect when accept
Summary
The argument addr is a pointer to a sockaddr structure. This structure is filled in with the address of the peer socket, as known to the communications layer.
Impact
Testing
sim:local
[Experimental Bot, please feedback here]
No, this PR does not meet the NuttX Requirements.
Here's why:
Insufficient Summary: The provided summary describes a technical detail but lacks crucial information:
Missing "Why": It doesn't explain the reason for the change. Is this a bug fix? A performance improvement? A new feature?
Missing Context: It mentions "the argument addr" without specifying where this argument is used. Which function or data structure is being modified?
Missing Impact Assessment: The "Impact" section is completely empty. You must address each impact category, even if it's to state "NO" with a brief justification.
Inadequate Testing:
Missing Details: "sim:local" is not descriptive enough. Specify the simulator used, the NuttX configuration, and any relevant build options.
Missing Logs: The testing logs sections are empty. Provide actual output from your tests, both before and after the change, to demonstrate the problem and the solution.
To make your PR meet the requirements, you need to:
Expand the Summary:
Clearly state the reason for the change.
Provide context about the code being modified.
Briefly explain how the change addresses the issue.
Complete the Impact Assessment:
Address each impact category, even if it's just to say "NO - this change has no impact on X"
Provide specific details and justifications where applicable.
Provide Detailed Testing Information:
List the specific simulator(s), NuttX configurations, and build options used.
Include relevant logs from your tests, demonstrating the issue before and the solution after the change.
| gharchive/pull-request | 2024-10-10T02:49:05 | 2025-04-01T04:33:29.560104 | {
"authors": [
"nuttxpr",
"zhhyu7"
],
"repo": "apache/nuttx",
"url": "https://github.com/apache/nuttx/pull/14018",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2042851191 | HDDS-9926. Publish WIP website to staging branch (1/2)
Part 1/2. Followed up by #56
Create a staging branch for the new website where builds can be pushed. Add a description of it to the README and auto-publishing information to .asf.yaml. See staging branches in the .asf.yaml documentation.
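For reference, the publish/staging configuration described in the ASF .asf.yaml documentation generally looks like the sketch below; the branch names here are the conventional placeholders, not necessarily the ones this PR uses:

```
publish:
  whoami: asf-site        # branch whose content is served as the production site
staging:
  profile: ~              # default staging profile -> <repo>.staged.apache.org
  whoami: asf-staging     # branch whose content is served on the staging host
```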
Testing
I don't think changes to .asf.yaml can be tested until the change is merged.
Thanks for the review @adoroszlai. I will merge this and we can try it out with #56
| gharchive/pull-request | 2023-12-15T03:58:05 | 2025-04-01T04:33:29.562150 | {
"authors": [
"errose28"
],
"repo": "apache/ozone-site",
"url": "https://github.com/apache/ozone-site/pull/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1197799195 | HDDS-6566. [Multi-Tenant] Fix a permission check bug that prevents non-delegated admins from assigning/revoking users to/from the tenant
What changes were proposed in this pull request?
A permission check bug prevents non-delegated admins from assigning users to the tenant or revoking users from the tenant.
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-6566
How was this patch tested?
[x] Add a new test case that verifies a non-delegated admin have the permission to assign or revoke user accessIds in the tenant.
[x] All existing test cases shall pass.
@errose28 No problem! Thanks for the review and comments.
| gharchive/pull-request | 2022-04-08T20:39:23 | 2025-04-01T04:33:29.564444 | {
"authors": [
"smengcl"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/3288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2047227327 | HDDS-9776. Migrate simple client integration tests to JUnit5
What changes were proposed in this pull request?
Migrating all the tests under integration-test/ozone/client to Junit5 and making sure that Junit4 is not used in any of the tests
Verified using the grep -rlE "org.junit.[A-Z]" . --include '*.java' command in hadoop-ozone/integration-test/
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-9776
How was this patch tested?
Full CI run: https://github.com/VarshaRaviCV/ozone/actions/runs/7248042617
@adoroszlai @Galsza Please review
| gharchive/pull-request | 2023-12-18T18:40:56 | 2025-04-01T04:33:29.566813 | {
"authors": [
"VarshaRaviCV"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/5819",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2440017062 | HDDS-11255. Add replication offpeak parameter for scm
What changes were proposed in this pull request?
Although the current SCM-issued replication commands can be controlled via parameters, many clusters experience distinct business peak and off-peak periods. Therefore, adding a parameter to indicate non-peak business periods and using a ratio to represent the InFlightLimit proportion between off-peak and peak periods would help.
This approach can prevent replication from causing increased DN iowait during peak periods, while fully utilizing DN resources during off-peak periods to accelerate replication.
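The off-peak/peak ratio idea can be sketched as follows. All names and the window defaults here are illustrative assumptions, not the actual SCM configuration keys proposed in the PR:

```python
from datetime import time


def replication_inflight_limit(now, base_limit, offpeak_ratio,
                               offpeak_start=time(22, 0),
                               offpeak_end=time(6, 0)):
    """Sketch of the idea in this PR: scale the replication in-flight
    limit by a ratio during a configured off-peak window, leaving the
    base limit in effect during business peak hours."""
    # The window wraps past midnight: 22:00 -> 06:00.
    in_offpeak = now >= offpeak_start or now < offpeak_end
    return int(base_limit * offpeak_ratio) if in_offpeak else base_limit
```

With a ratio above 1, off-peak hours get a proportionally higher in-flight limit, accelerating replication when the cluster is idle while keeping the conservative base limit during the day.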
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-11255
How was this patch tested?
unit tests
@errose28 Hi, can you help review this PR?
@weimingdiit , thanks for proposing this. Do we observe any performance or stability improvement after this is applied?
@weimingdiit , thanks for proposing this. Do we observe any performance or stability improvement after this is applied?
@ChenSammi Thanks, I will update the code and give some detailed instructions
@ChenSammi Hi, thanks for your review, i have updated the code according to your suggestion. Please review it again.
@smengcl Hi, could you help review this PR?
cc @sodonnel
For this to work, you probably also need to adjust the number of replication threads in the DNs, otherwise the requests will simply queue at the DN side.
I am also not sure about the motivation for this - holding back replication during peak could end up with data loss if the problems are not repaired quickly enough. It also feels like the same goal could be achieved by making the replication parameter dynamically configurable and then adjusting them with an external command without needing to restart any services.
A better solution, although much more difficult, is that the cluster can adjust the replication rate based on the load the cluster is under.
Additionally, in many clusters, there could be full days that are off peak (eg Saturday and Sunday) plus during the night.
I feel it would be better to give this some more thought about other ways of solving the problem.
@weimingdiit This looks like a full fledged feature which will a design doc and a review for usability, scale and correctness. We should move this PR to draft while we work on understanding the use case and how it should be implemented.
@sodonnel Thank you for your comments and suggestions.
I think the solution to this issue could be divided into two steps:
Step 1: [It also feels like the same goal could be achieved by making the replication parameter dynamically configurable and then adjusting them with an external command without needing to restart any services.]
I agree with this approach. In this way, the two newly added parameters in the aforementioned PR are unnecessary. We just need to ensure that the key parameters related to replication in SCM and DN are dynamically configurable, and then control them through external scripts. This method should solve most of the issues.
Step 2: [A better solution, although much more difficult, is that the cluster can adjust the replication rate based on the load the cluster is under.]
As you mentioned, it requires fully dynamic control of the entire replication process based on certain load data, finding a proper balance between replication speed and the read-write latency caused by IO. But what metrics should be collected from the DN in this case? Memory, CPU, IO (perhaps the most important)? Based on these metrics, we could determine which nodes should handle the replication and at what speed. This method is indeed more elegant.
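One possible shape for the load-based approach in step 2 is a simple linear throttle. The sketch below is a hypothetical illustration, not Ozone code; the class name, parameters, and the linear policy itself are all assumptions:

```java
// Hypothetical sketch, not Ozone code: scale the replication rate cap linearly
// with observed IO utilization (0.0 = idle, 1.0 = saturated).
class ReplicationThrottle {
    private final long maxBytesPerSec;  // rate allowed when the node is idle
    private final long minBytesPerSec;  // floor, so replication never fully stops

    ReplicationThrottle(long maxBytesPerSec, long minBytesPerSec) {
        this.maxBytesPerSec = maxBytesPerSec;
        this.minBytesPerSec = minBytesPerSec;
    }

    // Clamp utilization to [0, 1], then interpolate between max and the floor.
    long allowedRate(double ioUtilization) {
        double u = Math.max(0.0, Math.min(1.0, ioUtilization));
        long rate = (long) (maxBytesPerSec * (1.0 - u));
        return Math.max(minBytesPerSec, rate);
    }
}
```

A scheduler could recompute `allowedRate` periodically from DN-reported IO metrics and feed it to the replication rate limiter; keeping a non-zero floor means under-replicated containers continue repairing even at peak, which addresses the data-loss concern raised earlier in this thread.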
@sodonnel Perhaps the issue title should be modified to "Make key replication-related parameters in SCM and DN dynamically configurable." What do you think?
It is fine to leave this PR for future reference, and create a new Jira and PR to make the parameters dynamically configurable. That is a useful feature by itself. This PR could be revisited in the future so it is good to leave it just in case.
@sodonnel Thanks for your suggestion, I have created a new Jira. https://issues.apache.org/jira/browse/HDDS-11451
Are we ok to close this for now, if we are going forward with HDDS-11451? It can be re-opened at any time and still referenced later. Closing just indicates it is not actively undergoing reviews.
@errose28 OK, I agree to close this PR
| gharchive/pull-request | 2024-07-31T13:05:59 | 2025-04-01T04:33:29.578904 | {
"authors": [
"ChenSammi",
"errose28",
"kerneltime",
"sodonnel",
"weimingdiit"
],
"repo": "apache/ozone",
"url": "https://github.com/apache/ozone/pull/7010",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2368055372 | [C++] Unable to read parquetjs-created file using low-level parquet-cpp API
Follow-up to PARQUET-1482: basic support for reading Parquet files with Data page V2 pages was added as a part of PARQUET-1482.
However, this was only added to the higher-level Arrow API, and not to the lower-level Parquet API. We could port this fix to the lower-level API so that more users can read Parquet files with Data page V2 pages.
Reporter: Rylan Dmello / @rdmello
Assignee: Rylan Dmello / @rdmello
Related issues:
[C++] Implement support for DataPageV2 (is caused by)
Original Issue Attachments:
all_types_gzip_02.parquet
feeds1kMicros.parquet
Note: This issue was originally created as PARQUET-1560. Please see the migration documentation for further details.
Wes McKinney / @wesm:
DataPageV2 support in parquet-cpp is currently broken
Tera G:
attached the gzip compressed data page v2 file: all_types_gzip_02.parquet
| gharchive/issue | 2019-04-10T21:07:16 | 2025-04-01T04:33:29.585406 | {
"authors": [
"asfimport"
],
"repo": "apache/parquet-java",
"url": "https://github.com/apache/parquet-java/issues/2320",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
360172488 | PHOENIX-4875: Don't acquire a mutex while dropping a table and while creating a view
Removed acquiring mutex in create view and drop table code path
Modified test accordingly
@twdsilva @JamesRTaylor please review. Thanks!
Thanks @twdsilva!
| gharchive/pull-request | 2018-09-14T06:26:30 | 2025-04-01T04:33:29.586911 | {
"authors": [
"ChinmaySKulkarni"
],
"repo": "apache/phoenix",
"url": "https://github.com/apache/phoenix/pull/348",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1897529169 | log errors without exceptions in query console
this is a bugfix for #11596.
similar to #11513, this will log errors when the controller returns a structured error response like {code: 500, error: ...}. In my case, it's returning error=null, so I'm including a separate log for that.
This is what it looks like replicating the error from #11597
cc @xiangfu0 since you recently worked on this
Codecov Report
Merging #11598 (6a97122) into master (8abe86b) will decrease coverage by 48.57%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #11598 +/- ##
=============================================
- Coverage 63.05% 14.49% -48.57%
+ Complexity 1106 201 -905
=============================================
Files 2326 2326
Lines 124974 124972 -2
Branches 19078 19078
=============================================
- Hits 78803 18112 -60691
- Misses 40569 105326 +64757
+ Partials 5602 1534 -4068
Flag
Coverage Δ
integration
?
integration1
?
integration2
?
java-11
?
java-17
?
java-20
14.49% <ø> (-48.42%)
:arrow_down:
temurin
14.49% <ø> (-48.57%)
:arrow_down:
unittests
14.49% <ø> (-48.56%)
:arrow_down:
unittests1
?
unittests2
14.49% <ø> (-0.01%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
see 1502 files with indirect coverage changes
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
| gharchive/pull-request | 2023-09-15T00:45:34 | 2025-04-01T04:33:29.599959 | {
"authors": [
"codecov-commenter",
"jadami10"
],
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/11598",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
975966775 | Add segmentNameGeneratorType config to IndexingConfig
Add segmentNameGeneratorType to IndexingConfig
Add SegmentNameGeneratorFactory
Add the test
TODO: in the long term, we should have a dedicated
segment generator related config in the table config.
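As a rough illustration of the factory pattern this PR introduces — a hedged sketch only, where the class names, type strings, and generated name format are assumptions, not Pinot's actual implementation:

```java
import java.util.Locale;

// Minimal interface standing in for a segment name generator SPI.
interface SegmentNameGenerator {
    String generateSegmentName(int sequenceId);
}

// One concrete generator: "<tableName>_<sequenceId>" (format is illustrative).
class SimpleSegmentNameGenerator implements SegmentNameGenerator {
    private final String tableName;

    SimpleSegmentNameGenerator(String tableName) {
        this.tableName = tableName;
    }

    @Override
    public String generateSegmentName(int sequenceId) {
        return tableName + "_" + sequenceId;
    }
}

// Factory keyed by the configured segmentNameGeneratorType string.
class SegmentNameGeneratorFactory {
    static SegmentNameGenerator create(String type, String tableName) {
        String t = (type == null) ? "simple" : type.toLowerCase(Locale.ROOT);
        switch (t) {
            case "simple":
                return new SimpleSegmentNameGenerator(tableName);
            default:
                throw new IllegalArgumentException(
                    "Unsupported segmentNameGeneratorType: " + type);
        }
    }
}
```

A dedicated segment-generator config section in the table config, as the TODO suggests, would let this factory read all of its options from one place instead of scattering them across IndexingConfig.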
Codecov Report
Merging #7346 (2802506) into master (f3ce66f) will decrease coverage by 2.16%.
The diff coverage is 88.65%.
:exclamation: Current head 2802506 differs from pull request most recent head 89866fc. Consider uploading reports for the commit 89866fc to get more accurate results
@@ Coverage Diff @@
## master #7346 +/- ##
============================================
- Coverage 71.53% 69.36% -2.17%
+ Complexity 3287 3206 -81
============================================
Files 1503 1112 -391
Lines 74128 52337 -21791
Branches 10787 7869 -2918
============================================
- Hits 53029 36305 -16724
+ Misses 17488 13434 -4054
+ Partials 3611 2598 -1013
Flag
Coverage Δ
integration1
?
integration2
?
unittests1
69.36% <88.65%> (+0.08%)
:arrow_up:
unittests2
?
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
...t/local/segment/store/SegmentLocalFSDirectory.java
68.11% <ø> (+0.49%)
:arrow_up:
.../pinot/segment/spi/store/ColumnIndexDirectory.java
40.00% <ø> (ø)
...ache/pinot/segment/spi/store/SegmentDirectory.java
50.00% <ø> (ø)
...va/org/apache/pinot/spi/utils/CommonConstants.java
25.45% <ø> (ø)
.../spi/creator/name/SegmentNameGeneratorFactory.java
52.38% <52.38%> (ø)
...rocessing/framework/SegmentProcessorFramework.java
97.01% <88.88%> (-1.35%)
:arrow_down:
...rg/apache/pinot/common/lineage/SegmentLineage.java
84.78% <100.00%> (-6.53%)
:arrow_down:
...ent/local/segment/store/FilePerIndexDirectory.java
92.22% <100.00%> (ø)
.../pinot/segment/local/segment/store/IndexEntry.java
91.66% <100.00%> (+5.95%)
:arrow_up:
...he/pinot/segment/local/segment/store/IndexKey.java
65.00% <100.00%> (+6.17%)
:arrow_up:
... and 611 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f3ce66f...89866fc. Read the comment docs.
| gharchive/pull-request | 2021-08-20T22:51:19 | 2025-04-01T04:33:29.621030 | {
"authors": [
"codecov-commenter",
"snleee"
],
"repo": "apache/pinot",
"url": "https://github.com/apache/pinot/pull/7346",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1487881751 | [Doc] customizing log levels for java functions is not clear
Search before asking
[X] I searched in the issues and found nothing similar.
What issue do you find in Pulsar docs?
Following the step 1 https://github.com/apache/pulsar/blob/0d707aec92595868192380758a46f8f02886128f/site2/docs/functions-develop-log.md?plain=1#L57-L64
I've changed the property pulsar.log.level to debug in functions_log4j2.xml.
But step 2 is confusing. The root logger level is hard-coded in the file https://github.com/apache/pulsar/blob/ac7a34fe757fd8acaf7b87aac427b9e515b0aa7c/conf/functions_log4j2.xml#L123
But the doc says the level is ${sys:pulsar.log.level}
https://github.com/apache/pulsar/blob/0d707aec92595868192380758a46f8f02886128f/site2/docs/functions-develop-log.md?plain=1#L66-L76
So I forgot to change the root logger level, couldn't get debug-level logs, and spent a lot of time debugging.
What is your suggestion?
If we are supposed to change the root logger level, the doc should say so. Or could we change the root logger level to ${sys:pulsar.log.level}?
Or perhaps it should be ${sys:pulsar.log.root.level}, like
https://github.com/apache/pulsar/blob/0d707aec92595868192380758a46f8f02886128f/conf/log4j2.yaml#L134-L141
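If the intent is for the root logger to follow a property as well, the relevant fragment of functions_log4j2.xml could look like the sketch below. This is an illustration of the suggestion, not the shipped file; the `${sys:name:-default}` form is Log4j2's property-default syntax, and the appender name is an assumption:

```xml
<!-- Sketch: root logger level resolved from a system property with a default,
     instead of a hard-coded level. -->
<Root level="${sys:pulsar.log.root.level:-info}">
    <AppenderRef ref="Console"/>
</Root>
```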
Any reference?
No response
Are you willing to submit a PR?
[X] I'm willing to submit a PR!
@vagetablechicken Thanks for your issue!
@freeznet Could you take a look at this issue? I think it may be the log configuration issue that we need to fix.
@vagetablechicken nice catch and I see you have checked the I'm willing to submit a PR! box, so I will assign this task to you.
| gharchive/issue | 2022-12-10T03:43:23 | 2025-04-01T04:33:29.630140 | {
"authors": [
"RobertIndie",
"freeznet",
"vagetablechicken"
],
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/issues/18861",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1648220015 | [Bug] pulsar - avro.version has a vulnerability CVE-2021-43045
Search before asking
[X] I searched in the issues and found nothing similar.
Version
For the pulsar version : 2.11.0 facing a high vulnerability CVE-2021-43045 related to packages:
org.apache.avro:avro
org.apache.avro:avro-protobuf
org.yaml:snakeyaml
Below are the versions available in pulsar client.
<avro.version>1.10.2</avro.version>
<snakeyaml.version>1.31</snakeyaml.version>
Maven Dependency
org.apache.pulsar:pulsar:2.11.0
Minimal reproduce step
Run the blackDuck Scan.
What did you expect to see?
Internal Jar need to expected to fix the blackDuck issues.
What did you see instead?
vulnerability
Anything else?
No response
Are you willing to submit a PR?
[ ] I'm willing to submit a PR!
@velagalasantosh The linked CVE seems to be only related to C# version of Avro, right?
I understand this is getting flagged (and it will be fixed ASAP). From a risk perspective though it wouldn't apply as a vulnerability:
Closing as invalid.
This is a vulnerability report for .NET SDK. We're using the Java SDK only.
| gharchive/issue | 2023-03-30T19:51:50 | 2025-04-01T04:33:29.636722 | {
"authors": [
"merlimat",
"tisonkun",
"velagalasantosh"
],
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/issues/19973",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1039131327 | [website][upgrade]feat: docs migration for 2.7.1 / concepts
Motivation
Explain here the context, and why you're making that change. What is the problem you're trying to solve.
Modifications
Click to Preview
Snapshot
Verifying this change
[ ] Make sure that the change passes the CI checks.
Does this pull request potentially affect one of the following parts:
If yes was chosen, please highlight the changes
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API: (yes / no)
The schema: (yes / no / don't know)
The default values of configurations: (yes / no)
The wire protocol: (yes / no)
The rest endpoints: (yes / no)
The admin cli options: (yes / no)
Anything that affects deployment: (yes / no / don't know)
Documentation
Check the box below and label this PR (if you have committer privilege).
Need to update docs?
[ ] doc-required
(If you need help on updating docs, create a doc issue)
[x] no-need-doc
(Please explain why)
[ ] doc
(If this PR contains doc changes)
Fix #11766
@Anonymitaet conflict resolved, PTAL
| gharchive/pull-request | 2021-10-29T03:26:05 | 2025-04-01T04:33:29.645921 | {
"authors": [
"Anonymitaet",
"urfreespace"
],
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/pull/12525",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1164233500 | [owasp] Suppress ZooKeeper 3.8.0 vulnerabilities
Motivation
After the ZK upgrade, the OWASP check fails because ZK has a known vulnerability report against it. The existing OWASP suppression targets ZK 3.6.2.
Modifications
Edited the ZK suppression to 3.8.0
[x] no-need-doc
We released 3.8.0 last week and the OWASP checker passed (I actually cancelled one RC due to an OWASP check failure)
I am not aware of any security issue in ZooKeeper
Please explain more
I get this output
zookeeper-3.8.0.jar (pkg:maven/org.apache.zookeeper/zookeeper@3.8.0, cpe:2.3:a:apache:zookeeper:3.8.0:*:*:*:*:*:*:*) : CVE-2021-28164, CVE-2021-29425, CVE-2021-34429
zookeeper-prometheus-metrics-3.8.0.jar (pkg:maven/org.apache.zookeeper/zookeeper-prometheus-metrics@3.8.0, cpe:2.3:a:apache:zookeeper:3.8.0:*:*:*:*:*:*:*, cpe:2.3:a:prometheus:prometheus:3.8.0:*:*:*:*:*:*:*) : CVE-2021-28164, CVE-2021-29425, CVE-2021-34429
For example
https://nvd.nist.gov/vuln/detail/CVE-2021-34429#match-7614933
https://nvd.nist.gov/vuln/detail/CVE-2021-28164#match-7615085
I checked and ZK doesn't contain the mentioned libraries. I don't know what the process for that is, but it is a ZooKeeper problem.
I will highlight the problem to the ZooKeeper maintainers (or @eolivelli I'll let you proceed if you want)
For now I think we can keep suppressing the specific vulnerabilities; they appear to be false positives (at a first glance at the ZK repo), so it makes sense to suppress them. BTW, they are already suppressed now; this will unblock the CI.
in the ZK project we added these exclusions due to false positives:
https://github.com/apache/zookeeper/commit/3004c909b78b3056985c8e39925e14bde3baa430
<suppress>
<!-- Seems like false positives about zookeeper-jute -->
<cve>CVE-2021-29425</cve>
<cve>CVE-2021-28164</cve>
<cve>CVE-2021-34429</cve>
</suppress>
So it is fine to add the same exclusions there
/pulsarbot rerun-failure-checks
/pulsarbot rerun-failure-checks
| gharchive/pull-request | 2022-03-09T17:27:31 | 2025-04-01T04:33:29.652282 | {
"authors": [
"eolivelli",
"nicoloboschi"
],
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/pull/14630",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
439898941 | Allow users to update auth data during function update
Motivation
Currently, after a function is submitted there is no way to update the auth data associated with the function.
Users may want to be able to update the auth data of a function that is running.
This will especially support use cases involving initially running Pulsar without authentication enabled but then enabling authentication. Functions need to be able to be updated to have auth data for a seamless transition
Modifications
Allow users to update the auth data when updating a function.
Have a flag to control whether to update the auth data of the function or simply keep using the existing auth data
Improve the FunctionAuthProvider interface to support this use case
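A hedged sketch of the flag's semantics — illustrative only; the class and method names below are assumptions, not Pulsar's actual FunctionAuthProvider API:

```java
// Hypothetical sketch: decide which auth data to persist on function update.
// If the caller set the update flag and supplied new data, use it; otherwise
// keep the existing auth data so running functions are unaffected.
class FunctionAuthUpdate {
    static String resolveAuthData(boolean updateAuthData,
                                  String existing, String provided) {
        if (updateAuthData && provided != null) {
            return provided;
        }
        return existing;
    }
}
```

This shape supports the migration scenario above: a function submitted before authentication was enabled keeps its (empty) auth data until an update explicitly sets the flag and supplies credentials.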
rerun java8 tests
rerun java8 tests
| gharchive/pull-request | 2019-05-03T06:01:11 | 2025-04-01T04:33:29.654535 | {
"authors": [
"jerrypeng"
],
"repo": "apache/pulsar",
"url": "https://github.com/apache/pulsar/pull/4198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1095289463 | [DISCUSS] Add a system property to override Runtime.getRuntime().availableProcessors()
I deploy RocketMQ in a Docker environment. Some properties take the number of runtime processors as their default, which causes the JVM to start up with the physical host's processor count — a setting that is not suitable for my virtualized host.
I do know there are some ways to correct this, such as:
add -XX:ActiveProcessorCount=count at startup, but this is only available from JDK 8u191 onwards
upgrade jdk to jdk 11.
However, it is hard to push such an upgrade in a big company like ours.
Or
3. change the code for every property that refers to the runtime processor count.
That is too much work and may not carry over between versions.
So I suggest adding an optional system property to accomplish this.
Thx.
Best regards.
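A minimal sketch of the proposed override — hypothetical only; the property name `rocketmq.available.processors` is an assumption, not an existing RocketMQ property:

```java
// Hypothetical sketch: an optional system property overrides the JVM-reported
// processor count; missing or invalid values fall back to the runtime value.
class ProcessorCount {
    static final String PROP = "rocketmq.available.processors"; // assumed name

    static int available() {
        String override = System.getProperty(PROP);
        if (override != null) {
            try {
                int n = Integer.parseInt(override.trim());
                if (n > 0) {
                    return n;
                }
            } catch (NumberFormatException ignored) {
                // fall through to the runtime value
            }
        }
        return Runtime.getRuntime().availableProcessors();
    }
}
```

Every place that currently calls `Runtime.getRuntime().availableProcessors()` for a default would call this helper instead, so `-Drocketmq.available.processors=4` on the command line would take effect without a JDK upgrade.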
I think Runtime.getRuntime().availableProcessors() can be used as default, and supporting parameter configuration should solve your problem.
@Git-Yang Well, I think the configuration class parameters should stay concise; configuration parameters now appear in many places in the code. Of course, this is just my personal opinion 😀
As this problem occurs in a Docker environment, I think it would be more appropriate to upgrade the JDK version in the container rather than change code at the application level, because such incomplete container support in the JDK does not only affect Runtime.getRuntime().availableProcessors(), but also ParallelGCThreads, CICompilerCount, etc.
BTW, it seems that this problem has been resolved since jdk8u131: https://blogs.oracle.com/java/post/java-se-support-for-docker-cpu-and-memory-limits
| gharchive/issue | 2022-01-06T12:48:11 | 2025-04-01T04:33:29.676039 | {
"authors": [
"Git-Yang",
"Kvicii",
"caigy",
"gogodjzhu"
],
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/issues/3719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1321691704 | Remove useless GetRouteInfoRequestHeader.split constant to ensure compatibility
/assigned
| gharchive/issue | 2022-07-29T02:46:44 | 2025-04-01T04:33:29.680306 | {
"authors": [
"RongtongJin",
"misselvexu"
],
"repo": "apache/rocketmq",
"url": "https://github.com/apache/rocketmq/issues/4719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2553627239 | [Bug][Seatunnel-web][DB2] DB2 Datasource fails due to improper use of database and schema names.
Corrected all db2 related issues mentioned in the corresponding issue
Purpose of this pull request
Check list
[ ] Code changed are covered with tests, or it does not need tests for reason:
[ ] If any new Jar binary package adding in your PR, please add License Notice according
New License Guide
[ ] If necessary, please update the documentation to describe the new feature. https://github.com/apache/seatunnel/tree/dev/docs
After the fix, I am able to select the schema from the schema list.
Executed a DB2-as-source job successfully.
| gharchive/pull-request | 2024-09-27T19:46:28 | 2025-04-01T04:33:29.684070 | {
"authors": [
"arshadmohammad"
],
"repo": "apache/seatunnel-web",
"url": "https://github.com/apache/seatunnel-web/pull/220",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1476157048 | [Feature] The performance of queryTrace by traceId in Huge dataset is slow in ES
Search before asking
[X] I had searched in the issues and found no similar feature requirement.
Description
#5023 had mentioned poor performance in trace query. Still, when we need to store trace in months, and everyday's segment is huge, the latency of trace query is unacceptable.
Use case
Maybe we could use an independent trace_index storing only traceId, startTime, and endTime.
We would then look up startTime and endTime in the trace_index before querying the trace against a narrowed range of segment indices.
Just like the metrics record procedure, a doc with a specific trace_id in trace_index would only be cached and updated by a specific SkyWalking instance.
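A minimal sketch of what such a trace_index Elasticsearch mapping could look like — the field names and types here are assumptions for illustration, not SkyWalking's actual schema:

```json
{
  "mappings": {
    "properties": {
      "trace_id":   { "type": "keyword" },
      "start_time": { "type": "date" },
      "end_time":   { "type": "date" }
    }
  }
}
```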
Related issues
#5023
Are you willing to submit a PR?
[X] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Nothing changes due to this proposal. You will still need to query segments by trace ID.
And you referred to an issue from years ago; it doesn't mean anything for the current code.
Thanks. Anyway, could you consider adding a duration option for trace ID search? It might help narrow down the query range.
AFAIK, no, we are not planning for this. A lot of users would complain if we add this. And more importantly, no proposal/vote is started by a committer, so, nothing should be changed.
| gharchive/issue | 2022-12-05T09:54:51 | 2025-04-01T04:33:29.732299 | {
"authors": [
"dylanforest",
"wu-sheng"
],
"repo": "apache/skywalking",
"url": "https://github.com/apache/skywalking/issues/10094",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
144022738 | [SPARK-14206][SQL] buildReader() implementation for CSV
What changes were proposed in this pull request?
Major changes:
Implement FileFormat.buildReader() for the CSV data source.
Add an extra argument to FileFormat.buildReader(), physicalSchema, which is basically the result of FileFormat.inferSchema or user specified schema.
This argument is necessary because the CSV data source needs to know all the columns of the underlying files to read the file.
How was this patch tested?
Existing tests should do the work.
Test build #54326 has started for PR 12002 at commit dd2afe6.
cc @marmbrus @yhuai @cloud-fan
Test build #54330 has started for PR 12002 at commit 20aabfc.
Test build #54326 has finished for PR 12002 at commit dd2afe6.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54326/
Test PASSed.
Test build #54330 has finished for PR 12002 at commit 20aabfc.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54330/
Test PASSed.
Test build #54424 has started for PR 12002 at commit 292ad6d.
Test build #54424 has finished for PR 12002 at commit 292ad6d.
This patch fails to build.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54424/
Test FAILed.
Test build #54426 has started for PR 12002 at commit e6ed363.
Test build #54426 has finished for PR 12002 at commit e6ed363.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54426/
Test FAILed.
Test build #54438 has started for PR 12002 at commit 2f1ae7f.
Test build #54438 has finished for PR 12002 at commit 2f1ae7f.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54438/
Test PASSed.
Merged build finished. Test PASSed.
Test build #54508 has started for PR 12002 at commit 84eddff.
Test build #54508 has finished for PR 12002 at commit 84eddff.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54508/
Test PASSed.
Test build #54518 has started for PR 12002 at commit e2c9ce1.
Test build #54518 has finished for PR 12002 at commit e2c9ce1.
This patch fails to build.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54518/
Test FAILed.
Test build #54519 has started for PR 12002 at commit 1c48f7b.
Test build #54519 has finished for PR 12002 at commit 1c48f7b.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54519/
Test FAILed.
retest this please.
The last build failure was caused by an irrelevant test case.
retest this please
Test build #54534 has started for PR 12002 at commit 1c48f7b.
Test build #54534 has finished for PR 12002 at commit 1c48f7b.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/54534/
Test PASSed.
LGTM
LGTM
| gharchive/pull-request | 2016-03-28T16:44:57 | 2025-04-01T04:33:29.767035 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"cloud-fan",
"liancheng",
"yhuai"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/12002",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
204235987 | Branch 2.0
What changes were proposed in this pull request?
How was this patch tested?
Please review http://spark.apache.org/contributing.html before opening a pull request.
Can one of the admins verify this patch?
Hi @kishorbp , it seems mistakenly open. Would you please close this?
| gharchive/pull-request | 2017-01-31T08:55:07 | 2025-04-01T04:33:29.769342 | {
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"kishorbp"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/16752",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
219813995 | Branch 1.6
What changes were proposed in this pull request?
Found a bug in the memory management of Spark 1.6 related to spark.memory.storageFraction, whose behavior does not match the official documentation.
How was this patch tested?
I added some log information to the UnifiedMemoryManager file as follows:
logInfo("storageMemoryPool.memoryFree %f".format(storageMemoryPool.memoryFree/1024.0/1024.0))
logInfo("onHeapExecutionMemoryPool.memoryFree %f".format(onHeapExecutionMemoryPool.memoryFree/1024.0/1024.0))
logInfo("storageMemoryPool.memoryUsed %f".format(storageMemoryPool.memoryUsed/1024.0/1024.0))
logInfo("onHeapExecutionMemoryPool.memoryUsed %f".format(onHeapExecutionMemoryPool.memoryUsed/1024.0/1024.0))
logInfo("storageMemoryPool.poolSize %f".format(storageMemoryPool.poolSize/1024.0/1024.0))
logInfo("onHeapExecutionMemoryPool.poolSize %f".format(onHeapExecutionMemoryPool.poolSize/1024.0/1024.0))
When I run the PageRank program, the input file is generated by BigDataBench (Chinese Academy of Sciences), a benchmark used to evaluate big data analysis systems; the file size is 676M. The job is submitted as follows:
./bin/spark-submit --class org.apache.spark.examples.SparkPageRank
--master yarn
--deploy-mode cluster
--num-executors 1
--driver-memory 4g
--executor-memory 7g
--executor-cores 6
--queue thequeue
./examples/target/scala-2.10/spark-examples-1.6.2-hadoop2.2.0.jar
/test/Google_genGraph_23.txt 6
The configuration is as follows:
spark.memory.useLegacyMode=false
spark.memory.fraction=0.75
spark.memory.storageFraction=0.2
Log information is as follows:
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: storageMemoryPool.memoryFree 0.000000
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: onHeapExecutionMemoryPool.memoryFree 5663.325877
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: storageMemoryPool.memoryUsed 0.299123
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: onHeapExecutionMemoryPool.memoryUsed 0.000000
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: storageMemoryPool.poolSize 0.299123
17/02/28 11:07:34 INFO memory.UnifiedMemoryManager: onHeapExecutionMemoryPool.poolSize 5663.325877
According to the configuration, storageMemoryPool.poolSize should be at least 1 GB, but the log reports only 0.299123 MB, so there is an error.
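Under the configuration above, the expected storage region size can be estimated from the documented unified-memory formula (a rough sketch in plain Python, not Spark source; the 300 MB reserve and the exact constants of a given 1.6.x build are assumptions from the documentation):

```python
# Hedged sketch of the documented UnifiedMemoryManager sizing:
# usable = executor memory minus the reserved system memory (documented as 300 MB),
# unified = usable * spark.memory.fraction,
# storage region = unified * spark.memory.storageFraction.
RESERVED_MB = 300

def storage_region_mb(executor_mem_mb, fraction=0.75, storage_fraction=0.2):
    usable = executor_mem_mb - RESERVED_MB   # memory left after the reserve
    unified = usable * fraction              # shared execution + storage pool
    return unified * storage_fraction        # region protected from eviction

# With --executor-memory 7g and the configuration above:
print(round(storage_region_mb(7 * 1024), 1))  # → 1030.2 (MB), i.e. about 1 GB
```

So the expected storage pool floor is roughly 1 GB, which is why the logged 0.299123 MB indicates a bug.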
Please review http://spark.apache.org/contributing.html before opening a pull request.
Can one of the admins verify this patch?
@zhangwei72 close this too. Please be careful not to open these.
\spark-1.6.2\core\src\main\scala\org\apache\spark\memory
| gharchive/pull-request | 2017-04-06T07:50:12 | 2025-04-01T04:33:29.776137 | {
"authors": [
"AmplabJenkins",
"srowen",
"zhangwei72"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/17548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
260986358 | [SPARK-22143][SQL] Fix memory leak in OffHeapColumnVector
What changes were proposed in this pull request?
WritableColumnVector does not close its child column vectors. This can create memory leaks for OffHeapColumnVector, where we do not clean up the memory allocated by a vector's children. This can be especially bad for string columns (which use a child byte column vector).
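The fix can be pictured with a minimal sketch (plain Python, not Spark's actual Java classes): closing a parent vector must cascade to its children, otherwise the child's off-heap allocation leaks.

```python
# Minimal sketch of the leak and the fix: close() must recurse into child
# vectors (e.g. the byte child backing a string column) before releasing
# the parent. `closed = True` stands in for freeing the off-heap buffer.
class OffHeapVector:
    def __init__(self, children=()):
        self.children = list(children)
        self.closed = False

    def close(self):
        for child in self.children:  # the previously missing step
            child.close()
        self.closed = True           # free this vector's own allocation

child = OffHeapVector()
parent = OffHeapVector(children=[child])
parent.close()
print(child.closed)  # → True (without the recursion, this memory would leak)
```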
How was this patch tested?
I have updated the existing tests to always use both on-heap and off-heap vectors. Testing and diagnoses was done locally.
cc @ala @michal-databricks @ueshin
Test build #82243 has started for PR 19367 at commit 4b494c5.
Test build #82247 has started for PR 19367 at commit 6156758.
Test build #82248 has started for PR 19367 at commit 84caf03.
Test build #82243 has finished for PR 19367 at commit 4b494c5.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82243/
Test PASSed.
Merged build finished. Test PASSed.
Test build #82247 has finished for PR 19367 at commit 6156758.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82247/
Test PASSed.
Test build #82248 has finished for PR 19367 at commit 84caf03.
This patch fails SparkR unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82248/
Test FAILed.
retest this please
Test build #82251 has started for PR 19367 at commit 84caf03.
Test build #82251 has finished for PR 19367 at commit 84caf03.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merging to master.
| gharchive/pull-request | 2017-09-27T14:02:51 | 2025-04-01T04:33:29.790563 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"hvanhovell"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/19367",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
523254512 | [SPARK-29728][SQL] Datasource V2: Support ALTER TABLE RENAME TO
What changes were proposed in this pull request?
This PR adds ALTER TABLE a.b.c RENAME TO x.y.x support for V2 catalogs.
Why are the changes needed?
The current implementation doesn't support this command for V2 catalogs.
Does this PR introduce any user-facing change?
Yes, now the renaming table works for v2 catalogs:
scala> spark.sql("SHOW TABLES IN testcat.ns1.ns2").show
+---------+---------+
|namespace|tableName|
+---------+---------+
| ns1.ns2| old|
+---------+---------+
scala> spark.sql("ALTER TABLE testcat.ns1.ns2.old RENAME TO testcat.ns1.ns2.new").show
scala> spark.sql("SHOW TABLES IN testcat.ns1.ns2").show
+---------+---------+
|namespace|tableName|
+---------+---------+
| ns1.ns2| new|
+---------+---------+
How was this patch tested?
Added unit tests.
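A hypothetical in-memory catalog illustrates the contract that a V2 catalog's rename operation is expected to honor (the method and exception names below are illustrative Python, not the exact Java interface):

```python
# Illustrative sketch: renaming moves the table metadata from the old
# identifier to the new one, failing if the source is missing or the
# destination already exists.
class InMemoryCatalog:
    def __init__(self):
        self.tables = {}

    def rename_table(self, old, new):
        if old not in self.tables:
            raise KeyError(f"NoSuchTableException: {old}")
        if new in self.tables:
            raise KeyError(f"TableAlreadyExistsException: {new}")
        self.tables[new] = self.tables.pop(old)

cat = InMemoryCatalog()
cat.tables[("ns1", "ns2", "old")] = "table-metadata"
cat.rename_table(("ns1", "ns2", "old"), ("ns1", "ns2", "new"))
print(sorted(cat.tables))  # → [('ns1', 'ns2', 'new')]
```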
cc: @cloud-fan @rdblue @viirya
Test build #113844 has started for PR 26539 at commit bd3b844.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/18715/
Test PASSed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/113844/
Test FAILed.
Test build #114033 has started for PR 26539 at commit b6be4a9.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/18894/
Test PASSed.
Test build #114033 has finished for PR 26539 at commit b6be4a9.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114033/
Test PASSed.
thanks, merging to master!
| gharchive/pull-request | 2019-11-15T04:58:06 | 2025-04-01T04:33:29.800907 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"cloud-fan",
"imback82"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/26539",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
583582788 | [SPARK-31182][CORE][ML] PairRDD support aggregateByKeyWithinPartitions
What changes were proposed in this pull request?
1. Implement aggregateByKeyWithinPartitions and reduceByKeyWithinPartitions.
2. Use aggregateByKeyWithinPartitions in RobustScaler.
Why are the changes needed?
When implementing RobustScaler, I was looking for a way to guarantee that the QuantileSummaries in aggregateByKey are compressed at the map side.
(before merging and querying, the QuantileSummaries must be compressed)
I only found a tricky workaround (never applied); there was no proper method for this.
previous discussions were here
Does this PR introduce any user-facing change?
Yes, add new methods for PairRDD
How was this patch tested?
added testsuites and existing ones
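In plain Python (no Spark), the idea behind aggregating by key within a partition can be sketched as follows — per-key state is combined inside a single partition and then finalized (e.g. compressed, in the QuantileSummaries case) before any shuffle:

```python
# Sketch of "aggregate by key within one partition": fold values per key
# using seq_op, then run finalize on each per-key accumulator map-side.
def aggregate_within_partition(partition, zero, seq_op, finalize):
    acc = {}
    for k, v in partition:
        acc[k] = seq_op(acc.get(k, zero()), v)
    return {k: finalize(a) for k, a in acc.items()}

part = [("a", 1), ("b", 2), ("a", 3)]
out = aggregate_within_partition(
    part,
    zero=lambda: 0,
    seq_op=lambda acc, v: acc + v,
    finalize=lambda acc: acc,  # compress() in the real QuantileSummaries use
)
print(out)  # → {'a': 4, 'b': 2}
```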
Test build #119982 has started for PR 27947 at commit 964b9f8.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/24704/
Test PASSed.
Test build #119982 has finished for PR 27947 at commit 964b9f8.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/119982/
Test PASSed.
friendly ping @srowen
Naive question from a newbie: you are introducing new methods for PairRDDs, so would it make sense to also expose them in the Java API (as methods of JavaPairRDD)? Or is it generally expected that the Java API is updated separately?
Well, I think that's part of the issue here - if it's public you kind of need to support it everywhere and for a long time. I don't know if it's worth it but I've lost the thread on this PR and would have to recall the motivation.
I tend to close it, since I can always workaround it. Maybe it is not necessary.
| gharchive/pull-request | 2020-03-18T09:17:23 | 2025-04-01T04:33:29.809527 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"srowen",
"wetneb",
"zhengruifeng"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/27947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
768388500 | [SPARK-33752][SQL][3.1] Avoid the getSimpleMessage of AnalysisException adds semicolon repeatedly
What changes were proposed in this pull request?
This PR related to #30724. This PR backport the version to branch-3.1
Why are the changes needed?
Fix a bug, because it adds semicolon repeatedly.
Does this PR introduce any user-facing change?
Yes. the message of AnalysisException will be correct.
How was this patch tested?
Jenkins test.
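A plausible shape of the fix (hedged — the actual change lives in PR #30724; the function below is illustrative) is to append the semicolon only when it is not already present, making repeated calls idempotent:

```python
# Illustrative sketch: only append ";" when the message doesn't already end
# with one, so calling getSimpleMessage repeatedly cannot stack semicolons.
def get_simple_message(msg):
    msg = msg.rstrip()
    return msg if msg.endswith(";") else msg + ";"

once = get_simple_message("Table not found: t1")
twice = get_simple_message(once)
print(twice)  # → Table not found: t1;
```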
Test build #132851 has started for PR 30792 at commit d0c8ca2.
Test build #132851 has finished for PR 30792 at commit d0c8ca2.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/132851/
retest this please
Test build #132855 has started for PR 30792 at commit d0c8ca2.
Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/37457/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/37453/
Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/37457/
Test build #132855 has finished for PR 30792 at commit d0c8ca2.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/37457/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/132855/
Test build #132871 has started for PR 30792 at commit 30fc30e.
Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/37473/
Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/37473/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/37473/
Test build #132871 has finished for PR 30792 at commit 30fc30e.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/132871/
cc @HyukjinKwon
Thanks.
Merged to branch-3.1.
@HyukjinKwon Thank you
| gharchive/pull-request | 2020-12-16T02:23:10 | 2025-04-01T04:33:29.825799 | {
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"SparkQA",
"beliefer"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/30792",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
795617413 | [SPARK-34272][SQL] Pretty SQL should check NonSQLExpression
What changes were proposed in this pull request?
Add a NonSQLExpression check in usePrettyExpression.
Why are the changes needed?
We should respect NonSQLExpression in usePrettyExpression, using .toString instead of .sql.
Does this PR introduce any user-facing change?
Yes.
How was this patch tested?
Add test.
Test build #134580 has started for PR 31372 at commit deaa535.
Test build #134580 has finished for PR 31372 at commit deaa535.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/134580/
retest this please
Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39166/
Test build #134585 has started for PR 31372 at commit deaa535.
Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39166/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/39166/
Kubernetes integration test starting
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39173/
Kubernetes integration test status success
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/39173/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/39173/
Test build #134585 has finished for PR 31372 at commit deaa535.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/134585/
What do you think about the NonExpressionSQL behavior ? @cloud-fan @gengliangwang
The semantic of NonSQLExpression also confuses me: what's the real difference between toString and sql?
I believe it's right that NonSQLExpression has no sql but just toString. E.g., the serializer of encode/decode is the special expression of Spark.
But the question is why ScalaUDF is NonSQLExpression, seems the usage is same with hive udf.
I mean, wouldn't it be simpler to give NonSQLExpression a .sql method for any future context in which it's used? if (despite the name) it always has a SQL representation?
give NonSQLExpression a .sql method
It seems meaningless for NonSQLExpression to do this; otherwise NonSQLExpression would be just the same as Expression. Maybe we can add a method to decide whether to use children.sql or children.toString?
Fair enough, but it exists and already purports to provide a .sql method. You're already proposing a different/better .sql representation. Why wouldn't it be the natural home of that logic? is there any other place this representation is used, such that it has to be different in pretty SQL representations?
| gharchive/pull-request | 2021-01-28T02:45:06 | 2025-04-01T04:33:29.853209 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"cloud-fan",
"srowen",
"ulysses-you"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/31372",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1022463605 | [SPARK-36972][PYTHON] Add max_by/min_by API to PySpark
What changes were proposed in this pull request?
Add max_by/min_by to PySpark
Why are the changes needed?
for pyspark users' convenience
Does this PR introduce any user-facing change?
yes, new methods are added
How was this patch tested?
unit tests
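The semantics the new functions expose can be shown with a pure-Python analog — `max_by(x, ord)` returns the value of `x` for the row with the maximum `ord` (in PySpark the corresponding call inside an aggregation would look like `F.max_by("name", "salary")`; the data below is made up for illustration):

```python
# Pure-Python analog of max_by semantics: return the value column of the row
# that maximizes the ordering column.
def max_by(rows, value_key, ord_key):
    return max(rows, key=lambda r: r[ord_key])[value_key]

def min_by(rows, value_key, ord_key):
    return min(rows, key=lambda r: r[ord_key])[value_key]

employees = [
    {"name": "Kent", "salary": 1000},
    {"name": "Anna", "salary": 2000},
]
print(max_by(employees, "name", "salary"))  # → Anna
print(min_by(employees, "name", "salary"))  # → Kent
```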
Test build #144076 has finished for PR 34240 at commit 14e6c9a.
This patch fails PySpark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Kubernetes integration test status failure
URL: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/48555/
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/144079/
Can one of the admins verify this patch?
the test failure on TPC-DS is not related.
Test build #144113 has started for PR 34240 at commit 99cac03.
Test build #144113 has finished for PR 34240 at commit 99cac03.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Test build #144131 has finished for PR 34240 at commit d2b3560.
This patch fails PySpark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Test build #144137 has started for PR 34240 at commit e32bdc2.
Test build #144137 has finished for PR 34240 at commit e32bdc2.
This patch fails PySpark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Test build #144141 has finished for PR 34240 at commit e32bdc2.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Finally I switched to show and took screenshot of the documents.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/144146/
Test build #144149 has started for PR 34240 at commit db23c69.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/48626/
| gharchive/pull-request | 2021-10-11T09:18:18 | 2025-04-01T04:33:29.869899 | {
"authors": [
"AmplabJenkins",
"HyukjinKwon",
"SparkQA",
"yoda-mon"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/34240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1128114038 | [SPARK-36553][ML] KMeans avoid compute auxiliary statistics for large K
What changes were proposed in this pull request?
SPARK-31007 introduced auxiliary statistics to speed up computation in KMeans.
However, it needs an array of size k * (k + 1) / 2, which may cause overflow or OOM when k is too large.
So we should skip this optimization in this case.
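A back-of-the-envelope check (plain Python sketch, emulating 32-bit Int arithmetic) shows both failure modes: the Int multiplication overflows before the division, and even the correct 64-bit count needs roughly 9.3 GiB of doubles for k = 50,000:

```python
# Why k * (k + 1) / 2 is dangerous for large k.
def to_int32(n):
    # emulate Scala/Java 32-bit Int wraparound
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

k = 50_000
entries = k * (k + 1) // 2            # exact 64-bit count: 1,250,025,000
prod32 = to_int32(k * (k + 1))        # what Scala's Int k * (k + 1) yields
print(prod32 // 2)                    # → -897458648, matching the report below
print(round(entries * 8 / 2**30, 1))  # → 9.3 (GiB of doubles, just this array)
```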
Why are the changes needed?
avoid overflow or OOM when k is too large (like 50,000)
Does this PR introduce any user-facing change?
No
How was this patch tested?
existing testsuites
@zhengruifeng Thanks for picking this one up!
cc @srowen
I think I made it too complex.
according to @anders-rydbirk's description in the ticket:
Possible workaround:
Roll back to Spark 3.0.0 since a KMeansModel generated with 3.0.0 cannot be loaded in 3.1.1.
Reduce K. Currently trying with 45000.
maybe we just need to change k * (k + 1) / 2 to (k.toLong * (k + 1) / 2).toInt?
scala> val k = 50000
val k: Int = 50000
scala> k * (k + 1) / 2
val res8: Int = -897458648
scala> (k.toLong * (k + 1) / 2).toInt
val res9: Int = 1250025000
scala> val k = 45000
val k: Int = 45000
scala> k * (k + 1) / 2
val res10: Int = 1012522500
scala> (k.toLong * (k + 1) / 2).toInt
val res11: Int = 1012522500
Sorry, I guess I mean make it into an array of arrays, not one big array.
@srowen yes, using arrays of sizes (1, 2, ..., k) is another choice
Array sizes can't be Long, so if it doesn't fit in an Int it won't work.
there are two limits:
1. the array size must be less than Int.MaxValue;
2. the array must fit in memory for initialization and broadcasting.
with --driver-memory=8G, I cannot create an array of 1,250,025,000 doubles. If we switch to arrays of arrays, I am afraid it's prone to OOM for large K.
@srowen
I can switch to Array[Array[Double]] if you prefer it; I am neutral on it.
My main concern is that this optional statistics array may be too large. In this case (k = 50,000), it is much larger than the cluster centers themselves (dim = 3).
Your current design is fine, I trust your judgment
I think this should also be back-ported to 3.1/3.2
@srowen It is used in both training (in the .ml side) and prediction (in the .mllib side), the switch is done by just changing the type of stats in distanceMeasureInstance.findClosest(centers, stats, point) from Array[Double] to Option[Array[Double]]
Do existing call sites bind to the new method? I can't see how a new method is called when nothing new calls it, but if you understand it and it works, nevermind
Do existing call sites bind to the new method?
NO.
existing two methods are used in DistanceMeasure and DistanceMeasureSuite;
but def findClosest(centers: Array[VectorWithNorm], point: VectorWithNorm) is also used in KMeans initialization algorithm initKMeansParallel and BisectingKMeans.
Merged to master/3.2/3.1. Thanks!
@huaxingao @srowen @dongjoon-hyun Thanks for reviewing!
| gharchive/pull-request | 2022-02-09T06:38:47 | 2025-04-01T04:33:29.880065 | {
"authors": [
"anders-rydbirk",
"huaxingao",
"srowen",
"zhengruifeng"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/35457",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1342011182 | [SPARK-39799][SQL] DataSourceV2: View catalog interface
What changes were proposed in this pull request?
ViewCatalog API described in SPIP.
Why are the changes needed?
First step towards DataSourceV2 view support.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
N/A
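As a rough illustration of what a view catalog provides (a hypothetical Python shape — the actual SPIP defines a Java interface, and the method and exception names here are only indicative), the catalog maps identifiers to view metadata and supports create/load/drop:

```python
# Hypothetical minimal view catalog: identifiers map to view definitions
# (here just the SQL text; the real interface carries richer metadata).
class ViewCatalog:
    def __init__(self):
        self.views = {}

    def create_view(self, ident, sql_text):
        if ident in self.views:
            raise KeyError(f"ViewAlreadyExistsException: {ident}")
        self.views[ident] = sql_text

    def load_view(self, ident):
        return self.views[ident]

    def drop_view(self, ident):
        return self.views.pop(ident, None) is not None

vcat = ViewCatalog()
vcat.create_view(("ns", "v1"), "SELECT 1")
print(vcat.load_view(("ns", "v1")))  # → SELECT 1
```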
cc @dongjoon-hyun who had comments on the SPIP
Thank you for pinging me, @holdenk .
cc @viirya , @sunchao , @huaxingao , @aokolnychyi , @RussellSpitzer
Find this message strange from https://github.com/jzhuge/spark/runs/7886846067?check_suite_focus=true:
[info] Found the following changed modules: catalyst, hive-thriftserver
This PR does not change module hive-thriftserver.
Interesting, the failure in CliSuite seems like a red-herring (e.g. that text is present), maybe try a re-base + re-run?
pending CI and any other concerns I plan to merge this on Friday.
Looking at error:
SparkThrowableSuite.Error classes are correctly formatted
Puzzled by these pyspark test failures. They seem unrelated.
[info] compiling 25 Scala sources to /__w/spark/spark/connector/docker-integration-tests/target/scala-2.12/test-classes ...
[error] /__w/spark/spark/sql/hive/src/test/scala/org/apache/spark/sql/HiveCharVarcharTestSuite.scala:49:13: exception during macro expansion:
[error] java.util.MissingResourceException: Can't find bundle for base name org.scalactic.ScalacticBundle, locale en_US
Current diff LGTM. We've been testing this out internally (including changes from the umbrella PR as well) and with the recent changes I think this PR looks like it should set us up for success with the rest of the feature.
Looking at error:
SparkThrowableSuite.Error classes are correctly formatted
Fixed
Hi @jzhuge @holdenk, can we merge this PR? Or is there any other concerns?
LGTM I'll merge this now to the current dev branch.
Thanks @holdenk, @wmoustafa, @xkrogen, @ljfgem for the reviews!
| gharchive/pull-request | 2022-08-17T16:25:33 | 2025-04-01T04:33:29.887218 | {
"authors": [
"dongjoon-hyun",
"holdenk",
"jzhuge",
"ljfgem",
"xkrogen"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/37556",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1343745077 | [SPARK-40141][CORE] Remove unnecessary TaskContext addTaskXxxListener overloads
What changes were proposed in this pull request?
TaskContext currently defines two sets of functions for registering listeners:
def addTaskCompletionListener(listener: TaskCompletionListener): TaskContext
def addTaskCompletionListener[U](f: (TaskContext) => U): TaskContext
def addTaskFailureListener(listener: TaskFailureListener): TaskContext
def addTaskFailureListener(f: (TaskContext, Throwable) => Unit): TaskContext
Before JDK8 and scala-2.12, the overloads were a convenient way to register a new listener without having to instantiate a new class. However, with the introduction of functional interfaces in JDK8+, and subsequent SAM support in scala-2.12, the two function signatures are now equivalent because a function whose signature matches the only method of a functional interface can be used in place of that interface.
Result: cryptic ambiguous overload errors when trying to use the function-only overload, which prompted a scala bug report (which was never addressed), as well as an attempted workaround that makes addTaskCompletionListener gratuitously generic, so that the compiler no longer considers e.g. addTaskCompletionListener[Unit] as equivalent to the overload that accepts a TaskFailureListener. The latter workaround was never applied to addTaskFailureListener for some reason.
Now that scala-2.12 on JDK8 is the minimum supported version, we can dispense with the overloads and rely entirely on language SAM support to work as expected. The vast majority of call sites can now use the function form instead of the class form, which simplifies the code considerably.
While we're at it, standardize the call sites. One-liners use the functional form:
addTaskCompletionListener(_ => ...)
while multi-liners use the block form:
addTaskCompletionListener { _ =>
...
}
Why are the changes needed?
Scala SAM feature conflicts with the existing overloads. The task listener interface becomes simpler and easier to use if we align with the language.
Does this PR introduce any user-facing change?
Developers who rely on this developer API will need to remove the gratuitous [Unit] when registering functions as listeners.
How was this patch tested?
All use sites in the spark code base have been updated to use the new mechanism. The fact that they continue to compile is the strongest evidence that the change worked. The tests that exercise the changed code sites also verify correctness.
This reminds me of the many paired methods we keep to be friendly to both Scala and Java... There are even several such methods in DataFrame. When we migrated to Scala 2.12 we noticed this issue, and our decision was to leave them as-is so as not to break anything. But I agree it would be nice to reconsider removing one of each pair where Scala can take care of it.
Another thing to check is whether this change forces end users to rebuild their app jars. If we all agree that end users should rebuild their app jars for the Spark version they run against, that is OK, but we probably don't want to enforce this in every minor version.
Can one of the admins verify this patch?
Hi, @ryan-johnson-databricks . Apache Spark uses the PR contributor's GitHub Action resources instead of Apache Spark GitHub Action resources. Please enable GitHub Action in your repository. Currently, it seems to be disabled in your repo.
I did enable it -- and it even worked briefly -- but something seems to have gone wrong. I have an open ticket w/ github support but they've not yet responded.
This is developer API so I'm fine with this cleanup. Can you push an empty commit to retrigger the Github Action tests?
For Spark's own internal development, I'm wondering whether this change will introduce source-compatibility concerns in our own patch backports: If I write a bugfix patch using the new lambda syntax then cherry-pick that patch to older branches then I'll run into compile failures. Of course, the option to be compatible exists but a developer might forget to use it (esp. since IDEs are likely to suggest a replacement to the lambda syntax).
I'm not sure that concern is very troublesome?
The task listener code is very slow-moving -- code involving a task listener tends to not change after being added, other than a half dozen refactors that move/indent it. Full analysis below.
In the last four years, 11 new listeners were added by a half dozen code authors. Each case would have required the author to figure out that [Unit] was needed. Meanwhile, there has only been ONE bugfix "backport" since [Unit] was added in 2018 -- and that one backport was for one of the 11 new-code changes, from master back to a recent branch cut, to fix a regression found during release testing. So today the new dev cost of keeping [Unit] is 10x higher than the backport cost of removing it.
it's pretty easy to add [Unit] back in if needed?
(***) By "very slow moving" I mean:
There are no prod uses of addTaskFailureListener in the spark code base today -- which probably explains why nobody realized it needed to become polymorphic before now.
Out of 45 prod files that use addTaskCompletionListener[Unit], 28 have had no changes since the Aug 2018 refactor that added [Unit] as part of the scala-2.12 effort. Of the 17 remaining files (full list below), only one change was a bug fix that would have needed a backport. That change added a new listener (2022-03-09) and merged to master less than a week before the spark-3.3 branch cut (2022-03-15); a perf regression was found during release testing and fixed by revert (2022-04-26).
List of recent changes to addTaskCompletionListener[Unit]:
d3d22928d4f in Jun 2022, ArrowConverters.scala (adding a null-check to the task context a listener gets added to, as part of a refactor)
20ffbf7b308 in Apr 2022, DataSourceRDD.scala (refactor-induced indentation change)
6b5a1f9df28 in Apr 2022, ShuffledHashJoinExec.scala (revert a Mar 2022 change to code first added in Aug 2020)
8714eefe6f9 in Aug 2021, Columnar.scala (refactor-induced indentation change)
3257a30e539 in Jun 2021, RocksDB.scala (implement new RocksDB connector)
cc9a158712d in Jun 2021, SortMergeJoinExec.scala (added a new completion listener to update a spill size metric)
f11950f08f6 in Mar 2021, ExternalSorter.scala (refactor that moved existing code to a new location and also reformatted it)
d871b54a4e9 in Jan 2021, ObjectAggregationIterator.scala (added a new completion listener for capturing metrics)
21413b7dd4e in Nov 2020, streaming/state/package.scala (new code)
713124d5e32 in Aug 2020, InMemoryRelation.scala (refactor that moved an existing completion listener)
d7b268ab326 in Dec 2019, CollectMetricsExec.scala (added a new completion listener that collects metrics)
05988b256e8 in Sep 2019, BaseArrowPythonRunner.scala (refactor that moved existing code to a new file, otherwise unchanged)
3663dbe5418 in Jul 2019, AvroPartitionReaderFactory.scala (new code, avro DSv2 support)
23ebd389b5c in Jun 2019, ParquetPartitionReaderFactory.scala (new code, parquet DSv2 support)
d50603a37c4 in Apr 2019, TextPartitionReaderFactory.scala (new code, text DSv2 support)
8126d09fb5b in Feb 2019, ArrowRunner.scala (new code)
1280bfd7564 in Jan 2019, PipedRDD.scala (bug fix that required adding a new completion listener)
Since this is a source level incompatibility (as per @JoshRosen's analysis), and given this has been around for a while as a DeveloperApi, I would mark it as deprecated - and relook at removing it in the next major release.
We can of course clean up our own use.
Since this is a source level incompatibility (as per @JoshRosen's analysis), and given this has been around for a while as a DeveloperApi, I would mark it as deprecated - and relook at removing it in the next major release. We can of course clean up our own use.
We can't "just" mark it as deprecated and also clean up our own call sites, because this is an ambiguous overload. If we mark the polymorphic function overload as deprecated, that just forces everyone to either ignore the warning, or to create an actual listener object until we get around to removing the deprecated overload.
Neither of those seems to provide much benefit?
Yep deprecation doesn't help. The caller can use casts to disambiguate it, but that's ugly. I wouldn't object strongly to remove this before 4.0 as it's a developer API, but by the same token, it's a developer API. Is it worth the binary-compatibility breakage vs just having devs use casts?
| gharchive/pull-request | 2022-08-18T23:18:54 | 2025-04-01T04:33:29.905528 | {
"authors": [
"AmplabJenkins",
"HeartSaVioR",
"cloud-fan",
"mridulm",
"ryan-johnson-databricks",
"srowen"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/37573",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1478642592 | [SPARK-41317][CONNECT][TESTS][FOLLOWUP] Import WriteOperation only when pandas is available
What changes were proposed in this pull request?
This is the last piece to recover pyspark-connect tests on a system where pandas is unavailable.
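The usual pattern for such fixes is to guard the optional import so that modules depending on pandas are only imported when it is actually present. A minimal hedged sketch of that pattern (not the actual PySpark change, and the function name is invented for illustration):

```python
def pandas_available():
    # Guard pattern: attempt the optional import and report availability,
    # so dependent modules (e.g. WriteOperation helpers) are only imported
    # when pandas is installed.
    try:
        import pandas  # noqa: F401
        return True
    except ImportError:
        return False
```

Callers can then branch on `pandas_available()` instead of importing unconditionally at module top level.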
Why are the changes needed?
BEFORE
$ python/run-tests.py --modules pyspark-connect
...
ModuleNotFoundError: No module named 'pandas'
AFTER
$ python/run-tests.py --modules pyspark-connect
Running PySpark tests. Output is in /Users/dongjoon/APACHE/spark-merge/python/unit-tests.log
Will test against the following Python executables: ['python3.9']
Will test the following Python modules: ['pyspark-connect']
python3.9 python_implementation is CPython
python3.9 version is: Python 3.9.14
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_column_expressions (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/fc8f073e-edbe-470f-a3cc-f35b3f4b5262/python3.9__pyspark.sql.tests.connect.test_connect_column_expressions___lzz5fky.log)
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_basic (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/c14dfac4-14df-45b4-b54e-6dbc3c58f128/python3.9__pyspark.sql.tests.connect.test_connect_basic__x8rz2dz1.log)
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_column (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/5810e639-75c1-4e6c-91b3-2a1894dab319/python3.9__pyspark.sql.tests.connect.test_connect_column__3e_qmue7.log)
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_function (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/8bebb3be-c146-4030-96b3-82ff7a873d49/python3.9__pyspark.sql.tests.connect.test_connect_function__hwe9aca8.log)
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_column_expressions (1s) ... 9 tests were skipped
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_plan_only (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/97cbccca-1e9a-44d9-958f-e2368266b437/python3.9__pyspark.sql.tests.connect.test_connect_plan_only__dffs5ux1.log)
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_column (1s) ... 2 tests were skipped
Starting test(python3.9): pyspark.sql.tests.connect.test_connect_select_ops (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/297a054f-c577-4aee-b1ff-48d48beb423c/python3.9__pyspark.sql.tests.connect.test_connect_select_ops__4e__w6gh.log)
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_function (1s) ... 5 tests were skipped
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_basic (1s) ... 48 tests were skipped
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_select_ops (0s) ... 2 tests were skipped
Finished test(python3.9): pyspark.sql.tests.connect.test_connect_plan_only (1s) ... 28 tests were skipped
Tests passed in 2 seconds
Skipped tests in pyspark.sql.tests.connect.test_connect_basic with python3.9:
test_channel_properties (pyspark.sql.tests.connect.test_connect_basic.ChannelBuilderTests) ... skip (0.002s)
test_invalid_connection_strings (pyspark.sql.tests.connect.test_connect_basic.ChannelBuilderTests) ... skip (0.000s)
test_metadata (pyspark.sql.tests.connect.test_connect_basic.ChannelBuilderTests) ... skip (0.000s)
test_sensible_defaults (pyspark.sql.tests.connect.test_connect_basic.ChannelBuilderTests) ... skip (0.000s)
test_valid_channel_creation (pyspark.sql.tests.connect.test_connect_basic.ChannelBuilderTests) ... skip (0.000s)
test_agg_with_avg (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_agg_with_two_agg_exprs (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_collect (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_count (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_create_global_temp_view (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_create_session_local_temp_view (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_crossjoin (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_deduplicate (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_drop (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_drop_na (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_empty_dataset (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_explain_string (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_fill_na (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_first (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_head (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_input_files (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_is_empty (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_is_local (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_is_streaming (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_join_condition_column_list_columns (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_limit_offset (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_print_schema (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_range (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_replace (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_repr (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_schema (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_select_expr (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_session (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_show (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_simple_datasource_read (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_simple_explain_string (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_simple_read (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_simple_udf (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_sort (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
test_sql (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_subquery_alias (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_tail (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_take (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_toDF (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_to_pandas (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_with_columns (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_with_columns_renamed (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.000s)
test_write_operations (pyspark.sql.tests.connect.test_connect_basic.SparkConnectTests) ... skip (0.001s)
Skipped tests in pyspark.sql.tests.connect.test_connect_column with python3.9:
test_column_operator (pyspark.sql.tests.connect.test_connect_column.SparkConnectTests) ... skip (0.002s)
test_columns (pyspark.sql.tests.connect.test_connect_column.SparkConnectTests) ... skip (0.001s)
Skipped tests in pyspark.sql.tests.connect.test_connect_column_expressions with python3.9:
test_binary_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.002s)
test_column_alias (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_column_literals (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_float_nan_inf (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_map_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.001s)
test_null_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_numeric_literal_types (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_simple_column_expressions (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
test_uuid_literal (pyspark.sql.tests.connect.test_connect_column_expressions.SparkConnectColumnExpressionSuite) ... skip (0.000s)
Skipped tests in pyspark.sql.tests.connect.test_connect_function with python3.9:
test_aggregation_functions (pyspark.sql.tests.connect.test_connect_function.SparkConnectFunctionTests) ... skip (0.006s)
test_math_functions (pyspark.sql.tests.connect.test_connect_function.SparkConnectFunctionTests) ... skip (0.025s)
test_normal_functions (pyspark.sql.tests.connect.test_connect_function.SparkConnectFunctionTests) ... skip (0.002s)
test_sort_with_nulls_order (pyspark.sql.tests.connect.test_connect_function.SparkConnectFunctionTests) ... skip (0.001s)
test_sorting_functions_with_column (pyspark.sql.tests.connect.test_connect_function.SparkConnectFunctionTests) ... skip (0.000s)
Skipped tests in pyspark.sql.tests.connect.test_connect_plan_only with python3.9:
test_all_the_plans (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.013s)
test_coalesce_and_repartition (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.002s)
test_crossjoin (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_crosstab (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_datasource_read (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_deduplicate (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_drop (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_drop_na (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_except (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_fill_na (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_filter (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_intersect (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_join_condition (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_join_using_columns (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_limit (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_offset (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_range (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_relation_alias (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_replace (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_sample (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_simple_project (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_simple_udf (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.000s)
test_sort (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_sql_project (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.002s)
test_summary (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_union (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.002s)
test_unsupported_functions (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
test_write_operation (pyspark.sql.tests.connect.test_connect_plan_only.SparkConnectTestsPlanOnly) ... skip (0.001s)
Skipped tests in pyspark.sql.tests.connect.test_connect_select_ops with python3.9:
test_join_with_join_type (pyspark.sql.tests.connect.test_connect_select_ops.SparkConnectToProtoSuite) ... skip (0.085s)
test_select_with_columns_and_strings (pyspark.sql.tests.connect.test_connect_select_ops.SparkConnectToProtoSuite) ... skip (0.000s)
Does this PR introduce any user-facing change?
No
How was this patch tested?
Pass the CIs.
cc @grundprinzip , @hvanhovell , @bjornjorgensen , @zhengruifeng , @HyukjinKwon , @amaliujia
| gharchive/pull-request | 2022-12-06T09:25:47 | 2025-04-01T04:33:29.912204 | {
"authors": [
"dongjoon-hyun"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/38934",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2013752686 | [SPARK-46133] Refine ShuffleWriteProcessor write method's comment
What changes were proposed in this pull request?
Modify comment for ShuffleWriteProcessor.write
Why are the changes needed?
After SPARK-44605, we move rdd.iterator(partition, context) from ShuffleWriteProcessor#write to ShuffleMapTask#runTask.
We should modify the comments synchronously.
Does this PR introduce any user-facing change?
No; this only makes the comment correct for developers
How was this patch tested?
No need
Was this patch authored or co-authored using generative AI tooling?
No
Due to a failed package publish in my fork repo, I rebuilt the fork repo.
| gharchive/pull-request | 2023-11-28T06:32:21 | 2025-04-01T04:33:29.916666 | {
"authors": [
"zwangsheng"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/44047",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2224687905 | [SPARK-47727][PYTHON] Make SparkConf to root level to for both SparkSession and SparkContext
What changes were proposed in this pull request?
This PR proposes to make SparkConf root level for both SparkSession and SparkContext.
Why are the changes needed?
SparkConf is special. SparkSession.builder.options can take it as an option, and this instance can be created without JVM access. So it can be shared with the pure Python pyspark-connect package.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
CI in this PR should verify them.
Was this patch authored or co-authored using generative AI tooling?
No.
Let me fix it up. It shouldn't be a breaking change.
| gharchive/pull-request | 2024-04-04T07:19:04 | 2025-04-01T04:33:29.919463 | {
"authors": [
"HyukjinKwon"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/45873",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2289379314 | [SPARK-48228][PYTHON][CONNECT][FOLLOWUP] Also apply _validate_pandas_udf in MapInXXX
What changes were proposed in this pull request?
Also apply _validate_pandas_udf in MapInXXX
Why are the changes needed?
to make sure validation in pandas_udf is also applied in MapInXXX
Does this PR introduce any user-facing change?
no
How was this patch tested?
ci
Was this patch authored or co-authored using generative AI tooling?
no
thanks @HyukjinKwon
merged to master
| gharchive/pull-request | 2024-05-10T09:39:29 | 2025-04-01T04:33:29.921895 | {
"authors": [
"zhengruifeng"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/46524",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2326600692 | [SPARK-47578][R] Migrate RPackageUtils with variables to structured logging framework
What changes were proposed in this pull request?
Migrate logging with variables in the Spark RPackageUtils module to the structured logging framework. This transforms the log* API signatures from:
def logWarning(msg: => String): Unit
to
def logWarning(entry: LogEntry): Unit
Why are the changes needed?
To enhance Apache Spark's logging system by implementing structured logging.
Does this PR introduce any user-facing change?
Yes, Spark core logs will contain additional MDC
How was this patch tested?
Compiler and scala style checks, as well as code review.
Was this patch authored or co-authored using generative AI tooling?
Brief but appropriate use of GitHub copilot
cc @gengliangwang
Thanks, merging to master
| gharchive/pull-request | 2024-05-30T21:59:24 | 2025-04-01T04:33:29.925023 | {
"authors": [
"dtenedor",
"gengliangwang"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/46815",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2363164948 | [SPARK-48665][PYTHON][CONNECT] Support providing a dict in pyspark lit to create a map.
What changes were proposed in this pull request?
Added the option to provide a dict to pyspark.sql.functions.lit in order to create a map
Why are the changes needed?
To make it easier to create a map in pyspark.
Currently, it is only possible via create_map, which requires a sequence of key,value,key,value...
Scala already supports such functionality using typedLit
A similar PR was done in the past to add similar functionality for creating an array from a list, so I tried to follow all the changes done there as well.
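For illustration, the alternating key,value,key,value sequence that create_map expects can be produced from a dict with a small pure-Python helper (the helper name is hypothetical, not part of PySpark):

```python
def flatten_for_create_map(d):
    # Hypothetical helper: turn {"a": 1, "b": 2} into ["a", 1, "b", 2],
    # the alternating sequence that F.create_map(*args) expects.
    out = []
    for k, v in d.items():
        out.extend([k, v])
    return out
```

Accepting a dict directly in lit removes the need for callers to do this flattening themselves.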
Does this PR introduce any user-facing change?
Yes, docstring of lit was edited, and new functionality was added
Before:
from pyspark.sql import functions as F
F.lit({"a":1})
# pyspark.errors.exceptions.captured.SparkRuntimeException: [UNSUPPORTED_FEATURE.LITERAL_TYPE] The feature is not supported: Literal for '{asd=2}' of class java.util.HashMap.
After:
from pyspark.sql import functions as F
F.lit({"a":1, "b": 2})
# Column<'map(a, 1, b, 2)'>
How was this patch tested?
Manual tests + unittest in CI
Was this patch authored or co-authored using generative AI tooling?
No
CC @itholic @HyukjinKwon @zhengruifeng
tagging you since you authored/reviewed the similar PR for list support :)
| gharchive/pull-request | 2024-06-19T21:41:13 | 2025-04-01T04:33:29.929301 | {
"authors": [
"Ronserruya"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/47031",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2660143073 | [SPARK-50302] Ensure secondary index sizes equal primary index sizes for TransformWithState stateful variables with TTL
What changes were proposed in this pull request?
This PR ensures that the secondary indexes that state variables with TTL use are at most the size of the corresponding state variable's primary index. This change will eliminate unnecessary work done during the cleanup of stateful variables with TTL.
Why are the changes needed?
Context
The TransformWithState operator (from here on out known as "TWS") will allow users to write procedural logic over streams of records. To store state between micro-batches, Spark will provide users stateful variables, which persist between micro-batches. For example, you might want to emit an average of the past 5 records, every 5 records. You might only receive 2 records in the first micro-batch, so you have to buffer these 2 records until you get 3 more in a subsequent batch. TWS supports 3 different types of stateful variables: single values, lists, and maps.
The TWS operator also supports stateful variables with Time To Live; this allows you to say, "keep a certain record in state for d units of time". This TTL is per-record. This means that every record in a list (or map) can expire at a different point in time, depending on when the element in the list is inserted. A record inserted into a stateful list (or map) at time t1 will expire at t1 + d, and a second record inserted at time t2 will expire at t2 + d. (For value state, there's only one value, so "everything" expires at the same time.)
A very natural question to now ask is, how do we efficiently determine which elements have expired in the list, without having to do a full scan of every record in state? The idea here is to keep a secondary index from expiration timestamp, to the specific record that needs to be evicted. Not so hard, right?
The state cleanup strategy today
Today's cleanup strategy is about as simple as I indicated earlier: for every insert to a value/map/list, you:
Write to the primary index
Using the current timestamp, you write into the secondary index
The issue with this approach is that we do two unconditional writes. This means that if the same state variable is written to with different timestamps, one element will exist in the primary index while two elements exist in the secondary index. Consider the following example for a state variable foo with value v1 and a TTL delay of 500:
For batch 0, batchTimestampMs = 100, foo updates to v1:
Primary index: [foo -> (v1, 600)]
Secondary index: [(600, foo) -> EMPTY]
Note that the state variable is included in the secondary index key because we might have several elements with the same expiration timestamp; we want (600, foo) to not overwrite a (600, bar), just because they both expire at 600.
Batch 1: batchTimestampMs = 200, foo updates to v2.
Primary index: [foo -> (v2, 700)]
Secondary index: [(600, foo) -> EMPTY, (700, foo) -> EMPTY]
Now, we have two entries in our secondary index. If the current timestamp advanced to something like 800, we'd take the following steps:
We'd take the first element from the secondary index (600, foo), and lookup foo in the primary index. That would yield (v2, 700). The value of 700 in the primary index is still less than 800, so we would remove foo from the primary index.
Then, we would look at (700, foo). We'd look up foo in the primary index and see nothing, so we'd do nothing.
You'll notice here that step 2 is entirely redundant. We read (700, foo) and did a get to the primary index, for something that was doomed—it would have never returned anything.
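This redundant lookup can be reproduced in a toy pure-Python simulation (all names and structures here are invented for illustration; this is not Spark's actual RocksDB-backed code):

```python
# Toy model of today's value-state cleanup: foo was written twice
# (expiry 600, then 700), leaving a stale entry in the secondary index.
primary = {"foo": ("v2", 700)}            # key -> (value, expiry)
secondary = [(600, "foo"), (700, "foo")]  # (expiry, key), one entry per write

def cleanup(now):
    lookups = 0
    for ts, key in sorted(secondary):
        if ts > now:
            break                          # entries are sorted; nothing left to expire
        lookups += 1
        rec = primary.get(key)
        if rec is not None and rec[1] <= now:
            del primary[key]               # expired: remove from the primary index
    return lookups
```

Running `cleanup(800)` performs two primary-index lookups even though only one key ever existed; the second lookup is exactly the doomed, redundant read described above.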
While this isn't great, the story is unfortunately significantly worse for lists. The way that we store lists is by having a single key in RocksDB, whose value is the concatenated bytes of all the values in that list. When we do cleanup for a list, we go through all of its records and write back the ones that have not expired. Thus, it's possible for us to have a list that looks something like:
Primary index: [foo -> [(v1, 600), (v2, 700), (v3, 900)]]
Secondary index: [(600, foo) -> EMPTY, (700, foo) -> EMPTY, (900, foo) -> EMPTY]
Now, suppose that the current timestamp is 800. We need to expire the records in the list. So, we do the following:
We take the first element from the secondary index, (600, foo). This tells us that the list foo needs cleaning up. We clean up everything in foo less than 800. Since we store lists as a single key, we issue a RocksDB clear operation, iterate through all of the existing values, eliminate (v1, 600) and (v2, 700), and write back (v3, 900).
But we still have things left in our secondary index! We now get (700, foo), and we unknowingly do cleanup on foo again. This consists of clearing foo, iterating through its elements, and writing back (v3, 900). But since cleanup already happened, this step is entirely redundant.
We encounter (900, foo) from the secondary index, and since 900 > 800, we can bail out of cleanup.
Step 2 here is extremely wasteful. If we have n elements in our secondary index for the same key, then, in the worst case, we will do the extra cleanup n-1 times; and each time is a linear time operation! Thus, for a list that has n elements, d of which need to be cleaned up, the worst-case time complexity is in O(d*(n-d)), instead of O(n). And it's completely unnecessary work.
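The repeated full-list rewrite is easy to see in a toy simulation (pure Python, illustrative only — not the actual RocksDB-backed implementation):

```python
# Toy model of today's list cleanup: three secondary entries point at the
# same list, so the surviving elements get rewritten once per stale entry.
secondary = [600, 700, 900]                    # expiry entries for list "foo"
foo = [("v1", 600), ("v2", 700), ("v3", 900)]  # the stored list
now, rewrites = 800, 0

for ts in sorted(secondary):
    if ts > now:
        break
    # "clear and write back survivors" — a full linear pass each time
    foo = [(v, e) for v, e in foo if e > now]
    rewrites += 1
```

With `now = 800`, the list is rewritten twice even though one pass suffices; with n secondary entries for one list, the simulation does up to n rewrites instead of one.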
How does this PR fix the issue?
It's pretty simple to fix this for value state and map state. This is because every key in value or map state maps to exactly one element in the secondary index. We can maintain a one-to-one correspondence. Any time we modify value/map state, we make sure that we delete the previous entry in the secondary index. This logic is implemented by OneToOneTTLState.
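A minimal sketch of this one-to-one invariant, using an invented dict-based model (class and method names are not Spark's actual OneToOneTTLState API):

```python
class OneToOneTTLIndex:
    """Sketch: keep the secondary index exactly one-to-one with the primary."""

    def __init__(self):
        self.primary = {}       # key -> (value, expiry)
        self.secondary = set()  # (expiry, key) pairs

    def put(self, key, value, expiry):
        old = self.primary.get(key)
        if old is not None:
            # Remove the stale secondary entry before writing the new one,
            # so each key has at most one secondary entry at any time.
            self.secondary.discard((old[1], key))
        self.primary[key] = (value, expiry)
        self.secondary.add((expiry, key))
```

After two writes to the same key, the secondary index holds only the latest (expiry, key) pair, so cleanup never chases a doomed entry.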
The trickier aspect is handling this for ListState, where the secondary index goes from grouping key to the list that needs to be cleaned up. There's a one-to-many mapping here; one grouping key maps to multiple records, all of which could expire at a different time. The trick to making sure that secondary indexes don't explode is to have the secondary index store only the minimum expiration timestamp in a list. The rough intuition is that you don't need to store anything larger than that: when you clean up due to the minimum expiration timestamp, you'll go through the list anyway, so you can find the next minimum timestamp and put that into your secondary index. This logic is implemented by OneToManyTTLState.
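A minimal sketch of the minimum-timestamp trick, again as an invented dict-based model rather than Spark's actual OneToManyTTLState:

```python
class OneToManyTTLIndex:
    """Sketch: the secondary index stores only the minimum expiry per list."""

    def __init__(self):
        self.lists = {}       # key -> list of (value, expiry)
        self.min_expiry = {}  # key -> smallest expiry currently in that list

    def append(self, key, value, expiry):
        self.lists.setdefault(key, []).append((value, expiry))
        cur = self.min_expiry.get(key)
        if cur is None or expiry < cur:
            self.min_expiry[key] = expiry

    def cleanup(self, now):
        for key, m in list(self.min_expiry.items()):
            if m > now:
                continue  # nothing in this list has expired yet
            survivors = [(v, e) for v, e in self.lists[key] if e > now]
            if survivors:
                self.lists[key] = survivors
                # Re-derive the next minimum while we're already scanning,
                # keeping the secondary index at one entry per list.
                self.min_expiry[key] = min(e for _, e in survivors)
            else:
                del self.lists[key]
                del self.min_expiry[key]
```

Each list contributes exactly one secondary entry, so cleanup rewrites a list at most once per pass regardless of how many elements it holds.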
How should reviewers review this PR?
Start by reading this long description. If you have questions, please ping me in the comments. I would be more than happy to explain.
Then, understand the class doc comments for OneToOneTTLState and OneToManyTTLState in TTLState.scala.
Then, I'd recommend going through the unit tests, and making sure that the behavior makes sense to you. If it doesn't, please leave a question.
Finally, you can look at the actual stateful variable implementations.
Does this PR introduce any user-facing change?
No, but it is a format difference in the way TWS represents its internal state. However, since TWS is currently private[sql] and not publicly available, this is not an issue.
How was this patch tested?
Existing UTs have been modified to conform with this new behavior.
New UTs added to verify that the new indices we added
Was this patch authored or co-authored using generative AI tooling?
Generated-by: GitHub Copilot
@neilramaswamy - thx for writing up a detailed PR description !
Before reading through the code, I read the PR description and the direction looks great to me. I think the behavior should have been like the proposed one, and somehow we missed this. Thanks for the fix!
Thanks for the detailed review, @HeartSaVioR. I have reacted with 👍 for comments that I have addressed, and I will let you resolve the conversation if you think it's sufficient. To summarize where we are:
I'm not sure whether we want some of the internal-only TTLState members/methods to be protected or private[sql]. I think they should be private[sql] so that we can use in testing, but not available to users.
I think that TTLState might be confusing because it's not acting fully as a mix-in, but rather a bunch of TTL utilities. Maybe I can make it an abstract class, and rename it to TTLUtils?
In my most recent revisions, I have also tried to make the comments less verbose and not provide content better suited for a design doc.
Thanks! Merging to master.
| gharchive/pull-request | 2024-11-14T21:39:37 | 2025-04-01T04:33:29.949044 | {
"authors": [
"HeartSaVioR",
"anishshri-db",
"neilramaswamy"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/48853",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2757666518 | [SPARK-50659][SQL] Move Union-related errors to QueryCompilationErrors
What changes were proposed in this pull request?
Move Union-related NUM_COLUMNS_MISMATCH and INCOMPATIBLE_COLUMN_TYPE errors to QueryCompilationErrors.
Why are the changes needed?
To improve the code health and to reuse those in the single-pass Analyzer.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Existing tests.
Was this patch authored or co-authored using generative AI tooling?
No.
LGTM
thanks, merging to master!
| gharchive/pull-request | 2024-12-24T11:49:55 | 2025-04-01T04:33:29.952306 | {
"authors": [
"cloud-fan",
"gotocoding-DB",
"vladimirg-db"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/49284",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
97497268 | [SPARK-9358][SQL]UnsafeRowConcat to concatenate UnsafeRow together
JIRA: https://issues.apache.org/jira/browse/SPARK-9358
cc @rxin
Merged build triggered.
Merged build started.
Test build #38556 has started for PR 7698 at commit 19b3825.
Merged build triggered.
Merged build started.
Test build #38556 has finished for PR 7698 at commit 19b3825.
This patch fails Scala style tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
Test build #38557 has started for PR 7698 at commit ec67e3e.
Test build #38557 has finished for PR 7698 at commit ec67e3e.
This patch fails Scala style tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
public class UnsafeRowConcat
Merged build finished. Test FAILed.
Merged build triggered.
Merged build started.
Merged build finished. Test FAILed.
Jenkins, retest this please.
Merged build triggered.
Merged build started.
Merged build triggered.
Test build #117 has started for PR 7698 at commit 6384f35.
Merged build started.
Merged build finished. Test FAILed.
Test build #117 has finished for PR 7698 at commit 6384f35.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test FAILed.
These unrelated failures occur occasionally and are hard to reproduce locally.
org.apache.spark.sql.hive.execution.HiveCompatibilitySuite.auto_sortmerge_join_16
org.apache.spark.sql.hive.execution.HiveCompatibilitySuite.correlationoptimizer11
org.apache.spark.sql.hive.execution.SQLQuerySuite.CTAS with serde
org.apache.spark.sql.hive.execution.SortMergeCompatibilitySuite.auto_sortmerge_join_16
org.apache.spark.sql.hive.execution.SortMergeCompatibilitySuite.correlationoptimizer11
Jenkins, retest this please.
Merged build triggered.
Merged build started.
Test build #123 has started for PR 7698 at commit 6384f35.
Merged build triggered.
Merged build started.
Merged build finished. Test FAILed.
Those tests have been flaky.
One thing is that you should just pre-build the positions of the variable length data and use a loop to go through it, rather than loop through everything all the time.
Test build #123 has finished for PR 7698 at commit 6384f35.
This patch fails Spark unit tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
public class UnsafeRowConcat
Merged build finished. Test FAILed.
Merged build triggered.
Merged build started.
Test build #38644 has started for PR 7698 at commit cbef82c.
Test build #38644 has finished for PR 7698 at commit cbef82c.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Merged build triggered.
Merged build started.
Test build #39138 has started for PR 7698 at commit a2ce40a.
Close since it's taken over by @rxin.
Test build #39138 has finished for PR 7698 at commit a2ce40a.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
abstract class UnsafeRowConcat
class InterpretedUnsafeRowConcat(
Merged build finished. Test PASSed.
| gharchive/pull-request | 2015-07-27T16:03:41 | 2025-04-01T04:33:29.974435 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"rxin",
"yjshen"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/7698",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
98516967 | [SPARK-9520][SQL] Support in-place sort in UnsafeFixedWidthAggregationMap
This pull request adds a sortedIterator method to UnsafeFixedWidthAggregationMap that sorts its data in-place by the grouping key.
This is needed so we can fallback to external sorting for aggregation.
Merged build triggered.
Merged build started.
cc @JoshRosen / @yhuai
Not the prettiest code -- but works for 1.5 ...
I need to hook this up directly with the external sorter next.
Test build #39337 has started for PR 7849 at commit 4da683a.
Merged build triggered.
Merged build started.
Test build #39337 has finished for PR 7849 at commit 4da683a.
This patch fails to build.
This patch merges cleanly.
This patch adds the following public classes (experimental):
public static final class BytesToBytesMapIterator implements Iterator<Location>
public abstract class UnsafeKeyValueSorter
abstract class InternalRow extends GenericSpecializedGetters with Serializable
trait GenericSpecializedGetters extends SpecializedGetters
class SpecificUnsafeProjection extends $
abstract class UnsafeRowJoiner
|class SpecificUnsafeRowJoiner extends $
case class SortArray(base: Expression, ascendingOrder: Expression)
case class GetArrayItem(child: Expression, ordinal: Expression)
case class GetMapValue(child: Expression, key: Expression)
case class SubstringIndex(strExpr: Expression, delimExpr: Expression, countExpr: Expression)
class ArrayBasedMapData(val keyArray: ArrayData, val valueArray: ArrayData) extends MapData
class GenericArrayData(array: Array[Any]) extends ArrayData with GenericSpecializedGetters
abstract class MapData extends Serializable
public abstract class KVIterator<K, V>
Merged build finished. Test FAILed.
Merged build triggered.
Merged build started.
Test build #39342 has started for PR 7849 at commit 75018c6.
Merged build finished. Test FAILed.
Test build #39342 has finished for PR 7849 at commit 75018c6.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
From a technical perspective this set of changes looks fine to me; my only comments were concerning code comments and some API docs. @rxin is going to address these comments as part of a larger followup PR today, so I'm going to merge this now in order to unblock work on patches which build on this one.
| gharchive/pull-request | 2015-08-01T07:32:24 | 2025-04-01T04:33:29.986299 | {
"authors": [
"AmplabJenkins",
"JoshRosen",
"SparkQA",
"rxin"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/7849",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
117584536 | [SPARK-11815] [ML] [PySpark] PySpark DecisionTreeClassifier & DecisionTreeRegressor should support setSeed
PySpark DecisionTreeClassifier & DecisionTreeRegressor should support setSeed, as we do on the Scala side.
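The setter being added follows PySpark's fluent Params pattern, where each setter returns `self` so calls can be chained. A minimal sketch of that pattern, using hypothetical class names rather than the real `pyspark.ml` classes:

```python
# Illustrative sketch of the fluent setter pattern used by PySpark's ML
# Params mixins. Class names here are made up; the real pyspark.ml
# estimators generate these setters from shared Params definitions.
class HasSeed:
    def __init__(self):
        self._seed = None

    def setSeed(self, value):
        """Set the random seed and return self for call chaining."""
        self._seed = int(value)
        return self

    def getSeed(self):
        return self._seed


class DecisionTreeClassifierSketch(HasSeed):
    """Stand-in for a tree estimator gaining setSeed support."""
    pass
```

Usage would look like `DecisionTreeClassifierSketch().setSeed(42)`, mirroring the chained-setter style of the Scala API.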
Test build #46215 has started for PR 9807 at commit 9dd8870.
Test build #46215 has finished for PR 9807 at commit 9dd8870.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
public class JavaGradientBoostedTreeClassifierExample
public class JavaGradientBoostedTreeRegressorExample
public class JavaRandomForestClassifierExample
public class JavaRandomForestRegressorExample
case class SerializeWithKryo(child: Expression) extends UnaryExpression
case class DeserializeWithKryo[T](child: Expression, tag: ClassTag[T]) extends UnaryExpression
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/46215/
Test PASSed.
cc @jkbradley
Test build #47511 has started for PR 9807 at commit 9dd8870.
Test build #47511 has finished for PR 9807 at commit 9dd8870.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/47511/
Test PASSed.
This looks good to me. cc @jkbradley
Looks fine except for those small comments.
Test build #48819 has started for PR 9807 at commit e6ef361.
Test build #48819 has finished for PR 9807 at commit e6ef361.
This patch passes all tests.
This patch merges cleanly.
This patch adds no public classes.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/48819/
Test PASSed.
Thanks for the updates!
LGTM
Merging with master
| gharchive/pull-request | 2015-11-18T13:16:04 | 2025-04-01T04:33:29.999499 | {
"authors": [
"AmplabJenkins",
"SparkQA",
"jkbradley",
"thunterdb",
"yanboliang"
],
"repo": "apache/spark",
"url": "https://github.com/apache/spark/pull/9807",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1888646403 | Implement unit tests for pipeline element 'TextFilterProcessor'
This issue is about implementing unit tests for our pipeline element TextFilterProcessor.
An example of how to write unit tests for pipeline elements can be found here.
Mentoring
As this issue is marked as a good first issue and for the hacktoberfest event: one of @dominikriemer, @tenthe, @RobertIndie, or @bossenti are happy to provide help for getting started, just tag (one of) them if you want to start working on this issue and need some help.
@bossenti, can I work on this issue??
Sure, feel free to go ahead 🙂
In case you need any assistance we are happy to help!
@bossenti, will you please give me an overview of what needs to be done?
What are you missing or is unclear to you @Aditi840?
In my opinion, the problem description and that of the parent problem (https://github.com/apache/streampipes/issues/1884) are pretty self-explanatory.
OK @bossenti, will start working on it.
@bossenti may I take up the task since it has been more than a month
@rahulbiswas876 you can take on this issue
There is no one assigned currently
@bossenti please assign it to me.
| gharchive/issue | 2023-09-09T08:36:56 | 2025-04-01T04:33:30.004794 | {
"authors": [
"Aditi840",
"bossenti",
"rahulbiswas876"
],
"repo": "apache/streampipes",
"url": "https://github.com/apache/streampipes/issues/1902",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
529382072 | SUBMARINE-68. Add tests to FileSystemOperations class
What is this PR for?
Adding tests to FileSystemOperations class and also performed some minor refactor in the test infrastructure.
What type of PR is it?
Refactoring
Todos
[ ] - new UTs should pass
What is the Jira issue?
SUBMARINE-68
How should this be tested?
Only the new UTs should pass.
Screenshots (if appropriate)
Questions:
Does the licenses files need update? No
Is there breaking changes for older versions? No
Does this needs documentation? No
@szilard-nemeth, the patch is rebased. Could you please take a look into this?
Rebased to current master, will follow up on the UT errors.
Fixing UT error, pushed again.
@adamantal,
LGTM
Thanks for the contributions.
Thanks!
| gharchive/pull-request | 2019-11-27T14:29:55 | 2025-04-01T04:33:30.009262 | {
"authors": [
"adamantal",
"yuanzac"
],
"repo": "apache/submarine",
"url": "https://github.com/apache/submarine/pull/111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
844153239 | Can't sync column from source on complex queries
I can't use "sync column from source" on virtual datasets whose query doesn't start with SELECT.
Expected results
Whatever my query is, I should be able to sync column from source for my virtual datasets.
Actual results
When my query starts with SELECT, there's no problem.
When my query starts with a "WITH" statement:
I can create a virtual dataset using the "EXPLORE" feature in SQL Lab
I can't update it and sync columns from source: I get the "ONLY SELECT STATEMENTS ARE ALLOWED" error
I'm forced to re-run the query in SQL Lab, and save it over the previous virtual dataset
What's more, contrary to the previous version of Superset (0.x.x), the time columns are not detected, but this seems to be a different issue.
Screenshots
How to reproduce the bug
Run a query in SQL Lab starting with a "WITH" statement
Click on Explore and create a virtual dataset
Go to dataset, update the query (you can add/remove/modify a column, anything works, even a mere filter)
Save
Click on 'Sync Columns from source' (it should get your new column list)
See error
Environment
(please complete the following information):
superset version: 1.0.1
python version: 3.7.9
node.js version: doesn't apply, I run on Kubernetes, using gunicorn as server
source : Athena
Checklist
Make sure to follow these steps before submitting your issue - thank you!
[ ] I have checked the superset logs for python stacktraces and included it here as text if there are any.
[X] I have reproduced the issue with at least the latest released version of superset.
[X] I have checked the issue tracker for the same issue and I haven't found one similar.
Additional context
Nothing to report
Taking a look.
I tested with a simple query and it worked:
WITH RAW AS (SELECT name, color FROM bart_lines)
SELECT * FROM RAW
We've had problems in the past with our SQL parser not being able to identify CTEs as SELECT statements correctly, and I fixed a few of them lately:
https://github.com/apache/superset/pull/16769
https://github.com/apache/superset/pull/17654
https://github.com/apache/superset/pull/17329
I suspect the original SQL that triggered this problem fell into this category.
@ValentinC-BR I'm going to close the ticket, but feel free to reopen if you can still repro it — if I have the SQL of the virtual dataset I can fix it.
My bad, it didn't come from queries that don't start with SELECT, but from queries with comments.
It seems that comments (-- MY COMMENT HERE) are not well supported.
The title of this issue could be changed, as it is not related to complex queries.
No worries, sorry for the delay, this ticket fell under my radar.
If the problem was comments then https://github.com/apache/superset/pull/16769 should fix it. :)
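The comment-handling pitfall described in this thread can be sketched with a toy check. This is illustrative code only, not Superset's actual parser (which relies on the sqlparse library); the function names are made up:

```python
import re


def naive_is_select(sql: str) -> bool:
    # A naive check like this breaks on queries that start with a
    # comment or a CTE, which is the failure mode reported above.
    return sql.lstrip().upper().startswith("SELECT")


def is_select(sql: str) -> bool:
    # Strip -- line comments and /* */ block comments first, and
    # treat a leading WITH (CTE) as a SELECT statement.
    no_comments = re.sub(r"--[^\n]*", "", sql)
    no_comments = re.sub(r"/\*.*?\*/", "", no_comments, flags=re.S)
    head = no_comments.lstrip().upper()
    return head.startswith("SELECT") or head.startswith("WITH")
```

The naive version rejects `-- my comment\nSELECT 1`, while the comment-aware version accepts it along with CTE queries like the `WITH RAW AS (...)` example earlier in the thread.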
| gharchive/issue | 2021-03-30T07:07:20 | 2025-04-01T04:33:30.019168 | {
"authors": [
"ValentinC-BR",
"betodealmeida"
],
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/13865",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
897900382 | As a dashboard consumer, I want to be able to modify the granularity of a time-series chart
Issue
Chart builders can specify the reporting time range of a time-series chart, and dashboard consumers can also modify that reporting time range themselves using either the dashboard (time) filter or the line chart time-range filter "footer".
Only Chart builders can specify the granularity of aggregation over time.
We believe it would be beneficial for Dashboard consumers to be able to change the aggregation granularity of the time-series chart themselves.
Features
Just like for the time range filter (cf. Line chart) that a dashboard consumer can “play with”, we would like to have a time granularity dropdown selector to modify how the metrics are aggregated against time.
Similarly, the “granularity change” event could be propagated to other charts in the dashboard (if “propagate” enabled in chart options),
Chart builder should have the possibility to {en, dis}able this feature (like for showing the range filter),
Chart builder should have the possibility to specify which time granularity to enable (i.e. they might want to not offer for “per-second” or “per-year” aggregation),
Possible time granularity should cover the different orders of magnitude (ms, s, min, h, day, week, month, quarter, year), but also — possibly user-defined — multiples thereof (1 min, 5 min, 15 min, 20 min, 30 min, etc.). This could either be pre-defined by the Chart builder and showed to Dashboard consumer as a fixed list of options they can chose from, or given as “you can have it all” selector where the Dashboard consumer can specify the time unit (ms, s, min, etc.) and the count thereof (1, 2, 5, etc.) they want to aggregate time with.
Alternatives
One already-feasible possibility is to generate multiple charts with all the wished for granularities, and add tabs in the dashboard (one tab for each time granularity). This is however quite inefficient, as changing one graph definition actually leads to modify n charts.
Another (not implemented yet) approach would be to offer time granularity as a dashboard-generic user input (and not chart specify) — cf. #14622.
An alternative currently exists (Superset 1.1.0), when using a FilterBox “chart” in the dashboard, and selecting its “Show {SQL, Druid} granularity dropdown” when setting it up.
Time grain isn't, however, part of the native filter (in Superset 1.1.0) — yet it's on the roadmap (cf. #13591 ).
Using filters is slightly different than the feature requested here (i.e., it's dashboard-specific rather than chart-specific); but (imho) it's likely to be satisfactory for most use cases.
@EBoisseauSierra you are correct - this is available on native filters, and will be available by default on 1.3.0 (will be cut soon after 1.2.0 is released).
| gharchive/issue | 2021-05-21T10:12:43 | 2025-04-01T04:33:30.025892 | {
"authors": [
"EBoisseauSierra",
"villebro"
],
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/14747",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1218793434 | UI Issue - Access Denied error pops up on initial load even for admin user.
Hi Team,
Whenever a user logs in to Superset, an "Access is Denied" error comes up for a brief period of time before everything loads. Below is the screenshot:
@nijkap - I had a similar issue going on. The root cause for me was related to redirects and how I auth into superset (keycloak). How are you doing auth?
We are using SSO in Superset and redirect to the dashboard URL directly by passing standalone=true.
I was linking directly to a dashboard from a separate service. So, it was not hitting the /login endpoint first.
To get around that, I am linking to the dashboard with a URL like this:
https://superset/.${TLD}/login/?next=https%3A%2F%2Fsuperset.${TLD}%2Fsuperset%2Fdashboard%2F${DASHBOARD_NAME}%2F
so it is hitting the /login endpoint first and then being redirected to the dashboard I wanted to link to. I handle the redirect in the /login endpoint code.
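The redirect URL above can be assembled programmatically. A sketch using a hypothetical host and dashboard slug, percent-encoding the whole target for the `next` parameter:

```python
from urllib.parse import quote


def login_redirect_url(base: str, dashboard_slug: str) -> str:
    """Build a /login/?next=... URL so the dashboard link goes
    through the login endpoint before landing on the dashboard."""
    target = f"{base}/superset/dashboard/{dashboard_slug}/"
    # safe='' encodes everything, including :// and /, matching the
    # %3A%2F%2F style in the example above.
    return f"{base}/login/?next={quote(target, safe='')}"
```

For example, `login_redirect_url("https://superset.example.com", "sales")` yields `https://superset.example.com/login/?next=https%3A%2F%2Fsuperset.example.com%2Fsuperset%2Fdashboard%2Fsales%2F` (host and slug are placeholders).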
Closing this out because (a) there's a solution posted, (b) it's stale, and (c) I haven't seen any other mentions of it happening lately. Happy to re-open if anyone is still facing this.
I remember that there was a PR merged in relation to this as well a little while ago ... It might be useful to find and link that PR to this issue in case anybody comes across this issue in the future
We are still facing this issue on 2.0.0. The above workaround is not feasible, as we would have to change it in a lot of places and some of the query string is added dynamically. Please let me know if this is fixed in 2.1.0.
Please reopen if it's not fixed in 2.1.0.
I'm not 100% sure. I'll re-open just to play it safe. Perhaps I was a bit too optimistic due to the thread being silent for a year.
The issue still persists after upgrading to the latest 2.1.0.
Closing this as stale since it's been silent for so long, and we're trying to steer toward a more actionable Issues backlog. If people are still encountering this in current versions (currently 3.x) please re-open this issue, open a new Issue with error reproduction instructions, or raise a PR to address the problem. Thanks!
| gharchive/issue | 2022-04-28T14:12:14 | 2025-04-01T04:33:30.031997 | {
"authors": [
"cwegener",
"nijkap",
"pfinnerty",
"rusackas"
],
"repo": "apache/superset",
"url": "https://github.com/apache/superset/issues/19884",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2091723695 | fix(dashboard): drag and drop indicator UX
SUMMARY
Following up #26313
This commit addresses and resolves the issue of an unclear drop zone indicator that was negatively impacting the user experience in the dashboard editor.
When a component was being dragged towards the edge of the tab container or the row/column containers, multiple drop indicators were often displayed. This created confusion about the exact insertion point of the element. [fig. 1]
fig. 1
The root of the problem was that these dashboard components were wrapped by both draggable and droppable interfaces, which led to overlapping and conflicting drop zones. This commit modifies this by making the dashboard components draggable only, and builds a distinct, non-conflicting area for the drop zone. Moreover, it also highlights the drop zone during the dragging process to clearly indicate where the element will be placed. [fig. 2]
fig. 2
BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
Before:
https://github.com/apache/superset/assets/1392866/315309e9-a8ad-45c4-a292-190a318c8d67
After:
https://github.com/apache/superset/assets/1392866/2342eee1-418d-4c35-9a21-f5772136c0ec
TESTING INSTRUCTIONS
Go to dashboard and edit mode
Drag and drag multiple components and then save
Verify the dashboard shown as designed
ADDITIONAL INFORMATION
[ ] Has associated issue:
[ ] Required feature flags:
[x] Changes UI
[ ] Includes DB Migration (follow approval process in SIP-59)
[ ] Migration is atomic, supports rollback & is backwards-compatible
[ ] Confirm DB migration upgrade and downgrade tested
[ ] Runtime estimates and downtime expectations provided
[ ] Introduces new feature or API
[ ] Removes existing feature or API
/testenv up
/testenv up
@sadpandajoe
@justinpark this looks and feels so much better! definitely a much needed improvement :)
A few things I noticed:
When adding tabs to a dashboard, the drop area extends a bit too far, into the right panel
When adding a chart element, the dashboard header row is also highlighted as a possible drop area
I managed to get an error when dragging a chart in between the header and the dashboard area somewhere, but I can't figure out how to repro it :(
agree with Sophie on all of the above!
Definitely an improvement to the dnd, thank you @justinpark ☺️
one comment about the header thing Sophie mentioned: when you try to hover over it it shows a red box (error), which gave me an idea. I think maybe it could be used in places where there is not enough space to place the item, for example if the row is full. What do you think?
@yousoph Sorry for the late response. Here are my answers.
When adding tabs to a dashboard, the drop area extends a bit too far, into the right panel
Given that the tab (in the header) will occupy the whole row, including the right panel as shown in the following screenshot, it seems more logical for this highlight area to indicate its transition into a global header layout.
When adding a chart element, the dashboard header row is also highlighted as a possible drop area
As discussed during our plans for further D&D improvements, I've earmarked this particular enhancement for @rtexelm's project. In the interim, this version will highlight in red (instead of the primary color) when you hover over the drop area, as shown in the following screenshot.
I managed to get an error when dragging a chart in between the header and the dashboard area somewhere, but I can't figure out how to repro it :(
I tried to reproduce the same problem but cannot get the issue yet. please let me know if you find the problem again.
one comment about the header thing Sophie mentioned: when you try to hover over it it shows a red box (error), which gave me an idea. I think maybe it could be used in places where there is not enough space to place the item, for example if the row is full.
I had a similar thought as well, but there are complexities when it comes to dropping an existing component into a space that's just enough.
Specifically, it's challenging to programmatically differentiate this from the prohibited drop zones.
(For instance, the highlight box should not turn red when item [1] is dropped into a zone without space, as it's possible to reorder among the same row items. However, it should turn red when item [4] is dropped into a zone without space.)
|[ 1 ]|[ 2 ]|[ 3 ][no space drop zone]|
|[ 4 ]|[ ]|
Therefore, I've decided to retain the post notification as it is currently displayed.
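The distinction drawn above (reordering within the same row vs. dropping an outside item into a full row) can be sketched as a small validity check. This is illustrative Python pseudologic with assumed names, not the actual TypeScript in superset-frontend; the 12-column grid width is an assumption:

```python
GRID_COLUMN_COUNT = 12  # assumed dashboard grid width


def drop_allowed(row_widths, item_width, from_same_row):
    """Decide whether a dragged item may be dropped into a row.

    Reordering inside the same row never changes the row's total
    width, so it is always allowed (item [1] in the diagram above);
    an item coming from elsewhere (item [4]) needs enough free
    columns in the target row.
    """
    if from_same_row:
        return True
    return sum(row_widths) + item_width <= GRID_COLUMN_COUNT
```

With a full row of widths `[4, 4, 4]`, reordering a width-4 item from the same row is allowed, while dropping a width-4 item from another row is rejected, which is exactly why the red/primary highlight cannot be derived from free space alone.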
@kasiazjc @yousoph, barring any further objections from your end, I intend to proceed to the next step.
| gharchive/pull-request | 2024-01-19T23:58:13 | 2025-04-01T04:33:30.050216 | {
"authors": [
"justinpark",
"kasiazjc",
"michael-s-molina",
"yousoph"
],
"repo": "apache/superset",
"url": "https://github.com/apache/superset/pull/26699",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
181503967 | THRIFT-3546: Remove global namespace objects from nodejs generated code
Modified the Node.js code generator to not generate namespace objects by default, as using require() is the idiomatic way to reference classes contained in other modules in Node, instead of relying on an object hierarchy existing in the global namespace.
This change makes the generated code work under Javascript's "use strict" mode.
In case others are relying on the existing behavior, I added a ":with_ns" flag to the compiler that enables the current behavior for backwards compatibility.
At a quick run-through the patch looks OK to me. @Jens-G @RandyAbernethy, can either of you please review as well?
| gharchive/pull-request | 2016-10-06T19:26:27 | 2025-04-01T04:33:30.052261 | {
"authors": [
"bgould",
"jfarrell"
],
"repo": "apache/thrift",
"url": "https://github.com/apache/thrift/pull/1111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
752981068 | Add support for unix domain sockets.
First implementation added to
org.apache.coyote.http11.Http11AprProtocol.
Depends on https://github.com/apache/tomcat-native/pull/8.
Is this complete from your POV? I'd like to give this a spin next month on FreeBSD.
Did you run some numbers on how it compares against localhost for your use case?
It's complete from my POV.
My chief interest is getting rid of passwords rather than performance. If I run a server on localhost I need to prevent someone or something trying to connect to that endpoint through the backdoor, and that means shared secrets to protect credentials that show up in backups, etc.
What I want is for httpd to do its proxy magic, and connect to tomcat over UDS. I can configure this so that only httpd is allowed to connect to tomcat and nothing else. I can then pass certificate credentials from httpd to tomcat using unencrypted JWT, and life becomes easy.
Exposing tomcat directly is no good as there are many tomcats in my case, and I want them separate from one another, but exposed through the same webserver.
AJP over UDS for credential transfer is also theoretically possible, but people are starting to withdraw support for AJP.
So you have a shared machine where everyone can snoop on localhost? Since the socket files will be owned by a Tomcat system user, you want to add HTTPd to that group to make it interact with Tomcat?
Yes.
In this particular example it's a mailserver, with a whole host of related daemons running. If any of those daemons allows anything shady, open ports on localhost are an obvious target. This shuts this all down completely.
You can get away with it if you use passwords, or session cookies, but in this case it's 100% certificates, and that creates a problem.
By the way, there were talks at dev@ about dropping/deprecating AprProtocol and recommending the use of NIO(2). Maybe for 10.1.x, not decided yet.
That's correct, it was supposed to be dropped already in 10.0 [it will happen in 10.1]. Instead, it got some defaults changes so that using it requires more deliberate configuration.
At this stage JEP-380 is too far away for practical use, so having a library able to make native calls gives tomcat a significant edge.
The ability to use normal PEM files in the SSL configuration is also a significant benefit.
I absolutely agree. This is so simple with APR/OpenSSL.
Nobody has cared about UDS for the past 15 years. And PEM files are supported for JSSE and JSSE/OpenSSL. So I don't understand what the problem is.
Just because you never cared doesn't mean someone else does not. UDS is a fine thing on Unix. And for PEM files, they are supported because Tomcat supports it, not SunJSSE or OpenJSSE.
While you may not have cared, 389ds LDAP does UDS, all the milters and the various in postfix do UDS, as well as most web applications based on FastCGI, as does Windows 10 and Windows Server 2019.
As explained already, the problem is getting access to the filesystem permission model.
Even if the Tomcat dev team decides to drop support for the APR connector, I don't see a problem with the code being extracted to a new project and supported by the community.
A few nits in docs.
I wonder whether we should set default permissions at all, or instead rely on the umask.
Tomcat has a umask check (a startup listener); with these default permissions we basically break that promise...
Relying on the umask makes no practical sense, unfortunately.
The typical umask is 0027, meaning full access for tomcat itself, read access for members of the tomcat group (so that logfiles can be read but not changed), and no access for anyone else.
The unix domain socket is useless if you can't write to it. What that means is that only the tomcat user can send requests to tomcat, and members of the tomcat group can't send requests at all, which is completely pointless.
To be in any way useful the socket must be writable, and to do that it either needs to default to being writable, or needs to explicitly set as writable with at least pathPermissions="rw-rw----".
Exactly, that's the whole problem.
So not to undermine the default umask, are we good to take your pathPermissions="rw-rw----" proposal?
I'm not following - the umask makes no sense, not even as a default, so we have to override the umask to make it work at all.
I think a sensible approach is "defaults to the same behaviour as localhost, visible to all on the box, while offering posixPermissions to the unix people, and a protected parent directory for the windows people."
That's where we stand now.
OK, my slight counter proposal is not use rw-rw-rw- as default, but rw-rw---- because this would reflect the default umask of 027, i.e, not to create anything world readable. For those who need more permissions, they can supply a custom string.
I also do understand that localhost is open for everyone on that box, but isn't that the whole point of UDS to have more control of the socket?
The problem with this is that it makes the default behaviour between windows and unix inconsistent, and this is likely to cause headaches for people who either don't read the docs properly, or read a response on stack overflow aimed at unix people and use it thinking it also applies to windows.
Setting a default on windows is itself hard - windows doesn't have a concept of a "primary group" like posix, but the possibility of zero or more users and/or groups that have access to a file or directory. There is no practical default behaviour for any of that, which is why java itself doesn't try. Java gives you "access to owner" and "access to everyone", and that's it. "Access to owner" is the same as "no uds support", that leaves just "access to everyone, protect me by protecting my parent directory".
I also do understand that localhost is open for everyone on that box, but isn't that the whole point of UDS to have more control over the socket?
Yes - and the simplest way to protect a socket is to put it in a suitably protected directory. You don't have to protect the socket file itself; just make it impossible for the file to be seen by making its parent directory inaccessible.
I am very mindful of decisions made now being difficult to change down the line. Adding new behaviour in future is easy, but changing existing behaviour (like a default) is a headache for all concerned.
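As an aside, the umask/mode interaction behind these permission strings is easy to check from a shell. The demo below uses a plain file as a stand-in for a socket (illustrative values only, not anything Tomcat itself does):

```shell
# Illustrative only: umask bits are removed from whatever mode a process
# requests, so a file requested with 0666 under umask 027 ends up 0640,
# and under umask 007 it ends up 0660 (i.e. rw-rw----).
umask_demo() {
    f=$(mktemp)
    rm -f "$f"
    (umask "$1"; : > "$f")   # recreate the file under the given umask
    stat -c '%a' "$f"        # print the resulting octal mode
    rm -f "$f"
}
umask_demo 027   # prints 640
umask_demo 007   # prints 660
```

This is why a fixed permission string on the socket is attractive: it makes the result independent of whatever umask the service happens to inherit.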
While I agree here, you cannot really achieve consistency, due to the two completely diametrically opposed approaches of the two OS types. I wouldn't try to achieve it, as sad as it sounds.
I know it is hard - maybe we should not try at all? I believe it will take quite some time for this to be picked up by Windows users at all.
I am very mindful of decisions made now being difficult to change down the line. Adding new behaviour in future is easy, but changing existing behaviour (like a default) is a headache for all concerned.
Agree!
@minfrin Do you want to perform any more changes, or do you want me to run verification on it? Do you think a test would be possible that starts up and shuts down a UDS?
There is some issue with stopping an embedded Tomcat:
=== Started
^CDec 11, 2020 11:59:45 AM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["https-openssl-apr-/tmp/tomcat-uds.sock"]
Dec 11, 2020 11:59:45 AM org.apache.catalina.core.StandardService stopInternal
INFO: Stopping service [Tomcat]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.catalina.loader.WebappClassLoaderBase (file:/home/ubuntu/git/mg.solutions/http2-server-perf-tests/java/tomcat/target/tomcat-embedded-1.0-SNAPSHOT.jar) to field java.io.ObjectStreamClass$Caches.localDescs
WARNING: Please consider reporting this to the maintainers of org.apache.catalina.loader.WebappClassLoaderBase
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Dec 11, 2020 11:59:45 AM org.apache.coyote.AbstractProtocol stop
INFO: Stopping ProtocolHandler ["https-openssl-apr-/tmp/tomcat-uds.sock"]
Dec 11, 2020 11:59:55 AM org.apache.tomcat.util.net.Acceptor stop
WARNING: The acceptor thread [https-openssl-apr-/tmp/tomcat-uds.sock-Acceptor] did not stop cleanly
=== Stopped
and the application hangs.
Thread dump:
2020-12-11 12:02:45
Full thread dump OpenJDK 64-Bit Server VM (16-ea+26-1764 mixed mode):
Threads class SMR info:
_java_thread_list=0x0000ffff806d7a50, length=14, elements={
0x0000ffff801d1390, 0x0000ffff801d2b30, 0x0000ffff801ffda0, 0x0000ffff802012e0,
0x0000ffff80202810, 0x0000ffff802042e0, 0x0000ffff802058e0, 0x0000ffff80206e70,
0x0000ffff80297670, 0x0000ffff802a23f0, 0x0000ffff80697560, 0x0000fffef8001100,
0x0000ffff806e3050, 0x0000ffff800248b0
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=0.42ms elapsed=413.75s tid=0x0000ffff801d1390 nid=0x2c113f waiting on condition [0x0000ffff609fc000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@16-ea/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@16-ea/Reference.java:243)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@16-ea/Reference.java:215)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=0.46ms elapsed=413.75s tid=0x0000ffff801d2b30 nid=0x2c1140 in Object.wait() [0x0000ffff607fc000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@16-ea/Native Method)
- waiting on <0x00000007148016c8> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@16-ea/ReferenceQueue.java:155)
- locked <0x00000007148016c8> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@16-ea/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@16-ea/Finalizer.java:171)
"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.53ms elapsed=413.75s tid=0x0000ffff801ffda0 nid=0x2c1141 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Service Thread" #5 daemon prio=9 os_prio=0 cpu=0.38ms elapsed=413.75s tid=0x0000ffff802012e0 nid=0x2c1142 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Monitor Deflation Thread" #6 daemon prio=9 os_prio=0 cpu=2.44ms elapsed=413.75s tid=0x0000ffff80202810 nid=0x2c1143 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #7 daemon prio=9 os_prio=0 cpu=439.33ms elapsed=413.75s tid=0x0000ffff802042e0 nid=0x2c1144 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"C1 CompilerThread0" #10 daemon prio=9 os_prio=0 cpu=605.34ms elapsed=413.75s tid=0x0000ffff802058e0 nid=0x2c1145 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
No compile task
"Sweeper thread" #11 daemon prio=9 os_prio=0 cpu=0.09ms elapsed=413.75s tid=0x0000ffff80206e70 nid=0x2c1146 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Notification Thread" #12 daemon prio=9 os_prio=0 cpu=0.10ms elapsed=413.72s tid=0x0000ffff80297670 nid=0x2c1147 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Common-Cleaner" #13 daemon prio=8 os_prio=0 cpu=0.69ms elapsed=413.71s tid=0x0000ffff802a23f0 nid=0x2c1149 in Object.wait() [0x0000ffff50ffc000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@16-ea/Native Method)
- waiting on <0x0000000714802c70> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@16-ea/ReferenceQueue.java:155)
- locked <0x0000000714802c70> (a java.lang.ref.ReferenceQueue$Lock)
at jdk.internal.ref.CleanerImpl.run(java.base@16-ea/CleanerImpl.java:140)
at java.lang.Thread.run(java.base@16-ea/Thread.java:831)
at jdk.internal.misc.InnocuousThread.run(java.base@16-ea/InnocuousThread.java:134)
"Catalina-utility-1" #15 prio=1 os_prio=0 cpu=22.45ms elapsed=412.77s tid=0x0000ffff80697560 nid=0x2c1151 waiting on condition [0x0000ffff127fd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@16-ea/Native Method)
- parking to wait for <0x00000007149aaa60> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@16-ea/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@16-ea/AbstractQueuedSynchronizer.java:505)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@16-ea/ForkJoinPool.java:3137)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@16-ea/AbstractQueuedSynchronizer.java:1614)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@16-ea/ScheduledThreadPoolExecutor.java:1177)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@16-ea/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@16-ea/ThreadPoolExecutor.java:1056)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@16-ea/ThreadPoolExecutor.java:1116)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@16-ea/ThreadPoolExecutor.java:630)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(java.base@16-ea/Thread.java:831)
"Catalina-utility-2" #16 prio=1 os_prio=0 cpu=22.19ms elapsed=412.77s tid=0x0000fffef8001100 nid=0x2c1152 waiting on condition [0x0000ffff125fd000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@16-ea/Native Method)
- parking to wait for <0x00000007149aaa60> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@16-ea/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@16-ea/AbstractQueuedSynchronizer.java:505)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@16-ea/ForkJoinPool.java:3137)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@16-ea/AbstractQueuedSynchronizer.java:1614)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@16-ea/ScheduledThreadPoolExecutor.java:1170)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@16-ea/ScheduledThreadPoolExecutor.java:899)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@16-ea/ThreadPoolExecutor.java:1056)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@16-ea/ThreadPoolExecutor.java:1116)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@16-ea/ThreadPoolExecutor.java:630)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(java.base@16-ea/Thread.java:831)
"https-openssl-apr-/tmp/tomcat-uds.sock-Acceptor" #28 daemon prio=5 os_prio=0 cpu=0.24ms elapsed=412.76s tid=0x0000ffff806e3050 nid=0x2c115e runnable [0x0000ffff10dfe000]
java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.accept(Native Method)
at org.apache.tomcat.util.net.AprEndpoint.serverSocketAccept(AprEndpoint.java:729)
at org.apache.tomcat.util.net.AprEndpoint.serverSocketAccept(AprEndpoint.java:82)
at org.apache.tomcat.util.net.Acceptor.run(Acceptor.java:106)
at java.lang.Thread.run(java.base@16-ea/Thread.java:831)
"DestroyJavaVM" #30 prio=5 os_prio=0 cpu=1003.31ms elapsed=169.91s tid=0x0000ffff800248b0 nid=0x2c1138 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 cpu=8.42ms elapsed=413.76s tid=0x0000ffff801c2f70 nid=0x2c113e runnable
"GC Thread#0" os_prio=0 cpu=5.94ms elapsed=413.78s tid=0x0000ffff80073930 nid=0x2c1139 runnable
"GC Thread#1" os_prio=0 cpu=4.91ms elapsed=412.94s tid=0x0000ffff48004f60 nid=0x2c114c runnable
"GC Thread#2" os_prio=0 cpu=4.92ms elapsed=412.94s tid=0x0000ffff48005af0 nid=0x2c114d runnable
"GC Thread#3" os_prio=0 cpu=4.90ms elapsed=412.94s tid=0x0000ffff48006680 nid=0x2c114e runnable
"GC Thread#4" os_prio=0 cpu=4.87ms elapsed=412.94s tid=0x0000ffff48007210 nid=0x2c114f runnable
"GC Thread#5" os_prio=0 cpu=4.88ms elapsed=412.94s tid=0x0000ffff48007da0 nid=0x2c1150 runnable
"G1 Main Marker" os_prio=0 cpu=0.11ms elapsed=413.78s tid=0x0000ffff800848c0 nid=0x2c113a runnable
"G1 Conc#0" os_prio=0 cpu=0.06ms elapsed=413.78s tid=0x0000ffff80085950 nid=0x2c113b runnable
"G1 Refine#0" os_prio=0 cpu=0.08ms elapsed=413.78s tid=0x0000ffff8013ac50 nid=0x2c113c runnable
"G1 Service" os_prio=0 cpu=15.07ms elapsed=413.78s tid=0x0000ffff8013bc70 nid=0x2c113d runnable
"VM Periodic Task Thread" os_prio=0 cpu=45.58ms elapsed=413.72s tid=0x0000ffff80299160 nid=0x2c1148 waiting on condition
JNI global refs: 27, weak refs: 0
Heap
garbage-first heap total 256000K, used 17190K [0x0000000706600000, 0x0000000800000000)
region size 2048K, 9 young (18432K), 2 survivors (4096K)
Metaspace used 19940K, committed 20160K, reserved 1073152K
class space used 1918K, committed 2048K, reserved 1048576K
The application I use to test it could be found at https://github.com/martin-g/http2-server-perf-tests/blob/feature/jakartaee-9/java/tomcat/src/main/java/info/mgsolutions/tomcat/TomcatEmbedded.java
In my test application TLS is configured but it is not used/needed by UDS so HTTP2 does not work:
$ curl --http2 --unix-socket /tmp/tomcat-uds.sock http://localhost/testbed/plaintext
curl: (56) Recv failure: Connection reset by peer
$ ubuntu@martin-arm64 /tmp [56]> curl --unix-socket /tmp/tomcat-uds.sock http://localhost/testbed/plaintext
curl: (56) Recv failure: Connection reset by peer
If I use h2c then all is fine:
INFO: The ["http-apr-/tmp/tomcat-uds.sock"] connector has been configured to support HTTP upgrade to [h2c]
$ curl --unix-socket /tmp/tomcat-uds.sock http://localhost/testbed/plaintext
Hello world!⏎
I think it would be good to document this.
Load tested it with Vegeta:
$ echo "GET http://localhost/testbed/plaintext" | vegeta attack -unix-socket /tmp/tomcat-uds.sock -rate 0 -max-workers 128 -duration 30s | vegeta encode | vegeta report --type json | jq .
{
"latencies": {
"total": 2849931195122,
"mean": 2189702,
"50th": 1623066,
"90th": 4702159,
"95th": 5418223,
"99th": 7096980,
"max": 69859908,
"min": 56120
},
"bytes_in": {
"total": 15618180,
"mean": 12
},
"bytes_out": {
"total": 0,
"mean": 0
},
"earliest": "2020-12-11T12:27:16.43339275Z",
"latest": "2020-12-11T12:27:46.433339493Z",
"end": "2020-12-11T12:27:46.436590425Z",
"duration": 29999946743,
"wait": 3250932,
"requests": 1301515,
"rate": 43383.910349897116,
"throughput": 43379.20957953359,
"success": 1,
"status_codes": {
"200": 1301515
},
"errors": []
}
Throughput: 43379. Not bad at all!
With TCP I was able to get 16654 on the same server, but that was with Tomcat 9.0.x, and vegeta was executed on another machine.
Here are the results for load testing APR protocol over TCP, both Tomcat and Vegeta running on the same machine:
echo "GET http://localhost:8080/testbed/plaintext" | vegeta attack -rate 0 -max-workers 128 -duration 30s | vegeta encode | vegeta report --type json | jq .
{
"latencies": {
"total": 2993301754968,
"mean": 2558679,
"50th": 2097687,
"90th": 5000518,
"95th": 5898497,
"99th": 8044202,
"max": 151339083,
"min": 71830
},
"bytes_in": {
"total": 14038344,
"mean": 12
},
"bytes_out": {
"total": 0,
"mean": 0
},
"earliest": "2020-12-11T12:45:29.355079583Z",
"latest": "2020-12-11T12:45:59.355129897Z",
"end": "2020-12-11T12:45:59.356131037Z",
"duration": 30000050314,
"wait": 1001140,
"requests": 1169862,
"rate": 38995.3345996245,
"throughput": 38994.03331892302,
"success": 1,
"status_codes": {
"200": 1169862
},
"errors": []
}
TCP: 38994
UDS: 43379
@minfrin Could you kindly add a test case for this? I would like to finalize this and check out @martin-g's comments.
I am not convinced about adding that feature to the APR endpoint ...
Anyway:
The changes to IntrospectionUtils are too much given the actual use; strings could be used, and the endpoint is the only place that actually deals with the two types, so that seems enough
I don't get the idea behind the "permissions", since I don't think Tomcat is the party that is supposed to be creating the socket
The UDS feature should already work with NIO and Java 16 EA by using an inherited channel. The limitation is that there is only one endpoint that can use a UDS. I'm ok with adding full UDS support to NIO using the compat package (the amount of reflection needed does not seem too bad so I may try it to see how that would work).
I added the feature for NIO, since it wasn't too difficult using https://openjdk.java.net/jeps/380 . Testing with curl works fine; I'll add a test in the testsuite next. It does need a specific accept unlock and some compatibility code for port/address annoyances ("TCP local addresses" for access logs, the JMX name, the connector name), and the port attribute becomes optional. This cannot be added to NIO2, since the feature is not available there.
Some possible changes:
The permission attribute, is it really useful ?
Reflection is used for this attribute for now since this is NIO only
The socket is not deleted on shutdown (although the channel is closed)
I see no reason why this cannot work which Java UDS and APR UDS.
@rmaucher https://github.com/apache/tomcat/commit/884b997f5a9a7da9f696d00574d3b727afbfae8c#diff-117ff4ae372c7a4f6643546174bcc2dbf5a25bd399fe1b89f55e72d2d4150285R212
It can 100% work with APR, except I personally don't want to add features to that component at this point.
If you personally don't want to, @minfrin happily will.
The permission attribute, is it really useful ?
In the absence of a permission attribute (and without the "everyone" default), the socket is equivalent to a TCP port that has been firewalled off, and thus pointless.
Ignoring special cases like a personal development environment, or a system with no user separation, daemons (like tomcat) are secured with a user tomcat, group tomcat, and a typical mode of 0750 (or some variation). This means that a) the tomcat user can write, b) the tomcat group can read (typically allowing read access to log files), and c) everyone else gets nothing.
In order for any unix domain socket to be of use to anyone, it must be possible to write to it. If you can't write to it, you cannot submit a request. A unix domain socket that only the tomcat user can write to is pointless, as you would be giving the client control over the tomcat process. A read-only unix domain socket for a request/response protocol like HTTP has no practical effect - having written nothing, you will read nothing.
For this reason, every daemon out there that I have seen has a mechanism to make the socket writable to a group, and defaulting to being accessible to everyone:
https://github.com/Cisco-Talos/clamav-devel/blob/31824a659dff37ae03e3419395bb68e659c2b165/etc/clamd.conf.sample#L104
https://github.com/trusteddomainproject/OpenDMARC/blob/b0d6408d0859adb336428e3d0bd87749513a9e79/opendmarc/opendmarc.conf.sample#L357
https://github.com/rspamd/rspamd/blob/9c2d72c6eba3fc05fd7459e388ea7c92eb87095f/conf/options.inc#L48
In the absence of an explicit control over permissions, making the permissions world writable by default allows the admin to secure the socket by restricting permissions on the parent directory, such as the following example:
[root@localhost clamav-milter]# ls -al
total 0
drwx--x---. 2 clamilt clamilt 60 Jan 11 13:03 .
drwxr-xr-x. 39 root root 1080 Jan 11 13:06 ..
srw-rw-rw-. 1 clamilt clamilt 0 Jan 11 13:03 clamav-milter.socket
In the above, the socket itself is world writable, but the parent directory is protected, and therefore the socket is protected.
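A rough sketch of how such a layout can be created by hand (names and paths here are made up, and a plain file stands in for the socket, since only the modes matter for the illustration):

```shell
# Hypothetical layout: world-writable "socket" inside a locked-down parent.
dir=$(mktemp -d)             # stands in for e.g. /run/my-daemon
chmod 0710 "$dir"            # drwx--x--- : only owner and group may enter
: > "$dir/app.sock"          # stand-in for the socket file
chmod 0666 "$dir/app.sock"   # rw-rw-rw- : world read/write
stat -c '%a %n' "$dir" "$dir/app.sock"   # 710 ..., 666 ...
rm -rf "$dir"
```

The "socket" is world writable, but anyone outside the owner and group cannot traverse the parent directory, so they can never reach it.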
The socket is not deleted on shutdown (although the channel is closed)
If the socket is not deleted on shutdown, the server cannot subsequently be started up. Deleting the socket on shutdown is the most common behaviour. Deleting the socket at startup is not done, as that would allow multiple daemons to be started without error.
I think I saw a commit go past fixing this, need to verify.
Life is too short to fight with git.
Opened a fresh PR at https://github.com/apache/tomcat/pull/401.
I've added a test based on the existing APR tests. Can you confirm this is all ok?
I added the feature for NIO, since it wasn't too difficult using https://openjdk.java.net/jeps/380 .
Thank you for doing this, it is a huge help.
This patch is all about supporting people who are not yet in a position to use Java 16.
Ok, I still dislike the permission attribute quite a bit but I can understand things can be annoying without it in some cases.
| gharchive/pull-request | 2020-11-29T18:51:28 | 2025-04-01T04:33:30.125324 | {
"authors": [
"martin-g",
"michael-o",
"minfrin",
"rmaucher"
],
"repo": "apache/tomcat",
"url": "https://github.com/apache/tomcat/pull/382",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1755259199 | Change style
Just change some CSS style
I removed any unnecessary spaces and utilized 4 spaces instead of 1 tab. Additionally, I updated the style of the buttons.
Thanks for addressing the feedback. The remaining question is "What is the reason for these changes?"
The reason behind these modifications is to enhance the visual appeal of the software and provide a more modern look, for example by removing unnecessary elements such as table borders.
I would like to redesign the frontend architecture of the project in a modern way if you agree.
Please provide an updated screenshot showing the results of all your changes as there have been additional changes since the PR was opened.
Closing as this is a cosmetic change and requested updates to the PR have not been made in ~6 weeks.
| gharchive/pull-request | 2023-06-13T16:18:46 | 2025-04-01T04:33:30.131342 | {
"authors": [
"markt-asf",
"sheikhoo"
],
"repo": "apache/tomcat",
"url": "https://github.com/apache/tomcat/pull/628",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
716715431 | Change ORT to Calculate parent_pending Itself
Currently, ORT uses the parent_pending flag in a TO endpoint to determine whether any parents of the current server have updates pending.
We should change ORT to calculate this itself. It has all the data to do so.
The calculation itself is logic (mostly SQL) performed on the data. Logic that becomes more complex with the move from DeliveryServiceServers to Topologies.
This is just one more thing that should be done in ORT because it can be, and because ORT is canaryable, where the TO Monolith is not. Not a huge issue, but a code/canary/rollback improvement.
I'm submitting a ...
improvement request (usability, performance, tech debt, etc.)
Traffic Control components affected ...
Traffic Ops ORT
Current behavior:
TO performs the logic to determine parents pending, and ORT requests that from TO.
New behavior:
ORT performs logic to determine parents pending.
Minimal reproduction of the problem with instructions:
N/A. Behavior does not change.
Anything else:
Related to https://github.com/apache/trafficcontrol/issues/3687
Related to https://github.com/apache/trafficcontrol/pull/3689
| gharchive/issue | 2020-10-07T17:22:37 | 2025-04-01T04:33:30.135599 | {
"authors": [
"rob05c"
],
"repo": "apache/trafficcontrol",
"url": "https://github.com/apache/trafficcontrol/issues/5117",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
974872018 | Role and User struct changes for permission based roles
This PR changes the V4 Role and User structs to account for the changes as specified in the Roles and Perms blueprint.
Which Traffic Control components are affected by this PR?
Traffic Ops
What is the best way to verify this PR?
Make sure all the unit tests and API tests pass.
PR submission checklist
[x] This PR has tests
[x] This PR DOES NOT FIX A SERIOUS SECURITY VULNERABILITY (see the Apache Software Foundation's security guidelines for details)
I was starting to review this when I realized it still targets API v5, but we're not doing that at this point, per mailing list discussion.
Yeah I'm in the process of backing out the v5 changes. Should have it ready today.
Working on the test fixes.
TP Role creation form appears to be broken:
TP Role creation form appears to be broken:
The controls are all broken functionally, as well as visually
Should be fixed now.
| gharchive/pull-request | 2021-08-19T17:06:43 | 2025-04-01T04:33:30.141261 | {
"authors": [
"ocket8888",
"srijeet0406"
],
"repo": "apache/trafficcontrol",
"url": "https://github.com/apache/trafficcontrol/pull/6124",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249433000 | Memory leak on TSVConnClose in TS_VCONN_PRE_ACCEPT_HOOK
Branch: 7.1.x
When a plugin closes a connection using TSVConnClose in a TS_VCONN_PRE_ACCEPT_HOOK, the SSLNextProtocolTrampoline does not seem to be deleted, resulting in the following valgrind output:
==32021== 72 bytes in 1 blocks are definitely lost in loss record 1,410 of 1,898
==32021== at 0x4A075FC: operator new(unsigned long) (vg_replace_malloc.c:298)
==32021== by 0x790DAC: SSLNextProtocolAccept::mainEvent(int, void*) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x517453: Continuation::handleEvent(int, void*) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x7AAD0C: UnixNetVConnection::acceptEvent(int, Event*) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x517453: Continuation::handleEvent(int, void*) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x7CB37D: EThread::process_event(Event*, int) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x7CB60C: EThread::execute() (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x7CA997: spawn_thread_internal(void*) (in /opt/trafficserver/bin/traffic_server)
==32021== by 0x4EAAAA0: start_thread (in /lib64/libpthread-2.12.so)
==32021== by 0x3E4F8E8BBC: clone (in /lib64/libc-2.12.so)
A minimalisic plugin for testing:
int closeAll(TSCont contp, TSEvent event, void* ctx){
TSVConnClose((TSVConn)ctx);
return TS_SUCCESS;
}
void TSPluginInit(int argc, const char *argv[]) {
TSReturnCode ret;
TSCont contp;
TSPluginRegistrationInfo info;
info.plugin_name = "Memory leak on TSVConnClose in TS_VCONN_PRE_ACCEPT_HOOK";
info.vendor_name = "github";
info.support_email = "email@email.com";
ret = TSPluginRegister(&info);
if (ret == TS_ERROR) {
TSError("Failed to register plugin");
return;
}
contp = TSContCreate(closeAll, TSMutexCreate());
TSHttpHookAdd(TS_VCONN_PRE_ACCEPT_HOOK, contp);
}
Before you return TS_SUCCESS;, you should do TSVConnReenable(reinterpret_cast<TSVConn>(edata)); first.
Please refer to /example/ssl-preaccept/sslpreaccept.cc.
I've tried what you are suggesting, but calling TSVConnReenable on a closed connection led to sporadic segfaults within ats. I'm trying to reproduce what I did last time and provide you with more information.
@Enteee @shinrich It is a bug.
We should increase the netvc->recursion before call out to api hooks and check it in the SSLNetVC::do_io_close.
@Enteee @shinrich
Judging from TSVConnTunnel, in my opinion,
it is prohibited to perform TSVConnClose (do_io_close) on an SSLNetVC object.
It is the responsibility of TSVConnTerminate to terminate the SSL session and then close the TCP connection.
We can create TSVConnTerminate by reference to TSVConnTunnel, and SSL_HOOK_OP_TERMINATE is already supported in the SSLNetVC.
@oknet since you confirmed the bug, you don't need me to reproduce it anymore?
@Enteee No, you don't.
Currently, the plugin cannot close an SSLNetVC directly, but it is allowed to switch the SSLNetVC into a blind tunnel with TSVConnTunnel.
If you want an API that is used to close an SSLNetVC, please request a new API.
With master (pre 9.0 branch), I cannot replicate the memory leak mentioned in this issue.
| gharchive/issue | 2017-08-10T18:17:58 | 2025-04-01T04:33:30.148287 | {
"authors": [
"Enteee",
"clearswift",
"oknet",
"randall"
],
"repo": "apache/trafficserver",
"url": "https://github.com/apache/trafficserver/issues/2361",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
308784163 | traffic_server using 100% CPU
It looks like latest master branch (3f59ddc6cec352cd38a0ffaa715716e07e7a958d) is spinning CPU.
PID COMMAND %CPU TIME #TH
28661 traffic_server 100.3 01:28.77 28/1
A thread is in this while loop. https://github.com/apache/trafficserver/blob/3f59ddc6cec352cd38a0ffaa715716e07e7a958d/mgmt/ProcessManager.cc#L163-L177
(lldb) bt
* thread #2
* frame #0: 0x000000010025eb90 traffic_server`ProcessManager::processSignalQueue(this=0x0000000000000000) at ProcessManager.cc:267
frame #1: 0x000000010025d86e traffic_server`ProcessManager::processManagerThread(arg=0x0000000000000000) at ProcessManager.cc:173
frame #2: 0x00007fff63f7b6c1 libsystem_pthread.dylib`_pthread_body + 340
frame #3: 0x00007fff63f7b56d libsystem_pthread.dylib`_pthread_start + 377
frame #4: 0x00007fff63f7ac5d libsystem_pthread.dylib`thread_start + 13
It looks like #3214 introduced this. It removed a sleep intentionally, but I'm not sure this is the right approach.
removes static wait, previously in ProcessManager.cc:175
- mgmt_sleep_sec(pmgmt->timeout);
I don't see this issue when I revert #3214.
The static wait is not causing this issue. The thread should spend most of its time in select waiting for messages on the socket. If you check the CPU usage of traffic_server when traffic_manager is also running, it is much lower.
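The difference between blocking in select(2) and spinning can be sketched generically — this is an illustrative Python analogue, not the actual ProcessManager C++ code:

```python
import select
import socket

def wait_for_message(sock, timeout=5.0):
    """Block in select() until the socket is readable or the timeout
    expires; the thread consumes no CPU while waiting, unlike a loop
    that polls without ever blocking."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)
```

A loop built around a call like this sleeps inside the kernel until a message arrives, so removing a fixed sleep is harmless; removing the sleep from a loop that never blocks turns it into a busy-wait at 100% CPU.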
| gharchive/issue | 2018-03-27T00:51:53 | 2025-04-01T04:33:30.151944 | {
"authors": [
"masaori335",
"xf6wang"
],
"repo": "apache/trafficserver",
"url": "https://github.com/apache/trafficserver/issues/3350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
380902496 | adding TSHttpTxnRedoCacheLookup
This pull request adds a new API to redo cache look-ups with a changed URL. A review would be welcome.
[approve ci]
[approve ci]
@randall @SolidWallOfCode I'd really appreciate your review of this patch. I can add docs to go with the release if the approach looks correct to you.
[approve ci]
[add to whitelist]
Ok, I'm very +1 on this general concept! We removed the second CacheSM a while ago, for good reasons, and this is exactly what we'd want to do instead.
Still need thorough review though.
I think my code is great and all, but has it been tested in any way?
@SolidWallOfCode I wrote a very simple plugin to test this API and it seems to work as expected.
[approve ci clang-analyzer]
[approve ci clang-analyzer]
I've added documentation for this new function. Is there anything else I need to change to move this PR to "For Review"?
Thanks @SolidWallOfCode :raised_hands: :bug: :smile:
Cherry-picked to v9.0.x branch.
| gharchive/pull-request | 2018-11-14T21:51:23 | 2025-04-01T04:33:30.156257 | {
"authors": [
"SolidWallOfCode",
"bryancall",
"calavera",
"danm-netlify",
"ezelkow1",
"randall",
"shukitchan",
"zwoop"
],
"repo": "apache/trafficserver",
"url": "https://github.com/apache/trafficserver/pull/4607",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
719845006 | Add AuTest for HTTP/2 Graceful Shutdown
Add a basic test case of HTTP/2 Graceful Shutdown. Please note that this doesn't cover the issues we discussed in #7267.
Yep, the gold file is assuming 2 test cases are finished before it is checked. I agree it's an ugly hack and increases maintenance cost a bit.
But I can't find any other good approach. I tried getting metrics via traffic_ctl, but it's tricky because ATS needs time to provide stats. You can see how it's tricky in other AuTests (e.g. openclose.test.py).
| gharchive/pull-request | 2020-10-13T04:16:30 | 2025-04-01T04:33:30.158058 | {
"authors": [
"masaori335"
],
"repo": "apache/trafficserver",
"url": "https://github.com/apache/trafficserver/pull/7271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2438016666 | [Relax] Allow out_sinfo to be omitted from R.call_tir
Prior to this commit, the Relax type produced by calling a TIR PrimFunc needed to be explicitly specified using the out_sinfo argument. These output shapes are required in order to allocate output tensors during the CallTIRRewrite lowering pass. However, specifying them explicitly, especially in hand-written functions, duplicates information that is already present in the PrimFunc signature, and introduces the potential for inconsistencies.
This commit updates the MakeCallTIR function to infer out_sinfo if not explicitly specified. This inference uses the number of relax arguments to identify output parameters in the signature of the PrimFunc, which then become the return values from R.call_tir. Currently, this inference of out_sinfo occurs when constructing the relax::Call object, after which the out_sinfo is always present in the Relax IR.
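The inference rule described — identifying output parameters by counting the Relax arguments — boils down to something like this plain-Python sketch (the helper name and parameter lists are illustrative, not TVM API):

```python
def infer_out_params(prim_func_params, relax_args):
    """Under destination-passing style, the leading PrimFunc buffer
    parameters are matched to the Relax arguments; whatever trails
    them is treated as the outputs to allocate and return."""
    n_inputs = len(relax_args)
    if n_inputs >= len(prim_func_params):
        raise ValueError("no trailing parameters left to infer as outputs")
    return prim_func_params[n_inputs:]

# e.g. a PrimFunc add(A, B, C) called with two Relax args -> C is the output
```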
Just want to note that it is not always possible to do such inference.
class IRModule:
@T.prim_func
def reshape(A: Buffer((2, 4)), B: Buffer((n, m))):
def main(A: Buffer((2, 4))):
lv0 = R.call_tir(reshape, [A], R.Tensor((1, 8)))
For example, the above code is a valid tir call, but needs the output sinfo to be explicitly specified. Because we have such cases, and call_tir is a lower level function, it is safer to always ask for sinfo, but checks its consistency with the corresponding prim_func signature if needed
For example, the above code is a valid tir call, but needs the output sinfo to be explicitly specified. Because we have such cases, and call_tir is a lower level function, it is safer to always ask for sinfo, but checks its consistency with the corresponding prim_func signature if needed
That's a good point, and I agree that we should always be able to explicitly specify the output struct info, as output tensor shapes in TIR may define symbolic shapes. However, I don't think it should be a required argument.
I've added a new test case, based on your example with reshape, to validate the behavior when the output shape cannot be inferred. While the initial implementation did identify this failure and throw an error, the error message wasn't ideal. I've added an earlier check for non-inferable output shapes, so that the error message can direct the user to provide the out_sinfo field.
Do the updated check/error messages address your concerns for this PR?
I think this is mainly a design consideration on what we view as the intended use of CreateCallTIR, in terms of the expectations we have on callers of the function. I can see merits in both auto-deduction and calling for explicitness.
Given call_tir is lower level, having "less automation" here during passes and checking explicitly would ensure correctness, while indeed asking pass writers to do a bit more. It is like explicitly annotating types when writing C++ code versus writing auto. I think encouraging pass writers to explicitly think about the DPS pattern and always provide the return argument helps to reduce uncertainty here. While I can indeed see some merits of automated deduction, given it is not always possible, I still prefer that we keep the explicitness and provide a good amount of consistency checking.
I think encouraging pass writers to explicitly think about the DPS pattern and always provide the return argument helps to reduce uncertainty here.
While I think this would be an interesting point to discuss, I don't think it's relevant to this specific change. This PR keeps the exact same out_sinfo in the C++ IR types, and still requires pass writers to explicitly provide the output info. The MakeCallTIR function is not exposed to the back-end C++ API, only through the front-end Python API.
This change is solely in the front-end, for cases where an IRModule is being hand-written. I'd like to make that use-case less error-prone.
Having such an explicit argument makes the "intent" clear; with the explicit sinfo, we can write down the semantics in a clear fashion.
Good point on the semantics. This change would add an additional step to the user-facing semantics of R.call_tir.
def call_tir(func, args, out_sinfo):
if out_sinfo is None:
out_sinfo = infer_out_sinfo(func, args) # may throw
out = alloc_outputs(out_sinfo)
func(*args, unpack_outputs(out))
return out
I suppose what I'm getting stuck on is the "intent" part. While there are exceptions, in the majority of cases there is one and only one correct value for out_sinfo. Since the user doesn't have any choice in it, we can't infer any intention from the user about it. On the other hand, if the user has the option of omitting the out_sinfo, then we could distinguish between the intent of "use whichever output is valid" (e.g. R.call_tir(unary_abs, [x])) and "verify and use the output I expect" (e.g. R.call_tir(unary_abs, [x], R.Tensor([16],'float16'))).
In this particular case, having a good well-formedness check for consistency would help a lot in that direction.
Agreed. I think for now, let's put this PR on hold, and I'll update the well-formed checker to verify consistency between the R.call_tir callee and the input/output arguments. (Since that's a change that we both agree on, and covers many of the same error modes.)
closed in favor of #17285
| gharchive/pull-request | 2024-07-30T14:41:19 | 2025-04-01T04:33:30.167685 | {
"authors": [
"Lunderberg",
"tqchen"
],
"repo": "apache/tvm",
"url": "https://github.com/apache/tvm/pull/17216",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1236706409 | Write a TLA+ spec of SeqModelChecker
Sequential model checker has a number of tuning options such as: checking invariants before or after taking a step, discarding disabled transitions, excluding executions with views, etc. ADR-003 captures only one combination of these options. In order to document and understand better how the model checker is interacting with the solver, we have to write a specification, instead of a single sequence chart (as in ADR-003).
That was a curious exercise, but it happened to be much harder than it was useful.
| gharchive/issue | 2022-05-16T06:55:23 | 2025-04-01T04:33:30.180568 | {
"authors": [
"konnov"
],
"repo": "apalache-mc/apalache",
"url": "https://github.com/apalache-mc/apalache/issues/1756",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2161645657 | 🛑 HedgeDoc is down
In 3eb7dd3, HedgeDoc (https://pad.interhop.org) was down:
HTTP code: 502
Response time: 357 ms
Resolved: HedgeDoc is back up in 46c5a0f after 4 hours, 16 minutes.
| gharchive/issue | 2024-02-29T16:28:49 | 2025-04-01T04:33:30.215407 | {
"authors": [
"aparrot89"
],
"repo": "aparrot89/interhop-status",
"url": "https://github.com/aparrot89/interhop-status/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
132869634 | docker: handle scheme-ish image names
Passing in an image name of httpd would cause the parsing function to treat
the input as a partial image name, rather than a URL.
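The disambiguation being fixed can be sketched like this (Python for illustration — the actual apcera/util fix is in Go, and parse_image_name is a hypothetical helper):

```python
from urllib.parse import urlparse

def parse_image_name(name):
    """Treat the input as a URL only when it carries both an explicit
    scheme and a host; a bare name like "httpd" (or "httpd:latest",
    whose tag colon can masquerade as a scheme) stays an image name."""
    parsed = urlparse(name)
    if parsed.scheme and parsed.netloc:
        return {"registry": parsed.netloc, "image": parsed.path.lstrip("/")}
    return {"registry": None, "image": name}
```

Requiring both a scheme and a netloc avoids misclassifying tagged names, since a lone colon is enough for many URL parsers to report a "scheme".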
@wallyqs @jpoler
LGTM :dragon:
Too slow @jpoler!
You just saved me from having to open a JIRA on this. +100
| gharchive/pull-request | 2016-02-11T01:53:36 | 2025-04-01T04:33:30.219872 | {
"authors": [
"alextoombs",
"sdemura",
"wallyqs"
],
"repo": "apcera/util",
"url": "https://github.com/apcera/util/pull/34",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2023529571 | Missing resource fields in UMA RPT
Hey there,
according to the UMA spec (link), upon successful token introspection the authorization server should respond to the resource server with a JSON response including at least resource_id and resource_scopes. However, since a regular Access Token is used, CAS currently does not include resource information in the response, making it impossible for the resource server to determine the validity of the user request.
For now this PR only provides a change to your puppeteer test to point out the problem. We would be happy to provide a solution for this, but need to know whether you are interested in it or rather prefer to fix it yourselves.
Kind regards,
Fabian
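For context, the introspection response the resource server expects would look roughly like this (an illustrative sketch based on the UMA 2.0 grant specification; all identifier and scope values are made up):

```json
{
  "active": true,
  "permissions": [
    {
      "resource_id": "112210f47de98100",
      "resource_scopes": ["view", "print"],
      "exp": 1593000000
    }
  ]
}
```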
Thank you very much for the feedback and the updated test.
For now this PR only provides a change to your puppeteer test to point out the problem. We would be happy to provide a solution for this, but need to know whether you are interested in it or rather prefer to fix it yourselves.
The answer to this question is almost always this: we would love to see your solution for this, and are happy to work with you to get this improved. We cannot guarantee the solution would be accepted, or accepted in time as it very much depends on the scope of the fix, how critical it might be and your availability and ours. If you're OK with those parameters, you're most welcome to contribute.
| gharchive/pull-request | 2023-12-04T10:35:59 | 2025-04-01T04:33:30.254150 | {
"authors": [
"bussef",
"mmoayyed"
],
"repo": "apereo/cas",
"url": "https://github.com/apereo/cas/pull/5892",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
367201719 | Interpolate or duplicate missing data values
AFAIK currently there's no way to draw lines on missing data on line charts, only points are drawn.
https://codepen.io/anon/pen/WaxKdp
This doesn't look right when there are many missing data points.
Although one can fill that data by specifying interpolated values or repeating the former value, it would be nice to have an ApexCharts option for how these null values should be handled.
And there's no way to set a different stroke for such "filled" line segments.
I would suggest the following:
options:
nullValues: {
fill: false | 'interpolate' | 'repeat',
stroke: {
width: 2,
dashSpacing: 5,
}
}
Alternative option title could be seriesNulls or seriesEmptyValueHandling
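The proposed repeat and interpolate fill modes could behave roughly like this minimal Python sketch (illustrative only — ApexCharts is JavaScript, and fill_nulls is a made-up helper):

```python
def fill_nulls(series, mode="repeat"):
    """Sketch of the proposed null handling: 'repeat' carries the
    previous value forward; 'interpolate' linearly fills gaps that
    are bounded by known values on both sides."""
    out = list(series)
    if mode == "repeat":
        for i in range(1, len(out)):
            if out[i] is None:
                out[i] = out[i - 1]
    elif mode == "interpolate":
        i = 0
        while i < len(out):
            if out[i] is None:
                j = i
                while j < len(out) and out[j] is None:
                    j += 1  # find the end of the null gap
                if i > 0 and j < len(out):  # gap bounded on both sides
                    step = (out[j] - out[i - 1]) / (j - i + 1)
                    for k in range(i, j):
                        out[k] = out[k - 1] + step
                i = j
            i += 1
    return out
```

Leading and trailing gaps are left as-is under 'interpolate', since there is no second endpoint to interpolate toward.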
+1, you can still pre-process data to drop null values, but that's definitely a required feature
Replacing null values with the former values while drawing seems like an appropriate way to handle it. I will research other options and see the possibility of adding this feature.
Check python pandas interpolate documentation if you want to see more options ;-)
+1
+1
This would be very helpful!
Similar to #747, with more details on how this feature would look.
@junedchhipa any update on this issue?
+1.
Is there any plans to add a feature improvement for this?
It seems this issue links to #747, and vice versa, with no clear resolution or intention.
Thanks! 😃
+1
Fuck bots :(
Implement this feature please
+1
+1 switching to recharts because of this
+1
Just came to this now. There's now a way to do this with the connectNulls prop which is passed to Line
Just came to this now. There's now a way to do this with the connectNulls prop which is passed to Line
Oh wow, really? That's nice!
Just came to this now. There's now a way to do this with the connectNulls prop which is passed to Line
I don't think this is true? Are you confusing this with echarts?
| gharchive/issue | 2018-10-05T13:02:20 | 2025-04-01T04:33:30.266888 | {
"authors": [
"AshConnolly",
"Filaind",
"FlorianMarcon",
"Gobot1234",
"arpit016",
"digows",
"eLvErDe",
"jofftiquez",
"johnwinkcq",
"junedchhipa",
"muehling",
"ngranja19",
"northrhino",
"pokhrelashok",
"rozium",
"steven-prybylynskyi"
],
"repo": "apexcharts/apexcharts.js",
"url": "https://github.com/apexcharts/apexcharts.js/issues/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
936567309 | browserPoolOptions.maxOpenPagesPerBrowser seems to be ignored in PlaywrightCrawler
Describe the bug
Setting maxOpenPagesPerBrowser to 1 always opens 2+ tabs in the same context, making it impossible to use isolated contexts
To Reproduce
create a new Apify.PlaywrightCrawler instance
set browserPoolOptions: { maxOpenPagesPerBrowser: 1 }
have 2 or more URLs; their context will be reused regardless of request userData parameters
Expected behavior
Setting maxOpenPagesPerBrowser to 1 is expected to act like the useIncognitoPages parameter: one browser with a session per tab
System information:
OS: Windows 10 64
Node.js version 16.4.1
Apify SDK version 1.2.1
Additional context
useIncognitoPages seems to be ignored
Most likely related to this https://github.com/apify/apify-js/pull/1013
I can force it to open "1 window = 1 tab" by setting maxOpenPagesPerBrowser: 0, but the problems that creates are even worse. I tried using browserController.close at the end of each handlePageFunction, but of course that breaks the rest of the tabs opened in the same browser context. Only after 3-4 URLs will it open 1 tab per instance.
@szmarczak This has most likely been fixed by some of your recent fixes in browser-pool, right?
Correct, exactly by https://github.com/apify/browser-pool/pull/44
| gharchive/issue | 2021-07-05T00:02:59 | 2025-04-01T04:33:30.310139 | {
"authors": [
"mnmkng",
"pocesar",
"szmarczak"
],
"repo": "apify/apify-js",
"url": "https://github.com/apify/apify-js/issues/1089",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1600929861 | Fix PSC Southbound Endpoint Attachment Module
Fix PSC Southbound Endpoint Attachment Module
[Issue](https://github.com/apigee/terraform-modules/blob/9c99a5107a1c1a30e078894124276841f60a3945/modules/sb-psc-attachment/main.tf#L30)
The organizations/ prefix needs to be added.
Fixed as part of #102.
| gharchive/issue | 2023-02-27T10:48:34 | 2025-04-01T04:33:30.316542 | {
"authors": [
"g-greatdevaks"
],
"repo": "apigee/terraform-modules",
"url": "https://github.com/apigee/terraform-modules/issues/101",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
372402718 | remote debugging via chrome
Hi there,
I come from web based front end background. Wonder if expo+remote-redux-devtools can delivery the similar debugging experience.
E.G. I start up react native app and debug it via expo. Can I use chrome to check redux state like the way that I did with react web? my dev environment is win10.
It could be great if you can provide more details or references in steps. Much appreciated!
@AlexSun98 Hi,
Yes, of course you can use Redux DevTools for debugging Perfi; everything is already configured. All you need is to be on the same network as your mobile device, run your app, and run Redux DevTools.
If you want to use it with your own project, you can see how we add devToolsEnhancer in store.js. The process is very easy and similar to the web.
| gharchive/issue | 2018-10-22T06:02:27 | 2025-04-01T04:33:30.321411 | {
"authors": [
"AlexSun98",
"DaxiALex"
],
"repo": "apiko-dev/Perfi",
"url": "https://github.com/apiko-dev/Perfi/issues/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1773499899 | 🛑 OSC is down
In 25af7be, OSC (https://openstorage.xyz) was down:
HTTP code: 0
Response time: 0 ms
Resolved: OSC is back up in 4949e56.
| gharchive/issue | 2023-06-25T21:31:14 | 2025-04-01T04:33:30.339166 | {
"authors": [
"apinter"
],
"repo": "apinter/OSC-mon",
"url": "https://github.com/apinter/OSC-mon/issues/325",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
652262072 | Sporadically asyncio.TimeoutError during project export
When exporting a project (from our own OBS instance), roughly one in three attempts fails with:
$ ./obsgit -l DEBUG export test:prj ./prj-export
DEBUG:asyncio:Using selector: EpollSelector
Initialized the git repository
Git LFS extension enabled in the repository
...
DEBUG:__main__:End download ...
...
Traceback (most recent call last):
File "./obsgit", line 1212, in <module>
loop.run_until_complete(args.func(args, config))
File "/usr/lib64/python3.6/asyncio/base_events.py", line 488, in run_until_complete
return future.result()
File "./obsgit", line 1044, in export
await exporter.project(project)
File "./obsgit", line 630, in project
*(self.git.delete(package) for package in packages_delete),
File "./obsgit", line 700, in package
*(self.git.delete(package, filename) for filename in files_delete),
File "./obsgit", line 106, in download
await self._download(url_path, filename_path, **params)
File "./obsgit", line 97, in _download
chunk = await resp.content.read(1024 * 4)
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/streams.py", line 368, in read
await self._wait('read')
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/streams.py", line 296, in _wait
await waiter
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/helpers.py", line 596, in __exit__
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
The error occurs after a large portion has already been downloaded; the last successfully downloaded file may differ from run to run. Not sure if this may be related to the network/server setup. The exported project has about 12 packages, ~1000 files to download (according to the debug output).
$ git show-ref --abbrev HEAD
ac6e396 refs/remotes/upstream/HEAD
$ python --version
Python 3.6.10
$ pip list
Package Version
----------------- -------
aiohttp 3.6.2
async-timeout 3.0.1
asyncio 3.4.3
attrs 19.3.0
cached-property 1.5.1
cffi 1.14.0
chardet 3.0.4
idna 2.9
idna-ssl 1.1.0
multidict 4.7.6
pip 20.1.1
pycparser 2.20
pygit2 1.2.1
setuptools 47.3.1
typing-extensions 3.7.4.2
wheel 0.34.2
yarl 1.4.2
$ osc -A ... api /about
<about>
<title>Open Build Service API</title>
<description>API to the Open Build Service</description>
<revision>2.10.5</revision>
<last_deployment>2020-05-19 14:34:00 +0200</last_deployment>
<commit>b8dac32ad172fdad843a31f455a9adff9a3efd71</commit>
</about>
Why use an outdated revision?
The timeout comes from the server. In an older version of obsgit the TO was detected and the download retried, but to simplify the code I decided to drop it.
Another aspect is that it is easy to DoS OBS with this code, as asyncio can easily hammer the server, but I can implement a better approach this time. In any case, the bug is confirmed.
Regarding the outdated version: silly me! I've worked on a copy of upstream and this copy was always "up-to-date".
Same for https://github.com/aplanas/obsgit/issues/13
I face the issue again with the latest revision 0d2a585. Before, it worked "by accident" when triggering the same export command a second or third time.
Now, for one concrete project, it seems to always run into this (server) timeout. I checked the git history for the mentioned TO detection but wasn't successful.
I was just able to reproduce on boo:
$ ./obsgit --config ./.obsgitrc-obs export SUSE:SLE-15:GA sle15
Initialized the git repository
Git LFS extension enabled in the repository
SUSE:SLE-15:GA/opie ...
SUSE:SLE-15:GA/sblim-cmpi-base ...
SUSE:SLE-15:GA/clucene-core ...
SUSE:SLE-15:GA/gnome-menus ...
SUSE:SLE-15:GA/python-azure-mgmt-containerregistry ...
SUSE:SLE-15:GA/python-slip ...
SUSE:SLE-15:GA/libdbi ...
SUSE:SLE-15:GA/liblognorm ...
...
SUSE:SLE-15:GA/createrepo_c ...
SUSE:SLE-15:GA/python-appdirs ...
SUSE:SLE-15:GA/bc ...
SUSE:SLE-15:GA/spice ...
SUSE:SLE-15:GA/python-PyJWT ...
Traceback (most recent call last):
File "./obsgit", line 1339, in <module>
loop.run_until_complete(args.func(args, config))
File "/usr/lib64/python3.6/asyncio/base_events.py", line 488, in run_until_complete
return future.result()
File "./obsgit", line 1151, in export
await exporter.project(project)
File "./obsgit", line 683, in project
*(self.git.delete(package) for package in packages_delete),
File "./obsgit", line 755, in package
*(self.git.delete(package, filename) for filename in files_delete),
File "./obsgit", line 106, in download
await self._download(url_path, filename_path, **params)
File "./obsgit", line 94, in _download
async with self.client.get(f"{self.url}/{url_path}", params=params) as resp:
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/client.py", line 1012, in __aenter__
self._resp = await self._coro
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/client.py", line 582, in _request
break
File "/home/git-home/obsgit/venv/lib/python3.6/site-packages/aiohttp/helpers.py", line 596, in __exit__
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
$ git version
git version 2.26.2
$ pip list
Package Version
----------------- -------
aiohttp 3.6.2
async-timeout 3.0.1
asyncio 3.4.3
attrs 19.3.0
cached-property 1.5.1
cffi 1.14.0
chardet 3.0.4
cryptography 3.0
idna 2.9
idna-ssl 1.1.0
multidict 4.7.6
pip 20.2.2
pycparser 2.20
pygit2 1.2.1
setuptools 47.3.1
six 1.15.0
typing-extensions 3.7.4.2
wheel 0.34.2
yarl 1.4.2
$ git ls -1
* 0d2a585 (HEAD -> master, upstream/master, upstream/HEAD, origin/master) Allows upload of empty data
$ cat .obsgitrc-obs
[export]
url = https://api.opensuse.org
username = xxx
password = xxx
link = always
[import]
url = https://api.opensuse.org
username = xxx
password = xxx
[storage]
type = lfs
[git]
prefix = packages
Traceback looks here a bit different but still TO issue.
The current TO is 5*60 secs, which is the default value for the TCPConnector instance. That seems enough for the client to me, and is a clear indication that the server (boo) is under pressure.
Do you see the TO indeed consume the full 5 minutes, or does it happen sooner? If you retry, does the TO trigger in a different project?
I played around with the TO intervals, set it to 1*60 and 10*60, and used time for a simple measurement:
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
real 1m5.210s
user 0m9.398s
sys 0m0.814s
raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError
real 10m27.069s
user 0m24.614s
sys 0m4.524s
To me this looks like it is related to the provided TO interval, but even extending the TO (10 min) does not help.
Please advice.
Still is not clear for me one point. Is it happening under the same package / project? If the command is executed again, is fetching more data and failing in a different package / project, or is advancing?
Hi Alberto
Nope, it happens randomly to me. I tried exporting from different projects, and even my small one, home:heliochissini:carwos, fails at different times; sometimes retriggering works.
@heliocastro I see, so it is clearly an OBS load issue. I will implement the automatic retry to help, but the root cause of the OBS TO still cannot be fixed in obsgit.
One point is still not clear to me. Is it happening in the same package / project? If the command is executed again, is it fetching more data and failing in a different package / project, or is it advancing?
Same project (boo/SUSE:SLE-15:GA); the first time with the TO set to 1 min in obsgit, the second with the TO set to 10 min.
Running the export for the same project using the same TO, it's failing after a similar time. Stdout lists 3334 packages, which correlates with the number of packages in the actual project. Size and number of files exported to the local directory differ from run to run:
3335 directories, 19071 files
10483152 sle15.1/
3335 directories, 7533 files
3304652 sle15.2/
Size and number of files exported to the local directory differ from run to run:
Thanks! That is indeed an indication of an OBS issue. As commented, there is nothing that obsgit can do to fix the core problem, but I can do an automatic retry to hide some of the consequences.
As a side note, boo should not be the correct place to fetch SUSE:SLE-15:GA. Do you have access to the internal build service?
@aplanas I just used boo to make it reproducible also for others ;)
The core problem is up to the server, that's understood. However, having automatic retries or something handling these kinds of exceptions would be awesome. We just need to ensure that everything gets exported/imported correctly at the end. I don't care for now how many retries are needed in the background or if every package gets fetched successively.
I added a basic retry for the aiohttp methods. With my current internet connection I am not able to reproduce the original issue, so I have not properly tested the decorator. @jloeser, can you give it a try?
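For reference, a retry for async calls can be sketched as a decorator like the one below. This is a minimal illustration of the approach, not obsgit's actual code; the names, attempt count, and back-off policy are all made up:

```python
import asyncio
import functools

def retry(attempts=3, delay=1.0):
    """Retry an async callable when it raises asyncio.TimeoutError.

    Illustrative only: obsgit's real decorator may use different
    names, exceptions, and back-off behavior.
    """
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return await fn(*args, **kwargs)
                except asyncio.TimeoutError:
                    if attempt == attempts:
                        raise
                    # Back off a little longer before each retry
                    await asyncio.sleep(delay * attempt)
        return wrapper
    return decorator

@retry(attempts=3, delay=0.01)
async def flaky(state):
    # Simulate a request that times out twice and then succeeds
    state["calls"] += 1
    if state["calls"] < 3:
        raise asyncio.TimeoutError
    return "ok"

state = {"calls": 0}
result = asyncio.run(flaky(state))
print(result, state["calls"])  # → ok 3
```

In obsgit's case the wrapped functions would be the aiohttp request helpers, and the exception list would likely also need to cover connection errors such as aiohttp.ServerDisconnectedError.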
First of all, I was finally able to export the desired project multiple times without any TO issues using the recent rev c49d735 - that's good.
However, after these successful exports I re-tried the same with the previous rev 0d2a585 and it worked, too. Would this mean that by accident the server acts slightly differently today than on the days before? Adding a log output to the wrapper function indicates that all the functions are executed only once, so no TO error was hit at all.
Btw, exporting boo/SUSE:SLE-15:GA results in aiohttp.client_exceptions.ServerDisconnectedError - but that's OK, as boo may not be happy with that amount of parallel requests.
However, after these successful exports I re-tried the same with the previous rev 0d2a585 and it worked, too. Would this mean that by accident the server acts slightly differently today than on the days before?
I think so, yes. I was not able to reproduce the TO issue with this project. Let's keep the retry code and see in the future.
| gharchive/issue | 2020-07-07T11:56:10 | 2025-04-01T04:33:30.368482 | {
"authors": [
"aplanas",
"coolo",
"heliocastro",
"jloeser"
],
"repo": "aplanas/obsgit",
"url": "https://github.com/aplanas/obsgit/issues/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2628315512 | Feature Request: Support for ANY Clause in WHERE for List Filtering
Thank you for your work on this project!
I would like to suggest a feature that would greatly enhance the flexibility of this library.
Feature Request:
I would like to request the addition of support for the ANY clause within WHERE conditions. Specifically, the goal is to allow filtering over a list of dictionaries (e.g., [{key: value}, {key: value}]) to check if any element within the list meets a given condition.
Current Situation:
At the moment, I am working with data structured as lists of dictionaries, and there doesn’t seem to be a way to use the WHERE clause effectively to filter and return items based on whether any element in such lists meets certain criteria.
Example Scenario:
MATCH (n:NodeLabel)
WHERE ANY(item IN n.listProperty WHERE item.key1 = "specificValue")
RETURN n
Benefit of the Feature:
Adding this functionality would provide users the ability to:
• Perform more complex and expressive queries involving lists and nested data structures.
• Enhance the ability to filter and return results based on conditions within list properties.
This is great, thank you for the request and thoughtful context!
Adding this to the features roadmap for sure!! Unfortunately I don't have an idea of when this will become available, but this does seem like a great feature for us to support here.
Thank you for your response!
I had also tried to implement this functionality myself but ran into some challenges with this lib's processing logic.
If you could provide some guidance or insight into how the code operates, I would be more than happy to give it another shot.
So I can imagine a "proper" addition of this to the language, which would involve a new grammar addition... Or maybe a cheating version: we could think about this by "unrolling" the array into a where X₁ or X₂ or X₃ or ...
But to do "the right thing," here's a sketch:
Update the grammar support for compound_clause to also allow ANY() and ALL()
Create a new function in the transformer (taking inspiration from the existing boolean logic operations) to catch these and "unroll" them into chained ANDs or ORs (I need to think a bit more about performance characteristics of doing this... but I THINK it's mostly harmless)
Transparently call into the existing infix boolean logic operations like or to reuse our existing support for these
Let me know if this is a helpful pointer?
If not, maybe I'll have some time in the next few weeks to take a crack at this too :)
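To make the "cheating version" above concrete, here is how the unrolled check could be evaluated in plain Python. This is only an illustration of the idea with hypothetical names, not grand-cypher's actual transformer code:

```python
def unroll_any(items, key, value):
    """Evaluate ANY(item IN items WHERE item[key] = value) by
    unrolling the list into a chain of OR-ed comparisons."""
    # Equivalent to: items[0][key] = value OR items[1][key] = value OR ...
    return any(item.get(key) == value for item in items)

list_property = [{"key1": "a"}, {"key1": "specificValue"}, {"key1": "b"}]
print(unroll_any(list_property, "key1", "specificValue"))  # → True
print(unroll_any(list_property, "key1", "missing"))        # → False
```

The real implementation would instead build the chained OR expression inside the transformer and reuse the existing infix boolean logic operations, as described above.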
Thanks! This is a really helpful hint.
I will work on developing this feature.
| gharchive/issue | 2024-11-01T04:50:04 | 2025-04-01T04:33:30.376713 | {
"authors": [
"SDJohn-sudo",
"j6k4m8"
],
"repo": "aplbrain/grand-cypher",
"url": "https://github.com/aplbrain/grand-cypher/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
A question about the server.tomcat configuration
I see that only
server.tomcat.accept-count=5000
is configured, but not
server.tomcat.max-threads=?
The default max threads is 200. Since requests are handled asynchronously over long connections, server-side threads are not released for a long time (the DeferredResult timeout is 60 seconds). If there are very many clients, these 200 threads get filled instantly, and the remaining 4800 requests get stuck in the blocking queue. (Note that if each request has to wait 60 seconds before being processed, when would those 4000+ requests ever finish? And since every request has its own timeout, they would probably time out before they even get their turn.)
To verify this, I looked into the Tomcat source code and ran an experiment.
The key request-handling code in the thread pool is as follows:
while (running) {
...
//if we have reached max connections, wait
countUpOrAwaitConnection(); // first check maxConnections; wait if the maximum has been reached
...
// Accept the next incoming connection from the server socket
socket = serverSock.accept();
...
processSocket(socket); // handled asynchronously by a thread (actually calls Executor.execute(sc))
...
countDownConnection(); // decrement the connection count
closeSocket(socket);
}
void processSocket(SocketWrapperBase<S> socket) {
SocketProcessorBase<S> sc = createSocketProcessor(socket);
Executor executor = getExecutor();
executor.execute(sc); // hand off to the thread pool
...
}
For a system like Apollo, if there are many client connections (more than 200), should tomcat.max-threads also be set to a larger value?
An async servlet is different from a normal servlet: an async servlet only holds the connection, not a Tomcat thread. Take a look at Asynchronous Processing.
Thanks for the explanation. I had already looked into AsyncServlet before.
I read that article too, but it does not explain the underlying implementation.
How can it avoid occupying a thread? As far as I know, even a thread in the wait state is still occupied,
so this asynchronous handling cannot simply be putting the thread into wait.
I suppose the request's thread has been released and returned to the thread pool, while the request's state is saved so that another thread can finish the response once processing completes.
However, I have not yet found that code in the Tomcat source; I will look at it again later.
Closing the issue for now. Thanks again!
It is roughly as you understood it; for details, see org.apache.catalina.core.AsyncContextImpl and org.apache.coyote.http11.Http11Processor.
The Tomcat worker thread passes the AsyncContext object to the business-processing thread, and at the same time the Tomcat worker thread is returned to the worker thread pool; this step is where the asynchronous handling begins.
Start the business-logic processing thread and pass the AsyncContext to it: executor.execute(new AsyncRequestProcessor(asyncCtx, secs));
| gharchive/issue | 2018-11-19T10:18:23 | 2025-04-01T04:33:30.389521 | {
"authors": [
"ZJRui",
"nobodyiam",
"zollty"
],
"repo": "apolloconfig/apollo",
"url": "https://github.com/apolloconfig/apollo/issues/1682",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
523072436 | Fragment generated as nullable in Kotlin
Given a nested fragment:
query User {
viewer {
viewerRole {
...nestedData
}
}
}
fragment nestedData on ViewerRole {
# stuff
...availabilityData
}
fragment availabilityData on ViewerRole {
available
# stuff
}
it is generated as nullable for me:
class UserData(
    // stuff
    val fragments: Fragments
) {
    data class Fragments(
        val availabilityData: AvailabilityData?
    ) {
    }
}
This doesn't seem correct, since available and more fields under availabilityData are non-nullable. Should it be generated as nullable in this case or it's a bug?
Sorry for not a very clear repro, I have pretty big schema and query files and it's kind of difficult to isolate just the relevant parts. Maybe related, fragment availabilityData is in a different query file than the rest.
Version
1.2.1
It's probably not a bug, as the nullability of fragments.availabilityData depends on the type condition of UserData rather than on the fact that all fields are non-nullable. For instance, given this query and fragments:
query TestQuery {
hero {
__typename
...HeroDetails
...HumanDetails
}
}
fragment HeroDetails on Character {
__typename
name
... HumanDetails
}
fragment HumanDetails on Human {
__typename
name
}
Will generate this:
data class Fragments(
val heroDetails: HeroDetails,
val humanDetails: HumanDetails?
)
Why:
HeroDetails is defined on Character, and because the hero field in the schema has type Character, Apollo knows exactly that it won't be optional
HumanDetails, on the other hand, is defined on Human, which is a subclass of Character; the response can return another subclass, Droid, and that's why it's optional.
So it's not a bug.
Cleanup, closing.
Hey, sorry for the late response, I didn't quite have time to get back to this issue. There are no multiple types involved, though; the schema is pretty straightforward:
type Viewer {
viewerRole: ViewerRole!
}
type ViewerRole {
available: String!
}
When I move the query so that it is not nested:
query User {
viewer {
viewerRole {
...availabilityData
...nestedData
}
}
}
fragment nestedData on ViewerRole {
# stuff
}
fragment availabilityData on ViewerRole {
available
# stuff
}
Then UserQuery.Data#viewer#viewerRole#fragments#availabilityData is not nullable. It only starts being nullable when I use a fragment within a fragment on the same type, which still doesn't seem right to me.
When I inline the ...availabilityData fragment in any place, I get non-nullable values for all the fields defined inside the fragment, as expected. When I don't nest fragments, they are non-nullable as well. I'd expect in this case that wrapping stuff in a fragment would always result in a non-nullable fragment, with perhaps nullable fields if anything inside could be absent.
@sav007 Pinging in case you missed the message in an already closed issue. Based on the above example it still seems to me that the fragment shouldn't be generated as nullable; could you take another look?
| gharchive/issue | 2019-11-14T19:57:34 | 2025-04-01T04:33:30.396846 | {
"authors": [
"lwasyl",
"sav007"
],
"repo": "apollographql/apollo-android",
"url": "https://github.com/apollographql/apollo-android/issues/1769",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
fix unused implementation
fix: https://github.com/apollographql/apollo-ios/issues/3323
fix the "Custom Scalar should not be edited." output
@matsudamper: Thank you for submitting a pull request! Before we can merge it, you'll need to sign the Apollo Contributor License Agreement here: https://contribute.apollographql.com/
| gharchive/pull-request | 2024-01-22T12:51:48 | 2025-04-01T04:33:30.408600 | {
"authors": [
"apollo-cla",
"matsudamper"
],
"repo": "apollographql/apollo-ios-codegen",
"url": "https://github.com/apollographql/apollo-ios-codegen/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
222298645 | Any plan to support aerojs?
Hi there,
Just want to know if there is any plan to support aerojs https://github.com/aerojs/aero
Thanks
@darting no plans that I know of. We're always more than happy accepting PRs that add a package for your node server framework of choice though :wink:
| gharchive/issue | 2017-04-18T04:30:44 | 2025-04-01T04:33:30.438561 | {
"authors": [
"darting",
"helfer"
],
"repo": "apollographql/graphql-server",
"url": "https://github.com/apollographql/graphql-server/issues/365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
184415677 | Why depend on typed-graphql?
I had a graphql project that had a dev-dependency on @types/graphql and I wanted to introduce graphql-utils. I realised that graphql-utils depends on typed-graphql, and this made my tsc compiler throw a lot of "duplicate" errors.
I had to remove my dev-dependency and introduce a dependency on typed-graphql myself, but I didn't really like it.
Is the dependency to typed-graphql really needed? Can't it be switched to a dev-dependency to @types/graphql?
Last time we checked, the @types version was still missing definitions that exist in typed-graphql.
Also, merging fixes into the @types repo takes much more time.
I believe it will happen in the future, but not in the near term.
Ok, got it. I guess I can live with typed-graphql until the dependency changes to @types.
Yeah it's really unfortunate that the @types/graphql version looks more "legit" even though it's not as complete.
| gharchive/issue | 2016-10-21T07:18:52 | 2025-04-01T04:33:30.457318 | {
"authors": [
"DxCx",
"nhack",
"stubailo"
],
"repo": "apollostack/graphql-tools",
"url": "https://github.com/apollostack/graphql-tools/issues/181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
362297636 | NAT support in HTTP proxies
Description
This PR adds support for native NAT in HTTP proxies. The proxies can expose public ports that are different from the ports the application is actually listening on, and we can implement the port translation ourselves.
/build - automatically fired by gogo with following PRs and commit SHAs v1.0.0
[
{
"project": "nat-support",
"component": "gaia",
"pr-id": "194",
"commit-sha": "879a3d3a59ead2bbbfa9362bed18306caa553b1b"
},
{
"project": "nat-support",
"component": "trireme-lib",
"pr-id": "619",
"commit-sha": "8f3d24de9b11f82919faa9f9035caece8e0eefa2"
},
{
"project": "nat-support",
"component": "enforcerd",
"pr-id": "791",
"commit-sha": "aa07cbfbd116b2517d43e3a9736c5ee86bda1e8b"
},
{
"project": "nat-support",
"component": "apoctl",
"pr-id": "143",
"commit-sha": "d24469e9628535f0d410ea0286acf6ed2b5a9c8f"
},
{
"project": "nat-support",
"component": "backend",
"pr-id": "127",
"commit-sha": "0fa29aee1c4733f27fe57908c763fedac982d718"
}
]
/build - automatically fired by gogo with following PRs and commit SHAs v1.0.0
[
{
"project": "nat-support",
"component": "gaia",
"pr-id": "194",
"commit-sha": "629065fdc7701430c8c31406e01310cb5b29e534"
},
{
"project": "nat-support",
"component": "trireme-lib",
"pr-id": "619",
"commit-sha": "251110b58e85c62d1d40db090fe103a6015a7232"
},
{
"project": "nat-support",
"component": "enforcerd",
"pr-id": "791",
"commit-sha": "aa07cbfbd116b2517d43e3a9736c5ee86bda1e8b"
},
{
"project": "nat-support",
"component": "apoctl",
"pr-id": "143",
"commit-sha": "d24469e9628535f0d410ea0286acf6ed2b5a9c8f"
},
{
"project": "nat-support",
"component": "backend",
"pr-id": "127",
"commit-sha": "0fa29aee1c4733f27fe57908c763fedac982d718"
}
]
/build - automatically fired by gogo with following PRs and commit SHAs v1.0.0
[
{
"project": "nat-support",
"component": "gaia",
"pr-id": "194",
"commit-sha": "629065fdc7701430c8c31406e01310cb5b29e534"
},
{
"project": "nat-support",
"component": "trireme-lib",
"pr-id": "619",
"commit-sha": "251110b58e85c62d1d40db090fe103a6015a7232"
},
{
"project": "nat-support",
"component": "enforcerd",
"pr-id": "791",
"commit-sha": "9523f749d4482bcf673c8bd0fe6605829b7c05fd"
},
{
"project": "nat-support",
"component": "apoctl",
"pr-id": "143",
"commit-sha": "d24469e9628535f0d410ea0286acf6ed2b5a9c8f"
},
{
"project": "nat-support",
"component": "backend",
"pr-id": "127",
"commit-sha": "0fa29aee1c4733f27fe57908c763fedac982d718"
}
]
/build - automatically fired by gogo with following PRs and commit SHAs v1.0.0
[
{
"project": "nat-support",
"component": "apoctl",
"pr-id": "143",
"commit-sha": "d24469e9628535f0d410ea0286acf6ed2b5a9c8f"
},
{
"project": "nat-support",
"component": "backend",
"pr-id": "127",
"commit-sha": "0fa29aee1c4733f27fe57908c763fedac982d718"
},
{
"project": "nat-support",
"component": "gaia",
"pr-id": "194",
"commit-sha": "629065fdc7701430c8c31406e01310cb5b29e534"
},
{
"project": "nat-support",
"component": "trireme-lib",
"pr-id": "619",
"commit-sha": "251110b58e85c62d1d40db090fe103a6015a7232"
},
{
"project": "nat-support",
"component": "enforcerd",
"pr-id": "791",
"commit-sha": "9523f749d4482bcf673c8bd0fe6605829b7c05fd"
}
]
/build - automatically fired by gogo with following PRs and commit SHAs v1.0.0
[
{
"project": "nat-support",
"component": "gaia",
"pr-id": "194",
"commit-sha": "16618a5c79b1a2f4bb2884061718f75ff09af869"
},
{
"project": "nat-support",
"component": "trireme-lib",
"pr-id": "619",
"commit-sha": "251110b58e85c62d1d40db090fe103a6015a7232"
},
{
"project": "nat-support",
"component": "enforcerd",
"pr-id": "791",
"commit-sha": "9523f749d4482bcf673c8bd0fe6605829b7c05fd"
},
{
"project": "nat-support",
"component": "apoctl",
"pr-id": "143",
"commit-sha": "d24469e9628535f0d410ea0286acf6ed2b5a9c8f"
},
{
"project": "nat-support",
"component": "backend",
"pr-id": "127",
"commit-sha": "0fa29aee1c4733f27fe57908c763fedac982d718"
}
]
| gharchive/pull-request | 2018-09-20T17:56:07 | 2025-04-01T04:33:30.465602 | {
"authors": [
"aporeto-dimitri",
"dstiliadis"
],
"repo": "aporeto-inc/trireme-lib",
"url": "https://github.com/aporeto-inc/trireme-lib/pull/619",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
506460031 | Unit Tests for JsonUtil
Type of request
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Refactoring
[ ] Documentation or documentation changes
Related Issue(s)
Concept
Increase Unit Test Coverage
Description of the change
Added Unit Tests for JsonUtil #34
Description of current state
Description of target state
Estimated investment / approximate investment
Initiator
name / company / position / created at
Sreekanth Nelakurthi
Technical information
Platform / target area
known side-effects
other
DB Changes (if so: incl. scripts with definitions and testdata)
How has this Tested?
Unit Test with coverage
Screenshots, Uploads, other documents (if appropriate):
Checklist:
[ ] All the commits are signed.
[x] My code follows the code style of this project.
[x] I agree with the CLA.
[x] I have read the CONTRIBUTING docs.
[x] I have added/updated necessary documentation (if appropriate).
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Kavya seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Hi,
thanks for your interest in our project.
Please note that we only accept signed commits. As your commit is not signed, please sign it as described in https://help.github.com/en/articles/signing-commits.
Thank you again for your interest in our project @shreenelakurthi - in addition to re-signing your commit, please take the following into account:
You have to sign our CLA, and the commit author does not match your user account. This might cause problems, so please check this GitHub help page for more details.
Every new source file in the project requires the license header to be added; otherwise, checks won't pass. You can either add it manually from this file or, if you are using Eclipse, launch the org.aposin.licensescout.core_license_format.launch configuration.
Marked as invalid as it is re-opened as #46
| gharchive/pull-request | 2019-10-14T05:53:46 | 2025-04-01T04:33:30.477000 | {
"authors": [
"CLAassistant",
"StefanFranzWeiser",
"d-gs",
"shreenelakurthi"
],
"repo": "aposin/LicenseScout",
"url": "https://github.com/aposin/LicenseScout/pull/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2645626539 | To do...
[x] - CVE Links in reports
[x] - Verbosity
[x] - Fix instructions
Now just to keep an eye out for bug reports and unexpected results.
| gharchive/issue | 2024-11-09T04:48:22 | 2025-04-01T04:33:30.478547 | {
"authors": [
"appatalks"
],
"repo": "appatalks/ghes-cve-check",
"url": "https://github.com/appatalks/ghes-cve-check/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
132878946 | Question: Can any CNI main plugins work with any CNI IPAM plugins?
I'd like to know if there are any restrictions on which main plugin can work with which IPAM plugin. E.g., can the bridge and ptp plugins work with the dhcp plugin, or do they have to work with the host-local plugin? Can the macvlan plugin work with the host-local plugin, or does it have to work with the dhcp plugin?
I'll go into some of your examples in a bit; up front, there are certain restrictions which stem from the nature of the interface types and how they are used in these plugins.
ptp: we use veth devices to create point-to-point connections between two network namespaces. DHCP won't work out of the box with this, as it needs layer 2 connectivity to a network the DHCP server is listening on. Technically you could combine the ptp and DHCP plugins, but it doesn't make much sense, as it wasn't designed like that in CNI.
bridge: bridge is more likely to work with DHCP, but you still have to go through the extra effort and either run a DHCP server on the interface directly, or attach an interface to the bridge that has layer 2 connectivity to a DHCP server.
macvlan: this can work out of the box with DHCP if you provide a master interface that again has layer 2 connectivity to a DHCP server, but it will also work with host-local if you configure the range to be one of the master interface's network addresses that doesn't conflict with an existing DHCP range
I suggest that you take a look at the CNI docs, the code, and also the kernel's documentation on the specific drivers. If there's any chance, I would like to encourage you to extend CNI's documentation anywhere in the repository with information that is useful for all people getting started with CNI. Simply put, please write up what you learned and create a pull request to include it in our docs :-)
@qianzhangxa is the issue resolved for you or do you have any other questions?
@qianzhangxa If your question has been answered, could you please close the issue? Otherwise further discussions/comments/questions are really welcome.
| gharchive/issue | 2016-02-11T03:27:26 | 2025-04-01T04:33:30.482124 | {
"authors": [
"leodotcloud",
"qianzhangxa",
"steveeJ"
],
"repo": "appc/cni",
"url": "https://github.com/appc/cni/issues/122",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1366519740 | fix: Prevent a case where OpenAPI would generate invalid YAML
If the document was large enough, line breaks could appear in unexpected places. This pull request fixes that issue. The change includes a test using the same data the bug was reported with; the test now parses the generated YAML to spot syntax errors.
Resolves https://github.com/applandinc/vscode-appland/issues/454
The Travis job failed so I am restarting it.
:tada: This PR is included in version 0.41.2 :tada:
The release is available on:
v0.41.2
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2022-09-08T15:01:29 | 2025-04-01T04:33:30.545810 | {
"authors": [
"appland-release",
"dustinbyrne",
"kgilpin"
],
"repo": "applandinc/vscode-appland",
"url": "https://github.com/applandinc/vscode-appland/pull/455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
180412984 | Allow authGSSClientInit principal kwarg to be None.
This PR simply applies the patch from https://github.com/apple/ccs-pykerberos/issues/49 for easier merging.
Fixes #49
Hi @kwlzn
We don't yet have any automated tests hooked up for this project. Some of the included tests require admin-level access to a kerberized service of some variety. If you happen to have such a configuration at your disposal, would you mind running through the included tests (with your patch applied) and posting your results?
I just added some testing notes to clarify how the tests are used, and the requirements / preconditions for each. Normally I wouldn't ask you to do this, but since I don't have an environment handy for this, and since you asked for expediting, this might be a chance to speed things along a bit. If not, that's fine too, I will just need to set up a test environment (which shouldn't take all that long...).
@dreness I unfortunately do not have admin-level access within our infra, sorry. :(
Just spent a lot of hours trying to get the tests to run against a system which I am now declaring to be too complicated for this purpose (Apple's Server.app web service). I will try to find the simplest Docker thing that I can find instead; unfortunately I don't remember exactly what I used last time.
This change looks OK to me.
| gharchive/pull-request | 2016-09-30T22:12:22 | 2025-04-01T04:33:30.549058 | {
"authors": [
"cyrusdaboo",
"dreness",
"kwlzn"
],
"repo": "apple/ccs-pykerberos",
"url": "https://github.com/apple/ccs-pykerberos/pull/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
381313951 | Add "Document Symbols" to status feature list
This is very similar to sourcekitd's "document structure", which can provide a hierarchical view of entities in the current file.
Nitpick, could you also change 'Open Quickly' to the LSP name, 'Workspace Symbols' I think ?
Sure: https://github.com/apple/sourcekit-lsp/pull/6
| gharchive/pull-request | 2018-11-15T19:51:16 | 2025-04-01T04:33:30.720852 | {
"authors": [
"akyrtzi",
"benlangmuir"
],
"repo": "apple/sourcekit-lsp",
"url": "https://github.com/apple/sourcekit-lsp/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
695249861 | Add support for WASI in CFInternal.h
Atomics are not supported on WASI due to SR-12097. The <xlocale.h> header is also not available, as WASI hosts do not ship locale or time zone files and it is the responsibility of applications to bundle that data.
@swift-ci please test
| gharchive/pull-request | 2020-09-07T16:13:24 | 2025-04-01T04:33:30.726232 | {
"authors": [
"MaxDesiatov"
],
"repo": "apple/swift-corelibs-foundation",
"url": "https://github.com/apple/swift-corelibs-foundation/pull/2872",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
870130388 | Re-enable testLitDriverDependenciesTests when run on Apple Silicon
This test was previously failing in swift CI.
I cannot trigger the failure locally so re-enabling here to test with cross-repository PR testing.
The failure was due to having an x86 binary for python3 on the CI bots, which is now resolved.
Running cross-repository test in https://github.com/apple/swift/pull/36936 to confirm, and then this is good to merge.
@swift-ci please test
| gharchive/pull-request | 2021-04-28T16:26:12 | 2025-04-01T04:33:30.728181 | {
"authors": [
"artemcm"
],
"repo": "apple/swift-driver",
"url": "https://github.com/apple/swift-driver/pull/618",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2333964134 | FoundationEssentialsTests: we convert to FSR when checking
The internal function returns a FSR value where as the value being returned from the appendingPathComponent is not the FSR. Explicitly convert to the FSR for the check.
I'm not sure if there is a more elegant manner to handle this scenario. This value is being compared to an internal API which returns a FSR representation so it seems reasonable that the check would convert to the FSR when trying to check the result.
@swift-ci please test
I'm not sure if there is a more elegant manner to handle this scenario. This value is being compared to an internal API which returns a FSR representation so it seems reasonable that the check would convert to the FSR when trying to check the result.
Yeah normally I'd want to avoid this and use a different API instead, but given we're testing an internal function I think it's probably fine. Presumably converting to the file system representation is normalizing something about the path?
Correct - the path separator is being canonicalized.
| gharchive/pull-request | 2024-06-04T16:47:37 | 2025-04-01T04:33:30.730652 | {
"authors": [
"compnerd",
"jmschonfeld"
],
"repo": "apple/swift-foundation",
"url": "https://github.com/apple/swift-foundation/pull/654",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1011664690 | reduce duplicate code in test tool
motivation: code cleanup
changes: remove almost identical code in the two variants of the test method, and replace it with a shared implementation that takes an output redirection and either prints or collects the output
@swift-ci please smoke test
| gharchive/pull-request | 2021-09-30T03:36:30 | 2025-04-01T04:33:30.737597 | {
"authors": [
"tomerd"
],
"repo": "apple/swift-package-manager",
"url": "https://github.com/apple/swift-package-manager/pull/3772",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1665586352 | Format test target in the macro package consistently with other targets
Cherry-pick https://github.com/apple/swift-package-manager/pull/6421 to release/5.9.
@swift-ci Please smoke test
@swift-ci Please smoke test macOS
| gharchive/pull-request | 2023-04-13T02:41:25 | 2025-04-01T04:33:30.739028 | {
"authors": [
"ahoppen"
],
"repo": "apple/swift-package-manager",
"url": "https://github.com/apple/swift-package-manager/pull/6420",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2080927113 | Recipe: Extract platform specific toolset options
Move platform specific logic to LinuxRecipe
@swift-ci test
@swift-ci test
| gharchive/pull-request | 2024-01-14T22:38:21 | 2025-04-01T04:33:30.740089 | {
"authors": [
"MaxDesiatov",
"kateinoigakukun"
],
"repo": "apple/swift-sdk-generator",
"url": "https://github.com/apple/swift-sdk-generator/pull/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1618388837 | [Diagnostic formatter] Add ":" to filename header.
When printing the filename header for diagnostics, also include the line at which the display starts, using the standard :<line number> format, which is recognized by various tools to jump directly to that source file and line.
Fixes https://github.com/apple/swift-syntax/issues/1385.
@swift-ci please test
Thinking about this some more, I think we can do even better, though this already is an improvement so let's merge :)
I'll follow up with some ideas soon 👍
| gharchive/pull-request | 2023-03-10T05:29:30 | 2025-04-01T04:33:30.742022 | {
"authors": [
"DougGregor",
"ktoso"
],
"repo": "apple/swift-syntax",
"url": "https://github.com/apple/swift-syntax/pull/1402",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2223546567 | Deprecate ByteSourceRange in favor of Range<AbsolutePosition>
While being a little more verbose, this has a few advantages:
We only have a single type to represent ranges in SwiftSyntax instead of Range<AbsolutePosition> and ByteSourceRange, both of which we needed to use in sourcekit-lsp
Unifying the convenience functions on the two results in a single type that has all the convenience functions, instead of spreading them to two distinct sets
The use of AbsolutePosition and SourceLength provides type-system guarantees that these are UTF-8 byte positions/lengths, making it harder to accidentally mix e.g. UTF-16 lengths with UTF-8 lengths.
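The type-safety argument above can be illustrated with a minimal sketch. Note this uses a simplified stand-in for SwiftSyntax's `AbsolutePosition` type (only a `utf8Offset` field), not the real API — the point is just that a `Range` over a dedicated position type replaces a separate (offset, length) pair and keeps UTF-8 offsets from being confused with other integer quantities.

```swift
// Simplified stand-in for SwiftSyntax's AbsolutePosition, for illustration only.
// Wrapping the raw Int makes UTF-8 offsets a distinct type, so they can't be
// accidentally mixed with e.g. UTF-16 lengths or plain integers.
struct AbsolutePosition: Comparable {
    let utf8Offset: Int

    static func < (lhs: AbsolutePosition, rhs: AbsolutePosition) -> Bool {
        lhs.utf8Offset < rhs.utf8Offset
    }
}

// A single Range<AbsolutePosition> replaces a separate (offset, length) pair
// like the old ByteSourceRange.
let start = AbsolutePosition(utf8Offset: 10)
let end = AbsolutePosition(utf8Offset: 25)
let editedRange: Range<AbsolutePosition> = start..<end

// The length in UTF-8 bytes falls out of the range's bounds.
let lengthInUTF8Bytes = editedRange.upperBound.utf8Offset - editedRange.lowerBound.utf8Offset
print(lengthInUTF8Bytes) // 15
```

Because `Range` comes from the standard library, all of its existing conveniences (`contains`, `overlaps`, `clamped(to:)`) apply for free instead of being reimplemented on a bespoke range type.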
rdar://125624626
@swift-ci Please test
https://github.com/apple/sourcekit-lsp/pull/1158
@swift-ci Please test
https://github.com/apple/sourcekit-lsp/pull/1158
@swift-ci Please test macOS
@swift-ci Please test
@swift-ci Please test
https://github.com/apple/sourcekit-lsp/pull/1158
@swift-ci Please test
https://github.com/apple/sourcekit-lsp/pull/1158
@swift-ci Please test
@swift-ci Please test Windows
https://github.com/apple/sourcekit-lsp/pull/1158
@swift-ci Please test Windows
| gharchive/pull-request | 2024-04-03T17:49:22 | 2025-04-01T04:33:30.748146 | {
"authors": [
"ahoppen"
],
"repo": "apple/swift-syntax",
"url": "https://github.com/apple/swift-syntax/pull/2588",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |