| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string, 2 classes) | text (string, 0-1.61M chars) |
|---|---|---|---|---|---|
2025-04-01T04:35:03.258087
| 2024-02-08T23:30:08
|
2126231043
|
{
"authors": [
"AdSchellevis",
"jeffstearns"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9508",
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/7219"
}
|
gharchive/issue
|
Kea log file search results are inconsistent and unpredictable when the search string contains punctuation
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug
The Kea log file page at .../ui/diagnostics/log/core/kea is somewhat broken. The Search box does not work as expected when the search string contains punctuation characters.
To Reproduce
Steps to reproduce the behavior:
Configure the Kea DHCP server and let it run until it serves some leases.
Go to '.../ui/diagnostics/log/core/kea'
Note that the log file looks similar to the log file shown below.
Click in the Search box and enter this string: ]]][[[[[[||^%$@[[[!~**
Note that the page shows many results. Note that none of the results contain the search string.
Slowly delete characters from the text in the Search box. Note that more line matches are displayed as the search string is shortened, although they don't match any better than the other lines did.
Expected behavior
Only lines containing the search string are displayed.
If there are rules regarding metacharacters or punctuation or regular expressions, I was expecting to find them on the .../ui/diagnostics/log/core/kea page.
Describe alternatives you considered
I searched for documentation about the Kea integration in OPNsense in hopes of learning the rules for search text. I couldn't find that documentation.
Screenshots
Screenshot 2024-02-07 at 11.14.27 AM.pdf
Relevant log files
| Date | Severity | Process | Line |
|---|---|---|---|
| 2024-02-07T11:13:22-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 44:61:32:d6:1d:5d], cid=[ff:32:d6:1d:5d:00:03:00:01:44:61:32:d6:1d:5d], tid=0x258769e8: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:13:07-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 04:5d:4b:aa:87:b9], cid=[01:04:5d:4b:aa:87:b9], tid=0x159fe5f8: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:12:53-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 a8:b5:7c:49:58:eb], cid=[no info], tid=0x21643b6f: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:12:51-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 bc:24:11:a6:25:09], cid=[ff:11:a6:25:09:00:01:00:01:2d:30:5e:80:bc:24:11:ca:ca:2a], tid=0x86303a20: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:12:49-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 44:61:32:2f:c1:e5], cid=[ff:32:2f:c1:e5:00:03:00:01:44:61:32:2f:c1:e5], tid=0x21322d81: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:12:16-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 c8:e0:eb:3c:6e:43], cid=[01:c8:e0:eb:3c:6e:43], tid=0xb7c7b775: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:12:16-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_INIT_REBOOT [hwtype=1 c8:e0:eb:3c:6e:43], cid=[01:c8:e0:eb:3c:6e:43], tid=0xb7c7b775: client is in INIT-REBOOT state and requests address <IP_ADDRESS> |
| 2024-02-07T11:11:38-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d1fd00] DHCP4_LEASE_ALLOC [hwtype=1 c0:56:e3:6f:42:95], cid=[01:c0:56:e3:6f:42:95], tid=0xe30c7536: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:10:57-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d20400] DHCP4_LEASE_ALLOC [hwtype=1 94:05:bb:10:16:58], cid=[01:94:05:bb:10:16:58], tid=0xb8b7280a: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:10:40-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d20400] DHCP4_LEASE_ALLOC [hwtype=1 90:03:b7:fa:2b:a5], cid=[01:90:03:b7:fa:2b:a5], tid=0x111fd258: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:10:36-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d20400] DHCP4_LEASE_ALLOC [hwtype=1 94:bf:2d:76:82:cd], cid=[01:94:bf:2d:76:82:cd], tid=0xe4ea1ee0: lease <IP_ADDRESS> has been allocated for 600 seconds |
| 2024-02-07T11:10:36-08:00 | Informational | kea-dhcp4 | INFO [kea-dhcp4.leases.0x833d20400] DHCP4_INIT_REBOOT [hwtype=1 94:bf:2d:76:82:cd], cid=[01:94:bf:2d:76:82:cd], tid=0xe4ea1ee0: client is in INIT-REBOOT state and requests address <IP_ADDRESS> |
Additional context
None.
Environment
OPNsense 24.1.1-amd64
FreeBSD 13.2-RELEASE-p9
OpenSSL 3.0.13
This is more of a generic log search thing; input is cleansed quite aggressively. Given the low number of reports in the past, I don't think we should aim for accepting almost anything here, as the number of relevant cases is rather low.
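The behavior reported above is consistent with the search string being treated as a regular expression after the input is cleansed. A minimal sketch of matching punctuation-heavy queries literally by escaping regex metacharacters (hypothetical; Python is used purely for illustration, the actual OPNsense backend is PHP and may handle this differently):

```python
import re

def search_log(lines, query):
    """Match the query literally by escaping regex metacharacters first."""
    pattern = re.compile(re.escape(query))
    return [line for line in lines if pattern.search(line)]

lines = [
    "DHCP4_LEASE_ALLOC [hwtype=1 44:61:32:d6:1d:5d]",
    "client is in INIT-REBOOT state",
]
# Punctuation-heavy queries now match only lines that really contain them.
print(search_log(lines, "[hwtype=1"))  # matches the first line only
print(search_log(lines, "]]][[[[[["))  # matches nothing
```

With escaping in place, shortening the query can only ever widen the set of literal matches, which restores the predictable behavior the reporter expected.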
|
2025-04-01T04:35:03.260609
| 2016-01-10T09:56:11
|
125810796
|
{
"authors": [
"8191",
"fichtner"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9509",
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/pull/629"
}
|
gharchive/pull-request
|
Enhance outbound NAT
Hide manual rules of outbound NAT if mode "automatic" or "disabled" is chosen, since those modes do not obey manual rules. #106
cherry-picked the remaining chunk, also changed the "or" according to the comment earlier. Thanks a bunch! :)
|
2025-04-01T04:35:03.335788
| 2024-06-24T09:22:49
|
2369703174
|
{
"authors": [
"CLAassistant",
"samiulsami"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9510",
"repo": "ops-center/james-project",
"url": "https://github.com/ops-center/james-project/pull/4"
}
|
gharchive/pull-request
|
Merge bugfix from chibenwa/fix-and, allowing usage of emails containing "&"
https://github.com/apache/james-project/pull/2303
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 2 out of 8 committers have signed the CLA.
:white_check_mark: shn27
:white_check_mark: samiulsami
:x: chibenwa
:x: quantranhong1999
:x: romainmoreau
:x: glennosss
:x: vttranlina
:x: jeantil
You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:03.338743
| 2015-01-28T17:00:08
|
55783699
|
{
"authors": [
"dblessing",
"jtimberman",
"rodrigdav"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9511",
"repo": "opscode-cookbooks/users",
"url": "https://github.com/opscode-cookbooks/users/issues/78"
}
|
gharchive/issue
|
ChefSpec matchers missing.
The Supermarket version of this cookbook, 1.7.0, is missing the ChefSpec matchers. Any chance we could get a version with those in it?
These matchers are in master. We just need a release to the Supermarket :) There are a lot of nice fixes going back over the past year that need to be released.
Any idea when they will be released? Or should I just get the master branch?
Good question. I'm hoping a Chef maintainer will come along and release this soon.
@jtimberman can you provide any help on how to get the supermarket version updated?
1.8.0 was released yesterday. https://github.com/opscode-cookbooks/users/commit/79563f635ae807a950985dbf51e5aa8f0032a11f
@jtimberman thank you.
|
2025-04-01T04:35:03.370995
| 2020-11-18T07:59:38
|
745427452
|
{
"authors": [
"MJTheOne",
"Tarpsvo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9512",
"repo": "optimistdigital/nova-multiselect-field",
"url": "https://github.com/optimistdigital/nova-multiselect-field/issues/76"
}
|
gharchive/issue
|
Help text not showing
When using Nova's ->help() method on a Field, nothing is rendered.
I checked the HTML source to make sure it wasn't a CSS issue or something; the help text just doesn't render at all.
Hi! Fixed in version 1.11.2. Good luck!
|
2025-04-01T04:35:03.373862
| 2019-08-20T02:28:00
|
482611898
|
{
"authors": [
"Tarpsvo",
"jplhomer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9513",
"repo": "optimistdigital/nova-multiselect-field",
"url": "https://github.com/optimistdigital/nova-multiselect-field/pull/7"
}
|
gharchive/pull-request
|
Allow native array casting
Rather than storing a stringified array without casting, it would be nice to have the option to use Laravel's native Array & JSON attribute casting to set/get the column attribute as a native Array.
This is already possible by chaining ->fillUsing() on MultiSelect's field:
Multiselect::make('Conditions')
    ->options(...)
    ->fillUsing(function ($request, $model, $attribute, $requestAttribute) {
        $model->$attribute = json_decode($request->$attribute, true);
    }),
However, since values on the front-end are sent through JSON.parse(), this leads to JavaScript errors.
This PR checks to see if the value is already an array.
Oh ha - just noticed this: https://github.com/optimistdigital/nova-multiselect-field/pull/5
I think this PR still takes care of the other components 👍
Cheers! Thanks for helping out.
|
2025-04-01T04:35:03.376136
| 2016-08-24T16:22:58
|
172998234
|
{
"authors": [
"aliabbasrizvi",
"coveralls",
"mikeng13"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9514",
"repo": "optimizely/node-sdk",
"url": "https://github.com/optimizely/node-sdk/pull/4"
}
|
gharchive/pull-request
|
Add environment detection to event builder so it can distinguish betw…
The node-sdk prefix is hardcoded so we always send it regardless of whether the event was sent on server or client. This adds environment detection so we can attribute events properly.
@haleybash-optimizely @aliabbasrizvi @vraja2 @delikat
Coverage increased (+0.003%) to 98.917% when pulling c145f84429bf9f4c6188f1180e193691965c75ab on mng/environment-detection into 4d6b0e8cabbded96308ca4d58aafdc1b21d2273f on master.
LGTM
|
2025-04-01T04:35:03.386189
| 2022-10-07T12:58:21
|
1401136417
|
{
"authors": [
"DarknightCanada",
"HopHouse",
"Tylous",
"mgeeky",
"pr0b3r7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9515",
"repo": "optiv/Freeze",
"url": "https://github.com/optiv/Freeze/issues/2"
}
|
gharchive/issue
|
No Output in Windows
Hello,
The tool does not generate any output on Windows. It works fine on Linux, but not on Windows.
[Freeze ASCII-art banner]
(@Tyl0us)
Soon they will learn that revenge is a dish... best served COLD...
[!] Missing Garble... Downloading it now
exec: no command:
[*] Encrypting Shellcode Using AES Encryption
[+] Shellcode Encrypted
[!] Selected Process to Suspend: notepad.exe
[+] Loader Compiled
[*] Compiling Payload
exec: no command:
[+] Payload loader.exe Compiled
Even though it says the payload compiled, there is no output file.
Looks to be an issue related to where garble is put. I'll have to make some changes for windows...
Hi man! Any plans on supporting Windows anytime soon? :)
Would love to add support for your Freeze in my ProtectMyTooling but currently it's impossible :(
Apologies for the delay. I am in the process of updating multiple tools including this one. For right now I've heard from people that using it with WSL on Windows works fine. I am not sure if that's been thoroughly tested but it might be a workaround atm. If you want to test that for me and let me know it can help me with developing a fix.
Ack! Will try it out, thanks :)
@mgeeky did that work for you or do I need to retool it?
Aaaay sorry Matt, didn't try it yet. Christmas coming now so unsure if I can sit down to this :(
@Tylous Do you mind looking at the pull request #9?
I think it would close this issue and make it work with @mgeeky's ProtectMyTooling tool.
Hi @HopHouse - thanks for picking this up!
Tried compiling your fork and using it with PMT, but no joy:
cmd> D:\dev2\ProtectMyTooling\contrib\Freeze\Freeze.exe -I "calc64.bin" -O "foo.exe"
[.] Command returned:
------------------------------
___________
\_ _____/______ ____ ____ ________ ____
| __) \_ __ \_/ __ \_/ __ \\___ // __ \
| \ | | \/\ ___/\ ___/ / /\ ___/
\___ / |__| \___ >\___ >_____ \\___ >
\/ \/ \/ \/ \/
(@Tyl0us)
Soon they will learn that revenge is a dish... best served COLD...
[!] Missing Garble... Downloading it now
[+] Executed code:
$env:GOBINB=$GOBIN;
$env:GOBIN="d:\test\.lib";
go install mvdan.cc/garble@latest
$env:GOBIN=$GOBINB;
$env:GOBINB=$null
[!] Selected Process to Suspend: notepad.exe
[+] Loader Compiled
[+] Executed code:
$env:GOPRIVATEB=go env GOPRIVATE;
go env -w GOPRIVATE=*
$env:GOOS="windows";
$env:GOARCH="amd64";
d:\test\.lib\garble.exe -seed=random -literals build -o "foo.exe"
go env -w GOPRIVATE=$GOPRIVATEB;
$env:GOPRIVATEB=$null
[*] Compiling Payload
go list error: exit status 1: go: cannot find main module, but found .git/config in d:\test
to create a module there, run:
cd ..\.. && go mod init
[+] Payload foo.exe Compiled
Looks like there needs to be more setup made ahead to satisfy golang dynamic compilation requirements.
Let me take a look at this as well. I will get back to you all shortly.
After looking at it @mgeeky its something I need to tweak ahead of time. @HopHouse I appreciate your pull request but it didn't work for me. I will work on addressing this shortly.
Running into the same error, even when run from inside the cloned repo after a fresh build and having go and garble installed...
C:\Tools\TA0005 Defense Evasion\Freeze>"C:\Tools\TA0005 Defense Evasion\Freeze\Freeze.exe" -I ".\beacon.exe" -O ".\freeze_beacon.exe" -process "MsMpEng.exe" -sandbox
___________
\_ _____/______ ____ ____ ________ ____
| __) \_ __ \_/ __ \_/ __ \\___ // __ \
| \ | | \/\ ___/\ ___/ / /\ ___/
\___ / |__| \___ >\___ >_____ \\___ >
\/ \/ \/ \/ \/
(@Tyl0us)
Soon they will learn that revenge is a dish... best served COLD...
[!] Missing Garble... Downloading it now
exec: no command:
[!]
|
2025-04-01T04:35:03.399258
| 2022-08-29T19:44:11
|
1354791391
|
{
"authors": [
"arabellayao",
"smiraj99"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9516",
"repo": "oracle-livelabs/em-omc",
"url": "https://github.com/oracle-livelabs/em-omc/pull/55"
}
|
gharchive/pull-request
|
WMS ID 5721 + 9522: Advanced Machine Learning Using OCI Logging Analytics
WMS ID: 5721
This workshop shows how to use advanced machine learning capabilities of OCI Logging Analytics in a pre-configured environment.
Hi! Please fix the following issues in your PR:
You should use this file for the “Need Help?” Lab for desktop and liveLabs: “https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md” instead of “https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md”
Please merge the latest changes from main into your branch.
|
2025-04-01T04:35:03.402400
| 2022-06-27T09:43:23
|
1285533559
|
{
"authors": [
"achepuri",
"anooshapilli"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9517",
"repo": "oracle-livelabs/sprints",
"url": "https://github.com/oracle-livelabs/sprints/pull/22"
}
|
gharchive/pull-request
|
WMS ID: 11123 - Added new sprints for OGG EM Plug-in
Hi Kevin,
Could you please approve this pull request for WMS ID 11123? It adds the following new sprints for the OGG EM Plug-in:
How do I discover GoldenGate Microservices instances using the EM CLI verb in Oracle GoldenGate Enterprise Manager Plug-in?
How do I discover GoldenGate (classic) instances using the EM CLI verb in Oracle GoldenGate Enterprise Manager Plug-in?
How do I create credentials for Oracle GoldenGate Classic instance in Enterprise Manager Plug-in?
How do I create credentials for Oracle GoldenGate Microservices (MA) instance in Enterprise Manager Plug-in?
How do I start and stop processes using the EM CLI verb in Oracle GoldenGate Enterprise Manager Plug-in?
How do I download Diagnostic Logs in Oracle GoldenGate Enterprise Manager Plug-in?
Hi @achepuri - This is a good example of a sprint - link and click here for the sprint template.
Sprints do not have a separate prerequisites or video section. Please update the sprints accordingly so the PR can be approved.
|
2025-04-01T04:35:03.403874
| 2021-06-07T17:35:15
|
913790403
|
{
"authors": [
"Djelibeybi",
"totalamateurhour"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9518",
"repo": "oracle/docker-images",
"url": "https://github.com/oracle/docker-images/pull/1954"
}
|
gharchive/pull-request
|
updates for Go 1.16
Update 1.13, 1.14 and 1.15 Dockerfiles
add 1.16 Dockerfile
Signed-off-by: Sergio Leunissen<EMAIL_ADDRESS>
The PR says you updated the "1.13, 1.14 and 1.15" Dockerfiles, but I'm not seeing that in the commits. Can you confirm?
|
2025-04-01T04:35:03.511552
| 2024-02-21T14:41:12
|
2146923644
|
{
"authors": [
"iamstolis",
"neon-dev"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9519",
"repo": "oracle/graaljs",
"url": "https://github.com/oracle/graaljs/issues/801"
}
|
gharchive/issue
|
Parallel ScriptEngine script compilation
Using the ScriptEngine API to compile scripts fails when called from different threads.
This is because the GraalJSBindings of the engine context are accessed during compilation, leading to at least three different types of errors:
Exception in thread "main" java.lang.NullPointerException
at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486)
at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:542)
at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:567)
at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:670)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:765)
at Test.main(Test.java:25)
Caused by: java.lang.NullPointerException: Cannot invoke "java.util.Map.entrySet()" because "this.global" is null
at com.oracle.truffle.js.scriptengine.GraalJSBindings.entrySet(GraalJSBindings.java:198)
at java.base/java.util.AbstractMap.containsKey(AbstractMap.java:148)
at java.scripting/javax.script.SimpleScriptContext.getAttribute(SimpleScriptContext.java:158)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.createSource(GraalJSScriptEngine.java:462)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.compile(GraalJSScriptEngine.java:653)
...
java.lang.IllegalStateException: Multi threaded access requested by thread Thread[#32,ForkJoinPool.commonPool-worker-10,5,main] but is not allowed for language(s) js.
at com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:133)
at com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:1392)
at com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:1058)
at com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:884)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.enterCached(PolyglotEngineImpl.java:2080)
at com.oracle.truffle.polyglot.HostToGuestRootNode.execute(HostToGuestRootNode.java:109)
at com.oracle.truffle.api.impl.DefaultCallTarget.callDirectOrIndirect(DefaultCallTarget.java:85)
at com.oracle.truffle.api.impl.DefaultCallTarget.call(DefaultCallTarget.java:102)
at com.oracle.truffle.polyglot.PolyglotMap.entrySet(PolyglotMap.java:124)
at com.oracle.truffle.js.scriptengine.GraalJSBindings.entrySet(GraalJSBindings.java:198)
at java.base/java.util.AbstractMap.containsKey(AbstractMap.java:148)
at java.scripting/javax.script.SimpleScriptContext.getAttribute(SimpleScriptContext.java:158)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.createSource(GraalJSScriptEngine.java:462)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.compile(GraalJSScriptEngine.java:653)
...
Suppressed: Attached Guest Language Frames (1)
java.lang.IllegalStateException: Multi threaded access requested by thread Thread[#44,ForkJoinPool.commonPool-worker-11,5,main] but is not allowed for language(s) js.
at com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:133)
at com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:1392)
at com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:1058)
at com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:884)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.enterCached(PolyglotEngineImpl.java:2080)
at com.oracle.truffle.polyglot.PolyglotEngineImpl.enterIfNeeded(PolyglotEngineImpl.java:2008)
at com.oracle.truffle.polyglot.PolyglotValueDispatch.hostEnter(PolyglotValueDispatch.java:1256)
at com.oracle.truffle.polyglot.PolyglotContextImpl.parse(PolyglotContextImpl.java:1649)
at com.oracle.truffle.polyglot.PolyglotContextDispatch.parse(PolyglotContextDispatch.java:65)
at org.graalvm.polyglot.Context.parse(Context.java:483)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.checkSyntax(GraalJSScriptEngine.java:680)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.compile(GraalJSScriptEngine.java:664)
at com.oracle.truffle.js.scriptengine.GraalJSScriptEngine.compile(GraalJSScriptEngine.java:654)
...
How to reproduce
Here is a simple reproducer: reproducer.zip
Commented out lines work around these issues (explained below).
Possible workaround
1. Synchronize on the engine bindings to fix all compilation errors:
synchronized (scriptEngine.getBindings(ScriptContext.ENGINE_SCOPE)) {
    ((Compilable) scriptEngine).compile(jsScript);
}
2. Add this line right after instantiating the script engine:
scriptEngine.getBindings(ScriptContext.ENGINE_SCOPE).entrySet();
It resolves compilation slowdowns by initializing the GraalJSBindings context early.
Synchronization from 1) somehow does not take care of this internal race condition. Maybe it's a thread-visibility issue; I'm not sure.
Another workaround would be to just replace the engine bindings with SimpleBindings, but this massively degrades script compilation performance, because the engine will then create a new GraalJSBindings + context per compilation.
Solution?
Synchronize on the context here
https://github.com/oracle/graaljs/blob/39b63b9a2202e1c10248357161ea5e63e2934792/graal-js/src/com.oracle.truffle.js.scriptengine/src/com/oracle/truffle/js/scriptengine/GraalJSScriptEngine.java#L462 and here
https://github.com/oracle/graaljs/blob/39b63b9a2202e1c10248357161ea5e63e2934792/graal-js/src/com.oracle.truffle.js.scriptengine/src/com/oracle/truffle/js/scriptengine/GraalJSScriptEngine.java#L680
Maybe additional synchronization here would fix performance issues from 2):
https://github.com/oracle/graaljs/blob/39b63b9a2202e1c10248357161ea5e63e2934792/graal-js/src/com.oracle.truffle.js.scriptengine/src/com/oracle/truffle/js/scriptengine/GraalJSBindings.java#L88-L92
Using the ScriptEngine API to compile scripts fails when called from different threads.
A ScriptEngine (similar to Java Collection) may or may not be thread safe. If it is not explicitly documented to be thread safe then you should assume that it is not. ScriptEngine provided by graal-js is not thread safe. So, if you want to use it from multiple threads then it is up to you to ensure that it is not used concurrently.
BTW: ScriptEngine API gives you a hint on whether a particular implementation is thread safe through ScriptEngineFactory.getParameter("THREADING"), see the JavaDoc of this method.
The JavaDoc only mentions executing scripts, not compiling them. Compilation, afaik, does not include execution of scripts.
Do you see dangers in compiling scripts concurrently via the mentioned workaround?
The JavaDoc only mentions executing scripts, not compiling them.
That's why I wrote that it gives you a hint. If you want to be that strict about wording then you can try to find any JavaDoc saying that a multi-threaded usage of other parts (like the compile() method) of the scripting API is thread safe ;-).
Do you see dangers in compiling scripts concurrently via the mentioned workaround?
As I mentioned above, if you want to use our ScriptEngine from multiple threads then you should not do so concurrently.
You may also want to know that no compilation (just parsing) takes place when the compile() method is invoked. We compile just the frequently used methods, and the compilation is based on information collected during the execution of these methods. Considering that there was no execution yet, there is no useful code that we can produce.
On the other hand, the usage of compile()/CompiledScript is recommended, because once some hot method is compiled, the compiled code is linked with the corresponding CompiledScript object. If you keep this object alive then you keep the compiled code alive. If you execute some script repeatedly without the usage of CompiledScript, you risk garbage collection of profiling/compilation data (and new profiling/compilation of the script).
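The maintainer's advice above boils down to: the engine is not thread safe, so callers must serialize access themselves. A minimal, language-agnostic sketch of that pattern (shown in Python for illustration; the SharedEngine class is hypothetical and stands in for any non-thread-safe engine):

```python
import threading

class SharedEngine:
    """Hypothetical non-thread-safe engine guarded by a single lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._compiled = []

    def compile(self, script):
        # All callers funnel through one lock, so the underlying engine
        # never observes concurrent access, matching the guidance above.
        with self._lock:
            self._compiled.append(script)
            return len(self._compiled)

engine = SharedEngine()
threads = [threading.Thread(target=engine.compile, args=(f"script-{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(engine._compiled))  # all 8 compile calls were serialized
```

This is exactly what the reporter's synchronized-block workaround does on the Java side; the cost is that compilation from many threads degrades to sequential throughput.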
|
2025-04-01T04:35:03.513376
| 2020-11-25T20:07:49
|
751088724
|
{
"authors": [
"spericas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9520",
"repo": "oracle/helidon",
"url": "https://github.com/oracle/helidon/issues/2547"
}
|
gharchive/issue
|
Enable ignoreExceptionResponse property when upgrading to latest Jersey
See https://github.com/eclipse-ee4j/jersey/pull/4641 for more information. We should enable this property in Helidon once Jersey is released. We should also create a test to verify responses in exceptions thrown by the Client API are ignored after the property is set.
PR #2727
|
2025-04-01T04:35:03.516666
| 2024-05-03T00:46:02
|
2276716852
|
{
"authors": [
"behnazh-w",
"tromai",
"vinkris01"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9521",
"repo": "oracle/macaron",
"url": "https://github.com/oracle/macaron/issues/729"
}
|
gharchive/issue
|
Implement license checks in Macaron
Implement license filtering in Macaron against a configurable, pre-defined set of licenses. Macaron today pulls down code and metadata from GitHub repositories for performing various analyses. With this feature, users would have a certain degree of control over the code and data being fetched and could make it conditional on the specific licenses that the code and data are subject to.
Expected outcome:
Set of licenses (in SPDX identifier format) that are user-configurable in a .ini file or similar
Macaron produces suitable log messages while performing the license checks
Macaron fetches code and data subject to license checks
@vinkris01 Thanks for opening this issue. We can also add a check to report whether the license complies with the allowed licenses, so the user can enforce policies based on the check result.
One thing that we might need to do is to clone the repository to check the license. So, it might not be possible to totally avoid pulling down the source code.
One thing that we might need to do is to clone the repository to check the license.
We could obtain the content of the LICENSE file from a GitHub repository using the GitHub API - https://docs.github.com/en/rest/licenses/licenses?apiVersion=2022-11-28#get-the-license-for-a-repository - without cloning it. I'm not sure how much extra overhead it would introduce.
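A sketch of the allow-list check suggested above, run against the JSON shape returned by GitHub's "get the license for a repository" endpoint (the function name and the allow-list are hypothetical, and the network call is omitted; only a trimmed sample of the response is used):

```python
# Hypothetical helper: decide whether a repository's license, as reported
# by GET /repos/{owner}/{repo}/license, is in a user-configured allow-list
# of SPDX identifiers (e.g. read from a .ini file).
ALLOWED_SPDX = {"MIT", "Apache-2.0", "BSD-2-Clause", "UPL-1.0"}

def license_is_allowed(api_response: dict, allowed=ALLOWED_SPDX) -> bool:
    # The endpoint reports the detected license under "license.spdx_id".
    spdx_id = (api_response.get("license") or {}).get("spdx_id")
    # "NOASSERTION" means GitHub could not classify the license.
    return spdx_id is not None and spdx_id != "NOASSERTION" and spdx_id in allowed

# Trimmed example of the endpoint's JSON shape.
sample = {"license": {"key": "apache-2.0", "spdx_id": "Apache-2.0"}}
print(license_is_allowed(sample))             # True
print(license_is_allowed({"license": None}))  # False: no detectable license
```

Fetching only this metadata avoids cloning the repository for the license check, though a clone may still be needed when GitHub reports "NOASSERTION" and the license must be inspected manually.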
|
2025-04-01T04:35:03.556070
| 2022-07-27T03:10:16
|
1318962720
|
{
"authors": [
"galiacheng",
"majguo"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9522",
"repo": "oracle/weblogic-azure",
"url": "https://github.com/oracle/weblogic-azure/issues/181"
}
|
gharchive/issue
|
Base image deployment failure at ResourcePurchaseValidationFailed
A customer failed at base image deployment on 2022-07-06.
Image: 122140-jdk8-ol76
Error message:
{"statusCode":"BadRequest","statusMessage":"{\"error\":{\"code\":\"ResourcePurchaseValidationFailed\",\"message\":\"User failed validation to purchase resources. Error message: 'The HTTP resource that matches the request URI 'https://storeapi.azure.com/orders/validatePurchase?api-version=2014-09-01' does not support the API version '2014-09-01'.'\"}}","eventCategory":"Administrative","entity":"/subscriptions/xxxxx-xxxx-xxxxxx/resourcegroups/abcpruebas/providers/Microsoft.Compute/virtualMachines/hcis","message":"Microsoft.Compute/virtualMachines/write","hierarchy":"xxxx"}
I encountered a similar issue once before with the following error message:
{
"status": "Failed",
"error": {
"code": "ResourcePurchaseValidationFailed",
"message": "User failed validation to purchase resources. Error message: 'The HTTP resource that matches the request URI 'https://storeapi.azure.com/orders/validatePurchase?api-version=2014-09-01' does not support the API version '2014-09-01'.'"
}
}
However, it seems to be an intermittent issue, as I can't reproduce it after that.
|
2025-04-01T04:35:03.559160
| 2020-04-29T14:54:47
|
609123126
|
{
"authors": [
"ashageetha",
"markxnelson"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9523",
"repo": "oracle/weblogic-kubernetes-operator",
"url": "https://github.com/oracle/weblogic-kubernetes-operator/pull/1611"
}
|
gharchive/pull-request
|
Fix for create-rcu-schema scripts (issue 1610)
Hi @markxnelson @rjeberhard @mriccell @rdas0405,
This PR has fixes for #1610 -
Support for namespace
Fix for image pull secret in rcu yaml
Support for sys and schema passwords
Use the output dir for creating rcu.yaml based on the template rather than updating the existing common/rcu.yaml directly
Use the sys user for the connectivity check instead of the demo user scott, which may not be available in all databases.
Update README.md with latest changes.
Please review and approve these changes.
Thanks,
Asha
this needs to be targeted at develop - we do not update things in master like this without comprehensive testing first
|
2025-04-01T04:35:03.560162
| 2021-07-08T19:38:54
|
940162997
|
{
"authors": [
"russgold",
"tbarnes-us"
],
"license": "UPL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9524",
"repo": "oracle/weblogic-kubernetes-operator",
"url": "https://github.com/oracle/weblogic-kubernetes-operator/pull/2447"
}
|
gharchive/pull-request
|
Don't remove other conditions when adding ConfigChangesPendingRestart
Adding a condition of type ConfigChangesPendingRestart is currently removing any other statuses. This fixes that and does some code cleanup.
The change LGTM given Russ' answer to my question. I recommend getting sign-off from Ryan, Dongbo, and Johnny.
|
2025-04-01T04:35:03.584928
| 2019-06-10T14:35:20
|
454202505
|
{
"authors": [
"johnou"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9525",
"repo": "orbit/orbit",
"url": "https://github.com/orbit/orbit/pull/398"
}
|
gharchive/pull-request
|
Async Hosting.
We will be deploying async hosting this week, will update here with the results.
May have found a bug in cloud.orbit.actors.test.MessageTimeoutTest; looking into that.
|
2025-04-01T04:35:03.610087
| 2022-07-18T18:38:29
|
1308366283
|
{
"authors": [
"iannbing",
"mausworks",
"yannickperrenet"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9526",
"repo": "orchest/orchest",
"url": "https://github.com/orchest/orchest/pull/1124"
}
|
gharchive/pull-request
|
Release v2022.07.3
Description
New Orchest release: v2022.07.3 :smile_cat:
Checklist
[ ] I have manually tested my changes and I am happy with the result.
[x] Do sessions work as expected
[x] New feature: Renaming in the file-manager
[x] New feature: Creating a step on file creation
[x] CLI edge cases
[x] The documentation reflects the changes.
[x] I haven't introduced breaking changes that would disrupt existing jobs, i.e. backwards compatibility is maintained.
[x] In case I changed the dependencies in any requirements.in I have run pip-compile to update the corresponding requirements.txt.
[x] In case I changed one of the services' models.py I have performed the appropriate database migrations (refer to the DB migration docs).
[x] In case I changed code in the orchest-sdk I followed its release checklist.
[x] In case I changed code in the orchest-cli I followed its release checklist.
During manual testing I ran into quite a few bugs/improvements/nitpicks. I am writing them all down here and we can choose which ones should get fixed before we release. Please refer to them explicitly in this thread (e.g. using the respective number).
1. Environments can't be deleted
To reproduce:
Create an environment
Try to delete the environment through the table in the /environments page.
@mausworks Could you fix?
2. Pipeline in read-only mode without hint to explain "why?"
When the pipeline is put in read-only mode due to environments being built, we show a dialog explaining why the pipeline is in read-only mode. However, when the pipeline is in read-only mode because JupyterLab is building, then we don't show this dialog (which we should show).
https://user-images.githubusercontent.com/26223174/179678348-293d1607-3c97-47b6-938f-a67c1bc9ac7e.mp4
@iannbing @ricklamers Any specific reason this isn't happening yet? Otherwise I think it would be great if it can be added.
3. JupyterLab build not started when unsaved changes
See the video. All dialogs are prompted; however, the build is actually never started.
https://user-images.githubusercontent.com/26223174/179678712-0f51c6e3-f5cb-43f2-850e-77412104a8f8.mp4
@iannbing How easy would this be to fix?
4. Select root folder in create-file dialog
When you have a specific folder selected in the file-manager, then pressing "create file" will try to create a file in the selected root folder. However, as a user, I might actually want to create the file elsewhere. To achieve that, I would have to cancel the file creation and select the correct folder to create the file in (why can't I just change it/type it in the dialog?). This is quite cumbersome in my opinion.
@ncspost Is this picked up by the new design?
https://user-images.githubusercontent.com/26223174/179678856-c42091c7-738e-4cea-bb6f-271572730601.mp4
5. Step title doesn't get auto-focused
When creating a new step, the title is auto-focused. However, when creating a new step automatically when a new file is created, then we should also auto-focus the title (which is currently not the case) so that the user can enter it. This way the "new step" button and automatic step creation (on file creation) would work the same way.
https://user-images.githubusercontent.com/26223174/179679511-e6b31a9a-f366-44bd-9a2f-9c1a0db8cc87.mp4
@mausworks Can you fix? Probably slipped through in https://github.com/orchest/orchest/pull/1097.
6. "Infinite" loop when canceling pipeline rename
When trying to rename the pipeline through the file-manager, then as a user I might want to cancel this rename (due to the required session restart). However, when canceling I am left in an infinite loop (see video).
https://user-images.githubusercontent.com/26223174/179679760-742856a4-2cec-4644-ad26-434f57b2b84c.mp4
@mausworks Can you fix? Probably slipped through in https://github.com/orchest/orchest/pull/1115.
"Infinite" loop when canceling pipeline rename
This is sort of intended behavior. To cancel the rename, you hit ESC. Clicking outside of the rename field always saves the rename, thus triggering the dialog.
Do we want to always cancel the rename if the user hits cancel on any dialog, or just this one?
JupyterLab build not started when unsaved changes
It prompts a warning if the user attempts to build JupyterLab while having active sessions. And if you click "Confirm", it stops the sessions but stays in the same view with the "BUILD" button disabled. I think you probably pressed ENTER to dismiss the first "unsaved warning" and also dismissed this warning at the same time (because CONFIRM in this dialog is auto-focused).
But, indeed, it's very strange that the user stays in the same view without seeing any updates about stopping all sessions. Would it be better to navigate back to PipelineEditor, so the user could see if all sessions are stopped?
I think you probably pressed ENTER to dismiss the first "unsaved warning" and also dismissed this warning at the same time (because CONFIRM in this dialog is auto-focused).
I actually used the mouse for both. I just reproduced the issue again and made sure not to press my keyboard.
It prompts a warning if the user attempts to build JupyterLab while having active sessions. And if you click "Confirm", it stops the sessions but stays in the same view with the "BUILD" button disabled.
The sessions are indeed stopped, but I find it strange that the BUILD button is disabled for two reasons:
It never "snaps back"
Because it is in a disabled state, it appears as if the build was invoked (pressing the black build button would turn it into this "disabled" button); however, the build isn't actually started, so as a user I would wait for a very long time (without anything ever happening).
Would it be better to navigate back to PipelineEditor, so the user could see if all sessions are stopped?
I don't think it would be better to navigate back to the PipelineEditor, for two reasons:
I am afraid we would (in special occasions) retrigger a session start
From a UX perspective I don't like us suddenly redirecting to another page in this context
But, indeed, it's very strange that the user stays in the same view without seeing any updates about stopping all sessions.
Why can't we just start the JupyterLab build the moment the sessions are stopped? I am pretty sure this is the old behavior that we had.
Why can't we just start the JupyterLab build the moment the sessions are stopped? I am pretty sure this is the old behavior that we had.
You're right. Fixed this issue by showing a Snackbar while stopping all sessions; see #1130.
Fixed this issue by showing a Snackbar while stopping all sessions
Absolutely awesome fix! Thanks for picking that up so quickly :heart:
|
2025-04-01T04:35:03.612740
| 2022-11-18T08:00:59
|
1454653277
|
{
"authors": [
"astersnake",
"isvsergeev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9527",
"repo": "orchidsoftware/platform",
"url": "https://github.com/orchidsoftware/platform/issues/2450"
}
|
gharchive/issue
|
Filter media by group
Is your feature request related to a problem? Please describe.
Not able to filter attachments by group
Describe the solution you'd like
I suggest adding an additional filtering field to the controllers.
Will send the proposed solution to the PR
I will close this because it is solved and merged in #2451
|
2025-04-01T04:35:03.619319
| 2023-08-25T02:07:13
|
1866142818
|
{
"authors": [
"DrJingLee",
"casey"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9528",
"repo": "ordinals/ord",
"url": "https://github.com/ordinals/ord/issues/2366"
}
|
gharchive/issue
|
The definition of NFTs and Inscriptions: are inscriptions NFTs?
Why are sat inscriptions called "digital artifacts" instead of "NFTs"?
An inscription is an NFT, but the term "digital artifact" is used instead, because it's simple, suggestive, and familiar.
The phrase "digital artifact" is highly suggestive, even to someone who has never heard the term before. In comparison, NFT is an acronym, and doesn't provide any indication of what it means if you haven't heard the term before.
Additionally, "NFT" feels like financial terminology, and both the word "fungible" and the sense of the word "token" as used in "NFT" are uncommon outside of financial contexts.
The handbook and casey's video say Inscriptions are NFTs but list a lot of differences from NFTs. And we all know Casey has expressed his dislike of "NFTs" publicly.
This could be very confusing to new people coming into Ordinals. We try to educate people to use "Digital artifacts" or "Inscriptions" instead of "NFTs".
To make the definition clear, we should let people understand clearly which of the following are correct:
1. NFTs > Inscriptions
2. Inscriptions = digital artifacts, "inscriptions are digital artifacts"
3. Digital artifacts > Inscriptions;
NFT > Digital artifacts >= Inscriptions. If this is correct, the handbook also says: "What are digital artifacts? Simply put, they are the digital equivalent of physical artifacts."
So "digital artifacts > NFTs" sounds correct as well.
With the development of BRC-20, it is getting even more confusing:
Inscriptions = BRC-20 + other "digital artifacts"
So
Inscriptions > BRC-20
and
Inscriptions < NFT
So NFT > Inscriptions > BRC-20 = FT
@casey Thought ?
Inscriptions are definitely NFTs.
Inscriptions are definitely NFTs.
Inscriptions are definitely NFTs.
Ser, you are trying to say this is correct?
Maybe this chart better describes what an Inscription is.
|
2025-04-01T04:35:03.620480
| 2024-01-17T05:42:32
|
2085434335
|
{
"authors": [
"afeezaziz",
"raphjaph"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9529",
"repo": "ordinals/ord",
"url": "https://github.com/ordinals/ord/issues/3028"
}
|
gharchive/issue
|
Allow ord daemon to work with other chains
Is it possible for the daemon to connect to other chains such as BSV or Litecoin or Liquid and create ordinals?
I think there are forks of ord that do this but we will probably not support that.
|
2025-04-01T04:35:03.641781
| 2017-10-24T15:10:25
|
268070908
|
{
"authors": [
"JonasDuclos",
"entmike"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9530",
"repo": "org-scn-design-studio-community/lumiradesignercommunityext",
"url": "https://github.com/org-scn-design-studio-community/lumiradesignercommunityext/issues/13"
}
|
gharchive/issue
|
Issue with Simple Date Component
Hello all,
I have installed the SDK Community extension on my local Lumira 2.0 SP02, but I'm not able to run my application, which uses the SIMPLEDATE component.
It seems Lumira 2.0 is not able to use the rollDays function:
"Message: org.mozilla.javascript.EcmaError: ReferenceError: "internal_setDateByIndex" is not defined. (DATE.rollDays()#7)
Stack trace: com.sap.ip.bi.zen.rt.framework.jsengine.JsEngineException: org.mozilla.javascript.EcmaError: ReferenceError: "internal_setDateByIndex" is not defined. (DATE.rollDays()#7)"
I'm not 100% sure my extension is well installed; you can see it here. (I took this link: https://github.com/org-scn-design-studio-community/lumiradesignercommunityext/blob/master/releases/org.scn.community.sdk.extensions.zip?raw=true, found here: https://blogs.sap.com/2017/08/18/scn-lumira-designer-2.0-sdk-components/)
Best regards !
Jonas.
Hey Jonas,
Thanks for reporting the problem. The good news for you is that it's not your problem. The bad news for me is that I'll need to fix something :)
Give me a day or 2 and I can create a fix.
Good to know I'm not responsible for this issue :D Thanks for your responsiveness, I appreciate it!
Good luck for this !
Hey Jonas,
Give it another try when you have time and let me know if it's fixed for you, now.
Hello Mike,
I tried to re-download your extension but I have an issue when I try to install it.
Did I fail somewhere?
I will take a look today. Perhaps try redownloading it and removing the old version of the extension in the meantime.
I pushed a fix up if you could try to download once more and try again.
Great !
But I'm sorry, I still have an issue. Seems you fixed rollDays but there is the same issue for rollMonths (and I guess for other functions in this component):
Message: org.mozilla.javascript.EcmaError: ReferenceError: "getYear" is not defined. (DATE.internal_rollMonths()#11)
Stack trace: com.sap.ip.bi.zen.rt.framework.jsengine.JsEngineException: org.mozilla.javascript.EcmaError: ReferenceError: "getYear" is not defined. (DATE.internal_rollMonths()#11)
Let's try again. I missed a spot, you are right.
Nice !
Works perfectly !
Big thank for your support Mike !
Glad to be able to help. Sorry for the bugs. Lumira 2.0 changed the way script scope worked, so I'm only hearing these issues as people are using the script APIs. Please feel free to report any new issues you encounter going forward! Going to close this issue as it sounds like it's been corrected.
Thanks!
|
2025-04-01T04:35:03.661462
| 2020-03-29T19:06:42
|
589860668
|
{
"authors": [
"RaphaelPage0110",
"orichalque"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9531",
"repo": "orichalque/yet-an-other-gloomhaven-companion",
"url": "https://github.com/orichalque/yet-an-other-gloomhaven-companion/issues/37"
}
|
gharchive/issue
|
Click Use on a card placed on the board for one turn
If you click on the 'use' button of a card placed on the board using the circle arrow symbol (the one signifying that the card should stay on the board only for a turn), a NaN String appears on the card :
I think we should remove the option to 'use' a card in this situation. What do you think @orichalque ?
The use button should not even be there. I'll fix this asap
I fixed this bug, and also left the number visible at all times (not only on mouse hover)
78acc12
|
2025-04-01T04:35:03.667568
| 2015-11-17T14:39:11
|
117367638
|
{
"authors": [
"lvca",
"robfrank"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9532",
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/5335"
}
|
gharchive/issue
|
shutdown.sh must work with PID+signals
Now that passwords are no longer stored in clear text, we need to change shutdown.sh to send a signal to the server process when no password is provided. So when server.sh starts, a PID file must be written.
The current alternative of passing the password as a parameter must stay there too.
Well, this will solve it on *nix systems. How do we deal with it on Windows? Any ideas?
Look at how Tomcat or other Java apps do that.
I fixed the shutdown on the Java side.
Handling the PID is another story and it is feasible only on Linux, IMHO.
I think the implementation must be reviewed.
just pushed
server.sh saves the pid file in the orientdb home
shutdown.sh has two different behaviours:
shutdown.sh without params checks for the PID file and, if found, simply sends a kill -15
if params are passed, it calls OShutdownMain
Params are no longer positional:
shutdown.sh -h host -P 2424 -u root -p hello
or
shutdown.sh --host host --ports 2424 --user root --password hello
Default values are:
host: localhost
ports: 2424-2430
user: root
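A sketch of the two code paths described above (the PID file location, the home-directory variable, and the Java entry point here are illustrative assumptions, not the actual script contents):

```shell
#!/bin/sh
# Sketch of shutdown.sh's decision logic: with no params, fall back to
# the PID file and signal the server; with params, use the Java client.
PIDFILE="${ORIENTDB_HOME:-.}/orientdb.pid"

decide_shutdown_mode() {
  if [ "$#" -eq 0 ]; then
    # No params: read the PID written by server.sh and send SIGTERM.
    if [ -f "$PIDFILE" ]; then
      echo "kill -15 $(cat "$PIDFILE")"
    else
      echo "error: no PID file at $PIDFILE"
    fi
  else
    # Params given: delegate to the OShutdownMain Java client.
    echo "java OShutdownMain $*"
  fi
}
```

Here the function only echoes the action it would take, so the routing logic can be inspected without actually signalling a process.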
Cool.
|
2025-04-01T04:35:03.670551
| 2017-09-05T14:51:26
|
255304899
|
{
"authors": [
"luigidellaquila",
"zhangshusheng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9533",
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/7720"
}
|
gharchive/issue
|
link in one class
OrientDB Version: v2.20
Java Version: 1.8.0_131
OS: CentOS7
One question
Can the two vertices that belong to one edge be in the same class?
For example, for a class about car routing information that has start and target attributes: can we build an edge from start to target (start and target in the same class)?
Hi @zhangshusheng
Yes, absolutely, there are no constraints about this. You can also have an edge that starts and ends from/to the same vertex
Thanks
Luigi
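As a sketch (the class and property names here are made up for illustration), in OrientDB SQL this could look like:

```sql
CREATE CLASS Place EXTENDS V;
CREATE CLASS Route EXTENDS E;

CREATE VERTEX Place SET name = 'start';
CREATE VERTEX Place SET name = 'target';

-- an edge between two vertices of the same class
CREATE EDGE Route FROM (SELECT FROM Place WHERE name = 'start')
                  TO (SELECT FROM Place WHERE name = 'target');

-- an edge can even start and end on the same vertex
CREATE EDGE Route FROM (SELECT FROM Place WHERE name = 'start')
                  TO (SELECT FROM Place WHERE name = 'start');
```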
Hi @luigidellaquila
OK, I will try to build an edge .
Thank you .
Hi @zhangshusheng
I'm closing this ticket, please feel free to reopen if you need further info
Thanks
Luigi
|
2025-04-01T04:35:03.673629
| 2018-06-17T04:13:32
|
333033906
|
{
"authors": [
"PhantomYdn",
"luigidellaquila"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9534",
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/8338"
}
|
gharchive/issue
|
OUser.hasRole() does not work correctly in case of child ORole class
OrientDB Version: 2.2.30, 3.0.3
There is such line in code: https://github.com/orientechnologies/orientdb/blob/develop/core/src/main/java/com/orientechnologies/orient/core/metadata/security/OSecurityShared.java#L319
public ORole getRole(final OIdentifiable iRole) {
final ODocument doc = iRole.getRecord();
if (doc != null && "ORole".equals(doc.getClassName()))
return new ORole(doc);
return null;
}
As you can see, there is an explicit check for className: it should equal ORole. But if you extend ORole and use a child class in your project, this leads to a bug where hasRole() doesn't work.
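A minimal, self-contained sketch (the class names and structure are illustrative, not OrientDB's actual API) of why an exact class-name comparison breaks for subclasses, and a subclass-aware alternative that walks the schema inheritance chain:

```java
public class RoleCheckSketch {

    // Stand-in for a schema class with a superclass chain.
    record SchemaClass(String name, SchemaClass superClass) {
        boolean isSubClassOf(String ancestor) {
            for (SchemaClass c = this; c != null; c = c.superClass)
                if (c.name.equals(ancestor)) return true;
            return false;
        }
    }

    // Buggy check: only an exact "ORole" class name matches.
    static boolean exactCheck(SchemaClass cls) {
        return "ORole".equals(cls.name());
    }

    // Fixed check: accept ORole and any of its subclasses.
    static boolean subclassAwareCheck(SchemaClass cls) {
        return cls.isSubClassOf("ORole");
    }

    public static void main(String[] args) {
        SchemaClass oRole = new SchemaClass("ORole", null);
        SchemaClass projectRole = new SchemaClass("MyProjectRole", oRole);
        System.out.println(exactCheck(projectRole));         // the bug: prints false
        System.out.println(subclassAwareCheck(projectRole)); // prints true
    }
}
```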
Hi @PhantomYdn
Thank you for reporting, I'm fixing it now
Thanks
Luigi
Hi @PhantomYdn
I just pushed a fix, now it should be OK.
The fix will be released with v 2.2.37 and v 3.0.3
Thanks
Luigi
|
2025-04-01T04:35:03.680175
| 2016-08-16T07:14:25
|
171340678
|
{
"authors": [
"dheeraja00",
"orizens",
"tamarshore"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9535",
"repo": "orizens/angular2-infinite-scroll",
"url": "https://github.com/orizens/angular2-infinite-scroll/issues/63"
}
|
gharchive/issue
|
RC-5 is out
RC-5 is out, so can you please update this to RC-5? It will be great that way.
hi.
Currently it is updated to RC4, and it should work out of the box with RC5.
However, this is considered.
+1
released and closed with 2d784323640063da9d0ea33a3f56b4c016269a77
|
2025-04-01T04:35:03.729486
| 2017-12-08T20:24:38
|
280606893
|
{
"authors": [
"orta",
"seanpoulter"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9536",
"repo": "orta/vscode-jest",
"url": "https://github.com/orta/vscode-jest/pull/188"
}
|
gharchive/pull-request
|
Bug fix: We removed the CodeLens AND dot decorators for passing tests
This fixes #187 and reverts commit f614b89ea9b8be08150bbbaf04e050d5274459af.
|
2025-04-01T04:35:03.734796
| 2016-01-11T13:29:58
|
125944111
|
{
"authors": [
"ideadx",
"j0k3r",
"jackyzhai",
"orthes"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9537",
"repo": "orthes/medium-editor-insert-plugin",
"url": "https://github.com/orthes/medium-editor-insert-plugin/pull/263"
}
|
gharchive/pull-request
|
add fileDeleteOptions
This is a clean version without changes in dist and package.json.
I do recommend adding a main field in package.json though, so that people can use webpack.
Fix #248
Yeah, you could re-add the main field in package.json :+1:
Also, I'll re-add my question:
Just to be sure, if I want to use your new option, I just need to do sth like that, right?
$('.editable').mediumInsert({
enabled: true,
addons: {
images: {
label: '<span class="fa fa-camera"></span>',
fileDeleteOptions: {
headers: {
'X-CSRF-TOKEN': 'MyAwesomeToken'
}
}
}
}
});
Paste my answer here as well.
You are right. Just note that the old options are still available. The way I use it is:
$('.editable').mediumInsert({
addons: {
images: {
deleteScript: '/images/delete',
deleteMethod: 'POST',
fileDeleteOptions: {
headers: {
'X-CSRF-TOKEN': Cookies.get('X-CSRF-Token')
}
}
}
}
});
Sure, I'll add package.json back.
@orthes are you ok with that?
We'll need to update the doc accordingly.
:+1:
Thanks @jackyzhai !
Doc updated :white_check_mark:
https://github.com/orthes/medium-editor-insert-plugin/wiki/v2.x-Configuration
Thanks guys.
@jackyzhai Thanks.
I updated the plugin and the delete request is no longer being called when I remove the image?
deleteScript: '{!! action('XXX\JoyController@putSomeMethod') !!}',
deleteMethod: 'PUT',
fileDeleteOptions: {
_token: "{!! Session::getToken() !!}",
id : 1
},
|
2025-04-01T04:35:03.797529
| 2016-04-26T17:18:31
|
151190523
|
{
"authors": [
"Gergely",
"golith"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9538",
"repo": "osCommerce/oscommerce2",
"url": "https://github.com/osCommerce/oscommerce2/issues/383"
}
|
gharchive/issue
|
Deprecated Class constructors
Deprecated: Methods with the same name as their class will not be constructors in a future version of PHP
Since the stated v2.4 will not be compatible with older v2.3 components anyway, backwards compatibility with older PHP versions is no longer needed, enabling the use of __construct().
fixed
Could I have a look at the code please :-)
@golith
v2.4 is being worked out in the development branch so I couldn't give exact commits.
The v2.3.5 core was muted with respect to the deprecation errors.
|
2025-04-01T04:35:03.812036
| 2024-05-16T12:32:10
|
2300280765
|
{
"authors": [
"achilleas-k",
"cdrage",
"chunfuwen",
"ondrejbudai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9539",
"repo": "osbuild/bootc-image-builder",
"url": "https://github.com/osbuild/bootc-image-builder/pull/438"
}
|
gharchive/pull-request
|
Add support for arbitrary kickstart file injection into ISOs (HMS-3879)
See https://github.com/osbuild/images/pull/631
Does this PR aim to address issue https://github.com/osbuild/bootc-image-builder/issues/433?
@achilleas-k I'm trying to figure out how to test this. I've got a cluster machine that uses RAID5, so I'm hoping to inject something like this:
# Use the entire disk automatically
clearpart --all --initlabel
# Create RAID partitions on each disk
part raid.01 --size=1 --ondisk=sda --asprimary
part raid.02 --size=1 --ondisk=sdb --asprimary
part raid.03 --size=1 --ondisk=sdc --asprimary
part raid.04 --size=1 --ondisk=sdd --asprimary
part raid.1 --size=100 --grow --ondisk=sda --asprimary
part raid.2 --size=100 --grow --ondisk=sdb --asprimary
part raid.3 --size=100 --grow --ondisk=sdc --asprimary
part raid.4 --size=100 --grow --ondisk=sdd --asprimary
# Create the RAID 5 array
raid / --level=5 --device=md0 --fstype="ext4" raid.1 raid.2 raid.3 raid.4
# Specify bootloader installation (adjust as necessary for your setup)
bootloader --location=mbr --driveorder=sda,sdb,sdc,sdd --append="rhgb quiet"
into it, then boot it and have it auto-configure.
For this PR, I'm unable to see what volume/param I should pass to the bootc-image-builder image to pass in the kickstart file.
My second question is that I'm assuming that I won't need to elaborate on the entire install in the kickstart file / just inject what I need?
I haven't tested it here yet, though it's the same code as in osbuild/images, which has been tested. The way to add it is through the config.toml (TOML is more convenient than JSON in this case):
[customizations.installer.kickstart]
contents = """
<bunch of kickstart stuff>
"""
My second question is that I'm assuming that I won't need to elaborate on the entire install in the kickstart file / just inject what I need?
bootc-image-builder will add the ostreecontainer line for the base container and then append the stuff from customizations and that's it. Everything else is up to the user.
@achilleas-k So when the user passes in their own kickstart file to bootc-image-builder, what happens to the previously existing kickstart for the unattended ISO? Would both kickstart files be merged into one?
@achilleas-k So when the user passes in their own kickstart file to bootc-image-builder, what happens to the previously existing kickstart for the unattended ISO? Would both kickstart files be merged into one?
bootc-image-builder will add the ostreecontainer line for the base container and then append the stuff from customizations and nothing else.
It's up to the user to write a kickstart that makes the installation unattended, interactive, or anything they want. Doing anything more would be incredibly error prone. We would have to parse the user's kickstart contents to figure out what we need to add (and where) and that's far beyond the scope of this change.
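To illustrate that behavior (the ostreecontainer arguments and path here are an assumption for the sketch, not the actual generated line), the kickstart that ends up on the ISO would look roughly like:

```text
# written by bootc-image-builder: install the embedded base container
ostreecontainer --url=/run/install/repo/container --transport=oci

# appended verbatim from [customizations.installer.kickstart] contents
clearpart --all --initlabel
bootloader --location=mbr --append="rhgb quiet"
```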
Tests added. Ready for review.
Also, one usability thought: IIUC, the kickstart is now passed via customization.toml... I wonder if we should also allow just mounting it in, e.g. -v kickstart.ks:/kickstart.ks. This way, users wouldn't have to embed their already existing kickstarts into a blueprint.
This is awesome, can you please document it in README.md?
Done!
Also, one usability thought: IIUC, the kickstart is now passed via customization.toml... I wonder if we should also allow just mounting it in, e.g. -v kickstart.ks:/kickstart.ks. This way, users wouldn't have to embed their already existing kickstarts into a blueprint.
Not sure how much easier that makes it, but if you want we can add it. Here or follow-up?
Please re ~review~ nitpick :)
|
2025-04-01T04:35:03.815975
| 2024-06-26T10:33:13
|
2374974351
|
{
"authors": [
"lzap"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9540",
"repo": "osbuild/image-builder",
"url": "https://github.com/osbuild/image-builder/pull/1251"
}
|
gharchive/pull-request
|
main: check for nil in logrus hook
https://github.com/osbuild/image-builder/issues/1248
Apologies.
And there is already an existing fix, what a day today for me: https://github.com/osbuild/image-builder/pull/1250
|
2025-04-01T04:35:03.818863
| 2024-06-03T12:53:46
|
2331084759
|
{
"authors": [
"achilleas-k",
"andremarianiello"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9541",
"repo": "osbuild/images",
"url": "https://github.com/osbuild/images/issues/724"
}
|
gharchive/issue
|
force_password_reset fails when building tar image type
Moved from https://github.com/osbuild/osbuild/issues/1798
Stage: org.osbuild.users
Output:
chroot: failed to run command ‘passwd’: No such file or directory
Traceback (most recent call last):
File "/run/osbuild/bin/org.osbuild.users", line 145, in <module>
r = main(args["tree"], args["options"])
File "/run/osbuild/bin/org.osbuild.users", line 130, in main
subprocess.run(["chroot", tree, "passwd", "--expire", name], check=True)
File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['chroot', '/run/osbuild/tree', 'passwd', '--expire', 'foouser']' returned non-zero exit status 127.
This doesn't happen with qcow2, ova, image-installer, etc.
Opened #741
Hello! I am still having this issue. I think it is because the passwd binary is not provided by shadow-utils or pam, but by the passwd package. I will make a PR with my fix
Hello! I am still having this issue. I think it is because the passwd binary is not provided by shadow-utils or pam, but by the passwd package. I will make a PR with my fix
You're right. That was an oversight on my part. Thanks!
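A side note on what `--expire` actually does: it effectively zeroes the "lastchg" (third) field of the user's shadow(5) entry. The helper below is a hypothetical sketch of forcing a reset by rewriting the file directly, without needing the `passwd` binary in the tree; it is not the actual fix (which adds the `passwd` package):

```python
# Hypothetical sketch: force a password reset by zeroing the third
# field (days since epoch of the last password change) of a user's
# /etc/shadow entry, which is what `passwd --expire` effectively does.
def expire_password(shadow_text: str, user: str) -> str:
    out = []
    for line in shadow_text.splitlines():
        fields = line.split(":")
        if fields and fields[0] == user:
            fields[2] = "0"  # 0 forces a password change at next login
        out.append(":".join(fields))
    return "\n".join(out) + "\n"
```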
|
2025-04-01T04:35:03.822390
| 2021-08-25T12:44:41
|
979104157
|
{
"authors": [
"msehnout",
"ondrejbudai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9542",
"repo": "osbuild/osbuild-composer",
"url": "https://github.com/osbuild/osbuild-composer/pull/1673"
}
|
gharchive/pull-request
|
spec: stop providing golang-github-osbuild-composer
The golang-github-osbuild-composer package was created by mistake a long
time ago. Stop providing it in Fedora 34 and newer and add a note about
removing the condition when Fedora 33 hits EOL.
This pull request includes:
[ ] adequate testing for the new functionality or fixed issue
[ ] adequate documentation informing people about the change such as
[ ] create a file in news/unreleased directory if this change should be mentioned in the release news
[ ] submit a PR for the guides repository if this PR changed any behavior described there: https://www.osbuild.org/guides/
Can you please rebase, @msehnout? One of the tests is failing because of the deleted test RPMs. Sorry :(
|
2025-04-01T04:35:03.839824
| 2022-10-10T10:48:24
|
1402970267
|
{
"authors": [
"7flying",
"achilleas-k",
"gicmo",
"mcattamoredhat",
"runcom"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9543",
"repo": "osbuild/osbuild-composer",
"url": "https://github.com/osbuild/osbuild-composer/pull/3055"
}
|
gharchive/pull-request
|
Extend firewall customizations to add sources
Requires https://github.com/osbuild/osbuild/pull/1137 (see below)
This patch adds support to specify firewalld sources for zones
(will add tests asap)
Signed-off-by: Antonio Murdaca<EMAIL_ADDRESS>
osbuild-composer counterpart to osbuild/osbuild#1157.
Signed-off-by: Irene Diez<EMAIL_ADDRESS>This pull request includes:
[x] adequate testing for the new functionality or fixed issue
[ ] adequate documentation informing people about the change such as
[ ] submit a PR for the guides repository if this PR changed any behavior described there: https://www.osbuild.org/guides/
The tests are all green except the CentOS 9 ones on Packit.
Looks good, but the git history is a bit weird with the addition of the customization and then changing it. Not a big issue though. Did a rebase to hopefully get everything green again and merge.
Looks good, but the git history is a bit weird with the addition of the customization and then changing it. Not a big issue though. Did a rebase to hopefully get everything green again and merge.
Yeah, I wanted to build upon @runcom's work now that I have permissions to write on other people's branches. Seemed more 'fair/accountable' to me.
Thanks for the review Achilleas!
Looks good, but the git history is a bit weird with the addition of the customization and then changing it. Not a big issue though. Did a rebase to hopefully get everything green again and merge.
Yeah, I wanted to build upon @runcom's work now that I have permissions to write on other people's branches. Seemed more 'fair/accountable' to me.
We sometimes use the Co-authored-By: git trailer for this, see https://docs.github.com/en/pull-requests/committing-changes-to-your-project/creating-and-editing-commits/creating-a-commit-with-multiple-authors
Hello, I have been trying to test locally the firewall zones customization, may you please help me to clarify which is the proper syntax to include it in the blueprint?
Example 1:
[customizations.firewall.zones]
name = "dmz"
sources = ["<IP_ADDRESS>"]
or
Example 2:
[[customizations.firewall.zones]]
name = "dmz"
sources = ["<IP_ADDRESS>"]
Example 1 throws an error at push time though; Example 2 didn't, but the compose fails.
Hello, I have been trying to test locally the firewall zones customization, may you please help me to clarify which is the proper syntax to include it in the blueprint?
Example 1: [customizations.firewall.zones] name = "dmz" sources = ["<IP_ADDRESS>"]
or
Example 2: [[customizations.firewall.zones]] name = "dmz" sources = ["<IP_ADDRESS>"]
Example 1 throws an error at push time though; Example 2 didn't, but the compose fails.
Example 2 is the way to go. You need to use an array of tables.
My tests have been done using osbuild at commit 1ecc7843866af7a1896f55ac0bba3482e7ad15d8 and osbuild-composer with this branch rebased in the current main, you need the commits from #3099 .
CS9 CI runners are failing to install osbuild because the repo snapshots we have configured are too old.
We're going to have to update the repos in test/data/repositories/centos-stream-9.json to 20221101 from 20220330.
CS9 CI runners are failing to install osbuild because the repo snapshots we have configured are too old. We're going to have to update the repos in test/data/repositories/centos-stream-9.json to 20221101 from 20220330.
Can I do it myself with a new commit on this PR or do you want to handle that separately?
CS9 CI runners are failing to install osbuild because the repo snapshots we have configured are too old. We're going to have to update the repos in test/data/repositories/centos-stream-9.json to 20221101 from 20220330.
Can I do it myself with a new commit on this PR or do you want to handle that separately?
Go ahead and update them. If we get any issues (from new packages), we can think about dealing with them separately.
Status as of 4/11:
Some of the tests in Gitlab's CI fail in CentOS-9 because there is a missing GPG key on containers-common (bugzilla).
We need to wait until containers-common-1-45.el9 appears in the composes, the Osbuild folks need to make a new snapshot and then we can update the repos at test/data/repositories with the proper snapshot.
@7flying can you rebase and update the repo to 20221115?
@runcom, I will wait until #3136 goes in to rebase again
this needs a rebase :angel:
this needs a rebase angel
Yes I was working on it, but the whole firewall-customizations (https://github.com/osbuild/osbuild-composer/pull/3055/files#diff-35ef1e16afb78c169160d0547c67bcc94cb891f2519ee115dfcabd776bfd54d9R69) pre new-shiny-rhel are nowhere to be found, the stuff that was there before our changes.
this needs a rebase angel
Yes I was working on it, but the whole firewall-customizations (https://github.com/osbuild/osbuild-composer/pull/3055/files#diff-35ef1e16afb78c169160d0547c67bcc94cb891f2519ee115dfcabd776bfd54d9R69) pre new-shiny-rhel are nowhere to be found, the stuff that was there before our changes.
Found the thing, ignore this.
|
2025-04-01T04:35:03.842812
| 2024-05-07T07:24:07
|
2282497951
|
{
"authors": [
"bcl",
"ondrejbudai",
"supakeen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9544",
"repo": "osbuild/otk",
"url": "https://github.com/osbuild/otk/issues/48"
}
|
gharchive/issue
|
Do we want to support multiple omnifest formats?
Currently omnifests are expected to be YAML. Internally we only deal with tree structures, which can also be represented in TOML and JSON.
Do we want to accept those formats for omnifests?
Note that this could clash with #47.
I don't see a reason to accept more formats. Additionally, yaml is basically a superset of json, so we actually have JSON covered.
Originally I thought it would be nice to also support TOML, but then I converted some example into TOML and, well, as much as it pains me to say it, even for humans I think the YAML version is far easier to read and edit.
Originally I thought it would be nice to also support TOML, but then I converted some example into TOML and, well, as much as it pains me to say it, even for humans I think the YAML version is far easier to read and edit.
This has also been my experience. When things get nested deeply and especially when sequences and maps are mixed together at those deeper levels things start getting very funky in TOML.
We will go with only YAML support initially.
|
2025-04-01T04:35:03.862887
| 2022-08-08T14:42:13
|
1331977500
|
{
"authors": [
"oscartbeaumont"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9545",
"repo": "oscartbeaumont/rspc",
"url": "https://github.com/oscartbeaumont/rspc/issues/26"
}
|
gharchive/issue
|
Specta types for other crates & limitations
Specta currently makes the assumption that rust_decimal::Decimal is using the serde-with-str feature and that the type is a string in Typescript. This could be an incorrect assumption but I don't really know how to handle the alternatives at this stage due to it being controlled in the downstream crate by a feature.
The assumption is also made that serde is not set to show human-readable types as this would result in a potential mismatch between JSON and Specta types.
Specta is also missing support for:
Any internal sqlx type due to launchbadge/sqlx#2030
bit-vec - tbh I think this should be done upstream due to it being a struct
I just added support for most of the sqlx compatible types with exceptions above. The features flags match sqlx except for decimal in sqlx is called rust_decimal in Specta to avoid confusion with the decimal crate.
|
2025-04-01T04:35:03.865560
| 2024-06-30T05:34:12
|
2382094003
|
{
"authors": [
"oscbyspro"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9546",
"repo": "oscbyspro/Ultimathnum",
"url": "https://github.com/oscbyspro/Ultimathnum/issues/37"
}
|
gharchive/issue
|
No Endianness or Signedness protocols
The Endianness or Signedness types are just constant Bool(s) at this point. The methods that used them are either inlineable, too complicated for a dynamic comparison to matter, or designated slow paths. There is some generic stuff you can do with BinaryInteger.Mode, but they are no longer relevant. I still prefer named cases over Bool(s) so I might turn them into enums, we will see.
The enum cases also let me upgrade some function signatures:
DataInt.signum(
of: x, mode: .signed
)
DataInt.comparison(
lhs: x, mode: .signed,
rhs: y, mode: .unsigned
)
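The signatures above are Swift, but the underlying idea is language-agnostic. A rough Python illustration (not Ultimathnum's actual API; names and widths are assumptions for the sketch) of why a word comparison needs a per-operand signedness mode:

```python
# Illustrative only: the same 8-bit pattern 0xFF means -1 when interpreted
# as signed two's complement and 255 when interpreted as unsigned, so a
# comparison between raw words must know each operand's mode.

def as_int(word: int, signed: bool, bits: int = 8) -> int:
    """Interpret a raw bit pattern under the given signedness."""
    word &= (1 << bits) - 1
    if signed and word >= 1 << (bits - 1):
        word -= 1 << bits
    return word

def comparison(lhs: int, lhs_signed: bool, rhs: int, rhs_signed: bool) -> int:
    """Return -1, 0, or 1: the signum of (lhs - rhs) under the given modes."""
    a = as_int(lhs, lhs_signed)
    b = as_int(rhs, rhs_signed)
    return (a > b) - (a < b)

print(comparison(0xFF, True, 0x01, False))   # -1: signed -1 < unsigned 1
print(comparison(0xFF, False, 0x01, False))  #  1: unsigned 255 > 1
```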
|
2025-04-01T04:35:03.888437
| 2020-08-28T09:45:33
|
687951355
|
{
"authors": [
"Belval",
"Stupesmith",
"fawazahmed0",
"frederick0291",
"oschwartz10612",
"p2k-ko",
"yogi2806"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9547",
"repo": "oschwartz10612/poppler-windows",
"url": "https://github.com/oschwartz10612/poppler-windows/issues/5"
}
|
gharchive/issue
|
Empty file while creating Tiff-files from PDF
Hi, while testing the Python module pdf2image I noticed an error when creating TIFF files from a PDF document.
pdf2image uses pdftocairo in case of the Tiff-format:
https://github.com/Belval/pdf2image/issues/155
I used the 0.90.1 release on Windows 10 and tried to execute pdftocairo directly:
D:\pdf2image_test
(venv) λ pdftocairo.exe -v
pdftocairo version 0.90.1
Copyright 2005-2020 The Poppler Developers - http://poppler.freedesktop.org
Copyright 1996-2011 Glyph & Cog, LLC
D:\pdf2image_test
(venv) λ pdftocairo.exe -tiff example.pdf
D:\pdf2image_test
(venv) λ dir example*
Datenträger in Laufwerk D: ist Daten
Verzeichnis von D:\pdf2image_test
28.08.2020 11:39 0 example-1.tif
01.07.2020 11:36 761.716 example.pdf
2 Datei(en), 761.716 Bytes
0 Verzeichnis(se), 16.826.216.448 Bytes frei
I tried the same example.pdf on a Debian 10 Linux (with poppler-utils 0.71.):
➜ ~ pdftocairo -tiff example.pdf
➜ ~ ls example* -1
example-1.tif
example-2.tif
example-3.tif
example-4.tif
example-5.tif
example-6.tif
example-7.tif
example-8.tif
example.pdf
➜ ~
On Debian 10 it is working as expected. Am I missing any dependencies for pdftocairo on Windows 10?
Best regards
Stephan
Hi Stephan,
I replicated your issue and also got an empty file. Because it ran without any errors it leads me to believe that it is not a missing dependency.
Unfortunately I am not very familiar with the poppler library itself. This repository was thrown together to package it from conda-forge in a zip for Belval's project and ease of use.
I am sorry to send you further down the rabbit hole, but I would ask the guys over at poppler-feedstock as this likely would need to be fixed in the conda package before I can updated it here.
I apologize that I could not be of more help!
Owen
From looking at the recipe it seems like libtiff is included: https://github.com/conda-forge/poppler-feedstock/blob/f98dc28d3138c459ca8239811f794eaa749af79b/.ci_support/win_.yaml#L22
@p2k-ko if you don't have the time I will probably contact the feedstock maintainers because someone else is having the same issue. Would you be kind enough to see if you can reproduce the issue with this build: https://blog.alivate.com.au/poppler-windows/ ?
@Belval I tried the mentioned build. The issue also occurs with this version:
λ pdftocairo.exe -v
pdftocairo version 0.68.0
Copyright 2005-2018 The Poppler Developers - http://poppler.freedesktop.org
Copyright 1996-2011 Glyph & Cog, LLC
λ pdftocairo.exe -tiff example.pdf
-: Error writing TIFF header.
Error writing example-1.tif
The error message "Error writing TIFF header" was not present with the Poppler 0.90
Then the error is probably not related to feedstock. Do we have anyone who ever successfully converted to TIFF on a Windows machine?
@Elephant940 confirmed that the Windows version was never tested after my freetype patches, and I confess I did not either after it built.
Hello everyone,
If I can add any information here: I have the exact same problem as p2k-ko.
I work with PDF files full of jbig2 encoded images.
I have this result when i try to extract the images :
D:\working\extract_tiff>pdfimages -v
pdfimages version 0.90.1
Copyright 2005-2020 The Poppler Developers - http://poppler.freedesktop.org
Copyright 1996-2011 Glyph & Cog, LLC
D:\working\extract_tiff>pdfimages -jbig2 my_pdf.pdf .\extract\
D:\working\extract_tiff>dir .\extract
Répertoire de D:\working\extract_tiff\extract
31/08/2020 16:47 <DIR> .
31/08/2020 16:47 <DIR> ..
31/08/2020 16:51 33 831 -000.jb2e
1 fichier(s) 33 831 octets
2 Rép(s) 49 204 383 744 octets libres
Here I should also have a .jb2g file, which is the header necessary to build the image.
It could maybe explain p2k-ko's error : "Error writing TIFF header"
And another test directly in python :
I use the convert_from_path method of pdf2image
test = convert_from_path(os.path.join(dir, file), fmt="tiff")
Traceback (most recent call last):
File "C:\Python\venv\Projet_IA\lib\site-packages\IPython\core\interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-9e55175c5bca>", line 1, in <module>
test = convert_from_path(os.path.join(dir, file), fmt="tiff")
File "C:\Python\venv\Projet_IA\lib\site-packages\pdf2image\pdf2image.py", line 206, in convert_from_path
images += _load_from_output_folder(
File "C:\Python\venv\Projet_IA\lib\site-packages\pdf2image\pdf2image.py", line 499, in _load_from_output_folder
images.append(Image.open(os.path.join(output_folder, f)))
File "C:\Python\venv\Projet_IA\lib\site-packages\PIL\Image.py", line 2930, in open
raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file '...\\AppData\\Local\\Temp\\tmphatmweuv\\da729fb9-2968-4fd2-8e72-3574d3bbacf4-1.tif'
The .tif is created in the temp folder but like in our problem it's empty.
Not sure this post will help, but here it is.
I will also try with the 0.68 build.
Have we made any progress on this issue?
Even I faced this issue
ok, this is how I got tiff working , Link to StackOverflow
if someone want to look into this issue, you want to see how the build is done on windows at here and here and maybe replicate the same build steps at GitHub actions
ok, this is how I got tiff working , Link to StackOverflow
Thank you so much for this solution it's very nice and easy to use.
Another way to build poppler and being able to use pdftocairo to extract tiff from pdf is to use WSL.
I successfully did it.
But your solution is way easier and "callable" from python.
That is a great workaround!
I just built poppler-20.09.0 so see if this fixes the issue.
That is a great workaround!
I just built poppler-20.09.0 so see if this fixes the issue.
Nope, doesn't seem to work
Okay. I will take this up with the poppler-feedstock guys shortly.
Any ETA on this issue ?
I have raised same issue in poppler's forum if you could follow-up or get some sort of solution from them:
https://gitlab.freedesktop.org/poppler/poppler/-/issues/985
A similar issue was already raised at here: https://gitlab.freedesktop.org/poppler/poppler/-/issues/820 , you can use msys2 package, it doesn't have this problem, here's the steps you can follow
A similar issue was already raised at here , you can use msys2 package, it doesn't have this problem, here's the steps you can follow
Sure thanks, let me try that
I apologize for being slow, I have been quite busy.
After building libtiff from the latest source I could find and trying to use it I got the same result.
I also installed msys2 and used their libtiff dlls and it throws the following error:
I will reach out to the poppler feedstock guys to get their take tonight.
Peter Williams at poppler-feedstock has also identified this as an issue with the libtiff-feedstock and has opened an issue on their repo.
This should be fixed in the latest release: https://github.com/oschwartz10612/poppler-windows/releases/tag/v21.01.0
Please let me know if there are any further issues!
Hello,
I am currently having issues with this on poppler. I am trying to convert a 200 page pdf to TIFF. But Poppler was only converting the first page.
I tried running it directly with pdftocairo and it is indeed only converting one page.
Converting the pdf file into other image types did not cause any issues and all the pages were converted.
Can anybody take a look at this?
|
2025-04-01T04:35:03.961461
| 2017-10-16T11:47:43
|
265741119
|
{
"authors": [
"prabin525",
"spyhunter99"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9548",
"repo": "osmdroid/osmdroid",
"url": "https://github.com/osmdroid/osmdroid/issues/740"
}
|
gharchive/issue
|
Offline Map not showing
I am trying to use offline maps with osmdroid. I have created the tiles and stored in the osmdroid directory.
map.setUseDataConnection(false);
File f = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/osmdroid/");
if (f.exists()) {
File[] list = f.listFiles();
if (list != null) {
for (int i = 0; i < list.length; i++) {
if (list[i].isDirectory()) {
continue;
}
String name = list[i].getName().toLowerCase();
if (!name.contains(".")) {
continue; //skip files without an extension
}
name = name.substring(name.lastIndexOf(".") + 1);
if (name.length() == 0) {
continue;
}
if (ArchiveFileFactory.isFileExtensionRegistered(name)) {
try {
OfflineTileProvider tileProvider = new OfflineTileProvider(new SimpleRegisterReceiver(context),
new File[]{list[i]});
String source = "";
IArchiveFile[] archives = tileProvider.getArchives();
if (archives.length > 0) {
Set<String> tileSources = archives[0].getTileSources();
if (!tileSources.isEmpty()) {
source = tileSources.iterator().next();
map.setTileSource(FileBasedTileSource.getSource(source));
} else {
map.setTileSource(TileSourceFactory.MAPNIK);
Toast.makeText(context,"error 1", Toast.LENGTH_SHORT).show();
}
} else {
map.setTileSource(TileSourceFactory.MAPNIK);
Toast.makeText(context,"error 2", Toast.LENGTH_SHORT).show();
}
Toast.makeText(context, "Using " + list[i].getAbsolutePath() + " " + source, Toast.LENGTH_SHORT).show();
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
}
}
The code I have used is shown above, and the toast message appears with OSMPublicTransport as the source, but I can't see the map tiles.
The logcat is as below
10-16 17:14:33.134 10626-10626/np.com.techkunja.vtracker.vtracker_driverapp I/StorageUtils: /storage/emulated/0 is writable
10-16 17:14:33.135 10626-10626/np.com.techkunja.vtracker.vtracker_driverapp I/StorageUtils: /sdcard is writable
10-16 17:14:33.138 10626-10626/np.com.techkunja.vtracker.vtracker_driverapp I/StorageUtils: /mnt/sdcard is writable
10-16 17:14:33.194 10626-10626/np.com.techkunja.vtracker.vtracker_driverapp I/OsmDroid: Using tile source: Mapnik
10-16 17:14:33.229 10626-10626/np.com.techkunja.vtracker.vtracker_driverapp I/OsmDroid: sdcard state: mounted
The logcat still reports the Mapnik tile source. What's the problem here?
A few things to try:
did you step through the code to determine if the correct offline source is being selected?
unfortunately we don't have the ability to determine what zoom levels and bounds are available in each archive or tile source. You may want to consider setting the map center and zoom level based on some tiles that you know are in the archives you have
closing due to lack of feedback/response. reopen or comment back if this is still an issue
|
2025-04-01T04:35:03.976553
| 2019-02-26T09:49:27
|
414520745
|
{
"authors": [
"iandees",
"tuukka"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9549",
"repo": "osmlab/onosm.org",
"url": "https://github.com/osmlab/onosm.org/issues/71"
}
|
gharchive/issue
|
Make some fields obligatory
Make some fields obligatory to prevent accidental incomplete submissions
Which fields are you thinking should be obligatory?
Sorry, I don't remember the case exactly anymore. Looking at the UI now, probably category and name?
In 3bd71f58005004d0732c3db0a93980485aeee09e, I made it so that at least one category must be selected, name must be entered, and either website or phone must be entered.
Thanks!
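The rule from that commit can be sketched as a small predicate (illustrative Python only; the real onosm.org check is client-side JavaScript, and the field names here are assumptions): at least one category, a non-empty name, and a website or a phone number.

```python
# Sketch of the "obligatory fields" rule described above.
# Field names ("categories", "name", "website", "phone") are assumed
# for illustration; they are not taken from the onosm.org source.

def is_complete(submission: dict) -> bool:
    has_category = bool(submission.get("categories"))
    has_name = bool(submission.get("name", "").strip())
    has_contact = bool(submission.get("website") or submission.get("phone"))
    return has_category and has_name and has_contact

print(is_complete({"categories": ["cafe"], "name": "Corner Cafe",
                   "phone": "+358 40 1234567"}))  # True
print(is_complete({"categories": [], "name": "Corner Cafe",
                   "website": "https://example.com"}))  # False
```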
|
2025-04-01T04:35:03.977440
| 2020-01-28T07:20:19
|
556027726
|
{
"authors": [
"Sanych",
"SomeoneElseOSM"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9550",
"repo": "osmlab/osm-community-index",
"url": "https://github.com/osmlab/osm-community-index/pull/329"
}
|
gharchive/pull-request
|
Fix invalid geojsons
The first and the last points were not the same
@Sanych thanks - that's what happens when you copy and paste something and it is handled OK by the software using it!
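For reference, RFC 7946 requires each linear ring of a GeoJSON polygon to end on the same position it starts. A small illustrative sketch (not the repository's tooling; the coordinates are made up) that detects and closes an open ring:

```python
import json

# Hypothetical polygon with an unclosed exterior ring (first != last point),
# the kind of invalid geojson this PR fixed.
feature = json.loads("""
{
  "type": "Polygon",
  "coordinates": [[[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]]]
}
""")

def close_rings(polygon):
    """Append the first position to any ring that does not end where it
    starts, as RFC 7946 requires for valid linear rings."""
    for ring in polygon["coordinates"]:
        if ring[0] != ring[-1]:
            ring.append(list(ring[0]))
    return polygon

closed = close_rings(feature)
print(closed["coordinates"][0][0] == closed["coordinates"][0][-1])  # True
```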
|
2025-04-01T04:35:03.983905
| 2022-10-25T01:01:22
|
1421681528
|
{
"authors": [
"czarcas7ic",
"mattverse"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9551",
"repo": "osmosis-labs/osmosis",
"url": "https://github.com/osmosis-labs/osmosis/pull/3137"
}
|
gharchive/pull-request
|
feat: milestone 3
Closes: #XXX
What is the purpose of the change
Add a description of the overall background and high level changes that this PR introduces
(E.g.: This pull request improves documentation of area A by adding ....
Brief Changelog
(for example:)
The metadata is stored in the blob store on job creation time as a persistent artifact
Deployments RPC transmits only the blob storage reference
Daemons retrieve the RPC data from the blob cache
Testing and Verifying
(Please pick one of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Added unit test that validates ...
Added integration tests for end-to-end deployment with ...
Extended integration test for ...
Manually verified the change by ...
Documentation and Release Note
Does this pull request introduce a new feature or user-facing behavior changes? (yes / no)
Is a relevant changelog entry added to the Unreleased section in CHANGELOG.md? (yes / no)
How is the feature or change documented? (not applicable / specification (x/<module>/spec/) / Osmosis docs repo / not documented)
I'm wondering why there are so many merge conflicts 🤔
|
2025-04-01T04:35:03.987809
| 2018-11-01T16:57:19
|
376479191
|
{
"authors": [
"hdoupe",
"martinholmer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9552",
"repo": "ospc-org/ospc.org",
"url": "https://github.com/ospc-org/ospc.org/pull/939"
}
|
gharchive/pull-request
|
Update to Tax-Calculator 0.22.2
This PR updates PolicyBrain to Tax-Calculator 0.22.2. I hope to resolve https://github.com/ospc-org/ospc.org/issues/770 with this PR, too.
Everything looks good with the update except that there's an issue installing taxpuf. I'm going to keep tinkering with that and then take a look at resolving #770.
cc @martinholmer @MattHJensen
I referred to 0.20.2 instead of 0.22.2 in describing this PR, but the changes in the code are correct. That is, Tax-Calculator is updated to 0.22.2.
@hdoupe asked in #939:
are these all of the columns that are expected for the difference table?
Yes, it looks like it. There are now two more than there used to be, right?
Thanks for fixing this.
Yep. No problem, this was a simple fix. It should have been done long ago.
I'll put this on the test server today and try to get it into production tomorrow or on Monday.
I plan to merge #939 once the tests pass. The merging of this PR was delayed due to installation complications on the PolicyBrain and the PUF package sides.
|
2025-04-01T04:35:04.017735
| 2013-04-02T23:09:13
|
598226382
|
{
"authors": [
"osrf-migration",
"scpeters"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9553",
"repo": "osrf/sdformat",
"url": "https://github.com/osrf/sdformat/issues/4"
}
|
gharchive/issue
|
sdformat.com doesn't have documentation of sdf elements and versions
Original report (archived issue) by Steve Peters (Bitbucket: Steven Peters, GitHub: scpeters).
I'm thinking of this: http://gazebosim.org/sdf/dev.html
http://sdformat.org/spec
|
2025-04-01T04:35:04.024097
| 2023-04-29T18:54:44
|
1689654312
|
{
"authors": [
"JamesKunstle",
"sgoggins"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9554",
"repo": "oss-aspen/8Knot",
"url": "https://github.com/oss-aspen/8Knot/pull/313"
}
|
gharchive/pull-request
|
alternate ports experiment for multiple instances on a compute
Opening this PR solely to help facilitate the parameterization of ports, by exposing what I thought would work to eliminate this error when changing the redis and 8Knot ports:
flower_1 | raise ConnectionError(str(exc)) from exc
flower_1 | kombu.exceptions.OperationalError: Error 111 connecting to cache:6388. Connection refused.
callback-worker_1 | [2023-04-29 18:51:54,827: ERROR/MainProcess] consumer: Cannot connect to redis://cache:6388//: Error 111 connecting to cache:6388. Connection refused..
callback-worker_1 | Trying again in 6.00 seconds... (3/100)
@sgoggins I understand the request- I'll look over what needs to be done. Does the 8Knot instance need to be on a port other than 8050? I bet Flower needs to be on something other than 5555.
Its actually just redis that doesn't appear to be working:
flower_1 | raise ConnectionError(str(exc)) from exc
flower_1 | kombu.exceptions.OperationalError: Error 111 connecting to cache:6388. Connection refused.
So for multiple instances on the same machine, you'd like the ports to be non-colliding. I can automate that with a script that'll feed parameters to the docker-compose script so that instead of running 'docker-compose up --build' command you run './8knot_up.sh' and then the defaults will be '8050, 5555, and 6379' but you can override the defaults.
So for multiple instances on the same machine, you'd like the ports to be non-colliding. I can automate that with a script that'll feed parameters to the docker-compose script so that instead of running 'docker-compose up --build' command you run './8knot_up.sh' and then the defaults will be '8050, 5555, and 6379' but you can override the defaults.
Would I override the defaults in the 8knot_up.sh script? Or somewhere else? Let me know when I can experiment with this. :)
currently investigating automatically assigning the next available port up from 8050, 6379, and 5555 respectively. i.e. some netstat | grep '8050' kind of thing to iteratively find the next available port.
otherwise the startup script will work like ssh-keygen w.r.t how you can just press enter through the defaults or change them if needed.
It appears to be no problem to have two apps running via docker-compose by using the syntax:
'docker compose -p <project-name> up --build'
The challenge is that ports are mapped at build time, so the 'cache' container's running redis instance expects to be mapped 6379:6379, as do flower and the webserver.
one option would be to have a script that does something like:
echo "Web-server port (default 8050):"
read webserver-port
echo "Flower port (default 5555):"
read flower-port
echo "Cache port (default 6379):"
read cache-port
echo "Application name:"
read app-name
echo "WEBSERVER-PORT=$webserver-port" > .build-env.sh
echo "FLOWER-PORT=$flower-port" >> .build-env.sh
echo "CACHE-PORT=$cache-port" >> .build-env.sh
....
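On the compose side, such a .build-env.sh could feed docker-compose's own ${VAR:-default} variable substitution. A hypothetical excerpt (service names and image tags are assumptions, not taken from the 8Knot compose file):

```yaml
# docker-compose substitutes ${VAR:-default} from the environment or an
# env file, so each instance can remap its host ports without edits.
services:
  cache:
    image: redis:6
    ports:
      - "${CACHE_PORT:-6379}:6379"
  app-server:
    ports:
      - "${WEBSERVER_PORT:-8050}:8050"
  flower:
    ports:
      - "${FLOWER_PORT:-5555}:5555"
```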
Ok. Let me know when I can try it. Looking to release next Tuesday.
|
2025-04-01T04:35:04.036542
| 2024-08-13T15:46:31
|
2463672763
|
{
"authors": [
"spencerschrock"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9555",
"repo": "ossf-tests/scorecard-action",
"url": "https://github.com/ossf-tests/scorecard-action/pull/11"
}
|
gharchive/pull-request
|
remove testing of 'default' format
This was previously supported by Scorecard Action because it hooked the underlying Scorecard cobra entrypoint, which accepts --format default. However, this was removed from Scorecard Action's action.yaml before the first release and was never in the documentation.
https://github.com/ossf/scorecard-action/pull/16
https://github.com/ossf/scorecard-action/pull/31
Fixes https://github.com/ossf/scorecard-action/issues/1430
I'll also note the test was originally added in 4615c4037b609027b82d674dbc3bfc7505c44c8f, which is after the two PRs I listed above. However, none of the public GitHub repos use "default" except our test code:
https://github.com/search?q="results_format%3A+default"+path%3A.github%2Fworkflows&type=code
|
2025-04-01T04:35:04.065606
| 2023-02-22T14:49:47
|
1595241695
|
{
"authors": [
"chundonglinlin",
"jorbig",
"winlinvip"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9556",
"repo": "ossrs/srs",
"url": "https://github.com/ossrs/srs/issues/3430"
}
|
gharchive/issue
|
HLS not loading on iPhones/iOS devices
Note: Please read FAQ before file an issue, see #2716
Description
Streaming works fine, except that HLS loads infinitely on iOS devices (both Safari and Chrome).
SRS Version: all
SRS Log:
No errors here.
SRS Config:
Default.
Replay
Please describe how to replay the bug?
Try playing a HLS file in Safari on iOS.
Expect
I expected it to work fine, but it doesn't.
I tested HLS with iPhone's Safari on SRS 6.0.26 and it runs with no problem. Can you supply more information, such as the SRS config and logs? It's very important for us to locate the issue.
@chundonglinlin I should have been more specific, sorry for that. It seems to be the same issue as for aruts Nginx + RTMP module described here: https://github.com/arut/nginx-rtmp-module/issues/1656 (which was there fixed in a fork of that module).
The problem only occurs when streaming live. My config is very basic and as follows:
listen 1935;
max_connections 1000;
srs_log_tank console;
daemon off;
http_api {
enabled on;
listen 1985;
}
http_server {
enabled on;
listen 8080;
dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
hls {
enabled on;
}
http_remux {
enabled on;
mount [vhost]/[app]/[stream].flv;
}
}
For streaming, I use either Adobe FMLE or Larix Broadcaster, with audio only, 32kbps AAC.
It doesn't seem to throw any errors, the .m3u8 of the livestream just infinitely loads in iOS.
I've tried and reproduced this on SRS 5.0-a4 and 6.0.10 (both Windows and Docker).
Could you reproduce it now as well?
Can you show me the SRS logs? I cannot find an error message.
Did you test publishing a stream with ffmpeg?
ffmpeg -re -i doc/source.200kbps.768x320.flv -c copy -f flv rtmp://<IP_ADDRESS>:1935/live/livestream
Sorry, I don't have error logs. The problem is not with pushing a FLV, but with livestreaming a microphone to RTMP. The HLS just keeps loading infinitely until I stop the RTMP. It's only when live streaming to rtmp://[SERVER]/live/livestream
I tested it on several iOS devices (through BrowserStack), and it's the same problem. For now, I switch to MistServer, where this issue doesn't appear. But I'll hope to come back to SRS once you get this fixed.
Please follow issue template to file an issue.
This issue will be eliminated, see #2716
|
2025-04-01T04:35:04.077915
| 2022-07-18T17:26:52
|
1308282255
|
{
"authors": [
"cgwalters"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9557",
"repo": "ostreedev/ostree-rs-ext",
"url": "https://github.com/ostreedev/ostree-rs-ext/pull/341"
}
|
gharchive/pull-request
|
chunking: Recurse on duplicate directory trees
See https://github.com/coreos/fedora-coreos-tracker/issues/1258
Regression from https://github.com/ostreedev/ostree-rs-ext/pull/331
Currently rpm-ostree emits two identical subdirectories in
/usr/lib/sysimage/rpm-ostree-base-db and /usr/share/rpm, and the
chunking export skips emitting this incorrectly.
Closes: https://github.com/ostreedev/ostree-rs-ext/issues/339
Depends https://github.com/ostreedev/ostree-rs-ext/pull/338
|
2025-04-01T04:35:04.080510
| 2024-08-16T06:25:39
|
2469585765
|
{
"authors": [
"henryfw",
"zejacky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9558",
"repo": "ostris/ai-toolkit",
"url": "https://github.com/ostris/ai-toolkit/issues/75"
}
|
gharchive/issue
|
Any point to use xformers with Flux?
First of all, thanks for this great repo! I'm running out of VRAM on 24 GB trying to train a rank-128 Flux LoRA. When I install xformers and try to use it, I get an error:
Traceback (most recent call last):
File "D:\ai-toolkit\run.py", line 90, in <module>
main()
File "D:\ai-toolkit\run.py", line 86, in main
raise e
File "D:\ai-toolkit\run.py", line 78, in main
job.run()
File "D:\ai-toolkit\jobs\ExtensionJob.py", line 22, in run
process.run()
File "D:\ai-toolkit\jobs\process\BaseSDTrainProcess.py", line 1701, in run
loss_dict = self.hook_train_loop(batch)
File "D:\ai-toolkit\extensions_built_in\sd_trainer\SDTrainer.py", line 1483, in hook_train_loop
noise_pred = self.predict_noise(
File "D:\ai-toolkit\extensions_built_in\sd_trainer\SDTrainer.py", line 891, in predict_noise
return self.sd.predict_noise(
File "D:\ai-toolkit\toolkit\stable_diffusion_model.py", line 1650, in predict_noise
noise_pred = self.unet(
File "D:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\diffusers\models\transformers\transformer_flux.py", line 400, in forward
encoder_hidden_states, hidden_states = torch.utils.checkpoint.checkpoint(
File "D:\ai-toolkit\venv\lib\site-packages\torch\_compile.py", line 31, in inner
return disable_fn(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 600, in _fn
return fn(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\torch\utils\checkpoint.py", line 488, in checkpoint
ret = function(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\diffusers\models\transformers\transformer_flux.py", line 395, in custom_forward
return module(*inputs)
File "D:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\ai-toolkit\venv\lib\site-packages\diffusers\models\transformers\transformer_flux.py", line 201, in forward
attn_output, context_attn_output = self.attn(
ValueError: not enough values to unpack (expected 2, got 1)
Hello @henryfw
From what I've read so far, xformers can have compatibility issues with the flux1-dev model.
I'm currently using the CUDA 12.4 and PyTorch 2.4.0 combination, without xformers. The same for ComfyUI.
Training on a person was successful so far (ca. 1 h 24 min).
|
2025-04-01T04:35:04.083179
| 2024-08-22T16:05:06
|
2481209187
|
{
"authors": [
"ewandel",
"jaretburkett"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9559",
"repo": "ostris/ai-toolkit",
"url": "https://github.com/ostris/ai-toolkit/pull/124"
}
|
gharchive/pull-request
|
changed requirements.txt to specific diffuser version 0.30.0, fixes issue #123
See this issue: https://github.com/ostris/ai-toolkit/issues/123
This avoids an error when running with the latest (as of 22 August 2024) diffusers version:
not working diffuser version: 0.31.0.dev0
Error message: Error running job: cannot import name 'apply_rope' from 'diffusers.models.attention_processor'
Thank you. There was a breaking change in diffusers, but it is fixed with https://github.com/ostris/ai-toolkit/commit/338c77d67733a2d6d9c4fdd55623ae04f5ed5ead . I plan to switch to the packaged version as soon as all the flux code stabilizes.
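For reference, the change amounts to pinning the package in requirements.txt (version taken from the PR title):

```
diffusers==0.30.0
```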
|
2025-04-01T04:35:04.164497
| 2020-11-26T05:38:23
|
751300628
|
{
"authors": [
"Kashomon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9560",
"repo": "otrego/clamshell",
"url": "https://github.com/otrego/clamshell/pull/148"
}
|
gharchive/pull-request
|
Add a find-blunders method
This PR fleshes out the FindBlunders method. It's not very configurable, but it does work end-to-end
Example test-run:
kashomon@Joshs-Air:~/inprogress/clamshell (blunders)$ go run katalyze/main.go --config=katalyze/testdata/analysis_example.cfg --model=katalyze/testdata/g170e-b10c128-s1141046784-d204142634.bin.gz katalyze/testdata/example-game.sgf
I1125 21:36:17.982544 83641 katago.go:68] Starting Katago analyzer
I1125 21:36:17.982660 83641 katago.go:69] Using model "katalyze/testdata/g170e-b10c128-s1141046784-d204142634.bin.gz"
I1125 21:36:17.982685 83641 katago.go:70] Using gtp config "katalyze/testdata/analysis_example.cfg"
I1125 21:36:25.707929 83641 katago.go:87] Katago Startup Complete
I1125 21:36:25.707978 83641 main.go:103] using files [katalyze/testdata/example-game.sgf]
I1125 21:36:25.708011 83641 main.go:105] Processing file "katalyze/testdata/example-game.sgf"
I1125 21:36:39.888000 83641 main.go:126] Finished processing file "katalyze/testdata/example-game.sgf"
I1125 21:36:39.888056 83641 main.go:133] Finished adding to game for file "katalyze/testdata/example-game.sgf"
I1125 21:36:39.888253 83641 main.go:146] Found Positions: [.0:64 .0:72 .0:80 .0:86 .0:92 .0:96 .0:100 .0:108 .0:122 .0:124 .0:134 .0:138]
I1125 21:36:39.888303 83641 katago.go:196] Shutting down Katago analyzer
Some miscellaneous changes:
Make a treepath-clone helper
Add better input-validation for katalyze main.go
Fix flag: analysisThreads => analysis_threads
It might be nice for FindBlunders to return something that has treepath and point-value of blunders.
That's a good idea. Filed #155
|
2025-04-01T04:35:04.179971
| 2024-07-02T13:13:50
|
2386225485
|
{
"authors": [
"ntrehout",
"skonto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9561",
"repo": "otwld/ollama-helm",
"url": "https://github.com/otwld/ollama-helm/issues/60"
}
|
gharchive/issue
|
Knative Use question
Hi folks! I am a Knative maintainer. Saw the integration here: https://github.com/otwld/ollama-helm/pull/43.
I am wondering if you are using Knative for ollama deployment in practice.
Hi there! Thanks for reaching out. Currently, we're not utilizing Knative for ollama in practice.
However, if you have any specific issues or suggestions, we'd definitely welcome a pull request (PR).
Let us know how we can collaborate!
Feel free to join the discord to discuss about it :)
Have a nice day !
|
2025-04-01T04:35:04.182992
| 2018-12-14T01:37:50
|
390930972
|
{
"authors": [
"coreystaten",
"stuhlmueller",
"zjmiller"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9562",
"repo": "oughtinc/mosaic",
"url": "https://github.com/oughtinc/mosaic/pull/314"
}
|
gharchive/pull-request
|
Adds checkboxes for admin control of workspace status in tree view, #307.
Added admin checkboxes controlling isStale and isEligibleForOracle flags on workspaces in tree view. This reuses the checkbox pattern from the admin checkboxes on the root workspace page. Similar code is now being used in 4 spots, so might want to factor that out into an AdminCheckbox component.
Thanks!
Looking at the tree, there's some redundancy now:
The indicators and check boxes present the same information. Large trees will be less readable, since the checkboxes take up quite a bit of space.
How difficult would it be to integrate the checkboxes and indicators in a way that lets admins change the status, but also displays the status to non-admins?
Not difficult, just made a commit to do that. Current caveat is that "is eligible for oracle" is displaying even when not in oracle mode (as opposed to the "Oracle Only" status display). We can easily make it not display when not in oracle mode if desired.
Just noticed that this displays messily when "Was Answered By Oracle" is displayed as well, fixing now.
Also, as a future PR, I think it would be great to use your approach to refactor the admin controls on the front page: i.e., use updateQuery instead of refetching "RootWorkspacesQuery". This will make those checkboxes much more responsive.
|
2025-04-01T04:35:04.183946
| 2016-08-16T15:10:47
|
171439930
|
{
"authors": [
"marshalc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9563",
"repo": "ouh-churchill/diakonia",
"url": "https://github.com/ouh-churchill/diakonia/issues/6"
}
|
gharchive/issue
|
Read and Digest: OUH IM&T Strategy 2012-2017
Add to repository and make relevant notes in the Requirements documentation
Largely irrelevant now. Awaiting a 2017 onward edition from our new CIO
|
2025-04-01T04:35:04.189608
| 2019-03-27T19:00:14
|
426133135
|
{
"authors": [
"atique81",
"oulutan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9564",
"repo": "oulutan/ACAM_Demo",
"url": "https://github.com/oulutan/ACAM_Demo/issues/14"
}
|
gharchive/issue
|
actor_infos keep increasing, never cleared up
Hello @oulutan ,
thanks very much for this wonderful repo. I have a few confusions regarding the paper and the implementation itself.
Although you mentioned in your paper how your work differs from and improves upon the Actor Conditioned Relation Network (ACRN) paper, I am still confused about the main improvements. I would highly appreciate it if you could provide some intuition about how your work improves upon ACRN.
It seems that actor_infos keeps growing and is never cleared, whereas the tracks corresponding to the actors may have been deleted. Shouldn't you delete the actors from actor_infos whose corresponding tracks are deleted?
Thanks.
Thank you!
We do have an updated version of the paper coming up, that will make things more clear. It should be out in a day or two on arxiv. ACRN takes the attention idea from relational reasoning and applies it to actor and context. They use these relation features and apply convolutions on them. In contrast, we leverage these relations to generate attention maps, kind of similar to attention. These attention maps basically alter the original context features to be relevant to each actor. Additionally, since I3D Tail is being used, altering original features work better than generating completely new features (like relation feats).
That is a little bit tricky. I was doing something like that initially, but if you want to keep track of what happened in the video (like a summary) and dump that into some JSON file, you should keep all the actor information over time. These don't take much memory and should be okay to keep unless there are memory issues.
Thank you so much @oulutan ! I appreciate your feedback. I will be eagerly waiting for the updated version of the paper!
Updated version of our paper is now available on arxiv!
Thanks! I will check it out.
|
2025-04-01T04:35:04.288062
| 2023-05-19T04:01:40
|
1716569907
|
{
"authors": [
"roncli"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9565",
"repo": "overload-development-community/otl.gg",
"url": "https://github.com/overload-development-community/otl.gg/issues/708"
}
|
gharchive/issue
|
Evaluate and improve performance on tblDamage
There are over 1.2 million records in tblDamage, and it is slowing down a number of queries. Need to find which queries are problematic and add appropriate indexes.
I've added some basic indexes to the database for the quick wins, but there's still a number of other places in the app where query usage is pretty bad. What I'll probably start doing is going through one query at a time to find a way to improve its performance until things get under control.
This will be resolved with v10.
|
2025-04-01T04:35:04.335618
| 2021-05-04T18:05:26
|
875706197
|
{
"authors": [
"hydrogen18"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9566",
"repo": "ovrclk/akash",
"url": "https://github.com/ovrclk/akash/issues/1243"
}
|
gharchive/issue
|
Provider inventory for endpoints is wrong
For some reason the provider never really updates its count of available endpoints. The total # available always decreases. Restarting the provider gets the real number back
@arijitAD this is related to #1317 . Endpoints are part of what we consider inventory.
|
2025-04-01T04:35:04.341943
| 2016-05-31T09:08:45
|
157623800
|
{
"authors": [
"mxgoncharov",
"owen2345"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9567",
"repo": "owen2345/camaleon-cms",
"url": "https://github.com/owen2345/camaleon-cms/issues/404"
}
|
gharchive/issue
|
Social Login
Is there some solution for social login?
On the demo there is a Social Login plugin which is broken and returns a 500 error.
Hi @mxgoncharov, this plugin is out of date; I will update it next week! These days I am working on the video "how to create themes and plugins".
@owen2345 thanks so much. Will be nice.
|
2025-04-01T04:35:04.343693
| 2024-09-09T09:51:45
|
2513482895
|
{
"authors": [
"Marigold"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9568",
"repo": "owid/etl",
"url": "https://github.com/owid/etl/pull/3255"
}
|
gharchive/pull-request
|
:hammer: engineering: switch from threads to asyncio in grapher upserts
Switch from threads to asyncio when upserting to MySQL. This simplifies the code a bit and might make it slightly faster, but at the cost of introducing asyncio to our codebase. Whether this is worth it depends on how much faster and more convenient the grapher step becomes.
@pabloarosado if you run into a situation where grapher upserts are annoyingly long, can you ping me and I can revisit the performance on that concrete example? (By the way, we've recently merged an improvement that upserts only indicators that changed, not the entire dataset. This could already speed things up)
With a heavy heart, I'm closing this one. Recent optimizations, like upserting variables only when the checksum changes, have made it less painful, and the asyncio approach is not much faster than threads. It's more efficient and "elegant", but that's not worth mixing async and sync code for.
One day, we might open it again...
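For context, the asyncio fan-out pattern weighed in this PR can be sketched as follows (function names here are illustrative, not the etl codebase's actual API):

```python
import asyncio

# Stand-in for an awaitable MySQL upsert; in the real codebase this would
# be a database write, not a no-op sleep.
async def upsert_variable(variable_id: int) -> int:
    await asyncio.sleep(0)
    return variable_id

async def upsert_dataset(variable_ids):
    # gather runs all upserts concurrently on one event loop,
    # replacing a ThreadPoolExecutor-style fan-out.
    return await asyncio.gather(*(upsert_variable(v) for v in variable_ids))

print(asyncio.run(upsert_dataset(range(5))))  # [0, 1, 2, 3, 4]
```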
|
2025-04-01T04:35:04.477473
| 2021-09-17T13:10:37
|
999338803
|
{
"authors": [
"butonic",
"dschmidt",
"wkloucek"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9569",
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/2523"
}
|
gharchive/issue
|
allow serving ocis-web theme from a dedicated assets folder
In order to serve a custom theme it would be nice to be able to configure ocis to serve only the theme assets with the static middleware.
cc @kulmann @labkode
should be done together with https://github.com/owncloud/ocis/issues/1899
I needed this and came up with this workaround:
docker-compose.yaml:
services:
  ocis:
    [...]
    environment:
      WEB_UI_THEME_PATH: /themes/owncloud/custom.theme.json
    volumes:
      - ./dev/docker/ocis.proxy.config.yaml:/etc/ocis/proxy.yaml
  theme-server:
    image: joseluisq/static-web-server:2
    networks:
      ocis-net:
    environment:
      SERVER_ROOT: "/assets"
      SERVER_CORS_ALLOW_ORIGINS: "*"
      SERVER_CORS_ALLOW_HEADERS: "origin, content-type, x-request-id"
    ports:
      - 3890:80
    volumes:
      - ./dev/docker/themes:/assets/themes/owncloud
./dev/docker/ocis.proxy.config.yaml:
additional_policies:
  - name: ocis
    routes:
      - endpoint: /themes/owncloud/custom.theme.json
        backend: http://theme-server
        unprotected: true
This is just a workaround and overly complex for a simple task (especially given that config.json can just be mounted into the container). I would really like to see a simpler solution to this.
|
2025-04-01T04:35:04.493861
| 2021-09-24T09:02:10
|
1006244595
|
{
"authors": [
"butonic",
"grgprarup",
"phil-davis",
"saw-jan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9570",
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/2540"
}
|
gharchive/issue
|
Share path in the response is different between share states
Describe the bug
Different share path between share states
Steps to reproduce
Steps to reproduce the behavior:
Create users Alice and Brian
As Alice, create folder ToShare and share it with user Brian
As Brian, get all pending sharescurl -uBrian:1234 https://localhost:9200/ocs/v1.php/apps/files_sharing/api/v1/shares\?format\=json\&state\=1\&shared_with_me\=true -vk
...
"path": "/ToShare",
...
As Brian, accept the share shared by Alice
As Brian, get all accepted sharescurl -uBrian:1234 https://localhost:9200/ocs/v1.php/apps/files_sharing/api/v1/shares\?format\=json\&state\=0\&shared_with_me\=true -vk
...
"path": "/Shares/ToShare",
...
As Brian, decline the share of folder ToShare
As Brian, get all declined sharescurl -uBrian:1234 https://localhost:9200/ocs/v1.php/apps/files_sharing/api/v1/shares\?format\=json\&state\=2\&shared_with_me\=true -vk
...
"path": "/ToShare",
...
Expected behavior
In oC 10.8.1 prealpha (git), the response is consistent in all states
Pending share:
{
"id": "1167",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"permissions": 31,
"stime":<PHONE_NUMBER>,
"parent": null,
"expiration": null,
"token": null,
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_owner": null,
"additional_info_file_owner": null,
"state": 1,
"path": "/Shares/ToShare",
"mimetype": "httpd/unix-directory",
"storage_id": "home::Alice",
"storage": 2676,
"item_type": "folder",
"item_source":<PHONE_NUMBER>,
"file_source":<PHONE_NUMBER>,
"file_parent":<PHONE_NUMBER>,
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": null,
"mail_send": 0,
"attributes": null
}
Accepted share:
{
"id": "1167",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"permissions": 31,
"stime":<PHONE_NUMBER>,
"parent": null,
"expiration": null,
"token": null,
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_owner": null,
"additional_info_file_owner": null,
"state": 0,
"path": "/Shares/ToShare",
"mimetype": "httpd/unix-directory",
"storage_id": "shared::/Shares/ToShare",
"storage": 2676,
"item_type": "folder",
"item_source":<PHONE_NUMBER>,
"file_source":<PHONE_NUMBER>,
"file_parent":<PHONE_NUMBER>,
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": null,
"mail_send": 0,
"attributes": null
}
Declined share:
{
"id": "1167",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"permissions": 31,
"stime":<PHONE_NUMBER>,
"parent": null,
"expiration": null,
"token": null,
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_owner": null,
"additional_info_file_owner": null,
"state": 2,
"path": "/Shares/ToShare",
"mimetype": "httpd/unix-directory",
"storage_id": "home::Alice",
"storage": 2676,
"item_type": "folder",
"item_source":<PHONE_NUMBER>,
"file_source":<PHONE_NUMBER>,
"file_parent":<PHONE_NUMBER>,
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": null,
"mail_send": 0,
"attributes": null
}
Actual behavior
In oCIS, the path differs between the accepted and the pending/declined states
Pending share:
{
"id": "8e55b7b2-4585-4337-8143-76756158af87",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"additional_info_owner": <EMAIL_ADDRESS>,
"permissions": 1,
"stime":<PHONE_NUMBER>,
"parent": "",
"expiration": "",
"token": "",
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_file_owner": <EMAIL_ADDRESS>,
"state": 1,
"path": "/ToShare",
"item_type": "folder",
"mimetype": "httpd/unix-directory",
"storage_id": "shared::/Shares/ToShare",
"storage": 0,
"item_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_parent": "",
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": <EMAIL_ADDRESS>,
"mail_send": 0,
"name": ""
}
Accepted share:
{
"id": "8e55b7b2-4585-4337-8143-76756158af87",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"additional_info_owner": <EMAIL_ADDRESS>,
"permissions": 1,
"stime":<PHONE_NUMBER>,
"parent": "",
"expiration": "",
"token": "",
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_file_owner": <EMAIL_ADDRESS>,
"state": 0,
"path": "/Shares/ToShare",
"item_type": "folder",
"mimetype": "httpd/unix-directory",
"storage_id": "shared::/Shares/ToShare",
"storage": 0,
"item_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_parent": "",
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": <EMAIL_ADDRESS>,
"mail_send": 0,
"name": ""
}
Declined share:
{
"id": "8e55b7b2-4585-4337-8143-76756158af87",
"share_type": 0,
"uid_owner": "Alice",
"displayname_owner": "Alice",
"additional_info_owner": <EMAIL_ADDRESS>,
"permissions": 1,
"stime":<PHONE_NUMBER>,
"parent": "",
"expiration": "",
"token": "",
"uid_file_owner": "Alice",
"displayname_file_owner": "Alice",
"additional_info_file_owner": <EMAIL_ADDRESS>,
"state": 2,
"path": "/ToShare",
"item_type": "folder",
"mimetype": "httpd/unix-directory",
"storage_id": "shared::/Shares/ToShare",
"storage": 0,
"item_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_source": "MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjE4ZGYyOTcxLWJiMTYtNDg4ZC1hOGM4LTM5NTE1MjJmM2U4Yw==",
"file_parent": "",
"file_target": "/Shares/ToShare",
"share_with": "Brian",
"share_with_displayname": "Brian",
"share_with_additional_info": <EMAIL_ADDRESS>,
"mail_send": 0,
"name": ""
}
Setup
Please describe how you started the server and provide a list of relevant environment variables.
OCIS_VERSION=vX.X.X
BRANCH=vX.X.X
STORAGE_FRONTEND_UPLOAD_DISABLE_TUS=false
Additional context
Add any other context about the problem here.
Note: oC10 core behavior changed a bit in PR https://github.com/owncloud/core/pull/39241 and now we need to sort out exactly what to do in oCIS and oC10 to "bring this all together".
see https://github.com/owncloud/core/pull/39241#issuecomment-930155337
we have added separate scenarios for each server
https://github.com/owncloud/core/blob/574b3c8bc75df34cae8a41ad5ed0524ec08c5519/tests/acceptance/features/apiShareManagementToShares/acceptShares.feature#L62-L70
Maybe we can close this issue if the current behaviors are the expected ones.
The 2nd example is tagged:
@skipOnAllVersionsGreaterThanOcV10.8.0 @skipOnOcis
So that is not being run on current oC10 or on oCIS. It is only for when the core test suite is run against old oC10 version 10.8.0 or earlier.
The main example is tagged @issue-ocis-2540 and runs in both oC10 and oCIS CI. There are entries for that issue in the oCIS expected-failures, for example, for apiShareManagementToShares/acceptShares.feature:65
So the test scenario passes for oC10 but fails on oCIS.
@saw-jan please find an example of the test fail on oCIS and paste it in this issue. Then we will know what is the current status. Then "someone" can decide what the behavior should be, and either tests or code or both can be sorted out.
OCS is deprecated.
Removed from expected failures and adjusted the tests for now. The complete scenarios can be deleted after the removal of OCS.
|
2025-04-01T04:35:04.499036
| 2024-05-09T09:28:28
|
2287283169
|
{
"authors": [
"PrajwolAmatya",
"S-Panta",
"SagarGi",
"amrita-shrestha"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9571",
"repo": "owncloud/ocis",
"url": "https://github.com/owncloud/ocis/issues/9119"
}
|
gharchive/issue
|
[contract testing] [share ng] Extend propfind contract testing
While adding tests related to share-ng, we found that the ocis tests miss some contract testing, as mentioned below.
Scenario: remove a share link and PROPFIND the resource to see whether or not the response contains
<oc:share-types>
<oc:share-type>3</oc:share-type>
</oc:share-types>
<oc:privatelink>
https://localhost:9200/f/a3be74ba-6183-4942-9458-440b92c5895b$68962830-8cbb-4226-b3b4-458c50793487%210e590fe7-8816-4d75-bd42-5ffa2fa81410
</oc:privatelink>
Scenario: user Alice shared a resource with user Brian; Brian is deleted; user Alice does a PROPFIND on the shared resource to check whether the response contains share details after the share is deleted
Question
Should we add such scenarios to apiContract->propfind.feature ❓
cc @ScharfViktor @saw-jan @phil-davis
The bug is reported here: https://github.com/owncloud/ocis/issues/9463
For the above issue, let's first find out if we have covered the PROPFIND test when the sharee is not deleted. If it is not covered, we can add one. Then we can check the PROPFIND when the sharee gets deleted.
In summary:
[ ] Check for a PROPFIND test where the sharer does a PROPFIND for shared resources (sharee not deleted); if there is none, add one
[ ] Check for a PROPFIND test where the sharer does a PROPFIND for shared resources (sharee deleted)
Moving this issue to blocked since there's an issue regarding PROPFIND request to shared resource when the user is deleted. Reported here: https://github.com/owncloud/ocis/issues/9463
|
2025-04-01T04:35:04.520653
| 2024-09-02T11:52:03
|
2500770178
|
{
"authors": [
"Boshen",
"DonIsaac",
"RabbitShare"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9572",
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/issues/5400"
}
|
gharchive/issue
|
npx oxlint -c .oxlintrc.json not working
oxlint is not working with the .oxlintrc.json config.
The command npx oxlint -c .oxlintrc.json does not find any cycle.
Config example:
{
"rules": {
"import/no-cycle": "error"
}
}
npx oxlint --import-plugin -A all -D no-cycle working fine
npx oxlint --import-plugin -A all -D no-cycle
× eslint-plugin-import(no-cycle): Dependency cycle detected
╭─[src/test.ts:1:23]
1 │ import { test2 } from "./App";
· ───────
2 │
╰────
help: These paths form a cycle:
-> ./App - src/App.tsx
-> ./test - src/test.ts
× eslint-plugin-import(no-cycle): Dependency cycle detected
╭─[src/App.tsx:4:22]
3 │ import "./App.css";
4 │ import { test } from "./test";
· ────────
5 │ console.log(test());
╰────
help: These paths form a cycle:
-> ./test - src/test.ts
-> ./App - src/App.tsx
Finished in 8ms on 5 files with 1 rules using 10 threads.
Found 0 warnings and 2 errors.
Try npx oxlint -c .oxlintrc.json --import-plugin
@DonIsaac It seems unintuitive that --import-plugin needs to be supplied here along with -c 🤔
I agree, having plugins enabled in configs will assuage this.
Let's improve the documentation a little bit before closing.
|
2025-04-01T04:35:04.522510
| 2024-09-09T07:31:46
|
2513146750
|
{
"authors": [
"7086cmd",
"Boshen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9573",
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/issues/5636"
}
|
gharchive/issue
|
Use auto-fix to fix untested snapshots
Related: https://github.com/rolldown/rolldown/issues/2190#issuecomment-2336778348.
It can be useful when we don't have enough bandwidth / performance to run the tests after temporarily modifying some code, or when we simply forget to update snapshots.
The whole idea of snapshot testing is to not accidentally commit incorrect snapshots. I'm going to close this one as not planned.
|
2025-04-01T04:35:04.525698
| 2024-12-15T14:47:47
|
2740679152
|
{
"authors": [
"Boshen",
"camc314",
"overlookmotel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9574",
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/issues/7917"
}
|
gharchive/issue
|
bug(codegen): missing , when generating type parameters with jsx
Playground link
input:
const genericFn = <T,>(foo: T) => {
type Bar = {};
};
output:
const genericFn = <T>(foo: T) => {
type Bar = {};
};
expected output:
const genericFn = <T,>(foo: T) => {
type Bar = {};
};
Please excuse my lack of TypeScript knowledge, but does the trailing comma in <T,> have any semantic meaning?
It breaks for .tsx.
IIRC, the parser will interpret <T> as an opening JSX tag and not as a type parameter.
It's fine in non-JSX files because <T> can't be an opening JSX element.
|
2025-04-01T04:35:04.528884
| 2024-10-16T12:21:13
|
2591723741
|
{
"authors": [
"Dunqing",
"overlookmotel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9575",
"repo": "oxc-project/oxc",
"url": "https://github.com/oxc-project/oxc/pull/6622"
}
|
gharchive/pull-request
|
fix(isolated_declarations): fix potential memory leak
Scope contains 2 x FxHashMaps, which own data outside of the arena. So this data will not be dropped when the allocator is dropped.
The scope for this becoming a memory leak in practice is limited for 2 reasons:
All Scopes except the root one are popped from the stack by the end of traversal. That last scope's hashmaps are always empty, unless there are unresolved references (references to globals).
oxc_allocator::Vec is currently Drop.
However, oxc_allocator::Vec will cease to be Drop in future, at which point this would become a real memory leak.
Additionally, it doesn't make sense to store temporary data in the arena, as the arena is intended to hold data that needs to live as long as the AST, which temporary data doesn't.
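The hazard can be demonstrated with the standard library alone; here Box::leak stands in for an arena handing out a reference without ever running Drop (an illustrative sketch, not the oxc_allocator API):

```rust
use std::collections::HashMap;

fn main() {
    // An arena allocator hands out &mut T but never runs T's Drop.
    // Box::leak has the same effect: this HashMap's heap buffer (data
    // owned outside the arena) is never freed.
    let map: &mut HashMap<String, i32> = Box::leak(Box::new(HashMap::new()));
    map.insert("unresolved_ref".to_string(), 1);
    println!("{}", map.len()); // prints 1; the map itself is never dropped
}
```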
Merge activity
Oct 16, 11:05 AM EDT: The merge label '0-merge' was detected. This PR will be added to the Graphite merge queue once it meets the requirements.
|
2025-04-01T04:35:04.544089
| 2021-04-03T01:33:44
|
849551878
|
{
"authors": [
"Neotriple",
"goulart-paul"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9576",
"repo": "oxfordcontrol/osqp",
"url": "https://github.com/oxfordcontrol/osqp/issues/297"
}
|
gharchive/issue
|
Building from source with Python bindings (in a conda environment)
Hi there,
I'm trying to build from source while also building the python bindings. The reason I'm attempting to do this is to find out the number of times the QDLDL_solve functionality is called during a particular solve.
When I attempt to build from source, I run into this issue:
[ 5%] Building C object lin_sys/direct/CMakeFiles/linsys_pardiso.dir/pardiso/pardiso_interface.c.o
cc1: warning: /home/pshah/anaconda3/envs/osqp_test/bin/python: not a directory
In file included from /home/pshah/Applications/osqp/include/types.h:8:0,
from /home/pshah/Applications/osqp/include/lin_alg.h:9,
from /home/pshah/Applications/osqp/lin_sys/direct/pardiso/pardiso_interface.h:8,
from /home/pshah/Applications/osqp/lin_sys/direct/pardiso/pardiso_interface.c:1:
/home/pshah/Applications/osqp/include/glob_opts.h:44:13: fatal error: Python.h: No such file or directory
# include <Python.h>
^~~~~~~~~~
compilation terminated.
lin_sys/direct/CMakeFiles/linsys_pardiso.dir/build.make:62: recipe for target 'lin_sys/direct/CMakeFiles/linsys_pardiso.dir/pardiso/pardiso_interface.c.o' failed
make[2]: *** [lin_sys/direct/CMakeFiles/linsys_pardiso.dir/pardiso/pardiso_interface.c.o] Error 1
CMakeFiles/Makefile2:252: recipe for target 'lin_sys/direct/CMakeFiles/linsys_pardiso.dir/all' failed
make[1]: *** [lin_sys/direct/CMakeFiles/linsys_pardiso.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
A quick google makes me think this is because I'm not using python3.X-dev (which should include the Python header file). Is it possible to build from source without python3.X-dev? Alternatively, is there a way to read / find out how many times the QDLDL_solve function is being called?
The number of calls to QDLDL_solve should just be the number of iterations, plus one if the solver has polishing enabled.
The number of calls to QDLDL_factor is also logged (see here in the C source)
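The counting rule stated above (iterations, plus one if polishing is enabled) can be sketched as a tiny check; the iteration count and polish flag below are hypothetical values, not output from an actual OSQP run:

```shell
# Expected number of QDLDL_solve calls, per the rule above:
# one per iteration, plus one extra call if polishing is enabled.
iterations=25   # hypothetical iteration count from an OSQP solve
polish=1        # 1 if the 'polish' setting is enabled, else 0
echo $(( iterations + polish ))
```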
Thanks for the information!
|
2025-04-01T04:35:04.568086
| 2023-03-02T14:11:39
|
1606879093
|
{
"authors": [
"aronerben",
"dinosaure",
"mabiede"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9577",
"repo": "oxidizing/letters",
"url": "https://github.com/oxidizing/letters/pull/55"
}
|
gharchive/pull-request
|
rename and update package dependencies, run formatter
As the pipeline fails and I cannot edit the PR #53 (thanks @mbacarella), here is an updated version:
related to https://github.com/ocaml/opam-repository/pull/23311
update dune-project
add missing dependency tls-lwt
update sendmail and colombe version due >= 0.7.0 (LOGIN mechanism)
update lock file
add (implicit_transitive_deps false) and specify dependencies
@joseferben @mikonieminen FYI, I'm merging this
I would like to mention that a release of mrmime with some breaks will be done soon. You probably should integrate these breaks before a release 👍. I can do something when the mrmime release is done.
Thanks for the heads-up! I will wait with a release then (cc: @mabiede)
|
2025-04-01T04:35:04.574391
| 2020-08-25T09:03:33
|
685309186
|
{
"authors": [
"Tpt",
"pchampin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9578",
"repo": "oxigraph/oxigraph",
"url": "https://github.com/oxigraph/oxigraph/issues/50"
}
|
gharchive/issue
|
The SPARQL query should override the options, not the other way around
The documentation of QueryOption states:
[the values set by with_default_graph and with_named_graph] override the FROM and FROM_NAMED elements of the evaluated query.
I think is wrong, as it breaks the general expectation that a local configuration should have precedence over a global configuration. In my view, the query string is "more local" than the QueryOptions. I expect that the latter could be set once and for all and reused for several queries in a given application or module.
Hi! I definitely agree with you on the general rule. However, doing so would make two use cases difficult to implement:
Implementation of the SPARQL protocol that states that the default-graph-uri and named-graph-uri parameters should override the FROM and FROM NAMED parameters.
Access control where the lib user wants to forbid querying some named graphs.
Ok, those are very valid use cases.
I still suggest that you make it explicit and salient in the documentation of prepare_query and query that the options override the query. Currently, one has to dig into the doc of QueryOption to find out.
Yes, definitely. I am going to do that.
PS: have you seen oxrdflib?
Graph overriding won't be part of the QueryOption API anymore in Oxigraph 0.2. A new API is going to be provided as part of the Query type. So, there won't be any ambiguity anymore.
|
2025-04-01T04:35:04.596589
| 2018-02-06T14:14:50
|
294774090
|
{
"authors": [
"StefBrito"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9579",
"repo": "oystparis/oyst-1click-magento",
"url": "https://github.com/oystparis/oyst-1click-magento/issues/225"
}
|
gharchive/issue
|
Put by default the settings of the shipping in OC
As you can see there, the values are not set by default, which is quite confusing during installation.
It would be better to set all the settings by default.
release
|
2025-04-01T04:35:04.621167
| 2022-09-22T19:35:08
|
1382922391
|
{
"authors": [
"danielssonn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9580",
"repo": "ozmago/JPEGX",
"url": "https://github.com/ozmago/JPEGX/pull/3"
}
|
gharchive/pull-request
|
Feat/premium distribution
Checklist
[ ] Title follows the Naming Conventions.
[ ] Add a description.
[ ] Add testing instructions.
[ ] Add at least one reviewer.
[ ] Leave comments on changes you would like to discuss.
[ ] Remove changes that are not related to this pull request.
[ ] Remove debugging code.
[ ] Test your changes locally and on staging (if applicable).
[ ] Ensure the build is passing.
Naming Conventions
Common types according to commitlint-config-conventional (based on the Angular convention) can be:
build—changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm).
chore—other changes that don’t modify src or test files.
ci—changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs).
docs—documentation only changes.
feat—a new feature.
fix—a bug fix.
perf—a code change that improves performance.
refactor—a code change that neither fixes a bug nor adds a feature.
revert—reverts a previous commit.
style—changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc).
test—adding missing tests or correcting existing tests.
Branches
Branches are created from main and can follow the naming convention below. For common types, see Types.
Convention:
type/description
Example:
feat/add-xyz
premium streaming and distributions
|
2025-04-01T04:35:04.679226
| 2018-08-10T21:30:42
|
349663216
|
{
"authors": [
"bakman2",
"oznu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9581",
"repo": "oznu/homebridge-config-ui-x",
"url": "https://github.com/oznu/homebridge-config-ui-x/issues/152"
}
|
gharchive/issue
|
Install failed while trying to build
Not sure if this is something you can control or not, but node tries to build and fails:
make: Entering directory<EMAIL_ADDRESS>
CXX(target) Release/obj.target/pty/src/unix/pty.o
cc1plus: error: unrecognized command line option "-std=gnu++0x"
../src/unix/pty.cc:1: sorry, unimplemented: 64-bit mode not compiled in make: *** [Release/obj.target/pty/src/unix/pty.o] Error 1
make: Leaving directory<EMAIL_ADDRESS>
Is there a way to force a 32-bit build?
Hi @bakman2,
I haven't tested this plugin on 32-bit i386 Linux. This is the upstream library you'll need to modify somehow to get it working:
https://github.com/Microsoft/node-pty
You can test this by trying to install it independently:
npm install node-pty
I found several issues; node/npm/gcc/homebridge out of date.
Installed/updated everything, it is running now, thanks!
|
2025-04-01T04:35:04.681948
| 2020-07-08T16:40:03
|
653442117
|
{
"authors": [
"ashdwells",
"oznu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9582",
"repo": "oznu/homebridge-config-ui-x",
"url": "https://github.com/oznu/homebridge-config-ui-x/issues/770"
}
|
gharchive/issue
|
ffmpeg exited with code: 1 when using hb-service
Hi
I am receiving this error when using sudo hb-service install:
[Camera-ffmpeg] [FFMPEG] ffmpeg exited with code: 1 and signal: null (error)
However camera works fine when not using hb-service by running Homebridge from the terminal.
"Camera-ffmpeg", here is my config:
"name": "Dining Room",
"audio": false,
"videoConfig": {
"source": "-re -f avfoundation -video_size 1280x720 -framerate 30
I'm stuck! Please help
It's most likely running as a different user when you're running as a service. Check that the service user has the same permissions.
This is more of a Camera-ffmpeg issue; you can raise a report over there if you need help.
|
2025-04-01T04:35:04.685713
| 2015-03-24T14:07:21
|
64003203
|
{
"authors": [
"clarkjefcoat",
"mparizernc"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9583",
"repo": "ozone-development/ozp-center",
"url": "https://github.com/ozone-development/ozp-center/issues/147"
}
|
gharchive/issue
|
Search & Discovery - Show a specific message if search returns no results
Select some combination of categories, types, and agencies that results in no matches.
The main portion of the screen is blank (indicating no search results).
The blank screen is confusing to users. Suggest adding at a minimum a message that says "No search results returned". Or if possible a message that reflects their search - something like "Your search for ZZZ and XXX did not return any results".
All Browsers
I verified that the No Results messages appears when the filters are selected with no results and when a search term has no results.
|
2025-04-01T04:35:04.722710
| 2022-11-11T23:33:26
|
1446099303
|
{
"authors": [
"drupol",
"p-bizouard"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9584",
"repo": "p-bizouard/CloudBackup",
"url": "https://github.com/p-bizouard/CloudBackup/pull/2"
}
|
gharchive/pull-request
|
🏗️ Decorate EcPhp CasGuardAuthenticator instead of copy/pasting it
Following @drupol's recommendations, implementation of a decorator of EcPhp's CasGuardAuthenticator instead of copy/pasting it.
Let me know how it goes !
Top!
Thank you very much @drupol for your help. I still can't believe someone came out of nowhere and gave me advice like you did 🙏
That's the beauty of open source ;)
|
2025-04-01T04:35:04.730883
| 2023-01-19T23:31:18
|
1550104027
|
{
"authors": [
"nirurin",
"p4535992"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9585",
"repo": "p4535992/foundryvtt-mountup",
"url": "https://github.com/p4535992/foundryvtt-mountup/issues/20"
}
|
gharchive/issue
|
[BUG] When mounting using the 'drag and drop' option, the pop-up message asking for a confirmation does not appear.
Module Version: v3.3.5
Before open any issue
Enable the module setting "Enable debugging"
Click F12 go to the console tab
make the test you want and replicate the error
Go to the tab console open on point 2) and just right click and click 'Save as' and 'Save' or send a screenshot of the exception on the console.
Attach the text file on the GitHub issue with all the logs related to the module, or send a screenshot of the messages on the console.
Describe the bug
When mounting using the 'drag and drop' option, the pop-up message asking for a confirmation does not appear. It does seem to automatically attach the rider to the mount, but the movement is a bit broken (they detach and move disconnectedly).
Video4.webm
To Reproduce
Steps to reproduce the behavior:
Make an actor
Make a horse
Drag actor onto horse
See broken
Expected behavior
A pop-up confirmation is meant to appear isn't it? Plus the movement should work the same as the normal 'click icon to mount' behaviour (which works fine)
Screenshots
Video4.webm
Browser:
Foundry app, chrome and edge.
Foundry Version:
10.291
Game System:
Dnd 2.1.2
Additional context
Add any other context (like other modules installed) about the problem here.
THIS MODULE IS DEPRECATED ON V11 IN FAVOR OF Rideable
|
2025-04-01T04:35:04.772895
| 2023-09-13T10:33:42
|
1894241201
|
{
"authors": [
"miminar",
"pabateman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9586",
"repo": "pabateman/kubectl-nsenter",
"url": "https://github.com/pabateman/kubectl-nsenter/pull/11"
}
|
gharchive/pull-request
|
prefer nerdctl over crictl for containerd if available
Many thanks for this useful tool.
We prefer to use containerd's native nerdctl CLI tool over crictl which isn't even available on the k8s hosts.
This PR will use it if available and fallback to crictl otherwise.
Tested on our k0s cluster.
Could you please take a look?
By the way, there is also containerd's internal ctr CLI tool, which I didn't consider to include in this PR. Because it comes with an unsupported interface as its description says:
ctr is an unsupported debug and administrative client for interacting
with the containerd daemon. Because it is unsupported, the commands,
options, and operations are not guaranteed to be backward compatible or
stable from release to release of the containerd project.
Moreover, it doesn't support anything like --format or --output for its ctr containers info command, which would add yet another dependency on some json parsing CLI tool.
Also, could you add nerdctl support in bottom of README.md, in Supported technologies chapter? With description of how cli utilities will be chosen in case of containerd?
I did my best, feel free to suggest your preferred wording :-)
LGTM, thx :)
@miminar
kubectl krew upgrade nsenter
New version with the feature :)
Great, thanks a lot!
|
2025-04-01T04:35:04.775017
| 2015-01-01T09:03:27
|
53216080
|
{
"authors": [
"pablojim",
"yujiosaka"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9587",
"repo": "pablojim/highcharts-ng",
"url": "https://github.com/pablojim/highcharts-ng/issues/241"
}
|
gharchive/issue
|
options are not reflected in directives
The options object is watched for changes.
However, highcharts-ng does not reflect the values when it is used in directives.
In the example, the line has a dash style as specified in options,
but it is not reflected when it is drawn.
By using a controller function this seems to work: http://jsfiddle.net/Lnd8skdp/
Does this help?
|
2025-04-01T04:35:04.793387
| 2020-01-28T18:58:26
|
556403541
|
{
"authors": [
"deathflash1411",
"kolya5544",
"proditis",
"whitebeardj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9588",
"repo": "pachev/pachev_ftp",
"url": "https://github.com/pachev/pachev_ftp/issues/4"
}
|
gharchive/issue
|
Directory traversal vulnerability.
The recently found and leaked exploit by "1F98D" seems to lead to directory traversal. I am not able to test it right now, but users should be notified.
https://www.exploit-db.com/exploits/47956
Hey @kolya5544, were you able to compile the FTP Server and test the Directory Traversal exploit? if yes could you please share the compiled binary, I really need it.
You can find my POC for that exploit here: https://github.com/whitebeardj/Pachev-PathTraversal-POC
feel free to test your exploits over at https://echoctf.red/target/36
@LeoBreaker1411 if you're still looking for it, you can grab the server binary from there :smiley:
|
2025-04-01T04:35:04.795919
| 2022-07-07T01:08:46
|
1296660537
|
{
"authors": [
"gonzalobenegas",
"lauraluebbert"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9589",
"repo": "pachterlab/gget",
"url": "https://github.com/pachterlab/gget/issues/25"
}
|
gharchive/issue
|
gget ref for non-vertebrate species
Hello,
Thank you for this awesome tool! I'd love to get ftp links for plant species available in Ensembl Plants. Would this be possible with gget?
Thanks,
Gonzalo
Hi Gonzalo,
Thank you for the feedback! I just implemented a connection to Ensembl Plants for gget ref. Just upgrade to the latest gget version (pip install --upgrade gget) and pass a plant species to gget ref (e.g. gget ref zea_mays). Please let me know if there are any issues! :)
Best,
Laura
Fantastic, thanks for the quick update!
|
2025-04-01T04:35:04.797852
| 2018-05-09T08:55:07
|
321485692
|
{
"authors": [
"jdoliner",
"kaktus42"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9590",
"repo": "pachyderm/pachyderm",
"url": "https://github.com/pachyderm/pachyderm/issues/2899"
}
|
gharchive/issue
|
update-pipeline command defaulting to creating the pipeline
I would like the update-pipeline command to have a switch that lets it create the pipeline if it does not exist.
For now I have to do pachctl update-pipeline -f /tmp/pipeline.json || pachctl create-pipeline -f /tmp/pipeline.json
I approve of this; I'm not sure if we need a switch or if update-pipeline should just have upsert behavior by default.
update pipeline now has upsert behavior, so use that if you want the pipeline to always get created and aren't worried about overwriting another pipeline with the same name.
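Before the upsert behavior landed, the workaround quoted above relied on the shell's `||` operator, which runs the second command only when the first fails. A minimal stand-in demo (the two functions are hypothetical stubs, not pachctl itself):

```shell
# Mimics `pachctl update-pipeline -f pipeline.json || pachctl create-pipeline -f pipeline.json`:
# the create branch only runs when the update branch fails.
update_pipeline() { return 1; }        # stub: update fails because the pipeline doesn't exist
create_pipeline() { echo "created"; }  # stub: create succeeds
update_pipeline || create_pipeline
```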
|
2025-04-01T04:35:04.800014
| 2021-07-19T17:06:11
|
947853221
|
{
"authors": [
"PFedak",
"nadegepepin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9591",
"repo": "pachyderm/pachyderm",
"url": "https://github.com/pachyderm/pachyderm/issues/6555"
}
|
gharchive/issue
|
Once pachctl trace commit or an equivalent command is released, document the case in deferred processing.
In the Advanced Concepts / Deferred Processing page (Section Configure a Staging Branch in an Input repository),
document the rare case where
a pipeline is directly provenant on both master and an input to staging. In this case, the commit IDs may differ. Deferred processing will still occur correctly, but the new commits won't be in the old commit set.
ONCE a command that helps give more information in those cases (pachctl trace commit) is released.
To add a little more context, we discussed in an eng office hours some possible difficulties around deferred processing (and triggers, which present similar problems). The issues stem from triggers and branch movements being a sort of "shadow" provenance: relevant for the history of the commits involved, but not readily surfaced to users. By trace commit, we were imagining a command that would show all of the commit(sets) involved in the history of a given subcommit/subjob even absent provenance relationships, to make debugging easier in those cases.
|
2025-04-01T04:35:04.801932
| 2023-01-27T04:10:06
|
1559134057
|
{
"authors": [
"aaronmberger-nwfsc",
"kellijohnson-NOAA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9592",
"repo": "pacific-hake/hake-assessment",
"url": "https://github.com/pacific-hake/hake-assessment/issues/1022"
}
|
gharchive/issue
|
2023 exploratory run: increase survey input sample size
Increase the input sample size for the survey to see if the D-M parameters change such that the survey multiplier is not so close to 1.0. Suggested by Allan.
This one can be post assessment draft submission, right Kelli?
We may want to just pin this for 2024 assessment milestone now that we are past the SRG.
They never really asked for it this year like we thought they were going to and now Allan wants bootstrapped input sample sizes so I think we can just close it.
|
2025-04-01T04:35:04.804756
| 2023-09-05T11:09:46
|
1881747336
|
{
"authors": [
"erdii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9593",
"repo": "package-operator/package-operator",
"url": "https://github.com/package-operator/package-operator/pull/457"
}
|
gharchive/pull-request
|
[MTSRE-1426] Fix bootstrap controller shutdown when context is cancelled
Summary
Cancelling the manager context resulted in a failed shutdown because the environment manager runnable did not stop within the shutdown deadline of 30 seconds.
I refactored the re-probe ticker code to respect context cancelling.
Now the bootstrapper will properly shut down when the installed PKO becomes available.
Change Type
Bug Fix
Check List Before Merging
[ ] This PR passes all pre-commit hook validations.
[ ] This PR is fully tested and regression tests are included.
[x] Relevant documentation has been updated.
Additional Information
/lgtm
|
2025-04-01T04:35:04.812377
| 2022-11-22T12:05:44
|
1459735714
|
{
"authors": [
"jpopelka",
"nforro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9594",
"repo": "packit/specfile",
"url": "https://github.com/packit/specfile/pull/141"
}
|
gharchive/pull-request
|
A few improvements
RELEASE NOTES BEGIN
Section and Tag objects now have normalized_name property for more convenient comparison.
There is a new method, Specfile.get_active_macros(), to get active macros in the context of the spec file.
The underlying rpm.spec instance is now exposed as Specfile.rpm_spec property.
There is a new utility class for parsing NEVRA strings.
RELEASE NOTES END
recheck
I'm wondering about the Zuul errors and this looks strange:
"nforro wants to merge 5 commits into missing-sources from misc"
Is that expected, i.e. don't you actually want to merge one of them to main?
recheck
recheck
/packit build
/packit-stg build
|
2025-04-01T04:35:04.816990
| 2024-08-16T14:39:17
|
2470408746
|
{
"authors": [
"ruslK"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9595",
"repo": "pactumjs/pactum",
"url": "https://github.com/pactumjs/pactum/issues/373"
}
|
gharchive/issue
|
Cookies Jar for withFollowRedirects
Is your feature request related to a problem? Please describe.
In some redirection scenarios, the next call is required to use cookies from the previous call, but we don't see that happening in Pactum.
Describe the solution you'd like
Global or call config to enable Cookie Jar, like .enableCookiesJar()
Something similar exists in Postman, where the cookie jar is enabled all the time and the user can disable it:
I tried to use http-cookie-agent with the phin config:
const tough = require("tough-cookie");
const cookiesAgent = require('http-cookie-agent/http');
const jar = new tough.CookieJar();
...
.withCore(new cookiesAgent.MixedCookieAgent({ cookies: { jar } }))
...
but no luck yet
|
2025-04-01T04:35:04.837965
| 2015-12-28T12:25:53
|
124061153
|
{
"authors": [
"YChebotaev",
"sasindumendis"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9596",
"repo": "pagekit/pagekit",
"url": "https://github.com/pagekit/pagekit/issues/516"
}
|
gharchive/issue
|
Internal server error caused by MYSQL_ATTR_INIT_COMMAND
I was trying to install the latest Pagekit (0.10.1) with PHP built-in web server and SQLite on Ubuntu.
STEPS I FOLLOWED:
Extracted the Pagekit archive into the /.../pagekit/ directory.
Launched PHP dev server. cd /.../pagekit/ && php -S localhost:8000
Visited http://localhost:8000 on browser.
WHAT I SAW:
http://localhost:8000 gave a blank screen instead of the installation wizard.
PHP dev server gave Undefined class constant 'MYSQL_ATTR_INIT_COMMAND' in /.../pagekit/app/modules/database/index.php on line 85.
// app/modules/database/index.php line 85
'driverOptions' => [
PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8 COLLATE utf8_unicode_ci'
]
Obviously I didn't have MySQL installed and configured but ideally that shouldn't be a problem because I was going to use SQLite. Commenting out PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8 COLLATE utf8_unicode_ci' fixed the problem and I was able to complete the installation. However uncommenting it after the installation caused 500 error again.
SYSTEM:
PHP 5.6.10
SQLite 3.8.2
Required PHP modules installed: JSON, Session, ctype, Tokenizer, SimpleXML, DOM, mbstring, PCRE 8.0+, ZIP, PDO, pdo_sqlite, sqlite3, cURL, iconv.
Just a note that I have the same issue, except I downloaded and extracted the latest archive from the official site and use the php-fpm module on nginx.
Here's my phpinfo.
|
2025-04-01T04:35:04.849813
| 2016-04-07T18:52:30
|
146712974
|
{
"authors": [
"MalteScharenberg",
"ecmel"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9597",
"repo": "pagekit/pagekit",
"url": "https://github.com/pagekit/pagekit/issues/582"
}
|
gharchive/issue
|
Minor Setup Problem
Problem
I got a "Whoops, looks like something went wrong." error at the end of both graphical and command line setup but everything seems to set up normally.
Technical Details
Pagekit version: 0.11.3
Webserver: Apache/2.4.7 (Ubuntu)
Database: sqlite 3.8.2
PHP Version: 5.5.9-1ubuntu4.14
Troubleshooting
[X] I have enabled debug mode: https://pagekit.com/docs/troubleshooting/debug-mode
[X] I have verified the server requirements: https://pagekit.com/docs/getting-started/requirements
[X] I have tried disabling all installed extensions
[X] I have checked the browser developer console for JavaScript errors
Do you have other extensions than Blog and Theme-One located inside your packages folder?
For debugging it would be helpful if you could install Pagekit again from a fresh extracted zip archive with debug mode turned on (debug => true in app/installer/config.php) and past the contents of the last request made during installation.
WEB SETUP
Request:
{"config":{"database":{"default":"sqlite","connections":{"sqlite":{"prefix":"pk_"}}}},"option":{"system":{"admin":{"locale":"en_US"},"site":{"locale":"en_US"}},"system/site":{"title":"TEST"}},"user":{"username":"admin","password":"password","email":"ecmel@example.com"},"locale":"en_US"}
Response:
{"status":"success","message":""}
This one is difficult to reproduce. Is it possible for you to use a PHP debugger and find the exact position where the error occurs?
Please reopen if the issue still persists.
|
2025-04-01T04:35:04.855251
| 2016-02-09T06:00:29
|
132338500
|
{
"authors": [
"jeffkaufman",
"pra85"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9599",
"repo": "pagespeed/ngx_pagespeed",
"url": "https://github.com/pagespeed/ngx_pagespeed/pull/1112"
}
|
gharchive/pull-request
|
Fix a typo
neet → need
Thanks!
I know this is silly for such a small change, but could you sign our cla? https://cla.developers.google.com/about/google-individual
Sure, I have signed it previously while contributing to other Google projects. Attached the screenshot of the same below.
CLA looks good, thanks!
|
2025-04-01T04:35:04.896394
| 2020-12-16T08:04:47
|
768562919
|
{
"authors": [
"RAnders00",
"alazymeme"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9600",
"repo": "pajbot/pajbot",
"url": "https://github.com/pajbot/pajbot/pull/1105"
}
|
gharchive/pull-request
|
Fix changelog enforcer breaking changes
Pull request checklist:
[x] CHANGELOG.md was updated, if applicable
[x] Documentation in docs/ or install-docs/ was updated, if applicable
Dependent on https://github.com/pajbot/pajbot/pull/1104
I doubt this will fix the issue because it's still the same problem. The changelog checker CI gets triggered and doesn't recognize the label as being there yet (something about the way dependabot adds the labels means the CI run doesn't recognize them, and the CI doesn't get re-triggered once they are added).
skipLabel has been deprecated in the new version, I'm more just looking at fixing that. The side changes are just removing extra labels that dependabot doesn't need to add.
Can't we just re-run individual checks?
Waiting for a fixed version of the changelog enforcer first: https://github.com/dangoslen/changelog-enforcer/issues/58
|
2025-04-01T04:35:04.908298
| 2022-03-15T20:39:36
|
1170225986
|
{
"authors": [
"dmikusa-pivotal",
"jjsheridan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9601",
"repo": "paketo-buildpacks/ca-certificates",
"url": "https://github.com/paketo-buildpacks/ca-certificates/issues/94"
}
|
gharchive/issue
|
Enable CA certificates to be baked into an image
What happened?
Right now you need to pass ca-certificates bindings ~at build and~ at runtime. The buildpack will add a helper which loads the certs in the runtime environment. You need to have the certs present at runtime though so the helper can load them.
It's annoying to have to specify them twice.
We should support a more flexible configuration:
~1. No build ca certs required to enable the helper. If you just need certs at runtime, then just set them at runtime.~ (this already works this way, it's enabled by default but you can opt-out and disable it).
2. Embed ca certs into the image. CA certs are public certs, they are not secrets and we could include them in the image. It's just less flexible because you can't change the certs included at runtime. There is still utility in this though and we should allow it.
Checklist
[ ] I have included log output.
[ ] The log output includes an error message.
[ ] I have included steps for reproduction.
@dmikusa-pivotal If possible, we would like to have the certs we add also added to the java cert store.
@jjsheridan - That happens elsewhere. The JVM provider buildpacks (Bellsoft, Microsoft, Amazon, etc...) will load the system certs into the JVM.
If you add certs using ca-certificates buildpack, they'll get installed to the right place so that the JVM provider buildpacks can pick them up.
@dmikusa-pivotal It sounds like I can accomplish adding custom certs by creating our own ca-cert buildpack. Is that correct? If so, what would be the drawbacks to this method?
Sorry, not sure I follow what you'd need to create a buildpack to do.
The Paketo ca-certificates buildpack will handle copying certs from bindings to a location that is suitable for them to be used by OpenSSL. In addition, all of the Paketo Java buildpacks, the ones that provide various vendors' OpenJDK binaries, should load all of the OpenSSL certificate into the JVM keystore.
So the two buildpacks work together to get your bound CA certificates into the JVM keystore.
How were you thinking another buildpack would fit in here? Is there a gap in functionality? If so, we can try to cover that in the Paketo buildpacks. We'd like them to cover most situations out-of-the-box.
@dmikusa-pivotal As a way of including our custom certs in the system cert store. Couldn't this be done by creating our own ca-certificates buildpack that would include all the certs that come with the buildpack and just adding ours?
Couldn't this be done by creating our own ca-certificates buildpack that would include all the certs that come with the buildpack and just adding ours?
If you want to create a buildpack that embeds your ca-certificates with the buildpack itself, you could do that. The Paketo ca-certificates buildpack doesn't have a way to do that.
Typically what I see being done though, instead of a custom buildpack, is a custom stack (build/run image) with the CA certs already trusted in the base image. This has a couple of advantages:
It's in the stack, so your users won't need to include an extra buildpack.
When you create the build/run base images, you can have root access, so you can install the CA certificates into the default locations for your Linux distro. This means you don't need the special OpenSSL env variables that the ca-certificates buildpack uses. A buildpack cannot add CA certificates to the default locations because those are almost always under /etc or some other location to which the buildpack cannot write.
There are probably other ways to attack this problem as well.
@dmikusa-pivotal Thanks. The drawback I see is we would need to have some automation in place to upgrade our stack as updates are released. Given the way we're creating our images via AWS CodePipeline and CodeBuild, it might complicate things a bit too much.
@jjsheridan If you want, you can give https://github.com/paketo-buildpacks/ca-certificates/releases/tag/v3.2.0 a try. This has a PR I committed which allows one to include CA certificates into the built image. You then do not need to include them as a binding at runtime.
This is convenient as it simplifies running the image. The downside is that it's less flexible. If you need to change the certs included with the image then you must rebuild the image.
This should be included in the Paketo buildpacks composite & builder releases on Friday. To try now, just add -b gcr.io/paketo-buildpacks/ca-certificates:3.2.0 -b paketo-buildpacks/java to one of your pack build commands.
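Concretely, such a build command might look like this (the app name `my-app` and the source path are placeholders):

```shell
# Build with the pre-release ca-certificates buildpack ordered ahead of the
# Java composite buildpack. "my-app" and ./my-app are hypothetical.
pack build my-app --path ./my-app \
  -b gcr.io/paketo-buildpacks/ca-certificates:3.2.0 \
  -b paketo-buildpacks/java
```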
Great, thanks @dmikusa-pivotal !
@dmikusa-pivotal I'm finally getting around to trying this, but since no binding is required, I'm not sure where the certs should be stored for pack to pick them up?
@jjsheridan You would still need a binding, but only during build. So you would set BP_EMBED_CERTS=true and pass the binding when you run pack build.
After that, the CA certs are part of the image, so you do not need the binding at runtime.
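A minimal sketch of that build-time flow, assuming the standard CNB binding layout (a directory with a `type` file reading `ca-certificates` plus the PEM files) and mounting it via `pack`'s `--volume` flag — the binding name and cert filename are hypothetical:

```shell
# Assemble a build-time binding: its "type" file identifies it to the
# ca-certificates buildpack, and it holds the .pem files to embed.
mkdir -p bindings/my-certs
echo "ca-certificates" > bindings/my-certs/type
cp my-company-root.pem bindings/my-certs/

# Embed the certs into the image at build time; no binding is needed
# when the resulting image runs.
pack build my-app --path ./my-app \
  --env BP_EMBED_CERTS=true \
  --volume "$PWD/bindings:/platform/bindings"
```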
|
2025-04-01T04:35:04.912361
| 2022-01-28T19:21:45
|
1117755840
|
{
"authors": [
"fg-j",
"sophiewigmore"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9602",
"repo": "paketo-buildpacks/occam",
"url": "https://github.com/paketo-buildpacks/occam/pull/122"
}
|
gharchive/pull-request
|
Add API docs for occam matchers
Summary
Resolves #79
Resolves #80
Resolves #81
Use Cases
Checklist
[ ] I have viewed, signed, and submitted the Contributor License Agreement.
[ ] I have linked issue(s) that this PR should close using keywords or the Github UI (See docs)
[ ] I have added an integration test, if necessary.
[ ] I have reviewed the styleguide for guidance on my code quality.
[ ] I'm happy with the commit history on this PR (I have rebased/squashed as needed).
This looks good. I noticed that this repository doesn't have a README. I think creating a fully complete README could be a separate issue. However, I think it would be nice if we could add a README with a link to the GoDoc site like we have in packit
|
2025-04-01T04:35:04.924165
| 2022-02-02T15:23:35
|
1122061048
|
{
"authors": [
"glynternet",
"hpryce"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9603",
"repo": "palantir/log4j-sniffer",
"url": "https://github.com/palantir/log4j-sniffer/pull/95"
}
|
gharchive/pull-request
|
Consolidate WalkFn and close into WalkCloser
We can pass a WalkCloser around instead of both a WalkFn and a func() error to make many of the signatures cleaner.
Just a little cleanup to make working on https://github.com/palantir/log4j-sniffer/issues/40 a little easier.
👍
|
2025-04-01T04:35:04.936338
| 2021-01-11T17:11:35
|
783549072
|
{
"authors": [
"carterkozak",
"svc-autorelease",
"wenhoujx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9604",
"repo": "palantir/sls-version-java",
"url": "https://github.com/palantir/sls-version-java/pull/390"
}
|
gharchive/pull-request
|
update javadoc link
Before this PR
Clicking the link leads to a 404.
After this PR
==COMMIT_MSG==
Update javadoc sls-spec link
==COMMIT_MSG==
Possible downsides?
@carterkozak thanks for the quick review. Can you trigger the changelog generation? I don't have permission.
Unfortunately the robot cannot push changes to forks, so that won't do anything. You may have to write the changelog manually like a cave-engineer.
Released 0.13.1
|
2025-04-01T04:35:04.941438
| 2021-01-26T17:30:41
|
794422192
|
{
"authors": [
"pkoenig10",
"svc-autorelease"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9605",
"repo": "palantir/sls-version-java",
"url": "https://github.com/palantir/sls-version-java/pull/400"
}
|
gharchive/pull-request
|
OrderableSlsVersion implements Comparable
Before this PR
It is cumbersome to use OrderableSlsVersion with Comparator combinators because OrderableSlsVersion is not Comparable, even though it has a natural and canonical Comparator implementation.
After this PR
OrderableSlsVersion implements Comparable.
Traced this back to https://g.p.b/foundry/apollo/pull/42.
Looks like OrderableSlsVersion initially implemented the raw Comparable class with a comment that said:
Need to use Comparable<T> with generics
So I can't imagine there's a good reason why we didn't just implement Comparable<OrderableSlsVersion>.
Maybe @uschi2000 can comment.
Released 0.13.2
|
2025-04-01T04:35:04.945208
| 2018-05-16T22:16:36
|
323807368
|
{
"authors": [
"WorldMaker",
"giladgray"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9606",
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/issues/3915"
}
|
gharchive/issue
|
no-internal-module flagged for class keywords
Bug Report
TSLint version: 5.9.1
TypeScript version: 2.8.3
Running TSLint via: tslint-language-service 0.9.9 via VSCode extension TSLint (vnext) 0.0.4
TypeScript code being linted
class ClassName { // no-internal-module
}
export default class SomeOtherClass { // no-internal-module
}
with tslint.json configuration:
{
"extends": ["tslint-config-standard"],
"linterOptions": {
"exclude": [
"config/**/*.js",
"node_modules/**/*.ts"
]
}
}
Actual behavior
Any usage of class is getting a tslint warning for no-internal-module.
Expected behavior
No tslint error/warning for class keyword usage.
@WorldMaker I cannot reproduce this in 5.11. can you?
I haven't seen it in a while. I'm wondering if it might have been a confused combination of VS Code plugin version, tslint version, and TS version.
|
2025-04-01T04:35:04.948230
| 2019-11-26T11:52:02
|
528674963
|
{
"authors": [
"JoshuaKGoldberg",
"tonyhallett"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9607",
"repo": "palantir/tslint",
"url": "https://github.com/palantir/tslint/issues/4894"
}
|
gharchive/issue
|
Missing colon in documentation
Error template in docs is missing colon in two places.
Actual behavior
[error] Variables named after %s are not allowed!
and
[error] Variables named after %s are not allowed: '%s'
Expected behavior
[error]: Variables named after %s are not allowed!
and
[error]: Variables named after %s are not allowed: '%s'
🤖 Beep boop! 👉 TSLint is deprecated 👈 and you should switch to typescript-eslint! 🤖
🔒 This issue is being locked to prevent further unnecessary discussions. Thank you! 👋
|
2025-04-01T04:35:04.953259
| 2023-07-27T14:56:19
|
1824558165
|
{
"authors": [
"mrcnk",
"teddyjfpender"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9608",
"repo": "palladians/pallad",
"url": "https://github.com/palladians/pallad/pull/39"
}
|
gharchive/pull-request
|
chore(readme): add contributors
Describe changes
Added automatic contributor avatars.
Updated descriptions and package list in Readme.
Ticket or discussion link
Review checklist
[x] Proper documentation added
[ ] Proper tests added
Screenshots
@all-contributors please add @mrcnk for code
|