| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string, 2 classes) | text (string) |
|---|---|---|---|---|---|
2025-04-01T04:10:33.843575
| 2019-11-22T13:27:01
|
527194710
|
{
"authors": [
"bnarum",
"odow"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14667",
"repo": "JuliaOpt/Gurobi.jl",
"url": "https://github.com/JuliaOpt/Gurobi.jl/issues/271"
}
|
gharchive/issue
|
GRBgetBasisHead gives segmentation fault
I got a segmentation fault when trying to use the function GRBgetBasisHead. Am I doing something wrong here?
A simple replication is:
using Gurobi
using JuMP
mdl = Model(solver = GurobiSolver())
@variable(mdl, x >= 0)
@constraint(mdl, x >= 1)
@objective(mdl, Min, x)
solve(mdl)
grb_mdl = mdl.internalModel.inner
ret = Gurobi.@grb_ccall("getBasisHead",
Cint,
(Ptr{Cvoid}, Cint),
grb_mdl, 1)
I'm using Julia 1.2 with package versions
JuMP v0.18.6
Gurobi v0.7.3
and Gurobi 8.1.1
It fails when doing the @grb_ccall:
Academic license - for non-commercial use only
Optimize a model with 1 rows, 1 columns and 1 nonzeros
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+00]
Presolve removed 1 rows and 1 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 1.0000000e+00 0.000000e+00 0.000000e+00 0s
Solved in 0 iterations and 0.00 seconds
Optimal objective 1.000000000e+00
signal (11): Segmentation fault: 11
in expression starting at /Users/bnarum/Desktop/test_gurobi.jl:15
GRBgetBasisHead at /usr/local/lib/libgurobi81.dylib (unknown line)
top-level scope at /Users/bnarum/.julia/packages/Gurobi/TX8tY/src/grb_common.jl:56
jl_toplevel_eval_flex at /Users/sabae/buildbot/worker/package_macos64/build/src/toplevel.c:809
jl_parse_eval_all at /Users/sabae/buildbot/worker/package_macos64/build/src/ast.c:873
jl_load at /Users/sabae/buildbot/worker/package_macos64/build/src/toplevel.c:879 [inlined]
jl_load_ at /Users/sabae/buildbot/worker/package_macos64/build/src/toplevel.c:886
include at ./boot.jl:328 [inlined]
include_relative at ./loading.jl:1094
include at ./Base.jl:31
exec_options at ./client.jl:295
_start at ./client.jl:464
true_main at /usr/local/bin/julia (unknown line)
main at /usr/local/bin/julia (unknown line)
Allocations: 21121973 (Pool: 21117029; Big: 4944); GC: 45
Segmentation fault: 11
See the documentation: https://www.gurobi.com/documentation/8.1/refman/c_grbgetbasishead.html
You need something like:
function getBasisHead(model::JuMP.Model)
grb_model = model.internalModel
bhead = zeros(Cint, Gurobi.num_constrs(grb_model.inner))
ret = Gurobi.@grb_ccall(
getBasisHead,
Cint,
(Ptr{Cvoid}, Ptr{Cint}),
grb_model.inner, bhead
)
if ret != 0
throw(Gurobi.GurobiError(grb_model.env.inner, ret))
end
return bhead .+ 1
end
Great, that did it! Thanks for the quick reply.
|
2025-04-01T04:10:33.882730
| 2023-03-23T15:10:55
|
1637735483
|
{
"authors": [
"JuliaRegistrator",
"frankier"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14668",
"repo": "JuliaPsychometricsBazaar/RIrtWrappers.jl",
"url": "https://github.com/JuliaPsychometricsBazaar/RIrtWrappers.jl/issues/5"
}
|
gharchive/issue
|
@JuliaRegistrator register
@JuliaRegistrator register
@JuliaRegistrator register
@JuliaRegistrator register
Registration pull request updated: JuliaRegistries/General/85187
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.1.0 -m "<description of version>" bdeb8251901e26648473d21ff541bc5f066e3465
git push origin v0.1.0
@JuliaRegistrator register
Registration pull request updated: JuliaRegistries/General/85187
After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.
This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:
git tag -a v0.1.0 -m "<description of version>" dc059f2d606625510298c9c9339864b2cc36b9b5
git push origin v0.1.0
|
2025-04-01T04:10:33.884091
| 2015-04-11T05:57:12
|
67739973
|
{
"authors": [
"amitjamadagni"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14669",
"repo": "JuliaQuantum/QuBase.jl",
"url": "https://github.com/JuliaQuantum/QuBase.jl/pull/24"
}
|
gharchive/pull-request
|
Operators, Tests.
Added :
Spin Operators,
Related tests using Commutator,
Momentum and position operators.
Reference #22.
@acroy I guess this is fine.
|
2025-04-01T04:10:33.935050
| 2020-03-17T23:48:11
|
583367872
|
{
"authors": [
"DilumAluthge",
"KristofferC",
"maleadt",
"timholy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14670",
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/11114"
}
|
gharchive/pull-request
|
Use RetroCap.jl to add monotonic "caps" (upper-bounded compat entries) to all packages
This pull request uses RetroCap.jl to add monotonic upper-bounded compat entries to all packages.
cc: @KristofferC
The script to reproduce this PR:
julia> import Pkg
julia> Pkg.add("RetroCap")
julia> import RetroCap
julia> rm("General"; force = true, recursive = true)
julia> run(`git clone git@github.com:JuliaRegistries/General.git General`)
julia> cd("General")
julia> run(`git checkout master`)
julia> run(`git checkout -B dpa/add-caps`)
julia> RetroCap.add_caps(RetroCap.MonotonicUpperBound(), RetroCap.CapLatestVersion(), pwd())
julia> run(`git add -A`)
julia> run(`git commit -m "Use RetroCap.jl to add monotonic \"caps\" (upper-bounded compat entries) to all packages"`)
julia> run(`git push --force origin dpa/add-caps`)
julia> cd("..")
julia> rm("General"; force = true, recursive = true)
@maleadt Can you run PkgEval on the master branch of the General registry and on this PR branch (dpa/add-caps), and compare the two runs? Since this PR modifies so many packages, we'd like to make sure that it doesn't break anything.
Done. Report here: https://github.com/maleadt/retrocap_report/blob/master/report.md
A bit too many just to be able to merge but not too crazy. Most of those resolver errors would be good to investigate.
@DilumAluthge, does RetroCap.jl respect yanked releases (ignores them)?
No. We should fix that, i.e., tell RetroCap to ignore yanked releases.
Some of these failures are due to the fact that in 1.3.1, if something is "fixed" in the project manifest, then it doesn't get re-resolved when we resolve test dependencies.
That is, we first resolve the project dependencies, and we fix those, and then we resolve test dependencies with no possibility of changing the project dependencies.
I believe this has been fixed in either 1.4 or 1.5, in which we now have the ability to re-resolve project dependencies when we resolve test-dependencies.
I think the tiered resolver will be available in 1.4. Let's put this on hold until 1.4 is released, and then we can return to it.
So, I think most of the "unsatisfiable" errors should be fixed on 1.4 with the tiered resolver.
As for the others, here is my stream of consciousness:
One of these is really puzzling. The latest release of AIBECS is version 0.4.13, released 21 days ago. But the retrocap pkgeval job tested AIBECS 0.2.9:
https://github.com/maleadt/retrocap_report/blob/07b1ace730eacd66e8d479b20e88a1a02e954bfa/AIBECS.retrocap.log
The master pkgeval job correctly tested AIBECS 0.4.13:
https://github.com/maleadt/retrocap_report/blob/07b1ace730eacd66e8d479b20e88a1a02e954bfa/AIBECS.master.log
Why did the two jobs test such drastically different versions of the same package?
Actually, that's not the only package I saw this on. Take InverseDistanceWeighting. Master ran it on 0.3.2:
https://github.com/maleadt/retrocap_report/blob/07b1ace730eacd66e8d479b20e88a1a02e954bfa/InverseDistanceWeighting.master.log
But retrocap branch ran it on 0.3.0:
https://github.com/maleadt/retrocap_report/blob/07b1ace730eacd66e8d479b20e88a1a02e954bfa/InverseDistanceWeighting.retrocap.log
A whole bunch of these are a _vec not defined error in OrdinaryDiffEq:
ERROR: LoadError: UndefVarError: _vec not defined
Stacktrace:
[1] include at ./boot.jl:328 [inlined]
[2] include_relative(::Module, ::String) at ./loading.jl:1105
[3] include(::Module, ::String) at ./Base.jl:31
[4] top-level scope at none:2
[5] eval at ./boot.jl:330 [inlined]
[6] eval(::Expr) at ./client.jl:425
[7] top-level scope at ./none:3
in expression starting at /home/pkgeval/.julia/packages/OrdinaryDiffEq/TUKTm/src/OrdinaryDiffEq.jl:41
ERROR: LoadError: Failed to precompile OrdinaryDiffEq [1dea7af3-3e70-54e6-95c3-0bf5283fa5ed] to /home/pkgeval/.julia/compiled/v1.3/OrdinaryDiffEq/DlSvy_Eo4xI.ji.
Some of these seem to be network errors during the build step of a package. For example:
Error: Download failed: curl: (7) Failed to connect to github.com port 443: Connection timed out
Some of these are artifact installation errors. For example:
ERROR: Unable to automatically install 'OpenSpecFun' from '/home/pkgeval/.julia/packages/OpenSpecFun_jll/HMSwk/Artifacts.toml'
ERROR: Unable to automatically install 'Rmath' from '/home/pkgeval/.julia/packages/Rmath_jll/dkU4a/Artifacts.toml'
ERROR: Unable to automatically install 'Xorg_fixesproto' from '/home/pkgeval/.julia/packages/Xorg_fixesproto_jll/m5mh5/Artifacts.toml'
It would be nice if we could automatically retry the ones that are artifact installation errors.
There are some others that seem like compat problems on the master branch of General.
For example, the latest registered version of BEAST (BEAST 1.1.0) claims compatibility with CompScienceMeshes = "0.2.2-0". But, BEAST directly accesses fields of structs in CompScienceMeshes. And CompScienceMeshes 0.2.5 has different field names for one of the structs than does CompScienceMeshes 0.2.2. But BEAST tries to access the field using the field names from CompScienceMeshes 0.2.2. So, BEAST is not actually compatible with CompScienceMeshes 0.2.2. But the master branch of General says that it is. This isn't actually a problem with this PR. But anyway, we can fix that in this PR by e.g. manually changing the compat entry for BEAST 1.1.0 to be CompScienceMeshes = "0.2.2".
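The manual fix suggested here would look roughly like the following registry change (the file path and section header are assumptions based on the General registry's layout; only the compat values come from the comment above):

```toml
# B/BEAST/Compat.toml (hypothetical excerpt)
["1.1.0"]
# was: CompScienceMeshes = "0.2.2-0"  (admits 0.2.5, whose struct field names differ)
CompScienceMeshes = "0.2.2"
```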
@maleadt Do you think you could do another PkgEval comparison, but this time, use the latest Julia-1.4 release candidate?
You don't need to do all packages. Just do the bad packages. I think it's 93 in total.
Of course, before we merge this PR, we will do another full PkgEval run.
But right now, I just want to make sure that the resolver errors get fixed by 1.4. So I think just a run of the bad 93 is fine for now. Let's see how many get fixed by 1.4.
On v1.4.0-rc2+retrocap, out of 3254 packages 1288 passed, 1660 failed, 5 got killed and 301 were skipped.
Comparing against v1.4.0-rc2+general:
In summary, 0 packages now succeed, while 0 have started to fail.
Well, that’s quite nice!
@KristofferC how do you feel about merging?
1.4.0-rc2 is exactly the same as 1.4.0, right?
If so, what do you guys say we merge this PR?
I don't really understand how 1.4 could fix those cases like
A whole bunch of these are a _vec not defined error in OrdinaryDiffEq:
Also, I am surprised not a single package started failing or passing. Usually, there are a few packages that are noisy and pass / fail a bit randomly.
For example, I tried to add AIBECS and test it and got:
(Caps) pkg> test AIBECS
Testing AIBECS
...
ERROR: Unsatisfiable requirements detected for package Unitful [1986cc42]:
Unitful [1986cc42] log:
├─possible versions are: [0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 1.0.0] or uninstalled
├─restricted to versions 1 by an explicit requirement, leaving only versions 1.0.0
└─restricted by compatibility requirements with WorldOceanAtlasTools [04f20302] to versions: [0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.18.0] — no versions left
└─WorldOceanAtlasTools [04f20302] log:
├─possible versions are: [0.2.0-0.2.4, 0.3.0-0.3.5, 0.4.0-0.4.1] or uninstalled
└─restricted to versions * by an explicit requirement, leaving only versions [0.2.0-0.2.4, 0.3.0-0.3.5, 0.4.0-0.4.1]
Seems like that should have happened in PkgEval as well?
A lot of the failures are from RecipesBase that tagged a new breaking release because it caused the highly coupled Plots.jl to fail........
I was surprised as well, but made sure to compare this PR against two commits prior so that I could make sure one package got legitimately upgraded. Anyway, let me rerun just to be sure. @DilumAluthge, can you rebase once more to make sure it includes a fixed RecipesBase (or instead rebase right before that breaking release?).
Yep I'll do it when I get back to a computer.
Alright, I ran RetroCap on the latest General master.
OK, so this was a false positive indeed, sorry for that. New results up here:
In summary, 44 packages now succeed, while 112 have started to fail.
I think something weird is happening with the second run; almost all failures seem to be related to some download problems:
Cloning failure:
ERROR: failed to clone from https://github.com/queryverse/IteratorInterfaceExtensions.jl.git, error: GitError(Code:ERROR, Class:OS, failed to connect to github.com: Connection timed out)
Why is it cloning? Means the tarball download failed:
Installed IterTools ────────── v1.3.0
Cloning [a81c6b42-2e10-5240-aca2-a61377ecd94b] Compose from https://github.com/GiovineItalia/Compose.jl.git
Fetching: [> ] 0.0 %
Other download failure:
│ [ Info: Downloading https://github.com/bicycle1885/ZlibBuilder/releases/download/v1.0.4/Zlib.v1.2.11.x86_64-linux-gnu.tar.gz to /home/pkgeval/.julia/packages/ImageMagick/vMfoS/deps/usr/downloads/Zlib.v1.2.11.x86_64-linux-gnu.tar.gz...
│ [ Info: Downloading https://github.com/JuliaIO/LibpngBuilder/releases/download/v1.0.3/libpng.v1.6.37.x86_64-linux-gnu.tar.gz to /home/pkgeval/.julia/packages/ImageMagick/vMfoS/deps/usr/downloads/libpng.v1.6.37.x86_64-linux-gnu.tar.gz...
│ [ Info: Downloading https://github.com/JuliaIO/LibJPEGBuilder/releases/download/v10/libjpeg.v9.0.0-b.x86_64-linux-gnu.tar.gz to /home/pkgeval/.julia/packages/ImageMagick/vMfoS/deps/usr/downloads/libjpeg.v9.0.0-b.x86_64-linux-gnu.tar.gz...
│ ┌ Error: Download failed: curl: (22) The requested URL returned error: 401 UNAUTHORIZED
etc.
Maybe GitHub rate limited me? I can try rerunning. But aren't failures due to this change mostly going to manifest as failures to install?
401 does suggest rate limiting.
The reason it doesn’t manifest at install time is that the build step doesn’t actually throw errors. So we don’t realize that the build failed until we try to import.
Should I rerun RetroCap, and then we can do another PkgEval run?
sure! I'll try in a US data center this time, maybe there's less rate limiting there (as I haven't seen it happen much on pkgeval runs)
Alright it's all set to go.
Bump @maleadt what’s the latest PkgEval for this branch?
https://github.com/maleadt/retrocap_report/blob/e586dc05448a50aa4a3a9c03930598afcb2012f5/report.md
Are you testing the same version of the package on each run?
That is, is the version of Foo.jl that is tested on General master the same version of Foo.jl that is tested on this PR branch.
Look at e.g. SwitchOnSafety.jl.
There might be some new compat that prevents the latest SwitchOnSafety to be used?
Opened a few other PRs that should fix a few of the problems.
Are you testing the same version of the package on each run?
Of course. The only package where I purposefully didn't is ERFA.jl, to ensure I was using the correct registry commits (ERFA was the last package upgraded before your commit on dpa/add_caps):
julia> data[data.name .== "ERFA", :]
2×8 DataFrame. Omitted printing of 1 columns
│ Row │ julia │ name │ uuid │ version │ status │ reason │ duration │
│ │ VersionNumber │ String │ Base.UUID │ Version…⍰ │ Symbol │ Symbol⍰ │ Float64 │
├─────┼───────────────────┼────────┼──────────────────────────────────────────────┼───────────┼────────┼─────────┼──────────┤
│ 1 │ v"1.4.0-retrocap" │ ERFA │ UUID("17511681-8477-586a-8d98-4cfd5a1f2ec3") │ v"0.6.0" │ ok │ missing │ 82.531 │
│ 2 │ v"1.4.0-general" │ ERFA │ UUID("17511681-8477-586a-8d98-4cfd5a1f2ec3") │ v"0.5.0" │ ok │ missing │ 139.93 │
Of course.
IIUC, we are testing the version that Pkg.add gives us in a clean project (which might not be the same with the new compat info).
Right, what I meant is that I verified we're using the registry at the point where identical versions of packages are available. PkgEval currently doesn't try to install specific versions, I'm not sure if that would be more valuable (for the end user it just matters if something is installable and works).
Regenerated the report, only one unsatisfiable left: https://github.com/maleadt/retrocap_report/blob/2d913a74333a245beabe4e9b65be21382337669b/report.md. Also includes script to generate the report.
I think we’re basically just waiting on https://github.com/JuliaGraphs/GraphPlot.jl/pull/106
Alright, I think we have fixed all of the unsatisfiables.
@maleadt Want to run PkgEval again?
New report, new unsatisfiable: https://github.com/maleadt/retrocap_report/blob/77fd53a5d7c7a2266d3337ff392a774cbe00990a/report.md
FWIW, this was run by reapplying RetroCap.jl (with a more recent registry), not reusing the exact branch here.
I think it will be hard to get this 100% perfect. If it is only GraphPlot and only when testing, I think it is ok. We should write up a post about this and post on discourse though.
Also, the GraphPlot issue should be solved once they register a new version: https://github.com/JuliaGraphs/GraphPlot.jl/commit/a15382bb4a251a3789b2c243a82ddb8b09a4590f
A new version of GraphPlot has been registered.
I completely forgot about this, but we should do this.
Here's a draft post for Discourse:
We have used the RetroCap.jl package to add "caps" (upper-bounded [compat] entries) to all packages in the General registry.
More specifically, for each package in the General registry, we iterate over each of the package's registered versions. For each registered version of the package, we iterate over each of the package's dependencies. For each dependency:
If the package does not have a [compat] entry for the dependency, then RetroCap adds an upper-bounded [compat] entry for the dependency.
If the package has a [compat] entry for the dependency but the [compat] entry is not upper-bounded, then RetroCap replaces the original [compat] entry with an upper-bounded [compat] entry for the dependency.
The upper bounds that we have added are "monotonic," meaning that older releases have older dependencies.
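As a rough illustration of the capping rule described above, here is a toy model in Python; it is not RetroCap.jl's actual data structures or API, just a sketch of the iteration logic.

```python
# Toy model of the capping rule described above. A registry is represented as
# {package: {version: {dependency: compat_entry_or_None}}}; entries written as
# "lower - upper" are considered upper-bounded. This is NOT RetroCap.jl's API.

def is_upper_bounded(compat):
    # A range entry like "0.10.3 - 0.12" carries an explicit upper bound.
    return compat is not None and "-" in compat

def add_caps(registry, cap_for):
    """Give every dependency of every registered version an upper-bounded
    compat entry. `cap_for(pkg, version, dep)` supplies the cap, and is where
    monotonicity (older releases get older caps) would be enforced."""
    for pkg, versions in registry.items():
        for version, deps in versions.items():
            for dep, compat in deps.items():
                if not is_upper_bounded(compat):
                    lower = compat if compat is not None else "0.0.0"
                    deps[dep] = f"{lower} - {cap_for(pkg, version, dep)}"
    return registry

registry = {"Foo": {"1.0.0": {"Bar": None, "Baz": "0.14", "Qux": "0.1 - 0.2"}}}
add_caps(registry, lambda pkg, version, dep: "0.17")
# Bar gains "0.0.0 - 0.17", Baz becomes "0.14 - 0.17", Qux is left unchanged.
```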
The script used to generate these changes is:
julia> import Pkg
julia> Pkg.add("RetroCap")
julia> import RetroCap
julia> rm("General"; force = true, recursive = true)
julia> run(`git clone git@github.com:JuliaRegistries/General.git General`)
julia> cd("General")
julia> run(`git checkout master`)
julia> run(`git checkout -B YOUR-INITIALS/add-caps`)
julia> RetroCap.add_caps(RetroCap.MonotonicUpperBound(), RetroCap.CapLatestVersion(), pwd())
julia> run(`git add -A`)
julia> run(`git commit -m "Use RetroCap.jl to add monotonic \"caps\" (upper-bounded compat entries) to all packages"`)
These changes were made in this PR: https://github.com/JuliaRegistries/General/pull/11114
If these changes have broken anything, please open an issue on the General registry repo and CC @DilumAluthge and @KristofferC.
I just ran RetroCap on the latest General registry master and pushed the changes to this branch.
Also, see my draft of the Discourse post in the previous comment.
@KristofferC @maleadt What do you guys think about making this change on Monday? If we merge this sometime during the morning in Europe time, then that gives us the full day during both the Europe and U.S. work day to fix any issues that crop up.
Since this branch will probably get out of date between now and Monday, you can just re-run this script before merging:
julia> import Pkg
julia> Pkg.add("RetroCap")
julia> import RetroCap
julia> rm("General"; force = true, recursive = true)
julia> run(`git clone git@github.com:JuliaRegistries/General.git General`)
julia> cd("General")
julia> run(`git checkout master`)
julia> run(`git checkout -B dpa/add-caps`)
julia> RetroCap.add_caps(RetroCap.MonotonicUpperBound(), RetroCap.CapLatestVersion(), pwd())
julia> run(`git add -A`)
julia> run(`git commit -m "Use RetroCap.jl to add monotonic \"caps\" (upper-bounded compat entries) to all packages"`)
julia> run(`git push --force origin dpa/add-caps`)
julia> cd("..")
julia> rm("General"; force = true, recursive = true)
I think this would help a lot with many weird results people get from the resolver. At the same time, I am worried people will start getting resolver errors with their tests... So hard :/
Hm, I wonder if we should try PkgEval again, and if the result isn't too bad, we just do this. @DilumAluthge
Sounds good to me. I've updated the branch. @maleadt Can you run PkgEval?
It's running.
@maleadt Are the results back yet?
No, running PkgEval takes a while.
You can't rush perfection.
12 unsatisfiables: https://github.com/maleadt/retrocap_report/blob/master/report.md
Not terrible.
At a quick glance, it looks like at least some of these unsatisfiables are due to the monotonicity of the caps. We should be able to fix those by making manual PRs to loosen the monotonicity.
I'd be fine with merging with those PkgEval results, and making a Discourse post, and then manually fixing up the individual cases.
Just as an example, if I apply the following patch to the dpa/add-caps branch, it fixes the unsatisfiable error for ODEInterfaceDiffEq:
From 4c6496dcb96065fd26777b22ed484a7f27db3d42 Mon Sep 17 00:00:00 2001
From: Dilum Aluthge<EMAIL_ADDRESS>
Date: Wed, 2 Sep 2020 06:27:23 -0400
Subject: [PATCH] Fix ODEInterfaceDiffEq
---
D/DiffEqBiological/Compat.toml | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/D/DiffEqBiological/Compat.toml b/D/DiffEqBiological/Compat.toml
index f8ee030c24..c0bc7e5b8b 100644
--- a/D/DiffEqBiological/Compat.toml
+++ b/D/DiffEqBiological/Compat.toml
@@ -34,7 +34,6 @@ Parameters = "0.10.3 - 0.12"
DiffEqBase = "5.1-5"
["3.8-3"]
-DataStructures = "0.0.0 - 0.17"
MacroTools = "0.0.0 - 0.5"
Parameters = "0.0.0 - 0.12"
Reexport = "0.0.0 - 0.2"
@@ -46,6 +45,9 @@ DiffEqBase = "5"
["3.8-4"]
DiffEqJump = "6"
+["3.8-4.0.1"]
+DataStructures = "0.0.0 - 0.17"
+
["3.8.1-3.9.0"]
Compat = "0.0.0 - 3"
@@ -62,7 +64,6 @@ Latexify = ["0.11", "2"]
["4.0"]
Compat = "0.0.0 - 3"
-DataStructures = "0.0.0 - 0.17"
DynamicPolynomials = "0.0.0 - 0.3"
HomotopyContinuation = "0.0.0 - 1"
MacroTools = "0.0.0 - 0.5"
@@ -73,6 +74,7 @@ SymEngine = "0.0.0 - 0.7"
julia = "1"
["4.0.2-4.0"]
+DataStructures = "0.0.0 - 0.18"
Latexify = "0.11.0 - 0.12"
["4.1"]
--
2.28.0
To explain a little bit on the strategy for fixing these: I look at the log for the pre-cap registry (i.e. the master branch of the registry) - the log includes a full printout of all of the specific package versions that were used during the test suite. Then, I add my dpa/add-caps registry to Julia. Now I go down the list and do "] add Foo@<version>" and "] pin Foo@<version>" for each Foo in the list of packages that were resolved during the test suite. This makes it much easier to identify the problem.
@KristofferC you cool with me updating this branch and merging it?
Should be ok, but it would indeed be good to follow up with the ones that started failing.
Yep. There are 12 unsatisfiables, which I will fix as soon as I merge this.
Hopefully there will only be at most a handful of additional error reports.
Thanks for doing this, @DilumAluthge! I do think this will make a lot of things more robust.
We fixed a couple of compat issues for Julia 1.0 in https://github.com/JuliaIO/ImageMagick.jl/pull/186. (Reference example: https://github.com/JuliaImages/ImageCore.jl/pull/140/checks?check_run_id=1090119276). Note that this was a conflict between the package (which loaded fine) and a test dependency (where version checks are performed after Julia commits to a particular set of versions for the package itself).
Note that this was a conflict between the package (which loaded fine) and a test dependency (where version checks are performed after Julia commits to a particular set of versions for the package itself).
Yep, on more recent versions of Julia, the tiered resolver can work around this, but obviously that's not available on Julia 1.0.
Let me know if you also want to add those compat fixes into the General registry itself!
Let's see how far we get with the one-package-at-a-time approach. Releasing an updated version of ImageMagick was enough to solve the issues for ImageCore; I suspect that most of the issues will arise from packages that jumped to a minimum of 1.3 to exploit the lovely artifacts system, and JuliaImages doesn't really have that many such packages.
|
2025-04-01T04:10:33.940461
| 2024-12-10T21:54:00
|
2731273866
|
{
"authors": [
"JuliaRegistrator",
"fredrikekre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14671",
"repo": "JuliaRegistries/General",
"url": "https://github.com/JuliaRegistries/General/pull/121152"
}
|
gharchive/pull-request
|
New package: PRASCapacityCredits v0.7.0
Registering package: PRASCapacityCredits
Repository: https://github.com/NREL/PRAS
Created by: @GordStephen
Version: v0.7.0
Commit: bc3c4674cbabb577ea6beaab8615d4a245a2b412
Reviewed by: @GordStephen
Reference: https://github.com/NREL/PRAS/commit/bc3c4674cbabb577ea6beaab8615d4a245a2b412#commitcomment-150177776
Description: NREL's Probabilistic Resource Adequacy Suite (PRAS)
https://github.com/JuliaRegistries/General/pull/121151#issuecomment-2533062671
|
2025-04-01T04:10:33.977656
| 2017-04-02T15:56:25
|
218773311
|
{
"authors": [
"alichaddad",
"rkhamis"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14672",
"repo": "Jumpscale/jumpscale_core8",
"url": "https://github.com/Jumpscale/jumpscale_core8/issues/806"
}
|
gharchive/issue
|
Starting portal doesn't start Mongodb
Please remove the doneSet when starting mongo in the cuisine module.
https://github.com/Jumpscale/jumpscale_core8/commit/55f03f0e5f5497939ce3a54d82c55635e0ebde08
|
2025-04-01T04:10:33.985869
| 2024-03-11T19:20:36
|
2180026144
|
{
"authors": [
"bwJuniper",
"chrismarget-j",
"rajagopalans"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14673",
"repo": "Juniper/terraform-provider-apstra",
"url": "https://github.com/Juniper/terraform-provider-apstra/issues/612"
}
|
gharchive/issue
|
How to support SONiC with apstra_agent_profile resource?
The apstra_agent_profile resource currently only allows three values for the platform attribute:
junos
eos
nxos
And it defaults to junos.
There's no provision for SONiC. Should there be?
In the web UI it is possible to omit this field altogether by selecting nothing from the dropdown box. This produces an Agent Profile with "" in the platform field.
Should this be an option?
Does a "" mean sonic?
I don't think so, it just means "I didn't select one". You can manually specify username/password and not select one; it will try to figure out what device it is and apply the correct agent.
|
2025-04-01T04:10:34.059383
| 2021-05-18T04:24:14
|
893932333
|
{
"authors": [
"twyatt",
"wrsx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14674",
"repo": "JuulLabs/kable",
"url": "https://github.com/JuulLabs/kable/issues/102"
}
|
gharchive/issue
|
[Android] peripheral.read() hangs after manually disabling bluetooth
We're frequently polling peripheral.read() on a background coroutine and noticed that manually disabling Bluetooth will not always throw an exception.
An exception will be thrown if the read occurs after the OS disconnects from the device:
2021-05-18 11:39:35.032 14297-15286/com.v I/FTRepository: start read
2021-05-18 11:39:35.471 14297-15286/com.v I/FTRepository: finish read
2021-05-18 11:39:35.481 14297-15286/com.v I/FTRepository: start read
2021-05-18 11:39:35.650 14297-15286/com.v I/FTRepository: finish read
2021-05-18 11:39:35.654 14297-16277/com.v D/BluetoothGatt: onBluetoothStateChange: up=false
2021-05-18 11:39:35.654 14297-16277/com.v D/BluetoothGatt: Bluetooth is turned off, disconnect all client connections
2021-05-18 11:39:35.654 14297-16277/com.v D/BluetoothGatt: close()
2021-05-18 11:39:35.658 14297-16277/com.v D/BluetoothGatt: unregisterApp() - mClientIf=15
2021-05-18 11:39:35.661 14297-15286/com.v I/FTRepository: start read
2021-05-18 11:39:35.662 14297-15286/com.v E/FTRepository: com.juul.kable.GattRequestRejectedException
An exception will not be thrown if the read occurs before the OS disconnects:
2021-05-18 11:11:36.786 11390-12135/com.v I/FTRepository: start read
2021-05-18 11:11:36.956 11390-12135/com.v I/FTRepository: finish read
2021-05-18 11:11:36.966 11390-12135/com.v I/FTRepository: start read
2021-05-18 11:11:37.135 11390-12135/com.v I/FTRepository: finish read
2021-05-18 11:11:37.146 11390-12135/com.v I/FTRepository: start read <-- hangs forever
2021-05-18 11:11:37.251 11390-12401/com.v D/BluetoothGatt: onBluetoothStateChange: up=false
2021-05-18 11:11:37.251 11390-12401/com.v D/BluetoothGatt: Bluetooth is turned off, disconnect all client connections
2021-05-18 11:11:37.251 11390-12401/com.v D/BluetoothGatt: close()
2021-05-18 11:11:37.254 11390-12401/com.v D/BluetoothGatt: unregisterApp() - mClientIf=15
2021-05-18 11:11:37.255 11390-12200/com.v I/BluetoothAdapter: onBluetoothStateChange: up=false
@wrsx can you try the following SNAPSHOT and check if it resolves your issue (i.e. throws when BLE is turned off rather than hanging)?
repositories {
maven("https://oss.sonatype.org/content/repositories/snapshots")
}
dependencies {
implementation("com.juul.kable:core:0.6.0-issue-102-1-SNAPSHOT")
}
Thank you!
Just tested and seems to be disconnecting now, thanks!
Thanks for testing @wrsx, we'll be releasing 0.7.0 soon and it will include this fix.
Fix is included in 0.7.0 release.
|
2025-04-01T04:10:34.149693
| 2018-05-31T09:51:37
|
328070675
|
{
"authors": [
"karltaylor"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14675",
"repo": "KBLNY/react-native-message-bar",
"url": "https://github.com/KBLNY/react-native-message-bar/issues/47"
}
|
gharchive/issue
|
Is this repo still maintained?
👋 asking for a friend
I see that @talor-a is maintaining his own fork 👍
Related: #36
|
2025-04-01T04:10:34.241496
| 2017-05-31T20:32:46
|
232687220
|
{
"authors": [
"Olympic1",
"bfishman"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14676",
"repo": "KSP-CKAN/CKAN-meta",
"url": "https://github.com/KSP-CKAN/CKAN-meta/pull/1270"
}
|
gharchive/pull-request
|
Update DockingPortAlignmentIndicator-6.6.0.ckan
"ksp_version_min" changed to 1.3.0
If I merge this the min version will get overwritten again to 1.2.0 in the next few hours, so I made a new PR to keep it at 1.3.0.
https://github.com/KSP-CKAN/NetKAN/pull/5565
This can now be merged as it won't get overwritten anymore
|
2025-04-01T04:10:34.242266
| 2016-08-01T12:34:41
|
168635246
|
{
"authors": [
"linuxgurugamer",
"politas"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14677",
"repo": "KSP-CKAN/NetKAN",
"url": "https://github.com/KSP-CKAN/NetKAN/pull/4475"
}
|
gharchive/pull-request
|
SnapDock
closes #4472
Can't quite work out from the description what this mod does, but this seems to install it correctly. Merging with thanks!
|
2025-04-01T04:10:34.264619
| 2023-12-20T12:51:28
|
2050492876
|
{
"authors": [
"belanglos",
"vsdkth"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14678",
"repo": "KTH/kurs-pm-web",
"url": "https://github.com/KTH/kurs-pm-web/pull/323"
}
|
gharchive/pull-request
|
Issues/KUI-1160 fix render mismatch
Cypress tests encountered a mismatch between the SSR code and the first render on the client side.
This was resolved by putting the mismatching variable into state, setting the initial value to the same value it has on the server, and updating the value on first render using a useEffect.
Also ran into problems with eslint and had to move one function.
Ser bra ut! Approved!
|
2025-04-01T04:10:34.308932
| 2018-09-22T17:55:43
|
362877289
|
{
"authors": [
"corrieann",
"jzuern",
"mihirdcoder"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14682",
"repo": "Kaggle/kaggle-api",
"url": "https://github.com/Kaggle/kaggle-api/issues/99"
}
|
gharchive/issue
|
Competition dataset not linking with Colab
I am trying to use Colab and the Kaggle API for the NYC taxi fare competition, but I get an error while fetching datasets.
I have uploaded my Kaggle API key and placed it in the root folder, but when I try to run
!kaggle competitions download -c new-york-city-taxi-fare-prediction
it throws a 404 error!
Yes, I am able to download; I am able to use data from gdrive but can't connect directly from the Kaggle API. The !kaggle datasets list command also does not work and throws the same error.
Can you share your Colab notebook?
It's working for me --
https://colab.research.google.com/gist/corrieann/3abf2e79f833200d72b58ff10b1df39e/kaggle-api.ipynb
@corrieann Is the notebook still working for you? I tried to run it and it throws a 404 when trying to list the datasets. Might be a kaggle server issue?
Still works for me.
Maybe try https://colab.research.google.com/github/corrieann/kaggle/blob/master/kaggle_api_in_colab.ipynb
It avoids using Google Drive for fetching your kaggle.json file which is, I suspect, where you're running into trouble.
|
2025-04-01T04:10:34.318610
| 2022-02-08T14:26:04
|
1127327160
|
{
"authors": [
"Kaiede"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14683",
"repo": "Kaiede/Bedrockifier",
"url": "https://github.com/Kaiede/Bedrockifier/issues/28"
}
|
gharchive/issue
|
Support Backing up Mods/etc Folders
This could be done adding something like this in the configuration file:
java:
- name: <container name>
extras:
- <folder path>
worlds:
- <world path>
The result would be a series of zip files. So if I had a container called “minecraft_server” and the world was “Yosemite”, the result would be:
minecraft_server.extras.Timestamp.zip
Yosemite.Timestamp.zip
Inside the extras zip would be each folder that was listed in the extras part of the configuration.
Would need to investigate how this interacts with the trimming logic. It should be fine, but it would at least need to identify the extras as a proper bucket to trim.
Enable Backup of extra folders
|
2025-04-01T04:10:34.355909
| 2021-07-02T08:32:34
|
935546300
|
{
"authors": [
"Asteriajojo",
"HanZ1203",
"KaiyangZhou",
"gscahhh",
"gshuangchun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14684",
"repo": "KaiyangZhou/mixstyle-release",
"url": "https://github.com/KaiyangZhou/mixstyle-release/issues/8"
}
|
gharchive/issue
|
T-SNE
Dear Dr. zhou,
Thank you for your contribution!
Could you please share the T-SNE code which can plot the Figure 4?
the code has been updated
please refer to README in imcls/ for the details
thank you!
Hi, could you please share the T-SNE code for the style statistics (bottom) in Figure 4? I can only find the code for visualization of flattened feature maps... Or could you please tell me what the single point (bottom) in Figure 4 means? Thanks a lot!
I still cannot understand what 'style statistics' stands for. Could you please clarify how you get Fig. 4 a little further? Thank you so much.
|
2025-04-01T04:10:34.358258
| 2023-07-19T19:43:20
|
1812621931
|
{
"authors": [
"ju-Skinner",
"rwc9u"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14685",
"repo": "Kajabi/sage-lib",
"url": "https://github.com/Kajabi/sage-lib/pull/1766"
}
|
gharchive/pull-request
|
[CHORE] - upgrade node v16.20
Description
Upgrade to Node v16.
https://github.com/Kajabi/sage-lib/actions/runs/5603412429/jobs/10249992415?pr=1766#step:5:8
|
2025-04-01T04:10:34.363160
| 2021-04-23T17:47:59
|
866314844
|
{
"authors": [
"QuintonJason",
"monkeypox8"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14686",
"repo": "Kajabi/sage-lib",
"url": "https://github.com/Kajabi/sage-lib/pull/428"
}
|
gharchive/pull-request
|
Tiny-mce - add toolbar button wrapping
Description
This PR wraps the buttons in the tiny-mce toolbar row. The interaction currently requires scrolling along the x-axis to reveal additional buttons, and some browsers can be configured not to reveal scrollbars unless the user interacts with the active area. Wrapping reduces the cognitive load on the user.
When the buttons wrap, there will be undesirable borders on the buttons. This is intentional, as it mirrors the old tiny-mce editor. Until we are able to update tiny-mce to the latest stable release, this is the compromise.
Screenshots
Before
After
Test notes
Turn on the link and check the pages listed in the Estimate Impact
Verify that the editor's buttons wrap when there is overflow in the toolbar
Estimated impact
(MEDIUM) Reduce size of the tiny-mce editor buttons. Also allows the buttons to wrap to multiple lines.
[ ] Coaching session edit form
[ ] Podcast episode edit form
[ ] Product post page
[ ] Offer edit form
Related
BUILD-1025
Closes #426
|
2025-04-01T04:10:34.365954
| 2024-06-22T11:31:11
|
2367762436
|
{
"authors": [
"Hemu21"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14687",
"repo": "Kajol-Kumari/InfluMart",
"url": "https://github.com/Kajol-Kumari/InfluMart/pull/72"
}
|
gharchive/pull-request
|
Creating a connect request api at backend added
Description
Created the models and the connection request API; made everything according to the mentioned instructions. If any changes are needed, please let me know.
Issue
Issue Link resolve #66
Screenshot or Video
https://github.com/Kajol-Kumari/InfluMart/assets/106808387/2139cc2c-c5ad-4863-9a11-ddca7dc5c270
Request: Can you please make this level 3? It creates a complete connection request API with all required endpoints included, and you gave level 3 for PR 51.
@Kajol-Kumari @RohitGupta1237 can you please check when you get time.
@Kajol-Kumari update can you please check when you get free time.
|
2025-04-01T04:10:34.372857
| 2018-10-09T13:18:56
|
368206276
|
{
"authors": [
"Kalaborative",
"statawesomeguy"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14688",
"repo": "Kalaborative/survivio-plus",
"url": "https://github.com/Kalaborative/survivio-plus/issues/76"
}
|
gharchive/issue
|
Can you apply hacks for proxy websites?
So my school blocked the main surviv.io website, which the hacks work on, but surviv.io has a proxy website called thecircleisclosing.com, and Survivio Plus isn't loaded on that website because the extension doesn't recognize it as surviv.io. Is there any way to apply the hacks to proxy sites? That would be greatly appreciated.
Other Official surviv.io proxy sites
http://surviv2.io
https://surviv2.io
http://2dbattleroyale.com
https://2dbattleroyale.com
http://2dbattleroyale.org
https://2dbattleroyale.org
http://piearesquared.info
https://piearesquared.info
http://thecircleisclosing.com
https://thecircleisclosing.com
I'll add them to the manifest, thanks!
Redownload the extension for the changes to take place!
|
2025-04-01T04:10:34.374500
| 2022-12-28T08:33:59
|
1512550157
|
{
"authors": [
"euriconicacio",
"mukund109"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14689",
"repo": "Kalebu/alright",
"url": "https://github.com/Kalebu/alright/pull/99"
}
|
gharchive/pull-request
|
Fixes for firefox
I'm using firefox and don't have the chrome driver installed. This throws a ModuleNotFound error at this line
from webdriver_manager.chrome import ChromeDriverManager
I've fixed it by only having this line execute when a custom driver is not provided.
Also, when doing messenger.send_message on Firefox, only 1 character was being sent. Resolved this by chaining the send_keys and Shift+Enter operations.
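The Shift+Enter chaining idea can be sketched in a small, self-contained way. Note this is an illustration, not alright's actual code: `build_multiline_keys` is a hypothetical helper, and the key codes are the standard Selenium `Keys.SHIFT`/`Keys.ENTER` values. The point is that each embedded newline is replaced by a Shift+Enter sequence, which WhatsApp Web treats as a soft line break rather than a message submit:

```python
# Sketch of the Shift+Enter chaining workaround (illustrative, not alright's
# real implementation). build_multiline_keys flattens a message into a key
# sequence where each "\n" becomes SHIFT, ENTER, SHIFT (press, soft newline,
# release) instead of a plain ENTER that would submit the message.

SHIFT = "\ue008"   # selenium Keys.SHIFT
ENTER = "\ue007"   # selenium Keys.ENTER

def build_multiline_keys(message: str) -> list[str]:
    keys: list[str] = []
    lines = message.split("\n")
    for i, line in enumerate(lines):
        keys.append(line)
        if i < len(lines) - 1:
            # Shift+Enter between lines; plain Enter would send the message
            keys.extend([SHIFT, ENTER, SHIFT])
    return keys

# With a live driver this would be chained roughly as:
#   from selenium.webdriver.common.action_chains import ActionChains
#   chain = ActionChains(driver)
#   for key in build_multiline_keys(text):
#       chain = chain.send_keys(key)
#   chain.perform()
```

Sending `Keys.SHIFT` once toggles the modifier on and sending it again releases it, which is why the sequence brackets the Enter with two Shift presses.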
@mukund109, please, follow the guidelines provided by @Kalebu. Feel free to PR the updated version whenever possible. For now, this one is closed due to the lack of activity.
|
2025-04-01T04:10:34.394697
| 2023-01-27T15:41:20
|
1559974329
|
{
"authors": [
"apupier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14690",
"repo": "KaotoIO/kaoto-standalone",
"url": "https://github.com/KaotoIO/kaoto-standalone/issues/14"
}
|
gharchive/issue
|
Fix build ERROR: failed to solve: executor failed running [/bin/sh -c git clone -b ${UI_TAG} --depth 1 --single-branch https://github.com/KaotoIO/kaoto-ui.git]: exit code: 128
https://github.com/KaotoIO/kaoto-standalone/actions/runs/4023124675/jobs/6913620810#step:10:169
reverted commits to fix it
|
2025-04-01T04:10:34.403235
| 2022-07-28T19:21:37
|
1321377554
|
{
"authors": [
"KarnerTh",
"gvidon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14691",
"repo": "KarnerTh/mermerd",
"url": "https://github.com/KarnerTh/mermerd/pull/14"
}
|
gharchive/pull-request
|
Enable the copy-button
The proposed formatting enables the copy button, which appears in the top right corner of the widget, making copying and pasting the code fragment much easier.
@gvidon thx for the PR 👍
|
2025-04-01T04:10:34.419244
| 2020-04-23T21:27:54
|
605886102
|
{
"authors": [
"spaghett1c0de"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14692",
"repo": "KasperOlesen/DataTable-AltEditor",
"url": "https://github.com/KasperOlesen/DataTable-AltEditor/issues/99"
}
|
gharchive/issue
|
Does this work with jQuery 3.4 and Bootstrap 4.3?
I'm using this library with jQuery 3.4 and Bootstrap 4.3. I initialized the DataTable just like the examples, but the buttons do not trigger anything (not even an error in Chrome DevTools). I'm wondering if the jQuery and Bootstrap versions are not compatible with this library? 🤔
Best,
Vincent
I tried it again and it works fine. Not sure what it was, but I can't replicate it now lol
|
2025-04-01T04:10:34.438978
| 2017-07-17T09:45:37
|
243342757
|
{
"authors": [
"SajanJosan",
"StephenVinouze"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14693",
"repo": "KasualBusiness/MaterialNumberPicker",
"url": "https://github.com/KasualBusiness/MaterialNumberPicker/issues/21"
}
|
gharchive/issue
|
How To Add Drawable XML To Background Pls Add This Option
I want to set multiple background colors, border lines and curved corners with an XML background. Can I do it?
Hi @SajanJosan
Could you be more specific about what you want to achieve? Just a friendly reminder: this library is merely a wrapper around the native NumberPicker, and its access is rather limited (mostly reflection is used to get over it)
|
2025-04-01T04:10:34.465749
| 2024-11-01T17:46:42
|
2629560972
|
{
"authors": [
"Kavinraja-G",
"patpicos"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14694",
"repo": "Kavinraja-G/crossplane-docs",
"url": "https://github.com/Kavinraja-G/crossplane-docs/issues/7"
}
|
gharchive/issue
|
No content being generated
I tried running crossplane-docs from a few paths and every time the README returns empty.
At a minimum, I am interested in having the XRD definition documented.
➜ apis git:(main) crossplane-docs -v
crossplane-docs version 0.1.2
➜ apis git:(main) crossplane-docs markdown ./XCluster -o XRD_README.md
➜ apis git:(main) ✗ cat XRD_README.md
@patpicos Thanks for looking into this tool! Is it possible for you to share a sample definition.yaml (redacted) one that you are trying to gen docs for?
Here's an example
---
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: xefss.gitops.k8s.sourced.io
spec:
defaultCompositionRef:
name: xefs.kcl.aws.gitops.k8s.sourced.io
group: gitops.k8s.sourced.io
names:
kind: XEFS
plural: xefss
claimNames:
kind: EFS
plural: efss
versions:
- name: v1beta1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
parameters:
type: object
properties:
creationToken:
description: A unique name (a maximum of 64 characters are allowed) used as reference when creating the Elastic File System
type: string
availabilityZoneName:
description: AWS Availability Zone in which to create the file system. Used to create a file system that uses One Zone storage classes.
type: string
encrypted:
type: boolean
description: If true, the disk will be encrypted.
default: true
enableBackup:
type: boolean
description: A flag to enable or disable backup policy
default: true
kmsKeyId:
description: The ARN for the KMS encryption key
type: string
performanceMode:
type: string
description: The file system performance mode. Can be either 'generalPurpose' or 'maxIO' (Defaults to 'generalPurpose').
enum:
- generalPurpose
- maxIO
default: generalPurpose
throughputMode:
type: string
description: Throughput mode for the file system. Defaults to elastic.
enum:
- bursting
- provisioned
- elastic
default: elastic
provisionedThroughputInMibps:
type: integer
description: The throughput, measured in MiB/s, that you want to provision for the file system. Only applicable with throughput_mode set to provisioned.
securityGroup:
type: object
properties:
vpcId:
type: string
rules:
type: array
minItems: 1
items:
type: object
properties:
name:
type: string
description: Unique name to identify the sg rule
fromPort:
type: integer
default: 2049
toPort:
type: integer
default: 2049
cidrBlocks:
type: array
minItems: 1
items:
type: string
sourceSecurityGroupId:
type: string
oneOf:
- required: ["name","cidrBlocks"]
- required: ["name","sourceSecurityGroupId"]
required:
- vpcId
accessPoints:
description: List of access points to be created
type: array
items:
type: object
properties:
name:
type: string
description: Unique name to identify the access point
posixUser:
type: object
default: {}
properties:
gid:
type: integer
description: POSIX group ID used for all file system operations using this access point.
default: 1000
uid:
type: integer
description: POSIX user ID used for all file system operations using this access point.
default: 1000
rootDirectory:
type: object
description: Directory on the Amazon EFS file system that the access point provides access to
default: {}
properties:
path:
type: string
description: Path on the EFS file system to expose as the root directory
default: "/"
creationInfo:
type: object
default: {}
description: POSIX IDs and permissions to apply to the access point's Root Directory.
properties:
ownerGid:
type: integer
description: POSIX group ID to apply to the root_directory.
default: 1000
ownerUid:
type: integer
description: POSIX user ID to apply to the root_directory.
default: 1000
permissions:
type: string
description: POSIX permissions to apply to the RootDirectory, in the format of an octal number representing the file's mode bits.
default: "750"
required:
- name
mountTargets:
description: List of mount targets to be created
type: array
items:
type: object
properties:
name:
type: string
description: Unique name to identify the mount target
subnetId:
type: string
description: The ID of the subnet to add the mount target in.
securityGroups:
type: array
decsription: List of security groups to be added to mount target
items:
type: string
ipAddress:
type: string
description: The address (within the address range of the specified subnet) at which the file system may be mounted via the mount target.
required:
- name
- subnetId
tags:
type: object
additionalProperties:
type: string
dynamicProvisioning:
type: object
default: {}
x-kubernetes-validations:
- rule: " !has(self.enabled) || !self.enabled || (has(self.kubernetesProviderConfigRefName) && size(self.storageClasses) > 0)"
message: Provider config name is required when dynamic provisioning is enabled
properties:
enabled:
type: boolean
description: Flag to enable dynamic provisioning and creating StorageClass
default: false
kubernetesProviderConfigRefName:
type: string
description: Name of the kubernetes provider config to create the StorageClass in.
storageClasses:
type: array
items:
type: object
properties:
name:
type: string
description: name of the StorageClass
directoryPerms:
type: string
default: "700"
basePath:
type: string
default: "/dynamic_provisioning"
reclaimPolicy:
type: string
enum:
- Retain
- Delete
default: Retain
required:
- name
required:
- securityGroup
resourceConfig:
type: object
properties:
region:
description: Geographic location of this bucket
type: string
deletionPolicy:
description: Defaults to Orphan
enum:
- Delete
- Orphan
type: string
default: Orphan
managementPolicies:
type: array
description: List of actions Crossplane can take on managed resources included in this composition and its corresponding external resources.
items:
type: string
enum:
- Create
- Delete
- LateInitialize
- Observe
- Update
default:
- Create
- LateInitialize
- Observe
required:
- region
required:
- resourceConfig
status:
type: object
properties:
fileSystemId:
type: string
Commands i ran from the definition folder
➜ XEFS git:(main) ✗ pwd
/home/k8s/dish/crossplane/templates/apis/XEFS
➜ XEFS git:(main) ✗ crossplane-docs markdown ./ -o XRD_README.md
@Kavinraja-G were you able to analyze why there is no data generated?
Thanks for sharing this, @patpicos! Apologies, I got busy lately; I'll look into it this weekend.
@patpicos This is now fixed, can you please update crossplane-docs to v0.1.2 and run with --xrd-only flag?
It did the trick. I have other ideas... will submit separate issues...
|
2025-04-01T04:10:34.483909
| 2017-10-03T18:25:15
|
262535080
|
{
"authors": [
"17cupsofcoffee",
"Keats"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14695",
"repo": "Keats/gutenberg",
"url": "https://github.com/Keats/gutenberg/issues/125"
}
|
gharchive/issue
|
Posts with co-located assets have to explicitly specify a URL
I imagine this is more a 'not implemented yet' thing than a bug, but one thing I found a little unintuitive when I was moving my blog across to Gutenberg was the fact that posts that use an index.md (i.e. ones with assets co-located) have to explicitly specify a URL in the front matter.
For example, I have a post in my project which looks like this:
posts/
writing-an-audio-plugin-in-rust/
index.md
img.png
Without the URL specified in the front matter, the output looks like this:
posts/
index/
index.html
img.png
Whereas I'd expect it to look like this:
posts/
writing-an-audio-plugin-in-rust/
index.html
img.png
Looking at the code, it doesn't seem like it'd be too difficult to have the URL be inferred from the folder for index.md posts - would you accept a PR with this change, if I were to submit one? Or is there some reason I'm missing why it doesn't work this way?
Just wanted to check before I wrote any code 😄
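The inference rule described above is simple enough to sketch. Gutenberg itself is Rust, so this is just an illustration of the proposed behaviour (the helper name is made up): an `index.md` should take its parent folder's name as the slug instead of the literal stem `index`:

```python
from pathlib import Path

def infer_slug(content_path: str) -> str:
    """Infer a page slug from its content file path.

    Illustrative sketch of the proposed rule (not Gutenberg's actual code):
    for an index.md, use the parent folder's name as the slug; for any
    other file, use the file's stem as before.
    """
    p = Path(content_path)
    if p.name == "index.md":
        return p.parent.name
    return p.stem
```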
Whoa this bug has been there for ages and for some reason the test for that had a slug so it slipped through.
Fixed it and added a couple of tests in https://github.com/Keats/gutenberg/commit/a24851790c945008a7197eb334c5364960832faa
I'll make a new release today or tomorrow but you can try building it from source in the meantime. Thanks a lot!
Thanks :) Will give it a try this evening.
Let me know if that works for you, I think I got everything I wanted for 0.2.0 and I'll do a release
Built from source and it's working as expected now :+1: Thanks again!
|
2025-04-01T04:10:34.485378
| 2017-11-26T10:17:58
|
276813915
|
{
"authors": [
"BharatKalluri",
"Keats"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14696",
"repo": "Keats/gutenberg",
"url": "https://github.com/Keats/gutenberg/pull/188"
}
|
gharchive/pull-request
|
added section and pages templates
Fix for #183.
Fixes for other default pages.
Added in https://github.com/Keats/gutenberg/pull/175/commits/129e693521a1a7fbf392d7516f0ce688d0e0194b
|
2025-04-01T04:10:34.493842
| 2024-11-04T13:38:31
|
2632846654
|
{
"authors": [
"KejPi",
"beermad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14697",
"repo": "KejPi/AbracaDABra",
"url": "https://github.com/KejPi/AbracaDABra/issues/173"
}
|
gharchive/issue
|
Program can't scan ensembles if using the second RTL-SDR USB device
A bit of a strange one here. I have two USB dongles, as I use welle-cli to record programmes and often want them from multiple multiplexes.
If I start AbracaDABra when there isn't a welle-cli process running, it claims the first dongle and can successfully scan for multiplexes. But if welle-cli is running on the first device, AbracaDABra claims the second dongle but fails to scan successfully for multiplexes. If a scan has already been done with the first dongle and I then restart it with the second one, the result is variable. Sometimes it just fails to connect and tells me "no service"; on other occasions, the multiplex list loses some or all multiplexes.
I've tried swapping the dongles round in case there's something different about them, but this makes no difference. I've attached logs from AbracaDABra from scanning with the first (dongle1.log) and second (dongle2.log) devices. There seem to be clear differences which I hope are helpful.
dongle1.log
dongle2.log
Other than this, I have to say I'm very happy with this program, which does its job very nicely. Thanks.
According to your logs it seems you are running the app under Linux. I have tried to reproduce the issue but I do not see any problem with AbracaDABra running as second. Also the messages in your log look like driver problem. Could you please share more details, specifically:
What Linux are you running?
Do you use AbracaDABra AppImage or have you built from source?
Is it happening with welle.io specifically or with any application using RTL-SDR (like rtl_test for example) before starting AbracaDABra?
|
2025-04-01T04:10:34.498031
| 2021-05-28T14:39:54
|
905529594
|
{
"authors": [
"KelvinTegelaar",
"New-Hope-Mills",
"jeremyroe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14698",
"repo": "KelvinTegelaar/RunAsUser",
"url": "https://github.com/KelvinTegelaar/RunAsUser/issues/21"
}
|
gharchive/issue
|
"Hello World" text file not created
When I run the command:
$scriptblock = { "Hello world" | out-file "C:\Temp\HelloWorld.txt" }
invoke-ascurrentuser -scriptblock $scriptblock
All it returns is a number, which I assume is the PID. However it does not actually create a txt file.
I can, however, add the file with "Hello World" | Out-File "C:\Temp\HelloWorld.txt"
I'm executing as SYSTEM, using ConnectWise Control, with backstage Admin PowerShell, if that helps.
Likely your C:\Temp does not exist - try creating that folder first or do something like the following:
New-Item -ItemType Directory -Force -Path C:\Temp
Check if first with Test-Path if you want
correct @jeremyroe, thanks for answers. :)
@new-hope-mills FYI Backstage and RunAsUser do not work together. Backstage is an interactive user session so RunAsUser grabs that.
|
2025-04-01T04:10:34.559556
| 2021-10-22T12:05:53
|
1033509434
|
{
"authors": [
"Simply007",
"bwlng"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14699",
"repo": "Kentico/kontent-gatsby-packages",
"url": "https://github.com/Kentico/kontent-gatsby-packages/pull/195"
}
|
gharchive/pull-request
|
Gatsby v4 upgrade
Motivation
Upgrade to be compatible with Gatsby v4
https://v4.gatsbyjs.com/docs/reference/release-notes/migrating-source-plugin-from-v3-to-v4/
https://www.gatsbyjs.com/docs/reference/release-notes/migrating-from-v3-to-v4
Checklist
[x] Code follows coding conventions held in this repo
[ ] Automated tests have been added
[x] Tests are passing
[ ] Docs have been updated (if applicable)
[ ] Temporary settings (e.g. variables used during development and testing) have been reverted to defaults
[ ] Add sample of DSD and SSR to the repo and deploy it to netlify #196
How to test
CI is fine
@Simply007 Tested on our project and it works as expected when upgrading from version 7, but Deferred Static Generation does not work. Adding the defer argument to the createPage action results in the following error.
Gatsby kontent source plugin resulted to error in `createSchemaCustomization` method ENOENT: no such file or directory, open '.cache/query-engine/template.items.schema.gql'
Error: ENOENT: no such file or directory, open '.cache/query-engine/template.items.schema.gql'
I have a simple reproduction in this repo with reproducible steps in the README.
Would you like me to open a separate issue?
Thanks for testing this out! I was planning to try it myself (#196). And yes please - submit a separate issue. It seems more like a gatsby issue - it might not cache .gql files out of the box (I am using them as a template for schema customization).
Version 8.0.0-beta.2 of packages:
<EMAIL_ADDRESS><EMAIL_ADDRESS>
Should fix #197
Hello everybody, We are planning to release a final version 8.0.0 in the middle of Feb 2022.
It is here! 🚀 I am going to release this version as latest. It is already verified and is being used with no submitted issues but one:
https://discordapp.com/channels/821885171984891914/822034016567689256/940114422268067840
tl;dr: if you want to use the Kontent Gatsby source plugin, ideally update to Gatsby v4.6.2 (4.6.1 does not work).
ping @CollierCZ @drewfoster-luminary (thx again for the issue fix) as the thumbs-uppers for the latest comment :)
|
2025-04-01T04:10:34.563518
| 2019-01-30T14:29:45
|
404796901
|
{
"authors": [
"Kentzo",
"ppescher"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14700",
"repo": "Kentzo/git-archive-all",
"url": "https://github.com/Kentzo/git-archive-all/pull/70"
}
|
gharchive/pull-request
|
Fix a couple of issues on Windows host
Tested on Windows 7 x64 with Python 3.7.2 64-bit and GitHub Desktop environment (MSYS).
One problem is the Git version string, that contains some text and not only digits, leading to this error message:
Unable to parse Git version "2.14.1.windows.1".
The fix replaces any non-digit segment with 0.
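The parsing idea can be illustrated with a short sketch (this mirrors the approach, not the PR's exact code): each dot-separated segment that is not purely numeric is treated as 0, so a Git-for-Windows version string still parses into comparable integers:

```python
# Illustrative sketch of tolerant Git version parsing: non-numeric
# segments such as "windows" are replaced with 0 so version strings
# like "2.14.1.windows.1" no longer break the parser.

def parse_git_version(version: str) -> tuple[int, ...]:
    parts = []
    for segment in version.strip().split("."):
        parts.append(int(segment) if segment.isdigit() else 0)
    return tuple(parts)
```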
The other problem is that file paths contain backslashes on Windows, which are not handled correctly by the Python codec. I'm not sure, but there might be an issue on Linux too, if a backslash is used in a file name.
The fix first removes any backslash and processes each segment separately, then joins the output, putting the backslashes back.
Should be fixed in 1.20.0
|
2025-04-01T04:10:34.582267
| 2018-11-18T18:55:33
|
381988672
|
{
"authors": [
"barentsen",
"gully",
"mrtommyb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14701",
"repo": "KeplerGO/lightkurve",
"url": "https://github.com/KeplerGO/lightkurve/issues/337"
}
|
gharchive/issue
|
Enable tpf.create_threshold_mask() to return a contiguous mask
Problem
The aperture_mask='threshold' feature of Lightkurve is a bit simplistic in that it returns all pixels above a median flux threshold. This means that the mask returned is often not contiguous and may include neighbor stars, see e.g. https://github.com/KeplerGO/lightkurve/pull/231#issuecomment-439681783. Although we could keep this behavior as an option, most of the time users will want a smarter mask.
Possible solutions
I am aware of four possible ways to make the threshold feature smarter.
It could return the contiguous mask that...
contains the center pixel;
contains the brightest pixel (in terms of median flux);
contains the largest number of pixels; or
contains the most flux.
Thoughts? I wonder if option 4 makes the most sense.
Example implementation
Here's an example implementation of scenario 2 written by @mrtommyb as part of his original PR [#231] (imports added for completeness; `label` is `scipy.ndimage.label`):
import numpy as np
from scipy.ndimage import label

# identify pixels above the threshold and label contiguous regions
threshold_region = np.where(median_image > mad_cut, 1, 0)
labeled_values = label(threshold_region)[0]
# find the brightest pixel
brightest_pixel_y, brightest_pixel_x = np.unravel_index(median_image.argmax(), median_image.shape)
region_value = labeled_values[brightest_pixel_y, brightest_pixel_x]
aperture_mask = np.where(labeled_values == region_value, 1, 0)
return aperture_mask.astype(bool)
Thanks @barentsen. I think this is going to be more important for TESS than Kepler due to the greater crowding. The target I was looking at is a pretty good test case. I would guess that you would (almost?) always want the one that includes the central pixel.
I suppose that most traditional TPFs, as well as TESS cut-outs, intend to place the target of interest in the center of the TPF. So I agree we should probably go with "the most central mask".
Possible pseudocode:
central_pixel_arg = np.argmin(np.hypot(x - center_x, y - center_y))
central_pixel_x, central_pixel_y = x[central_pixel_arg], y[central_pixel_arg]
region_to_use = labeled_values[central_pixel_y, central_pixel_x]
aperture_mask = np.where(labeled_values == region_to_use, 1, 0)
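The "most central mask" idea above can be sketched end-to-end without scipy (the real implementation would use scipy.ndimage.label as in the earlier snippet; `median_image` and the threshold value are placeholders here):

```python
import numpy as np

def central_threshold_mask(median_image, threshold):
    """Grow the contiguous above-threshold region containing the central pixel.

    A dependency-free stand-in for labeling regions with scipy.ndimage.label
    and keeping the one that contains the center of the image.
    """
    above = median_image > threshold
    mask = np.zeros_like(above)
    cy, cx = (s // 2 for s in median_image.shape)
    if not above[cy, cx]:
        return mask  # central pixel itself is below the threshold
    stack = [(cy, cx)]
    while stack:
        y, x = stack.pop()
        if mask[y, x] or not above[y, x]:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            ny, nx = y + dy, x + dx
            if 0 <= ny < above.shape[0] and 0 <= nx < above.shape[1]:
                stack.append((ny, nx))
    return mask

# Toy 3x3 image: the bright pixel at (2, 0) is NOT contiguous with the center
img = np.array([[0, 0, 5],
                [0, 6, 7],
                [9, 0, 0]], dtype=float)
mask = central_threshold_mask(img, 4.0)
```

Note how the isolated bright pixel (a would-be neighbor star) is excluded even though it exceeds the threshold, which is exactly the behavior the plain threshold mask lacks.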
@christinahedges Would you be willing to see if the above is easy to implement, and open a PR if so?
How about using cluster labels with mahotas for crowded fields? It's what Ben Montet uses in f3.
I had a quick go at solving this in #345. It will need testing.
|
2025-04-01T04:10:34.613817
| 2021-06-17T11:25:20
|
923811298
|
{
"authors": [
"Kethku",
"befedo",
"shaunsingh",
"verajosemanuel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14702",
"repo": "Kethku/neovide",
"url": "https://github.com/Kethku/neovide/issues/732"
}
|
gharchive/issue
|
neovide panicked when loading font
Hi there,
I get
$ RUST_BACKTRACE=1 neovide
Ignored client type property: "methods"
Ignored client type property: "attributes"
thread 'main' panicked at 'Could not load font', src/renderer/fonts/caching_shaper.rs:50:14
stack backtrace:
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
when starting neovide. guifont is set to Fira Code Nerd Font.
Best Regards
befedo
What's the actual text set in the guifont option?
Also what version of neovide are you using?
What's the actual text set in the guifont option?
It’s set guifont=Fira\ Code\ Nerd\ Font and I’m on master.
Thanks for your time.
Can you try set guifont=FiraCode\ Nerd\ Font:h14 instead? I believe there isn't a space between Fira and Code for that font.
It's a good point though. Looks like main crashes if guifont is invalid. That definitely needs to be fixed.
Hi there,
I'm on g51fa8c0 and have set guifont=FiraCode\ Nerd\ Font:h14, and no panic occurs. When switching to g59629eb I'm not able to insert a ! (for example), but this might be another issue and I'm looking into #445.
Thanks for your time and quick responses,
befedo
same here after upgrade to g8168023-1 any font issues panic
|
2025-04-01T04:10:34.620965
| 2021-05-15T06:00:30
|
892366961
|
{
"authors": [
"Kethku",
"j4qfrost"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14703",
"repo": "Kethku/neovide",
"url": "https://github.com/Kethku/neovide/pull/655"
}
|
gharchive/pull-request
|
Opengl
Ok. Opengl branch seems to be the future. Remaining issues I know of:
DONE The cursor is frequently slightly off from the grid. I believe this stems from the font hinting work
DONE once https://github.com/Kethku/neovide/pull/701 merges, introducing actual font fallback and solving https://github.com/Kethku/neovide/issues/444. Fallback is busted. I think we will have to swap to https://docs.rs/swash/0.1.2/swash/index.html rather than rustybuzz because there is a good example in the docs for how font fallback might work (plus it's way faster, and written in Rust rather than ported)
There is a bug where dragging selections past the bottom of the window causes it to crash.
DONE once https://github.com/Kethku/neovide/pull/701 merges Font shaping sometimes uses different glyph widths than we expect. I think we need to manually place glyphs in a grid rather than relying on the font shaping.
Cursor freezes sometimes https://github.com/Kethku/neovide/issues/640
I think problem 1 can be ignored for merge. 3 isn't a huge deal so long as it's fixed after merge. 2 however is a big deal because it means main will no longer have any font fallback. So moving forward I'm going to try to get swash working and then take a look at implementing the fallback algorithm outlined in the docs.
4 is a hard problem. To do it correctly while still supporting ligatures, we will have to draw glyphs at expected grid positions except when they are merged by the font shaping. In that case I propose relying on the font shaping to position the glyph at the expected advance compared to the previous glyph and then continuing on. This would not handle shaping of non-ligature substitutions, but honestly we don't support scripts like that well today anyway. I'm pretty OK punting on that for the time being.
I have merged swash support. Next up, I need to work on font fallback.
@Kethku for snap I can either give you a key, or give you admin authority to the snapcraft package.
Honestly I don't know anything about snap. I don't really know what it's for. So I will defer to your judgement on which you think is the better choice.
Snap is just a popular package manager. For the CI to work you need to add SNAPCRAFT_TOKEN to the secrets. I can generate a token for you next week. I don't have access to my Ubuntu machine atm.
Problem 2 listed above is now fixed. I used swash to cluster and select fonts from the fallback list manually. Appears to work, but the positioning of double width glyphs is broken.
I'm becoming convinced that font selection and fallback should happen at the editor level. This moves font selection to the editor's responsibility rather than the renderer. Luckily there is no graphics context required in the caching shaper, so this should be an easy change even though this would be introducing a skia dependency to the editor.
I've now tested using skia's built in font selection given a character and it can properly select a font from the systems installed font list and load it dynamically. Getting very close
I've also added an issue to the list, I've noticed that occasionally the window freezes and prints warning about creating a stencil buffer. This must be fixed before merging.
I just merged font fallback into this branch. Now just the crash on drag remains. I have a hunch that the stencil buffer issue has been fixed, but if I encounter it again, I found a flag in skia configuration which avoids the use of stencil buffers entirely
|
2025-04-01T04:10:34.718204
| 2019-09-25T17:30:01
|
498423445
|
{
"authors": [
"ahartman",
"mikeavdeev",
"urba1n"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14704",
"repo": "KhaosT/homebridge-camera-ffmpeg",
"url": "https://github.com/KhaosT/homebridge-camera-ffmpeg/issues/354"
}
|
gharchive/issue
|
Image flip
I set hflip: true, which gives a more natural image.
However, the hflip setting for me works only for the video and not for the stills.
This means that when opening the camera window, you see the still image and the view only flips when the video goes live. This looks odd in my opinion.
I think the flip setting should apply to video AND stills.
Please find attached my json.config.
Kind regards, ahartman, Belgium
config-json.txt
Problem disappeared
I notice the same problem.
I've got it too
|
2025-04-01T04:10:34.741194
| 2022-01-22T05:50:53
|
1111281220
|
{
"authors": [
"Lucodivo",
"chaoticbob"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14705",
"repo": "KhronosGroup/SPIRV-Reflect",
"url": "https://github.com/KhronosGroup/SPIRV-Reflect/pull/137"
}
|
gharchive/pull-request
|
ignore built-in vars for vertex input description
#135
Hi - is the intent here for the spirv-reflect executable to not display input variable as a vertex attribute if it's a built-in?
The code I edited is only an example of how to use the SPIR-V Reflect library. Specifically, it is trying to fill in the information of a VkVertexInputBindingDescription struct based on the reflection data, and this information should not include built-in variables. I noticed the problem in my own code when using this example because the built-in variable was added to the stride of my VkVertexInputBindingDescription struct, which is incorrect.
I totally missed that. Thanks! This makes sense.
|
2025-04-01T04:10:34.767217
| 2018-10-20T07:26:05
|
372183464
|
{
"authors": [
"gabdube"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14706",
"repo": "KhronosGroup/glTF-Blender-Exporter",
"url": "https://github.com/KhronosGroup/glTF-Blender-Exporter/issues/327"
}
|
gharchive/issue
|
Compressed texture are not exported
I have a model that uses compressed textures (using the DDS format). When exporting to the standard gltf or the glb format, the compressed textures are not referenced.
Even if the format is not supported, instead of skipping the file the exporter could simply include the data with an application/octet-stream mimeType.
I will be working on this feature whether it ends up in the repo or not, but if someone with more experience with the codebase could point me in the right direction, I'd appreciate it.
Here's an example:
example.zip
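The suggestion above could be sketched roughly as follows. This is a hypothetical illustration only: the field names match the glTF image schema, but the spec's rules about when `mimeType` is required (it is defined for images referenced via `bufferView`) should be checked against the actual exporter code.

```python
import base64
import json

def octet_stream_image(name, data):
    """Build a hypothetical glTF image entry that embeds bytes of an
    unsupported texture format (e.g. DDS) as a base64 data URI with an
    application/octet-stream mimeType instead of skipping the file."""
    b64 = base64.b64encode(data).decode("ascii")
    return {
        "name": name,
        "mimeType": "application/octet-stream",
        "uri": "data:application/octet-stream;base64," + b64,
    }

# "diffuse.dds" and the magic bytes are made-up placeholder values
entry = octet_stream_image("diffuse.dds", b"DDS \x7c\x00\x00\x00")
gltf_fragment = json.dumps({"images": [entry]})
```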
Makes sense to me. I'll close this then.
Still, if anyone else has this problem and can't wait, I've hacked together a "fix". Just clone the repo from my branch: https://github.com/gabdube/glTF-Blender-Exporter
|
2025-04-01T04:10:34.770393
| 2020-02-01T03:05:52
|
558468400
|
{
"authors": [
"scurest"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14707",
"repo": "KhronosGroup/glTF-Blender-IO",
"url": "https://github.com/KhronosGroup/glTF-Blender-IO/issues/904"
}
|
gharchive/issue
|
Import: get node importing back up to speed
Here's some stuff that regressed in #857 I'd like to fix. Making an issue to keep track of / refer to it.
[ ] Get rid of "correction" nodes inserted for cameras/lights
[ ] Fix bone lengths/directions. I can carry this further and get #324 done too.
[ ] Make some kind of attempt to guess the original bind pose using the inverseBindMatrices
Will be fixed by, respectively, #945, #946, and #941.
Alright, with this the 2.83 release should now be a strict improvement over the 2.82 release.
|
2025-04-01T04:10:34.845731
| 2016-01-01T19:08:00
|
124557628
|
{
"authors": [
"Thatkookooguy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14708",
"repo": "Kibibit/kibibit-code-editor",
"url": "https://github.com/Kibibit/kibibit-code-editor/pull/39"
}
|
gharchive/pull-request
|
Add robust filetype icons
Change Summary
Added Octicons & Pictonic for filetype icons.
Encapsulated fileName as a component. This will allow us to use this in several different places like: file trees, tab titles, and more.
this commit fixes #39
|
2025-04-01T04:10:34.846869
| 2021-05-30T22:02:23
|
906894328
|
{
"authors": [
"Kick1911"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14709",
"repo": "Kick1911/WorkForce",
"url": "https://github.com/Kick1911/WorkForce/issues/3"
}
|
gharchive/issue
|
Task Retries
Do we need the ability to retry tasks? Or should the dev handle retries themselves, since some libraries have that built in, like requests.
Maybe wrap coroutines with my own task that catches exceptions.
#11 Added the ability to retry.
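The "wrap coroutines with my own task that catches exceptions" idea could look roughly like this (a minimal sketch, not the actual #11 implementation; `with_retries` and its parameters are made up for illustration):

```python
import asyncio

def with_retries(coro_func, retries=3, delay=0.0):
    """Hypothetical wrapper: re-run a failing coroutine up to `retries` times."""
    async def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(retries):
            try:
                return await coro_func(*args, **kwargs)
            except Exception as exc:  # a real version might filter exception types
                last_exc = exc
                if delay:
                    await asyncio.sleep(delay)
        raise last_exc
    return wrapper

calls = {"count": 0}

async def flaky():
    # Fails twice, then succeeds
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = asyncio.run(with_retries(flaky)())
```

The upside over leaving retries to the dev is that the framework can apply a uniform policy; the downside is duplicating behavior some libraries (like requests) already provide.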
|
2025-04-01T04:10:34.905091
| 2023-09-01T19:31:03
|
1877954448
|
{
"authors": [
"KingContaria",
"Linusintro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14710",
"repo": "KingContaria/FastQuit",
"url": "https://github.com/KingContaria/FastQuit/issues/33"
}
|
gharchive/issue
|
issue with Bedrockify when quitting the game while a world is saving
with bedrockify.zip
There is an issue with Bedrockify when quitting the game while a world is saving. It's not a malfunction or a crash but a graphical issue: it's that moving thing from the multiplayer screen. I would suggest just adding a toggle to the config file. I've attached an image in a zip file.
i dont think it needs a toggle i'll just add a compat check if bedrockify is present and has that option enabled
i'll put 3.0.0 out of beta first tho, have been procrastinating on that for way too long now
Thanks!
and please for 1.19 too.
|
2025-04-01T04:10:34.920116
| 2023-05-10T07:11:12
|
1703270210
|
{
"authors": [
"Kingtous",
"abldabl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14711",
"repo": "Kingtous/rustdesk_desktop_multi_window",
"url": "https://github.com/Kingtous/rustdesk_desktop_multi_window/issues/17"
}
|
gharchive/issue
|
App crashes after closing and opening new sub-window
OS: Linux - Ubuntu 22.04.2 LTS
Steps to reproduce:
open first sub-window
close first sub-window
open second sub-window
the app crashes unpredictably; it can crash after 2 sub-window creations, or after 3, 4, or 5.
Yes, it's expected unfortunately... You can cache your window: instead of closing it, hide the window and hold on to the windowId.
Note that it's not a bug in this plugin; it's a drawback of multiple Flutter engines sharing one process. The engine is not designed to run multiple instances per process.
i got you, thanks
only one question:
i couldn't run app with your library on windows 10
error was about:
couldn't find window_size_plugin.h
did you know something? or i should give full error?
i got you, thanks
only one question:
i couldn't run app with your library on windows 10
error was about:
couldn't find window_size_plugin.h
did you know something? or i should give full error?
i have added some required headers of plugins for rustdesk, u can clean them if your flutter project doesn't contain these plugins.
|
2025-04-01T04:10:34.968573
| 2023-04-08T03:38:23
|
1659358623
|
{
"authors": [
"Kiyoshika"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14712",
"repo": "Kiyoshika/DataTable",
"url": "https://github.com/Kiyoshika/DataTable/issues/70"
}
|
gharchive/issue
|
DataTable - random numbers/floats within range
Generate random numbers/floats within specific bounds - mainly to be used for benchmarking/testing.
For now users can choose their way of generating random floats and use dt_table_insert_row. This isn't high on my priority list right now
|
2025-04-01T04:10:34.972746
| 2020-12-21T08:31:08
|
771970857
|
{
"authors": [
"KjellConnelly",
"lancer1993"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14713",
"repo": "KjellConnelly/react-native-musicplayercontroller",
"url": "https://github.com/KjellConnelly/react-native-musicplayercontroller/issues/6"
}
|
gharchive/issue
|
react-native-musicplayercontroller:compileDebugJavaWithJavac
I got the below exception after installing the react-native-musicplayercontroller library.
F:\Zincat\extra resource works\MusicPlayerExample\node_modules\react-native-musicplayercontroller\android\src\main\java\com\reactlibrary\RNReactNativeMusicplayercontrollerPackage.java:19: error: method does not override or implement a method from a supertype
@Override
^
1 error
FAILURE: Build failed with an exception
I think the android API got updated, so I can't use @override there anymore. Feel free to look for an alternative, though I haven't/won't be working with Android personally for awhile.
@lancer1993 Hey, I finally got around to working on RN app and got the same error you had. So I removed the @override line and now it works. New version on npm.
|
2025-04-01T04:10:34.983522
| 2024-10-19T13:36:15
|
2599177880
|
{
"authors": [
"princedalsaniya",
"vaibhavgeek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14714",
"repo": "Kleo-Network/connect-backend",
"url": "https://github.com/Kleo-Network/connect-backend/pull/36"
}
|
gharchive/pull-request
|
[FEAT] Referrals Implementation
What type of PR is this? (check all applicable)
[ ] Refactor
[X] Feature
[ ] Bug Fix
[ ] Optimization
Description
Related Tickets & Documents
QA Instructions, Screenshots, Recordings
Please replace this line with instructions on how to test your changes, as well
as any relevant images for UI changes.
Added tests?
[ ] yes
[ ] no, because they aren't needed
[ ] no, because I need help
Added to documentation?
[ ] readme
[ ] no documentation needed
[optional] Are there any post deployment tasks we need to perform?
[optional] What gif best describes this PR or how it makes you feel?
Adding the screenshot for future visibility of the work done in this Pull Request, awesome work.
|
2025-04-01T04:10:35.002044
| 2021-02-18T09:37:14
|
810927234
|
{
"authors": [
"alexseif",
"garak"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14715",
"repo": "KnpLabs/KnpMenu",
"url": "https://github.com/KnpLabs/KnpMenu/issues/341"
}
|
gharchive/issue
|
Add template for Bootstrap 4 etc.?
In every project I use this library, I find myself adding a template of mine to render with Bootstrap 4.
I think it could be nice to provide here templates for major CSS vendors (Bootstrap, Foundation, etc.), just like Symfony is doing for its form themes.
I was just searching for this for the same reason.
My suggestion would be to have config in YAML that sets some of this up; I'm trying to write up a proper issue
https://github.com/KnpLabs/KnpMenu/issues/368 what do you thing?
@alexseif thanks for your effort, but indeed it's not exactly what I was thinking of.
I'm afraid your approach is not very robust, because we don't know whether future differences between Bootstrap versions will be limited to CSS classes.
A template, as mentioned in my original message, would provide better flexibility
|
2025-04-01T04:10:35.006756
| 2024-05-03T00:52:39
|
2276720947
|
{
"authors": [
"KnugiHK",
"jonx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14716",
"repo": "KnugiHK/WhatsApp-Chat-Exporter",
"url": "https://github.com/KnugiHK/WhatsApp-Chat-Exporter/pull/99"
}
|
gharchive/pull-request
|
Handle groups of VCards correctly
A VCard in a message can sometimes be a group of VCards, in which case ZVCARDNAME contains a list of VCard names separated by "_$!<Name-Separator>!$_". Note that the first element is the name of the group and is currently dismissed.
ZVCARDSTRING contains the list of VCards separated by "_$!<VCard-Separator>!$_".
Currently the program crashes because it tries to create a VCard file using a potentially ultra-long name (I have groups of hundreds of VCards). This code now handles those groups correctly and saves each VCard in its own VCF file.
You might wanna adjust the content of message.data to your liking, as it currently just provides a basic list of the VCards:
"Contact 1 (See file: Contact1.vcf) + Contact 2 (See file: Contact2.vcf)".
Given that the path to a VCard can be quite long, I thought you might want to use my format and create a URL link to the actual VCard?
Here is an example:
ZVCARDNAME =
2 contacts_$!<Name-Separator>!$_Contact 1_$!<Name-Separator>!$_Contact 2
ZVCARDSTRING =
BEGIN:VCARD
VERSION:3.0
N:Contact;1;;;
FN:Contact 1
TEL;type=CELL;type=VOICE;waid=11231231234:+1<PHONE_NUMBER>
END:VCARD_$!<VCard-Separator>!$_BEGIN:VCARD
VERSION:3.0
N:Contact;1;;;
FN:Contact 2
ORG:Conty;
TEL;type=CELL;type=VOICE;waid=11231231234:+1<PHONE_NUMBER>
item1.ADR;type=HOME:;;1234 Test Ct.;NYC;NY;Blah;United States
item1.X-ABADR:us
END:VCARD
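The splitting described above can be sketched like this (a minimal illustration using the separators quoted in the example, not the PR's actual code; `split_vcard_group` is a made-up name):

```python
NAME_SEP = "_$!<Name-Separator>!$_"
VCARD_SEP = "_$!<VCard-Separator>!$_"

def split_vcard_group(zvcardname, zvcardstring):
    """Split a (possibly grouped) VCard message into (group_name, [(name, vcard), ...])."""
    names = zvcardname.split(NAME_SEP)
    cards = zvcardstring.split(VCARD_SEP)
    if len(names) == len(cards) + 1:
        # The first name element is the group's display name, not a contact
        return names[0], list(zip(names[1:], cards))
    return None, list(zip(names, cards))  # single contact, no separators

group, contacts = split_vcard_group(
    "2 contacts" + NAME_SEP + "Contact 1" + NAME_SEP + "Contact 2",
    "BEGIN:VCARD\nFN:Contact 1\nEND:VCARD" + VCARD_SEP
    + "BEGIN:VCARD\nFN:Contact 2\nEND:VCARD",
)
```

Each (name, vcard) pair can then be written to its own VCF file, avoiding the crash from the ultra-long concatenated name.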
To improve user experience, I added direct links to the vCard files, allowing users to download them with a simple click instead of searching for the files themselves.
Thanks for your contribution! I will soon adopt your method for Android!
Of course, you're welcome. Thanks for creating this great tool.
|
2025-04-01T04:10:35.009286
| 2022-12-28T15:20:49
|
1512898607
|
{
"authors": [
"KnugiHK"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14717",
"repo": "KnugiHK/Whatsapp-Chat-Exporter",
"url": "https://github.com/KnugiHK/Whatsapp-Chat-Exporter/issues/25"
}
|
gharchive/issue
|
Add an option to allow the Media folder being copied instead of being moved
As raised in https://github.com/KnugiHK/Whatsapp-Chat-Exporter/issues/9, moving the media folder after the creation of the HTML result may not be what everyone expects. @cbzittoun proposed adding a warning for that. The discussion continues in this issue.
I suggest adding an option for people who want to copy the media folder, keeping moving as the default.
Copying media folder to the output directory will be the default starting from commit https://github.com/KnugiHK/Whatsapp-Chat-Exporter/commit/60575c798920729bc4c2a1afb4488b9b3205207c.
Released in 0.9.0.
|
2025-04-01T04:10:35.025158
| 2024-02-26T17:37:09
|
2154742031
|
{
"authors": [
"gsteckman",
"sunny-chung"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14718",
"repo": "KoalaPlot/koalaplot-core",
"url": "https://github.com/KoalaPlot/koalaplot-core/issues/42"
}
|
gharchive/issue
|
Exception thrown for XYGraph: Minimum tick spacing must be greater than 0 and less than or equal to 1
I got an exception when trying to display an XYGraph. I thought it was related to empty data, so I made sure there was some data, but still in vain. How can I avoid the error and display the chart correctly?
Version: 0.5.2
Target Platform: Desktop
Stacktrace:
Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Minimum tick spacing must be greater than 0 and less than or equal to 1
at io.github.koalaplot.core.xygraph.LinearAxisModel.computeMajorTickSpacing(LinearAxisModel.kt:104)
at io.github.koalaplot.core.xygraph.LinearAxisModel.computeMajorTickValues(LinearAxisModel.kt:74)
at io.github.koalaplot.core.xygraph.LinearAxisModel.computeTickValues-0680j_4(LinearAxisModel.kt:92)
at io.github.koalaplot.core.xygraph.AxisDelegate$Companion.createAxis-eqLRuRQ(AxisDelegate.kt:95)
at io.github.koalaplot.core.xygraph.AxisDelegate$Companion.createVerticalAxis-wH6b6FI(AxisDelegate.kt:82)
at io.github.koalaplot.core.xygraph.XYGraphKt$XYGraph$1$1.invoke-0kLqBqw(XYGraph.kt:119)
at io.github.koalaplot.core.xygraph.XYGraphKt$XYGraph$1$1.invoke(XYGraph.kt:98)
at androidx.compose.ui.layout.LayoutNodeSubcompositionsState$createMeasurePolicy$1.measure-3p2s80s(SubcomposeLayout.kt:709)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.measure-BRTryo0(LayoutNodeLayoutDelegate.kt:596)
at androidx.compose.foundation.layout.BoxMeasurePolicy.measure-3p2s80s(Box.kt:122)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.measure-BRTryo0(LayoutNodeLayoutDelegate.kt:596)
at io.github.koalaplot.core.util.HoverableElementAreaKt$HoverableElementArea$2$1.measure-3p2s80s(HoverableElementArea.kt:85)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.measure-BRTryo0(LayoutNodeLayoutDelegate.kt:596)
at androidx.compose.foundation.layout.BoxMeasurePolicy.measure-3p2s80s(Box.kt:122)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui(LayoutNode.kt:1145)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui$default(LayoutNode.kt:1136)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.doRemeasure-sdFAvZA(MeasureAndLayoutDelegate.kt:356)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.remeasureAndRelayoutIfNeeded(MeasureAndLayoutDelegate.kt:514)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.onlyRemeasureIfScheduled(MeasureAndLayoutDelegate.kt:598)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:624)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:631)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:631)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:631)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:631)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:631)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtree(MeasureAndLayoutDelegate.kt:587)
at androidx.compose.ui.node.RootNodeOwner$OwnerImpl.forceMeasureTheSubtree(RootNodeOwner.skiko.kt:308)
at androidx.compose.ui.node.Owner.forceMeasureTheSubtree$default(Owner.kt:239)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:632)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.measure-BRTryo0(LayoutNodeLayoutDelegate.kt:596)
at org.jetbrains.compose.splitpane.DesktopSplitPaneKt$SplitPane$5.measure-3p2s80s(DesktopSplitPane.kt:100)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui(LayoutNode.kt:1145)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui$default(LayoutNode.kt:1136)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.doRemeasure-sdFAvZA(MeasureAndLayoutDelegate.kt:356)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.remeasureAndRelayoutIfNeeded(MeasureAndLayoutDelegate.kt:514)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.onlyRemeasureIfScheduled(MeasureAndLayoutDelegate.kt:598)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtreeInternal(MeasureAndLayoutDelegate.kt:624)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.forceMeasureTheSubtree(MeasureAndLayoutDelegate.kt:587)
at androidx.compose.ui.node.RootNodeOwner$OwnerImpl.forceMeasureTheSubtree(RootNodeOwner.skiko.kt:308)
at androidx.compose.ui.node.Owner.forceMeasureTheSubtree$default(Owner.kt:239)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:632)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.measure-BRTryo0(LayoutNodeLayoutDelegate.kt:596)
at org.jetbrains.compose.splitpane.DesktopSplitPaneKt$SplitPane$5.measure-3p2s80s(DesktopSplitPane.kt:100)
at androidx.compose.ui.node.InnerNodeCoordinator.measure-BRTryo0(InnerNodeCoordinator.kt:126)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:252)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$performMeasureBlock$1.invoke(LayoutNodeLayoutDelegate.kt:251)
at androidx.compose.runtime.snapshots.Snapshot$Companion.observe(Snapshot.kt:2304)
at androidx.compose.runtime.snapshots.SnapshotStateObserver$ObservedScopeMap.observe(SnapshotStateObserver.kt:504)
at androidx.compose.runtime.snapshots.SnapshotStateObserver.observeReads(SnapshotStateObserver.kt:260)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeReads$ui(OwnerSnapshotObserver.kt:133)
at androidx.compose.ui.node.OwnerSnapshotObserver.observeMeasureSnapshotReads$ui(OwnerSnapshotObserver.kt:113)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:1617)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate.access$performMeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:36)
at androidx.compose.ui.node.LayoutNodeLayoutDelegate$MeasurePassDelegate.remeasure-BRTryo0(LayoutNodeLayoutDelegate.kt:620)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui(LayoutNode.kt:1145)
at androidx.compose.ui.node.LayoutNode.remeasure-_Sx5XlM$ui$default(LayoutNode.kt:1136)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.doRemeasure-sdFAvZA(MeasureAndLayoutDelegate.kt:356)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.remeasureAndRelayoutIfNeeded(MeasureAndLayoutDelegate.kt:514)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.remeasureAndRelayoutIfNeeded$default(MeasureAndLayoutDelegate.kt:491)
at androidx.compose.ui.node.MeasureAndLayoutDelegate.measureAndLayout(MeasureAndLayoutDelegate.kt:377)
at androidx.compose.ui.node.RootNodeOwner$OwnerImpl.measureAndLayout(RootNodeOwner.skiko.kt:290)
at androidx.compose.ui.node.RootNodeOwner.measureAndLayout(RootNodeOwner.skiko.kt:187)
at androidx.compose.ui.scene.MultiLayerComposeSceneImpl.measureAndLayout(MultiLayerComposeScene.skiko.kt:247)
at androidx.compose.ui.scene.BaseComposeScene.doLayout(BaseComposeScene.skiko.kt:225)
at androidx.compose.ui.scene.BaseComposeScene.access$doLayout(BaseComposeScene.skiko.kt:51)
at androidx.compose.ui.scene.BaseComposeScene.render(BaseComposeScene.skiko.kt:164)
at androidx.compose.ui.scene.ComposeSceneMediator$DesktopSkikoView.onRender(ComposeSceneMediator.desktop.kt:490)
at org.jetbrains.skiko.SkiaLayer.update$skiko(SkiaLayer.awt.kt:548)
at org.jetbrains.skiko.redrawer.AWTRedrawer.update(AWTRedrawer.kt:54)
at org.jetbrains.skiko.redrawer.MetalRedrawer$frameDispatcher$1.invokeSuspend(MetalRedrawer.kt:82)
at org.jetbrains.skiko.redrawer.MetalRedrawer$frameDispatcher$1.invoke(MetalRedrawer.kt)
at org.jetbrains.skiko.redrawer.MetalRedrawer$frameDispatcher$1.invoke(MetalRedrawer.kt)
at org.jetbrains.skiko.FrameDispatcher$job$1.invokeSuspend(FrameDispatcher.kt:33)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:108)
at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:318)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:773)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:720)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:714)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:399)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:86)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:742)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [StandaloneCoroutine{Cancelling}@570805d2, SwingDispatcher@6d583e10]
Reproducer:
ChartLayout(title = { AppText("Latencies over Time (95%)") }) {
latenciesMsOverTime.takeIf { it.size >= 2 }?.let { latenciesMsOverTime ->
XYGraph(
xAxisModel = CategoryAxisModel(
latenciesMsOverTime.keys.toList().ifEmpty { listOf(KInstant.now().toMilliseconds()) }),
yAxisModel = LinearAxisModel(
range = 0f..6000f,
),
xAxisLabels = { KZonedInstant(it, KZoneOffset.local()).format("HH:mm:ss") },
xAxisTitle = "Time",
yAxisTitle = "Latency (ms)",
) {
latenciesMsOverTime.forEach { (timestamp, result) ->
DefaultPoint(timestamp, result.at95Percent)
}
}
}
}
I can't run this without your data. But are you putting the graph in a vertically scrollable area without otherwise restricting the height of the graph, e.g. by using a modifier?
Yes. Is this not supported?
The graph will expand to consume all available space, so if you put it in a scrollable column the vertical space is infinite - and by inspecting the code I think that will lead to what you are seeing (I didn't try it, but there's a calculation that divides by the available space to find the relative tick spacing, which would result in it being 0 - in any case you probably don't want an infinitely tall graph).
If you use a Modifier to set a max height or aspect ratio, this exception will likely go away.
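A minimal sketch of that fix, assuming the graph lives in a scrollable Column (the `MyGraph` composable and the `300.dp` value are illustrative placeholders, not taken from the reporter's code):

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.height
import androidx.compose.foundation.rememberScrollState
import androidx.compose.foundation.verticalScroll
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun Charts() {
    Column(Modifier.verticalScroll(rememberScrollState())) {
        // Without a height bound the graph tries to fill the scrollable
        // column's infinite height; constraining it avoids the crash.
        // MyGraph is a hypothetical wrapper around XYGraph.
        MyGraph(
            modifier = Modifier
                .fillMaxWidth()
                .height(300.dp) // or .aspectRatio(16f / 9f)
        )
    }
}
```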
Thanks for the prompt response. I overcame this error, and encountered more errors.
An exception is thrown from autoScaleYRange() when there are no points:
java.lang.ClassCastException: class io.github.koalaplot.core.xychart.DefaultPoint cannot be cast to class io.github.koalaplot.core.xygraph.Point
Then I took an hour to figure out how to reduce the minor grid lines.
I encountered an exception when I tried to reduce the number of grid lines on the x-axis by reducing the number of items in CategoryAxisModel.
And then I changed to LinearAxisModel, which only accepts Float values. My data are timestamps, so they are Longs. I keep encountering the error IllegalArgumentException: Axis range end (1.70904125E12) must be greater than start (1.70904125E12), even with things hardcoded in the code below, because x - 2000 and x have the same value after converting to Float, which is a low-precision data type.
val pseudoPoints = points.ifEmpty {
val now = KInstant.now().toMilliseconds()
listOf(DefaultPoint(now - 2000, 0f), DefaultPoint(now, 0f))
}.let {
if (it.size == 1) {
listOf(DefaultPoint(it.first().x - 2000, it.first().y), it.first())
} else {
it
}
}
val floatPoints = points.map { DefaultPoint(it.x.toFloat(), it.y) }
XYGraph(
xAxisModel = LinearAxisModel(
pseudoPoints.first().x.toFloat() .. pseudoPoints.last().x.toFloat(),
minorTickCount = 0,
minimumMajorTickIncrement = 2000f,
),
yAxisModel = LinearAxisModel(
pseudoPoints.autoScaleYRange(),
minorTickCount = 2,
),
And autoScaleXRange() is only available for Float values.
Even if no IllegalArgumentException were thrown, I would still have to deal with precision-loss issues caused by converting from Long to Float and back to Long.
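To make the precision loss concrete, here is a small standalone Kotlin sketch; the timestamp literal is chosen to match the ~1.70904125E12 value from the error message above and is illustrative, not taken from the project:

```kotlin
fun main() {
    val now = 1_709_041_250_000L // epoch millis, ~1.70904125E12 as in the error
    // A Float mantissa has 24 bits, so at this magnitude one ULP is
    // 2^17 = 131072 ms: subtracting 2000 ms is lost in the rounding,
    // which is why the axis range start and end collapse to the same value.
    println(now.toFloat() == (now - 2000).toFloat())
}
```

This prints `true`, reproducing why the "Axis range end must be greater than start" check fires even though the Long values differ.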
This is a great library. I like the customizable styles and designs, even though they are deeply buried and not documented. The look and feel is also okay. I wish the getting-started documentation were better, and that the library provided friendlier APIs not coupled to one low-precision, inflexible data type. Many of the exceptions I encountered could be avoided with a few lines of documentation. I may have to look for alternatives due to the floating-point issues.
Please correct me if I did not understand the library well, so that I can still explore this library.
Thanks for the feedback, this is useful for helping to make the library better and understanding how people want to use it. Have you seen the documentation here https://koalaplot.github.io/docs/? If specific things are missing in the documentation that caused you trouble, please let me know so we can add more information on those topics.
With regards to the data types, that is something that should be fairly easy to add - I'll look into it. I was using Float myself, and didn't create versions for every data type; I was waiting for interest from other users. Since the Number class has no arithmetic operators, it's not as straightforward as just adding a generic type to the existing implementation.
The ClassCastException looks like a bug. The xychart.* classes/functions are deprecated and I plan to remove them in 0.6.0 - I'll do that soon.
Autoscaling will need at least 2 points to compute a scale. I don't know of a way to get the compiler to enforce that.
I just pushed a change adding LongLinearAxisModel (as well as for other data types) and addressing some of the other issues. It's not been heavily tested so give it a try and if you encounter bugs please open a new issue for each one.
The minor grid lines are tied to the minor ticks - there is 1 minor grid line for each minor tick. If you change the number of minor ticks it also changes the number of minor grid lines. I don't think I've ever seen a graph where the number of minor grid lines was different than the number of minor ticks (you can also change the length of minor ticks and set it to 0 if you don't want them to show, in which case you'll only see a change in the number of minor grid lines). If you have an example of what you are trying to achieve I'll take a look at it.
|
2025-04-01T04:10:35.032617
| 2018-10-07T17:28:20
|
367575215
|
{
"authors": [
"Dkhusainov",
"SalomonBrys",
"luca992"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14719",
"repo": "Kodein-Framework/Kodein-DI",
"url": "https://github.com/Kodein-Framework/Kodein-DI/issues/160"
}
|
gharchive/issue
|
kotlin-multiplatform and Gradle 4.10
What's the ETA for supporting the new kotlin-multiplatform plugin and version 0.4 of the Gradle metadata? Right now, if I try to import Kodein in a kotlin-multiplatform project with Gradle 4.10, it gives me the error Unsupported format version 0.3 specified in module metadata. This version of Gradle supports format version 0.4 only. Some other dependencies in my module are already using 0.4, so I can't use the library.
That's dependent on Kotlin being compiled with Gradle >4.7.
Related. https://github.com/Kotlin/kotlinx.coroutines/issues/564
As a workaround, you could compile and locally publish your other dependencies using gradle 4.7
Looks like I'll have to do that for now. What about support for kotlin-multiplatform?
they didn't build it with kotlin-multiplatform.. but that doesn't mean you can't use it while using kotlin-multiplatform in your project.
Just add kodein-di-core-common as a dependency to your common source set, and kodein-di-core-jvm to your jvm source set, and so on.... for example
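A sketch of that manual wiring in a Gradle Kotlin DSL build script; the artifact coordinates follow the Kodein 6.x naming, but the version and source-set names here are assumptions:

```kotlin
// build.gradle.kts (kotlin-multiplatform project) - illustrative sketch
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.kodein.di:kodein-di-core-common:6.0.0")
            }
        }
        val jvmMain by getting {
            dependencies {
                implementation("org.kodein.di:kodein-di-core-jvm:6.0.0")
            }
        }
    }
}
```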
Okay, I will do that. You can close the issue. Thanks.
OK, so I've published version 6.0.0 AND version 6.0.0-LGM.
LGM stands for Latest Gradle Metadata, and is the exact same kodein, except with Gradle Metadata 0.4.
LGM versions are NOT in JCenter, so you need to add the Kodein-LGM repository to access the artifacts.
http://kodein.org/Kodein-DI/?6.0/core#_latest_gradle
|
2025-04-01T04:10:35.035226
| 2020-08-11T11:41:50
|
676808406
|
{
"authors": [
"asarkar",
"romainbsl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14720",
"repo": "Kodein-Framework/Kodein-DI",
"url": "https://github.com/Kodein-Framework/Kodein-DI/issues/316"
}
|
gharchive/issue
|
cannot choose between the following variants
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':grpc:extractIncludeProto'.
> Could not resolve all files for configuration ':grpc:compileProtoPath'.
> Could not resolve org.kodein.di:kodein-di:7.0.0.
Required by:
project :grpc
> The consumer was configured to find a component, preferably only the resources files. However we cannot choose between the following variants of org.kodein.di:kodein-di:7.0.0:
- iosArm32-api
- iosArm64-api
- iosX64-api
- js-api
- js-runtime
- jvm-api
- jvm-runtime
- linuxArm32Hfp-api
- linuxMips32-api
- linuxMipsel32-api
- linuxX64-api
- macosX64-api
- metadata-api
- mingwX64-api
- tvosArm64-api
- tvosX64-api
- watchosArm32-api
- watchosArm64-api
- watchosX86-api
Similar to https://github.com/Kodein-Framework/Kodein-DI/issues/264. I'm not using kapt, but the failure might have something to do with grpc-kotlin.
Workaround:
// Requires: import org.jetbrains.kotlin.gradle.plugin.KotlinPlatformType
configurations.all {
afterEvaluate {
if (isCanBeResolved) {
attributes {
attribute(KotlinPlatformType.attribute, KotlinPlatformType.jvm)
}
}
}
...
}
dependencies {
attributesSchema {
attribute(KotlinPlatformType.attribute)
}
...
}
Thanks for the workaround, it might help others.
Which version of Gradle are you using?
6.5
|
2025-04-01T04:10:35.081021
| 2021-04-06T09:52:23
|
851256629
|
{
"authors": [
"elbueno222",
"ptvoinfo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14721",
"repo": "Koenkk/Z-Stack-firmware",
"url": "https://github.com/Koenkk/Z-Stack-firmware/issues/281"
}
|
gharchive/issue
|
CC2531 routers randomly losing connection with coordinator during 10 minutes
I have two CC2531 routers connected to a CC2652R coordinator and they are usually working fine, except sometimes they both lose the connection with the coordinator for 10 minutes or so, and during that time every device connected to them is unable to send/receive messages, so the whole network is down.
One of the routers (I will call it Router1) is near the coordinator, and the other one (Router2) is far away, so during normal operation Router2 sends its messages to the coordinator through Router1, and Router1 sends its messages straight to the coordinator. After sniffing the traffic, I found out that during those 10 minutes of lost connection the routers swap roles: Router1 tries to send its messages through Router2, and Router2 tries to send its messages straight to the coordinator, so none of the messages reach the coordinator.
I saved the sniff just in case anyone wants to have a look.
I don't have any ideas why a router disconnects.
Could you test a router based on my configurable firmware with the following settings:
CC2530 + CC25xx (select your chip)
Router
Output 1 - P30 - Uptime
Expert - Disable resettings of a device by power on/off cycle
Expert - Disable configuring the interval remotely
The Uptime value will allow detecting if a device reboots at that time.
@ptvoinfo the routers don't disconnect; in the sniff I can see they both keep sending the "Report Attributes" message every minute. They just choose the wrong route, and they take 10 minutes to realise they are not receiving the ACK from the coordinator before fixing the route themselves. I would like to know why they change the route (would it be possible to pin the route once it is stable so it doesn't change?), and whether the firmware could be updated so they take much less time to realise something is wrong and fix it (could they try to re-route after 1 or 2 minutes of not receiving the ACK from the coordinator?).
@elbueno222 I cannot change this behavior from my side because it is code inside Zigbee ZStack.
Ok, in that case I guess this issue cannot be fixed. I will try to play with the configurable firmware to see if I can get any extra info.
|
2025-04-01T04:10:35.145780
| 2016-02-27T20:59:28
|
136967011
|
{
"authors": [
"jeanminet"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14722",
"repo": "Koheron/zynq-sdk",
"url": "https://github.com/Koheron/zynq-sdk/pull/79"
}
|
gharchive/pull-request
|
2clks
Improvement of PS-PL communication:
Use different clocks for AXI4-Lite interconnect 0 (143 -> 200 MHz) and AXI4 interconnect 1 (143 MHz)
Remove register slices on the AXI4-Lite interconnect
:warning: AXI4-Full is overkill for the BRAMs since there is no simple way to use the burst mode from Linux (http://stackoverflow.com/questions/22373717/any-built-in-linux-methods-for-axi-burst-type-devices). Using AXI4-Lite greatly decreases the FPGA resources needed and allows clocks up to 200 MHz.
With superfluous AXI4-Full:
spectrum block design:
oscillo block design:
With AXI4-Lite:
|
2025-04-01T04:10:35.147494
| 2023-12-23T10:23:16
|
2054744733
|
{
"authors": [
"KoljaB",
"Npahlfer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14723",
"repo": "KoljaB/RealtimeSTT",
"url": "https://github.com/KoljaB/RealtimeSTT/issues/15"
}
|
gharchive/issue
|
Bump Porcupine version to at least 2.0 to support mac/arm64
Porcupine 1.9 only supports the mac x86_64 arch, but 2.0 adds support for mac arm64.
The downside is that it adds the requirement for an access_key in the pvporcupine.create() method.
Thoughts?
The v1.9.5 used is the last version where pvporcupine offers free usage. Changing this would force everybody to create a chargeable account (and pay for longer usage). I feel that would put people off.
If you want this, please just change it in the code.
|
2025-04-01T04:10:35.161521
| 2024-07-12T08:02:49
|
2404996117
|
{
"authors": [
"CedricGatay",
"Kolos65"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14724",
"repo": "Kolos65/Mockable",
"url": "https://github.com/Kolos65/Mockable/issues/65"
}
|
gharchive/issue
|
BAD_ACCESS on some types
When mocking a protocol with some types like the following:
func get<T: Decodable, U: Encodable>(url: URLConvertible, body: U?, displayLoader: Bool, useDefaultErrorHandler: Bool) async
-> Result<T, Error>
The mock generation works fine, but it crashes on test execution with BAD_ACCESS; it looks like it is related to https://github.com/swiftlang/swift/issues/61357. Do you have any idea how to fix this @Kolos65?
@CedricGatay I could not repro this, can you elaborate on this issue with some isolated example protocols and tests?
Closing this for inactivity, reopen if needed.
Narrowing it down, the issue seems to be related to having the mock in a different Tuist module from the one running the tests.
If I have a simple protocol like this:
public protocol MyProto {
func onTap()
}
If it sits in a module, I've got a BAD_ACCESS when trying to do thing like this:
let mock = MockMyProto()
when(mock).onTap().perform { print("Works") }
If the same protocol sits within the @testable import of my main app, it works as expected.
Do you have the MOCKING flag enabled for that module?
Don't forget the given clause, your example misses given(mock).onTap().willReturn()
So MyProto is in a framework module A and you are trying to use it in the tests of B that depends on A?
I have the MOCKING flag enabled, otherwise it would not compile
I know it misses things, but this sole line is enough to get the crash, no need to bother adding a given clause
MyProto is in framework module A, and my tests are running in MyAppTests, which depends on module A.
The backtrace is the following:
MyApp.debug.dylib`FunctionActionBuilder.perform(_:):
0x107eb90f0 <+0>: stp x20, x19, [sp, #-0x20]!
0x107eb90f4 <+4>: stp x29, x30, [sp, #0x10]
0x107eb90f8 <+8>: add x29, sp, #0x10
0x107eb90fc <+12>: sub sp, sp, #0x90
0x107eb9100 <+16>: stur x8, [x29, #-0x50]
0x107eb9104 <+20>: stur x0, [x29, #-0x90]
0x107eb9108 <+24>: stur x1, [x29, #-0x98]
0x107eb910c <+28>: stur x2, [x29, #-0x60]
0x107eb9110 <+32>: stur x20, [x29, #-0x88]
0x107eb9114 <+36>: stur xzr, [x29, #-0x30]
0x107eb9118 <+40>: stur xzr, [x29, #-0x28]
0x107eb911c <+44>: stur xzr, [x29, #-0x38]
-> 0x107eb9120 <+48>: ldr x0, [x2, #0x20]
0x107eb9124 <+52>: stur x0, [x29, #-0x40]
0x107eb9128 <+56>: ldr x1, [x2, #0x18]
0x107eb912c <+60>: stur x1, [x29, #-0x48]
0x107eb9130 <+64>: stur x1, [x29, #-0x18]
0x107eb9134 <+68>: ldr x2, [x2, #0x10]
0x107eb9138 <+72>: stur x2, [x29, #-0xa0]
0x107eb913c <+76>: stur x2, [x29, #-0x20]
0x107eb9140 <+80>: adrp x3, 1698
0x107eb9144 <+84>: add x3, x3, #0x48 ; protocol requirements base descriptor for Mockable.Builder
0x107eb9148 <+88>: adrp x4, 1698
0x107eb914c <+92>: add x4, x4, #0x50 ; associated conformance descriptor for Mockable.Builder.Mockable.Builder.Service: Mockable.MockableService
0x107eb9150 <+96>: bl 0x1083855c8 ; symbol stub for: swift_getAssociatedConformanceWitness
0x107eb9154 <+100>: ldur x2, [x29, #-0xa0]
0x107eb9158 <+104>: mov x1, x0
0x107eb915c <+108>: mov x0, #0x0 ; =0
0x107eb9160 <+112>: adrp x3, 1698
0x107eb9164 <+116>: add x3, x3, #0x1f8 ; protocol requirements base descriptor for Mockable.MockableService
0x107eb9168 <+120>: adrp x4, 1698
0x107eb916c <+124>: add x4, x4, #0x228 ; associated type descriptor for Mockable.MockableService.Member
0x107eb9170 <+128>: bl 0x1083855d4 ; symbol stub for: swift_getAssociatedTypeWitness
Is this happening with the latest version? (0.0.11)
I have just tried to repro this in a Tuist project and failed... Here is my setup:
Main manifest Project.swift:
let project = Project(
name: "App",
packages: [
.remote(
url: "https://github.com/Kolos65/Mockable",
requirement: .exact("0.0.11")
)
],
targets: [
.target(
name: "App",
destinations: .iOS,
product: .app,
// ...
sources: ["App/Sources/**"],
resources: ["App/Resources/**"],
dependencies: [
.project(target: "Features", path: .relativeToRoot("Modules/Features")),
.package(product: "Mockable", type: .runtime)
]
),
.target(
name: "AppTests",
destinations: .iOS,
product: .unitTests,
sources: ["App/Tests/**"],
dependencies: [
.target(name: "App"),
.package(product: "Mockable", type: .runtime)
]
)
]
)
Features module manifest Project.swift
let project = Project(
name: "Features",
targets: [
.target(
name: "Features",
destinations: .iOS,
product: .framework,
// ...
sources: ["Sources/**"],
dependencies: [
.package(product: "Mockable", type: .runtime)
],
settings: .settings(
configurations: [
.debug(
name: .debug,
settings: ["SWIFT_ACTIVE_COMPILATION_CONDITIONS": "$(inherited) MOCKING"]
)
]
)
)
]
)
Added this in Modules/Features/Sources
import Mockable
@Mockable
public protocol MyProto {
func onTap()
}
And wrote this test in the AppTests target:
import XCTest
import Mockable
@testable import Features
final class AppTests: XCTestCase {
func test_twoPlusTwo_isFour() {
let mock = MockMyProto()
given(mock)
.onTap()
.willReturn()
when(mock)
.onTap()
.perform { print("Works") }
mock.onTap()
verify(mock)
.onTap()
.called(.atLeastOnce)
}
}
This runs just fine. Can you spot any key differences to your setup? If you create a sample project where I can repro this I would be happy to look at it.
I'll try to create a project that presents this issue.
The main project is quite big so I need to trim things.
Ok, I think I've got something: if I run the tests in my framework, it fails.
But my dependency declaration is not like yours: I use a Package.swift and set the dependency as external.
If I use the packages field in the Project object (like you) and declare my dependency using .package(product: "Mockable", type: .runtime), it works as expected. Do you have any idea why?
This seems like a linking issue then... Can you try removing the .external(named: "Mockable") from the app test target dependency list, but keep it in the app target dependency list? You should still be able to import mockable this way.
You're right, not putting "Mockable" in the test dependencies solves the problem!
This was a double linking issue I guess, I will update the documentation soon with some guidance on this.
However, it still seems to fail if I run tests in my main app test bundle when using .external.
If I sum up things:
tests in the same module work fine as long as I don't declare .external in the test bundle dependencies
tests in another module break if I'm using .external
if I use .package, it fails the build of my main app's tests due to multiple commands producing SwiftSyntaxMacros (one for each module)
Do you have any clue on this @Kolos65 ?
Adding a package dependency to multiple modules that depend on each other is always hard :/
Try changing Xcode build settings (for example only embed Mockable in the app target) and if you find a setup that works, search for the Tuist equivalent of your current build settings. This is the best I got, sorry.
|
2025-04-01T04:10:35.180451
| 2024-01-03T10:34:07
|
2063728693
|
{
"authors": [
"Konano"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14725",
"repo": "Konano/Speechless",
"url": "https://github.com/Konano/Speechless/issues/2"
}
|
gharchive/issue
|
Some known bugs
[ ] meterscao/Speechless#42
[x] meterscao/Speechless#38
[ ] meterscao/Speechless#35
Fixed the image aspect-ratio distortion issue:
https://github.com/Konano/Speechless/commit/07f791bf93baea94daed7c9aa2dd359144916238
https://github.com/Konano/Speechless/commit/44765c07957f63b0236c842de87c2828662a9bc8
|
2025-04-01T04:10:35.192575
| 2019-11-08T22:31:48
|
520265062
|
{
"authors": [
"h3xar0n"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14726",
"repo": "Kong/docs.konghq.com",
"url": "https://github.com/Kong/docs.konghq.com/pull/1572"
}
|
gharchive/pull-request
|
fix(gemlock) netlify previews
Assuming it passes Travis CI, I'm going to merge, @Kong/team-docs
|
2025-04-01T04:10:35.199962
| 2022-05-15T10:35:39
|
1236248914
|
{
"authors": [
"SethFalco",
"filfreire",
"ian-ok"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14727",
"repo": "Kong/insomnia",
"url": "https://github.com/Kong/insomnia/issues/4784"
}
|
gharchive/issue
|
response code 204
Expected Behavior
Insomnia resolves the request, reporting no content but a successful request.
Actual Behavior
Insomnia never resolves the request.
Reproduction Steps
Create a REST API that responds with no content and HTTP code 204.
Request it with Insomnia.
Is there an existing issue for this?
[X] I have searched the issue tracker for this problem.
Additional Information
No response
Insomnia Version
2022.3.0
What operating system are you using?
Windows
Operating System Version
Windows 11, 21H2
Installation method
download from insomnia.rest
Last Known Working Insomnia version
No response
Hi @ian-ok, thank you for reporting this issue!
I'm not able to reproduce this, for example by requesting httpbin.org/status/204, which forces the status code to 204 and returns no content.
Can you provide more details into what you see, and are you able to resolve the same request elsewhere, for example by sending a curl request?
to send the request i used
I assume you mean to send the response?
I think it's more likely that your request handler is hanging before, or is returning before reaching the line you shared.
I don't think so; if I add content to the request it works as expected.
@ian-ok this could have something to do with the implementation of your server, not necessarily with Insomnia.
I've tried reproducing with a simple server setup:
const http = require('http');
const port = 3000;
http.createServer(function (req, res) {
res.writeHead(204, {'Content-Type': 'text/plain'});
res.end('test');
}).listen(port);
And I was able to still get the proper 204 No Content response.
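For anyone wanting a self-contained sanity check outside any API client, here is a sketch in JVM Kotlin using the JDK's built-in com.sun.net.httpserver (the path and the use of an ephemeral port are arbitrary choices, not from the thread):

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.HttpURLConnection
import java.net.InetSocketAddress
import java.net.URL

fun main() {
    // Tiny server that always answers 204 No Content (-1 = no response body).
    val server = HttpServer.create(InetSocketAddress(0), 0)
    server.createContext("/") { exchange ->
        exchange.sendResponseHeaders(204, -1)
        exchange.close()
    }
    server.start()

    val conn = URL("http://localhost:${server.address.port}/")
        .openConnection() as HttpURLConnection
    println(conn.responseCode)
    conn.disconnect()
    server.stop(0)
}
```

If this prints `204` promptly, the server side handles empty responses correctly and a hang would be specific to the client.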
I'm marking this issue as closed for now, but I would encourage you to check if you notice similar behavior using for example a curl request, or with other API clients, as it might indeed be an implementation problem.
If you still face this problem, or you find new ones, please feel free to open an issue.
|
2025-04-01T04:10:35.206758
| 2023-06-28T03:22:16
|
1778106243
|
{
"authors": [
"jackkav",
"kackerx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14728",
"repo": "Kong/insomnia",
"url": "https://github.com/Kong/insomnia/issues/6065"
}
|
gharchive/issue
|
import Multi-line curl error
Expected Behavior
Import a curl command exported from Edge or Postman.
Actual Behavior
Error while scanning for resources to import: Cannot read properties of undefined (reading 'trim') ??
Reproduction Steps
No response
Is there an existing issue for this?
[X] I have searched the issue tracker for this problem.
Additional Information
No response
Insomnia Version
latest
What operating system are you using?
macOS
Operating System Version
macos 13.1
Installation method
insomnia
Last Known Working Insomnia version
No response
Thanks for the report, could you provide the import file? or more information about how you created it.
sure, such as
curl --location --request POST 'http://localhost:8900/fbi/contract/list' \
--header 'x-rpc-fake_user: miyin' \
--header 'x-rpc-god;' \
--header 'User-Agent: Apifox/1.0.0 (https://www.apifox.cn)' \
--header 'Content-Type: application/json' \
--header 'Accept: */*' \
--header 'Host: localhost:8900' \
--header 'Connection: keep-alive' \
--data-raw '{
"page": 1,
"page_size": 100,
"name": "",
"company_code_list": [],
"supplier_code_list": [],
"currency_list": [],
"usable": false
}'
I'm not sure if it was my mishandling
ah, you can paste that directly into the URL bar of an empty request and it will be parsed. You are correct though, it does list cURL as one of the import types. That's our bad. Thanks again.
You mean this...? It seems to be unsupported.
ah, it's because of this line:
--header 'x-rpc-god;' \
it works if you remove it.
although the code should support valueless headers, so this is a bug
ah. Thank you very much. I got it!
Fixed in #6069
|
2025-04-01T04:10:35.211084
| 2023-08-23T15:23:32
|
1863555395
|
{
"authors": [
"foxt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14729",
"repo": "Kong/insomnia",
"url": "https://github.com/Kong/insomnia/issues/6374"
}
|
gharchive/issue
|
JSONPath history is broken
Expected Behavior
You can select previously used JSONPath queries
Actual Behavior
The application crashes whenever you hover over a more complex JSONPath.
Failed to render hc. Please report the error to [our Github Issues](https://github.com/Kong/insomnia/issues)
Stack trace
Error: Failed to execute 'querySelector' on 'Element'<EMAIL_ADDRESS>== "Foo")]"]' is not a valid selector.
at file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:78623
at vh (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:24296)
at s6 (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:42448)
at qbe (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:41270)
at Dc (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:40309)
at H_ (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:36912)
at dc (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:116:3288)
at file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-391c2d16.js:118:34286
at node:internal/process/task_queues:140:7
at AsyncResource.runInAsyncScope (node:async_hooks:204:9)
at AsyncResource.runMicrotask (node:internal/process/task_queues:137:8)
Reproduction Steps
Hover over a complex JSONPath in the history, for example<EMAIL_ADDRESS>== "Foo")]
Is there an existing issue for this?
[X] I have searched the issue tracker for this problem.
Additional Information
No response
Insomnia Version
2023.5.6
What operating system are you using?
macOS
Operating System Version
macOS Monterey 12.6.8 (21G725)
Installation method
download from insomnia.rest
Last Known Working Insomnia version
No response
dupe of #5846
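The selector error above comes from interpolating the raw JSONPath (which contains double quotes and brackets) into a CSS attribute selector passed to `querySelector`. Any fix has to escape the value first; a sketch of the idea in Python (illustrative only; the app itself would use `CSS.escape` or equivalent quoting in TypeScript):

```python
def css_attr_selector(attr: str, value: str) -> str:
    """Build a CSS attribute selector, escaping backslashes and double
    quotes so arbitrary strings (e.g. JSONPath queries) can be embedded
    inside the double-quoted value."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'[{attr}="{escaped}"]'
```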
|
2025-04-01T04:10:35.216097
| 2023-11-28T11:26:02
|
2014223047
|
{
"authors": [
"JulianNicholls",
"marckong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14730",
"repo": "Kong/insomnia",
"url": "https://github.com/Kong/insomnia/issues/6881"
}
|
gharchive/issue
|
List of available calls disappears, apart from the last one used and then a message appears
Expected Behavior
I should be able to scroll up and down the list of available calls.
Actual Behavior
The list of available calls disappears, apart from one of the last ones used and, after a short delay, a message appears
Failed to render . Please report the error to [our Github Issues]
with the following stack trace
TypeError: Cannot read properties of null (reading 'index')
at file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:119:177794
at Ch (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:24296)
at D6 (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:42448)
at z_ (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:36651)
at Ls (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:42:3288)
at ehe (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:41325)
at sc (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:40309)
at rF (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:44:35758)
at N (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:29:1615)
at Immediate.me [as _onImmediate] (file:///Applications/Insomnia.app/Contents/Resources/app.asar/index-016b0889.js:29:1987)
at process.processImmediate (node:internal/timers:476:21)
Reproduction Steps
Scrolling the list of available calls seems to trigger the problem.
It is becoming increasingly easy to reproduce; I have had it crash three times in the last two days.
I am using the latest version, and it has only started to happen with this version:
Build date: 11/23/2023
OS: Darwin arm64 23.1.0
Electron: 27.0.3
Node: 18.17.1
Node ABI: 118
V8: <IP_ADDRESS>-electron.0
Architecture: arm64
Is there an existing issue for this?
[X] I have searched the issue tracker for this problem.
Additional Information
No response
Insomnia Version
8.4.5
What operating system are you using?
macOS
Operating System Version
Sonoma 14.1.1
Installation method
This was from the last automatic update. Originally installed from insomnia.rest
Last Known Working Insomnia version
8.4.4?
Hi @JulianNicholls would you mind elaborating more on this issue. What do you mean by "Calls"? Did you mean the list of requests or the list of request history? Would you be able to share a screenshot with us?
|
2025-04-01T04:10:35.218354
| 2019-12-06T21:31:59
|
534267059
|
{
"authors": [
"hutchic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14731",
"repo": "Kong/kong-build-tools",
"url": "https://github.com/Kong/kong-build-tools/pull/180"
}
|
gharchive/pull-request
|
feat(rhel6) drop redhat 6 as a build option
RHEL6 full support ended on May 10, 2016, End of Maintenance Support 1 ended on May 10, 2017.
The RHEL 7 and RHEL 8 ubi images have the ability to install and test our Kong releases without requiring redhat credentials.
These two reasons are strong enough that it's time to ditch RHEL 6.
:tada: This PR is included in version 2.3.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T04:10:35.224753
| 2017-08-25T16:59:50
|
252962048
|
{
"authors": [
"decoursin",
"thibaultcha"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14732",
"repo": "Kong/kong",
"url": "https://github.com/Kong/kong/issues/2851"
}
|
gharchive/issue
|
incorrectly parses multipart/form-data
When using the aws-lambda plugin, the body of messages is automatically being parsed when the header is multipart/form-data. The problem is that it doesn't parse it adequately, so it's accidentally excluding important parts, or getting some things wrong altogether. There should be some way to forward the raw message.
Here's my example of it wrongly parsing my data:
This is how you parsed my message:
{
dkim<EMAIL_ADDRESS>: pass}',
to<EMAIL_ADDRESS>subject: 'title',
from: 'Nick Decoursin<EMAIL_ADDRESS>SPF: 'softfail',
sender_ip: '<IP_ADDRESS>',
email: '--f403045f559c6fcc8b055795cf05\r\n--f403045f559c6fcc88055795cf03\r\nbig\r\n--f403045f559c6fcc88055795cf03\r\n<div dir="ltr">body</div>\r\n--f403045f559c6fcc88055795cf03--\r\n--f403045f559c6fcc8b055795cf05\r\newogICAgIm5hbWUiOiAibXkgbmFtZSIsCiAgICAiYXJlYS1vZi11c2UiOiAi\r\nY2F0IiwKICAgICJ1bml0IjogInNvbWUgdW5pdCIsCiAgICAicGFja2FnZXMi\r\nOiBbCgl7CgkgICAgIm51bS1vZi11bml0cyI6IDQsCgkgICAgInByaWNlIjog\r\nIiQ1LjAwIgoJfSwKCXsKCSAgICAibnVtLW9mLXVuaXRzIjogOCwKCSAgICAi\r\ncHJpY2UiOiAiJDEwLjAwIgoJfV0KfQ==\r\n--f403045f559c6fcc8b055795cf05--'
}
This is the raw message:
--xYzZY
Content-Disposition: form-data; name="dkim"
<EMAIL_ADDRESS>: pass}
--xYzZY
Content-Disposition: form-data; name="subject"
title
--xYzZY
Content-Disposition: form-data; name="email"
Received: by mx0025p1las1.sendgrid.net with SMTP id 136ViLNNJx Fri, 25 Aug 2017 16:19:14 +0000 (UTC)
Received: from mail-wr0-f178.google.com (unknown [<IP_ADDRESS>]) by mx0025p1las1.sendgrid.net (Postfix) with ESMTPS id CA2EF22903 for<EMAIL_ADDRESS>Fri, 25 Aug 2017 16:19:13 +0000 (UTC)
Received: by mail-wr0-f178.google.com with SMTP id z91so848344wrc.1 for<EMAIL_ADDRESS>Fri, 25 Aug 2017 09:19:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=weissmaler.de; s=google; h=mime-version:from:date:message-id:subject:to; bh=QpbRN2IhVIV7sFjAE/gbHB905/SCCdbIp7heSMkMdJg=; b=t//9/a2ZdyPYwGhExbRB0KBQpFr5mXY1ddtfuZd22Z4wvGoqV7F7Y5DEVxKACDLfLl 4agVtNIO7TWB+AiBL/mrSud95XzF2ghtWb+mKa37GYPiG/MDm/rr2ohUdudkUFlf8jkm IcqczDJs86OdIo62zdkSODmHbdxmCzfiHW2F4+J4S9zfcWx1brR5lbmB7s6gqtrzSDYA GojkqIR+pZX3eD70KE4yn4JXtXlqLmX7NoLUjPA9FI3TEQwEBpCHenN6EWg8gUGDupDG 6A+dR544ko6QWm1jcenTeRNdML3TrkkHahSNGcKsGhYxvtJT7X7R5uQZbfCZK6kB3Hmx 4FLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:from:date:message-id:subject:to; bh=QpbRN2IhVIV7sFjAE/gbHB905/SCCdbIp7heSMkMdJg=; b=G0/WOpieZKSjGypaXoWQAPbudI5eYMj5Yppn+IX2dnebPZlBhnSMFp/oOSXeH7+AXM jqm/i/RV+snU9mLuWhHAYGMo9DX0tcLyMOi8sArTp1JDR6DUnfLILJtWgAc9pLNp3y/F GWq9zrQC+Er04CbwLn6hFSkMWY1kP3p8VgfqSaRQBJv8e1J2vlUR01avW4fXIADxQzcP Th4jMxZ6NgWtguKRpXZksC9ML75ppoEewoYRvxfXpvxoCatZa2fOcwzda/k1l/Q7c2wH EOYH2FGsS2+OabaYzZNa4F8rK2XHtbRqPsYqk6212wPdRmio8QS6goOTGQqWLMA/SEoD +CzA==
X-Gm-Message-State: AHYfb5i8WQJKYjHfTTEsXZFYdIHEeOGrafh45cVWs6kphZYUGl/s5RHg PadyJ6unx/cmquiJ1k/WPAW0aJ6jXD3pjGkObQ==
X-Received: by <IP_ADDRESS> with SMTP id z65mr6068290wrb.218.1503677952191; Fri, 25 Aug 2017 09:19:12 -0700 (PDT)
MIME-Version: 1.0
Received: by <IP_ADDRESS> with HTTP; Fri, 25 Aug 2017 09:19:11 -0700 (PDT)
From: Nick Decoursin<EMAIL_ADDRESS>Date: Fri, 25 Aug 2017 18:19:11 +0200
Message-ID<EMAIL_ADDRESS>Subject: title
To<EMAIL_ADDRESS>Content-Type: multipart/mixed; boundary="f403045f559c32a3fa0557964ec7"
--f403045f559c32a3fa0557964ec7
Content-Type: multipart/alternative; boundary="f403045f559c32a3f70557964ec5"
--f403045f559c32a3f70557964ec5
Content-Type: text/plain; charset="UTF-8"
body
--f403045f559c32a3f70557964ec5
Content-Type: text/html; charset="UTF-8"
<div dir="ltr">body</div>
--f403045f559c32a3f70557964ec5--
--f403045f559c32a3fa0557964ec7
Content-Type: application/json; name="working.bak.json"
Content-Disposition: attachment; filename="working.bak.json"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_j6s30z2j0
ewogICAgIm5hbWUiOiAibXkgbmFtZSIsCiAgICAiYXJlYS1vZi11c2UiOiAi
Y2F0IiwKICAgICJ1bml0IjogInNvbWUgdW5pdCIsCiAgICAicGFja2FnZXMi
OiBbCgl7CgkgICAgIm51bS1vZi11bml0cyI6IDQsCgkgICAgInByaWNlIjog
IiQ1LjAwIgoJfSwKCXsKCSAgICAibnVtLW9mLXVuaXRzIjogOCwKCSAgICAi
cHJpY2UiOiAiJDEwLjAwIgoJfV0KfQ==
--f403045f559c32a3fa0557964ec7--
--xYzZY
Content-Disposition: form-data; name="to"
<EMAIL_ADDRESS>--xYzZY
Content-Disposition: form-data; name="from"
Nick Decoursin<EMAIL_ADDRESS>--xYzZY
Content-Disposition: form-data; name="sender_ip"
<IP_ADDRESS>
--xYzZY
Content-Disposition: form-data; name="envelope"
{"to":["something+BIG<EMAIL_ADDRESS>--xYzZY
Content-Disposition: form-data; name="charsets"
{"to":"UTF-8","subject":"UTF-8","from":"UTF-8"}
--xYzZY
Content-Disposition: form-data; name="SPF"
softfail
--xYzZY--
Your parsed event is completely missing the Content-Disposition: attachment part, and it's incorrectly parsing the Content-Disposition: form-data; name="email" part.
As such, the aws-lambda plugin is useless to me as it stands now, since I'm unable to modify the request; it's sent directly from sendgrid.
Hi,
It seems like this could be an issue in lua-multipart. Would you mind opening an issue there maybe? Leaving this open in the meanwhile.
Thanks for the report!
Sure, but it's also an issue here, since I should be able to request sending the raw data, rather than it being parsed.
@decoursin I believe the aws-lambda plugin will also include a request_body property in the upstream body (forwarded to the Lambda function). This property should include the raw body, as per your request. Doesn't it?
Yes, it's a toggleable flag, which you added in this PR: https://github.com/Kong/kong/pull/2823. Feel free to close this, I only didn't myself because it's still an issue for lua-multipart.
Yes indeed. We will be tracking the lua-multipart issue in its own repository, so thank you for opening the issue there!
Alright then, I'll go ahead and close this.
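For reference, the failure mode reported above is a parser splitting on every boundary-like line instead of only the declared outer boundary, which chops up the nested multipart inside the `email` field. A correct parser keyed to the outer boundary keeps inner boundaries intact; Python's stdlib `email` parser is shown here purely as a reference implementation, not as the lua-multipart fix:

```python
from email import message_from_string

# A tiny multipart/form-data body whose "email" field itself contains
# multipart content with its own (inner) boundary.
raw = (
    'Content-Type: multipart/form-data; boundary="xYzZY"\r\n'
    "\r\n"
    "--xYzZY\r\n"
    'Content-Disposition: form-data; name="subject"\r\n'
    "\r\n"
    "title\r\n"
    "--xYzZY\r\n"
    'Content-Disposition: form-data; name="email"\r\n'
    "\r\n"
    "--inner\r\n"
    "a nested part that must NOT be split on\r\n"
    "--inner--\r\n"
    "--xYzZY--\r\n"
)

msg = message_from_string(raw)
parts = msg.get_payload()             # splits ONLY on the declared boundary
email_field = parts[1].get_payload()  # inner "--inner" lines survive intact
```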
|
2025-04-01T04:10:35.228958
| 2020-05-14T17:19:41
|
618406094
|
{
"authors": [
"Tieske",
"hbagdi",
"javierguerragiraldez"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14733",
"repo": "Kong/lua-resty-dns-client",
"url": "https://github.com/Kong/lua-resty-dns-client/issues/93"
}
|
gharchive/issue
|
Remove table.move for arm64 compatibility
https://github.com/Kong/kong/issues/5869
not sure this is the right approach, what are the options to fix LuaJIT @javierguerragiraldez ? wouldn't that be quicker?
The fix to LuaJIT (LuaJIT/LuaJIT#583) is simple and looks safe but I'd like some comment from anybody familiar with the assembly interpreter. Also, table.move is one of those functions that are really not needed. Written in Lua would be just as fast. Finally, it's not used anywhere else.
(Btw, shouldn't it be size = size - count just a few lines above?)
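For context, `table.move(a1, f, e, t [, a2])` copies `a1[f..e]` into `a2` starting at index `t`, handling overlapping ranges. The semantics are small enough that a plain-Lua replacement is trivial; here is a sketch of the same behavior (in Python, 0-based, for illustration only):

```python
def table_move(a1, f, e, t, a2=None):
    """Sketch of Lua's table.move(a1, f, e, t [, a2]) on 0-based Python
    lists: copy a1[f..e] into a2 starting at index t. The right-hand
    slice is copied before assignment, so overlapping ranges are safe."""
    if a2 is None:
        a2 = a1
    n = e - f + 1
    if n <= 0:
        return a2
    while len(a2) < t + n:      # grow the destination if needed
        a2.append(None)
    a2[t:t + n] = a1[f:e + 1]
    return a2
```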
|
2025-04-01T04:10:35.231640
| 2023-09-25T04:49:42
|
1910653048
|
{
"authors": [
"S2kael",
"saltict"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14734",
"repo": "Koniverse/SubWallet-Extension",
"url": "https://github.com/Koniverse/SubWallet-Extension/pull/1945"
}
|
gharchive/pull-request
|
[Issue-1942] [Ledger] Support connect Ledger device for more chains
Related issue(s)
#1942
Is your feature request related to a problem(s)? Please describe.
[Ledger] Add some networks
Describe the solution you'd like
[Ledger] Add some networks
Describe alternatives you've considered
No
Additional context
No
🚀 Deployed on https://pr-1945--sw-web-runner.netlify.app
|
2025-04-01T04:10:35.235697
| 2024-08-03T19:18:38
|
2446556566
|
{
"authors": [
"Kosinkadink",
"schoenid"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14735",
"repo": "Kosinkadink/ComfyUI-AnimateDiff-Evolved",
"url": "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/issues/437"
}
|
gharchive/issue
|
Most of the workflow example images are not available
Tried to open the images to copy the workflow; it was working for the first two or three workflow images.
The rest is not available.
Error on browser: This private-user-images.githubusercontent.com page can’t be found.
I think dragging the images from the README directly into ComfyUI works, but looking at/opening them in another tab does not. Not sure if this is a Github thing to avoid some exploit or something like that, I'll try reuploading at some point and see if that fixes it.
I updated a few of the workflows (outdated ones are clearly marked with a header) that should work. Hopefully they won't expire like the previous ones.
Thank you very much!
|
2025-04-01T04:10:35.261643
| 2024-06-13T17:03:56
|
2351646864
|
{
"authors": [
"Mr3zee",
"ShayOinif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14736",
"repo": "Kotlin/kotlinx-rpc",
"url": "https://github.com/Kotlin/kotlinx-rpc/issues/100"
}
|
gharchive/issue
|
kRPC: Endpoints do not terminate when the transport does
Hello.
I don't know if it is an issue or not, but I definitely find it weird.
It seems like there is no way of telling when the client is disconnected, like a status flow or an on-disconnection lambda to register.
Right now when I consume a flow and the server goes away, the flow is not completed.
As a workaround I created my own little helper flows (also to maintain a single client for my app), like so:
internal val clientCoroutineScope = CoroutineScope(Job())
internal val httpClientSharedFlow = channelFlow {
val client = HttpClient { installRPC() }
send(client)
awaitClose {
client.close()
}
}.shareIn(clientCoroutineScope, SharingStarted.WhileSubscribed(3.seconds), 1)
internal sealed interface ClientState {
data class Disconnected(val message: String) : ClientState
data class Connected(val client: RPCClient) : ClientState
}
@OptIn(ExperimentalCoroutinesApi::class)
internal val myServiceClientStateFlow = httpClientSharedFlow.flatMapLatest<HttpClient, ClientState> {
val client = it.rpc {
url("ws://<IP_ADDRESS>:3024/firstRoute")
rpcConfig { serialization { json() } }
}
flow {
emit(ClientState.Connected(client))
client.coroutineContext.job.join()
}.onCompletion {
client.cancel()
throw Throwable("Remote disconnected")
}
}.retryWhen { cause, _ ->
repeat(3) {
emit(ClientState.Disconnected("Error: $cause, retrying in ${3 - it} seconds..."))
delay(1.seconds)
}
true
}.stateIn(
clientCoroutineScope,
SharingStarted.WhileSubscribed(3.seconds),
ClientState.Disconnected("Initial")
)
val uiClientStateFlow = myServiceClientStateFlow.map {
when (it) {
is ClientState.Connected -> "Connected"
is ClientState.Disconnected -> "Disconnected - ${it.message}"
}
}
@OptIn(ExperimentalCoroutinesApi::class, InternalRPCApi::class)
val uiRpcFlow = myServiceClientStateFlow.transformLatest {
if (it is ClientState.Connected) {
streamScoped {
emitAll(
it.client.withService<MyService>().coldFlow(getPlatform().name)
)
}
}
}.retryWhen { cause, attempt ->
emit("Error - $attempt: $cause")
delay(1.seconds)
true
}
I wonder if it is the intended use or am I missing something or should the interface or behavior change?
Thanks,
Shay Oinif
Hi, thank you for the question!
I'm not sure what you want to do. If a client or a service disconnects, its corresponding CoroutineScope will cancel. If there was an error with a flow request, it should throw.
There is no retry mechanism right now, that is true
I simply expected that if a service goes away, the flow I am consuming will end, no matter what coroutine scope I collect it on.
In a normal shutdown, the krpc protocol sends an end message for all clients, but in case of an unexpected crash, the coroutine scope of the backing transport (a web socket with Ktor krpc) ends but the corresponding client flow doesn't, unless collected on the client's coroutine scope.
Hm, that sound not right. Would you be able to provide a reproducer for that behaviour, please?
Clone this:
https://github.com/ShayOinif/MyExample
It builds on your samples but with only jvm server and compose desktop.
On completion of the flow it should print done.
If the server is around till the end, it will print done.
If I stop the server process while the client is collecting, the flow simply gets stuck and never finishes.
The same applies with single shot requests.
I can create client.
Connect to service through web socket with ktor krpc.
Once it succeeds, all operations simply get stuck if the server goes away.
They don't even throw.
Tell me if you want me to add to this repo a button to make a single shot request so you could see what happens if initial communication to server succeeds but then it goes away.
I can verify, that this is a bug, I'll rename the ticket
UPD: maybe ktor specific, see https://youtrack.jetbrains.com/issue/KTOR-7234/WebSocketSessions-job-does-not-complete-on-abrupt-closure
I guess it's specific to the webSocketSession then.
Anyway we also have the bug with flow cancellation, I verified without ktor, so it's two bugs in total
Oh, and also in your repro there is no invokeOnCompletion, which is the bug indicator. The channel completed ok, but the job does not, that's why I used invokeOnCompletion in the ticket
Any update on this?
Seems like 0.2.-2/3 got better in many areas, so kudos.
But from my last attempts, it seems that new requests simply get a cancellation if the web socket session ended, but an existing flow still doesn't end.
And in general, I think exposing some API to know when the backing web socket is connected would be great.
Right now I simply poll the connection myself and check that I didn't get a cancellation exception on it (awkward).
OK, I see.
I guess then it is somewhat related, but for single-shot RPC, if the web socket session ended, you throw a cancellation exception.
Any reason for that?
Inadvertently it makes the calling code get canceled as well.
And now I saw that you opened the issue for Ktor.
It is not quite accurate.
Client should be like this (and then it works):
val ktorClient = HttpClient {
install(WebSockets)
}
val session = ktorClient.webSocketSession(host = "localhost", port = 8080, path = "/test")
session.coroutineContext.job.invokeOnCompletion {
println("completed")
}
session.incoming.consumeEach {
println((it as Frame.Text).readText())
}
session.coroutineContext.job.join()
ktorClient.close()
Server easier to debug with this, but not a must:
install(WebSockets)
routing {
webSocket("/test") {
var i = 1
while (true) {
delay(1.seconds)
outgoing.send(Frame.Text("test ${i++}"))
}
}
}
But the main takeaway on the server side is to install the WebSockets plugin, not the RPC plugin (this is your kotlinx-rpc plugin)
|
2025-04-01T04:10:35.329259
| 2024-08-23T07:55:29
|
2482549919
|
{
"authors": [
"howtoscriptinpython",
"hxll-f"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14737",
"repo": "KraoESPfan1n/Loading-Screen-Bloxstrap",
"url": "https://github.com/KraoESPfan1n/Loading-Screen-Bloxstrap/issues/1"
}
|
gharchive/issue
|
powershell
how do i fix it
Did the logging screen ever show up?
yes
Could you please provide the log as well?
Does the integration work / has it been added? (To check start Roblox / Open Bloxstrap Menu)
You could try replacing the username with the actual username used on your User (e.g. "Тимофіи") to check if that works.
tysm
Of course! For any other questions, don't hesitate to open another ticket.
|
2025-04-01T04:10:35.359302
| 2015-03-04T14:58:16
|
59808805
|
{
"authors": [
"KrauseFx",
"powtac"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14738",
"repo": "KrauseFx/snapshot",
"url": "https://github.com/KrauseFx/snapshot/issues/94"
}
|
gharchive/issue
|
Snapfile ignored - syntax error?
When running snapshot (v0.4.12 and v0.4.13) with the default Snapfile, but with the schema 'MySchema' line uncommented, it seems that everything inside the Snapfile is ignored. snapshot asks me which schema to choose... but also says Using './Snapfile'.
When deleting everything inside the Snapfile and only placing schema 'MySchema' in there, snapshot uses the given schema!
For me it looks like some syntax error within the default Snapfile file. But I'm unable to determine what the problem is.
Sorry, hard to tell, could you post the complete Snapfile which was causing the problem? Sorry for the late reply, somehow missed that issue.
The problem still exists. You can see that Snapfile is found but ignored.
YYYYYs-MacBook-Air:XXXXX.mobile simonbruchner$ snapshot
DEBUG [2015-03-11 19:08:06.44]: Found 10 simulators.
INFO [2015-03-11 19:08:06.47]: Using './Snapfile'
DEBUG [2015-03-11 19:08:06.66]: xcodebuild -workspace '/Users/simonbruchner/Documents/XXXXX.mobile/XXXXX.xcworkspace' -list
DEBUG [2015-03-11 19:08:08.18]: Found available schemes: ["Copy of XXXXX", "XXXXX", "XXXXXActionExtension", "XXXXXSharedCode", "XXXXXSharedCodeResources", "XXXXXTodayExtension"]
Found the following schemes in your project:
You can use 'scheme "Name"' in your Snapfile
--------------------------------------------
1) Copy of XXXXX
2) XXXXX
3) XXXXXActionExtension
4) XXXXXSharedCode
5) XXXXXSharedCodeResources
6) XXXXXTodayExtension
The current Snapfile
# Uncommend the lines below you want to change by removing the # in the beginning
# A list of devices you want to take the screenshots from
devices([
"iPhone 6",
"iPhone 6 Plus",
"iPhone 5",
"iPhone 4s",
"iPad Air"
])
languages([
'en-US',
'de-DE',
'it-IT'
])
# Where should the resulting screenshots be stored?
screenshots_path "./screenshots"
# clear_previous_screenshots # remove the '#'' to clear all previously generated screenshots before creating new ones
# JavaScript UIAutomation file
# js_file './snapshot.js'
# The name of the project's scheme
scheme 'XXXXX'
# Where is your project (or workspace)? Provide the full path here
# project_path './YourProject.xcworkspace'
# By default, the latest version should be used automatically. If you want to change it, do it here
# ios_version '8.1'
I found the solution! It is related to the line encoding! It was "Old Mac Format".
How to reproduce the problem:
In atom.io install the line-ending-converter
Open a Snapfile
Via the Menu select Packages > Line Encoding Converter > Convert To Old Mac Format
Save the Snapfile.
Run snapshot
The Snapfile will be ignored.
BTW: formatted with Windows line endings it worked.
I assume this strange encoding problem happened because I edited the contents of Snapshot on a Windows system and copied it over to Mac using a mouse/keyboard sharing software (http://synergy-project.org/).
Anyway, snapshot is giving me the wrong information: it says Using './Snapfile' but actually fails to use it.
I did a diff I think snapshot was correct.
diff --git a/fastlane/Snapfile b/fastlane/Snapfile
index 40e0175..9fb01f6 100644
--- a/fastlane/Snapfile
+++ b/fastlane/Snapfile
@@ -1 +1,60 @@
-# Uncomment the lines below you want to change by removing the # in the beginning^M^M# A list of devices you want to take the screenshots from^Mdevices([^M "iPhone 4s",^M "iPhon
\ No newline at end of file
+# Uncomment the lines below you want to change by removing the # in the beginning
+
+# A list of devices you want to take the screenshots from
+devices([
+ "iPhone 4s",
+# "iPhone 6",
+# "iPhone 6 Plus",
+# "iPhone 5",
So the whole file was somehow formatted in a single line with strange line breaks (see http://en.wikipedia.org/wiki/Newline#Common_problems).
Thanks for investigating and letting me know :+1:
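The root cause above is a classic one: a file saved with old-Mac (CR-only) line endings parses as one long line in a tool that only splits on LF. Normalizing newlines before evaluating the file avoids it; a sketch of what a loader could do (illustrative, not snapshot's actual code):

```python
def normalize_newlines(text: str) -> str:
    """Convert Windows (\r\n) and old-Mac (\r) line endings to Unix (\n).
    Order matters: handle \r\n first so no stray \r is left behind."""
    return text.replace("\r\n", "\n").replace("\r", "\n")
```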
|
2025-04-01T04:10:35.448904
| 2019-04-05T07:58:47
|
429629346
|
{
"authors": [
"arlac77",
"codecov-io",
"coveralls"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14739",
"repo": "Kronos-Integration/kronos-interceptor-decode-json",
"url": "https://github.com/Kronos-Integration/kronos-interceptor-decode-json/pull/441"
}
|
gharchive/pull-request
|
merge package from arlac77/npm-package-template
package.json
chore(package<EMAIL_ADDRESS>chore(package<EMAIL_ADDRESS>chore(scripts): cover@#overwrite c8 --temp-directory build/coverage ava && c8 report -r lcov --temp-directory build/coverage
chore(package): add nyc from template
chore(package): set $.ava.require='esm' as in template
chore(package): set $.ava.files='tests/-test.js,tests/-test.mjs' as in template
chore(package): set $.ava.extensions='js,mjs' as in template
Codecov Report
Merging #441 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #441 +/- ##
=======================================
Coverage 93.93% 93.93%
=======================================
Files 1 1
Lines 33 33
=======================================
Hits 31 31
Misses 2 2
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 06b7140...d0c50de. Read the comment docs.
Coverage remained the same at 91.489% when pulling d0c50dec0491dd08c12a38f2a34c47628f535d56 on template-sync-1 into 06b7140956c7cf4e8d7a15f8430cfff847edc0fa on master.
|
2025-04-01T04:10:35.460197
| 2017-09-29T07:22:55
|
261553070
|
{
"authors": [
"arlac77",
"codecov-io"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14740",
"repo": "Kronos-Integration/kronos-step-passthrough",
"url": "https://github.com/Kronos-Integration/kronos-step-passthrough/pull/233"
}
|
gharchive/pull-request
|
merge package template from Kronos-Tools/npm-package-template-minimal
package.json
chore(devDependencies): update<EMAIL_ADDRESS>from template
Codecov Report
Merging #233 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #233 +/- ##
=====================================
Coverage 100% 100%
=====================================
Files 2 2
Lines 9 9
=====================================
Hits 9 9
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9e5fe7f...7402d42. Read the comment docs.
|
2025-04-01T04:10:35.474899
| 2023-09-13T08:37:45
|
1894029736
|
{
"authors": [
"R-Lawton",
"philbrookes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14741",
"repo": "Kuadrant/multicluster-gateway-controller",
"url": "https://github.com/Kuadrant/multicluster-gateway-controller/issues/528"
}
|
gharchive/issue
|
Investigate a better way to perform DNS Geolocation
## WHAT: For MVP we decided to have no constraints on what a user can input as a geolocation, as we found out that the DNS providers AWS and GCP offer widely different locations. The aim of this issue is to investigate a method for mapping these locations.
HOW:
Example methods to investigate:
We come up with a corresponding location based on the user's preference for latency and cost
We come up with a suggested location based on the user's preference for latency and cost but don't add it until we get the go-ahead
Both options above depend on a flag that tells the controller: I'm giving you total input, or I'm giving you suggested input, or I'm giving you no input at all, I want the total reins for geolocation
DONE:
Proposal of best implementation
Epic filled out with tickets to implement said investigation
This issue is stale because it has been open for 60 days with no activity.
This issue was closed because it has been inactive for 30 days since being marked as stale.
|
2025-04-01T04:10:35.497856
| 2022-01-03T11:07:15
|
1092417126
|
{
"authors": [
"KuhlTime"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14742",
"repo": "KuhlTime/hsd-markdown-thesis",
"url": "https://github.com/KuhlTime/hsd-markdown-thesis/issues/5"
}
|
gharchive/issue
|
Images get misplaced
If images are placed so that a page break is needed, the text following the image might be placed ahead of the image to fill the rest of the previous page. This behavior can be prevented by adding a \newpage command before the image.
This is very tedious though and would be better addressed automatically, because as soon as the content changes the image might fit again and therefore the \newpage command has to be removed again.
Alternatively to manually adding a \newpage before every image, you can simply add a \ to the end of the image tag.
https://tex.stackexchange.com/a/276144/166310
\
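Putting the two sentences above together, and assuming a pandoc-style pipeline for this markdown thesis template, the fix looks like this (the filename is a placeholder):

```markdown
![System architecture overview](figures/architecture.png)\
```

The trailing backslash keeps a paragraph that contains only an image from being promoted to a floating implicit figure, so the image stays where it is written.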
|
2025-04-01T04:10:35.518761
| 2015-01-18T09:27:35
|
54691290
|
{
"authors": [
"Kuniwak",
"miyakogi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14743",
"repo": "Kuniwak/vint",
"url": "https://github.com/Kuniwak/vint/issues/107"
}
|
gharchive/issue
|
Invalid line number is reported in line-continuation
Example
let g:v = {'a': 1}
let g:a = deepcopy(
\ g:v["a"])
output
vint_check.vim:2:13: Prefer single quoted strings (see Google VimScript Style Guide (Strings))
Line 3 should be reported as an error but line 2 is reported.
vint version 0.3.0
Thanks for reporting the bug.
The cause of this bug is in vim-vimlparser.
I'm gonna report the bug to vim-vimlparser.
I see, thanks.
Reported: https://github.com/ynkdir/vim-vimlparser/issues/16
The bug will be fixed by the next release v0.3.1.
|
2025-04-01T04:10:35.755075
| 2024-10-05T17:53:38
|
2568239280
|
{
"authors": [
"Abhik004",
"Kushal997-das"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14744",
"repo": "Kushal997-das/Hacktoberfest-2024",
"url": "https://github.com/Kushal997-das/Hacktoberfest-2024/pull/218"
}
|
gharchive/pull-request
|
Updated my data
Hacktoberfest! 🎊🎈
🎉 Have you read the Contributing Guidelines? 🤔
Yes
📝 Description
Updated my data in data.json file
Fixes #215 🔧
✅ Checklist
[x] I've read the contribution guidelines. 📚
[x] I've checked the issue list before deciding what to submit. 🔍
[x] I've edited the README.md and linked to my code. 📄
Hi there!
Although this repo is not part of Hacktoberfest 2024, we have some great alternatives for you. Welcome to SkillShow, a repository focused on skill enhancement!
Guidelines:
Please adhere to the existing guidelines.
No additional rules, thanks!
Check it out here: SkillShow.
Note: Start by creating an issue. Once it’s assigned to you, feel free to open a pull request (PR). Directly opening a PR will lead to an invalid label.
Looking forward to your contributions!
|
2025-04-01T04:10:35.759581
| 2024-10-07T10:52:37
|
2570069527
|
{
"authors": [
"BhavnaMogha",
"Kushal997-das"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14745",
"repo": "Kushal997-das/Hacktoberfest-2024",
"url": "https://github.com/Kushal997-das/Hacktoberfest-2024/pull/258"
}
|
gharchive/pull-request
|
Write a program to implement mid-point circle drawing algorithm.
Hacktoberfest! 🎊🎈
🎉 Have you read the Contributing Guidelines? 🤔
(Write your answer here.)
📝 Description
(Write your answer here.)
Fixes #issue_no 🔧
✅ Checklist
[ ] I've read the contribution guidelines. 📚
[ ] I've checked the issue list before deciding what to submit. 🔍
[ ] I've edited the README.md and linked to my code. 📄
🔗 Related Issues or Pull Requests
(Write your answer here.)
Hi there!
Although this repo is not part of Hacktoberfest 2024, we have some great alternatives for you. Welcome to SkillShow, a repository focused on skill enhancement!
Guidelines:
Please adhere to the existing guidelines.
No additional rules, thanks!
Check it out here: SkillShow.
Note: Start by creating an issue. Once it’s assigned to you, feel free to open a pull request (PR). Directly opening a PR will lead to an invalid label.
Looking forward to your contributions!
|
2025-04-01T04:10:35.768195
| 2024-02-23T08:40:31
|
2150627282
|
{
"authors": [
"coveralls",
"healthjyk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14746",
"repo": "KusionStack/kusion",
"url": "https://github.com/KusionStack/kusion/pull/817"
}
|
gharchive/pull-request
|
fix: no assignment of releaseVersion when version in buildInfo is empty
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., design docs, usage docs, etc.:
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
Details
0 of 1 (100.0%) changed or added relevant line in 1 file are covered.
9 unchanged lines in 2 files lost coverage.
Overall coverage decreased (-0.04%) to 76.862%
Files with Coverage Reduction    New Missed Lines    %
pkg/version/types.go             2                   91.59%
pkg/cmd/destroy/options.go       7                   81.86%
Totals
Change from base Build<PHONE_NUMBER>:
-0.04%
Covered Lines:
9072
Relevant Lines:
11803
💛 - Coveralls
|
2025-04-01T04:10:35.771155
| 2022-10-12T18:22:29
|
1406623704
|
{
"authors": [
"Tburm",
"platschi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14747",
"repo": "Kwenta/kwenta",
"url": "https://github.com/Kwenta/kwenta/issues/1479"
}
|
gharchive/issue
|
Mobile: Spot History
Implement a mobile version of spot history to match the futures history table design with a drawer:
Needs a design mockup; how much additional data is there for the drawer?
|
2025-04-01T04:10:35.777627
| 2017-11-19T10:35:49
|
275148736
|
{
"authors": [
"JuanjoSalvador",
"Kylart",
"xdk78"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14748",
"repo": "Kylart/KawAnime",
"url": "https://github.com/Kylart/KawAnime/issues/21"
}
|
gharchive/issue
|
Search and download custom subtitles for anime
I found some sites: http://kitsunekko.net/ and http://thesubdb.com/
SubDB seems the best choice for this feature.
I don't think this is still relevant to this day; the next release will allow the user to look for torrents on nyaa freely, so they already have a lot of subs available.
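SubDB's lookup protocol is hash-based; per its old public API description (treat the exact detail as an assumption), a video file is identified by the md5 of its first and last 64 KiB. A stdlib-only Python sketch:

```python
import hashlib

# Assumption from SubDB's old API docs: hash = md5(first 64 KiB + last 64 KiB).
# The file must be at least 64 KiB long for the seek from the end to succeed.
READSIZE = 64 * 1024

def subdb_hash(path: str) -> str:
    with open(path, "rb") as f:
        data = f.read(READSIZE)       # first 64 KiB
        f.seek(-READSIZE, 2)          # jump to 64 KiB before EOF
        data += f.read(READSIZE)      # last 64 KiB
    return hashlib.md5(data).hexdigest()
```

The resulting hex digest would then be sent as the `hash` query parameter when searching or downloading subtitles.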
|
2025-04-01T04:10:35.789766
| 2020-03-23T10:00:09
|
586080679
|
{
"authors": [
"Kyusung4698",
"Panoptis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14749",
"repo": "Kyusung4698/PoE-Overlay",
"url": "https://github.com/Kyusung4698/PoE-Overlay/issues/439"
}
|
gharchive/issue
|
[Windows] requireAdministrator set
Why?
I'd consider this a major security issue as the app is parsing data it can not trust.
This seems to work perfectly fine:
C:\Windows\System32\cmd.exe /min /C "set __COMPAT_LAYER=RUNASINVOKER && start "" "F:\Games\poe-overlay\poe-overlay.exe""
Thanks for the great app, keep it up!
Duplicate of #361
|
2025-04-01T04:10:35.795335
| 2022-08-29T23:59:49
|
1354995761
|
{
"authors": [
"L3MON4D3",
"amarakon",
"hiberabyss",
"leiserfg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14750",
"repo": "L3MON4D3/LuaSnip",
"url": "https://github.com/L3MON4D3/LuaSnip/issues/560"
}
|
gharchive/issue
|
Support Global Snippets (Snipmate-like)
I use the vim-snippets repository for a set of default snippets for each file type. But they also have a file called _.snippets which contains global snippets which means they should work on all file types. On nvim-snippy, these global snippets work. On LuaSnip though, they do not.
We use 'all' for that
Thanks, it works.
Note that you can use ls.filetype_extend("all", {"_"}) to make _-snippets global
In my case, the global snippets did not load with that setting. The result of require("luasnip").get_snippet_filetypes() is
{ "cpp", "all", "_" }
|
2025-04-01T04:10:35.934047
| 2019-11-06T20:10:51
|
518706825
|
{
"authors": [
"jsoenyun",
"justinlittman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14751",
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/issues/1697"
}
|
gharchive/issue
|
Permit copying data created in one template in Sinopia into another template
Describe the feature you'd like
Develop the Copy feature so that data created in one profile or template can be ported into another profile or template
Give an example
If dealing with an online instance of a print resource that has already been described, it would be useful to copy the data for the print instance into a different template tailored specifically for an online instance.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
@michelleif has mentioned a workaround where the RDF could be copied and used for a new description after deleting the name of the original template, causing the system to prompt for template name. Manually re-entering the data would be another but more laborious option.
Options include:
Asking the user when copying which RT to use.
Allowing the user to change the RT for the currently open RT.
I personally like the second option, as it is more generally usable.
@justinlittman agreed, sounds like much less hassle to change than to have to select from the list
@astridu Design needed. This would be a button / link on the editor tab that would open a modal to allow the user to change the resource template. (I can imagine including as another button in the Save, Copy, Preview row or perhaps next to the resource template name.)
|
2025-04-01T04:10:35.937216
| 2019-11-08T18:06:26
|
520156720
|
{
"authors": [
"jgreben",
"michelleif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14752",
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/issues/1716"
}
|
gharchive/issue
|
inputLookupSinopia has "undefinedrepository" in URL
for a template that has a field that's a lookup to Sinopia (example on DEV: template = rt1:sinopia:lookup)
after you do a lookup and select a resource from Sinopia, the URI that is saved begins with undefinedrepository instead of the expected trellis address
AFAICT this is happening because of how the search results are being returned from Elasticsearch with a truncated uri:
{
'totalHits': 1,
'results': [
{
'uri': 'repository/stanford/0ebfb83d-ab52-492f-9cd1-05eed8a1d4e6',
'label': 'Blue',
'created': '2019-11-14T18: 18: 48.224Z',
'modified': '2019-11-14T18: 18: 48.224Z',
'type': [
'http: //id.loc.gov/ontologies/bibframe/Work'
]
}
],
'authLabel': 'Sinopia resources',
'authURI': 'urn: ld4p: sinopia',
'label': 'Sinopia resources',
'id': 'urn: ld4p: sinopia'
}
Looks like it is getting the _id and not the _source{"uri"}:
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "sinopia_resources",
"_type": "sinopia",
"_id": "repository/stanford/0ebfb83d-ab52-492f-9cd1-05eed8a1d4e6",
"_score": 1,
"_source": {
"uri": "http://platform:8080/repository/stanford/0ebfb83d-ab52-492f-9cd1-05eed8a1d4e6",
"title": [
"Blue"
],
"label": "Blue",
"text": [
"Something",
"Blue"
],
"created": "2019-11-14T18:18:48.224Z",
"modified": "2019-11-14T18:18:48.224Z",
"type": [
"http://id.loc.gov/ontologies/bibframe/Work"
]
}
}
]
}
}
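The fix implied by the diagnosis above, sketched in Python (hypothetical variable and function names) against a trimmed copy of the Elasticsearch response: take the absolute URI from the hit's `_source`, not the relative `_id`.

```python
# Trimmed version of the Elasticsearch response shown in the issue.
es_response = {
    "hits": {
        "hits": [
            {
                "_id": "repository/stanford/0ebfb83d-ab52-492f-9cd1-05eed8a1d4e6",
                "_source": {
                    "uri": "http://platform:8080/repository/stanford/0ebfb83d-ab52-492f-9cd1-05eed8a1d4e6",
                    "label": "Blue",
                },
            }
        ]
    }
}

def to_result(hit: dict) -> dict:
    source = hit["_source"]
    # The bug: using hit["_id"] here yields a relative path, which the editor
    # then prefixes into "undefinedrepository/...". _source["uri"] is absolute.
    return {"uri": source["uri"], "label": source["label"]}

results = [to_result(h) for h in es_response["hits"]["hits"]]
assert results[0]["uri"].startswith("http://")
```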
|
2025-04-01T04:10:35.938768
| 2021-11-24T17:19:16
|
1062684300
|
{
"authors": [
"justinlittman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14753",
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/pull/3373"
}
|
gharchive/pull-request
|
Display relationships in search results.
closes #3269
Why was this change made?
Easier discovery of BF resources.
How was this change tested?
Which documentation and/or configurations were updated?
|
2025-04-01T04:10:35.945850
| 2016-04-10T19:43:10
|
147256779
|
{
"authors": [
"BrandonKMLee",
"OliverDeBrisbane",
"jmillerdesign",
"katiemcculloch"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14754",
"repo": "LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words",
"url": "https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/pull/79"
}
|
gharchive/pull-request
|
All the dirty words from Google's "wdyl" project
I also sorted the file alphabetically, which moved a few existing words.
http://fffff.at/googles-official-list-of-bad-words/
@jmillerdesign I imported this to our DoltHub bad-words repository, and included your changes. DoltHub is GitHub for Dolt. You can see your additions in this commit, which I submitted in a PR.
Damn is not a bad word
Some Christians may or may not disagree.
|
2025-04-01T04:10:35.947652
| 2021-01-27T07:41:37
|
794856190
|
{
"authors": [
"lfpratik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14755",
"repo": "LF-Engineering/da-ds",
"url": "https://github.com/LF-Engineering/da-ds/pull/20"
}
|
gharchive/pull-request
|
Bugfix/Da 3389
Fix repo clone issues
Configured the logger
Errors and debug logs will print on the terminal and will write logs into the "gitops.log" file
Script abnormal termination case handle.
@lukaszgryglicki, can you take a look.
|
2025-04-01T04:10:35.971337
| 2019-04-08T13:31:53
|
430453578
|
{
"authors": [
"adriansuhov",
"chvalean",
"stefanlupsa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14756",
"repo": "LIS/LISAv2",
"url": "https://github.com/LIS/LISAv2/pull/98"
}
|
gharchive/pull-request
|
SR-IOV, NVMe: Rescind PCI device test
Create utils function for rescind / hotplug PCI device
Remove old SR-IOV Hotplug test script and merge the test into SRIOV-VerifyVF-Connection.sh
Add test case for Rescind PCI device for NVMe
[azure] [LISAv2 Test Results Summary]
[azure] Test Run On : 04/08/2019 12:48:58
[azure] ARM Image Under Test : canonical : ubuntuserver : 18.04-lts : latest
[azure] Total Test Cases : 2 (2 Passed, 0 Failed, 0 Aborted, 0 Skipped)
[azure] Total Time (dd:hh:mm) : 0:0:18
[azure]
[azure] ID TestArea TestCaseName TestResult TestDuration(in minutes)
[azure] -------------------------------------------------------------------------------------------------------------------------------------------
[azure] 1 SRIOV SRIOV-PCI-RESCIND PASS 5.59
[azure] 2 NVME NVME-PCI-RESCIND PASS 1.54
[azure] [LISAv2 Test Results Summary]
[azure] Test Run On : 04/08/2019 13:41:54
[azure] ARM Image Under Test : RedHat : RHEL : 7.5 : latest
[azure] Total Test Cases : 2 (2 Passed, 0 Failed, 0 Aborted, 0 Skipped)
[azure] Total Time (dd:hh:mm) : 0:0:22
[azure]
[azure] ID TestArea TestCaseName TestResult TestDuration(in minutes)
[azure] -------------------------------------------------------------------------------------------------------------------------------------------
[azure] 1 SRIOV SRIOV-PCI-RESCIND PASS 8.38
[azure] 2 NVME NVME-PCI-RESCIND PASS 2.8
I do not really like keeping the SRIOV-VerifyVF-Connection.sh name, as it now does more than that.
Maybe also rename the file to SRIOV-VerifyRescindVF-Connection.sh
this is a generic TC, and let's not change script names if we don't refactor, this being just an addition
will approve once the comments are addressed
Addressed comments:
amend function description
use case statement for device type
added retry pattern for bus scanning and GPU support
removed exits from RescindPCI function opting for returns for consistency
[azure] [LISAv2 Test Results Summary]
[azure] Test Run On : 04/09/2019 12:42:54
[azure] ARM Image Under Test : canonical : ubuntuserver : 18.04-lts : latest
[azure] Total Test Cases : 2 (2 Passed, 0 Failed, 0 Aborted, 0 Skipped)
[azure] Total Time (dd:hh:mm) : 0:0:18
[azure]
[azure] ID TestArea TestCaseName TestResult TestDuration(in minutes)
[azure] -------------------------------------------------------------------------------------------------------------------------------------------
[azure] 1 SRIOV SRIOV-PCI-RESCIND PASS 5.65
[azure] 2 NVME NVME-PCI-RESCIND PASS 1.5
|
2025-04-01T04:10:36.037200
| 2021-07-21T21:21:45
|
950120438
|
{
"authors": [
"chrisgarrity",
"deepankarmalhan"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14757",
"repo": "LLK/scratchjr",
"url": "https://github.com/LLK/scratchjr/issues/476"
}
|
gharchive/issue
|
Visual feedback for add sprite button is flaky
Expected Behavior
When you tap the add character button in the editor, there should be a brief visual cue (grey highlight) that the button has been tapped before the library appears.
Actual Behavior
iOS: sometimes (first time perhaps?) the grey overlay remains visible on the library screen
Android: grey overlay never appears.
Steps to Reproduce
Android: tap the add character button - you hear the tap sound, but there is no visual cue
iOS: difficult to reproduce - may be a timing issue?
Added as a project requirement (accessibility requirement), closing this out.
|
2025-04-01T04:10:36.045950
| 2023-05-31T21:58:04
|
1735088358
|
{
"authors": [
"srwopschall"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14758",
"repo": "LLNL/Tribol",
"url": "https://github.com/LLNL/Tribol/pull/8"
}
|
gharchive/pull-request
|
Feature/srwopschall/testmesh_tets
Adds ability to create tet mesh decomposition of hex mesh on the TestMesh class.
Adds ability to convert tet "test mesh" to mfem tet mesh.
Added tests for the tet TestMesh construction and the mfem tet mesh construction.
Adding finite element support for tet meshes and updating examples, tests, and methods appropriately will be in follow on PR.
@ebchin - I fixed the tribol_coupling_scheme test and all checks have passed. Memory management was ok. I had to update some element type modifications I had made elsewhere.
|
2025-04-01T04:10:36.053875
| 2021-07-20T02:33:49
|
948191836
|
{
"authors": [
"cyrush",
"white238"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14759",
"repo": "LLNL/axom",
"url": "https://github.com/LLNL/axom/pull/614"
}
|
gharchive/pull-request
|
fix exported conduit dependency
I missed one of the two directories that need to be moved in parallel. This affects downstream users.
There are some files left over that can leave you in a bad state.
@white238 thanks for hunting this down!
This solves the mystery that Kenny was having.
To summarize:
There are a few files hanging around in the old place.
Axom, when building, was looking in the right place, but the imported targets were still looking in the old place.
Since the old files were hanging around, it found a partial setup.
@kennyweiss this fix may also solve the Threads::Threads issue.
I'll make the fix in conduit to remove the old files.
|
2025-04-01T04:10:36.056246
| 2018-11-09T19:45:57
|
379297689
|
{
"authors": [
"rrsettgast",
"white238"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14760",
"repo": "LLNL/blt",
"url": "https://github.com/LLNL/blt/issues/201"
}
|
gharchive/issue
|
cuda dependency does not propagate into blt_add_executable
If you have a lib that depends on cuda:
blt_add_library( NAME mylib DEPENDS_ON cuda )
and a executable that depends on mylib:
blt_add_executable( NAME myex DEPENDS_ON mylib)
then myex doesn't inherit the cuda dependency from mylib...and bad things happen.
This is because the code here: https://github.com/LLNL/blt/blob/062ee3ff58e23713b9cd4d2daa2f9b5fca02d3bc/cmake/BLTMacros.cmake#L604
doesn't expand the dependencies. Maybe the solution is to have a macro to expand the dependencies and use it prior to this check?
This should be easy to fix because we already have the logic to expand DEPENDS_ON in blt_setup_target() it could be pulled out and used here.
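The expansion being proposed can be modeled as a simple transitive closure over the DEPENDS_ON lists. This is a toy Python sketch (not BLT's actual CMake code) of what blt_setup_target() already does and what the check above would need: with it, `myex` inherits `cuda` through `mylib`.

```python
def expand_depends(target: str, depends_on: dict) -> set:
    """Transitively expand a target's DEPENDS_ON list (toy model of BLT)."""
    expanded, stack = set(), [target]
    while stack:
        for dep in depends_on.get(stack.pop(), []):
            if dep not in expanded:
                expanded.add(dep)
                stack.append(dep)
    return expanded

# mylib DEPENDS_ON cuda; myex DEPENDS_ON mylib
deps = {"myex": ["mylib"], "mylib": ["cuda"]}
assert expand_depends("myex", deps) == {"mylib", "cuda"}
```

Checking the expanded set (rather than the direct list) for `cuda` is what would make the executable pick up the CUDA-specific link flags.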
|
2025-04-01T04:10:36.096909
| 2022-08-31T11:33:06
|
1357202981
|
{
"authors": [
"Solovertical",
"rajarsheechatterjee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14761",
"repo": "LNReader/lnreader",
"url": "https://github.com/LNReader/lnreader/issues/418"
}
|
gharchive/issue
|
Fix Release date
Steps to reproduce
On opening Ltnovel.com
Expected behavior
It should show release date frequently
Actual behavior
It's showing the date when the chapter was released
LNReader version
1.1.12
Android version
10
Device
Mi
Other details
Ltnovel.com is a good extension; please fix this. There are very few extensions where I can get this great novel.
Acknowledgements
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
[X] I have written a short but informative title.
[X] If this is an issue with an source, I should be opening an issue in the sources repository.
[X] I have updated the app to version 1.1.12.
[X] I will fill out all of the requested information in this form.
If this is an issue with an source, I should be opening an issue in the sources repository.
|
2025-04-01T04:10:36.110032
| 2024-07-04T09:14:45
|
2390360907
|
{
"authors": [
"arthurmloureiro"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14762",
"repo": "LSSTDESC/Smokescreen",
"url": "https://github.com/LSSTDESC/Smokescreen/issues/44"
}
|
gharchive/issue
|
Use Firecrown's RequiredParameters instead of requesting the user to provide these values.
More information here: https://github.com/LSSTDESC/firecrown/issues/383#issuecomment-2148006065
We may be able to remove the required dict of firecrown systematics by using this new feature.
This is in the firecrown master branch but not yet released!
|
2025-04-01T04:10:36.119182
| 2023-04-18T20:05:06
|
1673758405
|
{
"authors": [
"LTH14",
"gshaikov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14763",
"repo": "LTH14/mage",
"url": "https://github.com/LTH14/mage/issues/26"
}
|
gharchive/issue
|
Forcing FP32 in Attention
Hi, I wonder why you manually disable autocast to fp16 in your Attention implementation:
https://github.com/LTH14/mage/blob/1becb14475354fc40df35ba7c7c7bf418a137cd3/models_mage.py#L34-L35
Did you observe any instabilities when you were training with fp16? I am not aware of this issue at the scale of ViT-B and ViT-L, at least in MAE.
Thanks.
Hi -- I observed NaN when training ViT-L without such modification. ViT-B works fine.
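The NaNs at ViT-L scale are consistent with fp16's narrow range and precision: attention logits past fp16's largest finite value (65504) overflow, and the softmax then produces NaN. A stdlib-only illustration (not the model code) of those half-precision limits, round-tripping values through struct's `'e'` (IEEE-754 half) format:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision (fp16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# fp16 keeps only 11 significand bits, so integers above 2048 start rounding:
assert to_fp16(2048.0) == 2048.0
assert to_fp16(2049.0) == 2048.0          # precision loss
# and its largest finite value is 65504, so large logits cannot be represented:
assert to_fp16(65504.0) == 65504.0
try:
    to_fp16(70000.0)                      # > fp16 max
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed
```

Forcing the attention block back to fp32 sidesteps both the rounding and the overflow, at some memory/speed cost.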
|
2025-04-01T04:10:36.121872
| 2023-07-26T03:16:15
|
1821502770
|
{
"authors": [
"Angelina1996",
"LTH14"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14764",
"repo": "LTH14/targeted-supcon",
"url": "https://github.com/LTH14/targeted-supcon/issues/11"
}
|
gharchive/issue
|
Code Execution for Cifar and INAT datasets
Dear author,
Thanks for such great work. I am just wondering if you could guide how to execute the code for Cifar and INAT datasets.
Cheers
Hi, thanks for your interest! Please refer to the supplementary material here -- I submitted it with the CVPR camera ready, but I don't know why it does not appear online. For inat you can simply change the imagenet-LT to inat.
Our implementation in Cifar is not based on the moco and imagenet framework, so it could be hard to directly adapt the released code to Cifar. Our implementation in Cifar is based on this repo: https://github.com/HobbitLong/SupContrast. You need to replace the supcon losses file in their code with the targeted supcon loss (uncleaned version). Hope it can make your reproduction easier.
|
2025-04-01T04:10:36.130900
| 2016-09-07T18:38:33
|
175575635
|
{
"authors": [
"Spasi",
"kenzierocks"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14765",
"repo": "LWJGL/lwjgl3",
"url": "https://github.com/LWJGL/lwjgl3/issues/232"
}
|
gharchive/issue
|
APITrace doesn't attach to LWJGL applications
Application: https://github.com/kenzierocks/APITraceTest
Run ./gradlew installDist to prepare the application. Run the app like so: <apitrace> trace -v -a gl ./build/install/APITraceTest/bin/APITraceTest.
You'll notice that apitrace will not attach to the program or generate a trace. I'm not sure why, but I suspect LWJGL is not loading the apitrace wrapper OpenGL library. However, I don't know how to make that happen.
I've run apitrace on a regular C OpenGL application just fine.
You can make LWJGL load the apitrace wrapper using the Configuration.OPENGL_LIBRARY_NAME option (or the -Dorg.lwjgl.opengl.libname system property).
The problem is GLFW though, it also loads the OpenGL library and there may be trouble if LWJGL uses another one. So, depending on your OS, you'll also have to use the workarounds described here, under Tracing Manually.
Ok, that works well. I'm currently trying to track down a mysterious INVALID_OPERATION that only appears if I check errors more often, which is very curious.
Have you tried using a debug context? Use glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE) and call GLUtil.setupDebugMessageCallback() after GL.createCapabilities().
It looks like there's no support for that on OSX: My OpenGL version only goes up to 4.1 and none of the extensions are available.
|
2025-04-01T04:10:36.133645
| 2017-05-06T13:06:21
|
226769152
|
{
"authors": [
"SushiFu",
"johnoppenheimer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14766",
"repo": "LaBaleineFr/BFMBaleine",
"url": "https://github.com/LaBaleineFr/BFMBaleine/issues/3"
}
|
gharchive/issue
|
Store !help that return nothing
Store the requested word somewhere when someone sends a !help word that returns nothing,
so that admins can see what people want to know.
Transfer to https://github.com/LaBaleineFr/ToolsBaleine
|
2025-04-01T04:10:36.136775
| 2016-11-03T23:28:20
|
187217094
|
{
"authors": [
"agude"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14767",
"repo": "Lab41/nbserver",
"url": "https://github.com/Lab41/nbserver/pull/10"
}
|
gharchive/pull-request
|
Add pulling of Docker images to Travis
This will pull images before trying to build them, speeding up build time by not having to rebuild older layers.
HOLD
It looks like it has limited success in making the build faster:
Old: https://travis-ci.org/Lab41/nbserver/builds/173114625 (20:20)
New: https://travis-ci.org/Lab41/nbserver/builds/173119584 (19:01)
We need a solution to CI and to pushing containers after merge, but this isn't it.
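The pull-before-build pattern under test here is usually a small .travis.yml tweak along these lines (image name hypothetical; on newer Docker, pulled layers are only reused by the build if `--cache-from` is also passed, which may explain the limited speedup):

```yaml
before_install:
  - docker pull lab41/nbserver:latest || true
script:
  - docker build --cache-from lab41/nbserver:latest -t lab41/nbserver:latest .
```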
|
2025-04-01T04:10:36.150361
| 2024-12-03T18:55:54
|
2715778196
|
{
"authors": [
"Alonsocom",
"awesomekling"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14768",
"repo": "LadybirdBrowser/ladybird",
"url": "https://github.com/LadybirdBrowser/ladybird/issues/2732"
}
|
gharchive/issue
|
[CSS Grid] Infinite recursion on https://shopify.com
Simplified reduction:
<!doctype html><style>
body {
display: grid;
max-height: fit-content;
}
</style>hello
https://github.com/LadybirdBrowser/ladybird/issues/2732#issue-2715778196
<!doctype html>hello
|
2025-04-01T04:10:36.167165
| 2022-12-09T10:47:21
|
1486529210
|
{
"authors": [
"Lailloken",
"Oldmate88",
"killersquad235",
"kung69"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:14769",
"repo": "Lailloken/Lailloken-UI",
"url": "https://github.com/Lailloken/Lailloken-UI/issues/143"
}
|
gharchive/issue
|
Error at line 21
Not sure if I am doing something wrong here?
It's possible the AutoHotkey install on your system is outdated.
I got the same Issue.
Downloaded ahk fresh and still got the Error.
Got the same Issue
Problem solving: simply unzip the file.
Got the same problem. AHK installation was updated today, error message stays the same
killersquad235 mentioned that this error occurs when running the script from the zip file, and I can confirm it. So make sure to unzip the script folder correctly and run it afterwards.
It is unzipped. I tried running the release before the latest and got another error on another line. Then I overwrote it with the newest one again; now I get
The script was also run in admin mode, no change. Is the PoE client perhaps required to be running? I have the client running, but I'm in the queue for league start; may that cause the issue? I tried my other AHK scripts and they run without problems.
No, the game client is not required. This new error message is definitely because of an outdated AutoHotkey installation; there have been many cases in the past. Recently, someone also thought they had updated their AHK installation, but then the script was opened with an older version all the time. Maybe you have the same issue. I don't know how that happens, but maybe check if there are two installations for some reason. Or manually uninstall AHK, delete the folder, and install it fresh.
There was only one properly installed, but the script still used another one. I searched for and deleted all AHK installations manually and reinstalled the newest from their website. It works now, thank you very much!
|