119833729 | Adding KeyError to accepted exceptions in conftest
gwcs imports pyasdf.tests.helpers, which imports from its ..conftest.
However, conftest.py exists in both packages, and as they both use the astropy package-template, both have a PYTEST_HEADER_MODULES variable. Neither package depends on h5py, and trying to delete it from the PYTEST_HEADER_MODULES dict twice raises a KeyError.
This PR is a workaround to solve this.
Related package template PR: https://github.com/astropy/package-template/pull/143
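The failure mode described above, deleting a dict key that is already gone, can be sketched in plain Python (the dict contents here are made up for illustration):

```python
# Deleting the same key twice raises KeyError; contents are illustrative.
PYTEST_HEADER_MODULES = {"h5py": "h5py", "numpy": "numpy"}

del PYTEST_HEADER_MODULES["h5py"]      # first conftest removes the key: fine

try:
    del PYTEST_HEADER_MODULES["h5py"]  # second conftest removes it again
except KeyError:
    handled = True                     # this is the error the PR works around

# A removal that tolerates running twice:
PYTEST_HEADER_MODULES.pop("h5py", None)
```

Using dict.pop with a default (or a try/except KeyError) makes the removal safe to run from both conftest files.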
@nden - This fails with the known conda path issue. Do you want to use ci-helpers here, too or should I just update the path?
:+1: for using ci-helpers from me. @embray?
Makes sense--this is just a dumb oversight probably carried over from astropy.
pyasdf should also switch over to the new system. That should be done via package-template though, right?
Just so I understand, is this waiting for astropy/package-template#143 to be merged?
I think so.
@nden, @embray - https://github.com/astropy/package-template/pull/143 is the same as this one; as I understood @embray's comments above, it's more that we're waiting for https://github.com/astropy/package-template/pull/140 for the ci-helpers stuff in the template. (Alternatively, https://github.com/astropy/package-template/pull/143 could be pulled too, and this one closed without merging.)
Please let me know if I misunderstood, and you are actually waiting for me to put the ci-helpers in with this PR.
Both of those PRs for the package-template are ready to be merged when a maintainer is around, I think.
@bsipocz I see now. In this case I think it's best to include in this PR everything necessary to run pyasdf tests now. We can update the templates when they are ready.
Honestly, it would be fine to just merge this PR--it doesn't impact the actual test results in any relevant way. If no one disagrees I'll merge.
@embray - Either way. I'm travelling tomorrow, so I can put in the ci stuff then given that this is still unmerged.
OK, done now. Just a note that you may want to revise the travis matrix: atm the tests only run on 2.7, and the rest of the versions have only the egg-info checks.
Another change I made is that appveyor now installs gwcs as a dependency as well (it seemed that it wasn't installed before).
Strange about the Windows build, but it seems like it might be a path separator issue in pyasdf's setup. I'll have a look at it.
Or is it not pulling in the asdf-standard submodule when it builds pyasdf?
@nden You're right, that's all it is. The build recipe needs to make sure to run git submodule update --init asdf-standard.
@embray I think this should be merged as is. appveyor can be fixed after that.
@nden Sure, I'll do that. I think maybe @bsipocz is on travel now.
| gharchive/pull-request | 2015-12-01T23:36:27 | 2025-04-01T06:45:50.420854 | {
"authors": [
"bsipocz",
"embray",
"nden"
],
"repo": "spacetelescope/pyasdf",
"url": "https://github.com/spacetelescope/pyasdf/pull/180",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
108118469 | First try to build another user's project threw this
I did a fresh install with the most recent Windows ParticleDevSetup.exe
Fresh install
sign in
select Photon
hit build
Atom Version: 1.0.15
System: ANDYTPT10
Thrown From: spark-dev package, v0.0.26
Stack Trace
Uncaught TypeError: Cannot read property '0' of undefined
At C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\decorators\unhandledRejection.js:80
TypeError: Cannot read property '0' of undefined
at C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\lib\spark-dev.js:761:58
at tryCatchReject (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\makePromise.js:845:30)
at runContinuation1 (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\makePromise.js:804:4)
at Fulfilled.when (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\makePromise.js:592:4)
at Pending.run (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\makePromise.js:483:13)
at Scheduler._drain (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\Scheduler.js:62:19)
at Scheduler.drain (C:\Users\Andy\AppData\Local\particle\app-1.0.15\resources\app.asar\node_modules\spark-dev\node_modules\when\lib\Scheduler.js:27:9)
at doNTCallback0 (node.js:416:9)
at process._tickCallback (node.js:345:13)
Commands
-0:22.1.0 spark-dev:append-menu (atom-workspace.workspace.scrollbars-visible-always)
-0:21.2.0 spark-dev:update-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:21.2.0 spark-dev:append-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:21 spark-dev:update-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:21 spark-dev:append-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:21 spark-dev:update-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:21 spark-dev:append-menu (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
-0:12.9.0 spark-dev:compile-cloud (atom-text-editor.editor.is-focused)
-0:11.7.0 spark-dev:update-compile-status (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui)
Config
{}
Installed Packages
# User
No installed packages
# Dev
No dev packages
It's a duplicate of #105. Try opening a directory instead of a single file.
I had opened a directory and just selected one of the files in there to edit it.
| gharchive/issue | 2015-09-24T12:11:12 | 2025-04-01T06:45:50.502869 | {
"authors": [
"ScruffR",
"suda"
],
"repo": "spark/spark-dev",
"url": "https://github.com/spark/spark-dev/issues/115",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
699021872 | deal with deadlock issue
A deadlock issue comes up with high probability. That's because the recv channel is a buffered channel with only 5 buffers, and the channel fills up before the consumer gets started. The method p.recvICMP() keeps sending packets to the recv channel, and can sometimes fill the channel in a very short time. As you know, a goroutine blocks when it tries to send to a full channel. This issue always happens when you try to ping a large number of IPs.
I have noticed that some people have realized this issue. Some of them try to increase the channel's buffer size, but that can't guarantee the channel will never fill up again. Some of them add a short pause after sending a packet to the recv channel, but that interferes with the interval parameter.
Therefore, the way to solve this problem is to start consuming the recv channel before packets arrive. I create a goroutine before p.recvICMP() runs, and it guarantees that the recv channel will not block again.
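The core hazard, a sender blocking on a full bounded buffer, can be illustrated with Python's bounded queue as an analogy (this is illustrative Python, not the Go code from the PR):

```python
# A bounded queue behaves like a buffered channel: once its 5 slots are
# full, another blocking put would hang, which mirrors the deadlock above.
import queue

recv = queue.Queue(maxsize=5)   # analogous to a channel with 5 buffers

for i in range(5):
    recv.put_nowait(i)          # fills all 5 buffer slots

try:
    recv.put_nowait(5)          # sixth send: the buffer is full
except queue.Full:
    sender_would_block = True   # a plain recv.put(5) here would block forever
```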
Hope it helps.
I have a simpler solution. You must use select when sending data to the recv channel here.
It will look like this:
select {
case recv <- &packet{bytes: bytes, nbytes: n, ttl: ttl}:
case <-p.done:
}
Hi @brzstizc6. #85 actually solves this same issue much more succinctly so I think we'd prefer to merge that but thank you.
| gharchive/pull-request | 2020-09-11T08:30:01 | 2025-04-01T06:45:50.538704 | {
"authors": [
"CHTJonas",
"brzstizc6",
"dbzyuzin"
],
"repo": "sparrc/go-ping",
"url": "https://github.com/sparrc/go-ping/pull/95",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1843750950 | Getting an "No card present" error
Hey guys!
I'm using Sparrow with a Start9 node. I can ping my local RPC address without any issues (it resolves to an IPv6 and IPv4), but when I try to connect in Sparrow I'm getting this error:
ERROR [Thread-69] c.s.s.i.c.CardTransport [null:-1] No card present
(it simply creates an ongoing Thread-## here – always has a different thread number)
Any ideas what I'm doing wrong?
Sparrow version: 1.7.8
Thanks
The No card present message is related to your smart card reader, and is entirely unrelated to your connection problem.
Pinging the address does not mean that the service you are connecting to is up and running. If you are connecting to Bitcoin Core, make sure you have enabled the RPC interface (server=1 in the config file, see docs for details). If you are connecting to an Electrum server, make sure it is running, and has finished indexing. You may want to ask Start9 for additional guidance.
| gharchive/issue | 2023-08-09T17:56:01 | 2025-04-01T06:45:50.547004 | {
"authors": [
"craigraw",
"p-bateman"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/1058",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
946868535 | Transaction amount field missing- Windows
Using the latest Windows version of Sparrow. I noticed, when trying to send a transaction out from the wallet, that the transaction amount field was missing. The only way to send out a transaction was to select and send the entire UTXO.
I can't recreate this. Can you send a screenshot?
I'm sorry I cannot provide a screenshot. I decided to switch to Linux. The wallet works perfectly so far on Linux.
Ok - I'm going to close this issue for now then.
| gharchive/issue | 2021-07-17T17:47:06 | 2025-04-01T06:45:50.550902 | {
"authors": [
"AtomicAkorn",
"craigraw"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/158",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1735450557 | Wrong labeling when sending a transaction to multiple wallets
When I create a transaction with two outputs, the first to Wallet A labeled as "Tx A" and the rest to Wallet B labeled as "Tx B" leaving no change, the transaction in Wallet B will have the label "Tx A". I would have expected the label to be "Tx B" and not "Tx A".
When I expand this transaction the received output is correctly labeled as "Tx B (received)" though.
This is intentional. The label for the transaction is applied consistently throughout the wallets in Sparrow (it is, after all, the same transaction). The UTXOs receive individual labels as you have noted.
| gharchive/issue | 2023-06-01T04:55:45 | 2025-04-01T06:45:50.552361 | {
"authors": [
"craigraw",
"dluvian"
],
"repo": "sparrowwallet/sparrow",
"url": "https://github.com/sparrowwallet/sparrow/issues/979",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
207720899 | Implement Tactical Graphics
Requesting tactical graphics (partially/commonly used), such as 2525C TACGRP.C2GM.SPL.LNE.AMB (ambush) etc.
See #32; this is on the road map, but I can't say when. At the moment the plan is to finish tactical points for app6b and release milsymbol 1.0.0, and the plan for that is before the end of March if everything goes according to plan.
If you have more complete list of tactical graphics you would like, please add issues for them at:
https://github.com/spatialillusions/milgraphics
(The plan is to start with Ambush and different types of axis of advance arrows.)
| gharchive/issue | 2017-02-15T06:40:42 | 2025-04-01T06:45:50.554429 | {
"authors": [
"DragonbornSR",
"spatialillusions"
],
"repo": "spatialillusions/milsymbol",
"url": "https://github.com/spatialillusions/milsymbol/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
524901510 | Fix issue #97
Fixes #97
Thank you! Could you also fix the tests?
@langeuh
@freekmurze
This does not fix the issue but hides it, because all pngquant does is skip the optimisation; there's no output image generated.
[root@43c319197150 data]# pngquant --output /root/data/pngquant-output.png --force --quality=65-80 -v -- /root/data/pngquant-input.png
/root/data/pngquant-input.png:
read 2019KB file
used embedded ICC profile to transform image to sRGB colorspace
made histogram...44630 colors found
selecting colors...8%
selecting colors...16%
selecting colors...25%
selecting colors...33%
selecting colors...41%
selecting colors...91%
selecting colors...100%
moving colormap towards local minimum
image degradation MSE=558.901 (Q=0) exceeded limit of 11.334 (65)
Skipped 1 file out of a total of 1 file.
[root@43c319197150 data]# pngquant --output /root/data/pngquant-output.png --force -v -- /root/data/pngquant-input.png
/root/data/pngquant-input.png:
read 2019KB file
used embedded ICC profile to transform image to sRGB colorspace
made histogram...44630 colors found
selecting colors...8%
selecting colors...16%
selecting colors...25%
selecting colors...33%
selecting colors...41%
selecting colors...91%
selecting colors...100%
moving colormap towards local minimum
eliminated opaque tRNS-chunk entries...0 entries transparent
mapped image to new colors...MSE=553.341 (Q=0)
writing 2-color image as pngquant-output.png
Quantized 1 image.
| gharchive/pull-request | 2019-11-19T10:02:51 | 2025-04-01T06:45:50.557452 | {
"authors": [
"freekmurze",
"joejordanbrown",
"langeuh"
],
"repo": "spatie/image-optimizer",
"url": "https://github.com/spatie/image-optimizer/pull/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
695503290 | Laravel 8.x Compatibility
This is an automated pull request from Shift to update your package code and dependencies to be compatible with Laravel 8.x.
Before merging, you need to:
Checkout the l8-compatibility branch
Review all comments for additional changes
Thoroughly test your package
If you do find an issue, please report it by commenting on this PR to help improve future automation.
:alembic: Using this package? If you would like to help test these changes or believe them to be compatible, you may update your project to reference this branch.
To do so, temporarily add Shift's fork to the repositories property of your composer.json:
{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/laravel-shift/laravel-tail.git"
        }
    ]
}
Then update your dependency constraint to reference this branch:
{
    "require": {
        "spatie/laravel-tail": "dev-l8-compatibility"
    }
}
Finally, run: composer update
:warning: Shift detected GitHub Actions which run jobs using a version matrix. Shift attempted to update your configuration for Laravel 8. However, you should review these changes to ensure the desired combination of versions are built for your package.
@freekmurze Any idea when this will be merged?
Laravel 8 launch is today.
Thanks!
| gharchive/pull-request | 2020-09-08T02:51:38 | 2025-04-01T06:45:50.571799 | {
"authors": [
"Swop",
"laravel-shift"
],
"repo": "spatie/laravel-tail",
"url": "https://github.com/spatie/laravel-tail/pull/62",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
298118838 | Thank you
This is not an issue or bug, I just wanted to say thanks for this package. I started to write my own and I soon realized that there are a bunch of edge cases on this sort of thing.
I used your package and it just worked. Even with my own css.
Thanks 👍
Awesome, you're welcome 😄
| gharchive/issue | 2018-02-18T21:14:39 | 2025-04-01T06:45:50.577707 | {
"authors": [
"george-silva",
"sebastiandedeyne"
],
"repo": "spatie/vue-tabs-component",
"url": "https://github.com/spatie/vue-tabs-component/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1839529948 | MPI nodes don't handle scalar args correctly
Describe the bug
The dace program fails to compile if the integer args of MPI nodes (dst and tag) are anything other than symbols, symbol expressions and numbers.
To Reproduce
rank = dc.symbol('rank', dtype=dc.int64)

@dc.program
def func(A: dc.int32[N]):
    # dace.comm.Send(A[0], rank - 1, 0)  # Works
    dace.comm.Send(A[0], abs(rank - 1), 0)
The program fails to compile with the following error:
ValueError: Node type "Send" not supported for promotion
Same behavior in other scenarios:
# ...
a = 0
a = rank
a = A[0]
dace.comm.Send(A[0], a, 0)
Desktop (please complete the following information):
Latest DaCe master branch
Possible fix:
The code below should check whether the sdfg.arrays entry for the corresponding arg is a Scalar when given a str, and fall through to the last branch.
https://github.com/spcl/dace/blob/f4b4d01f67cb089b3ef821673e0a12405c94f9b1/dace/frontend/common/distr.py#L424-L436
@kylosus did you end up resolving this? In which PR?
| gharchive/issue | 2023-08-07T14:05:54 | 2025-04-01T06:45:50.600099 | {
"authors": [
"kylosus",
"tbennun"
],
"repo": "spcl/dace",
"url": "https://github.com/spcl/dace/issues/1348",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1779413200 | Make SDFG.name a proper property
Fixes #28
Dace is really hitting its stride
| gharchive/pull-request | 2023-06-28T17:40:23 | 2025-04-01T06:45:50.601480 | {
"authors": [
"phschaad",
"tbennun"
],
"repo": "spcl/dace",
"url": "https://github.com/spcl/dace/pull/1289",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2362781315 | Replace coingecko with coinpaprika API for market data
https://api.coinpaprika.com/v1/tickers/spr-spectre-network
added the #rank field from the main data object into the cache dictionary in get_spr_market_data (rank should be part of the main data object, not within the quotes["USD"])
This is only a temporary solution to get the explorer market data operational; it doesn't need to be merged to main.
The fix has been applied manually for now: the REST API switched to Coinpaprika and the Explorer has been updated to use the new market-data API. Let's keep it as an intermediate solution.
| gharchive/pull-request | 2024-06-19T16:42:24 | 2025-04-01T06:45:50.728412 | {
"authors": [
"0xA001113",
"x100111010"
],
"repo": "spectre-project/spectre-rest-server",
"url": "https://github.com/spectre-project/spectre-rest-server/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1028148383 | Adapting Progress feature to reactive programming
I'm dealing with a system that sends progress reports using instances of this hierarchy:
(Progress is an abstract class.)
The name of the current task is sent asynchronously, too.
So, all in all, task name and progress come from 2 different channels (reactive sequences) and I need to react accordingly.
In theory, I could receive any event (item of each sequence) in no particular order. For instance, I could receive a task name message and, after that, a bunch of Percentage progress, followed by a Done progress. Or I could get an Unknown message after an AbsoluteProgress.
I wonder how I could use Spectre.Console to deal with this scenario.
Can someone please tell me if it's possible and how?
Thanks a lot in advance!
Thanks a lot for your idea! Seeing your code, I've been able to write my own.
Should it be useful for anybody, this is my code:
internal class ProgressUpdater : IDisposable
{
    private readonly CompositeDisposable disposable = new();
    private Maybe<ProgressTask> currentTask = Maybe<ProgressTask>.None;

    public ProgressUpdater(IDeployer deployer, IExecutionContext executionContext, ProgressContext ctx)
    {
        deployer.Messages
            .Subscribe(s =>
            {
                currentTask.Execute(p =>
                {
                    p.StopTask();
                    p.Value = 1;
                });
                currentTask = Maybe.From(ctx.AddTask(s, true, 1));
                currentTask.Execute(p => { p.IsIndeterminate = true; });
            }).DisposeWith(disposable);

        executionContext.Operation.Progress.Subscribe(progress =>
        {
            switch (progress)
            {
                case Done:
                    currentTask.Execute(p =>
                    {
                        p.Value = 1;
                        p.StopTask();
                    });
                    break;
                case AbsoluteProgress<double> absoluteProgress:
                    currentTask.Execute(p =>
                    {
                        p.IsIndeterminate = true;
                        p.Value = absoluteProgress.Value;
                    });
                    break;
                case Percentage percentage:
                    currentTask.Execute(p =>
                    {
                        p.Value = percentage.Value;
                        p.IsIndeterminate = false;
                    });
                    break;
                case Unknown:
                    currentTask.Execute(p => { p.IsIndeterminate = true; });
                    break;
                default:
                    throw new ArgumentOutOfRangeException(nameof(progress));
            }
        }).DisposeWith(disposable);
    }

    public void Dispose()
    {
        disposable.Dispose();
        currentTask.Execute(p =>
        {
            p.Value = 1;
            p.StopTask();
        });
    }
}
To understand it, you need to assume that the system executes only one task at a given time; that's why when a new task arrives (messages indicate the start of a task) the previous one is stopped. Also, you don't know which kind of task you're dealing with until you receive a progress update; that's why it's indeterminate by default. The Execute method is an extension method on the Maybe<T> monad from CSharpFunctionalExtensions (which helps avoid null values).
Well, it's not the best code, as you can see, but it works.
Thanks again!
| gharchive/issue | 2021-10-16T21:06:23 | 2025-04-01T06:45:50.733777 | {
"authors": [
"SuperJMN"
],
"repo": "spectreconsole/spectre.console",
"url": "https://github.com/spectreconsole/spectre.console/issues/592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
729589668 | Update documentation
ExceptionFormat -> ExceptionFormats.
Fix link to documentation.
Add cross reference to Styles.
Render table in example code.
Add code for setting background color.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
@Martin4ndersen Thanks! Much appreciated!
| gharchive/pull-request | 2020-10-26T14:01:15 | 2025-04-01T06:45:50.737753 | {
"authors": [
"CLAassistant",
"Martin4ndersen",
"patriksvensson"
],
"repo": "spectresystems/spectre.console",
"url": "https://github.com/spectresystems/spectre.console/pull/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
117350337 | Syncing from /static fails
Static file changed, syncing
chtimes /: operation not permitted
CRITICAL: 2015/11/17 Error copying static files to /
/cc @spf13
Adding noTimes=true to my config (not as a flag, that doesn't work -- see another bug) replaces this error with another one:
Static file changed, syncing
file already exists
CRITICAL: 2015/11/17 Error copying static files to /
Did you update afero and fsync to the latest build?
No. That works.
| gharchive/issue | 2015-11-17T13:02:33 | 2025-04-01T06:45:50.819609 | {
"authors": [
"bep",
"spf13"
],
"repo": "spf13/hugo",
"url": "https://github.com/spf13/hugo/issues/1584",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
191899480 | Allow default title to have initial caps
Would like to specify the name of new content in all lower case and have the default title created in the front matter converted to initial caps.
$ hugo new post/init-caps.md
Current output:
+++
date = "2016-11-27T18:58:19-06:00"
title = "init caps"
+++
Would like to see
+++
date = "2016-11-27T18:58:42-06:00"
title = "Init Caps"
+++
Note that title casing is a style not used in all languages; also, title casing probably isn't what you really want in this case, see #989.
So, to get this right would be a challenging task.
If no one objects, I'll name the config option "newContentTitleFormat" with this fix looking for "initCaps". That way I can avoid title case completely.
This will be handled in a round-about way in #2746
| gharchive/issue | 2016-11-28T01:00:24 | 2025-04-01T06:45:50.822278 | {
"authors": [
"bep",
"mdhender"
],
"repo": "spf13/hugo",
"url": "https://github.com/spf13/hugo/issues/2743",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
217576002 | Bump NodeJS SDK version
Description
Currently the version of sphere-node-sdk in dependencies is "1.7.0". This version has no repeater and lacks a lot of other fixes. We need to update to the newest version of the SDK.
https://github.com/sphereio/sphere-category-sync/pull/54
| gharchive/issue | 2017-03-28T14:19:27 | 2025-04-01T06:45:50.840283 | {
"authors": [
"LEQADA"
],
"repo": "sphereio/sphere-category-sync",
"url": "https://github.com/sphereio/sphere-category-sync/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1716967372 | Create Interactive System for Test Selection
Develop an interactive system that guides users through the process of selecting the right test for their data. This can be in the form of a decision tree, where users are asked a series of questions about their data (e.g., number of groups, paired or independent samples, etc.). The system should provide layman's explanations for each step and why it matters.
1. Is the data categorical or numerical?
   - Categorical: go to step 2.
   - Numerical: go to step 4.
2. Are you looking at differences between groups or associations between variables?
   - Differences: use a Chi-Square Goodness of Fit Test if there's one group. If there are two or more independent groups, use a Chi-Square Test of Independence.
   - Associations: use a Chi-Square Test of Independence.
3. If the assumptions of the Chi-Square Test (expected frequency in each cell is at least 5) are not met, use Fisher's Exact Test (for 2x2 tables only).
4. Is the numerical data paired or unpaired?
   - Paired (dependent): go to step 5.
   - Unpaired (independent): go to step 6.
5. If paired:
   - If there's normality, use a Paired T-Test.
   - If there's no normality, use a Wilcoxon Signed-Rank Test.
6. If unpaired, how many groups are being compared?
   - Two groups:
     - If there's normality and homogeneity of variance, use an Independent T-Test.
     - If there's no normality or no homogeneity of variance, use a Mann-Whitney U Test.
   - More than two groups:
     - If there's normality and homogeneity of variance, use a One-Way ANOVA (for one factor) or Two-Way ANOVA (for two factors).
     - If there's no normality or no homogeneity of variance, use a Kruskal-Wallis Test (for one factor). There's no non-parametric equivalent in scipy for Two-Way ANOVA.
7. For regression (prediction) purposes, use Simple or Multiple Linear Regression. If assumptions of linearity, independence, homoscedasticity, and normality aren't met, consider transformations or non-linear regression.
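For reference, several branches of the tree above map directly onto scipy.stats calls; the sample data below is made up purely to exercise the functions:

```python
# Minimal scipy.stats calls for a few branches of the decision tree above;
# all data values are illustrative only.
from scipy import stats

a = [2.1, 2.5, 2.3, 2.8, 2.6]
b = [1.9, 2.0, 2.2, 2.1, 1.8]

# Step 6, two independent groups: parametric and non-parametric choices.
t_stat, t_p = stats.ttest_ind(a, b)       # Independent T-Test
u_stat, u_p = stats.mannwhitneyu(a, b)    # Mann-Whitney U Test

# Step 5, paired measurements.
before = [10.0, 12.0, 11.5, 13.0, 9.8, 11.2]
after = [10.8, 12.6, 11.9, 13.4, 10.1, 11.9]
pt_stat, pt_p = stats.ttest_rel(before, after)  # Paired T-Test
w_stat, w_p = stats.wilcoxon(before, after)     # Wilcoxon Signed-Rank

# Step 3, a 2x2 table with small expected counts.
odds, f_p = stats.fisher_exact([[3, 1], [1, 3]])  # Fisher's Exact Test
```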
| gharchive/issue | 2023-05-19T09:53:56 | 2025-04-01T06:45:50.864314 | {
"authors": [
"sphussey"
],
"repo": "sphussey/intellistat",
"url": "https://github.com/sphussey/intellistat/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2411300426 | Support for set of North items in POST
Added support for POSTing a set of identifiers to do "north" graph walks on
@bwmcadams can you review this PR please
@bwmcadams please review
| gharchive/pull-request | 2024-07-16T14:21:29 | 2025-04-01T06:45:50.867387 | {
"authors": [
"dpp"
],
"repo": "spice-labs-inc/bigtent",
"url": "https://github.com/spice-labs-inc/bigtent/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1923943279 | fix(snippets): oneko and duck
before:
after:
This would be better served as a fix to the actual repos for the oneko and duck extensions. Might as well fix the problem at the source instead of adding snippets for things that don't even apply to the base install.
oneko and duck extensions
there is no duck extension
fix the problem at the source instead
oneko doesn't have this issue
adding snippets
I didn't add anything, I fixed already added snippets
things that don't even apply to the base install
they do, these snippets are completely standalone
| gharchive/pull-request | 2023-10-03T11:43:03 | 2025-04-01T06:45:50.870406 | {
"authors": [
"SunsetTechuila",
"theRealPadster"
],
"repo": "spicetify/spicetify-marketplace",
"url": "https://github.com/spicetify/spicetify-marketplace/pull/605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1321175916 | ziro: layouting fix
some fix for ziro
Sorry for the delay. I made more fixes for the theme, mainly aligning and changing the color of the navbar icon.
Spotify for macOS (Intel)
1.1.97.962.g24733a46
Spicetify v2.14.1
Do these changes still work?
Yes, I think it still works
Spotify for macOS (Intel)
1.1.97.962.g24733a46
Spicetify v2.14.3
Newer than 1.1.97.962, since I only tried it on that version
| gharchive/pull-request | 2022-07-28T16:10:06 | 2025-04-01T06:45:50.874432 | {
"authors": [
"fortoszone",
"harbassan"
],
"repo": "spicetify/spicetify-themes",
"url": "https://github.com/spicetify/spicetify-themes/pull/816",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2106789981 | Night CI 2024-01-30: Failed
action url: https://github.com/spidernet-io/egressgateway/actions/runs/7704583967
for testing
| gharchive/issue | 2024-01-30T01:47:11 | 2025-04-01T06:45:50.878653 | {
"authors": [
"bzsuni",
"weizhoublue"
],
"repo": "spidernet-io/egressgateway",
"url": "https://github.com/spidernet-io/egressgateway/issues/1180",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1817674732 | Night CI 2023-07-24: Failed
action url: https://github.com/spidernet-io/egressgateway/actions/runs/5632470396
rerun to debug
| gharchive/issue | 2023-07-24T05:46:43 | 2025-04-01T06:45:50.879719 | {
"authors": [
"bzsuni",
"weizhoublue"
],
"repo": "spidernet-io/egressgateway",
"url": "https://github.com/spidernet-io/egressgateway/issues/616",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
183371314 | Frames captured against black background
Hi
I am using tar.js to export jpg images, but the background color of the images is black.
In my canvas, I have set the background color as white.
What should I do to get the exported images with a white background?
Thanks in advance
Is the background color set explicitly to black in Canvas2D/WebGL, or are you rendering on a transparent canvas on top of a CSS black background? There's a difference, and CCapture can only get the right background color in the first option.
I have set the background color explicitly to white in Canvas2D.
var canvas = document.querySelector('#myCanvas');
canvas.style.backgroundColor = 'rgba(255, 255, 255, 0.9)';
As I said, the CSS background can't be captured because it's outside of the canvas. The browser is composing a transparent canvas on top of an element with white color. Try using context.fillRect with a white fillStyle color where - I imagine - you're using context.clearRect.
Thanks. That worked.
| gharchive/issue | 2016-10-17T09:48:08 | 2025-04-01T06:45:50.931592 | {
"authors": [
"nirali1",
"spite"
],
"repo": "spite/ccapture.js",
"url": "https://github.com/spite/ccapture.js/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Feature: trigger macros by tilting the stick
TODO
[x] Build something that works, for starters
[x] Fix it up to improve accuracy
[x] Refactor
[x] Add a setting to ignore "macros" or "stick hooks" while a specific button is pressed
https://github.com/splaplapla/procon_bypass_man/pull/80
[x] Make sure it doesn't trigger when the button is released
Might be able to decide based on whether the last position in the sampling window is centered
[x] Allow the threshold to be entered from the config file
[x] Define it as a Splatoon macro plugin
[ ] Try turning the stick-detection processing into a queue
Right now the calculation is split by time, but the detection results vary depending on the sampling window, so I'd like to try a queue. (Processing load would probably go up.)
Diary
2022/02/23
Decided on the config syntax. Made rough stick-detection logic. (Will tune it once it works end to end.)
Splitting by time, as now, seems inaccurate. Deciding from the latest 10 samples might make it stable.
Implemented it as a macro. Haven't verified it works yet.
2022/02/24
Inertia canceling now works with the stick as a hook:
open_macro :dacan, steps: [:pressing_r_for_0_09sec, :pressing_r_and_pressing_zl_for_0_1sec], if_tilted_left_stick: true, if_pressed: [:zl]
However, the stick-tilt detection is loose, so the trigger timing is inconsistent.
The problem probably lies in the data structure.
Stick momentum is aggregated at 0.1 intervals, but when stick input falls between two windows (0.1 and 0.1) the values get split and apparently never exceed the threshold.
After lowering the threshold and removing the home-position check, it triggers almost every time. I'll rebuild the home-position check logic.
2022/02/25
Finished refactoring.
Also, looking at the latest stick state stopped the accidental triggers.
Next, I'll implement an option that disables macros while a specific button is pressed.
2022/02/26
Wrote the setting that disables macros.
Made the stick-tilt detection threshold configurable from the config file, but in use there seem to be a lot of false positives. It doesn't feel good to use.
Occasionally the sub weapon fires accidentally when moving as a squid right after rapid-firing the main weapon.
The batch-processing approach may be prone to inconsistent detection.
The accuracy is so-so, but I'm merging it into master.
| gharchive/pull-request | 2022-02-23T04:11:20 | 2025-04-01T06:45:50.939318 | {
"authors": [
"jiikko"
],
"repo": "splaplapla/procon_bypass_man",
"url": "https://github.com/splaplapla/procon_bypass_man/pull/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2707365185 | Update to bevy0.15
This is the first time I'm submitting a Pull Request, and I might have overlooked something. Please feel free to point out any mistakes or areas for improvement.
Most of the code changes were made to align with the updated API. However, I'm not entirely sure about the correctness of this specific change:
Code link
And the examples/fast_traversal_ray.rs cannot run because it needs the correct version of smooth_bevy_cameras.
Thank you for taking the time to review my work! 🙏 I’m looking forward to your feedback and learning from your suggestions.
Thank you for your guidance! I’ve made some updates following your advice, but I’m not sure if I’ve fully understood everything. Please let me know if further adjustments are needed!
| gharchive/pull-request | 2024-11-30T13:26:14 | 2025-04-01T06:45:50.942128 | {
"authors": [
"Touma-Kazusa2"
],
"repo": "splashdust/bevy_voxel_world",
"url": "https://github.com/splashdust/bevy_voxel_world/pull/42",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
781125365 | New spam came through today
This is what I got in inbox today:
Text To Speech In 3 Clicks - Real sounding human voices!. For a free demo. Write a reply here: katesepage@gmail.com
Thanks for the information.
I have added a couple of phrases from the text you provided to the blacklist and they will be available as soon as I push a fresh commit this week.
The following terms were added in commit 65f1967 to address this issue.
s!.
write a reply here
| gharchive/issue | 2021-01-07T08:20:22 | 2025-04-01T06:45:50.962260 | {
"authors": [
"salacpavel",
"splorp"
],
"repo": "splorp/wordpress-comment-blacklist",
"url": "https://github.com/splorp/wordpress-comment-blacklist/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1329185955 | docs: update workflow and mention JSON schema
This PR adds a link to the JSON schema for globalConfig.json and updates the Github Actions workflow to use poetry to install docs dependencies.
:tada: This PR is included in version 5.14.0 :tada:
The release is available on:
v5.14.0
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2022-08-04T21:19:23 | 2025-04-01T06:45:50.964664 | {
"authors": [
"artemrys",
"srv-rr-github-token"
],
"repo": "splunk/addonfactory-ucc-generator",
"url": "https://github.com/splunk/addonfactory-ucc-generator/pull/489",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
829321155 | configure_apps.yml does not support a git repo where there is multiple apps
On my GitLab, I have a repo named splunk_apps that contains all the apps for the same "project":
- app_1
- app_2
- ...
Maybe I've missed something in my configuration, but it does not seem possible to install only app_1 on one server and app_2 on a second.
For me the issue lies in the task Download defined Git repos to local Ansible host
- name: Download defined Git repos to local Ansible host
  git:
    accept_hostkey: true
    repo: "{{ item.git_server | default(git_server) }}/{{ item.git_project | default(git_project) }}/{{ item.name }}"
    version: "{{ item.git_version | default(git_version) }}"
    dest: "{{ git_local_clone_path }}{{ ansible_nodename }}/{{ item.name }}"
    key_file: "{{ git_key }}"
    force: true
  loop: "{{ git_apps }}"
  delegate_to: localhost
  changed_when: false
  check_mode: false
It loops on the git_apps parameter and expects each entry to be a standalone git repo. In my view, each standalone repo should be configured via the git_project parameter only.
Did I miss something? Maybe we could support both possibilities with a new parameter, git_multiple_app_per_repo.
Possible implementation: Within git_apps vars, have an optional variable where the user can list out the specific folders within the repository that they would like deployed (default to all).
Notes:
The installation path would likely need to be configurable at the folder level as well
Still need to accommodate different handlers as well
Need to evaluate further if this change would break the rsync deployment method
This feature may need to be added at the same time as https://github.com/splunk/ansible-role-for-splunk/issues/48
We tend to use one git repo for all roles. This simplifies the dev process and avoids duplicates or confusion (I can see your props.conf).
One example is:
an app with sourcetype info
an app with an inputs script
an app with dashboards
Some of them need to be deployed to a HF and the others to a single instance.
We tend to have all the apps at the same level (sometimes in a subfolder splunk_apps, sometimes directly in the root dir of the git repo).
I resolved this challenge by creating a new task called install_multi_app.yml.
It still takes the git_apps variable and calls configure_apps.yml.
The determine_handler.yml task is the following:
- name: sync the apps repo
  include_tasks: configure_apps.yml

- name: "Synchronize {{ splunk_home }}/{{ splunk_app_deploy_path }}/{{ item.app_name }} to {{ splunk_home }}/{{ splunk_app_deploy_path }}/"
  shell:
    cmd: "rsync --delete --checksum --recursive --prune-empty-dirs --itemize-changes --no-owner --no-group --no-times {{ splunk_home }}/{{ splunk_app_deploy_path }}/{{ item.app_name }}/* {{ splunk_home }}/{{ splunk_app_deploy_path }}/"
  become: true
  become_user: "{{ splunk_nix_user }}"
  loop: "{{ splunk_apps }}"
  notify: "{{ handler }}"

- name: Ensure correct permissions are set
  file:
    path: "{{ splunk_home }}/{{ splunk_app_deploy_path }}/"
    owner: "{{ splunk_nix_user }}"
    group: "{{ splunk_nix_group }}"
    recurse: true
  become: true

- name: remove the temporary app folders
  file:
    state: absent
    path: "{{ splunk_home }}/{{ splunk_app_deploy_path }}/{{ item.app_name }}"
  become: true
  become_user: "{{ splunk_nix_user }}"
  loop: "{{ splunk_apps }}"
| gharchive/issue | 2021-03-11T16:37:33 | 2025-04-01T06:45:50.971702 | {
"authors": [
"ForsetiJan",
"Jalkar",
"mason-splunk"
],
"repo": "splunk/ansible-role-for-splunk",
"url": "https://github.com/splunk/ansible-role-for-splunk/issues/49",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
853631625 | feat(spectracom): Support spectracom ntp appliance
closes #998
:tada: This PR is included in version 1.70.0-develop.3 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 1.70.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2021-04-08T16:11:23 | 2025-04-01T06:45:50.976359 | {
"authors": [
"rfaircloth-splunk"
],
"repo": "splunk/splunk-connect-for-syslog",
"url": "https://github.com/splunk/splunk-connect-for-syslog/pull/1094",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1094552163 | Paying directly with a debit card
Hi, is it possible to enable users to pay directly using a debit or credit card, without needing to create a PayPal account?
I'm rendering them a form for subscription payments. As we can see in the screenshot below, they need to log in to PayPal first.
This is a PayPal question, not a django-paypal thing. Probably this is relevant:
https://www.paypal.com/us/smarthelp/article/how-do-i-accept-credit-cards-with-checkout-using-the-guest-checkout-option-faq3226
| gharchive/issue | 2022-01-05T16:53:01 | 2025-04-01T06:45:50.996361 | {
"authors": [
"mzuzic",
"spookylukey"
],
"repo": "spookylukey/django-paypal",
"url": "https://github.com/spookylukey/django-paypal/issues/248",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
608838103 | Remove network.fees and change default fee to 1 sat/byte
Currently, if a fee argument is not passed to send() or create_transaction(), the function network.fees.get_fee() is called and returns 2 (a default fee of 2 satoshis per byte).
We could simplify the code by removing network.fees as it is no longer needed with Bitcoin Cash (the fee is always predictable and does not change over time like BTC).
Instead, we could add the default argument of fee=1 to transaction.sanitize_tx_data().
What do you think @teran-mckinney?
If we do this, 1 should still be a constant so it can be changed if needed.
What will happen if Bitcoin Cash has a backlog of transactions? Isn't it good to be able to raise your fee in that case to have priority? I realize it's a lot less likely than Bitcoin being overloaded, but it's entirely possible
Fee can still be passed as an argument of send().
For example, Key.send([outputs], fee=10).
I see. Probably could just go to a constant like DEFAULT_FEE then.
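As a minimal sketch of that idea (hypothetical names, not bitcash's actual code — `DEFAULT_FEE` and `resolve_fee` are assumptions here):

```python
# Hypothetical sketch of the proposed change -- not bitcash's actual code.
# The idea: replace the network.fees.get_fee() lookup with a module-level
# constant that callers can still override per transaction.

DEFAULT_FEE = 1  # satoshis per byte; a constant so it can be changed if needed


def resolve_fee(fee=None):
    """Return the caller-supplied fee, falling back to the default."""
    return DEFAULT_FEE if fee is None else fee


# A caller such as sanitize_tx_data() would then use resolve_fee(fee)
# instead of hitting the network for a fee estimate.
print(resolve_fee())    # -> 1
print(resolve_fee(10))  # -> 10, e.g. for Key.send([outputs], fee=10)
```

This keeps the explicit-fee escape hatch for backlog situations while removing the network round-trip for the common case.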
Less network access is always better. The official BCH Android wallet has these settings:
low prio: doesn't say (what could be lower than 1?)
default prio: 1 satoshi/vByte
high prio: doesn't say
Maybe an optional network function would be useful that returns the fee required to be included in the next block, i.e. the fastest possible transfer without wasted satoshis.
As I calculated, it is currently ~520 satoshis; I'm not sure if that's per byte or for the entire transaction.
low prio: doesn't say (what could be lower than 1?)
"Fractional satoshis" are in the Bitcoin Cash roadmap, it would be a necessity to keep transaction fees low if the value of BCH went up.
| gharchive/issue | 2020-04-29T07:26:57 | 2025-04-01T06:45:51.001449 | {
"authors": [
"haplo",
"merc1er",
"mrx23dot",
"teran-mckinney"
],
"repo": "sporestack/bitcash",
"url": "https://github.com/sporestack/bitcash/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1773298357 | Downloading with YTM link set wrong metadata
System OS
Linux
Python Version
3.11 (CPython)
Install Source
pip / PyPi
Install version / commit hash
4.1.11
Expected Behavior vs Actual Behavior
Downloading an album using its YouTube Music link does not correctly set metadata. It looks like spotdl incorrectly searches for the given song on Spotify and uses the metadata from a wrong song. If you take a look at the metadata, there's the URL field which contains the Spotify URL to the song used to get the metadata (i.e. the incorrect metadata).
Steps to reproduce - Ensure to include actual links!
Try to download https://music.youtube.com/playlist?list=OLAK5uy_mBEYJkdTEZYN-ti1lhNSgQUk7FR7Qff_0 and check the resulting metadata.
Traceback
there's no error
Other details
No response
I second this. The majority of stuff downloaded from YTM links/playlists will have wrong metadata.
For example, when downloading this album from a YTM link, it will not match the songs to the appropriate metadata:
https://music.youtube.com/playlist?list=OLAK5uy_lQcDzH79KD6qoGje4O5Xr_AtKcXvmC9WE
But it works completely fine when using the Spotify Link:
https://open.spotify.com/album/3U07SZ34DM1LGRBp20sY4P
This doesn't work even if I add the --ytm-data option.
For YTM playlists, this option needs to work as a fallback in cases where data matching the YTM song titles doesn't work with Spotify.
But ideally, the data matching should work just as correctly, ensuring uniformity in the data within playlists.
Like the Spotify popularity variable, for example, which isn't in the YTM data.
Spotify link matching is fixed on dev; --ytm-data is currently not available for YTM lists.
Ideally, it wouldn't even need to search on Spotify, but could use the YTM metadata (maybe with an additional option), if this is possible.
--ytm-data is exactly for that. Currently it doesn't work, but I will try fixing it today. Also, it's better to use Spotify data because YTM only provides the title, duration, artists, and album name.
| gharchive/issue | 2023-06-25T13:53:26 | 2025-04-01T06:45:51.022566 | {
"authors": [
"giowa49h8gpr",
"jnxr",
"xnetcat"
],
"repo": "spotDL/spotify-downloader",
"url": "https://github.com/spotDL/spotify-downloader/issues/1867",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1960386405 | Can't Get 256kbps With Cookies Set
System OS
MacOS
Python Version
3.7 (CPython)
Install Source
pip / PyPi
Install version / commit hash
4.2.1
Expected Behavior vs Actual Behavior
spotdl https://open.spotify.com/playlist/XXX --cookie-file cookies.txt --bitrate disable --audio youtube-music
from within a folder where cookies.txt is from youtube music and bitrate is disabled.
No errors are produced, but all files are still 128kbps. I presume the cookies.txt is fine; I have YT Premium.
afinfo XXXXX.mp3
File: XXXXXX.mp3
File type ID: MPG3
Num Tracks: 1
----
Data format: 2 ch, 48000 Hz, .mp3 (0x00000000) 0 bits/channel, 0 bytes/packet, 1152 frames/packet, 0 bytes/frame
no channel layout.
estimated duration: 233.904000 sec
audio bytes: 3742464
audio packets: 9746
bit rate: 128000 bits per second
Steps to reproduce - Ensure to include actual links!
spotdl https://open.spotify.com/playlist/XXX --cookie-file cookies.txt --bitrate disable --audio youtube-music
Traceback
○ → spotdl https://open.spotify.com/playlist/XXXXX?--cookie-file cookies.txt --bitrate disable --audio youtube-music
Processing query: https://open.spotify.com/playlist/XXXXXX
Found ZZZZ songs in XXXX
Downloaded "Lucky Daye - Over - Sped Up": https://music.youtube.com/watch?v=s5bDk00cRZM
...
Other details
Nothing.
I tried flipping the syntax as well, to no avail:
spotdl --cookie-file cookies.txt --bitrate disable https://open.spotify.com/playlist/2yJBSJVkuI8xesPvYnfmlV?si=8f26b4d0cf58448c
Processing query: https://open.spotify.com/playlist/2yJBSJVkuI8xesPvYnfmlV?si=8f26b4d0cf58448c
...
No errors; all files still end up at 128kbps.
I tried this as well, to no avail:
spotdl download "https://open.spotify.com/playlist/2yJBSJVkuI8xesPvYnfmlV" --audio=youtube-music --format=m4a --cookie-file=cookies.txt --bitrate=disable
Still 128kbps
○ → afinfo Guru\ -\ Introduction.m4a
File: Guru - Introduction.m4a
File type ID: mp4f
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
no channel layout.
estimated duration: 80.257982 sec
audio bytes: 1285008
audio packets: 3458
bit rate: 128029 bits per second
packet size upper bound: 548
maximum packet size: 548
audio data file offset: 144915
optimized
audio 3539377 valid frames + 1600 priming + 15 remainder = 3540992
format list:
[ 0] format: 2 ch, 44100 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
Channel layout: Stereo (L R)
----
Solved. The docs need updating. You won't get 256kbps unless you adjust your YouTube Music settings inside the app to stream at High Quality.
https://github.com/spotDL/spotify-downloader/blob/master/docs/usage.md#audio-formats-and-quality
Okay, when downloading the cookies using the extension provided, do you click ALL or Current Site? Also, where do you put the cookies.txt file? I put it in the .spotdl folder where I am running the commands from, but I am still not getting the higher bitrate. My files are also still in MP3.
So I did that, but I still can't get it to work. The files are in M4A, but still 128kbps. Did you get it to work?
@otmanzy99 Yep. Ensure you downloaded your cookies correctly and have the file named as expected. Also ensure your account has high-quality streaming enabled.
| gharchive/issue | 2023-10-25T02:16:16 | 2025-04-01T06:45:51.029488 | {
"authors": [
"johnnyshankman",
"otmanzy99"
],
"repo": "spotDL/spotify-downloader",
"url": "https://github.com/spotDL/spotify-downloader/issues/1943",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
695219529 | cli: make skipLibCheck default in tsconfig, but provide tsc:full
skipLibCheck speeds a local yarn tsc up significantly, but it won't catch all issues.
This switches skipLibCheck to be set by default, but disables it for yarn tsc:full, which is then recommended to be used in CI.
@freben yep, but it's one of those you actually use quite a bit for local development. Took it from 13s to 7s for me. Also saw a mix of usages in the repo, so wanted to align that
Regarding the risk, we'll see when the difference between tsc and tsc:full is actually encountered the first time :p.
The risk when it comes to breakage should be very low, since we're running the full check in CI. The risk that I think exists is potentially wasting a bit of time when installing a new dependency, and it happens to have bad types, which you don't notice until the CI build barfs.
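As a sketch of the setup being described (file contents assumed, not copied from the actual repo), the shared tsconfig enables `skipLibCheck` while a separate script re-enables the full check:

```jsonc
// tsconfig.json -- fast local type-checking by default
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}

// package.json "scripts" -- "tsc:full" turns the lib check back on for CI
{
  "scripts": {
    "tsc": "tsc",
    "tsc:full": "tsc --skipLibCheck false"
  }
}
```

tsc boolean flags accept an explicit `false` value on the command line, so `tsc:full` overrides the tsconfig default without needing a second config file.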
| gharchive/pull-request | 2020-09-07T15:37:22 | 2025-04-01T06:45:51.032031 | {
"authors": [
"Rugvip"
],
"repo": "spotify/backstage",
"url": "https://github.com/spotify/backstage/pull/2317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
88396803 | Clear snapshots on session failure
In case a repair session fails, there is a chance the snapshots created
for that particular session don't get cleaned up. This change adds
an attempt to explicitly clear the snapshots.
I did rename the method. However, there is no way of telling whether a snapshot was cleared or not, or whether an attempt was made to clear a non-existing snapshot; Cassandra does not expose that information. The only thing that happens is that Cassandra logs a debug line when snapshots have been cleared.
:+1:
| gharchive/pull-request | 2015-06-15T11:32:29 | 2025-04-01T06:45:51.033666 | {
"authors": [
"Bj0rnen",
"rzvoncek"
],
"repo": "spotify/cassandra-reaper",
"url": "https://github.com/spotify/cassandra-reaper/pull/103",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
144030140 | Add user manual doc; organize docs better
Fixes #360
Current coverage is 37.66%
Merging #394 into master will decrease coverage by -2.16% as of 92337c0
@@ master #394 diff @@
======================================
Files 72 72
Stmts 3465 3465
Branches 819 819
Methods 0 0
======================================
- Hit 1380 1305 -75
+ Partial 130 124 -6
- Missed 1955 2036 +81
Review entire Coverage Diff as of 92337c0
Powered by Codecov. Updated on successful CI builds.
whoa, awesome job on the user manual
Thanks. I just copied the order of Docker's API doc: https://docs.docker.com/engine/reference/api/docker_remote_api_v1.18/
:+1:
| gharchive/pull-request | 2016-03-28T17:16:40 | 2025-04-01T06:45:51.038722 | {
"authors": [
"codecov-io",
"davidxia",
"mattnworb"
],
"repo": "spotify/docker-client",
"url": "https://github.com/spotify/docker-client/pull/394",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
95951763 | Interaction between Or and Many
When the first branch of Or matches a zero-length input, the second option is tried anyway.
http://stackoverflow.com/questions/31381923/sprache-monadic-parser-or-and-many-semantics
In file Parse.cs, lines 423/424, there is a test if (fr.Remainder.Equals(i)). If the first parser matches the empty string, it still invokes the second parser.
This is: https://github.com/sprache/Sprache/blob/master/src/Sprache/Parse.cs#L423
I believe this is probably erroneous, but Or is quite fundamental so more eyes on this would be appreciated if possible.
A workaround in the meantime is to reorder Or branches if possible.
Uncertain about changing this and its impact on existing parsers, happy to reopen if anyone has the time to dig in deeper here.
| gharchive/issue | 2015-07-19T21:53:44 | 2025-04-01T06:45:51.049285 | {
"authors": [
"nblumhardt"
],
"repo": "sprache/Sprache",
"url": "https://github.com/sprache/Sprache/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
408840181 | Keep end-of-line of maven wrapper mvnw to LF (linefeed)
Even on windows, mvnw is a bash shell script that needs to be in unix format to be correctly executed.
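One way to keep mvnw in unix format regardless of each contributor's Git line-ending settings is a .gitattributes file in the repo root (a sketch; the exact entries are assumptions about the project's layout):

```
# Keep the unix shell wrapper on LF and the Windows batch wrapper on CRLF
mvnw      text eol=lf
mvnw.cmd  text eol=crlf
```

With these attributes, Git normalizes the files on checkout for every clone, so no per-machine configuration is needed.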
Well, I have my Git on Windows configured like "Checkout CRLF - checkin LF".
And it works for any project the same way.
I don't think this is project-related...
Otherwise you would have the burden of committing a similar change to every project you'd like to contribute to.
Closed as "Won't Fix"
The git settings are not the problem (I have the same as you): it is the combination of running Windows and wanting to use "git bash" or any unix-like shell on Windows to run mvnw. Running the batch file from cmd.exe is not a problem; it is running the bash shell script that fails.
For me, it is the same thing as specifying 'apps/' in the .gitignore file: it has to be done for every project.
If I do it for every project under spring-cloud-stream-app-starters, would it have more chance to be accepted ?
I think there should be some configuration for that "git bash" on Windows as well.
Still, it doesn't sound like it belongs to this or any other project.
Not sure why you are pursuing Git-related configuration in the project source...
With respect to git-related configuration in the project source: when I check out a project and see a Maven or Gradle wrapper in it, I expect to be able to build it with my choice of tools, i.e. bash on Windows in my case. But, as the mvnw is encoded with CRLF, it does not work, and there is no configuration for that in "git bash".
It is about having the lowest barrier to entry for potential contributors.
But it is your call; I won't bother with other PRs regarding this issue.
| gharchive/pull-request | 2019-02-11T15:37:39 | 2025-04-01T06:45:51.075311 | {
"authors": [
"artembilan",
"marcpa00"
],
"repo": "spring-cloud-stream-app-starters/scriptable-transform",
"url": "https://github.com/spring-cloud-stream-app-starters/scriptable-transform/pull/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
702761156 | Access vault secret with specific path
I am trying to get my microservices' configuration from a config server connected to 2 sources: git and vault (for secrets). I have the config below in the config server:
server:
  port: 8888
spring:
  profiles:
    active: git, vault
  cloud:
    config:
      server:
        vault:
          port: 8200
          host: 127.0.0.1
          kvVersion: 2
        git:
          order: 2
          uri: git@gitlab.git
and on the client side, in bootstrap.yml:
spring:
  application:
    name: my-service-name
  cloud:
    config:
      uri: http://localhost:8888
      token: //token
      label: dev
But in my Vault I have a path like this:
secret/cad
|--my-service-name
When I put my secret directly in /secret/my-service-name I can access my secrets, but how can I configure access to secrets in /secret/cad/my-service-name?
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>Finchley.SR2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
Thank you.
Setting spring.cloud.config.server.vault.backend=secret/cad will work if all services share that path. Also, Finchley and Greenwich are no longer supported. Hoxton is supported through mid-2021, so I'd suggest upgrading to 2020.0.
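In YAML terms, the suggested backend override would sit alongside the existing vault settings in the config server, e.g. (a sketch; only the backend line changes from the config quoted above):

```yaml
spring:
  cloud:
    config:
      server:
        vault:
          port: 8200
          host: 127.0.0.1
          kvVersion: 2
          backend: secret/cad
```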
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
| gharchive/issue | 2020-09-16T13:22:21 | 2025-04-01T06:45:51.079245 | {
"authors": [
"madiskou",
"spencergibb",
"spring-cloud-issues"
],
"repo": "spring-cloud/spring-cloud-config",
"url": "https://github.com/spring-cloud/spring-cloud-config/issues/1704",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
615045060 | com.example.SpringBootStarterApp required a bean of type 'com.microsoft.azure.functions.ExecutionContext' that could not be found.
I followed the example at -
https://github.com/spring-cloud/spring-cloud-function/blob/master/spring-cloud-function-samples/function-sample-azure/pom.xml
My project has the same dependencies outlined in the above POM. I get the following exception:
Code is at - https://github.com/ssampigegithub/azure-function-poc.git
The project is run with an Eclipse launch configuration:
Base directory: ${workspace_loc:/azure-poc-parent/azure-functions}
Goals: spring-boot:run
The error we get when the project runs is:
APPLICATION FAILED TO START
Description:
Parameter 0 of method loadClaims in com.example.SpringBootStarterApp required a bean of type 'com.microsoft.azure.functions.ExecutionContext' that could not be found.
Action:
Consider defining a bean of type 'com.microsoft.azure.functions.ExecutionContext' in your configuration.
[WARNING]
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.maven.AbstractRunMojo$LaunchRunner.run(AbstractRunMojo.java:558)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'loadClaims' defined in com.example.SpringBootStarterApp: Unsatisfied dependency expressed through method 'loadClaims' parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:509)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1305)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1144)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:849)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
at com.example.SpringBootStarterApp.main(SpringBootStarterApp.java:45)
... 6 more
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1654)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1213)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1167)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760)
... 25 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.865 s
[INFO] Finished at: 2020-05-08T17:00:14-07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.1.3.RELEASE:run (default-cli) on project azure-functions: An exception occurred while running. null: InvocationTargetException: Error creating bean with name 'loadClaims' defined in com.example.SpringBootStarterApp: Unsatisfied dependency expressed through method 'loadClaims' parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {} -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
example.zip
I encountered the same error on the demo project :
https://github.com/spring-cloud/spring-cloud-function/tree/master/spring-cloud-function-samples/function-sample-azure
Can anyone help?
/Users/pomverte/Library/Java/JavaVirtualMachines/openjdk-14.0.1/Contents/Home/bin/java -Dmaven.multiModuleProjectDirectory=/Users/pomverte/git/spring-cloud-function "-Dmaven.home=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3" "-Dclassworlds.conf=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/bin/m2.conf" "-Dmaven.ext.class.path=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven-event-listener.jar" "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=56772:/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds.license:/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds-2.6.0.jar" org.codehaus.classworlds.Launcher -Didea.version2020.1.1 org.springframework.boot:spring-boot-maven-plugin:2.3.0.BUILD-SNAPSHOT:run
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------< io.spring.sample:function-sample-azure >---------------
[INFO] Building function-sample-azure 2.0.0.RELEASE
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] >>> spring-boot-maven-plugin:2.3.0.BUILD-SNAPSHOT:run (default-cli) > test-compile @ function-sample-azure >>>
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ function-sample-azure ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/pomverte/git/spring-cloud-function/spring-cloud-function-samples/function-sample-azure/src/main/resources
[INFO] skip non existing resourceDirectory /Users/pomverte/git/spring-cloud-function/spring-cloud-function-samples/function-sample-azure/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ function-sample-azure ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ function-sample-azure ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/pomverte/git/spring-cloud-function/spring-cloud-function-samples/function-sample-azure/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ function-sample-azure ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] <<< spring-boot-maven-plugin:2.3.0.BUILD-SNAPSHOT:run (default-cli) < test-compile @ function-sample-azure <<<
[INFO]
[INFO]
[INFO] --- spring-boot-maven-plugin:2.3.0.BUILD-SNAPSHOT:run (default-cli) @ function-sample-azure ---
[INFO] Attaching agents: []
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.0.BUILD-SNAPSHOT)
2020-05-14 21:38:00.370 INFO 12189 --- [ main] example.Config : Starting Config on POMVERTEMAC with PID 12189 (/Users/pomverte/git/spring-cloud-function/spring-cloud-function-samples/function-sample-azure/target/classes started by pomverte in /Users/pomverte/git/spring-cloud-function/spring-cloud-function-samples/function-sample-azure)
2020-05-14 21:38:00.371 INFO 12189 --- [ main] example.Config : No active profile set, falling back to default profiles: default
2020-05-14 21:38:01.301 INFO 12189 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-05-14 21:38:01.313 INFO 12189 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-05-14 21:38:01.313 INFO 12189 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.35]
2020-05-14 21:38:01.395 INFO 12189 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-05-14 21:38:01.395 INFO 12189 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 987 ms
2020-05-14 21:38:01.444 WARN 12189 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'uppercase' defined in example.Config: Unsatisfied dependency expressed through method 'uppercase' parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
2020-05-14 21:38:01.446 INFO 12189 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2020-05-14 21:38:01.458 INFO 12189 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-05-14 21:38:01.631 ERROR 12189 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method uppercase in example.Config required a bean of type 'com.microsoft.azure.functions.ExecutionContext' that could not be found.
Action:
Consider defining a bean of type 'com.microsoft.azure.functions.ExecutionContext' in your configuration.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.145 s
[INFO] Finished at: 2020-05-14T21:38:01+02:00
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "spring" could not be activated because it does not exist.
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.3.0.BUILD-SNAPSHOT:run (default-cli) on project function-sample-azure: Application finished with exit code: 1 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
I found the same issue in the sample project too. Can anybody from the contributing team please help?
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound (DefaultListableBeanFactory.java:1646)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency (DefaultListableBeanFactory.java:1205)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency (DefaultListableBeanFactory.java:1166)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument (ConstructorResolver.java:855)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray (ConstructorResolver.java:758)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod (ConstructorResolver.java:508)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod (AbstractAutowireCapableBeanFactory.java:1288)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance (AbstractAutowireCapableBeanFactory.java:1127)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean (AbstractAutowireCapableBeanFactory.java:538)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean (AbstractAutowireCapableBeanFactory.java:498)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0 (AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton (DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean (AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean (AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons (DefaultListableBeanFactory.java:846)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization (AbstractApplicationContext.java:863)
at org.springframework.context.support.AbstractApplicationContext.refresh (AbstractApplicationContext.java:546)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh (ServletWebServerApplicationContext.java:140)
at org.springframework.boot.SpringApplication.refresh (SpringApplication.java:775)
at org.springframework.boot.SpringApplication.refreshContext (SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run (SpringApplication.java:316)
at org.springframework.boot.SpringApplication.run (SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run (SpringApplication.java:1248)
at example.Config.main (Config.java:31)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:566)
at org.springframework.boot.maven.AbstractRunMojo$LaunchRunner.run (AbstractRunMojo.java:558)
at java.lang.Thread.run (Thread.java:834)
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] function-sample-azure 2.0.0.RELEASE ................ FAILURE [ 14.031 s]
[INFO] Spring Cloud Function Samples 3.1.0-SNAPSHOT ....... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15.091 s
[INFO] Finished at: 2020-07-01T15:54:05-07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.1.0.RELEASE:run (default-cli) on project function-sample-azure: An exception occurred while running. null: InvocationTargetException: Error creating bean with name 'uppercase' defined in example.Config: Unsatisfied dependency expressed through method 'uppercase' parameter 0; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.microsoft.azure.functions.ExecutionContext' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {} -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Update: I solved the problem by downgrading my JRE to Java 8.
Update: I solved the problem by downgrading my JRE to Java 8.
I am running JDK 1.8.0_111.
I have also tried jdk-11.0.6 but no luck.
Hi
I solved it by implementing a bean in my Spring project. What do you think about also adding a SimpleExecutionContext implementation to this repo?
import java.util.UUID;
import java.util.logging.Logger;

import com.microsoft.azure.functions.ExecutionContext;
import org.springframework.stereotype.Component;

@Component
public class SimpleExecutionContext implements ExecutionContext {

    private String name;

    public SimpleExecutionContext(String name) {
        this.name = name;
    }

    public SimpleExecutionContext() {
    }

    @Override
    public Logger getLogger() {
        return Logger.getLogger(SimpleExecutionContext.class.getName());
    }

    @Override
    public String getInvocationId() {
        return UUID.randomUUID().toString();
    }

    @Override
    public String getFunctionName() {
        return this.name;
    }
}
Hello,
I still have the same issue: the ExecutionContext bean could not be found.
java 8/11
intellij idea
azure function cli 3.0.2630
spring-boot-starter-parent:2.3.4.RELEASE
azure.functions.maven.plugin.version:1.9.0
org.springframework.cloud:spring-cloud-function-adapter-azure:3.0.10.RELEASE
the issue persists on 3.1.3
@jakubboesche the issue is fixed and the fix is available in 3.1.3. That said, it is possible that something else is still an issue. If you believe so, please provide a sample project that reproduces it so we can take a look.
the issue persists on 3.1.3
same here
jdk 11
azure.functions.java.library.version: 1.4.2
org.springframework.cloud:spring-cloud-function-adapter-azure: 3.1.3
Update: reverting to org.springframework.cloud:spring-cloud-function-adapter-azure:3.0.14.RELEASE works for me. Now I can inject the ExecutionContext bean.
Source: gharchive issue, created 2020-05-09 — spring-cloud/spring-cloud-function, https://github.com/spring-cloud/spring-cloud-function/issues/516 (license: Apache-2.0). Participants: EthanNguyen132, JV-TMCZ, jakubboesche, olegz, pomverte, ssampigegithub, vimorra.
2588796748 | CL.0 Http Request Smuggling vulnerability in Netty
We are using Spring Cloud Gateway as a routing proxy in our application setup. During a penetration test executed by an external security company, a CL.0 HTTP Request Smuggling vulnerability was reported to us. I tried to reproduce this and concluded that the root cause is most likely in the reactive Netty stack used by Spring Cloud Gateway. I reported my findings as a comment on an already open GitHub issue that addressed a similar problem: https://github.com/netty/netty/issues/13706
I just wanted to make you aware of this problem because there seems to have been no reaction from the Netty side yet. Please note that the TransferEncodingNormalizationHeadersFilter does not solve the problem.
Please report security vulnerabilities here https://github.com/spring-projects/security-advisories
We can then privately determine if it is gateway, reactor netty, or netty itself.
Source: gharchive issue, created 2024-10-15 — spring-cloud/spring-cloud-gateway, https://github.com/spring-cloud/spring-cloud-gateway/issues/3560 (license: Apache-2.0). Participants: AndreasKasparek, spencergibb.
1481860772 | Fix caffeine class conditional
When using the Caffeine module, LocalResponseCacheAutoConfiguration is enabled.
// LocalResponseCacheAutoConfiguration.java
// Fails with NoClassDefFoundError when the CaffeineCacheManager class is not on the classpath
@Bean
public static CacheManager concurrentMapCacheManager(LocalResponseCacheProperties cacheProperties) {
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
    caffeineCacheManager.setCaffeine(caffeine(cacheProperties));
    return caffeineCacheManager;
}
CaffeineCacheManager lives in spring-context-support, so when a developer does not add the spring-context-support module, the CaffeineCacheManager class cannot be found.
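In line with the PR title, one way to address this is to guard the auto-configuration with a class conditional. This is a sketch only, assuming the standard Spring Boot @ConditionalOnClass mechanism; it may not match the exact patch applied in this PR:

```java
// Hypothetical guard: only register the Caffeine-backed cache manager
// when CaffeineCacheManager is actually on the classpath.
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(CaffeineCacheManager.class)
public class LocalResponseCacheAutoConfiguration {

    @Bean
    public static CacheManager concurrentMapCacheManager(LocalResponseCacheProperties cacheProperties) {
        CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
        caffeineCacheManager.setCaffeine(caffeine(cacheProperties));
        return caffeineCacheManager;
    }
}
```

With such a guard, applications without spring-context-support simply skip this configuration instead of failing at startup.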
Here is the complete exception
2022-12-07T20:49:31.028+08:00 WARN 20488 --- [ main] onfigReactiveWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'routeDefinitionRouteLocator' defined in class path resource [org/springframework/cloud/gateway/config/GatewayAutoConfiguration.class]: Unsatisfied dependency expressed through method 'routeDefinitionRouteLocator' parameter 1: Error creating bean with name 'localResponseCacheGatewayFilterFactory' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Unsatisfied dependency expressed through method 'localResponseCacheGatewayFilterFactory' parameter 1: Error creating bean with name 'concurrentMapCacheManager' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Failed to instantiate [org.springframework.cache.CacheManager]: Factory method 'concurrentMapCacheManager' threw exception with message: org/springframework/cache/caffeine/CaffeineCacheManager
2022-12-07T20:49:31.040+08:00 INFO 20488 --- [ main] .s.b.a.l.ConditionEvaluationReportLogger :
Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled.
2022-12-07T20:49:31.053+08:00 ERROR 20488 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'routeDefinitionRouteLocator' defined in class path resource [org/springframework/cloud/gateway/config/GatewayAutoConfiguration.class]: Unsatisfied dependency expressed through method 'routeDefinitionRouteLocator' parameter 1: Error creating bean with name 'localResponseCacheGatewayFilterFactory' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Unsatisfied dependency expressed through method 'localResponseCacheGatewayFilterFactory' parameter 1: Error creating bean with name 'concurrentMapCacheManager' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Failed to instantiate [org.springframework.cache.CacheManager]: Factory method 'concurrentMapCacheManager' threw exception with message: org/springframework/cache/caffeine/CaffeineCacheManager
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:793) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:543) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1161) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:561) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:326) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:961) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:915) ~[spring-context-6.0.2.jar:6.0.2]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:584) ~[spring-context-6.0.2.jar:6.0.2]
at org.springframework.boot.web.reactive.context.ReactiveWebServerApplicationContext.refresh(ReactiveWebServerApplicationContext.java:66) ~[spring-boot-3.0.0.jar:3.0.0]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:730) ~[spring-boot-3.0.0.jar:3.0.0]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:432) ~[spring-boot-3.0.0.jar:3.0.0]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) ~[spring-boot-3.0.0.jar:3.0.0]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1302) ~[spring-boot-3.0.0.jar:3.0.0]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1291) ~[spring-boot-3.0.0.jar:3.0.0]
at com.example.demo.DemoApplication.main(DemoApplication.java:10) ~[classes/:na]
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'localResponseCacheGatewayFilterFactory' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Unsatisfied dependency expressed through method 'localResponseCacheGatewayFilterFactory' parameter 1: Error creating bean with name 'concurrentMapCacheManager' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Failed to instantiate [org.springframework.cache.CacheManager]: Factory method 'concurrentMapCacheManager' threw exception with message: org/springframework/cache/caffeine/CaffeineCacheManager
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:793) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:543) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1161) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:561) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:326) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:254) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.addCandidateEntry(DefaultListableBeanFactory.java:1621) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:1585) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveMultipleBeans(DefaultListableBeanFactory.java:1476) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1363) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1325) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:880) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:784) ~[spring-beans-6.0.2.jar:6.0.2]
... 19 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'concurrentMapCacheManager' defined in class path resource [org/springframework/cloud/gateway/config/LocalResponseCacheAutoConfiguration.class]: Failed to instantiate [org.springframework.cache.CacheManager]: Factory method 'concurrentMapCacheManager' threw exception with message: org/springframework/cache/caffeine/CaffeineCacheManager
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:652) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:640) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1161) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:561) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:326) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:324) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:254) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1405) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1325) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:880) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:784) ~[spring-beans-6.0.2.jar:6.0.2]
... 36 common frames omitted
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cache.CacheManager]: Factory method 'concurrentMapCacheManager' threw exception with message: org/springframework/cache/caffeine/CaffeineCacheManager
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:171) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:648) ~[spring-beans-6.0.2.jar:6.0.2]
... 50 common frames omitted
Caused by: java.lang.NoClassDefFoundError: org/springframework/cache/caffeine/CaffeineCacheManager
at org.springframework.cloud.gateway.config.LocalResponseCacheAutoConfiguration.concurrentMapCacheManager(LocalResponseCacheAutoConfiguration.java:75) ~[spring-cloud-gateway-server-4.0.0-RC2.jar:4.0.0-RC2]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:139) ~[spring-beans-6.0.2.jar:6.0.2]
... 51 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.springframework.cache.caffeine.CaffeineCacheManager
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) ~[na:na]
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[na:na]
... 57 common frames omitted
Fixes #2803 @spencergibb
Source: gharchive pull request, created 2022-12-07 — spring-cloud/spring-cloud-gateway, https://github.com/spring-cloud/spring-cloud-gateway/pull/2807 (license: Apache-2.0). Participant: ruansheng8.
691827705 | Getting 'StructuredQuery.from cannot have more than one collection selector' from reactive repositories
Spring Boot parent version - 2.3.2.RELEASE
Spring Cloud version - Hoxton.SR6
When running a load test on my application, I get the exception below quite frequently.
org.springframework.cloud.gcp.data.firestore.util.ObservableReactiveUtil$StreamingObserver.onError(ObservableReactiveUtil.java:126) at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:453) at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426) at io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: StructuredQuery.from cannot have more than one collection selector. at io.grpc.Status.asRuntimeException(Status.java:533)
INVALID_ARGUMENT: StructuredQuery.from cannot have more than one collection selector. at io.grpc.Status.asRuntimeException(Status.java:533)
I am not querying multiple collections in a single query; in fact, I extend FirestoreReactiveRepository in its plain form for my collections and write query methods in it (just like JPA). I get these errors for 3 repositories across my 2 Spring Boot services, so I assume it isn't a problem with just one of my collections. This happens when there is heavy load on the service; running a small batch of requests works fine.
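For context, such a repository looks roughly like this (a sketch only — the entity and method names are illustrative, not taken from the reporter's project):

```java
// Hypothetical derived-query repository; Spring Data parses the method
// name into a Firestore query, much like Spring Data JPA does for SQL.
public interface SeasonMetadataRepository
        extends FirestoreReactiveRepository<SeasonMetadata> {

    // Matches documents whose 'end' field is greater than the given timestamp
    Flux<SeasonMetadata> findByEndGreaterThan(Timestamp end);
}
```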
@dmitry-s PTAL.
@meltsufin @dmitry-s Any updates on this issue? I am getting the same error, although I am accessing a single collection only.
When I attempt to access Firestore, I first get the error "io.grpc.StatusRuntimeException: UNAVAILABLE: io exception", and later, when I retry the same code, I get "io.grpc.StatusRuntimeException: INVALID_ARGUMENT: StructuredQuery.from cannot have more than one collection selector."
The issue is also not consistent: the same code works fine in most cases but sometimes shows this behavior.
I am also using FirestoreReactiveRepository
@manishjain5238 Could you provide the source code so we can try to replicate the issue?
@dmitry-s, I experienced this issue and managed to resolve it, but I'm not sure exactly why my solution worked. Here's some code roughly the same as what was causing my problem:
private Mono<SeasonMetadata> getSeasonMetadata(Timestamp timestamp) {
    Flux<SeasonMetadata> candidates =
        seasonMetadataRepository.findByEndGreaterThan(timestamp);
    Mono<SeasonMetadata> seasonMetadata =
        candidates
            .filter(metadata -> metadata.getStart().compareTo(timestamp) <= 0)
            .next(); // <- annotation 1
    return seasonMetadata;
}
public Flux<SeasonStats> updateResults() {
Mono<SeasonMetadata> currentSeasonMetadata = getSeasonMetadata(Timestamp.now());
Flux<Player> players = playerRepository.findAll();
return players.flatMap( // <- annotation 2
player ->
currentSeasonMetadata.flatMap(
metadata -> {
Stats parent = new Stats();
parent.setId(metadata.getId());
SeasonStats playerStats = new SeasonStats();
playerStats.setPlayerId(player.getId());
return firestoreTemplate.withParent(parent).save(playerStats);
}));
}
The call to FirestoreTemplate::save in updateResults() was not successful. When I changed the call from next() to publishNext() at the line marked annotation 1, the problem went away. I also realized that I should invert the nested flatMap calls at annotation 2 to look like:
currentSeasonMetadata.flatMapMany(
metadata ->
players.flatMap(
player -> {
^ And in that case, either next() or publishNext() worked fine.
However, since I'm somewhat new to Reactor, I'm not sure how this could cause the query to be built incorrectly. Any updates would be appreciated. Thanks!
@mhbarney I tried replicating the issue using your code, but it works fine for me.
Would you be able to provide an application that reproduces the error?
Thanks!
@mhbarney I finally was able to reproduce it. I'll let you know when we have updates.
@mhbarney the issue is caused by double subscribing to the currentSeasonMetadata. It happens because you have currentSeasonMetadata.flatMap inside of players.flatMap. That Mono doesn't currently support double subscription, and I will create a fix for allowing that.
But in this case I think you don't actually want to fetch new currentSeasonMetadata for every player, so my suggestion would be to call the cache() method on that Mono, like so:
Mono<SeasonMetadata> currentSeasonMetadata = getSeasonMetadata(Timestamp.now()).cache();
And that should fix your issue.
Let me know if that works for you.
Thanks!
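For readers less familiar with Reactor, the difference between re-subscribing (one query per consumer) and a cached result (one shared query) can be sketched in plain Java with CompletableFuture. This is a hypothetical, simplified analogy, not the actual Mono mechanics; the class and method names are illustrative only:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class CacheAnalogy {

    // "Cold" lookup: every consumer builds a fresh future, like each
    // subscription to an uncached Mono re-running the Firestore query.
    static int coldQueryCount() {
        AtomicInteger queries = new AtomicInteger();
        Supplier<CompletableFuture<String>> cold =
                () -> CompletableFuture.supplyAsync(() -> {
                    queries.incrementAndGet();   // one "query" per consumer
                    return "season-metadata";
                });
        cold.get().join();
        cold.get().join();
        return queries.get();                    // 2 queries for 2 consumers
    }

    // "Cached" lookup: the future is created once and shared, in the
    // spirit of Mono#cache() — many consumers, one underlying query.
    static int cachedQueryCount() {
        AtomicInteger queries = new AtomicInteger();
        CompletableFuture<String> cached = CompletableFuture.supplyAsync(() -> {
            queries.incrementAndGet();
            return "season-metadata";
        });
        cached.join();
        cached.join();
        return queries.get();                    // 1 query for 2 consumers
    }

    public static void main(String[] args) {
        System.out.println("cold: " + coldQueryCount()
                + ", cached: " + cachedQueryCount());
    }
}
```

In the suggestion above, .cache() plays the role of the shared future: the season metadata is fetched once and replayed to every player instead of triggering a new subscription per player.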
@dmitry-s thank you, using cache() also worked. I'll also take a look at the code changes you made to learn more.
| gharchive/issue | 2020-09-03T09:59:46 | 2025-04-01T06:45:51.141532 | {
"authors": [
"adirepo",
"dmitry-s",
"manishjain5238",
"meltsufin",
"mhbarney"
],
"repo": "spring-cloud/spring-cloud-gcp",
"url": "https://github.com/spring-cloud/spring-cloud-gcp/issues/2510",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
293841764 | Ribbon not loading a list of servers during spring boot startup or initialisation
Bug:
I want to use the listOfServers defined in a configuration file during Spring Boot startup (i.e. in a @PostConstruct annotation) to connect to an AWS S3 instance.
ribbon:
listOfServers: localhost:8080
ServerListRefreshInterval: 15000
Eureka is disabled.
This is being set using the
@RibbonClient(name = "test", configuration = TestConfiguration.class).
I am autowiring the load balancer client
@Autowired
private LoadBalancerClient loadBalancer;
eg: @PostConstruct
loadBalancer.choose("test").getUri().toString()
& loading the servers in a config file as
@Bean
public ServerList ribbonServerList(IClientConfig config) {
ConfigurationBasedServerList serverList = new ConfigurationBasedServerList();
serverList.initWithNiwsConfig(config);
return serverList;
}
But the list of servers during initialization is always empty, so I get back an exception that the Ribbon client cannot find the list of servers during Spring Boot startup.
: No up servers available from load balancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=s3,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:com.netflix.loadbalancer.ConfigurationBasedServerList@3efedc6f
WARN 19548 --- [main] c.netflix.loadbalancer.BaseLoadBalancer : LoadBalancer [s3]: Error choosing server for key default
Even though the serverList is defined in the application.yml file, if I don't read the serverList from the @PostConstruct annotation it works fine.
I looked online but could not find examples of using this in a @PostConstruct annotation, but I need it to be used there. Is there any other alternative?
user.zip
Attached a test project with which the issue is reproducible
Please learn how to format code on GitHub.
I would imagine that your @PostConstruct code is running before Ribbon is fully initialized. I am not sure what we could do about that.
Ok.
Yep, you are right, but the LoadBalancerClient is autowired, so I thought the server list would be set before we use it in @PostConstruct. We need the list of servers during initialisation, as there are calls to be made to the S3 server to create a bucket.
Do you reckon this is a bug? Otherwise, is there an alternative? Appreciate your reply.
We've generally not recommended using ribbon during initialization phases. This may help: http://cloud.spring.io/spring-cloud-static/Edgware.SR1/single/spring-cloud.html#ribbon-child-context-eager-load
Thanks, that property doesn't seem to have any effect; I'm still getting the same error. I have added it to the yaml file as follows.
say-hello:
ribbon:
eureka:
enabled: false
eager-load:
enabled: true
clients: localhost:8090,localhost:9092,localhost:9999
listOfServers: localhost:8090,localhost:9092,localhost:9999
ServerListRefreshInterval: 15000
Is that the correct way?
no, clients is the ribbon client name. In your case say-hello.
oops unfortunately even that doesn't seem to have any effect. Getting the same error.
user.zip
say-hello:
ribbon:
eureka:
enabled: false
eager-load:
enabled: true
clients: say-hello
listOfServers: localhost:8090,localhost:9092,localhost:9999
ServerListRefreshInterval: 15000
eager-load can't be under say-hello:. It needs to be:
ribbon:
eureka:
enabled: false
eager-load:
enabled: true
clients: say-hello
say-hello:
ribbon:
listOfServers: localhost:8090,localhost:9092,localhost:9999
ServerListRefreshInterval: 15000
Still no luck though, getting the same error. In the startup log I noticed that the ribbon.eager-load property is matched:
RibbonAutoConfiguration#ribbonApplicationContextInitializer matched:
- @ConditionalOnProperty (ribbon.eager-load.enabled) matched (OnPropertyCondition)
Not sure what else to say:
We've generally not recommended using ribbon during initialization phases.
Just wondering: if adding the ribbon.eager-load.enabled property does not list the servers during startup, could this be a bug?
@RohitGupta31 only if you provide a minimal project that recreates the problem.
Hi Rohit,
Try deleting
@Bean
public IRule ribbonRule(IClientConfig config) {
return new AvailabilityFilteringRule();
}
from your SayHelloConfiguration.java
@spencergibb I have the same issue as @RohitGupta31. I added the following lines in application.yaml:
ribbon:
eager-load:
enabled: true
clients: merchant-security
eureka:
enabled: false
merchant-security:
ribbon:
listOfServers: localhost:8081, localhost:8082
ServerListRefreshInterval: 15000
I used @FeignClient:
@FeignClient(name = "merchantSecurityPermissionClient", serviceId = "merchant-security")
In the method "public void configure(HttpSecurity http) throws Exception" of the Configuration class (this class extends ResourceServerConfigurerAdapter), I called the @FeignClient and it returned an error:
Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: merchant-security
However, after Spring Boot StartUp, I call @FeignClient again and it works fine.
I have the same problem; I removed the ribbonRule and it works fine. Thanks @ArtemZubenko.
Why?
Is it correct not to use ribbonRule?
Closing due to age of the question. If you would like us to look at this issue, please comment and we will look at re-opening the issue.
I have the same problem. I pushed my code to git: https://github.com/kanghouchao/oldheaven.git
When I run a test in Restlet for this URL: http://localhost:8080/test, the first time I get the right response.
When I request this URL a second time, I get an exception: com.netflix.client.ClientException
Before the exception I get a warning: No up servers available from load balancer
@kanghouchao unfortunately, your sample is very complex and there are no instructions on how to reproduce it. Please simplify your sample and provide instructions.
@spencergibb thanks for your suggestion. I pushed as simple a demo as I could to GitHub: https://github.com/kanghouchao/sample-springcloud.git
About instructions, I don't know how to explain that demo
sorry for my poor english
I launch the Starters of two modules in Spring Boot, the server and the client.
I want to use the Feign client in the module named client to get the message from the module named server.
The RequestMapping in the server is registered in zookeeper-3.4.13.
And I think the error comes from Ribbon. I added Ribbon but I didn't use it; there is no configuration for Ribbon.
That's all, thanks again.
@spencergibb I forgot to say that I used the zookeeper dependencies in the client
@spencergibb If I don't use the zookeeper dependencies configuration, everything is OK
@spencergibb or use RestTemplate.getForObject(...)
| gharchive/issue | 2018-02-02T10:29:58 | 2025-04-01T06:45:51.162547 | {
"authors": [
"ArtemZubenko",
"RohitGupta31",
"anhtuanluu36",
"kanghouchao",
"raulvillalbamedina",
"ryanjbaxter",
"spencergibb",
"spring-issuemaster"
],
"repo": "spring-cloud/spring-cloud-netflix",
"url": "https://github.com/spring-cloud/spring-cloud-netflix/issues/2705",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
405071873 | Zuul takes 5-6 seconds to route request via URL
Hi,
I have a two Spring Boot application running (A & B) on one server, and the Zuul API Gateway (Spring) running on a another server (under the same network).
Application A will make a request to the gateway, and gateway will route the request to application B and return the results back. Everything is working correctly, just that zuul takes 5-6 seconds just to route the request. However, subsequent requests will be very fast (for around 15 seconds).
If it's hard to understand:
Request 1: (Time = 1:00:00) Time taken: 5 seconds
Request 2: (Time = 1:00:05) Time taken: 0.2 seconds
Request 3: (Time = 1:00:08) Time taken: 0.2 seconds
Request 4: (Time = 1:00:10) Time taken: 0.2 seconds
Request 5: (Time = 1:00:15) Time taken: 5 seconds
and so on...
I have enabled debug logging on my gateway, and found that this is the part where it takes 5 seconds in zuul:
2019-01-31 11:48:19 [[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'] DEBUG org.apache.http.wire.wire -line no:[87]- http-outgoing-0 >> "{"param":"Y"}"
2019-01-31 11:48:25 [[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'] DEBUG org.apache.http.wire.wire -line no:[73]- http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
So, I'm suspecting that after the initial connection, Zuul keeps the connection alive (for only 15 seconds), and therefore I have to 'reconnect' after this time? Is there a way I can configure this? I've been playing around with many different Zuul properties to no avail.
My application properties:
zuul.routes.app.url=
server.servlet.contextPath=/api
ribbon.eureka.enabled=false
ribbon.listOfServers=
ribbon.ServerListRefreshInterval=10000
ribbon.eager-load.enabled=true
ribbon.eager-load.clients=app
hystrix.command.default.execution.timeout.enabled= false
ribbon.ConnectTimeout = 100000
ribbon.ReadTimeout = 100000
zuul.routes.auth.stripPrefix=false
server.use-forward-headers=true
management.security.enabled=true
zuul.sensitive-headers=
zuul.add-host-header=true
logging.level.com.netflix.loadbalancer.LoadBalancerContext=DEBUG
zuul.host.connect-timeout-millis=50000
zuul.host.socket-timeout-millis=50000
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
| gharchive/issue | 2019-01-31T04:06:18 | 2025-04-01T06:45:51.170591 | {
"authors": [
"spring-issuemaster",
"weeyen91"
],
"repo": "spring-cloud/spring-cloud-netflix",
"url": "https://github.com/spring-cloud/spring-cloud-netflix/issues/3368",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
106072057 | Use of @EnableScheduling in hystrix-amqp can cause problems with other TaskScheduler beans
I am working on a project that makes use of Spring's WebSocket support via @EnableWebSocketMessageBroker. Next, I decide to enable circuit breaker support by adding @EnableCircuitBreaker. When I try to run the app, it throws an exception:
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:1.2.5.RELEASE:run (default-cli) on project demo: An exception occured while running. null: InvocationTargetException: More than one TaskScheduler exists within the context. Remove all but one of the beans; or implement the SchedulingConfigurer interface and call ScheduledTaskRegistrar#setScheduler explicitly within the configureTasks() callback. No qualifying bean of type [org.springframework.scheduling.TaskScheduler] is defined: expected single matching bean but found 2: messageBrokerSockJsTaskScheduler,taskScheduler
The problem is that hystrix-amqp internally sets @EnableScheduling as seen here. Enabling scheduling in this way creates a taskScheduler bean, which is expected by ScheduledAnnotationBeanPostProcessor. However, only ONE TaskScheduler is expected. This means that this situation can happen whenever there are any task scheduler beans declared, and in this case, because a messageBrokerSockJsTaskScheduler bean is also created for websocket support, we now have two beans, and the post processor throws an exception.
If I had set the @EnableScheduling annotation in my app, I could take responsibility at this point and configure the default scheduler, like the following:
@Configuration
public class TaskSchedulingConfiguration implements SchedulingConfigurer {
@Autowired
private TaskScheduler taskScheduler;
@Override
public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
taskRegistrar.setScheduler(this.taskScheduler);
}
}
This is in fact, how I worked around this issue in my app. The problem is that I didn't set the @EnableScheduling annotation in my app, and it's presumptuous of me to decide which TaskScheduler to set as default, since I didn't configure either one of them. While this may be a potential solution for this issue within spring-cloud, a better solution may be to not use @EnableScheduling, and thus avoid the post processor magic.
I created a demo project to illustrate this issue.
Cc: @scottfrederick @mstine @rstoyanchev
Since Spring 4.2 ScheduledAnnotationBeanPostProcessor fixes this by choosing a bean named taskScheduler if there is more than one. IMO it's nothing really directly to do with Spring Cloud anyway (since you would have the same issue essentially using Spring Integration with Spring Web Sockets).
| gharchive/issue | 2015-09-11T19:00:38 | 2025-04-01T06:45:51.177517 | {
"authors": [
"dsyer",
"royclarkson"
],
"repo": "spring-cloud/spring-cloud-netflix",
"url": "https://github.com/spring-cloud/spring-cloud-netflix/issues/538",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
151206815 | Unclosed spans with Feign + Hystrix
When an error occurs on the server being accessed by the Feign Client, it appears that the span created on the request is not being closed.
Here is a sample project:
https://github.com/arabori/springsleuthdemo
You can see messages like these in the log:
2016-04-26 15:13:37.851 WARN [fooservice,66998a15b2b08a9c,35bc664d66066377,false] 25978 --- [ix-fooservice-1] o.s.cloud.sleuth.util.ExceptionUtils : Tried to close span but it is not the current span: [Trace: 66998a15b2b08a9c, Span: 66998a15b2b08a9c, exportable=false]. You may have forgotten to close or detach [Trace: 66998a15b2b08a9c, Span: 35bc664d66066377, exportable=false]
@arabori - you've raised an issue in the best way possible, thank you very much for that. It saved me tons of work :) I've based a lot on your code when writing a test and the impl for fixing this issue.
BTW did you sign the https://support.springsource.com/spring_committer_signup ? Cause something tells me that a PR will come from your side soon ;)
Thanks! I tried to come up with a fix, but couldn't find a proper way to test it, other than looking at the logs. It seems that was actually the easiest way :)
@marcingrzejszczak - Just signed it, I'll try to keep up with the project, and contribute when possible, thanks a lot :)
| gharchive/issue | 2016-04-26T18:29:28 | 2025-04-01T06:45:51.181493 | {
"authors": [
"arabori",
"marcingrzejszczak"
],
"repo": "spring-cloud/spring-cloud-sleuth",
"url": "https://github.com/spring-cloud/spring-cloud-sleuth/issues/257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
258117310 | Latest sleuth 2.0 milestone is broken wrt latest spring boot 2.0 milestone
sleuth 2.0.0.M2 (as well as prior 2.0 milestones) is broken after upgrade to spring boot 2.0.0.M4
TraceHandlerInterceptor refers to ErrorController which has (finally) been moved to the new package. This was raised several month ago in #592 and has finally fired.
Yup we know and we fixed it AFAIR but some other issues occurred and the build is broken currently
@marcingrzejszczak thanks for the quick response, I'll wait for the new milestone build
| gharchive/issue | 2017-09-15T17:55:57 | 2025-04-01T06:45:51.183358 | {
"authors": [
"eric239",
"marcingrzejszczak"
],
"repo": "spring-cloud/spring-cloud-sleuth",
"url": "https://github.com/spring-cloud/spring-cloud-sleuth/issues/700",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
789017099 | LeaseAwareVaultPropertySource no longer logs INFO messages when the supplied Vault location is not resolvable: Not found
After migrating to spring-boot-starter-parent 2.4.2 and spring-cloud-vault-config/spring-cloud-starter-vault-config 3.0.0 LeaseAwareVaultPropertySource no longer logs INFO messages if I intentionally add some non-existing Vault path to spring.config.import. I wanted to ask if it's by design or it's a bug.
If you would like us to spend some time helping you to diagnose the problem, please spend some time describing it and, ideally, providing a minimal sample that reproduces the problem.
legacy.zip
modern.zip
Thanks a lot. The issue is caused by the logging config not being configured yet at the time the Vault interaction is happening. You can use for now spring.cloud.vault.fail-fast=true to get feedback from the app if the startup of a Vault component fails.
spring.cloud.vault.fail-fast=true is already being used but it doesn't provide any useful feedback.
As it looks right now, the lifecycle is controlled from Spring Boot and we don't have any means to do anything useful here. It would make sense to follow up in https://github.com/spring-projects/spring-boot/issues.
After consulting with the Boot Team, we could work around the logging issue by obtaining a specific logger that captures startup logs and prints these later on.
I've opened an issue to help with this on the Boot side: https://github.com/spring-projects/spring-boot/issues/24988. I think that's a prerequisite to fixing this as you need a DeferredLogFactory, not just a Log that is deferred.
"authors": [
"Asky-GH",
"mp911de",
"wilkinsona"
],
"repo": "spring-cloud/spring-cloud-vault",
"url": "https://github.com/spring-cloud/spring-cloud-vault/issues/565",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
135856517 | initial commit for application.properties
Mongo database properties in a property file added.
What's the purpose of adding MongoDB settings in a props file? The defaults work out of the box with a default installation of MongoDB.
Hey Gregturn,
Thanks for your reply.
Our team was going through this POC and invested a lot of time connecting to the remote Mongo machine. So, we came up with the appropriate solution for it. This might help other people who need the same thing.
FWIW I think this is a good idea, but the whole file should be commented out (since the defaults work fine).
I agree @dsyer .
I prefer simply linking to http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-mongodb instead of maintaining something that could go out-of-sync.
That's also a valid point of view. I guess as long as the user has some way to find out more everyone should be happy.
Thanks a ton @gregturn . This helps us as well as other teams who may need the same.
| gharchive/pull-request | 2016-02-23T20:35:57 | 2025-04-01T06:45:51.197251 | {
"authors": [
"cooligc",
"dsyer",
"gregturn"
],
"repo": "spring-guides/gs-accessing-mongodb-data-rest",
"url": "https://github.com/spring-guides/gs-accessing-mongodb-data-rest/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
412339243 | Multipart upload disabled in application.properties
Probably a mistake (https://github.com/spring-guides/gs-uploading-files/commit/fcdafe4db828a5ed91806dbc3b842fb05e461c7e)?
Fixed in bd40296 (but it was a deprecated property that didn't actually affect the functionality, so it was harmless).
| gharchive/issue | 2019-02-20T09:58:32 | 2025-04-01T06:45:51.198747 | {
"authors": [
"dsyer"
],
"repo": "spring-guides/gs-uploading-files",
"url": "https://github.com/spring-guides/gs-uploading-files/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1905853419 | Replace spring-batch with spring-ai in GH templates
There were still a lot of occurrences of spring-batch in all the GH templates.
You might want to enable GH Discussions for the project.
some of these links don't exist. I'll make a pass to clean it up and create an issue for anything that remains... we aren't releasing to maven central just yet ! ;)
Thanks
merged as cffa790a8becf93f6aab25bcf9775cc8a2909a7a
thanks
| gharchive/pull-request | 2023-09-20T23:41:20 | 2025-04-01T06:45:51.206958 | {
"authors": [
"jexp",
"markpollack"
],
"repo": "spring-projects-experimental/spring-ai",
"url": "https://github.com/spring-projects-experimental/spring-ai/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2544222964 | Fixed anthropic stream not return finish reason
Thank you for taking time to contribute this pull request!
Fixes issue https://github.com/spring-projects/spring-ai/issues/1248
You might have already read the [contributor guide][1], but as a reminder, please make sure to:
Sign the contributor license agreement
Rebase your changes on the latest main branch and squash your commits
Add/Update unit tests as needed
Run a build and make sure all tests pass prior to submission
Hi @Claudio-code , thank you for catching and fixing the case when we have an empty generation list.
Rebased, squashed and merged at b468354dd321de36cff3d4b063309668e136c6ce
| gharchive/pull-request | 2024-09-24T03:51:06 | 2025-04-01T06:45:51.210001 | {
"authors": [
"Claudio-code",
"tzolov"
],
"repo": "spring-projects/spring-ai",
"url": "https://github.com/spring-projects/spring-ai/pull/1401",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1961479722 | Fix Super Stream Example in Docs
Back port of issue #3546
Closed by 7a92b41af
| gharchive/issue | 2023-10-25T13:49:50 | 2025-04-01T06:45:51.210811 | {
"authors": [
"garyrussell"
],
"repo": "spring-projects/spring-amqp",
"url": "https://github.com/spring-projects/spring-amqp/issues/2548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
329552034 | Add licence and notice to all artifacts
remove the epl-license; we haven't had an erlang module since 1.4
cherry-pick to 2.0.x, 1.7.x
... and cherry-picked to 2.0.x.
Back-ported to 1.7.x after some conflicts resolution in the build.gradle.
| gharchive/pull-request | 2018-06-05T17:23:30 | 2025-04-01T06:45:51.212473 | {
"authors": [
"artembilan",
"garyrussell"
],
"repo": "spring-projects/spring-amqp",
"url": "https://github.com/spring-projects/spring-amqp/pull/761",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1285685033 | StringBasedR2dbcQuery incorrectly binds params
I have such query:
@Query("select :param1, :param2;")
fun findSomething(@Param("param1") param1: String, @Param("param2") param2: Int): Mono<Void>
When both params are provided as params and within query then everything seems OK.
But when I modify the code (remove one of params from query) like:
@Query("select :param1, 2021;")
fun findSomething(@Param("param1") param1: String, @Param("param2") param2: Int): Mono<Void>
then I could observe strange results.
My test looks like (written in Kotlin):
@Test
fun name() {
walletsRepository.findSomething("some-string-param", 123).verifyErrorResponse(Throwable::class.java)
}
verifyErrorResponse is a custom extension method which verifies that an exception is thrown (because the repository method returns Mono<Void>, it cannot be mapped to anything)
I have tried to debug the code and found out that there is something wrong with named parameter binding.
In the end I could observe the following logs, which indicate that the query was triggered with wrong params:
2022-06-27 13:35:53.807 [Test worker] DEBUG o.s.r2dbc.core.DefaultDatabaseClient : Executing SQL statement [select $1, 2021;]
2022-06-27 13:46:18.642 [Test worker] DEBUG io.r2dbc.h2.client.SessionClient : Request: select $1, 2021 {1: 123}
Spring Data detects if named parameters are used within the query and if so, the parameters are bound by name. Parameters not used in the query (parameter name does not occur within the SQL query) are bound by index as Spring Data assumes you're using native bind markers.
Mixing named and index parameters within the same query asks for trouble as a mixed binding can easily mix up the parameter order. That being said, the second parameter that is not used in the query, can only be bound via index and that is the result you experience here.
The issue was noticed because the initial query required 2 parameters.
After some time, as the code evolved, the next version of the query required only the second param (the method signature did not change), which caused the above-mentioned issue.
My expectation as a user is that if within a @Query I have a statement like SELECT :some-param-name and the method signature contains a param with the same name, this param would be bound by name (because in the query I use its name and not its index) and not by index (regardless of the number of method signature params). Otherwise, it could happen that if someone forgets to modify the method signature, the code would start to behave in a wrong way that would be really hard to investigate (especially if both params have the same type, like numbers).
My expectation as a user is that if within a @Query I have a statement like SELECT :some-param-name and the method signature contains param with the same name this param would be bound by name (because in the query I use its name and not index) and not by index (regardless of a number of method signature params).
This is true for the first parameter. Here we have a second parameter that is not used. We cannot tell whether the argument is used as $2, ? or with another binding marker.
It would be possible to switch to named parameters-only once we find that at least one parameter is being referenced by name but that would break queries like SELECT :myparam, $2.
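The decision described above can be illustrated with a small sketch. Note this is not Spring Data's actual code; the class, helper name, and logic are hypothetical and only mimic the behavior explained in the comments: a parameter whose :name occurs in the SQL is bound by name, and any parameter not referenced by name falls back to index binding.

```java
import java.util.regex.Pattern;

public class BindingSketch {

    // Illustrative only: a parameter referenced as :name in the query is
    // bound by name; otherwise it falls back to index binding.
    static String bindingFor(String sql, String paramName, int index) {
        boolean referencedByName = Pattern
                .compile(":" + Pattern.quote(paramName) + "\\b")
                .matcher(sql)
                .find();
        return referencedByName ? "name:" + paramName : "index:" + index;
    }

    public static void main(String[] args) {
        // Original query: both params referenced by name -> both bound by name.
        String q1 = "select :param1, :param2";
        System.out.println(bindingFor(q1, "param1", 0)); // name:param1
        System.out.println(bindingFor(q1, "param2", 1)); // name:param2

        // Modified query: param2 is no longer referenced -> it is bound by
        // index, which is how 123 ended up bound in the logs shown earlier.
        String q2 = "select :param1, 2021";
        System.out.println(bindingFor(q2, "param1", 0)); // name:param1
        System.out.println(bindingFor(q2, "param2", 1)); // index:1
    }
}
```

This makes the reported surprise concrete: removing a named reference from the SQL silently flips that argument to index binding, even though the method signature is unchanged.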
| gharchive/issue | 2022-06-27T11:48:32 | 2025-04-01T06:45:51.272782 | {
"authors": [
"mp911de",
"smilasek"
],
"repo": "spring-projects/spring-data-relational",
"url": "https://github.com/spring-projects/spring-data-relational/issues/1275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Why has LocalDateTimeToTimestampConverter not been added to Jsr310TimestampBasedConverters?
there is no LocalDateTimeToTimestampConverter
My Spring Data JDBC version is 2.4.18.
| gharchive/issue | 2024-05-29T10:44:24 | 2025-04-01T06:45:51.274581 | {
"authors": [
"CC-fake"
],
"repo": "spring-projects/spring-data-relational",
"url": "https://github.com/spring-projects/spring-data-relational/issues/1797",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189967013 | Consider improving Reactor chain and context propagation
From https://github.com/spring-projects/spring-graphql/issues/328#issuecomment-1069628870:
(Emphasis mine)
Currently we decorate each DataFetcher and propagate to it Reactor context from the web layer. Apart from that they are not in a single Reactor chain and Reactor context from one cannot be seen by others. This is something we might try to improve in the future, but for now if you want pass something across data fetchers, just use GraphQLContext.
I think that #459 might have solved this, as the chain of contexts (ThreadLocal, Reactor context, GraphQL context) is now consistent for the entire execution.
We do have improved context propagation after #459. However, Reactor DataFetchers are still not in a single chain of execution given that graphql.execution.AsyncExecutionStrategy expects each to return CompletableFuture and joins them as such for the final result. So the main way to pass context from one DataFetchers to others remains via GraphQLContext, which should be fine to use. I'm going to close this for now as there isn't anything further we intend to do here at this time.
| gharchive/issue | 2022-04-01T15:33:11 | 2025-04-01T06:45:51.331859 | {
"authors": [
"bclozel",
"ciscoo",
"rstoyanchev"
],
"repo": "spring-projects/spring-graphql",
"url": "https://github.com/spring-projects/spring-graphql/issues/346",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
945217036 | Automated Registration of Querydsl Repositories
The Spring GraphQL Querydsl integration makes it easy to create a DataFetcher from a QuerydslPredicateExecutor but it's still necessary to bind the resulting DataFetcher to the schema manually via RuntimeWiring.Builder.
We can automate this by matching fields in the schema that correspond to the generic type of the QuerydslPredicateExecutor. However, it should probably be an explicit mechanism, to avoid unintended side effects. Perhaps an @GraphQlRepository stereotype.
This is now in the main branch. If a repository is annotated with @GraphQlRepository it is automatically registered for top-level queries whose return type matches the repository domain type. We can now extend this further in #99 with similar support for Spring Data Repositories, including queries and mutations.
| gharchive/issue | 2021-07-15T10:06:26 | 2025-04-01T06:45:51.334166 | {
"authors": [
"rstoyanchev"
],
"repo": "spring-projects/spring-graphql",
"url": "https://github.com/spring-projects/spring-graphql/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2495365052 | Invalid sub-module reference from module 'module3.submodule32' to module 'module3.submodule31'
I was testing Spring Modulith 1.3.0-M2 a bit and created a project with the following structure:
commons
module1
module2
module3
submodule31
submodule32
submodule32 is using Module31Interface like in the example below:
https://github.com/aribeth007/modulith-example/blob/main/src/main/java/com/example/modulith/module3/submodule32/internal/Module32Service.java#L12
submodule32 also has the allowed dependencies specified: https://github.com/aribeth007/modulith-example/blob/main/src/main/java/com/example/modulith/module3/submodule32/package-info.java
While running the Modularity Validation Test I am getting the following exception:
org.springframework.modulith.core.Violations: - Invalid sub-module reference from module 'module3.submodule32' to module 'module3.submodule31' (via c.e.m.m.s.internal.Module32Service -> c.e.m.m.s.Module31Interface)!
at org.springframework.modulith.core.Violations.and(Violations.java:119)
at java.base/java.util.stream.ReduceOps$1ReducingSink.accept(ReduceOps.java:80)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1787)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:657)
at org.springframework.modulith.core.ApplicationModules.detectViolations(ApplicationModules.java:475)
at org.springframework.modulith.core.ApplicationModules.verify(ApplicationModules.java:440)
at com.example.modulith.ModularityTest.verifiesModularStructure(ModularityTest.java:12)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
I was expecting the modularity test to pass. However I am not sure if I am doing anything wrong.
Thank you!
This is fixed in the latest snapshots. We missed checking for sibling relationships, but that's in place now.
| gharchive/issue | 2024-08-29T19:21:23 | 2025-04-01T06:45:51.338703 | {
"authors": [
"aribeth007",
"odrotbohm"
],
"repo": "spring-projects/spring-modulith",
"url": "https://github.com/spring-projects/spring-modulith/issues/787",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
450327349 | Update to Spring 4.3.24.RELEASE
Backport of #1685
Fixed via efccb8cac8bdb61e8ef7d8fe1bfc03379d540ed2
| gharchive/issue | 2019-05-30T14:11:00 | 2025-04-01T06:45:51.343478 | {
"authors": [
"jzheaux"
],
"repo": "spring-projects/spring-security-oauth",
"url": "https://github.com/spring-projects/spring-security-oauth/issues/1691",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
295819249 | (Re)add OSGi Manifest header
In Version 2.8.0 the Manifest-file of all springfox .jars is looking like this:
Manifest-Version: 1.0
Build-Time: 2018-01-14T16:14:02-0600
Built-With: gradle-4.4.1, groovy-2.4.12
Implementation-Title: springfox-schema
Implementation-Version: 2.8.0
Built-On: ISDV161716L.local/192.168.1.163
Built-By: d_krishnan
Created-By: 1.7.0_79 (Oracle Corporation)
No OSGi Manifest Header Information can be found.
In https://github.com/springfox/springfox/pull/368 and https://github.com/springfox/springfox/issues/119 this seemed to be already discussed and solved.
Did you remove OSGi support on purpose? If not, can you re-add it?
Or am I missing something and there are OSGi-enabled versions of your .jars available?
Looks like it... after moving to Gradle it would appear to have regressed.
I did not work with gradle too much before. But looking at it, it seems you just need to add
apply plugin: 'osgi' to the build.gradle file.
This is the official OSGi Plugin for Gradle: https://docs.gradle.org/current/userguide/osgi_plugin.html
According to its documentation, the necessary import and export statements are added automatically:
The classes in the classes dir are analyzed regarding their package dependencies and the packages they expose. Based on this the Import-Package and the Export-Package values of the OSGi Manifest are calculated.
That said, as soon as Export-Package and Import-Package declarations appear in the MANIFEST.MF, it worked.
I could try to test this myself later if you want.
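For reference, a minimal build.gradle sketch of what applying the Gradle 'osgi' plugin looked like around Gradle 4.x — the header names below are illustrative, and note that the plugin was later deprecated (Gradle 5) and removed (Gradle 6):

```groovy
// build.gradle — hypothetical sketch for Gradle 4.x
apply plugin: 'java'
apply plugin: 'osgi'

jar {
    manifest {
        // With the plugin applied, Import-Package and Export-Package are
        // derived automatically from the compiled classes; additional
        // headers can be set as instructions.
        instruction 'Bundle-Vendor', 'springfox'
        instruction 'Bundle-Description', 'springfox-schema'
    }
}
```

With this applied, running `gradle jar` should produce a MANIFEST.MF containing the calculated Import-Package and Export-Package headers in addition to the custom instructions.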
| gharchive/issue | 2018-02-09T10:15:37 | 2025-04-01T06:45:51.376762 | {
"authors": [
"dilipkrish",
"lukasHoel"
],
"repo": "springfox/springfox",
"url": "https://github.com/springfox/springfox/issues/2244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
381017563 | Example object with RequestBody support
[ ] Question.
I know the 'defaultValue' property is no longer supported for RequestBody. Does the 'Example' property work for 'RequestBody'? I do not see any sample in the docs for it.
Tried using SpringFox 2.9.2, following code for reference:
@ApiParam(name = "body", value = "Request body in JSON format...", required = true,
    examples = @io.swagger.annotations.Example(
        value = @ExampleProperty(
            mediaType = MediaType.APPLICATION_JSON_VALUE,
            value = "{foo: whatever, bar: whatever2}")))
@RequestBody String body,
I would like to display one large JSON document as the example value for the request body, and clicking 'Try it out' should show the same in edit mode. Please refer to the following URL for the behavior I expect: https://petstore.swagger.io/#/store/placeOrder
Also note that I am not passing any objects in parameters to method, it is simple 'String' type request body as I have mentioned above in my code snippet.
Please suggest.
Originally posted by @vaibhavn in https://github.com/springfox/springfox/issues/853#issuecomment-438263148
I would appreciate if anyone has any updates or workarounds? Thanks!
+1
+1
any updates?
Check check
any updates?
| gharchive/issue | 2018-11-15T06:23:58 | 2025-04-01T06:45:51.383449 | {
"authors": [
"alcmontejo",
"ironijunior",
"minhnguyenvan95",
"njord4",
"vaibhavn",
"ysfAskri"
],
"repo": "springfox/springfox",
"url": "https://github.com/springfox/springfox/issues/2783",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
380668631 | Set deleted status on vpc peer deletion.
Closes #1843.
Thanks!
| gharchive/pull-request | 2018-11-14T12:06:31 | 2025-04-01T06:45:51.407291 | {
"authors": [
"markchalloner",
"spulec"
],
"repo": "spulec/moto",
"url": "https://github.com/spulec/moto/pull/1945",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2529057112 | Add new About page with project details and team members
Created an "About" page with data from our contributors. Also that item was added to the main menu and spacing was standardized for menu items.
TODO:
[x] Add page intro text
[x] Add big list of contributors text
[x] Add Diversity section text
[x] Decide what to do about the remaining content and layout suggestions. I would like @ccordoba12's input on this.
I don't have any issue with your suggestions, CAM, but we will need to wait for @ccordoba12 to give his input.
>be Andres
>go to Files Changed tab
>click Add suggestion to batch
>repeat on other suggestions
>click Commit Suggestions
>write banger commit message
>click Commit changes
>whole dev team cheered
Got it, thank you for your detailed tutorial.
I'm still waiting for the contributors' missing data, but the infrastructure to handle it is already there, so maybe this can be reviewed, @CAM-Gerlach? Check the tooltip on @ccordoba12's profile on the About page.
@conradolandia, the suggestions left by @CAM-Gerlach were for me to review since I was the one that came up with the initial text (fortunately I was ok with most of them, except one that I already fixed). So, for next time please don't be so eager to apply them. Thanks!
In addition, you can add multiple suggestions in a single commit for next time (instead of applying them one by one). That's described here (see Add suggestion to batch), which is also the way we prefer to apply suggestions in all our repos.
Thanks @ccordoba12 for the heads up. I did the first one, and then tried the batch, but for some reason, the buttons appeared disabled for me. I will wait for your comments before approving anything.
| gharchive/pull-request | 2024-09-16T17:33:30 | 2025-04-01T06:45:51.428392 | {
"authors": [
"CAM-Gerlach",
"ccordoba12",
"conradolandia"
],
"repo": "spyder-ide/spyder-website",
"url": "https://github.com/spyder-ide/spyder-website/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
57952530 | IPython Console: Cannot interrupt started processes
From ccordoba12 on 2012-06-04T20:42:46Z
This is a serious one: IPython doesn't let you interrupt computations on remote kernels, which is precisely what we're using to communicate with the variable explorer.
Fortunately the widget has a 'custom_interrupt' attribute that (hopefully) we could use to send the right signal to the right kernel. See: https://github.com/ipython/ipython/blob/1170c58b456d80fe1c81b24fd4728181c1d94b4d/IPython/frontend/qt/console/frontend_widget.py#L561 and the necessary signal here: https://github.com/ipython/ipython/blob/4b8de0203d2eec87c2d05c3521df8af3365f73a4/IPython/zmq/kernelmanager.py#L925
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1078
From ccordoba12 on 2012-06-15T20:10:14Z
I have code in my personal repo that solves this issue (and also kernel restarts). It's still not ready to push but I think it's not necessary for anyone else to invest time on it.
From ccordoba12 on 2012-06-18T11:56:44Z
Jed, could you test that this change works as expected and close the issue? It would be a matter of hitting Ctrl+C to stop a computation or using the option I introduced in the new menu.
Cc: jed.lud...@gmail.com
From ccordoba12 on 2012-06-18T11:48:24Z
This issue was updated by revision e62ec90fb10d .
Before the console was reporting 'Kernel process is either remote or
unspecified. Cannot interrupt.' Now a Ctrl+C will get a proper interrupt.
From jed.lud...@gmail.com on 2012-06-19T06:41:08Z
Seems to be working correctly on Windows.
Status: Verified
From ccordoba12 on 2012-07-05T20:14:35Z
Blocking: spyderlib:1053
From publieke...@gmail.com on 2015-02-02T03:00:22Z
Was this patch applied to the main repository? In Spyder 2.3.2, pressing control-C or clicking the 'stop the current command' button still gives the same error message for a remote kernel: 'Kernel process is either remote or unspecified. Cannot interrupt'
Would be great to have the possibility to interrupt remote kernels.
From ccordoba12 on 2015-02-02T05:53:14Z
External kernels can't be interrupted because of limitations in the IPython architecture. We'll see if we can do something about it in our next version, but it's a tricky issue.
The patch you're referring to was applied a long time ago to interrupt kernels started by Spyder itself.
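For kernels Spyder starts itself, "interrupting" ultimately amounts to delivering SIGINT to the kernel's own process. A minimal stand-alone sketch of that mechanism on POSIX — the child process here is only a stand-in for a busy kernel, not Spyder's actual code path:

```python
import os
import signal
import subprocess
import sys
import time

# Stand-in for a kernel stuck in a long computation.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(1)  # give the child interpreter time to start

# What a frontend's interrupt button boils down to when it owns the PID:
os.kill(child.pid, signal.SIGINT)
child.wait(timeout=10)
print("child stopped, return code:", child.returncode)
```

This only works because the frontend knows (and owns) the kernel's PID, which is exactly why external/remote kernels cannot be interrupted the same way; on Windows the mechanism is different again (CTRL_BREAK_EVENT to a process group rather than SIGINT).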
From ccordoba12 on 2015-02-15T16:14:34Z
Labels: -Component-IPython
| gharchive/issue | 2015-02-17T17:14:21 | 2025-04-01T06:45:51.439902 | {
"authors": [
"spyder-issues-migrator"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/1078",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
57967697 | Mac app - Update to Qt 4.8.4 for HDPI
From julian.h...@gmail.com on 2013-03-31T04:47:49Z
Spyder looks very blurry on Macs with retina display.
See https://bugreports.qt-project.org/browse/QTBUG-23870 for discussion on qt4 with HiDPI support.
Attachment: 130331-0001.png
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1337
From ccordoba12 on 2013-04-11T11:42:13Z
Ok, got it. I have to update Qt to 4.8.4 because HDPI support was activated in that release.
Summary: Mac app - Update to Qt 4.8.4 for HDPI (was: Support for qt4-hidpi)
From ccordoba12 on 2013-04-16T15:17:51Z
Labels: MS-v2.2
From ccordoba12 on 2013-04-11T11:14:45Z
Hard thing to fix for us because there is no official support for HDPI in Qt4. Please keep us posted when things are fixed in the bug you reported.
Status: Accepted
Labels: -Component-PyQt -Component-UI
From jed.lud...@gmail.com on 2013-04-29T15:08:37Z
Labels: MS-v2.2.1
From jed.lud...@gmail.com on 2013-04-29T15:10:06Z
Labels: -MS-v2.2
From julian.h...@gmail.com on 2013-10-07T15:15:39Z
Some user interface elements are now broken.
Attachment: 131008-0002.png
From ccordoba12 on 2013-06-19T13:19:03Z
Labels: -MS-v2.2.1 MS-v2.3
From ccordoba12 on 2013-10-07T13:19:54Z
I updated our app to Qt 4.8.5 which should solve this issue. Thanks for your patience.
Status: Fixed
Labels: -MS-v2.3 MS-v2.2.5
From ccordoba12 on 2013-10-10T06:44:19Z
I see, though I still haven't released our new dmg with Qt 4.8.5. Please wait until this weekend and report back.
| gharchive/issue | 2015-02-17T19:13:29 | 2025-04-01T06:45:51.452257 | {
"authors": [
"spyder-issues-migrator"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/1337",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
892450686 | Copy Paste not working
I've got no clue whether this issue has occurred to anyone else here. Since ~April, I've been unable to copy and paste my plots anywhere else, with plt.savefig or without it. Spyder suggests pressing Ctrl+C to copy a plot, but if I go into another app, say Twitter or Gmail, it does not let me paste the plot I copied. Has anyone faced a similar issue?
Hey @ElJdP, thanks for reporting. I'm going to ask you to reopen this issue with the information we ask in our issue template (i.e. Spyder version, Qt/PyQt versions, etc). All that is available in the dialog showed when you go to the menu Help > About Spyder.
Without that information we can't properly help. Please don't discard it for next time here nor in other projects that require it.
| gharchive/issue | 2021-05-15T14:01:21 | 2025-04-01T06:45:51.454018 | {
"authors": [
"ElJdP",
"ccordoba12"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/15630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Python interpreter
Description
What steps will reproduce the problem?
error 1 uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
Traceback
Traceback (most recent call last):
File "c:\epmia\instalaciones\python\python39\lib\site-packages\spyder\plugins\ipythonconsole\plugin.py", line 1309, in close_client
client.close()
File "c:\epmia\instalaciones\python\python39\lib\site-packages\spyder\plugins\ipythonconsole\widgets\client.py", line 576, in close
self.shellwidget.will_close(
File "c:\epmia\instalaciones\python\python39\lib\site-packages\spyder\plugins\ipythonconsole\widgets\shell.py", line 164, in will_close
self.spyder_kernel_comm.close()
File "c:\epmia\instalaciones\python\python39\lib\site-packages\spyder\plugins\ipythonconsole\comms\kernelcomm.py", line 144, in close
self.shutdown_comm_channel()
File "c:\epmia\instalaciones\python\python39\lib\site-packages\spyder\plugins\ipythonconsole\comms\kernelcomm.py", line 87, in shutdown_comm_channel
channel = self.kernel_client.comm_channel
AttributeError: 'NoneType' object has no attribute 'comm_channel'
Failed to send bug report on Github. response={'code': 401, 'json': {'message': 'Bad credentials', 'documentation_url': 'https://docs.github.com/rest'}}
Versions
Spyder version: 5.0.3
Python version: 3.9.2
Qt version: 5.12.10
PyQt5 version: 5.12.3
Operating System: Windows 10
Dependencies
# Mandatory:
atomicwrites >=1.2.0 : 1.4.0 (OK)
chardet >=2.0.0 : 4.0.0 (OK)
cloudpickle >=0.5.0 : 1.6.0 (OK)
cookiecutter >=1.6.0 : 1.7.3 (OK)
diff_match_patch >=20181111 : 20200713 (OK)
intervaltree >=3.0.2 : 3.1.0 (OK)
IPython >=7.6.0 : 7.24.1 (OK)
jedi =0.17.2 : 0.17.2 (OK)
jsonschema >=3.2.0 : 3.2.0 (OK)
keyring >=17.0.0 : 23.0.1 (OK)
nbconvert >=4.0 : 6.0.7 (OK)
numpydoc >=0.6.0 : 1.1.0 (OK)
paramiko >=2.4.0 : 2.7.2 (OK)
parso =0.7.0 : 0.7.0 (OK)
pexpect >=4.4.0 : 4.8.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=5.3 : 5.8.0 (OK)
pygments >=2.0 : 2.9.0 (OK)
pylint >=1.0 : 2.8.3 (OK)
pyls >=0.36.2;<1.0.0 : 0.36.2 (OK)
pyls_black >=0.4.6 : 0.4.7 (OK)
pyls_spyder >=0.3.2;<0.4.0 : 0.3.2 (OK)
qdarkstyle =3.0.2 : 3.0.2 (OK)
qstylizer >=0.1.10 : 0.2.0 (OK)
qtawesome >=1.0.2 : 1.0.2 (OK)
qtconsole >=5.1.0 : 5.1.0 (OK)
qtpy >=1.5.0 : 1.9.0 (OK)
rtree >=0.9.7 : 0.9.7 (OK)
setuptools >=39.0.0 : 49.2.1 (OK)
sphinx >=0.6.6 : 4.0.2 (OK)
spyder_kernels >=2.0.3;<2.1.0 : 2.0.3 (OK)
textdistance >=4.2.0 : 4.2.1 (OK)
three_merge >=0.1.1 : 0.1.1 (OK)
watchdog >=0.10.3;<2.0.0 : 1.0.2 (OK)
zmq >=17 : 22.1.0 (OK)
# Optional:
cython >=0.21 : None (NOK)
matplotlib >=2.0.0 : 3.4.2 (OK)
numpy >=1.7 : 1.20.3 (OK)
pandas >=1.1.1 : 1.2.4 (OK)
scipy >=0.17.0 : 1.6.3 (OK)
sympy >=0.7.3 : None (NOK)
Hi @yeiscop thank you for the feedback. Could you please update to at least Spyder 5.0.5 and check again? Also, just in case, the latest Spyder release available is 5.2.2
You can check it via our Windows installer available in our releases page: https://github.com/spyder-ide/spyder/releases/tag/v5.2.2
Let us know if updating or using the installer helps!
Closing due to lack of response
| gharchive/issue | 2022-03-24T18:15:43 | 2025-04-01T06:45:51.459151 | {
"authors": [
"dalthviz",
"yeiscop"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/17544",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1628144196 | Command-line input not being read properly
What steps will reproduce the problem?
Command-line input is read from the input box with F5 but not with select and F9 or with Ctrl+Enter if code is in a Spyder cell.
I tried thinking of an example where a selection or code cell could use the value there, but I could not come up with one. Taking that into account, I guess command-line arguments only make sense when running a full file, as @ccordoba12 said.
Maybe an example of how a code selection or cell could use the command-line input option would help us understand better what @pscobey wants to achieve? Could it be that the feature is something like a way to set up a context/namespace where selected code or code cells run?
Hello Daniel,
Thanks to you and Carlos for the follow-up. I think you're right ... Spyder can handle command-line parameters with no problem as long as there is just one program that uses them in the current file.
So let me explain how I encountered the "problem". I am teaching Python using a zyBooks online text, and I typically put several program examples from a given chapter section in a single file, each one within a Spyder cell. This is very convenient and time-saving when I'm instructing online, and had worked very well till I had two separate programs in two Spyder cells, both of which used command-line parameters.
If I ran the "file" then both programs in both cells ran and both used the same command-line parameters. But ... if I tried to run a cell by itself, no luck.
Anyway, now I know what's going on, so I can deal with it accordingly. Thanks again for your help! Spyder is a great tool for teaching Python online.
Best regards,
Porter Scobey
Department of Mathematics and Computing Science
Saint Mary's University
923 Robie Street
Halifax, NS, Canada
B3H 3C3
Tel: 902-420-5790
Web: http://cs.smu.ca/~porter
Putting multiple "scripts" that each expect to receive command-line input all within one file will of course not actually work normally with a regular Python interpreter, and isn't really how things are intended to work in Spyder, but it's perfectly possible to make it work. You have a few options; here are a couple (from most to least recommended).
The better design is to ensure your code for each "program" is inside a function, which can take an optional parameter with a list of CLI args to parse (instead of just reading sys.argv by default), which is useful for more than just this particular unusual situation. For example, if you're using argparse with code like this:
import argparse
def program_1():
parser = argparse.ArgumentParser()
parser.add_argument("test_arg_1")
parser.add_argument("--test-arg-2")
print(parser.parse_args())
Then, you can modify it to accept a parameter with the argv you want it to parse, as a list:
import argparse
def program_1(argv=None):
parser = argparse.ArgumentParser()
parser.add_argument("test_arg_1")
parser.add_argument("--test-arg-2")
print(parser.parse_args(argv))
If argv is not passed when program_1 is called (either in the file, or from the Console), it will still read from sys.argv as normal, but if a list of args is passed, then it will parse that instead:
>>> program_1(argv=["test_val_1", "--test-arg-2", "test val 2"])
Namespace(test_arg_1='test_val_1', test_arg_2='test val 2')
You could also have it convert argv to an argv list if it is passed as a string:
import argparse
import shlex
def program_1(argv=None):
if isinstance(argv, str):
argv = shlex.split(argv)
parser = argparse.ArgumentParser()
parser.add_argument("test_arg_1")
parser.add_argument("--test-arg-2")
print(parser.parse_args(argv))
so you could also do
>>> program_1(argv="test_val_1 --test-arg-2 'test val 2'")
Namespace(test_arg_1='test_val_1', test_arg_2='test val 2')
This would not only allow you to handle this situation, but also allow you and your students to easily import, run and test your programs from the Python interpreter and other modules without having to re-run a whole new Python process or do other hacks.
But speaking of "other hacks"—there is one other option, that wouldn't require modifying any code even if your program is written directly at the __main__ level. You can just manually set sys.argv with the arguments you want prior to running your code (though I don't recommend it as its more hacky, less flexible, wipes out what the actual sys.argv was, is lower level and is prone to mistakes). I.e.
import shlex
import sys
# As a list
args = ["test_val_1", "--test-arg-2", "test val 2"]
sys.argv = [""] + args
# As a string
args = "test_val_1 --test-arg-2 'test val 2'"
sys.argv = [""] + shlex.split(args)
Thanks very much for all the suggestions, but I'm teaching first-year students, so I think I'd just better go back to first principles and run each program separately ... 🙂
Porter
@pscobey, I'm glad you understand better now what's happening here and that you decided to change the way you're teaching Python with Spyder to sort it out.
Given that I think there's nothing else to discuss about it, I'm going to close this issue.
| gharchive/issue | 2023-03-16T19:35:03 | 2025-04-01T06:45:51.490365 | {
"authors": [
"CAM-Gerlach",
"ccordoba12",
"dalthviz",
"pscobey"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/20696",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
203067113 | Wrong binary in Python 3 wheels
I upgraded Spyder to 3.1.2 with sudo pip3 install --upgrade spyder on a Linux Mint 18.1 system. It installs a spyder3.desktop file into /usr/local/share/applications/ and spyder into /usr/local/bin/. The spyder3.desktop has the following contents:
[Desktop Entry]
Version=1.0
Type=Application
Name=Spyder3
GenericName=Spyder3
Comment=Scientific PYthon Development EnviRonment - Python3
TryExec=spyder3
Exec=spyder3 %F
Categories=Development;Science;IDE;Qt;
Icon=spyder3
Terminal=false
StartupNotify=true
MimeType=text/x-python;
but there is no executable with the name spyder3 in the path. After changing the entries TryExec and Exec to TryExec=spyder and Exec=spyder %F the spyder3.desktop file works again.
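A quick way to confirm which executables pip actually put on PATH (and whether `spyder3` exists at all) is `shutil.which`. A small sketch — the two names checked are just the ones from this report:

```python
import shutil

def find_executables(names=("spyder", "spyder3")):
    """Map each candidate executable name to its resolved path, or None."""
    return {name: shutil.which(name) for name in names}

for name, path in find_executables().items():
    print(f"{name}: {path or 'not found'}")
```

On an affected system this would show a path for `spyder` but `not found` for `spyder3`, matching the broken TryExec/Exec entries in the .desktop file.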
What steps will reproduce the problem?
Install Spyder with pip3 install spyder.
Try to start Spyder with the installed .desktop file.
Cannot find the executable spyder3.
What is the expected output? What do you see instead?
Spyder should start, but does not start due to wrong Exec entries in the .desktop file.
Versions and main components
Spyder Version: 3.1.2 (but also 3.1.1)
Python Version: 3.5.2
Pip Version: 9.0.1
Operating system: Linux Mint 18.1
Dependencies
jedi >=0.8.1 : 0.9.0 (OK)
matplotlib >=1.0 : 1.5.1 (OK)
nbconvert >=4.0 : 5.1.1 (OK)
numpy >=1.7 : 1.11.0 (OK)
pandas >=0.13.1 : 0.17.1 (OK)
pep8 >=0.6 : 1.7.0 (OK)
psutil >=0.3 : 5.0.1 (OK)
pyflakes >=0.6.0 : 1.5.0 (OK)
pygments >=2.0 : 2.2.0 (OK)
pylint >=0.25 : 1.6.5 (OK)
qtconsole >=4.2.0: 4.2.1 (OK)
rope >=0.9.4 : 0.9.4-1 (OK)
sphinx >=0.6.6 : 1.5.2 (OK)
sympy >=0.7.3 : 0.7.6.1 (OK)
I'm surprised nobody has noticed this error before. We'll fix it in 3.1.3 :-)
We decided to rename Python 3 Spyder executables to spyder3 instead of changing the desktop file (which we can't really do).
| gharchive/issue | 2017-01-25T10:39:27 | 2025-04-01T06:45:51.498983 | {
"authors": [
"ccordoba12",
"sphh"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/4050",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
210840251 | Mayavi 4.5.0 mlab.show does not work in Spyder 3.1.0
Description
Hi, I am using Python 2.X on Ubuntu 16.04 LTS with Spyder 3.1.0. What I am finding is that when I use Mayavi.mlab.show(), the console hangs. I tried it on IPython Console as well, which gives the following error:
Changing backend to Qt for Mayavi
Kernel died, restarting
What steps will reproduce the problem?
Please try the example shown in the following website:
http://docs.enthought.com/mayavi/mayavi/mlab.html#d-plotting-functions-for-numpy-arrays
What is the expected output? What do you see instead?
The expected output is a 3D mesh plot, but instead my console hangs.
Please provide any additional information below
Version and main components
Spyder Version: 3.1.3
Python Version: 2.7.12
Qt Versions: 4.8.7, PyQt4 (API v2) 4.11.4 on Linux
Dependencies
pyflakes >=0.5.0 : 1.5.0 (OK)
pep8 >=0.6 : 1.7.0 (OK)
pygments >=2.0 : 2.2.0 (OK)
qtconsole >=4.2.0: 4.2.1 (OK)
nbconvert >=4.0 : 5.1.1 (OK)
pandas >=0.13.1 : 0.19.2 (OK)
numpy >=1.7 : 1.12.0 (OK)
sphinx >=0.6.6 : 1.5.2 (OK)
rope >=0.9.4 : 0.10.3 (OK)
jedi >=0.9.0 : 0.9.0 (OK)
psutil >=0.3 : 5.1.3 (OK)
matplotlib >=1.0 : 2.0.0 (OK)
sympy >=0.7.3 : 1.0 (OK)
pylint >=0.25 : 1.6.5 (OK)
Any updates on this bug? I badly need to run Mayavi for 3D point cloud visualization. Could you please suggest any alternatives?
We'll take a look at it for 3.1.4, to be released in a couple of weeks.
What do you think is the problem? Can Spyder 3.1.3 work with earlier
Mayavi versions?
On Mar 3, 2017 2:45 PM, "Carlos Cordoba" notifications@github.com wrote:
We'll take a look at it for 3.1.4, to be released in a couple of weeks.
—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub
https://github.com/spyder-ide/spyder/issues/4212#issuecomment-284047165,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AKwmg487kVypyShXDt1ehcjPdK3_a4Xwks5riGl9gaJpZM4MOn_N
.
I can't reproduce your problem.
Before running your code, please run this in an IPython console
%gui qt
and try again.
By the way, I used the conda packages for VTK and Mayavi, so maybe the problem is in the Ubuntu packages.
I ran it in IPython console inside Spyder, and I got the following error:
"Changing back end to Qt for Mayavi
Kernel died, restarting."
What do you think might be going on? Any help would be highly appreciated.
Srini
I used pip packages for Mayavi and VTK. Could you try the packages through
pip?
Srini
VTK doesn't have pip packages.
I installed VTK through either the GitHub development branch or the Ubuntu
apt-get packages; I don't quite remember which one. What do you think might
be going on? By the way, I installed IPython separately through pip and ran
the same set of commands. I got the following error. Any idea what could be
going wrong?
In [1]: %gui qt

In [2]: from numpy import pi, sin, cos, mgrid
   ...: dphi, dtheta = pi/250.0, pi/250.0
   ...: [phi,theta] = mgrid[0:pi+dphi*1.5:dphi,0:2*pi+dtheta*1.5:dtheta]
   ...: m0 = 4; m1 = 3; m2 = 2; m3 = 3; m4 = 6; m5 = 2; m6 = 6; m7 = 4;
   ...: r = sin(m0*phi)**m1 + cos(m2*phi)**m3 + sin(m4*theta)**m5 + cos(m6*theta)**m7
   ...: x = r*sin(phi)*cos(theta)
   ...: y = r*cos(phi)
   ...: z = r*sin(phi)*sin(theta)

In [3]: from mayavi import mlab
   ...: s = mlab.mesh(x, y, z)
   ...: mlab.show()

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-...> in <module>()
----> 1 from mayavi import mlab
      2 s = mlab.mesh(x, y, z)
      3 mlab.show()

/usr/local/lib/python2.7/dist-packages/mayavi/mlab.py in <module>()
     25
     26 # Mayavi imports
---> 27 from mayavi.tools.camera import view, roll, yaw, pitch, move
     28 from mayavi.tools.figure import figure, clf, gcf, savefig, \
     29     draw, sync_camera, close, screenshot

/usr/local/lib/python2.7/dist-packages/mayavi/tools/camera.py in <module>()
     23 # We can't use gcf, as it creates a circular import in camera management
     24 # routines.
---> 25 from .engine_manager import get_engine
     26
     27

/usr/local/lib/python2.7/dist-packages/mayavi/tools/engine_manager.py in <module>()
     10
     11 # Local imports
---> 12 from mayavi.preferences.api import preference_manager
     13 from mayavi.core.registry import registry
     14 from mayavi.core.engine import Engine

/usr/local/lib/python2.7/dist-packages/mayavi/preferences/api.py in <module>()
      2
      3 # The global PreferenceManager instance
----> 4 from .preference_manager import preference_manager
      5 from .bindings import set_scene_preferences, get_scene_preferences

/usr/local/lib/python2.7/dist-packages/mayavi/preferences/preference_manager.py in <module>()
     27 from traits.etsconfig.api import ETSConfig
     28 from traits.api import HasTraits, Instance
---> 29 from traitsui.api import View, Group, Item
     30 from apptools.preferences.api import (ScopedPreferences, IPreferences,
     31                                       PreferencesHelper)

/usr/local/lib/python2.7/dist-packages/traitsui/api.py in <module>()
     34
     35 try:
---> 36     from .editors.api import ArrayEditor
     37 except ImportError:
     38     # ArrayEditor depends on numpy, so ignore if numpy is not present.

/usr/local/lib/python2.7/dist-packages/traitsui/editors/__init__.py in <module>()
     21
     22 try:
---> 23     from .api import ArrayEditor
     24 except ImportError:
     25     pass

/usr/local/lib/python2.7/dist-packages/traitsui/editors/api.py in <module>()
     22 from .button_editor import ButtonEditor
     23 from .check_list_editor import CheckListEditor
---> 24 from .code_editor import CodeEditor
     25 from .color_editor import ColorEditor
     26 from .compound_editor import CompoundEditor

/usr/local/lib/python2.7/dist-packages/traitsui/editors/code_editor.py in <module>()
     34 #-------------------------------------------------------------------------------
     35
---> 36 class ToolkitEditorFactory ( EditorFactory ):
     37     """ Editor factory for code editors.
     38     """

/usr/local/lib/python2.7/dist-packages/traitsui/editors/code_editor.py in ToolkitEditorFactory()
     46
     47     # Background color for marking lines
---> 48     mark_color = Color( 0xECE9D8 )
     49
     50     # Object trait containing the currently selected line (optional)

/usr/local/lib/python2.7/dist-packages/traits/traits.pyc in __call__(self, *args, **metadata)
    520
    521     def __call__ ( self, *args, **metadata ):
--> 522         return self.maker_function( *args, **metadata )
    523
    524 class TraitImportError ( TraitFactory ):

/usr/local/lib/python2.7/dist-packages/traits/traits.pyc in Color(*args, **metadata)
   1234     from traitsui.toolkit_traits import ColorTrait
   1235
-> 1236     return ColorTrait( *args, **metadata )
   1237
   1238 Color = TraitFactory( Color )

/usr/local/lib/python2.7/dist-packages/traitsui/toolkit_traits.pyc in ColorTrait(*args, **traits)
      5
      6 def ColorTrait ( *args, **traits ):
----> 7     return toolkit().color_trait( *args, **traits )
      8
      9 def RGBColorTrait ( *args, **traits ):

/usr/local/lib/python2.7/dist-packages/traitsui/toolkit.pyc in toolkit(*toolkits)
    159     try:
    160         with provisional_toolkit(toolkit_name):
--> 161             _toolkit = _import_toolkit(toolkit_name)
    162         return _toolkit
    163     except (AttributeError, ImportError) as exc:

/usr/local/lib/python2.7/dist-packages/traitsui/toolkit.pyc in _import_toolkit(name)
     81
     82 def _import_toolkit ( name ):
---> 83     return __import__( name, globals=globals(), level=1 ).toolkit
     84
     85

/usr/local/lib/python2.7/dist-packages/traitsui/qt4/__init__.py in <module>()
     16 # import pyface.qt before anything else is done so the sipapi
     17 # can be set correctly if needed
---> 18 import pyface.qt
     19
     20 #----------------------------------------------------------------------------

/usr/local/lib/python2.7/dist-packages/pyface/qt/__init__.py in <module>()
     31 except ImportError:
     32     try:
---> 33         prepare_pyqt4()
     34         import PyQt4
     35         qt_api = 'pyqt'

/usr/local/lib/python2.7/dist-packages/pyface/qt/__init__.py in prepare_pyqt4()
     15     # Set PySide compatible APIs.
     16     import sip
---> 17     sip.setapi('QDate', 2)
     18     sip.setapi('QDateTime', 2)
     19     sip.setapi('QString', 2)

ValueError: API 'QDate' has already been set to version 1
I don't know, sorry. We set PyQt4 API to #2 here:
https://github.com/spyder-ide/spyder/blob/master/spyder/utils/site/sitecustomize.py#L261
and yet you have an error about that.
I'm sorry, but since you're using a non-standard installation (using a mix of pip and system packages), it's up to you to figure this problem out.
My recommendation: simply install Anaconda/Miniconda and resume your work.
Hi Carlos,
I installed Conda and ran the set of codes. It worked fine. One thing is
that I had to use iPython to set %gui qt. How do I do that in the normal
Python console with Spyder? What can I do in Python console to get the
same effect as doing "%gui qt" in iPython console.?
Thanks,
Srini
You don't need to execute %gui qt in a Python console. Running your code without it should work fine.
But let me warn you that the Python console is going to be removed in Spyder 3.2, so please keep using the IPython console instead.
It does not work in Python console. It gives me the following error:
RuntimeError: Invalid Qt API 'pyqt5', valid values are: 'pyqt' or 'pyside'
Where do I set the Qt API value? Btw, what exactly does sitecustomize.py
do? I never cared to know it until now.
Srini
Hi Carlos,
I found out what the problem was. It seems like Mayavi 4.5.0 is not
compatible with PyQt 5.x. Conda automatically downgrades the PyQt from 5
to 4 when we install Mayavi 4.5.0. That's why Mayavi runs on Conda
distribution. Apparently, pip packages don't care about compatibility.
However, the downside is that now I can't install matplotlib 2.x without
PyQt being 5.x. At least, this is the configuration enforced in Conda
environment.
Srini
However, the downside is that now I can't install matplotlib 2.x without PyQt being 5.x
We're working to fix that in Continuum.
| gharchive/issue | 2017-02-28T16:35:50 | 2025-04-01T06:45:51.571098 | {
"authors": [
"ccordoba12",
"s0r2637"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/4212",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
57868876 | no output with ipython after pyreadline update
From haber...@schaeffler.com on 2011-04-28T08:33:53Z
What steps will reproduce the problem?
install newest version of pyreadline (1.7)
ipython hardly prints any output in spyder
Example: "dir()" gives no output, only "print dir()" does.
What version of the product are you using? On what operating system?
Spyder 2.0.10
Windows XP SP3 32
Please provide any additional information below
In the "normal" ipython command line it is still working.
With pyreadline 1.6.2 everything was fine.
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=646
From pierre.raybaut on 2011-04-28T12:21:05Z
This issue was closed by revision ada5063a2eec .
From pierre.raybaut on 2011-04-28T12:20:53Z
This issue was closed by revision 9f1c060ee1c7 .
Status: Fixed
| gharchive/issue | 2015-02-17T00:36:06 | 2025-04-01T06:45:51.577764 | {
"authors": [
"spyder-issues-migrator"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/646",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
298119054 | Spyder doesn't recognize TensorFlow GPU
Hello.
So I spent a good amount of time installing and tuning TensorFlow GPU version.
And through Anaconda prompt it works perfectly.
I run this piece of code to check if I can use GPU:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
And it gives me the result:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 11171278383842874612
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 3238717030
locality {
bus_id: 1
}
incarnation: 17164589319225185596
physical_device_desc: "device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0"
]
However if I run it in Spyder I'm getting that I can only use my CPU:
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 18858102688050260
]
I tried to open Spyder through the environment, since it's the only suggestion I could find on the Internet. Like
activate tensorflow
spyder
But still it shows I can use only CPU.
I also tried to update the spyder... still no luck. Spyder refuses to see that I can use GPU.
What the trick can be? How can I make Spyder see the GPU
Version and main components
Spyder Version: 3.2.6
Python Version: 3.6.1
Qt Versions: 5.6.2, PyQt5 5.6 on Windows
Dependencies
pyflakes >=0.6.0 : 1.5.0 (OK)
pycodestyle >=2.3: 2.3.1 (OK)
pygments >=2.0 : 2.2.0 (OK)
pandas >=0.13.1 : 0.20.1 (OK)
numpy >=1.7 : 1.12.1 (OK)
sphinx >=0.6.6 : 1.5.6 (OK)
rope >=0.9.4 : 0.9.4-1 (OK)
jedi >=0.9.0 : 0.10.2 (OK)
nbconvert >=4.0 : 5.1.1 (OK)
sympy >=0.7.3 : 1.0 (OK)
cython >=0.21 : 0.25.2 (OK)
qtconsole >=4.2.0: 4.3.0 (OK)
IPython >=4.0 : 5.3.0 (OK)
pylint >=0.25 : 1.6.4 (OK)
Thanks for reporting. Perhaps the problem has to do with Tensorflow and IPython, since that's the interpreter Spyder is running your code in? Please try in a standard IPython shell from the Anaconda prompt (and if it does work, a bare qtconsole instance with jupyter qtconsole); if it doesn't work in either, then the problem lies with them rather than with Spyder and you'll want to report it to that repo instead.
Also, just to be sure, make sure you update Python, IPython etc with
conda update conda
conda update anaconda
conda update python ipython
in base and the environment you installed tensorflow into. Also, IIRC, the tensorflow conda environment is old and not well supported, if I'm remembering that right.
I have just reinstalled everything from scratch (starting with Anaconda). Then Spyder refused to see first tensorflow, then keras, then pillow... I reinstalled those several times and opened Spyder under the tensorflow environment. Somehow it works now.
Thanks, and glad you got it working.
My Spyder also doesn't use the GPU. But if I open a terminal from Anaconda and use "python myfile.py", the program runs on the GPU. Do you have any suggestions besides re-installation? Thanks a lot.
Did you read through and follow the relevant guide on working with packages and environments in Spyder? Could be due to different working environments in Spyder vs. Anaconda Prompt (which I'm guessing is what you are referring to when you say "Terminal from Anaconda").
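A quick way to check for such an environment mismatch (a generic diagnostic sketch, not Spyder-specific) is to compare which interpreter each frontend is actually running:

```python
# Run this snippet twice: once in Spyder's IPython console, once in a
# python session started from the Anaconda Prompt. If the two paths differ,
# Spyder is running a different environment than the one tensorflow-gpu
# was installed into.
import sys

print(sys.executable)  # full path of the running Python interpreter
print(sys.prefix)      # root directory of the active environment
```

If the paths differ, point Spyder's Python interpreter setting (or launch Spyder from the activated environment) at the interpreter that has tensorflow-gpu installed.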
Hello,
I installed TensorFlow GPU 2.5 and it automatically installed CUDA and cuDNN. I installed Jupyter and checked the available devices, and it shows both CPU and GPU. But when I try it in Spyder, Spyder does not detect the GPU. Is there a problem between TensorFlow 2.5 and Spyder? I need a suggestion. Thank you
| gharchive/issue | 2018-02-18T21:16:55 | 2025-04-01T06:45:51.586145 | {
"authors": [
"CAM-Gerlach",
"Ehteshamciitwah",
"alenashilina",
"wentinghome"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/6475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
57869729 | Edit files from remote system through SSH
From techtonik@gmail.com on 2011-10-15T01:55:41Z
Badly need ability to edit remote files over SSH using local Spyder.
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=799
From pierre.raybaut on 2012-03-18T14:06:40Z
Labels: -Type-Enhancement Type-Enh
From techtonik@gmail.com on 2011-10-22T02:11:43Z
The advantage of cross-platform editors is that all their features are, well, cross-platform, so you don't need to think about the platform anymore once you've mastered them.
Adding SSH will surely add more dependencies to Spyder. Perhaps a plugin would be a better option. Should we add Component-Plugins label to this tracker to filter out features that would be better seen as external modules?
From ccordoba12 on 2011-10-20T19:30:36Z
Anatoly, since you're on Linux you can use sshfs, which lets you mount a directory in a remote machine as a local filesystem through ssh. Then you can use Spyder to open all the files you need from there.
Of course, you can add some functionality to do this process automatically from Spyder
Labels: Cat-Miscelleneous
From sylvain....@gmail.com on 2014-02-12T02:26:37Z
I have got a working version of a remote file explorer plugin. For now, my current version duplicates a lot of things from the existing file explorer and adds a dependency to paramiko. Will clean it up and make a PR.
From techtonik@gmail.com on 2014-02-12T04:37:04Z
That's cool. =)
From sylvain....@gmail.com on 2014-02-12T05:46:47Z
@-techtonik You can already check out the connection to a remote kernel in the PR20 on bitbucket.
@SylvainCorlay any feedback on the status of ?
| gharchive/issue | 2015-02-17T00:47:15 | 2025-04-01T06:45:51.594111 | {
"authors": [
"goanpeca",
"spyder-issues-migrator"
],
"repo": "spyder-ide/spyder",
"url": "https://github.com/spyder-ide/spyder/issues/799",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2216398557 | 🛑 37HRD - Website is down
In 1d4ad22, 37HRD - Website (https://37hrd.uk) was down:
HTTP code: 404
Response time: 268 ms
Resolved: 37HRD - Website is back up in 995029b after 11 minutes.
| gharchive/issue | 2024-03-30T11:30:10 | 2025-04-01T06:45:51.601089 | {
"authors": [
"matthewthowells"
],
"repo": "sqbxmediagroup/status",
"url": "https://github.com/sqbxmediagroup/status/issues/638",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
193499421 | Copy-SqlLogin - Script Analyzer Test failure
We're adding Pester tests and AppVeyor to dbatools, see https://github.com/sqlcollaborative/dbatools/wiki/AppVeyor-and-Pester for more detail.
The following rules were highlighted for this function:
| RuleName | Severity | Line | Message |
|----------|----------|------|---------|
| PSAvoidUsingPlainTextForPassword | Warning | 117 | Parameter '$SourceSqlCredential' should use SecureString, otherwise this will expose sensitive information. See ConvertTo-SecureString for more information. |
| PSAvoidUsingPlainTextForPassword | Warning | 118 | Parameter '$DestinationSqlCredential' should use SecureString, otherwise this will expose sensitive information. See ConvertTo-SecureString for more information. |
| PSPossibleIncorrectComparisonWithNull | Warning | 139 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 139 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 171 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 171 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 180 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 180 | $null should be on the left side of equality comparisons. |
| PSAvoidUsingCmdletAliases | Warning | 196 | 'Where' is an alias of 'Where-Object'. Alias can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content. |
| PSAvoidUsingCmdletAliases | Warning | 205 | 'Where' is an alias of 'Where-Object'. Alias can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content. |
| PSPossibleIncorrectComparisonWithNull | Warning | 223 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 223 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 242 | $null should be on the left side of equality comparisons. |
| PSPossibleIncorrectComparisonWithNull | Warning | 242 | $null should be on the left side of equality comparisons. |
| PSAvoidUsingCmdletAliases | Warning | 285 | '%' is an alias of 'ForEach-Object'. Alias can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content. |
| PSUseDeclaredVarsMoreThanAssignments | Warning | 285 | The variable 'passtring' is assigned but never used. |
| PSUseDeclaredVarsMoreThanAssignments | Warning | 299 | The variable 'sid' is assigned but never used. |
| PSAvoidUsingCmdletAliases | Warning | 299 | '%' is an alias of 'ForEach-Object'. Alias can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content. |
see https://github.com/sqlcollaborative/dbatools/issues/627 , aka "the issue that tracked them all"
| gharchive/issue | 2016-12-05T13:43:33 | 2025-04-01T06:45:51.627198 | {
"authors": [
"SQLDBAWithABeard",
"niphlod"
],
"repo": "sqlcollaborative/dbatools",
"url": "https://github.com/sqlcollaborative/dbatools/issues/372",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
201722700 | Find-DbaLongRunningQuery
Is this a feature OR bug:
Enhancement
https://trello.com/c/kK7JNqMG
Additional comments
@wsmelton
"Purpose would not be to find blocking tree or anything...that you can get by calling sp_whoisactive with the right parameters. This would just be pulling session duration since start and outputting if any are over particular threshold, or just return all. More or less what you can get from whoisactive, maybe just in format to do quick checks or something."
Created as Get-DbaQueryExecutionTime
I think I'd actually like to add an alias for this Find-DbaLongRunningQuery. Maybe.... prolly not 🤔
| gharchive/issue | 2017-01-18T23:28:50 | 2025-04-01T06:45:51.629967 | {
"authors": [
"SirCaptainMitch",
"potatoqualitee"
],
"repo": "sqlcollaborative/dbatools",
"url": "https://github.com/sqlcollaborative/dbatools/issues/552",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
690213545 | [Bug] Get-DbaEndpoint - wrong Fqdn on docker (update: and on special network setup)
I need your help to analyse an issue I have with availability groups on docker.
The problem is that the Fqdn (full qualified domain name) of the mirroring endpoint returned by Get-DbaEndpoint is not suitable for Add-DbaAgReplica.
I used this demo: https://www.sqlservercentral.com/articles/sql-server-alwayson-with-docker-containers
$credential = New-Object -TypeName PSCredential -ArgumentList "sa", (ConvertTo-SecureString -String 'MssqlPass123' -AsPlainText -Force)
$instance = Connect-DbaInstance -SqlInstance localhost:2500, localhost:2600 -SqlCredential $credential
$instance.Name
$instance.ComputerName
$instance.DomainInstanceName
$instance.NetName
(Get-DbaEndpoint -SqlInstance $instance -Type DatabaseMirroring).Fqdn
This gives me:
localhost,2500
localhost,2600
localhost
localhost
db1
db2
db1
db2
TCP://galloper:5022
TCP://galloper:5022
galloper is the name of my laptop. I need TCP://db1:5022 and TCP://db2:5022.
If you use https://dbatools.io/docker/ the first two lines must change to:
$credential = New-Object -TypeName PSCredential -ArgumentList "sqladmin", (ConvertTo-SecureString -String 'dbatools.IO' -AsPlainText -Force)
$instance = Connect-DbaInstance -SqlInstance localhost:1433, localhost:14333 -SqlCredential $credential
Maybe there is a way to setup my laptop or my docker environment, that the correct Fqdn is returned.
You have to run the code from within the container, the article referenced is doing the same thing. Our code is running [System.Net.Dns]::GetHostEntry and since you run this from your local laptop it will always resolve the localhost value returned by SMO to your machine and not the container.
Containers cannot be "remotely" managed for everything like physical VM because you are binding to your local machine to connect to those containers.
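The resolution behavior described above can be illustrated with a short sketch (Python here for brevity; the module itself uses .NET's [System.Net.Dns]::GetHostEntry, which behaves the same way for this purpose): resolving "localhost" is always answered by the machine running the code, so any name derived from it refers to the local host, never to the container listening behind the forwarded port.

```python
import socket

# "localhost" always resolves on the machine running this code, so a
# name derived from it is the local host's name -- never the name of a
# container that happens to be reachable through a forwarded local port.
print(socket.gethostname())               # this machine's name (e.g. "galloper")
print(socket.gethostbyname("localhost"))  # loopback address, e.g. 127.0.0.1
```

This is why the endpoint Fqdn comes back as TCP://galloper:5022 on the laptop, while running the same lookup inside the container would yield db1 or db2.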
Okay, this could work - but do we have dbatools inside the container? The goal is to test the dbatools commands related to availability groups against linux systems with SQL Server.
At https://dbatools.io/docker/ @potatoqualitee says: "Next, we’ll setup a sample availability groups. Note that since it’s referring to “localhost”, you’ll want to execute this on the computer running Docker." And that's what I did.
When I change the endpoint of the replica bevor creating the availability group, it works.
We do not publish a container with the module included...YET 😉
I do have VS Code Codespaces and Dev Containers configured in the devcontainers branch if you would like to test using that mechanism. This method will put the module on the container (dbatools1).
Oh yes, I read about that (on twitter?) - will try that tomorrow.
I just realized that this will be a problem at one client where we want to set up a dedicated network for the communication inside the availability group.
Let me explain the planned setup: One network (say 10.0.0.x) as public network for admin work and client connections, a separate network (say 192.168.0.x) just for the two cluster nodes with the SQL Server instances. We plan to create endpoints to listen on 192.168.0.x:5022 while the instances listen on 10.0.0.x:1433.
In the SQL to setup the availability group we would say:
“CREATE AVAILABILITY GROUP […] FOR REPLICA ON […] WITH (ENDPOINT_URL = N'TCP://192.168.0.x:5022', […]"
Now there is no way to do this with New-DbaAvailabilityGroup, as the URL is dynamically built inside of Get-DbaEndpoint. We would need a parameter -EndpointUrl as an alternative to -Endpoint at Add-DbaAgReplica to get the correct replica SMO. The difficult part would be to change New-DbaAvailabilityGroup, as we need a parameter that takes all the endpoint URLs and passes them to the corresponding Add-DbaAgReplica.
Maybe this is just an edge case that dbatools are not made for. Then this is just for documentation and I’ll use SQL at that client.
If you create those endpoints prior to calling New-DbaAvailabilityGroup then you simply pass the name of those endpoint(s) to -Endpoint and it should work fine.
Sorry, no. The Fqdn is not part of the endpoint object in SQL Server. The Fqdn is dynamically created at runtime of New-DbaAvailabilityGroup.
The Fqdn is dynamically created at runtime of New-DbaAvailabilityGroup
It should not do this then and instead look up the configured endpoint for the instance.
Let me repeat this one more time: "The Fqdn is not part of the endpoint object in SQL Server"
The problem is that the Fqdn (full qualified domain name) of the mirroring endpoint returned by Get-DbaEndpoint is not suitable for Add-DbaAgReplica.
Since there is some level of confusion in the way you wrote up this issue, please close it and properly follow our template for bug reports. Clearly state what issue you found with Get-DbaEndpoint.
"authors": [
"andreasjordan",
"wsmelton"
],
"repo": "sqlcollaborative/dbatools",
"url": "https://github.com/sqlcollaborative/dbatools/issues/6786",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
653504722 | Support adding reviewers to the backport PR
Wdyt about adding that support in both the CLI and the configuration file?
Sure! What do you think about backport --reviewer mdelapenya. And in the config file:
{
"reviewers": ["mdelapenya", "..."]
}
?
Love it!
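Merging the proposed --reviewer flag with the config file's "reviewers" array could be sketched like this (a Python illustration only; backport itself is a Node.js tool, and the precedence rule shown is an assumption, not the project's actual behavior):

```python
def resolve_reviewers(cli_reviewers, config):
    # Sketch: CLI-supplied reviewers come first, then the config file's
    # "reviewers" key; duplicates are dropped while preserving order.
    merged = (cli_reviewers or []) + config.get("reviewers", [])
    return list(dict.fromkeys(merged))

# Example: --reviewer mdelapenya plus a config listing two reviewers
reviewers = resolve_reviewers(["mdelapenya"],
                              {"reviewers": ["sqren", "mdelapenya"]})
# reviewers == ["mdelapenya", "sqren"]
```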
| gharchive/issue | 2020-07-08T18:24:09 | 2025-04-01T06:45:51.669008 | {
"authors": [
"mdelapenya",
"sqren"
],
"repo": "sqren/backport",
"url": "https://github.com/sqren/backport/issues/212",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
407878873 | [RELEASE] 0.1.3-19, 0.1.3, 0.1.4-1
Release 0.1.3-19 to fix @misk/core failed artifact in 0.1.3-16
Use as stable 0.1.3 release
Start new prerelease sequence at 0.1.4-1
latest: 0.1.3
alpha: 0.1.4-1
[CLOSES #3] deyarn all @misk/ packages, Dockerfile, and releasing flow
[CLOSES #145] Add mandatory ci-build to prepare before any NPM Publish for all @misk packages
[#123] Finish releasing updates to SimpleNetwork to prefix all dispatch functions and prevent function name collisions in rootDispatcher
[CLOSES #147] Manually set tsconfig.json for all @misk/ packages to fix CI builds
Again I think this change could benefit from more distinct commits. The bullet points you listed above seem like appropriate commit sizes.
| gharchive/pull-request | 2019-02-07T20:15:37 | 2025-04-01T06:45:51.691397 | {
"authors": [
"adrw",
"wesleyk"
],
"repo": "square/misk-web",
"url": "https://github.com/square/misk-web/pull/149",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
84052721 | Reply executor sharing in okhttp-ws.
It is somewhat frustrating that it is not possible to share the reply executor created in WebSocketCall. It would be cool to use a single executor for writing.
We don't have access to the executor on which you are writing. For all we know you're blocking a single thread and writing in a tight loop.
Oh, I see. I didn't think about the tight-loop case.
But anyway, the thought that a thread is spawned periodically and non-deterministically (I mean, on the server's behalf) makes me sad. Especially on Android.
Here's another counter-example, what if you never write and only read? How are pongs and responding to close supposed to happen?
Why not just pass an executor, like this?
public final class WebSocketCall {
public static WebSocketCall create(OkHttpClient client, Request request) {
ThreadPoolExecutor replyExecutor =
new ThreadPoolExecutor(1, 1, 1, SECONDS, new LinkedBlockingDeque<Runnable>(),
Util.threadFactory(String.format("OkHttp %s WebSocket", request.url()), true));
replyExecutor.allowCoreThreadTimeOut(true);
return create(client, request, replyExecutor);
}
public static WebSocketCall create(
OkHttpClient client, Request request, Executor executor) { /* ... */ }
// ... and so on
}
Even that has problems. If I enqueue 1000 writes on that executor and then receive a ping, the pong has to wait until those 1000 writes succeed in order to send the pong. The current behavior will write the pong between the first and second writes (or if the first write is framed it will go between the frames).
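The queue-jumping behavior described above, where control frames like pongs are written between queued data frames rather than behind them, can be sketched like this (a Python illustration of the idea, not OkHttp's actual implementation):

```python
from collections import deque

class FrameWriter:
    """Two-queue sketch: control frames (ping/pong/close) must not
    wait behind a backlog of enqueued data writes."""

    def __init__(self):
        self.data_frames = deque()
        self.control_frames = deque()

    def enqueue_data(self, payload):
        self.data_frames.append(("DATA", payload))

    def enqueue_pong(self, payload):
        # A pong must not wait until 1000 pending writes have finished.
        self.control_frames.append(("PONG", payload))

    def next_frame(self):
        # Control frames always win, so a pong is written between
        # data frames rather than after all of them.
        if self.control_frames:
            return self.control_frames.popleft()
        if self.data_frames:
            return self.data_frames.popleft()
        return None

w = FrameWriter()
w.enqueue_data(b"first")
w.enqueue_data(b"second")
w.enqueue_pong(b"ping-payload")
order = [w.next_frame()[0] for _ in range(3)]
# order == ["PONG", "DATA", "DATA"]
```

This is why a user-supplied executor is problematic: the library needs control over frame ordering that a caller's executor cannot guarantee.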
OK, you win. Everything I came up with would require breaking clients and changing the API greatly, which is not feasible for solving a minor issue like this.
Yeah I'm open to something better, but with all the requirements that are needed we couldn't come up with anything better. Hopefully you either have a ping-free peer which means the thread is never started or you have a ping-heavy peer which means the thread is always doing work and isn't interfering with your writes.
| gharchive/issue | 2015-06-02T14:39:34 | 2025-04-01T06:45:51.697075 | {
"authors": [
"JakeWharton",
"pepyakin"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/1681",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
893328239 | Unexpected CertificatePinner Invalid pattern Exception
java.lang.IllegalArgumentException: Invalid pattern: https://www.domain.com/
    at okhttp3.CertificatePinner$Pin
Method String.toCanonicalHost() is not working as expected.
I got this crash, although it was working on okhttp version 3.+.
You shouldn't be passing a URL there. It should be a host string or wildcard pattern.
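If you have a full URL, one way to derive the host pattern CertificatePinner expects is to strip the scheme and path first (a Python sketch; the helper name is hypothetical, not an OkHttp API):

```python
from urllib.parse import urlparse

def to_pin_pattern(url_or_host: str) -> str:
    # Hypothetical helper: normalize a full URL such as
    # "https://www.domain.com/" to the bare host string that
    # CertificatePinner takes as a pattern.
    parsed = urlparse(url_or_host)
    # urlparse only fills .netloc when a scheme like "https://" is present,
    # so bare hosts and wildcard patterns pass through unchanged.
    return parsed.netloc or url_or_host

print(to_pin_pattern("https://www.domain.com/"))  # www.domain.com
print(to_pin_pattern("*.domain.com"))             # *.domain.com
```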
| gharchive/issue | 2021-05-17T13:22:44 | 2025-04-01T06:45:51.699115 | {
"authors": [
"AbdulrahmanGamal",
"yschimke"
],
"repo": "square/okhttp",
"url": "https://github.com/square/okhttp/issues/6681",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
198748178 | Allow setting image from image file url
The application should be able to set the desktop picture from a file URL.
func setWallpaper(path: String) throws {
guard let screens = NSScreen.screens() else { return }
let workspace = NSWorkspace.shared()
let url = URL(fileURLWithPath: path)
for screen in screens {
try workspace.setDesktopImageURL(url, for: screen, options: [:])
}
}
http://stackoverflow.com/questions/36185506/how-to-loop-through-all-mac-desktop-spaces
http://stackoverflow.com/questions/7547103/change-the-wallpaper-on-all-desktops-in-os-x-10-7-lion/9514471#9514471
http://stackoverflow.com/questions/15608476/changing-os-x-desktop-background-image-scaling-mode
| gharchive/issue | 2017-01-04T16:23:21 | 2025-04-01T06:45:51.704622 | {
"authors": [
"squarefrog"
],
"repo": "squarefrog/dailydesktop",
"url": "https://github.com/squarefrog/dailydesktop/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
317249571 | Rename env var GITBASE_UNSTABLE_SQUASH_ENABLE
Closes #249
Signed-off-by: Manuel Carmona manu.carmona90@gmail.com
@ajnavarro done :+1:
| gharchive/pull-request | 2018-04-24T14:21:33 | 2025-04-01T06:45:51.815658 | {
"authors": [
"mcarmonaa"
],
"repo": "src-d/gitbase",
"url": "https://github.com/src-d/gitbase/pull/250",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
361183915 | App: OKRs review
Update checkboxes with current status:
[x] lookout sdk
[x] lookoutd
[x] gemini
@smola formatting is updated to be consistent with #89
@smola shall we merge this guy?
| gharchive/pull-request | 2018-09-18T08:08:33 | 2025-04-01T06:45:51.822321 | {
"authors": [
"bzz"
],
"repo": "src-d/okrs",
"url": "https://github.com/src-d/okrs/pull/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
448409624 | Implement methods for syncing changes
Follows: Track changes for a Drive
Done. Supported as part of the drive request client.
| gharchive/issue | 2019-05-25T03:27:56 | 2025-04-01T06:45:51.823405 | {
"authors": [
"sreeise"
],
"repo": "sreeise/graph-rs",
"url": "https://github.com/sreeise/graph-rs/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2279944271 | Update of email ID
When attempting to update an email ID to one that already exists, the system is incorrectly returning a 200 response. The expected behavior is for an appropriate error message to be displayed.
Add a check based on an email address and return an error message stating that the email address already exists.
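The suggested check could be sketched as follows (a minimal in-memory illustration; the function and field names are hypothetical, not the project's actual code):

```python
def update_email(users: dict, user_id: str, new_email: str) -> dict:
    # Hypothetical sketch: users maps user_id -> email. Reject the update
    # if any OTHER user already owns the requested address, instead of
    # silently returning 200 OK (map this to e.g. HTTP 409 Conflict).
    duplicate = any(uid != user_id and email == new_email
                    for uid, email in users.items())
    if duplicate:
        raise ValueError("email address already exists")
    users[user_id] = new_email
    return users

store = {"u1": "a@example.com", "u2": "b@example.com"}
update_email(store, "u1", "c@example.com")  # ok
```

Note the `uid != user_id` guard: re-submitting your own current address should still succeed.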
| gharchive/issue | 2024-05-06T03:25:53 | 2025-04-01T06:45:51.830809 | {
"authors": [
"srivatsan1303"
],
"repo": "srivatsan1303/FINAL_PROJECT_USER_MANAGEMENT",
"url": "https://github.com/srivatsan1303/FINAL_PROJECT_USER_MANAGEMENT/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2149260334 | Replace utils.LinkByNameOrAlias with netlink.LinkByName
The netlink library now supports AltNames/Aliases so we use that feature.
The netlink library has an issue with querying altnames that exceed the 15 char length.
Refer to: https://github.com/vishvananda/netlink/issues/955
This is blocking us from completing this. Hence we need to wait for a fix.
This is sweet! Bonus points for making it into netlink 💪
| gharchive/pull-request | 2024-02-22T14:53:49 | 2025-04-01T06:45:51.832571 | {
"authors": [
"hellt",
"steiler"
],
"repo": "srl-labs/containerlab",
"url": "https://github.com/srl-labs/containerlab/pull/1908",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
115801599 | Added exception, changed classes struct and added logging instead of printing
Hopefully you've read my response on the tvmaze forums, so you know I want to get these exceptions in place!
It seems like this breaks the fuzzymatch concept though, and perhaps that's something that will need to be restructured to accommodate this model, but I may need your help figuring that out.
Doing a search for "utopia gb" raises the ShowNotFound error immediately when really it needs to error softly and continue in the fuzzymatch module, if that makes sense (sorry if this is a poor explanation).
As far as I can tell, we either need to add a "default parameter" to the show_search function and use it to not error when calling show_search from the fuzzymatch.py "parse_user_text" function, or we need to duplicate the show_search function and put it into fuzzymatch.py with different error handling. Let me know what you think or if you have a better suggestion!
This might not be the perfect place to discuss everything I have to say, but it'll have to do.
Here are some thoughts without any particular order:
1. I apologize that I submitted something that breaks current behavior. That's not cool and I should have tested it better.
2. I'm not sure that the whole fuzzymatch is something you should even handle. That is TVMaze's responsibility. As I see it, an API library such as this one needs to simply 'translate' the given API's capabilities to the end user. So for a query like 'get_shows', for example, just return a list of show objects and let the user decide what to do with it (exactly like you did with show_search, btw).
3. If you do want a more specific show retrieval method, I'd do something like this:
def get_show(maze_id=None, tvdb_id=None, tvrage_id=None, show_name=None):
if not any([maze_id, tvdb_id, tvrage_id, show_name]):
raise InsufficientParametersError
if maze_id:
do stuff
elif tvdb_id:
do other stuff
That way you can have a single method that a user can use to search given any parameter he might have.
4. You should keep your output consistent, meaning always return objects.
5. Using 'print' is ill-advised in a library, like we already discussed, because your code goes into other people's apps and you don't want to control their output. This goes back to custom exceptions and logging.
I hope you take these comments in good spirit because you already did a ton of good work that you can build on.
Another thing:
I forgot to mention that if you do want the user to have the ability to fine-tune the search via parameters like language, year, etc., you can change the method signature from above like this:
def get_show(maze_id=None, tvdb_id=None, tvrage_id=None, show_name=None, show_year=None, show_network=None, show_language=None):
if show_name:
show = get_show_by_search(show_name, show_year, show_network, show_language)
def get_show_by_search(show_name, show_year, show_network, show_language):
return query_endpoint(show_name=show_name, show_year=show_year, show_network=show_network, show_language=show_language)
or something like that.
The point is that you don't need the user to supply a loose string of text you need to parse; you can (and should) accept a closed set of optional information that the user can supply, IMO.
@liiight This is awesome, thank you! Lots to reply to but I have to run off for the day, just wanted to let you know that overall I agree and will post a more specific reply later, as well as testing and merging this.
No problem. Look into unittests library to add tests.
Already have been, it's on my ever-growing list of things to learn :)
To follow up on this:
I like all of your suggestions and am working on incorporating them right now, along with merging your PR. The only point I'm still debating is the whole fuzzymatch situation. I fully agree that this should be handled by the TVMaze site; however, I and many others have asked the owner to implement this and he seems pretty set on not doing it, at least not anytime soon. In the absence of that happening, are you suggesting that it fall to the user of this API to add their own matching functionality, or that I just refine the process I've created? Thanks!
Building on your suggestion, it would look something like this?
def get_show(maze_id=None, tvdb_id=None, tvrage_id=None, show_name=None,
show_year=None, show_network=None, show_language=None):
if maze_id:
return Show(show_main_info(maze_id, embed='episodes'))
elif tvdb_id:
return Show(show_main_info(lookup_tvdb(tvdb_id)['id'],
embed='episodes'))
elif tvrage_id:
return Show(show_main_info(lookup_tvrage(tvrage_id)['id'],
embed='episodes'))
if show_name:
show = get_show_by_search(show_name, show_year, show_network,
show_language)
def get_show_by_search(show_name, show_year, show_network, show_language):
shows = get_show_list(show_name=show_name)
# Code here to match against available parameters
# Return list of Show objects from the TVMaze "Show Search" endpoint
def get_show_list(name):
shows = show_search(name)
if shows:
return [
Show(show_main_info(show['show']['id'], embed='episodes'))
for show in shows
]
else:
raise ShowsNotFound(name + ' did not generate show list')
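One way the parameter-matching placeholder above could be filled in (a sketch, not the project's actual code; the payload field names are assumptions based on the public TVMaze API):

```python
def filter_candidates(candidates, year=None, network=None, language=None):
    # candidates: dicts shaped like the TVMaze search payload, assumed
    # to carry "premiered" (YYYY-MM-DD), "network": {"name": ...},
    # and "language" fields.
    def ok(c):
        if year and not str(c.get("premiered") or "").startswith(str(year)):
            return False
        if network and (c.get("network") or {}).get("name", "").lower() != network.lower():
            return False
        if language and (c.get("language") or "").lower() != language.lower():
            return False
        return True
    return [c for c in candidates if ok(c)]
```

Each supplied parameter narrows the candidate list, so a user can disambiguate "Utopia" by year or network without free-text parsing.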
@liiight
Made these changes in new "dev" branch (plus merged your PR), mind giving your thoughts when you have time?
I'm a little under the weather, but I hope to take a look soon.
Sorry to hear that, hope you feel better soon!
Looks really good dude, great job!
Awesome, thanks for all of your help! The only thing I think I'll change to reduce verbosity is the params in get_show, remove the show_ prefix so it looks like
def get_show(maze_id=None, tvdb_id=None, tvrage_id=None, name=None,
year=None, network=None, language=None, country=None):
Ok. I'd also recommend adding some documentation in comments of methods that deserve explanation
Will do
Comments/docstrings updated, feels ready to merge to me
Looks good to me too
| gharchive/pull-request | 2015-11-09T05:56:50 | 2025-04-01T06:45:51.860076 | {
"authors": [
"liiight",
"srob650"
],
"repo": "srob650/pytvmaze",
"url": "https://github.com/srob650/pytvmaze/pull/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2620092060 | 🛑 NZBGet is down
In 08277c3, NZBGet (https://nzbget.srueg.ch/ping) was down:
HTTP code: 530
Response time: 83 ms
Resolved: NZBGet is back up in 24297e3 after 43 minutes.
| gharchive/issue | 2024-10-29T04:28:55 | 2025-04-01T06:45:51.879215 | {
"authors": [
"srueg"
],
"repo": "srueg/upptime",
"url": "https://github.com/srueg/upptime/issues/1541",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1637110201 | 🛑 Keycloak is down
In 8df6c87, Keycloak (https://sso.srueg.ch/health) was down:
HTTP code: 530
Response time: 1338 ms
Resolved: Keycloak is back up in 9ed1f65.
| gharchive/issue | 2023-03-23T09:00:23 | 2025-04-01T06:45:51.881629 | {
"authors": [
"srueg"
],
"repo": "srueg/upptime",
"url": "https://github.com/srueg/upptime/issues/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
619431379 | Add support to HTTPS -> HTTP proxying (i.e. Cloudflare SSL)
Fixes #76
Cloudflare does not pass X-Forwarded-Port header but X-Forwarded-Proto only.
When this header is set to https and you have Flexible SSL enabled, it means the request was forwarded from https (visitor -> Cloudflare) to http (Cloudflare -> origin), i.e. the forwarded proto is https.
We should pretend the request was received over https locally as well (although it came over http on port 80), or all assets using template vars will be "broken" (using the http:// protocol instead of https://, as observed on the visitor's end).
Documentation is here.
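The header handling described above can be sketched as follows (a Python illustration; Batflat itself is PHP, and the header name follows the de-facto X-Forwarded-* convention Cloudflare uses):

```python
def effective_scheme(headers: dict, local_scheme: str = "http") -> str:
    # Cloudflare's Flexible SSL sends X-Forwarded-Proto but no
    # X-Forwarded-Port, so the proto header alone decides whether the
    # origin should pretend the request arrived over https.
    proto = headers.get("X-Forwarded-Proto", "").lower()
    return "https" if proto == "https" else local_scheme

print(effective_scheme({"X-Forwarded-Proto": "https"}))  # https
print(effective_scheme({}))                              # http
```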
Thanks @renatofrota
| gharchive/pull-request | 2020-05-16T09:58:05 | 2025-04-01T06:45:51.891220 | {
"authors": [
"michu2k",
"renatofrota"
],
"repo": "sruupl/batflat",
"url": "https://github.com/sruupl/batflat/pull/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
373639382 | Use ssb validate kvt queue
This supersedes #223 and is a bit faster: 172s on 18.5 vanilla and now 162s. The code is also cleaner, I think :)
Depends on https://github.com/ssbc/ssb-validate/pull/12.
I won't merge this without second eyes. It passes tests both in secure-scuttlebutt and scuttlebot. There is the async queue function that is a bit tricky. In db.queue I have used the same pattern as in https://github.com/ssbc/secure-scuttlebutt/blob/master/minimal.js#L213.
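The batching pattern referenced above, accumulating queued messages and appending them in one write, can be sketched like this (a Python illustration of the general AsyncWrite idea, not the actual flumedb/ssb-validate code):

```python
class BatchQueue:
    """Sketch: messages pile up in a queue while a write is in flight,
    then the whole backlog is appended as a single batch."""

    def __init__(self, append):
        self.append = append   # callback that persists a batch
        self.queue = []

    def push(self, msg):
        self.queue.append(msg)

    def flush(self):
        # Swap the queue out atomically so new pushes land in a fresh list.
        batch, self.queue = self.queue, []
        if batch:
            self.append(batch)
        return len(batch)

written = []
q = BatchQueue(written.append)
q.push({"seq": 1})
q.push({"seq": 2})
q.flush()  # both messages written as one batch
```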
I wish I had a better understanding of how these internals work; any chance you know of a guide/documentation for learning more about this code? Btw, I'm awful at parsing big diffs, so I ran standard --fix on both your branch and master and produced a smaller diff:
Diff
diff --git a/db.js b/db.js
index e37b12f..64207fd 100644
--- a/db.js
+++ b/db.js
@@ -1,5 +1,4 @@
var path = require('path')
-var ViewLevel = require('flumeview-level')
var ViewHashTable = require('flumeview-hashtable')
module.exports = function (dir, keys, opts) {
diff --git a/index.js b/index.js
index 19d6931..3b3bee8 100644
--- a/index.js
+++ b/index.js
@@ -173,5 +173,6 @@ module.exports = function (_db, opts, keys, path) {
}
}
}
+
return db
}
diff --git a/indexes/last.js b/indexes/last.js
index 1c1fe87..68c347e 100644
--- a/indexes/last.js
+++ b/indexes/last.js
@@ -3,8 +3,8 @@ var path = require('path')
var ltgt = require('ltgt')
var u = require('../util')
var pCont = require('pull-cont')
-// var ViewLevel = require('flumeview-level')
var Reduce = require('flumeview-reduce')
+
function isNumber (n) {
return typeof n === 'number'
}
@@ -14,7 +14,6 @@ function toSeq (latest) {
}
module.exports = function () {
- // TODO: rewrite as a flumeview-reduce
var createIndex = Reduce(1, function (acc, data) {
if (!acc) acc = {}
acc[data.value.author] = {id: data.key, sequence: data.value.sequence, ts: data.value.timestamp}
diff --git a/minimal.js b/minimal.js
index 3f20ef2..890490e 100644
--- a/minimal.js
+++ b/minimal.js
@@ -20,7 +20,7 @@ function unbox (data, unboxers, key) {
var plaintext
if (data && isString(data.value.content)) {
for (var i = 0; i < unboxers.length; i++) {
- var unbox = unboxers[i], value
+ var unbox = unboxers[i]
if (isFunction(unbox)) {
plaintext = unbox(data.value.content, data.value)
} else if (!key && unbox.key) {
@@ -55,14 +55,6 @@ possible, cb when the message is queued.
write a message, callback once it's definitely written.
*/
-function toKeyValueTimestamp (msg) {
- return {
- key: V.id(msg),
- value: msg,
- timestamp: timestamp()
- }
-}
-
function isString (s) {
return typeof s === 'string'
}
@@ -118,7 +110,7 @@ module.exports = function (dirname, keys, opts) {
var append = db.rawAppend = db.append
db.post = Obv()
var queue = AsyncWrite(function (_, cb) {
- var batch = state.queue// .map(toKeyValueTimestamp)
+ var batch = state.queue
state.queue = []
append(batch, function (err, v) {
batch.forEach(function (data) {
@@ -127,9 +119,7 @@ module.exports = function (dirname, keys, opts) {
cb(err, v)
})
}, function reduce (_, msg) {
- state = V.append(state, hmac_key, msg)
- state.queue[state.queue.length - 1] = toKeyValueTimestamp(state.queue[state.queue.length - 1])
- return state
+ return V.append(state, hmac_key, msg)
}, function (_state) {
return state.queue.length > 1000
}, function isEmpty (_state) {
@@ -175,10 +165,12 @@ module.exports = function (dirname, keys, opts) {
db.queue = wait(function (msg, cb) {
queue(msg, function (err) {
+ var data = state.queue[state.queue.length - 1]
if (err) cb(err)
- else cb(null, toKeyValueTimestamp(msg))
+ else cb(null, data)
})
})
+
db.append = wait(function (opts, cb) {
try {
var content = opts.content, recps = opts.content.recps
@@ -208,14 +200,17 @@ module.exports = function (dirname, keys, opts) {
})
})
})
+
db.buffer = function () {
return queue.buffer
}
+
db.flush = function (cb) {
// maybe need to check if there is anything currently writing?
if (!queue.buffer || !queue.buffer.queue.length && !queue.writing) cb()
else flush.push(cb)
}
+
db.addUnboxer = function (unboxer) {
unboxers.push(unboxer)
}
diff --git a/test/append.js b/test/append.js
index bcbce94..b7821fd 100644
--- a/test/append.js
+++ b/test/append.js
@@ -78,7 +78,7 @@ tape('loads', function (t) {
})
}
- a.queue(state.queue[j], function (err) {
+ a.queue(state.queue[j].value, function (err) {
if (err) throw explain(err, 'queued invalid message')
if (!(++j % 1000)) console.log(j)
if (Math.random() < 0.01) { setImmediate(next) } else next()
@christianbundy you can also disable whitespace in "diff settings" drop down, top right.
@arj03 merged into 18.5.4!
Thanks. In the middle of testing the versions of 18.5.x against each other using bench-ssb.
| gharchive/pull-request | 2018-10-24T19:22:35 | 2025-04-01T06:45:51.940008 | {
"authors": [
"arj03",
"christianbundy",
"dominictarr"
],
"repo": "ssbc/secure-scuttlebutt",
"url": "https://github.com/ssbc/secure-scuttlebutt/pull/233",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1076835123 | Add pure Python version of the distance calculation driver
This allows an actual pure Python implementation that a user would expect from PythonFallbackM3C2.
A first try reveals that this harder to implement than expected as it requires instantiation of all parameter structs from Python.
| gharchive/issue | 2021-12-10T13:12:30 | 2025-04-01T06:45:51.949362 | {
"authors": [
"dokempf"
],
"repo": "ssciwr/py4dgeo",
"url": "https://github.com/ssciwr/py4dgeo/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1759837653 | fix:Read me
I would like to fix the README, as the contributors are not displaying in the README file, and the project admin's LinkedIn is not aligned properly.
Here is the screenshot:
Could you please assign the issue under gssoc?
I am interested in making the README look good, so I request that this issue be assigned to me under gssoc23.
| gharchive/issue | 2023-06-16T02:53:42 | 2025-04-01T06:45:51.977315 | {
"authors": [
"iqrafirdose",
"sujal402"
],
"repo": "ssitvit/Code-Canvas",
"url": "https://github.com/ssitvit/Code-Canvas/issues/344",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
772594179 | Permalink does not grey out goddess cube options when disabled
When using a permalink to update settings, the goddess cube sub-options do not get disabled based on the general goddess cube setting
No longer an issue - behavior removed by consequence of removing the "banned types" setting
| gharchive/issue | 2020-12-22T02:46:03 | 2025-04-01T06:45:51.989792 | {
"authors": [
"cjs8487"
],
"repo": "ssrando/ssrando",
"url": "https://github.com/ssrando/ssrando/issues/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
892468827 | world string template library
The world should have a section for shared string templates, to help DRY up longer strings elsewhere. These will need to be localized from the world template data, rather than the client's config (common/meta verbs should still live in config, since they are needed before the world loads).
This is covered by i18next's nesting/glossary feature and general localization.
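The nesting feature mentioned above works roughly like this (a Python sketch; only the $t(...) reference syntax is taken from i18next's nesting feature, the rest is illustrative):

```python
import re

def render(template: str, glossary: dict) -> str:
    # Resolve "$t(key)" references against a shared glossary until no
    # references remain, so shared string templates can be reused
    # (DRYed up) across longer strings.
    pattern = re.compile(r"\$t\(([^)]+)\)")
    while True:
        resolved = pattern.sub(lambda m: glossary[m.group(1)], template)
        if resolved == template:
            return resolved
        template = resolved

print(render("$t(greeting), world", {"greeting": "hello"}))  # hello, world
```

References can themselves contain references, so a glossary entry can build on another shared template.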
| gharchive/issue | 2021-05-15T14:57:42 | 2025-04-01T06:45:52.000275 | {
"authors": [
"ssube"
],
"repo": "ssube/textual-engine",
"url": "https://github.com/ssube/textual-engine/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |