Dataset columns (from the viewer header):
added: string, values from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created: timestamp[us], values from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id: string, lengths 4 to 10
metadata: dict
source: string, 2 classes
text: string, lengths 0 to 1.61M
2025-04-01T06:40:26.500773
2015-12-13T21:35:25
121945488
{ "authors": [ "pv" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10805", "repo": "spacetelescope/asv", "url": "https://github.com/spacetelescope/asv/pull/354" }
gharchive/pull-request
BUG: fix graph file name generation for default requirement version The meaning of a requirement equal to "" and null was changed in b9bdf379e5, but the JavaScript code was not updated, so graphs won't load. Sync the JavaScript side with Python. Extend the web tests to run with null and "" requirements to cover this. Going to merge soon, as master is broken without this...
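The bug described above comes down to the file-name logic treating a null requirement and an empty-string requirement as distinct cases on one side but not the other. A hypothetical Python sketch of such normalization (the naming scheme, the function, and the reading of None as "requirement absent" vs. "" as "requirement present but unpinned" are all illustrative assumptions, not asv's actual code):

```python
def graph_file_name(params):
    """Build a graph file name from benchmark parameters.

    Hypothetical sketch, not asv's actual code: a requirement of None
    (assumed here to mean "requirement absent") and a requirement of ""
    (assumed to mean "present, version unpinned") must map to different
    path components, so both sides of the codebase must apply the same
    rule.
    """
    parts = []
    for key, value in sorted(params.items()):
        if value is None:
            parts.append(key + "-null")  # requirement absent
        elif value == "":
            parts.append(key)            # requirement present, unpinned
        else:
            parts.append(key + "-" + value)
    return "graph-" + "_".join(parts) + ".json"
```

If the JavaScript side collapses None and "" into the same component while Python distinguishes them, the client requests a file name the server never generated, and the graph silently fails to load.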
2025-04-01T06:40:26.502668
2019-11-26T16:06:29
528823185
{ "authors": [ "coveralls", "hover2pi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10806", "repo": "spacetelescope/awesimsoss", "url": "https://github.com/spacetelescope/awesimsoss/pull/51" }
gharchive/pull-request
v0.3.1 Fixes PYPI upload. Coverage remained the same at 70.48% when pulling 23d6a870926ed9ba003c35406f8592989db8d364 on hover2pi:master into 080c248d417f73e5a529554918b7ece0b014151d on spacetelescope:master.
2025-04-01T06:40:26.570131
2016-03-16T12:47:33
141262836
{ "authors": [ "jesuisnicolasdavid", "syllog1sm" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10807", "repo": "spacy-io/spaCy", "url": "https://github.com/spacy-io/spaCy/issues/293" }
gharchive/issue
NER differences Could someone explain to me the difference between doc.ent_type_ and doc.ents[0].label_? Is it the same thing accessed in a different way? Thanks! The .label attribute is available on Span objects. ent_type is available on Token objects. So yes, the values should match within an entity.
2025-04-01T06:40:26.589332
2016-09-01T19:51:14
174610347
{ "authors": [ "bloukingfisher", "jenesaisdiq" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10808", "repo": "spark/photon-tinker-android", "url": "https://github.com/spark/photon-tinker-android/issues/10" }
gharchive/issue
"I forgot my password" from app not working I've verified that it works on an iPhone but on Android (Samsung Galaxy S6 Android version 6.0.1) on the Particle app if you click the link "I forgot my password" it seems to open a mobile web page with the message "Looks like you got lost". The menu item from this page doesn't offer a link to retrieve a password and requires a user to know and go to the website to reset a password. I suspect it is an outdated URL/resource that is not auto directing/updated. Seconded @idokleinman and thirded by Chris.
2025-04-01T06:40:26.591877
2015-08-23T17:45:23
102640356
{ "authors": [ "leo3linbeck", "suda" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10809", "repo": "spark/spark-dev", "url": "https://github.com/spark/spark-dev/issues/107" }
gharchive/issue
Incompatible native module: serialport Loaded the spark-dev module and got the following in the Incompatible Package window: Error message: Cannot find module '/Users/leo3/.atom/packages/spark-dev/node_modules/serialport/build/serialport/v1.6.3/Release/atom-shell-v0.22.3-darwin-x64/serialport.node' The Particle menu is blank (I'm guessing because Spark Dev 0.0.25 stops loading upon error). OS: Mac OSX 10.10.5 Hardware: Mac Pro (Mid 2012) Atom: 1.0.7 Yeah, this is caused by a bug in apm. You can solve it by following steps 4 and 5 from the Linux instructions
2025-04-01T06:40:26.613844
2015-01-07T15:00:57
53640621
{ "authors": [ "flavorjones", "frivoal", "higherpixels", "kmeister2000", "mejackreed", "nwellnhof" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10810", "repo": "sparklemotion/nokogiri", "url": "https://github.com/sparklemotion/nokogiri/issues/1217" }
gharchive/issue
xsl:message in an XSLT transform throws a RuntimeError source = Nokogiri::XML(File.read monograph_filename) xsl = Nokogiri::XSLT(File.read 'ipm.xsl' ) transformed = xsl.transform(source) The last line throws a runtime error because the xsl file contains a message <xsl:otherwise> <xsl:message>Warning: Class not handled: <xsl:value-of select="$vDiv"/></xsl:message> </xsl:otherwise> So we get the error RuntimeError: Warning: Class not handled: date lrvdt Is there a way for me to suppress the warning so I don't get a runtime error? @higherpixels - Can you provide a working example of this issue? This would save time in trying to reproduce, as well as saving me the embarrassment of admitting I have no idea how XSLT works! Background for this bug-report policy is here: http://www.nokogiri.org/tutorials/getting_help.html Thanks! This should recreate the error source = Nokogiri::XML(File.read "ipm5547_eng.xml") xsl = Nokogiri::XSLT(File.read 'ipm.xsl' ) transformed = xsl.transform(source) http://temp-host.com/download.php?file=nx92xy http://temp-host.com/download.php?file=cc54gq Excellent, thanks so much! I've reproduced it and am looking into it. OK, so I've done a bit of research, and it appears as though using xsl:message is expected to raise an error of some sort. My sources: https://msdn.microsoft.com/en-us/library/ms256441(v=vs.110).aspx http://www.devguru.com/technologies/xslt/8437 The second reference notably mentions "The xsl:message element is primarily used to report errors by displaying a text message".
What is the behavior that you expect in this case? I'm seeing the same issue. I came across this issue because I was debugging an xsl template and had read a couple of articles that recommended using xsl:message to provide helpful feedback in figuring out the order and selection of templates. To address @flavorjones's question in the comment above, based on what I read I would expect the behavior to be displaying the message content on the console, or returning it from the XSLT.transform method somehow. Reference: "The xsl:message element is optional. Processors are not required to support it. However most do, and usually they do so by printing messages on the console." http://www.ibm.com/developerworks/library/x-tipxslmsg/ I'm affected by this as well, and it would be very useful to have a choice of what to do. Here are two possible ways the API could be changed to support it: xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file output = xslt.transform(input) do |message| puts message #raise an exception if you want, but if you don't we just keep processing end or xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file xslt.handle_messages do |message| puts message #raise an exception if you want, but if you don't we just keep processing end output = xslt.transform(input) Alternatively, instead of raising exceptions directly, the block could take a second parameter xslt = Nokogiri::XSLT::Stylesheet.parse_stylesheet_doc xslt_file xslt.handle_messages do |message, transform| puts message transform.continue end output = xslt.transform(input) @frivoal What would you think about adding the error to the Document#errors array, if the xsl:message does not declare terminate="yes"? @flavorjones Just to be clear on what you're proposing: if it does not declare terminate="yes", you would just continue processing and store the error in Document#errors; if it does declare terminate="yes", abort immediately (and store the message in Document#errors?
and print the message on stderr?) Yes, this would work. Transformations that are supposed to continue would continue, and those that are supposed to stop would stop. I think it would probably be better to put the messages in something like Document#messages (at least when terminate="yes" is not declared), since these are not actually errors, and may not even be warnings at all, but that's less important. This is a question of API intuitiveness, not of capabilities or correctness. Another small limitation of that design would be when non-terminating messages are meant to give progress reports in a long-running task: just storing them in Document#errors and letting the caller read them at the end defeats the purpose. This is secondary, since the processing would still occur normally, but the messages would be less useful than they could be. That said, your proposal is actually compatible with mine: the default message handler would do what you said, and you could let users supply their own, using something like a handle_message method on XSLT::Stylesheet, which would take a block, pass it the message and terminate as a boolean, and allow it to signal somehow whether it wants processing to continue or stop. I guess that would be ideal, getting the best of both worlds. A brief investigation of libxslt and its xsltMessage function leads me to believe that it's not currently possible to filter xsl:message messages differently from other parse errors and warnings. @nwellnhof if you have a moment, please let me know if I'm missing something. And if that's the case, then I can update Nokogiri's code to only raise an exception if the parsing failed (i.e., xsltApplyStylesheet returning NULL), and otherwise we can stash all of the warnings and messages in an accessor. A brief investigation of libxslt and its xsltMessage function leads me to believe that it's not currently possible to filter xsl:message messages differently from other parse errors and warnings.
That's correct. libxslt's error handling is extremely limited. It never implemented what libxml2 calls "structured" error handling. Thank you! I'll schedule some time to make Nokogiri's behavior a bit more robust, then.
2025-04-01T06:40:26.616060
2023-04-12T14:23:33
1664673793
{ "authors": [ "chadlwilson", "flavorjones" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10811", "repo": "sparklemotion/nokogiri", "url": "https://github.com/sparklemotion/nokogiri/pull/2856" }
gharchive/pull-request
Update to latest htmlunit neko What problem is this PR intended to solve? This is a WIP for #2565. Feedback and help welcome. I had a look at this but it's probably a bit too involved for my xerces and nokogiri/htmlunit knowledge. Was your thinking to essentially have two xerces versions eventually packaged? (Retain the mainline org.apache one as well as the shaded, refactored/simplified one inside htmlunit?) It looks like nokogiri tries to reuse the same xerces configuration for its own direct use as well as for htmlunit, so it seemed to require a change to the configuration approach.
2025-04-01T06:40:26.623381
2015-10-08T13:30:23
110451340
{ "authors": [ "donovanhide", "pyrobit" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10812", "repo": "sparsehash/sparsehash", "url": "https://github.com/sparsehash/sparsehash/issues/113" }
gharchive/issue
Are there any current issues to be worked on in the sparsehash/densehash structures? Hi there, I checked this code description, and a note from 2005 said "It would be nice to rework this C class to follow the C++ API as closely as possible (eg have a set_deleted_key() instead of using a #define like this code does now). I believe the code compiles and runs, if anybody is interested in using it now, but it's subject to major change in the future, as people work on it. Craig Silverstein" I would like to check whether this set_deleted_key() issue was resolved, and otherwise what the current issues are. (I am looking for a topic for a software engineering monograph.) Thanks! Hi, most of the current issues are present here on GitHub in the issue tracker :-) Some of them have been addressed in this branch, which needs to be merged: https://github.com/sparsehash/sparsehash/tree/issue-fixes There's also a fork here: https://github.com/sparsehash/sparsehash-c11 which aims to remove all the type traits and conditional build logic that isn't required now that C++11 is widely available. Ideally, it needs a better build process (CMake or a plain old Makefile). Patches and contributions are more than welcome. This project has been static for some time :-) Cheers, Donovan.
2025-04-01T06:40:26.626533
2021-06-17T14:31:06
923995576
{ "authors": [ "3dhistory", "spartan737" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10813", "repo": "spartan737/Stocksera", "url": "https://github.com/spartan737/Stocksera/issues/3" }
gharchive/issue
No module named 'scheduled_tasks' Something is still amiss. Where does 'scheduled_tasks' come from? I see you import it everywhere but ... Traceback (most recent call last): File "D:\02_Stock_Analysis\Stocksera-spartan737\Stocksera\scheduled_tasks\main.py", line 2, in import scheduled_tasks.get_reddit_trending_stocks.scrape_reddit as scrape_reddit ModuleNotFoundError: No module named 'scheduled_tasks' scheduled_tasks is a folder name found in the main directory Hi. I have rewritten the pipeline of the entire scheduled_tasks section. You only need to edit tasks_to_run.py in the main parent directory directly. I have also removed the due diligence section, as I have decided to focus on other sections of the application first. Please clone the latest version of the repo
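For context, this kind of ModuleNotFoundError typically appears when a file inside the package folder is executed directly, because Python then puts scheduled_tasks/ itself, not the repo root, on sys.path. A common illustrative guard (an assumption about the cause, not the project's actual fix) looks like:

```python
import os
import sys

# When scheduled_tasks/main.py is run directly, Python puts the
# scheduled_tasks/ folder (not the repo root) on sys.path, so the
# package name 'scheduled_tasks' cannot be imported. Prepending the
# parent of the script's directory restores it.
# (Illustrative fix, not the project's actual code.)
script_dir = os.path.dirname(os.path.abspath(__file__))
repo_root = os.path.dirname(script_dir)
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```

Running the script from the repo root with `python -m scheduled_tasks.main` avoids the problem entirely, since `-m` keeps the current directory on the path.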
2025-04-01T06:40:26.633665
2015-10-22T18:28:20
112862563
{ "authors": [ "kjellmf", "spatialillusions", "szechyjs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10814", "repo": "spatialillusions/milsymbol", "url": "https://github.com/spatialillusions/milsymbol/issues/32" }
gharchive/issue
Tactical graphics milsymbol doesn't draw tactical graphics, except for a few point symbols that are referenced in other appendixes. If I were to implement tactical graphics, it would probably be done in another library, so that it is easy to choose whether you want to import it or not. I have for quite some time been collecting ideas for how I would be able to create tactical graphics, and unfortunately my time might be too limited to implement the whole tactical graphics appendix, but I have some ideas that I might try out. My idea is to start out with some, maybe five or ten, different graphics and see how it works out. If anyone has any input, wishes, or questions, please post them here and I'll try to answer. (And if you need this done faster, you can always take me on as a consultant.) If you want inspiration, I know of two open source libraries that can draw control measures: milsymb-js / milsymb-java MilSymb I have experimented with milsymb-java/js. The interface is relatively straightforward and it can generate KML and GeoJSON. I have not looked at the implementation, but I know that the point symbology is font based. Usually it only makes sense to draw control measures on a map, so the library should generate output that can be used with OpenLayers and Leaflet. GeoJSON is one option, but it does not support curved lines. milsymb-js/java solves this by converting curves to polylines. This works, but it requires many line segments to look good. SVG will look better, but you'll probably need to adjust the SVG output every time the map is zoomed in or out. Canvas is also an option. Thank you for your input. My idea at the moment is SVG; I have looked at the d3 layer examples that exist for both OpenLayers and Leaflet, so it's something like that I have in mind, but I'll keep thinking about it and see if I find some time to make some initial tests. :+1: The initial plan is to support point symbols for tactical graphics. 
I'm adding functionality to inject new SIDC (so that you only have tactical symbols if you want them and not otherwise) at the moment, and it will also require a rewrite of how the text labels are placed, but I have an idea for how to solve that. I have now added support for injecting new SIDC and for non-standard bounding boxes; this is the first step to supporting point graphics from tactical symbols. The next step is to be able to add new icon building blocks, and to be able to override text placements. All except two tactical points are now implemented, please have a look and give feedback. Since the sizes of the symbols aren't specified in the standard I had to improvise some. http://www.spatialillusions.com/milsymbol-dev/docs/milsymbol-2525c-tactical-points-svg.html Does this work for you @szechyjs ? Impressive work @spatialillusions! I'll have a look at them. Implementing something for graphics other than points will be done here: https://github.com/spatialillusions/milgraphics
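The curve-flattening trade-off mentioned in the thread (GeoJSON has no curve primitives, so curved control measures must be sampled into polylines, and more segments look smoother at the cost of a larger payload) can be sketched like this; the segment count and function name are arbitrary illustrative choices:

```python
def quad_bezier_to_polyline(p0, p1, p2, segments=16):
    """Sample a quadratic Bezier curve into a polyline.

    Illustrative sketch: p0 and p2 are the endpoints, p1 the control
    point. More segments give a smoother-looking line but a larger
    GeoJSON payload, which is the trade-off noted above.
    """
    points = []
    for i in range(segments + 1):
        t = i / segments
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

line = quad_bezier_to_polyline((0, 0), (5, 10), (10, 0))
```

An SVG path, by contrast, can express the same curve exactly as a single `Q` command, which is why the SVG route needs no flattening, only zoom-dependent re-projection.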
2025-04-01T06:40:26.636142
2019-07-19T10:33:02
470256894
{ "authors": [ "tenevdev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10815", "repo": "spatialos/UnrealGDK", "url": "https://github.com/spatialos/UnrealGDK/pull/1180" }
gharchive/pull-request
WIP: UNR-1545: Add documentation for offloading and actor groups Contributions: We are not currently taking public contributions - see our contributions policy. However, we are accepting issues and we do want your feedback. Description Documentation of offloading includes additions to the GDK concepts doc, a new reference page which covers the newly added SpatialStatics, best practices, and a description of the usage of offloading in the example project. Primary reviewers @ElleEss @mattyoung-improbable Closing this because there's another PR with the most up to date draft.
2025-04-01T06:40:26.639047
2018-05-19T12:51:26
324627470
{ "authors": [ "Gummibeer", "fotonmoton", "freekmurze" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10816", "repo": "spatie/laravel-activitylog", "url": "https://github.com/spatie/laravel-activitylog/issues/383" }
gharchive/issue
An accessor attribute doesn't log if it is not in the $appends array. Is this intended behavior? If so, I think it would be good to point this out in the documentation. Update: I forgot to say that I use version 1.16.0 on Laravel 5.4 Update 2: The accessor attribute gets an equal value in the "attributes" and "old" arrays. So, as I can see, I can't use dynamic attributes to log changes. Where can I find the contribution process? Maybe I can implement this feature. An accessor attribute isn't an attribute like all the others. And primarily, an accessor attribute doesn't have an old and dirty state. Thanks for clarifying. Maybe I'm wrong, but the $changes property of the Activity class contains a serialized model (if I use the LogsActivity trait). So, if I append a dynamic attribute to the serialized model and it changes in the next update, the log will contain the changes. I think this is a crucial feature and worth noting in the documentation. Sorry for linking to your competitor, but it would be cool if this package had something like this. That's out of scope for this package; implement that in your own app.
2025-04-01T06:40:26.645057
2022-06-12T23:02:26
1268724934
{ "authors": [ "darviscommerce", "devinfd", "lintaba", "patinthehat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10817", "repo": "spatie/pdf-to-image", "url": "https://github.com/spatie/pdf-to-image/pull/197" }
gharchive/pull-request
add php 8.1 support to composer.json fixes #195 Please pull this PR in Why is this taking so long? I believe this is unnecessary, as the ^8.0 constraint means that 8.1 is also included (as is 8.2, and so on). @freekmurze Please correct me if I'm wrong, but I believe this PR can be closed without merging, as the referenced issue (#195) is not directly related to this. I believe this is unnecessary as the ^8.0 constraint means that 8.1 is also included (as is 8.2, and so on). @freekmurze Please correct me if I'm wrong, but I believe this PR can be closed without merging as the referenced issue (#195) is not related to this directly. My comment is after testing, and composer does not accept 8.1 with this setting.
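For reference, the caret semantics being debated can be modeled in a few lines. This is a simplified sketch of the rule only (not composer itself, whose constraint grammar is much richer), under which ^8.0 allows 8.1 and 8.2 but not 9.0:

```python
def caret_allows(constraint, version):
    """Simplified model of a composer caret constraint such as '^8.0'.

    '^X.Y' allows versions >= X.Y and below the next major version.
    Illustrative sketch only; real composer handles many more
    constraint forms (ranges, wildcards, stability flags, ...).
    """
    assert constraint.startswith("^")
    base = tuple(int(p) for p in constraint[1:].split("."))
    candidate = tuple(int(p) for p in version.split("."))
    upper = (base[0] + 1,)  # next major version, exclusive
    return base <= candidate < upper

for v in ["8.0", "8.1", "8.2", "9.0"]:
    print(v, caret_allows("^8.0", v))  # True for 8.x, False for 9.0
```

So if composer rejected PHP 8.1 in practice, the cause would lie elsewhere (a different constraint, or a dependency's own requirement), not in the ^8.0 range itself.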
2025-04-01T06:40:26.651269
2021-03-18T12:30:57
834748223
{ "authors": [ "AdrianMrn", "Muffinman", "SpencerCloud", "ajimatahari", "lorlab", "nlemsieh", "nnerijuss", "tomcoonen", "vicenterusso" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10818", "repo": "spatie/ray", "url": "https://github.com/spatie/ray/issues/379" }
gharchive/issue
Ray not sending integers Describe the bug: I'm using Ray in WordPress and framework-agnostic PHP projects, and for some reason integers don't show up in the Ray app. Versions: Ray version: 1.14.5 PHP version: 7.4.13 Ray WP plugin: 1.2.4 To Reproduce: Try sending an integer value, i.e.: ray(25); Desktop: OS: macOS Version 10.13.6 I'm also noticing this in Laravel Posting this as a potential workaround for people before this gets fixed. It's simple, but took a little too long to occur to me: just typecast any integers to a string when outputting in the debugger. ray((string) 25); Can confirm I'm also seeing this on Drupal. PHP library version: 1.21.2 Ray client: 1.14.5 PHP: 8.0.3 MacOS Also having this issue; even when multiple arguments are fed to the ray() function, as soon as an integer is included, none of them show up. Happened to me as well +1. Using Laravel + Linux (Ubuntu) Terminal log: TypeError: e.includes is not a function at D (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:19406) at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18468 at Array.map (<anonymous>) at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18444 at Array.map (<anonymous>) at /tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:367:18198 at s.handle_request (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:156:784) at s (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:149:883) at u.dispatch (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:149:905) at s.handle_request (/tmp/.mount_Ray-1.UH5UqJ/resources/app.asar/dist/main.js:156:784) Have the same issue on an M1 Mac. Until the latest update everything worked just fine. Thanks for the reports, and especially thanks to @vicenterusso for the logs, which gave me an idea about what could be causing the bug :) This has been fixed and will be included in the 1.14.6 patch version (releasing now).
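The crash in the logs ("e.includes is not a function") points at a string method being called on a raw number, and the quoted ray((string) 25) workaround amounts to normalizing values to strings before sending. A minimal Python sketch of that defensive normalization (illustrative only; the function name and placement are assumptions, not Ray's actual code):

```python
def normalize_payload(value):
    """Coerce numeric scalars to strings before a client runs
    string operations (such as JavaScript's .includes) on them.

    Illustrative sketch mirroring the ray((string) 25) workaround,
    applied on the sending side; not Ray's actual code.
    """
    if isinstance(value, bool):
        return value  # bools are ints in Python; leave them alone
    if isinstance(value, (int, float)):
        return str(value)
    return value
```

The released fix went into the client instead, which is the more robust place for it: a client that only ever calls string methods on values it has first coerced cannot be broken by a sender that forgets to.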
2025-04-01T06:40:26.695363
2021-02-08T17:37:44
803779619
{ "authors": [ "joekendal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10819", "repo": "spe-uob/HealthcareLake", "url": "https://github.com/spe-uob/HealthcareLake/issues/112" }
gharchive/issue
DNS: CNAME, NS or Load Balancer We have a few options when it comes to achieving a human-readable domain name for our API endpoint. Here are the first 3 that come to mind: CNAME - Points from the UoB subdomain to our own domain NS - We host our own nameservers and have UoB delegate that subdomain to us Load Balancer - We have a static IP address pointing to an Elastic IP (reserved for us by AWS) that assigns to a NAT Gateway that links to a subnet where the lambda is hosted inside the VPC. We are not deploying to production, so we can close this
2025-04-01T06:40:26.700474
2022-02-16T15:41:29
1140215486
{ "authors": [ "RobbeSneyders", "coveralls" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10820", "repo": "spec-first/connexion", "url": "https://github.com/spec-first/connexion/pull/1463" }
gharchive/pull-request
Take into account (x-)nullable when validating defaults Fixes #1462. CC: @nielsbox Pull Request Test Coverage Report for Build<PHONE_NUMBER> 9 of 9 (100.0%) changed or added relevant lines in 2 files are covered. No unchanged relevant lines lost coverage. Overall coverage increased (+0.002%) to 97.063% Totals Change from base Build<PHONE_NUMBER>: 0.002% Covered Lines: 2842 Relevant Lines: 2928 💛 - Coveralls
2025-04-01T06:40:26.704066
2024-03-05T00:01:02
2168016127
{ "authors": [ "jyasskin", "svgeesus" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10821", "repo": "speced/spec-maintenance", "url": "https://github.com/speced/spec-maintenance/issues/26" }
gharchive/issue
Identify TR specs that are too far behind their Editor's Drafts Given https://github.com/w3c/browser-specs/blob/main/index.json, for specs with organization == W3C, the tool can fetch the two URLs available at either {series.releaseUrl, series.nightlyUrl} or {release.url, nightly.url}, and compare their publication dates (<time class="dt-updated">). If they're different, the SLI is the age of the release, and the SLO could be 4 weeks. IETF specs have a similar feature using <time class="published">. Sounds good in general, although I would be wary of a metric that encourages republishing unchanged drafts with a new date just to keep the metric happy. It also depends on the document status: updating a CR Snapshot with another CR Snapshot requires a new round of horizontal review, a transition request, a mandatory one-week period for related groups to object, a once-weekly meeting to review the transition request, and then scheduling publication. That will never happen in 4 weeks; 3 months is a more realistic minimum, assuming horizontal re-review starts on the day the previous CRS was published. Updating a Rec with an edited Rec with proposed corrections or amendments requires a 4-week AC review plus a couple of weeks to review the responses, giving an absolute minimum of 6 weeks even if the process is started the same day the previous Recommendation is published, with no time for public review.
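The check proposed above (fetch both URLs, read the <time class="dt-updated"> publication dates, compare the release's age against a 4-week SLO) could be sketched as follows. The regex-based extraction and the assumed `datetime="YYYY-MM-DD"` attribute format are simplifications for illustration; a real tool would fetch the pages and use an HTML parser:

```python
import re
from datetime import date, timedelta

# Assumes the page carries an ISO date in a datetime attribute;
# the real markup may differ, and IETF pages use class="published".
TIME_RE = re.compile(r'<time class="dt-updated"[^>]*datetime="(\d{4}-\d{2}-\d{2})')

def release_staleness(release_html, nightly_html, today):
    """Return the SLI: the release's age in days if the release and
    nightly publication dates differ, else 0 (they are in sync)."""
    release_date = date.fromisoformat(TIME_RE.search(release_html).group(1))
    nightly_date = date.fromisoformat(TIME_RE.search(nightly_html).group(1))
    if release_date == nightly_date:
        return 0
    return (today - release_date).days

SLO = timedelta(weeks=4)

release = '<time class="dt-updated" datetime="2024-01-01">1 January 2024</time>'
nightly = '<time class="dt-updated" datetime="2024-02-20">20 February 2024</time>'
age = release_staleness(release, nightly, date(2024, 3, 1))
violates = timedelta(days=age) > SLO
```

Per the status caveats above, the SLO constant would need to vary by track: roughly 3 months for CR Snapshots and at least 6 weeks for edited Recommendations rather than a flat 4 weeks.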
2025-04-01T06:40:26.706607
2023-11-02T14:24:06
1974375133
{ "authors": [ "bimgeek", "paloknapo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10822", "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/issues/3022" }
gharchive/issue
feat(revit/objects): new rebar class and sending rebar from revit see details in https://www.notion.so/speckle/Rebar-Support-in-Revit-Revit-workflow-c1922ee7c1d0450184b16221c6b939b6?pvs=4 Sending rebar from Revit is implemented. A separate ticket is created for receiving as it is more complicated.
2025-04-01T06:40:26.710937
2024-10-08T12:32:03
2573072600
{ "authors": [ "didimitrie", "nickger" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10823", "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/issues/3639" }
gharchive/issue
Add export without displayValue. Currently Speckle uses host applications to calculate meshes. First, there can be scenarios where people want to extract only alphanumerical data. Second, it is very inconvenient to write code specifically for each piece of software and keep it updated. Third, it is a huge waste of time and depends on application quality. E.g. what we are currently after (and hope will be implemented in Speckle in the future): we extend the geometry kit to cover the main surface types from OpenCascade, save data as collections of faces (surfaces with edge curves; curves are fully covered already), and write a single post-process to calculate Breps and mesh them (if necessary) based on the Speckle geometry kit, using the whole power of OpenCascade. Currently in the Revit converter the method GetElementDisplayValue() is implemented in about 30(!) places. I haven't checked other connectors. I presume it could be optimized to quickly turn it off. Hi @nickger, we can look at implementing this as a send setting in Revit (and maybe other host apps where this would make sense), though it can be rather counterintuitive for the majority of Specklers. Regarding meshing breps with OpenCascade: this is something we've never tried before, but it opens a slippery slope. We currently benefit from not having to mesh things client side/server side and being able to instantly display them! I'm going to close this issue, but the first idea - send without display values - is captured in our backlog! Well, currently you mesh things on the client's side; moreover, in the host application during sending. This is unnecessary load at the least, and for me as a developer, a huge headache: if I want to do further geometry processing, I must implement it on the client's side as well. Think of exporting BReps as a third export option. Perhaps we need to understand better what you're after: for us, meshing things client side saves a lot of headaches. Happy to hear what you're building!
2025-04-01T06:40:26.713382
2023-02-09T08:59:38
1577494787
{ "authors": [ "AlanRynne", "teocomi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10824", "repo": "specklesystems/speckle-sharp", "url": "https://github.com/specklesystems/speckle-sharp/pull/2126" }
gharchive/pull-request
fix(objects): Added fix for sketchup blocks using dynamic blockDefinition prop SketchUp sends every detached property with the prefixed @. This causes issues with the transformedGeometry computed property of blocks, which uses the BlockInstance.blockDefinition and BlockDefinition.geometry properties. Both of these properties, on SketchUp objects, would end up being deserialised as dynamic props, which would throw null reference errors. This PR addresses this issue, while also adding warnings in the code to inform us about this in the future. @JR-Morgan I'm re-requesting your review, but mostly to open the conversation of how cringy this is codewise... and that maybe we should not ever merge this 😅 Closing as we decided to fix this on the SketchUp side.
2025-04-01T06:40:26.745926
2024-01-22T22:02:33
2094838965
{ "authors": [ "patriksvensson", "tonycknight" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10825", "repo": "spectreconsole/spectre.console", "url": "https://github.com/spectreconsole/spectre.console/pull/1435" }
gharchive/pull-request
Direct contributors to the current CONTRIBUTING.md fixes # no issue logged, this is a simple spot suggestion [X] I have read the Contribution Guidelines [ ] I have commented on the issue above and discussed the intended changes [ ] A maintainer has signed off on the changes and the issue was assigned to me [ ] All newly added code is adequately covered by tests [ ] All existing tests are still running without errors [X] The documentation was modified to reflect the changes OR no documentation changes are required. Changes The current PR template contains a link to CONTRIBUTING.md. However, when clicking on it, Github redirects to a 404: In short, the URL is missing /blob/main. This PR fixes the issue, and so contributors don't have to look too deeply. Merged! Thank you for your contribution. Much appreciated! 👍
2025-04-01T06:40:26.759581
2024-05-01T22:54:31
2274314964
{ "authors": [ "TylerGillson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10826", "repo": "spectrocloud-labs/validator-plugin-kubescape", "url": "https://github.com/spectrocloud-labs/validator-plugin-kubescape/pull/27" }
gharchive/pull-request
chore(main): release 0.0.1 :robot: I have created a release beep boop 0.0.1 (2024-05-01) Bug Fixes add production stage to Dockerfile (427a05c) add ReconcileFlaggedCVERule and update global manifest (c2b954c) group version, namespace, vuln counter (62fbc75) helm chart nil pointer dereference for SA (42f47f3) include all src code in Dockerfile build context (525b3e7) rename charts to chart (96a1825) typo (eae9f24) update v1 to v1alpha1 and CRD (9a50145) v1 to v1alpha1 (7cf2316) Other add extra files (d4bfa93) add github workflows (be53017) deps: update actions/checkout digest to 0ad4b8f (#19) (9f389b8) deps: update azure/setup-helm digest to fe7b79c (#20) (deee20d) deps: update codecov/codecov-action digest to 5ecb98a (#21) (53760d4) release 0.0.1 (27d0852) release 0.0.1 (5fa1a9b) upgrade controller-gen (eb6ab5a) upgrade github.com/docker/docker (7d6417c) upgrade github.com/kubescape/storage (358cb4f) upgrade golang.org/x/net (4488f72) upgrade to go1.22 (c2c1bf8) This PR was generated with Release Please. See documentation. :robot: Release is at https://github.com/spectrocloud-labs/validator-plugin-kubescape/releases/tag/v0.0.1 :sunflower:
2025-04-01T06:40:26.787617
2023-06-21T12:27:36
1767490776
{ "authors": [ "Gastron" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10827", "repo": "speechbrain/speechbrain", "url": "https://github.com/speechbrain/speechbrain/pull/2041" }
gharchive/pull-request
Basic SECURITY.md It is good to have at least some rudimentary security policy set up for SpeechBrain. The GitHub template suggests two sections: Supported versions and Reporting a vulnerability. I have made the security updates paragraph a bit clearer.
2025-04-01T06:40:26.798989
2023-03-16T05:17:22
1626737703
{ "authors": [ "Yuval-Ariel", "mrambacher", "udi-speedb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10828", "repo": "speedb-io/speedb", "url": "https://github.com/speedb-io/speedb/issues/430" }
gharchive/issue
Log Improvement: Options for only the first 10 column families are reported to the log
The options of column families are reported to the log at the top of every log file. However, if there are more than 10 column families (not very common, but definitely allowed and occurring in practice), only the options of the first 10 are reported to the log. Throughout the log file, any other log line that is associated with any column family will be reported. So, you find in the log information about column families whose options you don't know.
@udi-speedb, do you know if these options are printed to the OPTIONS file? Also, I don't know if this is a bug, since it's definitely intentional.
@Yuval-Ariel: I do not know if it's in the options. I assume it is. I agree that it's intentional, but I still think it should be considered a bug. I believe that a log file should allow a person to see all the information that a log file provides. In addition, I might have access only to a log file (e.g., a log parsing tool) and should be able to use it only to parse and process. You will see in the log events, stats, etc. related to the "missing" column families, but no options for them.
I was working on something earlier that would only print/return options that were different than the default. This could be useful if we wanted to keep the logs (or options files) shorter and pruned. I can try to resurrect that code...
@mrambacher As part of the log parser tool, I am displaying a diff between baseline options files (options files that are generated from official RocksDB / Speedb releases whose values are the defaults for that release) and options as displayed in the log file. I am opening a new issue that will keep the limit of 10 cf-s but, unlike now, will print their names to the log. This is a simpler change and also a more useful one, as most users do not have more than 10 cf-s anyway. In addition, this will apply to any number of cf-s in the log, so it's useful either way. Until this issue is resolved (I am not sure if reporting the options for all of the cf-s is a valid solution when there are many cf-s) I have added https://github.com/speedb-io/speedb/issues/520
My concern with reporting all of the options is that, when there are many cf-s (their number is not limited), we may bloat the log file with the text reporting the options of all of the cf-s. This may be a bigger issue when log files are rotated frequently, as the options are reported at the top of every rolled log.
That's why @mrambacher's suggestion of reporting only the options that are different than the first cf is a great one. I believe doing this is independent of the log-parser and it would have several beneficial effects:
- reduce confusion, since an option that differs would immediately stand out
- reduce writing to the log
- allow the options of all cf-s to be printed to the log - what this issue is all about
@mrambacher - Please attach a sample log output when you have one ready, so we would be able to better understand how that would look (and also estimate the effort of the log parser's adaptation).
@mrambacher - Could you please add a reference for the PRs on which you rely as infrastructure for this one?
This is being resolved in stages that will require several PRs:
- #619 changes the serialize methods to use Properties/Maps instead of strings. This allows later formatting to be implemented
- #648 allows only options that were changed to be part of the serialization. This allows the output written to the Dump to be shorter and only contain the pertinent information, thereby shrinking the size of the LOG.
- #651 adds a pluggable formatter that allows options to be serialized in different formats (such as that written to the LOG)
- #719 changes the Options::Dump to use the Options internal code and not hard-coded values. This ensures that all options are logged appropriately (as new ones are added)
There will also be a subsequent PR that brings this all together and removes the cap on the number of CFs that are written.
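As an illustrative aside on the thread above (not Speedb's actual C++ implementation; the helper name is hypothetical), the "report only options that differ from the defaults" idea can be sketched as a simple diff against a baseline:

```python
def non_default_options(options: dict, defaults: dict) -> dict:
    # Keep only the options whose value differs from the release default;
    # anything omitted from the dump is implicitly at its default value.
    return {k: v for k, v in options.items() if defaults.get(k) != v}

defaults = {"write_buffer_size": 64 << 20, "compression": "snappy",
            "level0_file_num_compaction_trigger": 4}
cf_options = {"write_buffer_size": 128 << 20, "compression": "snappy",
              "level0_file_num_compaction_trigger": 4}
print(non_default_options(cf_options, defaults))
# {'write_buffer_size': 134217728}
```

With many column families sharing most of their settings, dumping only such diffs keeps each rolled log's header small while still covering every column family.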
2025-04-01T06:40:26.832593
2015-12-10T01:00:55
121377926
{ "authors": [ "codefx9", "digitalcraftsman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10829", "repo": "spf13/hugo", "url": "https://github.com/spf13/hugo/issues/1697" }
gharchive/issue
Jinja support in Hugo Hi, We are really liking Hugo. Thanks for the great software. We would like to see Jinja template support in Hugo. My questions are: Is there any plan to add support for Jinja(via flosch/pongo2 maybe) templates? If there is no plan to add Jinja support by the core team, are you folks interested in accepting code contribution to this regard? Is there any guide on how to add new template support? Thanks. Hello @codefx9, there is already an issue (#1359) with the request of adding support for pongo2. Your effort could maybe give this enhancement a push. @digitalcraftsman, thanks for pointing this out. Closing this one.
2025-04-01T06:40:26.837618
2018-02-23T06:27:57
299611833
{ "authors": [ "bep", "danderson", "losinggeneration" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10830", "repo": "spf13/jWalterWeatherman", "url": "https://github.com/spf13/jWalterWeatherman/issues/22" }
gharchive/issue
Unusual camelCase repository name breaks vgo
The x/vgo prototype enforces strict casing on imports. spf13/viper imports spf13/jwalterweatherman, which is not the same casing as the github repo. This breaks importing the library. Arguably vgo should not be enforcing case. However, even you, the author, don't use the correct case, which suggests to me maybe in this case the github repo should just be renamed to what people actually import it as. WDYT?
I'm not the author, but I have an opinion: The camel-case makes a good test case for the x/vgo and similar. So I vote to keep as is.
My question would be: what do we gain from this test case? AFAICT, this repository is the only popular repository out there that violates the casing convention. Allowing mismatched case packages is problematic, because it will only work with vgo in specific situations ("is the repo on github", basically). The generic module hosting mechanisms described for vgo have as a requirement "must be able to use a dumb static content server," which means there must be no arbitrary transformations between the imported name and the name of the thing hosting the modules. In theory, vgo could add a bunch of special-casing for just Github... But otoh, this really is only a problem during the onboarding phase of vgo. Post vgo adoption, any such mismatching would be found instantly, and corrected before the module is even published. So, my question is: is it worth the pain to support this special case, if it results in a less consistent vgo UX, when github trivially allows renaming of repositories?
AFAICT, this repository is the only popular repository out there that violates the casing convention. That I doubt. But as I said, I'm not the owner of this repo. If it somehow stops me from using vgo when that time comes, then I will maybe revisit this problem. But vgo is an early prototype, there are lots of "case issues" and other stuff to be ironed out.
@bep seems to be fixed in a recent version of vgo.
@danderson would you consider trying go get -u golang.org/x/vgo and if it works consider closing this issue? The "canonical Go import" part of this module has already been github.com/spf13/jwalterweatherman -- I say this because that is how it always has been in Hugo, the origin of this module. This is also the module name used in go.mod, which works fine for most. So, the only correct thing to do is to rename this repository to all lowercase. I will get @spf13 's attention about this and get it done.
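As an illustrative aside (not from the original thread; the function is hypothetical), the mismatch vgo objects to is a pair of names that agree case-insensitively, as GitHub resolves them, but differ byte-for-byte, which a dumb case-sensitive static server could not resolve:

```python
def casing_mismatch(import_path: str, hosted_name: str) -> bool:
    # True when both names reach the same repo on a case-insensitive
    # host (like GitHub) yet are not identical strings, which breaks
    # the "dumb static content server" hosting requirement.
    return (import_path.lower() == hosted_name.lower()
            and import_path != hosted_name)

print(casing_mismatch("github.com/spf13/jwalterweatherman",
                      "github.com/spf13/jWalterWeatherman"))  # True
print(casing_mismatch("github.com/spf13/viper",
                      "github.com/spf13/viper"))              # False
```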
2025-04-01T06:40:26.847027
2023-09-15T15:05:52
1898611088
{ "authors": [ "Ranguvar", "roobre" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10831", "repo": "spheenik/vfio-isolate", "url": "https://github.com/spheenik/vfio-isolate/issues/16" }
gharchive/issue
OSError: [Errno 22] Invalid argument when trying to run cpuset-modify on user.slice
Today I noticed that after a system upgrade, if I attempt to run vfio-isolate cpuset-modify --cpus C0-15 user.slice, it no longer works:

17:02:29 ~ #> vfio-isolate cpuset-modify --cpus C0-15 user.slice
OSError: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/bin/vfio-isolate", line 33, in <module>
    sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')())
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 200, in run_cli
    executor.run()
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 191, in run
    for undo in e.action.record_undo(e.params):
  File "/usr/lib/python3.11/site-packages/vfio_isolate/action/cpuset_modify.py", line 39, in record_undo
    cpus=cpu_set.get_cpus(),
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 69, in get_cpus
    return self.impl.get_cpus(self)
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 232, in get_cpus
    CGroupV2.ensure_cpuset_controller_enabled(cpuset)
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 228, in ensure_cpuset_controller_enabled
    CGroupV2.enable_controller(cpuset, "cpuset")
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 222, in enable_controller
    with cpuset.open("cgroup.subtree_control", "w") as f:
OSError: [Errno 22] Invalid argument

Changing the cpuset from --cpus C0-15 to something else does not seem to make any difference. The command does work for system.slice, strangely enough. I'm running

#> uname -a
Linux Archiroo 6.1.53-1-lts #1 SMP PREEMPT_DYNAMIC Wed, 13 Sep 2023 09:32:00 +0000 x86_64 GNU/Linux

with systemd 254.3-1, in case it is relevant.

I saw the same behavior -- failed cpuset.open("cgroup.subtree_control"... system.slice worked for me also. I even checked /sys/fs/cgroup/user.slice/cgroup.subtree_control. It exists and appears normal. I am using kernel 6.5.5-arch1-1 with systemd 254.5-1. Not sure which update caused this or how recently.

vfio-isolate seemed to become intermittent some weeks or even longer ago. I was seeing silent failures to restore the undo file and restore cores to the host. I had no time to look at it or even use it much, but I did just now find the same error message as roobre. Previously, I was experiencing silent failures to restore the undo file and free cores up to the host again. The error disappeared suddenly while I was adding print statements for logging and testing repeatedly. I have a feeling it may return on next boot - if it does, I'll try to document what exactly gets around it. Happy to run any other debugging needed.

This is what I'm getting on a fresh boot of 6.5.6-arch2-1. I have the vfio_isolate command-line used below, along with a crude debug from this non-Python guy: /usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py has a modified open() that does print(self.__path(file)). Those are the paths you see printed on their own lines.

vfio-isolate -v -u /var/run/libvirt/qemu/vfio-isolate-undo.bin drop-caches cpuset-modify --cpus C8-14,24-30 /system.slice cpuset-modify --cpus C8-14,24-30 /user.slice compact-memory cpu-governor performance C0-7,15-23,31 irq-affinity mask C0-7,15-23,31
/sys/fs/cgroup/system.slice/cgroup.controllers
/sys/fs/cgroup/system.slice/cgroup.subtree_control

FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vfio-isolate", line 33, in <module>
    sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')())
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 200, in run_cli
    executor.run()
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 191, in run
    for undo in e.action.record_undo(e.params):
  File "/usr/lib/python3.11/site-packages/vfio_isolate/action/cpuset_modify.py", line 39, in record_undo
    cpus=cpu_set.get_cpus(),
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 70, in get_cpus
    return self.impl.get_cpus(self)
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 233, in get_cpus
    CGroupV2.ensure_cpuset_controller_enabled(cpuset)
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 229, in ensure_cpuset_controller_enabled
    CGroupV2.enable_controller(cpuset, "cpuset")
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cpuset.py", line 223, in enable_controller
    with cpuset.open("cgroup.subtree_control", "w") as f:
FileNotFoundError: [Errno 2] No such file or directory

taskset -pc 8-14,24-30 2
pid 2's current affinity list: 0-31

vfio-isolate -v restore /var/run/libvirt/qemu/vfio-isolate-undo.bin

Traceback (most recent call last):
  File "/usr/bin/vfio-isolate", line 33, in <module>
    sys.exit(load_entry_point('vfio-isolate==0.5.2', 'console_scripts', 'vfio-isolate')())
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 199, in run_cli
    cli(standalone_mode=False, obj=executor)
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1719, in invoke
    rv.append(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/usr/lib/python3.11/site-packages/click/decorators.py", line 45, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/usr/lib/python3.11/site-packages/vfio_isolate/cli.py", line 171, in restore
    with open(undo_file, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/run/libvirt/qemu/vfio-isolate-undo.bin'

taskset -pc 0-31 2
pid 2's current affinity list: 8-14,24-30
pid 2's new affinity list: 0-31

[ranguvar@khufu ~]$ ls -l /sys/fs/cgroup/system.slice/cgroup.subtree_control
-rw-r--r-- 1 root root 0 Oct 7 21:31 /sys/fs/cgroup/system.slice/cgroup.subtree_control
[ranguvar@khufu ~]$ cat /sys/fs/cgroup/system.slice/cgroup.subtree_control
memory pids

I don't believe the failure to find the restore file is an issue; I believe it complains the same yet creates it happily in the past. Obviously it then isn't ready for the restore either.
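As an editor's aside (not from the original thread): the `enable_controller` step that fails in the tracebacks above amounts to writing "+cpuset" into `cgroup.subtree_control` along the cgroup hierarchy. The sketch below is a hypothetical Python rendering of that idea, not vfio-isolate's actual code, and the demo runs against a throwaway fake directory tree so no root or real cgroupfs is needed. On a real kernel the write can fail with errors like those reported (EINVAL, ENOENT) when the controller is unavailable at that level:

```python
import os
import tempfile

def enable_cpuset(cgroup_root: str, slice_path: str) -> None:
    # Walk from the cgroup root down toward the target slice, enabling
    # the cpuset controller for each parent's children along the way.
    parts = [p for p in slice_path.strip("/").split("/") if p]
    current = cgroup_root
    for part in parts:
        with open(os.path.join(current, "cgroup.subtree_control"), "w") as f:
            f.write("+cpuset")
        current = os.path.join(current, part)

# Demo against a fake hierarchy instead of /sys/fs/cgroup:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "user.slice"))
open(os.path.join(root, "cgroup.subtree_control"), "w").close()
enable_cpuset(root, "/user.slice")
print(open(os.path.join(root, "cgroup.subtree_control")).read())  # +cpuset
```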
2025-04-01T06:40:26.851632
2017-02-24T11:01:05
210019394
{ "authors": [ "Oehmi", "Siilwyn", "hisabimbola", "junajan" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10832", "repo": "sphereio/sphere-order-export", "url": "https://github.com/sphereio/sphere-order-export/issues/47" }
gharchive/issue
Add option to fill all rows with general order information
In the order export, we should offer an option to fill all rows of an order with the general order information, instead of just the first row of an order.
Background: Currently, the 1st row in the order export contains general information such as the address; from the 2nd row onward are all the line items of the order. The general rows are not filled because they are redundant, but some of our customers would like them to always be filled.
The first row of the order with lineItems contains basic (order-level) info like addresses, prices, etc. If we also put this info into the other rows, should we remove the first row because of the redundancy? What do you think @Siilwyn @hisabimbola?
I think we shouldn't add more options and instead fill them all as the default behaviour, unless there is a huge benefit that I'm missing. The redundancy is not that big of a problem imho.
We should not remove the first row, but I am not sure whether we should do this without a flag; what if some users do not want the redundant rows 🙄
I added a --fillAllRows parameter in https://github.com/sphereio/sphere-order-export/issues/48 which will add this behavior.
I'll just leave this here: https://www.youtube.com/watch?v=glZ1C-Yu5tw ^^ :wave:
2025-04-01T06:40:26.870057
2020-04-21T23:19:12
604343976
{ "authors": [ "engram-design", "pvldigital" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10833", "repo": "spicywebau/craft-embedded-assets", "url": "https://github.com/spicywebau/craft-embedded-assets/issues/127" }
gharchive/issue
Fix PHP 7.4 deprecation There's a deprecation at https://github.com/spicywebau/craft-embedded-assets/blob/master/src/Service.php#L196 with PHP 7.4. Should be changed to: $code = ($value instanceof Twig_Markup ? (string)$value : is_string($value)) ? $value : ''; fix has been released in v<IP_ADDRESS>
2025-04-01T06:40:26.871339
2018-03-08T00:44:00
303314188
{ "authors": [ "kalicki" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10834", "repo": "spider-gazelle/spider-gazelle", "url": "https://github.com/spider-gazelle/spider-gazelle/issues/2" }
gharchive/issue
Name Spider-Gazelle
There is already a project with the same name, but for Ruby. Wouldn't something more creative be better? https://github.com/cotag/spider-gazelle
I saw that you are a committer, but both projects could be confused, having a similar purpose and identical names.
What sets it apart is the language. IMHO =/
2025-04-01T06:40:26.874308
2015-11-22T12:20:59
118260240
{ "authors": [ "PommeVerte", "smolinari" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10835", "repo": "spider/spider", "url": "https://github.com/spider/spider/pull/100" }
gharchive/pull-request
change gremlin-server version for vagrant to 3.0.2, added composer.lock @chrismichaels84 can you just double check the sanity of the vagrant changes? I can't really test vagrant in my env yet (will try to get it up and running) Or @smolinari if you get a chance to test vagrant on this. @smolinari does this work in the end? can I merge this branch or do changes need to be made? @PommeVerte - yes. It is working fine. I think the issue I had yesterday afternoon was caused by a flaky internet connection and somehow screwed up the Gremlin install. This morning all worked as it should on the Gremlin/Neo4j side. Scott ok awesome. adding this then :) Theoretically, if Chris accepts my changes (plus some needed cleanup work), then we'd also have ODB up to 2.1.6 too. Scott
2025-04-01T06:40:26.882460
2023-10-27T07:58:39
1964978329
{ "authors": [ "Icarus9913", "weizhoublue", "yylt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10836", "repo": "spidernet-io/spiderpool", "url": "https://github.com/spidernet-io/spiderpool/issues/2465" }
gharchive/issue
[feature] add spidernode CRD which preallocates IPs for each node
1 Code requirement 2 observe opensource license 3 sign-off your commit 4 Is your feature request related to a problem? Please describe. 5 Describe the solution you'd like
In an AWS ECS environment, an eni-cni could be used, designed like the eni field in the CiliumNodes resource. Maybe spidernet could add another CRD, e.g. spidernode, which would preallocate IPs for each node: the IPs come from the VPC, which supports only individual IPs, not a subnet, so spidernode would record the list of preallocated IPs. Below are some likely steps:
daemon (spider agent)
- Use spidernode as the source for IP allocation
- Synchronize spidernode and spiderpool resources
- Can do some extra operations on nodes, such as adding network interfaces, setting IP/MAC, etc.
controller (spider controller)
- Create and update spidernode from spiderpool
- Synchronize spec and status information with the daemon's decisions
6 Describe alternatives you've considered 7 Additional context
@yylt Hi, it looks like this is about public cloud VPC support, right? At present, to use spiderpool in the public cloud you can just define each node's IPs in a different SpiderIPPool resource, using IPPool.Spec.NodeName or IPPool.Spec.NodeAffinity to restrict that SpiderIPPool resource to serve only the node you specified. You could check the alibaba cloud and aws cloud blogs. For the new CRD support, our team will discuss it or add some public cloud support later. You could also submit a PR for this proposal if you would like. Thanks.
Some CNIs record the IP resource in the Node object; spiderpool records it in SpiderIPPool. Currently, the affinity settings of SpiderIPPool support binding some IP resource to a specific interface of a specific VM. That is convenient for creating an ipvlan interface from a specific master interface. I think the SpiderIPPool way also meets the same expectation.
Another reason, per the https://spidernet-io.github.io/spiderpool/v0.7/reference/crd-spiderippool description: the spec.ips field uses an iprange format, which is not compatible with ENI.
Closed by #2536
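As an editor's aside (Spiderpool itself is written in Go; this is an illustrative Python sketch with a hypothetical helper), the "start-end" iprange strings used in spec.ips can be expanded into the individual addresses an ENI-style per-node preallocation list would need:

```python
import ipaddress

def expand_ip_range(spec: str) -> list[str]:
    # spec.ips entries are either a single IP or a "start-end" range;
    # enumerate the range into concrete addresses.
    if "-" not in spec:
        return [spec]
    start, end = (ipaddress.ip_address(s) for s in spec.split("-"))
    return [str(ipaddress.ip_address(i)) for i in range(int(start), int(end) + 1)]

print(expand_ip_range("10.6.0.2-10.6.0.4"))
# ['10.6.0.2', '10.6.0.3', '10.6.0.4']
```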
2025-04-01T06:40:26.884877
2024-08-26T09:57:59
2486459252
{ "authors": [ "cyclinder", "weizhoublue" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10837", "repo": "spidernet-io/spiderpool", "url": "https://github.com/spidernet-io/spiderpool/issues/3954" }
gharchive/issue
failed to cherry pick PR 3873 from cyclinder, to branch release-v1.0 commits 22a33847f9755f972c03aa121b41f19cc3b62a22 of cyclinder conflict when merging to branch release-v1.0, please manually cherry pick it by yourself. PR https://github.com/spidernet-io/spiderpool/pull/3873 , action https://github.com/spidernet-io/spiderpool/actions/runs/10557667197 Auto-merging test/e2e/reclaim/reclaim_test.go Auto-merging test/scripts/debugEnv.sh CONFLICT (content): Merge conflict in test/scripts/debugEnv.sh error: could not apply 22a33847... Merge pull request #3873 from cyclinder/coordinator/tune_pod_route hint: After resolving the conflicts, mark them with hint: "git add/rm <pathspec>", then run hint: "git cherry-pick --continue". hint: You can instead skip this commit with "git cherry-pick --skip". hint: To abort and get back to the state before "git cherry-pick", hint: run "git cherry-pick --abort". hint: Disable this message with "git config advice.mergeConflict false" https://github.com/spidernet-io/spiderpool/pull/3971
2025-04-01T06:40:26.890371
2024-11-19T11:07:36
2671778607
{ "authors": [ "damufo", "spidersouris" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10838", "repo": "spidersouris/termic", "url": "https://github.com/spidersouris/termic/issues/9" }
gharchive/issue
feat: Add an example database
Feature Description
Provide a sample/test database. Add a SQLite database option if one does not exist.
Motivations
Be able to test the self-hosted system quickly and easily. Many thanks.
Could you let me know more about your plans with self-hosting? I just want to gauge if termic would indeed be the best fit for you or if there's a better way of achieving what you want to do. Do you already have data to show on the frontend? I don't believe it reasonable to think about termic as handling anything related to database management per se (one should instead rather see it as a pre-configured web app for browsing and searching terminology data). I expect people who self-host to already have their database available. But I can always add an example database to show what it can look like, yes.
My idea is to analyze the option of having a place where you can upload translations and be able to consult them; in my case, I am a translator and I run a lot of queries to see how certain terms have been translated in the past. I would like to be able to upload .po files (gettext). I think it would be great to have a docker compose that installs a blank database, and for termic to have a manager that allows, once installed, uploading translations from tmx, po, csv files...
I see. I guess that's a reasonable use of termic and something that could be supported. I originally made some tests with a SQLite database when first developing termic, but I eventually moved to PostgreSQL (which is a much better fit for deployment). If you plan on using termic locally though, this could totally be done. I can't give an estimate on how much time this will take to implement as I have quite a lot on my plate right now — but I hope I can have a look at it in January.
Hello 🙂 OK! Wonderful if it works with SQLite. Being able to use Termic inside a docker container with a preloaded database would be great. Including with PostgreSQL in a separate container in the same stack.
Something interesting would be to be able to remove translations from languages that you don't need. I think an export option that allows you to send translations to Termic in its own format would be great. This would make it easier to contribute to the Termic project. None of this is urgent. Thank you very much anyway.
2025-04-01T06:40:26.891936
2023-04-23T06:05:35
1679875449
{ "authors": [ "electr1fy0", "spieglt" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10839", "repo": "spieglt/FlyingCarpet", "url": "https://github.com/spieglt/FlyingCarpet/issues/35" }
gharchive/issue
M1 Build for macOS An Apple Silicon release would be nice! Yep thanks, meant to but forgot, will do that in a day or two. The x86_64 version should work through Rosetta however. If you try and find that it doesn't please let me know. I replaced the x86_64 version with a universal binary.
2025-04-01T06:40:26.914665
2020-07-04T19:29:58
650944204
{ "authors": [ "jcohenadad" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10840", "repo": "spine-generic/spine-generic", "url": "https://github.com/spine-generic/spine-generic/issues/108" }
gharchive/issue
Processing killed on compute canada with the following SLURM config:
#SBATCH --account=def-jcohen
#SBATCH --time=0-03:00 # time (DD-HH:MM)
#SBATCH --ntasks=128 # number of MPI processes
#SBATCH --mem-per-cpu=4096M # memory; default unit is megabytes
and this YML config for sct_run_batch:
path_data: /scratch/jcohen/data-single-subject-master
path_output: /scratch/jcohen/results
task: /home/jcohen/code/spine-generic/spine-generic/processing/process_data.sh
jobs: 128
I get this job killed:
/home/jcohen/code/spine-generic/spine-generic/processing/process_data.sh: line 76: 4663 Killed sct_deepseg_sc -i ${file}.nii.gz -c $contrast -qc ${PATH_QC} -qc-subject ${SUBJECT}
and this other job got the following error:
OSError: [Errno 12] Cannot allocate memory
This issue was solved by raising the memory per core, with this SLURM config:
#SBATCH --account=def-jcohen
#SBATCH --time=0-01:00 # time (DD-HH:MM)
#SBATCH --ntasks=128 # number of MPI processes
#SBATCH --mem-per-cpu=16384 # memory; default unit is megabytes
#SBATCH --mail-user=***
#SBATCH --mail-type=ALL
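As an illustrative aside (not part of the original issue), SLURM multiplies --mem-per-cpu by the task count to get the total allocation it enforces, so the change above quadruples the job's memory budget; the arithmetic can be checked quickly:

```python
def total_mem_gib(ntasks: int, mem_per_cpu_mib: int) -> float:
    # SLURM enforces ntasks * mem-per-cpu as the job's memory limit;
    # exceeding a task's share gets the process OOM-killed, as seen
    # with the "Killed" sct_deepseg_sc run above.
    return ntasks * mem_per_cpu_mib / 1024

print(total_mem_gib(128, 4096))   # 512.0 (GiB, before the fix)
print(total_mem_gib(128, 16384))  # 2048.0 (GiB, after the fix)
```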
2025-04-01T06:40:27.013495
2021-03-09T23:15:42
826781245
{ "authors": [ "48d90782", "jwillp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10841", "repo": "spiral/roadrunner", "url": "https://github.com/spiral/roadrunner/issues/581" }
gharchive/issue
[BUG] Unable to install using composer
I tried installing the project using the composer method as outlined here: https://roadrunner.dev/docs/intro-install, in a fresh empty composer project, however, I get the following error:
PHP Fatal error: Uncaught Error: Class 'Spiral\RoadRunner\Version' in ./vendor/spiral/roadrunner-cli/bin/rr:67
I tried this code:
composer require spiral/roadrunner
./vendor/bin/rr get
The version of RR used: ^2.0 (v2.0.1)
@jwillp Feel free to close the issue if the problem was resolved.
2025-04-01T06:40:27.070301
2022-02-01T00:52:08
1120098291
{ "authors": [ "mpan-splunk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10842", "repo": "splunk-soar-connectors/sep14", "url": "https://github.com/splunk-soar-connectors/sep14/pull/2" }
gharchive/pull-request
PAPP-22704: Update release note
Please ensure your pull request (PR) adheres to the following guidelines. Please refer to our contributing documentation for any questions on submitting a pull request, link: Contribution Guide
Pull Request Checklist
Please check if your PR fulfills the following requirements:
[ ] Testing of all the changes has been performed (for bug fixes / features)
[ ] The readme.html has been reviewed and added / updated if needed (for bug fixes / features)
[ ] Use the following format for the PR description: <App Name>: <PR Type> - <PR Description>
[ ] Provide release notes as part of the PR submission which describe high level points about the changes for the upcoming GA release.
[ ] Verify all checks are passing.
[ ] Do NOT use the next branch of the forked repo. Create a separate feature branch for raising the PR.
[ ] Do NOT submit updates to dependencies unless it fixes an issue.
Pull Request Type
Please check the type of change your PR introduces:
[ ] New App
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation
[ ] Other (please describe):
Release Notes (REQUIRED)
Provide release notes as part of the PR submission which describe high level points about the changes for the upcoming GA release.
What is the current behavior? (OPTIONAL)
Describe the current behavior that you are modifying.
What is the new behavior? (OPTIONAL)
Describe the behavior or changes that are being added by this PR.
Other information (OPTIONAL)
Any other information that is important to this PR such as screenshots of how the component looks before and after the change.
Pay close attention to (OPTIONAL)
Any specific code change or test case points which must be addressed/reviewed at the time of GA release.
Screenshots (if relevant)
Thanks for contributing!
Note: the GitHub integration test pipeline failed because of a timeout, so it was run in GitLab.
Backend run: https://cd.splunkdev.com/phantom-apps/app-tests/-/jobs/24903653 UI run: https://cd.splunkdev.com/phantom-apps/app-tests/-/jobs/24903654
2025-04-01T06:40:27.074409
2021-03-09T14:10:08
825989173
{ "authors": [ "dkhatri-crest", "rfaircloth-splunk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10843", "repo": "splunk/addonfactory-ucc-generator", "url": "https://github.com/splunk/addonfactory-ucc-generator/pull/122" }
gharchive/pull-request
ADDON-34291: Added Validation utility ADDON-34291
Created predefined validators for the entity types string, regex, number, URL, email, ipv4, and date. The doValidation method of the Validator class runs the validation matching the entity type and returns a dictionary containing error and errormsg, if any.
:tada: This PR is included in version 5.0.0-develop.1 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
:tada: This PR is included in version 5.0.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T06:40:27.075335
2023-04-05T06:18:02
1655006507
{ "authors": [ "dfederschmidt", "mbruzda-splunk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10844", "repo": "splunk/appinspect-api-action", "url": "https://github.com/splunk/appinspect-api-action/issues/5" }
gharchive/issue
Update Node.js version The action is currently using an old version of Node.js. Ensure that this is updated to a more recent release. Won't fix; breaking change released
2025-04-01T06:40:27.080149
2015-01-30T16:28:04
56053984
{ "authors": [ "arctan5x", "coccyx", "dataPhysicist" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10845", "repo": "splunk/eventgen", "url": "https://github.com/splunk/eventgen/issues/24" }
gharchive/issue
Feature Request
I'd like to be able to whitelist/blacklist when using token.<n>.replacement. For example, if I'm referencing a file like "allhosts.csv" I want to be able to blacklist a set of hosts with a regex. Possibly an additional parameter like below:
replace server name
token.2.token = (SERVERNAME)
token.2.replacementType = file
token.2.replacement = $SPLUNK_HOME/etc/apps/oidemo/master_eventgen_replace/webhosts.sample:2
token.2.replacementWhitelist.0 = *
token.2.replacementBlacklist.0 = webserver-[0-9]
Totally valid request, but why wouldn't you just modify the source file or create a different copy of it for that token?
I have a few scenarios going at the same time. For example, I need webserver-02 to spike in CPU at 30min past the hour. To do this I create eventgen stanzas and sample files:
An "issue" sample file which spikes the CPU data at 30min past the hour for webserver-02
A noise sample for the first 30min of webserver-02
A problem sample file(s) which runs for the whole hour and generates CPU data for all hosts, replacing SERVERNAME from allhosts.sample, which has all hosts except webserver-02.
Next I want to do something similar (say a spike in queries and CPU) for dbserver-02, but now I have to create another file with all hosts except dbserver-02 and non-db servers. As I continue to add scenarios I have to make a lot of files. A white/black list allows me to substantially reduce the number of replace files required and makes it easier to update/maintain. Make sense?
This is a good idea. I just don't want to touch that code :). I'm leaving it open for the time being.
Since this issue has been open for a while and we have released new versions 6.x.x of Eventgen, please recreate the issue if you still see fit.
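A minimal sketch of how the proposed replacementWhitelist/replacementBlacklist options could behave. This is illustrative only: filter_replacements and its exact semantics are assumptions for the sake of the feature request, not eventgen's actual API.

```python
import re

def filter_replacements(entries, whitelist=None, blacklist=None):
    # Keep entries matching any whitelist regex (when one is given),
    # then drop entries matching any blacklist regex -- the behaviour
    # sketched by the proposed token.<n>.replacement{Whitelist,Blacklist}
    # settings above.
    kept = list(entries)
    if whitelist:
        kept = [e for e in kept if any(re.search(p, e) for p in whitelist)]
    if blacklist:
        kept = [e for e in kept if not any(re.search(p, e) for p in blacklist)]
    return kept

hosts = ["webserver-01", "webserver-02", "dbserver-01", "dbserver-02"]
# One shared host file; a scenario excludes its spiking host via a
# blacklist instead of maintaining a second copy of the file.
noise_hosts = filter_replacements(hosts, blacklist=[r"webserver-02"])
```

With this, the "all hosts except webserver-02" and "all hosts except dbserver-02" scenarios could both read the same allhosts sample and differ only in their blacklist line.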
2025-04-01T06:40:27.086579
2024-01-02T20:49:38
2062927665
{ "authors": [ "atgithub11", "josehelps", "nasbench" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10846", "repo": "splunk/security_content", "url": "https://github.com/splunk/security_content/issues/2937" }
gharchive/issue
[BUG] O365 Mailbox Inbox Folder Shared with All Users. Field "object" doesn't exist.
The correlation search O365 Mailbox Inbox Folder Shared with All Users is currently using a field called "object", as object=Inbox. But I do not see this field being sent as part of the O365 Exchange data. Instead, I see a field called Item.ParentFolder.Name with values such as Inbox, Calendar, Contacts, etc. Should "object=Inbox" be replaced with "Item.ParentFolder.Name=Inbox" for this correlation search? App Version: ESCU: 4.18.0
@atgithub11 this might be due to how the data for o365 is being collected in your environment. I believe for this detection we expect the user to be leveraging https://splunkbase.splunk.com/app/4055 Let me know if this is the case?
Hey @atgithub11 thanks for opening this issue. Here is some clarification that might help you understand this. You can see in the detection that the raw log does contain the field "Item.ParentFolder.Name", and this is what you are probably ingesting. But as @josehelps said, the analytic expects ingestion of O365 via the Splunk app https://splunkbase.splunk.com/app/4055, as stated in the how_to_implement section: You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events. Behind the scenes, this app creates the field object and assigns its value based on a condition: in this case, if the Operation field is "ModifyFolderPermissions" or "AddFolderPermissions", the value of the Object field will be set to Item.ParentFolder.Name. Hence the detection should be correct, and this is just an ingestion issue. Hope this helps. I'll be closing this as complete; feel free to re-open the issue in case you have further questions
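The field-aliasing behaviour described above can be sketched as follows. This is a simplified illustration of the add-on's eval logic as summarized in the thread, not its actual code:

```python
def derive_object_field(event: dict):
    # Mirrors the condition described above: for ModifyFolderPermissions
    # or AddFolderPermissions operations, 'Object' is populated from
    # 'Item.ParentFolder.Name'; for other operations it is left unset here.
    if event.get("Operation") in ("ModifyFolderPermissions", "AddFolderPermissions"):
        return event.get("Item.ParentFolder.Name")
    return None

evt = {"Operation": "ModifyFolderPermissions", "Item.ParentFolder.Name": "Inbox"}
obj = derive_object_field(evt)
```

So raw events carrying only Item.ParentFolder.Name still satisfy object=Inbox once they pass through the add-on's field extraction.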
2025-04-01T06:40:27.090528
2019-03-09T17:53:04
419097942
{ "authors": [ "chaitanyaphalak", "matthewmodestino", "rockb1017", "sayeedc" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10847", "repo": "splunk/splunk-connect-for-kubernetes", "url": "https://github.com/splunk/splunk-connect-for-kubernetes/issues/109" }
gharchive/issue
Source typing container logs
Right now, source types are derived from the container or pod name. Are there any plans to provide the option for users to set the source type? This allows users to create the same containers with a common source type rather than different ones. I've seen over 1000 sourcetypes created on a single cluster.
Hey Sayeed! Technically you can set the sourcetype by customizing the jq_transformer filter in the logging configmap: https://github.com/splunk/splunk-connect-for-kubernetes/blob/ec00d8fcf6b4030cca0ea434d8b54a606add85d0/manifests/splunk-kubernetes-logging/configMap.yaml#L177 Ideally, I'd like to see us support setting the sourcetype, among other options, in annotations.
Hey Matt, ideally I'd like the users to set the sourcetype from OpenShift (maybe via labels or annotations) since how containers are named is controlled by them. It can be anything, which makes setting source types via jq difficult.
Currently a user can set the sourcetype using https://github.com/splunk/splunk-connect-for-kubernetes/blob/develop/helm-chart/splunk-kubernetes-logging/values.yaml#L108 I know it's static; setting sourcetypes dynamically through labels and annotations will require a significant overhaul of the current implementation.
resolved as part of #294
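One way the annotation-driven sourcetyping requested here might look, sketched in Python. The annotation key and the fallback value are assumptions for illustration, not what the connector actually implements:

```python
def resolve_sourcetype(pod_metadata: dict, default: str = "kube:container") -> str:
    # Prefer an explicit per-pod annotation, falling back to a static
    # default -- the static value being all the chart supports today.
    annotations = pod_metadata.get("annotations") or {}
    return annotations.get("splunk.com/sourcetype", default)

tagged = resolve_sourcetype({"annotations": {"splunk.com/sourcetype": "my:web:app"}})
untagged = resolve_sourcetype({"annotations": {}})
```

The per-pod lookup is what collapses thousands of name-derived sourcetypes into a handful of user-chosen ones.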
2025-04-01T06:40:27.113120
2023-04-23T00:42:12
1679798007
{ "authors": [ "azumukupoe", "xnetcat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10848", "repo": "spotDL/spotify-downloader", "url": "https://github.com/spotDL/spotify-downloader/issues/1815" }
gharchive/issue
AttributeError: 'NoneType' object has no attribute 'lower' with some tracks when synced is enabled as a lyrics provider System OS Windows Python Version 3.11 (CPython) Install Source pip / PyPi Install version / commit hash v4.1.7 Expected Behavior vs Actual Behavior No response Steps to reproduce - Ensure to include actual links! spotdl download https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T Traceback [09:40:16] DEBUG MainThread - Downloader settings: {'audio_providers': ['youtube-music'], downloader.py:115 'lyrics_providers': ['synced', 'genius', 'azlyrics', 'musixmatch'], 'playlist_numbering': False, 'scan_for_songs': True, 'm3u': None, 'output': 'D:\\Music\\{album-artist}\\{album} ({year})\\{disc-number} - {track-number} - {title}.{output-ext}', 'overwrite': 'metadata', 'search_query': None, 'ffmpeg': 'ffmpeg', 'bitrate': None, 'ffmpeg_args': None, 'format': 'mp3', 'save_file': None, 'filter_results': True, 'threads': 4, 'cookie_file': None, 'restrict': False, 'print_errors': True, 'sponsor_block': False, 'preload': True, 'archive': None, 'load_config': True, 'log_level': 'DEBUG', 'simple_tui': False, 'fetch_albums': True, 'id3_separator': '/', 'ytm_data': False, 'add_unavailable': False, 'generate_lrc': True, 'force_update_metadata': True, 'only_verified_results': True, 'sync_without_deleting': True, 'max_filename_length': 100} [09:40:16] DEBUG MainThread - FFmpeg path: ffmpeg downloader.py:133 [09:40:16] INFO MainThread - Scanning for known songs, this might take a while... 
downloader.py:152 [09:40:25] DEBUG MainThread - Found 2018 known songs downloader.py:158 [09:40:28] DEBUG MainThread - Archive: 0 urls downloader.py:192 [09:40:28] DEBUG MainThread - Downloader initialized downloader.py:194 [09:40:28] INFO MainThread - Processing query: https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T search.py:123 [09:40:29] DEBUG MainThread - Found 1 songs in 0 lists search.py:249 [09:40:29] INFO MainThread - Fetching 1 album downloader.py:228 [09:40:30] ERROR asyncio_0 - Traceback (most recent call last): progress_handler.py:358 File "C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py", line 495, in search_and_download lyrics = self.search_lyrics(song) ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py", line 350, in search_lyrics lyrics = lyrics_provider.get_lyrics(song.name, song.artists) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\spotdl\providers\lyrics\synced.py", line 62, in get_lyrics lyrics = syncedlyrics.search(f"{name} - {artists[0]}", allow_plain_format=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\__init__.py", line 40, in search lrc = provider.get_lrc(search_term) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 30, in get_lrc a_tag = soup.find_all("a", string=lambda t: text_match(t) > 80, limit=4) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2030, in find_all return self._find_all(name, attrs, string, limit, generator, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 841, in _find_all found = strainer.search(i) ^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2320, in search found = self.search_tag(markup) 
^^^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2291, in search_tag if found and self.string and not self._matches(found.string, self.string): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^ File "C:\spotDL\venv\Lib\site-packages\bs4\element.py", line 2352, in _matches return match_against(markup) ^^^^^^^^^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 30, in <lambda> a_tag = soup.find_all("a", string=lambda t: text_match(t) > 80, limit=4) ^^^^^^^^^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 26, in <lambda> text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(search_term), _t(t)) ^^^^^ File "C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py", line 25, in <lambda> _t = lambda s: s.lower().replace("-", "") ^^^^^^^ AttributeError: 'NoneType' object has no attribute 'lower' ╭─────────────────── Traceback (most recent call last) ────────────────────╮ │ C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py:495 in │ │ search_and_download │ │ │ │ 492 │ │ │ │ │ │ ) │ │ 493 │ │ │ │ │ 494 │ │ │ # Find song lyrics and add them to the song object │ │ ❱ 495 │ │ │ lyrics = self.search_lyrics(song) │ │ 496 │ │ │ if lyrics is None: │ │ 497 │ │ │ │ logger.debug( │ │ 498 │ │ │ │ │ "No lyrics found for %s, lyrics providers: %s" │ │ │ │ C:\spotDL\venv\Lib\site-packages\spotdl\download\downloader.py:350 in │ │ search_lyrics │ │ │ │ 347 │ │ """ │ │ 348 │ │ │ │ 349 │ │ for lyrics_provider in self.lyrics_providers: │ │ ❱ 350 │ │ │ lyrics = lyrics_provider.get_lyrics(song.name, song.ar │ │ 351 │ │ │ if lyrics: │ │ 352 │ │ │ │ logger.debug( │ │ 353 │ │ │ │ │ "Found lyrics for %s on %s", song.display_name │ │ │ │ C:\spotDL\venv\Lib\site-packages\spotdl\providers\lyrics\synced.py:62 in │ │ get_lyrics │ │ │ │ 59 │ │ - The lyrics of the song or None if no lyrics were found. 
│ │ 60 │ │ """ │ │ 61 │ │ │ │ ❱ 62 │ │ lyrics = syncedlyrics.search(f"{name} - {artists[0]}", allo │ │ 63 │ │ │ │ 64 │ │ return lyrics │ │ 65 │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\__init__.py:40 in search │ │ │ │ 37 │ lrc = None │ │ 38 │ for provider in _providers: │ │ 39 │ │ logger.debug(f"Looking for an LRC on {provider.__class__.__ │ │ ❱ 40 │ │ lrc = provider.get_lrc(search_term) │ │ 41 │ │ if is_lrc_valid(lrc, allow_plain_format): │ │ 42 │ │ │ logger.info( │ │ 43 │ │ │ │ f'synced-lyrics found for "{search_term}" on │ │ {provider.__class__.__name__}' │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:30 │ │ in get_lrc │ │ │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ ❱ 30 │ │ a_tag = soup.find_all("a", string=lambda t: text_match(t) > │ │ 31 │ │ if not a_tag: │ │ 32 │ │ │ return None │ │ 33 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2030 in find_all │ │ │ │ 2027 │ │ if not recursive: │ │ 2028 │ │ │ generator = self.children │ │ 2029 │ │ _stacklevel = kwargs.pop('_stacklevel', 2) │ │ ❱ 2030 │ │ return self._find_all(name, attrs, string, limit, generat │ │ 2031 │ │ │ │ │ │ │ _stacklevel=_stacklevel+1, **kwargs │ │ 2032 │ findAll = find_all # BS3 │ │ 2033 │ findChildren = find_all # BS2 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:841 in _find_all │ │ │ │ 838 │ │ │ except StopIteration: │ │ 839 │ │ │ │ break │ │ 840 │ │ │ if i: │ │ ❱ 841 │ │ │ │ found = strainer.search(i) │ │ 842 │ │ │ │ if found: │ │ 843 │ │ │ │ │ results.append(found) │ │ 844 │ │ │ │ │ if limit and len(results) >= limit: │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2320 in search │ │ │ │ 2317 │ │ # Don't bother with Tags if we're searching for text. 
│ │ 2318 │ │ elif isinstance(markup, Tag): │ │ 2319 │ │ │ if not self.string or self.name or self.attrs: │ │ ❱ 2320 │ │ │ │ found = self.search_tag(markup) │ │ 2321 │ │ # If it's text, make sure the text matches. │ │ 2322 │ │ elif isinstance(markup, NavigableString) or \ │ │ 2323 │ │ │ │ isinstance(markup, str): │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2291 in search_tag │ │ │ │ 2288 │ │ │ │ │ found = markup │ │ 2289 │ │ │ │ else: │ │ 2290 │ │ │ │ │ found = markup_name │ │ ❱ 2291 │ │ if found and self.string and not self._matches(found.stri │ │ 2292 │ │ │ found = None │ │ 2293 │ │ return found │ │ 2294 │ │ │ │ C:\spotDL\venv\Lib\site-packages\bs4\element.py:2352 in _matches │ │ │ │ 2349 │ │ │ return markup is not None │ │ 2350 │ │ │ │ 2351 │ │ if isinstance(match_against, Callable): │ │ ❱ 2352 │ │ │ return match_against(markup) │ │ 2353 │ │ │ │ 2354 │ │ # Custom callables take the tag as an argument, but all │ │ 2355 │ │ # other ways of matching match the tag name as a string. │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:30 │ │ in <lambda> │ │ │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ ❱ 30 │ │ a_tag = soup.find_all("a", string=lambda t: text_match(t) > │ │ 31 │ │ if not a_tag: │ │ 32 │ │ │ return None │ │ 33 │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:26 │ │ in <lambda> │ │ │ │ 23 │ │ # Just processing the `a` tags whose `href` attribute start │ │ 24 │ │ # and whose text is similar to the query too. 
│ │ https://github.com/maxbachmann/RapidFuzz#scorers │ │ 25 │ │ _t = lambda s: s.lower().replace("-", "") │ │ ❱ 26 │ │ text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(s │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ │ 29 │ │ soup = generate_bs4_soup(self.session, url, parse_only=a_ta │ │ │ │ C:\spotDL\venv\Lib\site-packages\syncedlyrics\providers\lyricsify.py:25 │ │ in <lambda> │ │ │ │ 22 │ │ │ │ 23 │ │ # Just processing the `a` tags whose `href` attribute start │ │ 24 │ │ # and whose text is similar to the query too. │ │ https://github.com/maxbachmann/RapidFuzz#scorers │ │ ❱ 25 │ │ _t = lambda s: s.lower().replace("-", "") │ │ 26 │ │ text_match = lambda t: rapidfuzz.fuzz.token_sort_ratio(_t(s │ │ 27 │ │ href_match = lambda h: h.startswith("/lyric/") │ │ 28 │ │ a_tags_boud = SoupStrainer("a", href=href_match) │ ╰──────────────────────────────────────────────────────────────────────────╯ AttributeError: 'NoneType' object has no attribute 'lower' None [09:40:32] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:32] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:33] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:36] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Piano downloader.py:358 Concerto No. 1 - i. 1st Movement: Allegro Giojoso; ii. 
2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster [09:40:36] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - Piano Concerto No. 1 - i. downloader.py:497 1st Movement: Allegro Giojoso; ii. 2nd Movement: Andante Molto Cantabile; iii. 3rd Movement: Toccata Con Fuoco; 2017 - Remaster, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:36] INFO asyncio_1 - downloader.py:540 None [09:40:39] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - C'est La Vie downloader.py:358 - 2017 Remastered Version None [09:40:39] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:39] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:40] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - C'est La Vie downloader.py:358 - 2017 Remastered Version [09:40:40] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Lend Your downloader.py:358 Love to Me Tonight - 2017 Remastered Version [09:40:41] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - C'est La downloader.py:358 Vie - 2017 Remastered Version None [09:40:42] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - Hallowed Be downloader.py:358 Thy Name - 2017 Remastered Version [09:40:42] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - Hallowed Be downloader.py:358 Thy Name - 2017 Remastered Version None [09:40:42] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - Nobody Loves downloader.py:358 You Like I Do - 2017 Remastered Version [09:40:42] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Hallowed downloader.py:358 Be Thy Name - 2017 Remastered Version 
[09:40:43] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - Nobody Loves downloader.py:358 You Like I Do - 2017 Remastered Version [09:40:43] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Lend downloader.py:358 Your Love to Me Tonight - 2017 Remastered Version [09:40:43] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Lend Your Love to Me downloader.py:497 Tonight - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:43] INFO asyncio_2 - downloader.py:540 [09:40:44] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - C'est La downloader.py:358 Vie - 2017 Remastered Version [09:40:44] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - C'est La Vie - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:44] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Nobody downloader.py:358 Loves You Like I Do - 2017 Remastered Version [09:40:44] INFO asyncio_3 - downloader.py:540 None [09:40:45] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With the Black Spirits - 2017 Remastered Version [09:40:46] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With the Black Spirits - 2017 Remastered Version [09:40:46] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Hallowed downloader.py:358 Be Thy Name - 2017 Remastered Version [09:40:46] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - Hallowed Be Thy Name - downloader.py:497 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:46] INFO asyncio_0 - downloader.py:540 [09:40:46] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - The Enemy downloader.py:358 God Dances With 
the Black Spirits - 2017 Remastered Version [09:40:47] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Nobody downloader.py:358 Loves You Like I Do - 2017 Remastered Version [09:40:47] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - Nobody Loves You Like I downloader.py:497 Do - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:47] INFO asyncio_1 - downloader.py:540 [09:40:48] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - The downloader.py:358 Enemy God Dances With the Black Spirits - 2017 Remastered Version [09:40:48] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - The Enemy God Dances With downloader.py:497 the Black Spirits - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:48] INFO asyncio_3 - downloader.py:540 None [09:40:50] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version None [09:40:50] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:50] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:51] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version [09:40:51] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:52] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Closer to downloader.py:358 Believing - 2017 Remastered Version None [09:40:53] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - L.A. 
Nights downloader.py:358 - 2017 Remastered Version [09:40:54] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - L.A. Nights downloader.py:358 - 2017 Remastered Version [09:40:54] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Two Part downloader.py:358 Invention in D Minor - 2017 Remastered Version [09:40:54] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - L.A. downloader.py:358 Nights - 2017 Remastered Version [09:40:54] DEBUG asyncio_3 - No lyrics found for Emerson, Lake & Palmer - Two Part Invention in D downloader.py:497 Minor - 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:54] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Closer downloader.py:358 to Believing - 2017 Remastered Version [09:40:54] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Closer to Believing - downloader.py:497 2017 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:54] INFO asyncio_3 - downloader.py:540 [09:40:54] INFO asyncio_2 - downloader.py:540 None [09:40:55] DEBUG asyncio_1 - Synced failed to find lyrics for Emerson, Lake & Palmer - New Orleans downloader.py:358 - 2017 Remastered Version [09:40:55] DEBUG asyncio_1 - Genius failed to find lyrics for Emerson, Lake & Palmer - New Orleans downloader.py:358 - 2017 Remastered Version [09:40:55] DEBUG asyncio_1 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - New downloader.py:358 Orleans - 2017 Remastered Version [09:40:58] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - L.A. downloader.py:358 Nights - 2017 Remastered Version [09:40:58] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - L.A. 
Nights - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:58] INFO asyncio_0 - downloader.py:540 [09:40:58] DEBUG asyncio_1 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - New downloader.py:358 Orleans - 2017 Remastered Version [09:40:58] DEBUG asyncio_1 - No lyrics found for Emerson, Lake & Palmer - New Orleans - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:40:58] INFO asyncio_1 - downloader.py:540 None [09:41:02] DEBUG asyncio_3 - Synced failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version None [09:41:03] DEBUG asyncio_2 - Synced failed to find lyrics for Emerson, Lake & Palmer - Tank - 2017 downloader.py:358 Remastered Version [09:41:03] DEBUG asyncio_3 - Genius failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:04] DEBUG asyncio_2 - Genius failed to find lyrics for Emerson, Lake & Palmer - Tank - 2017 downloader.py:358 Remastered Version None [09:41:04] DEBUG asyncio_3 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:04] DEBUG asyncio_0 - Synced failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:05] DEBUG asyncio_0 - Genius failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:05] DEBUG asyncio_2 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Tank - downloader.py:358 2017 Remastered Version [09:41:06] DEBUG asyncio_0 - AzLyrics failed to find lyrics for Emerson, Lake & Palmer - Pirates - downloader.py:358 2017 Remastered Version [09:41:07] DEBUG asyncio_3 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Food for downloader.py:358 Your Soul - 2017 Remastered Version [09:41:07] DEBUG 
asyncio_3 - No lyrics found for Emerson, Lake & Palmer - Food for Your Soul - 2017 downloader.py:497 Remastered Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:41:08] INFO asyncio_3 - downloader.py:540 [09:41:08] DEBUG asyncio_2 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Tank - downloader.py:358 2017 Remastered Version [09:41:08] DEBUG asyncio_2 - No lyrics found for Emerson, Lake & Palmer - Tank - 2017 Remastered downloader.py:497 Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:41:08] INFO asyncio_2 - downloader.py:540 [09:41:09] DEBUG asyncio_0 - MusixMatch failed to find lyrics for Emerson, Lake & Palmer - Pirates downloader.py:358 - 2017 Remastered Version [09:41:09] DEBUG asyncio_0 - No lyrics found for Emerson, Lake & Palmer - Pirates - 2017 Remastered downloader.py:497 Version, lyrics providers: Synced, Genius, AzLyrics, MusixMatch [09:41:10] INFO asyncio_0 - downloader.py:540 [09:41:10] ERROR MainThread - https://open.spotify.com/track/3i5bc53F2glMZC7GFXZQ7T - downloader.py:258 AttributeError: 'NoneType' object has no attribute 'lower' Other details No response imo that's an issue with synced lyrics library. this function should return None instead of raising an exception https://github.com/rtcq/syncedlyrics/pull/8 v4.1.8 will catch such exceptions and will debug print them
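The crash comes down to calling .lower() on a tag whose string is None. A None-safe guard along these lines would avoid it; this is a sketch of the idea, and the actual fix adopted in the syncedlyrics PR may differ:

```python
def normalized(s):
    # The original lambda was `s.lower().replace("-", "")`, which raises
    # AttributeError when BeautifulSoup hands it a tag whose string is
    # None. Treating None as an empty string turns the crash into a
    # harmless fuzzy-match miss.
    if s is None:
        return ""
    return s.lower().replace("-", "")

safe = normalized(None)
matched = normalized("Piano-Concerto")
```

As noted below, spotdl v4.1.8 additionally catches such provider exceptions so one bad lyrics provider no longer aborts the download.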
2025-04-01T06:40:27.124869
2020-08-13T03:30:43
678122153
{ "authors": [ "Rugvip", "andrewthauer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10849", "repo": "spotify/backstage", "url": "https://github.com/spotify/backstage/issues/1937" }
gharchive/issue
Migration fails on 'add_bootstrap_location' for postgres I started seeing the following error when trying to run the backend on the latest code: migration file "20200809202832_add_bootstrap_location.js" failed migration failed with error: insert into "locations" ("id", "target", "type") values ($1, $2, $3) - invalid input syntax for type uuid: "bootstrap" It seems related to #1890, however, I thought postgres migrations were run in CI now, so not sure how this crept through. Yep, the bug is fixed by #1935, but I'll do a proper fix of the actual issue of tests not being run for that code :grin:
2025-04-01T06:40:27.128116
2016-10-31T15:17:20
186310363
{ "authors": [ "codecov-io", "gabrielgerhardsson", "udoprog" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10850", "repo": "spotify/heroic", "url": "https://github.com/spotify/heroic/pull/124" }
gharchive/pull-request
Report when a time series is filtered out during ingest, try#2
Report dropped-by-filter when a filter drops a time series during ingest. This fixes a "XXX:" comment. Also updated tests.
Current coverage is 45.18% (diff: 16.66%)
Merging #124 into master will decrease coverage by 0.01%

    @@            master     #124   diff @@
    ==========================================
      Files          598      598
      Lines        15421    15427     +6
      Methods          0        0
      Messages         0        0
      Branches      1585     1585
    ==========================================
    + Hits          6970     6971     +1
    - Misses        7989     7994     +5
      Partials       462      462

Powered by Codecov. Last update eef540e...472eea2
Thanks! Decrease in test coverage is due to the added lines in the semantic module. Ignoring them for now.
2025-04-01T06:40:27.135624
2015-10-07T08:39:17
110177045
{ "authors": [ "Tarrasch", "boosh", "mfcabrera" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10851", "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/issues/1282" }
gharchive/issue
Highlight root tasks in UI
There's currently no easy way of knowing which tasks in the web UI are root tasks. So if one task creates tens of sub tasks, there will be a large number of tasks in the UI but only one will actually show the entire task graph. It'd be very useful if there was a way to filter the web UI to only show tasks that are root nodes, so as to be able to easily see entire task graphs without specifically needing to know the names of those tasks.
Agreed. Want to send a PR? :)
If I get a chance I'll look into it (but can't make any promises...)
closing this issue.
[x] It has been inactive for +4 months.
[ ] It's not about luigi core, so not as many users are affected about this.
[ ] The change seems quite big, it's unlikely to be sporadically picked up.
[x] The owner hasn't responded or disappeared.
[ ] I don't understand what this issue is about.
[ ] There exists a reasonable workaround for this.
[ ] We need to check if this hasn't been fixed by now (for old issues).
[ ] This is kind of by design and not a bug.
[ ] Resolving this would probably add a lot of complexity.
Every open issue adds some clutter; we try to keep the number of issues small so that it's easier for new collaborators to find their way around. Currently we try to close any issue that meets the first checkbox + one other. Feel free to reopen this issue at any point if you have the intent to continue to work this. :)
2025-04-01T06:40:27.140417
2016-02-09T10:28:02
132382482
{ "authors": [ "MezianeMehdi", "erikbern" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10852", "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/issues/1539" }
gharchive/issue
[Spark Configuration] Can't pass Spark property using the conf when the option contains an equals
In spark configuration, the conf can't have some options with an equals inside:

    [spark]
    conf: spark.executor.extraJavaOptions=-Darchaius.deployment.environment=dev

    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/luigi/worker.py", line 162, in run
        new_deps = self._run_get_new_deps()
      File "/usr/local/lib/python2.7/dist-packages/luigi/worker.py", line 113, in _run_get_new_deps
        task_gen = self.task.run()
      File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 235, in run
        args = list(map(str, self.spark_command() + self.app_command()))
      File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 214, in spark_command
        command += self._dict_arg('--conf', self.conf)
      File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 138, in conf
        return self._dict_config(configuration.get_config().get("spark", "conf", None))
      File "/usr/local/lib/python2.7/dist-packages/luigi/contrib/spark.py", line 261, in _dict_config
        return dict(map(lambda i: i.split('='), config.split('|')))
    ValueError: dictionary update sequence element #0 has length 3; 2 is required

As a fix, it could be split only on the first equals, in luigi/contrib/spark.py:

    def _dict_config(self, config):
        if config and isinstance(config, six.string_types):
            return dict(map(lambda i: i.split('=', 1), config.split('|')))

great if you want to submit a PR!
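The proposed fix comes down to `str.split`'s `maxsplit` argument. A quick standalone sketch of the difference (the config string mirrors the one in the report; `dict_config` is an illustrative adaptation of Luigi's `_dict_config`, without the `six` type check):

```python
def dict_config(config):
    # Split each "key=value" pair only on the FIRST '=' so that values
    # containing '=' (e.g. Java system properties) survive intact.
    return dict(item.split('=', 1) for item in config.split('|'))

conf = 'spark.executor.extraJavaOptions=-Darchaius.deployment.environment=dev'

# Splitting on every '=' yields a 3-element sequence, which dict() rejects:
try:
    dict(item.split('=') for item in conf.split('|'))
except ValueError as e:
    print(e)  # dictionary update sequence element #0 has length 3; 2 is required

# Splitting once keeps the remainder of the value together:
print(dict_config(conf))
# {'spark.executor.extraJavaOptions': '-Darchaius.deployment.environment=dev'}
```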
2025-04-01T06:40:27.143573
2021-05-18T17:38:41
894621901
{ "authors": [ "jamescooke" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10853", "repo": "spotify/luigi", "url": "https://github.com/spotify/luigi/pull/3081" }
gharchive/pull-request
Make task.disable_window be default source of window int
Description
When accessing the task.disable_window property, use the property directly. Do not call the deprecated task.disable_window_seconds.
Motivation and Context
Fixes #3029
The current implementation gives a deprecation warning even when accessing the correct task.disable_window property, which is confusing. Fixing this makes the deprecation warning more meaningful: it will only appear when task.disable_window_seconds is incorrectly accessed.
Have you tested this? If so, how?
I have run this change locally in my employer's Luigi test suite, and it removed 21 warnings when running a small test on a single task. However, that test suite does not use disable_window.
Please could one of @dlstadther, @Tarrasch or another maintainer approve the workflow so that tests run?
2025-04-01T06:40:27.164267
2019-10-17T18:31:49
508648987
{ "authors": [ "dturanski", "nebhale", "sobychacko", "tzolov" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10854", "repo": "spring-cloud/openjdk-docker", "url": "https://github.com/spring-cloud/openjdk-docker/pull/3" }
gharchive/pull-request
Improve addition of Java to Docker image
Previously, the Docker file added a bunch of utilities and downloaded the version of Java directly in the image that was eventually created. This left a bunch of unnecessary and potentially vulnerable packages on the image that was used in production. This change makes the build a multi-stage build and ensures that the network utilities required for downloading only exist on a disposed stage.
In addition to the change to a multi-stage build, this change also swaps from the Pivotal Distribution of OpenJDK to AdoptOpenJDK as part of our commitment to move to an industry standard distribution. It also swaps from Java 8 to Java 11. Trust me, you'll be fine.
I ran a few manual checks of the ./wait-for-it.sh script and can confirm that it is working as expected. LGTM
@nebhale Per some internal team discussions, we feel like we need more time to do testing with the apps that run on SCDF to ensure that they run on both JDK 8 and 11 without any issues. We are going to do such testing in the next release cycle. Can we keep this PR with all the cleanup that you added, but use the latest JDK 8 that has all the patches applied?
@sobychacko You should feel free to make that change before merging. Should the 11 version be updated to the latest?
@sobychacko - Can we merge this to a JDK11 branch for now?
Closing in lieu of #5
2025-04-01T06:40:27.174428
2021-07-01T14:25:42
934905707
{ "authors": [ "avnerstr" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10855", "repo": "spring-cloud/spring-cloud-config", "url": "https://github.com/spring-cloud/spring-cloud-config/issues/1923" }
gharchive/issue
client with multiple profiles
Describe the bug
I try to start my client application with multiple profiles by setting the active profile to: stage, support-stage. I even see this line in the log:

    2021-07-01 17:20:20.236 INFO [springboot-no-ui-tests,,] [ restartedMain] c.c.c.ConfigServicePropertySourceLocator : Located environment: name=springboot-no-ui-tests, profiles=[stage,support-stage], label=null, version=e9e9fda77b46039faa314499ba133f7af8411ef9, state=null

but for some reason only the first profile is being updated from the config server. Does Spring Cloud Config know how to handle multiple profiles coming from clients?
The reason behind it is that we want to have two repositories in Git: one for developers to change their configuration and one for support people. When the client runs, we want it to read the configuration from both places for the same app. For example, here is the Spring Cloud Config Server configuration for that:

    repos:
      cloud-waf-support:
        pattern:
          - 'springboot-no-ui-tests/support*'
        cloneOnStart: true
        uri: https://.../v1/repos/cloud-waf-support
        default-label: main
      cloud-waf:
        pattern:
          - 'springboot-no-ui-tests/*'
        cloneOnStart: true
        uri: https://.../v1/repos/cloud-waf
        default-label: main

If you have another idea that will solve the issue, that would be great.
Eventually I solved this requirement (a repo for support and a repo for dev) with the composite solution:

    cloud:
      config:
        server:
          composite:
            - type: git
              uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/fallback-repo
              default-label: main
              force-pull: true
              username: yyy
              password: xxx
              clone-on-start: true
              repos:
                springboot-no-ui-tests:
                  clone-on-start: true
                  pattern:
                    - 'springboot-no-ui-tests'
                  clone-submodules: false
                  uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/dev-repo
                  force-pull: true
                  default-label: main
                  username: yyy
                  password: xxx
            - type: git
              uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/fallback-repo
              default-label: main
              force-pull: true
              username: xxx
              password: yyy
              clone-on-start: true
              repos:
                springboot-no-ui-tests:
                  clone-on-start: true
                  pattern:
                    - 'springboot-no-ui-tests'
                  clone-submodules: false
                  uri: https://git-codecommit.eu-west-2.amazonaws.com/v1/repos/support-repo
                  force-pull: true
                  default-label: main
                  username: yyy
                  password: xxx

This can be closed.
2025-04-01T06:40:27.197906
2018-03-02T22:35:42
301927031
{ "authors": [ "marcingrzejszczak", "mfeygelson", "pivotal-issuemaster" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10856", "repo": "spring-cloud/spring-cloud-contract", "url": "https://github.com/spring-cloud/spring-cloud-contract/pull/564" }
gharchive/pull-request
Support pattern properties in messaging contracts We found that we were unable to use pattern properties such as anyAlphaUnicode() in messaging contracts and were told to open a bug report. We believe these changes would resolve the issue. @mfeygelson Please sign the Contributor License Agreement! Click here to manually synchronize the status of this Pull Request. See the FAQ for frequently asked questions. @mfeygelson Thank you for signing the Contributor License Agreement! Sweeeeet, great job @mfeygelson and congratulations on your first contribution!
2025-04-01T06:40:27.207673
2020-08-07T17:41:33
675170012
{ "authors": [ "ShahzebAnsari", "fzyzcjy", "spencergibb", "spring-cloud-issues", "sweat123" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10857", "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/1893" }
gharchive/issue
How can I get response body size of a request in gateway?
I have created a custom filter in which I want to access the size of the response body. Is there any way?
Either the content length header or read the whole response body into memory.
Actually I am new to the Spring framework and backend development. I tried the content length header. It worked, but sometimes we don't get this header from the microservice. Could you please share a small code sample for your second method (read the whole response body into memory)? Like in terms of ServerWebExchange or ServerHttpResponse.
same question here. any updates?
same question here. any updates?
Closing due to age of the question. If you would like us to look at this issue, please comment and we will look at re-opening the issue.
I tried to get the request/response body size by rewriting NettyRoutingFilter and NettyWriteResponseFilter. Gateway version is 2.2.9.RELEASE.
Rewrite NettyRoutingFilter:

    ......
    nettyOutbound.withConnection(connection -> {
        connection.channel().attr(TRACE_ID).set(traceId);
        if (log.isTraceEnabled()) {
            log.trace("outbound route: " + connection.channel().id().asShortText()
                    + ", inbound: " + exchange.getLogPrefix());
        }
    });
    return nettyOutbound.send(request.getBody().map(body -> {
        // get request body size here, put size in exchange
        int size = body.readableByteCount();
        exchange.getAttributes().put("gw-request-body-size", size);
        return getByteBuf(body);
    }));
    }).responseConnection((res, connection) -> {
    ......

Rewrite NettyWriteResponseFilter:

    .......
    if (log.isTraceEnabled()) {
        log.trace("NettyWriteResponseFilter start inbound: " + connection.channel().id().asShortText()
                + ", outbound: " + exchange.getLogPrefix());
    }
    ServerHttpResponse response = exchange.getResponse();
    // TODO: needed?
    final Flux<DataBuffer> body = connection
            .inbound()
            .receive()
            .retain()
            .map(byteBuf -> {
                // get response body size here and put the result in exchange
                int respSize = byteBuf.readableBytes();
                exchange.getAttributes().put("gw-response-body-size", respSize);
                return wrap(byteBuf, response);
            });
    .......

Create a custom filter and get the request/response body size from the exchange:

    /**
     * @author luobo.hwz
     */
    @Slf4j
    @Component
    public class RespBodySizeFilter implements GlobalFilter, Ordered {

        @Override
        public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
            return chain.filter(exchange)
                    .then(Mono.defer(() -> {
                        // get result from exchange
                        Integer exchangeReq = exchange.getAttribute("gw-request-body-size");
                        Integer exchangeResp = exchange.getAttribute("gw-response-body-size");
                        log.info("req from exchange: {}", exchangeReq);
                        log.info("resp from exchange: {}", exchangeResp);
                        return Mono.empty();
                    }));
        }

        @Override
        public int getOrder() {
            // filter before NettyWriteResponseFilter
            return NettyWriteResponseFilter.WRITE_RESPONSE_FILTER_ORDER - 1;
        }
    }
2025-04-01T06:40:27.210689
2022-08-22T08:31:42
1346025928
{ "authors": [ "Yanch1994", "hymagic", "skyfour" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10858", "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/2710" }
gharchive/issue
Too many tcp connections cause requests slow
My Spring, Spring Cloud and Gateway versions:
spring boot: 2.3.12.RELEASE
spring cloud version: Hoxton.SR12
gateway version: 2.2.9.RELEASE
I adjusted reactor.netty.ioWorkerCount to cpu * 4 and reactor.netty.ioSelectCount to 1; the httpclient uses the default config. In our production env, when there are more than 10000 TCP clients, requests become too slow and reach 20 seconds, but we couldn't reproduce this in the test env, so I need your help: how should I handle this? Use a fixed httpclient pool? Or something else? I also think 10000 connections should be very easy for the gateway, but I don't know why it is slow.
ok i find it.
May I ask how it was resolved in the end?
How was this resolved in the end?
2025-04-01T06:40:27.213769
2023-12-26T08:44:01
2056183968
{ "authors": [ "fredliex", "kimmking" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10859", "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/3197" }
gharchive/issue
Unexpected changes to query string
Describe the bug
I use Spring Cloud Gateway MVC to reverse proxy Vite. ProxyExchangeHandlerFunction changes the query string part of the URL, causing Vite to reply with a different response.
Sample:
originally: http://localhost:5174/src/components/HelloWorld.vue?scoped=e17ea971&index=0&type=style&vue=&lang.css
changed to: http://localhost:5174/src/components/HelloWorld.vue?scoped=e17ea971&index=0&type=style&vue=&lang.css=
There is an extra '=' character at the end. It all seems to be caused by UriComponentsBuilder.replaceQueryParams in ProxyExchangeHandlerFunction.handle. SCG considers lang.css a URL parameter whose value is empty.
Use stripPrefix or a custom filter to remove the redundant char.
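The mutation is a generic consequence of modeling a query string as key/value pairs and re-serializing it. A minimal Python analogy of what the gateway's round-trip does (an illustration of the phenomenon, not Spring's actual code path) reproduces the exact extra '=':

```python
from urllib.parse import parse_qsl, urlencode

query = 'scoped=e17ea971&index=0&type=style&vue=&lang.css'

# Generic query parsers model the string as key/value pairs, so the bare
# "lang.css" flag becomes the pair ('lang.css', '').
pairs = parse_qsl(query, keep_blank_values=True)

# Re-serializing those pairs appends '=' to every valueless key, which is
# exactly the mutation observed through the proxy.
print(urlencode(pairs))
# scoped=e17ea971&index=0&type=style&vue=&lang.css=
```

Vite treats `?...&lang.css` and `?...&lang.css=` as different URLs, which is why the response changes.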
2025-04-01T06:40:27.217082
2024-09-04T12:39:44
2505243782
{ "authors": [ "pwn-tndn", "spencergibb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10860", "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/3513" }
gharchive/issue
Response body is not coming in SCG
Describe the bug
We are using Spring Cloud Gateway version 4.0.7 with the Jetty client. Our call flow is as follows:
client --> SCG --> stub
A request is sent from the client, which is forwarded to the stub via SCG; the stub responds with headers and data. From SCG back to the client, the response body is not coming through, and the call flow goes into an error state. We have attached a sample application and a Wireshark screenshot. Please take a look.
scg-app-demo.zip
Thank you!
I'm sorry, we only support the netty http client and corresponding routing filters. I don't have the bandwidth to debug custom routing filters.
2025-04-01T06:40:27.224177
2018-12-20T16:12:27
393113136
{ "authors": [ "ChengyuanZhao", "codecov-io" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10861", "repo": "spring-cloud/spring-cloud-gcp", "url": "https://github.com/spring-cloud/spring-cloud-gcp/pull/1329" }
gharchive/pull-request
README file dedup
fixes #1319
The two files turn out to be exactly the same, so a simple include:: is enough.
Codecov Report
Merging #1329 into master will not change coverage. The diff coverage is n/a.

    @@           Coverage Diff            @@
    ##           master    #1329    +/-   ##
    =========================================
      Coverage    67.72%   67.72%
      Complexity    1397     1397
    =========================================
      Files          202      202
      Lines         5643     5643
      Branches       567      567
    =========================================
      Hits          3822     3822
      Misses        1587     1587
      Partials       234      234

    Flag          | Coverage Δ      | Complexity Δ
    #integration  | ?               | ?
    #unittests    | 67.72% <ø> (ø)  | 1397 <ø> (ø) :arrow_down:

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a9ebc50...a00f9b1. Read the comment docs.
2025-04-01T06:40:27.225458
2019-01-30T15:39:22
404829893
{ "authors": [ "elefeint" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10862", "repo": "spring-cloud/spring-cloud-gcp", "url": "https://github.com/spring-cloud/spring-cloud-gcp/pull/1419" }
gharchive/pull-request
[WIP] Pubsub stream binder via synchronous pull So we have something to point at when discussing the streaming/synchronous doc. I am going to fix that commit history. I've created the branch off of pubsub-pull, which caused commits to be duplicated after rebase.
2025-04-01T06:40:27.240005
2023-12-23T19:13:19
2054874365
{ "authors": [ "SimoneGiusso", "sobychacko" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10863", "repo": "spring-cloud/spring-cloud-stream", "url": "https://github.com/spring-cloud/spring-cloud-stream/issues/2878" }
gharchive/issue
spring.cloud.stream.function.bindings property doesn't work as described in documentation In Functional Composition section it is mentioned the following: "For example, if we want to give our toUpperCase|wrapInQuotes a more descriptive name we can do so with the following property spring.cloud.stream.function.bindings.toUpperCase|wrapInQuotes-in-0=quotedUpperCaseInput" However it seems that this property doesn't work. I have an example. This is the yaml file. However when I try to use the name uppercaseAndReverseInput in this test it fails. I did different attempts but without positive results. @SimoneGiusso It looks like you are adding the property under spring.cloud.function.bindings..., which is the incorrect level for that property. It should be under spring.cloud.stream.function.bindings... here.
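For reference, the distinction comes down to one extra level of nesting. A sketch of the corrected placement in YAML; the binding name below is illustrative, modeled on the linked example rather than copied from it:

```yaml
# Correct level for Spring Cloud Stream: spring.cloud.stream.function.bindings
# (not spring.cloud.function.bindings, which is one level too shallow)
spring:
  cloud:
    stream:
      function:
        bindings:
          "uppercaseAndReverse-in-0": uppercaseAndReverseInput
```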
2025-04-01T06:40:27.244349
2017-04-30T08:37:37
225313536
{ "authors": [ "davidwadden", "jvalkeal", "mbogoevici" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10864", "repo": "spring-cloud/spring-cloud-stream", "url": "https://github.com/spring-cloud/spring-cloud-stream/issues/935" }
gharchive/issue
Runtime errors with Boot 2.x
Mostly for a heads-up, but Boot is starting to break a lot of things:

    Caused by: java.lang.NoClassDefFoundError: org/springframework/boot/bind/RelaxedDataBinder
        at org.springframework.cloud.stream.binding.BindingService.validate(BindingService.java:174)
        at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:96)
        at org.springframework.cloud.stream.binding.BindableProxyFactory.bindInputs(BindableProxyFactory.java:221)
        at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:55)

RelaxedDataBinder is gone, see https://github.com/spring-projects/spring-boot/issues/9000
I think we need to create a 2.x branch so that we start to see what issues Spring 5/Boot 2 bring in.
@jvalkeal Yes, the plan is to get started on a 2.x branch in May.
Is there a build snapshot maven repository tracking the 2.x branch anywhere? I don't see one at http://repo.spring.io/libs-snapshot/org/springframework/cloud/spring-cloud-stream/
Not yet, will have one next week once the binders are upgraded as well.
Closing this after merging @jvalkeal 's PR in the 2.0.x branch and making respective branches for all binders.
@davidwadden To your question - the BOM for the Elmhurst release train (based on 2.0) is CI-built now and available here https://repo.spring.io/libs-snapshot-local/org/springframework/cloud/spring-cloud-stream-dependencies/Elmhurst.BUILD-SNAPSHOT/
2025-04-01T06:40:27.255373
2024-05-06T11:47:26
2280691852
{ "authors": [ "robertmcnees" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10865", "repo": "spring-guides/gs-scheduling-tasks", "url": "https://github.com/spring-guides/gs-scheduling-tasks/pull/34" }
gharchive/pull-request
Guide Rewrite This is a rewrite of the guide with many changes. In addition to content changes, structural changes were made to the guide as a potential template for future guide updates to follow. Addresses issues #33, #32, #31, #29, #28. Several changes were made to promote correctness, make for a better user experience, and allow the option to dual publish to Spring Academy in the future. The README.adoc has no text content, but instead links to 3 separate files. This allows spring.io/guides to publish the guide with the full context, including Spring Initializr, while allowing Spring Academy to only be concerned with the guide specific content, located in content.adoc. There should be no change for how users view the guide on spring.io. The README.adoc added an additional conditional section that can be set if rendering for Spring Academy to exclude certain sections. The initial and complete folders were removed in favor of keeping only the solution in the root project directory. This should simplify the user experience by providing fewer places to look for code and create an easier experience when importing the project to an IDE. The user can still follow along with a blank project by starting with Spring Initializr. This format of keeping the code in the root directory makes it possible to easily load the project to Spring Academy as well. A single build tool is used. For this guide, Gradle is used because it is the preferred build tool of the project driving the functionality of the guide, Spring Framework. Having a single build tool should simplify the user experience when importing the project to a local IDE. This ease of import is also important for the user experience when loading into Spring Academy. A lot of references to the common macros project were removed. This was done to prioritize correctness and user experience over ease of guide creation. A few problems exist when trying to use a common template. 
First, the wording may be slightly off, as demonstrated by issue #33. The 'web' text in the line This web application is 100% pure Java... comes from a common file. Second, some teachings were incorrect. For example, in the popular rest-service guide, we advise the user that @ComponentScan will check the com/example package, but that information is not correct. It will in fact check the com/example/restservice package in this particular guide. Third, the commands to package the application as a jar file were not correct. The problem is present in all guides, as described here. References to dated technologies, i.e. web.xml and WAR files, were also removed. This was done when removing links to the common GitHub macro repository in favor of static text. The Build an executable JAR section was renamed to Building the Application and text was added for using Cloud Native Buildpacks and Native Image compilation.
I think that instead of 3 separate files, using ifndef from AsciiDoc Conditionals would be a better approach. If we choose to publish this guide to Spring Academy, we could set the variable env-exclude-spring-initializr to exclude certain sections. This can easily be achieved using a downdoc command:

    npx downdoc -a env-exclude-spring-initializr README.adoc

@Buzzardo what do you think about this use of a conditional? @joemoore do you think this approach could work for Spring Academy?
Converted to draft until I come up with a better solution to importing some common components from the getting started macros GitHub repository.
Thanks for the feedback @mbhave! I added a new file to the getting-started-macros repository that is intended to be a common section at the end of every guide. The 3 files in this PR can be checked into the common macro project at a later date.
guide_intro.adoc
spring_academy_intro.adoc
spring_academy_see_also.adoc
Before those files are checked into the macro project and used as a template for all guides to follow, I'd like to verify that the process to convert content to Spring Academy (reducer, downdoc) is successful. This requires a GitHub action to execute, which should be performed on a non-forked repo.
2025-04-01T06:40:27.267815
2021-11-09T17:05:32
1048859058
{ "authors": [ "OlgaMaciaszek", "snicoll" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10866", "repo": "spring-projects-experimental/spring-native", "url": "https://github.com/spring-projects-experimental/spring-native/issues/1243" }
gharchive/issue
PrimaryDefaultValidatorPostProcessor triggered at different time with AOT leads to incomplete Validator bean definition
In this sample, in AOT mode, NoUniqueBeanDefinitionException is thrown for Validator beans, even though one of the two is annotated with @ConditionalOnMissingBean:

    org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'integrationMessageHandlerMethodFactory': Unsatisfied dependency expressed through method 'messageHandlerMethodFactory' parameter 1; nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.validation.Validator' available: expected single matching bean but found 2: defaultValidator,mvcValidator

The problem is a different order of processing for BeanDefinitionRegistryPostProcessor. I have changed the custom code to use a framework callback and that fixed the issue.
Unfortunately, calling these has the side effect of contributing quite a few more bean definitions, some of them not of the type I'd expect. I am investigating.
This was already fixed as part of #1213 but I've added a test to validate the behaviour. @OlgaMaciaszek your sample app still doesn't work unfortunately as Spring Integration is not yet supported.
Ok - the users will need to wait till that's done then for these kinds of projects. Anyway, that user-provided sample has allowed us to discover and fix at least 3 different issues :) . Thanks, @matus753.
2025-04-01T06:40:27.271833
2019-08-20T17:26:45
482987753
{ "authors": [ "artembilan", "garyrussell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10867", "repo": "spring-projects/spring-amqp", "url": "https://github.com/spring-projects/spring-amqp/pull/1072" }
gharchive/pull-request
GH-1071: JUnit 5 Support Improvements
Resolves https://github.com/spring-projects/spring-amqp/issues/1071
- Remove JUnit4 dependency from RabbitAvailableCondition (minor breaking API change)
- Add purgeAfterEach to @RabbitAvailable
- tabs not spaces in RabbitAvailableCondition (review with ?w=1)
- @LogLevels now requires level
- convert more tests
Does that mean that we are very close to removing the hard dependency on JUnit 4 and only using it for rules in the test module?
I wouldn't say "very close", but certainly "closer". We have one nut to crack - the RepeatableProcessor @Rule. In most cases, it should be easy to replace with @RepeatedTest, but there is one test case where the RepeatableProcessor calls the test method on multiple threads.
I added a couple more conversions with the push; I don't plan on doing any more today, so this can be merged.
2025-04-01T06:40:27.280980
2014-12-25T10:08:10
52857431
{ "authors": [ "WonderCsabo", "jaredsburrows", "paulvi", "primaproxima", "royclarkson" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10868", "repo": "spring-projects/spring-android", "url": "https://github.com/spring-projects/spring-android/issues/23" }
gharchive/issue
what is new for 2.0 ?
Docs http://docs.spring.io/spring-android/docs/2.0.x/reference/pdf/spring-android-reference.pdf do not explain why there is a major version increase.
@royclarkson when do you plan to release 2.0? There has been no activity on this repo since December. :cry:
@paulvi 2.0 was started in response to some changes in Spring Framework's RestTemplate. Spring for Android's RestTemplate had made some different API choices, and the initial goal of 2.0 was to bring those two close to parity. The major version number update was because of some of these minor, but breaking changes.
@WonderCsabo Unfortunately, we've had changes in priorities which mean this project is not receiving much attention right now. I'm happy to facilitate merging PRs, triaging issues, and pushing out releases.
Thanks! I was just asking so we can communicate this correctly in the downstream AndroidAnnotations project.
@royclarkson BTW what is stopping you from releasing 2.0 as the current HEAD? Are there any blocking issues or missing features?
There's not anything critical, IIRC. I had gone through and merged in many of the improvements and updates from Spring Framework already. The main outstanding issue is the OkHttp support as mentioned in #24, and of course doc updates.
Alright, OkHttp (#24) support is in a better place now. Here are a few other outstanding items that would be nice to have addressed prior to pushing a GA:
- remove Jackson 1.x support
- update dependencies and fix resulting issues
- https://jira.spring.io/browse/ANDROID-168
- https://jira.spring.io/browse/ANDROID-163
- https://jira.spring.io/browse/ANDROID-143
Went ahead and knocked out those first two items on the list.
What is the ETA of the final release of 2.0?
2.0.0.M3 is now available for testing. It includes the latest OkHttp improvements. @jaredsburrows unfortunately, I don't have a specific ETA for the GA. I'm juggling some priorities, but working on getting those last few issues cleaned up. Thanks.
Hi, Roy.
Will there be support for PATCH or should I like for another solution? If anyone has a good reference in solving this usecase, I'm all eyes. Thanks. We have PATCH support in the latest 2.0 milestone. You can see the usage in code via a search. Most versions (or all?) of the native Android HTTP clients do not support PATCH, however. You'll need to include the dependency for either the Android port of Apache HttpClient 4.3, or OkHttp to make use of it. Hope that helps.
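Roy's last comment says PATCH requires either the Android port of Apache HttpClient 4.3 or OkHttp on the classpath. A hypothetical Gradle sketch of such a setup follows — the artifact coordinates are real Maven coordinates, but the versions are placeholders I'm assuming, not taken from the thread:

```groovy
dependencies {
    // Spring for Android RestTemplate (milestone mentioned in the thread)
    compile 'org.springframework.android:spring-android-rest-template:2.0.0.M3'

    // Either OkHttp... (version placeholder)
    compile 'com.squareup.okhttp:okhttp:2.x'

    // ...or the Android port of Apache HttpClient 4.3 (version placeholder)
    compile 'org.apache.httpcomponents:httpclient-android:4.3.x'
}
```

Only one of the two HTTP client dependencies is needed; RestTemplate picks up the corresponding ClientHttpRequestFactory.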
2025-04-01T06:40:27.289993
2024-08-15T08:38:41
2467653914
{ "authors": [ "jgrandja", "wapkch" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10869", "repo": "spring-projects/spring-authorization-server", "url": "https://github.com/spring-projects/spring-authorization-server/issues/1691" }
gharchive/issue
Token exchange failed when the subject token is a client_credentials granted access token

Describe the bug
Token exchange fails when the subject token is an access token obtained via the client_credentials grant.

To Reproduce

1. Build a Spring Authorization Server which enables the client_credentials grant for 'user-client' and the token-exchange grant for 'messaging-client':

```yaml
spring:
  security:
    oauth2:
      authorizationserver:
        client:
          test-client:
            registration:
              client-id: "user-client"
              client-secret: "{noop}user"
              client-authentication-methods:
                - client_secret_basic
              authorization-grant-types:
                - client_credentials
              scopes:
                - user.read
          token-client:
            registration:
              client-id: "messaging-client"
              client-secret: "{noop}messaging"
              client-authentication-methods:
                - client_secret_basic
              authorization-grant-types:
                - urn:ietf:params:oauth:grant-type:token-exchange
              scopes:
                - message.read
```

2. Get an access token for 'user-client' using the client_credentials grant.
3. Build resource servers 'user-service' and 'messaging-service'; user-service remote-calls messaging-service.
4. Access resource server 'user-service' with the access token above.
5. Token exchange fails because the client_credentials granted access token does not have a 'principal' attribute, but OAuth2TokenExchangeAuthenticationProvider requires a principal to be available via the subject_token for impersonation or delegation use cases.

Expected behavior
As per https://datatracker.ietf.org/doc/html/rfc8693, there doesn't appear to be any explicit prohibition against using an access token obtained through the client_credentials grant for token exchange.

@wapkch

> As per https://datatracker.ietf.org/doc/html/rfc8693, there doesn't appear to be any explicit prohibition against using an access token obtained through the client_credentials grant for token exchange.

This is true, but at the same time the spec does not make any reference to the client_credentials grant type either. Furthermore, looking at the examples in Appendix A. Additional Token Exchange Examples, you will see the subject tokens referenced contain<EMAIL_ADDRESS>and<EMAIL_ADDRESS>that clearly indicate a user principal. As well, in 1.1. Delegation vs. Impersonation Semantics, it states:

> One common use case for an STS (as alluded to in the previous section) is to allow a resource server A to make calls to a backend service C on behalf of the requesting user B

The word "user" is referenced in many parts of the spec, which implies there is a Resource Owner principal associated with the subject token.

I'm curious, why are you using a client_credentials obtained access token to represent the subject token in the token_exchange grant? If the client needs another scope (message.read as per your example), why wouldn't the messaging-client be configured for the client_credentials grant and obtain a new access token with the required scope?

@jgrandja That makes a lot of sense! Thanks for the detailed explanation. You can close this now.
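As a sketch of the alternative Jan suggests — configuring messaging-client for the client_credentials grant so it can obtain its own message.read token directly, instead of exchanging the subject token — the registration could hypothetically look like this, following the same property layout the reporter used (illustrative only, not a verified configuration from the thread):

```yaml
spring:
  security:
    oauth2:
      authorizationserver:
        client:
          token-client:
            registration:
              client-id: "messaging-client"
              client-secret: "{noop}messaging"
              client-authentication-methods:
                - client_secret_basic
              authorization-grant-types:
                - client_credentials
              scopes:
                - message.read
```

With this, messaging-client requests its own access token from the token endpoint rather than relying on a principal carried in someone else's client_credentials token.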
2025-04-01T06:40:27.359729
2023-01-11T17:28:26
1529420673
{ "authors": [ "jfarjona", "schauder" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10870", "repo": "spring-projects/spring-data-relational", "url": "https://github.com/spring-projects/spring-data-relational/issues/1410" }
gharchive/issue
Provide access to PreparedStatement in the JdbcTemplate classes. Hello, I am building an application where I need to have direct access to the resulting PreparedStatement before it is executed by the different methods of the template. Currently the classes hide it from developers, making it less flexible. I understand the approach, but in my case (where the execution happens in a reactor environment) it is impossible to make use of Spring Data. You can provide access to the PreparedStatement as you already have it. Thanks, Juan

PS. Trying to use MemPOI.

Could you please explain in more detail what you are trying to do and why? We generally don't give access to data structures we consider internal, because every such public API drives up maintenance.

Hi Jan, thanks for such a quick answer. I think the best is to show you a piece of code (incomplete) so you can understand the problem...

```java
// Using Apache POI and MemPOI (https://github.com/firegloves/MemPOI)
XSSFWorkbook workbook = new XSSFWorkbook();
try {
    // Contains a list of SQL statements with named parameters.
    List<String> reports = loadReports();
    if (reports != null) {
        MempoiBuilder builder = MempoiBuilder.aMemPOI()
                .withWorkbook(workbook)
                .withAdjustColumnWidth(true);
        int i = 0;
        for (String sql : reports) {
            PreparedStatement statement = dataSource.getConnection()
                    .prepareStatement(sql); // @Wired DataSource dataSource...
            NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(dataSource);
            Map<String, Object> parameters = new HashMap<>();
            // given parameters from and to are Instant
            parameters.put("from", Timestamp.from(from));
            parameters.put("to", Timestamp.from(to));
            String name = names.get(i);
            // Code below is just to show how MemPOI expects data to assemble the workbook.
            // It needs to run the assemblage together because of the links between the
            // workbook and the sheets. Not ideal, but understandable.
            PreparedStatementCallback<MempoiSheet> prepStmtCallBack = preparedStatement -> {
                // MemPOI expects a PreparedStatement, but that is executed when the whole
                // workbook is assembled. See code below.
                MempoiSheet sheet = MempoiSheetBuilder.aMempoiSheet()
                        .withSheetName(name)
                        .withPrepStmt(preparedStatement)
                        .build();
                return sheet;
            };
            // Is there any way to attach this execution to the Future<> that is assembled
            // by MemPOI? See code below...
            MempoiSheet sheet = template.execute(sql, parameters, prepStmtCallBack);
            builder.addMempoiSheet(sheet);
            i++;
        }
        MemPOI memPOI = builder.build();
        CompletableFuture<MempoiReport> fut = memPOI.prepareMempoiReport();
        // Here is the problem:
        // MemPOI executes the queries from the prepared statements when fut is executed...
        // but at this point, all prepared statements are closed. Obviously, they were
        // run through when the callback was called.
        MempoiReport report = fut.get();
        workbook.close();
        // Now we add the pivots by hand.
    } else {
        log.error("Error: cannot load report information!");
    }
} catch (Exception ex) {
    // ...
}
```

If I could get the PreparedStatement with the parameters set and ready to be "queried", then that would solve my problem. Obviously, I am responsible for closing it and freeing all resources. I cannot use getPreparedStatementCreator, nor ParsedSql... there is simply no option in the way all the nice utility classes are protected. So I have no way to use such a great idea of the named parameters template outside of the limitations of the provided methods. That limits a lot.
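The core difficulty described above is a lifecycle mismatch: the callback-scoped PreparedStatement is closed when the template's callback returns, but MemPOI only runs the queries later, when the CompletableFuture executes. A pure-JDK sketch of that pitfall follows — the Resource class is a made-up stand-in for a PreparedStatement, not MemPOI or Spring API:

```java
import java.util.function.Supplier;

public class DeferredResourceDemo {

    /** Stand-in for a PreparedStatement: usable only until it is closed. */
    static class Resource implements AutoCloseable {
        private boolean closed;

        String query() {
            if (closed) {
                throw new IllegalStateException("resource already closed");
            }
            return "rows";
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    /**
     * Mimics the template-with-callback pattern: the resource only lives for the
     * duration of the callback, but the returned work runs later.
     */
    static Supplier<String> prepareDeferredWork() {
        try (Resource statement = new Resource()) {
            return statement::query; // captures the resource without using it yet
        } // <-- the resource is closed here, before the deferred work ever runs
    }

    public static void main(String[] args) {
        Supplier<String> deferred = prepareDeferredWork();
        try {
            deferred.get();
            System.out.println("unexpected: deferred work succeeded");
        }
        catch (IllegalStateException ex) {
            System.out.println("deferred work failed: " + ex.getMessage());
        }
    }
}
```

Any solution has to either run the query inside the callback, or hand ownership of the still-open statement (and its connection) to whoever completes the future.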
2025-04-01T06:40:27.415805
2023-01-06T11:45:14
1522451188
{ "authors": [ "StefanMessner", "asndevever", "bclozel", "jelena-pesevski", "marcusdacoregio", "marichka-spin", "schmoellphi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10871", "repo": "spring-projects/spring-graphql", "url": "https://github.com/spring-projects/spring-graphql/issues/594" }
gharchive/issue
Secured '/graphql' endpoint

Hello, team. Please help me to understand what I'm doing wrong. I'm trying to migrate our Spring Boot application from version 2.7.5 to 3.0.1. We have secured the '/graphql' endpoint using a SecurityFilterChain bean:

```java
.authorizeHttpRequests((authorizeRequests) ->
        authorizeRequests.requestMatchers("/graphql").access(new AdminAuthManager()))
```

```java
public static class AdminAuthManager implements AuthorizationManager<RequestAuthorizationContext> {

    @Override
    public AuthorizationDecision check(Supplier<Authentication> authentication,
            RequestAuthorizationContext object) {
        if (isAnonymous()) {
            return new AuthorizationDecision(false);
        }
        return new AuthorizationDecision(true);
    }
}
```

The request is received by my GraphQL controller and the correct response is returned from the DB, but at the end of request execution it is returned to Spring's AuthorizationFilter and the SecurityContext is empty, which causes the full request to fail:

```
2023-01-06 13:03:27,995 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398385 DEBUG graphql.GraphQL - Execution 'fbce998b-d787-2caa-f857-0441c3b0d249' completed with zero errors
2023-01-06 13:03:27,996 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398386 DEBUG o.s.g.s.webmvc.GraphQlHttpHandler - Execution complete
2023-01-06 13:03:27,996 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398386 DEBUG o.s.w.c.r.async.WebAsyncManager - Started async request
2023-01-06 13:03:27,997 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398387 DEBUG o.s.w.c.r.async.WebAsyncManager - Async result set, dispatch to /admin/graphql
2023-01-06 13:03:27,997 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: c29b9c53cfabab31] 2398387 DEBUG o.s.web.servlet.DispatcherServlet - Exiting but response remains open for further handling
2023-01-06 13:03:27,998 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: f9375945ff600474] 2398388 DEBUG o.s.security.web.FilterChainProxy - Securing POST /graphql
2023-01-06 13:03:28,007 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: eee2beca764645fd] 2398397 DEBUG o.s.s.w.a.AnonymousAuthenticationFilter - Set SecurityContextHolder to anonymous SecurityContext
2023-01-06 13:03:30,972 [Traceid: 63b7ffff50b995f1f9375945ff600474, SpanId: 14b43c683496f98d] 2401362 DEBUG o.s.s.w.a.Http403ForbiddenEntryPoint - Pre-authenticated entry point called. Rejecting access
2023-01-06 13:03:30,974 [Traceid: , SpanId: ] 2401364 DEBUG o.s.security.web.FilterChainProxy - Securing POST /error
......
2023-01-06 13:03:30,980 [Traceid: 63b800023f4fe8e2a1e95d2c2d8981fa, SpanId: 4fe5b407a4134d43] 2401370 DEBUG o.s.s.w.a.AnonymousAuthenticationFilter - Set SecurityContextHolder to anonymous SecurityContext
2023-01-06 13:03:32,319 [Traceid: 63b800023f4fe8e2a1e95d2c2d8981fa, SpanId: a1e95d2c2d8981fa] 2402709 DEBUG o.s.s.w.a.Http403ForbiddenEntryPoint - Pre-authenticated entry point called. Rejecting access
```

I don't understand why the Authentication is not propagated in the SecurityContext. Thank you for any help.

Could you share a sample application (something we can git clone or download) that shows the issue?

Hello, I've implemented a sample application for you that reproduces the issue. Please download from here https://github.com/idun-corp/neo4j-test The issue itself is described in HELP.md

Thanks @marichka-spin for the sample application. I've managed indeed to reproduce the issue, but I'm not familiar enough with the security setup to understand what changed. @rwinch could you have a look? Here's my current understanding of the problem. The application configures an InternalAuthFilter before the UsernamePasswordAuthenticationFilter; this custom filter "hard codes" a PreAuthenticatedAuthenticationToken authentication in the Spring Security context. I guess this is a simplified version of an existing implementation.
This authentication is propagated as expected from the SecurityContextHolder to the GraphQL context during the execution of the GraphQL request. The GraphQL Controller com.proptechos.neo4j.graphql.DogQueryResolver is called as expected as well and returns the response object. Once the response has been completed asynchronously in the org.springframework.graphql.server.webmvc.GraphQlHttpHandler, the security context is overwritten with an anonymous user:

```
2023-01-11T16:37:12.216+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Securing POST /graphql
2023-01-11T16:37:12.253+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Secured POST /graphql
2023-01-11T16:37:12.436+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.security.web.FilterChainProxy : Securing POST /graphql
2023-01-11T16:37:12.440+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.s.w.a.AnonymousAuthenticationFilter : Set SecurityContextHolder to anonymous SecurityContext
2023-01-11T16:37:12.441+01:00 DEBUG 67097 --- [nio-8083-exec-1] o.s.s.w.a.Http403ForbiddenEntryPoint : Pre-authenticated entry point called. Rejecting access
```

It looks like the async dispatch of the request is bypassing this custom filter. Is there something missing in the security configuration of this application? Do you know why we're getting a different behavior vs Spring GraphQL 1.0? If I'm not mistaken, we were already completing the GraphQL HTTP request asynchronously in the previous version. I'm wondering if this part of the Spring Security 6.0 upgrade docs is relevant.

@bclozel sorry, have you managed to solve this issue?

@marichka-spin sorry, have you managed to solve this issue?

Is there any update on this issue? I can actually reproduce it.

We currently have the same problem: the SecurityContext is not available after the request filter, so Spring Security blocks the request. By disabling authorization for ASYNC dispatchers, we can work around it.
```kotlin
.authorizeHttpRequests {
    it.dispatcherTypeMatchers(DispatcherType.ASYNC).permitAll()
}
```

But I'm still not sure; this looks like a bug in Spring GraphQL to me.

Investigating a bit deeper into the problem actually showed me that the root problem is that Spring for GraphQL uses an async request dispatcher approach. Since Spring Security 6, the SecurityContext is not "persisted" automatically anymore, and is thus not available for subsequent requests [see: Persisting Authentication]. Therefore, we need to manually register the SecurityContextRepository to make the context available for the async dispatcher:

```java
http
    // ...
    .securityContext((securityContext) -> securityContext
        .securityContextRepository(new DelegatingSecurityContextRepository(
            new RequestAttributeSecurityContextRepository(),
            new HttpSessionSecurityContextRepository()
        ))
    );
```

For me this solved the problem.

Hi, I have the same problem. This solution is not working for me; can you suggest in what direction to dig? By the way, config.dispatcherTypeMatchers(DispatcherType.ASYNC).permitAll(); works fine, but it is not how security should be configured.

Any updates on this issue? Running into the same problem.

Sorry about the delayed response. I believe @StefanMessner is right, as this behavior is due to a Spring Security behavior change that is documented in the migration guide. Adding the following to the sample application "fixes" the issue.
```java
@Bean
public SecurityFilterChain filterChain(HttpSecurity http, InternalAuthFilter internalAuthFilter) throws Exception {
    return http.addFilterBefore(internalAuthFilter, UsernamePasswordAuthenticationFilter.class)
            // disabling explicit save
            .securityContext((securityContext) -> securityContext.requireExplicitSave(false))
            ...
```

Developers should look into this topic to decide whether changing the default in their application is a good fit. I'm closing this issue as a result.

Hi everyone. Since Spring Security 6.0 the SecurityContext must be explicitly saved by the user if they wish the context to be available in subsequent requests. The built-in authentication mechanisms from Spring Security already do that, and you can configure which SecurityContextRepository should be used to save the SecurityContext. However, when you specify a custom authentication filter without extending AuthenticationFilter/AuthenticationWebFilter, you also have to be aware that you should save the context.

Considering the example from https://github.com/idun-corp/neo4j-test, there is an InternalAuthFilter that sets the SecurityContext in the SecurityContextHolder, but it does not save it anywhere else to be available in the ASYNC request. That said, I opened a PR with a suggested change to the implementation of InternalAuthFilter. The 6.0 Migration Guide has some details about this as well.

Another solution would be to implement jakarta.servlet.Filter instead of extending OncePerRequestFilter, allowing the filter to be invoked again for the ASYNC dispatcher.
```java
@Component
public class InternalAuthFilter implements Filter {

    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        UserPrincipal principal = UserPrincipal.authenticated();
        // Granting authorities
        List<SimpleGrantedAuthority> grantedAuthorities = principal.getRoles().stream()
                .map(SimpleGrantedAuthority::new)
                .collect(Collectors.toList());
        final PreAuthenticatedAuthenticationToken authentication =
                new PreAuthenticatedAuthenticationToken(principal, null, grantedAuthorities);
        authentication.setAuthenticated(!grantedAuthorities.isEmpty());
        authentication.setDetails(principal);
        SecurityContext context = SecurityContextHolder.createEmptyContext();
        context.setAuthentication(authentication);
        SecurityContextHolder.setContext(context);
        filterChain.doFilter(request, response);
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
            FilterChain filterChain) throws IOException, ServletException {
        doFilterInternal((HttpServletRequest) servletRequest, (HttpServletResponse) servletResponse, filterChain);
    }
}
```
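The behavior in this issue can be reduced to a plain-JDK sketch: a ThreadLocal populated during the initial dispatch is invisible on the thread that handles the ASYNC dispatch, unless the value was explicitly saved somewhere tied to the request. The names below are made up for illustration; this is not Spring Security's implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextPropagationDemo {

    // Stand-in for SecurityContextHolder: per-thread storage.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Stand-in for a request-scoped security context repository.
    static final Map<String, String> REQUEST_ATTRIBUTES = new ConcurrentHashMap<>();

    /** Runs the "initial" and "async" dispatches on two different threads. */
    static String[] simulate() {
        ExecutorService initialDispatch = Executors.newSingleThreadExecutor();
        ExecutorService asyncDispatch = Executors.newSingleThreadExecutor();
        try {
            // Initial dispatch: the filter authenticates, explicitly saves the
            // context with the request, and the ThreadLocal is cleared afterwards.
            initialDispatch.submit(() -> {
                CONTEXT.set("user");
                REQUEST_ATTRIBUTES.put("req-1", CONTEXT.get()); // the explicit save step
                CONTEXT.remove();
            }).get();

            // ASYNC dispatch runs on a different thread: the ThreadLocal is empty there...
            String fromThreadLocal = asyncDispatch.submit(CONTEXT::get).get();
            // ...but the explicitly saved context can be restored from the request.
            String fromRepository = asyncDispatch.submit(() -> REQUEST_ATTRIBUTES.get("req-1")).get();
            return new String[] { fromThreadLocal, fromRepository };
        }
        catch (InterruptedException | ExecutionException ex) {
            throw new IllegalStateException(ex);
        }
        finally {
            initialDispatch.shutdown();
            asyncDispatch.shutdown();
        }
    }

    public static void main(String[] args) {
        String[] result = simulate();
        System.out.println("ThreadLocal on async dispatch: " + result[0]);
        System.out.println("saved context on async dispatch: " + result[1]);
    }
}
```

This is why the fixes above all boil down to the same thing: either save the context explicitly (via a SecurityContextRepository), or re-run the populating filter on the ASYNC dispatch.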
2025-04-01T06:40:27.428192
2020-09-15T16:28:09
702078874
{ "authors": [ "artembilan", "garyrussell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10872", "repo": "spring-projects/spring-kafka", "url": "https://github.com/spring-projects/spring-kafka/pull/1588" }
gharchive/pull-request
GH-1587: Option to Correct Transactional Offsets

Resolves https://github.com/spring-projects/spring-kafka/issues/1587

See the javadoc for ConsumerProperties.setFixTxOffsets() for more information.

cherry-pick to 2.5.x

... and cherry-picked to 2.5.x
2025-04-01T06:40:27.492290
2021-01-03T02:49:21
777561838
{ "authors": [ "Duncol", "artembilan", "garyrussell", "tomazfernandes" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10873", "repo": "spring-projects/spring-kafka", "url": "https://github.com/spring-projects/spring-kafka/pull/1664" }
gharchive/pull-request
GH 920 - Topic-based retry support

Please refer to the RetryTopicConfigurer class' JavaDoc for an overview of the functionalities: https://github.com/tomazfernandes/spring-kafka/blob/GH-920/spring-kafka/src/main/java/org/springframework/kafka/retrytopic/RetryTopicConfigurer.java

I've separated the code into 5 commits:

1. Pausing partitions in the MessageListenerContainer -> so that we don't pause the entire consumer and end up backing off the other partitions' messages more than we should
2. BackOff Manager and ListenerAdapter -> functionality to read a timestamp header and manage the partition's consumption by listening to events
3. RetryTopic functionality -> adds the topics / consumers configuration functionality
4. A few style checks I've missed
5. Some improvements to the javadoc

Thanks, @tomazfernandes we prefer rebasing PRs rather than adding merge commits (we'll rebase at the end anyway).

Ok @garyrussell, I actually tried that, but couldn't push or force push to this branch; it seemed to me because of the open PR (not really experienced with rebasing). Should I try squashing this merge / conflict resolution and rebasing again? Or maybe just leave the conflict there as it was?

No, don't worry; we'll squash and rebase before merging to master anyway; weird that you couldn't force push, though.

Hi @tomazfernandes I finally found some time to take a quick look at this. It is very impressive and seems to be a complete solution, and will be a valuable addition to the framework. I like the approach. It's a huge amount of code to review, though, so it will take some time. That said, I am inclined to merge it as-is and, perhaps, document it as an "experimental" feature, at least initially; I am sure we can get it into next month's 2.7.0-M2 milestone, as long as you have time to address a few issues.

First, a couple of style issues: we wrap our javadocs at column 90 (when possible) and code at 120 (when possible).

Before the milestone, we will need at least some test cases; but the more coverage, the better; we don't want to trigger our Sonar coverage gate (we're currently a little short of 80%, the gate is currently 70%).

We will need to document the feature in src/reference/asciidoc - it probably deserves a whole new chapter, rather than sprinkling stuff throughout the other sections (aside from documenting the new container properties). We would not need the docs for M2, but the sooner, the better. Our only strict asciidoctor rule is one-sentence-per-line, but you can see the other docs for examples.

Thanks again for such a significant contribution!!

Hi @garyrussell, that's awesome news! I'm really glad you liked it, thank you very much, I'm really excited about it. Sure, I'll address these issues and implement the tests, no problem. There are a couple of things I've been meaning to improve as well, so I'll get to it. Also, if you have any suggestions please let me know. When would it be a good timeframe for me to finish the changes to get into M2?

M2 is currently scheduled for Feb 17: https://github.com/spring-projects/spring-kafka/milestone/135 So, we have a few weeks.

Sounds perfect @garyrussell. Thank you very much for the opportunity.
Hi @garyrussell, just a quick update. I've implemented some changes and improvements, as well as more than 90% test coverage, both integration and unit tests. What's missing is updating the javadocs and addressing the style changes, as well as the documentation, which I plan on doing this week. Unfortunately I had covid and that set me back a couple of weeks, otherwise I'd probably have everything ready by now. I'll commit the code as is so you can take a look if you want - it won't build with Gradle due to checkstyle issues, but it should build and run normally on IntelliJ or by disabling checkstyle. What would be the deadline for committing the javadoc / style adjustments in order to make it into M2? Thanks!

@tomazfernandes Thanks. The PR needs to be clean and reviews complete by end of day February 16. The reference manual (which has now moved to spring-kafka-docs/src/main.asciidoc) should be lowest priority and can miss M2 if necessary (although it would be nice to have). FYI, we are not working this Friday (12th) and next Monday (15th).

Thanks for your concern Gary, it wasn't a fun ride at all, but thankfully not as bad as it could have been. I'm well now. About the code, I'll clean up the PR; I think I should have it done by tomorrow. Then I'll work on the documentation until the 16th, but hopefully I'll have it ready sooner. Of course, if you feel that's too close to the M2 release date we can push it back to the following release, although I'd really like it if we can make it for M2.

Hi @garyrussell, I've cleaned the PR and formatted the javadocs and code the way you asked. If you could take a look, I think it should be fine now. I'll start working on the documentation soon to have it ready for M2; please let me know if there's anything else that needs to be done or anything I might have missed. Thanks again for the opportunity!

Please rebase to master; thanks.

@garyrussell, seems like this time I got the rebase right. I'll start working on the documentation.
As for the other improvements, I think I'll try to code them until Tuesday and maybe commit it to a separate branch, if this PR hasn't been merged by then. Then you can decide whether or not it's small enough for you to review and merge to M2, or if it goes to the release after that - I'll be good with either. Also maybe I can separate the commits so it doesn't have to be an all or nothing decision. Thanks a lot again! I'll add this groupId part to the documentation when I make the changes later today. Also, I think I should be able to retrieve the application.properties from the ApplicationContext in order to create a retry configuration, shouldn't I? Or even create a bean for that purpose. Right, the properties are in the environment, but it's separation of concerns; application.properties/.yml is processed by Boot's auto configuration in KafkaAnnotationDrivenConfiguration and KafkaProperties . I think it would be too confusing for automatic configuration via properties to be handled in two different places. Also Boot has much goodness in its property mapping - e.g. some-timeout: 5s Vs. 5000ms for delays, etc., and completion hints/validation in IDE editors, we don't want to reinvent all that here and, if it's not supported, it would be inconvenient for users. I am sure the Boot team will be receptive to a PR submitted to enhance the auto configuration for this important enhancement. Sounds great @garyrussell, I'll look into adding it to Spring Boot's properties then, probably after M2. Thank you very much for your support. @tomazfernandes you don’t need to comment “done” on each change. It spams our inboxes. Just comment if you disagree about something. Thanks. Sorry about that @garyrussell, thanks for letting me know. Hi @garyrussell, just a quick update. I have: fixed a couple of bugs in the original PR, other than that everything is the same there. created a new PR with a few improvements and new features, and updated the documentation accordingly. 
If you want to see the diff between the new and the original PR you can check it out here: https://github.com/tomazfernandes/spring-kafka/pull/2 I know you're probably having a busy day ahead of the new version so feel free to choose whether to include the second PR in M2 or to push it back to the next version. I think it should add value, but I know it's too close to the release date, so I'm good with either way. I also realised I haven't written anything for the 'What's new' part of the documentation, should we write something there? If so, is that something you'd like me to do, or do you prefer doing it yourself? FYI, today is a holiday here, and I'm under quarantine, so I should be available for anything that comes up. Thanks once again for the opportunity, it's been a very exciting experience! Thanks guys for the very precious work that you did/are doing. I am very glad, as this is the exact feature I need right now :). One question though, as I cannot properly configure the back-off for the retries. 
My config is as follows:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaRetryListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(retryConsumerFactory());
    return factory;
}

private ConsumerFactory<String, String> retryConsumerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(configProps);
}

And the listener annotated method is:

@KafkaListener(topics = "test-topic", containerFactory = "kafkaRetryListenerContainerFactory", groupId = "test-group")
@RetryableTopic(
        backoff = @Backoff(100L),
        attempts = 10,
        kafkaTemplate = "kafkaTemplate",
        fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC
)
public void consumeRetry30m(String msg) {
    log.info("Received message: {}", msg);
    throw new RuntimeException();
}

The case I encounter is that the delay seems to be applied only for the first retry; then it goes after ~10s for each consecutive attempt. I've tried various configurations, e.g. FixedDelayStrategy.MULTIPLE_TOPICS, a higher delay, maxDelay. Is there some sample code, like a PoC, showing how to use this e2e (or some integration test maybe)? Thanks again!

Hello @Duncol, thanks for trying out this feature! Which Spring Kafka version are you using? There was an issue with the delay precision that should have been fixed in RC1. Also note that for a 100ms delay you'd have to set the pollTimeout to at most 50ms, as well as the partitionIdleEvent interval, which might not be ideal. Is this delay amount a requirement? In RC1 it should have more accurate timings OOTB with delays above 1 second.

Hi Tomaz, thanks for the quick response!
I am using spring-kafka 2.7.0-M2. This is not a requirement (the actual requirement is far less frequent TBH), just wanted to check how this behaves and stumbled across this weird, nearly linear 'magic' 10s delay, which I don't know where it comes from. I've tried setting this to 60s and at the start it seems quite fine, but then it happened to be quite spontaneous (looks like the 2nd and 3rd msg is delayed by the configured backoff + 10s), but when message throughput is less 'quiet' (seems like less than 10s in between), I get an almost exact 10s delay for each. Maybe my configuration is not sufficient somehow? I've provided solely the @Backoff(60000L) for this case. Is there something to be configured for Kafka perhaps, or more in the retry feature itself?

@Duncol Try upgrading to 2.7.0-RC1; with M2, it was tightly coupled to the poll timeout and idle interval; I saw similar issues with M2 and @tomazfernandes made some improvements that are included in RC1.

@garyrussell @tomazfernandes I've upgraded to RC1, but now it does not seem to retry at all. I still get the KafkaBackoffException (more concise in RC1), logging the approx backoff, but nothing happens after that (needs a restart of the service to kick off the next (single) retry). M2 works well for my configuration (except the aforementioned issues with backoff times) and the only thing I've changed was M2 -> RC1 and the config data types (int/long -> String) in @RetryableTopic.

More info about my approach: my retry is based on RuntimeExceptions (exception thrown -> should retry, no exception -> no further retries). The first retry is initiated in a try/catch around the core service; the catch block sends the wrapped message to the first retry topic via KafkaTemplate. When the first topic's retry is exhausted and the wrapped msg lands in the DLT, I pass that msg to another retry topic (via KafkaTemplate, as for the initial retry topic). After the second retry topic 'completes', the DLT just logs the msg, with no further passing.
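As an aside, the schedule implied by the configuration shown earlier (attempts = 10, @Backoff(100L), FixedDelayStrategy.SINGLE_TOPIC) is easy to write down. This is illustrative arithmetic only, not Spring Kafka's own code: with a fixed backoff and the single-topic strategy, every retry reuses the one "-retry" topic with the same fixed delay, so one would expect attempts - 1 retries of that delay.

```java
import java.util.Collections;
import java.util.List;

// Illustrative only: the retry schedule implied by a fixed @Backoff with
// FixedDelayStrategy.SINGLE_TOPIC is simply attempts - 1 copies of the
// configured delay (attempts includes the first delivery).
public class FixedDelaySchedule {

    public static List<Long> delays(int attempts, long fixedDelayMs) {
        return Collections.nCopies(attempts - 1, fixedDelayMs);
    }
}
```

So for attempts = 10 and a 100 ms backoff one would expect nine retries roughly 100 ms apart - which is what makes the observed ~10 s gaps look wrong.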
Hi @Duncol, thanks a lot for bringing this up! There's indeed a bug when we use the same factory for the KafkaListener and RetryableTopic annotations. It'll be fixed ASAP, but for now, as a workaround, if you specify a different factory instance for the RetryableTopic annotation it should work. This scenario will be added to our integration tests so that it doesn't happen again. Please let us know how it turns out. Thanks again!

@tomazfernandes There still seems to be something amiss - when I changed my test app to use a different factory, I see this:

2021-04-12 16:49:44.152 INFO 35392 --- [ kgh920-0-C-1] com.example.demo.Kgh920Application : foo from kgh920
2021-04-12 16:49:44.657 INFO 35392 --- [etry-1000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-1000
2021-04-12 16:49:45.167 INFO 35392 --- [etry-2000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-2000-1, groupId=kgh920-retry-2000] Seeking to offset 3 for partition kgh920-retry-2000-0
2021-04-12 16:49:45.167 WARN 35392 --- [etry-2000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-2000 is not ready for consumption, backing off for approx. 490 millis.
2021-04-12 16:49:47.175 INFO 35392 --- [etry-2000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-2000
2021-04-12 16:49:47.681 INFO 35392 --- [etry-4000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-4000-4, groupId=kgh920-retry-4000] Seeking to offset 3 for partition kgh920-retry-4000-0
2021-04-12 16:49:47.681 WARN 35392 --- [etry-4000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-4000 is not ready for consumption, backing off for approx. 1494 millis.
2021-04-12 16:49:49.688 INFO 35392 --- [etry-4000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-4000
2021-04-12 16:49:50.200 INFO 35392 --- [etry-8000-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-retry-8000-5, groupId=kgh920-retry-8000] Seeking to offset 3 for partition kgh920-retry-8000-0
2021-04-12 16:49:50.200 WARN 35392 --- [etry-8000-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-retry-8000 is not ready for consumption, backing off for approx. 3488 millis.
2021-04-12 16:49:53.699 INFO 35392 --- [etry-8000-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-retry-8000
2021-04-12 16:49:54.210 INFO 35392 --- [gh920-dlt-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-kgh920-dlt-6, groupId=kgh920-dlt] Seeking to offset 3 for partition kgh920-dlt-0
2021-04-12 16:49:54.210 WARN 35392 --- [gh920-dlt-0-C-1] essageListenerContainer$ListenerConsumer : Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic kgh920-dlt is not ready for consumption, backing off for approx. 7489 millis.
2021-04-12 16:50:01.705 INFO 35392 --- [gh920-dlt-0-C-1] com.example.demo.Kgh920Application : foo from kgh920-dlt

@RetryableTopic(attempts = "5", backoff = @Backoff(delay = 1000, multiplier = 2.0), listenerContainerFactory = "retryFactory")
@KafkaListener(id = "kgh920", topics = "kgh920")
public void listen(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    LOG.info(in + " from " + topic);
    throw new RuntimeException("test");
}

The first retry is +500ms instead of 1s, the next retry is +2s (correct), the next is +2.5s instead of 4, the next retry is +4s instead of 8. I then see 8 seconds before it goes to the DLT - in earlier versions, I am sure that it went straight to the DLT after the 8 second delivery attempt failed.

@garyrussell, there are two things going on there: the first is a bug I've found now where the current topic's delay is being used instead of the next topic's - so it'd wrongly be 0, 1s, 2s, 4s, 8s.
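The expected delay sequence in this example comes straight from the @Backoff arithmetic. A quick plain-Java sketch (illustrative only, not the framework's implementation) of the delays implied by attempts = "5", delay = 1000, multiplier = 2.0:

```java
import java.util.ArrayList;
import java.util.List;

// Recomputes the retry delays implied by an exponential @Backoff:
// attempts includes the first delivery, so there are attempts - 1 retries,
// with each delay multiplied by the configured multiplier.
public class BackoffSchedule {

    public static List<Long> delays(int attempts, long initialDelayMs, double multiplier) {
        List<Long> result = new ArrayList<>();
        double delay = initialDelayMs;
        for (int i = 0; i < attempts - 1; i++) {
            result.add((long) delay);
            delay *= multiplier;
        }
        return result;
    }
}
```

For attempts = 5, delay = 1000 and multiplier = 2.0 this yields 1000, 2000, 4000, 8000 - matching the kgh920-retry-1000 through kgh920-retry-8000 topics in the log.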
The 500ms+- differences are related to the poll timeout, which has to be a lot smaller in the retry topics if we have low backoffs such as 1s - there's not much time to go through the whole pause - partition idle event - resume container - resume consumer process, considering it takes about 500ms to get there in the first place. I've changed the default configuration to do that in the latest PR regarding this. I've already fixed this bug and will submit a PR after I run the tests. If you can test this out again tomorrow with the fixes it'll be great; I think everything should work as expected. I'll also open the PR for the factories bug. Thanks!

@tomazfernandes Separate ListenerFactory works, thanks for the quick response, I can move forward now :). Just one side question (maybe more for you @garyrussell) - what is the planned release date of spring-kafka 2.7.0?

@Duncol Later today https://github.com/spring-projects/spring-kafka/milestones

Found something strange when writing an IT for my retry feature - it seems like messages addressed for the -retry topics are doubled (except the first one)?

I might be raising a false alarm due to some misunderstanding of deeper internals (still learning Kafka), but thought it would be worth mentioning.

Hi @Duncol! Can you share the code where you're getting this list from? Is it a batch consumer? Thanks for mentioning it, your feedback is very important for us!

@tomazfernandes (just mind it's WIP and I'm looking for a cleaner way to register a RecordInterceptor just for tests :) ) The previous screen shows this collection after all retries.

Hmm, that's strange. What's the "callbackKafkaListenerContainerFactory" for? The retry topic's mechanism relies on the battle-tested dead letter publishing recoverer to forward messages, and we didn't see any behavior like this before.
The only possibility I see from the feature side would be if we're registering two consumers per topic, which again never happened in our tests. So what comes to mind is this: can you check if you have two consumer instances for each topic? You might be able to notice that by putting a breakpoint in the pollAndInvoke method in the KafkaMessageListenerContainer class and checking the instance id for each message consumption. The other possibility I see would be if you're for some reason registering two interceptors for the same factory instance; not really sure how to check for that, but probably a breakpoint in the interceptor assignment will do. @garyrussell, any thoughts on this?

According to your screenshot all the interceptors are placing their records into the same consumerRecords collection. So, it might not be a surprise to see the same data in the tests when you produce a record into a topic. Rings a bell?

@tomazfernandes 'callbackKafkaListenerContainerFactory' is for our main @KafkaListener. I'm not placing the @RetryableTopic directly over this listener - instead, I have a separate handler with two listening methods (each having @KafkaListener + @RetryableTopic over them). I also have one method with @DltHandler.
It goes something like this:

(external service)
  --msg--> @KafkaListener(containerFactory = "callbackKafkaListenerContainerFactory")
  --on-error--> @KafkaListener(topics = "first-topic", containerFactory = "callbackRetryKafkaListenerContainerFactory") @RetryableTopic(listenerContainerFactory = callbackRetryAuxKafkaListenerContainerFactory)
  --retry-exhaustion--> @DltHandler
  --send-to-next-retry-topic--> @KafkaListener(topics = "first-topic", containerFactory = "callbackRetryKafkaListenerContainerFactory") @RetryableTopic(listenerContainerFactory = callbackRetryAuxKafkaListenerContainerFactory)
  --retry-exhaustion--> @DltHandler (do nothing more)

callbackRetryKafkaListenerContainerFactory and callbackRetryAuxKafkaListenerContainerFactory share the same ConsumerFactory. Each of those methods has its own logging, which proves that the retry count is correct (i.e. no doubled messages).

I could perhaps place @RetryableTopic directly over the @KafkaListener(containerFactory = "callbackKafkaListenerContainerFactory"), but I wanted to decouple the code this way.

@artembilan so in each retry, the message goes through both @KafkaListener's and @RetryableTopic's listenerContainerFactory (and thus - the configured consumer)? Such a scenario would match my outcome. I would expect that only the initial msg arrival is consumed by the consumer configured for callbackRetryKafkaListenerContainerFactory (i.e. @KafkaListener) and the retries are just utilizing callbackRetryAuxKafkaListenerContainerFactory's (i.e. @RetryableTopic) consumer for single message consumption. Is this the way the retry works?

Hmm, well, then you have two listeners per topic, hence two instances of the same message in your collection... That's the expected behavior, right? I didn't understand what exactly you're trying to achieve with this pattern: a single @KafkaListener method with @RetryableTopic should suffice.
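One plain-Java way to confirm the two-listeners-per-topic hypothesis above is to count how many times each (topic, partition, offset) coordinate appears among the intercepted records: the same coordinate showing up twice means the record really was delivered to two listeners, while distinct offsets are just normal retry traffic. A minimal sketch (coordinates modeled as strings; this helper is hypothetical, not part of Spring Kafka):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Counts occurrences of each record coordinate, e.g. "first-topic-retry/0/3".
// A count greater than 1 for the same coordinate indicates a duplicate delivery.
public class DuplicateCheck {

    public static Map<String, Integer> countByCoordinate(List<String> coordinates) {
        Map<String, Integer> counts = new HashMap<>();
        for (String coordinate : coordinates) {
            counts.merge(coordinate, 1, Integer::sum);
        }
        return counts;
    }
}
```

Feeding the coordinates gathered by a test RecordInterceptor into this map would distinguish true duplicates from separate retry deliveries.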
Also, unless that's somehow a requirement, you don't need to handle forwarding to the next topic manually in the DLT method; instead you should let the exception go all the way back to the listener (outside of any try/catch) and the framework will handle message forwarding for you. Makes sense?

@Duncol Perhaps you misunderstood @tomazfernandes when we had that bug (needing two factories). This is what he meant...

@SpringBootApplication
public class Kgh920Application {

    private static final Logger LOG = LoggerFactory.getLogger(Kgh920Application.class);

    public static void main(String[] args) {
        SpringApplication.run(Kgh920Application.class, args);
    }

    @RetryableTopic(attempts = "5", backoff = @Backoff(delay = 1000, multiplier = 2.0),
            listenerContainerFactory = "retryFactory")
    @KafkaListener(id = "kgh920", topics = "kgh920")
    public void listen(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        LOG.info(in + " from " + topic);
        throw new RuntimeException("test");
    }

    @DltHandler
    public void dlt(String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        LOG.info(in + " from " + topic);
    }

    @Bean
    ConcurrentKafkaListenerContainerFactory<?, ?> retryFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ObjectProvider<ConsumerFactory<Object, Object>> kafkaConsumerFactory,
            KafkaProperties kafkaProps) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory
                .getIfAvailable(() -> new DefaultKafkaConsumerFactory<>(kafkaProps.buildConsumerProperties())));
        return factory;
    }

}

i.e. specify a different factory on the retry annotation. Using a different factory is no longer needed now that the bug has been fixed, but if you still want to do it, it needs to go on the retry annotation.

> Hmm, well, then you have two listeners per topic, hence two instances of the same message in your collection... That's the expected behavior, right?
> I didn't understand what exactly you're trying to achieve with this pattern: a single @KafkaListener method with @RetryableTopic should suffice. Also, unless that's somehow a requirement, you don't need to handle forwarding to the next topic manually in the DLT method, instead you should let the exception go all the way back to the listener (outside of any try/catch) and the framework will handle message forwarding for you. Makes sense?

Given the fact that there is a '-retry' topic created (single topic strategy) for the retry, I assumed that the listener for the initial topic (without the '-retry' suffix) is somehow consuming messages from the '-retry' topics, thus - duplicate msg. I'll check it with the fixes. Thanks a lot again, much appreciate the work you are doing!

The manual forward to the next topic is due to a different backoff/attempt requirement. Can it be reconfigured on the same retry somehow, as a 'second tier approach' maybe?

Hi @Duncol, sorry, I totally missed this message, was cleaning up the inbox and found it now. How is the feature working for you, is it behaving as expected? Can you share more details on your retrial requirements, such as number of attempts, delays, etc? Maybe we can work something out to include this second tier. Thanks and sorry again for taking this long to reply.

Hi @tomazfernandes, sorry for the lack of response, but I was quite occupied with some other tasks. I'll try to implement your suggestion (which looks quite nice and seems a nearly golden bullet for our case - and any further granular adjustments that we might need regarding changing the back-off) and let you know ASAP.

One other thing I've stumbled across which might be interesting: it seems that @RetryableTopic creates an additional consumer with the same clientId.
This causes the app to throw an exception (logged as WARN) regarding initializing MBean ('Already Exists'):

javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=retry-30m-0
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436) ~[?:?]
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1855) ~[?:?]
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:955) ~[?:?]
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:890) ~[?:?]
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:320) ~[?:?]
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) ~[?:?]
    at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64) ~[kafka-clients-2.6.0.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:814) ~[kafka-clients-2.6.0.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:632) ~[kafka-clients-2.6.0.jar:?]
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:699) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:317) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:384) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:206) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:384) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:312) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:257) ~[spring-kafka-2.7.0-M2.jar:2.7.0-M2]
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.3.jar:5.3.3]
    at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
    at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:940) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:591) ~[spring-context-5.3.3.jar:5.3.3]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:767) ~[spring-boot-2.4.2.jar:2.4.2]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759) ~[spring-boot-2.4.2.jar:2.4.2]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:426) ~[spring-boot-2.4.2.jar:2.4.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:326) ~[spring-boot-2.4.2.jar:2.4.2]
    at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:123) ~[spring-boot-test-2.4.2.jar:2.4.2]
    at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:124) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:124) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:244) ~[spring-test-5.3.3.jar:5.3.3]
    at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:138) ~[spring-test-5.3.3.jar:5.3.3]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$6(ClassBasedTestDescriptor.java:350) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.executeAndMaskThrowable(ClassBasedTestDescriptor.java:355) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeTestInstancePostProcessors$7(ClassBasedTestDescriptor.java:350) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]
    at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312) ~[?:?]
    at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735) ~[?:?]
    at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?]
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeTestInstancePostProcessors(ClassBasedTestDescriptor.java:349) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$instantiateAndPostProcessTestInstance$4(ClassBasedTestDescriptor.java:270) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.instantiateAndPostProcessTestInstance(ClassBasedTestDescriptor.java:269) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$2(ClassBasedTestDescriptor.java:259) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at java.util.Optional.orElseGet(Optional.java:362) ~[?:?]
    at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$testInstancesProvider$3(ClassBasedTestDescriptor.java:258) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.execution.TestInstancesProvider.getTestInstances(TestInstancesProvider.java:31) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$prepare$0(TestMethodTestDescriptor.java:101) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:100) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:65) ~[junit-jupiter-engine-5.7.0.jar:5.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$prepare$1(NodeTestTask.java:111) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.prepare(NodeTestTask.java:111) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:79) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) ~[junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) [junit-platform-engine-1.7.0.jar:1.7.0]
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:170) [junit-platform-launcher-1.2.0.jar:1.2.0]
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:154) [junit-platform-launcher-1.2.0.jar:1.2.0]
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:90) [junit-platform-launcher-1.2.0.jar:1.2.0]
    at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:142) [surefire-junit-platform-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:117) [surefire-junit-platform-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:383) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:344) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) [surefire-booter-2.22.0.jar:2.22.0]
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:417) [surefire-booter-2.22.0.jar:2.22.0]

I've tried a separate listenerContainerFactory for the @RetryableTopic with an explicit clientId, different from the one I set for the particular @KafkaListener, but the one from @KafkaListener takes precedence. When I comment out the @RetryableTopic, those additional consumers are not being registered and the problem disappears. Have you encountered something similar maybe? Those WARNs do not break anything, but as I've read, it may cut off some metrics and it pollutes our logfile a bit :)

Hi @Duncol, thanks for bringing this up! I'm having trouble reproducing your issue. When we specify a clientIdPrefix in the @KafkaListener annotation, the prefix gets suffixed by the topic's suffix (e.g. retry-250). And when we don't, Kafka's ConsumerConfig class has a monotonically increasing number that it appends to the consumer's id so that they're unique at least within the same app instance (line 576).
In my tests all consumers end up with different client ids. Which Spring Kafka version are you using? Can you share more details on how we can reproduce the issue? Maybe @garyrussell has something to add? Thanks again for your input!

I can't reproduce it either; when I enable info logging for sample-04 I get these client ids when I add clientIdPrefix = "test" to the listener:

client.id = test-retry-10000-0
client.id = test-retry-4000-0
client.id = test-0
client.id = test-retry-2000-0
client.id = test-dlt-0
client.id = test-retry-8000-0

The exception seems to indicate you have multiple listeners with the same clientIdPrefix. If you can put together a minimal example that exhibits the behavior, we can take a look.
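The uniqueness scheme described above (a user-supplied prefix, an optional retry-topic suffix such as retry-2000 or dlt, and a monotonically increasing per-process counter) can be sketched in a few lines. This is a toy model for illustration only — it is neither Spring Kafka nor Apache Kafka code, and the function name is an assumption:

```python
from itertools import count

# Toy model of unique consumer client-id generation: prefix, optional
# retry-topic suffix, and a process-wide monotonic counter (mirroring
# the ConsumerConfig behaviour described in the thread above).
_counter = count()

def client_id(prefix, topic_suffix=None):
    parts = [prefix]
    if topic_suffix:
        parts.append(topic_suffix)
    parts.append(str(next(_counter)))
    return "-".join(parts)

print(client_id("test"))                # test-0
print(client_id("test", "retry-2000"))  # test-retry-2000-1
print(client_id("test", "dlt"))         # test-dlt-2
```

Because the counter is shared, even two listeners configured with the same prefix and suffix end up with distinct ids.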
2025-04-01T06:40:27.596086
2017-08-06T18:53:00
248264828
{ "authors": [ "gonzalad" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10874", "repo": "spring-projects/spring-tenancy", "url": "https://github.com/spring-projects/spring-tenancy/issues/7" }
gharchive/issue
Support InheritableThreadLocal tenancyContextHolder Strategy Support an InheritableThreadLocal strategy for tenancyContext just as in Spring Security. PR available #8
2025-04-01T06:40:27.662714
2019-07-17T14:21:47
469244798
{ "authors": [ "Code88Hary", "CodeAndChoke", "EvgeniGordeev", "JYOTIRANJANj", "Kakau-preto", "M-Thirumal", "RaveKev", "Splash34", "WJie12", "alexiz10", "awesomeankur", "bagraercan", "chavesrodolfo", "dennisdaotvlk", "dilipkrish", "egch", "fornazieri", "ghostAmaru", "hendisantika", "iVieL", "jahidakhtargit", "jenni", "jonatan-ivanov", "marcusvoltolim", "mimkorn", "nurkan2313", "pawanmundhra", "shirisha-96", "sntour", "suerain", "thiagohmoreira", "ticoaraujo", "tk-png", "xavierKress" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10875", "repo": "springfox/springfox", "url": "https://github.com/springfox/springfox/issues/3052" }
gharchive/issue
springfox-swagger2 version 2.9.2 not compatible with springboot version 2.2.0.M4

Hi, I am using swagger2 2.9.2, springboot 2.2.0.M4, HATEOAS 0.25.1.RELEASE. Below is the runtime error I'm getting:

An attempt was made to call a method that does not exist. The attempt was made from the following location:
springfox.documentation.spring.web.plugins.DocumentationPluginsManager.createContextBuilder(DocumentationPluginsManager.java:152)
The following method did not exist:
org.springframework.plugin.core.PluginRegistry.getPluginFor(Ljava/lang/Object;Lorg/springframework/plugin/core/Plugin;)Lorg/springframework/plugin/core/Plugin;
The method's class, org.springframework.plugin.core.PluginRegistry, is available from the following locations:
jar:file:/C:/Users/EKHTJHD/.gradle/caches/modules-2/files-2.1/org.springframework.plugin/spring-plugin-core/2.0.0.M1/189f78af81f23eef12018a4d4cf50b8a6df8ec0d/spring-plugin-core-2.0.0.M1.jar!/org/springframework/plugin/core/PluginRegistry.class
It was loaded from the following location:
file:/C:/Users/EKHTJHD/.gradle/caches/modules-2/files-2.1/org.springframework.plugin/spring-plugin-core/2.0.0.M1/189f78af81f23eef12018a4d4cf50b8a6df8ec0d/spring-plugin-core-2.0.0.M1.jar
Action: Correct the classpath of your application so that it contains a single, compatible version of org.springframework.plugin.core.PluginRegistry

Below is my gradle configuration:

plugins {
    id 'org.springframework.boot' version '2.2.0.M4'
    //id 'org.springframework.boot' version '2.1.6.RELEASE'
    id 'java'
}
apply plugin: 'io.spring.dependency-management'
group = 'com.in28minutes.rest.webservices'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '1.8'
configurations {
    developmentOnly
    runtimeClasspath { extendsFrom developmentOnly }
}
repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/snapshot' }
    maven { url 'https://repo.spring.io/milestone' }
}
dependencies {
    //implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation
'org.springframework.boot:spring-boot-starter-web'
    compile group: 'org.springframework.hateoas', name: 'spring-hateoas', version: '0.25.1.RELEASE'
    compile group: 'io.springfox', name: 'springfox-swagger2', version: '2.9.2'
    compile group: 'io.springfox', name: 'springfox-swagger-ui', version: '2.9.2'
    //developmentOnly 'org.springframework.boot:spring-boot-devtools'
    runtimeOnly 'com.h2database:h2'
    testImplementation('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
        exclude group: 'junit', module: 'junit'
    }
}
test { useJUnitPlatform() }

If I change the springboot version as below then the issue disappears:

id 'org.springframework.boot' version '2.1.6.RELEASE'

Duplicate of #2932

Facing the same issue. When can we expect the delivery time for this fix?

I'm also facing the same issue and have had no success fixing it. A proposed solution is to add this dependency:

<dependency>
    <groupId>org.springframework.plugin</groupId>
    <artifactId>spring-plugin-core</artifactId>
    <version>1.2.0.RELEASE</version>
</dependency>

but it doesn't work for me. Swagger 2.9.2 and Spring Boot 2.2.0-RELEASE

I have the same issue with Spring Boot 2.2.0-RELEASE

Can confirm for version 2.2.1.RELEASE

Yep, I'm facing the same issue for that boot version

Same issue on version 2.2.2.RELEASE

I fixed with the above suggestion using spring-plugin-core:1.2.0.RELEASE

compile("org.springframework.plugin:spring-plugin-core:1.2.0.RELEASE") { force = true }

and also removed the dependencies org.springframework.boot:spring-boot-starter-data-rest and org.springframework.data:spring-data-rest-hal-browser

+1 (2.2.2.RELEASE)

+1 (2.2.2.RELEASE)

Spring boot: 2.2.2, swagger2: 3.0.0-SNAPSHOT — working for me. I'm using jcenter-snapshots:

<repository>
    <id>jcenter-snapshots</id>
    <name>jcenter</name>
    <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url>
</repository>

If you are not using RepositoryRestMvc, disable it in your Application: @SpringBootApplication(exclude = {RepositoryRestMvcAutoConfiguration.class})

@dilipkrish Do you happen to know how complicated fixing this is? As far as I understand 3.x is using spring-plugin-core:2.x, right? How hard is backporting it? I would like to use spring-hateoas and springfox-swagger together but as far as I understand, this is not possible right now.

@jahidakhtargit @dilipkrish Could one of you please rename this issue so that it will reflect that multiple release versions are involved, something like: springfox-swagger 2.x is not compatible with spring-boot 2.2.x

@Splash34 I have the same issue. But I resolved it with your solution.

I have the same issue. But I resolved it with @LukeHackett's solution.

Springboot 2.2.X does not support SpringFox. Instead I recommend to migrate from springfox to OpenAPI to support Swagger UI. You need to remove all the swagger 2 and springfox dependencies from your project and add the below dependency.
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.3.4</version>
</dependency>

For more details go to this link: https://springdoc.org/migrating-from-springfox.html

In addition to this, to support springboot 2.2.x you need to update the spring plugin core:

<dependency>
    <groupId>org.springframework.plugin</groupId>
    <artifactId>spring-plugin-core</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>

Am currently testing with the latest Spring Boot version 2.2.6 and this issue is persisting with Swagger version 2.9.2. This is a bit annoying as it always blocks your development flow and you end up putting in efforts to troubleshoot which go unproductive. I see a lot of developers posting similar problems, so it would be best if we can find the resolution on the versions soon, or any work-around which can be taken into production projects. Thanks. Happy Coding!!

I can confirm that this is also happening with Spring Boot 2.2.6 and Swagger 2.9.2.

For me, everything is working just as it should. I started having this issue the moment I added this dependency to my pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>

If I remove this dependency, the app starts up fine again.
I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java @Configuration @EnableSwagger2WebMvc @Import(SpringDataRestConfiguration.class) public class SpringFoxConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build(); } } #URL: http://localhost:8080/swagger-ui.html#/ I know it's just a snapshot version at the moment, but for my purpose it's totally fine for now. 
Nevertheless, they have to release a new major version. This resolved my issue. Thanks. Thanks, resolved! I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java @Configuration @EnableSwagger2WebMvc @Import(SpringDataRestConfiguration.class) public class SpringFoxConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build(); } } #URL: http://localhost:8080/swagger-ui.html#/ I know it's just a snapshot 
version at the moment, but for my purpose it's totally fine for now. Nevertheless, they have to release a new major version. This resolved my issue. Thanks. Thanks, resolved! org.springframework.boot:spring-boot-starter-data-rest Yes, it seems it's the org.springframework.boot:spring-boot-starter-data-rest to be the issue, as soon as I remove it, everything works. I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java @Configuration @EnableSwagger2WebMvc @Import(SpringDataRestConfiguration.class) public 
class SpringFoxConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build(); } } #URL: http://localhost:8080/swagger-ui.html#/ I know it's just a snapshot version at the moment, but for my purpose it's totally fine for now. Nevertheless, they have to release a new major version. It works for me with spring-boot-starter-parent 2.2.6.RELEASE. Thanks! I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java 
@Configuration
@EnableSwagger2WebMvc
@Import(SpringDataRestConfiguration.class)
public class SpringFoxConfig {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any())
            .paths(PathSelectors.any()).build();
    }
}

#URL: http://localhost:8080/swagger-ui.html#/

I know it's just a snapshot version at the moment, but for my purpose it's totally fine for now. Nevertheless, they have to release a new major version.

It works for me with spring-boot-starter-parent 2.2.6.RELEASE. Thanks!

Update: It seems to conflict with my neo4j http driver (I am using old neo4j version 2.2.3). I get warnings like "o.s.d.n.mapping.neo4jpersistentproperty : owning classinfo is null for property". After removing the swagger from pom.xml, the neo4j http driver works.

I'm using Spring Boot 2.3.0, and trying to use HATEOAS and Swagger 2 as many of you here, with similar problems of compatibility between them. I already tried every suggestion from here without luck. I found this post that works for me, considering I'm using Gradle: https://dev.to/otaviotarelhodb/how-to-use-springfox-2-9-2-with-spring-hateoas-2-on-gradle-project-6mn Actually works for me with HATEOAS and SWAGGER with the latest Spring Boot version by now.

Thanks!
It works for me :) I got it working with the following setup: #pom.xml <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.2.RELEASE</version> <relativePath /> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <!-- io.springfox setup --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-data-rest</artifactId> <version>3.0.0-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>jcenter-snapshots</id> <name>jcenter</name> <url>http://oss.jfrog.org/artifactory/oss-snapshot-local/</url> </repository> </repositories> #SpringFoxConfig.java @Configuration @EnableSwagger2WebMvc @Import(SpringDataRestConfiguration.class) public class SpringFoxConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build(); } } #URL: http://localhost:8080/swagger-ui.html#/ I know it's just a snapshot version at the moment, but for my purpose it's totally fine for now. 
Nevertheless, they have to release a new major version. Works fine. Thanks Change the version of Springfox of 2.9.2 for LATEST, like this: <!--for Swagger Endpoints support--> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>LATEST</version> </dependency> <!--for Swagger UI support--> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>LATEST</version> </dependency> it's working for me @fornazieri its meant to be as much compatible with previous versions as possible but 3.0.0 is zero config. You can just drop in the springfox-boot-starter dependency and remove any manual dependencies you've added. Also no longer need the @Enable... annotations, if you're using spring boot. yeahh, i changed my version for 3.0.0 and removed the @EnableSwagger2 annotation, now it's working perfectly, thanksss @dilipkrish That's an issue for SB 2.1- in springfox 3.0.0 too. Since we are still on SB 2.0.x and can't upgrade SB just yet ended up with redefining custom beans like this: @Primary @Component("documentationPluginsManager") public class CustomDocumentationPluginsManager extends DocumentationPluginsManager { @Autowired @Qualifier("modelNamesRegistryFactoryPluginRegistry") private PluginRegistry<ModelNamesRegistryFactoryPlugin, DocumentationType> modelNameRegistryFactoryPlugins; @Override public ModelNamesRegistryFactoryPlugin modelNamesGeneratorFactory(DocumentationType documentationType) { return Optional.ofNullable(modelNameRegistryFactoryPlugins.getPluginFor( documentationType)).orElseGet(DefaultModelNamesRegistryFactory::new); } } Basically polyfilling the failing methods with calls to available methods in PluginRegistry in spring-plugin-core:1.2.0.RELEASE. 
Had to override 3 beans - DocumentationPluginsManager, SchemaPluginsManager, TypeNameExtractor - and 1 class BodyParameterSpecificationProvider, preserving its fully qualified package springfox.documentation.builders since it's not a Spring bean.

Man, I solved my problem with only this dependency:

<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-ui</artifactId>
    <version>1.5.1</version>
</dependency>

I started to see this issue after upgrading Sentry to v3. I can't see how it's related, but 2.9.2 used to work with the dependency spring-plugin-core-1.x. Upgrading to spring-plugin-core-2.0.0 did not solve it.

I've upgraded to springfox 3.0.0 following the docs:

implementation 'io.springfox:springfox-boot-starter:3.0.0'
implementation 'io.springfox:springfox-swagger-ui:3.0.0'

and removed the @Enable... from SwaggerConfig. I'm now getting another error and it does not work at all.

09:12:36.547 [restartedMain] ERROR s.d.s.w.p.DocumentationPluginsBootstrapper - Unable to scan documentation context default
java.lang.IllegalStateException: Model already registered with different name.
at springfox.documentation.schema.TypeNameIndexingAdapter.checkTypeRegistration(TypeNameIndexingAdapter.java:55)
at springfox.documentation.schema.TypeNameIndexingAdapter.registerUniqueType(TypeNameIndexingAdapter.java:82)

Tried using this:

classpath("org.springframework.boot:spring-boot-gradle-plugin:2.2.13.RELEASE")
implementation 'io.springfox:springfox-boot-starter:3.0.0'

Still getting the same error.

Still getting the same error with version 2.4.5. Anyone resolved it? Please reply.

Same error, latest version.

None of the above fixed the issue for me, but I've managed to get the 3.0.0-SNAPSHOT version of the springfox library to work, with the following changes to my pom.xml.
Although, going forward, I'm probably going to look to other libraries to provide swagger, such as Spring doc.

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>3.0.0-SNAPSHOT</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.plugin</groupId>
            <artifactId>spring-plugin-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>3.0.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-spring-webflux</artifactId>
    <version>3.0.0-SNAPSHOT</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.plugin</groupId>
            <artifactId>spring-plugin-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.plugin</groupId>
    <artifactId>spring-plugin-core</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>

THANK YOU
2025-04-01T06:40:27.677028
2022-12-07T00:23:54
1480538136
{ "authors": [ "codecov-commenter", "skgbafa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10876", "repo": "spruceid/ssx", "url": "https://github.com/spruceid/ssx/pull/35" }
gharchive/pull-request
Export SIWE

Description
This PR exports the SiweMessage class from siwe as part of ssx and ssx-server. This is done as these packages are commonly used together, and was done to reduce dependencies.

Type
[x] New feature (non-breaking change which adds functionality)

Diligence Checklist
[x] I have performed a self-review of my code
[x] My changes generate no new warnings

Codecov Report
Base: 72.75% // Head: 72.77% // Increases project coverage by +0.02% :tada:
Coverage data is based on head (248666f) compared to base (745e1dd). Patch coverage: 100.00% of modified lines in pull request are covered.

@@ Coverage Diff @@
##             main      #35      +/-   ##
==========================================
+ Coverage   72.75%   72.77%   +0.02%
==========================================
  Files          22       22
  Lines        2679     2681       +2
  Branches      173      173
==========================================
+ Hits         1949     1951       +2
  Misses        730      730

Impacted Files | Coverage Δ
packages/ssx-sdk/src/index.ts | 100.00% <100.00%> (ø)
packages/ssx-server/src/index.ts | 100.00% <100.00%> (ø)

:umbrella: View full report at Codecov.
2025-04-01T06:40:27.678173
2019-08-12T17:29:22
479762679
{ "authors": [ "a-aaronson", "sam-surname" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10877", "repo": "sprydevs/kanboard", "url": "https://github.com/sprydevs/kanboard/pull/2" }
gharchive/pull-request
BUG-1: Fixed the bug to make the table look pretty. Fixed the bug that I was asked to fix. Since you're already working on this code, please capitalize the second "t" in the Time tracking column heading.
2025-04-01T06:40:27.682310
2017-10-02T19:21:59
262196847
{ "authors": [ "JackDanger", "drmorr0" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10878", "repo": "spulec/moto", "url": "https://github.com/spulec/moto/issues/1229" }
gharchive/issue
dicttoxml is GPLv2-licensed I see in this patch the dicttoxml library was added as a requirement, which is GPLv2-licensed. Unfortunately the GPLv2 is not compatible with the Apache2.0 license, which moto is licensed under, and as far as I understand it, this would require moto to switch to a GPLv2-based licensing scheme. (The difference between GPLv2 and LGPLv2 is that LGPL allows you to link libraries in to non-GPL-licensed code, whereas GPL considers even linking of libraries to create a "derivative work"). Any chance we could get a version of moto that doesn't depend on GPLv2-licensed code? Also I noticed that dicttoxml is only used in one place so it seems like it might not be that hard to replace it with something else. @drmorr0 Great find, thank you for this. Yes, we can remove this dependency immediately and provide you with a new version of Moto without a problematic license. Fantastic, thank you. We've been using an ancient version of moto that didn't have this dependency and I was hoping to upgrade to a more recent version -- it's really useful software. Let me know if there's any way I can help with the change. @drmorr0 once #1231 is merged I'll release a new version for you. Please give that a review if you have a moment. @drmorr0 Moto version 1.1.21 is now released and does not depend on dicttoxml Nice! Thank you so much!
2025-04-01T06:40:27.684091
2016-10-11T17:19:10
182322852
{ "authors": [ "majuscule" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10879", "repo": "spulec/moto", "url": "https://github.com/spulec/moto/issues/729" }
gharchive/issue
instance descriptions should include toplevel Public{IpAddress,DnsName} AWS/boto3 .describe_instances calls return instances with top level PublicDnsName and PublicIpAddress keys for instances with public or elastic IPs. moto currently returns this information only nested inside the NetworkInterfaces dict. I believe that this is relevant to the failing test case given here: https://github.com/spulec/moto/pull/730
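To make the expected shape concrete: the fix amounts to promoting the public address fields from the first network interface's Association block up to the top level of each instance description. The helper below is a hypothetical post-processing sketch (moto's real fix lives in its EC2 response templates, not in a function like this):

```python
def add_toplevel_public_fields(instance):
    """Copy PublicIpAddress/PublicDnsName from the first network
    interface's Association up to the top level, matching the shape
    real describe_instances responses have for public/elastic IPs."""
    interfaces = instance.get("NetworkInterfaces", [])
    if interfaces:
        association = interfaces[0].get("Association", {})
        if "PublicIp" in association:
            instance.setdefault("PublicIpAddress", association["PublicIp"])
        if "PublicDnsName" in association:
            instance.setdefault("PublicDnsName", association["PublicDnsName"])
    return instance

desc = {
    "InstanceId": "i-1234567890abcdef0",
    "NetworkInterfaces": [
        {"Association": {"PublicIp": "54.0.0.1",
                         "PublicDnsName": "ec2-54-0-0-1.compute-1.amazonaws.com"}}
    ],
}
print(add_toplevel_public_fields(desc)["PublicIpAddress"])  # 54.0.0.1
```

Instances without a public association are left untouched, which matches real EC2 behaviour for private-only instances.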
2025-04-01T06:40:27.691103
2017-04-13T17:10:14
221626533
{ "authors": [ "coveralls", "gjtempleton", "spulec" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10880", "repo": "spulec/moto", "url": "https://github.com/spulec/moto/pull/897" }
gharchive/pull-request
ContainerInstance deregistration Features Implemented deregister_container_instance - tests added for behaviour around exception raising etc. Changes Changed ContainerInstance object attributes to use snake case for consistency Added myself to the authors list Coverage decreased (-0.07%) to 93.466% when pulling f3aff0f356196f29c3b7598499bf452fcd05e650 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Coverage decreased (-0.2%) to 93.342% when pulling 69b86b2c7a25b225fc9da2f76f6dbb9f5adfee23 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Hey, this looks great. It seems to be breaking on Python 3. If you can get that fixed, I'll be happy to merge. Coverage increased (+0.2%) to 93.724% when pulling 3cbeb551604aba658eb4993bce81dee97b7dc138 on gjtempleton:TaskDraining into 30b1de507cb08794edade449505fbddd0a8a043b on spulec:master. Serves me right for testing locally using 3.5 rather than 3.6. All good now. Coverage increased (+0.005%) to 93.724% when pulling 47bc23f4810051a7b6670f276ed5229fd00baa6a on gjtempleton:TaskDraining into 34c711189f4961eeee6a5de32e8106ec0bdb48bf on spulec:master. Looks great, thank you!
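As a rough sketch of the exception-raising behaviour the new tests cover (ECS refuses to deregister a container instance that still has running tasks unless a force flag is set), here is a simplified toy model — the class and exception names are illustrative, not moto's actual implementation:

```python
class ClusterError(Exception):
    pass

class SimpleEcsBackend:
    """Toy ECS backend modelling deregister_container_instance semantics."""

    def __init__(self):
        # container_instance_id -> number of running tasks
        self.container_instances = {}

    def deregister_container_instance(self, instance_id, force=False):
        if instance_id not in self.container_instances:
            raise ClusterError("Container instance not found")
        if self.container_instances[instance_id] > 0 and not force:
            raise ClusterError(
                "Found running tasks on the instance; "
                "use force=True to deregister anyway")
        return self.container_instances.pop(instance_id)

backend = SimpleEcsBackend()
backend.container_instances["i-abc"] = 2
try:
    backend.deregister_container_instance("i-abc")
except ClusterError as exc:
    print(exc)  # prints the running-tasks error message
backend.deregister_container_instance("i-abc", force=True)
print("i-abc" in backend.container_instances)  # False
```

A mocked test would exercise the same three paths: unknown instance, running tasks without force, and a forced deregistration that succeeds.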
2025-04-01T06:40:27.726726
2024-09-01T15:29:32
2499539847
{ "authors": [ "0bs01ete", "dalthviz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10881", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/22409" }
gharchive/issue
The text "Run code" does not fit in the button (partially visible) at the Step №3 "IPython Console" of Introduction tour

Issue Report Checklist
[x] Searched the issues page for similar reports
[x] Read the relevant sections of the Spyder Troubleshooting Guide and followed its advice
[x] Reproduced the issue after updating with conda update spyder (or pip, if not using Anaconda)
[x] Could not reproduce inside jupyter qtconsole (if console-related)
[x] Tried basic troubleshooting (if a bug/error)
[x] Restarted Spyder
[x] Reset preferences with spyder --reset
[x] Reinstalled the latest version of Anaconda
[x] Tried the other applicable steps from the Troubleshooting Guide
[x] Completed the Problem Description, Steps to Reproduce and Version sections below

Problem Description
The text "Run code" does not fit in the button (partially visible) at the Step №3 "IPython Console" of Introduction tour (Help --> Show tour). After the button is clicked, the text becomes fully visible.

What steps reproduce the problem?
After installation and the first launch of the program there is a modal window with Introduction tour
Go to Step №3 of Introduction tour
OR
Run the program
Choose Help --> Show tour
Go to Step №3 of Introduction tour

What is the expected output? What do you see instead?

I was unable to reproduce this (however I used Windows and Spyder 6.0.2 installed from our installers to check): Could it be possible for you to check if this is still happening with the latest release (Spyder 6.0.2)? You could do the check installing our Linux installer available over the Spyder GitHub release page: https://github.com/spyder-ide/spyder/releases/latest Let us know if using a more recent Spyder version helps!

Closing due to lack of response
2025-04-01T06:40:27.743524
2018-06-20T06:47:44
333944442
{ "authors": [ "Ankk98", "ccordoba12" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10882", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/7314" }
gharchive/issue
Error while starting spyder through anaconda navigator

Description
What steps will reproduce the problem?
Starting spyder
What is the expected output? What do you see instead?
Expected output: Spyder should start
What happens: It throws an error
Please provide any additional information below

  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/__init__.py", line 511, in
    toggled=lambda checked: self.toggle_view(checked),
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 725, in toggle_view
    self.create_new_client(give_focus=False)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1033, in create_new_client
    self.connect_client_to_kernel(client)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1059, in connect_client_to_kernel
    stderr_file)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/spyder/plugins/ipythonconsole.py", line 1477, in create_kernel_manager_and_kernel_client
    config=None, autorestart=True)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 958, in __new__
    inst.setup_instance(*args, **kwargs)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 986, in setup_instance
    super(HasTraits, self).setup_instance(*args, **kwargs)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 977, in setup_instance
    value.instance_init(self)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1691, in instance_init
    self._resolve_classes()
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1696, in _resolve_classes
    self.klass = self._resolve_string(self.klass)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/traitlets.py", line 1507, in _resolve_string
    return import_item(string)
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/traitlets/utils/importstring.py", line 34, in import_item
    module = __import__(package, fromlist=[obj])
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/jupyter_client/session.py", line 61, in <module>
    from jupyter_client.jsonutil import extract_dates, squash_dates, date_default
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/jupyter_client/jsonutil.py", line 11, in <module>
    from dateutil.parser import parse as _dateutil_parse
  File "/home/ankk98/anaconda3/lib/python3.6/site-packages/dateutil/parser.py", line 158
    l.append("%s=%s" % (attr, value))
                                    ^
SyntaxError: invalid syntax

Version and main components
Spyder Version: 3.2.6
Python Version: 3.6.4
Qt Versions: 5.6.2, PyQt5 5.6 on Linux

Dependencies
pyflakes >=0.6.0 : 1.6.0 (OK)
pycodestyle >=2.3: 2.3.1 (OK)
pygments >=2.0 : 2.2.0 (OK)
pandas >=0.13.1 : None (NOK)
numpy >=1.7 : 1.14.0 (OK)
sphinx >=0.6.6 : 1.6.6 (OK)
rope >=0.9.4 : 0.10.7 (OK)
jedi >=0.9.0 : 0.11.1 (OK)
psutil >=0.3 : 5.4.3 (OK)
nbconvert >=4.0 : 5.3.1 (OK)
sympy >=0.7.3 : 1.1.1 (OK)
cython >=0.21 : 0.27.3 (OK)
qtconsole >=4.2.0: 4.3.1 (OK)
IPython >=4.0 : 6.2.1 (OK)
pylint >=0.25 : 1.8.2 (OK)

Reinstalling Anaconda fixed the issue. :)

Thanks for letting us know about it!
2025-04-01T06:40:27.753948
2020-02-14T00:11:24
565024741
{ "authors": [ "ccordoba12", "goanpeca", "steff456" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10883", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/pull/11555" }
gharchive/pull-request
PR: Change variable explorer title for NumPy object arrays

Description of Changes
[x] Wrote at least one-line docstrings (for any new functions)
[x] Added unit test(s) covering the changes (if testable)
[x] Included a screenshot or animation (if affecting the UI, see Licecap)

Change the title for NumPy arrays in the variable explorer

Issue(s) Resolved
Fixes #11331

Affirmation
By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under Spyder's MIT (Expat) license.

I certify the above statement is true and correct: Steff456

Please post an image

@steff456, please make your branch derive from our 4.x branch (instead of master), with the following commands:

git checkout 4.x
git checkout fix-11331
git rebase --onto 4.x master fix-11331
git push -f origin fix-11331
2025-04-01T06:40:27.759687
2021-12-07T10:27:04
1073179508
{ "authors": [ "ccordoba12", "impact27" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10884", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/pull/16974" }
gharchive/pull-request
PR: Limit the number of flags in the editor

Description of Changes
If there are too many flags in the document, the flags will slow down the painting. This disables the flags of a type if there are more than 10000.

To reproduce, create a file with:

with open("test.py", "w") as f:
    for i in range(10000):
        f.write("aaaaaa\n")

This will create 10000 error flags, which will slow down Spyder significantly.

[ ] Wrote at least one-line docstrings (for any new functions)
[ ] Added unit test(s) covering the changes (if testable)
[ ] Included a screenshot or animation (if affecting the UI, see Licecap)

Issue(s) Resolved
Fixes #

Affirmation
By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under Spyder's MIT (Expat) license.

I certify the above statement is true and correct:

@impact27, one thing I forgot to mention: could you add a test for this? Just to check that we're not trying to display more than MAX_FLAGS in the scrollflag area.

I am not sure how to do that. The paint code does a bunch of "painter.drawRect" calls, but I think the result is just an image. I wouldn't know how to check how many rectangles are drawn.

What if you check the length of self._dict_flag_list[flag_type] for a certain flag type that goes over MAX_FLAGS in a file?

The filtering is applied at the paint stage, so the dictionary would contain more than the limit of elements.

Ok, no problem then. Creating a test for this is way harder than I thought.
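The cap this PR describes can be sketched as a small paint-time filter. This is an illustrative snippet only, not Spyder's actual paint code; MAX_FLAGS mirrors the 10000 limit discussed above, while the function and dictionary names are hypothetical:

```python
MAX_FLAGS = 10000  # limit discussed in this PR


def paintable_flags(flags_by_type, max_flags=MAX_FLAGS):
    """Drop any flag type whose count exceeds the limit.

    Filtering happens at paint time only: the underlying flag lists
    still hold every entry; just the drawing of that type is skipped.
    """
    return {
        flag_type: positions
        for flag_type, positions in flags_by_type.items()
        if len(positions) <= max_flags
    }
```

This also explains why the proposed test was hard to write: the stored dictionary keeps all entries, so checking its length cannot verify how many rectangles actually get drawn.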
2025-04-01T06:40:27.768681
2022-09-12T15:52:49
1370134789
{ "authors": [ "jasonhendrix", "marcosqlbi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10885", "repo": "sql-bi/DaxDateTemplate", "url": "https://github.com/sql-bi/DaxDateTemplate/issues/71" }
gharchive/issue
Fiscal Week

Using the latest default PBIX file without modifications, and creating a table visual that shows 'Fiscal Week Year', 'FW Year', 'FW StartOfWeek', and 'FW EndOfWeek', it appears that the values are duplicated for the Fiscal Week Year values "FW53-2021" and "FW01-2022". Is that expected behavior?

If I also add the 'Date' field, I can see the following, where 1/1/2022 is the only date in "FW01-2022" but its 'FW StartOfWeek' value is not valid.

In my case, the report requester is looking for FW53-2021 to look as it does, but for FW01-2022 to start on 2022-01-02 instead of 2022-01-01 as it does above. Do I need to tweak settings to achieve that, or does the above situation reflect a potential bug?

A sample PBIX showing this issue is attached. DAX Date Template - Fiscal Week Issue.zip

Thanks for reporting the issue. There is certainly a problem, and the worst news is that Bravo for Power BI also has an issue, generating another wrong result for the ISO calendar. I will try to fix the template in Bravo first, then if I have time I'll backport the changes to this template, even though I think we'll deprecate this template because Bravo provides much more flexibility. However, I'll keep you posted.

Thanks for your help! Using the latest version of Bravo does seem to give us what we're looking for now. The 53 weeks piece was probably not an accurate requirement.

Thanks - I'll check whether I can fix the calculation in the DateTemplate, but it's not a high priority now.

I finally realized that in your report you mixed a Fiscal Week column (which is not FW prefixed) with other columns that are FW prefixed. This explains the inconsistency. I can close the issue.
2023-06-06T12:38:25
1743785620
{ "authors": [ "CaselIT", "JonnyWong16", "Mark-Hetherington", "ccwienk", "zsblevins", "zzzeek" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10886", "repo": "sqlalchemy/mako", "url": "https://github.com/sqlalchemy/mako/issues/378" }
gharchive/issue
Sporadic SyntaxException w/ Python3.11

We use mako for rendering CICD Pipeline Definitions (YAML files). It is a rather complex template w/ a lot of Mako-Function-Definitions and nesting + multithreading. The resulting documents vary in size (depending on input-parameters), and are typically between ~300 kiB and ~1.2 MiB.

The template was used w/ Python-versions 3.6 .. 3.10. When upgrading to 3.11, we saw (and still see) sporadic SyntaxExceptions, which occur roughly 5% of the time (w/ unchanged template-parameters, of course!).

I started working on a minimalised reproducer. When instantiating the same template w/ the same parameters 64 times using 2 threads, I almost always see at least one exception stacktrace. The incriminated lines vary, whereas the Mako-part of the stacktrace always seems to be the same. The error does not seem to occur when limiting concurrency to just one thread. Thus, I suspect a race-condition, probably within Mako's codebase.

The error occurs for the latest versions of python3 alpine (3.11.3-r11) when running inside a virtualisation container, and archlinux (3.11.3-1) when running natively.
Example Stacktrace

Traceback (most recent call last):
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/pyparser.py", line 36, in parse
    return _ast_util.parse(code, "<unknown>", mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/_ast_util.py", line 91, in parse
    return compile(expr, filename, mode, PyCF_ONLY_AST)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SystemError: AST constructor recursion depth mismatch (before=63, after=65)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/shared_profile/src/sap/makobug-reproducer/concourse/replicator.py", line 140, in render
    definition_descriptor = self._render(definition_descriptor)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/shared_profile/src/sap/makobug-reproducer/concourse/replicator.py", line 211, in _render
    t = mako.template.Template(template_contents, lookup=self.lookup)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 300, in __init__
    (code, module) = _compile_text(self, text, filename)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 677, in _compile_text
    source, lexer = _compile(
                    ^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/template.py", line 657, in _compile
    node = lexer.parse()
           ^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 248, in parse
    if self.match_python_block():
       ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 392, in match_python_block
    self.append_node(
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/lexer.py", line 129, in append_node
    node = nodecls(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/parsetree.py", line 158, in __init__
    self.code = ast.PythonCode(text, **self.exception_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/ast.py", line 42, in __init__
    expr = pyparser.parse(code.lstrip(), "exec", **exception_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/redacted/.local/lib/python3.11/site-packages/mako/pyparser.py", line 38, in parse
    raise exceptions.SyntaxException(
mako.exceptions.SyntaxException: (SystemError) AST constructor recursion depth mismatch (before=63, after=65) ('import os\n\nimport oci.auth as oa\nimport model.cont') at line: 2 char: 1

It is probably worth mentioning that by decreasing the template's output size, the likelihood of this error seems to become smaller. I could share a copy of my somewhat slimmed-down reproducer; it still contains most of the code from the repository I referenced above, if this is considered helpful.

it could be a bug in py3.11 itself. I haven't dealt with that code in probably more than 10 years so, sure, a real reproducer, as small as possible (it really should just be a single template) is an absolute minimum to do anything here.

I created a reproducer (makobug-reproducer.tar.gz). To use it, you need (obviously) python3.11 + the packages from requirements.txt installed. To run, simply execute run-me.py

For your convenience, I also built an OCI Container Image (aka Docker Image); the Dockerfile is included in the uploaded tar-file. So as an alternative to installing and running locally, you might run:

docker pull eu.gcr.io/gardener-project/cc/makobug-reproducer:1
docker run eu.gcr.io/gardener-project/cc/makobug-reproducer:1 cc-utils/run-me.py

I hard-coded parameters such as the amount of worker-threads (line 106, max_workers) and the amount of renderings to do (line 124, range). In my experience, this script will yield exceptions like the one I pasted above almost always; I did see occasional runs where no stacktraces were printed. Re-running at most twice always gave stacktraces again.

hi that .tar.gz is 283 source files. it's an entire application. unfortunately I can't run an unknown application of enormous complexity and unknown origin within my own environment; instead, please supply a single, self-contained mako template that reproduces the problem when compiled. if this is not feasible, I'm sure that if this issue is widespread it will soon enough be reported in other forms as python 3.11 gains adoption.

or print statements, or dump to a file, etc...

The best option is probably to try a custom version of mako that has

try:
    t = mako.template.Template(template_contents, lookup=self.lookup)
except SyntaxException:
    print(template_contents)
    raise

@zzzeek maybe doing something like this, probably logging to a logger, may make sense regardless of this issue though

Mako already does this, poster could also turn this on, see https://docs.makotemplates.org/en/latest/usage.html#handling-exceptions

@zzzeek : sorry for replying thus late. I dumped the contents of template_contents into a (gzipped) file. I created it using (as suggested) the following code block:

try:
    t = mako.template.Template(template_contents, lookup=self.lookup)
except:
    with open('/src/sap/makobug-reproducer/template_contents.dump', 'w') as f:
        f.write(template_contents)

it is probably worth mentioning that using a bare except was the only way I could actually handle this exception. Neither SystemError, mako.exceptions.SyntaxException, nor SyntaxError worked.

hi and thanks for this. Unfortunately no error is reproduced, I can load this template under Python 3.11.3 and compile it without issue. what happens if you use the given template in a program like this (assume you put the content into foo.mako)?

from mako.template import Template
t = Template(filename="foo.mako")
print(t.code)

what's the exact Python 3.11 version you are using?

Unfortunately no error is reproduced, I can load this template under Python 3.11.3 and compile it without issue.

did you try this w/ multithreading (at least two threads) and multiple executions? As I explained initially, this error occurs sporadically, and only (according to my observations) when concurrency is involved. If using e.g. four threads, the issue occurs almost always at least once if doing the template-instantiation ~256 times.

As stated above, I can reproduce this error in the following environments:

python3 alpine (3.11.3-r11) - running within a virtualisation container (aka docker container)
python (3.11.3-2 and 3.11.3-2) from arch linux (no virtualisation involved)

If running this just a couple of times, or running it hundreds / thousands of times, but single-threaded, this error never occurs. However, it occurs quite frequently if doing multithreading (but only as of python3.11). I will try whether I can reproduce the error using the approach you shared and will write another update to this issue.

you can try adding threads to the POC but I can't see any way that threads would have an effect here, the Mako compilation process works on a local datastructure that is not accessible to other functions. I had assumed the error was sporadic only because this particular template was not getting compiled every time. it's also not clear why, if you are using a file-based TemplateLookup, this template would be getting recompiled at all. Mako writes a module file to disk and reuses that.

@zzzeek : originally, it was planned to have a multitude of templates. considering that we actually have just one (and might do some caching anyhow), I might change this and cache the template. In the reproducer I uploaded, the multithreading is done in the run-me.py script.

Mako writes a module file to disk and reuses that.

@zzzeek : would you be so kind as to give me a hint to the code doing that? That might be a very good explanation for the race-condition I assume. Although not a good explanation of why this seems to only affect python3.11, admittedly.

hi - You use a TemplateLookup and give it a file path to store modules, and then use the TemplateLookup to retrieve .mako files from the filesystem as compiled templates. This works best when you have .mako files that you are loading and rendering in your application. the second example at https://docs.makotemplates.org/en/latest/usage.html#using-templatelookup illustrates how to configure TemplateLookup with a file path. The module caching thing is not readily available for on-the-fly templates; what you could do for on-the-fly templates is write them out to .mako files, then use TemplateLookup to access them.

Otherwise, for local in-memory Template objects, Mako does not make use of global state when compiling, although there is a global set of compiled template modules (after the compilation is done) that are indirectly linked to template URLs or in-memory identifiers; I can see here there is a potential for key conflicts if you have anonymously-identified Template objects, but this map isn't used when compilation proceeds. The only thing I can see that could conceivably be some kind of "global" would be when we use the compile() Python builtin: we pass a module identifier to it, and for an anonymous Template like you have, that identifier will be hash(id(template)), so there could be re-use of the same id with different template contents. That would be very unusual if compile() somehow held onto state from a previous call.

There are many ways to fix your code here. One is to put the lock in your code:

template_mutex = threading.Lock()

def step_template(name):
    step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
    with template_mutex:
        return mako.template.Template(filename=step_file)

Another, better and much more idiomatic way is to use TemplateLookup as mentioned, since these are file based templates:

lookup = TemplateLookup(directories=[steps_dir])

def step_template(name):
    step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
    return lookup.get_template(name + ".mako")

TemplateLookup uses a mutex for its compilation, so that would eliminate the problem. Then, you will get a lot fewer compile calls if you give your lookup a module directory:

lookup = TemplateLookup(directories=[steps_dir], module_directory='/tmp/mako_modules')

def step_template(name):
    step_file = ci.util.existing_file(os.path.join(steps_dir, name + '.mako'))
    return lookup.get_template(name + ".mako")

your program will put .py files into /tmp/mako_modules that get reused.

I could further narrow the issue down. If I just lock Template._compile_text and Template._compile_from_file against parallel execution, the issue also does not appear.

300         # if plain text, compile code in memory only
301         if text is not None:
302             with lock:
303                 (code, module) = _compile_text(self, text, filename)
304             self._code = code
305             self._source = text
306             ModuleInfo(module, None, self, filename, code, text, uri)
307         elif filename is not None:
308             # if template filename and a module directory, load
309             # a filesystem-based module file, generating if needed
310             if module_filename is not None:
311                 path = module_filename
312                 print(f'{module_filename=}')
313             elif module_directory is not None:
314                 print(f'{module_directory=}')
315                 path = os.path.abspath(
316                     os.path.join(
317                         os.path.normpath(module_directory), u_norm + ".py"
318                     )
319                 )
320             else:
321                 path = None
322             with lock:
323                 module = self._compile_from_file(path, filename)
324         else:
325             raise exceptions.RuntimeException(
326                 "Template requires text or filename"
327             )

_compile_from_file() calls _compile_text(), so that code would deadlock if _compile_from_file() does not have a path

well I just successfully executed it w/o issues :-)

@zzzeek : switching to TemplateLookup sounds like a good idea, too. However, I still think there is a bug in template.Template. Instantiating multiple instances of a class and calling their methods should not run into race-conditions, I think.

At this point you should have enough information to create a single short script that demonstrates the problem, take my script at https://github.com/sqlalchemy/mako/issues/378#issuecomment-1600811371 and adjust

well I just successfully executed it w/o issues :-)

which means it's being called with a path, which seems to indicate there are other calls to TemplateLookup against the same file with different arguments

as far as I understand the code, _compile_from_file calls _compile_module_file, so no deadlock :-)

yeah, I think this should be feasible

My code calls Template at two locations: one time using a fpath, one time passing a string. I also changed it to always pass a string (in this case, the race-condition still occurs - so I think there is a race-condition involved in the "_compile" method)

technically, you sometimes do raise an exception already ;-)

as far as I understand the code, _compile_from_file calls _compile_module_file, so no deadlock :-)

take a look. there's a conditional, so it can go either way

in my case, path is always None (I checked this by adding a print..) I am not saying adding a lock is a good idea for a fix. this is how far I came in finding the root-cause

OK in your code you are locking outside _compile_text(), so that's why that's OK

I started (after observing that adding some caching will reduce the likelihood of the error) by adding a lock to the full __init__, then started to reduce the lines of code I had to lock and still not get an error. Anyhow: using this knowledge, I can certainly fix my code. Still, I think this is a bug in mako - albeit one that might not affect many users besides me

it may very well be a bug in py3.11 itself. since you can reproduce, work to iteratively reduce your program one step at a time, ensuring reproduction each time, down to a script that looks like this one

@zzzeek : interestingly, switching to mako.lookup.TemplateLookup as you suggested did seem to fix the issue (after removing the caching I added earlier, and of course, after removing again the lock I added to mako's code). It will still be an interesting task to add a reproducer for sure.

I have also been encountering the same issue in my app. It also occurs intermittently and only with Python 3.11. https://www.reddit.com/r/Tautulli/comments/1042t13/error_syntaxexception_systemerror_ast_constructor/ I am already using TemplateLookup. https://github.com/Tautulli/Tautulli/blob/ea6c6078df410f333a060016dfce18c21ad134c9/plexpy/webserve.py#L126

I think I solved my issue. I was initializing a new TemplateLookup every single time I served a template. Re-factoring my code so that I only initialize it once seems to have fixed it.

OK but we want to figure out why concurrent calls to compile() are causing this problem (And also why my test script above does not seem to have this problem)

I have been trying to reproduce the error with a small test script but have been unsuccessful.

I thus far also did not succeed in creating a minified reproducer. Will update the issue once I do find some more time.

I am also seeing this occasionally pop up in a production application since updating to python 3.11. It's very rare, relative to the number of template renders. We are using a customised TemplateLookup that inherits from the mako TemplateLookup. Thanks for all the information provided on this issue - I think that will really help narrow this down for us.

Just chiming in that I've also seen this behavior sporadically with Python 3.11. It occurs when using TemplateLookup.get_template. It's happening very rarely, I'd say about once a week in a nightly job that calls hundreds of renders via an API that uses mako. Each API call handles a single render; the lookup looks roughly like:

lookup = TemplateLookup(directories=[templates_path])
try:
    template = lookup.get_template(specific_template)
except TemplateLookupException:
    template = lookup.get_template(common_template)
return template

Exception observed is:

API Exception: SyntaxException('(SystemError) AST constructor recursion depth mismatch (before=93, after=85) (\'if <condition>:pass\') in file <file_path>

This seems more an issue with python: this pytest issue reports the same problem without mako involvement https://github.com/pytest-dev/pytest/issues/10874

wow, how'd you find that?

someone in that pytest issue actually linked this one, and github links it if you scroll up
2025-04-01T06:40:27.859612
2018-09-05T08:55:49
357129707
{ "authors": [ "Stuart-Moore", "labyrinthsystems" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10887", "repo": "sqlcollaborative/dbatools", "url": "https://github.com/sqlcollaborative/dbatools/issues/3972" }
gharchive/issue
Restore-DbaDatabase apply log backups with -FileMapping

Version 0.9.399

Have an existing database in full recovery. Using the latest full backup, restore it as a copy of that database. Use -FileMapping to provide the new physical files. Leave it in no recovery to restore transaction logs from the existing database to the newly restored database, like log shipping. The issue is that -FileMapping does not seem to work with the -Continue parameter, as the error message says it cannot use the physical files of the existing database.

Steps to Reproduce

<#
# Restore a full backup of a database
# Say for instance your database is called MyDB
# Restore it as MyDB2, to the same drives
# these 4 variables must be changed to suit
$db = 'MyDB2'
$filemap = @{'MyDB_Data'='D:\SQLData\MyDB2.mdf';'MyDB_log'='L:\SQLLogs\MyDB2_log.ldf'}
$pathFull = 'X:\SQLBackup\pathToFullBackup.bak'
$pathLogs = 'X:\SQLBackup\pathToLogBackups'

# This should restore the full backup as a second database in norecovery
Restore-DbaDatabase -SqlInstance localhost -Path $pathFull -DatabaseName $db -FileMapping $filemap -NoRecovery -WithReplace -MaintenanceSolutionBackup -Verbose # -OutputScriptOnly

# Now to restore transaction logs via -Continue
Restore-DbaDatabase -SqlInstance localhost -Path $pathLogs -DatabaseName $db -FileMapping $filemap -NoRecovery -MaintenanceSolutionBackup -Verbose -Continue # -OutputScriptOnly

# Cannot restore any log backups as the physical files of MyDB are in use (which we are not trying to use)
#>

Expected Behavior
Transaction log backups should restore.

Actual Behavior
An error is thrown as if we were trying to restore the physical files of the existing database.

Environmental data
PSVersion 4.0
WSManStackVersion 3.0
SerializationVersion <IP_ADDRESS>
CLRVersion 4.0.30319.42000
BuildVersion 6.3.9600.18968
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion 2.2

Microsoft SQL Server 2016 (SP1-CU10-GDR) (KB4293808) - 13.0.4522.0 (X64) Jul 17 2018 22:41:29 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows Server 2012 R2 Standard 6.3 (Build 9600: ) (Hypervisor)

Having problems replicating this one.

$filemap = @{RestoreTimeClean = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2SP2\MSSQL\DATA\MapClean.mdf'; RestoreTimeClean_log = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2SP2\MSSQL\DATA\mapclean_log.ldf'}
$null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\RestoreTimeClean.bak -WithReplace
Get-DbaDatabaseFile -SqlInstance localhost\sql2008r2sp2 -Database restoretimeclean | select LogicalName, PhysicalName
$null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\RestoreTimeClean.bak -DatabaseName rt2 -NoRecovery -WithReplace -FileMapping $filemap
$null = Restore-DbaDatabase -SqlInstance localhost\sql2008r2sp2 -Path C:\github\appveyor-lab\RestoreTimeClean\ -DatabaseName rt2 -Continue -FileMapping $filemap
Get-DbaDatabaseFile -SqlInstance localhost\sql2008r2sp2 -Database rt2 | select LogicalName, PhysicalName

Which I'm pretty sure mirrors your logic. And it's working as expected. That's in PS v4; I just need to try and find a PS4 box to try it on as well to confirm.

Hi @Stuart-Moore, I think I'm doing the exact same as you, I can't quite believe it! I'm on dbatools version 0.9.381.
2025-04-01T06:40:27.869447
2019-09-04T02:21:50
488905295
{ "authors": [ "ClaudioESSilva", "imyourdba", "potatoqualitee", "sirsql", "wsmelton" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10888", "repo": "sqlcollaborative/dbatools", "url": "https://github.com/sqlcollaborative/dbatools/pull/6013" }
gharchive/pull-request
Fix Repair-DbaOrphanUser skipping contained databases Fix Repair-DbaOrphanUser skipping contained databases Type of Change [ ] Bug fix (non-breaking change, fixes #) [x] New feature (non-breaking change, adds functionality) [ ] Breaking change (effects multiple commands or functionality) [x] Ran manual Pester test and has passed (`.\tests\manual.pester.ps1) [ ] Adding code coverage to existing functionality [ ] Pester test is included [ ] If new file reference added for test, has is been added to github.com/sqlcollaborative/appveyor-lab ? [ ] Nunit test is included [ ] Documentation [ ] Build system Purpose Fix for issue #5887 so that contained databases are not ignored when attempting to fix orphaned users. Approach Eliminates the code that looks to see if the database is contained. The existing db level checks cover handling contained users correctly (see SQL below for DB creation on testing this). Commands to test Create database, logins, users, contained users, orphaned users using SQL code at https://gist.github.com/sirsql/760c0c052fbd3d7ef6be5c0f9db09887 Then run repair-dbadborphanuser -sqlinstance localhost Screenshots Learning thanks @sirsql - while waiting for claudio's review, i reformatted it to OTSB standard using Invoke-DbatoolsFormatter. I'm not sure why this one in particular got reformatted, but wanted to let you know so that you can sync the branch before making any changes. Hi @sirsql can you please also remove from Remove-DbaDbOrphanUser and Get-DbaDbOrphanUser to be equal on all commands @sirsql - will you have time to update Remove and Get? I'll get it done this weekend Don't know what the deal is with the formatting. 
Ran - invoke-dbatoolsformatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Get-DbaDbOrphanUser.ps1" invoke-dbatoolsformatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Repair-DbaDbOrphanUser.ps1" Invoke-DbatoolsFormatter "C:\Users\nic\OneDrive\Documents\GitHub\dbatools\functions\Remove-DbaDbOrphanUser.ps1" Also did a pull prior to doing anything else and there's the merge conflict (no conflict on my machine, and I was able to pull from upstream without issues). Are there possible problems with the workspace setup that I am using in VS Code? I think your local branch is extremely out of date. I'll reformat the files then remove the changes to Remove- would you be able to update then resubmit a PR just for the single file? The formatting issue may be a problem with the version of psscriptanalyzer which I've found to cause problems. @niphlod is there any way you can look into that? I have not been able to update psscriptanalyzer in months. I just wanted to check in on this Please comment on your issue whether this PR fixed your problem. The fix was merged 3 months ago.
2025-04-01T06:40:27.877698
2018-09-01T15:01:33
356208878
{ "authors": [ "LoveSponge", "MasterOdin", "arakash92", "dev-rsonx", "seantcanavan" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10889", "repo": "sqlectron/sqlectron-gui", "url": "https://github.com/sqlectron/sqlectron-gui/issues/449" }
gharchive/issue
missing dependency "libgconf-2-4" I used the "Sqlectron_1.29.0_amd64.deb" on Ubuntu 18.04 LTS and it didn't work, so I tried running it from the command line and it complained about a missing dependency, so I had to install it manually (libgconf-2-4). I assume this dependency should be added to the deb file itself? It is indeed missing from the deb file. Confirming here. Found the workaround here via Google: https://github.com/electron/electron/issues/1518 sudo apt-get install libgconf-2-4 Issue present in 1.30.0 on an Arch install with Pacman. Are there any updates on this? Do we even know what is causing it? @LoveSponge the project is dead - switch to DBeaver: https://www.archlinux.org/packages/community/x86_64/dbeaver/ Are there any updates on this? Do we even know what is causing it? What's causing it is that Chrome added a dependency on libgconf to function on Linux, except that it does not come with Chrome nor is it installed by default on most distros. After spending yet another day trying to get VirtualBox and a Debian / Ubuntu OS to run inside of it (with both failing at different points of installation / runtime), I'm going to kick this issue and fixing it "properly" down the road to a later version and instead just update the README to say you need to install libgconf if on Linux. There's a good chance that just upgrading the version of electron / electron-builder that sqlectron uses will resolve this as well. @seantcanavan I only realised after posting my comment; since then I've found Beekeeper Studio (link), a great alternative with active maintenance. This project does have active maintenance, though my time is split amongst a number of things. 
I want to think this issue has been fixed with 1.31.0, but it's hard for me to say as, at least at the moment, I do not have a Linux environment to test on natively, and my last few attempts at setting up a VirtualBox image have been met with a number of annoying problems just installing Debian / Ubuntu, let alone getting them to run. It will definitely be fixed in 1.32.0 as I upgrade the electron and electron-builder dependencies, which should definitely force gconf to be a dependency by default when building the project. This was fixed in #507. Help, I don't want to build from the source. Discover says "dependency resolution failed", but it does not tell what dependency :< @dev-rsonx just switch to dbeaver already, it's free and a million times better. It's fully cross-platform as well. https://dbeaver.io/download/ @dev-rsonx what Linux distribution and version are you using?
2025-04-01T06:40:27.939238
2023-06-17T01:58:55
1761582639
{ "authors": [ "pshore73", "way0utwest" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10890", "repo": "sqlsaturday/sqlsatwebsite", "url": "https://github.com/sqlsaturday/sqlsatwebsite/issues/201" }
gharchive/issue
SQL Saturday Columbus 2023 - Site Adds Please add the link to the schedule https://sessionize.com/api/v2/piprkmwb/view/GridSmart Please add the precon links Understanding and Optimizing SQL Server Performance - Joey D'Antoni https://www.eventbrite.com/e/657500942017 An Introduction to Python for Data Science and Data Engineering - Chris Hyde https://www.eventbrite.com/e/657503609997 done
2025-04-01T06:40:27.964196
2015-12-02T11:20:21
119921244
{ "authors": [ "JakeWharton", "jbaginski", "mikhailmelnik", "swankjesse" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10891", "repo": "square/okhttp", "url": "https://github.com/square/okhttp/issues/2058" }
gharchive/issue
HttpLoggingInterceptor shows a gzipped response as a raw string I use the standard HttpLoggingInterceptor with the log level set to Level.BODY. When the response from a server is gzipped HttpLoggingInterceptor shows it as a raw string (encoded). Is it expected behaviour? It doesn't seem to be very useful unless your server has some issue with gzip :) pidcat: OkHttp D <-- HTTP/1.1 200 OK (146ms) D Date: Wed, 02 Dec 2015 10:53:01 GMT D Server: Apache D X-Frame-Options: SAMEORIGIN D Access-Control-Allow-Origin: * D Cache-Control: no-cache, private D Pragma: no-cache D Content-Encoding: gzip D Vary: Accept-Encoding D Content-Length: 367 D Keep-Alive: timeout=15, max=69 D Connection: Keep-Alive D Content-Type: application/json;charset=UTF-8 D OkHttp-Selected-Protocol: http/1.1 D OkHttp-Sent-Millis:<PHONE_NUMBER>316 D OkHttp-Received-Millis:<PHONE_NUMBER>463 D ������������Œ�j�0EeЪG�a�rv&-$tQ(�.[���dd%%��{ӗ�d7bĜ;���2%�"�"dU���j����`�u�l�5¦�5 D 6ݫr� D �Y4�@UW�U�� D �� ����c_�����p�n�/�i:@*C5A��{����xt{,� �RJy6� �9Ic)���t��E0������ '!��JL2�e��Jɇԥ� _�k�x�3�bA36ޔ�'$ft�4���e -ܸ�*t��<�A�� N#����tf��� ����tU���<���`�w��^���f�'�&��y�|�� �:���r!�99{ժ�;���;�%�d���{�na�ս��|r�?.2������ D <-- END HTTP (367-byte body) Is it expected behaviour? In the sense that there's no logic to handle non-plain text bodies then yes, it behaves as we expect. Are you adding this as a network interceptor or a normal interceptor? If you do not set it as a network interceptor you should see the plain-text body. by adding it as a regular interceptor I'm getting decoded data :) thanks @JakeWharton Have tried to use HttpLoggingInterceptor as either network or application interceptor but in one case it doesn't show gzipped body. in another - all headers. Is there any default solution to log both? If no how hard is that to write one? Best approach is to use Chuck or Charles. https://github.com/jgilfelt/chuck https://www.charlesproxy.com/
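The "why" behind the two interceptor positions can be illustrated without OkHttp at all: a network interceptor sees the body exactly as it travels on the wire (gzip bytes), while an application interceptor runs after the body has been transparently decompressed. A minimal Python stdlib sketch of that distinction (the JSON payload is made up):

```python
import gzip

# What the two logger positions would observe for the same response body.
body = b'{"postId": 1, "name": "comment"}'   # hypothetical payload
wire = gzip.compress(body)                   # network-interceptor view

# The gzip stream starts with the magic bytes 0x1f 0x8b and reads as
# garbage when printed as text, exactly like the log shown above.
print(wire[:2])

# Decompressing recovers what an application-level logger would print.
print(gzip.decompress(wire).decode())
```
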
2025-04-01T06:40:27.977579
2017-02-21T13:41:35
209148583
{ "authors": [ "mohamedchouat", "swankjesse" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10892", "repo": "square/okhttp", "url": "https://github.com/square/okhttp/issues/3179" }
gharchive/issue
CertificatePinner not working I am working with CertificatePinner to prevent man-in-the-middle attacks. I have an SSL certificate. The problem here is that the web service works fine even if I enter invalid SHA pins. CertificatePinner certificatePinner = new CertificatePinner.Builder() .add("myadr.com", "sha256/********************************=") .build(); Authenticator authenticator = new Authenticator() { @Override public Request authenticate(Route route, Response response) throws IOException { String credential = Credentials.basic("******", "********"); return response.request().newBuilder() .header("Authorization", credential) .build(); } }; OkHttpClient client = new OkHttpClient.Builder() .certificatePinner(certificatePinner) .connectTimeout(timeOut, TimeUnit.MILLISECONDS) .readTimeout(timeOut, TimeUnit.MILLISECONDS) .authenticator(authenticator) .build(); Request request; ...................... Could you please provide a complete test case? My code works even if I change the SHA pin; the web service keeps working. CertificatePinner certificatePinner = new CertificatePinner.Builder() .add(HOSTNAME, "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") .build(); OkHttpClient client; client = new OkHttpClient.Builder() .connectTimeout(timeOut, TimeUnit.MILLISECONDS) .readTimeout(timeOut, TimeUnit.MILLISECONDS) .authenticator(new Authenticator() { @Override public Request authenticate(Route route, Response response) throws IOException { String credential = Credentials.basic("*********", "********"); return response.request().newBuilder() .header("Authorization", credential) .build(); } }) .build(); Request request; Response response = null; if (method == GET) { if (paramsGet != null) { String paramString = URLEncodedUtils .format(paramsGet, "utf-8"); url += "?" + paramString; } String deviceID = Utility.DEVICE_ID; String carrier = Utility.CARRIER; request = new Request.Builder() .url(url) .addHeader("Device", deviceID) .addHeader("carrier", carrier) .build(); response = client.newCall(request).execute(); } In the code sample above no certificate pinner is installed. Thanks, do you have a tutorial that can help me? https://github.com/square/okhttp/blob/master/samples/guide/src/main/java/okhttp3/recipes/CertificatePinning.java Merci. Good morning, I added the CertificatePinning class with my custom params, but how do I use this class? I am working on an Android application.
2025-04-01T06:40:27.980957
2019-10-01T10:05:46
500794513
{ "authors": [ "bantyK", "yschimke" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10893", "repo": "square/okhttp", "url": "https://github.com/square/okhttp/issues/5516" }
gharchive/issue
callEnd and responseBodyEnd methods are not called for EventListener When using the asynchronous call with ResponseCallback, the callEnd and responseBodyEnd callbacks are not triggered. But Callback's onResponse is getting triggered val client = OkHttpClient.Builder() .eventListenerFactory(HttpEventListenerFactory.FACTORY) .build() val request = Request.Builder() .url("http://jsonplaceholder.typicode.com/comments?postId=1") .build() with(client) { newCall(request).enqueue(object : Callback { override fun onFailure(call: Call, e: IOException) { Log.d("OkHttp##", "Request failed") } override fun onResponse(call: Call, response: Response) { //this is getting triggered Log.d("OkHttp##", "Response received") } }) } EventListener class public class HttpEventListenerFactory extends EventListener { public static final Factory FACTORY = new Factory() { final AtomicLong nextCallId = new AtomicLong(1L); @Override public EventListener create(Call call) { long callId = nextCallId.getAndIncrement(); Log.d("OkHttp##", "next call id : " + nextCallId); String message = String.format(Locale.US, "%04d %s%n", callId, call.request().url()); Log.d("OkHttp##", message); return new HttpEventListenerFactory(callId, System.nanoTime()); } }; @Override public void responseBodyEnd(Call call, long byteCount) { // this method never gets called printEvent("Response body end", callId); } It doesn't look like you are consuming the response body and then closing it. This is likely the problem. What do you suggest I should do in my code to get the responseBodyEnd callback ? What do you suggest I should do in my code to get the responseBodyEnd callback ? Read the body and then close the response? Thank you @yschimke. It worked.
2025-04-01T06:40:28.023227
2020-03-11T18:13:54
579454628
{ "authors": [ "squeevee" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10894", "repo": "squeevee/Addle", "url": "https://github.com/squeevee/Addle/issues/7" }
gharchive/issue
Windows build Build Addle for Windows with MinGW-w64 and MSVC. Bonus points for attempting to build Addle for Win32. I'm seriously deliberating whether to support MSVC. It's doable, but a pain, and I question its necessity. After some deliberation, I have decided that Addle will only support MinGW-w64 based builds on Windows. Supporting MSVC adds clutter to the source code, constrains the libraries Addle can use on Windows, and represents a general vague compatibility hazard where all our other target platforms are POSIX. These would be perfectly livable (lots of projects put up with it), but frankly I don't think MSVC support would add that much value to Addle. MinGW-w64 works quite well and definitely simplifies this leg of cross-platform support. Meanwhile, Addle isn't a library or utility that necessitates ABI compatibility, and the requirement of Qt is already quite large enough that I don't find the requirement of a (free and open source) compiler and toolchain to be that significant of an increase to our build burden. I still intend for Addle to support plugins compiled with MSVC. Clang exposes some interesting MSVC compatibility features that could represent some kind of compromise solution and/or means of using Visual Studio to develop and debug Addle. I am reversing this decision in light of three important pieces of information: __declspec( dllexport ) and __declspec( dllimport ) specifiers are actually not so much about MSVC as they are about Windows DLLs -- which stands to reason, given the "DLL" in the names. While MinGW-w64/GCC is able to link symbols declared without these specifiers, they will not be relocated during runtime, meaning that the "same" symbols will have different addresses from the perspectives of different libraries. That is a problem when, for example, calling QObject::connect in one library, using a pointer to a member function defined in another library. 
If a library contains __declspec( dllexport ) directives, MinGW-w64/GCC will link it using MSVC-style rules instead of the default Linux-style rules, meaning that GCC will produce similar linker errors for missing export specifiers. Classes containing only pure-virtual methods do not need to be exported. These import/export specifiers were a big reason why I was so reluctant to support MSVC, since GCC "worked" without them and MSVC didn't. But note 1 shows us avoiding them is much more of a "compatibility hazard" than using them. Notes 2 and 3 show us that they will not be too difficult to maintain as their use can be checked from Linux and there will not be nearly as many of them as I expected. MSVC still requires small adjustments to be happy, but at this point they're basically trivial. Meanwhile, MSVC is the most popular and well-supported compiler for native Windows binaries, so I might as well support it in the spirit of making Addle more accessible to the developer community and the tools that are available. Plus, being natively supported makes it possibly advantageous for performance or debugging on Windows. After 999603e86789397d1d03ebaf771ca7c98767e5e7 I was able to successfully build and run the dev_1 branch using MSVC.
2025-04-01T06:40:28.152435
2019-10-15T08:35:45
507091774
{ "authors": [ "alexpdp7", "erizocosmico" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10895", "repo": "src-d/gitbase", "url": "https://github.com/src-d/gitbase/issues/977" }
gharchive/issue
Natural join seems to eliminate rows which it shouldn't MySQL [gitbase]> select blob_hash, repository_id from blobs natural join repositories where blob_hash in ('93ec5b4525363844ddb1981adf1586ebddbc21c1', 'aad34590345310fe813fd1d9eff868afc4cea10c', 'ed82eb69daf806e521840f4320ea80d4fe0af435'); +------------------------------------------+-------------------------------------+ | blob_hash | repository_id | +------------------------------------------+-------------------------------------+ | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/javascript-driver | | ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/enry | | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/python-driver | | 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/go-mysql-server | | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/ruby-driver | | ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/gitbase | +------------------------------------------+-------------------------------------+ 6 rows in set (14.90 sec) MySQL [gitbase]> select blob_hash, repository_id from blobs where blob_hash in ('93ec5b4525363844ddb1981adf1586ebddbc21c1', 'aad34590345310fe813fd1d9eff868afc4cea10c', 'ed82eb69daf806e521840f4320ea80d4fe0af435'); +------------------------------------------+-------------------------------------+ | blob_hash | repository_id | +------------------------------------------+-------------------------------------+ | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/python-driver | | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/javascript-driver | | ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/enry | | aad34590345310fe813fd1d9eff868afc4cea10c | github.com/bblfsh/ruby-driver | | 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/gitbase | | ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/gitbase | | 93ec5b4525363844ddb1981adf1586ebddbc21c1 | github.com/src-d/go-mysql-server 
| | ed82eb69daf806e521840f4320ea80d4fe0af435 | github.com/src-d/go-mysql-server | +------------------------------------------+-------------------------------------+ 8 rows in set (0.13 sec) also note that removing the natural join makes things go much faster; it was my understanding that normally we want to join with repositories to benefit from some specific optimizations (although I'm guessing that filtering with blob_hash makes those optimizations moot). Normally we don't want to join with repositories unless there are already joins involved. When querying a single table like blobs, they usually have other optimizations in place. For example, blobs with a filter like blob_hash IN list only reads the given blobs in each repository. That's why it's faster with no join. As with everything, it depends on the query; depending on what you want, some optimizations may be better than others for performance. In any case, I reproduced the bug and there's actually an issue. It seems to not return the repeated rows for some reason. Yeah, I suspected something like that. Anyway, for my use case the lack of duplicated rows is not an issue, so for me this is not high priority. This bug is really weird. The natural join is the one returning the correct result. If you remove the optimization in the blobs table it returns the same. So, there's something going on, because repo.BlobObjects() doesn't return these blobs, but accessing them directly does. @alexpdp7 are you using siva files obtained from gitcollector? I tried with regular repositories and it didn't happen. Yup, it's using siva. Narrowed it down to a siva issue and reported it to go-borges: https://github.com/src-d/go-borges/issues/90, so leaving this as blocked until it's solved on their side.
2025-04-01T06:40:28.154183
2018-02-01T01:56:20
293376982
{ "authors": [ "chlins", "vmarkovtsev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10896", "repo": "src-d/jgscm", "url": "https://github.com/src-d/jgscm/issues/8" }
gharchive/issue
Does it support new file or notebook? When choosing a new file or notebook, it redirects to a blank page. Please turn on debug logging (README) and send the full log here. Otherwise I cannot help; it works everywhere in our setups. Also please say which Python version you are using. Also try to run the test suite and report the result (python3.5 test.py)
2025-04-01T06:40:28.155754
2017-08-31T15:38:20
254378468
{ "authors": [ "dpordomingo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10897", "repo": "src-d/landing", "url": "https://github.com/src-d/landing/pull/148" }
gharchive/pull-request
Document dependencies I created a Requirements section that explains the things needed to serve the landing. It does not describe the full process of installing and configuring Go, but links to do so have been added. I'd need you to confirm that the docs are now clear enough. I'd also need you, @marnovo and @ricardobaeta, to take a look at this PR, since your feedback is needed to improve the docs :D
2025-04-01T06:40:28.169149
2024-07-26T08:07:50
2431672876
{ "authors": [ "KTodts", "hellt" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10898", "repo": "srl-labs/srl-telemetry-lab", "url": "https://github.com/srl-labs/srl-telemetry-lab/issues/40" }
gharchive/issue
remove syslog-ng Hi @KTodts, do you recall where you removed syslog-ng in time for 24.3? I forgot which lab that was. Maybe you can replicate this for this repo as well? @hellt, yes sure! It was this repo
2025-04-01T06:40:28.206161
2021-04-21T11:37:41
863773793
{ "authors": [ "cryptix", "staltz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10901", "repo": "ssb-ngi-pointer/go-ssb-room", "url": "https://github.com/ssb-ngi-pointer/go-ssb-room/issues/178" }
gharchive/issue
Double requestSolution sent for Firefox Focus (Android) Similar to https://github.com/ssb-ngi-pointer/go-ssb-room/issues/170 but sign-in succeeds, it's just odd or undesirable that there would be two calls to requestSolution. On Chrome Android and normal Firefox Android, only one requestSolution is called. I have a hunch Firefox Focus might be doing a HEAD request first, which might be handled like a GET.
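The hunch is testable in principle: if the server routes HEAD through the same handler as GET, any side effect in that handler (such as issuing a requestSolution) fires once per probe. A hypothetical Python sketch of that dispatch pattern (this is not go-ssb-room's actual Go code; the names are made up):

```python
calls = []

def handle_sign_in(method):
    # Hypothetical handler: HEAD shares the GET code path, so the
    # side effect (requestSolution) runs for both request types.
    calls.append("requestSolution")
    body = "sign-in challenge"
    return body if method == "GET" else ""   # HEAD responses carry no body

# A browser that probes with HEAD before fetching with GET
# triggers the side effect twice:
handle_sign_in("HEAD")
handle_sign_in("GET")
print(len(calls))
```
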
2025-04-01T06:40:28.221039
2018-09-09T08:04:06
358359140
{ "authors": [ "gd2020", "sschmid" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10902", "repo": "sschmid/Entitas-CSharp", "url": "https://github.com/sschmid/Entitas-CSharp/issues/784" }
gharchive/issue
KeyNotFoundException when generating source code using Jenny. Hi All, I have a suggestion. I found something which causes a KeyNotFoundException. I'm pretty sure you all already know it. It happens before a user has generated code from Entitas (or when the plugin has not been imported). I wanted to modify it to print 'Please generate code using Entitas' or to run this process, but it seems like DesperateDevs is not part of this project. Hi, indeed, I've seen this too :D This error message might be confusing, you are right. DesperateDevs is not part of the Entitas project; I will take a look into this. Fyi: KeyNotFoundException is usually thrown by the Properties.cs class when a key is requested that doesn't exist, e.g. when the Preferences.properties is not yet configured
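The suggested friendlier behavior (catching the missing key and telling the user to generate code first) can be sketched generically. This is Python rather than the actual C# Properties class, and the key name is made up:

```python
class Preferences:
    """Toy stand-in for a properties store, not the real Properties.cs."""

    def __init__(self, values):
        self._values = values

    def get(self, key):
        try:
            return self._values[key]
        except KeyError:
            # Turn the bare missing-key error into an actionable message.
            raise KeyError(
                f"'{key}' is not configured. Please generate code using Entitas first."
            ) from None

prefs = Preferences({})
try:
    prefs.get("Jenny.Plugins")   # made-up key name
except KeyError as err:
    print(err)
```
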
2025-04-01T06:40:28.233480
2021-06-03T19:16:24
910782789
{ "authors": [ "gordonwatts", "sthapa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10903", "repo": "ssl-hep/ServiceX_DID_Finder_lib", "url": "https://github.com/ssl-hep/ServiceX_DID_Finder_lib/pull/9" }
gharchive/pull-request
Update logging, change logging module name Hi Gordon, there are a few changes here. Some are more ticky-tacky so feel free to push back on those if needed. The main change is logging on line 77 of servicex_adapter.py ( self.logger.info(f"Metric: {json.dumps(mesg)}") ) . This outputs the necessary code to get parsed and used for performance information by Ilija's code. The other changes are: changing the name from logging to did_logging (I thought this might cause confusion with regards to the logging module); using logging.exception in the except clauses (doing this gets rid of the need for the traceback.print_exc since the exception reporting should output the same information); in servicex_adapter, I changed __logging to a class instance and renamed it to self.logger (I think using a class instance looks better; also, __logging being used in the methods might be a bit confusing for people (and possibly the interpreter) since it's really close to self.__logging, which would get name mangled). These look great, thank you very much!!
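The logging.exception swap can be demonstrated with the stdlib alone (the logger name and messages below are illustrative, not the actual ServiceX code): Logger.exception logs at ERROR level and appends the active exception's traceback automatically, which is what makes a separate traceback.print_exc call redundant.

```python
import io
import logging

# Capture log output in memory so the traceback is visible.
stream = io.StringIO()
logger = logging.getLogger("did_finder")        # illustrative name
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.INFO)

try:
    raise ValueError("bad DID")
except ValueError:
    # Logs at ERROR level and attaches the traceback on its own,
    # so no traceback.print_exc() is needed.
    logger.exception("DID lookup failed")

output = stream.getvalue()
print("Traceback" in output, "ValueError" in output)
```
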
2025-04-01T06:40:28.238851
2022-12-03T21:25:45
1474240951
{ "authors": [ "TomJansen", "sspanak" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10904", "repo": "sspanak/tt9", "url": "https://github.com/sspanak/tt9/issues/123" }
gharchive/issue
Physical backspace key is slow When remapping the backspace to a physical key, the responsiveness is much lower than when using the software backspace button. Can this responsiveness be improved somehow? I think I know what you mean. Unfortunately, I can't make them work the same on all phones, because the on-screen key needs to be adjusted programmatically, while the keypad repeat rate is handled by Android and it will probably differ a little on every phone. Either way, I am almost certain there is a mistake in the code, slowing down the hardware key more than it should, that I will fix. Technical Description: The on-screen key sends events so fast that I had to write extra code to make it repeat every 100 ms or so. Unfortunately, I forgot the hardware keys have a normal repeat rate and additionally applied the software repeat rate to the hardware backspace. When the two are combined, it results in a longer delay between two "delete" events. Solution: Apply the repeat delay only to the soft key. Thanks for reporting! I wouldn't have noticed it myself. The hardware key is no longer manually controlled, so it will now repeat at whatever the default phone/Android rate is. On my phone the delay between two events is about 50 ms. I adjusted the soft key to match that. It may differ on other phones, but nothing can be done about it. Case closed.
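The compounding described above is simple arithmetic, and a hypothetical timing simulation (not TT9's actual code) makes it concrete: with hardware events every 50 ms and a 100 ms software gate stacked on top, every other delete event is dropped.

```python
def hardware_events(interval_ms=50, duration_ms=1000):
    # Times at which the phone's own key-repeat fires delete events.
    return list(range(0, duration_ms, interval_ms))

def apply_software_gate(events, min_gap_ms=100):
    # Redundant software repeat limiter applied on top of the hardware rate.
    accepted, last = [], None
    for t in events:
        if last is None or t - last >= min_gap_ms:
            accepted.append(t)
            last = t
    return accepted

events = hardware_events()
gated = apply_software_gate(events)
print(len(events), len(gated))   # the gate halves the delete rate
```
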
2025-04-01T06:40:28.251718
2024-04-19T06:47:56
2252210530
{ "authors": [ "JanStevens", "thdxr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10905", "repo": "sst/ion", "url": "https://github.com/sst/ion/issues/271" }
gharchive/issue
Generated typescript files should not be excluded from git Hi, when you want to check TypeScript and ESLint rules on CI PR branches, you quickly end up in the situation that you first need to deploy your app in CI before the types are generated. For PRs this is not really ideal, since you don't want the changes to be applied directly (hence the PR). An option would be to generate the types in userspace (ex: sst.types.ts), indicating that the files should not be edited and will always be overwritten; this way they can be committed and CI can run. Regards yeah we need to think about this more - maybe it does make sense to check in the generated code. I was thinking about adding a codegen field in the config to allow specifying additional places to output code (and support other languages like go) we reworked where these get generated so you can handle it however you want - try 0.0.343
2025-04-01T06:40:28.254765
2024-05-03T19:34:53
2278305469
{ "authors": [ "jaduplessis", "jakubknejzlik", "jayair", "urosbelov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10906", "repo": "sst/ion", "url": "https://github.com/sst/ion/issues/361" }
gharchive/issue
AWS Cognito preSignUp trigger arn: invalid prefix Hello to all, when the cognito preSignUp trigger is created, the deploy is terminated and there is an error that says: XXX is an invalid ARN: arn: invalid prefix. Examine values at 'XXX.lambdaConfig.preSignUp'. Nice I'll ask Frank to review. @jaduplessis I've just hit this issue and after fixing it using a workaround (see below) I discovered another issue, this time in pulumi: https://github.com/pulumi/pulumi-aws/issues/678 Should invoke permission be implemented here as well? E.g. adding an invoke permission for each function: new aws.lambda.Permission("AllowExecutionFromCognito", { action: "lambda:InvokeFunction", function: migrateUser.name, principal: "cognito-idp.amazonaws.com", sourceArn: pool.nodes.userPool.arn, }); Workaround: const migrateUser = new sst.aws.Function("MigrateUser", { handler: "src/lambdas/migrate-user.handler", }); const pool = new sst.aws.CognitoUserPool( "RekapUserPool", { transform: { userPool: { lambdaConfig: { userMigration: migrateUser.arn, }, } } ) @jakubknejzlik I think you make a good point. I had to implement something similar myself. I've updated the PR to create the permissions after the pool and functions have been defined