| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:16.005858
| 2017-09-17T15:23:54
|
258311106
|
{
"authors": [
"vdelendik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10045",
"repo": "qaprosoft/carina",
"url": "https://github.com/qaprosoft/carina/issues/146"
}
|
gharchive/issue
|
strange pause 10-13 seconds after suite execution
Need to investigate and find the reason for the 10-13 second pause after test execution.
Pay attention to the timestamps of the first two lines:
2017-09-17 18:20:54 AbstractTest [main] [DEBUG]Short suite file name: debug.xml
2017-09-17 18:21:11 Messager [main] [INFO]INFO: '**************** Test execution summary ****************'.
2017-09-17 18:21:11 Messager [main] [INFO]RESULT #1: TEST [User profile test - addFollowers] PASS [log=file://L:\smule\smule-qa.\reports\qa\1505661650340/User_profile_test_-_addFollowers/test.log]
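The gap can be measured directly from the log timestamps; a minimal helper (the fixed-width timestamp prefix is an assumption about these logs) might look like:

```python
# Minimal helper to spot pauses between consecutive log lines.
# Assumption: each line starts with a "YYYY-MM-DD HH:MM:SS" timestamp.
from datetime import datetime

def line_gaps(lines):
    """Return the gap in seconds between each pair of consecutive lines."""
    stamps = [datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S") for line in lines]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

log = [
    "2017-09-17 18:20:54 AbstractTest [main] [DEBUG]Short suite file name: debug.xml",
    "2017-09-17 18:21:11 Messager [main] [INFO]INFO: 'Test execution summary'.",
]
print(line_gaps(log))  # [17.0]
```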
fixed indirectly
|
2025-04-01T04:35:16.340998
| 2023-10-10T09:12:34
|
1934854241
|
{
"authors": [
"lucasbrodo",
"mitsui29"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10046",
"repo": "qiayuanl/legged_control",
"url": "https://github.com/qiayuanl/legged_control/issues/52"
}
|
gharchive/issue
|
Unable to load robot model (dae files) in Rviz
Hi,
I'm getting this error (: Could not load resource [/home/lucas/legged_control_ws/src/legged_control/legged_examples/legged_unitree/legged_unitree_description/meshes/go1/calf.dae]: Unable to open file "/home/lucas/legged_control_ws/src/legged_control/legged_examples/legged_unitree/legged_unitree_description/meshes/go1/calf.dae") when trying to load legged_robot_description in Rviz. The files are correctly present in the meshes/go1 folder.
Thanks for your help !
Hey, I got the same problem as you, but I think the line you mentioned here has already been added to robot.xacro, and I still get the same error. Do you have any ideas? Thanks a lot!
This is the file you should have.
Then make sure the frame in RViz is odom, not base (see issue #47).
Hope it helps.
Also make sure to activate the controller :
rosservice call /controller_manager/switch_controller "start_controllers: ['controllers/legged_controller']
stop_controllers: ['']
strictness: 0
start_asap: false
timeout: 0.0"
|
2025-04-01T04:35:16.346297
| 2024-04-26T23:22:16
|
2266618966
|
{
"authors": [
"ebolyen",
"lizgehret"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10047",
"repo": "qiime2/distributions",
"url": "https://github.com/qiime2/distributions/pull/230"
}
|
gharchive/pull-request
|
maint: update mac runner to include conda
So since we are using the 'latest' versions for our osx and ubuntu runners, we are subject to weekly updates to each runner's included packages/etc. The latest osx runner image (reference here) no longer includes miniconda as an included package for some reason, which breaks any ci action where we're using conda (so basically all of them).
I've updated the osx runner to macos-12 because that's what we were using prior to this week's runner update (reference here) but also included a commented out step in our ci-dev workflow to install conda within the runner (if we'd like to keep using the latest mac runners instead of pinning at macos-12).
I'm going to merge this so that ci-dev can run successfully for any open PRs - but leaving these details here for posterity, and we can re-assess what the best path forward is next week. cc @ebolyen
ALP commit SHA: 4179ecf
Reason for this: https://github.com/actions/runner-images/issues/9262
|
2025-04-01T04:35:16.348567
| 2023-10-24T11:32:28
|
1959051515
|
{
"authors": [
"AlbertMitjans",
"JavierSab"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10048",
"repo": "qilimanjaro-tech/qiboconnection",
"url": "https://github.com/qilimanjaro-tech/qiboconnection/pull/109"
}
|
gharchive/pull-request
|
fix(connection): error message hiding in auth get
This line was hiding the "minimum client version required" error
Behaviour after removal:
Traceback (most recent call last):
File "/home/javi/Documents/qilimanjaro/projects/qiboconnection/src/qiboconnection/errors.py", line 73, in custom_raise_for_status
raise HTTPError(json.dumps(json_text, indent=2), response=response)
requests.exceptions.HTTPError: {
"title": "Upgrade Required",
"status": 426,
"detail": "Client Version (0.13.3) < Client Version required by the method (0.14.2) 426 Client Error: for url: https://qilimanjaroqaas.ddns.net:8080/api/v1/jobs/1234"
}
Hello. You may have forgotten to update the changelog!
Please edit changelog-dev.md with:
A one-to-two sentence description of the change. You may include a small working example for new features.
A link back to this PR.
|
2025-04-01T04:35:16.370701
| 2020-06-11T09:25:02
|
636863330
|
{
"authors": [
"fengqinglingyu",
"qjebbs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10049",
"repo": "qjebbs/sketch-meaxure",
"url": "https://github.com/qjebbs/sketch-meaxure/issues/12"
}
|
gharchive/issue
|
Export slices wrong
I've got another problem, here is the file:
test.sketch.zip
This sketch file contains page1 and control. When I select only control to export, the exported slices are correct, but when I choose both page1 and control, the exported slices are wrong.
I found the slice info differs only in rect:
{"x":84,"y":84,"width":50,"height":50} // wrong
{"x":26,"y":26,"width":50,"height":50} // normal
I tried to resolve the problem myself, but I don't have time right now, so please look into it.
I don't understand, this plugin doesn't export symbol artboards, you can't even select them in export panel.
I assume you have changed the code and built your own plugin.
26 and 84 are both correct results: your symbol was placed at (58,58) and the red rect is at (26,26) within the symbol, so 58+26=84.
If you find the slice was exported blank, it's a known issue (#10), which will be fixed by the latest release.
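The coordinate arithmetic can be sketched as follows; the function and coordinate layout here are illustrative, not the plugin's actual code:

```python
# Illustration of the coordinate math: an exported slice rect is reported
# in page coordinates, i.e. the symbol's position on the page plus the
# layer's rect inside the symbol. The function name is hypothetical.
def absolute_rect(symbol_pos, local_rect):
    """Translate a rect from symbol-local to page coordinates."""
    return {
        "x": symbol_pos[0] + local_rect["x"],
        "y": symbol_pos[1] + local_rect["y"],
        "width": local_rect["width"],
        "height": local_rect["height"],
    }

local = {"x": 26, "y": 26, "width": 50, "height": 50}  # rect inside the symbol
print(absolute_rect((58, 58), local))
# {'x': 84, 'y': 84, 'width': 50, 'height': 50}
```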
OK, I will merge the code and try again.
I've got the right slices, thanks.
|
2025-04-01T04:35:16.410955
| 2024-05-27T11:09:53
|
2318896759
|
{
"authors": [
"QHose",
"withdave"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10050",
"repo": "qlik-oss/qlik-cloud-embed-oauth-impersonation",
"url": "https://github.com/qlik-oss/qlik-cloud-embed-oauth-impersonation/issues/12"
}
|
gharchive/issue
|
unclear what is to be shown here
Should there be anything in the box?
Yes, do you receive an error message in the console @QHose ?
|
2025-04-01T04:35:16.495800
| 2024-08-09T16:44:23
|
2458298912
|
{
"authors": [
"andrewpbray",
"jimjam-slam"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10051",
"repo": "qmd-lab/closeread",
"url": "https://github.com/qmd-lab/closeread/pull/72"
}
|
gharchive/pull-request
|
Add devcontainer spec
Very selfish PR here: I only brought my iPad with me on my trip, so I can’t contribute unless I do it through Codespaces 😛
This adds a container spec that includes R, Quarto, a few R packages (mostly just tidyverse and sf, as I was hoping to add a map example as well as a palmer penguins chart example) and other (imo) essentials for VSCode container work, like httpgd (for SVG chart previews).
Unfortunately I can’t get the port forwarding to work on my iPad on the train (it usually works fine though), so I can’t preview my work, but it’s a start!
I don't think I can make this a draft on my phone, but probably no need to merge this (at least for now)! I just set it up on a branch so I could develop from Codespaces!
Hah, very clever! I was wondering what that comment about an iPad was in email...
|
2025-04-01T04:35:16.592705
| 2015-10-06T09:03:53
|
109964949
|
{
"authors": [
"7starsone",
"qooob",
"silenx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10052",
"repo": "qooob/authentic-theme",
"url": "https://github.com/qooob/authentic-theme/issues/261"
}
|
gharchive/issue
|
Usermin logo 404 (File not found)
Hello,
Usermin gets a 404 error on the logo after setting it in the Webmin theme configuration.
Failed to load resource: the server responded with a status of 404 (File not found)
http://domain.tld:20000/images/logo.png
This should not be the case. I just double-checked it and it works the expected way.
What is the console output of ls -lsa /usr/libexec/webmin/authentic-theme/images/logo.png and ls -lsa /usr/libexec/webmin/authentic-theme?
I don't have that path.
I have:
~# find / -name logo.png | grep authentic
/usr/share/webmin/authentic-theme/images/logo.png
/usr/share/webmin/authentic-theme/images/__logo.png
/usr/share/usermin/authentic-theme/images/__logo.png
/etc/webmin/authentic-theme/logo.png
/etc/usermin/authentic-theme/logo.png
I'm on Debian 8.2 .. virtualmin/webmin installed with virtualmin install.sh script
Seems to be my bad, you're right. I always used a symlink to Authentic Theme installed in Webmin, which is why it was working. I will fix it in 16.02.
Thanks for reporting!
Thanks to you for this great UI
Actually, I think it's still working on my end. I misjudged the reason above.
What are the permission rights on the files? What is the output of:
ls -lsa /etc/webmin/authentic-theme
ls -lsa /etc/usermin/authentic-theme
ls -lsa /usr/share/webmin/authentic-theme
ls -lsa /usr/share/usermin/authentic-theme
What theme version are you using?
http://pastebin.com/1CZhuEsC
Operating system Debian Linux 8
Webmin version 1.760
Virtualmin version 4.18
Theme version Authentic Theme 16.01
All seems fine. Do you have any errors in miniserv.log?
This is the log on logging into Usermin:
[07/Oct/2015:10:58:19 +0200] "POST /session_login.cgi HTTP/1.1" 302 0
[07/Oct/2015:10:58:19 +0200] "GET /unauthenticated/js/settings.js HTTP/1.1" 200 1681
[07/Oct/2015:10:58:19 +0200] "GET /?mail HTTP/1.1" 200 6820
[07/Oct/2015:10:58:20 +0200] "GET /images/logo.png HTTP/1.1" 404 32 <----- 404 Here
[07/Oct/2015:10:58:20 +0200] "GET /unauthenticated/fonts/roboto/El-bgsteBznJNL5pgUfFLA.woff2 HTTP/1.1" 200 63104
[07/Oct/2015:10:58:20 +0200] "GET /sysinfo.cgi HTTP/1.1" 200 2236
[07/Oct/2015:10:58:20 +0200] "GET /unauthenticated/js/authentic.min.js?1601 HTTP/1.1" 200 226800
Did you try to restart Usermin/Webmin?
Yes, of course.
I noticed this in miniserv.error when I log in to Usermin:
sh: 1: mail: not found
Warning: something's wrong at /usr/share/usermin/authentic-theme/authentic.pl line 8.
Warning: something's wrong at /usr/share/usermin/authentic-theme/authentic.pl line 8.
[07/Oct/2015:11:05:20 +0200] [<IP_ADDRESS>] /images/logo.png : File not found
Warning: something's wrong at /usr/share/usermin/authentic-theme/authentic.pl line 8.
Can you comment out (#) line 8?
I commented out line 8: ( #warn $@; )
Now in miniserv.error I get only:
sh: 1: mail: not found
[07/Oct/2015:11:09:48 +0200] [<IP_ADDRESS>] /images/logo.png : File not found
That is so strange! What is the output of
ls -lsaZ /usr/share/webmin/authentic-theme/images
ls -lsaZ /usr/share/usermin/authentic-theme/images
sh: 1: mail: not found
You should install the mail command so the notification features work.
http://pastebin.com/eyxP3cAy
I forgot two things:
On Webmin/Virtualmin the logo works (it's only a Usermin problem).
My Usermin is on HTTP (not HTTPS).
I cannot reproduce your bug :( Whenever I click save logo and reload Usermin, the logo is automatically copied from /etc/usermin/images to /usr/share/usermin/authentic-theme/images.
You can use 2 workarounds:
Copy logo.png to the theme directory manually (by the way, do you get any errors when running as root?)
cp /etc/usermin/authentic-theme/logo.png /usr/share/usermin/authentic-theme/images/
Create a symlink to the Usermin theme, using the Webmin copy of the theme:
rm -rf /usr/share/usermin/authentic-theme
ln -s /usr/share/webmin/authentic-theme /usr/share/usermin/authentic-theme
Thank you.
The number 1 solution fixed it (copied as root).
No errors/warnings on the console?
I only have this in miniserv.error when I log in/out:
Using a hash as a reference is deprecated at /usr/share/usermin/authentic-theme/session_login.cgi line 181.
and this in miniserv.log:
[07/Oct/2015:11:47:31 +0200] "GET / HTTP/1.1" 401 1990
[07/Oct/2015:11:47:44 +0200] "POST /session_login.cgi HTTP/1.1" 302 0
[07/Oct/2015:11:47:45 +0200] "GET / HTTP/1.1" 302 5444
[07/Oct/2015:11:47:46 +0200] "GET /?mail HTTP/1.1" 200 6820
[07/Oct/2015:11:47:46 +0200] "GET /sysinfo.cgi HTTP/1.1" 200 2236
I see. All right. I will keep this in mind. At the moment I believe this is system-specific and I cannot reproduce it. In case you have any other problems, please just open a new issue.
Hello, how do I add the logo on the Usermin login page?
Just add it in Webmin and then Ctrl+R to reload the theme. It will appear in Usermin as well.
That works on Webmin/Virtualmin, but the logo for Usermin is still the original logo...
Maybe because I never used Usermin with your theme before?
All you need is to make sure that you have 18.30 installed on both Webmin and Usermin, then go to Logo Control from Webmin and set the logo. It will just work after you hit save.
Just tested; it works.
Why is Usermin on 18.10 if Authentic Theme was updated? Is it a bug?
Yes, there were some broken automatic updates. I'm aware. It's the last thing I'm going to fix before 18.31 release.
ok :-) thanks
I still see this, even after the 18.31 update and another click on Save in the logo control page...
There are 2 different types of logos: for authenticated users and for not-authenticated users.
If you think you did everything correctly, then check that the /etc/usermin/authentic-theme directory exists and is writable.
The workaround is to manually copy the logo files there from /etc/webmin/authentic-theme.
That directory doesn't exist. Why? There is just /etc/usermin/authentic-theme as a file... but not a directory.
Probably it's a symlink. Just create a dir and copy the files there from /etc/webmin/authentic-theme.
|
2025-04-01T04:35:16.600039
| 2020-06-26T22:51:20
|
646549133
|
{
"authors": [
"alexis-evelyn",
"qouteall"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10053",
"repo": "qouteall/ImmersivePortalsMod",
"url": "https://github.com/qouteall/ImmersivePortalsMod/issues/312"
}
|
gharchive/issue
|
When Creating a Vertical Global Portal, The Client Crashes
Version: immersive-portals-0.13(forMc1.16.1withFabric).jar
1 Minute Video Demonstrating The Issue: https://youtu.be/8y9LF31Zb-A
Crash Report: https://paste.ee/p/Xis4q
This requires a world backup to be restored in order to access the world again.
fixed in the latest version
|
2025-04-01T04:35:16.605927
| 2019-10-18T11:55:27
|
509033449
|
{
"authors": [
"jberkenbilt",
"trueroad"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10054",
"repo": "qpdf/qpdf",
"url": "https://github.com/qpdf/qpdf/pull/375"
}
|
gharchive/pull-request
|
Signature dictionary /Contents encryption
I've noticed that the signature dictionary /Contents value is not encrypted even if the PDF is encrypted.
Adobe Acrobat Reader DC handles such PDFs properly.
Conversely, it cannot handle properly an encrypted PDF with the signature dictionary /Contents value encrypted.
This pull request makes QPDF handle the signature dictionary /Contents value as unencrypted even if the PDF is encrypted.
Tests for this are also added.
Please review this pull request.
I've noticed that this pull request needs a change to QPDFObjectHandle::getParsedOffset().
So I've merged the branch and changed it.
If you do not want the two branches merged with a merge commit, but would rather keep the commits linear, I can make such an integrated branch / pull request.
I am reviewing your changes and preparing a pull request that I will be prepared to merge after your final okay that it meets your needs. Your changes are high quality, which makes my work easy. I'll let you know when I'm ready. Kids are playing at a neighbor's house, so I have some time. :-)
Incorporated by #376
|
2025-04-01T04:35:16.690056
| 2016-03-07T16:53:14
|
139030688
|
{
"authors": [
"StewartDouglas",
"jfkirk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10061",
"repo": "quantopian/zipline",
"url": "https://github.com/quantopian/zipline/pull/1030"
}
|
gharchive/pull-request
|
WIP (DO NOT MERGE): Introduce new ExchangeCalendar and TradingSchedule classes
Refactor our calendar and timing logic so that it is independent of the TradingEnvironment, and can be generalized to futures exchanges which trade 24/5.
The ExchangeCalendar's API looks like the following public attributes and methods:
name: The name of this exchange, e.g. NYSE.
tz: The timezone in which this exchange is located.
is_open_on_minute: Is the exchange open* at the given minute?
is_open_on_date: Is the exchange open* at any point during the specified calendar day?
trading_days: Return all of the exchange sessions between the given start and end, inclusive.
opens_and_closes: Return the opens and closes of the exchange during the specified date.
session_date: Return the session to which a given minute belongs. For example, a futures contract trading at 7pm EST on Sunday is considered to be part of Monday's session.
minutes_for_date: For a given datetime, return the minutes in the corresponding session
minute_window: Returns a DatetimeIndex containing a window of minutes.
The ExchangeCalendar API still has a number of unanswered questions, including:
Should methods such as trading_days accept only canonicalized dates, or should they map arbitrary datetimes to session boundaries before returning the date range?
Should is_open_on_date return True when given Sunday datetimes for relevant futures exchanges?
These questions will be answered in part by what the current consumers of this information expect.
The main lookup data structure, self.schedule, is a pandas DataFrame, but we should review this and decide whether to instead use a numpy array of minutes or pandas DatetimeIndex. There will be space vs. time trade-offs associated with this decision.
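The API surface above can be sketched with a toy calendar; the class name and schedule layout here are assumptions, not zipline code:

```python
# A self-contained toy sketch of the ExchangeCalendar surface described
# above. The schedule maps a session date to its (open, close) datetimes;
# an overnight futures session may open before its session date.
from datetime import date, datetime

class ToyExchangeCalendar:
    def __init__(self, name, tz, schedule):
        self.name, self.tz, self.schedule = name, tz, schedule

    def is_open_on_minute(self, minute):
        return any(o <= minute < c for o, c in self.schedule.values())

    def is_open_on_date(self, day):
        return day in self.schedule

    def session_date(self, minute):
        # e.g. a futures trade at 7pm Sunday belongs to Monday's session.
        for day, (o, c) in self.schedule.items():
            if o <= minute < c:
                return day
        return None

cal = ToyExchangeCalendar("CME", "America/Chicago", {
    date(2016, 3, 7): (datetime(2016, 3, 6, 17, 0), datetime(2016, 3, 7, 16, 0)),
})
print(cal.session_date(datetime(2016, 3, 6, 19, 0)))  # 2016-03-07
```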
This work borrows significantly from the work done by @ssanderson here: https://github.com/quantopian/zipline/pull/556
We define open to mean: is the exchange accepting orders.
Note from our conversation: We may want two separate is_open_on_date and is_session_date methods to break up the two separate answers for different consumers.
Closing in deference to new branch.
|
2025-04-01T04:35:16.698308
| 2021-04-09T13:58:45
|
854552135
|
{
"authors": [
"limdauto"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10062",
"repo": "quantumblacklabs/kedro-viz",
"url": "https://github.com/quantumblacklabs/kedro-viz/pull/414"
}
|
gharchive/pull-request
|
[WIP] Refactor viz backend
Description
Development notes
QA notes
Checklist
[ ] Read the contributing guidelines
[ ] Opened this PR as a 'Draft Pull Request' if it is work-in-progress
[ ] Updated the documentation to reflect the code changes
[ ] Added new entries to the RELEASE.md file
[ ] Added tests to cover my changes
Legal notice
[ ] I acknowledge and agree that, by checking this box and clicking "Submit Pull Request":
I submit this contribution under the Apache 2.0 license and represent that I am entitled to do so on behalf of myself, my employer, or relevant third parties, as applicable.
I certify that (a) this contribution is my original creation and / or (b) to the extent it is not my original creation, I am authorised to submit this contribution on behalf of the original creator(s) or their licensees.
I certify that the use of this contribution as authorised by the Apache 2.0 license does not violate the intellectual property rights of anyone else.
Closes in favour of https://github.com/quantumblacklabs/kedro-viz/pull/432/files
|
2025-04-01T04:35:16.707397
| 2020-01-14T01:26:27
|
549277088
|
{
"authors": [
"MichaelBroughton",
"NoureldinYosri",
"balopat",
"dstrain115",
"mpharrigan",
"verult"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10063",
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/issues/2677"
}
|
gharchive/issue
|
PauliSumCollector is not compatible with engine
In PauliSum Collector when doing the basis transformations with
https://github.com/quantumlib/Cirq/blob/3469e5c9bf99b4f54fe147379ff58e1642754dc0/cirq/work/pauli_sum_collector.py#L112
The gates that get returned are cirq.SingleQubitClifford(...), which won't play nicely with the serializers. A quick fix is to just add a cirq.decompose around the result of pauli_string.to_z_basis_ops(), but I wasn't sure if people might have other opinions?
xref #2781
Just to clarify this is what you are facing, right?
cirq.google.SYC_GATESET.serialize(cirq.Circuit(cirq.SingleQubitCliffordGate.Y(cirq.NamedQubit("a"))))
Throws an error
ValueError: Cannot serialize op cirq.SingleQubitCliffordGate(X:-X, Y:+Y, Z:-Z).on(cirq.NamedQubit('a')) of type <class 'cirq.ops.clifford_gate.SingleQubitCliffordGate'>
While
cirq.google.SYC_GATESET.serialize(cirq.Circuit(cirq.decompose(cirq.SingleQubitCliffordGate.Y(cirq.NamedQubit("a")))))
does the trick.
To answer your question: cirq.decompose happens to compile to the gateset that is compatible with the SYC_GATESET serializers, so today that seems to be your best bet.
Thoughts:
there is no nicer option than cirq.decompose() to convert a SingleQubitCliffordGate in general to regular Pauli gates that are recognized by our common serializers
we could change to_z_basis_ops so it returns a sequence of regular Pauli gates (using cirq.decompose) - the problem with that is that some algorithms might rely on the fact that it returns the SingleQubitCliffordGate representation.
we could add decomposition in the serializers as an automated fallback strategy, guarded by a parameter - eg. SYC_GATESET.serialize(circuit, decompose_on_fail=True) would try to decompose an unrecognized gate.
We could make the serialization error message better and suggest decomposition
We could add a SingleQubitCliffordGate serializer to our gatesets that would do the decomposition.
I think 3 and 4 would be my preference here.
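For illustration, option 3 (a guarded decompose-on-fail in the serializer) could look roughly like this; everything below is a toy sketch with hypothetical names, not real Cirq API:

```python
# Hypothetical sketch of option 3: a serializer that falls back to
# decomposition when it hits a gate it does not recognize.
class SerializationError(ValueError):
    pass

def serialize_ops(ops, known_gates, decompose, decompose_on_fail=False):
    """Serialize each op (gate_name, target); optionally decompose and retry."""
    out = []
    for op in ops:
        if op[0] in known_gates:
            out.append(f"{op[0]}@{op[1]}")
        elif decompose_on_fail:
            # Retry on the decomposition of the unrecognized op.
            out.extend(serialize_ops(decompose(op), known_gates, decompose))
        else:
            raise SerializationError(f"Cannot serialize op {op!r}")
    return out

# Toy decomposition: a 'CLIFFORD_Y' op breaks into Z then X on the same qubit.
toy_decompose = lambda op: [("Z", op[1]), ("X", op[1])]

print(serialize_ops([("CLIFFORD_Y", "a")], {"X", "Z"}, toy_decompose,
                    decompose_on_fail=True))  # ['Z@a', 'X@a']
```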
@viathor @dstrain115 @dabacon ?
Would be good to arrive at a decision before 1.0 in case there are breaking changes necessary.
I don't know how well-used PauliSumCollector is
We should just be able to silently convert these into PhasedXZGates if necessary though. Would that satisfy this bug?
It looks like that is option 5 that Balint suggested above. I don't think option 3 is a good idea, since that could mess with moments during serialization, which would not be expected.
@verult Do we want to decide on what to do for the Clifford gates as part of the Google gateset refactor?
In the device refactor, the principle we've followed for arbitrary single qubit gates (e.g. cirq.H) is to not accept them directly and instead have them go through a transformer, so that parameters of PhasedXZGate, a device native gate, are clear to the user:
device = cirq_google.get_engine().get_processor('processor_name').get_device()
circuit = cirq.Circuit(cirq.SingleQubitCliffordGate.Y(cirq.NamedQubit("a")))
cirq.optimize_for_target_gateset(circuit, gateset=device.metadata.compilation_target_gatesets[0])
So I'm leaning towards leaving it as is.
SingleQubitCliffordGate is now serializable (though gates are converted to PhasedXZ gates in the process). However, when interacting with the engine, some validation happens before serialization, and one of the steps checks whether the operation is in the gateset supported by the device. No gateset explicitly supports SingleQubitCliffordGate. For now, running cirq.optimize_for_target_gateset(circuit, gateset=device.metadata.compilation_target_gatesets[0]) as suggested by https://github.com/quantumlib/Cirq/issues/2677#issuecomment-1152818978 is the solution.
Unless we want to add SingleQubitCliffordGate to the supported gatesets. Should we?
For reference, here are the supported gatesets: https://github.com/quantumlib/Cirq/blob/a3eed6b97490556cf1dda82928a0aa9ea8798da1/cirq-google/cirq_google/devices/grid_device.py#L56-L64
|
2025-04-01T04:35:16.710586
| 2021-07-13T14:20:57
|
943460600
|
{
"authors": [
"daxfohl"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10064",
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/pull/4317"
}
|
gharchive/pull-request
|
Lazily create TrialResult.final_simulator_state
Final step of #4100: creates TrialResult.final_simulator_state lazily so that it's possible to run an entire simulation of a sparsely entangled circuit and sample measurements without running out of memory. A unit test was added with 59 qubits on the density matrix simulator running 1000 repetitions (9 cirq.X's and 1 cirq.measure on each qubit) in under 100 ms.
def test_large_untangled_okay():
    circuit = cirq.Circuit()
    for i in range(59):
        for _ in range(9):
            circuit.append(cirq.X(cirq.LineQubit(i)))
        circuit.append(cirq.measure(cirq.LineQubit(i)))
    with pytest.raises(MemoryError, match='Unable to allocate'):
        _ = cirq.DensityMatrixSimulator(split_untangled_states=False).simulate(circuit)
    result = cirq.DensityMatrixSimulator(split_untangled_states=True).simulate(circuit)
    assert set(result._step_result_or_state._qubits) == set(cirq.LineQubit.range(59))
    # _ = result.final_density_matrix hangs (as expected)
    result = cirq.DensityMatrixSimulator(split_untangled_states=True).run(circuit, repetitions=1000)
    assert len(result.measurements) == 59
    assert len(result.measurements['0']) == 1000
    assert (result.measurements['0'] == np.full(1000, 1)).all()
Supersedes #4198, which had diverged too far from master to merge.
@95-martin-orion Start this review at simulator.py; otherwise it can be confusing. Basically all I did was take the field _final_simulator_state and change it to a property that is lazily calculated from the step result, and update the code to pass in that step result instead of the calculated simulator state. (It also still allows the calculated state to be passed in for backwards compatibility.) Then for all subclasses that had some field based on the final state, I changed those to properties that reference the _final_simulator_state property so that they are lazily calculated too.
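The field-to-lazy-property refactor described can be illustrated in miniature; the classes below are a stripped-down stand-in, not actual Cirq code:

```python
# Toy version of the refactor: a stored field becomes a property that is
# computed lazily from the step result on first access and then cached.
class ToyStepResult:
    def __init__(self):
        self.compute_calls = 0

    def compute_state(self):
        # Stands in for the expensive join of per-qubit states.
        self.compute_calls += 1
        return "joined-state"

class ToyTrialResult:
    def __init__(self, step_result=None, final_simulator_state=None):
        # Either form may be passed in, for backwards compatibility.
        self._step_result = step_result
        self._cached_state = final_simulator_state

    @property
    def _final_simulator_state(self):
        if self._cached_state is None:
            self._cached_state = self._step_result.compute_state()
        return self._cached_state

step = ToyStepResult()
result = ToyTrialResult(step_result=step)
assert step.compute_calls == 0            # nothing computed yet
_ = result._final_simulator_state
_ = result._final_simulator_state
assert step.compute_calls == 1            # computed once, then cached
```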
@95-martin-orion The requested changes are complete.
|
2025-04-01T04:35:16.713857
| 2021-09-21T03:02:16
|
1001738352
|
{
"authors": [
"MichaelBroughton",
"tanujkhattar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10065",
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/pull/4513"
}
|
gharchive/pull-request
|
Use gatesets in cirq/ion, cirq/neutral_atoms, cirq_pasqal and cirq_ionq
This PR adopts the use of newly added cirq.GateFamily and cirq.Gateset classes in cirq/ion, cirq/neutral_atoms, cirq_pasqal and cirq_ionq.
Note that the functions is_native_neutral_atom_op and is_native_neutral_atom_gate are now moot, but they are part of the public API and hence are not removed. We can deprecate them later -- though, IMO, both the cirq/ion and cirq/neutral_atoms modules should be deprecated :)
Follow up PRs will further adopt the use of Gatesets in cirq/optimizers and cirq_google/
This is part of the roadmap item #3243
Can we split this up into two separate PRs? One for common_gate_families and one for the vendor/device changes you've made here?
Sent out common_gate_families PR for review. Once it is in, I can further split off every vendor change into a separate PR to minimize overlap.
Marking this as a draft for now.
Smaller individual PRs have been sent out and we can close this one now.
|
2025-04-01T04:35:16.716275
| 2022-01-17T14:21:16
|
1105917560
|
{
"authors": [
"dstrain115",
"mpharrigan"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10066",
"repo": "quantumlib/Cirq",
"url": "https://github.com/quantumlib/Cirq/pull/4850"
}
|
gharchive/pull-request
|
Make gate_sets actually optional
mypy and PyCharm need a default value in the abstract interface in order for it to be truly optional.
Also, add a copy of the inherited arguments in SimulatedLocalProcessor.
All of the changes here look like dangerous default values. Would it be possible for the optional case to do something like:
Changed.
What about get_sampler?
Oh weird, maybe I was looking at only a subset of commits.
|
2025-04-01T04:35:16.718587
| 2024-06-05T09:08:04
|
2335324933
|
{
"authors": [
"horst-laubenthal",
"melloware"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10067",
"repo": "quarkiverse/quarkus-mailpit",
"url": "https://github.com/quarkiverse/quarkus-mailpit/issues/70"
}
|
gharchive/issue
|
A possibility to configure the smtp Port in DevService
We are using a Keycloak dev service that is able to send emails. To configure our realm import with a proper SMTP config, it would be nice if there were a way to configure a fixed port.
Or, to be more convenient, would it be possible to use the property quarkus.mailer.port=... if it is set?
I was doing this before but @ggrebert helped refactor the code to be more dynamic and this feature was lost. @ggrebert do you think the request above is still possible? That was my original intent in the first version.
@all-contributors add @horst-laubenthal for ideas
@horst-laubenthal 1.1.0 is in Maven Central. Can you try it now? It should look for your quarkus.mailer.port and configure that.
Works! Thank you!
|
2025-04-01T04:35:16.722731
| 2023-01-04T11:14:54
|
1518784388
|
{
"authors": [
"metacosm",
"r00ta"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10068",
"repo": "quarkiverse/quarkus-operator-sdk",
"url": "https://github.com/quarkiverse/quarkus-operator-sdk/issues/467"
}
|
gharchive/issue
|
max-retries is ignored in the controller configuration (operator-sdk version 4.0.5 and quarkus 2.13.6)
With the operator-sdk version 4.0.5 and quarkus 2.13.6, with the following controller
@ApplicationScoped
@ControllerConfiguration(name = FooController.NAME, labelSelector = "app.kubernetes.io/managed-by=foocontroller")
public class FooController implements Reconciler<Foo>,
        EventSourceInitializer<Foo>,
        ErrorStatusHandler<Foo> {

    public static final String NAME = "foocontroller";
    private static final Logger LOGGER = LoggerFactory.getLogger(FooController.class);

    @Override
    public UpdateControl<Foo> reconcile(Foo foo, Context<Foo> context) {
        LOGGER.info("Create or update Foo: '{}' in namespace '{}'",
                foo.getMetadata().getName(),
                foo.getMetadata().getNamespace());
        throw new RuntimeException("NICE TRY");
    }
}
and the following application.properties
# Looks like this is ignored by quarkus...
quarkus.operator-sdk.controllers.foocontroller.retry.max-attempts=1
the reconcile loop is called 5 times instead of just once (when the exception is raised).
See https://github.com/r00ta/issues-reproducers/tree/main/operator-sdk-bug-ignore-max-retries for the reproducer
This is weird because there are tests that check that the values coming from application.properties are indeed taken into account. I suspect, though, that, since this configuration has been deprecated in JOSDK for a while, maybe there's something funky going on with the conversion to the supported configuration…
OK, I have found the issue and it's indeed an interplay between the old configuration and the new configuration style and it's rather annoying to fix… 😭
I've opened #469 to address this issue. Could you give it a try and let me know how it goes, please?
Just tested with my reproducer: looks like it works properly with this fix :+1: Thank you!
|
2025-04-01T04:35:16.871121
| 2021-05-25T02:29:50
|
900195008
|
{
"authors": [
"tianmeng1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10069",
"repo": "quartz-scheduler/quartz",
"url": "https://github.com/quartz-scheduler/quartz/issues/686"
}
|
gharchive/issue
|
Misfires happen all the time because the qrtz_simple_triggers table is missing data
log: LocalDataSourceJobStore - Handling the first 1 triggers that missed their scheduled fire-time. More misfired triggers remain to be processed.
ERROR LocalDataSourceJobStore - MisfireHandler: Error handling misfires: Couldn't retrieve trigger: No record found for selection of Trigger with key: task-betaf.43eeaba73d7b4ae0b12a41c003da48fb' and statement: SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE SCHED_NAME = 'scheduler' AND TRIGGER_NAME = ? AND TRIGGER_GROUP = ?
I found many rows through SQL: SELECT * FROM qrtz_triggers WHERE SCHED_NAME = 'scheduler' AND TRIGGER_STATE = 'WAITING' and NEXT_FIRE_TIME < (unix_timestamp(now()) - 60) * 1000 ORDER BY NEXT_FIRE_TIME ASC, PRIORITY desc
The first row has no matching record in QRTZ_SIMPLE_TRIGGERS, so the MisfireHandler is interrupted. The other misfired records cannot be processed and cannot be fired.
quartz 2.2.1
|
2025-04-01T04:35:16.872721
| 2017-10-03T09:49:14
|
262366846
|
{
"authors": [
"kRITZCREEK"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10070",
"repo": "quasar-analytics/quasar",
"url": "https://github.com/quasar-analytics/quasar/issues/2872"
}
|
gharchive/issue
|
Weird deviation from urlencoding format
The Quasar view mount format expects spaces to be encoded as pluses, while everything else can be url encoded.
We recently updated purescript-uri, and (for the best!) the new ps-uri dropped support for that + encode/decode behaviour.
What's the reasoning behind encoding spaces with pluses rather than %20? It would be nice if %20 were a valid encoding of a space.
Hmm seems like there's some other problem actually... sorry for the noise.
|
2025-04-01T04:35:17.214995
| 2024-03-28T11:43:05
|
2213009621
|
{
"authors": [
"marten-seemann",
"zllovesuki"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10071",
"repo": "quic-go/quic-go",
"url": "https://github.com/quic-go/quic-go/issues/4395"
}
|
gharchive/issue
|
Clarify the usage of StatelessResetToken
For someone like me who has RTFM'd but still can't tell whether:
StatelessResetToken needs to be the same for all servers?
StatelessResetToken should be unique per server?
StatelessResetToken should be the same for clients and servers?
Assuming that the servers are load balanced via anycast and routed based on ConnectionID (hence no migration is considered).
Could we clarify how the token should be used?
Hi @zllovesuki, that's a very good question!
StatelessResetToken should be the same for clients and servers?
No, and there's no way to communicate the token between clients and servers.
Assuming that the servers are load balanced via anycast and routed based on ConnectionID (hence no migration is considered).
With anycast, the routing can change at any moment. Packets intended for one server would then end up at a different server, which doesn't have access to the TLS session keys. In that case, you'd want the client to learn about this as quickly as possible (otherwise the client has to wait for the idle timeout). With a stateless reset, the client's connection would be closed right away.
So to answer your question, all servers participating in the anycast setup should share the same stateless reset key.
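Quic-go specifics aside, the "share the same key" requirement can be sketched in a few lines: every server in the anycast group derives an identical 32-byte key from a deployment-wide secret. The function name and HMAC label below are illustrative, not quic-go API — in quic-go you would place the resulting bytes into the transport's stateless reset key.

```python
import hashlib
import hmac

def derive_stateless_reset_key(shared_secret: bytes) -> bytes:
    """Derive a deterministic 32-byte stateless reset key from a secret
    shared by every server in the anycast group, so any server can emit
    a valid stateless reset for a connection it has no state for."""
    # The label domain-separates this derivation from other uses of the secret.
    return hmac.new(shared_secret, b"stateless-reset-key", hashlib.sha256).digest()

# Two servers configured with the same secret derive the same key.
key_a = derive_stateless_reset_key(b"deployment-wide secret")
key_b = derive_stateless_reset_key(b"deployment-wide secret")
assert key_a == key_b and len(key_a) == 32
```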
I've actually been working on a new documentation site for quic-go, since some QUIC concepts need a little bit more context than one can reasonably provide within the godoc framework. I'm not quite ready to launch yet, but expect some news in the next week(s).
What I can share at this point is the section about stateless resets. What do you think? Would that have been helpful to answer your question?
That's sufficient information! Thank you for clarifying
Great, thanks for confirming! I'm really looking forward to launching the new docs website :)
I'm going to close this issue for now. Feel free to reopen if you have any more questions about stateless resets.
|
2025-04-01T04:35:17.223106
| 2024-05-16T13:27:03
|
2300409069
|
{
"authors": [
"philclifford",
"popey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10072",
"repo": "quickemu-project/quicktest",
"url": "https://github.com/quickemu-project/quicktest/issues/20"
}
|
gharchive/issue
|
bug: Tesseract command line options only work on 5.x
Expected behavior
Quicktest should work on Ubuntu 22.04.
Actual behavior
Tesseract fails with Error, unknown command line argument '--loglevel'
Workaround:
Unset the tesseract options environment variable or set it to something else. TESSERACT_OCR_OPTIONS="--oem 0" ./quicktest
Steps to reproduce the behavior
Run Quicktest on Ubuntu 22.04
Additional context
We should probably either detect the version of tesseract and bail if less than 5.x, or preferably, tweak settings depending on the version of tesseract installed. Ideally people should be able to run this on any release.
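A sketch of that version-detection idea (the helper name and the chosen option strings are illustrative): parse the major version from the first line of `tesseract --version` and only pass 5.x-only flags such as `--loglevel` when they are supported.

```python
import re

def tesseract_options(version_output: str) -> str:
    """Choose OCR options based on the installed tesseract's major version,
    parsed from the first line of `tesseract --version`. `--loglevel` only
    exists from 5.x, so older installs get an empty option string."""
    match = re.search(r"tesseract\s+v?(\d+)\.(\d+)", version_output)
    if not match:
        raise ValueError("could not parse tesseract version")
    major = int(match.group(1))
    return "--loglevel OFF" if major >= 5 else ""

assert tesseract_options("tesseract 5.3.0") == "--loglevel OFF"
assert tesseract_options("tesseract 4.1.1") == ""
```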
For me, with the above TESSERACT_OCR_OPTIONS="--oem 0" workaround on 22.04 I then get
Error: Tesseract (legacy) engine requested, but components are not present in /usr/share/tesseract-ocr/4.00/tessdata/eng.traineddata!!
Failed loading language 'eng'
Tesseract couldn't load any languages!
but
List of available languages (4):
cym
eng
gla
osd
so I resorted to looking minimally at the --help-extra output and tried --oem 4, which got further but failed to PASS (the requested text was there but not found). :stop_sign:
|
2025-04-01T04:35:17.226435
| 2022-07-09T00:00:40
|
1299520232
|
{
"authors": [
"fmassot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10073",
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/1746"
}
|
gharchive/issue
|
Avoid reading min/max fastfield values to check if we really need to filter on fast field values
We are reading the fast field min/max to check whether we need a timestamp filter in the collector. This operation requires prefetching the fast field file, even though we already have this information in the split metadata and could decide early not to prefetch the fast field file.
https://github.com/quickwit-oss/quickwit/blob/main/quickwit-search/src/filters.rs#L35
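The early decision the issue asks for can be sketched from split metadata alone (names are hypothetical; Quickwit's actual types differ): if the split's [min, max] timestamp range is fully contained in the query's range, every document matches and the timestamp fast field never needs to be prefetched.

```python
def needs_timestamp_filter(split_min: int, split_max: int,
                           query_start: int, query_end: int) -> bool:
    """Decide, from split metadata alone, whether the leaf search must
    filter on the timestamp fast field. If the split's time range is
    fully contained in the query range, every document matches and the
    fast field file does not need to be prefetched."""
    fully_contained = query_start <= split_min and split_max <= query_end
    return not fully_contained

assert needs_timestamp_filter(10, 20, 0, 100) is False   # split inside query range
assert needs_timestamp_filter(10, 200, 0, 100) is True   # split crosses the boundary
```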
Closing this issue as we already prune splits, so we don't need to do extra work in the leaf search; #1744 already describes what we should do when there is no timestamp filtering.
|
2025-04-01T04:35:17.228817
| 2022-12-01T09:41:38
|
1470991997
|
{
"authors": [
"fmassot",
"guilload"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10074",
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/2529"
}
|
gharchive/issue
|
Can't run quickwit on AWS Linux
At first I had an issue with libsasl2.so.2
./quickwit
./quickwit: error while loading shared libraries: libsasl2.so.2: cannot open shared object file: No such file or directory
I tried to solve it with this fix
cd /usr/lib64
sudo ln libsasl2.so.3 libsasl2.so.2
And then I got these errors:
./quickwit: /lib64/libsasl2.so.2: no version information available (required by ./quickwit)
./quickwit: /lib64/libpthread.so.0: version `GLIBC_2.28' not found (required by ./quickwit)
./quickwit: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by ./quickwit)
./quickwit: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by ./quickwit)
./quickwit: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by ./quickwit)
./quickwit: /lib64/libc.so.6: version `GLIBC_2.29' not found (required by ./quickwit)
./quickwit: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by ./quickwit)
./quickwit: /lib64/libc.so.6: version `GLIBC_2.30' not found (required by ./quickwit)
AWS Linux info:
cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Fixed via #2546, #2549, and #2550.
|
2025-04-01T04:35:17.230885
| 2024-04-03T06:41:34
|
2222038484
|
{
"authors": [
"fulmicoton"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10075",
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/4830"
}
|
gharchive/issue
|
leaf search panicked on query
{
"query": {
"bool": {
"must": [
{
"multi_match": {
"query": "GET POST",
"fields": [
"db_statement"
]
}
},
{
"range": {
"duration": {
"gte": 0.6,
"lte": 0.9
}
}
},
{
"range": {
"timestamp": {
"gte": "2024-03-27T14:00:00Z",
"lte": "2024-03-27T14:10:00Z"
}
}
}
]
}
},
"aggs": {
"top_hits": {
"top_hits": {
"size": 10,
"_source": {
"includes": [
"db_statement",
"duration",
"timestamp"
]
},
"sort": [
{
"_score": "desc"
}
]
}
}
},
"size": 0
}
The panic happens even without the aggregation apparently.
it was due to top docs not being supported in quickwit.
|
2025-04-01T04:35:17.232718
| 2020-03-13T23:36:12
|
580932767
|
{
"authors": [
"fulmicoton",
"shikhar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10076",
"repo": "quickwit-oss/tantivy",
"url": "https://github.com/quickwit-oss/tantivy/issues/794"
}
|
gharchive/issue
|
Add a filtering collector
In some case it can be handy to have a filtering collector.
The filtering collector should wrap another collector, and a predicate (typically over fast fields)
Documents are collected into the wrapped collector iff they match the predicate.
API should be discussed in the comments below.
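A sketch of the proposed wrapping behaviour (all names are hypothetical and this is Python for brevity — tantivy's actual Collector trait differs): the filter collector forwards a document to the inner collector only when the predicate over a per-document value, standing in for a fast field, accepts it.

```python
class FilterCollector:
    """Wraps an inner collector and a predicate over a per-document value
    (standing in for a tantivy fast field). A document reaches the inner
    collector iff the predicate accepts its value."""
    def __init__(self, inner, predicate):
        self.inner = inner
        self.predicate = predicate

    def collect(self, doc_id, fast_field_value):
        if self.predicate(fast_field_value):
            self.inner.collect(doc_id, fast_field_value)

class CountCollector:
    """Trivial inner collector that just counts collected documents."""
    def __init__(self):
        self.count = 0

    def collect(self, doc_id, value):
        self.count += 1

counting = CountCollector()
collector = FilterCollector(counting, lambda price: price < 50)
for doc_id, price in [(0, 10), (1, 99), (2, 42)]:
    collector.collect(doc_id, price)
assert counting.count == 2  # docs 0 and 2 pass the predicate
```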
Dupe of https://github.com/quickwit-oss/tantivy/issues/892
https://docs.rs/tantivy/0.16.1/tantivy/collector/struct.FilterCollector.html
|
2025-04-01T04:35:17.233911
| 2023-05-26T02:34:15
|
1726790256
|
{
"authors": [
"fulmicoton",
"ranile"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10077",
"repo": "quickwit-oss/whichlang",
"url": "https://github.com/quickwit-oss/whichlang/issues/11"
}
|
gharchive/issue
|
Detecting programming languages
In addition to detecting spoken languages, would it be possible to detect programming languages?
One big use case for this is: syntax highlighting when you don't already know what language it is.
out of scope for this project.
|
2025-04-01T04:35:17.289928
| 2024-05-02T10:44:18
|
2275200996
|
{
"authors": [
"drjwbaker"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10078",
"repo": "quirksahern/Runtime",
"url": "https://github.com/quirksahern/Runtime/issues/11"
}
|
gharchive/issue
|
Task structure
Title (e.g. Water for cooling)
Identifier:
Category: (e.g. 'Materials', 'Land', 'Humans', 'Water', 'Land', ...)
Researcher Prompt: (e.g. 'Your job requires more cooling. Please wait until cooling is sourced')
Gatherer Prompt: (e.g. 'Bring the researcher 10 vessels of water')
Facilitators Notes: (e.g. 'When the resources are assembled, take a picture and upload to verify the task is complete')
Extra Information: (e.g. 'This represents the water use required to cool data centres that run computationally expensive jobs, for more information on this see FIXME')
Done!
|
2025-04-01T04:35:17.292376
| 2019-02-05T23:35:13
|
407021582
|
{
"authors": [
"daneah",
"quobit"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10079",
"repo": "quobit/awesome-python-in-education",
"url": "https://github.com/quobit/awesome-python-in-education/pull/19"
}
|
gharchive/pull-request
|
Add Code Like a Pro to Manning book list
Hey José, I'm working on a new book for Manning called Code Like a Pro: Software Development in Python that I would be humbled to have added to this list! The goal with this book is to introduce some professional software development concepts to those without formal/traditional software development education. The book is currently in early access, which means I'd love anyone with interest to give feedback on it as I'm writing. If you wish to wait until the book is done to include it (or not at all, of course!) I totally understand!
It's an honour to me! It seems a good :snake: second book :smile:
|
2025-04-01T04:35:17.309484
| 2018-05-07T11:49:38
|
320774803
|
{
"authors": [
"Cinchoo",
"pleb"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10080",
"repo": "quozd/awesome-dotnet",
"url": "https://github.com/quozd/awesome-dotnet/pull/662"
}
|
gharchive/pull-request
|
Add Cinchoo NACHA library
Misc/Cinchoo NACHA - Cinchoo NACHA
A NACHA library for .NET / c#
Sorry, @Cinchoo,
This entry isn't going to make the list for the following reason(s):
Possibly too small
Possibly not awesome
No response from the community suggested the previous statements were false
Don't let this setback cause you to lose heart, and we look forward to your next contribution.
Thanks, Pleb
|
2025-04-01T04:35:17.314201
| 2018-02-06T08:37:50
|
294673031
|
{
"authors": [
"ahmedre",
"mmahalwy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10081",
"repo": "quran/quran.com-frontend",
"url": "https://github.com/quran/quran.com-frontend/pull/936"
}
|
gharchive/pull-request
|
Return 404 when no routes found
Title of change
Currently, we don't return a 404 when you go to not-found URLs. An example of that is http://localhost:8000/public/dist/main.js
Checklist
[x] Unit tests written
[x] Manually tested
[x] Prettier & ESLint were run
[x] New dependencies are included in package-lock.json
Screenshot
Deployed to: http://staging.quran.com:32953
@hammady that is generated automatically by prettier on the pre-commit. :)
Deployed to: http://staging.quran.com:32955
hmmm.. not sure how I forgot about this... sorry guys :( I will rebase and merge
Deployed to: http://staging.quran.com:32971
Deployed to: http://staging.quran.com:32972
|
2025-04-01T04:35:17.347127
| 2015-04-07T10:36:04
|
66850407
|
{
"authors": [
"letunglam",
"qw3rtman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10089",
"repo": "qw3rtman/gg.js",
"url": "https://github.com/qw3rtman/gg.js/issues/37"
}
|
gharchive/issue
|
Could not pull commits. (may have to pull using the standard git command to handle merge conflicts.)
It shows this error [✖] Could not pull commits. (may have to pull using the standard git command to handle merge conflicts.). When I pull with standard git command, it shows Current branch master is up to date.
I am using OSX 10.10
Sadly, this Node.js version of Git Goodies will no longer be maintained. On the bright side, I have recently completed a version of Git Goodies in Shell with more features, more speed, and fewer bugs. I believe it solves this issue as well; if you could let me know for sure, that'd be great. :smile:
|
2025-04-01T04:35:17.367586
| 2022-08-18T17:21:11
|
1343392708
|
{
"authors": [
"qzhu2017"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10091",
"repo": "qzhu2017/PyXtal_DFTB",
"url": "https://github.com/qzhu2017/PyXtal_DFTB/issues/1"
}
|
gharchive/issue
|
A quick DFTB calculator
The current ase-dftb calculator suffers from high I/O load.
We need a simple dftb calculator that does not save the following files at each relaxation step.
band.out
detailed.out
charges.bin
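A sketch of the kind of cleanup a lighter calculator could do after each relaxation step (illustrative only, not the actual PyXtal fix): remove the per-step output files listed above so they don't accumulate I/O load.

```python
import os
import tempfile

# The per-step DFTB+ output files named in the issue.
SCRATCH_FILES = ["band.out", "detailed.out", "charges.bin"]

def clean_scratch(workdir: str) -> None:
    """Remove per-step DFTB+ output files from `workdir` so that repeated
    relaxation steps don't accumulate I/O load."""
    for name in SCRATCH_FILES:
        path = os.path.join(workdir, name)
        if os.path.exists(path):
            os.remove(path)

# Demo: create the scratch files in a temp dir, then clean them up.
workdir = tempfile.mkdtemp()
for name in SCRATCH_FILES:
    open(os.path.join(workdir, name), "w").close()
clean_scratch(workdir)
assert not any(os.path.exists(os.path.join(workdir, n)) for n in SCRATCH_FILES)
```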
Being fixed in https://github.com/qzhu2017/PyXtal/commit/8516cfa0f9f7b1c584b2631040ddabf8821dec84
|
2025-04-01T04:35:17.438656
| 2019-07-02T19:38:01
|
463401674
|
{
"authors": [
"maxheld83"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10092",
"repo": "r-lib/ghactions",
"url": "https://github.com/r-lib/ghactions/issues/285"
}
|
gharchive/issue
|
fix invalid ELF header in act
when run via act the document action fails with:
[Document Package] docker build -t ghactions:bc9b6b4cba39c28f651fdfc16ffdab98678da4b9 /var/folders/pw/drmjtn754kv9dbsjq25_6kt40000gn/T/act/r-lib/ghactions/actions/document@bc9b6b4cba39c28f651fdfc16ffdab98678da4b9/actions/document
[Document Package] docker run image=ghactions:bc9b6b4cba39c28f651fdfc16ffdab98678da4b9 entrypoint=[] cmd=["--after-code=commit"]
Checking for consistency of roxygen2 with `man` and `NAMESPACE` ...
Updating vroom documentation
Writing NAMESPACE
Loading vroom
Error in dyn.load(dllfile) :
unable to load shared object '/github/workspace/src/vroom.so':
/github/workspace/src/vroom.so: invalid ELF header
Calls: <Anonymous> ... <Anonymous> -> load_dll -> library.dynam2 -> dyn.load
Execution halted
Error: exit with `FAILURE`: 1
The same commit passes fine when run inside GitHub actions.
This is a bit weird since act and GitHub Actions are supposed to use the exact same docker images to run document (and therein, pkgbuild::compile_dll() and devtools::document()).
ah, ok, this should be easy to fix:
act unfortunately takes the entire repo, including whatever would be .gitignored and runs the container therein -> https://github.com/nektos/act/issues/72
that leads to compiled .sos and friends lying around at build time that were compiled on a different platform. Hence the bad ELF header.
a preventative pkgbuild::clean_dll() should do the trick.
|
2025-04-01T04:35:17.531270
| 2023-01-13T17:18:11
|
1532662599
|
{
"authors": [
"blester125",
"craffel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10093",
"repo": "r-three/git-theta",
"url": "https://github.com/r-three/git-theta/issues/135"
}
|
gharchive/issue
|
Verify Merge Behavior with multiple commits on a branch.
Ensure that our merge tool still works when merging branches that contain multiple commits.
Figure out things like, "Do we end up merging the files multiple times (one for each commit in the branch)?"
From the git documentation
With the strategies that use 3-way merge (including the default, ort)
...
only the heads and the merge base are considered when performing a merge, not the individual commits.
So we should be ok, but we should add a test case that covers this.
Great, thanks for clarifying - I must have been thinking of rebases. I don't think there'd be a good way to do "take mine for all commits in this rebase", right?
There may be.
Merge strategies are tools that decide what to merge, and the default one, called ort, has an option called ours where anything touched by both branches always results in taking the change from the current branch. This seems to be a chunk-level approach, allowing changes from branch B to be applied as long as they don't conflict with changes from A.
There is also a strategy called ours which seems to totally ignore all changes from the other branch which seems harsher than what you are aiming for? (their example is your remote branch have a different config file for the dev-instance and you don't want to try to merge that with the prod-instance config on your main branch)
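The chunk-level behaviour of ort's ours option described above can be sketched as a 3-way resolution rule (illustrative only — real git operates on diff hunks, not opaque chunks):

```python
def three_way_merge_chunk(base: str, ours: str, theirs: str,
                          favor_ours: bool = True) -> str:
    """Resolve one chunk of a 3-way merge, mimicking `git merge -X ours`:
    non-conflicting changes from either side still apply; a genuine
    conflict (both sides changed the same chunk differently) takes ours."""
    if ours == theirs:
        return ours                 # both sides agree
    if ours == base:
        return theirs               # only their side changed the chunk
    if theirs == base:
        return ours                 # only our side changed the chunk
    return ours if favor_ours else theirs  # real conflict: -X ours keeps ours

assert three_way_merge_chunk("a", "a", "b") == "b"   # their change applies
assert three_way_merge_chunk("a", "b", "a") == "b"   # our change applies
assert three_way_merge_chunk("a", "b", "c") == "b"   # conflict resolved to ours
```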
Rebase support tracked in https://github.com/r-three/git-theta/issues/140
|
2025-04-01T04:35:17.618867
| 2021-04-07T09:48:57
|
852230034
|
{
"authors": [
"RamezIssac",
"squio"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10095",
"repo": "ra-systems/django-slick-reporting",
"url": "https://github.com/ra-systems/django-slick-reporting/pull/30"
}
|
gharchive/pull-request
|
Fix for Django 3.2
Two minor fixes:
use verbose_name instead of label in app config (breaking)
replace deprecated ugettext_lazy (deprecation)
Thanks !!
|
2025-04-01T04:35:17.627279
| 2023-07-14T11:30:16
|
1804718295
|
{
"authors": [
"raamcosta",
"srenrd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10096",
"repo": "raamcosta/compose-destinations",
"url": "https://github.com/raamcosta/compose-destinations/issues/464"
}
|
gharchive/issue
|
Animations do not always default to nothing
Animations for a destination seem to default to an animation when the UI does not fill the entire screen.
@Destination
@Composable
fun SomeScreen() {
Box() {
Text(text = "hi")
}
}
Navigating to this Screen causes an animation where the Text starts in the middle and moves to the top left corner. (I've not defined any animations anywhere.)
@Destination
@Composable
fun SomeScreen() {
Box(
modifier = Modifier.fillMaxSize()
) {
Text(text = "hi")
}
}
Navigating to this is done without any animations.
This took me hours to figure out since all the documentation says that the default is no animations. To me this seems like a bug in the code. NO animations should ever be used unless specifically instructed.
Hi @srenrd !
This is not something we are doing in this library. I agree that it’s a bit weird but you’d have to open a ticket on jetpack compose navigation which we use internally.
That said, in practical terms, you probably want your screens to fill the whole screen, so it may not be a big issue? I know I haven't faced this as a problem before 🤔
Hi,
I used androidx.navigation:navigation-compose before switching to this lib. It did not have this problem. I have verified that changing to this lib is what's causing this animation to appear. But yeah, it could be one of the libs you are using under the hood that's causing this bug.
And you are right, in practice you would always have something that fills the entire screen.
Are you using animations core or the normal one? What version?
Tried both; currently using the normal core. I tried both with version 1.9.50.
I’m pretty sure this is not something this library is adding. Maybe when you tested with normal compose navigation it still did not have animations built in?
Hey @srenrd!
So I just tested with official navigation and the exact same thing happens there. I'll leave here the code so you can see for yourself.
Note that if you uncomment that line on NavHost modifier = Modifier.fillMaxSize() then it stops doing that since the NavHost itself is filling max size.
package com.example.vanillanavigationtests
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.animation.EnterTransition
import androidx.compose.animation.ExitTransition
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.navigation.NavType
import androidx.navigation.compose.NavHost
import androidx.navigation.compose.composable
import androidx.navigation.compose.navigation
import androidx.navigation.compose.rememberNavController
import androidx.navigation.navArgument
import com.example.vanillanavigationtests.ui.theme.VanillaNavigationTestsTheme
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
VanillaNavigationTestsTheme {
val navController = rememberNavController()
NavHost(
navController = navController,
startDestination = "greeting",
// enterTransition = { EnterTransition.None },
// exitTransition = { ExitTransition.None },
// modifier = Modifier.fillMaxSize()
) {
composable(
"greeting",
enterTransition = { null },
exitTransition = { null }
) {
Greeting(
navigateToSettings = {
navController.navigate("settings/asdasd?home1=qxcvxcv/qweqwewqe")
},
navigateToSettingsHome = {
navController.navigate("settings_home")
},
navigateToProfile = {
navController.navigate("profile")
},
)
}
composable(
route = "profile",
// enterTransition = { EnterTransition.None },
// exitTransition = { ExitTransition.None }
) {
Text("profile")
}
navigation(
route = "profile",
startDestination = "cenas",
// enterTransition = { EnterTransition.None },
// exitTransition = { ExitTransition.None }
) {
composable("cenas") {
Text("cenas")
}
}
navigation(
route = "settings/{graph1}?home1={home1}/{graph2}",
startDestination = "settings_home?home1={home1}",
arguments = listOf(
navArgument("graph1") {
type = NavType.StringType
},
navArgument("graph2") {
type = NavType.StringType
},
navArgument("home1") {
type = NavType.StringType
defaultValue = "DEFAULT"
}
),
// enterTransition = { EnterTransition.None },
// exitTransition = { ExitTransition.None }
) {
composable(
route = "settings_home?home1={home1}",
arguments = listOf(
navArgument("home1") {
type = NavType.StringType
defaultValue = "DEFAULT_HOME"
}
),
// enterTransition = { EnterTransition.None },
// exitTransition = { ExitTransition.None }
) {
val parentEntry = remember(it) {
navController.getBackStackEntry("settings/{graph1}?home1={home1}/{graph2}")
}
val parentArgs = parentEntry.arguments!!
val arguments = it.arguments!!
Column {
Home(
graphArg = parentArgs.getString("graph1") + parentArgs.getString("graph2"),
destinationArg = parentArgs.getString("home1")
)
Text("destination:")
Home(
graphArg = arguments.getString("graph1") + arguments.getString("graph2"),
destinationArg = arguments.getString("home1")
)
}
}
}
}
}
}
}
}
@Composable
fun Greeting(
navigateToSettings: () -> Unit,
navigateToSettingsHome: () -> Unit,
navigateToProfile: () -> Unit,
) = Column {
Button(
onClick = navigateToSettings,
) {
Text("Settings")
}
Button(
onClick = navigateToSettingsHome,
) {
Text("Settings Home")
}
Button(
onClick = navigateToProfile,
) {
Text("Profile")
}
}
@Composable
fun Home(graphArg: String?, destinationArg: String?) {
Text(
text = "Home!: \n" +
" graph: $graphArg \n" +
" destination: $destinationArg",
)
}
|
2025-04-01T04:35:17.701646
| 2015-12-22T20:20:34
|
123549449
|
{
"authors": [
"cneill",
"devGregA"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10099",
"repo": "rackerlabs/django-DefectDojo",
"url": "https://github.com/rackerlabs/django-DefectDojo/pull/55"
}
|
gharchive/pull-request
|
Updating ansible playbooks with new bower/pip requirements
Should fix #52
Added pip modules: pdfkit, django-overextends
Added step to try --force-latest to resolve dependency version issues
Commented out SSL settings by default (causes bad behavior when SSL is not properly configured)
Reviewed. Looks good.
|
2025-04-01T04:35:17.727874
| 2016-01-01T21:24:30
|
124561989
|
{
"authors": [
"Noitidart",
"neverfox",
"taion"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10100",
"repo": "rackt/history",
"url": "https://github.com/rackt/history/pull/197"
}
|
gharchive/pull-request
|
Force basename when location doesn't include it
This treats basename as the minimally viable pathname. Combined with a change to redux-simple-router to replace history with the initial location, this will allow for / -> /base/, /foo -> /base/foo/, /base -> /base/, /base/foo -> /base/foo/. As it is now, it's possible for a redux app to run at both / and /base, contrary to expectations.
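The intended mapping can be sketched as follows (illustrative only, and in Python for brevity — not the actual history implementation): treat basename as the minimally viable pathname and prepend it whenever the incoming pathname lacks it.

```python
def force_basename(pathname: str, basename: str) -> str:
    """Treat `basename` as the minimally viable pathname: if the incoming
    pathname does not already start with it, prepend it, so `/` maps to
    `/base/` and `/foo` maps to `/base/foo`."""
    if not basename:
        return pathname
    if pathname == basename or pathname.startswith(basename + "/"):
        return pathname
    return basename + ("" if pathname.startswith("/") else "/") + pathname

assert force_basename("/", "/base") == "/base/"
assert force_basename("/foo", "/base") == "/base/foo"
assert force_basename("/base/foo", "/base") == "/base/foo"
```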
Post-https://github.com/rackt/redux-simple-router/pull/141 RSR no longer works that way, so this is no longer relevant.
But I'm working off of master of redux-simple-router and the behavior is still that way. Because basename will be '' in when someone enters, say, localhost:3000, and no attempt is made by RSR to replace history with the initial location, the address bar will stay localhost:3000. This is still relevant.
If you like, I can make a repo demonstrating.
Huh? That seems extremely odd and overly-specific. You're saying that, given an app with a basename of /foo, you want to redirect users from / to /foo?
That's just not right. If your basename is set to foo, then / does not map to:
{
pathname: '/',
basename: '/foo'
}
"When the app inits" is not a meaningful concept here – if somehow you're handling things above the basename, that's outside the concern of the router or of the basename-configured history.
But in no case should it just lie to you.
Yes. That's exactly how, say, rootURL works in Ember. It seems highly odd to me for an app to work at two different states of the address bar. It should represent the minimally viable mount point for the app, and paths that are "deficient" should be corrected. Right now it works as nothing more than a way for the develop to say "*if the path includes basename, ignore basename" but it's much more natural to think of it as "if path doesn't include basename, make sure it does".
That's just not right. If your basename is set to foo, then / does not map to
I didn't say it did. I said the app boots just fine, which is highly odd to me and contrary to how routers work in other frameworks with similar root or base url settings.
if somehow you're handling things above the basename, that's outside the concern of the router or of the basename-configured history.
If they are, they shouldn't be mounting the app yet, but the app is mounted. And if it is, it should utilize basename.
I suppose we could throw an exception there. I don't know that this is better than the current behavior. In no event should we report a location that's inconsistent with the URL, though.
But why are you even bothering to set a basename if your app is handling /?
Fair point about that not being inconsistent. It's handling because that's the out of the box behavior, which I was hoping to change by making the basename setting more of a way of declaring where an app should redirect to if something else is attempted (again similar to what other framework routers such as Ember's do). But, you're right that that's not history's job.
Yeah just redirecting to the basename is the right way to handle this if it comes up, though it's a bit of an odd scenario anyway.
In any case, thanks for taking the time to walk me through your thinking.
No problem.
I think I'm having this issue. My basename is about:mytool and I want pages to be about:mytool?page/1, about:mytool?page/2, etc. (new URL('about:mytool')).host shows that there is no basename. @taion how did you handle this?
In another instance when I am running locally, my basepath should be file:///C:/Users/Mercurius/Documents/GitHub/Non-JSX-React-Router-Redux-Demo/app.html but router is making links go to file:///?page1
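The two interpretations debated in this thread can be sketched side by side (hypothetical helper names, not the history library's API — a sketch only):

```javascript
// "Current" behavior: if the path includes basename, ignore basename.
function stripBasename(basename, path) {
  return path.startsWith(basename) ? path.slice(basename.length) || '/' : path;
}

// "Proposed" behavior: if the path doesn't include basename, make sure it does.
function ensureBasename(basename, path) {
  return path.startsWith(basename) ? path : basename + (path === '/' ? '' : path);
}

console.log(stripBasename('/foo', '/foo/bar')); // '/bar'
console.log(ensureBasename('/foo', '/bar'));    // '/foo/bar'
```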
|
2025-04-01T04:35:17.729743
| 2015-06-27T20:51:18
|
91512656
|
{
"authors": [
"blairanderson",
"globexdesigns",
"necolas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10101",
"repo": "rackt/react-a11y",
"url": "https://github.com/rackt/react-a11y/issues/77"
}
|
gharchive/issue
|
Breaks pure-render checks
This module generates new a11y ids every render, meaning that the pure render mixin doesn't work.
Why is this important for testing accessibility?
It's an issue because during development performance of the application is extremely poor with react-a11y enabled. Ideally we'd like to keep react-a11y enabled always so that new warnings are immediately visible to developers.
Yeah it's important because it slows iteration speed in development and makes it hard to identify similar performance regressions not caused by react-a11y.
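The failure mode can be sketched without React at all: a shallow-equality check (what the pure render mixin does) can never return true if a prop regenerates each render. The instrumentation below is a hypothetical stand-in, not react-a11y's source:

```javascript
// Shallow equality, as used by PureRenderMixin's shouldComponentUpdate
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

// Hypothetical react-a11y-style instrumentation: a fresh id on every render
let counter = 0;
function withA11yId(props) {
  return Object.assign({}, props, { 'data-a11y-id': 'a11y-' + counter++ });
}

// Identical inputs, yet the injected id differs every time, so the
// pure-render optimization never bails out of re-rendering.
console.log(shallowEqual(withA11yId({ label: 'Save' }), withA11yId({ label: 'Save' }))); // false
```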
|
2025-04-01T04:35:17.732884
| 2015-04-13T23:33:36
|
68218715
|
{
"authors": [
"jakemmarsh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10102",
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/1076"
}
|
gharchive/issue
|
'TypeError: Cannot read property 'firstChild' of undefined' when rendering in tests
I'm attempting to unit test my pages using Mocha. My test structure looks like this:
var React = require('react/addons');
var TestUtils = React.addons.TestUtils;
var should = require('should');
var TestHelpers = require('../../../spec/support/testHelpers');
var ExplorePage = require('../../../js/pages/ExplorePage.jsx');
describe('Explore Page', function() {
var page;
beforeEach(function(done) {
this.timeout(10000);
TestHelpers.testPage('/explore', ExplorePage, function(component) {
page = component;
done();
});
});
it('should exist', function(done) {
should.exist(page.getDOMNode());
done();
});
});
where TestHelpers.testPage looks like this (inspired by a solution posted on another issue here):
testPage: function(initialPath, targetComponent, steps) {
var div = document.createElement('div');
var router = Router.create({
routes: require('../../js/Routes.jsx'),
location: new TestLocation([initialPath])
});
var routerMainComponent;
var step;
if ( !_.isArray(steps) ) {
steps = [steps];
}
router.run(function (Handler, state) {
step = steps.shift();
routerMainComponent = React.render(<Handler params={state.params} query={state.query} />, div);
step(TestUtils.findRenderedComponentWithType(routerMainComponent, targetComponent));
}.bind(this));
}
This all works, and my should exist test passes. But, if I add any more tests, I get the following error the next time beforeEach is called: TypeError: Cannot read property 'firstChild' of undefined.
I've read all the related issues, and can't seem to find a solution. The only solid solution I've read for this specific error is ensuring that React isn't being loaded in two different places, which I have done. I've also tried unmounting the component in an afterEach, but still no dice. Any ideas?
EDIT: I've also tried TestUtils.renderIntoDocument instead of React.render, but same problem.
Turns out, I was getting this error as a result of errors within the component/page I was testing. Specifically, it was making failing AJAX requests on componentDidMount and componentDidUpdate, resulting in the mount/unmount process error. Fixing (or commenting out) these problem areas fixed the error.
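One common way to keep a failing request from touching an unmounted component is to make the work started in componentDidMount cancelable; a minimal sketch (hypothetical helper, not from react-router):

```javascript
// Wrap a promise so its result can be discarded once the component
// unmounts, preventing callbacks from touching a dead component.
function makeCancelable(promise) {
  let canceled = false;
  const wrapped = promise.then((value) => (canceled ? undefined : value));
  return {
    promise: wrapped,
    cancel() { canceled = true; }, // call from componentWillUnmount
  };
}

const req = makeCancelable(Promise.resolve('ajax payload'));
req.cancel();
req.promise.then((value) => console.log(value)); // undefined once canceled
```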
|
2025-04-01T04:35:17.738495
| 2015-08-13T16:25:16
|
100812793
|
{
"authors": [
"Dharmoslap",
"dozoisch"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10103",
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/1708"
}
|
gharchive/issue
|
1.0.0-beta-3 - window is not defined
I want to use React Router 1.0.0-beta-3 for server side rendering, but I've got this:
node_modules/react-router/lib/DOMUtils.js:52
return window.history && 'pushState' in window.history;
ReferenceError: window is not defined
How are you defining your routes for server-side rendering?
ReferenceError: window is not defined
at supportsHistory (/Volumes/Workspace/Skypicker/repos/Skypicker/node_modules/react-router/lib/DOMUtils.js:52:9)
at new BrowserHistory (/Volumes/Workspace/Skypicker/repos/Skypicker/node_modules/react-router/lib/BrowserHistory.js:46:54)
at Object. (/Volumes/Workspace/Skypicker/repos/Skypicker/node_modules/react-router/lib/BrowserHistory.js:132:15)
at Module._compile (module.js:430:26)
at Object.Module._extensions..js (module.js:448:10)
at Module.load (module.js:355:32)
at Function.Module._load (module.js:310:12)
at Function.cls_wrapMethod [as _load] (/Volumes/Workspace/Skypicker/repos/Skypicker/node_modules/newrelic/lib/shimmer.js:208:38)
at Module.require (module.js:365:17)
at require (module.js:384:17)
I'm defining the router pretty much like it's described here http://rackt.github.io/react-router/tags/v1.0.0-beta3.html#Server Rendering
Seems like you are using BrowserHistory on the server. This will not work as you are not in a browser.
On the server, in the example they do not define a history at all. They use a "static" location to do the render:
// server.js
import Router from 'react-router';
import Location from 'react-router/lib/Location';
let action = (req, res) => {
var location = new Location(req.path, req.query);
Router.run(routes, location, (error, initialState, transition) => {
// ....
var html = React.renderToString(
<Router {...initialState}/>
);
could that be the culprit?
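The usual fix for this class of error is an environment guard before touching window; a sketch of the idea (not the actual react-router change):

```javascript
// Only probe browser globals when a DOM actually exists, so the same
// module can be required on the server without throwing.
const canUseDOM = typeof window !== 'undefined'
  && typeof window.document !== 'undefined';

function supportsHistory() {
  return canUseDOM && !!(window.history && 'pushState' in window.history);
}

console.log(canUseDOM, supportsHistory()); // on Node: false false
```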
|
2025-04-01T04:35:17.768140
| 2023-01-06T13:16:49
|
1522575651
|
{
"authors": [
"slack-coder"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10104",
"repo": "radicle-dev/heartwood",
"url": "https://github.com/radicle-dev/heartwood/pull/207"
}
|
gharchive/pull-request
|
Test installing commands in CI
For an unknown reason, cargo test may succeed even when compiling the radicle-cli and radicle-remote-helper packages for installation fails.
Add a test to prevent these errors from reaching the master branch.
resolves: #206
Not needed, cargo install --locked is the advised install method.
Follow up at https://github.com/radicle-dev/heartwood/pull/209
|
2025-04-01T04:35:17.854881
| 2020-12-17T14:07:06
|
770072460
|
{
"authors": [
"Apsysikal",
"alexRicc2",
"aryanas159",
"benoitgrelard",
"chaance",
"dvnrsn",
"efremovag",
"hedysnike",
"jjenzz",
"mithleshjs",
"pacocoursey",
"raunofreiberg",
"rayjasson98",
"steveruizok",
"upq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10105",
"repo": "radix-ui/primitives",
"url": "https://github.com/radix-ui/primitives/issues/346"
}
|
gharchive/issue
|
DropdownMenu changes padding / margin on document body.
Bug report
Current Behavior
When active, the DropdownMenu component adds the following CSS to the document body.
body {
overflow: hidden !important;
position: relative !important;
padding-left: 8px;
padding-top: 8px;
padding-right: 8px;
margin-left:0;
margin-top:0;
margin-right: 0px !important;
}
This can cause layout shift on layouts that have set padding / margin on the body.
Expected behavior
Opening the dropdown should not cause the layout to shift.
Reproducible example
Codesandbox.
Suggested solution
I'm guessing that the margins / padding have a function, though it isn't clear to me what that is. As long as that part of the CSS may be removed without breaking the component, I would suggest removing the padding / margins.
Your environment
Software
Name(s)
Version
Radix Package(s)
Dropdown
0.0.1
React
n/a
17.0.0
Browser
Chrome
Assistive tech
n/a
Node
n/a
npm/yarn
Operating System
Mac
I'm fairly certain that this comes from the @radix/react-menu package using react-remove-scroll which removes the document scrollbar (as a result of an overlay being open) while maintaining its space.
As a workaround, don't place your padding or margin on the body element.
Noted, though even with a project where the body has no padding / margin, this will add padding and cause a layout shift.
It shouldn't cause a layout shift, in fact it's designed to prevent it. But you do need to set margin: 0 on the body, and as @raunofreiberg mentioned it's generally a good idea to avoid adding margin and padding directly to the body and instead using an inner element for that. Here's a fork of your sandbox showing that in action: https://codesandbox.io/s/damp-snow-ev0rd?file=/src/App.tsx
That said, we probably need to add documentation to clear this up if we don't have it already. If you still have layout shift after making those adjustments, let me know and we can definitely investigate further.
Right on, so to boost that for anyone coming here in the future, adding margin: 0; to body fixes the problem. 👍
body {
margin: 0;
}
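The compensation react-remove-scroll applies can be sketched as a pure calculation (conceptual only, not its actual source): it measures the scrollbar it is about to hide and offsets the body by the same amount so content width doesn't change.

```javascript
// window.innerWidth includes the scrollbar; documentElement.clientWidth
// doesn't, so the difference is the scrollbar width to compensate for.
function computeCompensation(windowInnerWidth, documentClientWidth, bodyMarginRight) {
  const scrollbarWidth = windowInnerWidth - documentClientWidth;
  return {
    overflow: 'hidden',                            // lock scrolling
    marginRight: bodyMarginRight + scrollbarWidth, // keep layout width stable
  };
}

console.log(computeCompensation(1280, 1263, 0)); // marginRight: 17 — the "17px" seen above
```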
Just me or has this broken again? I'm seeing weird behavior – either the Dropdown (or Context) menus are wayy too low from the trigger, or opening/closing the Menu causes the entire page to jump.
I bumped all the packages in Chance's example above and you can see the offset problem: https://codesandbox.io/s/admiring-brown-dn7e7?file=/src/App.tsx
Could this be re-opened?
Ping @benoitgrelard and @jjenzz since you may not be following this issue already
@pacocoursey yes this is such a frustrating issue.
as @raunofreiberg mentioned, we use react-remove-scroll which does a bunch of things we can't control. One of these is it sets position: relative on the body (We're hoping to soon take control of this by creating our own solution which hopefully would give us more control)
our popper solution doesn't support parent static contexts as we mostly rely on portalling
with a basic setup like in your sandbox (even though you have padding: 0, margin: 0 on body), body will not start at top: 0 because of collapsing margins
So the extra gap you're seeing is because of all that.
There are a few ways to get around it but the gist of it is you want to get rid of the collapsing margins on that parent.
The easiest way would be to add style={{ margin: -1, padding: 1 }} to your top-level app like so:
https://codesandbox.io/s/dropdownmenu-corrected-collapsing-margins-iiop9
Hey, coming back to this as it's causing a strange layout issue for me. Any element with an absolute position that is taking its height as a percentage of the body will be thrown off once the body's position changes to relative.
Here's a repro: https://codesandbox.io/s/body-relative-issue-zz4q3?file=/App.js
In my case, I'm able to set the size as fixed, which solves the problem, but I can imagine cases where it would cause issues.
It adds a margin-right: 17px !important to the body, so unless you override that using important inline styles it will never work .. I don't know why is this closed anyway ..
adding the following styles to your body should prevent issues i believe:
body {
display: flow-root;
min-height: 100vh;
}
Or the solution provided above: https://github.com/radix-ui/primitives/issues/346#issuecomment-817690787
@benoitgrelard that doesn't solve @steveruizok's example unfortunately. the snippet i posted should work for everyone i believe.
flow-root
It is not working as display: flow-root; gets replaced by display: relative !important;
None of the above solution is working right now.
adding the following styles to your body should prevent issues i believe:
body {
display: flow-root;
min-height: 100vh;
}
@jjenzz It is not working as display: flow-root; gets replaced by display: relative !important;
None of the above solutions is working right now.
@mithleshjs I'm confused, display: relative isn't a thing. Are you talking about position: relative?
Regardless, I don't believe display is set so you shouldn't have any issue with overrides.
@mithleshjs I'm confused, display: relative isn't a thing. Are you talking about position: relative? Regardless, I don't believe display is set so you shouldn't have any issue with overrides.
@benoitgrelard Sorry, it was a typo and badly explained. What I meant was even if you set display: flow-root;, the problem still persists due to this style position: relative !important; being applied by the library on the body. The solution that worked for me (using Tailwind):
<body>
<div className='fixed w-full top-0 bottom-0 overflow-x-auto'>
content...
</div>
</body>
As mentioned by @upq:
It adds a margin-right: 17px !important to the body, so unless you override that using important inline styles it will never work .. I don't know why is this closed anyway ..
Added body styles in Radix UI website
body {
overflow: hidden !important;
position: relative !important;
padding-left: 0px;
padding-top: 0px;
padding-right: 0px;
margin-left: 0;
margin-top: 0;
margin-right: 17px !important;
}
Added body styles in my web project
body {
overflow: hidden !important;
position: relative !important;
padding-left: 0px;
padding-top: 0px;
padding-right: 17px; /* main culprit */
margin-left: 0;
margin-top: 0;
margin-right: 17px !important;
}
margin-right: 17px is unlikely to be the root cause of this problem. From my observation, I have an additional padding-right: 17px which is the main culprit of this layout shift issue. I really wonder where this additional padding comes from.
By the way, thanks @mithleshjs! Your solution works. I am using Tailwind too. May I know why adding these classes will help solve the issue?
how do i keep the radix from resizing thou
Putting modal={false} on the root dropdown element fixed it for me.
@mithleshjs I'm confused, display: relative isn't a thing. Are you talking about position: relative? Regardless, I don't believe display is set so you shouldn't have any issue with overrides.
@benoitgrelard Sorry, it was a typo and badly explained. What I meant was even if you set display: flow-root;, the problem still persists due to this style position: relative !important; being applied by the library on the body. The solution that worked for me (using Tailwind):
<body>
<div className='fixed w-full top-0 bottom-0 overflow-x-auto'>
content...
</div>
</body>
This worked for me, thanks
In NextJS:
add <body style={{ margin: '0px !important' }}>
This was a problem for me because we had
html {
overflow-x: hidden;
}
Putting modal={false} on the root dropdown element fixed it for me.
it works! the only downfall is that now the page is scrollable while the dropdownmenu is open
|
2025-04-01T04:35:17.858512
| 2023-03-04T21:06:17
|
1609947431
|
{
"authors": [
"UTkzhang",
"benoitgrelard",
"joaom00"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10106",
"repo": "radix-ui/primitives",
"url": "https://github.com/radix-ui/primitives/pull/1995"
}
|
gharchive/pull-request
|
Fix position when use anchor css properties
Description
Reset the middleware lifecycle to calculate the right position when using --radix-popper-anchor-width or --radix-popper-anchor-height.
See the behavior in some primitives although works with Select
https://codesandbox.io/p/sandbox/cocky-wood-wgqnuy
Possible fixes #1968
I can see that fixes the issue, but I'm a little unclear as to why it needs to be in this middleware, or why it happens.
Do you have a bit more understanding here @joaom00?
I had to change the position of the arrow middleware to after the size because it was not able to position correctly
+1 still seeing this issue in react-dropdown-menu
|
2025-04-01T04:35:17.871010
| 2021-06-08T16:37:58
|
915240832
|
{
"authors": [
"andy-hook",
"benoitgrelard",
"jjenzz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10107",
"repo": "radix-ui/primitives",
"url": "https://github.com/radix-ui/primitives/pull/694"
}
|
gharchive/pull-request
|
[DropdownMenu] Enable onEntryFocus for root menu
Fixes accessibility in NVDA
Follows WAI-ARIA spec that says Enter and Space keypresses should focus first item.
I did this by listening for keyboard and mouse events to update an isKeyboardUser boolean in state. I chose this approach instead because the previous approach meant firing an onKeyOpen event from the trigger and we don't have a root trigger in Menu.
Interesting, I hadn't considered the context menu could be opened like that.
I do feel though that we keep coming back to the fact that Menu doesn't have a Trigger part. I know we moved away from that but I'm wondering now…
I do feel though that we keep coming back to the fact that Menu doesn't have a Trigger part. I know we moved away from that but I'm wondering now…
I had given similar feedback a while ago so @andy-hook and I had tried to move the trigger into the Menu but it opened a can of worms. It wasn't as simple as it seems.
It might be worth revisiting now though as we were in a bit of a rabbit hole with all the submenu stuff at the time. Either way though, I think I still prefer this solution in regard to focusing the first item on open.
This feels more like our onPointerDownOutside or onFocusOutside stuff. With those we're saying it should close if those events occur externally and here we're saying it should open with the first item focused if a keydown happens externally.
I just realised this probably doesn't need to be state though so I'll update that part... just needs to be a value we can reference when context.open changes to true
I do feel though that we keep coming back to the fact that Menu doesn't have a Trigger part. I know we moved away from that but I'm wondering now…
I had given similar feedback a while ago so @andy-hook and I had tried to move the trigger into the Menu but it opened a can of worms. It might be worth revisiting now though as we were in a bit of a rabbit hole with all the submenu stuff at the time but it's only DropdownMenu that has a trigger atm. It's probably a premature abstraction at this stage.
Yeah I think I agree at this stage.
Either way though, I think I still prefer this solution in regard to focusing the first item on open.
This feels more like our onPointerDownOutside or onFocusOutside stuff. With those we're saying it should close if those events occur externally and here we're saying it should open with the first item focused if a keydown happens externally.
The WAI-ARIA spec is saying that if they're using a keyboard and it opens then it should focus the first item... It's not saying "when it opens from the trigger" but "when a menu opens" and that's what the logic here communicates. That's how I got the ContextMenu stuff for free.
Yeah I have to admit it's pretty cool how the context menu just worked with this mouse keys accessibility thing!
it's only DropdownMenu that has a trigger atm. It's probably a premature abstraction at this stage.
Yeah.. that's where we landed when we last spoke about this. I can see the argument for moving it, especially as we've discovered more and more requirements but tbh I'm not convinced now is the right time to do so.
The WAI-ARIA spec is saying that if they're using a keyboard and it opens then it should focus the first item... It's not saying "when it opens from the trigger" but "when a menu opens" and that's what the logic here communicates. That's how I got the ContextMenu stuff for free.
I like this approach personally, makes more sense to me.
I just realised this probably doesn't need to be state though so I'll update that part... just needs to be a value we can reference when context.open changes to true
Turns out i can't do this because when context.open state changes at submenu level that state change obviously won't trigger a re-render of the parent context so it wouldn't have the updated reference. Also, when a menu is open you can switch from being a keyboard user to a mouse user but the ref change wouldn't update the context consumers (obvs) so this happens:
onItemLeave is firing and because it thinks i'm a keyboard user still, it's focusing the first item. So yeah, needs to be state.
I remembered why I bound capture events btw...
without capture
https://user-images.githubusercontent.com/175330/121559914-ee390800-ca0e-11eb-8952-c540487590e1.mp4
with capture
https://user-images.githubusercontent.com/175330/121559700-ba5de280-ca0e-11eb-8866-6255f3e03b5c.mp4
Event steps:
Key to open (isUsingKeyboardRef.current === true)
Escape key to close so I am still focused on trigger
Wiggle mouse (isUsingKeyboardRef.current === false)
Press enter to open
That last step doesn't set isUsingKeyboardRef.current to true before the content opens so the first item doesn't focus. Capture phase ensures we set it before any other key events execute.
|
2025-04-01T04:35:17.877071
| 2020-03-30T22:06:01
|
590628916
|
{
"authors": [
"radubolovan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10108",
"repo": "radubolovan/Godot-Database-Manager",
"url": "https://github.com/radubolovan/Godot-Database-Manager/issues/10"
}
|
gharchive/issue
|
[IMPROVEMENT][2.0] Add a Cancel button to the New database dialog.
Add a Cancel button to the "New database" dialog.
TODO: close the dialog when the Cancel button gets clicked.
This should be implemented with commit: e1016356985131c980aea4d1cdcfb635ec9c41cd
|
2025-04-01T04:35:17.897830
| 2021-03-21T14:51:30
|
837098659
|
{
"authors": [
"rafaeldellaquila",
"unixorn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10109",
"repo": "rafaeldellaquila/zsh-vitesse-theme",
"url": "https://github.com/rafaeldellaquila/zsh-vitesse-theme/issues/1"
}
|
gharchive/issue
|
Please add a license file and a screen shot
I'd like to add this to awesome-zsh-plugins, but I prefer to add themes with screen shots in the readme so that users can see what they look like without having to install them.
Also, please add a license file, some people won't use any code that doesn't have an open source license. If you don't have one in mind, https://choosealicense.com is a good resource to help you select an appropriate one.
Thanks!
hi @unixorn I've made the requested changes. Thanks
|
2025-04-01T04:35:17.899443
| 2023-05-19T15:12:08
|
1717412833
|
{
"authors": [
"rafaelgieschke",
"srett"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10110",
"repo": "rafaelgieschke/teering",
"url": "https://github.com/rafaelgieschke/teering/pull/1"
}
|
gharchive/pull-request
|
Fix resource leak
Make sure fd doesn't stay open for longer than needed.
Might be a breaking change as cat "/proc/$(pgrep -n teering)/fd/3" is not possible after this change. Have to use cat "/proc/$(pgrep -n teering)/map_files/"* then.
Ah but this will modify errno if close() fails?
close() never really fails AFAIK, I think it can with NFS in theory, so errno should not get modified, but yes, moving it past the error handling should be fine.
|
2025-04-01T04:35:17.901192
| 2017-02-10T17:33:30
|
206857870
|
{
"authors": [
"aqtrans",
"rafaelmaiolla"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10111",
"repo": "rafaelmaiolla/remote-vscode",
"url": "https://github.com/rafaelmaiolla/remote-vscode/pull/17"
}
|
gharchive/pull-request
|
Add the ability to edit the listening address.
This change allows configuring what address the rmate daemon will listen on, instead of just localhost.
For my own use, setting remote.host to my desktop local IP gives me the ability to remotely edit files on all machines within my LAN without worrying about forwarding ports and such.
Note: the getLocalDirectoryName test fails for me, but I'm not sure if this is due to any changes I've done.
Changes seem fine. I will accept it and publish the new version later today.
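The setting this PR adds can be sketched as a small resolver (hypothetical config keys mirroring the description; the default port is rmate's conventional 52698):

```javascript
// Resolve the address the rmate daemon should listen on, falling back to
// localhost when remote.host is not configured.
function resolveListenAddress(config) {
  return {
    host: (config && config['remote.host']) || '127.0.0.1',
    port: (config && config['remote.port']) || 52698,
  };
}

console.log(resolveListenAddress({ 'remote.host': '192.168.1.10' }));
// → { host: '192.168.1.10', port: 52698 }
```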
|
2025-04-01T04:35:17.922981
| 2017-05-02T20:33:24
|
225812728
|
{
"authors": [
"pnarielwala",
"rafrex"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10112",
"repo": "rafrex/detect-it",
"url": "https://github.com/rafrex/detect-it/issues/4"
}
|
gharchive/issue
|
hasMouse issue with Mozilla Firefox v53
detect-it is reporting the deviceType as touchOnly on the desktop Mozilla Firefox browser, which definitely supports mouse events
@pnarielwala I'm unable to reproduce this issue with Firefox v53 on a MacBook Pro running macOS Sierra (it registers as mouseOnly for me). What OS and hardware are you using? Does your device have a touch screen as well as a mouse?
I'm running it on Windows 10 on a Lenovo T440s. My device does not have a touch screen which makes it weird that it shows 'touchOnly'. There is a long standing issue of Firefox not implementing hover and point media queries which is why Detect Hover and Detect Pointer are all false https://bugzilla.mozilla.org/show_bug.cgi?id=1035774, but if it doesn't have a touch screen I'm surprised it's registering as though it does.
Can you run the following in your Firefox console (one at a time) and let me know the results, thanks:
window.matchMedia('(-moz-touch-enabled: 0)').matches
window.matchMedia('(-moz-touch-enabled: 1)').matches
window.TouchEvent this will either be a function or undefined
'ontouchstart' in window
'ontouchend' in document
Sure here's what I got:
window.matchMedia('(-moz-touch-enabled: 0)').matches = false
window.matchMedia('(-moz-touch-enabled: 1)').matches = true
window.TouchEvent = function ()
'ontouchstart' in window = true
'ontouchend' in document = true
Thanks. Even the Firefox-specific -moz-touch-enabled is saying you have a touch screen. I feel like this may be a bug in Firefox's implementation on your hardware.
Do you know if this was an issue with Firefox v52 as well?
I'll consider adding Firefox browser sniffing as a fix, probably something like if it's Firefox and it supports touch events, then call it a hybrid (because I don't think there is any way to tell the difference between Firefox running on your laptop or on an actual hybrid, like a Surface).
You have any thoughts or suggestions?
@pnarielwala I fixed this (sort of) in v3, see #5. Basically, if it registers as hasTouch with no pointer and hover support, and it's not on an Android assume it's a hybrid device with a mouse primaryInput type (if it's on Android assume it's touchOnly).
I know this means that your device with Firefox will register as a hybrid even though it should be mouseOnly, but if Firefox is going to tell me it supports touch there's not much I can do about it because it might actually be running on a hybrid device, e.g. a Surface.
Essentially since it registers as hybrid you should set both mouse and touch listeners, and the primaryInput type of mouse means you can optimize the UI for mouse input. So your site should work just fine on Firefox on your mouse only device.
@rafrex that will work! Thanks!
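The v3 heuristic described above can be sketched as follows (a simplified model, not detect-it's actual source; the input flags are assumptions):

```javascript
// Classify a device from coarse capability signals. The Firefox case in
// this thread hits the "touch events but no hover/pointer media-query
// support" branch and is treated as a hybrid with a mouse primaryInput.
function classifyDevice({ hasTouch, anyHover, anyFine, isAndroid }) {
  if (!hasTouch) return { deviceType: 'mouseOnly', primaryInput: 'mouse' };
  if (!anyHover && !anyFine) {
    return isAndroid
      ? { deviceType: 'touchOnly', primaryInput: 'touch' }
      : { deviceType: 'hybrid', primaryInput: 'mouse' };
  }
  return { deviceType: 'hybrid', primaryInput: anyFine ? 'mouse' : 'touch' };
}

console.log(classifyDevice({ hasTouch: true, anyHover: false, anyFine: false, isAndroid: false }));
// → { deviceType: 'hybrid', primaryInput: 'mouse' }
```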
|
2025-04-01T04:35:17.944477
| 2023-05-08T12:27:57
|
1700163761
|
{
"authors": [
"Anilaydinaa",
"Deepak-811",
"Deepak-Augmont",
"EglerEnrique",
"MDSADABWASIM",
"Renatinaveen",
"bernardo-cloudwalk",
"isa-sabbagh",
"jehhxuxu",
"rahulbansal89"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10113",
"repo": "rahulbansal89/flutter_clarity",
"url": "https://github.com/rahulbansal89/flutter_clarity/issues/1"
}
|
gharchive/issue
|
Recordings are not reflecting
currentSessionId is always null and recordings are not available. The recording is just blank.
same here. any thoughts?
same here. any thoughts?
Looks like a Clarity side issue, they are not supporting flutter UI Activities. I have written to them about this.
@rahulbansal89 news from the clarity team?
Same thing happening with me.
Any update on this?
Any update on this?
Same here. Any result?
@rahulbansal89 was it working earlier, or has Clarity not worked with Flutter from the start?
@rahulbansal89 news from the clarity team?
Yes.
_Hello rahul,
Thank you so much for your feedback.
We currently don’t support flutter yet. We only support native Android, or react native (on android), Cordova and ionic platforms.
Nevertheless, We will be constantly working on supporting other technologies and frameworks._
@rahulbansal89 I am facing the same problem and I am not able to solve it. Can someone who has managed to solve this please share the solution with us?
|
2025-04-01T04:35:17.954930
| 2016-07-20T14:56:56
|
166602487
|
{
"authors": [
"AileenMoreno",
"rahuliyer95"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10114",
"repo": "rahuliyer95/iShowcase",
"url": "https://github.com/rahuliyer95/iShowcase/issues/6"
}
|
gharchive/issue
|
Rotation - not updating size
The iShowcase view is not autoresizing when the app performs a rotation. So if the app is displaying a portrait showcase and a rotation to landscape is performed, the view is too short to fill the screen's width
Working on it, will need a lot of changes though
Please check the rotation-support branch
Still not working. You should test it on an iPad. I have tried adding autolayout to the view and subviews using Cartography pod. The problem is to recalculate the clear area after the layout.
If you could try the example application on the iPad it would help as rotation was working for me with the example application.
I'm sorry, I am using the iPad 2 (9.3) simulator. It's your updated example that I was running. Look:
do try now
Thanks.
It works.
I have added a method to the delegate, so when the user taps and dismisses the view, the location where it was tapped can be known.
private func getGestureRecgonizer() -> UIGestureRecognizer {
    let singleTap = UITapGestureRecognizer(target: self, action: #selector(showcaseTapped(_:)))
    singleTap.numberOfTapsRequired = 1
    singleTap.numberOfTouchesRequired = 1
    return singleTap
}

internal func showcaseTapped(gestureRecognizer: UITapGestureRecognizer) {
    let location = gestureRecognizer.locationInView(containerView.superview)
    UIView.animateWithDuration(
        0.5,
        animations: { () -> Void in
            self.alpha = 0
        }) { (_) -> Void in
            self.onAnimationComplete(location)
        }
}

private func onAnimationComplete(touchLocation: CGPoint) {
    if singleShotId != -1 {
        NSUserDefaults.standardUserDefaults().setBool(true, forKey: String(format: "iShowcase-%ld", singleShotId))
        singleShotId = -1
    }
    for view in self.containerView.subviews {
        view.userInteractionEnabled = true
    }
    recycleViews()
    self.removeFromSuperview()
    if let delegate = delegate {
        delegate.iShowcaseDismissed?(self, location: touchLocation)
    }
}
Thanks.
Bye
|
2025-04-01T04:35:17.957482
| 2024-08-11T14:08:33
|
2459609911
|
{
"authors": [
"firmai",
"rahulnyk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10115",
"repo": "rahulnyk/graph_maker",
"url": "https://github.com/rahulnyk/graph_maker/issues/7"
}
|
gharchive/issue
|
Great Project
This is such a cool project. Are you going to keep developing and improving it? There is so much potential here. It looks like something like this could be the backbone of many startups, for example: https://app.whyhow.ai/public/graph/66a191d58e5461730aa81ecb
I was thinking of SEC documents: asking certain things that would produce graphs, of course, but also nicely formatted dataframes from those "graphlets". I would love to explore how this could be done, for example see.
Hi @firmai
This can be easily done. In fact, the output of the graph maker can directly be read by pandas into a dataframe with a single line of code.
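A minimal sketch of that one-liner (the edge keys node_1, node_2 and edge below are an assumption here, purely for illustration, since the exact output schema isn't shown in this thread):

```python
import pandas as pd

# Hypothetical graph-maker output: a list of edge records.
# The key names below are assumed for illustration only.
edges = [
    {"node_1": "AAPL", "node_2": "10-K filing", "edge": "files"},
    {"node_1": "10-K filing", "node_2": "SEC", "edge": "submitted to"},
]

# The single line that turns the edge list into a DataFrame.
df = pd.DataFrame(edges)
print(df.shape)  # (2, 3)
```

From there the DataFrame can be filtered, grouped, or pivoted like any other tabular data.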
|
2025-04-01T04:35:17.959152
| 2019-10-22T08:07:47
|
510494249
|
{
"authors": [
"palango",
"pirapira"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10116",
"repo": "raiden-network/raiden-contracts",
"url": "https://github.com/raiden-network/raiden-contracts/issues/1300"
}
|
gharchive/issue
|
revert data to where data_0.25.2
because we will be working on data_0.25.2 for a while.
Such a reversion should happen on the release branch alderaan. https://github.com/raiden-network/raiden-contracts/issues/1327
Not valid any more
|
2025-04-01T04:35:17.989007
| 2020-05-16T00:27:27
|
619337982
|
{
"authors": [
"aviralkumar2907",
"s-gupta"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10117",
"repo": "rail-berkeley/d4rl",
"url": "https://github.com/rail-berkeley/d4rl/issues/21"
}
|
gharchive/issue
|
Unable to run antmaze_sac.py
Similar to #9, I believe an appropriate version of rlkit is also needed to run antmaze_sac.py. Any estimate of when the appropriate version of rlkit can be made available?
In particular, the current code crashes at MdpPathCollector with the error __init__() got an unexpected keyword argument 'sparse_reward'
Thank you for pointing this out. We have added a script compatible with rlkit: https://github.com/rail-berkeley/d4rl/blob/master/scripts/antmaze_sac_rlkit.py
This should work with rlkit. Version differences in gym may give rise to an assertion error, which is fixable using instructions at the top of the launcher script.
|
2025-04-01T04:35:18.012525
| 2016-02-26T23:43:41
|
136846224
|
{
"authors": [
"oddg",
"tlkahn"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10118",
"repo": "rails/arel",
"url": "https://github.com/rails/arel/issues/415"
}
|
gharchive/issue
|
Should Arel::Nodes::True.new() be 1 in sqlite3?
I am using Arel::Nodes::True.new() as a fallback for a query construction. It works fine in development with the postgresql adapter, but it breaks my tests as they are using the sqlite3 adapter. I figured out that this is because sqlite3 takes 1 instead of true:
$ rails console test
Running via Spring preloader in process 8777
Loading test environment (Rails 4.1.6)
2.2.3 :001 > Mymodel.where(1).count
=> 0
2.2.3 :002 > Mymodel.where(Arel::Nodes::True.new()).count
SQLite3::SQLException: no such column: TRUE: SELECT COUNT(*) FROM "mymodels" WHERE (TRUE)
I can fix the issue by checking the environment and returning either Arel::Nodes::True.new() or 1 but it does not feel right.
I made some changes on sqlite.rb, so that Arel::Nodes::True.new().to_sql == '1' and Arel::Nodes::False.new().to_sql == '0'. I have sent the pull request at: https://github.com/rails/arel/pull/417
|
2025-04-01T04:35:18.165868
| 2024-05-26T00:32:32
|
2317349766
|
{
"authors": [
"flavorjones",
"synth"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10119",
"repo": "rails/tailwindcss-rails",
"url": "https://github.com/rails/tailwindcss-rails/issues/367"
}
|
gharchive/issue
|
Use of gem without sprockets (eg vite) aka standalone use of tailwind generators
I'd like to use this gem primarily for the scaffolding generators. When the gem is included, I get the following error:
undefined method `assets' for an instance of Rails::Application::Configuration
This takes us to here: https://github.com/rails/tailwindcss-rails/blob/5c73dec4e38c9bd1664a530ecb66e938aaa4a3d0/lib/tailwindcss/engine.rb#L5-L7
My understanding is that Rails.application.config.assets is Sprockets-specific. If I comment out this line, the generators work fine. Is there a way to explicitly opt out of this line/initializer, or would this be considered a bug, perhaps? This seems to be the only Sprockets-specific reference, although I'm not too sure. Other things probably don't work in a Vite context, since assets typically go in app/frontend with Vite. Although, again, in my case, all I'm after is the generators :)
Related: https://github.com/rails/tailwindcss-rails/discussions/353
|
2025-04-01T04:35:18.195676
| 2024-06-14T15:34:46
|
2353625608
|
{
"authors": [
"JakeCooper",
"Thijmen",
"coffee-cup",
"ndneighbor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10120",
"repo": "railwayapp/nixpacks",
"url": "https://github.com/railwayapp/nixpacks/pull/1119"
}
|
gharchive/pull-request
|
Support for --add-host
This PR adds support for Nixpacks' build argument --add-host.
--add-host allows multiple hosts being added, just like with the docker build.
For example: nixpacks build --add-host nginx:<IP_ADDRESS> --add-host postgres:<IP_ADDRESS>.
I've added tests to validate that:
The example could be built when hosts are provided at runtime
The example could not be built when no hosts are provided.
It's my first time doing something with Rust, so the code might not be up to your standards. However, I am always open to improvements!
Ideally I would also like to add --add-hosts-for-network docker-network, to pass all IPs, but that's something up for discussion first.
Not stale
Updated the label, going to reflag this internally. Our main maintainer for this repo is on PTO so apologies for the delay.
No need for apologies, @ndneighbor. Appreciate this package and only happy to contribute!
Not stale
Can you fix lint issues and other stuff?
I was not aware of these linting issues before, I will get right on it!
I believe there are no linting issues anymore, running cargo fmt --all -- --check does not give any error anymore @JakeCooper.
Sorry about the conflicts 😬
There also seems to be some missing tests and linting errors. Could you please
Update the snapshot tests with cargo snapshot
Fix the linting errors with cargo clippy --all-targets --all-features -- -D warnings
After this you should be all good to go
@coffee-cup I think this should do! :D
I'll dig into the failing test later tonight.
I fixed one of the snapshot tests (by running cargo snapshot), which enabled CI to run all of the docker run tests. These seem to be failing though. Could you please take a look? I don't want to mess with the code too much. Apologies for changing things. I expected it to be a quick fix and merge.
Interesting, the tests don't fail locally. I'll spin up a VM with Ubuntu and try there. I'll ping you once done :D
I'm not done yet, I forgot something!
@coffee-cup In order to fully understand what's happening, I have some questions;
Can I safely change the snapshot of a given test (in my case: node_fetch_network)?
If so, is this the place to add environment variables? It is important that during build phase, a given environment variable is being used (in this case, REMOTE_URL).
@coffee-cup In order to fully understand what's happening, I have some questions;
Can I safely change the snapshot of a given test (in my case: node_fetch_network)?
If so, is this the place to add environment variables? It is important that during build phase, a given environment variable is being used (in this case, REMOTE_URL).
The snapshot should be generated automatically using cargo snapshot. You can change it as part of this PR, but it shouldn't be edited by hand. For more info on snapshot tests please see this
You can add the environment variable in the docker_run_tests.rs file when the build is run. Alternatively you can create a nixpacks.toml file in the node_fetch_network example directory with the contents
[variables]
REMOTE_URL = '...'
Hi there,
Cheers, now I understand. It seems that on Docker for macOS the --add-host flag works fine; however, on Ubuntu it does not. I need some more time to dive into this issue. I'll ping and un-draft this PR once done.
It seems Docker on Linux is much more strict when it comes to adding hosts. I've added --network=host when providing --add-hosts.
However, I can understand if you'd like to see that as an argument for Nixpacks itself. Let me know what you prefer, then I will make sure to add it in as well.
Would be nice though to see if this fixes the CI as well.
Looks great and all tests passing! I don't think adding the --network=host arg is a problem.
Thanks again!
|
2025-04-01T04:35:18.208063
| 2023-11-10T22:31:19
|
1988503074
|
{
"authors": [
"DanielSinclair",
"cdsiren"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10121",
"repo": "rainbow-me/rainbowkit",
"url": "https://github.com/rainbow-me/rainbowkit/pull/1603"
}
|
gharchive/pull-request
|
feat: add decent mint items
This PR updates the with-next-mint-nft examples repository to include Decent's hooks for cross-chain minting.
Updates include:
DecentMint: a new mint button component that prepares calldata for a crosschain transaction based on the actionArgs included in the index.tsx file. This component sends the transaction & then tracks its progress, surfacing blockscanner links for users.
TokenSelectorUsage: (optional) this is the Token Selector component from Decent's UI components library. It lets users select which token they would like to pay with on the chain to which they are connected. This can be removed if we just want to demonstrate cross-chain tx's using native gas tokens only. TODO: there seems to be a potential class conflict that is causing some formatting issues in the modal -- functionally works though.
two new packages: @decent.xyz/box-hooks & @decent.xyz/box-ui.
Testnet liquidity for cross-chain tx's is extremely poor, so we created a new NFT to test with - block explorer link.
@cdsiren Bumped this example to a separate directory with-next-decent-box and used static versioning so that if there are breaking changes it won't fail the CI. Do you have a rate-limited example API key we can use for the example? The sign-up process right now has a bit of friction.
|
2025-04-01T04:35:18.234675
| 2024-11-02T01:45:08
|
2630112149
|
{
"authors": [
"codecov-commenter",
"rainyl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10122",
"repo": "rainyl/opencv_dart",
"url": "https://github.com/rainyl/opencv_dart/pull/283"
}
|
gharchive/pull-request
|
[Android] Remove debug info in release build to reduce package size
fix: #282
:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 92.03%. Comparing base (192a2d1) to head (fe75579).
Report is 4 commits behind head on main.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #283 +/- ##
=======================================
Coverage 92.03% 92.03%
=======================================
Files 49 49
Lines 9129 9129
=======================================
Hits 8402 8402
Misses 727 727
Flag | Coverage Δ
unittests | 92.03% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T04:35:18.256823
| 2018-04-08T12:34:53
|
312301955
|
{
"authors": [
"chris-bacon",
"rakr"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10126",
"repo": "rakr/vim-one",
"url": "https://github.com/rakr/vim-one/pull/80"
}
|
gharchive/pull-request
|
Haskell colours
Hello,
I've added support for Haskell with this PR.
@rakr
Thanks for this and sorry for the late merge
Cheers @rakr! Just making sure, but you seem to have closed the PR instead of merging 😀
Silly me.
Thanks again for the PR
Haha, no problem! Thanks 😀
|
2025-04-01T04:35:18.258601
| 2023-10-20T06:40:00
|
1953638862
|
{
"authors": [
"leon1995",
"rakuri255"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10127",
"repo": "rakuri255/UltraSinger",
"url": "https://github.com/rakuri255/UltraSinger/issues/97"
}
|
gharchive/issue
|
install ERROR: No matching distribution found for onnxruntime-gpu>=1.16.0
When installing requirements.txt on macOS, I get the following error:
ERROR: Ignored the following versions that require a different python version: 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10; 1.9.5 Requires-Python >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <3.7
ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu>=1.16.0 (from pyannote-audio) (from versions: none)
ERROR: No matching distribution found for onnxruntime-gpu>=1.16.0
EDIT: I use python version 3.10.13
This seems to be a macOS problem; on Windows it works.
Closing it, because of no response for a month!
|
2025-04-01T04:35:18.267576
| 2024-08-29T12:10:28
|
2494261425
|
{
"authors": [
"falconsmilie",
"ralphjsmit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10128",
"repo": "ralphjsmit/laravel-seo",
"url": "https://github.com/ralphjsmit/laravel-seo/issues/86"
}
|
gharchive/issue
|
500 When Calling Model::shouldBeStrict() in AppServiceProvider
Hi.
When I have this code:
class AppServiceProvider extends ServiceProvider
{
....
public function boot(): void
{
// All-in-one for preventLazyLoading, preventAccessingMissingAttributes and preventSilentlyDiscardingAttributes
Model::shouldBeStrict(!app()->isProduction());
...
}
}
It results in a 500:
The attribute [canonical_url] either does not exist or was not retrieved for model [RalphJSmit\Laravel\SEO\Models\SEO].
The "easy" fix is to add the members of \RalphJSmit\Laravel\SEO\Support\SEOData to the \RalphJSmit\Laravel\SEO\Models\SEO class, but i'm not sure if this is the right direction to take.
Hi @falconsmilie , thank you for your report! Could you perhaps share the full Flare link to the error you are getting? Then I can look where it originates from and resolve it. Does it originate from here?
Hi @ralphjsmit , you're linking to the correct line of code.
https://flareapp.io/share/o7A2eZd5
When you expand the vendor frames, it is the 4th one.
Hi @falconsmilie, thanks! Fixed in https://github.com/ralphjsmit/laravel-seo/releases/tag/1.6.3.
|
2025-04-01T04:35:18.272644
| 2023-02-24T09:46:14
|
1598279969
|
{
"authors": [
"jlissner",
"skjo0c"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10129",
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/3354"
}
|
gharchive/issue
|
equals returns false if the ordering is different
If the ordering of data is different in two arrays, the equals function returns false.
It should return false as they are not equal.
For two arrays to be equal, they must have the same value at each index.
In your example, arr1[1] doesn't equal arr2[1].
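The index-wise rule can be illustrated with a small sketch (written in Python rather than Ramda's JavaScript, purely to show the semantics):

```python
def arrays_equal(a, b):
    # Same length and the same value at each index, mirroring
    # the element-wise comparison described above.
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

print(arrays_equal([1, 2, 3], [1, 2, 3]))  # True
print(arrays_equal([1, 2, 3], [1, 3, 2]))  # False: index 1 differs
```

So two arrays holding the same values in a different order are, by this definition, not equal.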
|
2025-04-01T04:35:18.273553
| 2016-10-25T19:42:46
|
185214527
|
{
"authors": [
"buzzdecafe",
"kwijibo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10130",
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/pull/1954"
}
|
gharchive/pull-request
|
removed curry from the example for liftN
You don't need to curry functions that you pass to liftN; it works just the same, so the example is clearer without the currying.
:cow2:
|
2025-04-01T04:35:18.288432
| 2023-05-05T07:18:47
|
1697134598
|
{
"authors": [
"AaboutL",
"rameau-fr",
"umar-centific"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10131",
"repo": "rameau-fr/MC-Calib",
"url": "https://github.com/rameau-fr/MC-Calib/issues/40"
}
|
gharchive/issue
|
Use of different pattern board, board not generated using this repo
System information (version)
using docker on ubuntu 20.04
Vision system
Number of cameras => 2
Types of cameras => perspective
Multicamera configurations => overlapping
Configuration file => (i.e. *.yml)
Image sequences => ❔
number of images per camera = 1
if relevant and possible, please share image sequences
Describe the issue / bug
Can we use a different calibration pattern (ChArUco) generated elsewhere, not using this repo? I have attached images for reference.
Getting this error: terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) ../modules/calib3d/src/calibration.cpp:3681: error: (-215:Assertion failed) nimages > 0 in function 'calibrateCameraRO'
Full log:
0000001 | 2023-05-05, 06:41:47.662046 [info] - Nb of cameras : 2 Nb of Boards : 1 Refined Corners : true Distortion mode : 0
0000002 | 2023-05-05, 06:41:47.687069 [info] - Extraction camera 001
0000003 | 2023-05-05, 06:41:47.735920 [info] - Number of threads for board detection :: 2
0000005 | 2023-05-05, 06:41:49.000203 [info] - Extraction camera 002
0000006 | 2023-05-05, 06:41:49.029938 [info] - Number of threads for board detection :: 2
0000008 | 2023-05-05, 06:41:50.096146 [info] - Board extraction done!
0000009 | 2023-05-05, 06:41:50.096222 [info] - Intrinsic calibration initiated
0000010 | 2023-05-05, 06:41:50.096249 [info] - Initializing camera calibration using images
0000011 | 2023-05-05, 06:41:50.096258 [info] - NB of board available in this camera :: 0
0000012 | 2023-05-05, 06:41:50.096266 [info] - NB of frames where this camera saw a board :: 0
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) ../modules/calib3d/src/calibration.cpp:3681: error: (-215:Assertion failed) nimages > 0 in function 'calibrateCameraRO'
Basically what I understand is that it is not able to detect corners.
Can you please help.
Thank you for your interest in this project.
I sincerely apologize for this delayed reply.
At this time, you cannot use Charuco board generated from another source.
It has to be generated directly from our toolbox using your configuration file. I can provide you with further assistance in this regard if you need it.
To be clear, your current test is failing because our toolbox does not recognize this arrangement of Charuco. As soon as you generate the pattern using MC-Calib, everything should work as advertised ;-)
Best regards and thank you again for testing this toolbox
Thanks for your wonderful work. The toolbox is really helpful.
I have a question about why we must use this toolbox to generate Charuco boards.
I found "cv::aruco::DICT_6X6_1000" used in the create_charuco_boards.cpp file. Is this the reason?
If I also use "cv::aruco::DICT_6X6_1000" in my own code to generate Charuco boards, could it work fine with this toolbox?
I have changed DICT_6X6_1000 to DICT_4X4_50 (my board configuration) in McCalib/include/McCalib.hpp and built again. Now the algorithm is working fine and corners are detected. End-to-end calibration is also working fine.
cv::Ptr<cv::aruco::Dictionary> dict_ = cv::aruco::getPredefinedDictionary( cv::aruco::DICT_4X4_50); // load the dictionary that correspond to the // charuco board
Thanks for this amazing project.
While it is theoretically possible to use the toolbox with patterns generated from other sources, it is difficult to guarantee that they will follow the same standard. For instance, someone faced problems with boards generated from calib.io and had to regenerate everything from MC-Calib.
Depending on the implementation, the ordering of the pattern might be widely different and cause problems. Honestly, it could probably be resolved by minor modifications — as long as the generation strategy from another source is known. For example, we could implement one new method to be compatible with patterns from calib.io, but we did not find the time to include such feature yet. Therefore, we recommend users to just generate the boards using the code embedded in the toolbox.
Also it would be great to have the possibility to use different types of boards like AprilGrid etc. Any new contributor ready to improve MC-Calib is very welcome ;-)
Thanks a lot for using MC-Calib, we are very pleased to see that our work is used by others. Do not hesitate to contact us again if you face any difficulties.
|
2025-04-01T04:35:18.414102
| 2015-06-03T18:18:51
|
84700479
|
{
"authors": [
"cnam",
"ddossot"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10132",
"repo": "raml-leanlabsio/raml2html",
"url": "https://github.com/raml-leanlabsio/raml2html/issues/2"
}
|
gharchive/issue
|
Generation error on valid RAML file
I'm trying to generate the doc for this RAML: https://api.unbounce.com/raml/v0.4/api.raml
Here is the error I get:
array(1) {
["schema"]=>
string(3008) "{"$schema":"http:\/\/json-schema.org\/draft-04\/schema#","description":"Collection of account entities.","type":"object","additionalProperties":false,"required":["accounts","metadata"],"properties":{"accounts":{"type":"array","minItems":0,"uniqueItems":true,"items":{"$schema":"http:\/\/json-schema.org\/draft-04\/schema#","description":"TODO describe","type":"object","additionalProperties":false,"required":["created_at","id","metadata","name","options","state"],"properties":{"created_at":{"title":"Generic Date","description":"RFC 5322, section 3.4.1, compliant date, which means formatted with: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'","type":"string","format":"date-time","id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/common\/definitions_v2.0.json#\/date"},"id":{"type":"string"},"metadata":{"type":"object","allOf":[{"title":"Entity Metadata","type":"object","required":["location","documentation"],"properties":{"location":{"type":"string","format":"uri"},"documentation":{"type":"string","format":"uri"},"related":{"type":"object"}},"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/common\/entities_v2.0.json#\/entity_metadata"},{"type":"object","required":["related"],"properties":{"related":{"type":"object","additionalProperties":false,"required":["pages","accounts","sub_accounts"],"properties":{"pages":{"type":"string","format":"uri"},"accounts":{"type":"string","format":"uri"},"sub_accounts":{"type":"string","format":"uri"}}}}}],"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/metadata.json#\/account_metadata"},"name":{"type":"string"},"options":{"$schema":"http:\/\/json-schema.org\/draft-04\/schema#","type":"object","additionalProperties":false,"properties":{"api_keys_enabled":{"type":"boolean"}},"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0
.4\/schema\/account_options.json"},"state":{"type":"string","enum":["active","suspended"]}},"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/account.json"}},"metadata":{"title":"Collection Metadata","type":"object","allOf":[{"title":"Entity Metadata","type":"object","required":["location","documentation"],"properties":{"location":{"type":"string","format":"uri"},"documentation":{"type":"string","format":"uri"},"related":{"type":"object"}},"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/common\/entities_v2.0.json#\/entity_metadata"},{"type":"object","required":["count"],"properties":{"count":{"type":"integer"}}}],"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/common\/entities_v2.0.json#\/collection_metadata"}},"id":"file:\/\/\/home\/ddossot\/dev\/projects\/ub-public-api-specs\/resources\/webroot\/raml\/v0.4\/schema\/accounts.json"}"
}
The key application/vnd.unbounce.api.v0.4+json does not exist.
phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php 333
#0 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(336): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#1 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(336): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#2 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(336): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#3 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(336): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#4 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(273): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#5 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(211): Raml\Parser->parseRamlData(Array, '/home/ddossot/d...', true)
#6 phar:///home/ddossot/dev/tools/raml2html.phar/src/Generator.php(55): Raml\Parser->parse('/home/ddossot/d...', true)
#7 phar:///home/ddossot/dev/tools/raml2html.phar/src/Command/GenerateCommand.php(62): Cnam\Generator->parse('/home/ddossot/d...')
#8 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Command/Command.php(257): Cnam\Command\GenerateCommand->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#9 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(874): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#10 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(195): Symfony\Component\Console\Application->doRunCommand(Object(Cnam\Command\GenerateCommand), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#11 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(126): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#12 phar:///home/ddossot/dev/tools/raml2html.phar/index.php(15): Symfony\Component\Console\Application->run()
#13 /home/ddossot/dev/tools/raml2html.phar(10): include('phar:///home/dd...')
#14 {main}
Hello, @ddossot
You are using the mediaType application/vnd.unbounce.api.v0.4+json, which is not standard and does not pass validation.
This bug confirmed
If you change the media type to application/json then the generator will work correctly.
Thanks for confirming the bug! Looking forward to a fix, I really like the design of the doc this tool generates :smile:
Hello, @ddossot
We have released version 0.0.4 can be downloaded from the link,
http://raml2html.leanlabs.io/raml2html.phar
Thanks!
Here is the error I receive now:
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
PHP Notice: Undefined index: application/vnd.unbounce.api.v0.4+json in phar:///home/ddossot/dev/tools/raml2html-0.0.4.phar/vendor/cnam/php-raml-parser/src/Parser.php on line 331
PHP Fatal error: Call to a member function setSourceUri() on a non-object in phar:///home/ddossot/dev/tools/raml2html-0.0.4.phar/vendor/cnam/php-raml-parser/src/Parser.php on line 332
I am sorry, please try again.
http://raml2html.leanlabs.io/raml2html.phar
Thank you :+1: I get a different error now:
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
[the warning above repeats many more times]
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
Warning: key: application/vnd.unbounce.api.v0.4+json not found from accept types application/json,text/json,application/(.*?)+json
Warning: Not valid json or file not found.
JSON:'page_group'
Invalid JSON.
phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Schema/Parser/JsonSchemaParser.php 46
#0 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(340): Raml\Schema\Parser\JsonSchemaParser->createSchemaDefinition('page_group', '/home/ddossot/d...')
#1 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(342): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#2 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(342): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#3 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(342): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#4 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(342): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#5 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(342): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#6 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(273): Raml\Parser->recurseAndParseSchemas(Array, '/home/ddossot/d...')
#7 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/cnam/php-raml-parser/src/Parser.php(211): Raml\Parser->parseRamlData(Array, '/home/ddossot/d...', true)
#8 phar:///home/ddossot/dev/tools/raml2html.phar/src/Generator.php(55): Raml\Parser->parse('/home/ddossot/d...', true)
#9 phar:///home/ddossot/dev/tools/raml2html.phar/src/Command/GenerateCommand.php(62): Cnam\Generator->parse('/home/ddossot/d...')
#10 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Command/Command.php(257): Cnam\Command\GenerateCommand->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#11 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(874): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#12 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(195): Symfony\Component\Console\Application->doRunCommand(Object(Cnam\Command\GenerateCommand), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#13 phar:///home/ddossot/dev/tools/raml2html.phar/vendor/symfony/console/Symfony/Component/Console/Application.php(126): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#14 phar:///home/ddossot/dev/tools/raml2html.phar/index.php(15): Symfony\Component\Console\Application->run()
#15 /home/ddossot/dev/tools/raml2html.phar(10): include('phar:///home/dd...')
#16 {main}
You can check the API Console for my RAML file here: https://api.unbounce.com/console.html It may help you debug the issue?
|
2025-04-01T04:35:18.424405
| 2024-10-02T07:25:09
|
2560843012
|
{
"authors": [
"TheWathis",
"ramokz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10133",
"repo": "ramokz/phantom-camera",
"url": "https://github.com/ramokz/phantom-camera/issues/396"
}
|
gharchive/issue
|
[2D] Group follow mode is biased towards first target when multiple targets are present
Issue description
When two nodes (and probably more) are present in the Follow Targets variable of a PhantomCamera2D with a Follow Mode set to Group, the camera will be biased towards the first target
Steps to reproduce
Add a PhantomCamera2D
Set the Follow Mode to Group
Add the two (or more) nodes to Follow Targets
Swap the order of the nodes to better see the bias
(Optional) Minimal reproduction project
No response
After a bit of digging, it looks like the bias is due to setting the follow_position to rect.get_center() which is created around the first target. To be perfectly centered, follow_position should be set to the mean of every target position
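For illustration, the difference between the two centers in plain Python (not GDScript; the positions are made up):

```python
targets = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]  # three follow targets

# mean of every target position (unbiased group center)
mean = (sum(x for x, _ in targets) / len(targets),
        sum(y for _, y in targets) / len(targets))

# center of the bounding rect enclosing all targets (what rect.get_center() gives)
xs = [x for x, _ in targets]
ys = [y for _, y in targets]
rect_center = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

print(mean == rect_center)  # False: the two notions of "center" disagree
```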
Refactored a fair bit for the 0.8, so the initial issue you shared seems to have already been resolved a little while ago.
But spotted that it was still occurring when auto zoom was enabled when doing some repro tests, which should now be resolved as well.
Thanks for calling attention to this!
|
2025-04-01T04:35:18.427713
| 2017-10-26T12:23:15
|
268739060
|
{
"authors": [
"ramsey",
"shrikeh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10134",
"repo": "ramsey/moontoast-math",
"url": "https://github.com/ramsey/moontoast-math/issues/11"
}
|
gharchive/issue
|
Is there value in a standardised interface?
I have been using the excellent bignumbers library by @Litipk recently, and just came across this one as part of using @ramsey's UUID library.
As these are both implementations to solve the inherent issues PHP has with large numbers and precision, is there any value in a unified interface so that end users can easily use either without refactoring?
Just a thought as I'm a great fan of the interoperability initiatives in Containers, Requests, etc.
@shrikeh, I agree that there's value in having a standard interface, and I'd love to work with @Litipk to help create that. Maybe we can pitch it to the @php-fig.
As a side note, I'm interested in deprecating moontoast/math and using a different math library for ramsey/uuid. I'll take a look at litipk/php-bignumbers as a potential solution. Thanks for the pointer.
|
2025-04-01T04:35:18.429169
| 2015-03-03T16:30:33
|
59667002
|
{
"authors": [
"georgefs",
"ramusus"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10135",
"repo": "ramusus/django-oauth-tokens",
"url": "https://github.com/ramusus/django-oauth-tokens/issues/7"
}
|
gharchive/issue
|
redirect_uri issue
https://github.com/ramusus/django-oauth-tokens/blob/master/oauth_tokens/providers/facebook.py#L101
why redirect_uri have to response status code 404?
Hi, sorry for the late response. I pointed redirect_uri to a 404 page because it's not necessary to get a valid redirect there; the application just gets the code parameter from the redirect response
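In other words, the client disables redirect-following and just parses the `code` query parameter out of the Location header of the redirect response; a minimal sketch (the URL below is made up):

```python
from urllib.parse import urlparse, parse_qs

# hypothetical Location header taken from the provider's 302 response
location = "http://example.com/missing-page/?code=AQABcdef123&state=xyz"

params = parse_qs(urlparse(location).query)
code = params["code"][0]
print(code)  # AQABcdef123
```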
|
2025-04-01T04:35:18.438107
| 2022-09-09T10:59:38
|
1367661614
|
{
"authors": [
"ValueRaider",
"teneon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10136",
"repo": "ranaroussi/yfinance",
"url": "https://github.com/ranaroussi/yfinance/issues/1057"
}
|
gharchive/issue
|
Same API request, different dataset each time (decimals in floats are different each time)!
Hi there,
i have noticed a weird issue. Each request returns slightly different data.
import yfinance as yf
df = yf.Ticker("UNH")
df = df.history(period="max")
df.to_csv("df.csv")
For demonstration purposes i have downloaded this same stock 3 times (one download per python execution) and named the three files 'df.csv', 'df1.csv', 'df2.csv'. All files are slightly different: basically the CRC and even the filesize differ for all of them.
Let's analyze it with 'cksum' command. The first number is CRC sum and the second number is file size.
cksum df.csv
3599934556 947002 df.csv
cksum df1.csv
4145465075 946595 df1.csv
cksum df2.csv
3647012562 946461 df2.csv
This is IMPOSSIBLE i thought... and then i started to dig deeper. I compared df.csv and df1.csv with 'diff' command
diff df.csv df1.csv
....and there are a LOT of differences in both files. Basically i figured out then that the problem is in floats. For some reason floats are slightly different each time you send a new request to the API.
For example in my case:
df1.csv: 2022-03-08,475.69087992904736, ...
df2.csv: 2022-03-08,475.69091079082295, ...
And it is not a problem with saving CSV, numbers are different each time data is returned from API, even before saving to .csv. I saved it to csv, just so i could analyse it easier and show it to you.
Any ideas what's happening here?
Best regards,
Jim
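The two Close values quoted above differ only far beyond any meaningful price precision, which points at floating-point noise rather than different data; a quick check in plain Python:

```python
import math

# the two Close values copied from the diff above
a = 475.69087992904736
b = 475.69091079082295

print(a == b)                            # False: the bytes differ, so CRCs differ
print(math.isclose(a, b, rel_tol=1e-7))  # True: relative difference is ~6e-8
```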
Please look through recent issues first before submitting
https://github.com/ranaroussi/yfinance/issues/1038
Hi @ValueRaider ,
thanks for your quick reply. Sorry about that. I did browse through several related issues. I also saw issue #1038, the one you posted in your reply. BUT, if i understand correctly, the user in the related issue is talking about an adjusted close price issue?
You have commented there:
The prices are exactly the same. The adjusted price is changing, and in a consistently negligible way so first thought goes to floating-point rounding errors. This is possible depending on how adjustment calculation is structured.
What i have observed is that ALL float numbers are changing, open, high, low, close.
Any thoughts?
Best regards,
Jim
You ARE looking at the adjusted prices. Read documentation for auto_adjust
Thank you very much! I understand it now.
|
2025-04-01T04:35:18.442005
| 2021-11-04T00:12:17
|
1044236928
|
{
"authors": [
"mudler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10137",
"repo": "rancher-sandbox/cOS-toolkit",
"url": "https://github.com/rancher-sandbox/cOS-toolkit/issues/841"
}
|
gharchive/issue
|
Suid bits not set after running cos-upgrade
cos-toolkit version:
8e35cfb10eeb4fc35cb63f5be4308039e6a2f672
CPU architecture, OS, and Version:
Describe the bug
After running cos-upgrade, can't sudo anymore
To Reproduce
Run cos-upgrade and specify an image manually:
cos-upgrade
Expected behavior
Logs
After reboot, can't sudo anymore
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
Additional context
Since luet 0.20.5, we use the containerd code to extract bits. It could possibly be that suid bits are not preserved by containerd
https://github.com/mudler/luet/commit/fba420865a2c6c9599e31d770b41e671d5b5657c
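A quick way to see what a preserved setuid bit looks like in the octal mode, and what to compare against after an upgrade (the scratch path is arbitrary):

```shell
# demonstrate the setuid bit on a scratch file
touch /tmp/suid-demo
chmod 4755 /tmp/suid-demo
stat -c '%a' /tmp/suid-demo   # prints 4755 (the leading 4 is the setuid bit)

# on an affected system, compare against the installed binary:
# stat -c '%u %a' /usr/bin/sudo   # expect: 0 4755 (owned by uid 0, setuid set)
```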
|
2025-04-01T04:35:18.456125
| 2016-04-23T07:06:54
|
150525694
|
{
"authors": [
"deviantony",
"jay-lau",
"yasker"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10138",
"repo": "rancher/convoy",
"url": "https://github.com/rancher/convoy/issues/112"
}
|
gharchive/issue
|
Convoy do not support Get()
For more detail, please refer to https://github.com/emccode/dvdcli/issues/23
I was using 0.2.0; 0.1.0 worked well for the unmount operation.
root@mesos002:~# dvdcli version
Binary: /usr/bin/dvdcli
SemVer: 0.2.0
OsArch: Linux-x86_64
Branch: v0.2.0
Commit: b2f436bcd832fed904ac7d9b94cee7174ff59e18
Formed: Thu, 21 Apr 2016 22:43:26 CST
Testing steps
root@mesos002:~# dvdcli mount --volumedriver=convoy --volumename=dvd1
INFO[0000] /var/lib/convoy/devicemapper/mounts/59fd22ec-97c1-4b4a-a43e-15470fa1d730
/var/lib/convoy/devicemapper/mounts/59fd22ec-97c1-4b4a-a43e-15470fa1d730
root@mesos002:~# dvdcli unmount --volumedriver=convoy --volumename=dvd1
ERRO[0000] VolumeDriver.Get: Handler not found: POST /VolumeDriver.Get volumeName=dvd1
convoy log
} pkg=daemon
ERRO[18036] Handler not found: POST /VolumeDriver.List pkg=daemon
DEBU[18049] Handle plugin activate: POST /Plugin.Activate pkg=daemon
DEBU[18049] Response: {
"Implements": [
"VolumeDriver"
]
} pkg=daemon
ERRO[18049] Handler not found: POST /VolumeDriver.Get pkg=daemon
DEBU[18049] Handle plugin create volume: POST /VolumeDriver.Create pkg=daemon
DEBU[18049] Request from docker: &{dvd1 map[]} pkg=daemon
DEBU[18049] Found volume 59fd22ec-97c1-4b4a-a43e-15470fa1d730 (name dvd1) for docker pkg=daemon
DEBU[18049] Response: {} pkg=daemon
DEBU[18049] Handle plugin mount volume: POST /VolumeDriver.Mount pkg=daemon
DEBU[18049] Request from docker: &{dvd1 map[]} pkg=daemon
DEBU[18049] Mount volume: 59fd22ec-97c1-4b4a-a43e-15470fa1d730 (name dvd1) for docker pkg=daemon
DEBU[18049] event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=59fd22ec-97c1-4b4a-a43e-15470fa1d730
DEBU[18049] event=list mountpoint=/var/lib/convoy/devicemapper/mounts/59fd22ec-97c1-4b4a-a43e-15470fa1d730 object=volume pkg=daemon reason=complete volume=59fd22ec-97c1-4b4a-a43e-15470fa1d730
DEBU[18049] Response: {
"Mountpoint": "/var/lib/convoy/devicemapper/mounts/59fd22ec-97c1-4b4a-a43e-15470fa1d730"
} pkg=daemon
DEBU[18056] Handle plugin activate: POST /Plugin.Activate pkg=daemon
DEBU[18056] Response: {
"Implements": [
"VolumeDriver"
]
} pkg=daemon
ERRO[18056] Handler not found: POST /VolumeDriver.Get pkg=daemon
+1 I'm having the same issue when I try to restart a container with a NFS shared volume after restarting the Docker host.
This also happens when trying to delete a volume using:
$ docker volume rm shared-vol
Hi,
Please give a try https://github.com/rancher/convoy/releases/tag/v0.5.0-rc1 . It should fix this issue.
Notice it's not compatible with previous Convoy's configuration file. You may need a new setup to try it on.
Close this issue since v0.5.0-rc1 should fix it.
|
2025-04-01T04:35:18.464798
| 2023-07-14T20:33:06
|
1805534252
|
{
"authors": [
"GrabbenD",
"kkaempf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10139",
"repo": "rancher/elemental-toolkit",
"url": "https://github.com/rancher/elemental-toolkit/issues/1803"
}
|
gharchive/issue
|
[Docs] Podman: allow local repository for first time install
It's possible to host a local registry with https://hub.docker.com/_/registry
However, the ability to skip this step would make it easier to use the project locally for a single computer (assuming contents of podman/docker are kept on a persistent partition).
Is this currently possible (as the documentation is unclear)?
$ podman run --privileged -v /dev/:/dev/ -ti localhost/local/elemental-toolkit:latest install --system.uri docker:localhost/local/system:latest /dev/sdb
INFO[2023-07-14T20:27:18Z] Starting elemental version v0.0.1
INFO[2023-07-14T20:27:18Z] Reading configuration from '/etc/elemental'
INFO[2023-07-14T20:27:18Z] Install called
INFO[2023-07-14T20:27:18Z] Partitioning device...
INFO[2023-07-14T20:27:19Z] Mounting disk partitions
INFO[2023-07-14T20:27:19Z] Running before-install hook
INFO[2023-07-14T20:27:19Z] Preparing root tree for image: /run/cos/state/cOS/active.img
INFO[2023-07-14T20:27:19Z] Copying localhost/local/system:latest source...
INFO[2023-07-14T20:27:31Z] Unmounting disk partitions
Error: 1 error occurred:
* GET https://index.docker.io/v2/localhost/local/system/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:localhost/local/system Type:repository]]
This ticket might be continuation of https://github.com/rancher/elemental-toolkit/issues/1802 as there's a --local parameter:
$ sudo systemctl enable --now podman.socket
$ sudo ln -s /run/podman/podman.sock /var/run/docker.sock
$ curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
OK
$ podman run --privileged -v /dev/:/dev/ -ti localhost/local/elemental-toolkit:latest install --local --system.uri docker:localhost/local/system:latest /dev/sdb
INFO[2023-07-14T21:23:33Z] Starting elemental version v0.0.1
INFO[2023-07-14T21:23:33Z] Reading configuration from '/etc/elemental'
INFO[2023-07-14T21:23:33Z] Install called
INFO[2023-07-14T21:23:33Z] Partitioning device...
INFO[2023-07-14T21:23:34Z] Mounting disk partitions
INFO[2023-07-14T21:23:34Z] Running before-install hook
INFO[2023-07-14T21:23:34Z] Preparing root tree for image: /run/cos/state/cOS/active.img
INFO[2023-07-14T21:23:34Z] Copying localhost/local/system:latest source...
INFO[2023-07-14T21:23:43Z] Unmounting disk partitions
Error: 1 error occurred:
* Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I wasn't paying attention 😄
Looks like this isn't a Podman issue after all:
$ sudo systemctl enable --now podman.socket
$ podman run --rm --privileged -v /var/run/podman/podman.sock:/var/run/docker.sock -v /dev/:/dev/ -ti localhost/local/elemental-toolkit:latest install --local --system.uri docker:localhost/local/system:latest /dev/sdb
INFO[2023-07-14T21:46:27Z] Starting elemental version v0.0.1
INFO[2023-07-14T21:46:27Z] Reading configuration from '/etc/elemental'
INFO[2023-07-14T21:46:27Z] Install called
INFO[2023-07-14T21:46:27Z] Partitioning device...
INFO[2023-07-14T21:46:27Z] Mounting disk partitions
INFO[2023-07-14T21:46:27Z] Running before-install hook
INFO[2023-07-14T21:46:27Z] Preparing root tree for image: /run/cos/state/cOS/active.img
INFO[2023-07-14T21:46:27Z] Copying localhost/local/system:latest source...
INFO[2023-07-14T21:46:27Z] Finished copying localhost/local/system:latest into /run/cos/workingtree
INFO[2023-07-14T21:46:27Z] Generating grub files for efi on /run/cos/efi
INFO[2023-07-14T21:46:27Z] Unmounting disk partitions
Error: 1 error occurred:
* did not find grub modules under /run/cos/workingtree (err: %!s(<nil>))
To sum up the remaining issues from above:
Issue 1
I'm currently testing this in Hyper-V with Arch Linux and I was able to workaround this error by using --disable-boot-entry
Error: 1 error occurred:
* variables not supported
Issue 2 (needs better documentation)
Looks like the sample green Dockerfile is just out of date
Error: 1 error occurred:
* did not find grub modules under /run/cos/workingtree (err: %!s(<nil>))
I'm able to install these image successfully on main branch with --disable-boot-entry, however the green images are refusing to boot in Hyper-V (for reference, TPM & Secure Boot are disabled)
oci:registry.opensuse.org/isv/rancher/elemental/stable/teal53/15.4/rancher/elemental-teal/5.3:latest (works)
oci:quay.io/costoolkit/releases-teal:cos-system-0.10.7 (works)
oci:quay.io/costoolkit/releases-green:cos-system-0.10.7 (halts)
oci:quay.io/costoolkit/releases-green:cos-container-system-0.10.7 (halts)
The green images halts due to:
dracut: FATAL: iscsiroot requested but kernel/initrd does not support iscsi
dracut: Refusing to continue
Looks like there's a newer Dockerfile here https://build.opensuse.org/package/view_file/home:kwk:osimage/slem4r/Dockerfile?expand=1 but it's not possible to use it locally:
Error: creating build container: initializing source docker://suse/sle-micro-rancher/5.4:latest: reading manifest latest in docker.io/suse/sle-micro-rancher/5.4: requested access to the resource is denied
To sum up the relevant issues from above:
Issue 1
Tested this in Hyper-V with Arch Linux and I was able to workaround this error by using --disable-boot-entry
Error: 1 error occurred:
* variables not supported
See rancher/elemental#881
|
2025-04-01T04:35:18.472476
| 2020-08-13T19:05:25
|
678670387
|
{
"authors": [
"brandond",
"galal-hussein"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10140",
"repo": "rancher/k3s",
"url": "https://github.com/rancher/k3s/pull/2129"
}
|
gharchive/pull-request
|
Update to v1.18.8-k3s1
Proposed changes
Update to v1.18.8-k3s1
Types of changes
Upstream release
Verification
QA to test
Linked Issues
#2113
PR looks good, LGTM thanks!
|
2025-04-01T04:35:18.473463
| 2017-09-14T00:56:11
|
257563450
|
{
"authors": [
"SvenDowideit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10141",
"repo": "rancher/os",
"url": "https://github.com/rancher/os/pull/2099"
}
|
gharchive/pull-request
|
Quick fix to allow CAdvisor and kubelet to work
for #2098 and rancher/rancher#9848
backported to 1.0.5_ #2110
|
2025-04-01T04:35:18.474763
| 2015-10-28T09:03:43
|
113772844
|
{
"authors": [
"datawolf",
"imikushin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10142",
"repo": "rancher/os",
"url": "https://github.com/rancher/os/pull/621"
}
|
gharchive/pull-request
|
Delete the unused const variable in docker/client.go
The commit 9d76b79ac36e87965 refactored the code for the
"NewClient" function, so we should delete the unused const
variable.
Signed-off-by: Wang Long<EMAIL_ADDRESS>
LGTM
|
2025-04-01T04:35:18.481330
| 2018-04-09T17:49:24
|
312625073
|
{
"authors": [
"moelsayed",
"soumyalj"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10143",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/12587"
}
|
gharchive/issue
|
Kubelet restarts constantly on RHEL 7.3 and RHEL 7.4 set ups with Docker 1.12.6 and Kubernetes v1.10.0-rancher1-1
Rancher versions:
rancher/server: v1.6.17-rc1
**Docker version: (docker version,docker info preferred)**1.12.6
Operating system and kernel: (cat /etc/os-release, uname -r preferred) RHEL 7.3 and 7.4
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) AWS
Environment Template: (Cattle/Kubernetes/Swarm/Mesos) Kubernetes v1.10.0
Steps to Reproduce:
Create a Kubernetes setup on RHEL7.3 or RHEL 7.4
Kubelet restarts constantly and fails to go to running state
Kubernetes infra stack is in degraded state.
Kubelet crashes with the error:
F0410 00:21:33.501593 28895 kubelet.go:1339] Failed to start cAdvisor inotify_add_watch /sys/fs/cgroup/cpuacct,cpu: no such file or directory
Issue seems specific to docker 1.12.x. We have seen it several times: #6181, #7252. I am not sure why exactly it's regressing.
The original problem is related to kubelet's built-in cadvisor. The upstream cadvisor seems to have changed its inotify library several times, causing this. More details are in these issues:
https://github.com/google/cadvisor/issues/1461
https://github.com/kubernetes/kubernetes/issues/32728
https://github.com/google/cadvisor/issues/1890
https://github.com/google/cadvisor/issues/1708
#144 provides the advised work around needed to avoid this until upstream is fixed.
Docker version 1.13 and 17.03 are also affected.
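The path in the fatal error (`/sys/fs/cgroup/cpuacct,cpu`) refers to one of the host's mounted cgroup hierarchies; which names actually exist varies by distro and can be checked with:

```shell
# list mounted cgroup hierarchies; the CPU accounting one may be mounted as
# cpu,cpuacct or cpuacct,cpu depending on the distribution
grep cgroup /proc/mounts
ls /sys/fs/cgroup/
```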
Tested with rancher v1.6.17-rc2
Create a Kubernetes setup on RHEL7.3 or RHEL 7.4
Kubelet came up successfully. Kubernetes infra stack was active
|
2025-04-01T04:35:18.489113
| 2018-07-19T22:29:49
|
342911326
|
{
"authors": [
"kinarashah",
"loganhz",
"tfiduccia"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10144",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/14643"
}
|
gharchive/issue
|
Add UI for drain
Drain action already shows up in UI, just need to set the input.
to call drain from API, the input would look like -
curl -u $token --insecure -H 'Content-Type: application/json' 'https://localhost:8443/v3/nodes/local:machine-k6ct9?action=drain' -X POST -d '{"deleteLocalData":true,"force":false,"gracePeriod":-1,"ignoreDaemonSets":true,"timeout":20}'
While draining is ongoing, it can be stopped using the stopDrain action
Information about the params and default values -
type NodeDrainInput struct {
// Drain node even if there are pods not managed by a ReplicationController, Job, or DaemonSet
// Drain will not proceed without Force set to true if there are such pods
Force bool `json:"force,omitempty"`
// If there are DaemonSet-managed pods, drain will not proceed without IgnoreDaemonSets set to true
// (even when set to true, kubectl won't delete pods - so setting default to true)
IgnoreDaemonSets bool `json:"ignoreDaemonSets,omitempty" norman:"default=true"`
// Continue even if there are pods using emptyDir
DeleteLocalData bool `json:"deleteLocalData,omitempty"`
//Period of time in seconds given to each pod to terminate gracefully.
// If negative, the default value specified in the pod will be used
GracePeriod int `json:"gracePeriod,omitempty" norman:"default=-1"`
// Time to wait (in seconds) before giving up for one try
Timeout int `json:"timeout" norman:"min=1,max=10800,default=60"`
}
https://github.com/rancher/rancher/issues/14265
Hi @kinarashah Please let me know once backend is ready
@loganhz backend should now be available in master
@kinarashah Created a PR for adding drain and stop drain action in UI
But it looks like stopDrain doesn't show in API for a draining node.
Thanks @loganhz, I'll look into it
@kinarashah StopDrain is already in the rancher/rancher:master. But it becomes Cordoned after StopDrain.
Yeah, @loganhz this is expected, since drain also cordons the node. I'd add it to our docs
@loganhz @kinarashah - Once I Stop Drain, it is still Cordoned. Shouldn't it also be Uncordoned when I stop the drain so it reverts to being just "Active"?
@tfiduccia it is k8s behavior that when you stop an ongoing drain process, the node still remains in the cordoned state. I agree this can be new to someone who isn't acquainted with k8s, but we'll need to document it
Version - 2.1 master 8/30
Verified fixed
|
2025-04-01T04:35:18.492815
| 2020-02-06T14:36:51
|
561053564
|
{
"authors": [
"Oats87",
"sangeethah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10145",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/25293"
}
|
gharchive/issue
|
Support K8s 1.15.10, 1.16.7, 1.17.3
Kubernetes v1.15.10, v1.16.7, and v1.17.3 are planned to be released on February 11, 2020.
https://github.com/rancher/hyperkube/pull/97
https://github.com/rancher/hyperkube/pull/98
https://github.com/rancher/hyperkube/pull/99
https://github.com/rancher/kontainer-driver-metadata/pull/134
in flight
Validations done for rancher server version - v2.3-5 pointing to dev-v2.3 branch of k-d-m
1. Expected applicable k8s versions are seen in the dropdown while creating clusters (RKE)
2. User is able to provision clusters with all applicable k8s versions (from the drop-down list) using all 4 network providers (test_clusters_for_kdm)
3. All onTag automation runs for clusters with each supported K8s version with "Canal" succeeded.
|
2025-04-01T04:35:18.504755
| 2015-11-08T06:25:40
|
115711790
|
{
"authors": [
"AnalogJ",
"asciifaceman",
"fentas",
"obriensystems",
"rcarmo",
"vincent99"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10146",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/2599"
}
|
gharchive/issue
|
Automated rancher/agent deployment
Hi,
I've started writing a chef cookbook to automate my setup of rancher and the associated agents on my servers. The issue I'm running into is that the agents all seem to need a unique registration token, which I cant seem to get/create using the api.
Is there a way to do tokenless registration of rancher agents? Or retrieve unique registration tokens using the api?
I was able to get it working by taking a closer look at the Vagrantfile. It seems you can ignore the token, but you still need to include the base URL.
sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent:v0.8.2 http://192.168.0.x:8080
This should probably be added to the docs at some point, but I was able to get everything working, so feel free to close the issue.
They are unique to the installation but do not need to be unique to every host. You generate one registration token (manually by going to the UI add custom host screen, or POST /v1/registrationTokens and reload the links.self URL until the state is active) and then can use that many times.
"Ignor[ing] tthe token" only works while access control is disabled.
My solution was to create an startup script, which will register hosts on startup.
Utilizing rancher rest api.
RANCHER_URL="..."
RANCHER_ACCESS_KEY="..."
RANCHER_SECRET_KEY="..."
## some host labels as needed
CATTLE_HOST_LABELS="host=$(hostname)&role=development"
## do some stuff like getting jq
sudo curl -skL https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 -o /usr/local/bin/jq
sudo chmod +x /usr/local/bin/jq
## generate rancher registrationtokens
ID=$(curl -X POST \
-u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d "{\"name\":\"$(hostname)\"}" \
"${RANCHER_URL}/v1/registrationtokens" | jq -r .id)
## need to wait
sleep 5
COMMAND=$(curl -X GET \
-u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
-H 'Accept: application/json' \
"${RANCHER_URL}/v1/registrationtokens/$ID" | jq -r .command)
## start ranger-agent ~ running with root privileges no need for sudo.
COMMAND=(docker run -e CATTLE_HOST_LABELS=$CATTLE_HOST_LABELS ${COMMAND#*run})
exec "${COMMAND[@]}"
Hope this helps.
I created chef environments that match rancher environments for host/token segregation and managed the tokens via attributes
eg.
command "http://#{node['sce-rancher']['agent']['master']}/v1/scripts/#{node['sce-rancher']['agent']['tokens']}"
Is there a way to set the token WHILE deploying the Rancher server? In that way, we could own that secret and bake it in to cloud-init scripts as machines are deployed.
@rcarmo Are you trying to deploy the master server & agent nodes in parallel? It looks like you might have issues with that as the agents will fail if they can't talk to an already existing server.
If you can run the agents after the server is up, then you can use the Rancher API to get a registrationToken:
https://github.com/mediadepot/chef-docker_rancher/blob/master/libraries/provider_agent.rb
@AnalogJ I know. I've done that in the past, but I think that still requires too much human intervention because you have to either have auth disabled or generate an API key.
I really want to be able to deploy everything in parallel, and it seems to me that being able to supply a shared secret to both masters and slaves would be much more practical - especially since on Azure I can instantiate a virtual machine scale set with that secret in CustomData and have new instances automatically register themselves - but I have to bake in the secret to the scale set definition.
So, in short, I would very much like to stop deploying Rancher clusters as a two-step process with manual intervention in between...
@rcarmo Well generating the API key & enabling auth can be done in an automated way (https://github.com/mediadepot/chef-docker_rancher/blob/master/libraries/provider_auth_local.rb#L70-L109). But I can see where you're going with this.
It's a lot easier to just use a shared key than it is to
automatically generate the API key on master standup
push the API key to a secret server/persistent storage/configuration management system
use that API key for all subsequent agent installations
While a shared secret makes it easier, it is possible to have a completely automated cluster deployment.
Well, that works OK if your secret store is accessible from your master. If your secret store is not directly accessible from it, then you have to retrieve it to your orchestrator, save it and re-use it.
However, that too requires a second transaction for me, so it's a no-go.
If auth is off then you don't need a registration token. If it's on then you're probably already talking to the API to enable it. There is no built-in, easy way to inject a token for the default (or any other) project. If you really want to, you could just add an appropriate row to the database.
Hi @vincent99, I don't think you understood my goal at all. I need to be able to trigger creating and provisioning both master and worker VMs remotely, without human intervention, and supply them beforehand with all the required information they need to have a working cluster at the end.
At that time, there is no master (and certainly no database), and even if I create the master first, the orchestrator may not have access to its internals, including the database.
It would be vastly easier if I could simply launch the Rancher server container with a pre-defined environment variable (or even a mount point for a PEM certificate) and supply matching credentials to the workers.
Everything else (including the current approach) is just a kludge. Even cloud-init support would be nice, since I can supply instances with custom cloud-init files and inject anything I need into them beforehand.
Thanks guys, some details on this thread were very useful.
https://jira.onap.org/browse/OOM-715
https://gerrit.onap.org/r/#/c/32019/
https://jira.onap.org/browse/OOM-710
|
2025-04-01T04:35:18.517303
| 2021-04-12T23:59:27
|
856445854
|
{
"authors": [
"cbron",
"izaac",
"nickgerace"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10147",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/32074"
}
|
gharchive/issue
|
[Logging v2] - some log files are not collected and sent to the ClusterOutput
What kind of request is this:
bug
Steps to reproduce:
Provision a Windows RKE cluster (1809 was used during this test)
3 Windows workers
Enable RKE logs (values.yaml on dashboard)
https://github.com/rancher/rancher/issues/32070#issue-856427183
Result:
Some RKE logs are getting collected right away, but other files are not, or maybe are taking a long time to be processed?
Other details that may be helpful:
Rancher version v2.5.8-rc2
Logging Chart version: 3.9.400-rc03
Fluentd ClusterOutput.
docker logs fluentd 2>&1 | grep -i "/var/lib/rancher/rke/log/kube-proxy_4" returns nothing from the fluentd stdout logs, but there are files that match that pattern:
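A quick way to verify which of those rotated files fluentd has actually mentioned is to compare the files on disk against a captured copy of the fluentd output. This is a sketch; the paths are taken from the report above, and the helper takes the log directory and a file containing fluentd's output so it can be tested offline.

```shell
#!/bin/sh
# Sketch: report which kube-proxy log files in a directory are mentioned
# in a captured fluentd log, and which are missing.
report_missing() {
    logdir="$1"
    fluentd_log="$2"
    for f in "$logdir"/kube-proxy_*; do
        [ -e "$f" ] || continue
        if grep -q "$(basename "$f")" "$fluentd_log"; then
            echo "seen: $(basename "$f")"
        else
            echo "missing: $(basename "$f")"
        fi
    done
}

# On the node you would first capture the agent output, e.g.:
#   docker logs fluentd 2>&1 > /tmp/fluentd.log
#   report_missing /var/lib/rancher/rke/log /tmp/fluentd.log
```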
Environment information
Rancher version: v2.5.8-rc2
Installation option: HA
https://github.com/rancher/rancher/issues/32021
Bumping to 2.6, this needs a more long term review.
Current status: Unreproducible in backend dev environments with the following:
Rancher 2.5.8-rc17 (using Dashboard from RC)
Logging 3.9.4-rc08
global.cattle.windows.enabled=true
additionalLoggingSources.rke.enabled=true
LTSC Cluster: 1 Ubuntu 20.04 LTS + 3 Windows 2019 LTSC nodes
SAC Cluster: 1 Ubuntu 20.04 LTS + 3 Windows 2004 SAC nodes
Next Steps: Requested cluster from QA environment: 1809 SAC node based. If the issue is still unreproducible at that point, we should descope it from the next release, and close this or pursue another option.
Investigation Into QA Cluster
With a cluster provided by @izaac with 1809 nodes, this issue was also unreproducible. We see that all kubelet and kube-proxy logs appear in the dashboard, as intended, and delivered cleanly.
Side Notes
Side Issue 1: There was a side issue when using the stdout output for fluent-bit where the output would not work properly on RKE nodeAgents only. This was investigated and we've determined that this is likely an upstream fluent-bit issue. The stdout output for fluent-bit may not always work for symlinked Windows logs. The style of the mounted paths may matter (e.g. \\ vs \ vs / and C:\ vs /). The only way to be sure if logs are being delivered is to use an actual endpoint via an Output for ClusterOutput CR. Fortunately, regardless of the style changes, the forward output does work, even though stdout does not always work. Since this does not affect the forward output to fluentd, this is irrelevant to our chart (unless that changes from user feedback, requests, etc.).
Side Issue 2: Sometimes, fluent-bit Windows nodeAgent pods logged that some rotated logs could not be processed. Neither backend teams nor QA teams encountered this during Logging v2 testing, which sparked an investigation. Our investigation uncovered that these errors may have been accurate, but the rotated logs were successfully ingested and delivered soon after. This most likely began due to usages of newer Windows Server AMIs since they are updated frequently. Thus, this issue is likely cosmetic or a timing issue. Regardless, this may be another upstream fluent-bit issue that is irrelevant to our chart.
Context: Upstream Windows fluent-bit issues are not provided. They are built in rancher/fluent-bit-package.
Closing issue after confirming results with @izaac.
Summary: issue was unreproducible in 2004 SAC (backend created), 2019 LTSC (backend created), and 1809 SAC (QA created) clusters.
|
2025-04-01T04:35:18.524257
| 2021-06-20T02:17:46
|
925504747
|
{
"authors": [
"nickgerace",
"sowmyav27"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10148",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/33296"
}
|
gharchive/issue
|
old fleet agent is deleted before the new fleet agent workload comes up in cattle-fleet-system namespace in a rancher upgraded setup
What kind of request is this (question/bug/enhancement/feature request): bug
Steps to reproduce (least amount of steps as possible):
On 2.5.8, deploy a downstream cluster - DO RKE - 2 nodes
Upgrade rancher to master-head commit id: a11f947
On the local cluster, The new namespace cattle-fleet-agent - fleet-controller and gitjob workload get created.
On the local cluster, the fleet-agent workload - 0.3.5 gets deleted
The new fleet-agent is not deployed.
Expected Result:
The new fleet-agent should be deployed before the previous one gets deleted
Other details that may be helpful:
The new fleet-agent is deployed after 15 minutes on the local cluster
Environment information
Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head commit id: a11f9476
Installation option (single install/HA): Single
This is no longer seen on master-head. Likely was an issue related to https://github.com/rancher/fleet/issues/437 due to the errors on agent Update API calls not being handled.
Note: this issue is separate from https://github.com/rancher/rancher/issues/33293
On upgrade from 2.5.9 to master-head commit id: 44bb5b4
On 2.5.9, deploy a downstream cluster - DO RKE - 2 nodes
Upgrade rancher to master-head commit id: 44bb5b4
On the local cluster, The new namespace cattle-fleet-agent - fleet-agent, fleet-controller and gitjob workloads get created.
|
2025-04-01T04:35:18.530736
| 2022-05-12T07:57:46
|
1233593925
|
{
"authors": [
"dgadelha",
"wirwolf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10149",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/37701"
}
|
gharchive/issue
|
Can not rename downstream cluster
Rancher Server Setup
Rancher version: v2.6.4
Installation option (Docker install/Helm Chart): Helm Chart
If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): Kubespray
Proxy/Cert Details:
Information about the Cluster
Kubernetes version: v1.21.8
Cluster Type (Local/Downstream): Downstream
If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Kubespray
User Information
What is the role of the user logged in: Admin(google auth)
Describe the bug
I can not rename the downstream cluster in the Cluster Management tab
To Reproduce
Open the Cluster Management tab
For any downstream cluster, click Edit Config
In the "Cluster Name*" field, change the value
Click Save
In the next tab, do NOT execute 'kubectl apply ', because the connection to the cluster will be damaged and the cluster status will become "Unavailable"
Click "Done"
Result
Name not changed
Expected Result
Cluster name is changed to the new name
Additional context
This issue is still valid on v2.6.8.
|
2025-04-01T04:35:18.532190
| 2023-03-06T19:34:44
|
1612082533
|
{
"authors": [
"snasovich"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10150",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/40783"
}
|
gharchive/issue
|
vSphere CSI chart does not allow enabling topology support
rancher/rancher tracking issue for https://github.com/rancher/rke2/issues/3468
Already captured in https://github.com/rancher/rancher/issues/40727
|
2025-04-01T04:35:18.534064
| 2024-10-25T20:35:56
|
2615059950
|
{
"authors": [
"lovemianhuatang",
"rebeccazzzz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10151",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/47738"
}
|
gharchive/issue
|
Feature Charts: Add Longhorn 1.7.2 Chart in 2.9.x
Longhorn released 1.7.2 in October 2024 so we need to add this new release in the feature charts along with Rancher 2.9.x. We will probably need to make this OOB.
Add 1.7.2 LH chart in feature chart 2.9.x - Cluster Manager/Rancher Marketplace
Cc: @chriscchien @innobead @khushboo-rancher @yangchiu @PhanLe1010 @mantissahz @lucasmlp @nicholasSUSE @Jono-SUSE-Rancher @rancher/longhorn
Closing this issue because the chart was released.
Can Longhorn be added in v2.10?
|
2025-04-01T04:35:18.556767
| 2016-12-12T13:47:05
|
194978139
|
{
"authors": [
"Sturgelose",
"cllunsford",
"deniseschannon",
"ekristen",
"pulberg",
"stefanlasiewski",
"typia",
"vincent99"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10152",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/7016"
}
|
gharchive/issue
|
Spam log - unable to remove filesystem
Rancher Version:
1.2
Docker Version:
1.12.3
OS and where are the hosts located? (cloud, bare metal, etc):
Debian 8
Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB)
Single Node rancher external DB
Environment Type: (Cattle/Kubernetes/Swarm/Mesos)
Cattle
Steps to Reproduce:
Only add nfs-rancher stack
Results:
spam log
2016-12-12 13:44:20,946 ERROR [67d957a6-36f2-480e-a6cc-606dd81fe718:9476] [volumeStoragePoolMap:236] [volumestoragepoolmap.remove] [] [cutorService-13] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm:
device or resource busy
2016-12-12 13:44:20,958 ERROR [:] [] [] [] [cutorService-16] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for e15f4408443ea1065475127023d95526f45ec23aa9e4e13536fb90ce3564dc43: remove /var/lib/docker/containers/e15f4408443ea1065475127023d95526f45ec23aa9e4e13536fb90ce3564dc43/shm: device or resource busy
2016-12-12 13:44:20,990 ERROR [:] [] [] [] [cutorService-13] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm: device or resource busy
2016-12-12 13:44:35,920 ERROR [de5f5bd7-7171-4687-abbd-4117991d93e1:9476] [volumeStoragePoolMap:236] [volumestoragepoolmap.remove] [] [cutorService-15] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm:
device or resource busy
2016-12-12 13:44:35,946 ERROR [:] [] [] [] [cutorService-15] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm: device or resource busy
2016-12-12 13:44:50,921 ERROR [0dc500c8-d60b-446e-b576-72440cfed6af:9476] [volumeStoragePoolMap:236] [volumestoragepoolmap.remove] [] [ecutorService-9] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm:
device or resource busy
2016-12-12 13:44:50,950 ERROR [:] [] [] [] [ecutorService-9] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm: device or resource busy
2016-12-12 13:45:05,940 ERROR [05aaccbd-a812-4510-bc19-944069720065:9476] [volumeStoragePoolMap:236] [volumestoragepoolmap.remove] [] [ecutorService-5] [c.p.e.p.i.DefaultProcessInstanceImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm:
device or resource busy
2016-12-12 13:45:05,966 ERROR [:] [] [] [] [ecutorService-5] [.e.s.i.ProcessInstanceDispatcherImpl] Agent error for [storage.volume.remove.reply;agent=46]: Error response from daemon: Unable to remove filesystem for bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d: remove /var/lib/docker/containers/bfe3852bccc63f18cf71d4ee38d4de2e4de5070ab454586d53356d69e275f02d/shm: device or resource busy
Expected:
No spam log :)
I also had several thousand of these errors in my logs. I was running Docker 1.12.2 and 1.12.3.
I updated to Docker 1.12.4, and the messages went away as soon as Docker restarted with the new version.
There are many potential causes of this error message, but I wonder if it was fixed by Docker #29083, listed at https://github.com/docker/docker/releases/tag/v1.12.4
Fix issue where volume metadata was not removed #29083
I have the same problem since updating to 1.2.0 and also in 1.2.1. I tried disabling the cAdvisor container I had, thinking it might be the problem, but it still gets stuck.
I recently updated to docker 1.12.5 for Debian and still the same :(
Also, tried to find which process is keeping the resource busy with lsof but there is nothing to be seen, so no idea what is happening....
Running in:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 5.883 GiB
Same issue here, RHEL7.2, Rancher v1.2.1, upgraded Docker 1.12.4 --> 1.12.5, no change
Full error output -
https://gist.github.com/pulberg/5ffe71cba5ab0bb52bd808df3d909890
Seeing the same issue after upgrading to Rancher v1.2.1. Docker v1.11.2, Ubuntu 14.04
This looks to be very similar to #6639. It appears that the rancher-agent container is holding onto a mount for /var/lib/docker/containers/[container-id]/shm, preventing the deletion.
docker daemon log:
time="2016-12-19T14:42:28.477546954Z" level=error msg="Handler for DELETE /v1.22/containers/6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854 returned error: Unable to remove filesystem for 6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854: remove /var/lib/docker/containers/6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854/shm: device or resource busy"
Finding process with mount:
PID TTY STAT TIME COMMAND
20033 ? Ss 0:00 /bin/bash /run.sh run
27592 ? Sl 0:06 /var/lib/cattle/pyagent/agent
27828 ? Sl 0:04 host-api -cadvisor-url http://<IP_ADDRESS>:9344 -logtostderr=true -ip <IP_ADDRESS> -po
If I exec into the rancher-agent, I can confirm that the mount exists:
$ mount | grep "6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854"
shm on /var/lib/docker/containers/6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
Restarting the rancher/agent and rancher/network-manager clears the dead containers.
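The manual mount-holder search above can be automated by scanning /proc/&lt;pid&gt;/mountinfo for the container ID. This is a sketch: the proc root is parameterized only so the function can be exercised against a fake layout; on a real host you would call it with /proc.

```shell
#!/bin/sh
# Sketch: find which PIDs still hold a mount referencing a given container ID,
# by scanning <proc_root>/<pid>/mountinfo for that ID.
find_mount_holders() {
    cid="$1"
    proc_root="${2:-/proc}"
    for mi in "$proc_root"/[0-9]*/mountinfo; do
        [ -r "$mi" ] || continue
        if grep -q "$cid" "$mi"; then
            # Emit the PID (the directory name between proc_root and /mountinfo)
            pid=${mi#"$proc_root"/}
            echo "${pid%/mountinfo}"
        fi
    done
}

# Example (real host):
#   find_mount_holders 6f55ec60dc88cc31a03edb256a2e4165168e8aa5afd0a07f93d0c74feef7d854
```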
I'm seeing this exact same issue. Please advise.
Well, turns out that I'm still having this problem. It just took a couple days to re-appear.
Rancher 1.2.0, CentOS 78.2, Docker 1.12.4, Cattle, Devicemapper with direct-lvm.
Are these errors perhaps from containers which are bind mounting /var/lib/docker/ ? Does this Troubleshooting step offer a fix?
https://docs.docker.com/engine/admin/troubleshooting_volume_errors/#/error-unable-to-remove-filesystem
This is a docker bug that cAdvisor is good at triggering; I thought it was fixed in newer 1.12.x-es. But Rancher 1.2 does not run cAdvisor anymore, so it's unclear why you'd still have it running..
Rancher 1.2.1, Docker 1.12.1, Ubuntu 14.04.5, Cattle, AUFS -- This is a "FRESH" install of rancher on top of some existing servers, I never once had a problem until after installing rancher, then started getting these errors :/
@vincent99 In the issue you linked, the problem appears to be that cadvisor bind mounts /var/lib/docker. (https://github.com/docker/docker/issues/20560#issuecomment-187157369) Both the rancher/agent and rancher/network-manager are doing the same thing:
rancher/agent
"Mounts": [
{
"Source": "/var/lib/docker",
"Destination": "/var/lib/docker",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
...
rancher/network-manager:
"Mounts": [
{
"Source": "/var/lib/docker",
"Destination": "/var/lib/docker",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
...
Please reference this bug for the fix. We are releasing a 1.2.x for a fix: https://github.com/rancher/rancher/issues/7152
As a workaround for now you can try restarting your network manager to clear up the errors about being unable to remove filesystems until the proper fix is released with v1.2.2.
As #7152 is closed, when 1.2.2 is released, you will be able to upgrade and have these issues fixed.
|
2025-04-01T04:35:18.562959
| 2017-01-11T05:52:11
|
200009163
|
{
"authors": [
"Haks1",
"deniseschannon",
"ffittschen",
"soumyalj"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10153",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/7375"
}
|
gharchive/issue
|
When upgrading network-services, it gets stuck in Upgrading state
When attempting to upgrade network-services that have new metadata images, the services may get stuck in upgrading state due to a scheduler bug.
Workaround: Stop scheduler, finish upgrade and start scheduler again.
How can I stop the scheduler?
I was stuck in the upgrade too; a guy on my team made the mistake of starting all the upgrades at the same time. With network-services, for example, I just did "cancel upgrade" on the stack page, and it then moved to another state that offered me "Rollback", which worked.
I got that problem on many stacks, it worked for all, but not with IPsec: https://puu.sh/thQHn/793dcde2d0.png
IPsec is stuck creating new containers and I can't do anything because it's used by other instances (it says that even if I stop them).
Component Version
Rancher v1.2.1
Cattle v0.174.8
User Interface v1.2.39
Rancher Compose v0.12.1
@Haks1 You can click on stop next to the scheduler service. :) And start it back up after the services have been upgraded.
I cannot speak as to wehther this scheduler fix would fix your issue, but I know it's a known issue with network-services upgrade when upgrading the metadata service.
Fixed in scheduler/v0.5.2.
Tested with upgrade from v1.2.2( with an older template) to v1.3.1-rc3 and v1.3.1-rc4. Scheduler was upgraded first and then network services. The network services upgraded successfully.
Happened to me as well with rancher/server:v1.6.7 and rancher/scheduler:v0.7.5
Updating network-services (rancher/network-manager:v0.6.6 to rancher/network-manager:v0.7.7 and rancher/metadata:v0.9.1 to rancher/metadata:v0.9.3) got stuck in upgrading.
Disabling scheduler allowed the upgrade of network-services to succeed.
|
2025-04-01T04:35:18.564882
| 2019-05-11T00:21:08
|
442933922
|
{
"authors": [
"loganhz",
"orangedeng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10154",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/20171"
}
|
gharchive/pull-request
|
[2.2.2-patch]Sorting etcd tls config before setup the config
Problem:
Monitoring gets redeployed due to the etcd params being updated. However, this is not expected, as the etcd address doesn't change at all.
Solution:
Sort it before assigning.
Issue:
https://github.com/rancher/rancher/issues/19945
LGTM
|
2025-04-01T04:35:18.566117
| 2019-07-01T08:24:33
|
462579986
|
{
"authors": [
"JacieChao",
"loganhz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10155",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/21248"
}
|
gharchive/pull-request
|
[master] Update pinganyunecs driver version to v0.2.0
Fix https://github.com/rancher/rancher/issues/20878
LGTM
|
2025-04-01T04:35:18.567706
| 2022-12-08T00:10:11
|
1483174812
|
{
"authors": [
"HarrisonWAffel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10156",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/39810"
}
|
gharchive/pull-request
|
psact update 2
https://github.com/rancher/rancher/issues/38701
@jiaqiluo @kinarashah this is now ready for review, sorry for the delay
PR has been rebased to resolve build errors
Build is now passing
|
2025-04-01T04:35:18.670100
| 2019-04-23T09:26:14
|
436087138
|
{
"authors": [
"alena1108",
"gitlawr",
"orangedeng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10157",
"repo": "rancher/types",
"url": "https://github.com/rancher/types/pull/801"
}
|
gharchive/pull-request
|
Add app & app revision fields
Add new app conditions:
ServiceAccountCreated
RoleCreated
Add new field for AppRevisionStatus:
ServiceAccountInjected
Related issue: https://github.com/rancher/rancher/issues/19381
Updated
LGTM
Due to the offline discussion with @cjellick , I change to add service account fields into PRTB instead. No longer need to add field to app and app revision.
/cc @loganhz
Updated
LGTM
|
2025-04-01T04:35:18.671140
| 2015-10-22T05:00:16
|
112731249
|
{
"authors": [
"alena1108",
"sangeethah"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10158",
"repo": "rancher/validation-tests",
"url": "https://github.com/rancher/validation-tests/pull/130"
}
|
gharchive/pull-request
|
Added test cases for selectorLink and selectorContainer, and a test case for a sidekick service with data containers.
@alena1108 , Can you please review ?
We should add test cases for native containers. Can be done in a separate PR
|
2025-04-01T04:35:18.718742
| 2019-12-16T11:02:22
|
538340000
|
{
"authors": [
"Gbergz",
"Zeoic",
"raoulvdberge"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10159",
"repo": "raoulvdberge/refinedstorage",
"url": "https://github.com/raoulvdberge/refinedstorage/issues/2375"
}
|
gharchive/issue
|
[1.14.4] Refined Storage crash when re-entering world
Issue description:
Refined Storage crashes the client when re-entering a world.
What happens:
The Gameclient crashes when trying to re-enter a existing world.
What you expected to happen:
Being able to enter the world without issues.
Steps to reproduce:
Enter the world
Wait
Crash
... Game closed.
Version (make sure you are on the latest version before reporting):
Minecraft: 1.14.4
Forge: 28.1.104
Refined Storage: 1.7.1
Does this issue occur on a server? [yes/no]
Not that I know of, no.
If a (crash)log is relevant for this issue, link it here:
https://pastebin.com/tA4VxS6V
I am getting the same error with the refined storage controller when restarting my dedicated server after setting up a refined storage system.
Using the All The Mods 4 Modpack.
https://pastebin.com/MwSKF24X
Thx for reporting. This issue will continue under #2381
|
2025-04-01T04:35:18.726659
| 2023-09-14T08:45:58
|
1896066274
|
{
"authors": [
"raphaelstolt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10160",
"repo": "raphaelstolt/lean-package-validator",
"url": "https://github.com/raphaelstolt/lean-package-validator/issues/33"
}
|
gharchive/issue
|
Drop PHP EOL version support
Update the required PHP version, i.e. 8.0, in composer.json and make a new release, i.e. v3.1.0.
Fixed with b39bd6e1bd31810a6ced31446331f94bf2f6817c.
|
2025-04-01T04:35:18.776483
| 2023-06-21T03:33:16
|
1766617588
|
{
"authors": [
"Kirandeep-Singh-Khehra",
"ashishujjain"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10161",
"repo": "rapidfort/community-images",
"url": "https://github.com/rapidfort/community-images/pull/314"
}
|
gharchive/pull-request
|
Added wordpress ironbank
Description of the change
Added an image directory to harden the WordPress image provided by Iron Bank, using Docker Compose as the runtime, and updated the workflows.
Checklist
[ ] Added to the README.md using readme-generator
Congrats @Kirandeep-Singh-Khehra on your first commit.
|
2025-04-01T04:35:18.778774
| 2019-08-19T15:39:01
|
482384579
|
{
"authors": [
"jeanmarcp",
"rickspencer3"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10162",
"repo": "rapido-mobile/rapido-flutter",
"url": "https://github.com/rapido-mobile/rapido-flutter/issues/120"
}
|
gharchive/issue
|
Boolean field not supported by Parse persistence provider
Hi,
When using the Parse persistence, the boolean field type is not supported
because of the "?" at the end of the name of the field.
With this label: "Available": "available?" in the documentList, Parse sends this error
╭-- Parse Response
Class: Products
Function: ParseApiRQ.create
Status Code: 105
Type: InvalidKeyName
Error: Invalid field name: available?.
╰--
My Parse server is hosted on back4app.
Oh Hi! I did not realize anyone was using this or I would have helped, I'm really sorry. I have been away from this project for a while due to real work. I am currently catching up all of the out of date dependencies, testing that everything still works, then I will get back to Parse support.
|
2025-04-01T04:35:18.859601
| 2024-10-19T05:56:17
|
2598786544
|
{
"authors": [
"bn-c"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10163",
"repo": "rapiz1/rathole",
"url": "https://github.com/rapiz1/rathole/issues/396"
}
|
gharchive/issue
|
local_addr = "<IP_ADDRESS>:4001"
Feature Proposed
Is there a way to forward all traffic from a specific port bi-directionally?
Use Case
I am running an IPFS node behind a NAT, and forwarding <IP_ADDRESS>:4001 to a remote VPS with public internet access.
The network can see my IPFS node, but the DHT refuses to populate properly because all connections go through <IP_ADDRESS>:4001 instead of :4001
I realized that this is not what rathole is designed for. Closed as unrelated.
|
2025-04-01T04:35:18.868707
| 2023-09-26T15:39:55
|
1913800881
|
{
"authors": [
"Nuhvi",
"raptorswing"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10164",
"repo": "raptorswing/rustydht-lib",
"url": "https://github.com/raptorswing/rustydht-lib/issues/73"
}
|
gharchive/issue
|
Maintenance status
Hello, I am working on https://pkarr.org which is mostly a thin wrapper around BEP_0044 to enable DNS queries for Ed25519 keys.
I think this DHT implementation is the closest to something I can extend and add BEP_0044 support to. However, I am curious if you are still maintaining this library, so I can open a PR with minimal changes. Or should I just maintain a separate fork and be more liberal with the changes I would like to introduce, but also maintain it long-term?
Thanks and I appreciate your work on this.
Hi, thanks for reaching out.
I am working on https://pkarr.org/ which is mostly a thin wrapper around BEP_0044 to enable DNS queries for Ed25519 keys.
Sounds like a cool project!
I think this DHT implementation is the closest to something I can extend and add BEP_0044 support to. However, I am curious if you are still maintaining this library, so I can open a PR with minimal changes. Or should I just maintain a separate fork and be more liberal with the changes I would like to introduce, but also maintain it long-term?
As you can probably see from the commit history, I haven't been working on this recently. It's a hobby project that I did mainly for fun and to learn Rust. Where I'm at currently with life and this project, I'm happy to look at PRs/issues on a best-effort basis. Small, simple, and/or reasonable things have a pretty good chance of getting reviewed and merged. Large, complex, PRs, or ones requiring a lot of collaboration, on the other hand, may never get enough attention from me to reach the finish line.
So if your BEP0044 changes are small, I would tentatively say to do them here and I will try to help get them merged. But if you think it's going to be a lot of code, or pretty complex, or if there are other big features you're planning to implement, then you may be better served by forking the project.
Thanks and I appreciate your work on this.
Looking through the code, there are definitely a few rough patches and things I would like to clean up, but I hope it can be useful to you.
Thanks for the quick reply, I just wanted to confirm I am not fragmenting development. I will maintain a fork then. Thanks again.
Sounds good. Good luck!
|