| added (string, date: 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], date: 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, lengths 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, lengths 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:40:28.259661
| 2017-01-25T15:20:01
|
203131470
|
{
"authors": [
"sstarcher",
"xreeckz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10907",
"repo": "sstarcher/grafana-dashboards",
"url": "https://github.com/sstarcher/grafana-dashboards/issues/1"
}
|
gharchive/issue
|
influxdb metrics weird
As seen below, it reports http data as 0.X. Is that intentional?
I'm not sure what you are referring to with 0.X.
In the image attached, it shows HTTP queries as 0.05 ops or 0.30 ops.
That seems inaccurate to me, unless I don't understand it right?
I see what you are referring to. I don't currently have those dashboards running so I can't validate the data, but that does look odd.
closing as the old dashboards are not kept up to date.
|
2025-04-01T06:40:28.286342
| 2023-02-07T14:49:08
|
1574471969
|
{
"authors": [
"ssube"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10908",
"repo": "ssube/onnx-web",
"url": "https://github.com/ssube/onnx-web/issues/115"
}
|
gharchive/issue
|
fix release build for npm and pypi packages
The release jobs for a tag pipeline are missing credentials and/or using the wrong image to publish their packages:
https://git.apextoaster.com/ssube/onnx-web/-/jobs/432976
https://git.apextoaster.com/ssube/onnx-web/-/jobs/432977
Fix that before the next release, so that it can happen automatically.
https://git.apextoaster.com/ssube/onnx-web/-/jobs/434939 passed and a fix for https://git.apextoaster.com/ssube/onnx-web/-/jobs/434921 has been pushed.
|
2025-04-01T06:40:28.315577
| 2024-09-22T12:19:26
|
2541052141
|
{
"authors": [
"stc1988",
"su-cheng-yang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10909",
"repo": "stack-chan/stack-chan",
"url": "https://github.com/stack-chan/stack-chan/issues/294"
}
|
gharchive/issue
|
build flash -firmware
please help me
Hi, the link you posted is broken and I can't check the details. Could you please repost it?
Hi, thanks for your reply. This issue was due to computer configuration problems, which have already been solved. I have encountered another problem:
npm run mod ./mods/face/manifest.json
Error: no manifest!
please help me!
Please specify the target according to the device you are using, as shown below.
npm run mod --target=esp32/m5stack_core2 ./mods/face/manifest.json
I followed your instructions to run the command, but still received an error message. However, I changed --target=esp32/m5stack_core2 to --target=esp32, and now it runs.
I ran it as shown below:
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32/m5stack_core2 ./mods/face/manifest.json
<EMAIL_ADDRESS>mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
Error: No such file or directory
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32 ./mods/face/manifest.json
<EMAIL_ADDRESS>mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
But I encountered another error, as shown below:
Python requirements are satisfied.
Please wait - probing for device
Failed to find a suitable device. Check your connections or set UPLOAD_PORT
Please wait - probing for device
Failed to find a suitable device. Check your connections or set UPLOAD_PORT
/bin/sh: 1: [[: not found
/home/s/Projects/moddable/tools/serial2xsbug/serial2xsbug_lin.c:132: No such file or directory
make: *** [/home/s/Projects/moddable/build/tmp/esp32/debug/face/makefile:136: debug] Error 1
However, I have already connected an m5stack.basic to the virtual machine using Type-C.
If you have recently set up the environment with npm run setup, it is likely due to an issue with Moddable. In that case, updating Moddable to the latest version
should resolve the issue.
https://github.com/Moddable-OpenSource/moddable/issues/1408
You can check the commit of the installed Moddable with npm run doctor.
I'm sorry; following your advice, I updated Moddable to the latest version and reproduced the same problem, as shown below:
Build environment:linux
Target device: esp32/m5stack
Steps to Reproduce:
Build and install the app using this build command: npm run mod --target=esp32/m5stack ./mods/look_around/manifest.json
/bin/sh: 1: [[: not found
/home/s/Projects/moddable/tools/serial2xsbug/serial2xsbug_lin.c:132: No such file or directory
make: *** [/home/s/Projects/moddable/build/tmp/esp32/debug/look_around/makefile:133: debug] Error 1
It's true that there is a failure, but from the logs, it seems that another issue is occurring.
How did you update Moddable?
I will perform the following actions:
git clone https://github.com/Moddable-OpenSource/moddable
I made another attempt and got stuck at the following prompt:
s@ubuntu:~/stack-chan/firmware$ npm run mod --target=esp32/m5stack ./mods/face/manifest.json
<EMAIL_ADDRESS>mod
cross-env npm_config_target?=esp32/m5stack cross-env-shell mcrun -d -m -p $npm_config_target ./mods/face/manifest.json
Detecting the Python interpreter
Checking "python3" ...
Python 3.8.10
"python3" has been detected
Checking Python compatibility
Using a supported version of tool cmake found in PATH: 3.16.3.
However the recommended version is 3.24.0.
Using a supported version of tool cmake found in PATH: 3.16.3.
However the recommended version is 3.24.0.
Constraint file: /home/s/.espressif/espidf.constraints.v5.3.txt
Requirement files:
/home/s/esp32/esp-idf/tools/requirements/requirements.core.txt
Python being checked: /home/s/.espressif/python_env/idf5.3_py3.8_env/bin/python
Python requirements are satisfied.
/bin/sh: 1: [[: not found
Has this open-source project already turned away interested developers? Please reply to me, thank you.
As you continue developing stack-chan, I recommend understanding Moddable as well.
For this Moddable update, simply updating the repository is not sufficient. Please refer to the following documentation for updating and other troubleshooting.
https://github.com/Moddable-OpenSource/moddable/blob/public/documentation/devices/esp32.md
thank you for your reply
|
2025-04-01T06:40:28.317315
| 2024-12-04T15:28:12
|
2718106092
|
{
"authors": [
"razvan",
"sbernauer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10910",
"repo": "stackabletech/documentation",
"url": "https://github.com/stackabletech/documentation/pull/688"
}
|
gharchive/pull-request
|
doc: describe self signed certificate lifetime configuration
Part of: https://github.com/stackabletech/issues/issues/586
Replaced by https://github.com/stackabletech/documentation/pull/689, which incorporated the changes from here
|
2025-04-01T06:40:28.333537
| 2024-11-12T14:25:59
|
2652344419
|
{
"authors": [
"blbacelar",
"endocytosis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10911",
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/2098"
}
|
gharchive/issue
|
Not able to see my previous chat
Describe the bug
When I open bolt.new and open the side menu, I can't see my previous chat. I had a full app that I was almost finishing, and now I can't find it.
Link to the Bolt URL that caused the error
https://bolt.new/
Steps to reproduce
As you can see, I don't see any project here in this screenshot.
But if I open https://stackblitz.com/ I can see my previous projects, though only the ones for which I created a repo. Unfortunately, I didn't create a repo for the project I was working on, and I don't see how I can recover it.
Expected behavior
I should be able to see all my projects.
Screen Recording / Screenshot
No response
Platform
OS: macOS
Browser: Chrome
Version: Version 130.0.6723.117 (Official Build) (arm64)
Additional context
No response
Appreciate the feedback! We are aware of this issue with chat persistence. Temporary workarounds and updates can be found here: #39. (Go to stackblitz.com, login (same credentials as Bolt), click Collections on the left-hand side, click Bolt Collection). Appreciate your patience as improvements continue to be made.
|
2025-04-01T06:40:28.345997
| 2024-10-05T12:02:44
|
2567930527
|
{
"authors": [
"Jedimind369",
"danishmbutt",
"kc0tlh",
"tincleo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10912",
"repo": "stackblitz/bolt.new",
"url": "https://github.com/stackblitz/bolt.new/issues/50"
}
|
gharchive/issue
|
Bug: Project Not Persisted To Backend for Some Free Tier Users (RESOLVED)
Describe the bug
Project suddenly disappeared after a page refresh, making me lose 4 hours of work in an instant. :-/
Link to the Bolt URL that caused the error
https://bolt.new/~/bolt-vanilla-vite-6e7ljo
Steps to reproduce
Browser window got refreshed and the project disappeared completely.
Expected behavior
The project should still be there after the browser window is refreshed.
Screen Recording / Screenshot
Platform
Browser name = Chrome
Full version = <IP_ADDRESS>
Major version = 129
navigator.appName = Netscape
navigator.userAgent = Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36
performance.memory = {
"totalJSHeapSize": 82182935,
"usedJSHeapSize": 80779135,
"jsHeapSizeLimit":<PHONE_NUMBER>
}
Username = tincleo
Chat ID = eb14bf1cab7d
Additional context
Please add some versioning to save versions while building something.
Apologies for this frustrating experience!
While there is a bug affecting a small number of free users (see below for more info) it is possible that what you are experiencing is simply related to our current limited chat persistence model. Chat history is currently not persisted across browsers/devices, and chats will disappear from the Bolt.new UI if your cache is cleared, so it is possible that your project was actually saved to StackBlitz behind the scenes. If that is the case, you can find your project on StackBlitz.com (same account as Bolt.new) > Collections Tab > Bolt collection. If this is you, we are landing improved chat persistence in 1-2 weeks and you can track that in issue 39.
Unfortunately, if your project is not stored there, then it is likely you are experiencing the bug described below:
Our engineering team has been looking at the logs and links that were provided related to this issue (thanks for providing these) and we have identified a root cause affecting a small number of users on our free tier rate limits. We are shipping a fix now but unfortunately are unlikely to be able to recover projects that ran into this error. We are working as I type this to ship the fix!
While bugs can happen during beta periods, we are sorry that you experienced this! I want to thank everyone that reported this issue and provided project links and information as it was key in identifying and resolving this issue for everyone going forward :man-bowing:
🎉 This fix has been shipped to production: This should not affect any users going forward!
If you are looking for a missing project, here's where you should find it:
Find Missing Chats or Open Chats On a New Browser/Device
Login to StackBlitz.com (same account as Bolt.new)
Open the Collections Tab
Open the Bolt collection
Open the project you want to edit in Bolt
Press the "Open in Bolt" button and continue coding!
Dear Bolt.new Support Team,
I am writing to follow up on my previous communication regarding the loss of my project, "Advanced Bookmark Management System with React and TypeScript." I appreciate the recent update indicating that a fix has been shipped to production, but unfortunately, this does not resolve my specific issue.
To reiterate my situation:
Paid User Status: I was using a paid plan when my project disappeared. This incident has significantly impacted my work, and I believe it warrants additional attention beyond what has been provided for free-tier users.
Project Recovery Attempts: I have followed the suggested steps to locate missing projects via StackBlitz.com under the Collections Tab > Bolt collection. However, my original project is not there. The files currently available were created after the loss as I attempted to restart my work.
Given these circumstances, I am requesting:
A thorough investigation into what happened to my original project and any potential recovery options available for paid users.
A refund or token reset due to the disruption caused by this data loss, reflecting my status as a paying customer.
I understand that bugs can occur during beta periods, but as a paying user, I expected a higher level of data protection and customer support. Could you please escalate this issue to ensure it receives the necessary attention?
Thank you for your assistance. I look forward to your response and a satisfactory resolution to this matter.
Best regards,
Alex
@Jedimind369 absolutely! Please reach out to<EMAIL_ADDRESS>with the same info as above + project URLs for assistance with this!
while you are fixing the bug for chat persistence -- what is the guidance for storing the chat projects? Can I save them locally or in GitHub? It's not clear how bolt.new projects can be saved.
|
2025-04-01T06:40:28.353123
| 2024-07-30T17:16:20
|
2438313524
|
{
"authors": [
"AriPerkkio",
"noam-honig"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10913",
"repo": "stackblitz/tutorialkit",
"url": "https://github.com/stackblitz/tutorialkit/issues/192"
}
|
gharchive/issue
|
Adding urls to the previews definition
Is your feature request related to a problem?
I want to show several preview tabs - with different relative urls for example:
{port:5173, title:"frontend"}
{port:5173, title:"api",url:"/api/tasks"}
{port:5173, title:"admin",url:"/api/admin"}
Describe the solution you'd like.
I want to show several preview tabs - with different relative urls for example:
{port:5173, title:"frontend"}
{port:5173, title:"api",url:"/api/tasks"}
{port:5173, title:"admin",url:"/api/admin"}
Describe alternatives you've considered.
reviewed the https://tutorialkit.dev/reference/configuration/ url and couldn't find anything
Additional context
No response
Currently this is not possible but we are planning to add support for it. I've done some design for the possible API earlier:
previews:
- 1234/pathname
- [1234/pathname, 'Title']
- { port: 1234, path: '/pathname', title: 'Title' }
Released in 0.1.5.
|
2025-04-01T06:40:28.354640
| 2020-09-05T18:55:57
|
694146941
|
{
"authors": [
"simonmichael"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10914",
"repo": "stackbuilders/cassava-megaparsec",
"url": "https://github.com/stackbuilders/cassava-megaparsec/issues/12"
}
|
gharchive/issue
|
Allow megaparsec 9.0.0
https://github.com/commercialhaskell/stackage/issues/5632
cassava-megaparsec-2.0.1 builds and passes tests with megaparsec-9.0.0, so a hackage revision should be enough.
Fixed by the 2.0.2 release.
|
2025-04-01T06:40:28.360304
| 2016-01-17T17:03:10
|
127108706
|
{
"authors": [
"hughsk",
"mattdesl"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10916",
"repo": "stackgl/gl-texture2d",
"url": "https://github.com/stackgl/gl-texture2d/issues/11"
}
|
gharchive/issue
|
tex.setPixels binds to unit zero
This seems to be causing issues with gl-audio-analyser
https://github.com/stackgl/gl-audio-analyser/issues/1
Because the bind() call in setPixels is not necessarily going to use the same active unit as the user requested with tex.bind(n), it seems to be causing some issues. I didn't realize this, but according to the spec, the two seem tied somehow:
glTexImage2D specifies a two-dimensional or cube-map texture for the current texture unit, specified with glActiveTexture.
I wonder if we should keep track of the most recently bound texture unit for that instance, and bind to that? So:
tex1.setPixels(...) // binds to 0
tex1.bind(3)
tex2.bind(5)
tex1.setPixels(...) // binds to 3
A little less likely to cause issues, but it might still result in weirdness if you're working at the WebGL level. Either way would be worth one of us documenting this behaviour in the readme's setPixels() docs when/if we next have a chance :)
|
2025-04-01T06:40:28.382774
| 2024-06-06T13:38:36
|
2338292263
|
{
"authors": [
"ebensh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10917",
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/11419"
}
|
gharchive/pull-request
|
ROX-23972: Add enableNetworkPolicies option to Helm charts
Description
Add the enableNetworkPolicies flag (default true) to the ACS Operator's CentralServices and SecuredClusterServices charts. Disabling it will prevent NetworkPolicy objects from being created.
Checklist
[x] Investigated and inspected CI test results
[x] Unit test and regression tests added
[x] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
Testing Performed
Here I tell how I validated my change
go test -v github.com/stackrox/rox/pkg/helm/charts/tests/centralservices
go test -v github.com/stackrox/rox/pkg/helm/charts/tests/securedclusterservices
CI
Reminder for reviewers
In addition to reviewing code here, reviewers must also review testing and request further testing in case the
performed one does not seem sufficient. As a reviewer, you must not approve the change until you understand the
performed testing and you are satisfied with it.
LGTM. Did you manually test that disabling the option and then upgrading the installation removes the existing network policies?
Yes, good idea :) Now I did:
kind delete cluster
kind create cluster
kubectx kind-kind
cdrox
./bin/linux_amd64/roxctl helm output central-services --image-defaults=development_build --debug --remove
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation
k get netpol -A
NAMESPACE NAME POD-SELECTOR AGE
stackrox allow-ext-to-central app=central 2m43s
stackrox central-db app=central-db 2m43s
stackrox scanner app=scanner 2m43s
stackrox scanner-db app=scanner-db 2m43s
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation --set system.enableNetworkPolicies=false
k get netpol -A
No resources found
helm upgrade --install -n stackrox stackrox-central-services --create-namespace ./stackrox-central-services-chart --set central.persistence.none=true --disable-openapi-validation --set system.enableNetworkPolicies=true
k get netpol -A
NAMESPACE NAME POD-SELECTOR AGE
stackrox allow-ext-to-central app=central 2s
stackrox central-db app=central-db 2s
stackrox scanner app=scanner 2s
stackrox scanner-db app=scanner-db 2s
|
2025-04-01T06:40:28.387727
| 2023-10-26T12:47:33
|
1963490497
|
{
"authors": [
"dvail"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10918",
"repo": "stackrox/stackrox",
"url": "https://github.com/stackrox/stackrox/pull/8375"
}
|
gharchive/pull-request
|
fix(ui): Sync FE and BE types for exception requests
Description
Updates UI expected response types to match BE type refactor in https://github.com/stackrox/stackrox/pull/8333
Checklist
[ ] Investigated and inspected CI test results
[ ] Unit test and regression tests added
[ ] Evaluated and added CHANGELOG entry if required
[ ] Determined and documented upgrade steps
[ ] Documented user facing changes (create PR based on openshift/openshift-docs and merge into rhacs-docs)
If any of these don't apply, please comment below.
Testing Performed
Repeat the testing steps for submitting a deferral or false positive request from previous PRs.
Current dependencies on/for this PR:
master
PR #8375 👈
This comment was auto-generated by Graphite.
|
2025-04-01T06:40:28.404965
| 2023-04-27T10:45:49
|
1686593733
|
{
"authors": [
"danielsveins",
"kristinnstefansson"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10919",
"repo": "stadlar/IST-FUT-FMTH",
"url": "https://github.com/stadlar/IST-FUT-FMTH/issues/167"
}
|
gharchive/issue
|
Claims - Inconsistency in the type of percentage properties?
I have a question regarding the types of the percentage properties of claimDetails.
percentage in defaultInerest is defined as number.
defaultCharge and discount (dayAmountItem) can be either fixed amount or percentage. They are defined as integer.
Is there a specific reason for this or should they all be of the type number?
This issue was addressed in the Claims 3.1 work done under workgroup VH-7. See the referenced comments and pull requests as appropriate.
|
2025-04-01T06:40:28.433364
| 2021-10-18T10:31:48
|
1028935642
|
{
"authors": [
"stamateas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10921",
"repo": "stamateas/upptime",
"url": "https://github.com/stamateas/upptime/issues/1185"
}
|
gharchive/issue
|
⚠️ GitLab Server has degraded performance
In bc9cd72, GitLab Server (https://gitlab01.its-telekom.eu) experienced degraded performance:
HTTP code: 200
Response time: 9257 ms
Resolved: GitLab Server performance has improved in 1cfe653.
|
2025-04-01T06:40:28.474101
| 2017-10-01T22:13:41
|
261950870
|
{
"authors": [
"bbbales2",
"bob-carpenter",
"mcol"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10922",
"repo": "stan-dev/math",
"url": "https://github.com/stan-dev/math/issues/635"
}
|
gharchive/issue
|
No quantiles test in the unit tests for exp_mod_normal_rng
Summary:
There aren't any tests that validate exp_mod_normal_rng via assert_matches_quantiles; such tests exist for all the other distributions.
Description
Current code: https://github.com/stan-dev/math/blob/develop/test/unit/math/prim/scal/prob/exp_mod_normal_test.cpp
Example _rng with test: https://github.com/stan-dev/math/blob/develop/test/unit/math/prim/scal/prob/normal_test.cpp
The issue with this is that it's not easy to tell whether exp_mod_normal_rng is working correctly. I found this out after I introduced a bug into it in my local working branch (trying to make it a little faster; those changes didn't get committed) and the unit tests still passed.
Current Version:
v2.17.0
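For context, a quantile-based RNG check of the kind used for the other distributions can be illustrated as follows. The real tests are C++ gtest; this Python version, with assumed names and an assumed scipy mapping, is purely for illustration:

import numpy as np
from scipy import stats

def assert_matches_quantiles(draws, dist, n_bins=10, alpha=1e-6):
    # Bin the draws by the distribution's theoretical quantiles and run a
    # chi-squared goodness-of-fit test: each bin should hold ~N/n_bins draws.
    cuts = dist.ppf(np.arange(1, n_bins) / n_bins)  # interior bin boundaries
    counts = np.bincount(np.searchsorted(cuts, draws), minlength=n_bins)
    expected = len(draws) / n_bins
    chi2 = ((counts - expected) ** 2 / expected).sum()
    assert chi2 < stats.chi2.ppf(1 - alpha, df=n_bins - 1)

# exp_mod_normal(mu, sigma, lambda) draws are normal plus exponential noise;
# scipy's equivalent is exponnorm with K = 1/(sigma*lambda) (assumed mapping).
rng = np.random.default_rng(0)
mu, sigma, lam = 1.0, 2.0, 0.5
draws = rng.normal(mu, sigma, 100_000) + rng.exponential(1.0 / lam, 100_000)
assert_matches_quantiles(draws, stats.exponnorm(K=1 / (sigma * lam), loc=mu, scale=sigma))

A buggy _rng (say, one that drops the exponential term) would fail this check, which is exactly the regression the issue asks to guard against.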
Thanks for filing the issue. The CDFs are implemented, so it shouldn't be hard to add them.
P.S. I assigned to you (@bbbales2), but if you don't want to deal with it, please unassign yourself.
I believe this was fixed in #833.
|
2025-04-01T06:40:28.514925
| 2019-12-18T06:15:27
|
539474159
|
{
"authors": [
"SteveBronder",
"bob-carpenter",
"rok-cesnovar",
"serban-nicusor-toptal"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10923",
"repo": "stan-dev/math",
"url": "https://github.com/stan-dev/math/pull/1525"
}
|
gharchive/pull-request
|
Generic var templates for operators and std::iterator_trait var/fvar specialization
Summary
This pulls out / cleans up some of the stuff in the complex branch. Namely
Pulls out and tidies up the templates that are added to the var operators
Adds std::iterator_traits specializations for var and fvar
Adds cmath member functions to the stan::math namespace for autodiff types
Tests
@bob-carpenter do we need additional tests for the iterators?
Side Effects
std will have std::iterator_traits specializations for var and fvar
Checklist
[x] Math issue #123
[x] Copyright holder: Steve Bronder
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to the license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
[x] the basic tests are passing
unit tests pass (to run, use: ./runTests.py test/unit)
header checks pass, (make test-headers)
docs build, (make doxygen)
code passes the built in C++ standards checks (make cpplint)
[x] the code is written in idiomatic C++ and changes are documented in the doxygen
[x] the new changes are tested
@bob-carpenter ready for review!
@SteveBronder Should I really be reviewing this given that I wrote some of it?
Good point! Would anyone else mind taking a look at this?
OK. I'll just go ahead and review today or tomorrow---it will have two of us looking at all the code.
I've read through this mainly because I happened to see how you dealt with std::isinf(), which hit me in #1545 (and I had worked around it in not such a nice way). It turned out that I think I've learned a lot of new things, so it was time well spent.
Glad to hear and thanks for looking this over!
As you'll see, most of my comments concern the ordering of typenames that doesn't match the ordering of function arguments. Of course that's not a big deal, so I leave it up to you what to do about that.
I think that's v reasonable to fix up
I totally skipped stan/math/rev/core/std_iterator_traits.hpp because I can't even pretend to know what use it has, and also stan/math/rev/scal/fun/pow.hpp because I haven't cracked std::forward yet. :)
That's fair; after snooping around online, the link below makes me think they are much harder to implement than I originally thought. I'll probably have to rework these:
https://github.com/tzlaine/stl_interfaces
OK. I'll just go ahead and review today or tomorrow---it will have two of us looking at all the code. This is going to take some reviewing and I have to head out soon, so I can get to it tomorrow.
Worst case we had 3 sets of eyes on it now (thanks again @mcol!) so I think that's fine
looking at cmath it looks like all these are in the cmath header? For clarification you want signbit, isinf, isfinite, isnan, copysign, and isnormal to be in their own header files?
Sorry, but yes. That's how we've laid out the other cmath function overloads.
Sorry, but yes. That's how we've laid out the other cmath function overloads.
lol wugh aight I'll get to this. They also need tests
+using require_all_autodiff_t = require_all_t<is_autodiff...>;
Now that I see what they're doing I think these should be enable_if_all_autodiff_t.
... I've leaned towards not documenting these; since aliases are inlined in the docs, you can see the full definition.
Nice. So that'll follow cppreference's definition, which is OK by me.
var is just a pointer to vari, so it can be copied by value efficiently. An fvar is just two Ts held by value, not by reference. So I don't see anything to move anywhere.
On Dec 30, 2019, at 1:53 PM, Steve Bronder<EMAIL_ADDRESS>wrote:
@SteveBronder commented on this pull request.
In stan/math/rev/core/operator_addition.hpp:
@param a First variable operand.
@param b Second variable operand.
@return Variable result of adding two variables.
*/
-inline var operator+(const var& a, const var& b) {
+template <typename Var1, typename Var2, require_all_var_t<Var1, Var2>...>
The direct answer to this is that a function signature with const var&& would only accept constant rvalue types. The Var1&& and Var2&& in the function signature are universal references. Combined with require_all_var_t the operators here accept var&, const var&, and var&& (they could accept const var&& though idt const rvalues are a thing). It allows the caller (callee?) to do something like auto the_sign = copysign(std::forward(an_fvar)); to move a fvar, for example, when the call site can tell a_fvar is movable.
The longer indirect answer is that if we want "universal references" in function signatures for perfect forwarding then you can't have any declaration specifiers alongside the type specifier for each function parameter.
Would we ever allow a var to be moved?
I think I get what perfect forwarding is doing after reading a couple explanations. I should've thought through why T&& would be more general than var&&---I always forget that sometimes T matches more than just a base type---it picks up qualifiers.
I don't think we ever need to pass non-constant references for autodiff variables or move them. Where would the extra generality get used? Or the other way around, is there an example where (const var&) as an argument for a var won't work?
fvar is 16 bytes and fvar<fvar> is 32 bytes whether T is var or double. But everything lives on the function call stack, so I'm not seeing what can get usefully forwarded.
On Dec 30, 2019, at 2:10 PM, Steve Bronder<EMAIL_ADDRESS>wrote:
@SteveBronder commented on this pull request.
In stan/math/rev/core/operator_equal.hpp:
@param a First variable.
@param b Second variable.
@return True if the first variable's value is the same as the
second's.
*/
-inline bool operator==(const var& a, const var& b) {
+template <typename Var1, typename Var2, require_all_var_t<Var1, Var2>...>
I agree but let's flag this as an issue and do it in another PR once this goes through.
On a side note, do we think of fvar's as a "heavy" type? we may want specializations that forward for fvar's with vars or fvar's inside
Yep! I think with all of these it would be good to do them as a separate PR, else this is going to blast off into like an 800-1000 line change PR
Agreed.
I can help write the tests if you want to give me some or all of them to do.
Actually I think the only ones we need testing for are
iterator_traits for var / fvar (idk if this needs tests, but we could check that we can call and declare it with the types we expect)
AD types and arithmetic for:
signbit
isinf
isfinite
isnan
isnormal
copysign
I can probs get to them this weekend if you can do it earlier then :+1:
Now that I see what they're doing I think these should be enable_if_all_autodiff_t.
I'll make a separate issue for this
I don't think we ever need to pass non-constant references for autodiff variables or move them. Where would the extra generality get used? Or the other way around, is there an example where (const var&) as an argument for a var won't work?
Tadej and I had a big discussion on ref vs const ref for Eigen matrices here. There's also this stackoverflow post I found interesting, talking about casting rvalues to const lvalues in function calls.
For var I think it might be a different story. Though if you look at fvar's inverse function in mat it has these eigen matrix fvars multiplied together.
m_deriv = multiply(multiply(m_inv, m_deriv), m_inv);
I think the above stackoverflow's rvalue / const lvalue example would apply here
I have to think about this. var seems to be trivially copyable so idk. If var ever became non-trivial then these could matter.
Do non-const references hurt in some way?
I'll look over the C++ templates book tmrw, section 7 has a bunch of stuff about passing ref, const ref, etc. stuff
fvar is 16 bytes and fvar<fvar> is 32 bytes whether T is var or double. But everything lives on the function call stack, so I'm not seeing what can get usefully forwarded.
I'm not following why everything sitting on the function call stack makes their sizes not relevant
I'm not following why everything sitting on the function call stack makes their sizes not relevant
Because there's no memory allocated with malloc to steal.
Compare this with the behavior of std::vector, which allocates memory using malloc and frees it in the destructor. When you deep copy a standard vector, it requires a malloc and copy, whereas when you move it's just an assignment.
With var, there's nothing in memory to be moved. So a copy is just copying a pointer.
An fvar is just a struct containing two T type objects. Again no malloc going on and there's no difference between a copy and a deep copy.
Because there's no memory allocated with malloc to steal.
gah! brainfart alright I'll swap these back (though I'd like to leave the templates for Arith)
I think the templates for arithmetic make sense---it cuts out a lot of code.
Is this ready to review again?
Not yet! Sorry, gonna try to get around to it tonight. Didn't have time to write tests yet for the new functions. I'll ping you when this is ready for review
The problem now is that the test server is running out of memory. @serban-nicusor-catena ---should we just restart the tests or does something else need to happen?
g++ -std=c++1y -m64 -D_REENTRANT -Wall -Wno-unused-function -Wno-uninitialized -Wno-unused-but-set-variable -Wno-unused-variable -Wno-sign-compare -Wno-unused-local-typedefs -I lib/stan_math/lib/tbb_2019_U8/include -O3 -I src -I . -I lib/stan_math/ -I lib/stan_math/lib/eigen_3.3.3 -I lib/stan_math/lib/boost_1.72.0 -I lib/stan_math/lib/sundials_5.1.0/include -I lib/stan_math/lib/gtest_1.8.1/include -I lib/stan_math/lib/gtest_1.8.1 -D_USE_MATH_DEFINES -DBOOST_DISABLE_ASSERTS -c src/test/unit/lang/parser/assignment_statement_test.cpp -o test/unit/lang/parser/assignment_statement_test.o
cc1plus.exe: out of memory allocating 2097112 bytes
cc1plus.exe: out of memory allocating 1442192 bytes
make/tests:13: recipe for target 'test/unit/lang/generator/generate_idxs_test.o' failed
mingw32-make: *** [test/unit/lang/generator/generate_idxs_test.o] Error 1
mingw32-make: *** Waiting for unfinished jobs....
make/tests:13: recipe for target 'test/unit/lang/generator/generate_cpp_test.o' failed
mingw32-make: *** [test/unit/lang/generator/generate_cpp_test.o] Error 1
Argh, Nic did apply this fix https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base/embedded/2016/cc1plus-exe--out-of-memory-allocating-65536-bytes.html but it seems to not do the trick.
It only occurs occasionally on the recently added Windows machine (running in our lab in Ljubljana). Let me know @serban-nicusor-toptal if I can do something you can't do remotely. Will also buy some additional RAM.
I restarted the upstream tests https://jenkins.mc-stan.org/blue/organizations/jenkins/Math Pipeline/detail/PR-1525/30/pipeline
If you only restart a stage, GitHub doesn't show the yellow dot, but if it passes it will go green.
Thanks for the help here @rok-cesnovar
RAM and swap are more than enough, I think, so it may be a software configuration that doesn't let it allocate more... I'll look more into it tonight.
Yatta! The fully-templated version works. This is useful, as I'll be able to use it elsewhere where I've been having trouble with unwanted promotion of primitives to var.
@SteveBronder or @rok-cesnovar : would one of you review my changes to pow() here---nothing else has changed since it's been reviewed. I also added tests for instantiating pow.
I approved the changes I was waiting for.
Sorry for the delay I'll take a look at this today
|
2025-04-01T06:40:28.531886
| 2023-11-16T22:14:00
|
1997882501
|
{
"authors": [
"SteveBronder",
"avehtari"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10924",
"repo": "stan-dev/stan",
"url": "https://github.com/stan-dev/stan/pull/3243"
}
|
gharchive/pull-request
|
update psis to not add back the max and return the unnormalized resample ratios
Fixes #3241 by just removing the line that adds back the max log likelihood ratio
Submission Checklist
[x] Run unit tests: ./runTests.py src/test/unit
[x] Run cpplint: make cpplint
[x] Declare copyright holder and open-source license: see below
Summary
Intended Effect
How to Verify
Should we have tests for this? This just seemed like a numeric bug inside of the function so I'm not sure if / how to test this
Side Effects
Documentation
Copyright and Licensing
Please list the copyright holder for the work you are submitting (this will be you or your assignee, such as a university or company): Flatiron Institute
By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Fixes https://github.com/stan-dev/stan/issues/3241 by just removing the line that adds back the max log likelihood ratio
it seems you also dropped the normalization, which is OK if boost accepts unnormalized weights
it seems you also changed how the truncation is done, by using a one-liner instead of a for loop; as I'm not familiar with the syntax I'm not able to verify that it does the same thing, but I assume you know it's the same
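For readers following along, the numerical pattern under discussion can be sketched in a few lines of numpy (the function name and the truncation bound are placeholders, not Stan's exact rule):

import numpy as np

def resample_ratios(log_ratios, bound):
    # Subtract the max for numerical stability and do NOT add it back:
    # resampling only depends on relative weights, so returning them
    # unnormalized is fine as long as the consumer accepts that.
    w = np.exp(log_ratios - np.max(log_ratios))
    # Truncation as a vectorized one-liner instead of an explicit for loop.
    return np.minimum(w, bound)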
|
2025-04-01T06:40:28.566869
| 2019-11-01T15:28:06
|
516182409
|
{
"authors": [
"kjmahalingam",
"ngfreiter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10925",
"repo": "standardhealth/sushi",
"url": "https://github.com/standardhealth/sushi/pull/2"
}
|
gharchive/pull-request
|
Adding FixedValueRule class
This adds a class for fixed value rules in the mold of the current existing Rule classes.
I'm on board with this, and agree with Chris's assessment!
|
2025-04-01T06:40:28.590555
| 2022-12-17T01:22:44
|
1501078791
|
{
"authors": [
"yifanmai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10926",
"repo": "stanford-crfm/helm",
"url": "https://github.com/stanford-crfm/helm/pull/1261"
}
|
gharchive/pull-request
|
Add support for sharding using Slurm job arrays
In the short term, we want to run write_run_display_json on multiple nodes.
Addresses #1260
Abandoning this for a few reasons:
The use of hash is incorrect because of hash randomization; we should be using a stable hash function instead (see the sketch after this list).
The helper job submission script in the Stanford NLP SLURM cluster does not support job arrays.
I would prefer to have a sharding mechanism that isn't tied to SLURM, and works in other environments as well.
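As a concrete illustration of the first point, a stable, scheduler-agnostic shard assignment (function and variable names here are hypothetical, not HELM code) could look like:

import hashlib

def shard_for(run_name: str, num_shards: int) -> int:
    # hashlib is stable across processes and Python versions, unlike the
    # built-in hash(), which is randomized per process by PYTHONHASHSEED.
    digest = hashlib.sha256(run_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Any launcher (Slurm job array, shell loop, Kubernetes job, ...) can then
# hand each worker a shard index and have it filter its own runs:
runs = ["mmlu:model=a", "mmlu:model=b", "boolq:model=a"]
my_shard = 0
mine = [r for r in runs if shard_for(r, num_shards=4) == my_shard]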
|
2025-04-01T06:40:28.595155
| 2023-12-25T09:02:21
|
2055534303
|
{
"authors": [
"Hannibal046"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10927",
"repo": "stanford-futuredata/ColBERT",
"url": "https://github.com/stanford-futuredata/ColBERT/issues/281"
}
|
gharchive/issue
|
about the ranking operation
Hi, @okhat
Thanks for the great repo! I have a hard time understanding the code for reranking, especially in this section: https://github.com/stanford-futuredata/ColBERT/blob/706a7265b06c6b8de1e3236294394e5ada92134e/colbert/ranking/index_ranker.py#L56C7-L112
I have searched the relevant GitHub issues and I understand this code is there for efficiency, from this issue.
But can I find relevant documentation somewhere to help me better understand it? It seems that it first turns the 2D embeddings into 3D embeddings with different strides (108 and 180) for matrix multiplication. But I don't get why we need this stride parameter. Why couldn't we just do something like this:
load all embeddings and corresponding doclens
get the embedding per passage based on the pids
padding them to the same length for matrix multiplication
maxsim operation and select the topk
After digging into the code, I think I got the logic behind it. The stride thing is intended to partition the documents into different buckets based on their length (if I understand it correctly). In the actual implementation of ColBERTv1, it splits the documents into two buckets: one for documents whose lengths fall within the 90th percentile of the collection (which is 108), and the other for the rest.
This saves FLOPs in the matching process: scores = (D @ group_Q) * mask.unsqueeze(-1)
BTW, I hope this helps anyone who is also curious about the stride operation.
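To make the bucketing idea concrete, here is a minimal PyTorch sketch (function name, shapes, and padding scheme are illustrative assumptions, not the actual ColBERT code; it assumes the last stride covers the longest document and that similarities are non-negative, so padded rows never win the max):

import torch

def bucketed_maxsim(Q, doc_embs, strides=(108, 180)):
    # Q: (qlen, dim) query embeddings; doc_embs: list of (dlen_i, dim)
    # document embeddings. Documents are grouped into buckets by length,
    # so short documents are only padded to their bucket's stride rather
    # than to the global maximum length, saving FLOPs in the matmul.
    scores = torch.full((len(doc_embs),), float("-inf"))
    done = [False] * len(doc_embs)
    for stride in strides:
        idx = [i for i, d in enumerate(doc_embs)
               if not done[i] and d.size(0) <= stride]
        if not idx:
            continue
        D = torch.zeros(len(idx), stride, Q.size(1))
        mask = torch.zeros(len(idx), stride, 1)
        for row, i in enumerate(idx):
            n = doc_embs[i].size(0)
            D[row, :n] = doc_embs[i]
            mask[row, :n] = 1.0
            done[i] = True
        sim = (D @ Q.t()) * mask  # zero out the padded rows
        # MaxSim: best document token per query token, summed over the query
        scores[torch.tensor(idx)] = sim.max(dim=1).values.sum(dim=1)
    return scores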
|
2025-04-01T06:40:28.597318
| 2024-04-04T11:34:11
|
2225239096
|
{
"authors": [
"arnavsinghvi11",
"snimu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10928",
"repo": "stanfordnlp/dspy",
"url": "https://github.com/stanfordnlp/dspy/pull/769"
}
|
gharchive/pull-request
|
fix(dspy): add copy to AzureOpenAI to propagate api_(key|version|...)
Fixes issue #591 without re-creating issue #543
The problem was that AzureOpenAI.copy used LM.copy, but api_key etc. aren't in AzureOpenAI.kwargs, and LM.copy by default only propagates self.__class__.kwargs, so the credentials were dropped.
I have now given AzureOpenAI its own copy method. This isn't really beautiful, but it seems to work pretty well.
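The general pattern is easy to show with a self-contained sketch (class and parameter names are simplified stand-ins, not the actual dspy implementation):

class LM:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def copy(self, **kwargs):
        # default copy: only whatever lives in self.kwargs survives
        return self.__class__(**{**self.kwargs, **kwargs})

class AzureLM(LM):
    def __init__(self, api_key, api_version, **kwargs):
        super().__init__(**kwargs)
        # credentials are attributes, *not* part of self.kwargs, so the
        # base-class copy() above would silently drop them
        self.api_key = api_key
        self.api_version = api_version

    def copy(self, **kwargs):
        # dedicated copy that re-forwards the attribute-held credentials
        return self.__class__(self.api_key, self.api_version,
                              **{**self.kwargs, **kwargs})

lm = AzureLM(api_key="...", api_version="2024-02-01", temperature=0.0)
clone = lm.copy(temperature=0.7)
assert clone.api_key == lm.api_key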
Thanks @snimu !
|
2025-04-01T06:40:28.609518
| 2015-12-21T09:55:25
|
123244611
|
{
"authors": [
"FisherHUB",
"nicolasgramlich",
"roman-mazur"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10929",
"repo": "stanfy/spoon-gradle-plugin",
"url": "https://github.com/stanfy/spoon-gradle-plugin/issues/90"
}
|
gharchive/issue
|
SDR.handleImages problem - no screenshots in log from Android 6.0+ devices
Hello, lately I've been fighting with Spoon being unable to create a complete *.html log for Android 6.0+ devices. I can't fully understand/localise the problem, so I want to share my thoughts, get your opinion, and make it work.
Specify the problem
1. Run tests on a device with Android 6.0+ from the terminal using the ./gradlew spoon task.
2. To take your screenshots, use the Spoon-provided method Spoon.screenshot(Activity activity, String tag)
3. Wait for tests to finish.
4. You will see a log like this:
(...)
2015-12-18 15:58:00 [STRL.testRunEnded] elapsedTime=408847
03:58:00 I/XmlResultReporter: XML test result file generated at /Users/F1sherKK/Dev/myapp-Android/app/build/spoon-log/normal/debugRelease/junit-reports/05f3785c3444f1bf.xml. Total tests 32, failure 1, passed 31,
2015-12-18 15:58:00 [SDR.run] About to grab screenshots and prepare output for [05f3785c3444f1bf]
2015-12-18 15:58:00 [SDR.pullDirectory] Internal path is /data/data/com.myapp.sendmoney.debug1/app_spoon-screenshots
2015-12-18 15:58:00 [SDR.pullDirectory] External path is /sdcard/app_spoon-screenshots
2015-12-18 15:58:00 [SDR.pullDirectory] Pulling files from external dir on [05f3785c3444f1bf]
2015-12-18 15:58:05 [SDR.pullDirectory] Pulling files from internal dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Done pulling app_spoon-screenshots from on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Internal path is /data/data/com.myapp.sendmoney.debug1/app_spoon-files
2015-12-18 15:58:06 [SDR.pullDirectory] External path is /sdcard/app_spoon-files
2015-12-18 15:58:06 [SDR.pullDirectory] Pulling files from external dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Pulling files from internal dir on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.pullDirectory] Done pulling app_spoon-files from on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.handleImages] Moving screenshots to the image folder on [05f3785c3444f1bf]
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.flowtests.NotificationCenterActivity.NotificationCenterActivityFunctionTest#assertReferralPopUpWillAppear
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.flowtests.NotificationCenterActivity.NotificationCenterActivityFunctionTest#assertReferralPopUpWillHide
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#addMockItem_newMessage
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerService
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnBackButton_returnToNotificationCenter
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.CustomerServiceActivity.CustomerServiceNavigator#clickOnItem_referralType
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.General.GeneralNavigator#sendNotification_addSampleMessage
2015-12-18 15:58:06 [SDR.handleImages] Unable to find test for com.myapp.sendmoney.instrumentation.navigation.activity.InviteFriendsActivity.InviteFriendsNavigator#clickOnBackButton_returnToCustomerService
(...)
And html log from tests lacks screenshots.
Example: [log without screenshots](http://s8.postimg.org/ojirb2rc5/Screen_Shot_2015_12_21_at_10_56_54.png)
Fix tries and observations
As we know, since Marshmallow the way we have to deal with permissions has changed a bit. I thought that a missing WRITE_EXTERNAL_STORAGE or READ_EXTERNAL_STORAGE permission might be causing Spoon to malfunction. So I created this small shell script, which grants those two permissions to the app package from the console:
SDK=`adb shell getprop ro.build.version.sdk | tr -d '\r'`
if (( "$SDK" >= 23 )) ; then
adb shell pm grant com.myapp.sendmoney.debug1 android.permission.WRITE_EXTERNAL_STORAGE
adb shell pm grant com.myapp.sendmoney.debug1 android.permission.READ_EXTERNAL_STORAGE
fi
./gradlew spoon
So I granted the permissions to my device and ran the tests, but the result was the same; nothing changed. Since my device has permission to pull the screenshots, I thought that maybe there are no screenshots. I checked, and the screenshots are definitely created on the device during the test and successfully pulled from the device after the test, but they are not inserted into the html.
So now I would like to find out: where is the difference between pre-6.0 devices and the 6.0+ ones? Where does the problem lie? Has any of you faced/fixed it? Any thoughts on what could be done?
We're facing the exact same problem right now and are probing for a solution without success so far. We'll post here if we find something.
Seems like the problem got fixed after the Spoon plugin got updated to:
com.stanfy.spoon:spoon-gradle-plugin:1.0.4
yep, should be fixed now
closing the ticket
|
2025-04-01T06:40:28.614269
| 2024-07-13T23:57:15
|
2407200734
|
{
"authors": [
"Axanderism",
"stantanasi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10930",
"repo": "stantanasi/streamflix",
"url": "https://github.com/stantanasi/streamflix/issues/136"
}
|
gharchive/issue
|
Favourites, history or progress
Summary
A quick way to access shows being watched from the home screen, rather than searching at every app start. This could be in the form of a favourites list, a watch history, or a Progress list from Trakt. Once the app starts to the home screen, it shouldn't take more than 2 clicks to continue watching a show!
Please confirm the following
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open issue.
Already implemented: when you arrive on a movie or TV show, you can favorite it with a button near the "Watch" button
|
2025-04-01T06:40:28.627848
| 2023-10-29T11:01:55
|
1966941974
|
{
"authors": [
"WindKn",
"starfi5h"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10931",
"repo": "starfi5h/DSP_Mod_Support",
"url": "https://github.com/starfi5h/DSP_Mod_Support/issues/8"
}
|
gharchive/issue
|
Conflict with GenesisBook
Could you please just fix 4D pocket... It seems some other API conflicts with GenesisBook
Please provide the error message, otherwise I don't know where to fix it.
|
2025-04-01T06:40:28.639330
| 2021-09-29T15:49:19
|
1011131148
|
{
"authors": [
"dougwettlaufer",
"ivansenic",
"jdonenine",
"jsanda"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10932",
"repo": "stargate/stargate",
"url": "https://github.com/stargate/stargate/issues/1286"
}
|
gharchive/issue
|
Stargate fails to start because of too many open files
I am running Stargate 1.0.31 in Kubernetes with K8ssandra. The Stargate image used is stargateio/stargate-3_11:v1.0.31. In one of our automated tests we have seen Stargate fail to start a few times with a resource limit error like this:
INFO [main] 2021-09-28 21:25:10,597 AbstractConnector.java:331 - Started<EMAIL_ADDRESS>(http/1.1)}{<IP_ADDRESS>:8082}
INFO [main] 2021-09-28 21:25:10,598 Server.java:415 - Started @80358ms
INFO [main] 2021-09-28 21:25:10,599 BaseActivator.java:185 - Started restapi
Finished starting bundles.
Unexpected error: java.io.IOException: User limit of inotify instances reached or too many open files
java.lang.RuntimeException: java.io.IOException: User limit of inotify instances reached or too many open files
at io.stargate.starter.Starter.watchJarDirectory(Starter.java:539)
at io.stargate.starter.Starter.start(Starter.java:441)
at io.stargate.starter.Starter.cli(Starter.java:619)
at io.stargate.starter.Starter.main(Starter.java:660)
Caused by: java.io.IOException: User limit of inotify instances reached or too many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at io.stargate.starter.Starter.watchJarDirectory(Starter.java:526)
... 3 more
This is in a CI environment with limited cpu/memory resources. The test is running in the free tier runner in GitHubActions. The runner vm has 2 cpus and 7 GB memory. The particular test in which this failed had already deployed two Cassandra nodes and one Stargate node. This failure is from the second Stargate node.
I believe the open file limit on the VM is set to 65536. I don't think I am able to increase it. Maybe the solution is to run my tests in an environment with more resources, but it would be nice if Stargate could be less demanding, especially considering this happens on startup.
Huh, well that's a new one. We run some things in the GitHub actions free tier as well without a problem, granted it is with fewer nodes.
The odd thing is that file descriptors shouldn't be heavily consumed until the services start taking traffic. In a resource constrained environment I'd expect the error you're seeing to occur under load rather than on start up.
We can take a look at dropwizard to see if there's anything we can tune. Although I wonder if your runner had a noisy neighbor?
We have only seen this 2 or 3 times so maybe it is noisy neighbors.
@jsanda @Miles-Garnsey Maybe what we can do is attempt these tests on the self-hosted runner we're going to setup and keep the file descriptor limits at the defaults and see if we run into this problem there, that would help rule out the noisy neighbor problem?
The file limit error has only happened a few times. We can run the test N times on a self-hosted runner without the error happening. That doesn't mean it won't happen, but it does give increased confidence. We have been deploying nodes with heaps configured as low as 256 MB and 384 MB. Surprisingly that works fine a lot of the time, but we have issues too often. The issues are not limited to this open file limit error. The situation is like a game of Jenga :)
https://media.giphy.com/media/PlnQNcQ4RYOhG/giphy.gif
@dougwettlaufer @jsanda How about creating a small fix here by adding an --enableBundlesWatch flag, which would be false by default, so that you can opt in to the watching of bundles. Or vice versa.
Doug, I think we don't need bundle watching by default, but this might be a breaking change. So we could also go with --disableBundlesWatch: keep the current behavior by default, but give anybody an option to avoid it.
I'm just curious here, what is the bundle watching used for @ivansenic ?
I'm just curious here, what is the bundle watching used for @ivansenic ?
With OSGi you can replace bundles during the runtime. Meaning you can paste a new version of a jar to the folder we are watching and you would in runtime update that specific bundle with new version.
Makes sense, I guess I was more wondering if that's something that is often done with Stargate? Just trying to gauge for example, is that an option we'd want to see exposed through K8ssandra or would it be sufficient just to turn it off by default when deployed through K8ssandra.
If you ask me, and you do :smile:, I would say it should be turned off in Kubernetes. I mean, this is old tech, developed for monoliths, and this bundle reloading was a way to achieve something you would nowadays do in the cloud. You have a new version? No problem, deploy. In fact, that's the whole benefit of cloud-native development: you can deploy as many times as you want.
Right, watching the directory is to enable the hot-reload use case which really doesn't apply in the cloud. How about we add the --disableBundlesWatch and try-catch @ivansenic? That way we can avoid the error from ever happening with the flag and if it isn't set then avoid completely breaking with the try-catch.
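To make the proposed shape concrete, here is a rough sketch of the flag-plus-try-catch idea in Python (illustrative only; the real change would live in Stargate's Java/OSGi bootstrap, and the watcher helper here is invented):
import argparse
import logging

def watch_bundles_directory(path: str) -> None:
    """Hypothetical stand-in for the OSGi bundle-directory watcher."""
    import os
    os.listdir(path)  # placeholder: real code would register a file watcher

def start(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--disableBundlesWatch", action="store_true",
                        help="skip hot-reload watching of the bundles dir")
    args = parser.parse_args(argv)
    if not args.disableBundlesWatch:
        try:
            watch_bundles_directory("bundles/")
        except OSError as exc:  # e.g. EMFILE: too many open files
            logging.warning("bundle watching unavailable, continuing: %s", exc)

if __name__ == "__main__":
    start()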
|
2025-04-01T06:40:28.647476
| 2024-11-05T05:30:00
|
2634456505
|
{
"authors": [
"avi-starkware",
"reviewable-StarkWare",
"xrvdg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10933",
"repo": "starkware-libs/sequencer",
"url": "https://github.com/starkware-libs/sequencer/pull/1813"
}
|
gharchive/pull-request
|
feat(blockifier): add secp logic
This PR adds the logic for secp in preparation of the native syscalls.
The code has been refactored such that the implementation is shared between the native and non-native syscalls.
crates/blockifier/src/execution/native/syscall_handler.rs line 450 at r1 (raw file):
Previously, meship-starkware (Meshi Peled) wrote…
Can we move these tests to the secp test? I am not sure this is the best solution, but I don't like the tests to be part of the syscall handler file.
These can be moved to somewhere else, but that would require making Secp256Point public.
crates/blockifier/src/execution/syscalls/secp.rs line 40 at r13 (raw file):
pub fn secp_mul(&mut self, request: SecpMulRequest) -> SyscallResult<SecpMulResponse> {
let ep_point = self.get_point_by_id(request.ec_point_id)?;
let result = *ep_point * Curve::ScalarField::from(request.multiplier);
@ilyalesokhin-starkware
?
Suggestion:
let ec_point = self.get_point_by_id(request.ec_point_id)?;
let result = *ec_point * Curve::ScalarField::from(request.multiplier);
|
2025-04-01T06:40:28.650377
| 2020-03-17T18:18:48
|
583207719
|
{
"authors": [
"Atsidir",
"tfoldi"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10934",
"repo": "starschema/COVID-19-data",
"url": "https://github.com/starschema/COVID-19-data/pull/25"
}
|
gharchive/pull-request
|
PCM_DPS result column order changed in order to match with JHU result
fixed result column order and added Last_Update_Date in UTC time zone to it
Please do Cells -> `All outputs' -> 'Clear' in the notebook before you commit
|
2025-04-01T06:40:28.787001
| 2024-12-10T05:56:28
|
2729101963
|
{
"authors": [
"cliffckerr",
"daniel-klein"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10935",
"repo": "starsimhub/starsim",
"url": "https://github.com/starsimhub/starsim/issues/822"
}
|
gharchive/issue
|
Relabel the infected state as Infected
The infection class currently has:
ss.State('infected', label='Infectious')
This is fine for SIR, but modules that inheret from Infection are stuck with the infected state being labeled as Infectious, when often there's a non-infectious-but-infected latent period.
Child classes could replace this state, as SIR already does, but I expect most will want the infected state to be "Infected" rather than "Infectious."
I think this is probably better, but I can see an argument both ways. "Infectious" is the standard (https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology), and, as you said, anyone who wants to overwrite the default easily can. By default, infection.infected and infected.infectious are synonyms (via the infectious property), so I think a case could be made for either.
Since infected is a superset of infectious, and many diseases don't have equality, I'd recommend the relabeling. Also then the state name and label align.
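A minimal sketch of the proposed relabeling in Python (only the ss.State call comes from the issue above; the class structure and attribute names are hypothetical):
# Hypothetical sketch -- only ss.State(...) is taken from the issue.
import starsim as ss

class Infection:
    def __init__(self):
        # Proposed default: the label matches the state name
        self.infected = ss.State('infected', label='Infected')

class SIR(Infection):
    def __init__(self):
        super().__init__()
        # SIR has no latent period, so infected really means infectious
        self.infected = ss.State('infected', label='Infectious')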
|
2025-04-01T06:40:28.845859
| 2020-07-31T21:11:46
|
670238349
|
{
"authors": [
"andrewgordstewart",
"lalexgap"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10936",
"repo": "statechannels/statechannels",
"url": "https://github.com/statechannels/statechannels/issues/2406"
}
|
gharchive/issue
|
Audit SignState Usage
Profiling the pong server has revealed that we spend a lot of time in SignState. We should audit our calls to SignState to make sure we're only calling when necessary.
See the flamegraph:
https://statechannels.slack.com/files/UMXGSF1EY/F017WR84PD4/flamegraph.html
I think we can get a quick win by only calling hash state once per state?
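For illustration, here is the kind of per-state memoization that gives that quick win, sketched in Python (names are invented; the actual fix lives in the TypeScript codebase):
import hashlib
import json
from functools import lru_cache

@lru_cache(maxsize=None)
def hash_state(state_json: str) -> str:
    """Hash a canonical JSON encoding of a state exactly once."""
    return hashlib.sha256(state_json.encode()).hexdigest()

def sign_state(state: dict, sign) -> str:
    # Canonicalize so equal states map to a single cache entry.
    canonical = json.dumps(state, sort_keys=True)
    return sign(hash_state(canonical))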
This has been done!
|
2025-04-01T06:40:28.849187
| 2023-06-16T22:50:06
|
1761448251
|
{
"authors": [
"mxsdev",
"sourishkrout"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10937",
"repo": "stateful/runme",
"url": "https://github.com/stateful/runme/pull/308"
}
|
gharchive/pull-request
|
internal/document: migrate to TOML
Introduces attributeParser internal interface to deal with attribute parsing.
Tries to parse as TOML first, then falls back to our old parsing language written by @adambabik ("babikML").
In order to use double quotes rather than single, I had to use the old version of pelletier's go-toml (v1 rather than v2). We can use v2, I just personally thought double quotes are more common for toml.
There's some jankiness around inline toml, since toml is primarily a multiline format, but supports inline maps for nested attributes. So for parsing, I had to wrap the parsed string like so:
attrs={ <provided attributes> }
And for serialization, I had to join new lines with strings.
This is all in attributes.go - I would appreciate a more thorough review if possible, since it would be pretty bad if this broke.
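The same wrapping trick, sketched in Python with the stdlib tomllib as an analogue (the PR itself uses Go's go-toml v1; the function name is illustrative):
import tomllib  # Python 3.11+

def parse_inline_attributes(raw: str) -> dict:
    """Parse inline `key="value"` attributes by wrapping them in a
    dummy table entry, then unwrapping the parsed result."""
    wrapped = f"attrs = {{ {raw} }}"
    return tomllib.loads(wrapped)["attrs"]

print(parse_inline_attributes('name="runme", interactive=true'))
# {'name': 'runme', 'interactive': True}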
What was the reasoning behind TOML? It requires unmaintained TOML v1 and additional operations after parsing and writing. Why not JSON?
I guess, we could do JSON as long as we curb or restrain the use of nesting objects
Will be switching to JSON
|
2025-04-01T06:40:28.852526
| 2022-08-23T13:29:24
|
1347964142
|
{
"authors": [
"christian-bromann",
"sourishkrout"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10938",
"repo": "stateful/vscode-marquee",
"url": "https://github.com/stateful/vscode-marquee/pull/206"
}
|
gharchive/pull-request
|
Improvements on project item references
fixes #126
This patch implements a set of improvements to the linking mechanics of project items (todos, clipboard items and notes). It implements a file watcher logic that is able to update references when line changes. It also consolidates the item links into a single React component to remove duplicate code.
Todo:
[x] enhance unit tests for new components
[ ] write some e2e tests to ensure functionality is given
Created a todo from the context menu inside the editor and got this when I overwrote the changes (stash pop) and clicked on the linkage icon in the widget:
Using a filewatcher is clever. I had a plan once to use a fuzzy text collation algo (levenshtein distance or simhash) to infer that a todo comment was edited / moved. Maybe that's the next level here. Endless amounts of strategies to do some work ahead of time to make scanning every line in the file fast. However, probably good call to keep it simple.
However, probably good call to keep it simple.
Adding the levenshtein distance algo shouldn't be difficult nor complex, thanks for the tip.
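For reference, the standard dynamic-programming form of Levenshtein distance mentioned above, sketched in Python (the extension itself is TypeScript):
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# e.g. treat a todo comment as merely edited/moved if the distance is small
assert levenshtein("// TODO: fix this", "// TODO: fix this!") == 1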
|
2025-04-01T06:40:28.855445
| 2024-07-08T12:23:48
|
2395493942
|
{
"authors": [
"exel-dot",
"statho"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10939",
"repo": "statho/ScoreHMR",
"url": "https://github.com/statho/ScoreHMR/issues/26"
}
|
gharchive/issue
|
Video not working
On google collab, video not working!
This program created two folders. The first with input images. The second (empty file log)
@statho please help
Please see the response to #18.
|
2025-04-01T06:40:28.861264
| 2015-08-07T13:00:24
|
99643114
|
{
"authors": [
"staticfloat",
"tkelman"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10940",
"repo": "staticfloat/julia-buildbot",
"url": "https://github.com/staticfloat/julia-buildbot/issues/27"
}
|
gharchive/issue
|
Kick the osx builders?
http://buildbot.e.ip.saba.us:8010/builders
This has been delaying nightlies for about 3 days and will need addressing before we can make RC binaries. We can maybe manually work around it for all platforms other than mac if needed.
cc @staticfloat
Done. The VM host got rebooted, but the VM didn't get started back up correctly.
Thanks! I swear I've been trying things on rundeck before opening these, but so far with little success.
Yeah, unfortunately that problem wasn't solvable with rundeck. :P
|
2025-04-01T06:40:28.870052
| 2023-01-16T15:10:51
|
1535091855
|
{
"authors": [
"mikaelol"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10941",
"repo": "statnett/controller-runtime-viper",
"url": "https://github.com/statnett/controller-runtime-viper/pull/68"
}
|
gharchive/pull-request
|
chore(main): release 0.1.2
:robot: I have created a release beep boop
0.1.2 (2023-01-30)
Dependency Updates
deps: bump github.com/onsi/ginkgo/v2 from 2.6.1 to 2.7.0 (#62) (c34c324)
deps: bump github.com/onsi/ginkgo/v2 from 2.7.0 to 2.7.1 (#77) (3b026b9)
deps: bump github.com/onsi/gomega from 1.24.2 to 1.25.0 (#69) (2795e36)
deps: bump github.com/onsi/gomega from 1.25.0 to 1.26.0 (#74) (351b75f)
deps: bump github.com/spf13/viper from 1.14.0 to 1.15.0 (#70) (b1422c8)
deps: bump sigs.k8s.io/controller-runtime from 0.14.1 to 0.14.2 (#76) (51f16a4)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/statnett/controller-runtime-viper/releases/tag/v0.1.2 :sunflower:
|
2025-04-01T06:40:29.433836
| 2024-04-12T09:48:32
|
2239604954
|
{
"authors": [
"Plootie",
"artehe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10942",
"repo": "stayintarkov/SIT.Manager.Avalonia",
"url": "https://github.com/stayintarkov/SIT.Manager.Avalonia/pull/180"
}
|
gharchive/pull-request
|
Properly fix configmanager installation
Create the directory that it should rather than creating a directory of the filename
Currently creates the directory below rather than just ensuring that the one it's trying to copy to exists:
oops..
|
2025-04-01T06:40:29.440887
| 2021-03-11T10:40:42
|
829025013
|
{
"authors": [
"aminya",
"kgryte"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10943",
"repo": "stdlib-js/stdlib",
"url": "https://github.com/stdlib-js/stdlib/pull/385"
}
|
gharchive/pull-request
|
Add a TLDR for cloning
Checklist
Please ensure the following tasks are completed before submitting this pull request.
[x] Read, understood, and followed the contributing guidelines, including the relevant style guides.
[x] Read and understand the Code of Conduct.
[x] Read and understood the licensing terms.
[x] Searched for existing issues and pull requests before submitting this pull request.
[x] Filed an issue (or an issue already existed) prior to submitting this pull request.
[x] Rebased onto latest develop.
[x] Submitted against develop branch.
Description
What is the purpose of this pull request?
This pull request:
add TLDR for contribution
Related Issues
Does this pull request have any related issues?
This pull request:
Addresses https://github.com/stdlib-js/stdlib/issues/373#issuecomment-782247121
Questions
Any questions for reviewers of this pull request?
No.
Other
Any other information relevant to this pull request? This may include screenshots, references, and/or implementation notes.
No.
@stdlib-js/reviewers
I added a TLDR in https://github.com/stdlib-js/stdlib/commit/45f6535faa0b00d13fb2d96bfa708ff84933b4a3.
|
2025-04-01T06:40:29.463959
| 2017-12-24T23:50:37
|
284387136
|
{
"authors": [
"E-D-A",
"kingsofcoms"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10944",
"repo": "steemit/steem-python",
"url": "https://github.com/steemit/steem-python/issues/110"
}
|
gharchive/issue
|
What is the function to get Steem Power of an account?
I've looked over at https://steem.readthedocs.io/en/latest/ and I couldn't find anything. Is it possible?
all i got was
sp_to_rshares(sp, voting_power=10000, vote_pct=10000)
Parameters:
sp (number) – Steem Power
voting_power (int) – voting power (100% = 10000)
vote_pct (int) – voting participation (100% = 10000)
Tried
from steem.account import Account
account = Account("manuelcho12")
print("VP: %s" % account.voting_power())
print("SP: %s" % account.sp())
Got this error:
VP: 94.13
Traceback (most recent call last):
File "steemapp.py", line 5, in
print("SP: %s" % account.sp())
TypeError: 'float' object is not callable
You get this from the Account object. You have to take into account SP delegated to you and SP you have delegated to get the active SP. I don't think there is an out of the box function.
See below. I have used rstrip to remove VESTS from the string. I haven't checked, but there might be a converter function that can simplify it, but the concept stays the same.
allSP = float(account.get('vesting_shares').rstrip(' VESTS'))
delSP = float(account.get('delegated_vesting_shares').rstrip(' VESTS'))
recSP = float(account.get('received_vesting_shares').rstrip(' VESTS'))
activeSP = account.converter.vests_to_sp(allSP - delSP + recSP)
Thank you for the workaround.
account.get_balances() also returns the total VESTS, but you have to parse the JSON and convert the total VESTS to SP.
|
2025-04-01T06:40:29.468035
| 2023-02-12T21:45:19
|
1581420491
|
{
"authors": [
"Victor-Savu",
"raders",
"stefan-hoeck"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10945",
"repo": "stefan-hoeck/idris2-json",
"url": "https://github.com/stefan-hoeck/idris2-json/issues/22"
}
|
gharchive/issue
|
Encoding record with 1 field doesn't work
record Config where
constructor MkConfig
test: String
Encoding MkConfig "test1" produces the string "test1" instead of the object {"test" : "test1"}
This is expected behavior as it reflects the representation of such a record at runtime. However, I think this should be customizable via JSON.Option.Options. I'll look into it.
Hi @raders ! With the latest version of idris2-json, you can set unwrapRecords to False to get the most verbose expansion of the record, and set sum to UntaggedValue so that you only keep the "value" (which is the pair you want).
module Main
import Hedgehog
import JSON.Derive
%language ElabReflection
record Config where
constructor MkConfig
test: String
%runElab derive "Config" [Show,Eq,(customToJSON $ {sum:=UntaggedValue, unwrapRecords:=False} defaultOptions),(customFromJSON $ {sum:=UntaggedValue, unwrapRecords:=False} defaultOptions)]
main : IO ()
main = test . pure $ MkGroup "Maelstrom" [("#22", property $ encode (MkConfig "test1") === #"{"test":"test1"}"#)]
Looks good. Thank you!
I'm closing this as it seems this is now resolved.
|
2025-04-01T06:40:29.478864
| 2022-09-07T18:45:47
|
1365058089
|
{
"authors": [
"codecov-commenter",
"stefanfreitag"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10946",
"repo": "stefanfreitag/cdk-budget-notifier",
"url": "https://github.com/stefanfreitag/cdk-budget-notifier/pull/174"
}
|
gharchive/pull-request
|
chore: upgrade to cdk v2.240.0
Fixes #
Codecov Report
Base: 100.00% // Head: 100.00% // No change to project coverage :thumbsup:
Coverage data is based on head (fe798a7) compared to base (1ba42aa).
Patch coverage: 100.00% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## master #174 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 3 3
Lines 51 52 +1
Branches 14 15 +1
=========================================
+ Hits 51 52 +1
Impacted Files
Coverage Δ
src/budget_notifier.ts
100.00% <100.00%> (ø)
:umbrella: View full report at Codecov.
|
2025-04-01T06:40:29.483336
| 2024-05-06T11:33:09
|
2280667916
|
{
"authors": [
"fattynoparents",
"stefanklut"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10947",
"repo": "stefanklut/laypa",
"url": "https://github.com/stefanklut/laypa/issues/37"
}
|
gharchive/issue
|
Warning "No region type defined for eSc_dummyblock_c" when training a region model
When I try training a region model, I get quite a few warnings like these:
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: No region type defined for eSc_dummyblock_ at /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
WARNING [05/06 11:19:39 laypa.page_xml.xmlPAGE]: Element type "None" undefined in class dict /home/user/training-laypa/region/2024.05.06/val_input/page/4212.xml
Why would this happen and should I pay attention? If yes, how can I fix this?
Thanks in advance!
This mean that in the GT there is a region that doesn't have a label. If the region has no label it will be ignored, so the pixels will just be classified (in the GT) as background
This mean that in the GT there is a region that doesn't have a label.
Hmm that's weird, since all regions in my GT do have labels.
You are right, it seems eScriptorium didn't remove the dummy green region that occupied the whole page (a default situation after running Loghi with only baseline detection), although I did remove it manually in their UI.
Question is, why do I get 3 similar warnings about it even though there's only 1 eSc_dummyblock_ element?
The three warnings are probably due to the preprocessing reading this region 3 times (could probably be optimized :smile:): once for semantic segmentation, once for instance segmentation, and once for panoptic segmentation.
This is due to the reading order, but I am not sure why this would change. Maybe @rvankoert or @TimKoornstra knows. As there were some changes to the inference script. But this is done somewhere in the Java or bash part
Does that mean not running the RECALCULATEREADINGORDER order as well?
This might very well be the issue, thanks for the tip, I will check this!
|
2025-04-01T06:40:29.485249
| 2015-10-21T22:39:54
|
112693676
|
{
"authors": [
"martincostello",
"stefanprodan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10948",
"repo": "stefanprodan/WebApiThrottle",
"url": "https://github.com/stefanprodan/WebApiThrottle/pull/40"
}
|
gharchive/pull-request
|
Fix #31
Use HashAlgorithm.Create(string) so that .NET loads FIPS-compliant hash algorithms if available on the local machine. This also allows algorithms to be overridden as described here.
Thanks
|
2025-04-01T06:40:29.495883
| 2021-12-13T08:07:41
|
1078204851
|
{
"authors": [
"Nydery"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10950",
"repo": "steinmax/reSign",
"url": "https://github.com/steinmax/reSign/issues/29"
}
|
gharchive/issue
|
Prüfung der Machbarkeit Schnittstelle der Webuntis API
Task
Postman Requests for getting
classes and class teacher by room
pupils of class
the current lesson of class
Requirements
API-key -> ask sysadmin
Webuntis API documentation
Postman
Contacted sysadmin on 16.12.2021, still no response.
Got some information from Prof. Stütz:
Example project
https://birklbauerjonas.github.io/webUntis-docs/
by a few former HTL students.
API Documentation
JSONRpc api
Here and/or here.
Additional info
Npm package for nodejs
Here.
Former project at HTL Here.
|
2025-04-01T06:40:29.502111
| 2020-11-10T08:05:56
|
739684327
|
{
"authors": [
"aanupam23",
"leighmcculloch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10951",
"repo": "stellar/go",
"url": "https://github.com/stellar/go/issues/3202"
}
|
gharchive/issue
|
module .. does not contain package github.com/stellar/go/clients/txnbuild
What version are you using?
v0.0.0-20201109195737-ef1e30d4691b
What did you do?
There is an incorrect installation command in the txnbuild README.md, which gives the installation instruction as
go get github.com/stellar/go/clients/txnbuild
The above command is incorrect since txnbuild is not part of the clients directory. If the above command is used, the following error message will be displayed
module github.com/stellar/go@upgrade found (v0.0.0-20201109195737-ef1e30d4691b), but does not contain package github.com/stellar/go/clients/txnbuild
This should be
go get github.com/stellar/go/txnbuild
What did you expect to see?
I expected the installation of txnbuild.
What did you see instead?
It threw error
Thanks @aanupam23. I am correcting the installation instructions in #3207.
|
2025-04-01T06:40:29.513121
| 2020-10-23T17:50:45
|
728406265
|
{
"authors": [
"2opremio",
"abuiles",
"bartekn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10952",
"repo": "stellar/go",
"url": "https://github.com/stellar/go/pull/3160"
}
|
gharchive/pull-request
|
services/horizon: Allow captive core to start from any ledger.
PR Checklist
PR Structure
[ ] This PR has reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR avoids mixing refactoring changes with feature changes (split into two PRs
otherwise).
[ ] This PR's title starts with name of package that is most changed in the PR, ex.
services/friendbot, or all or doc if the changes are broad or impact many
packages.
Thoroughness
[ ] This PR adds tests for the most critical parts of the new functionality or fixes.
[ ] I've updated any docs (developer docs, .md
files, etc... affected by this change). Take a look in the docs folder for a given service,
like this one.
Release planning
[ ] I've updated the relevant CHANGELOG (here for Horizon) if
needed with deprecations, added features, breaking changes, and DB schema changes.
[ ] I've decided if this PR requires a new major/minor version according to
semver, or if it's mainly a patch change. The PR is targeted at the next
release branch if it's not a patch change.
What
Allow captive core to start from any ledger.
Why
Previously we were limiting the ledgers where online captive core could start since we were always trying to start (captive core) from the previous check-point ledger.
This was probably problematic since this wouldn't work for ledgers smaller than 63.
Known limitations
[TODO or N/A]
Thanks!
This seems to fix #3157 ! (which I tested using #3144 ).
However, I find it confusing that the stats obtained from GET / don't reflect the log messages. For instance, even if Horizon was outputting this:
time="2020-10-27T14:36:46.525Z" level=info msg="Ingestion system state machine transition" current_state="resume(latestSuccessfullyProcessedLedger=61)" next_state="resume(latestSuccessfullyProcessedLedger=61)" pid=198 service=expingest
time="2020-10-27T14:36:46.533Z" level=info msg="Waiting for ledger to be available in stellar-core" core_sequence=61 ingest_sequence=62 pid=198 service=expingest
The root stats were still at 0 (including the CoreSequence):
I would expect the IngestSequence and CoreSequence to be consistent in both the log messages and the root endpoint.
@2opremio I think I run into this while working on this PR but haven't debugged it much yet. I suspect that changes in https://github.com/stellar/go/pull/3106 broke something. If you /bin/bash the container and run curl localhost:8000 there you'll see correct values. So it looks like two Horizons are running? @tamirms can you confirm/take a look?
It's strange, because after ledger 64 is reached (according to the logs) the CoreSequence I obtain is correct.
If you /bin/bash the container and run curl localhost:8000 there you'll see correct values.
True.
$ docker exec -ti horizon-integration curl localhost:8000 | grep ledger
"ledger": {
"href": "http://localhost:8000/ledger/{sequence}",
"ledgers": {
"href": "http://localhost:8000/ledgers{?cursor,limit,order}",
"ingest_latest_ledger": 18,
"history_latest_ledger": 18,
"history_elder_ledger": 2,
"core_latest_ledger": 18,
@2opremio I noticed there is a new env variable: HORIZON_INTEGRATION_ENABLE_CAPTIVE_CORE. I haven't checked it but maybe it will fix it.
I'm going to approve this PR but please 👍 too because I worked on this partially. And maybe let's move the discussion about the container issue to a new issue.
OK, just for the record. This seems to be the problem:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9a1d35cf918b stellar/quickstart:testing2 "/start --standalone…" 2 minutes ago Up 2 minutes <IP_ADDRESS>:32797->1570/tcp, <IP_ADDRESS>:32796->5432/tcp, <IP_ADDRESS>:32795->6060/tcp, <IP_ADDRESS>:32794->8000/tcp, <IP_ADDRESS>:32793->11625/tcp, <IP_ADDRESS>:32792->11626/tcp horizon-integration
it should be <IP_ADDRESS>:8000->8000/tcp instead
|
2025-04-01T06:40:29.531374
| 2022-04-14T23:16:16
|
1205096971
|
{
"authors": [
"lijamie98",
"marcelosalloum",
"stellar-jenkins"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10953",
"repo": "stellar/java-stellar-anchor-sdk",
"url": "https://github.com/stellar/java-stellar-anchor-sdk/pull/195"
}
|
gharchive/pull-request
|
core: Add inclusive and additive amount calculation in SEP-31
PR Checklist
PR Structure
[x] This PR has reasonably narrow scope (if not, break it down into smaller PRs).
[x] This PR avoids mixing refactoring changes with feature changes (split into two PRs
otherwise).
[x] This PR's title starts with name of package that is most changed in the PR, ex.
paymentservice.stellar, or all or doc if the changes are broad or impact many
packages.
Thoroughness
[ ] This PR adds tests for the most critical parts of the new functionality or fixes.
What
Add inclusive and additive amount calculation in SEP-31. Default to "inclusive"
Why
There are anchors who calculate amount in the following equation.
amount_in = amount + fee (additive)
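For clarity, the two conventions sketched in Python (illustrative only; the service itself is Java):
def compute_amount_in(amount: float, fee: float, fee_type: str = "inclusive") -> float:
    """'inclusive': the fee is already part of `amount` (the default);
    'additive': the fee is charged on top, amount_in = amount + fee."""
    if fee_type == "additive":
        return amount + fee
    return amount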
Known limitations
Demo wallet won't work with additive settings.
Reference Server Preview is available here:https://anchor-ref-pr195.previews.kube001.services.stellar-ops.com/SEP Server Preview is available here:https://anchor-sep-pr195.previews.kube001.services.stellar-ops.com/
Also, it would be great to add tests to make sure we cover all these cases. This part of the code is a bit complicated and we should have tests in place to make sure a change won't break things here.
|
2025-04-01T06:40:29.545945
| 2023-12-22T18:16:37
|
2054343788
|
{
"authors": [
"marwen-abid",
"stellar-jenkins"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10954",
"repo": "stellar/stellar-disbursement-platform-backend",
"url": "https://github.com/stellar/stellar-disbursement-platform-backend/pull/132"
}
|
gharchive/pull-request
|
Change to Circle USDC.
What
Switching e2e tests to use circle's USDC.
Why
It's easier to fund the distribution account with circle's USDC, then having one more asset to maintain during testnet resets.
Known limitations
[TODO or N/A]
Checklist
PR Structure
[ ] This PR has a reasonably narrow scope (if not, break it down into smaller PRs).
[ ] This PR title and description are clear enough for anyone to review it.
[ ] This PR does not mix refactoring changes with feature changes (split into two PRs otherwise).
Thoroughness
[ ] This PR adds tests for the new functionality or fixes.
[ ] This PR contains the link to the Jira ticket it addresses.
Configs and Secrets
[ ] No new CONFIG variables are required -OR- the new required ones were added to the helmchart's values.yaml file.
[ ] No new CONFIG variables are required -OR- the new required ones were added to the deployments (pr-preview, dev, demo, prd).
[ ] No new SECRETS variables are required -OR- the new required ones were mentioned in the helmchart's values.yaml file.
[ ] No new SECRETS variables are required -OR- the new required ones were added to the deployments (pr-preview secrets, dev secrets, demo secrets, prd secrets).
Release
[ ] This is not a breaking change.
[ ] This is ready for production.. If your PR is not ready for production, please consider opening additional complementary PRs using this one as the base. Only merge this into develop or main after it's ready for production!
Deployment
[ ] Does the deployment work after merging?
stellar-disbursement-platform-backend-preview is available here:SDP: https://sdp-backend-pr132.previews.kube001.services.stellar-ops.com/healthAP: https://sdp-ap-pr132.previews.kube001.services.stellar-ops.com/healthFrontend: https://sdp-backend-dashboard-pr132.previews.kube001.services.stellar-ops.com
|
2025-04-01T06:40:29.562917
| 2018-03-28T20:52:22
|
309527319
|
{
"authors": [
"NaturFront",
"stephan-strate"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10955",
"repo": "stephan-strate/teamspeak-league-update",
"url": "https://github.com/stephan-strate/teamspeak-league-update/issues/3"
}
|
gharchive/issue
|
Check your .tlu/properties.dat properties
I can't install the bot, and the updater has a bug or something.
Here's a screenshot: http://prntscr.com/ixqgua
Try to download the newest version manually: https://github.com/stephan-strate/teamspeak-league-update/releases/download/3.0.1/teamspeak-league-update.jar and delete your .tlu folder.
I had a bug in Version 3.0.0, which caused this issue.
Where is the .tlu folder?
It is in the same folder as your teamspeak-league-update.jar, it might be hidden by your system. Try to show hidden files and folders.
Thanks man!
All settings are right.
Here's a screenshot: http://prntscr.com/iy63p0
I reconnect with my second ID and don't get my rank.
What I mean is: I connect with the ID verified by the bot, and the ID is the same on reconnect.
Difficult to tell where the problem is. You can send me your .tlu folder to my mail address (you can find my mail address on my profile, left side)
I released a new version 3.0.2 with a small bugfix. This should fix your problems, you might need to delete your .tlu/properties.dat
|
2025-04-01T06:40:29.574401
| 2022-12-26T05:21:19
|
1510616702
|
{
"authors": [
"stephenberry",
"toge"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10956",
"repo": "stephenberry/glaze",
"url": "https://github.com/stephenberry/glaze/issues/92"
}
|
gharchive/issue
|
Any plan to support NDJSON?
Hello.
I think glaze is a great library.
I have started to use glaze when I write new programs.
Next, I would like to replace other libraries with glaze in existing programs where speed is important, but NDJSON (Newline Delimited JSON) support is an issue.
http://ndjson.org/
As far as I know, glaze does not support NDJSON.
Are there any plans to support NDJSON?
NDJSON is a reasonable and great suggestion. We'll try to add support for this soon, probably as a compile time option.
@toge
NDJSON support has been added for array types like std::array, std::vector, std::tuple, and glz::array
For runtime deduced variant types see this Variant Handling documentation on the Wiki. This will let you deduce the type from the NDJSON.
Here are some tests that demonstrate the behavior and calling syntax:
std::vector<std::string> x = { "Hello", "World", "Ice", "Cream" };
std::string s = glz::write_ndjson(x);
expect(s ==
R"("Hello"
"World"
"Ice"
"Cream")");
x.clear();
glz::read_ndjson(x, s);
expect(x[0] == "Hello");
expect(x[1] == "World");
expect(x[2] == "Ice");
expect(x[3] == "Cream");
Another example:
std::tuple<my_struct, sub_thing> x{};
std::string s = glz::write_ndjson(x);
expect(s ==
R"({"i":287,"d":3.14,"hello":"Hello World","arr":[1,2,3]}
{"a":3.14,"b":"stuff"})");
auto& first = std::get<0>(x);
auto& second = std::get<1>(x);
first.hello.clear();
first.arr[0] = 0;
second.a = 0.0;
second.b.clear();
glz::read_ndjson(x, s);
expect(first.hello == "Hello World");
expect(first.arr[0] = 1);
expect(second.a == 3.14);
expect(second.b == "stuff");
@stephenberry
Thanks a lot!
It is wonderful.
I will give it a try!
|
2025-04-01T06:40:29.575856
| 2016-04-04T00:17:40
|
145555267
|
{
"authors": [
"jlawton",
"stephencelis"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10957",
"repo": "stephencelis/SQLite.swift",
"url": "https://github.com/stephencelis/SQLite.swift/pull/396"
}
|
gharchive/pull-request
|
Restrict use of qualified table names
Qualified table names are not allowed when creating indexes or foreign key references.
Fixes #395
Nice solution! Thank you!
|
2025-04-01T06:40:29.581557
| 2016-09-05T21:37:49
|
175129530
|
{
"authors": [
"sjuxax",
"stephenmcd"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10958",
"repo": "stephenmcd/mezzanine",
"url": "https://github.com/stephenmcd/mezzanine/pull/1668"
}
|
gharchive/pull-request
|
Fix positional argument count on get_db_prep_value
When I try to run the Wordpress importer with the latest git head of Mezzanine (ee24a48b5f33) and Django 1.10.1, I get the error reproduced at the bottom of this comment when it tries to import WordPress pages. This occurs on both SQLite (which didn't experience this problem in February when I was last experimenting) and PostgreSQL (which I didn't test previously).
Despite spending a bit of time looking into it, I'm not totally sure why this started appearing now. The code at core/fields.py:103 doesn't appear to have changed since 2012 and the Django API for this function hasn't changed since Django 1.2. Probably Django is passing the connection object now regardless of whether a multi-tenant DB configuration is used or not, and maybe it wasn't before?
In any case, I'm not sure what the hiccup is, but now that I got it working, I'm not super interested in spending more time hunting down the root cause. It's working again with the attached patch. Hope this helps. :smile:
Imported comment by: Test
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/base.py", line 294, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/core/management/base.py", line 345, in execute
output = self.handle(*args, **options)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/Mezzanine-4.2.0-py3.5.egg/mezzanine/blog/management/base.py", line 232, in handle
page, created = RichTextPage.objects.get_or_create(**page)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 473, in get_or_create
return self.get(**lookup), False
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 379, in get
num = len(clone)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 238, in __len__
self._fetch_all()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 1087, in _fetch_all
self._result_cache = list(self.iterator())
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/query.py", line 54, in __iter__
results = compiler.execute_sql()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 824, in execute_sql
sql, params = self.as_sql()
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 376, in as_sql
where, w_params = self.compile(self.where) if self.where is not None else ("", [])
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 353, in compile
sql, params = node.as_sql(self, self.connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/where.py", line 79, in as_sql
sql, params = compiler.compile(child)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 353, in compile
sql, params = node.as_sql(self, self.connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 156, in as_sql
rhs_sql, rhs_params = self.process_rhs(compiler, connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 92, in process_rhs
return self.get_db_prep_lookup(value, connection)
File "/home/jeff/mez_test/mez_venv/lib/python3.5/site-packages/django/db/models/lookups.py", line 184, in get_db_prep_lookup
[get_db_prep_value(value, connection, prepared=True)]
TypeError: get_db_prep_value() takes 2 positional arguments but 3 were given
Thanks for tracking this down!
I had a look myself and the code in Django is a bit inconsistent - sometimes it's defined as a positional arg, and sometimes as a keyword arg. Anyway it looks like the change in 1.10 here caused the problem: https://github.com/django/django/commit/eab5df12b664b154b2e280330aa43d8c0621b94a
|
2025-04-01T06:40:29.612162
| 2019-07-31T11:37:20
|
475092105
|
{
"authors": [
"EmilPi",
"obraun-sl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10959",
"repo": "stereolabs/zed-examples",
"url": "https://github.com/stereolabs/zed-examples/issues/165"
}
|
gharchive/issue
|
ZED camera causes ubuntu 18.04 with RX 2080 and CUDA 10.0 to freeze
My setup:
Ubuntu 18.04
AMD Ryzen 2700X
NVidia RTX 2080
nvidia-driver-410
cuda-10-0
Steps to reproduce:
Scenario 1:
Connect the ZED camera
If the ZED camera is not detected, try plugging/unplugging until it is
Try to calibrate the camera
If the calibration window becomes unresponsive, try restarting the calibration
Wait for the system to freeze
Scenario 2:
Connect the ZED camera
If the ZED camera is not detected, try plugging/unplugging until it is
Try to view video using ZED Explorer
If the camera is not detected, just wait
Any news?
Hi,
Can you tell the ZED SDK version you are using ?
The latest, 2.8.3, from https://download.stereolabs.com/zedsdk/2.8/ubuntu18 .
|
2025-04-01T06:40:29.617582
| 2019-06-20T13:45:24
|
458668183
|
{
"authors": [
"adujardin",
"zaher88abd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10960",
"repo": "stereolabs/zed-python-api",
"url": "https://github.com/stereolabs/zed-python-api/issues/86"
}
|
gharchive/issue
|
Disparity as Grayscale image
HI,
I need the disparity as an image; what is the best way to get it?
I was thinking of normalizing MEASURE_DISPARITY to 0-1 and then multiplying by 255, but the issue is that each image has a different range, so it would not be accurate in real time. Please let me know if there is a better way to do it.
Hi,
Yes, there is a better way to display the disparity map. Use the retrieveImage function with view=VIEW_DEPTH
The disparity and depth are visually similar.
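A minimal sketch of that call (pyzed names as in recent ZED SDK releases; check your SDK version, since 2.8 used slightly different identifiers):
import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("could not open ZED camera")

image = sl.Mat()
if zed.grab(sl.RuntimeParameters()) == sl.ERROR_CODE.SUCCESS:
    # VIEW.DEPTH is an 8-bit grayscale rendering, already normalized
    # for display, so no per-frame manual 0-255 scaling is needed.
    zed.retrieve_image(image, sl.VIEW.DEPTH)
    gray = image.get_data()
zed.close()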
Ok, thank you @adujardin for this quick response.
I put issue from a long time and I will appreciate more if you could answer it.
|
2025-04-01T06:40:29.626138
| 2024-11-23T11:21:57
|
2685856197
|
{
"authors": [
"oysandvik94"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10961",
"repo": "stevearc/oil.nvim",
"url": "https://github.com/stevearc/oil.nvim/issues/520"
}
|
gharchive/issue
|
bug: Oil window just closes without error when trying to add/remove file
Did you check the docs and existing issues?
[X] I have read the docs
[X] I have searched the existing issues
Neovim version (nvim -v)
NVIM v0.11.0-dev-1207+g534544cbf7
Operating system/version
Arch Linux LTS
Describe the bug
I have not determined when, but quite often when I try to add or remove files, the oil window just closes and puts me to the last file I was in. There is no error.
It started happening a few versions ago, but I'm not sure which. It happens in multiple projects.
Right now I have a directory where I can delete some folders, but not all, and adding a file/folder just closes the window.
Is there some logs I can find to see what's happening?
I used the repro config below and did not get the error, but I did try a clean oil config with my own nvim config and got the issue.
What is the severity of this bug?
breaking (some functionality is broken)
Steps To Reproduce
Be in file
Run this keymap: vim.keymap.set("n", "-", require("oil").open, { desc = "Open parent directory" })
Delete folder
I havent determined the exact causation, so it's hard to give a definitive reproduction.
Expected Behavior
Can modity oil buffer
Directory structure
/012 .git/
/010 .github/
/006 .tests/
/013 autoload/
/003 deps/
/004 doc/
/001 ftplugin/
/011 lua/
/009 scripts/
/002 tests/
/014 .gitignore
/007 CONTRIBUTING.md
/008 Makefile
/005 README.md
/015 stylua.toml
Repro
-- save as repro.lua
-- run with nvim -u repro.lua
-- DO NOT change the paths
local root = vim.fn.fnamemodify("./.repro", ":p")
-- set stdpaths to use .repro
for _, name in ipairs({ "config", "data", "state", "runtime", "cache" }) do
vim.env[("XDG_%s_HOME"):format(name:upper())] = root .. "/" .. name
end
-- bootstrap lazy
local lazypath = root .. "/plugins/lazy.nvim"
if not vim.loop.fs_stat(lazypath) then
vim.fn.system({
"git",
"clone",
"--filter=blob:none",
"--single-branch",
"https://github.com/folke/lazy.nvim.git",
lazypath,
})
end
vim.opt.runtimepath:prepend(lazypath)
-- install plugins
local plugins = {
"folke/tokyonight.nvim",
{
"stevearc/oil.nvim",
config = function()
require("oil").setup({
-- add any needed settings here
})
end,
},
-- add any other plugins here
}
require("lazy").setup(plugins, {
root = root .. "/plugins",
})
vim.cmd.colorscheme("tokyonight")
-- add anything else here
Did you check the bug with a clean config?
[X] I have confirmed that the bug reproduces with nvim -u repro.lua using the repro.lua file above.
I tracked it down to being caused by this plugin: https://github.com/Shatur/neovim-session-manager
Probably some code related to session saving broke it. I didn't find a fix, I just switched plugins.
|
2025-04-01T06:40:29.636684
| 2017-09-09T15:35:10
|
256446382
|
{
"authors": [
"Submanifold",
"steveharoz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10962",
"repo": "steveharoz/open-access-vis",
"url": "https://github.com/steveharoz/open-access-vis/issues/38"
}
|
gharchive/issue
|
Clique Community Persistence: A Topological Visual Analysis Approach for Complex Networks
Dear Steve,
thank you so much for this service! I have finally added all information about this project on OSF.io. Would you kindly include the relevant URIs?
All data: https://osf.io/rdktg/
Paper (PDF): https://mfr.osf.io/render?url=https://osf.io/td973/?action=download%26mode=render
Supplementary materials (PDF): https://mfr.osf.io/render?url=https://osf.io/e8dmp/?action=download%26mode=render
Additional explanations: https://mfr.osf.io/render?url=https://osf.io/ygd6m/?action=download%26mode=render
Hope that I am using this service correctly---this is my first project on OSF, inspired by your initiative. Thanks for your efforts!
Best,
Bastian
Thanks Bastian. The paper is updated. And I've put the OSF repository as the materials. The data category is for experiment results such as experiment subject responses or algorithmic runtime results. I'm not sure the data folder here fits into that. Feel free to comment in this issue if you think anything should change.
Hi Steve! I realize that I should have named the folder somewhat differently, but it actually contains the results of algorithmic runs as well as scripts to create them for yourself (in order to compare them with the ones we reported in the paper). I'll also add a link from the GitHub repository to the OSF one so that more people are capable of finding it.
Hi Steve, sorry to bother you again, but I wanted to ask whether you could add the OSF repository as a data repository of our paper, as well. I have uploaded the raw input files as well as the results of our analysis in order to make everything reproducible.
I took a look at the data folder, and I'm still not sure if it fits the criteria of experiment data rather than materials.
I don't see:
A data dictionary (how do I read the data?)
An explanation in the paper about how the data is used for an analysis (e.g., mean & SD of algorithm run times) or comparison (algorithm A vs algorithm B; simulation vs ground truth; algorithm vs human estimation; etc.). I see the conclusion mentions a contrast with existing methods, but I don't see a discussion of the comparison using the data.
I just want to make sure that I understand what's posted, as it seems to be quite outside the typical umbrella of experiment results.
OK, I understand that I was mistaken about the nature of these experimental data. I have added an updated README to the Data folder to explain what is in there. In short, I added:
all networks that we analysed in our paper
the results of this analysis in the form of so-called persistence diagrams
further data calculated with our method (topological centrality) and a comparison between existing centrality measures
comparisons between our method and existing methods in the form of distance matrices (one is generated with the old method, the other with our method)
embeddings & glyphs to reproduce the figures in our paper
I also added detailed instructions and automated scripts for reproducing every figure and every table in the paper. The results are included in order to make it possible to check the correctness of the calculations.
So, I was hoping that this (along with the supplied code) should make it possible to reproduce just about everything in the paper (and also try it out with new data, if possible).
Sorry for taking up so much time; I have to admit that Open Science is something new for me—but I'm very excited to try it out and see the benefits for other scientists.
|
2025-04-01T06:40:29.673504
| 2014-05-13T23:52:46
|
33449969
|
{
"authors": [
"benjamin-hull",
"sprynm"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10963",
"repo": "stevenschobert/instafeed.js",
"url": "https://github.com/stevenschobert/instafeed.js/issues/109"
}
|
gharchive/issue
|
Group items
Is there an index or anything I could hack to group items in a wrapping template?
IE... create rows, blocks of items for a slide show etc.
I'm closing this as it's very old, but for future reference, the mock option and success callback can be used to work with the data before inserting it into the DOM yourself.
|
2025-04-01T06:40:29.677089
| 2021-01-22T00:02:29
|
791580526
|
{
"authors": [
"Geczy",
"googlicius"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10964",
"repo": "steverydz/build-url",
"url": "https://github.com/steverydz/build-url/issues/49"
}
|
gharchive/issue
|
Doesn't work for duplicate params
Doesn't work for duplicate params
It just adds it as a second param, rather than replacing the first
I just made the same library, with the same name and even the same options. I found this package because I couldn't publish mine under the name build-url, so I published my library as a scoped package: @googlicius/build-url
It can replace existing query params, and TypeScript is supported. Please check it out.
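For comparison, replace-instead-of-append semantics are easy to state; here is a generic Python sketch (stdlib only, not the JS library's code):
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def set_param(url: str, key: str, value: str) -> str:
    """Set a query parameter, replacing any existing value for the key."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # later values win -> de-duplicates
    query[key] = value
    return urlunsplit(parts._replace(query=urlencode(query)))

print(set_param("https://example.com/?page=1", "page", "2"))
# https://example.com/?page=2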
|
2025-04-01T06:40:29.679617
| 2018-09-17T16:02:24
|
360937592
|
{
"authors": [
"bevzaanton",
"mayuroks"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10965",
"repo": "stfalcon-studio/ChatKit",
"url": "https://github.com/stfalcon-studio/ChatKit/issues/198"
}
|
gharchive/issue
|
[BUG] In DialogListAdapter, addItem(dialog) adds the item to end of the list but updates the first item
On adding a new dialog, item gets added but UI isn't updated.
/**
* Add dialog to the end of dialogs list
*
* @param dialog dialog item
*/
public void addItem(DIALOG dialog) {
items.add(dialog);
notifyItemInserted(0); <=== should be notifyItemInserted(items.size() - 1)
}
Hi! I can't understand how we missed it. I'll fix it. You can use the method addItem(int position, DIALOG dialog) until I update the library.
Yup! I am using the addItem(int position, DIALOG dialog) for now.
|
2025-04-01T06:40:29.685971
| 2017-06-19T18:27:16
|
236979020
|
{
"authors": [
"badshark",
"fzunino",
"nixtrace"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10966",
"repo": "stickermule/rump",
"url": "https://github.com/stickermule/rump/issues/16"
}
|
gharchive/issue
|
Add support to sync only keys matching a certain pattern
I've used this tool in the past and it works great, but now I have a case in which I need to dump keys that only match a certain pattern instead of dumping all the keys of a DB.
MATCH option of SCAN command can be used to add this support.
The following commit https://github.com/fzunino/rump/commit/67bc2e1dac6d46019019a8f6cf6996ba8a02cf70 adds this support adding an optional command line argument called match and using * as default value, maintaining the current semantic.
Let me know and I can submit a PR with this commit.
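For reference, SCAN with MATCH looks like this from a client (Python/redis-py shown for illustration; the PR itself is Go):
# Note: MATCH filters keys per iteration *after* they are fetched from the
# collection, so sparse matches can yield many empty iterations (see the
# maintainers' note below).
import redis

r = redis.Redis(host="localhost", port=6379)
for key in r.scan_iter(match="session:*", count=1000):
    print(key)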
@fzunino Yes, that's great idea!
Let's keep the new argument optional. :+1:
Hi,
We just released Rump 1.0.0 and dropped support for MATCH, considering that it could lead to inconsistencies since the filter is applied after SCAN: https://redis.io/commands/scan#the-match-option
It is important to note that the MATCH filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very little elements inside the collection, SCAN will likely return no elements in most iterations.
|
2025-04-01T06:40:29.702061
| 2020-08-19T13:18:52
|
681833873
|
{
"authors": [
"TheZoq2",
"codemaster97"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10967",
"repo": "stm32-rs/stm32f1xx-hal",
"url": "https://github.com/stm32-rs/stm32f1xx-hal/pull/260"
}
|
gharchive/pull-request
|
Fix small mistake in time.rs: kilohertz should be megahertz
Fix error in description of Megahertz
Perfect, thanks!
|
2025-04-01T06:40:29.725154
| 2021-01-17T22:57:14
|
787814775
|
{
"authors": [
"UNlDAN"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10968",
"repo": "stockmarkat/stockmarket-simulation",
"url": "https://github.com/stockmarkat/stockmarket-simulation/issues/156"
}
|
gharchive/issue
|
Fetching packages stuck at last package
Expected Behavior
For the install to continue
Actual Behavior
Steps to Reproduce the Problem
Clone the repo
Run yarn install
Stuck
Specifications
Platform: Ubuntu 18.04 LTS
Subsystem:
<EMAIL_ADDRESS>The platform "linux" is incompatible with this module
<EMAIL_ADDRESS>The platform "linux" is incompatible with this module
|
2025-04-01T06:40:29.749415
| 2024-10-15T13:58:00
|
2588875880
|
{
"authors": [
"dislbenn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10969",
"repo": "stolostron/backplane-operator",
"url": "https://github.com/stolostron/backplane-operator/pull/1019"
}
|
gharchive/pull-request
|
[Manual] Operator bundle update
Description
Please provide a brief description of the purpose of this pull request.
Related Issue
If applicable, please reference the issue(s) that this pull request addresses.
Changes Made
Provide a clear and concise overview of the changes made in this pull request.
Screenshots (if applicable)
Add screenshots or GIFs that demonstrate the changes visually, if relevant.
Checklist
[ ] I have tested the changes locally and they are functioning as expected.
[ ] I have updated the documentation (if necessary) to reflect the changes.
[ ] I have added/updated relevant unit tests (if applicable).
[ ] I have ensured that my code follows the project's coding standards.
[ ] I have checked for any potential security issues and addressed them.
[ ] I have added necessary comments to the code, especially in complex or unclear sections.
[ ] I have rebased my branch on top of the latest main/master branch.
Additional Notes
Add any additional notes, context, or information that might be helpful for reviewers.
Reviewers
Tag the appropriate reviewers who should review this pull request. To add reviewers, please add the following line: /cc @reviewer1 @reviewer2
/cc @cameronmwall @ngraham20
Definition of Done
[ ] Code is reviewed.
[ ] Code is tested.
[ ] Documentation is updated.
[ ] All checks and tests pass.
[ ] Approved by at least one reviewer.
[ ] Merged into the main/master branch.
/cherry-pick backplane-2.7
|
2025-04-01T06:40:29.755251
| 2024-06-24T08:27:23
|
2369581564
|
{
"authors": [
"thibaultmg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10970",
"repo": "stolostron/multicluster-observability-operator",
"url": "https://github.com/stolostron/multicluster-observability-operator/pull/1505"
}
|
gharchive/pull-request
|
ACM-10812: Fix status report
We are still having issues with status reporting. In the latest instance, the endpoint-operator fails to update the metrics-collector deployment because of a conflict, then sets the status to degraded. And it remains in that state even though the metrics are being forwarded.
This PR:
Adds retry on conflict error to the metrics-collector related updates.
Changes the comparison between the found resources version and the desired ones to use DeepDerivative. This will reduce the number of unnecessary updates.
Adds retry on conflict for status updates made by the metrics-collector.
Sort conditions before updating the status in the metrics-collector to ensure it handles the most recent one.
I tried to change as few things as possible.
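For reference, here is a minimal sketch of the get-mutate-update pattern that "retry on conflict" usually refers to, using client-go's retry helper together with a controller-runtime client. The function and object names are placeholders, not this PR's actual code:
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateMetricsCollector reapplies the desired spec on a fresh copy of the
// object whenever the API server answers 409 Conflict, instead of failing
// once and marking the status degraded.
func updateMetricsCollector(ctx context.Context, c client.Client, desired *appsv1.Deployment) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		current := &appsv1.Deployment{}
		key := types.NamespacedName{Namespace: desired.Namespace, Name: desired.Name}
		if err := c.Get(ctx, key, current); err != nil {
			return err
		}
		current.Spec = desired.Spec // reapply the change on the latest revision
		return c.Update(ctx, current)
	})
}
Combined with the DeepDerivative comparison, this should both cut down the number of updates issued and make the remaining ones resilient to concurrent writers.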
/retest
/cherrypick release-2.10
/cherrypick release-2.10
|
2025-04-01T06:40:29.757445
| 2024-04-05T18:03:15
|
2228549294
|
{
"authors": [
"ngraham20"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10971",
"repo": "stolostron/multiclusterhub-operator",
"url": "https://github.com/stolostron/multiclusterhub-operator/pull/1426"
}
|
gharchive/pull-request
|
Upgraded to Go1.21 and dependencies
Upgraded all dependencies and go version to 1.21
/retest
Trying to get unit tests to pass
Getting unit tests to pass
|
2025-04-01T06:40:29.772493
| 2023-12-11T15:35:01
|
2035947707
|
{
"authors": [
"chohmann",
"manvydasu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10972",
"repo": "stoplightio/elements",
"url": "https://github.com/stoplightio/elements/issues/2469"
}
|
gharchive/issue
|
Links with absolute urls do not work
Hello,
Context
Currently, it's impossible to add an absolute URL link that would redirect the user from a Stoplight Elements page to a different page located outside of the Stoplight Elements scope.
Steps to Reproduce
Have @stoplight/elements generated API be placed under specific route, i.e. my-domain.com/api#.
Use hash router.
Have some pages outside of the @stoplight/elements scope, i.e. my-domain.com/guides
Attempt to add a link in API documentation, which would direct user to guides page, i.e. [check out our guides page](/guides)
Current Behavior
The generated link points to my-domain.com/api#/guides instead of my-domain.com/guides
Expected Behavior
I would expect a valid link to be generated, which would point to my-domain.com/guides.
Possible Workaround/Solution
No workarounds, unless final host url is known in advance.
Version used: Latest @stoplight/elements version
@manvydasu We propose the following enhancement to achieve what you're after:
pick some sort of token to use in the link definition to tell us to put the host in the url (i.e. [check out our guides page]($$origin/guides)
when we come across this token, we'd replace it with the origin where the app is currently hosted
We will work with our Product team to prioritize this, but feel free to put up a PR for the proposed solution above in the meantime.
|
2025-04-01T06:40:29.775932
| 2021-10-25T09:28:54
|
1034896679
|
{
"authors": [
"abhayathapa",
"morten-nielsen"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10973",
"repo": "stoplightio/prism",
"url": "https://github.com/stoplightio/prism/issues/1929"
}
|
gharchive/issue
|
405 Method Not Allowed, response is invalid
Describe the bug
For operations not described in the OAS, Prism will return a 405, but without the required Allow header value.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/405
To Reproduce
Created a default API in Stoplight Studio, exposing a User endpoint, the one provided out of the box.
Running this request against the prism mock.
curl --request GET 'http://<IP_ADDRESS>:3100/user'
Expected behavior
HTTP/1.1 405, Method Not Allowed
Allow: POST
405 is coming for me for no reason. It was working 2 weeks back
405 is coming for me for no reason. It was working 2 weeks back
I'm not sure how this has anything to do with the issue I raised @abhayathapa?
|
2025-04-01T06:40:29.808878
| 2022-06-08T21:18:18
|
1265325083
|
{
"authors": [
"StaffOfHades",
"alvarosabu",
"arpadgabor",
"do-web"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10974",
"repo": "storyblok/storyblok-nuxt",
"url": "https://github.com/storyblok/storyblok-nuxt/issues/149"
}
|
gharchive/issue
|
Cache option not working
If I try to enable the cache option:
buildModules: [ ["@storyblok/nuxt", { accessToken: "xxxxx", apiOptions: { cache: { type: "memory" }, }, }] ],
I get an error:
Cannot add property accessToken, object is not extensible
This is also not working:
buildModules: [ ["@storyblok/nuxt", { accessToken: "xxxxx", apiOptions: { accessToken: "xxxxx", cache: { type: "memory" }, }, }] ],
I am using the latest version.
If I remove the apiOptions property, storyblok creates two requests:
301 https://api.storyblok.com/v2/cdn/stories/en?version=published&token=xxxxx&cv=undefined =>
200 https://api.storyblok.com/v2/cdn/stories/en?cv=1654702667&token=xxxxx&version=published
How can I prevent this 301?
Confirming this issue. Whenever I pass apiOptions to the module it starts failing with [nuxt] [request error] Cannot add property accessToken, object is not extensible.
The problematic code seems to be this bit in the storyblokInit function (code taken from dist folder in node_modules):
const { bridge, accessToken, use = [], apiOptions = {} } = pluginOptions;
apiOptions.accessToken = apiOptions.accessToken || accessToken;
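// Note (added for clarity): apiOptions here is the options object passed to the
// module, which appears to be sealed by the time this runs, so the assignment
// above throws "Cannot add property accessToken, object is not extensible".
// Copying it first, e.g. const apiOptions = { ...pluginOptions.apiOptions },
// would avoid mutating the sealed object.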
This should be fixed as of V4.3.0. Check https://github.com/storyblok/storyblok-nuxt/issues/170#issuecomment-1239527769.
Closing since it hasn't been active in a while and the latest comment from @StaffOfHades. If it's still happening feel free to re-open providing a valid reproduction link
Thanks!
|
2025-04-01T06:40:29.848307
| 2023-09-03T00:35:31
|
1878890707
|
{
"authors": [
"kasperpeulen",
"legobeat",
"yannbf"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10975",
"repo": "storybookjs/test-runner",
"url": "https://github.com/storybookjs/test-runner/pull/348"
}
|
gharchive/pull-request
|
Upgrade jest dependencies to v29 [rebased]
#319 + #345 but rebased on next.
Blocked by
#354
[ ] Release of #349
Dropping node 12 is fine, storybook itself also doesn't support node 12 anymore.
@legobeat Sounds good to me.
@yannbf Let's do this when you are back.
Hey there! Sorry for not checking this sooner. I'll update this PR and test it out next week!
|
2025-04-01T06:40:29.850506
| 2022-05-23T14:11:07
|
1245231027
|
{
"authors": [
"Etienne-Buschong",
"Yogu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10976",
"repo": "storybookjs/webpack-angular-types-plugin",
"url": "https://github.com/storybookjs/webpack-angular-types-plugin/issues/21"
}
|
gharchive/issue
|
[Bug] Protected properties are included in args table
The args table currently includes protected properties of components without indicating that they are protected.
I guess in most cases, protected properties do not need to be shown at all. In an API documentation, they are only relevant for classes that are intended to be subclassed. In other cases, protected members usually are an artifact of how a component is implemented instead of part of the API.
The angular framework itself excludes protected members from the public API except for classes that are explicitly marked as non-final in the documentation.
Yes, the plan was to exclude protected and private properties and I simply overlooked the protected properties.
Since the target audience of the generated types are developers consuming public components, and not extending them, I think it is fine to always exclude protected properties.
|
2025-04-01T06:40:29.851563
| 2016-11-30T13:18:11
|
192563241
|
{
"authors": [
"arunoda",
"wmonk"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10977",
"repo": "storybooks/react-storybook",
"url": "https://github.com/storybooks/react-storybook/pull/634"
}
|
gharchive/pull-request
|
Update Typescript Definition File closes #632
This adds the addDecorator exported function to the definitions.
Awesome.
Thanks.
|
2025-04-01T06:40:29.856850
| 2017-02-10T05:18:08
|
206711028
|
{
"authors": [
"ajhyndman",
"andrewcashmore",
"delijah",
"eatrocks",
"joscha",
"ndelangen",
"tomitrescak"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10978",
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/690"
}
|
gharchive/issue
|
Webpack 2
Hi, is there any guideline on how we can move storybook to webpack 2?
Thanks!
refs #637
Note to self:
This fork seems to have done pretty much the same things I have also done, could be interesting to compare at some point.
@ndelangen Is this the proper issue to watch for progress on webpack 2 support in storybook?
We'll be releasing a 3.0.0-alpha.01 soonish, which will have webpack 2 support.
We'll probably have to write some good migration guides before the real release.
Webpack 2 support is already in master, so you could already give it a try manually by either npm link or local file dependencies.
We're real close to a 3.0.0-alpha.01 release people!
I think after this: https://github.com/storybooks/storybook/issues/773#issuecomment-297679733 I will be publishing!
Any updates on this?
I second this...any update?
I'm working on it as much as I can 👍
It looks like #773 was unblocked today. Is there anything still holding up a 3.0.0-alpha release? :smile:
Not too much, 🔬 details I think and then 🚢
I'm going to release 3.0.0-alpha.0 in a short while, so this can be closed.
|
2025-04-01T06:40:29.860768
| 2017-04-15T17:39:28
|
221962143
|
{
"authors": [
"shilman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10979",
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/issues/885"
}
|
gharchive/issue
|
2 storyshots libraries???
Issue by pyros2097
Saturday Mar 04, 2017 at 12:18 GMT
Originally opened as https://github.com/storybooks/storyshots/issues/82
Why are there 2 storyshots libraries? Which one should I use? It seems they are different versions also.
npm i -D @kadira/storyshots
npm i -D storyshots
Comment by mnmtanish
Saturday Apr 01, 2017 at 11:07 GMT
Please use the storyshots module
|
2025-04-01T06:40:29.864606
| 2017-07-21T22:40:33
|
244799635
|
{
"authors": [
"danielduan",
"jribeiro",
"shilman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10980",
"repo": "storybooks/storybook",
"url": "https://github.com/storybooks/storybook/pull/1505"
}
|
gharchive/pull-request
|
Add Docgen info to PropTable
Issue:
Storybook Info Addon currently does not support React docgen for documenting components. This was actually implemented on the old repo but got lost on the migration:
https://github.com/storybooks/react-storybook-addon-info/commit/092e10a736d9381ffdab5609ec7585df9d2cb7a1
What I did
Added support for react docgen on the PropTable component and documentation
How to test
Follow the instructions on the readme on a repo supporting flowtype. normal npm link approaches should be helpful
@jribeiro Thanks for contributing this! Is there any chance you can add this to the examples/cra-kitchen-sink example as part of this PR? We're starting to make sure that all PRs have good examples there.
To get the example running on your machine:
yarn && yarn bootstrap
cd examples/cra-kitchen-sink
yarn storybook
Many thanks again and please let me know if you have any questions!
Hi @shilman, thanks for that. I've added an example to the examples/cra-kitchen-sink. Let me know if you need any other change.
Hi @danielduan, is this not needed anymore?
Sorry about that, I tried to fix the merge conflicts and when I pushed to your branch, this got closed automatically. New PR is above ^
|
2025-04-01T06:40:29.866946
| 2019-07-17T15:00:55
|
469272569
|
{
"authors": [
"piotrf"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10981",
"repo": "storycopter/storycopter",
"url": "https://github.com/storycopter/storycopter/issues/22"
}
|
gharchive/issue
|
Opening titles component
[x] themeable variations
[x] data schema
[x] responsiveness
[x] micro interactions: hovers/active
[x] abstract anilink/button component variation?
Done in https://github.com/storycopter/storycopter/commit/94b649e4b4565330fd569bacb513d5882bc665c8
|
2025-04-01T06:40:29.870729
| 2016-01-11T11:09:23
|
125920771
|
{
"authors": [
"marc-hanheide"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10982",
"repo": "strands-project/v4r_ros_wrappers",
"url": "https://github.com/strands-project/v4r_ros_wrappers/pull/24"
}
|
gharchive/pull-request
|
and another package version not initialised
before a repository can be released, all package.xml files need to have the same version or prepare-release is doomed to fail :-(
I'll merge it myself and re-release afterwards
now the re-release worked: https://lcas.lincoln.ac.uk/jenkins/job/prepare-release/384/
|
2025-04-01T06:40:29.873610
| 2024-05-25T01:30:57
|
2316551808
|
{
"authors": [
"agouin",
"scirner22"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10983",
"repo": "strangelove-ventures/cosmos-operator",
"url": "https://github.com/strangelove-ventures/cosmos-operator/issues/420"
}
|
gharchive/issue
|
StatefulJob can't easily tar node data
The StatefulJob docs state this, and I'm attempting to use it for the same function.
Strangelove uses it to compress and upload snapshots of chain data.
I'm having a problem achieving this though. On the Provenance chain our non-pruned nodes contain about 1TB of data. Our PVCs are set up for 1.25TB and to grow when they are at 90% used. With about 25% space overhead, there's not enough space to compress the data and store the tar.gz on the same volume.
The two ways I can think of for the StatefulJob to support this would be the following:
Allow configuration to specify additional PV/PVCs that are created/cleaned up
Allow setting an additional snapshot PV size. Then, when the PV is restored from the snapshot, it is edited to increase its size further based on this config
I can try tackling this if we settle on the solution.
We currently handle this by doing a streamed compress and upload so that storage is not necessary for the compressed file prior to upload.
For resumable uploads though, it would be great to have this feature so that the file is compressed once, and uploads can be retried.
We could add an additional parameter that would allow creating the StatefulJob's PVC with something like twice the size so that additional room was available for these kinds of operations.
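To make the streamed approach mentioned above concrete, here is a rough Go sketch of compress-while-uploading via an io.Pipe, so the archive never has to fit on the PVC. The upload callback is a stand-in for whatever object-store client is used; this is not the operator's actual code:
package snapshot

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"os"
	"path/filepath"
)

// streamSnapshot tars and gzips dataDir into one end of a pipe while the
// upload callback consumes the other end, so no compressed file is ever
// written to disk.
func streamSnapshot(dataDir string, upload func(r io.Reader) error) error {
	pr, pw := io.Pipe()

	go func() {
		gz := gzip.NewWriter(pw)
		tw := tar.NewWriter(gz)
		err := filepath.Walk(dataDir, func(path string, info os.FileInfo, walkErr error) error {
			if walkErr != nil || !info.Mode().IsRegular() {
				return walkErr
			}
			hdr, err := tar.FileInfoHeader(info, "")
			if err != nil {
				return err
			}
			if hdr.Name, err = filepath.Rel(dataDir, path); err != nil {
				return err
			}
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()
			_, err = io.Copy(tw, f)
			return err
		})
		// Close writers in order so trailers are flushed, then hand any
		// error to the reader side of the pipe.
		if cerr := tw.Close(); err == nil {
			err = cerr
		}
		if cerr := gz.Close(); err == nil {
			err = cerr
		}
		pw.CloseWithError(err)
	}()

	return upload(pr) // e.g. a multipart upload consuming the stream
}
The trade-off noted above still holds: because nothing is persisted, a failed upload means recompressing from scratch, which is what the extra PVC headroom option would address.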
|
2025-04-01T06:40:29.876715
| 2024-08-27T22:06:06
|
2490451932
|
{
"authors": [
"Reecepbcups",
"mrzigha",
"nourspace",
"vimystic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10984",
"repo": "strangelove-ventures/heighliner",
"url": "https://github.com/strangelove-ventures/heighliner/issues/273"
}
|
gharchive/issue
|
fix alpine version crashes for cosmwasm based networks
reference: https://github.com/CosmWasm/wasmvm/issues/523
Currently Osmosis builds cause a cgo panic. This also affected wormhole when testing for Joel.
I have found a solution to this problem if necessary CosmWasm/wasmvm#576
I think we should submit a PR on the Heighliner project to systematically use an Alpine 3.18 image, at least for projects requiring cosmwasmvm.
@vimystic @0xPuncker
Considering generalizing all of heighliner to have a choice of alpine base and to install the golang version from the mod file of the respective chains. Similar to what you suggest, but for everything.
Alternatively, if the desired golang/alpine combo does not exist, we then pass in an ARG to the Dockerfile to do the needful.
Will run it by people internally and keep you posted on this issue.
|
2025-04-01T06:40:29.886884
| 2018-06-14T21:21:49
|
332569040
|
{
"authors": [
"demobo-com"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10985",
"repo": "strapi/strapi-docker",
"url": "https://github.com/strapi/strapi-docker/issues/31"
}
|
gharchive/issue
|
Quickstart is not working on my mac
api_1 | [2018-06-14T21:18:40.844Z] info Creating your application... It might take a few seconds.
api_1 | [2018-06-14T21:18:40.928Z] error $ strapi new can only be called in an empty directory.
api_1 | [2018-06-14T21:18:41.177Z] error This command can only be used inside a Strapi project.
strapi-docker_api_1 exited with code 0
docker pull node:9.11.1-alpine
and redo quickstart. Now it works.
|
2025-04-01T06:40:30.029194
| 2018-03-06T14:54:52
|
302732519
|
{
"authors": [
"alexppxela",
"simonvadee",
"such",
"t-bast"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10986",
"repo": "stratumn/go-indigocore",
"url": "https://github.com/stratumn/go-indigocore/pull/356"
}
|
gharchive/pull-request
|
tmpop: add tendermint evidence
At last, we're able to compute full tendermint evidence for our segments!
We can't really use the lite package as it's meant to be used by clients and is quite complex to fit in our infrastructure (it acts as a proxy node) but I think the data we're able to access in the votes is enough to produce a correct proof.
It's a bit complicated to do so because we can do it only at the beginning of block N+3 so we need to be careful with off-by-one errors in many places.
I've tested on a filetmpop that the evidence produced looks correct and is correctly validated by the Verify() method, but this is a tricky piece of code so I'm counting on you to find potential issues during the PR ;).
Don't hesitate to drop by my desk to draw diagrams of what object signs what part if you're unsure.
Reviewed 10 of 10 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved, some commit checks failed.
cs/evidence_tendermint_test.go, line 189 at r1 (raw file):
}
assert.True(t, e.Verify(linkHash), "Proof should be verified")
supernit: shouldn't this assert be in its own test case?
tmpop/tmpop.go, line 376 at r1 (raw file):
linkHashes, err := t.getCommitLinkHashes(evidenceHeight)
if err != nil {
log.Warn("Could not get link hashes for this block. Evidence will not be generated.")
we should add more information about the block in the warning message
tmpop/tmpop.go, line 386 at r1 (raw file):
validatorHash, err := t.getValidatorHash(evidenceHeight)
if err != nil {
log.Warn("Could not get validator hash for this block. Evidence will not be generated.")
here as well
Comments from Reviewable
at last we have proper evidence! good job!
Review status: all files reviewed at latest revision, 3 unresolved discussions, some commit checks failed.
Comments from Reviewable
Review status: all files reviewed at latest revision, 3 unresolved discussions, some commit checks failed.
cs/evidence_tendermint_test.go, line 189 at r1 (raw file):
Previously, such (Adrien Montfort) wrote…
supernit: shouldn't this assert be in its own test case?
I liked making sure that the proof was valid before I was passing it on to the tests that might modify it.
But now that it works, maybe it's not necessary, I'll have a second look at it.
tmpop/tmpop.go, line 376 at r1 (raw file):
Previously, such (Adrien Montfort) wrote…
we should add more information about the block in the warning message
Good idea.
tmpop/tmpop.go, line 386 at r1 (raw file):
Previously, such (Adrien Montfort) wrote…
here as well
Good idea as well :).
Comments from Reviewable
Note that I'd like to work in the next few weeks on setting up a real metrics/monitoring stack for Indigo; this will be the occasion to provide good analytics on what happens in TMPoP.
Review status: 8 of 10 files reviewed at latest revision, 3 unresolved discussions.
Comments from Reviewable
And a first brick for IndigoEntreprise. Nice!
Review status: 8 of 10 files reviewed at latest revision, all discussions resolved.
Comments from Reviewable
Reviewed 9 of 10 files at r1, 1 of 2 files at r2.
Review status: 9 of 10 files reviewed at latest revision, 2 unresolved discussions, some commit checks failed.
tmpop/tmpoptestcases/evidence.go, line 150 at r1 (raw file):
}
tmClientMock.EXPECT().Block(int64(5)).Return(blocks[5], nil).AnyTimes()
nit: can factorize all block initialization
tmpop/tmpoptestcases/evidence.go, line 310 at r1 (raw file):
err := makeQuery(h, tmpop.GetSegment, linkHash4, got)
assert.NoError(t, err)
assert.Len(
nit: assert.Empty()
Comments from Reviewable
Yeah !
Review status: 9 of 10 files reviewed at latest revision, 2 unresolved discussions, some commit checks failed.
tmpop/tmpoptestcases/evidence.go, line 150 at r1 (raw file):
Previously, alexppxela (Alexandre Thibault) wrote…
nit: can factorize all block initialization
Not really, because a few blocks aren't built exactly like the others (some don't have votes).
This E2E test is big and a bit hard to follow I'll admit :)
Comments from Reviewable
Review status: 8 of 10 files reviewed at latest revision, 2 unresolved discussions.
tmpop/tmpoptestcases/evidence.go, line 310 at r1 (raw file):
Previously, alexppxela (Alexandre Thibault) wrote…
nit: assert.Empty()
Done.
Comments from Reviewable
Reviewed 1 of 2 files at r2, 1 of 1 files at r3.
Review status: all files reviewed at latest revision, all discussions resolved, some commit checks failed.
Comments from Reviewable
looking great !
Reviewed 7 of 10 files at r1, 1 of 2 files at r2, 1 of 1 files at r3.
Review status: all files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.
cs/evidence_tendermint_test.go, line 171 at r3 (raw file):
// generates a valid block and its proof, and returns the link
// and the evidence.
func CreateTendermintProof(t *testing.T, linksCount int) (*types.Bytes32, *evidences.TendermintProof) {
nit: does this need to be exported ?
cs/evidences/evidences.go, line 192 at r2 (raw file):
// We validate that nodes signed the header.
if !p.validateVotes(p.Header, p.HeaderVotes) {
as discussed IRL: are the (potential) byzantine votes included in p.HeaderVotes by tendermint? if yes, then we should make sure that more than 1/3 of the votes are valid, not all of them.
tmpop/tmClient.go, line 86 at r3 (raw file):
for _, tx := range tmBlock.Block.Txs {
tmTx, err := unmarshallTx(tx)
if !err.IsOK() || tmTx.TxType != CreateLink {
why do you check if the transaction was a CreateLink (even though we only have this type) ?
tmpop/tmpoptestcases/evidence.go, line 337 at r3 (raw file):
// vote creates a valid vote for a given header.
// It simulates nodes signing a header and is crucial for the proof.
func vote(header *tmtypes.Header) []*evidences.TendermintVote {
is this the same function as above (cs/evidence_tendermint_test.go) ? if yes, is there a way to factorize ?
Comments from Reviewable
The TendermintProof verification should be updated in JS now! ;)
Reviewed 1 of 2 files at r2.
Review status: all files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.
Comments from Reviewable
The developer named Bastien is not available anymore. Please try another developer.
Review status: all files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.
cs/evidence_tendermint_test.go, line 171 at r3 (raw file):
Previously, simonvadee (Simon Vadée) wrote…
nit: does this need to be exported ?
I think it's useful yes
cs/evidences/evidences.go, line 192 at r2 (raw file):
Previously, simonvadee (Simon Vadée) wrote…
as discussed IRL: are the (potential) byzantine votes included in p.HeaderVotes by tendermint ? if yes, then we should make sure than more than 1/3 of the votes are valid, not all of them.
Yes good point! I'll dive more into simulating byzantine nodes next.
tmpop/tmClient.go, line 86 at r3 (raw file):
Previously, simonvadee (Simon Vadée) wrote…
why do you check if the transaction was a CreateLink (even though we only have this type) ?
It feels more future-proof when/if we add other TxTypes :)
We most likely only want to create evidence for CreateLink operations (even though I admit it depends on what other operations we add in the future).
But the main usecase was that we might at some point have an AddEvidence operation to store external evidence, and we don't want to generate evidence on evidence...but that's still a bit blurry so we'll see later.
tmpop/tmpoptestcases/evidence.go, line 337 at r3 (raw file):
Previously, simonvadee (Simon Vadée) wrote…
is this the same function as above (cs/evidence_tendermint_test.go) ? if yes, is there a way to factorize ?
Yes it is, but I'm not a big fan of factorizing it yet, as Rob Pike says in Go sometimes a little duplication is better...if more functions need to be shared then we'll reevaluate :)
Comments from Reviewable
cs/evidences/evidences.go, line 192 at r2 (raw file):
Previously, t-bast (Bastien Teinturier) wrote…
Yes good point! I'll dive more into simulating byzantine nodes next.
That's a bit different than byzantine votes. A wrong signature is a bad way to try to mess with the system since it's so easily catchable. I'm pretty sure bad signatures are not included in the block header by the Tendermint engine.
A byzantine vote would be voting twice on blocks with the same height for instance.
But yeah we should check that we have at least 2/3+ signatures.
Comments from Reviewable
^^
Comments from Reviewable
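As a footnote to the 2/3 discussion above, this is roughly the shape of the voting-power check being suggested. The vote and validator types are illustrative stand-ins, not the actual go-indigocore or Tendermint types:
package evidences

type vote struct {
	validatorAddr string
	signatureOK   bool // result of verifying the signature over the header
}

type validator struct {
	addr  string
	power int64
}

// hasQuorum returns true when validators with verified signatures account
// for strictly more than two thirds of the total voting power.
func hasQuorum(votes []vote, validators []validator) bool {
	powerByAddr := make(map[string]int64, len(validators))
	var total int64
	for _, v := range validators {
		powerByAddr[v.addr] = v.power
		total += v.power
	}
	var signed int64
	for _, v := range votes {
		if v.signatureOK {
			signed += powerByAddr[v.validatorAddr]
		}
	}
	// Integer form of signed/total > 2/3, avoiding division.
	return 3*signed > 2*total
}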
|
2025-04-01T06:40:30.040822
| 2023-09-30T13:29:03
|
1920261385
|
{
"authors": [
"aprams",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10987",
"repo": "strawberry-graphql/strawberry-graphql-django",
"url": "https://github.com/strawberry-graphql/strawberry-graphql-django/pull/380"
}
|
gharchive/pull-request
|
Fix: DjangoOptimizerExtension corrupts nested objects' fields' prefetch objects
Description
This PR aims to resolve issue #379 .
I included a reproducible test and a suggested fix, where the Prefetch object gets deepcopied to avoid the side effects from add_prefix.
Considerations:
There were multiple ways to do this, including copying the OptimizerStore at a higher level in the execution stack, but this should be the least invasive one
I used deepcopy for now, as Prefetch didn't offer an easier way of copying the object that I know of; I'm happy about suggestions on improving this
I'm also very open to improving the test case included. If you know a better way to do it without the custom type setup, please let me know.
Types of Changes
[ ] Core
[x] Bugfix
[ ] New feature
[ ] Enhancement/optimization
[ ] Documentation
Issues Fixed or Closed by This PR
#379
Checklist
[x] My code follows the code style of this project.
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes.
[x] I have tested the changes and verified that they work and don't break anything (as well as I can manage).
I love the work you do here, thanks a lot for the really awesome work! ❤️
Codecov Report
All modified lines are covered by tests :white_check_mark:
Comparison is base (a17b51b) 87.98% compared to head (c03dc67) 87.99%.
Additional details and impacted files
@@ Coverage Diff @@
## main #380 +/- ##
=======================================
Coverage 87.98% 87.99%
=======================================
Files 33 33
Lines 2971 2973 +2
=======================================
+ Hits 2614 2616 +2
Misses 357 357
Files                            Coverage Δ
strawberry_django/optimizer.py   89.13% <100.00%> (+0.06%) :arrow_up:
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:40:30.043879
| 2024-01-24T10:15:45
|
2097921153
|
{
"authors": [
"erikwrede",
"patrick91"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10988",
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/issues/3359"
}
|
gharchive/issue
|
Deprecate starlite
Now that we have Litestar support I think we can deprecate Starlite 😊
### Tasks
- [ ] Add notice in the docs
- [ ] Use typing_extensions.deprecated to mark class as deprecated (see https://peps.python.org/pep-0702/)
- [ ] Trigger deprecation warning (with test), this might
Upvote & Fund
We're using Polar.sh so you can upvote and help fund this issue.
We receive the funding once the issue is completed & confirmed by you.
Thank you in advance for helping prioritize & fund our backlog.
@Birdi7 Please feel free to go ahead with this 😊 If you need assistance or a review, feel free to ping me
|
2025-04-01T06:40:30.056813
| 2024-05-27T15:13:17
|
2319373394
|
{
"authors": [
"ShtykovaAA",
"bellini666",
"coady",
"patrick91"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10989",
"repo": "strawberry-graphql/strawberry",
"url": "https://github.com/strawberry-graphql/strawberry/issues/3517"
}
|
gharchive/issue
|
default_factory doesn't work
Hi!
I use default_factory to initialize my variable, but the variable always returns the same result. It seems like default_factory doesn't work and it always returns the same result of the function.
Here is example to reproduce:
https://play.strawberry.rocks/?gist=a7a5e62ffe4e68696b44456398d11104
Upvote & Fund
We're using Polar.sh so you can upvote and help fund this issue.
We receive the funding once the issue is completed & confirmed by you.
Thank you in advance for helping prioritize & fund our backlog.
I can reproduce this, I think it's because we store the default value when creating the object, see:
I don't think this can be supported (and still be compliant with the spec). GraphQL defaults are static in the schema. strawberry export-schema ...:
type Mutation {
update1(fields: MyInputType!): String!
update2(fields: MyInputType!): String!
}
input MyInputType {
field1: String = "318fbf6e-73b6-40eb-932f-0b66ba935b75"
}
type Query {
hello: String!
}
Another issue is that the client sending an explicit null is valid and semantically different. So MyInputType would need logic like
if field1 in (None, UNSET):
field1 = uuid_pkg.uuid4()
That may seem like a workaround, but is actually the only correct implementation.
@patrick91 and also, can I ask: in what situations do we then need to use default_factory if this field is static, when we can use only default?
@patrick91 and also, can I ask: in what situations do we then need to use default_factory if this field is static, when we can use only default?
I'm not sure to be honest, I'll need to think about this a bit
I do think it might be a flaw, or at least something surprising, so maybe we need to reconsider it
I do understand the schema's default value issue, but I do agree with this comment. I actually had to work around a similar issue on strawberry-resources when exporting form data for the field, as a dynamic default value would not actually make sense there.
So my vote would be to actually change the behavior to fix this issue, especially since we are still 0.x =P, and mention it as a "possible breaking change" in the changelog, mentioning the use of default as the correct way of relying on the older behavior
just throwing out some ideas to make the change less painful
we could add a static_factory or schema_factory which will have the current behaviour; default_factory will mimic dataclasses' and pydantic's behaviour
have a configuration option to disable static defaults
just change the behaviour
@coady sorry to ping you again, but do you use default_factory for defaults at the schema level? 😊 what's your use case exactly?
@coady sorry to ping you again, but do you use default_factory for defaults at the schema level? 😊 what's your use case exactly?
The only use I'm aware of (and use) is for mutables, as dataclasses requires. Any valid value is a valid default value, including [] and {}.
I'd be happy with a cleaner alternative for mutables. This is forbidden (but is valid GraphQL):
q: list[float] = [0.5]
So instead I have to use default_factory or this:
q: list[float] = (0.5,) # type: ignore
which mypy complains about.
just throwing out some ideas to make the change less painful
we could add a static_factory or schema_factory which will have the current behaviour; default_factory will mimic dataclasses' and pydantic's behaviour
have a configuration option to disable static defaults
just change the behaviour
option 1 could also have a codemod to make the update easier 😊
I think we could do option 2 for the time being
|
2025-04-01T06:40:30.096631
| 2023-10-23T13:48:03
|
1957218713
|
{
"authors": [
"Quintar",
"Razer0123"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10990",
"repo": "streamdeck-linux-gui/streamdeck-linux-gui",
"url": "https://github.com/streamdeck-linux-gui/streamdeck-linux-gui/issues/107"
}
|
gharchive/issue
|
Can't build
In raising this issue I confirm that
[X] I have fully completed the issue template
[X] I have searched open and closed issues for duplicates
[X] I have read the Contribution Guidelines
[X] I have read the Code of Conduct
[X] I have read the Documentation
Describe the bug
Trying to build but failing
Steps to reproduce the behavior
First command of the guide
git clone <https://github.com/streamdeck-linux-gui/streamdeck-linux-gui.git>
Gives file or directory not existing error
If I try
git clone https://github.com/streamdeck-linux-gui/streamdeck-linux-gui.git
and then trying to build I get
/usr/bin/python: No module named build
ALSO
the fedora script results in this error
` The headers or library files could not be found for zlib,
a required dependency when compiling Pillow from source.
Please see the install instructions at:
https://pillow.readthedocs.io/en/latest/installation.html
Traceback (most recent call last):
File "<string>", line 852, in <module>
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/wheel/bdist_wheel.py", line 364, in run
self.run_command("build")
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build.py", line 131, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run
self.build_extensions()
File "<string>", line 687, in build_extensions
RequiredDependencyException: zlib
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alessandro/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/alessandro/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alessandro/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 434, in build_wheel
return self._build_with_temp_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 419, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 507, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-s3ikdncm/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 903, in <module>
RequiredDependencyException:
The headers or library files could not be found for zlib,
a required dependency when compiling Pillow from source.
Please see the install instructions at:
https://pillow.readthedocs.io/en/latest/installation.html
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pillow
Failed to build hidapi pillow
ERROR: Could not build wheels for hidapi, pillow, which is required to install pyproject.toml-based projects
`
Expected behavior
building from source
Screenshots
No response
System Information
Fedora 39
Stream Deck Version
No response
Could you try
python -m pip install ./
In the streamdeck-linux-gui directory?
python -m pip install ./
ERROR: Package 'streamdeck-linux-gui' requires a different Python: 3.12.0 not in '<3.12,>=3.11'
Also, I had streamdeck installed already, but now it gives this error
Traceback (most recent call last): File "/home/alessandro/.local/bin/streamdeck", line 5, in <module> from streamdeck_ui.gui import start ModuleNotFoundError: No module named 'streamdeck_ui'
Try installing python 3.11 or change pyproject.toml on line 15 to include your current version (that's easier in my opinion).
Also remove the old streamdeck_ui, it's way out of date.
Tried uninstalling
pip3 uninstall streamdeck-ui
WARNING: Skipping streamdeck-ui as it is not installed.
Tried installing with the command, but same error as before
note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pillow Successfully built streamdeck-linux-gui Failed to build pillow ERROR: Could not build wheels for pillow, which is required to install pyproject.toml-based projects
Fixed by installing libjpeg-turbo-devel and zlib-devel
|
2025-04-01T06:40:30.102680
| 2017-11-30T10:42:09
|
278073697
|
{
"authors": [
"digitalkaoz",
"streamich"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10991",
"repo": "streamich/unionfs",
"url": "https://github.com/streamich/unionfs/pull/8"
}
|
gharchive/pull-request
|
expose Readable & Writable
fixes #1
There is a way to add streams to unionfs, but it is not trivial.
Your implementation is obviously invalid; all it does is create some "placeholders".
What is the reason for doing so?
graceful-fs tries to get those streams (fs.prototype.ReadStream, fs.prototype.WriteStream); if they are not there, it fails hard.
yes, it seems wrong as it still doesn't work correctly here (just a few steps later)
I am OK including this hack. However this introduces a dependency:
import {Readable, Writable} from "stream";
Which may break browser users. It has to be done somehow so as to not break browser builds.
Maybe something like this
const isBrowser = typeof __filename === 'undefined';
?
yeah made it conditional...
another question here:
I think I'm using it altogether somehow wrong:
const {ufs} = require('unionfs');
const {Volume} = require('memfs');
const fs = require('fs');
ufs
.use(fs)
.use(Volume.fromJSON({"foo.js": ""}, "/tmp"))
console.log(ufs.existsSync(__filename)); // false
console.log(fs.existsSync(__filename)); // true
console.log(ufs.existsSync("/tmp/foo.js")); // true
why does it fail when trying to stat an existing file? the in-memory volume is mounted somewhere else, so __filename shouldn't be affected?!
@streamich any idea on this one?
I will take a look at it this evening.
@streamich thanks for merging and fixing! did you have some time to look into my example above?
@digitalkaoz Sorry, I keep forgetting about this, will create an issue.
@digitalkaoz Should be fixed here: https://github.com/streamich/unionfs/issues/10#issuecomment-350269633
Thanks for the find.
|
2025-04-01T06:40:30.191824
| 2023-04-13T13:12:24
|
1666397435
|
{
"authors": [
"AlecHsiao",
"nickjsanders"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10993",
"repo": "stressapptest/stressapptest",
"url": "https://github.com/stressapptest/stressapptest/issues/107"
}
|
gharchive/issue
|
Is it possible to know why we have a 20 second delay between issuing the stressapptest command and the system running
We run the command to enable StressAppTest with a testing time. Take our 100 second test, for example: after issuing the command it is 04/13 21:02:32, but it starts from 21:02:51, so it takes around 20 seconds before running. Is it possible to know why we have the 20 seconds before running?
2023/04/13-21:02:32(CST) Log: Prefer plain malloc memory allocation.
2023/04/13-21:02:32(CST) Log: Using mmap() allocation at 0x7f0e4b600000.
2023/04/13-21:02:32(CST) Stats: Starting SAT, 243912M, 100 seconds
2023/04/13-21:02:50(CST) Log: region number 8 exceeds region count 8
2023/04/13-21:02:51(CST) Log: Region mask: 0xff
2023/04/13-21:03:01(CST) Log: Seconds remaining: 90
2023/04/13-21:03:11(CST) Log: Seconds remaining: 80
On Thu, Apr 13, 2023 at 6:12 AM AlecHsiao @.***> wrote:
is it possible to know why we have the 20 seconds before running?
2023/04/13-21:02:32(CST) Stats: Starting SAT, 243912M, 100 seconds
Stressapptest fills memory with patterns before starting the test. You have
a large amount of memory
and it's likely that initializing the memory takes 20 seconds. Do you have
a full log with memory bandwidth indicated?
Message ID: @.***>
Yes, we changed the log level to 20 to capture more data. I think you're correct; it is filling data during that period. Thanks for your help with that.
2023/02/22-08:00:46(CST) Starting Fill Threads 0: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 1: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 2: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 3: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 4: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 5: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 6: 30511 pages
2023/02/22-08:00:46(CST) Starting Fill Threads 7: 30518 pages
2023/02/22-08:00:46(CST) Log: Thread 0 running on core ID 81 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 0
2023/02/22-08:00:46(CST) Log: Thread 2 running on core ID 103 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 2
2023/02/22-08:00:46(CST) Log: Thread 1 running on core ID 0 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 1
2023/02/22-08:00:46(CST) Log: Thread 3 running on core ID 19 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 3
2023/02/22-08:00:46(CST) Log: Thread 6 running on core ID 44 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 6
2023/02/22-08:00:46(CST) Log: Thread 5 running on core ID 121 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 5
2023/02/22-08:00:46(CST) Log: Thread 4 running on core ID 116 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 4
2023/02/22-08:00:46(CST) Log: Thread 7 running on core ID 52 mask FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF).
2023/02/22-08:00:46(CST) Log: Starting fill thread 7
2023/02/22-08:01:05(CST) Log: Completed 0: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 7: Fill thread. Status 1, 30518 pages filled
2023/02/22-08:01:05(CST) Log: Completed 5: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 6: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 3: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 1: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 4: Fill thread. Status 1, 30511 pages filled
2023/02/22-08:01:05(CST) Log: Completed 2: Fill thread. Status 1, 30511 pages filled
1.0.10.log
For whatever reason, initialization uses only 8 threads, which probably isn't appropriate on large systems such as yours. (Your log shows the fill phase running from 08:00:46 to 08:01:05, i.e. roughly 244 GB in about 19 seconds, or about 1.6 GB/s per fill thread.) I guess a better approach would be to scale the number of initialization and teardown threads to the number of cores available, similar to the default for copy threads. I'll keep it in mind as a feature request, or you can send a PR.
If you just want it to be faster and can compile yourself, you can change
the hardcoded thread counts here:
https://github.com/stressapptest/stressapptest/blob/fd4ae17eaad7fde69e1308abbe5af3181ec6ce15/src/sat.cc#L719
On Sun, Apr 16, 2023 at 9:01 PM AlecHsiao @.***> wrote:
1.0.10.log
https://github.com/stressapptest/stressapptest/files/11246074/1.0.10.log
—
Reply to this email directly, view it on GitHub
https://github.com/stressapptest/stressapptest/issues/107#issuecomment-1510660957,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/ADMRIIJTDAMGBYSYGA5RHRDXBS6ANANCNFSM6AAAAAAW5CELRA
.
You are receiving this because you commented.Message ID:
@.***>
Thanks, I think that works. Now I can reduce the delay time, so I think that would be enough for me. Appreciate your help!!!
|
2025-04-01T06:40:30.195254
| 2014-12-05T20:36:35
|
51143142
|
{
"authors": [
"bscott",
"maggit",
"matryer",
"tmsoft"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10994",
"repo": "stretchr/gomniauth",
"url": "https://github.com/stretchr/gomniauth/issues/26"
}
|
gharchive/issue
|
Update Facebook to use v2 of their API as the current v1 will expire in a few months
facebookTokenURL string = "https://graph.facebook.com/v2.2/oauth/access_token"
facebookEndpointProfile string = "https://graph.facebook.com/v2.2/me?fields=email,first_name,last_name,link,about,id,name,picture,location"
@tmsoft Was this resolved?
Doesn't look like it. Is anyone maintaining the codebase?
I'm not sure anyone is assigned to maintain the package. Perhaps we should find some new people who are interested?
@matryer Feel free to add me as one and I'll take a stab at it.
@matryer: I would like to volunteer to be a maintainer if possible.
|
2025-04-01T06:40:30.208407
| 2023-02-13T13:18:56
|
1582342770
|
{
"authors": [
"IXLLEGACYIXL",
"manio143"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10995",
"repo": "stride3d/stride",
"url": "https://github.com/stride3d/stride/pull/1609"
}
|
gharchive/pull-request
|
[WIP] Roslyn based serialization source generator
PR Details
Opening draft PR in order to be able to reference it in documentation, once I make some progress on this I'll update the description.
Description
TODO
Related Issue
TODO
Motivation and Context
TODO
Types of changes
[ ] Docs change / refactoring / dependency upgrade
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Checklist
[ ] My change requires a change to the documentation.
[ ] I have added tests to cover my changes.
[ ] All new and existing tests passed.
what is the current state?
do you need help with it?
I have some experience with source generators but I still struggle with Stride's structure.
But it would be a huge step forward.
what is the current state?
do you need help with it?
I have some experience with source generators but I still struggle with Stride's structure.
But it would be a huge step forward.
I'm currently busy with another project and haven't made much progress on this. You can see the TODOs in my code. The main hassle is getting parity of output on edge cases. The way I was comparing output: compiling the Stride main branch and viewing the generated code with dotPeek, then running the source generator and comparing.
Once the GlobalDataSerializer attributes are correctly emitted, the next part is generating the method in the class they're on with the object IDs later used by the runtime serializer.
Since that may be difficult to handle, you could try refactoring my code a bit so that it feels nicer to read (lack of readability was the main issue of the previous implementation, making it hard to change anything).
|
2025-04-01T06:40:30.226864
| 2023-10-05T14:47:04
|
1928481187
|
{
"authors": [
"Eideren",
"Ethereal77",
"IXLLEGACYIXL",
"Jklawreszuk",
"ly3027929699",
"xen2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10996",
"repo": "stride3d/stride",
"url": "https://github.com/stride3d/stride/pull/1896"
}
|
gharchive/pull-request
|
[Native] - Implement some existing C++ methods in C#
PR Details
Description
This PR focuses on migrating most of the C++ code in Stride.Native to C#.
My change also applies to Stride.Graphics, so with a few gimmicks it should be possible to build this library outside of Windows.
Related Issue
#1394
Types of changes
[x] Docs change / refactoring / dependency upgrade
TODO
[ ] My change requires testing, so It could break existing code.
Thanks for the PR.
Before we merge this in, I would like to make sure we don't lose much performance. I remember it was in C++ for real perf reasons (but at the time mono mobile was very slow so it might not be necessary anymore).
If I remember correctly, the 'FastTextRenderer' is only used for the debug text. Is this correct?
A long time since I've seen this code, but I think the regular spritebatch inherits from BatchBase, and is all C#.
I tested NativeInvoke.xnGraphicsFastTextRendererGenerateVertices with the native and C# methods, based on .NET 8.0
Hey @ly3027929699 thanks a bunch for testing this out for us, can you share the benchmark source as well ?
here is the source
TestAOTUnitTests.zip
this is a .rar file.
change .zip to .rar
here is the source TestAOTUnitTests.zip this is a .rar file. change .zip to .rar
the folder is empty, could you make a repo instead?
Nah, it works fine, make sure to change it to rar
is your result similar to mine? @Eideren
https://github.com/ly3027929699/TestNativeVSCharp
here is repo of the source
@IXLLEGACYIXL
Fixed the benchmark usage of unsafe; results are still significantly better for C# though
TestAOTUnitTests.zip
| Method | num | Mean |
| --- | --- | --- |
| NativeMethod | 100 | 30.00 us |
| CSharpMethod | 100 | 21.51 us |
| NativeMethod | 500 | 160.88 us |
| CSharpMethod | 500 | 107.07 us |
| NativeMethod | 1000 | 299.19 us |
| CSharpMethod | 1000 | 213.93 us |
Here's the result of another benchmark following the suggestions @froce made
| Method | num | Mean |
| --- | --- | --- |
| NativeMethod | 100 | 32.42 us |
| Span | 100 | 19.18 us |
| SpansFor | 100 | 24.21 us |
| Optimized | 100 | 10.54 us |
| Ptr | 100 | 18.92 us |
| NativeMethod | 500 | 163.00 us |
| Span | 500 | 98.99 us |
| SpansFor | 500 | 124.67 us |
| Optimized | 500 | 54.89 us |
| Ptr | 500 | 96.45 us |
| NativeMethod | 1000 | 331.56 us |
| Span | 1000 | 196.77 us |
| SpansFor | 1000 | 250.98 us |
| Optimized | 1000 | 107.84 us |
| Ptr | 1000 | 195.63 us |
Where:
- Ptr is the variant with the fixed VertexPositionNormalTexture* vertexBuffer signature.
- Span uses a span instead of a pointer.
- SpansFor uses a for loop instead of the manually unrolled loop in the source.
I wrote an Optimized version where loop-constants are pre-computed, making it almost two times faster than pointer:
public static unsafe void Optimized(RectangleF constantInfos, RectangleF renderInfos, string textPointer, ref int textLength, Span<VertexPositionNormalTexture> vertexBuffer)
{
float fX = renderInfos.X / renderInfos.Width;
float fY = renderInfos.Y / renderInfos.Height;
float fW = constantInfos.X / renderInfos.Width;
float fH = constantInfos.Y / renderInfos.Height;
RectangleF destination = new(fX, fY, fW, fH);
RectangleF source = new(0.0f, 0.0f, constantInfos.X, constantInfos.Y);
// Copy the array length (since it may change during an iteration)
int textCharCount = textLength;
float scaledDestinationX;
float scaledDestinationY = -(destination.Y * 2f - 1f);
float invertedWidth = 1f / constantInfos.Width;
float invertedHeight = 1f / constantInfos.Height;
Span<(Vector2 Position, Vector2 TextureCoordinate)> baseData = stackalloc (Vector2, Vector2)[4]
{
( new(-destination.Width, +destination.Height), new(0 * source.Width * invertedWidth, 0 * source.Height * invertedHeight) ),
( new(+destination.Width, +destination.Height), new(1 * source.Width * invertedWidth, 0 * source.Height * invertedHeight) ),
( new(-destination.Width, -destination.Height), new(0 * source.Width * invertedWidth, 1 * source.Height * invertedHeight) ),
( new(+destination.Width, -destination.Height), new(1 * source.Width * invertedWidth, 1 * source.Height * invertedHeight) ),
};
int j = 0;
for (int i = 0; i < textCharCount; i++)
{
char currentChar = textPointer[i];
if (currentChar == '\t')
{
// Tabulation
destination.X += 8 * fX;
--textLength;
continue;
}
else if (currentChar >= 10 && currentChar <= 13) // '\n' '\v' '\f' '\r'
{
destination.X = fX;
destination.Y += fH;
scaledDestinationY = -(destination.Y * 2f - 1f);
--textLength;
continue;
}
else if (currentChar < 32 || currentChar > 126)
{
currentChar = ' ';
}
source.X = (currentChar % 32 * constantInfos.X) * invertedWidth;
source.Y = (currentChar / 32 % 4 * constantInfos.Y) * invertedHeight;
scaledDestinationX = destination.X * 2f - 1f;
// 0
vertexBuffer[j].Position.X = scaledDestinationX + baseData[0].Position.X;
vertexBuffer[j].Position.Y = scaledDestinationY + baseData[0].Position.Y;
vertexBuffer[j].TextureCoordinate.X = source.X + baseData[0].TextureCoordinate.X;
vertexBuffer[j].TextureCoordinate.Y = source.Y + baseData[0].TextureCoordinate.Y;
j++;
// 1
vertexBuffer[j].Position.X = scaledDestinationX + baseData[1].Position.X;
vertexBuffer[j].Position.Y = scaledDestinationY + baseData[1].Position.Y;
vertexBuffer[j].TextureCoordinate.X = source.X + baseData[1].TextureCoordinate.X;
vertexBuffer[j].TextureCoordinate.Y = source.Y + baseData[1].TextureCoordinate.Y;
j++;
// 2
vertexBuffer[j].Position.X = scaledDestinationX + baseData[2].Position.X;
vertexBuffer[j].Position.Y = scaledDestinationY + baseData[2].Position.Y;
vertexBuffer[j].TextureCoordinate.X = source.X + baseData[2].TextureCoordinate.X;
vertexBuffer[j].TextureCoordinate.Y = source.Y + baseData[2].TextureCoordinate.Y;
j++;
// 3
vertexBuffer[j].Position.X = scaledDestinationX + baseData[3].Position.X;
vertexBuffer[j].Position.Y = scaledDestinationY + baseData[3].Position.Y;
vertexBuffer[j].TextureCoordinate.X = source.X + baseData[3].TextureCoordinate.X;
vertexBuffer[j].TextureCoordinate.Y = source.Y + baseData[3].TextureCoordinate.Y;
j++;
destination.X += destination.Width;
}
}
Thanks!
|
2025-04-01T06:40:30.241165
| 2019-11-22T22:50:23
|
527451532
|
{
"authors": [
"adreyfus-stripe",
"andybons"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10997",
"repo": "stripe-samples/checkout-subscription-and-add-on",
"url": "https://github.com/stripe-samples/checkout-subscription-and-add-on/pull/5"
}
|
gharchive/pull-request
|
Go server updates
Used FormValue
Renamed method
Handled Stripe error
Ran gofmt
Thanks!
@andybons
Thanks again @andybons! 👏
Yay thanks, @adreyfus-stripe! Hope my lack of context in the suggestions or comments didn't come off poorly. Sometimes I forget to say why I'm suggesting changes.
Not at all @andybons! I appreciate attention to detail especially since I'm brand new to Go and still learning best practices.
|
2025-04-01T06:40:30.307302
| 2019-02-04T18:06:02
|
406451413
|
{
"authors": [
"Fossil01",
"PranayShah",
"SpaceyRezum",
"dsampaolo",
"enzoferey",
"fthuk",
"it-creed",
"jahudka",
"kernio",
"loctrice",
"lucasgsati",
"thandaanda",
"thorsten-stripe",
"vinnyvimto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10998",
"repo": "stripe/stripe-payments-demo",
"url": "https://github.com/stripe/stripe-payments-demo/issues/43"
}
|
gharchive/issue
|
Demo for how to Save Card with PaymentIntents
I can't seem to work out a way to save a card when making payment using the new PaymentIntents.
The old way of making a token on the client-side and passing this to the backend meant I could use the token as a source to make a customer and the card would be saved.
With PaymentIntents however, if I try to create a customer from the source returned from the Stripe.js handleCardPayment API call it throws an error: "The source you provided cannot be attached to the customer. It must be chargeable or pending.".
This has left me at an impasse. The only documentation I could find that covers saving a source with PaymentIntents assumes you already have a customer and source.
Edit: Creating a customer when you create a PaymentIntent is the only way I can see to have cards saveable with the new PaymentIntents flow. Although minor, it's a little annoying because a customer is created whenever a user hits the checkout rather than when they've made at least one payment.
@vinnyvimto thanks for raising this, we're still actively working on the docs.
When using handleCardPayment[0] it does a couple of things under the hood:
It creates a card source (tokenisation of card details)
It confirms the PaymentIntent with the source, at which point the Radar and SCA rules[1] are evaluated
Based on the outcome of the rules:
do Strong Customer Authentication with 3D Secure if required & create the charge
if no authentication is required create the charge
For your scenario, do you always want to create a customer object if there was a successful payment, or do you have a checkbox in your checkout form that decides whether a customer object is created or not?
[0] https://stripe.com/docs/stripe-js/reference#stripe-handle-card-payment
[1] https://stripe.com/guides/strong-customer-authentication
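To make the above concrete, here is a minimal client-side sketch of that call; it assumes Stripe.js v3 is loaded globally and a card Element is already mounted, and the publishable key and billing details are placeholders:

```typescript
// Minimal sketch, not production code. Assumes Stripe.js v3 is loaded via
// <script src="https://js.stripe.com/v3/"></script> and a card Element exists.
const stripe = (window as any).Stripe('pk_test_...'); // placeholder key

async function payWithCard(clientSecret: string, cardElement: unknown) {
  // handleCardPayment tokenises the card, confirms the PaymentIntent, and
  // performs 3D Secure authentication if the Radar/SCA rules require it.
  const { paymentIntent, error } = await stripe.handleCardPayment(clientSecret, cardElement, {
    payment_method_data: {
      billing_details: { name: 'Jenny Rosen' }, // illustrative
    },
  });
  if (error) {
    console.error(error.message); // surface this to the user
  } else if (paymentIntent.status === 'succeeded') {
    console.log('Payment complete');
  }
}
```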
@thorsten-stripe thanks for coming back to me.
For this scenario, there is a checkbox for saving a card that like you say used to decide whether a customer is created after submitting a form to the backend.
For the most seamless user experience with PaymentIntents, in the end, I just decided it was easier to make a call to my backend before calling handleCardPayment. This 'pre-call' does a number of things such as create a "pending" order on our end, update the PaymentIntent with shipping information, attach other metadata and create a customer if required to save a card.
If a customer leaves that page between this call and handleCardPayment they're then given an option to try and pay again. Moving an order from pending to completed happens via webhook and then updates the frontend via WebSocket.
Because this ended up being the route I took, it actually didn't matter too much what got sent through with the handleCardPayment call. At the time of opening this issue, however, I was really looking for a way to add shipping details and other metadata without having to hit my backend first. Shipping data being something that can only be added with new card sources.
I'd be interested to know what other flows people have generally come up with.
@vinnyvimto that sounds like a great solution for your use-case, thanks for sharing! We're thinking about allowing the creation of a customer object from a successful PaymentIntent which would allow you to shift the customer creation request outside of the checkout path. Would this be of interest to you?
YES, it would be of great interest !
+1 it would be great to be able to create a customer object and attach the source used in the paymentIntent
I think migrating to intents when there's application code already written would be much easier this way. By easier I mean less code and workflow would need to be changed.
+1 👍 I'm in the same position that @vinnyvimto initially described when this thread was opened.
We're using Stripe for charity donations; on the donation page, we're taking both one-off and recurring gifts using Stripe Elements. When the user-facing donation form is submitted, the payment details are checked and, if successful, a client-side Stripe token is returned to the donation form and then the whole form (token, donor details and gift details) is passed to the server. The server then creates a customer and, depending on whether the gift is one-off or recurring, either a charge is applied (for a one-off gift) or a new subscription plan is created and added to the customer (for a recurring gift). If everything has happened correctly, the user is then sent to a 'Thank you' page and sent a receipt e-mail confirming their gift.
With PaymentIntents, I can't see that a corresponding workflow is possible without a serious code re-write. Any thoughts or advice would be very welcome.
@fthuk thanks for outlining your integration path. You can achieve this via the manual confirmation flow: https://stripe.com/docs/payments/payment-intents/quickstart#manual-confirmation-flow
Just note that you will need to go back to the client for performing authentication if required. Alternatively you can use the new Checkout, which also supports subscriptions now: https://stripe.com/docs/payments/checkout/server#create-subscriptions
@thorsten-stripe So I need to create a payment method and not let Elements make an intent out of it straight away?
Then add the method to a customer, subscribe to a plan and then somehow create the intent with or without 3DS?
If so, why can't we just attach intents to a customer and then subscribe them?
Same problem for me with Checkout. I can integrate it in a few minutes, but without proper handling of EU VAT it's useless :(
So right now I'm trying to integrate the subscription flow manually.
@kernio after hours of messing around I finally figured it out manually.
Use Elements to create a payment method (not an intent), send that ID to PHP and add it to a customer; also set it as their default payment method, otherwise (weirdly) it won't work:
"invoice_settings" => ["default_payment_method" => $paymentMethod]
then create the subscription.
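For comparison, the same sequence in Node (a sketch only; the calls mirror the PHP above, and the keys, plan and payment method IDs are placeholders):

```typescript
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...'); // placeholder secret key

async function subscribeWithNewCard(paymentMethodId: string, planId: string, email: string) {
  // Attach the payment method at customer creation and make it the default,
  // otherwise the subscription's invoices have nothing to charge.
  const customer = await stripe.customers.create({
    email,
    payment_method: paymentMethodId,
    invoice_settings: { default_payment_method: paymentMethodId },
  });
  return stripe.subscriptions.create({
    customer: customer.id,
    items: [{ plan: planId }], // on newer API versions this would be { price: ... }
    expand: ['latest_invoice.payment_intent'],
  });
}
```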
Quick follow-up that post-payment attachment of a payment method to a customer object is now available via the SCA off-session payment APIs: https://stripe.com/docs/payments/cards/saving-cards#saving-card-after-payment
+1 it would be great to be able to create a customer object and attach the source used in the paymentIntent
Hi, is there an update on this one?
@it-creed yes, see the details in my last comment: https://github.com/stripe/stripe-payments-demo/issues/43#issuecomment-508920663
https://stripe.com/docs/payments/cards/saving-cards#save-payment-method
It is crazy how the first (hence the only) added PaymentMethod is not the default and you have to set it explicitly.
@PranayShah thanks for the feedback. While we understand that this adds complexity, we want our users to be more aware of and explicit about which payment method they are charging. Payments regulation is evolving globally and requires merchants in certain regions to explicitly set up new cards via a non-payment authentication during which they have to present the terms of service. You can find more information on this here: https://stripe.com/en-US/guides/sca-payment-flows
We've published a video that specifically looks at customer management / card-on-file with regard to SCA which you might find helpful if you landed here: https://youtu.be/52oinv6BZ34
I am saving a PaymentMethod using: https://stripe.com/docs/payments/cards/saving-cards#saving-card-after-payment. However, when I query customer data, I don't see cards under sources. How can I show my users which cards are stored with our platform?
@thandaanda our recommendation is to store a list of your customer's payment methods (including the fingerprint, expiration date, billing address etc) in your own database. That way you don't have to ping the API for that data. Should you need to list the customer's payment methods, you can do so via the payment_methods endpoint: https://stripe.com/docs/api/payment_methods/list
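As a sketch of that list call with the Node library (the customer ID and key are placeholders):

```typescript
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...'); // placeholder secret key

async function listSavedCards(customerId: string) {
  const paymentMethods = await stripe.paymentMethods.list({
    customer: customerId,
    type: 'card',
  });
  for (const pm of paymentMethods.data) {
    // Enough detail to render e.g. "visa •••• 4242, exp 12/24" in a UI.
    console.log(pm.card?.brand, pm.card?.last4, pm.card?.exp_month, pm.card?.exp_year);
  }
}
```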
Hi, I might have some issues with saving cards after payment using the PaymentIntents integration. For customers who don't yet have a saved payment method, the flow looks like this:
Customer hits a "Pay" button
UI calls stripe.createPaymentMethod('card', element)
UI calls backend, passing the ID of the payment method obtained in the previous step
Backend calls stripe.paymentIntents.create(), passing in the payment method ID, as well as setup_future_usage: 'off_session', confirmation_method: 'manual', confirm: true
If this fails because the card needs SCA, backend passes the client secret back to frontend, otherwise end
Frontend calls stripe.handleCardAction(clientSecret) and passes the ID of the returned payment intent to backend
Backend calls const intent = stripe.paymentIntents.confirm(intentId)
If payment method isn't saved yet, backend calls stripe.customers.create({ payment_method: intent.payment_method }) and saves customer ID along with payment method ID, otherwise end
Payment using these saved credentials looks like this:
Backend calls stripe.paymentIntents.create(), passing in stored customer ID and payment method ID
Continue from 5) above
The issue is that when charging a saved card, SCA is always triggered - even for the 4000 0025 0000 3155 test card, which should only require SCA the first time around. I think it might be because the first paymentIntents.create() backend call, where I pass in setup_future_usage, fails because it requires SCA and the payment intent is actually really created when I call handleCardAction() - but I can't pass the setup_future_usage to that, so the intent is created without that option.. or is it something else?
@jahudka the setup steps sound correct. For step 1) of "Payment using these saved credentials" are you passing the off_session:true[0] flag to the payment intent creation?
[0] https://stripe.com/docs/payments/cards/charging-saved-cards#create-payment-intent-off-session
@thorsten-stripe omg I wasn't, can't believe I missed that.. It works now! Although the off_session property of Stripe.paymentIntents.IPaymentIntentCreationOptions is missing in @types/stripe version 6.31.23 (which is the latest), so I had to convince TypeScript a little, but that's no biggie.. Thanks a lot!
Okay, now there's another issue - if I test with a card which always requires SCA, during off-session attempts when the card has already been saved, things start to get wonky..
The initial stripe.paymentIntents.create() fails as expected (well, except for the fact that now instead of returning an IPaymentIntent object with the appropriate state, it throws a StripeCardError - but the client secret can be extracted from that too). But then I pass the client secret to the client side and call stripe.handleCardAction() and it fails; my console says that "[t]he PaymentIntent supplied is not in the requires_action state". Inspecting the intent object found on the StripeCardError exception thrown by stripe.paymentIntents.create() indeed shows that the intent is in the requires_source state, but the code of the error is authentication_required and the message says Your card was declined. This transaction requires authentication.. This is in accord with what the docs say: charging a saved card is supposed to fail with an authentication_required code and a requires_payment_method (resp. requires_source, in older API versions) state.
The docs say I should now "follow the on-session payment instructions from step 2". So I pass the client secret extracted from the error object back to my frontend, my frontend calls stripe.handleCardAction().. and fails, telling me in the console that "[t]he PaymentIntent supplied is not in the requires_action state". This, too, is in accord with the docs - namely the fact that handleCardAction() only works with an intent in the requires_action state - but that effectively means that I have to switch to automatic confirmation now, doesn't it, because I'll have to call stripe.handleCardPayment() instead... what am I missing? SCA in off-session payments can't only support automatic confirmation, or does it?
@jahudka off_session is a confirm-time parameter and is not stored on the state of the PaymentIntent. You can extract the PaymentIntent ID from the error object and call confirm on it again with the saved payment method and omitting off_session:true which will default to false (on-session payment) which will then move the PaymentIntent into requires_action state. I'd recommend that you watch our Dev Chat on this topic, especially starting from 26:55[0] where we talk about the recovery flows.
[0] https://youtu.be/52oinv6BZ34?t=1616
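Putting those pieces together, a server-side sketch of the off-session charge with the on-session recovery path (keys and IDs are placeholders; the error shape follows the stripe Node library):

```typescript
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...'); // placeholder secret key

async function chargeSavedCard(customerId: string, paymentMethodId: string, amount: number) {
  try {
    // Attempt the charge while the customer is away.
    return await stripe.paymentIntents.create({
      amount,
      currency: 'eur',
      customer: customerId,
      payment_method: paymentMethodId,
      off_session: true,
      confirm: true,
    });
  } catch (err: any) {
    if (err.code !== 'authentication_required') throw err;
    // The saved card needs SCA. Re-confirm without off_session (defaults to an
    // on-session payment), which moves the intent to requires_action; the
    // frontend then finishes it with stripe.handleCardAction(client_secret).
    const intentId = err.raw.payment_intent.id;
    return stripe.paymentIntents.confirm(intentId, { payment_method: paymentMethodId });
  }
}
```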
Hi everybody!
I have been in the fortunate position of implementing a checkout flow from scratch today. I wanted to achieve something similar to what @jahudka wanted to migrate: users can either make a one-off payment or go for a subscription.
I ended up with this flow, which looks much more lightweight than manual PaymentIntent handling:
Client requests a PaymentIntent to the server as soon as we know the kind of payment we want to go for. For that, I'm sending a payload like this:
{
userData: {
name: "Some name",
email<EMAIL_ADDRESS> },
amount: 123123,
currency: "eur",
isSubscription: true,
}
The userData field contains information that I want to attach to the customer I may create later on if the payment succeeds. Note that if you need to charge existing customers you could add to this payload another key to send the customerId to use in the creation of the PaymentIntent.
Server generates the PaymentIntent. The userData is stored temporarily in the metadata field and if isSubscription === true we need to set setup_future_usage = "off_session". Also, it's required to send payment_method_types = ["card"]. Something like this:
const payload = {
metadata: {
userData: JSON.stringify(userData),
},
amount,
currency,
payment_method_types: ['card'],
};
if (isSubscription) {
payload.setup_future_usage = 'off_session';
}
stripe.paymentIntents.create(payload);
Then it sends the client_secret field of the created PaymentIntent to the client.
The client receives the client_secret (setting it in the state or any other medium depending on your technology of choice), captures the values of the inputs (billing address and card data mainly) and uses stripe.handleCardPayment() passing the payment_method_data when the user presses the "pay" button. In my case looks something like this:
const { paymentIntent, error } = await stripe.handleCardPayment(
clientSecret,
{
payment_method_data: {
billing_details: {
... some details
},
},
}
);
Note that handleCardPayment handles the authentication part of SCA for you, so you don't need to do anything else on the client except providing feedback to the user based on the error key returned.
Finally, in an async way, the server receives the payment_intent.succeeded event via a webhook, creates a Customer using the data saved in metadata and, based on the setup_future_usage value of the PaymentIntent, creates a subscription or not (we could use another metadata field to differentiate). Note that when creating the customer you need to check whether it's a subscription in order to add the payment_method to the payload:
const customer = await stripe.customers.create({
payment_method: intent.payment_method, // only if subscription, otherwise it throws an error
});
For the webhook handling of the events, it was really useful to use the Stripe CLI and this code snippet. The endpointSecret is a value that I was actually not able to find anywhere on the dashboard, but the Stripe CLI displays it when executing stripe listen (it has the structure "whsec_...").
Hope it helps people coming in this SCA storm under deadlines 😄
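The webhook side can be sketched like this with Express (the endpoint secret and key are placeholders; the raw request body is required for signature verification):

```typescript
import Stripe from 'stripe';
import express from 'express';

const stripe = new Stripe('sk_test_...'); // placeholder secret key
const endpointSecret = 'whsec_...';       // printed by `stripe listen`

const app = express();

// Use the raw body here: signature verification fails on parsed JSON.
app.post('/webhook', express.raw({ type: 'application/json' }), (req, res) => {
  const signature = req.headers['stripe-signature'] as string;
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(req.body, signature, endpointSecret);
  } catch (err) {
    return res.status(400).send('Invalid signature');
  }
  if (event.type === 'payment_intent.succeeded') {
    const intent = event.data.object as Stripe.PaymentIntent;
    // Create the Customer (and subscription, if flagged) from intent.metadata here.
  }
  res.json({ received: true });
});
```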
Hello all, I'm working with Stripe and cloning cards. Everything works for the first step, but when I want to select a stored card I'm getting "The payment method you provided has already been attached to a customer".
This is what I'm doing:
$getPaymentMethod = \Stripe\PaymentMethod::retrieve(
$paymentMethodId
);
$payment_method = \Stripe\PaymentMethod::retrieve(
$getPaymentMethod->id
);
$getPaymentMethod->attach([
'customer' => $this->customer->getStripeId(),
]);
@lucasgsati please reach out to support using the form at https://support.stripe.com/ (preferred) or via email to<EMAIL_ADDRESS>Closing this out as we now have dedicated demos for this at https://github.com/stripe-samples?q=sav
|
2025-04-01T06:40:30.329753
| 2019-10-07T06:50:21
|
503263570
|
{
"authors": [
"carlspring",
"raksit31667"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10999",
"repo": "strongbox/strongbox",
"url": "https://github.com/strongbox/strongbox/pull/1506"
}
|
gharchive/pull-request
|
Issue 1500: Use the built-in formatting in logging methods
Pull Request Description
This pull request closes #1500
Acceptance Test
[X] Building the code with mvn clean install -Dintegration.tests still works.
[X] Running mvn spring-boot:run in the strongbox-web-core still starts up the application correctly.
[X] Building the code and running the strongbox-distribution from a zip or tar.gz still works.
[X] The tests in the strongbox-web-integration-tests still run properly.
Questions
Does this pull request break backward compatibility?
[ ] Yes
[X] No
Does this pull request require other pull requests to be merged first?
[ ] Yes, please see #...
[X] No
Does this require an update of the documentation?
[ ] Yes, please see strongbox/strongbox-docs#{PR_NUMBER}
[X] No
Hi @raksit31667 !
Thank you for your contribution!
Would you mind signing the ICLA, as described in the Contributing page?
Also, please, feel free to join our chat channel, if you'd like to learn more about the project and/or like to find out what else you could help with.
Kind regards,
Martin
@raksit31667 ,
Thanks for signing the ICLA!
@ptirador ,
Would you like to review this? :)
|
2025-04-01T06:40:30.397192
| 2021-07-15T21:46:09
|
945775296
|
{
"authors": [
"Sluggyy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11000",
"repo": "stroupbslayen/discord-pretty-help",
"url": "https://github.com/stroupbslayen/discord-pretty-help/issues/49"
}
|
gharchive/issue
|
Command Groups
Hey, I've tested your help command and I had one problem: when I added a command group, the bot just showed the typing status and after that nothing happened. After I removed the command group everything worked again. I checked the cog for any mistake in it, but found none. So let me know whether it's my fault or the help command's.
Oh, I found it: the error was a mistake on my side.
|
2025-04-01T06:40:30.412651
| 2018-08-13T17:09:44
|
350115908
|
{
"authors": [
"kevinrobinson"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11001",
"repo": "studentinsights/studentinsights",
"url": "https://github.com/studentinsights/studentinsights/pull/1971"
}
|
gharchive/pull-request
|
Migrate global datepicker_options into Datepicker component, limit jQuery UI imports
Who is this PR for?
developers
What problem does this PR fix?
It's a step towards https://github.com/studentinsights/studentinsights/issues/1758. This came up as I was looking to see what it would take to upgrade to jQuery 3.x, since Firefox raises CSP violations on load for jQuery 1.12.x.
What does this PR do?
Removes two global bits that <Datepicker /> relied on - window.datepicker_options and an asset path sent down in application.html.erb. Also scopes down the sprockets imports for jQuery UI.
selfie
|
2025-04-01T06:40:30.416123
| 2018-11-05T17:37:28
|
377507320
|
{
"authors": [
"kevinrobinson"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11002",
"repo": "studentinsights/studentinsights",
"url": "https://github.com/studentinsights/studentinsights/pull/2234"
}
|
gharchive/pull-request
|
Profile: Allow showing MCAS in summary if no STAR, remove SGP graphs if no data
Who is this PR for?
Bedford educators
What problem does this PR fix?
In Bedford, they don't use STAR assessments, but the profile still assumes these are meaningful.
Separately, there's no SGP data for students in younger grades (or other students taking MCAS for the first time). Yet these charts still appear.
What does this PR do?
Adds a PerDistrict.js function that enables showing MCAS in place of STAR for the summary tabs on the student profile.
Updates the details sections to hide the SGP charts if there is no data.
Screenshot (if adding a client-side feature)
Checklists
Which features or pages does this PR touch?
[x] Student Profile
Does this PR use tests to help verify we can deploy these changes quickly and confidently?
[x] Included specs for changes
[x] Manual testing made more sense here
selfie
|
2025-04-01T06:40:30.417084
| 2017-01-20T22:24:18
|
202257717
|
{
"authors": [
"alexsoble",
"kevinrobinson"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11003",
"repo": "studentinsights/studentinsights",
"url": "https://github.com/studentinsights/studentinsights/pull/827"
}
|
gharchive/pull-request
|
Remove eager includes from schools_controller
These aren't needed on the precomputed path, so are probably pulling in more data than is needed.
Tests pass locally and kevin says 🚢
|
2025-04-01T06:40:30.450636
| 2014-07-17T18:00:07
|
38108812
|
{
"authors": [
"hootan-nikbakht",
"nozpheratu",
"steakchaser",
"stve"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11004",
"repo": "stve/capistrano-local-precompile",
"url": "https://github.com/stve/capistrano-local-precompile/issues/8"
}
|
gharchive/issue
|
manifest.json
This gem definitely solved my problem of precompiling my assets with the same fingerprinting for deployment to multiple app server instances. However, I'm seeing a side effect when trying to redeploy, with the following exception:
`parse': (<unknown>): mapping values are not allowed in this context at line 1 column 13 (Psych::SyntaxError)
from /Users/hootan/.rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/psych.rb:205:in `parse_stream'
from /Users/hootan/.rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/psych.rb:153:in `parse'
from /Users/hootan/.rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/psych.rb:129:in `load'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/recipes/deploy/assets.rb:26:in `parse_manifest'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:191:in `method_missing'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:191:in `method_missing'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/recipes/deploy/assets.rb:93:in `block (3 levels) in load'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `instance_eval'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `invoke_task_directly'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:25:in `invoke_task_directly_with_callbacks'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:89:in `execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:101:in `find_and_execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/callback.rb:38:in `call'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `block in trigger'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `each'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `trigger'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:23:in `invoke_task_directly_with_callbacks'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:89:in `execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:101:in `find_and_execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/callback.rb:38:in `call'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `block in trigger'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `each'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:141:in `trigger'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:27:in `invoke_task_directly_with_callbacks'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:89:in `execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:191:in `method_missing'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:110:in `block in define_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/recipes/deploy.rb:234:in `block (3 levels) in load'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:56:in `transaction'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:191:in `method_missing'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/recipes/deploy.rb:233:in `block (2 levels) in load'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `instance_eval'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `invoke_task_directly'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:25:in `invoke_task_directly_with_callbacks'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:89:in `execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:191:in `method_missing'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/namespaces.rb:110:in `block in define_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/recipes/deploy.rb:201:in `block (2 levels) in load'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `instance_eval'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:138:in `invoke_task_directly'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/callbacks.rb:25:in `invoke_task_directly_with_callbacks'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:89:in `execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/configuration/execution.rb:101:in `find_and_execute_task'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/execute.rb:46:in `block in execute_requested_actions'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/execute.rb:45:in `each'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/execute.rb:45:in `execute_requested_actions'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/help.rb:19:in `execute_requested_actions_with_help'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/execute.rb:34:in `execute!'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/lib/capistrano/cli/execute.rb:14:in `execute'
from /Users/hootan/.rvm/gems/ruby-2.0.0-p451@global/gems/capistrano-2.15.5/bin/cap:4:in `<top (required)>'
from /Users/hootan/.rvm/rubies/ruby-2.0.0-p451/bin/cap:23:in `load'
from /Users/hootan/.rvm/rubies/ruby-2.0.0-p451/bin/cap:23:in `<main>'
I removed manifest*.json from the app/shared/assets folder and the redeploy works fine, but it's not practical to log in to every single instance and remove it. Is this a known problem? Please help.
This error occurs in :update_asset_mtimes if there is more than 1 manifest*.json in the shared_path/shared_assets_prefix. The above fix should remove the manifest from this location instead.
I think we should change to:
desc "remove manifest file from remote server"
task :remove_manifest, roles: :web do
run "rm -f #{shared_path}/#{shared_assets_prefix}/manifest*.json"
end
Thanks @steakchaser, would you mind submitting a pull request?
+1
@stve Have you had a chance to take a look at the PR from @steakchaser?
Nevermind. It looks like @steakchaser did submit that PR after all. The gem just hasn't been updated on rubygems.
|
2025-04-01T06:40:30.482199
| 2024-12-10T20:59:10
|
2731179994
|
{
"authors": [
"Hussainuse",
"biliman001"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11006",
"repo": "styled-components/styled-components",
"url": "https://github.com/styled-components/styled-components/issues/4465"
}
|
gharchive/issue
|
[EXCLUSIVE CLIP] Five Girls Five Rocket Viral Video
In the fast-paced world of social media, it doesn’t take long for a unique or intriguing clip to capture the imagination of millions. The latest phenomenon? The “Five Girls Five Rocket Viral Video”, a dazzling piece of content that has become an instant sensation across platforms. This video, which showcases a group of five girls performing a synchronized and visually stunning stunt involving five rockets, has sparked a frenzy of shares, discussions, and imitations worldwide.
🔴 ➤► WATCH THE VIDEO HERE✅👉 https://shorturl.at/H0aYQ (VideoLink)
What is the “Rocket Viral Video”?
The Rocket Viral Video features five young women showcasing a perfectly choreographed sequence involving rockets. While the exact origins of the video remain unclear, its captivating visuals and flawless execution have made it impossible for viewers to look away. Each girl holds a rocket, launching them in precise coordination, creating a jaw-dropping spectacle that combines science, artistry, and entertainment.
The synergy of the performers, their confidence, and the dramatic backdrop have made the 5 girls 5 rocket breakout video an unforgettable watch. From Instagram reels to TikTok duets, the video has become a cultural touchstone for creativity and teamwork.
The Rise of the “5 Rocket 5 Girl Viral Video Breakout”
The viral journey of the 5 rocket 5 girl video breakout is a textbook case of how powerful social media can be. The video initially surfaced on platforms like YouTube and TikTok, where its unique concept and mesmerizing execution drew significant attention. Within hours, it began trending under hashtags like #RocketGirls, #5Rockets5Girls, and #ViralVideoBreakout.
As more users began sharing the video, it transcended language and geographical barriers. Memes, reaction videos, and recreations quickly followed, further amplifying its reach. Celebrities and influencers also jumped on the bandwagon, sharing their admiration and even attempting to replicate the stunt.
Why Did the “5 Girl 5 Rocket Video Breakout” Go Viral?
Several factors contributed to the five girls five rocket viral video becoming such a sensation:
Unconventional Concept: Combining five rockets and five performers in a synchronized act was something fresh and unexpected.
Visual Appeal: The video’s stunning visuals, paired with an engaging soundtrack, made it ideal for social sharing.
Relatability and Aspiration: The performers’ teamwork and determination resonated with audiences, inspiring them to recreate the stunt.
Global Accessibility: Short, impactful, and easy to share, the video was perfectly suited for platforms like TikTok, Instagram, and Twitter.
The Impact of the “Five Girls Five Rocket Viral Video”
Beyond its entertainment value, the video has sparked broader discussions about the power of collaboration and innovation in social media content. Schools, dance troupes, and content creators have used it as a template to inspire their own creations.
🔴 ➤► WATCH THE VIDEO HERE✅👉 https://shorturl.at/H0aYQ (VideoLink)
Moreover, the clip has demonstrated the potential for short-form content to captivate global audiences in an era of dwindling attention spans. The 5 girl 5 rocket video breakout serves as a reminder that creativity and originality remain key drivers of online virality.
Conclusion
The five girls five rocket viral video is more than just a trending clip; it’s a celebration of creativity, teamwork, and the boundless potential of digital platforms. As it continues to dominate timelines and inspire countless recreations, it cements its place as one of the most talked-about viral phenomena of the year.
|
2025-04-01T06:40:30.487882
| 2018-04-26T01:59:14
|
317850704
|
{
"authors": [
"AaronBuxbaum",
"adiun",
"georgefeast",
"jxnblk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11007",
"repo": "styled-system/styled-system",
"url": "https://github.com/styled-system/styled-system/issues/172"
}
|
gharchive/issue
|
Perf with responsive styling and pure components
I'm a big fan of styled-system so first, thanks!
I was curious about the use of the array literal in responsive styling: <RoundedBox borderRadius={[1,2]} />. I created a little sandbox to show this: a responsive-styled pure component will re-render even though incoming props from the parent component do not change: https://codesandbox.io/s/m7p59mv3xx
The example is kind of contrived; I want the RoundedBox component to be pure, so it's pretty obvious this will fail when you pass a prop with an array literal.
Performance should be measured before optimizing; however, given that object/array literals in render are an infamous way of accidentally re-rendering in React, do you think there should be a note in the docs mentioning that using responsive styling through array literals will break pure components? A simple suggestion: if you're using a pure component, break the array literal out into a const.
Sorry for the delay, I haven't looked into the performance implications at this level, but I'd guess that it's negligible for most use-cases. Feel free to experiment on a branch with the benchmarks and let me know if you find anything interesting. I think a potential future solution for some of this might be in using babel plugins, but I'm going to close this issue out for now.
For what it's worth, we found a pretty significant performance impact with inline array literals, in that they effectively defeat pure components. With a relatively large app, it becomes important to use pure to keep performance reasonable, and any prop literal is inherently a reference mismatch -- having a note in there might be valuable to others with larger-scale projects.
Also interested in which Babel plugins you're referring to, as we'd love to utilize them to improve our performance.
@AaronBuxbaum How did you identify this performance issue? Am trying to figure out a sporadic bug at the moment that I think might be related to performance but don't know how to identify.. any leads much appreciated
If you're seeing a lot of renders in your react dev tools, something is updating too much, and this is one possibility. I think your first step should be to figure out if your renders are expensive, or if you're re-rendering too much. Once you're there, you can dig into the problem -- multiple renders implies problems with things like this (which means that you can resolve a lot by hoisting object/function literals and using pure), but expensive few renders implies very different problems.
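To make the hoisting fix concrete, here is a small sketch (names are illustrative; it assumes styled-components with styled-system's borderRadius function):

```tsx
import React from 'react';
import styled from 'styled-components';
import { borderRadius } from 'styled-system';

// An illustrative styled-system component, similar to the RoundedBox above.
const Box = styled.div`
  ${borderRadius}
`;

// Hoisted once: the prop keeps a stable reference across parent renders,
// so React.memo's shallow prop comparison can bail out.
const BORDER_RADII = [1, 2];

const RoundedBox = React.memo(({ children }: { children?: React.ReactNode }) => (
  <Box borderRadius={BORDER_RADII}>{children}</Box>
));

// By contrast, <Box borderRadius={[1, 2]} /> allocates a fresh array on every
// render, so the shallow comparison always fails and memoization is defeated.
```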
|
2025-04-01T06:40:30.547597
| 2024-10-02T10:35:28
|
2561227063
|
{
"authors": [
"funkyhippo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11011",
"repo": "subject-f/guya-status-page",
"url": "https://github.com/subject-f/guya-status-page/issues/181"
}
|
gharchive/issue
|
⚠️ Guya.moe Proxied has degraded performance
In 10e7b04, Guya.moe Proxied (https://ice.guya.moe/) experienced degraded performance:
HTTP code: 200
Response time: 12904 ms
Resolved: Guya.moe Proxied performance has improved in 173a2e9 after 2 hours, 27 minutes.
|
2025-04-01T06:40:30.619654
| 2022-06-22T07:35:24
|
1279699910
|
{
"authors": [
"mkolesnik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11012",
"repo": "submariner-io/releases",
"url": "https://github.com/submariner-io/releases/pull/407"
}
|
gharchive/pull-request
|
Fix image tagging
Images should be cross-tagged without the v prefix; make sure that's the case.
Signed-off-by: Mike Kolesnik<EMAIL_ADDRESS>
Backported as #409
|
2025-04-01T06:40:30.625557
| 2020-10-07T12:53:56
|
716503753
|
{
"authors": [
"mangelajo",
"nyechiel",
"sridhargaddam"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11013",
"repo": "submariner-io/submariner-website",
"url": "https://github.com/submariner-io/submariner-website/issues/304"
}
|
gharchive/issue
|
Document everything OVN
The OVN support will bring some changes that need to be documented from the architecture point of view.
See: https://github.com/submariner-io/enhancements/pull/11
Related-Issue: https://github.com/submariner-io/submariner/issues/778
I was discussing this with @sridhargaddam, and wanted to summarize our thinking here:
At least from a user perspective, OVN should "just work". We do need to:
Add OVN to the list of CNI support matrix: https://submariner.io/getting_started/#support-matrix
Mention that for OVN-based clusters there is no need to open UDP port 4800 (which is normally used for vxlan-submariner)
Document known issues/limitations (for e.g, OVN is not supported with Globalnet): https://submariner.io/operations/known_issues/
We also need to update some of the architecture pages (and diagrams?), in particular the Route Agent: https://submariner.io/getting_started/architecture/route-agent/
Document known issues/limitations (for e.g, OVN is not supported with Globalnet): https://submariner.io/operations/known_issues/
PR raised:
https://github.com/submariner-io/submariner-website/pull/402
* Add OVN to the CNI support matrix: https://submariner.io/getting_started/#support-matrix
PR: https://github.com/submariner-io/submariner-website/pull/403
|
2025-04-01T06:40:30.636565
| 2022-07-15T12:48:24
|
1305998326
|
{
"authors": [
"XY-Wang",
"dzhelezov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11014",
"repo": "subsquid/squid-sdk",
"url": "https://github.com/subsquid/squid-sdk/issues/76"
}
|
gharchive/issue
|
Decode extrinsic error at ingestion
Hi guys,
At the moment, an extrinsic module error comes in the form:
{
"__kind": "Module",
"value": {
"error": "0x06000000",
"index": 9
}
}
Unfortunately, this is not very helpful when we try to show it in the UI of an explorer. It would be nice to have it in the decoded form like what @polkadotjs/api api.registry.findMetaError returns:
{
args: [],
docs: [ 'Contract trapped during execution.' ],
fields: Type(0) [ registry: TypeRegistry {}, initialU8aLength: 1 ],
index: 11,
method: 'ContractTrapped',
name: 'ContractTrapped',
section: 'contracts'
}
Is this something you guys plan on supporting?
Hey! It's likely not feasible to support on the archive explorer side, as it simply presents the raw data saved by the ingester. I recommend setting up a separate squid for these purposes and then decoding the error using:
(ctx._chain as any).scaleCodec.decodeBinary(...)
Take a look at this squid to get an idea of how to handle events and extrinsic at the squid side
https://github.com/subsquid/talisman-squidtest/tree/firesquid-migration
Hey, thanks for the explanation!
There's no problem subscribing directly to succeeded and failed extrinsics in the squid, but the error returned is still not decoded. Is it possible to expose a method like ctx._chain.decodeError(), similar to ctx._chain.decodeCall(), that could be used in the squid processor to get the decoded error? If I understood this article correctly: https://wiki.polkadot.network/docs/maintain-errors#polkascan-and-subscan, the index 9 in my example above indicates the index of the pallet as described in the chain metadata, and the error 0x06000000 indicates the index of the error within that pallet's error enum. Since the squid archive explorer already has the chain metadata, it should be relatively easy to resolve the error?
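For reference, that manual resolution can be sketched with @polkadot/api (the endpoint is a placeholder; the pallet index 9 and error byte 0x06 come from the example above):

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { BN } from '@polkadot/util';

async function describeModuleError(endpoint: string, palletIndex: number, errorHex: string) {
  const api = await ApiPromise.create({ provider: new WsProvider(endpoint) });
  // The first byte of the error blob is the index into the pallet's error enum:
  // '0x06000000' -> 6.
  const errorIndex = parseInt(errorHex.slice(2, 4), 16);
  const meta = api.registry.findMetaError({
    index: new BN(palletIndex),
    error: new BN(errorIndex),
  });
  console.log(`${meta.section}.${meta.method}:`, meta.docs.join(' '));
  await api.disconnect();
}

// describeModuleError('wss://rpc.polkadot.io', 9, '0x06000000');
```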
We'll look into supporting this on our end so I'll close the issue.
|