id | text | source | created | added | metadata
|---|---|---|---|---|---|
106238823 | REST sensor
This is a basic implementation of a REST sensor.
Oops, there seems to be a bug... only the first entry in the configuration.yaml is working.
This is the doc for this sensor.
These are points to implement (collected here and in Gitter):
allow POST request
parse nested responses (like http://docs.zwayhomeautomation.apiary.io/)
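The "parse nested responses" item could, for example, work via a dotted value path. A minimal sketch (hypothetical — not the actual Home Assistant implementation; the function name and payload shape are illustrative only):

```javascript
// Hypothetical sketch: walk a dotted path into a nested JSON response.
// Neither the function name nor the payload shape comes from the sensor code.
function extractByPath(response, path) {
  return path.split('.').reduce(
    (node, key) => (node == null ? undefined : node[key]),
    response
  );
}

// A nested payload, loosely in the shape a Z-Way-style API might return:
const payload = { data: { metrics: { level: 21.5, scaleTitle: 'C' } } };
console.log(extractByPath(payload, 'data.metrics.level')); // 21.5
console.log(extractByPath(payload, 'data.missing.level')); // undefined
```

A missing intermediate key yields `undefined` rather than throwing, which matters for flaky REST endpoints.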
| gharchive/pull-request | 2015-09-13T20:47:21 | 2025-04-01T04:33:36.016103 | {
"authors": [
"fabaff"
],
"repo": "balloob/home-assistant",
"url": "https://github.com/balloob/home-assistant/pull/360",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1436041916 | Key navigation on forms
[ ] All form controls should be accessible with the keyboard navigation / tab
[ ] Add card as a focusable component / use button element
https://github.com/baloise/design-system/issues/699
| gharchive/issue | 2022-11-04T13:02:26 | 2025-04-01T04:33:36.017438 | {
"authors": [
"hirsch88"
],
"repo": "baloise-incubator/design-system",
"url": "https://github.com/baloise-incubator/design-system/issues/857",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
796629654 | Allow password input via stdin or env variable
...so that the password is not printed in the CI log.
:tada: This issue has been resolved in version 4.6.0 :tada:
The release is available on:
v4.6.0
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/issue | 2021-01-29T06:48:18 | 2025-04-01T04:33:36.019745 | {
"authors": [
"christiansiegel"
],
"repo": "baloise/gitopscli",
"url": "https://github.com/baloise/gitopscli/issues/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1112959504 | Portfolio Positions Fix
Fixes #517
[ ] toggling any token list on/off has some delay
flow:
open token lists management
click to enable
expected: immediately see the toggle change
actual: it takes 4-5 seconds until something happens.
| gharchive/pull-request | 2022-01-24T17:50:31 | 2025-04-01T04:33:36.049995 | {
"authors": [
"RanCohenn",
"ashachaf"
],
"repo": "bancorprotocol/webapp-v2",
"url": "https://github.com/bancorprotocol/webapp-v2/pull/521",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1879089146 | Gutenberg: contribute to gutenberg.run
Summary
Complete the current progress of gutenberg.run comment under Gutenberg PR
Should understand the .run mechanism: when it receives a PR number, how does it fetch the built packages and install them on the VM? https://github.com/bangank36/spriral-learning/issues/27#issuecomment-1706688010
The prior PR has raised a question: when is the correct time to post the build message?
Inside pull_request_automation, along with other PR actions: message is posted immediately, but the VM is not ready to run just yet
After Upload artifact, then how to get the artifacts URL, testing expiration
In fact the actual link is stable; we just need to follow the redirect on the request. The real issue is that the artifact id is not easy to grab.
It would be really useful to have access to artifacts before run completion. In my case I need to retrieve artifact download URLs and publish them in a custom issue comment. And it must be done in one workflow run; I cannot use 2 separate workflows for generating artifacts and publishing the comment.
Discussion on artifacts id retrieval
Getting artifact URL in steps after upload-artifact
API reference
workflow artifact pull request comment
workflowArtifactsPullRequestCommentAction -> getWorkflowArtifactsComment -> getWorkflowArtifactDetails
Absolute URL of the artifacts that match the URL on Checks screen
Calypso reference
PR for comment generate github.issues.createComment
Live URL generate
Cloudflare reference
https://github.com/cloudflare/pages-action
push action
Edit the comment in createJobSummary
Run after a certain time, not immediately? Since the staging URL will take some time to create.
Reference
Prior art: #30149
Build Gutenberg Zip action: #26746
Relevant question: #19
How to download zip file: #28881, we should include the link to artifacts
https://github.com/bangank36/spriral-learning/assets/10071857/91eb51a7-31df-4ee2-904c-f81fa81e75e6
Update
Running test PRs against the new fork https://github.com/BeyondspaceStudio/gutenberg
How to merge the PR to a fork for testing?
Update
How gutenberg.run initializes the site
On run tasks
When the site is set up, Gutenberg is installed by downloading the ARTIFACT_DOWNLOAD_URL. Is it the generated zip from the Gutenberg GitHub Actions?
action name: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce on this action.yml file
Notes
Current pull_request_automation has 2 actions based on PR
Experiment
Include artifacts-url-comments action after upload-artifacts action
Running into issue below, reference can be found in code
Seems that the action you are using is encountering this error because it is not running within the context of a workflow. More on this
API Reference
Some initial steps have been replaced by listening to workflow_run and getting the value via payload, but some alternate uses could be useful
Get workflow by file name
payload.workflow_run.id
debug( 'workflow_run: run detail' );
// Get the latest workflow_run for the given workflow
const res = await octokit.rest.actions.listWorkflowRuns({
  owner,
  repo,
  workflow_id: "build-plugin-zip.yml",
  per_page: 1,
});
// Parse the response and extract the download URL or other information
const workflow = res.data;
debug( JSON.stringify( workflow ) );
Get artifact by id
debug( 'artifacts: detail data.' );
// Retrieve artifacts for a specific workflow run
const runId = 6035878166;
const getArtifacts = async (owner, repo, runId) => {
try {
const response = await octokit.rest.actions.listWorkflowRunArtifacts({
owner,
repo,
run_id: runId,
});
// Parse the response and extract the download URL or other information
const artifacts = response.data.artifacts;
// ... process the artifacts as needed
return artifacts;
} catch (error) {
console.error("Error retrieving artifacts:", error);
throw error;
}
};
const artifacts = await getArtifacts( "WordPress", repo, runId );
Staled
| gharchive/issue | 2023-09-03T13:03:17 | 2025-04-01T04:33:36.075157 | {
"authors": [
"bangank36"
],
"repo": "bangank36/spriral-learning",
"url": "https://github.com/bangank36/spriral-learning/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1321923935 | LICENSE for the code?
Hello!
What is your license for the code? I searched the repo, the code does not appear to have a license.
Most people use the MIT license. See https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/licensing-a-repository for guidance in choosing your license, if you want to add a license (which helps devs in actually using your code).
Thank you!
Here's an easy way to add a license to your repo:
https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/adding-a-license-to-a-repository
Hey @silentyak thank you for raising the issue. Added the appropriate license.
| gharchive/issue | 2022-07-29T08:27:54 | 2025-04-01T04:33:36.093147 | {
"authors": [
"bansalsurya",
"silentyak"
],
"repo": "bansalsurya/whiteboard-app",
"url": "https://github.com/bansalsurya/whiteboard-app/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
783773868 | Operator deadlocks during rolling upgrade if another pod is killed
Describe the bug
I have seen this happen a few times with a relatively large cluster, say 24 nodes: if a rolling update is started on that cluster and any other pod down the line is also restarted, that pod never gets replaced and the rolling update deadlocks. I can make it progress forward if I start killing the pod it's waiting on; it's like the reconcile loop runs, replaces the missing pods, which allows the updated pod to come online, and then it progresses forward just fine.
I believe the main problem is that it waits for all topics to not be under-replicated, as here, which I agree with. Likely we need to keep the reconciliation loop going, maybe starting the pods with the existing config while the rolling upgrade happens. I'm not entirely sure of the best possible resolution here.
One possible solution I have thought of is maybe a timeout waiting for it to be in sync, allowing the loop to replace the missing pods and try the upgrade again.
Steps to reproduce the issue:
Create a cluster; I think it would work with any size cluster, but it's easier to do with larger clusters. Trigger a rolling upgrade (change a config property or the version of Kafka), and once the CR is in that state, delete one of the Kafka brokers that's further down the path; for example, if you have 5, delete broker 4. At that point it should be stuck on whatever pod it's on, waiting for all replicas to be in sync. Delete whatever broker it's on (a bit of guesswork) and watch it replace broker 4 as well as the one it's currently upgrading.
Expected behavior
That rolling upgrade doesn't block downstream pod restarts and can proceed
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem, like release number, version, branch, etc.
@matt-christiansen-exa I think rollingUpgradeConfig.failureThreshold can help in your case https://github.com/banzaicloud/kafka-operator/blob/1f68f0a0c05dc53cce9d140e427ea67a937c9dfc/config/base/crds/kafka.banzaicloud.io_kafkaclusters.yaml#L5710-L5718
It controls the max number of failures the rolling restart tolerates. If you increase that, the operator should be able to skip a failing broker and continue the procedure.
Closing this since a fix just got merged. Please reopen it in case of reoccurrence.
| gharchive/issue | 2021-01-11T23:13:11 | 2025-04-01T04:33:36.099979 | {
"authors": [
"amuraru",
"baluchicken",
"matt-christiansen-exa"
],
"repo": "banzaicloud/koperator",
"url": "https://github.com/banzaicloud/koperator/issues/533",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1350992322 | CruiseControlOperation controller
Q
A
Bug fix?
no
New feature?
yes
API breaks?
no
Deprecations?
no
Related tickets
mentioned in #829
License
Apache 2.0
What's in this PR?
New controller with finalizer for CruiseControlOperation custom resources.
The reconciler handles Cruise Control tasks based on the CruiseControlOperation properties.
When a new CruiseControlOperation CR is added and there is no running task on the CC side, the reconciler executes the currentTask in the status and updates it based on the API HTTP response
When the CruiseControlOperation errorPolicy is retry, there is no pause="true" annotation on the resource, and the CC user task finished with completedWithError, the reconciler retries the task every 30 seconds and updates the CruiseControlOperation status (a failed task should be restarted when currentTime >= currentTask.Started + 30s)
Finalizer stops the execution of the running task by calling the POST /kafkacruisecontrol/stop_proposal_execution endpoint
Why?
With this controller, the failed Cruise Control user tasks can be handled better.
In the future users can execute Cruise Control tasks declaratively.
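The retry rule described above can be sketched as a predicate (illustrative only — the operator itself is written in Go, and these field names are simplifications of the CR status, not the real API):

```javascript
// Illustrative sketch of the retry rule: a task that completed with an error
// is retried once 30 seconds have passed since it started, unless the
// errorPolicy is not "retry" or retries are paused via annotation.
const RETRY_INTERVAL_MS = 30 * 1000;

function shouldRetry(op, nowMs) {
  return (
    op.errorPolicy === 'retry' &&
    op.annotations['pause'] !== 'true' &&
    op.currentTask.state === 'completedWithError' &&
    nowMs >= op.currentTask.startedMs + RETRY_INTERVAL_MS
  );
}

const op = {
  errorPolicy: 'retry',
  annotations: {},
  currentTask: { state: 'completedWithError', startedMs: 0 },
};
console.log(shouldRetry(op, 31000)); // true
console.log(shouldRetry(op, 10000)); // false
```

The pause annotation check is what lets an operator stop the every-30-seconds retry loop without deleting the CR.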
Checklist
[x] Implementation tested
[x] Error handling code meets the guideline
[x] Logging code meets the guideline
[x] User guide and development docs updated (if needed)
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.
:white_check_mark: bartam1
:x: pregnor
You have signed the CLA already but the status is still pending? Let us recheck it.
Thank you for the detailed reviews! I have re-tested everything and it works well. I think Koperator is much better with this change.
Thank you all!
| gharchive/pull-request | 2022-08-25T14:28:17 | 2025-04-01T04:33:36.108651 | {
"authors": [
"CLAassistant",
"bartam1"
],
"repo": "banzaicloud/koperator",
"url": "https://github.com/banzaicloud/koperator/pull/854",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
378539366 | ActiveRecordTest Compile error
3.0.6
ActiveRecordTest compile error
The lines below:
line 54:
Assert.assertTrue(student.update(new QueryWrapper<>().gt("id",10)));
line 130:
Assert.assertTrue(student.delete(new QueryWrapper<>().gt("id",10)));
Error message:
Thanks
| gharchive/issue | 2018-11-08T01:23:39 | 2025-04-01T04:33:36.111926 | {
"authors": [
"MacleZhou"
],
"repo": "baomidou/mybatis-plus",
"url": "https://github.com/baomidou/mybatis-plus/issues/616",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
721835802 | Dual licensed?
Regarding the license for this project, the file headers say LGPL-2.1-or-later, while the LICENSE file says it's the MIT license. The only way I can reconcile this apparent contradiction is that the project can be licensed under either LGPL or MIT, at the user's discretion (dual licensing). Is this the case? Or which license is the canonical one?
Hi Karl,
when preparing the files for packaging, I shifted the license to MIT, as
per the Coq community's preference. Something must have gone wrong along
the way! I will fix this up in the next few days. Thanks for pointing this
out.
Yours,
Barry
Hi Karl,
I've changed all licenses to MIT now, and created a new release. Hopefully
that is all fixed now. Thanks again; do let me know if you find anything
else.
Yours,
Barry
Looks good to me with only MIT license now, so closing this issue.
| gharchive/issue | 2020-10-14T22:30:25 | 2025-04-01T04:33:36.205039 | {
"authors": [
"barry-jay-personal",
"palmskog"
],
"repo": "barry-jay-personal/tree-calculus",
"url": "https://github.com/barry-jay-personal/tree-calculus/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1409945056 | 2.0 Preview.3663: message: not updating in --minimode
When using --minimode in 2.0 Preview.3663, message isn't updated on-the-fly as expected.
https://user-images.githubusercontent.com/24623109/195954687-7063ed32-0784-4bbd-abe5-fc0540eb7f13.mp4
Yup, known issue with the command file processing. I suspect this is a bug with Xcode 14.0.1 that has incorrect thread handling. I need to test a version compiled against Xcode 13.
Appears to be resolved in 2.0 Preview.3706; thanks!
Turns out I was wrong. I was not referencing the correct value for the message when in mini view. (Some inside baseball: the mini view is essentially a completely different implementation of the dialog window, not just a "preset" or something. So everything it does is duplicated, and if I change a reference for the full dialog window, I have to remember to change it here as well. Keeping it super cut down and specialised helps with that.)
Ah … good to know.
(So, what I hear you saying is that Mike and I should open new issues for the layout of --mini mode.)
| gharchive/issue | 2022-10-14T22:50:19 | 2025-04-01T04:33:36.240792 | {
"authors": [
"bartreardon",
"dan-snelson"
],
"repo": "bartreardon/swiftDialog",
"url": "https://github.com/bartreardon/swiftDialog/issues/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2750207532 | On Rails < 8 a stub version gets installed
I just followed the README and got confused because it didn't seem to work. After some digging I discovered version 0.0.1 of the gem was installed and it doesn't contain any functionality. It seems the actual working versions require Rails 8.0 but those did not get installed because I'm still on Rails 7.2. You might want to yank 0.0.1 from Rubygems to avoid this confusion or change the README with something like:
gem "hotwire-spark", "> 0.0.1"
Thanks for the heads up @frenkel. I just yanked it.
| gharchive/issue | 2024-12-19T12:42:28 | 2025-04-01T04:33:36.242758 | {
"authors": [
"frenkel",
"jorgemanrubia"
],
"repo": "basecamp/hotwire-spark",
"url": "https://github.com/basecamp/hotwire-spark/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1915916669 | Add streaming guide.
Pulled this branch, gonna rebase off of main and work on a few formatting issues and minor fixes across this & other recent guides.
@philipkiely-baseten i dropped the ball on this -- still think it's worth adding?
@squidarth now that we have the "LLM with Streaming" example I don't think this is necessary.
| gharchive/pull-request | 2023-09-27T16:15:16 | 2025-04-01T04:33:36.253917 | {
"authors": [
"philipkiely-baseten",
"squidarth"
],
"repo": "basetenlabs/truss",
"url": "https://github.com/basetenlabs/truss/pull/683",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
116943212 | Improve load balancing strategy
A riak-nodejs-client user has noted a scenario in which one node has 2x the network traffic of the other nodes in the cluster. My initial theory is that a slow node is causing commands to "pile up" on this node.
More information needed:
riak-debug output from slow and normal node
A description of the commands being sent to the cluster - % that are read, % write, and if there are keys more frequently accessed than others
Proposed solution: in the default node manager, take the current # of executing commands on a node into account when selecting a node for the next command. Try to keep the execution # the same or close to it for all nodes.
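The proposed selection policy could look roughly like this (a hypothetical sketch, not the client's actual NodeManager interface — the node shape and function name are made up for illustration):

```javascript
// Hypothetical sketch of least-busy selection: pick the node with the fewest
// in-flight commands, so a slow node stops accumulating a backlog.
function leastBusyNode(nodes) {
  return nodes.reduce((best, node) =>
    node.executing < best.executing ? node : best
  );
}

const nodes = [
  { name: 'riak1', executing: 12 },
  { name: 'riak2', executing: 3 },
  { name: 'riak3', executing: 7 },
];
console.log(leastBusyNode(nodes).name); // riak2
```

Compared to plain round-robin, this keeps the executing count roughly even across nodes, which is the stated goal.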
@drewkerrigan
create jira issue
| gharchive/issue | 2015-11-14T18:20:18 | 2025-04-01T04:33:36.258959 | {
"authors": [
"DSomogyi",
"lukebakken"
],
"repo": "basho/riak-nodejs-client",
"url": "https://github.com/basho/riak-nodejs-client/issues/107",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
91232027 | cork/uncork not supported warning message. [JIRA: CLIENTS-497]
I am getting the following warning message each time I create a client. What does it signify, and how do I fix it?
warn: [RiakConnection] wanted to use cork/uncork but not supported!
warn: [RiakConnection] wanted to use cork/uncork but not supported!
It appears only on Node.js 0.10.x.
The cork / uncork functions are used to batch socket writes to Riak. Older versions of Node don't support this function, as you note. We only test the client with Node.js version 0.12 and later.
I'll post an example for how you can suppress this if you need to use an older version of Node.js.
Please see the example here for using cork: false to suppress this warning:
https://github.com/basho/riak-nodejs-client-examples/blob/master/github/issue-77/example.js
Thanks!
Thanks for the help :)
| gharchive/issue | 2015-06-26T11:50:35 | 2025-04-01T04:33:36.262288 | {
"authors": [
"lukebakken",
"mightwork",
"mogadanez"
],
"repo": "basho/riak-nodejs-client",
"url": "https://github.com/basho/riak-nodejs-client/issues/77",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
99210928 | Converting stop_fold exception breaks vnode worker - BDP patch [JIRA: RIAK-2184]
If the riak_kv_worker sees a stop_fold exception, it will stop and do no further work. The code was blocking that exception and handling it in the backend by returning the value of the accumulator before the exception. This is wrong. The kv fold logic has already processed that accumulator and sent the results back. This causes some extra values to be appended to the results. In the case of paginated 2i queries that use lists:merge to do a merge sort of the results, the extra values made the partial lists not be in sorted order, and lists:merge in turn returns results that are in the incorrect order. This causes some results to be moved to the end and be left out of the final merged result.
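To illustrate why the extra values broke the merge — a self-contained JavaScript demo standing in for a merge of sorted lists (like Erlang's lists:merge), which assumes both inputs are already sorted:

```javascript
// Merge two lists, assuming each is already sorted -- the same assumption a
// merge sort makes. If one input has an extra value appended out of order
// (as the stop_fold bug caused), the merged output comes out in the wrong order.
function merge(a, b) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  }
  return out.concat(a.slice(i)).concat(b.slice(j));
}

console.log(merge([1, 4, 7], [2, 5, 8]));    // [1, 2, 4, 5, 7, 8]
console.log(merge([1, 4, 7, 3], [2, 5, 8])); // [1, 2, 4, 5, 7, 3, 8] -- "3" out of order
```

In the paginated 2i case, a misplaced value like the `3` above ends up moved toward the end and can fall outside the page, i.e. get left out of the final merged result.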
create jira issue
Created at Heather's request via email (Subject = Data Platform Riak Deltas, sent 10/9/2015 @ 3:23 PST).
[posted via JIRA by Derek Somogyi]
John, can you get this in by Wednesday of this week?
[posted via JIRA by Derek Somogyi]
It’s unlikely. My early attempts to trace the behavior of this code were unsuccessful because of the instability of the integration efforts, and since then I’ve been alternating working on the 2i code both for BDP and for the BDP/TS merge, and the 0.8 tests.
I can drop 2i in favor of this, but tomorrow will also involve time spent discussing our 3-week target, so I don’t know how much time I’ll have to code this.
I’ll try to have a better feel for the scope of the fix by tomorrow’s daily call.
-John
Oops, replied to this thinking it was a different issue. Will see what I can do.
Welp, after digging and poking I was satisfied with the PR, but a wee bit more digging and poking revealed this was already merged at some previous date (possibly as @jonmeredith was merging 2.0 bits into 2.1). Closing.
| gharchive/pull-request | 2015-08-05T14:00:14 | 2025-04-01T04:33:36.268441 | {
"authors": [
"Basho-JIRA",
"DSomogyi",
"engelsanchez",
"macintux"
],
"repo": "basho/riak_kv",
"url": "https://github.com/basho/riak_kv/pull/1165",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
129154028 | ☠️ WIP ☠️ Flexible keys RTS-621
Previous to this change, partition and local keys had to be three fields long and end with the quanta field.
With this change, the partition key must be at least one field long and end with the quanta function/field. The initial fields of the local key must match the entire partition key but can include any number of fields afterwards.
This requires the riak_ql branch at/flexi_keys to build.
Closing and opening a new PR against 1.3.
| gharchive/pull-request | 2016-01-27T14:06:42 | 2025-04-01T04:33:36.270045 | {
"authors": [
"andytill"
],
"repo": "basho/riak_kv",
"url": "https://github.com/basho/riak_kv/pull/1333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
251412234 | Updated to Bootstrap v4.0.0-beta
I had to modify the scss a bit to get it to work with Bootstrap v4.0.0-beta . Here is my proposal for improvement.
I tested it across Google Chrome, Firefox, IE10 and EDGE.
(I did not compile new CSS in the PR.)
| gharchive/pull-request | 2017-08-19T09:07:41 | 2025-04-01T04:33:36.288378 | {
"authors": [
"elpescador-nl"
],
"repo": "bassjobsen/typeahead.js-bootstrap4-css",
"url": "https://github.com/bassjobsen/typeahead.js-bootstrap4-css/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
97706282 | display tags inline with date
As part of #40, to display the tags:
What do you think? Should we change something? It looks quite legible to me.
I like it. It looks clean, doesn't take too much space. Very nice
| gharchive/pull-request | 2015-07-28T13:29:27 | 2025-04-01T04:33:36.355247 | {
"authors": [
"baudren",
"egolus"
],
"repo": "baudren/NoteOrganiser",
"url": "https://github.com/baudren/NoteOrganiser/pull/70",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1430044091 | possible to specify more than one template in the renderIf method?
Hi,
Looking to see if I can specify more than a single template in renderIf method:
<?= $rockfrontend->renderIf("sections/includes/related-items.latte", "template=template1") ?>
would you simply rewrite the template declaration?
<?= $rockfrontend->renderIf("sections/includes/related-items.latte", "template=template1, template=template2") ?>
Thanks,
Hi @protrolium
It's just a regular PW page selector, so the syntax is template=template1|template2
| gharchive/issue | 2022-10-31T15:30:45 | 2025-04-01T04:33:36.357334 | {
"authors": [
"BernhardBaumrock",
"protrolium"
],
"repo": "baumrock/RockFrontend",
"url": "https://github.com/baumrock/RockFrontend/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2255463867 | Allow typing into date input fields during book event editing.
Allow typing date into field to set selection of datepickers when editing reading events.
Does not apply to the different datepicker element when editing a book's metadata.
Ok, this should not break anything. Thank you!
I upgraded vue3-datepicker because newer versions have a bugfix for the typeable property.
:tada: This PR is included in version 0.53.0 :tada:
The release is available on:
v0.53.0
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-04-22T02:45:09 | 2025-04-01T04:33:36.360067 | {
"authors": [
"DarthNerdus",
"bayang"
],
"repo": "bayang/jelu",
"url": "https://github.com/bayang/jelu/pull/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
211696077 | Make sure it looks ok on mobile
hide the legend on mobile
make the location input field limited in size on mobile
+@pcorpet
This change is
Reviewed 1 of 1 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.
Comments from Reviewable
| gharchive/pull-request | 2017-03-03T14:03:44 | 2025-04-01T04:33:36.375750 | {
"authors": [
"dedan",
"pcorpet"
],
"repo": "bayesimpact/project-noah",
"url": "https://github.com/bayesimpact/project-noah/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
140744444 | Multilevel logistic regression
Are there any examples of using BayesPy to do multilevel logistic regression? This seems like it'd be straightforward to do if you could define a variable as an arbitrary function of another, like
p[i] <- 1 / (1 + exp(-z[i]))
but I don't know whether that's possible.
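For reference, the transform being asked about is just the logistic (inverse-logit) function; written out directly (illustrative only — this is not BayesPy API, precisely because BayesPy could not express it at the time):

```javascript
// The logistic link from the question: maps any real z to a probability in (0, 1).
function logistic(z) {
  return 1 / (1 + Math.exp(-z));
}

console.log(logistic(0)); // 0.5
console.log(logistic(4) > 0.98); // true -- large z saturates toward 1
```

The non-conjugacy of this link with a Gaussian prior on z is exactly why it falls outside the conjugate exponential family mentioned in the answer below.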
Nope, it's not currently possible to use arbitrary functions. BayesPy is currently limited to the conjugate exponential family. I have plans to add support for non-conjugate nodes but we'll see when that happens.
I added a separate general issue for that: https://github.com/bayespy/bayespy/issues/54 in case you want to follow the progress by subscribing to the issue. But it will probably take some time until I find the time to implement it. So I guess my suggestion would be to take a look at Stan.
| gharchive/issue | 2016-03-14T17:39:32 | 2025-04-01T04:33:36.377738 | {
"authors": [
"jluttine",
"tom-christie"
],
"repo": "bayespy/bayespy",
"url": "https://github.com/bayespy/bayespy/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
460007786 | Merge prototype into master branch
This PR overwrites the existing (dummy) repository with a prototype that can be built on Bazel CI. However, the prototype is still very much WIP and requires a lot more thoughts and work.
Yeah, I think in the long run we want to have all that information (WORKSPACE; internal_deps) in the individual repos. Right now the federation-repo approach makes it easier for me to change and test things.
| gharchive/pull-request | 2019-06-24T17:25:06 | 2025-04-01T04:33:36.383120 | {
"authors": [
"fweikert"
],
"repo": "bazelbuild/bazel-federation",
"url": "https://github.com/bazelbuild/bazel-federation/pull/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1646102082 | Bazel project does not support step-by-step debugging in CLion
Description of the issue. Please be specific.
What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible.
My project is developed in C and compiled with Bazel in CLion, but the Bazel command (bazel run xxx) does not support debugging my code.
The Debug button is gray; the Run button is green for running.
How should I solve this problem?
Version information
CLion: 2023.1 RC
Platform: Linux 4.18.0-240.10.1.el8_3.x86_64
Bazel for CLion plugin: 2023.03.10.0.1-api-version-231
Bazel: 6.1.0
Hi @leland17, could you please provide sample code to reproduce this issue? Thanks!
Thanks for the update. We are closing this issue now. Feel free to reach out to us for any further details.
| gharchive/issue | 2023-03-29T15:57:42 | 2025-04-01T04:33:36.424963 | {
"authors": [
"leland17",
"sgowroji"
],
"repo": "bazelbuild/intellij",
"url": "https://github.com/bazelbuild/intellij/issues/4653",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1449319095 | Switch default standard to C++20 (+gnu extensions)
As in #5, I'd like to propose that we enable C++20 by default, since I know the NDK compilers support it.
No pressure. It just seems like a good opportunity before the interface hardens.
Also, while we're all here: Is this code shared with blaze still (hopefully, so improvements are shared both ways)? If so maybe there's an equivalent of has_cxx17_headers that should be enabled? (From a quick search, that seemed to be a blaze-specific feature, since I didn't see the string in Bazel, but maybe that's wrong.)
[Self note: Maybe also strip X-ray section if no longer relevant. Seems unlikely people will be using this with NDK r15...]
I see it's already the case here... but why would NDK builds use a different default standard version than other platforms? That seems prone to confusing users. As a user I would expect bazel to be consistent across targets (even if it's consistently no opinion and punts to the compiler to decide, which is what ~every other build system I've used does).
I don't know if there is another bazel standard (I'm not a bazel user, I just lurk here because I work on the NDK). If bazel doesn't aim to be consistent across targets this PR seems fine :)
I remember from seeing you on other issues over there, like https://github.com/android/ndk/issues/837 Hardly lurking! Thanks so much for all you do, and for caring enough to be here, too!
Tagging @ahumesky, since it seems like we'll want him to make the directional call here.
I guess a quick pitch for the latest would be: The benefit of having people be up to date by default is that they can use the broadest set of (stable) language features without having to do additional configuration. But that's no different than just recommending people use the latest stable version of anything that's (mostly) backwards compatible:)
(@ahumesky, could we get your call on this? Totally fine to close or to change over to the compiler default instead; just thought this might be the best default option for the reasons above.)
| gharchive/pull-request | 2022-11-15T07:41:46 | 2025-04-01T04:33:36.429152 | {
"authors": [
"DanAlbert",
"cpsauer"
],
"repo": "bazelbuild/rules_android_ndk",
"url": "https://github.com/bazelbuild/rules_android_ndk/pull/26",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
76293667 | Add means to start a job (possibly with parameters) from the web UI
Widget:
Adding parameters:
Nice!
| gharchive/pull-request | 2015-05-14T09:14:23 | 2025-04-01T04:33:36.435827 | {
"authors": [
"jawher",
"julienvey"
],
"repo": "bazooka-ci/bazooka",
"url": "https://github.com/bazooka-ci/bazooka/pull/220",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2172019964 | How to use with my own data
I want to use this with my own data (32 channels) and I am wondering if the data is hard coded or how to use it.
Hi, have you found a solution to use your data?
| gharchive/issue | 2024-03-06T17:04:00 | 2025-04-01T04:33:36.437586 | {
"authors": [
"TwoHuang",
"denlolsauce"
],
"repo": "bbaaii/DreamDiffusion",
"url": "https://github.com/bbaaii/DreamDiffusion/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163106824 | named block call inside params breaks code
Expected behavior
working code (actually just "yield" instead of "yield)" )
Actual behavior
This code
https://gist.github.com/anonymous/4c9e468632534f8bbfd0b30bab44fb50
is transformed into
this code
https://gist.github.com/anonymous/9bad599e05ca0264f53a89f664f19032
RuboCop version
$ rubocop -V
0.39.0 (using Parser 2.3.0.7, running on ruby 2.3.1 x86_64-linux)
And with latest version too
0.41.1 (using Parser 2.3.1.2, running on ruby 2.3.1 x86_64-linux)
| gharchive/issue | 2016-06-30T08:13:50 | 2025-04-01T04:33:36.447118 | {
"authors": [
"RobertDober"
],
"repo": "bbatsov/rubocop",
"url": "https://github.com/bbatsov/rubocop/issues/3266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
276241561 | Style/MinMax cop fails to detect error
Given a file with
class MinMaxRange
def min
1
end
def max
2
end
def to_s
[min, max].uniq.join('-')
end
end
Rubocop fails with:
Scanning minmax.rb
An error occurred while Style/MinMax cop was inspecting minmax.rb:11:4.
undefined method `source' for nil:NilClass
/.gem/ruby/2.4.2/gems/rubocop-0.51.0/lib/rubocop/cop/style/min_max.rb:38:in `message'
Expected behavior
Rubocop should either ignore this case, or suggest correcting to self.minmax...
Actual behavior
Please, report your problems to RuboCop's issue tracker.
Mention the following information in the issue report:
0.51.0 (using Parser 2.4.0.2, running on ruby 2.4.2 x86_64-darwin16)
Steps to reproduce the problem
Run rubocop 0.51.0 on the above file with a default configuration
RuboCop version
$ rubocop -V
0.51.0 (using Parser 2.4.0.2, running on ruby 2.4.2 x86_64-darwin16)
Minimum code and stack trace.
[min, max]
$ rubocop -d
For /tmp/tmp.kaKlYAH35t: configuration from /home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/config/default.yml
Inheriting configuration from /home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/config/enabled.yml
Inheriting configuration from /home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/config/disabled.yml
Inspecting 1 file
Scanning /tmp/tmp.kaKlYAH35t/test.rb
undefined method `source' for nil:NilClass
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/style/min_max.rb:38:in `message'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/style/min_max.rb:25:in `block in on_array'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/style/min_max.rb:34:in `min_max_candidate'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/style/min_max.rb:21:in `on_array'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:44:in `block (2 levels) in on_array'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:109:in `with_cop_error_handling'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:43:in `block in on_array'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:42:in `each'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:42:in `on_array'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/ast/traversal.rb:12:in `walk'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/commissioner.rb:60:in `investigate'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/team.rb:114:in `investigate'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/team.rb:102:in `offenses'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cop/team.rb:44:in `inspect_file'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:258:in `inspect_file'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:205:in `block in do_inspection_loop'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:237:in `block in iterate_until_no_changes'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:230:in `loop'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:230:in `iterate_until_no_changes'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:201:in `do_inspection_loop'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:111:in `block in file_offenses'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:121:in `file_offense_cache'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:109:in `file_offenses'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:100:in `process_file'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:78:in `block in each_inspected_file'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:75:in `each'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:75:in `reduce'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:75:in `each_inspected_file'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:67:in `inspect_files'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/runner.rb:39:in `run'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cli.rb:128:in `execute_runner'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cli.rb:60:in `execute_runners'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/lib/rubocop/cli.rb:31:in `run'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/bin/rubocop:13:in `block in <top (required)>'
/usr/lib/ruby/2.4.0/benchmark.rb:308:in `realtime'
/home/pocke/.gem/ruby/2.4.0/gems/rubocop-0.51.0/bin/rubocop:12:in `<top (required)>'
/home/pocke/.gem/ruby/2.4.0/bin/rubocop:23:in `load'
/home/pocke/.gem/ruby/2.4.0/bin/rubocop:23:in `<main>'
.
1 file inspected, no offenses detected
Finished in 0.17434273899561958 seconds
| gharchive/issue | 2017-11-23T00:45:45 | 2025-04-01T04:33:36.451171 | {
"authors": [
"asafbrukarz",
"pocke"
],
"repo": "bbatsov/rubocop",
"url": "https://github.com/bbatsov/rubocop/issues/5099",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
125159082 | Reduce duplication in indentation cops
Introduce ArrayHashIndentation to be used by Style/IndentArray and
Style/IndentHash. This should bring these up from the CodeClimate's
F-class.
:+1:
:+1:
| gharchive/pull-request | 2016-01-06T11:22:01 | 2025-04-01T04:33:36.452800 | {
"authors": [
"alexdowad",
"bbatsov",
"lumeet"
],
"repo": "bbatsov/rubocop",
"url": "https://github.com/bbatsov/rubocop/pull/2591",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1042441795 | 🛑 FND is down
In 903277f, FND (https://www.flooranddecor.com/rewards?redirect=true) was down:
HTTP code: 403
Response time: 275 ms
Resolved: FND is back up in eb1d0a7.
| gharchive/issue | 2021-11-02T14:29:32 | 2025-04-01T04:33:36.455108 | {
"authors": [
"bbaumler"
],
"repo": "bbaumler/uptime",
"url": "https://github.com/bbaumler/uptime/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53966157 | Changes to run under Rakudo 2014.12
The important change lies in class MongoDB::Connection where the call to IO::Socket::INET send() is changed to write(). The send() needed a string for which the buffer $b must be decoded. This will corrupt the mongodb opcodes. This could be noticed by using wireshark. Write() however can process the buffer directly making it faster too.
The rest of the changes are in META.info to change the version and the README.md to modify the examples and changelog
I appreciate the changes, thanks!
If you want to maintain those modules I can transfer ownership to you. Or feel free to change links to your forks in Perl 6 ecosystem.
They are one step away from being qualified as driver: IEEE754 buffer pack/unpack is missing in Rakudo but can easily be added to BSON manually, and that opens way to float numbers and server authentication. Also BSON now can get gigantic speedup from parallelization in Rakudo. And maybe Reactive Mongo based on Supplies is also within reach.
Hi Pawel,
I would like to take over your two modules but I am very new to perl 6. It
will take time to learn the language in full so at this moment I could only
keep the modules runnable.
When you transfer the ownership, I must remove the fork, isn't it? I'm also
new to git you know, only a year experience :-)
Greetings,
Marcel
On 10 January 2015 21:16:06 Pawel Pabian notifications@github.com wrote:
I appreciate the changes, thanks!
If you want to maintain those modules I can transfer ownership to you. Or
feel free to change links to your forks in Perl 6 ecosystem.
They are one step away from being qualified as driver: IEEE754 buffer
pack/unpack is missing in Rakudo but can easily be added to BSON manually,
and that opens way to float numbers and server authentication. Also BSON
now can get gigantic speedup from parallelization in Rakudo. And maybe
Reactive Mongo based on Supplies is also within reach.
Reply to this email directly or view it on GitHub:
https://github.com/bbkr/mongo-perl6-driver/pull/3#issuecomment-69470208
Yes, please remove your forks. I'll transfer repositories and update Ecosystem locations.
Done that.
Transfer requests sent. Once completed please edit:
Ecosystem
https://github.com/perl6/ecosystem/blob/master/META.list#L87
https://github.com/perl6/ecosystem/blob/master/META.list#L93
"source-url" in META.json files
CONTACT section in README.md files
I also:
switched those modules to semantic versioning
made you an official author
made modules compatible with S11 spec
Good luck.
On 01/12/2015 02:20 AM, Pawel Pabian wrote:
Transfer requests sent. Once completed please edit:
Ecosystem
https://github.com/perl6/ecosystem/blob/master/META.list#L87
https://github.com/perl6/ecosystem/blob/master/META.list#L93
"source-url" in META.json files
CONTACT section in README.md files
I also:
switched those modules to semantic versioning
made you an official author
made modules compatible with S11 spec
Think I've got everything right now.
Thanks for the pointers above because I would have forgotten that.
Bye,
Marcel
| gharchive/pull-request | 2015-01-10T18:07:07 | 2025-04-01T04:33:36.520675 | {
"authors": [
"MARTIMM",
"bbkr"
],
"repo": "bbkr/mongo-perl6-driver",
"url": "https://github.com/bbkr/mongo-perl6-driver/pull/3",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
269230823 | Allows JSON to be passed if JSON Content-Type
Doesn't support hashes in general
Re: https://github.com/bblimke/webmock/issues/449
@LDonoughe thank you for the pull request!
This would indeed be a very useful addition to Webmock.
In order to support hashes as stubbed response body, Webmock would either need
to convert them to json or make sure all http client libs support hash as a response body.
I believe this has been raised in https://github.com/bblimke/webmock/issues/449
To make the pull request is complete, an appropriate acceptance spec would be useful,
to make sure that this functionality actually works across all http client libs,
and updated README.
If we allow WebMock to accept hashes just for JSON, then perhaps
Webmock should provide a more specific error message in case Hash is provided,
but json content type is not?
You're welcome, @bblimke!
I've tried to address your PR comments in my latest commit. The reason so many files were touched is that a number of the adapters were relying on implicit string conversion, which Hash does not have, so I'm explicitly calling to_s on the body where I got errors. Let me know if these changes do not completely address your concerns and what I can do to change that.
@bblimke is there anything else I can do to speed up the acceptance/merge of this PR? My team could really use this functionality to properly test 3rd-party endpoints
@LDonoughe sorry. I missed your latest changes.
Putting duplication aside, I'm not sure I understand the need for to_s change in all adapters.
Are you trying to convert the Hash object to String in order for http clients to accept it?
Hash to_s is not JSON.
Please see my above comment:
In order to support hashes as stubbed response body, Webmock would either need
to convert them to json or make sure all http client libs support hash as a response body.
I believe this has been raised in #449
The current specs also don't actually check the body returned from request.
@bblimke I know it's been a couple years but I wanted to share my findings and close this PR.
The short answer is that I was wrong about needing to return a hash rather than a string. So, my advice to anyone who finds this PR is to check what data is actually being returned by your endpoint via your choice of HTTP adapter. If it's actually a hash and not a string, feel free to resurrect my "fix." In most cases, you're going to have to JSON.parse the string to get your JSON.
As far as the integration tests go: when I attempted to add integration tests for the JSON work I noticed that it was broken in 4 or so HTTP adapters. When I went to add a similar shared test for Array to see why that was the case I noticed that that too was broken. I lost the original code/tests I had but I've created a new Array integration test if you'd like to see for yourself by checking out my branch (https://github.com/LDonoughe/webmock/tree/data_type_support_tests) and running bundle exec rake spec
I sincerely hope this saves at least one person some time and effort testing their integrations.
@LDonoughe thank you for these insights and the for closing the issue. Hopefully that will help others.
Yes, there are many trade-offs in WebMock in order to support various http clients,
but that's the price to have a unified API that works everywhere (in theory ;)
People can always mock the specific http clients in case WebMock is too generic.
| gharchive/pull-request | 2017-10-27T20:47:25 | 2025-04-01T04:33:36.540958 | {
"authors": [
"LDonoughe",
"bblimke"
],
"repo": "bblimke/webmock",
"url": "https://github.com/bblimke/webmock/pull/727",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
830864460 | labels for events, new screenplay filter
Allow grouping / filtering of Event by a so-called "label".
version 406.
| gharchive/issue | 2021-03-13T10:14:00 | 2025-04-01T04:33:36.546025 | {
"authors": [
"bbortt"
],
"repo": "bbortt/event-planner",
"url": "https://github.com/bbortt/event-planner/issues/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2449371141 | 🐛 [BUG]: TypeError: Cannot read properties of undefined (reading 'id')
Is there an existing issue for this?
[X] I have searched the existing issues and this is a new bug.
Current Behavior
console error, saying 'TypeError: Cannot read properties of undefined (reading 'id')'
it happens when I add custom nodes.
Expected Behavior
No error should be triggered.
Steps To Reproduce
am not sure as it happens occasionally, but look at the code description below you will understand the issue
Relevant log output
at applyChanges (vue-flow-core.mjs:4966:25)
at Object.applyNodeChanges2 [as applyNodeChanges] (vue-flow-core.mjs:6416:12)
at nodesChangeHandler (vue-flow-core.mjs:7792:19)
at vue-flow-core.mjs:5105:52
at Array.map (<anonymous>)
at Proxy.trigger (vue-flow-core.mjs:5105:40)
at updateNodeDimensions (vue-flow-core.mjs:6104:31)
at vue-flow-core.mjs:9563:24
Anything else?
Looking at the updateNodeDimensions function, the changes member is set here only if doUpdate is true.
The issue happens because it is assigned with an explicit index, changes[i] = ..., instead of being appended (e.g. changes.push(...)).
So when doUpdate is false for the first element and then becomes true in the next loop iteration, the changes value will be:
[empty, {...}, {...}]
where index 0 is empty because doUpdate was false at index 0.
Then, when looping through these changes, the first element will be undefined or empty. Take the applyChanges function for example: currentChange.id will throw an error there because currentChange is actually empty.
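A minimal standalone sketch of that failure mode (the variable names here are illustrative, not taken from the Vue Flow source): indexed assignment after skipping an element leaves a hole that later reads as undefined, while appending produces a dense array:

```javascript
// Buggy pattern: indexed assignment, but only when doUpdate is true,
// leaves holes at the skipped indices.
const nodes = [
  { id: 'a', doUpdate: false }, // skipped -> hole at index 0
  { id: 'b', doUpdate: true },
  { id: 'c', doUpdate: true },
];

const changes = [];
for (let i = 0; i < nodes.length; i++) {
  if (nodes[i].doUpdate) {
    changes[i] = { id: nodes[i].id };
  }
}
// changes is now [ <1 empty item>, { id: 'b' }, { id: 'c' } ]:
// changes[0] is undefined, so changes[0].id throws
// "TypeError: Cannot read properties of undefined (reading 'id')".

// Fixed pattern: push appends, so the result is dense and safe to iterate.
const dense = [];
for (const n of nodes) {
  if (n.doUpdate) {
    dense.push({ id: n.id });
  }
}
console.log(changes.length, dense.length);
```

This matches the reported fix: building the changes list by appending rather than by reusing the source loop index.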
Thanks for reporting, will be fixed in the next patch.
Fixed with 1.39.3
| gharchive/issue | 2024-08-05T19:58:07 | 2025-04-01T04:33:36.554304 | {
"authors": [
"bcakmakoglu",
"mshamaseen"
],
"repo": "bcakmakoglu/vue-flow",
"url": "https://github.com/bcakmakoglu/vue-flow/issues/1568",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
225919040 | Automate platform builds and deployments
Currently, the jpy built and release management is way too expensive, we should automate it:
[ ] Configure TravisCI for Linux
[ ] Configure TravisCI for Darwin
[ ] Configure AppVeyor for Windows
[ ] Configure CodeCov
[ ] Deploy Conda package to some channel
[ ] Deploy PiPy package
See also #15 and #83
| gharchive/issue | 2017-05-03T08:49:23 | 2025-04-01T04:33:36.578785 | {
"authors": [
"forman"
],
"repo": "bcdev/jpy",
"url": "https://github.com/bcdev/jpy/issues/92",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2153963035 | SExpParser fails with GnuPG ed25519 private keys
The SExpParser cannot parse GnuPG private keys if they are ed25519.
The reason is trivial: this is how GnuPG stores them:
https://github.com/gpg/gnupg/blob/40227e42ea0f2f1cf9c9f506375446648df17e8d/common/t-ssh-utils.c#L179-L199
https://github.com/gpg/gnupg/blob/40227e42ea0f2f1cf9c9f506375446648df17e8d/agent/cvt-openpgp.c#L222-L243
And SExprParser always ends up here:
https://github.com/bcgit/bc-java/blob/bd6e70c7cad0a35fdcba055de13ef2e36f6a151b/pg/src/main/java/org/bouncycastle/gpg/SExprParser.java#L94
As it does not account for (flags xxx).
Closing as this should be resolved by #1591 being closed; I think this was fixed in d9412c3bf993c751466c8e98e3ca7743dac8a621 but I might be incorrect. Let us know if it doesn't work!
| gharchive/issue | 2024-02-26T11:36:46 | 2025-04-01T04:33:36.587011 | {
"authors": [
"cipherboy",
"cstamas"
],
"repo": "bcgit/bc-java",
"url": "https://github.com/bcgit/bc-java/issues/1590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2607094211 | TLS PSK intermittently fails with TlsFatalAlertReceived: bad_record_mac(20)
I think I am experiencing a possible race condition in either TlsClientProtocol or TlsServerProtocol.
I am using bctls-jdk18on 1.78.1.
I can reproduce an intermittent failure in a JUnit test. It performs:
10 repeats of a client/server echo test, using plaintext communication
10 repeats of a client/server echo test, using TLS PSK communication
Sometimes all 20 tests pass. Other times only 19 out of 20 tests pass.
Plaintext tests 1 through 10 always pass.
TLS PSK test 1 intermittently passes, or fails with exception TlsFatalAlertReceived: bad_record_mac(20).
TLS PSK tests 2 through 10 always pass.
I uploaded a Maven project to GitHub to demonstrate the issue.
https://github.com/justincranford/bc-tls-psk
Here are screenshots comparing when all tests passed, versus when all tests passed except the first TLS PSK test.
If TLS PSK test #1 fails, the stack trace is:
com.github.justincranford.psk.PskTlsTest
testTlsPsk(com.github.justincranford.psk.PskTlsTest)
org.bouncycastle.tls.TlsFatalAlertReceived: bad_record_mac(20)
at org.bouncycastle.tls.TlsProtocol.handleAlertMessage(TlsProtocol.java:245)
at org.bouncycastle.tls.TlsProtocol.processAlertQueue(TlsProtocol.java:740)
at org.bouncycastle.tls.TlsProtocol.processRecord(TlsProtocol.java:563)
at org.bouncycastle.tls.RecordStream.readRecord(RecordStream.java:247)
at org.bouncycastle.tls.TlsProtocol.safeReadRecord(TlsProtocol.java:879)
at org.bouncycastle.tls.TlsProtocol.blockForHandshake(TlsProtocol.java:427)
at org.bouncycastle.tls.TlsClientProtocol.connect(TlsClientProtocol.java:88)
at com.github.justincranford.psk.PskTlsTest$PskTlsClient.send(PskTlsTest.java:79)
at com.github.justincranford.psk.PskTlsTest.doClientServer(PskTlsTest.java:62)
at com.github.justincranford.psk.PskTlsTest.testTlsPsk(PskTlsTest.java:51)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
Test files: pom.xml and TlsPksTest.java
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE RelativeLayout>
<project xmlns="https://maven.apache.org/POM/4.0.0"
xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.github.justincranford</groupId>
<artifactId>bc-tls-psk</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>BC TLS PSK</name>
<description>BC TLS PSK</description>
<dependencies>
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bctls-debug-jdk18on</artifactId>
<version>1.78.1</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>2.0.16</version>
</dependency>
<dependency>
<groupId>org.assertj</groupId>
<artifactId>assertj-core</artifactId>
<version>3.26.3</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.11.3</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>5.14.2</version>
<scope>test</scope>
</dependency>
</dependencies>
TlsPskTest.java
package com.github.justincranford.psk;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.assertj.core.api.Assertions;
import org.bouncycastle.tls.CipherSuite;
import org.bouncycastle.tls.PSKTlsClient;
import org.bouncycastle.tls.PSKTlsServer;
import org.bouncycastle.tls.TlsClientProtocol;
import org.bouncycastle.tls.TlsPSKIdentity;
import org.bouncycastle.tls.TlsPSKIdentityManager;
import org.bouncycastle.tls.TlsServerProtocol;
import org.bouncycastle.tls.crypto.impl.bc.BcTlsCrypto;
import org.bouncycastle.util.io.Streams;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.TestMethodOrder;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import org.mockito.Mockito;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@SuppressWarnings({"nls", "static-method", "hiding", "synthetic-access", "resource"})
public class PskTlsTest {
private static final Logger log = LoggerFactory.getLogger(PskTlsTest.class);
public static final SecureRandom SECURE_RANDOM = new SecureRandom();
private static final int[] CIPHER_SUITES = new int[] { CipherSuite.TLS_PSK_WITH_AES_128_CBC_SHA };
private static final TlsPskIdentity PSK_IDENTITY = new TlsPskIdentity("identity".getBytes(StandardCharsets.UTF_8),"secret".getBytes(StandardCharsets.UTF_8));
@ParameterizedTest // repeat test, use unique port each time to avoid TCP CLOSE_WAIT
@ValueSource(ints={9440, 9441, 9442, 9443, 9444, 9445, 9446, 9447, 9448, 9449})
@Order(1)
public void testPlaintext(final int port) throws Exception {
doClientServer(false, "localhost", port);
}
@ParameterizedTest // repeat test, use unique port each time to avoid TCP CLOSE_WAIT
@ValueSource(ints={8440, 8441, 8442, 8443, 8444, 8445, 8446, 8447, 8448, 8449})
@Order(2)
public void testTlsPsk(final int port) throws Exception {
doClientServer(true, "localhost", port);
}
// Start server, send message with client, and verify client received echo of its request
// useTlsPsk=false uses plaintext communication
// useTlsPsk=true uses TLS PSK communication
private void doClientServer(final boolean useTlsPsk, final String address, final int port) throws Exception {
final PskTlsServer pskTlsServer = Mockito.spy(new PskTlsServer(useTlsPsk, address, port, 2));
final String clientRequest = "This is an echo test " + SECURE_RANDOM.nextInt();
final Thread serverThread = pskTlsServer.start();
final String serverResponse = PskTlsClient.send(useTlsPsk, address, port, clientRequest);
Assertions.assertThat(serverResponse).isEqualTo(clientRequest);
serverThread.interrupt();
}
public static class PskTlsClient {
public static String send(final boolean useTlsPsk, final String address, final int port, final String clientRequest) throws Exception {
log.info("Client: Connecting to server port " + port);
try (final Socket socket = new Socket(address, port)) {
log.info("Client: Connected to server port " + port);
final InputStream inputStream = socket.getInputStream();
final OutputStream outputStream = socket.getOutputStream();
final byte[] serverResponseBytes = new byte[clientRequest.length()];
if (useTlsPsk) { // TLS PSK send and receive
final TlsClientProtocol tlsClientProtocol = new TlsClientProtocol(inputStream, outputStream);
tlsClientProtocol.connect(new PSKTlsClient(new BcTlsCrypto(SECURE_RANDOM), PSK_IDENTITY) {
@Override public int[] getCipherSuites() { return CIPHER_SUITES; }
});
final OutputStream tlsOutputStream = tlsClientProtocol.getOutputStream();
log.info("Client: Sending \"Hello from PSK Client\"");
tlsOutputStream.write(clientRequest.getBytes(StandardCharsets.UTF_8));
tlsOutputStream.flush();
final InputStream tlsInputStream = tlsClientProtocol.getInputStream();
final int numServerResponseBytes = tlsInputStream.read(serverResponseBytes);
Assertions.assertThat(numServerResponseBytes).isEqualTo(clientRequest.length());
tlsClientProtocol.close();
} else { // PLAINTEXT send and receive
log.info("Client: Sending \"Hello from PSK Client\"");
outputStream.write(clientRequest.getBytes(StandardCharsets.UTF_8));
outputStream.flush();
final int numServerResponseBytes = inputStream.read(serverResponseBytes);
Assertions.assertThat(numServerResponseBytes).isEqualTo(clientRequest.length());
}
final String serverResponse = new String(serverResponseBytes, StandardCharsets.UTF_8);
log.info("Client: Received from server: " + serverResponse);
return serverResponse;
}
}
}
public static class PskTlsServer {
private static final int MAX_WAIT_MILLIS = 3000;
private final boolean useTlsPsk;
private final String address;
private final int port;
private final int backlog;
public PskTlsServer(final boolean useTlsPsk, final String address, final int port, final int backlog) {
this.useTlsPsk = useTlsPsk;
this.address = address;
this.port = port;
this.backlog = backlog;
}
public void listen(final CountDownLatch countDownLatch) throws Exception {
try (ServerSocket serverSocket = new ServerSocket(this.port, this.backlog, InetAddress.getByName(this.address))) {
log.info("Server: Listening on " + this.address + ":" + this.port + "...");
countDownLatch.countDown(); // signal to main thread that server started OK
while (true) {
log.info("Server: While loop");
try (final Socket socket = serverSocket.accept()) {
log.info("Server: Accepted connection from client");
final InputStream inputStream = socket.getInputStream();
final OutputStream outputStream = socket.getOutputStream();
if (this.useTlsPsk) { // TLS PSK echo
final TlsServerProtocol tlsServerProtocol = new TlsServerProtocol(inputStream, outputStream);
final BcTlsCrypto bcTlsCrypto = new BcTlsCrypto(SECURE_RANDOM);
final PSKTlsServer pskTlsServer = new PSKTlsServer(bcTlsCrypto, new TlsPskIdentityManager(PSK_IDENTITY)) {
@Override public int[] getCipherSuites() { return CIPHER_SUITES; }
};
tlsServerProtocol.accept(pskTlsServer);
final InputStream tlsInputStream = tlsServerProtocol.getInputStream();
final OutputStream tlsOutputStream = tlsServerProtocol.getOutputStream();
Streams.pipeAll(tlsInputStream, tlsOutputStream);
} else { // PLAINTEXT echo
Streams.pipeAll(inputStream, outputStream);
}
}
}
}
}
public Thread start() {
final CountDownLatch countDownLatch = new CountDownLatch(1);
final Thread serverThread = new Thread(() -> {
try {
this.listen(countDownLatch);
} catch (Exception e) {
log.info("Main: Exception while listening", e);
}
});
log.info("Main: Waiting for Server");
final long nanos = System.nanoTime();
serverThread.start();
try {
countDownLatch.await(MAX_WAIT_MILLIS, TimeUnit.MILLISECONDS); // wait for server thread to indicate it started OK
// Thread.sleep(100); // waiting for server to call serverSocket.accept() doesn't seem to help
} catch (InterruptedException e) {
log.info("Main: Exception while waiting for start", e);
throw new RuntimeException(e);
} finally {
log.info("Main: Waited for Server start for " + Float.valueOf((System.nanoTime() - nanos)/1000000F) + " msec");
}
return serverThread;
}
}
public static class TlsPskIdentity implements TlsPSKIdentity {
private final byte[] identity;
private final byte[] psk;
public TlsPskIdentity(final byte[] pskIdentity, final byte[] psk) {
this.identity = pskIdentity;
this.psk = psk;
}
@Override public byte[] getPSKIdentity() { return this.identity; }
@Override public byte[] getPSK() { return this.psk; }
@Override public void skipIdentityHint() { /*do nothing*/ }
@Override public void notifyIdentityHint(byte[] psk_identity_hint) { /*do nothing*/ }
}
public static class TlsPskIdentityManager implements TlsPSKIdentityManager {
private final TlsPskIdentity tlsPskIdentity;
public TlsPskIdentityManager(final TlsPskIdentity tlsPskIdentity) { this.tlsPskIdentity = tlsPskIdentity; }
@Override
public byte[] getHint() { return this.tlsPskIdentity.getPSKIdentity(); }
@Override
public byte[] getPSK(byte[] identity) { return this.tlsPskIdentity.getPSK(); }
}
}
It seems straightforward to reproduce the problem.
As a quick estimate, I have to re-run the JUnit test about 10 times before I randomly get all tests to pass. In the rest of the runs, the first TLS PSK test fails, so roughly a 90% failure rate, but only for the first parameterized test repeat.
The return value from TlsPSKIdentity#getPSK (resp. TlsPSKIdentityManager#getPSK) needs to be cloned, as it will be filled with zeros after use.
You could e.g. use org.bouncycastle.tls.BasicTlsPSKIdentity in this case.
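A minimal sketch of the suggested fix, outside of Bouncy Castle: return a defensive clone from getPSK() so the TLS engine can zero its copy of the key without corrupting the stored one. The class below is illustrative only, not the library's actual source; per the suggestion above, BasicTlsPSKIdentity already handles this for you.

```java
import java.util.Arrays;

// Sketch of the defensive-copy fix discussed above. This standalone class
// demonstrates the pattern without the Bouncy Castle dependency; names are
// illustrative.
public class ClonedPskIdentity {
    private final byte[] identity;
    private final byte[] psk;

    public ClonedPskIdentity(byte[] identity, byte[] psk) {
        this.identity = identity.clone();
        this.psk = psk.clone();
    }

    public byte[] getPSKIdentity() { return identity.clone(); }

    // Clone on every call: the TLS engine fills the returned array with
    // zeros after use, so handing out the internal array corrupts the key.
    public byte[] getPSK() { return psk.clone(); }

    public static void main(String[] args) {
        byte[] secret = {1, 2, 3, 4};
        ClonedPskIdentity id = new ClonedPskIdentity(new byte[]{9}, secret);

        byte[] first = id.getPSK();
        Arrays.fill(first, (byte) 0); // simulate the engine wiping its copy

        byte[] second = id.getPSK();  // still intact thanks to the clone
        if (!Arrays.equals(second, new byte[]{1, 2, 3, 4})) {
            throw new AssertionError("stored PSK was corrupted");
        }
        System.out.println("stored PSK survives zeroing of the returned copy");
    }
}
```

With the non-cloning version, the second getPSK() call would return all zeros and the handshake would fail intermittently, matching the flaky behaviour described above.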
Thank you for your feedback. I tried your suggestion and it worked! My mini stress test passes 100% of the time now. Awesome! I will close this issue.
Q: Is returning a cloned byte[] from TlsPSKIdentity#getPSK documented somewhere?
I hope it is OK to ask this question as a postscript to the original issue.
I don't recall seeing this in main/test comments, or in javadocs. I didn't expect a read method to have a side effect of zeroing out the byte array.
| gharchive/issue | 2024-10-23T03:39:42 | 2025-04-01T04:33:36.598649 | {
"authors": [
"justincranford",
"peterdettman"
],
"repo": "bcgit/bc-java",
"url": "https://github.com/bcgit/bc-java/issues/1876",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
812273571 | Skip marker packets when parsing public key rings
According to RFC 4880, marker packets are obsolete and must be ignored when encountered.
If the user tried to parse a public key ring which was prefixed with a marker packet, parsing would fail, as the initial tag would match neither a public key packet nor a subkey packet.
My fix skips any marker packets and continues to parse the public key ring after skipping.
This fixes parts of https://github.com/pgpainless/pgpainless/issues/84
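The general shape of the fix can be sketched as follows. The tag constants come from RFC 4880, but the parsing loop is a toy stand-in, not the actual BCPGInputStream code from the patch.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal illustration of the fix: skip obsolete marker packets (RFC 4880
// tag 10) before parsing the public key ring. The Deque of integer tags is
// a toy stand-in for Bouncy Castle's real packet stream.
public class MarkerSkipDemo {
    static final int MARKER_TAG = 10;     // RFC 4880, section 5.8
    static final int PUBLIC_KEY_TAG = 6;  // RFC 4880, section 5.5.1.1

    /** Discard leading marker packets and return the first meaningful tag. */
    static int firstMeaningfulTag(Deque<Integer> packetTags) {
        while (!packetTags.isEmpty() && packetTags.peek() == MARKER_TAG) {
            packetTags.poll(); // marker packets MUST be ignored when received
        }
        return packetTags.isEmpty() ? -1 : packetTags.peek();
    }

    public static void main(String[] args) {
        Deque<Integer> ring = new ArrayDeque<>();
        ring.add(MARKER_TAG);     // prefix that previously broke parsing
        ring.add(PUBLIC_KEY_TAG); // the actual key material
        if (firstMeaningfulTag(ring) != PUBLIC_KEY_TAG) {
            throw new AssertionError("marker packet was not skipped");
        }
        System.out.println("parser now sees tag " + ring.peek());
    }
}
```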
I guess there is a first time for everything. Thanks for the patch, merged with minor revision.
| gharchive/pull-request | 2021-02-19T19:19:55 | 2025-04-01T04:33:36.601879 | {
"authors": [
"dghgit",
"vanitasvitae"
],
"repo": "bcgit/bc-java",
"url": "https://github.com/bcgit/bc-java/pull/891",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1165261381 | Handle collection date ranges and multiple dates
Collection dates (years) can be one of:
Individual year (1990)
Year range (1990-1996)
Multiple years (1990, 1992)
Combination of 2 & 3 (1990, 1992-1994)
Need to allow for these combinations in the model and change the load to accommodate.
@elisabethdeom - Feel free to put your answer in here if it doesn't need to be discussed. If you need clarification or would like to discuss we can do that tomorrow.
| gharchive/issue | 2022-03-10T13:58:08 | 2025-04-01T04:33:36.603733 | {
"authors": [
"bferguso"
],
"repo": "bcgov/BCHeritage",
"url": "https://github.com/bcgov/BCHeritage/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
901436887 | OOP: Replace special characters when submitting form.
[x] After these changes, the app was run and still works as expected
[x] Tests for these changes were added (if applicable)
[x] All existing unit tests were run and still pass
No longer need this PR. Closing.
| gharchive/pull-request | 2021-05-25T22:15:25 | 2025-04-01T04:33:36.607928 | {
"authors": [
"harrymaynard-maximus"
],
"repo": "bcgov/MOH-AOP-OOP",
"url": "https://github.com/bcgov/MOH-AOP-OOP/pull/181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1942475988 | Incorporate PY Contribution Maximums into FT workflows
Description of Task:
Each contribution variable in the FT assessment workflows must be subjected to PY maximums which are calculated across ALL applications within a PY.
Context
Currently Camunda assesses each application independently, and results in a final award configuration.
SABC policy includes two types of PY maximums which require assessments within a PY to reference one another
PY award maximums. These affect final award amounts, and can therefore be applied during the eCert configuration stage, after camunda assessment calculations are concluded.
PY contribution maximums. These need to affect the intermediary 'inputs' which are part of each application's assessment calculations, and therefore need to be included within Camunda workflows somehow.
Questions:
How can Camunda be configured to reference other applications? Can it be done within the FT assessment workflow, or does there need to be a separate workflow established to capture all sum variables within a given PY and establish them as variables to be consumed by the FT assessment workflow?
Does this present an issue for the design of eCert actual numbers being different from assessment numbers?
Closing as this is replaced by other tickets related to PY maximums
| gharchive/issue | 2023-10-13T18:54:53 | 2025-04-01T04:33:36.610949 | {
"authors": [
"HRAGANBC"
],
"repo": "bcgov/SIMS",
"url": "https://github.com/bcgov/SIMS/issues/2415",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2.1.1 ToolTip - Label Interactive Elements - BC Services Card Login - Supporting Partner Accessibility Fixes - Level A - Keyboard
User Story
As a Supporting Partner, I need proper labelling to identify interactive elements so that I can...
Context:
These issues were uncovered through an accessibility audit via CITZ.
Detailed spreadsheet for Students is inserted inline and also attached here: 2024_Supporting Information_Audit.xlsx
All functionality of the content is operable through a keyboard interface without requiring specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints.
The Problem
ToolTip is ignored by keyboard navigation and screen reader.
Acceptance Criteria
- [ ] Ensure proper labelling is used to identify interactive elements for the following:
[ ] Welcome to StudentAid BC
[ ] Login with BC Services Card
[ ] Welcome! (Start submission)
[ ] Search for application
[ ] Supporting information form
Who decides labels?
@michesmith connect with Rowan to resolve questions
Closing this ticket as out of scope. The BC Services Card login page is managed by a different team.
| gharchive/issue | 2024-03-14T22:59:09 | 2025-04-01T04:33:36.616170 | {
"authors": [
"HRAGANBC",
"michesmith",
"ninosamson"
],
"repo": "bcgov/SIMS",
"url": "https://github.com/bcgov/SIMS/issues/2998",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2653607189 | TechDebt: API: Environment Variables Validation
Description of Changes
Note: In my previous version env-config.ts exported a getter and setter for ENV values. I think the added complexity made it not worthwhile.
Now you get and set process.env values the regular way, with the addition of process.env being fully typed. The app will crash on startup if the environment variables are not set correctly; I am still undecided whether the loadEnvironmentVariables function should always exit the process, or only do so for development environments.
Validates environment variables against a zod schema
Extends the process.env type to include the inferred zod type
Updates some existing casting and default values
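The PR's actual implementation is TypeScript with zod; purely to illustrate the fail-fast startup validation pattern it describes, here is a sketch in Java (the variable names are hypothetical, not from the PR):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Fail-fast environment validation, sketched in Java for illustration only.
// The real change uses a zod schema in env-config.ts; the idea is the same:
// check all required variables once at startup and abort if any are missing.
public class EnvConfig {
    // Illustrative variable names; the real schema lists the app's own keys.
    private static final List<String> REQUIRED =
            List.of("DB_HOST", "DB_PORT", "API_KEY");

    static void validate(Map<String, String> env) {
        List<String> missing = REQUIRED.stream()
                .filter(key -> env.get(key) == null || env.get(key).isBlank())
                .collect(Collectors.toList());
        if (!missing.isEmpty()) {
            // The real app would exit the process here instead of throwing.
            throw new IllegalStateException(
                    "Missing required environment variables: " + missing);
        }
    }

    public static void main(String[] args) {
        validate(Map.of("DB_HOST", "localhost", "DB_PORT", "5432", "API_KEY", "k"));
        System.out.println("all required environment variables are present");
    }
}
```

Crashing at startup, rather than failing later on first use, is what makes the "should any be set to optional?" question below worth deciding deliberately.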
Testing Notes
App should work normally (when all environment variables exist in .env)
Need to validate that all the values in the schema are actually required. Should any be set to optional?
Looks great!
| gharchive/pull-request | 2024-11-12T23:25:33 | 2025-04-01T04:33:36.620575 | {
"authors": [
"MacQSL",
"NickPhura"
],
"repo": "bcgov/biohubbc",
"url": "https://github.com/bcgov/biohubbc/pull/1425",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2495568574 | Mat/Pat - Employee Information Validations
Tab - Employee Information
AC
[x] Name Ministry and Email field values is present (testing only the critical fields)
[x] "Incorrect or missing information can cause delays in the processing of your leave application" can be left empty and next tab "Alternate contact information" should show up
Note
"Incorrect or missing information can cause delays in the processing of your leave application" is mandatory only on form submission so will test that as part of another ticket
script - https://bcgov.sharepoint.com/:u:/r/teams/03991/Shared Documents/General/Selenium/Selenium Scripts/Maternity Parental L%26A/MaternityEmployeeInfo_1886.side?csf=1&web=1&e=gt56fc
Working great - thanks Fazil!
Perfect - Good to go PO A = closed
| gharchive/issue | 2024-08-29T21:01:18 | 2025-04-01T04:33:36.625294 | {
"authors": [
"Stella-Archer",
"ayushdamani",
"fazil-ey"
],
"repo": "bcgov/digital-journeys",
"url": "https://github.com/bcgov/digital-journeys/issues/1886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1216253547 | Verify if we can setup Microsoft Teams functionality
Title of ticket:
Description
Microsoft has the documentation here: https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
Someone with admin should see if we have this functionality. If we DO have this functionality, we can setup alerting directly to Teams. If not, we have to use either RocketChat or email.
Dependencies
A user with admin(?) must check
All users can check connectors
DOD
[x] User has verified if we have Microsoft Teams connectors - https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
Mark as done. Success! Set up a new Teams room, "Monitoring", that can be sent custom messages via Connectors.
| gharchive/issue | 2022-04-26T17:17:15 | 2025-04-01T04:33:36.651451 | {
"authors": [
"acoard-aot"
],
"repo": "bcgov/foi-flow",
"url": "https://github.com/bcgov/foi-flow/issues/1973",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1265097859 | Design - Non-Pilot Ministries Ability to Process Payments Online
Title of ticket: Design - Non-Pilot Ministries Ability to Process Payments Online
Description
Design a flow, process and mock-ups of how we can on-board non-pilot users into the applicant fee payment process but nothing else.
Dependencies
Are there any dependencies?
DOD
[ ] List the items that need to be complete for this ticket to be considered done
[ ]
[ ]
[ ]
[ ]
@m-prodan @arielleandrews @lmullane
Links to update prototype: https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/941000374
Non-pilot users will land on a queue of their team's requests that are in CFR and will be able to select or search for the request they want to process fees for. (Non-Pilot users have restricted search.) When Non-Pilot users select a request they will be taken to the CFR forms. In the CFR Form they will be able to upload the Invoice and letter, as well as ensure the amounts pulled from AXIS are correct. Once they move through to the Approved Status, and have uploaded the documents on the form, the "Email Applicant" button will become active.
Modal if Form hasn't been approved but user tries to move to next stages: https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/941000377
Modal if users hasnt upload documents but tries to move to next stages: https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/941000865
Modal if user tries to resend email: https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/941000866
Modal for sending emailing and having final confimation: https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/941000378
Designs look good, Jacky.
A couple of questions:
should we differentiate between the ministry (pilot vs non-pilot ministry) rather than the user? An IAO analyst may have some requests in the system that are pilot ministries and some that are not. So only in some instances will they only see the CFR form, correct?
what would be the difference for the Advanced Search for non-pilot?
| gharchive/issue | 2022-06-08T17:50:27 | 2025-04-01T04:33:36.668600 | {
"authors": [
"JHarrietha-AOT",
"lmullane"
],
"repo": "bcgov/foi-flow",
"url": "https://github.com/bcgov/foi-flow/issues/2293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1318786735 | Fee Waiver Approval Recommendations
As a user
I want (goal)
so that (reason/ why/ value)
Assumptions & Scope
What are the assumptions for this story?
What is IN scope?
What is NOT in scope?
Acceptance Criteria
Scenario 1 – Waive Fee in Part
• GIVEN an IAO user is on the fee waiver form
• WHEN they activate the checkbox status ‘Waive Fee in Part’
• THEN under 'Overall Analyst Recommendation' the analyst will see editable fields for 'Summarize Rationale', 'Value of Amount' and 'Amount to be waived'
• AND it will activate the drop downs for both ‘Amount to be Waived’ and ‘Value of Amount’
• THEN the user can either select a $ Amount or % Amount to waive partial fees
Scenario 2: Waive Fee in Full
• GIVEN an IAO user is on the fee waiver form
• WHEN they activate the checkbox status ‘Waive Fee in Full’
• THEN under ‘Overall Analyst Recommendation’ the analyst will see editable fields for ‘Summarize Rationale, ‘Value of Amount’ and ‘Amount to be waived’
• AND the fields for 'Summarize Rationale and Amount to be waived will be editable
• AND the field for ‘Value of Amount’ will NOT be editable
Scenario 3: Do not Waive fee
• GIVEN an IAO user is on the fee waiver form
• WHEN they activate the checkbox status ‘Do Not Waive Fee’
• THEN under 'Overall Analyst Recommendation' the analyst will see editable fields for 'Summarize Rationale', 'Value of Amount' and 'Amount to be waived'
• AND the field for 'Summarize Rationale' will be editable
• AND the fields for 'Value of Amount' and 'Amount to be waived' will NOT be editable
Scenario 4: Save Activation
• GIVEN I am on the fee waiver form
• WHEN I update any field
• THEN the save button will become active
Scenario 5: Save Form
• GIVEN the save button is active on the fee waiver form
• WHEN I click on the save button
• THEN a toast will appear confirming a successful save
• AND the changes I made to the form will persist
...
Dependencies? What is the impact of this dependency? (If so, link dependency in the ticket, make it visible in a team´s backlog)
Validation Rules? (If yes, list here)
Design
https://jacklyn808742.invisionapp.com/console/share/3W2O64AP5C/933929008
Definition of Ready
[ ] Is there a well articulated User Story?
[ ] Is there Acceptance Criteria that covers all scenarios (happy/sad paths)?
[ ] If there is a user interface, is there a design?
[ ] Does the user story need user research/validation?
[ ] Does this User Story needs stakeholder approval?
[ ] Design / Solution accepted by Product Owner
[ ] Is this user story small enough to be completed in a Sprint? Should it be split?
[ ] Are the dependencies known/ understood? (technical, business, regulatory/policy)
[ ] Has the story been estimated?
Definition of Done
[ ] Passes developer unit tests
[ ] Passes peer code review
[ ] If there's a user interface, passes UX assurance
[ ] Passes QA of Acceptance Criteria with verification in Dev and Test
[ ] Confirm Test cases built and succeeding
[ ] No regression test failures
[ ] Test coverage acceptable by Product Owner
[ ] Ticket ready to be merged to master or story branch
[ ] Developer to list Config changes/ Update documents and designs
[ ] Can be demoed in Sprint Review
[ ] Tagged as part of a Release
[ ] Feature flagged if required
[ ] Change Management activities done?
@m-prodan - ready for your review
As per discussion at standup, the two fields in the recommendation area will be dollar amount and percent. Changing the dollar amount will auto-update the percentage and vice versa. The dollar amount will be in increments of 0.01 and the percentage in increments of 1.
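The two-way sync described in that comment amounts to simple rounding arithmetic. A hedged sketch (the $150.00 total fee and the method names are hypothetical; the ticket only fixes the increments):

```java
// Illustration of the dollar/percent auto-update described above. Dollars
// round to 0.01 steps and percent to whole steps, per the standup note.
public class FeeWaiverSync {
    static double percentFromAmount(double amount, double totalFee) {
        return Math.round((amount / totalFee) * 100.0); // whole-percent steps
    }

    static double amountFromPercent(double percent, double totalFee) {
        return Math.round(totalFee * percent) / 100.0;  // 0.01-dollar steps
    }

    public static void main(String[] args) {
        double total = 150.00; // hypothetical total fee estimate
        System.out.println(percentFromAmount(75.00, total)); // 50.0
        System.out.println(amountFromPercent(50, total));    // 75.0
    }
}
```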
| gharchive/issue | 2022-07-26T21:50:08 | 2025-04-01T04:33:36.684177 | {
"authors": [
"KaraBeach",
"nkan-aot"
],
"repo": "bcgov/foi-flow",
"url": "https://github.com/bcgov/foi-flow/issues/2521",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
705799205 | Re-issue credentials using same connection
When using the admin interface of Issuer Kit, the connection information used to issue the first credential should be persisted so that future attempts to re-issue credentials can be performed without having to establish a new connection.
Caveats:
the user may have deleted the connection from their wallet: in this case, the issuer agent/controller will not be notified and will just need to keep track of a pre-defined timeout to create a new invitation
the user will still need to go through the issuer webapp to confirm/fill the credential attributes, this will not be a straight push of a credential to their wallet
For handling the connection(s) in the frontend (issuer-web), guidelines have been published here.
For "automatic" issuance after the first credential is issued and a connection is established, the system should store the connection id for future use and call the API endpoint to issue a credential directly, specifying the connection to be used.
| gharchive/issue | 2020-09-21T17:51:57 | 2025-04-01T04:33:36.688246 | {
"authors": [
"esune"
],
"repo": "bcgov/issuer-kit",
"url": "https://github.com/bcgov/issuer-kit/issues/231",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1531308791 | Add unit tests for add user to group
Issue #:
https://github.com/bcgov/met-public/issues/718
Description of changes:
Add resource unit test
Add jest for user management page
Add jest test to check rendering of add user to group modal
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of the met-public license (Apache 2.0).
Codecov Report
Merging #1099 (198249b) into main (f403f26) will increase coverage by 0.13%.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #1099 +/- ##
==========================================
+ Coverage 73.84% 73.97% +0.13%
==========================================
Files 234 234
Lines 6755 6755
Branches 464 464
==========================================
+ Hits 4988 4997 +9
+ Misses 1692 1683 -9
Partials 75 75
| Flag | Coverage Δ |
| --- | --- |
| metapi | 79.46% <ø> (+0.26%) :arrow_up: |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Impacted Files | Coverage Δ |
| --- | --- |
| met-api/src/met_api/services/user_service.py | 81.11% <0.00%> (+5.55%) :arrow_up: |
| met-api/src/met_api/resources/user.py | 85.18% <0.00%> (+7.40%) :arrow_up: |
| gharchive/pull-request | 2023-01-12T20:20:27 | 2025-04-01T04:33:36.700589 | {
"authors": [
"codecov-commenter",
"jadmsaadaot"
],
"repo": "bcgov/met-public",
"url": "https://github.com/bcgov/met-public/pull/1099",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1389740176 | ParcelMap and iMap integration
Describe the task
A technical task to understand how we can implement maps in our front end to display different layers (specifically related to EPD)
Acceptance Criteria
[ ] Document/research on map capabilities and how to integrate layers specific to land remediation
Additional context
BC Web Mapping Frameworks (https://bcgov.github.io/bcwebmaps-options/#backend-technology)
Leaning towards researching more on CWM (Common Web Mapping) or SMK (Simple Map Kit), considering the team's familiarity with the technologies utilized in these frameworks.
As per the architecture diagram shared by Chris Robinson, data from SITE's Oracle DB is loaded to BCGW (BC Geographic Warehouse) which in turn is utilized by iMap to render the contaminated sites layer
| gharchive/issue | 2022-09-28T17:51:29 | 2025-04-01T04:33:36.705365 | {
"authors": [
"nikhila-aot"
],
"repo": "bcgov/nr-epd-digital-services",
"url": "https://github.com/bcgov/nr-epd-digital-services/issues/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1889399516 | fix: #874 frontend improvement no try catch
Fix/Remove unnecessary try-catch blocks in components.
Organize some imports using better references.
Create a LoadingState store for central loading.
Trigger "startLoading", "stopLoading" and "stopLoadingOnError" in both the new request/response HttpInterceptors.
Add loadingLabel to the Button component and implement label switching on the Button while loading.
Looks good to me, thanks!!
| gharchive/pull-request | 2023-09-11T00:08:42 | 2025-04-01T04:33:36.707144 | {
"authors": [
"MCatherine1994",
"ianliuwk1019"
],
"repo": "bcgov/nr-forests-access-management",
"url": "https://github.com/bcgov/nr-forests-access-management/pull/875",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1269830598 | Input with strip=True will error when user presses the down arrow
I am working on MacOS. I have installed bullet 2.2.0 using pip.
When I running the following code, and press the down arrow,
from bullet import Input
cli = Input('Press down arrow: ', strip=True)
result = cli.launch()
I see error:
% python3 /tmp/a.py
Press down arrow: Traceback (most recent call last):
File "/tmp/a.py", line 3, in <module>
result = cli.launch()
File "/usr/local/lib/python3.9/site-packages/bullet/client.py", line 458, in launch
return result.strip() if self.strip else result
AttributeError: 'NoneType' object has no attribute 'strip'
%
This error does not happen when strip=False
That is because when you run the program it listens for input, and due to how bullet works, pressing the down arrow makes that input None.
It stores input data in variables, and the down and up arrows are reserved for bullet.Bullet.
| gharchive/issue | 2022-06-13T19:00:07 | 2025-04-01T04:33:36.741467 | {
"authors": [
"h4rldev",
"lxylxy123456"
],
"repo": "bchao1/bullet",
"url": "https://github.com/bchao1/bullet/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
133043716 | Form validation - min_lenght / max_lenght
Hi,
I'm currently upgrading my project from version 2.x to 3.x.
The login page (which uses form validations) suddenly behaves strangely by saying that there's a problem with the min/max lenght of my field.
Trying to isolate the problem, I finally made a new project by taking EXACTLY your example:
Truying to isolate the problem, I finally made a new project by taking EXACTLY your example:
http://www.codeigniter.com/user_guide/libraries/form_validation.html?highlight=form_validation#CI_Form_validation
Everything is working except when I add a validation rule on min_lenght and max_lenght.
$this->form_validation->set_rules('username', 'Username', 'trim|required|min_length[5]|max_length[12]');
Many thanks for your help,
Alex
<?php
class Form extends CI_Controller {

        public function index()
        {
                $this->load->helper(array('form', 'url'));
                $this->load->library('form_validation');

                $this->form_validation->set_rules('username', 'Username', 'required|min_lenght[3]');
                $this->form_validation->set_rules('password', 'Password', 'required',
                        array('required' => 'You must provide a %s.')
                );
                $this->form_validation->set_rules('passconf', 'Password Confirmation', 'required');
                $this->form_validation->set_rules('email', 'Email', 'required');

                if ($this->form_validation->run() == FALSE)
                {
                        $this->load->view('myform');
                }
                else
                {
                        $this->load->view('formsuccess');
                }
        }
}
The word "length" is written with a 'th', not 'ht' ...
Please note that this is a bug tracker and you should debug and verify that something is actually a bug before writing about it here. If you're seeking help with something, ask on our forums instead.
P.S.: Also, the Markdown syntax for links is [description text](http://address), you've flipped those when trying to link to the FV docs, just like you've done with the 'th'. :)
You're perfectly right, sorry for the mistake, it works...
Cheers
| gharchive/issue | 2016-02-11T18:03:03 | 2025-04-01T04:33:36.783861 | {
"authors": [
"alexstan57",
"narfbg"
],
"repo": "bcit-ci/CodeIgniter",
"url": "https://github.com/bcit-ci/CodeIgniter/issues/4451",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
770562087 | Error after try
Did you copy it correctly?
I just did a git clone of the source, and when I run it, that's what happens.
Delete one of those } at line 591.
| gharchive/issue | 2020-12-18T04:40:29 | 2025-04-01T04:33:36.838632 | {
"authors": [
"bdrsmsdn",
"nandaid"
],
"repo": "bdrsmsdn/lucya-bot",
"url": "https://github.com/bdrsmsdn/lucya-bot/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1088725106 | Checks if the class names are the same in Circular Detector
Let's say if there are two NativeClass classes:
@nativeClass() class A extends NativeClass { @nativeField(B) b:B; }
currently when u do console.log(a); it will show that the field b is circular but b actually isnt, cuz the content inside B differs from A.
(NetworkItemStackDescriptor is an example)
This commits fixes that by comparing the class names, now circular detector will only detect this:
@nativeClass() class C extends NativeClass { @nativeField(C) c:C; }
However I am not sure if this change is correct or not
Looks good to me
It seems really strange that NetworkItemStackDescriptor's first field is itself...
NetworkItemStackDescriptor's first field is a ItemDescriptor, and it contains a weak pointer of Item, the aux value and the Block though they are not added to bdsx cuz idk how to implement weak pointer
Actually CommandPosition was in the case, the first field should be a Vec3 called “offset” but due to the circular detection I split it to x, y, z respectively.
Why was that detected? Don't they have different native addresses?
The first field and the instance have the same address
Thanks for finding my mistakes.
about this commit, it can be an infinite loop if the second instance has a pointer that indicates itself.
Let me fix it as the map to use 2 keys.
| gharchive/pull-request | 2021-12-26T08:40:22 | 2025-04-01T04:33:36.842830 | {
"authors": [
"7dev7urandom",
"Rjlintkh",
"karikera"
],
"repo": "bdsx/bdsx",
"url": "https://github.com/bdsx/bdsx/pull/266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2104450173 | XCR Roadmap 4: Misc
XCR will be divided up into specific "roadmaps" for its progress since the platform is so vast. This roadmap covers misc changes to the platform, including design and other logic.
[x] Final pass on art design
[x] Change name from "Electron" to "XCR" in finished package
[x] Amend XCR logo to be in accordance with MacOS desktop icon standards
[x] Make sure app runs well
[x] Change styling on various elements
[x] Package for Apple Silicon Macs (ARM64)
[x] Package for Intel-based Macs (x86_64)
[x] Package for Windows (x64)
XCR 1.0.0-RELEASE will be available soon!
| gharchive/issue | 2024-01-29T01:09:23 | 2025-04-01T04:33:36.845759 | {
"authors": [
"beachweak"
],
"repo": "beachweak/XCR",
"url": "https://github.com/beachweak/XCR/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
269281909 | Add Windows binary & Update build instructions
I updated the readme to match https://beakerbrowser.com/docs/install/#building-from-source
thanks!
Thanks! :+1:
| gharchive/pull-request | 2017-10-28T04:14:47 | 2025-04-01T04:33:36.852834 | {
"authors": [
"pfrazee",
"whfeeds"
],
"repo": "beakerbrowser/beaker",
"url": "https://github.com/beakerbrowser/beaker/pull/730",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1518213011 | [Bug] List item type would not be check sometimes by function decorator @beartype
I would expect that all calls with a string-type item would fail, but sometimes it would be OK.
>>> from beartype import beartype
>>> @beartype
... def test(nums: list[int]) -> None:
... print(f'nums:{nums}')
...
>>> test([1, 2])
nums:[1, 2]
>>> test([1, 'b'])
nums:[1, 'b']
>>> test([1, 'b'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<@beartype(__main__.test) at 0x7fed749b0d30>", line 32, in test
beartype.roar.BeartypeCallHintParamViolation: @beartyped __main__.test() parameter nums=[1, 'b'] violates type hint list[int], as list index 1 item str 'b' not instance of int.
>>> test([1, 'b'])
nums:[1, 'b']
version:
$ pip show beartype
Name: beartype
Version: 0.11.0
Summary: Unbearably fast runtime type checking in pure Python.
Home-page: https://github.com/beartype/beartype
Author: Cecil Curry, et al.
Author-email: leycec@gmail.com
License: MIT
Location: /home/xxxxxx/.pyenv/versions/3.10.1/lib/python3.10/site-packages
Requires:
Required-by:
Heh. Surprisingly, @beartype is working exactly as intended here. You are now thinking to yourself: "@beartype, you are dumb and I will never use you."
Let us explicate. @beartype guarantees constant-time (i.e., O(1)) runtime behaviour. In fact, @beartype basically guarantees that each call to a @beartype-decorated callable (like the test() function in your example) takes no more than 10µs (10 microseconds = 10-6 seconds). That's basically instantaneous. Moreover, that's the entire reason for @beartype's existence: it's so fast than you never want to disable it.
But there's a price for speed. How does @beartype type-check an arbitrarily large list in constant-time? It doesn't, because that's impossible. Instead, @beartype type-checks only random items of parameters passed to and returns returned from @beartype-decorated callables.
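The nondeterministic pass/fail behaviour in the report above can be sketched with a toy spot-checker (illustrative only; this is not beartype's actual machinery, just the idea of checking one random item per call):

```python
import random

def spot_check(lst, typ):
    # Inspect ONE randomly chosen item, mimicking O(1) spot-checking.
    # A heterogeneous list therefore passes or fails depending on luck.
    return isinstance(random.choice(lst), typ) if lst else True
```

With [1, 'b'], repeated calls flip between True and False, which is exactly the intermittent violation shown in the issue.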
Usually, this is fine. Most lists, for example, are homogeneously constructed with a list comprehension or generator effectively guaranteeing uniform types for all items of that list (e.g., test(range(-5, 50, 3))).
Sometimes, this is not fine. For those times, a future version of @beartype will add support for the more traditional linear-time (i.e., O(n)) type-checking that you are expecting. Until then, @beartype supports "full-fat" O(n) type-checking via @beartype validators:
# Import the requisite machinery.
from beartype import beartype
from beartype.vale import Is
from typing import Annotated # <--------------- if Python ≥ 3.9.0
#from typing_extensions import Annotated # <--- if Python < 3.9.0
# Type hint matching all integers in a list of integers in O(n) time. Please
# never do this. You now want to, don't you? Why? You know the price! Why?!?
IntList = Annotated[list[int], Is[lambda lst: all(
isinstance(item, int) for item in lst)]]
# Type-check all integers in a list of integers in O(n) time. How could you?
@beartype
def sum_intlist(my_list: IntList) -> int:
'''
The slowest possible integer summation over the passed list of integers.
There goes your whole data science pipeline. Yikes! So much cringe.
'''
return sum(my_list) # oh, gods what have you done
See also this open issue currently tracking O(n) support. And thanks so much for your interest in @beartype! May 2023 be a year of profound prosperity and joy for you and yours. :tada:
Thanks @leycec for explaining! I'm looking forward to O(n) support!
Oh, you're most welcome. 2023 promises to be the Year of the Beartype – complete with O(n) support and a likely hybrid mode called O(smarty pants) that dynamically switches between O(n), O(log n), and O(1) type-checking depending on the size of the container to be type-checked.
Thanks again for being so accommodating and patient, @yioda. Much :smiling_face_with_three_hearts: !
| gharchive/issue | 2023-01-04T02:56:33 | 2025-04-01T04:33:36.869944 | {
"authors": [
"leycec",
"yioda"
],
"repo": "beartype/beartype",
"url": "https://github.com/beartype/beartype/issues/202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1192726454 | LESS mixin unexpected code formatting
Description
This is moved over from https://github.com/microsoft/vscode/issues/146693
Using VSCode 1.66.0 and the new built-in LESS code formatting feature less.format.enable implemented by the JS Beautify library.
When using a LESS mixin that accepts CSS code rules as a parameter (this is used for media query mixins), the formatting when saved is a little unexpected. The formatting is consistent, but it is unusal.
Input & Expected Output
.example(@rules) {
@rules();
}
.test {
.example({
color:red;
});
}
Actual Output
.example(@rules) {
@rules();
}
.test {
.example( {
color:red;
}
);
}
More Information
Similar issue reported a while back js-beautify issue 722
I've noticed that if you add the following to VSCode's settings.json file, the results are slightly better:
"less.format.newlineBetweenRules": false
This gives you formatting like:
.example(@rules) {
@rules();
}
.test {
.example( {
color:red;
}
);
}
I was asked to move the issue over here, so hope it's in the right place now.
Environment
OS: macOS 12.3.1 VSCode 1.66.0 which bundles js-beautify
I have been trying to track down the version of js-beautify, but I cannot seem to find it within VSCode, so if anyone can advise how I can report this that would be great.
Settings
Defined by VSCode
1.66.0 uses jsbeautify 1.14.0
@aeschli
Thanks for the update. That means this is definitely still valid.
@aeschli
Are you seeing a lot of issue reported?
I only see 5 open in this project currently: https://github.com/beautify-web/js-beautify/issues?q=is%3Aissue+is%3Aopen+less+OR+scss+label%3A"language%3A+templating"+
As I said this issue probably wouldn't be super hard to fix, I just don't have the time myself.
As to comfort, it is really up to you. When I go searching for an alternative to this package all I get is a bunch of garbage knock-off sites that are almost surely using this package internally. Without a better alternative, maybe someone will be inspired to help make this one better.
@bitwiseman Yes, so far it's only a few issues that don't seem to be complicated to fix.
@aeschli @bitwiseman I have added a PR for a possible solution to the spacing problems mentioned in this ticket and #772.
| gharchive/issue | 2022-04-05T07:13:48 | 2025-04-01T04:33:36.885245 | {
"authors": [
"aeschli",
"bitwiseman",
"mhnaeem",
"richmilns"
],
"repo": "beautify-web/js-beautify",
"url": "https://github.com/beautify-web/js-beautify/issues/2016",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
571065162 | How I change search mode to ignore manga
Hi !
I tried this program; it works correctly, but I just want to download ranking-list illustrations, because there are too many manga.
So how can I change it to achieve that?
Okay, I’ll look into this later.
change this line to
download_illustrations(user, data_list, save_path, add_rank=True, skip_manga=True) should work
https://github.com/bebound/pixiv/blob/a08f8502873d5fc456111f3f3436f3273d846e9a/pixiv.py#L241
skip_manga=True
Hi, I tried this argument; there is no error, but it is still not working.
download_illustrations(user, data_list, save_path, add_rank=True, skip_manga=True)
I made a mistake in my previous commit, I fix it now.
If you use it correctly, it should raise an error...
I updated pixiv.py, but it's still not working. I checked model.py and pixiv.py and found
if skip_manga:
illustrations = list(filter(lambda x: not x.is_manga, illustrations))
this statement can't filter correctly, so what am I supposed to do?
I got it. I used download-by-user to test the skip_manga function and it works. But in the ranking list, the is_manga field is always None.
The filter should work for ranking list now.
data in ranking list
{'rank': 6, 'previous_rank': 15, 'work': {'id': 79842982, 'title': '喫茶店にいるカップルの話。', 'caption': None, 'tags': ['漫画', '創作', 'オリジナル', '創作男女', 'イケメンがついたおっぱい', 'リブ生地'], 'tools': None, 'image_urls': {'px_128x128': 'https://i.pximg.net/c/128x128/img-master/img/2020/03/02/12/10/09/79842982_p0_square1200.jpg', 'px_480mw': 'https://i.pximg.net/c/480x960/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'small': 'https://i.pximg.net/c/150x150/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'medium': 'https://i.pximg.net/c/600x600/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'large': 'https://i.pximg.net/img-original/img/2020/03/02/12/10/09/79842982_p0.jpg'}, 'width': 914, 'height': 1280, 'stats': {'scored_count': 1531, 'score': 15310, 'views_count': 73386, 'favorited_count': {'public': None, 'private': None}, 'commented_count': None}, 'publicity': 0, 'age_limit': 'all-age', 'created_time': '2020-03-02 12:10:00', 'reuploaded_time': '2020-03-02 12:10:09', 'user': {'id': 25533, 'account': '51039ra3', 'name': 'さいそう。@斎創', 'is_following': None, 'is_follower': None, 'is_friend': None, 'is_premium': None, 'profile_image_urls': {'px_170x170': 'https://i.pximg.net/user-profile/img/2017/11/17/16/09/25/13465685_435dafc32e7ab05312ba02525cfde81b_170.jpg', 'px_50x50': 'https://i.pximg.net/user-profile/img/2017/11/17/16/09/25/13465685_435dafc32e7ab05312ba02525cfde81b_50.jpg'}, 'stats': None, 'profile': None}, 'is_manga': None, 'is_liked': None, 'favorite_id': None, 'page_count': 4, 'book_style': 'none', 'type': 'manga', 'metadata': None, 'content_type': None, 'sanity_level': 'white'}},
data in download by user
{'id': 79842982, 'title': '喫茶店にいるカップルの話。', 'caption': '【https://www.pixiv.net/artworks/78004065】のキャラの、付き合って大学生の番外編です。\r\n高校生時代の話は単行本になります!→https://www.amazon.co.jp/dp/4065189004/ref=cm_sw_r_tw_dp_U_x_wJ6uEb6PPFZG9\r\n\r\nピクシブコミックスさんにも1話乗せて頂いてます。\r\n(冒頭数話分は更新される予定です。)\r\n https://comic.pixiv.net/works/6413', 'tags': ['漫画', '創作', 'オリジナル', '創作男女', 'イケメンがついたおっぱい', 'リブ生地'], 'tools': ['Photoshop'], 'image_urls': {'px_128x128': 'https://i.pximg.net/c/128x128/img-master/img/2020/03/02/12/10/09/79842982_p0_square1200.jpg', 'px_480mw': 'https://i.pximg.net/c/480x960/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'small': 'https://i.pximg.net/c/150x150/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'medium': 'https://i.pximg.net/c/600x600/img-master/img/2020/03/02/12/10/09/79842982_p0_master1200.jpg', 'large': 'https://i.pximg.net/img-original/img/2020/03/02/12/10/09/79842982_p0.jpg'}, 'width': 914, 'height': 1280, 'stats': {'scored_count': 5434, 'score': 54340, 'views_count': 89839, 'favorited_count': {'public': 5060, 'private': 74}, 'commented_count': 32}, 'publicity': 0, 'age_limit': 'all-age', 'created_time': '2020-03-02 12:10:09', 'reuploaded_time': '2020-03-02 12:10:09', 'user': {'id': 25533, 'account': '51039ra3', 'name': 'さいそう。@斎創', 'is_following': False, 'is_follower': False, 'is_friend': False, 'is_premium': None, 'profile_image_urls': {'px_50x50': 'https://i.pximg.net/user-profile/img/2017/11/17/16/09/25/13465685_435dafc32e7ab05312ba02525cfde81b_50.jpg'}, 'stats': None, 'profile': None}, 'is_manga': True, 'is_liked': False, 'favorite_id': 0, 'page_count': 4, 'book_style': 'none', 'type': 'manga', 'metadata': None, 'content_type': None, 'sanity_level': 'white'}
Hi, I just want to download illustrations, so I checked the pixiv website and found the param page_count. It records how many illustrations are included. I added a filter to filter out page_count != 1. Now the program works properly.
Thank you for providing this program.
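That page-count workaround might look something like this (a hypothetical sketch; keep_single_page is a made-up name, and the page_count field mirrors the API data shown above):

```python
def keep_single_page(works):
    # Keep only single-page works; multi-page posts (page_count != 1)
    # are treated as manga and skipped.
    return [w for w in works if w.get('page_count') == 1]
```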
| gharchive/issue | 2020-02-26T04:42:11 | 2025-04-01T04:33:36.900312 | {
"authors": [
"Anthorty",
"bebound"
],
"repo": "bebound/pixiv",
"url": "https://github.com/bebound/pixiv/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
124390006 | sorting documentation incorrect
for some reason sorting stopped working for me.
in the sort documentation in https://github.com/bebraw/reactabular/blob/master/docs/sorting_table.md
one should be able to create a table like:
<Table columns={columns} data={paginated.data} header={header} />
where header is
header: {
onClick: (column) => {
sortColumn(
this.state.columns,
column,
this.setState.bind(this)
);
},
}
what I have noticed is that the onClick function never gets called. also I wasn't able to find in the source where props.header is ever used: https://github.com/bebraw/reactabular/search?utf8=✓&q=.header
Am I missing something?
onClick should get mapped to a prop.
Can you try setting up a demo for me to study? You can for instance fork this repo and tweak the official demo. Whatever is the easiest for you.
@bebraw in the demo, we are using the prop 'columnNames' instead of 'header':
https://github.com/goldensunliu/reactabular/blob/master/demos/full_table.jsx#L286
If instead of columnNames={this.columnFilters} we replace it with header={this.state.header}, nothing seems to work.
looks like the doc in https://github.com/bebraw/reactabular/blob/master/docs/sorting_table.md
should be
<Table columns={columns} data={paginated.data} columnNames={header} />
correct me if I am wrong
@goldensunliu Yeah. Want to do a PR to fix that?
According to this change: https://github.com/bebraw/reactabular/commit/19b3501aff0f0cd269144f3d97f7422195309be3
Is the intention to change header to columnNames? I.e., should we change the documentation to reflect that?
@goldensunliu Yup. The documentation needs a fix.
@bebraw PR is ready to go, dawg.
Thanks!
| gharchive/issue | 2015-12-30T19:27:55 | 2025-04-01T04:33:36.907733 | {
"authors": [
"bebraw",
"goldensunliu"
],
"repo": "bebraw/reactabular",
"url": "https://github.com/bebraw/reactabular/issues/120",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1516450183 | Add forms implementation with SDC
Implement forms with FHIR SDC.
No need to invent forms, since we can take https://github.com/Aidbox/sdc-forms-library as an example. It has several forms implemented with Aidbox Zen SDC. Convert them to FHIR SDC.
https://github.com/HealthSamurai/aidbox-zen-sdc
| gharchive/issue | 2023-01-02T14:20:13 | 2025-04-01T04:33:36.911905 | {
"authors": [
"dymio"
],
"repo": "beda-software/fhir-emr",
"url": "https://github.com/beda-software/fhir-emr/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
866829262 | unlocalized strings in UI
User @52x13 in Beefy's Discord has correctly pointed out that a number of strings in the UI remain unlocalized. These include "vote" and "barn" in the menu and most if not all of the dashboard (statistics) screen.
Cool for me to address as a further foray into the codebase?
@PCRyan Absolutely yes
| gharchive/issue | 2021-04-24T20:17:09 | 2025-04-01T04:33:36.940073 | {
"authors": [
"PCRyan",
"roman-monk"
],
"repo": "beefyfinance/beefy-app",
"url": "https://github.com/beefyfinance/beefy-app/issues/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1426226954 | #93 - Further refine results output formatting
Closes #93
This PR builds on the previous work for displaying nicely formatted JSON results and table output where available. It has two main goals:
To have the /tasks/{id}/result/ API show the results as the correct type, rather than always being stringified JSON. Objects now return as objects, numbers as numbers, etc.
To no longer require explicit declaration of the output format up front on the function definition. It was previously the case that when viewing the results you would be shown only the raw JSON or a table, based on what output format had been declared. Now you can toggle between both formats when appropriate, like this:
JSON Raw Output
JSON Table Output
Other Notes
I removed the output_format field from the Function and Task model. It turned out we can do everything we want to without it.
I also added a permissions check to the unicorn view, as it was possible for data to be accessed through the unicorn endpoint by any logged in user before, regardless of their actual permissions.
Testing Instructions
Both the python and javascript templates had to be updated to support these changes. Be sure to run ./build.sh -p from the templates folder to get the new versions of the templates in your registry.
I added a simple "demo" example package that has useful functions for testing the UI rendering of JSON and text (including CSV formatted text). Publish that package to an environment (after publishing the new templates) to ease your testing.
Task the output_json and output_text functions with data that can and cannot be rendered as a table. The descriptions of each function explain what should generate table output.
Retrieve the task results via the API as well and verify that the data type of the result property is what you expect based on the return type of the function.
All the changes look good. My only complaint, since you're refining the output, is that the delay function "has an output" but it's empty and it looks like the output is missing. Can you put a message when the output is blank?
Added a message indicating that there is no result for a task. I had to update the package templates again, so if you go to test it, make sure you rebuild the templates and then republish the package.
| gharchive/pull-request | 2022-10-27T20:26:23 | 2025-04-01T04:33:36.961454 | {
"authors": [
"scott-taubman"
],
"repo": "beer-garden/functionary",
"url": "https://github.com/beer-garden/functionary/pull/122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1812606008 | can't remove leader by method of RaftHandle's removeServer
I want to manually transfer the leader at some point.
Thus, I invoke the removeServer method of RaftHandle on the leader node to remove itself, but it seems not to work.
How can I remove the current leader or transfer leadership in my code? Thanks.
RaftHandle.removeServer() is only for changing the (static) membership of a cluster.
You cannot transfer the leader role to any given member, as this might violate the Raft properties, i.e. the member with the longest log and the majority of votes becomes leader.
If you, for example, transferred leadership to a member who doesn't have the longest log, you will lose committed log entries, and this violates the Raft protocol.
--
Bela Ban | http://www.jgroups.org
Yes, I know, we can't violate the raft protocol.
But we have some special requirements: we have our own health-check logic in each node.
If the current leader node is not healthy, we want to force it to give up its leader role and transfer leadership to another node. So we have two possible ways:
1. Support a transfer-leader command, and treat it as a type of raft event log to add to the state machine, so I can guarantee consistency
2. Remove the current leader node from the cluster, and add it back to the cluster as a follower after the new leader starts
If we can't support way 1, can we support way 2?
Thanks
Approach 1 sounds like creating an external election algorithm. So I highly suggest 2.
The current ELECTION protocol uses the JGroups views to trigger the election mechanism. To trigger the election, the current RAFT leader disconnects from the cluster and connects back. This should cause a view change, and an election round starts. You could use the JChannel#disconnect and JChannel#connect(String) methods. The channel is on the RaftHandle#channel method. You wouldn't need to change the RAFT membership with this. But I suspect that depending on the timing, the same leader could be re-elected.
Thank you, let me try disconnecting and connecting again. By the way, has this way been tried before? I remember it may throw a "channel close" exception after disconnecting and reconnecting again.
If you disconnect, then wait a few milliseconds before reconnecting, another leader should have been elected (if you still have a majority). If you then reconnect, the old leader should not become leader.
Just call JChannel.disconnect(), not JChannel.close(). In the first case, JChannel.connect() will succeed, in the latter it will fail with a ChannelClosedException
It seems there is still a problem. After disconnecting and reconnecting again, the following exception occurs. Is my invocation at fault?
this.raftHandle.channel().disconnect();
Thread.sleep(1000l);
this.raftHandle.channel().connect(this.raftClusterName);
org.iq80.leveldb.DBException: Closed
at org.fusesource.leveldbjni.internal.JniDB.iterator(JniDB.java:100) ~[leveldbjni-all-1.8.jar:1.8]
at org.fusesource.leveldbjni.internal.JniDB.iterator(JniDB.java:95) ~[leveldbjni-all-1.8.jar:1.8]
at org.jgroups.protocols.raft.LevelDBLog.sizeInBytes(LevelDBLog.java:217) ~[jgroups-raft-1.0.11.Final.jar:?]
at org.jgroups.raft.util.LogCache.sizeInBytes(LogCache.java:206) ~[jgroups-raft-1.0.11.Final.jar:?]
at org.jgroups.protocols.raft.RAFT.logSizeInBytes(RAFT.java:378) ~[jgroups-raft-1.0.11.Final.jar:?]
at org.jgroups.protocols.raft.RAFT.start(RAFT.java:570) ~[jgroups-raft-1.0.11.Final.jar:?]
at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:890) ~[jgroups-5.2.14.Final.jar:5.2.14.Final]
at org.jgroups.JChannel.startStack(JChannel.java:919) ~[jgroups-5.2.14.Final.jar:5.2.14.Final]
at org.jgroups.JChannel._preConnect(JChannel.java:797) ~[jgroups-5.2.14.Final.jar:5.2.14.Final]
at org.jgroups.JChannel.connect(JChannel.java:322) ~[jgroups-5.2.14.Final.jar:5.2.14.Final]
at org.jgroups.JChannel.connect(JChannel.java:316) ~[jgroups-5.2.14.Final.jar:5.2.14.Final]
Try something like:
this.raftHandle.channel().disconnect();
Thread.sleep(1000l);
this.raftHandle.raft().log(null);
this.raftHandle.channel().connect(this.raftClusterName);
This should cause LevelDB to reinitialize after connecting again.
this.raftHandle.raft().log(null); // <--- new line
cool, it works now, thanks
| gharchive/issue | 2023-07-19T19:32:27 | 2025-04-01T04:33:37.022651 | {
"authors": [
"belaban",
"jabolina",
"jackjoesh"
],
"repo": "belaban/jgroups-raft",
"url": "https://github.com/belaban/jgroups-raft/issues/212",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1580935304 | feat: add exception material
Description
Checklist:
General:
[ ] I added a new algorithm.
[ ] I fixed an existing algorithm.
[x] I added documentation.
[ ] I fixed existing documentation.
Contributor Requirements and Miscellaneous:
[x] I have read CONTRIBUTING and agree to all of its terms.
[x] I have added code comments that explain the intent of the code I wrote.
[x] I use Indonesian to explain the code I wrote.
Environment
I'm using:
OS = Windows
Java = 11
Link Issues
Issues : #
LGTM. But could you remove the bin files that are in the repository? Thank you!
LGTM, thanks for the contribution! @fhasnur
you're welcome :)
| gharchive/pull-request | 2023-02-11T17:12:13 | 2025-04-01T04:33:37.031391 | {
"authors": [
"fhasnur",
"random-prog"
],
"repo": "bellshade/Java",
"url": "https://github.com/bellshade/Java/pull/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
102679355 | Deprecated APIs
I get this every time I use cssnano with PostCSS:
Container#eachInside is deprecated. Use Container#walk instead.
Node#between is deprecated. Use Node#raws.between
Rule#_selector is deprecated. Use Rule#raws.selector
Node#_value was deprecated. Use Node#raws.value
Node#_important was deprecated. Use Node#raws.important
Container#eachAtRule is deprecated. Use Container#walkAtRules instead.
Container#eachRule is deprecated. Use Container#walkRules instead.
Container#eachDecl is deprecated. Use Container#walkDecls instead.
Node#before is deprecated. Use Node#raws.before
Node#after is deprecated. Use Node#raws.after
Node#semicolon is deprecated. Use Node#raws.semicolon
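For context, warnings like these typically come from a thin deprecation shim inside the library: the old method logs a warning and delegates to the new API, so existing callers keep working while plugins migrate. A minimal, self-contained sketch of that pattern (a toy class for illustration, not PostCSS's actual implementation):

```javascript
// A toy Container (not PostCSS itself) showing the deprecation pattern behind
// these warnings: the old method logs a warning and delegates to the new API.
class Container {
  constructor(nodes) { this.nodes = nodes; }

  // new-style API
  walkDecls(cb) {
    for (const node of this.nodes) {
      if (node.type === 'decl') cb(node);
    }
  }

  // deprecated alias kept for backwards compatibility
  eachDecl(cb) {
    console.warn('Container#eachDecl is deprecated. Use Container#walkDecls instead.');
    return this.walkDecls(cb);
  }
}

const container = new Container([{ type: 'decl', prop: 'color' }, { type: 'rule' }]);
const props = [];
container.eachDecl(decl => props.push(decl.prop));
console.log(props); // prints [ 'color' ]
```

The result is unchanged either way; only the warning differs, which is why the output below is not affected.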
@steelbrain It's just warnings and they do not influence on result. We'll update it as soon as it possible.
PostCSS 5.x is not officially supported until cssnano 3.x. If you don't want to see these warnings then use it with a PostCSS 4.x runner (or use one of the dedicated tools such as gulp-cssnano).
Fixed since https://github.com/ben-eb/cssnano/commit/37a36c2c0db0deb082e7522fe90747fdbfc9545f.
| gharchive/issue | 2015-08-24T01:05:53 | 2025-04-01T04:33:37.042936 | {
"authors": [
"TrySound",
"ben-eb",
"steelbrain"
],
"repo": "ben-eb/cssnano",
"url": "https://github.com/ben-eb/cssnano/issues/50",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
135689320 | Poor optimizations results
gulp-minify-css says it's deprecated, and points to gulp-css-nano as its successor.
However, why does gulp-css-nano (2.1.1) give me so much weaker results?
I sought for help with a very short, isolated „testcase“ on stackoverflow:
http://stackoverflow.com/questions/35574050/why-is-css-nano-so-much-weaker-than-deprecated-minify-css
Is this 'early stage' on some common things? Nano modules that need to be explicitly enabled?
Or a bug in the gulp-wrapper? (I am wildly guessing. sorry.)
All minifiers have different levels of support for different optimisations, so output will vary depending on which engine you use and what styles you have. In this specific case, clean-css does rule restructuring to merge duplicated selectors, something that I want to look into but haven't had the chance yet.
Of course, you are free to use clean-css instead if you find it produces a better output.
https://www.npmjs.com/package/gulp-clean-css
Thank you for your answer!
(I was honestly wondering, if I got something wrong. No criticism disguised as a bug.)
| gharchive/issue | 2016-02-23T09:55:56 | 2025-04-01T04:33:37.047188 | {
"authors": [
"ben-eb",
"fnocke"
],
"repo": "ben-eb/gulp-cssnano",
"url": "https://github.com/ben-eb/gulp-cssnano/issues/39",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
39953759 | Fields with optional values
It would be very useful to have optional values for fields. In most real applications some fields are not set (i.e. if the email is not required then some users will leave it blank). For testing purposes it's always a good thing to reflect the real world.
This can be solved by saying that if a Randomizer returns null then it means it does not want to set the field. Then it's easy to add an OptionalRandomizer (with a percent) which according to that percent returns null or a random value returned by a delegate.
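That suggestion could be sketched roughly like this (the `Randomizer` interface here is simplified and `OptionalRandomizer` is hypothetical, for illustration only, not jPopulator's actual API): with a given percent chance the delegate's value is used, otherwise null is returned to mean "leave the field unset".

```java
import java.util.Random;

// Hypothetical sketch of the suggested OptionalRandomizer (interface and class
// shapes are assumptions for illustration, not jPopulator's actual API).
interface Randomizer<T> {
    T getRandomValue();
}

class OptionalRandomizer<T> implements Randomizer<T> {
    private final Randomizer<T> delegate;
    private final int percent; // chance (0-100) that a value is produced
    private final Random random;

    OptionalRandomizer(Randomizer<T> delegate, int percent, long seed) {
        this.delegate = delegate;
        this.percent = percent;
        this.random = new Random(seed);
    }

    @Override
    public T getRandomValue() {
        // null means "leave this field unset"
        return random.nextInt(100) < percent ? delegate.getRandomValue() : null;
    }
}

class OptionalRandomizerDemo {
    public static void main(String[] args) {
        Randomizer<String> email = () -> "user@example.com";
        Randomizer<String> sometimesEmail = new OptionalRandomizer<>(email, 0, 42L);
        System.out.println(sometimesEmail.getRandomValue()); // prints null (percent = 0)
    }
}
```

Note the tension discussed below: if null is overloaded to mean "skip this field", a randomizer that legitimately wants to set null needs some other signal.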
While working on #32, I have an issue with this behavior. I have a randomizer that MUST return null, and then the null value must be set on the bean. It is for the @Null validation implementation.
I think the best solution is to throw an exception when the user really need to skip this value generation (I suggest DefaultValueException for the name).
| gharchive/issue | 2014-08-11T12:35:23 | 2025-04-01T04:33:37.055962 | {
"authors": [
"Toilal",
"eric-taix"
],
"repo": "benas/jPopulator",
"url": "https://github.com/benas/jPopulator/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
830478492 | Button labels should not be capitalized using CSS
Describe the bug
Currently, all button labels are capitalized by using the text-transform CSS property with a value of capitalize. This property automatically converts text to Title Case. However, while Title Case is valid in English, it isn't necessarily in other languages.
For example, Create Room is valid in English but Créer Une Salle is not in French as it should instead be Créer une salle.
Expected behavior
I think that text capitalization - and text formatting in general - should be done within the translation files themselves.
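As a sketch of what that could look like (the keys and locale strings here are illustrative, not DogeHouse's actual translation files), each locale ships its label already cased correctly, and the UI just looks it up:

```javascript
// Each locale stores its label already in the correct case for that language,
// so no CSS text-transform is needed. (Hypothetical keys/strings for illustration.)
const translations = {
  en: { createRoom: 'Create Room' },     // Title Case is valid in English
  fr: { createRoom: 'Créer une salle' }, // French capitalizes only the first word
  sr: { createRoom: 'Направите собу' },  // likewise for Serbian
};

function t(locale, key) {
  const table = translations[locale] || translations.en;
  return table[key] !== undefined ? table[key] : translations.en[key];
}

console.log(t('fr', 'createRoom')); // prints Créer une salle
console.log(t('de', 'createRoom')); // prints Create Room
```

Since the DOM then contains the final text, screen readers also see exactly what sighted users see, with no CSS transformation involved.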
One way I can see this being solved is creating a translation class kind of like Android or Minecraft have, in which you call a function to get the translated text based on language; otherwise a lot will need to be refactored for each individual language. Sadly, I lack the knowledge to implement something like this, but someone smarter will probably read this and know how to implement it. Good bug report!
I disagree. The Text-Capitalization property exists for a reason. A11y recommends using it because screen-readers scream at you if you don't. Even worse: If they don't recognize the word, they may read it letter by letter (e.g B U T T O N)
Firstly, why would screen readers read a word letter by letter if it's properly capitalized in the translation file? Furthermore, as of speaking, not a single string from all translation files is all uppercase, so a transition wouldn't be a potential problem for screen readers.
Secondly, as I wrote in the issue's description, Title Case simply is invalid in some languages (e.g. French). If the website wants to reach a broader international audience then issues like this one will have to get fixed, especially considering that others have pulled them off while still offering great accessibility.
Firstly, why would screen readers read a word letter by letter if it's properly capitalized in the translation file?
Because some work that way.
Furthermore, as of speaking, not a single string from all translation files is all uppercase, so a transition wouldn't be a potential problem for screen readers.
I didn't realize this was the case. It should be fine then.
I'm not sure how other projects solve this issue, I wanted to make sure people knew about it. There are more examples here. Screenreaders read the text in the DOM, regardless of how text is represented via css
This is valid for Serbian 🇷🇸 too.
It should be Направите собу, not Направите Собу.
Also, German words can change meaning just by capitalization.
| gharchive/issue | 2021-03-12T21:19:26 | 2025-04-01T04:33:37.063625 | {
"authors": [
"daloes",
"juliankrieger",
"milansusnjar",
"xslendix",
"younesaassila"
],
"repo": "benawad/dogehouse",
"url": "https://github.com/benawad/dogehouse/issues/775",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
462082469 | Corsair Crystal 570X RGB
Hi,
I would like to know if your software is able to control/manage the LEDs (SP120 RGB fans) on a Corsair Crystal 570X RGB tower.
Thanks in advance for your answer.
@benburkhart1
Having had no response from you, I guess the controller of my SP120 RGB fans is not the Lighting Node Pro, so I am closing this issue.
Just for people wondering (like me) what they have to manage the fans in their Corsair case: this is the RGB Fan LED Hub, which you can connect to the Lighting Node Pro or, better, to the Commander Pro.
| gharchive/issue | 2019-06-28T15:13:12 | 2025-04-01T04:33:37.068555 | {
"authors": [
"olielvewen"
],
"repo": "benburkhart1/lighting-node-pro",
"url": "https://github.com/benburkhart1/lighting-node-pro/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
912887995 | Revamp configuration for bench-routes as per new design.
The main features of the configparser module are:
This module parses the new config file structure.
It has a Load() function to load the data from config file to in-memory data.
It also has a Validate() function to validate the config file properties.
It has an AddAPI() function to add API to the config file.
It also has unit test to test the loading of the config file.
@Tushar3099, can you please mention the new config design part from the design doc in the PR description?
@Harkishen-Singh @aquibbaig I have made the suggested changes. Please review.
| gharchive/pull-request | 2021-06-06T16:53:47 | 2025-04-01T04:33:37.071280 | {
"authors": [
"Harkishen-Singh",
"Tushar3099"
],
"repo": "bench-routes/bench-routes",
"url": "https://github.com/bench-routes/bench-routes/pull/494",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2317688040 | 🛑 tilde news is down
In 081cd1f, tilde news (https://tilde.news) was down:
HTTP code: 500
Response time: 8022 ms
Resolved: tilde news is back up in a46fb24 after 12 minutes.
| gharchive/issue | 2024-05-26T12:40:23 | 2025-04-01T04:33:37.080911 | {
"authors": [
"benharri"
],
"repo": "benharri/upptime",
"url": "https://github.com/benharri/upptime/issues/395",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
834217978 | Fix optimized.go inconsistency
Hi! I don't know if its just me, but I am getting inconsistent outputs every time I run it.
jiphthahel 9
jip 1
h 1
sait 1
t 1
hthahel 1
This version is a little bit slower, but works. And it is pretty easy to understand and maintain if I may say (more than the simple version actually :sweat_smile:).
I do like this refactoring -- nice! However, I'd rather not include this change as I want the Go versions to stand pretty much as-is given they're quoted an discussed in the article.
I'm more than willing to fix bugs in it without refactoring, though I can't reproduce what you're seeing. Closing for now, but please re-open if you have a case where you can reproduce the issue you mentioned with the current optimized.go version.
Oh yeah, that is fine, it makes sense. Let me see if I can find a way to reproduce it.
So I just run this:
$ for _ in $(seq 10); do
    cat kjvbible_x10.txt |
    go run optimized.go |
    python3 normalize.py |
    tail
    echo
    sleep 1
done
And I get this:
Output:
zorobabel 10
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
t 1
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
omen 1
w 1
zorites. 10
zorobabel 10
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
zorites. 10
zorobabel 10
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
aft 1
bo 1
dy, 1
manas 1
seh, 1
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
p 1
wee 1
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
e 1
ther 1
zorites. 10
zorobabel 10
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
zuzims 10
dr 1
fo 1
r 1
righ 1
teousness 1
tho 1
u 1
ut, 1
y, 1
zorobabel 10
zorobabel, 10
zorobabel; 10
zuar 10
zuar, 10
zur; 10
zuriel 10
zurishaddai, 10
zuzims 10
: 1
I am running Alpine Edge, Go 1.16. I will try on a Debian container and let you know.
Wow, I see it sometimes now. So weird! I've eliminated the python3 normalize script and the tail, and can still get it to happen sometimes. So it must be a bug in the Go code. I'll investigate further. Shows the danger of optimizing without extensive testing. Thanks!
I can reproduce this on a Debian Bullseye container (with Linux 5.10.18, from my Alpine setup), but not on a Debian Stretch server (with Linux 4.9.0). I'm not sure if it might be some breaking change in the kernel itself.
Yeah, this is really weird to me -- on my machine it's very inconsistent, and only happens 1 in every 10 or 20 times. That indicates it's a race condition (but it's simple linear, non-concurrent code) or an undefined memory issue (but Go doesn't usually have those). I'll dig in a bit further later, but let me know if you have any ideas in the meantime.
For sure, and thanks for your time! :smile:
So I just removed all the unnecessary state management and it is consistent now. It is pretty much the same as the changes in this PR but without the WordCounter abstraction, which allows cleaning the buffer in place and has the same performance as your code. Let me know if you want another PR.
As a note, the WordCounter abstraction could have the same performance by implementing the io.ReaderFrom interface.
I believe the inconsistency / bug is due to the fact that I'm not handling partial reads when there's no LF in the bytes read. I had assumed Read would always give you what you asked for unless it was at the end of the file, but that's not so (the signature and semantics of Read are actually quite complex). For example, if for some reason a read gives you `1:27 So God created ma` with the rest coming, it'll search for the last LF, not find it, and then handle the whole buffer as if it was a whole line, treating `ma` as a distinct word. In any case, I'll fix this bug in my solution in the next couple of days.
I really appreciate your contribution, and it's simpler / less bug prone than mine! However, I'd like to "own" the Go version as I coded that for and presented that in the article, and keep it fairly similar to what I had, warts and all. Thanks for helping me find this (rather tricky!) bug.
Sure, I understand :smile:
The thing is, you don't need to complicate the state management that much, you can treat '\n' as ' ' and count words every time you find one.
https://github.com/ntrrg/countwords/blob/patch-1/optimized.go:
var word []byte
buf := make([]byte, 64*1024)
counts := make(map[string]*int)
That is all the state I needed.
Yes, that's simpler and less tricky, thank you! I think I'll basically take this approach (and add your name to the credits). Will update later today. Cheers!
Don't worry, not doing it for the credits :joy: just wanted to show that even optimized Go code is easy to read and maintain :smile:
Indeed!
| gharchive/pull-request | 2021-03-17T22:13:34 | 2025-04-01T04:33:37.091043 | {
"authors": [
"benhoyt",
"ntrrg"
],
"repo": "benhoyt/countwords",
"url": "https://github.com/benhoyt/countwords/pull/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1368387060 | ERROR: string indices must be integers
Getting the following error when doing:
$> aws-sso-util login --profile myprofile # finishes ok
$> aws-sso-util credential-process --profile myprofile --debug
ERROR: string indices must be integers
aws-sso-util: 4.28.0
python: 3.8.9
MacOs Monterey 12.5.1
Closed as a duplicate of #70, fixing this soon.
| gharchive/issue | 2022-09-09T22:36:51 | 2025-04-01T04:33:37.108952 | {
"authors": [
"benkehoe",
"tairosonloa"
],
"repo": "benkehoe/aws-sso-util",
"url": "https://github.com/benkehoe/aws-sso-util/issues/73",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
102041099 | Simple usage example?
Hey Michael,
I'll take some time later today to get the readme updated with actual usage details.
Ben
That would be amazing!
Hey - I just added in details to the readme. Additionally, I will try to find time to publish a minor, minor, minor version update to get those details added to the npm package page as well.
Yes this is much more helpful! I had no idea I had to instrument the code myself and was scratching my head over the lack of output.
Thank you very much for taking the time to do this
| gharchive/issue | 2015-08-20T02:13:27 | 2025-04-01T04:33:37.115212 | {
"authors": [
"bennyhat",
"faceleg"
],
"repo": "bennyhat/protractor-istanbul-plugin",
"url": "https://github.com/bennyhat/protractor-istanbul-plugin/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
79637641 | Django 1.7+ migrations not included in pypi distribution
I tried installing both 0.5.0 and 0.5.1 via pip, and neither one seems to include django 1.7+ migrations.
Poking around, it seems the migrations folder is present in the 0.5.0 zip and tars available from the GitHub releases tab, but the tar available from pypi does not include them (https://pypi.python.org/pypi/django-organizations#downloads)
Thanks for catching this - still upgrading prod sites to Django 1.8 so haven't noticed. The 0.5.2 version is on PyPI with verified migrations module in the source distribution.
| gharchive/issue | 2015-05-22T22:37:36 | 2025-04-01T04:33:37.117524 | {
"authors": [
"bennylope",
"stevenmcdonald"
],
"repo": "bennylope/django-organizations",
"url": "https://github.com/bennylope/django-organizations/issues/69",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
503657381 | Change background color on load to black
My humble attempt to fix a white screen while the app is loading. I use the app before bed sometimes and the white startup screen is not very comfortable.
Hey @Nik-Pavlov ,
Thank you so much for your input, I know I took a while to answer but be sure that your PR has been received and I really appreciate it.
I'm actually thinking about it because there's no perfect solution to the problem :
Right now, if you're using Android 10 the launch screen takes the appropriate color depending on your OS theme (dark or light). It's not related to your in app choice though.
If we apply an always dark launch screen on Android < 10 (what your PR does), it's not the right choice if you're using the app on a light theme
If we apply an always white launch screen on Android < 10 (what your PR actually tries to fix), it's not the right choice if you're using the app on a dark theme
So I'm still not decided about what what to do here, but I guess your solution works better in every cases..
I agree, I'll have to change some of your code as I already have a dark theme background color, and I also think I have to apply the AppTheme to all the activities, not just the main one. But I definitely appreciate your help and it's a nice addition.
Hey @Nik-Pavlov
Just wanted to let you know that I included your contribution, it will be part of the next update. I didn't merge it directly because it needed some work but those 2 commits are for you:
https://github.com/benoitletondor/EasyBudget/commit/b4e7cd2690a6d70fc64f87fb5d12356c6a7b4062
https://github.com/benoitletondor/EasyBudget/commit/c5cffc2a392c4e051b544572c61f39ec5bf10ba6
Thank you again :)
Thank you!
| gharchive/pull-request | 2019-10-07T19:55:23 | 2025-04-01T04:33:37.167108 | {
"authors": [
"Nik-Pavlov",
"benoitletondor"
],
"repo": "benoitletondor/EasyBudget",
"url": "https://github.com/benoitletondor/EasyBudget/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1697644156 | Update sbt-ci-release to 1.5.12
About this PR
📦 Updates com.github.sbt:sbt-ci-release from 1.5.11 to 1.5.12
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.github.sbt", artifactId = "sbt-ci-release" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "com.github.sbt", artifactId = "sbt-ci-release" }
}]
labels: sbt-plugin-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #97.
| gharchive/pull-request | 2023-05-05T13:42:03 | 2025-04-01T04:33:37.213295 | {
"authors": [
"scala-steward"
],
"repo": "benthecarman/translnd",
"url": "https://github.com/benthecarman/translnd/pull/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
222898251 | Fixed non-existent 'skip_reason' key lookup in skip method
Ansible version: 2.2.2.0.
I have the following task in my role:
- name: "Enabling yum plugins"
ini_file:
dest: "/etc/yum/pluginconf.d/{{ item }}.conf"
section: main
option: enabled
value: 1
with_items: "{{ enable_plugins }}"
which produces the following warning when enable_plugins is an empty list:
[WARNING]: Failure using method (v2_runner_on_skipped) in callback plugin
(<ansible.plugins.callback.tap.CallbackModule object at 0x7f16fe5bbcd0>):
u'skip_reason'
and task isn't visible in the TAP output.
I'm not sure if it's an intentional behaviour, but I would prefer to see it with an empty reason:
ok - ini_file: Enabling yum plugins # SKIP
Nice catch. I was curious and created a failing test case similar to your example.
Stepping through the test I found that Ansible actually stores the reason "No items in list" as result['skipped_reason'] instead of result['skip_reason']. The new skipped task handler will check both keys.
I released version 0.2.1 with a fix for this bug.
No problem, thanks for reporting the issue.
| gharchive/pull-request | 2017-04-19T23:07:50 | 2025-04-01T04:33:37.229048 | {
"authors": [
"benwebber",
"ezamriy"
],
"repo": "benwebber/ansible-tap",
"url": "https://github.com/benwebber/ansible-tap/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
280334135 | Problem for ng2-img-max
I have a problem when building for Android: Metadata version mismatch for module ......../node_modules/ng2-img-max/dist/src/img-exif.service.d.ts, found version 4, expected 3. I don't know how to solve it.
I have a similar issue. Building my project with Ng2ImgToolsModule in my app.module will not work, but if I comment it out it will build, and then if I uncomment it, it will still build.
For Angular 4 compatibility downgrade ng2-img-max to version 2.1.6 and ng2-img-tools to version 1.1.0. The recent updates for Angular 5 broke it on 4.
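Pinned in package.json, those versions would look like this (version strings taken from the comment above; whether to pin exactly or with a caret is up to you):

```json
{
  "dependencies": {
    "ng2-img-max": "2.1.6",
    "ng2-img-tools": "1.1.0"
  }
}
```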
Hey guys, as @sdsharma suggests you have to use an older version of ng2-img-max on older versions of Angular.
| gharchive/issue | 2017-12-08T01:03:00 | 2025-04-01T04:33:37.246026 | {
"authors": [
"bergben",
"marker004",
"sdsharma",
"zgliuhouqing"
],
"repo": "bergben/ng2-img-tools",
"url": "https://github.com/bergben/ng2-img-tools/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
350669446 | Unexpected deletion of Strategy config.name property during creation
I recently updated from 0.32.1 to 0.35.0. This caused by application to fail with the following error message
Authentication strategies must have a name
After investigation, I found the following code was added in 0.33.0 https://github.com/bergie/passport-saml/blob/v0.33.0/lib/passport-saml/strategy.js#L17:L23
// Customizing the name can be useful to support multiple SAML configurations at the same time.
// Unlike other options, this one gets deleted instead of passed along.
if (options.name) {
this.name = options.name;
delete options.name;
}
else {
this.name = 'saml';
}
I was already using a name property in the config so that I can setup multiple SAML strategies, sample code shown below
let samlStrategy = new SAMLStrategy(conf, (profile, done) => {
  // some code here
});
passport.use(conf.name, samlStrategy);
The above code now fails because of the change made in 0.33.0, and I had to solve this issue by making a clone of the conf object and passing the clone when creating the strategy.
Question: Shouldn't the module take care of creating the clone if it is going to delete properties of the parameters that are passed to it?
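One way the module could handle this (a sketch, not passport-saml's actual source): shallow-copy the options before deleting keys, so the caller's object is never mutated.

```javascript
// Sketch: clone options so that deleting 'name' never touches the caller's
// config object (illustrative only, not passport-saml's real constructor).
function makeStrategy(options) {
  const opts = Object.assign({}, options); // shallow copy is enough for a top-level key
  const name = opts.name || 'saml';
  delete opts.name;
  return { name: name, samlOptions: opts };
}

const conf = { name: 'idp-a', entryPoint: 'https://example.com/sso' };
const strategy = makeStrategy(conf);
console.log(strategy.name); // prints idp-a
console.log(conf.name);     // prints idp-a (the caller's object is untouched)
```

With this shape, the same conf object can safely be reused, e.g. in `passport.use(conf.name, samlStrategy)` as in the snippet above.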
@josnidhin You are right-- it would be a better design to clone the options passed in and delete from that. Would you be willing to put together a PR to improve this?
@josnidhin I'm checking again to see if you are willing to fix this issue. Thanks.
@josnidhin I'm planning a new release within the next 10 days, and this improvement would be welcome.
@markstos I would like to see this get fixed before a 1.0 release. I see that you added the comment explaining that options.name should not get passed along. Could you clarify why it shouldn't get passed along?
I understand that part of it, but I'm wondering why options.name is deleted in the first place.
@markstos Sorry for the late reply, I was on vacation.
I can help with this bug, but as @cjbarth asked: why do we have to delete that property?
If it needs to be deleted, then the options object will have to be cloned, and the way to do it will depend on the Node versions to be supported. Currently the package.json says Node engine >= 4; do we have to support 4? If it's 6 and above, we can use Object.assign; otherwise I am thinking of something like JSON.parse(JSON.stringify(options)).
@cjbarth Oh. :) I wrote that code in: https://github.com/bergie/passport-saml/commit/6e2418bae01b98dc982a3d37a543ef7ea079fd76
The reason is that "options.name" was not passed along before, and I didn't want to modify that behavior.
At the time, I needed to be able to connect to multiple IdPs, and giving each IdP a custom name here helped me be do that.
For more context, here's the issue that led to the original change. https://github.com/bergie/passport-saml/commit/6e2418bae01b98dc982a3d37a543ef7ea079fd76
@josnidhin We will support all Node LTS releases that are actively maintained. A PR is welcome to bump the minimum supported version to 6, as Node 4 has been EOLed.
I have not looked at the effect of continuing to pass options.name on through to the next call:
this._saml = new saml.SAML(options);
You are welcome to investigate, but deleting it and not passing is definitely safe as that's the original behavior.
@markstos Interestingly enough, I have had to federate against multiple IdP's as well and leveraged the ability to set different names to get it done and I'm on 0.15, before this changed landed. Thus, I'm not sure that this was strictly required to get everything working. I note that options.name was always being passed down to the saml.SAML() ctor and that behavior only stopped with this PR.
if (typeof options == 'function') {
  verify = options;
  options = {};
}
if (!verify) {
  throw new Error('SAML authentication strategy requires a verify function');
}
this.name = 'saml'; // if options.name was set at this point, it was never unset.
passport.Strategy.call(this);
this._verify = verify;
this._saml = new saml.SAML(options);
I thus feel that the only change that we need to make would be to remove the line to delete `options.name`. Is there something I'm missing? In fact, there doesn't seem to be much use for `this.name` either.
@cjbarth About two weeks after I committed this change, someone from the passport project left a comment explaining how I could accomplish the same thing using an undocumented feature of passport. You can see my comment and the answer here:
https://github.com/jaredhanson/passport/pull/606#issuecomment-365710590
By that point, I had already released the new version of passport-saml with the change. However, now it's clear the change wasn't necessary and was clearly broken for a common case that led to this bug report. So another option we have is just revert the bad commit. I can add some documentation on how to use the undocumented 2-argument call to passport.use() instead.
This might be considered a breaking change, but our next release is planned to contain some other breaking changes anyway.
In that case, since this change introduced a bug, I'm in favor of reverting it and updating the documentation. I must be using the undocumented way too, though I spent a lot of time reading the source code to build my code to use this library, so I probably missed that it wasn't documented. :) Now is a good time to make such breaking changes; however, would this really be a breaking change since we'd just be reverting to previous behavior?... Well, I suppose we did make a breaking change, and this would be a breaking change to undo that breaking change... Anyway, 1.0 here we come!
It's only breaking for the small (very small?) number of people who used the new feature. I'll handle the revert and related docs. Assigning to myself.
| gharchive/issue | 2018-08-15T03:03:16 | 2025-04-01T04:33:37.258578 | {
"authors": [
"cjbarth",
"josnidhin",
"markstos"
],
"repo": "bergie/passport-saml",
"url": "https://github.com/bergie/passport-saml/issues/296",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2391654207 | 🛑 DAP Website is down
In 769bbf5, DAP Website (http://web.ula.ve/dap/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: DAP Website is back up in 15370a8 after 2 hours, 9 minutes.
| gharchive/issue | 2024-07-05T02:23:32 | 2025-04-01T04:33:37.280555 | {
"authors": [
"berlinserver"
],
"repo": "berlinserver/estatus",
"url": "https://github.com/berlinserver/estatus/issues/392",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Want to fetch the Tweets before and after an RT or a given Tweet
I want to follow the context of a post. Here are the use cases:
I want to see the circumstances in which the retweeted Tweet was originally posted, and what follow-ups were added afterwards.
I want to see how the account that retweeted it is reacting.
On the view side, this can be achieved with the FragmentTimeline implementation in #1.
On the request side, we need something that follows in_reply_to_status_id, plus an API that fetches the n Tweets before and after a given Tweet.
| gharchive/issue | 2016-03-08T11:31:27 | 2025-04-01T04:33:37.281832 | {
"authors": [
"berlysia"
],
"repo": "berlysia/EthnicPolyphony",
"url": "https://github.com/berlysia/EthnicPolyphony/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2197889810 | Bug: Re-Review in approval although not submitted
Description
In the re-review process for lamb1, there seems to be an issue. The entity lamb1 is appearing in the approval list prematurely, before it has been officially submitted for approval by the reviewer.
Steps to Reproduce
Navigate to the re-review section.
Observe the appearance of lamb1 in the approval list.
Confirm whether lamb1 has been submitted for review.
Expected Behavior
Entities should only appear in the approval list post-submission for re-review.
Actual Behavior
The entity lamb1 is visible in the approval list before submission, indicating a potential flaw in the submission or listing process.
Additional Context
This issue may affect the workflow and efficiency of the re-review process, necessitating a prompt fix.
This is actually true for all genes. The statuses "re_review_submitted" and "re_review_approved" are only tracked in the table re_review_entity_connect.
In the standard curation approval views all statuses and reviews are displayed, which causes this "bug". This is also related to the problem of "re_review_approved" not being saved, as some curators do not use the dedicated approval view for that.
Possible fix in logic:
check for each status or review whether it is in the "re_review_entity_connect" table. Show an info note to the curator that this is actually part of the re-curation.
fix the re_review_entity_connect table by adding the re_review_approved status based on some logic (e.g. the current status and review are primary and it is marked as submitted)
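The membership check in the first proposed fix could be sketched like this (the data shapes and field names below are invented for illustration; the real table columns may differ):

```javascript
// Hypothetical rows from re_review_entity_connect -- field names invented.
const reReviewEntityConnect = [
  { entity_id: 1, status: 're_review_submitted' },
];

// Only statuses tracked in re_review_entity_connect belong to the re-review
// process; everything else is part of standard curation and should not show
// up in the re-review approval list.
function partOfReReview(entityId, status, connectRows) {
  return connectRows.some(
    (row) => row.entity_id === entityId && row.status === status
  );
}

console.log(partOfReReview(1, 're_review_submitted', reReviewEntityConnect)); // true
console.log(partOfReReview(2, 're_review_submitted', reReviewEntityConnect)); // false
```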
| gharchive/issue | 2024-03-20T15:51:10 | 2025-04-01T04:33:37.294898 | {
"authors": [
"berntpopp"
],
"repo": "berntpopp/sysndd",
"url": "https://github.com/berntpopp/sysndd/issues/31",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
Automate publishing the modele-social package to NPM
Also publishes it as an ES module. Closes #1935.
It's here
https://github.com/betagouv/mon-entreprise/blob/master/modele-social/build.js#L50
In the vitejs PR I had changed exports.module = {...} to export default { ... }, which has absolutely nothing to do with it 😄
| gharchive/pull-request | 2022-02-09T14:21:19 | 2025-04-01T04:33:37.381831 | {
"authors": [
"mquandalle"
],
"repo": "betagouv/mon-entreprise",
"url": "https://github.com/betagouv/mon-entreprise/pull/2004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Standardizing "data-couleur" and "data-lang"
Why is one of these variables in English and the other in French?
Wouldn't it be more logical to have [data-color and data-lang] or [data-langue and data-couleur]?
Thanks for pointing this out!
Technically it's not a language issue but an abbreviation one: couleur isn't shortened because coul doesn't evoke anything, whereas lang is understandable.
But I agree with you, it serves no purpose, and data-langue would be better.
In the same vein, colour occurrences are floating around the code even though the CSS property is color... Another bad decision there.
Ideally I would rather have put everything in English ;)
I can open a PR if needed
Unfortunately, it's impossible to change this bit of code: it's our link to the partners' iFrame integrations (pôle-emploi for example, but also 30 others), and it would be very complicated to get them all to change these attributes...
It's maybe not great, but accepting both, and putting the correct one in the docs and examples, would give a kind of backward compatibility
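The "accept both" idea could be sketched as a lookup that prefers the documented attribute and falls back to the legacy one (the helper name and attribute values below are hypothetical):

```javascript
// Hypothetical helper: read the documented attribute first, then fall back
// to the legacy one so existing partner iFrame embeds keep working.
function readWidgetOption(attrs, canonical, legacy) {
  if (attrs[canonical] !== undefined) return attrs[canonical];
  return attrs[legacy]; // may be undefined if neither is set
}

// Example: a partner embed still using the old mixed-language attributes.
const partnerEmbed = { 'data-couleur': '#2e4a9e', 'data-lang': 'fr' };

const color = readWidgetOption(partnerEmbed, 'data-color', 'data-couleur');
const lang  = readWidgetOption(partnerEmbed, 'data-langue', 'data-lang');

console.log(color, lang); // '#2e4a9e' 'fr'
```

With the fallback in place, the docs and examples can advertise only the new names while old embeds continue to work unchanged.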
| gharchive/issue | 2018-09-05T15:17:04 | 2025-04-01T04:33:37.407644 | {
"authors": [
"BorisLeMeec",
"laem"
],
"repo": "betagouv/syso",
"url": "https://github.com/betagouv/syso/issues/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
223915102 | Homepage: Links in Hero Box
Add "Get Oriented:" before four links
Reorder/rename four links: Site Overview, Communities Overview, Intro to CSE, Intro to HCP
Current link goes to communities page
These links should carry when the Get Oriented subpages are created
Done.
| gharchive/issue | 2017-04-24T18:44:23 | 2025-04-01T04:33:37.421430 | {
"authors": [
"curfman",
"sbxchicago"
],
"repo": "betterscientificsoftware/betterscientificsoftware.github.io",
"url": "https://github.com/betterscientificsoftware/betterscientificsoftware.github.io/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2762044621 | 🛑 Samsun Psikoterapi Merkezi is down
In fa87287, Samsun Psikoterapi Merkezi (https://samsunpsikoterapi.tr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Samsun Psikoterapi Merkezi is back up in 10e3c99 after 1 hour, 43 minutes.
| gharchive/issue | 2024-12-29T00:02:48 | 2025-04-01T04:33:37.423838 | {
"authors": [
"betterwithagency"
],
"repo": "betterwithagency/status-page",
"url": "https://github.com/betterwithagency/status-page/issues/1527",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2433076814 | 🛑 bw/a Smart is down
In 7ed6221, bw/a Smart (https://smart.betterwith.agency) was down:
HTTP code: 0
Response time: 0 ms
Resolved: bw/a Smart is back up in 767dd51 after 17 minutes.
| gharchive/issue | 2024-07-26T22:53:59 | 2025-04-01T04:33:37.426189 | {
"authors": [
"betterwithagency"
],
"repo": "betterwithagency/status-page",
"url": "https://github.com/betterwithagency/status-page/issues/379",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2441888366 | 🛑 Emir Tarlan is down
In b688238, Emir Tarlan (https://emirtarlan.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Emir Tarlan is back up in 4204f8f after 39 minutes.
| gharchive/issue | 2024-08-01T08:58:59 | 2025-04-01T04:33:37.428495 | {
"authors": [
"betterwithagency"
],
"repo": "betterwithagency/status-page",
"url": "https://github.com/betterwithagency/status-page/issues/764",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
When the page is refreshed with F5, the state of the left-hand menu is lost
When the page is refreshed with F5, the state of the left-hand menu is lost
My first reaction was that this is impossible
As shown in the screenshot, the second-level menu on the left is hidden and the menu state is lost.
How does this happen? The steps are as follows:
1. npm start
2. In the left menu, click Feature Management -> Feature List (actually, clicking any second-level menu item works)
3. Refresh the page with F5
The situation shown in the screenshot then appears.
I was lazy and didn't feel like writing this feature; please add the specific business logic yourself.
In componentWillMount, check window.location.pathName, then set defaultOpenKeys and defaultSelectedKeys.
OK.
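The suggested fix can be sketched as a pure helper that maps the current pathname to Ant-Design-style menu keys (the routes and key names below are hypothetical); the component would call it in componentWillMount and feed the result to defaultOpenKeys/defaultSelectedKeys:

```javascript
// Hypothetical route table mapping a pathname to menu keys.
// The real app's routes and key names will differ.
const MENU_KEYS_BY_PATH = {
  '/features/list': { openKeys: ['feature-management'], selectedKeys: ['feature-list'] },
  '/features/new':  { openKeys: ['feature-management'], selectedKeys: ['feature-new'] },
};

function menuStateForPath(pathname) {
  // Fall back to a collapsed menu for unknown paths.
  return MENU_KEYS_BY_PATH[pathname] || { openKeys: [], selectedKeys: [] };
}

// In the component (sketch):
//   componentWillMount() {
//     const { openKeys, selectedKeys } = menuStateForPath(window.location.pathname);
//     this.setState({ openKeys, selectedKeys });
//   }

console.log(menuStateForPath('/features/list').selectedKeys); // ['feature-list']
```

Because the keys are derived from the URL on every mount, an F5 refresh restores the same open/selected menu items instead of losing them.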
| gharchive/issue | 2018-07-29T05:53:54 | 2025-04-01T04:33:37.431230 | {
"authors": [
"EdisonFan",
"beverle-y"
],
"repo": "beverle-y/react-starter-kit",
"url": "https://github.com/beverle-y/react-starter-kit/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Adding ScheduleRunnerPlugin after default plugins crashes at startup
Thanks for your work!
I've been experimenting a bit and stumbled across the ScheduleRunnerPlugin. However, when the plugin is added after the default plugins, it crashes on startup. When adding the default plugins after the ScheduleRunnerPlugin, it works fine.
Minimal example:
use std::time::Duration;
use bevy::prelude::*;
fn main() {
App::build()
.add_default_plugins()
.add_plugin(bevy::app::ScheduleRunnerPlugin::run_loop(
Duration::from_secs_f64(1.0 / 60.0),
))
.run();
}
Fails with:
Short Backtrace
C:/Users/Lukas/.cargo/bin/cargo.exe run --color=always --package pong --bin crash
Compiling pong v0.1.0 (D:\Lukas\Documents\Rust\GD50\pong)
Finished dev [unoptimized + debuginfo] target(s) in 4.77s
Running `target\debug\crash.exe`
thread 'main' panicked at 'Resource does not exist bevy_input::keyboard::KeyboardInputState', C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:157:32
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:475
1: std::panicking::begin_panic_fmt
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:429
2: bevy_ecs::resource::resources::{{impl}}::get_unsafe_ref::{{closure}}<bevy_input::keyboard::KeyboardInputState>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:157
3: core::option::Option<core::ptr::non_null::NonNull<bevy_input::keyboard::KeyboardInputState>>::unwrap_or_else
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\option.rs:409
4: bevy_ecs::resource::resources::Resources::get_unsafe_ref
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:144
5: bevy_ecs::resource::resource_query::{{impl}}::get<bevy_input::keyboard::KeyboardInputState>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resource_query.rs:221
6: bevy_ecs::resource::resource_query::{{impl}}::get
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resource_query.rs:255
7: bevy_ecs::resource::resources::Resources::query_system
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:139
8: bevy_ecs::system::into_system::{{impl}}::system::{{closure}}
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\system\into_system.rs:178
9: bevy_ecs::system::into_system::{{impl}}::run<bevy_ecs::system::into_system::QuerySystemState,closure-0,closure-1,closure-2,closure-3>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\system\into_system.rs:60
10: bevy_ecs::schedule::schedule::Schedule::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\schedule\schedule.rs:139
11: bevy_app::schedule_runner::{{impl}}::build::{{closure}}
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\schedule_runner.rs:60
12: alloc::boxed::{{impl}}::call
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1039
13: bevy_app::app::App::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\app.rs:73
14: bevy_app::app_builder::AppBuilder::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\app_builder.rs:43
15: crash::main
at .\src\crash.rs:6
16: core::ops::function::FnOnce::call_once<fn(),tuple<>>
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:233
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Full backtrace
C:/Users/Lukas/.cargo/bin/cargo.exe run --color=always --package pong --bin crash
Finished dev [unoptimized + debuginfo] target(s) in 0.85s
Running `target\debug\crash.exe`
thread 'main' panicked at 'Resource does not exist bevy_input::keyboard::KeyboardInputState', C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:157:32
stack backtrace:
0: 0x7ff6ec2ca6e9 - std::backtrace_rs::backtrace::dbghelp::trace
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\..\..\backtrace\src\backtrace\dbghelp.rs:98
1: 0x7ff6ec2ca6e9 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ff6ec2ca6e9 - std::sys_common::backtrace::_print_fmt
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\sys_common\backtrace.rs:79
3: 0x7ff6ec2ca6e9 - std::sys_common::backtrace::_print::{{impl}}::fmt
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\sys_common\backtrace.rs:58
4: 0x7ff6ec2e180c - core::fmt::write
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\core\src\fmt\mod.rs:1117
5: 0x7ff6ec2c57ec - std::io::Write::write_fmt<std::sys::windows::stdio::Stderr>
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\io\mod.rs:1510
6: 0x7ff6ec2cd61b - std::sys_common::backtrace::_print
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\sys_common\backtrace.rs:61
7: 0x7ff6ec2cd61b - std::sys_common::backtrace::print
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\sys_common\backtrace.rs:48
8: 0x7ff6ec2cd61b - std::panicking::default_hook::{{closure}}
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:200
9: 0x7ff6ec2cd268 - std::panicking::default_hook
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:219
10: 0x7ff6ec2cde0f - std::panicking::rust_panic_with_hook
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:569
11: 0x7ff6ec2cd975 - std::panicking::begin_panic_handler::{{closure}}
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:476
12: 0x7ff6ec2caf9f - std::sys_common::backtrace::__rust_end_short_backtrace<closure-0,!>
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\sys_common\backtrace.rs:153
13: 0x7ff6ec2cd929 - std::panicking::begin_panic_handler
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:475
14: 0x7ff6ec2cd8dc - std::panicking::begin_panic_fmt
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:429
15: 0x7ff6ec913ac6 - bevy_ecs::resource::resources::{{impl}}::get_unsafe_ref::{{closure}}<bevy_input::keyboard::KeyboardInputState>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:157
16: 0x7ff6ec9126fc - core::option::Option<core::ptr::non_null::NonNull<bevy_input::keyboard::KeyboardInputState>>::unwrap_or_else
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\option.rs:409
17: 0x7ff6ec9126fc - bevy_ecs::resource::resources::Resources::get_unsafe_ref
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:144
18: 0x7ff6ec9126fc - bevy_ecs::resource::resource_query::{{impl}}::get<bevy_input::keyboard::KeyboardInputState>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resource_query.rs:221
19: 0x7ff6ec515fc6 - bevy_ecs::resource::resource_query::{{impl}}::get
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resource_query.rs:255
20: 0x7ff6ec515fc6 - bevy_ecs::resource::resources::Resources::query_system
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\resource\resources.rs:139
21: 0x7ff6ec515fc6 - bevy_ecs::system::into_system::{{impl}}::system::{{closure}}
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\system\into_system.rs:178
22: 0x7ff6ec515fc6 - bevy_ecs::system::into_system::{{impl}}::run<bevy_ecs::system::into_system::QuerySystemState,closure-0,closure-1,closure-2,closure-3>
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\system\into_system.rs:60
23: 0x7ff6ec31cc53 - bevy_ecs::schedule::schedule::Schedule::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_ecs\src\schedule\schedule.rs:139
24: 0x7ff6ec2b12c4 - bevy_app::schedule_runner::{{impl}}::build::{{closure}}
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\schedule_runner.rs:60
25: 0x7ff6ec2af1bc - alloc::boxed::{{impl}}::call
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\alloc\src\boxed.rs:1039
26: 0x7ff6ec2af1bc - bevy_app::app::App::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\app.rs:73
27: 0x7ff6ec2b0db0 - bevy_app::app_builder::AppBuilder::run
at C:\Users\Lukas\.cargo\git\checkouts\bevy-f7ffde730c324c74\99e39b5\crates\bevy_app\src\app_builder.rs:43
28: 0x7ff6ec2a5485 - crash::main
at D:\Lukas\Documents\Rust\GD50\pong\src\crash.rs:6
29: 0x7ff6ec2a13db - core::ops::function::FnOnce::call_once<fn(),tuple<>>
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:233
30: 0x7ff6ec2a5a5b - std::sys_common::backtrace::__rust_begin_short_backtrace<fn(),tuple<>>
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\sys_common\backtrace.rs:137
31: 0x7ff6ec2a57e1 - std::rt::lang_start::{{closure}}<tuple<>>
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\rt.rs:66
32: 0x7ff6ec2ce13e - core::ops::function::impls::{{impl}}::call_once
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\library\core\src\ops\function.rs:286
33: 0x7ff6ec2ce13e - std::panicking::try::do_call
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:373
34: 0x7ff6ec2ce13e - std::panicking::try
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panicking.rs:337
35: 0x7ff6ec2ce13e - std::panic::catch_unwind
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\panic.rs:394
36: 0x7ff6ec2ce13e - std::rt::lang_start_internal
at /rustc/7e6d6e5f535321c2223f044caba16f97b825009c\/library\std\src\rt.rs:51
37: 0x7ff6ec2a57b3 - std::rt::lang_start<tuple<>>
at C:\Users\Lukas\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\rt.rs:65
38: 0x7ff6ec2a54f0 - main
39: 0x7ff6ecb2a930 - invoke_main
at d:\agent\_work\4\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
40: 0x7ff6ecb2a930 - __scrt_common_main_seh
at d:\agent\_work\4\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
41: 0x7ffd13a97bd4 - BaseThreadInitThunk
42: 0x7ffd13bece51 - RtlUserThreadStart
error: process didn't exit successfully: `target\debug\crash.exe` (exit code: 101)
bevy version: 0.1.2 and 99e39b52
rustup toolchain: nightly-x86_64-pc-windows-msvc
rust version: rustc 1.47.0-nightly (7e6d6e5f5 2020-08-16)
Kind regards,
Lukas
Sidenote: as far as I can tell, ScheduleRunnerPlugin waits a fixed amount of time between each update but does not schedule updates in a fixed interval (1/60 is not necessarily 60 fps). I assume this is either intended or being planed (maybe with #125) but for the moment, https://github.com/bevyengine/bevy/blob/99e39b522775b1530bbe226f29f7368c37174bb9/examples/app/headless.rs#L18 is technically not correct :-)
After some more experimentation, it seems that ScheduleRunnerPlugin is incompatible with winit (hence the crash).
When adding ScheduleRunnerPlugin before .add_default_plugins(), it doesn't crash but it also doesn't apply the schedule. This is because the WinitPlugin also calls set_runner: https://github.com/bevyengine/bevy/blob/4a06bbf9f6828849d353d0732c1885077c32c0f4/crates/bevy_winit/src/lib.rs#L27-L32
So when using winit, currently the only way to limit the fps is using vsync: true in the WindowDescriptor. Maybe the solution is to add a new field fps to WindowDescriptor?
I encountered this issue when running headless.
The root cause is that when not using bevy_winit::winit_runner, local resources of Local<T> are not initialized.
winit_runner somehow triggers ResourceQuery::initialize(), and when it is not used, this trait implementation does not run.
Following simple program is enough to reproduce the issue:
use bevy::{app::ScheduleRunnerPlugin, prelude::*};
use std::time::Duration;
fn main() {
App::build()
.add_plugin(ScheduleRunnerPlugin::run_loop(Duration::from_secs_f64(
1.0 / 60.0,
)))
.add_system(counter.system())
.run();
}
fn counter(mut state: Local<CounterState>) {
println!("{}", state.count);
state.count += 1;
}
#[derive(Default)]
struct CounterState {
count: u32,
}
$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.08s
Running `target/debug/local_test`
thread 'main' panicked at 'Resource does not exist local_test::CounterState', /home/smoku/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/macros.rs:16:9
| gharchive/issue | 2020-08-17T23:10:36 | 2025-04-01T04:33:37.447725 | {
"authors": [
"lukasschlueter",
"smokku"
],
"repo": "bevyengine/bevy",
"url": "https://github.com/bevyengine/bevy/issues/221",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1656460760 | Ratchet deprecation (support for php 8.2)
There's a dependency of this package that has flooded my log files with deprecation warnings every second since updating to PHP 8.2.
There is no support for that here; see this example: https://github.com/beyondcode/laravel-websockets/pull/1049
| gharchive/issue | 2023-04-05T23:52:36 | 2025-04-01T04:33:37.481774 | {
"authors": [
"manuelmaceira",
"parallels999"
],
"repo": "beyondcode/laravel-websockets",
"url": "https://github.com/beyondcode/laravel-websockets/issues/1117",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
169938899 | introduce separate modules reduks-rx-android and reduks-kovenant-android
Currently the reduks-android module does not have any real Android dependency, just an interface called ReduksActivity<S> that does not actually depend on an Android activity.
What about reintegrating ReduksActivity<S> into core and instead creating two modules with a real dependency on Android, one for rx (RxReduksActivity) and one for kovenant (KovenantReduksActivity)?
Or perhaps explore a different way to do this: needing a base activity class in order to use reduks is an ugly design pattern.
after some refactoring of android related code, I think we can stick with the current set of modules
| gharchive/issue | 2016-08-08T14:36:53 | 2025-04-01T04:33:37.483489 | {
"authors": [
"beyondeye"
],
"repo": "beyondeye/Reduks",
"url": "https://github.com/beyondeye/Reduks/issues/3",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2426563684 | Typescript issue in Nuxt 3
IDEs report this error.
Vue: Object literal may only specify known properties, and fontawesome does not exist in type InputConfig<NuxtConfig, ConfigLayerMeta
PR is welcome!
Have you tried running npm run dev again?
After I did this (and restarted my TS language server) the error was gone in my IDE (VS Code)
| gharchive/issue | 2024-07-24T04:22:58 | 2025-04-01T04:33:37.485856 | {
"authors": [
"bezumkin",
"iondisc",
"nielsvanrijn"
],
"repo": "bezumkin/nuxt-fontawesome",
"url": "https://github.com/bezumkin/nuxt-fontawesome/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
216442207 | Fix for stuck overline when triggerd by code
When calling [self selectTabAtIndex:1 animated:true];, the overline bar got stuck at its position while the underline was moving. This fix will try to move both lines if they are enabled.
Thank you! I will update CocoaPods to include this.
Pushed version 2.2.7 to CocoaPods which includes this merge! Thanks!
| gharchive/pull-request | 2017-03-23T14:16:43 | 2025-04-01T04:33:37.487175 | {
"authors": [
"OnurVar",
"bfeher"
],
"repo": "bfeher/BFPaperTabBarController",
"url": "https://github.com/bfeher/BFPaperTabBarController/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
38968986 | notifies :redeploy fails, but manually notifying :stop, :remove and :run works
Reference: https://snap-ci.com/rapidftr/RapidFTR/branch/master/logs/defaultPipeline/74/DEV?back_to=build_history
notifies :redeploy, ..., :immediately
Fails randomly. Instead we have switched to:
notifies :stop, ..., :immediately
notifies :remove, ..., :immediately
notifies :run, ..., :immediately
And that works consistently well. Any ideas what could be the issue?
EDIT: That doesn't work consistently well, it just works "better", and does fail occasionally.
This is my experience as well. When using :redeploy I get an error about not being able to remove an running container. As a fix I'm using action [:stop, :remove, :run].
FIxed in master
| gharchive/issue | 2014-07-29T08:46:54 | 2025-04-01T04:33:37.492217 | {
"authors": [
"caleb",
"rdsubhas",
"someara"
],
"repo": "bflad/chef-docker",
"url": "https://github.com/bflad/chef-docker/issues/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |