| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
106944011 | Double click on mobile
Forum - http://angulargrid.com/forum/showthread.php?tid=2813
Any chance of modifying the cellDoubleClicked event to work on mobile devices?
It does not seem to be working out of the box: http://tinyurl.com/qfsmrcf
Full plnkr: http://plnkr.co/edit/r0hj1QDmbLODzuNdgNXj?p=preview
I need to be able to highlight the row to enable certain actions upon it, and on double click it should open a modal for additional detail... works fine on desktop but no dice on iOS and Android.
What's the fix for mobile?
No movement in over a year; closing.
| gharchive/issue | 2015-09-17T09:21:24 | 2025-04-01T04:33:45.924744 | {
"authors": [
"ceolter",
"equilerex"
],
"repo": "ceolter/ag-grid",
"url": "https://github.com/ceolter/ag-grid/issues/436",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
122540146 | Server side sorting, I always get the same sortModel
I have a gridOptions with enableServerSideSorting: true.
In the parameter of the "getRows" function of my datasource, params.sortModel[0].sort always has the value 'asc', even if I click multiple times on the same column header.
I can't reproduce this here.
http://localhost/angular-grid-pagination/pagingServerSide.html
I debugged through the code, and when no sort is applied, the sortModel array is empty.
So I'm guessing something is wrong with your code?
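For anyone hitting this, here is a rough sketch of what a server-side datasource has to do once the sort model does arrive correctly. The entry shape ({"colId", "sort"}) follows ag-grid's sortModel, but this is an illustrative Python sketch, not the ag-grid API:

```python
def apply_sort_model(rows, sort_model):
    """Apply an ag-grid-style sortModel (a list of {"colId", "sort"} dicts)
    to a list of row dicts. Applying the least-significant key first and
    relying on Python's stable sort gives multi-column ordering."""
    for entry in reversed(sort_model):
        rows = sorted(rows, key=lambda r: r[entry["colId"]],
                      reverse=(entry["sort"] == "desc"))
    return rows

rows = [{"name": "b"}, {"name": "a"}, {"name": "c"}]
asc = apply_sort_model(rows, [{"colId": "name", "sort": "asc"}])
desc = apply_sort_model(rows, [{"colId": "name", "sort": "desc"}])
```

If the reported bug were real, the `desc` call above would never be reached because the grid would keep sending 'asc'.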
| gharchive/issue | 2015-12-16T16:07:37 | 2025-04-01T04:33:45.926545 | {
"authors": [
"ceolter",
"jmdagenais"
],
"repo": "ceolter/ag-grid",
"url": "https://github.com/ceolter/ag-grid/issues/602",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
67048671 | Failed when 'TASK: [ceph-common | Check for a Ceph socket]'
I've tried several times; it always fails at TASK: [ceph-common | Check for a Ceph socket].
error output:
TASK: [ceph-common | Check for a Ceph socket] *********************************
failed: [mon1] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.001969", "end": "2015-04-08 03:56:01.522639", "rc": 1, "start": "2015-04-08 03:56:01.520670", "warnings": []}
...ignoring
failed: [osd0] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.001964", "end": "2015-04-08 03:56:01.535388", "rc": 1, "start": "2015-04-08 03:56:01.533424", "warnings": []}
...ignoring
failed: [mon2] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.001971", "end": "2015-04-08 03:56:01.551236", "rc": 1, "start": "2015-04-08 03:56:01.549265", "warnings": []}
...ignoring
failed: [mon0] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.002027", "end": "2015-04-08 03:56:01.657233", "rc": 1, "start": "2015-04-08 03:56:01.655206", "warnings": []}
...ignoring
failed: [osd1] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.002610", "end": "2015-04-08 03:56:01.592151", "rc": 1, "start": "2015-04-08 03:56:01.589541", "warnings": []}
...ignoring
failed: [osd2] => {"changed": true, "cmd": "stat /var/run/ceph/.asok > /dev/null 2>&1", "delta": "0:00:00.001967", "end": "2015-04-08 03:56:01.660520", "rc": 1, "start": "2015-04-08 03:56:01.658553", "warnings": []}
...ignoring
Sometimes not all the nodes failed in this step.
I did not change anything in the ceph-ansible dir, just vagrant up|provision.
Can anyone help me figure out where the problem is?
If it's the first time you bootstrap, this is normal (since the services are not started yet, there is no socket).
This error is ignored anyway; it is not critical.
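To illustrate why the failure is harmless, here is a hypothetical Python sketch of what the Ansible task's `stat $socket > /dev/null 2>&1` amounts to (the path is an example, not taken from the playbook):

```python
import os

def ceph_socket_check(path):
    """Mimic `stat <socket> > /dev/null 2>&1`: return code 0 if the admin
    socket exists, 1 if it does not. On a first bootstrap the daemons are
    not running yet, so the socket is expected to be missing and the
    non-zero return code is simply ignored by the play."""
    return 0 if os.path.exists(path) else 1

rc = ceph_socket_check("/var/run/ceph/nonexistent.asok")
failed_but_ignored = (rc != 0)  # the playbook continues despite rc == 1
```

So a red "failed" line here on a fresh cluster is expected, not a deployment error.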
| gharchive/issue | 2015-04-08T04:56:32 | 2025-04-01T04:33:45.934386 | {
"authors": [
"ddxgz",
"leseb"
],
"repo": "ceph/ceph-ansible",
"url": "https://github.com/ceph/ceph-ansible/issues/244",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
609728300 | set CGO_ENABLED=1 in makefile
If we set CGO_ENABLED=1 in the makefile, the user doesn't need to set CGO_ENABLED=1
before running any commands.
Signed-off-by: Madhu Rajanna madhupr007@gmail.com
CGO is enabled by default, but some users may have CGO_ENABLED=0 set. There should probably be a test that checks for dependencies/configuration, instead of setting the environment variable everywhere.
Sounds good. We can add a check for both CGO_ENABLED and the Go version; we also need Go 1.13 as the minimum version.
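The proposed check could look something like the following Python sketch. This is a hypothetical preflight helper, not the actual ceph-csi Makefile logic; the message strings and function name are made up:

```python
def build_preflight(env, go_version_output):
    """Refuse to build if CGO is explicitly disabled or the Go toolchain
    is older than 1.13. `env` is the environment mapping and
    `go_version_output` is the text printed by `go version`."""
    problems = []
    if env.get("CGO_ENABLED") == "0":
        problems.append("CGO_ENABLED must not be 0")
    # `go version` prints e.g. "go version go1.13.8 linux/amd64"
    ver = go_version_output.split()[2].lstrip("go")
    major, minor = (int(p) for p in ver.split(".")[:2])
    if (major, minor) < (1, 13):
        problems.append("Go >= 1.13 required")
    return problems

ok = build_preflight({"CGO_ENABLED": "1"}, "go version go1.13.8 linux/amd64")
bad = build_preflight({"CGO_ENABLED": "0"}, "go version go1.12.17 linux/amd64")
```

A Makefile target could run such a check first and fail fast with a clear message instead of silently exporting the variable everywhere.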
| gharchive/pull-request | 2020-04-30T08:53:12 | 2025-04-01T04:33:45.939485 | {
"authors": [
"Madhu-1"
],
"repo": "ceph/ceph-csi",
"url": "https://github.com/ceph/ceph-csi/pull/1000",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
620208440 | ci: retry if no machine is immediately available
To prevent a job from failing when no machine is immediately
available, a retry mechanism is used.
If unable to reserve a machine, it retries every
5 minutes, up to 30 times, to avoid job failure.
Signed-off-by: Yug Gupta ygupta@redhat.com
What happens when the number of retries is up, and no machine has been found?
The build will fail if that happens
The build will fail if that happens
I guess at the stage step?
It will be unable to reserve a machine and will fail at that stage.
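The retry-then-fail behavior described above can be sketched as follows. This is an illustrative Python sketch with injectable reserve/sleep functions for testability, not the actual CI script:

```python
def reserve_with_retry(try_reserve, sleep, attempts=30, interval=5 * 60):
    """Retry reserving a machine every `interval` seconds, up to `attempts`
    times. `try_reserve` returns a machine name or None; `sleep` is passed
    in so tests don't actually wait. If all attempts are exhausted, the
    job fails at this stage, as discussed above."""
    for attempt in range(1, attempts + 1):
        machine = try_reserve()
        if machine is not None:
            return machine
        if attempt < attempts:
            sleep(interval)
    raise RuntimeError("no machine available after %d attempts" % attempts)

# Example: a machine frees up on the third attempt.
results = iter([None, None, "machine-42"])
waits = []
machine = reserve_with_retry(lambda: next(results), waits.append)
```

With 30 attempts at 5-minute intervals, the job waits at most about two and a half hours before giving up.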
| gharchive/pull-request | 2020-05-18T13:26:24 | 2025-04-01T04:33:45.941886 | {
"authors": [
"Yuggupta27",
"nixpanic"
],
"repo": "ceph/ceph-csi",
"url": "https://github.com/ceph/ceph-csi/pull/1081",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
846171343 | Avoid logging secrets when Logging Replication GRPC requests
This PR uses the helper function StripReplicationSecrets from replication-lib-utils to avoid logging secrets in Replication GRPC requests.
Signed-off-by: Madhu Rajanna madhupr007@gmail.com
@mergifyio refresh
/retest ci/centos/mini-e2e-helm/k8s-1.19
/retest ci/centos/mini-e2e-helm/k8s-1.19
Failed to deploy rook #1636
Error from server: error when creating "/tmp/tmp.mYPVfII4SK/cluster-test.yaml": etcdserver: request timed out
/retest ci/centos/mini-e2e-helm/k8s-1.19
/retest ci/centos/upgrade-tests-rbd
/retest ci/centos/mini-e2e-helm/k8s-1.19
/retest ci/centos/upgrade-tests-rbd
Jenkins lost its connection to the test systems?
/retest ci/centos/mini-e2e-helm/k8s-1.20
/retest ci/centos/mini-e2e
/retest ci/centos/mini-e2e-helm/k8s-1.20
/retest ci/centos/mini-e2e
These jobs were affected as well.
/retest ci/centos/upgrade-tests-cephfs
| gharchive/pull-request | 2021-03-31T08:23:58 | 2025-04-01T04:33:45.946559 | {
"authors": [
"Madhu-1",
"nixpanic"
],
"repo": "ceph/ceph-csi",
"url": "https://github.com/ceph/ceph-csi/pull/1947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
93899159 | Add a test for concurrent PUT requests.
Adds two tests for concurrent PUT requests and concurrent overwrite
PUT requests. The expectation is that in each case the request with
the latest submitted time should be the one that is externally
visible.
This new test sometimes fails for me when running against AWS:
FAIL: s3tests.functional.test_s3.test_concurrent_object_overwrite
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/gaul/work/s3-tests/s3tests/functional/test_s3.py", line 988, in test_concurrent_object_overwrite
eq(measured_size, small_size + 40)
AssertionError: None != 1048616
Reference for last writer wins: https://forums.aws.amazon.com/message.jspa?messageID=72266
I increased the sleep time between kicking off the large upload thread and when the small upload occurs to 1s. Ideally, we would like to pause the upload process and inject the second request. However, I could not figure out how to do that. An attempt to use the FakeFileWriter showed that it's not quite meant for that, as I observed the request created in the interrupt on the wire before the request that triggered the interrupt.
@timuralp @andrewgaul it looks like a racy test, I'm not sure it's the correct way to test that. Moreover, don't we already have a test that mimics that in a different way?
@yehudasa I updated the test to use the FakeWriteFile primitive to induce the desired behavior. The content MD5 value needs to be supplied ahead of time, as otherwise the interrupt would be called prematurely.
I also couldn't find tests that test exactly the same behavior (but I haven't read all of them). I assumed that any test that does this would have to either pre-compute MD5 or use threads (like the prior iteration of the PR) and no tests appear to test concurrent PUTs in that way. Skimming through the list of all tests in functional/test_s3.py I also couldn't find a test that does this. Could you point me at a test that validates this behavior?
@timuralp the s3tests-test-readwrite in this repository are doing concurrent writes, and it doesn't need to have the md5 provided ahead of time (the md5 is embedded in the data).
@yehudasa updated the PR to remove MD5 generation and the import statement for hashlib.
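The last-writer-wins semantics the test checks (per the AWS forum reference above) can be sketched deterministically like this. This is a toy in-memory store in Python, not the s3-tests code; the class and method names are invented:

```python
import threading

class Bucket:
    """Toy object store with last-writer-wins semantics: a PUT only
    becomes visible if its (client-supplied) timestamp is at least as
    new as the one already stored, regardless of arrival order."""
    def __init__(self):
        self._lock = threading.Lock()
        self._objects = {}  # key -> (timestamp, body)

    def put(self, key, body, timestamp):
        with self._lock:
            current = self._objects.get(key)
            if current is None or timestamp >= current[0]:
                self._objects[key] = (timestamp, body)

    def get(self, key):
        return self._objects[key][1]

bucket = Bucket()
threads = [
    threading.Thread(target=bucket.put, args=("obj", b"x" * 1048576, 1)),  # large, earlier
    threading.Thread(target=bucket.put, args=("obj", b"small", 2)),        # small, later
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because visibility is decided by timestamp rather than thread interleaving, the small, later PUT wins no matter which thread finishes first — which is exactly the expectation the racy wall-clock version of the test was trying to encode.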
| gharchive/pull-request | 2015-07-08T21:55:57 | 2025-04-01T04:33:46.168266 | {
"authors": [
"andrewgaul",
"timuralp",
"yehudasa"
],
"repo": "ceph/s3-tests",
"url": "https://github.com/ceph/s3-tests/pull/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1833739682 | PhysicalConsole: Fix UTF-8 related failure
Fixes: https://tracker.ceph.com/issues/62286
@dmick, I'm not sure if this was your intent, but we're logging a lot more now after #1869. For example:
$ grep -a -A1 "console.*\(expect before\|output\):" /a/zack-2023-08-02_18:04:55-powercycle-reef-distro-default-smithi/7358307/teuthology.log > /tmp/7358307_console.txt
$ wc -l /tmp/7358307_console.txt
76 /tmp/7358307_console.txt
$ wc -c /tmp/7358307_console.txt
53344 /tmp/7358307_console.txt
https://pulpito-ng.ceph.com/runs/zack-2023-08-02_18:04:55-powercycle-reef-distro-default-smithi
| gharchive/pull-request | 2023-08-02T18:54:57 | 2025-04-01T04:33:46.170380 | {
"authors": [
"zmc"
],
"repo": "ceph/teuthology",
"url": "https://github.com/ceph/teuthology/pull/1879",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170430932 | misc fixes
Description:
Fix booting from the ZFS pool to be automatic, improve the build container a bit.
This change is
Reviewed 2 of 2 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved, some commit checks failed.
Comments from Reviewable
| gharchive/pull-request | 2016-08-10T14:27:30 | 2025-04-01T04:33:46.177072 | {
"authors": [
"mmlb",
"nshalman"
],
"repo": "cerana/cerana",
"url": "https://github.com/cerana/cerana/pull/320",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1799100337 | Discard unknown fields in grpc-gateway?
Currently we have DiscardUnknown: false in our grpc-gateway config. This means that we get an invalid argument error when the JSON message includes unknown fields.
However, gRPC calls that include an unknown field succeed (the unknown field is discarded).
I think the behaviour should be consistent.
I think the reason it was enabled was because REST API users were getting confused by invalid requests (with typos) being accepted and discarded silently.
With gRPC, you can only have an unknown field if you use a newer version of the protobuf, right? It's more "controlled" in that way compared to JSON.
Yep, fair enough.
It is a pain in the JS SDK because the gRPC client works after introducing a new field, but the HTTP one doesn't. The problem is really that the generated toJSON method doesn't normalize the message, so by including the new field set to the zero value, it makes it into the JSON when it could (should?) be omitted.
Even if we changed this going forward, I'll still need a workaround in the SDK to keep compatibility with earlier versions, so I will have to figure it out 🙂
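The two behaviors being compared can be sketched like this. This is a hypothetical Python illustration of strict vs. lenient unknown-field handling against a known field set, not the grpc-gateway or protojson API:

```python
import json

def parse_request(raw, known_fields, discard_unknown):
    """With discard_unknown=False, an unknown JSON field is rejected as an
    invalid argument (current gateway config). With True, it is silently
    dropped, matching what a gRPC client with a newer proto observes."""
    msg = json.loads(raw)
    unknown = set(msg) - set(known_fields)
    if unknown and not discard_unknown:
        raise ValueError("invalid argument: unknown fields %s" % sorted(unknown))
    return {k: v for k, v in msg.items() if k in known_fields}

raw = '{"principal": "alice", "principl_typo": true}'
lenient = parse_request(raw, {"principal"}, discard_unknown=True)
```

The trade-off in the thread is visible here: strict mode catches the typo'd field, lenient mode keeps newer JS SDKs (which serialize fields the server doesn't know yet) working.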
| gharchive/issue | 2023-07-11T14:43:06 | 2025-04-01T04:33:46.181327 | {
"authors": [
"charithe",
"haines"
],
"repo": "cerbos/cerbos",
"url": "https://github.com/cerbos/cerbos/issues/1682",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
126273355 | fix a bug on safari that popstate event is fired during page load
On Safari, cerebral-router will trigger twice during page load.
Thanks for the report. Seems it is time to set up a SauceLabs testing environment to test in real browsers.
Seems some test cases always fail. SauceLabs is a good idea.
Try (event && !event.state) ||
Do you have any details why this happens?
I investigated one case, "should not allow going to a new url". I tried the test case in Safari & Chrome manually; that is, I added an anchor link "#/foo" in the test page and visited http://locahost:3001/tests/preventdefault/. I click the anchor link, the jump doesn't happen, and the URL doesn't change. That means the test case succeeds with my PR. Maybe phantomjs is the problem? No experience with phantomjs :(
I meant not failing tests, but double triggering problem on safari.
BTW, I had ran tests on safari with saucelab. It was failed with and without your pr :(
I am preparing new test setup. We will fix it.
I just set up cerebral router as follows. Put a log in 'mediator.sceneChanged' and load 'http://localhost:3000/scene/0' with Safari; the log will display twice without the PR. Actually, it's a bug in Safari :(
const redirectToDefaultScene = ({services}) => {
  services.router.redirect('/scene/0');
};
controller.signal('rootRouted', [redirectToDefaultScene]);
Router(controller, {
  '/': 'rootRouted',
  '/scene/:scene': 'mediator.sceneChanged'
}, {
  mapper: {
    query: true
  }
});
Ohh. I have done a lot to set up a new cool test env with zuul, but failed to rewrite the tests. It is a really time-consuming task. I need to move forward with other tasks.
As for your change, it definitely breaks other cases.
I have followed the same issue in Page.js and copied their solution.
| gharchive/pull-request | 2016-01-12T20:54:22 | 2025-04-01T04:33:46.185860 | {
"authors": [
"Guria",
"octarrow"
],
"repo": "cerebral/addressbar",
"url": "https://github.com/cerebral/addressbar/pull/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
263876436 | Set webpack dev server at 2.7.1 for IE10 compatibility
Summary
Webpack > 2.7.1 dropped IE 10 & 11 support.
See https://github.com/cerner/terra-core/pull/901
Thanks for contributing to Terra.
@cerner/terra
IE 11 still works as it supports 'let' and 'const'.
| gharchive/pull-request | 2017-10-09T12:24:24 | 2025-04-01T04:33:46.196586 | {
"authors": [
"mhemesath",
"mjhenkes"
],
"repo": "cerner/terra-clinical",
"url": "https://github.com/cerner/terra-clinical/pull/201",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
940133804 | Fix helm chart link in README
Signed-off-by: marshields 78823471+marshields@users.noreply.github.com
/assign @irbekrm
| gharchive/pull-request | 2021-07-08T18:54:46 | 2025-04-01T04:33:46.205232 | {
"authors": [
"marshields"
],
"repo": "cert-manager/aws-privateca-issuer",
"url": "https://github.com/cert-manager/aws-privateca-issuer/pull/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1811894141 | helm: Update Chart.template.yaml - add apache 2.0 license
Pull Request Motivation
helm: Update Chart.template.yaml - add apache 2.0 license annotation
show the license at artifacthub.io
Kind
/kind feature
Release Note
helm: Add apache 2.0 license annotation
Thanks! Feel free to add more annotations.
/approve
/lgtm
/kind cleanup
| gharchive/pull-request | 2023-07-19T12:58:22 | 2025-04-01T04:33:46.208011 | {
"authors": [
"arukiidou",
"inteon"
],
"repo": "cert-manager/cert-manager",
"url": "https://github.com/cert-manager/cert-manager/pull/6225",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2011906628 | Replace deprecated pkcs12 function call with pkcs12.LegacyRC2
This PR is a soft version of https://github.com/cert-manager/cert-manager/pull/6515.
It replaces the deprecated function calls with equivalent non-deprecated function calls.
Kind
/kind cleanup
Release Note
NONE
/retest
Looks to be a legit test failure:
pkg/controller/certificates/issuing/internal/keystore.go:62:16: undefined: pkcs12.LegacyRC2
pkg/controller/certificates/issuing/internal/keystore.go:72:16: undefined: pkcs12.LegacyRC2
Looks to be a legit test failure:
pkg/controller/certificates/issuing/internal/keystore.go:62:16: undefined: pkcs12.LegacyRC2
pkg/controller/certificates/issuing/internal/keystore.go:72:16: undefined: pkcs12.LegacyRC2
Should be fixed now.
| gharchive/pull-request | 2023-11-27T09:56:43 | 2025-04-01T04:33:46.211582 | {
"authors": [
"SgtCoDFish",
"inteon"
],
"repo": "cert-manager/cert-manager",
"url": "https://github.com/cert-manager/cert-manager/pull/6517",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1711505950 | Update helm chart
Update helm chart from 1.8.0 to 1.11.0 #363
Thanks for this PR. Could you also generate the chart and put it in the docs directory?
Thanks so much!!
| gharchive/pull-request | 2023-05-16T08:22:59 | 2025-04-01T04:33:46.240533 | {
"authors": [
"robinschneider",
"techknowlogick"
],
"repo": "cesanta/docker_auth",
"url": "https://github.com/cesanta/docker_auth/pull/364",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2216102380 | List Start Index 0 or 1?
Hi, when I was using list related functions, something got me quite confused.
The SUBSTRING function says that the list start index is 1-based, but the AT function seems to be 0-based. Can you help clarify which is correct? Thank you for your help.
@liumiaowilson
That's right, they work a little bit differently. In the beginning I was trying to match exactly how Salesforce formula fields and CRM Analytics formulas work, a lot of which are 1-based, so they could be easily translatable. But then I introduced the list concept and kept it 0-based to match Apex.
I'll probably go back and clean up the very first few functions that got implemented to make sure things are consistent.
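The mismatch is easy to see side by side. A hypothetical Python sketch of the two conventions (function names here are illustrative, not the Expression library's API):

```python
def at(items, index):
    """0-based element access, matching Apex-style lists."""
    return items[index]

def substring_1based(text, start, length):
    """1-based substring, matching formula-field conventions: position 1
    is the first character, so we shift down by one internally."""
    return text[start - 1:start - 1 + length]

letters = ["a", "b", "c"]
first_via_at = at(letters, 0)                        # 0-based: index 0 is "a"
first_via_substring = substring_1based("abc", 1, 1)  # 1-based: position 1 is "a"
```

Both calls return the first element, but the caller must remember which convention each function uses — hence the confusion reported above.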
| gharchive/issue | 2024-03-30T00:05:51 | 2025-04-01T04:33:46.244601 | {
"authors": [
"cesarParra",
"liumiaowilson"
],
"repo": "cesarParra/expression",
"url": "https://github.com/cesarParra/expression/issues/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
124312621 | links to atom.io (minor issues)
(different than the package.json's repo which points here properly)
package-name (language-autohotkey) inside Atom's Settings -> Packages goes to the atom.io-managed repo by nshakin, which is no longer managed by him
in the same area, your git username is not attached to atom.io
I have no idea how same-name repos behave in atom.io's packages - may need to rename the git repo?
Relevant discussion:
https://discuss.atom.io/t/how-should-we-deal-with-unmaintained-and-deprecated-packages/15407
atom/apm currently supports installing packages from git urls (atom/apm#518).
Before linking this repo to atom.io, a relatively simple way to install this could be
apm install cescue/language-autohotkey
rather than replacing package.json and grammars/autohotkey.cson files. Would you consider updating this info in Readme?
| gharchive/issue | 2015-12-30T07:04:32 | 2025-04-01T04:33:46.251944 | {
"authors": [
"cescue",
"tsaiid",
"vanillaSprinkles"
],
"repo": "cescue/language-autohotkey",
"url": "https://github.com/cescue/language-autohotkey/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1393051461 | Calculation of the gradient of a loss function that requires the calculation of the gradient of a Flux model.
We need to calculate the gradient of a loss function that requires computing the gradient of the energy, which is currently defined by a Flux neural network model. I did not find a clean and performant way to do this. For the moment I am computing the gradient of the neural network model "analytically"; particularly, the gradient of a feed-forward neural network using relu as the activation function (see here). In addition, I had to use Flux.destructure to extract the parameters of the model.
Links related to this issue:
https://discourse.julialang.org/t/how-to-add-norm-of-gradient-to-a-loss-function/69873/16
https://discourse.julialang.org/t/issue-with-zygote-over-forwarddiff-derivative/70824
https://github.com/FluxML/Zygote.jl/issues/953#issuecomment-841882071
Solving this issue enables working with many types of neural network architectures, not only FFNN. For example, it allows experimenting with different models in this script which helps to find the optimal hyperparameters of the model
Progress on https://github.com/cesmix-mit/PotentialLearning.jl/pull/25 by @dallasfoster
See also the following issue in Zygote: https://github.com/FluxML/Zygote.jl/issues/1244.
Based on the PR above, the next example is closer to actual neural potential training. This code addresses reverse mode over reverse mode with the latest version of Flux. It consumes a substantial amount of memory, so it would be good to iterate on it. Months ago something similar happened with the reverse-mode-over-forward-mode approach: it worked, but very slowly.
using Flux

# Range
xrange = 0:π/99:π
xs = [Float32.([x1, x2]) for x1 in xrange for x2 in xrange]

# Target function: E
E_analytic(x) = sin(x[1]) * cos(x[2])

# Analytical gradient of E
dE_analytic(x) = [cos(x[1]) * cos(x[2]), -sin(x[1]) * sin(x[2])]

# NN model
mlp = Chain(Dense(2, 4, Flux.σ), Dense(4, 1))
ps_mlp = Flux.params(mlp)
E(x) = sum(mlp(x))
dE(mlp, x) = sum(gradient(x -> sum(mlp(x)), x))

# Loss
loss(x, y) = Flux.Losses.mse(x, y)

# Training
epochs = 10; opt = Flux.Adam(0.1)
for _ in 1:epochs
    g = gradient(() -> loss(reduce(vcat, dE.([mlp], xs)),
                            reduce(vcat, dE_analytic.(xs))), ps_mlp)
    Flux.Optimise.update!(opt, ps_mlp, g)
    l = loss(reduce(vcat, dE.([mlp], xs)),
             reduce(vcat, dE_analytic.(xs)))
    println("loss:", l)
    # GC.gc()
end
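In equation form, the objective minimized by the training loop above is a gradient-matching loss over the N sample points (up to a constant factor from MSE averaging over all concatenated components):

```latex
L(\theta) = \frac{1}{N}\sum_{i=1}^{N}
\left\lVert \nabla_{x} E_{\theta}(x_i) - \nabla_{x} E_{\mathrm{analytic}}(x_i) \right\rVert^{2}
```

This is what makes the problem second-order: computing the gradient of L with respect to θ requires differentiating through ∇ₓE_θ, i.e. reverse mode over reverse mode.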
Note: NeuralPDE, a project with similarities to this one (training of MLPs using ML Julia abstractions) moved from Flux to Lux.
| gharchive/issue | 2022-09-30T21:43:10 | 2025-04-01T04:33:46.272314 | {
"authors": [
"emmanuellujan"
],
"repo": "cesmix-mit/PotentialLearning.jl",
"url": "https://github.com/cesmix-mit/PotentialLearning.jl/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2706801 | Fixes, new functions, language tests
Fixed functions that handle sequences to work with ArraySequence, not plain old js array.
Added tests for language functions and run them from run.js
Implement visit(SequenceEnumeration) to the compiler to compile {}-type sequences.
Add getNull() to js; implement coalesce() and append() in js and test
them.
Great, thanks Enrique!
| gharchive/issue | 2012-01-03T02:33:30 | 2025-04-01T04:33:46.282272 | {
"authors": [
"chochos",
"gavinking"
],
"repo": "ceylon/ceylon-js",
"url": "https://github.com/ceylon/ceylon-js/issues/6",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
482414034 | Use with vs code via nuget package
Now that support for roslyn analysers and code fixes has been merged into omnisharp (c.f. https://github.com/OmniSharp/omnisharp-roslyn/pull/1076) I would love to be able to use this with vs code.
Would you consider updating the nuget package?
MappingGenerator is mainly based on the code refactorings. Is it supported by VSCode?
The pull request I linked to claims to support "roslyn analyzers and code fixes", which suggests that they are supported. However, I only have partial success.
Quick testing with VS Code with dotnet core 2.2 running on linux shows that I can Ctrl+. on a class name and do things like "Generate constructor" and "Generate Equals and GetHashCode...".
More quick testing shows that adding the existing version of the nuget package MappingGenerator does not enable Ctrl+. for things that I regularly do in Visual Studio with the extension. I don't know why.
So basically, maybe refactorings are supported, and if not, they probably will be soon.
Another approach is suggested by Roslynator that has just released an extension that supports VS Code, c.f. https://marketplace.visualstudio.com/items?itemName=josefpihrt-vscode.roslynator.
Thanks for the research. The current nuget package of MappingGenerator may not be complete (I tried to make it work a while ago but I didn't finish). I will take a look at that in a free moment.
Ok, I've updated the Nuget package. Can you verify how it works on VSCode?
Thank you for your efforts, unfortunately I can't make it work on VS Code on Linux and I can't see why.
I'll try on Windows tomorrow.
@jpeg729 Here's a very good description of how to use refactorings from a VSIX with VSCode. Can you try it and let me know if this actually works for MappingGenerator?
https://www.strathweb.com/2017/05/using-roslyn-refactorings-with-omnisharp-and-visual-studio-code/
Analyzers and CodeFixes should work using NuGet package. Unfortunately, code refactorings - which are used as underlying mechanisms for most of the MappingGenerator features are not supported by NuGet package (Only Rider is able to load refactorings from NugetPackage, not sure about VS - maybe they change something recently).
| gharchive/issue | 2019-08-19T16:39:35 | 2025-04-01T04:33:46.298702 | {
"authors": [
"cezarypiatek",
"jpeg729"
],
"repo": "cezarypiatek/MappingGenerator",
"url": "https://github.com/cezarypiatek/MappingGenerator/issues/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
312744724 | timeSeries featureType with a forecast / reference time dimension?
Dear All,
There's been some question as to whether a timeSeries featureType is allowed to have a "reference time" dimension in addition to a "valid time" dimension as in a forecast model run collection. See: https://github.com/Unidata/thredds/issues/1080
Is "Mandatory space-time coordinates for a collection of these features" of x(i) y(i) t(i,o) in the table here to be interpreted as limited to t(i,o), or not limited to t(i,o)? @cofinoa and I agree that it should be "not limited to", but I wanted to poll the community and get the take of those with a bit more background on the CF convention language.
Thanks!!
Dave
Closure Criteria:
This issue should be closed when this question is answered. It may prompt an additional issue to clarify the text in the CF spec, but I would consider that an additional issue with its own description.
Dear Dave @dblodgett-usgs
This issue is an old one (from two years ago) that has gone dormant, it appears. I have now labelled it as a defect issue because it's a problem with the conventions document being unclear. Since this issue is in the conventions repo, we should make a definite proposal here. (Alternatively, if you would like to have a longer discussion, you could close this issue and open an issue as a question in the discuss repo.)
I agree with you and Antonio @cofinoa in your interpretation, so I would propose that we amend the document by adding another sentence to the caption of Table 9.1: "Other space-time coordinates may be included which are not mandatory." This repeats a point which is already made in the paragraph just below the table, but there's no harm in making it more prominent by repetition, I think. Also, I propose that we insert your example in the relevant sentence in that paragraph, so it would read "However, a featureType may also include other space-time coordinates which are not mandatory (notably the z coordinate, and for instance a forecast_reference_time coordinate in addition to a mandatory time coordinate)."
If no-one objects to this being treated as a correction to a defect, or to my proposed correction, then it will be accepted in three weeks from now (6th Jan).
Jonathan
I agree with the "not limited to" interpretation, and am happy with Jonathan's proposed text.
Thanks, David
Please could someone merge this pull request, which qualifies for acceptance today. Thanks.
@JonathanGregory which PR is associated with this issue? I don't see one linked.
Aha! Quite right. I did only half the job. :-) I will write the PR.
The PR is https://github.com/cf-convention/cf-conventions/pull/346
Thanks @JonathanGregory . If you have no objections, I'll set a reminder to myself to merge this in 3 weeks (2022-01-27) so that interested parties can comment on the small modifications to the updated draft in good time.
Thanks
Thanks, Jonathan - The PR looks fine to me.
| gharchive/issue | 2018-04-10T01:47:51 | 2025-04-01T04:33:46.305968 | {
"authors": [
"JonathanGregory",
"davidhassell",
"dblodgett-usgs",
"erget"
],
"repo": "cf-convention/cf-conventions",
"url": "https://github.com/cf-convention/cf-conventions/issues/129",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
330248508 | Review repository "labels"
The labels list is good but could use some modifications?
Suggestion:
Remove: asciidoctor mod?, bug, invalid, simple.
Add: typo, style
This issue is related to #130
+! I was just thinking that asciidoctor mod should be replaced by something bout style. So perfect.
:-)
updated the labels
| gharchive/issue | 2018-06-07T12:23:36 | 2025-04-01T04:33:46.308907 | {
"authors": [
"ChrisBarker-NOAA",
"dblodgett-usgs"
],
"repo": "cf-convention/cf-conventions",
"url": "https://github.com/cf-convention/cf-conventions/issues/131",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
No login verification on the admin interface after setting the login-verification environment variable
After setting the environment variable, you need to redeploy for it to take effect.
After setting the environment variable, you need to redeploy for it to take effect.
I tried redeploying several times and it still doesn't work.
| gharchive/issue | 2023-02-18T13:27:40 | 2025-04-01T04:33:46.310644 | {
"authors": [
"a2603802339",
"cf-pages"
],
"repo": "cf-pages/Telegraph-Image",
"url": "https://github.com/cf-pages/Telegraph-Image/issues/34",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1324331517 | Render only level 12 of layer 0
I want to render only level 12 of layer 0. How?!
It is not supported right now. But I could consider adding an optional feature to skip a few bottom levels when writing images to disk, which can save some IO time.
As the omit_levels option now supports this, I'll close this one.
| gharchive/issue | 2022-08-01T12:20:59 | 2025-04-01T04:33:46.316274 | {
"authors": [
"Lu5ck",
"cff29546"
],
"repo": "cff29546/pzmap2dzi",
"url": "https://github.com/cff29546/pzmap2dzi/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
58918586 | Switch out term facets for aggregations
NOTE: There is a complementary cfgov-refresh PR that should be merged at the same time as this one: https://github.com/cfpb/cfgov-refresh/pull/285
Facets are deprecated in ElasticSearch and will be removed in a future release. This migrates them to aggregations which work pretty much the same way as facets.
Sites that previously used the terms facet with sheer will need to make slight modifications to their code. The iterable returned by possible_values_for now looks like {"key": "term-you-want"} instead of {"term": "term-you-want"}.
Good catch, @willbarton and @dpford
:+1: I don't see any issues here.
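The client-side change amounts to reading "key" from aggregation buckets where facet entries used "term". A Python sketch following the shape of an Elasticsearch terms-aggregation response (the "tags" aggregation name and the function name are illustrative, not sheer's actual API):

```python
def possible_values_for(agg_response):
    """Facets returned entries keyed by "term"; terms aggregations return
    buckets keyed by "key" (plus "doc_count"). This adapts an aggregation
    response to the {"key": ...} shape described in the PR."""
    buckets = agg_response["aggregations"]["tags"]["buckets"]
    return [{"key": b["key"]} for b in buckets]

response = {"aggregations": {"tags": {"buckets": [
    {"key": "mortgages", "doc_count": 12},
    {"key": "loans", "doc_count": 7},
]}}}
values = possible_values_for(response)
```

Any template that previously looped over `value["term"]` would need the one-word change to `value["key"]`.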
| gharchive/pull-request | 2015-02-25T15:31:19 | 2025-04-01T04:33:46.324751 | {
"authors": [
"dpford",
"rosskarchner",
"willbarton"
],
"repo": "cfpb/sheer",
"url": "https://github.com/cfpb/sheer/pull/88",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2523967422 | 🛑 Casey is down
In 404eb5f, Casey (https://caseyburrus.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Casey is back up in e2a300b after 20 minutes.
| gharchive/issue | 2024-09-13T06:19:54 | 2025-04-01T04:33:46.369047 | {
"authors": [
"chadburrus"
],
"repo": "chadburrus/upptime",
"url": "https://github.com/chadburrus/upptime/issues/1193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2069189342 | 🛑 Enchanted Vacations is down
In f6acb78, Enchanted Vacations (https://yourenchantingvacation.com) was down:
HTTP code: 521
Response time: 66 ms
Resolved: Enchanted Vacations is back up in 3ff9181 after 17 minutes.
| gharchive/issue | 2024-01-07T16:15:03 | 2025-04-01T04:33:46.371405 | {
"authors": [
"chadburrus"
],
"repo": "chadburrus/upptime",
"url": "https://github.com/chadburrus/upptime/issues/691",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2152814209 | Claims telescope dependency but doesn't include in lazy config as a dependency
README says that you need Telescope.nvim but it is not listed as a dependency in lazy.nvim config.
Umm, that's my mistake for not documenting this properly; it will be resolved in the next commit.
| gharchive/issue | 2024-02-25T15:26:20 | 2025-04-01T04:33:46.372880 | {
"authors": [
"Makaze",
"chadcat7"
],
"repo": "chadcat7/prism",
"url": "https://github.com/chadcat7/prism/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244070992 | Add word2vec tutorial
Related to #2576
Write the word2vec tutorial by writing the explanation of chainer/examples/word2vec/.
@Crissman Thank you for the review. I fixed the document as you said.
@Crissman Thank you for the review. What should I do next?
@mitmul add the equation for the skipgram and CBoW.
I'd like to refine the figures in this tutorial. Please wait a while before merging.
I added some explanations and fixed some descriptions. Submitting those changes as review comments is an inefficient way, so I wrote the modified version here: https://github.com/mitmul/chainer-notebooks/blob/master/5_word2vec.ipynb
All the code have been checked and those work fine. You can see the outputs of the code in the above jupyter notebook.
@mitmul I merged the text from the Jupyter notebook. But I just use literalinclude for the code blocks because I want to keep compatibility with the example code. How about this?
@keisuke-umezawa If you just replace the code blocks with "literalinclude", that's OK for me. Please let me check the content once more. Thanks.
Jenkins, test this please
Jenkins, test this please
LGTM
| gharchive/pull-request | 2017-07-19T14:58:35 | 2025-04-01T04:33:46.392487 | {
"authors": [
"keisuke-umezawa",
"mitmul"
],
"repo": "chainer/chainer",
"url": "https://github.com/chainer/chainer/pull/3040",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
253966856 | Avoid unwanted output of assert_allclose failure
Related to #2936.
If error information is dumped directly to stdout, there's a problem when using @testing.retry: intermediate failure information is dumped even if a subsequent trial succeeded.
I fixed so that information is put into the exception.
AssertionError:
Not equal to tolerance rtol=0.0001, atol=1e-05
(mismatch 100.0%)
x: array(-0.01787613206619142)
y: array(-0.01786161959171295, dtype=float32)
assert_allclose failed:
shape: () ()
dtype: float64 float32
i: (0,)
x[i]: -0.01787613206619142
y[i]: -0.01786161959171295
err[i]: 1.451247447846818e-05
x: -0.01787613206619142
y: -0.01786161959171295
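The pattern of carrying the diagnostic in the exception instead of printing it (a simplified sketch, not Chainer's actual implementation) looks like:

```python
import numpy as np

def assert_allclose(x, y, rtol=1e-4, atol=1e-5):
    # Attach the extra diagnostic to the AssertionError rather than
    # writing it to stdout, so a retry decorator that swallows the
    # failure leaves no stray output when a later trial succeeds.
    try:
        np.testing.assert_allclose(x, y, rtol=rtol, atol=atol)
    except AssertionError as e:
        info = "assert_allclose failed:\n  shape: {} {}\n  dtype: {} {}".format(
            np.shape(x), np.shape(y),
            np.asarray(x).dtype, np.asarray(y).dtype)
        raise AssertionError("{}\n{}".format(e, info))
```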
Test has passed
LGTM
| gharchive/pull-request | 2017-08-30T11:33:54 | 2025-04-01T04:33:46.394325 | {
"authors": [
"niboshi",
"unnonouno"
],
"repo": "chainer/chainer",
"url": "https://github.com/chainer/chainer/pull/3277",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
465573030 | Reverse input array for non-contiguous tests
This PR reproduces the bug: #7726.
~Merge #7727 first.~
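Reversing an array with a negative-stride slice is a standard way to get a non-contiguous input for such tests, e.g.:

```python
import numpy as np

a = np.arange(6, dtype=np.float32)
b = a[::-1]  # reversed view with a negative stride: non-contiguous
assert not b.flags["C_CONTIGUOUS"]
assert b[0] == 5  # same data, reversed order
```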
CIs, test this please.
Jenkins CI test (for commit 94519834941683a3b9a7fbf5f57d298a1e120ac6, target branch master) failed with status FAILURE.
@toslunar This pull-request is marked as st:test-and-merge, but there were no activities for the last 3 days. Could you check?
Jenkins, test this please.
Jenkins CI test (for commit 94519834941683a3b9a7fbf5f57d298a1e120ac6, target branch master) succeeded!
| gharchive/pull-request | 2019-07-09T05:14:20 | 2025-04-01T04:33:46.396566 | {
"authors": [
"asi1024",
"chainer-ci",
"toslunar"
],
"repo": "chainer/chainer",
"url": "https://github.com/chainer/chainer/pull/7728",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
477131491 | Fix chainerx.logsumexp test tolerance
Fixes #7861
Ran 10000 trials without error.
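For context, a minimal NumPy reference for the function under test (not ChainerX's implementation); the max-subtraction trick is also where small floating-point differences, and hence test tolerances, come from:

```python
import numpy as np

def logsumexp(x):
    # Subtract the max before exponentiating for numerical stability.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))
```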
CIs, test this please.
Jenkins CI test (for commit 18868c0b250d17b64b59c8a630454a21678ce47d, target branch master) failed with status FAILURE.
The test failures are unrelated to the PR.
| gharchive/pull-request | 2019-08-06T02:04:45 | 2025-04-01T04:33:46.397992 | {
"authors": [
"chainer-ci",
"niboshi",
"toslunar"
],
"repo": "chainer/chainer",
"url": "https://github.com/chainer/chainer/pull/7867",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
269111721 | Fix grammatical mistakes in doc
I do not intend to fix all grammar errors with this PR.
@yuyu2172 Could you resolve the conflict?
| gharchive/pull-request | 2017-10-27T13:43:18 | 2025-04-01T04:33:46.398779 | {
"authors": [
"Hakuyume",
"yuyu2172"
],
"repo": "chainer/chainercv",
"url": "https://github.com/chainer/chainercv/pull/477",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
390939748 | Add asset link button
Put an Assets button that links to the /assets API on each result table row and on the result detail page.
LGTM!
| gharchive/pull-request | 2018-12-14T02:24:56 | 2025-04-01T04:33:46.399898 | {
"authors": [
"disktnk",
"ofk"
],
"repo": "chainer/chainerui",
"url": "https://github.com/chainer/chainerui/pull/242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
339275190 | Add Contribution guide
We need to add templates for code and the README.
I think we need to create a template for PRs.
| gharchive/issue | 2018-07-09T01:06:08 | 2025-04-01T04:33:46.400634 | {
"authors": [
"aonotas"
],
"repo": "chainer/models",
"url": "https://github.com/chainer/models/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2472059257 | @zag-js/svelte peerDependencies is misconfigured
🐛 Bug report
The peerDependencis in the @zag-js/svelte package specifies the following:
"peerDependencies": {
"svelte": ">=5.0.0"
},
This is not correct as Svelte 5 is not out yet.
I know the svelte package is technically not officially public but so far it's been working very well.
💥 Steps to reproduce
1. Go to the package.json of @zag-js/svelte
2. Go down to the peerDependencies field
3. View the wrongly configured svelte version
💻 Link to reproduction
Not Relevant
🧐 Expected behavior
It should be configured correctly until Svelte releases 5.0.0
🧭 Possible Solution
Since Svelte 5 is currently still in RC it uses next versioning, so the peerDependencies should actually be:
"peerDependencies": {
"svelte": "^5.0.0-next.1"
},
🌍 System information
| Software | Version(s) |
| --- | --- |
| Zag Version | 0.65.0 |
| Browser | n/a |
| Operating System | n/a |
📝 Additional information
It causes npm warnings:
Good catch @Hugos68.
Feel free to open a PR. Will file a release once it's merged.
Submitted a PR: #1785
| gharchive/issue | 2024-08-18T16:56:38 | 2025-04-01T04:33:46.436831 | {
"authors": [
"Hugos68",
"segunadebayo"
],
"repo": "chakra-ui/zag",
"url": "https://github.com/chakra-ui/zag/issues/1772",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
469287257 | Adding selectors with multiple html elements will not work more than once
Selectors such as ol li or ol > li will only be added once
This might have been fixed when I refactored the StyleSheet and Hash classes. Need to check.
| gharchive/issue | 2019-07-17T15:26:59 | 2025-04-01T04:33:46.486159 | {
"authors": [
"chanan"
],
"repo": "chanan/BlazorStyled",
"url": "https://github.com/chanan/BlazorStyled/issues/11",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
477982959 | Documentation: Fix mobile styles
The left nav needs to "collapse" when in mobile
Done in the next doc updates
| gharchive/issue | 2019-08-07T15:00:38 | 2025-04-01T04:33:46.486935 | {
"authors": [
"chanan"
],
"repo": "chanan/BlazorStyled",
"url": "https://github.com/chanan/BlazorStyled/issues/25",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1228572283 | Incorrect error code thrown with FileSizeLimitExceededError
Hi!
I found that FileSizeLimitExceededError exception contains incorrect ErrorCode - FileSizeMinimumNotMet instead of FileSizeMaximumExceeded
Please, check it out: https://github.com/chanced/filedrop-svelte/blob/bae3e5f25dbd2cee2dcbd6d8b35e35bfba0c0db7/src/lib/errors.ts#L56-L64
Sorry about that! This was resolved and pushed as version 0.1.2.
| gharchive/issue | 2022-05-07T08:50:16 | 2025-04-01T04:33:46.489843 | {
"authors": [
"Antosik",
"chanced"
],
"repo": "chanced/filedrop-svelte",
"url": "https://github.com/chanced/filedrop-svelte/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166371072 | Updated countries list
Updated countries ISO 3166 list from https://github.com/umpirsky/country-list/blob/master/data/en_US/country.json
E.g., update some removed codes such as "CS" https://en.wikipedia.org/wiki/ISO_3166-2:CS
Changes Unknown when pulling c2de7bcb8e1222382f1c7d206a9e629184d527ea on gbourel:master into ** on chancejs:master**.
Thanks @gbourel !
FYI this is live on npm as version 1.0.4
Thanks a lot for your responsiveness and this great lib!
Cheers,
GB
Le 19/07/2016 20:33, Victor Quinn a écrit :
FYI this is live on npm as version 1.0.4
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/chancejs/chancejs/pull/269#issuecomment-233724854,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AFu6u8m3Xr_bnu27Hl6GOAfSJsblGnAnks5qXRh_gaJpZM4JP5Qc.
| gharchive/pull-request | 2016-07-19T16:08:52 | 2025-04-01T04:33:46.494727 | {
"authors": [
"coveralls",
"gbourel",
"victorquinn"
],
"repo": "chancejs/chancejs",
"url": "https://github.com/chancejs/chancejs/pull/269",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2267486634 | Can it instantly make a zip?
ChatGPT only accepts zip if content is too large.
I just ran git2gpt on this repo https://github.com/graphql/graphql-spec
and got
I can't attach this as context
I need .zip put into my clipboard that I can paste.
Or I am confused how to work with this.
Thanks.
yea I just got
trying this
yea its failing
I don't know how I can instantly turn a git repo into something I can then instantly paste into chatgpt for context
thank you for any help 🖤
tried to manually add zip too, it ignores the zip completely as context
yea super confused
seems like I have to tell gpt to use the uploaded content, it doesn't do it automatically
still point stands I'd like to have command similar to git2gpt . that will put into clipboard a .zip I can paste into GPT as context (I don't want to go through manually choosing the .zip in file picker (if I can avoid it)
Here's a bash script that should accomplish what you're looking for.
# Only works on Mac!!!!!
# https://apple.stackexchange.com/a/15542
file-to-clipboard() {
osascript \
-e 'on run args' \
-e 'set the clipboard to POSIX file (first item of args)' \
-e end \
"$@"
}
git ls-files | zip ~/repo.zip -@ && file-to-clipboard ~/repo.zip
This will let you paste as a file in applications like ChatGPT or Telegram.
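For what it's worth, the zipping half can also be done portably in Python (a sketch; clipboard integration stays OS-specific, and the file list would come from `git ls-files`):

```python
import zipfile

def zip_files(files, out_path="repo.zip"):
    # Zip only the listed (e.g. git-tracked) files, mirroring
    # `git ls-files | zip -@` from the shell snippet above.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f)
    return out_path
```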
turned it into fish shell
function fileToClipboard
osascript \
-e 'on run args' \
-e 'set the clipboard to POSIX file (first item of args)' \
-e end \
$argv
end
# turn current folder into zip I can ask questions on with GPT
# TODO: name should not be `repo.zip` but name of repo + date
function .f
git ls-files | zip ~/do/repo.zip -@; and fileToClipboard ~/do/repo.zip
# rm -rf ~/process/repo.zip
end
But for some reason either fish shell or bash solution you have fails to upload to ChatGPT.
Made a video recording of it here with more context: https://www.loom.com/share/972fe7714a894376b5624c770017666f
This happens in bash too with:
file-to-clipboard() {
osascript \
-e 'on run args' \
-e 'set the clipboard to POSIX file (first item of args)' \
-e end \
"$@"
}
git ls-files | zip ~/repo.zip -@ && file-to-clipboard ~/repo.zip
tested on this repo https://github.com/openai/openai-assistants-quickstart
and on GPT-4.
Does it work when you download the zip from GitHub?
Yea I guess its issue of zip itself indeed.
going to trying some other repos with zip, not really sure why it fails, its not even a large repo 🤔
Going to close since this issue seems unrelated to core functionality.
| gharchive/issue | 2024-04-28T10:55:30 | 2025-04-01T04:33:46.503472 | {
"authors": [
"chand1012",
"nikitavoloboev"
],
"repo": "chand1012/git2gpt",
"url": "https://github.com/chand1012/git2gpt/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1154542224 | Hid edit button, removed logo from nav, fixed nav alignment, replaced…
… text with logo on homepage, removed non-functional calendar from category pages, extended session to be 1 day
GOOD JOB!
| gharchive/pull-request | 2022-02-28T21:00:02 | 2025-04-01T04:33:46.505952 | {
"authors": [
"chandrapanda",
"thenickedwards"
],
"repo": "chandrapanda/happy-habit-tracker",
"url": "https://github.com/chandrapanda/happy-habit-tracker/pull/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
855993024 | Is there a more convenient way to add whitelist entries in bulk?
The current way of adding whitelist entries is rather restrictive. Assuming a Docker deployment, if I want to add someone to the whitelist I have to kill the image and start a new one; every change and addition is inconvenient.
Can this be configured through a configuration file?
It can be configured in the YAML configuration file; with Docker it is /root/.chanify.yml by default. To change the file location you can use the --config parameter.
server:
register:
enable: false
whitelist:
- <user id 1>
- <user id 2>
...
However, at the moment the service needs to be restarted after the configuration file is modified before it will reload the file.
Long-press the scan-QR-code icon at the top right of the app's home page and a menu will pop up; it includes an "add channel" feature.
Each newly created channel has a token in its details; every channel's token is different, and Chanify routes messages to the corresponding channel based on the token.
Long-press the scan-QR-code icon at the top right of the app's home page and a menu will pop up; it includes an "add channel" feature.
Each newly created channel has a token in its details; every channel's token is different, and Chanify routes messages to the corresponding channel based on the token.
OK, thanks! That's an advanced trick; I really wouldn't have known without being told. Much appreciated!
You're welcome. As for the whitelist, I'll consider adding a web management page later to simplify things, but that may take a while.
My current plan is to finish polishing the basic push notification features before adding functionality in this area.
You're welcome. As for the whitelist, I'll consider adding a web management page later to simplify things, but that may take a while.
My current plan is to finish polishing the basic push notification features before adding functionality in this area.
OK, looking forward to it!
The web UI could probably take some inspiration from gotify's.
| gharchive/issue | 2021-04-12T13:43:36 | 2025-04-01T04:33:46.509994 | {
"authors": [
"Jonnyan404",
"wizjin"
],
"repo": "chanify/chanify",
"url": "https://github.com/chanify/chanify/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244647579 | redirect "back"
Is there anyway to redirect to previous page in Sanic?
I am not sure what you're looking for... maybe https://github.com/channelcat/sanic/blob/master/sanic/response.py#L394 ?
Keeping track of the history should be done by the client (e.g. in the browser). I'm going to close this but feel free to reopen it if needed.
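A minimal, framework-agnostic sketch of the "redirect back" idea; in practice the chosen path would be handed to a redirect response such as the one linked above:

```python
def redirect_back(headers, fallback="/"):
    # "Back" is client state; the best a server can do is echo the
    # Referer header when the browser sent one, else use a fallback.
    return headers.get("referer", fallback)
```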
| gharchive/issue | 2017-07-21T11:56:15 | 2025-04-01T04:33:46.515397 | {
"authors": [
"r0fls",
"shiravand",
"yunstanford"
],
"repo": "channelcat/sanic",
"url": "https://github.com/channelcat/sanic/issues/859",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1691007992 | get_anndata() / get_seurat() should set uns with census version info
Description
cellxgene_census._get_anndata.get_anndata() and cellxgene.census::get_seurat() should set uns with census version info.
Context
Impact
Helps ensure reproducibility, if only an exported anndata object is available.
Alternatives you've considered
Ideal behavior
This should be cross-language; I edited the top-level comment and title.
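A sketch of what attaching the version info could look like on the Python side (the key names here are assumptions, not a final API):

```python
def attach_census_info(uns, census_version, retrieved_at):
    # Record provenance in .uns so an exported AnnData object stays
    # reproducible even when only the .h5ad file is available.
    out = dict(uns)
    out["census_version"] = census_version
    out["census_retrieved_at"] = retrieved_at
    return out
```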
| gharchive/issue | 2023-05-01T16:04:47 | 2025-04-01T04:33:46.520287 | {
"authors": [
"atolopko-czi",
"pablo-gar"
],
"repo": "chanzuckerberg/cellxgene-census",
"url": "https://github.com/chanzuckerberg/cellxgene-census/issues/440",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2633763532 | fix: table hover state
#1283
Fixes table link hover state to show darker color when hovering over the link or row
Adds caret that shows next to the link on hover
Demo
Looks great!
| gharchive/pull-request | 2024-11-04T20:23:05 | 2025-04-01T04:33:46.535706 | {
"authors": [
"codemonkey800",
"kev-zunshiwang"
],
"repo": "chanzuckerberg/cryoet-data-portal",
"url": "https://github.com/chanzuckerberg/cryoet-data-portal/pull/1300",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
364264107 | Download taxid descriptions
Download with a batch size of 100 and 16 threads.
Ready for review~
You don't need an account for Entrez; you just need to put an email there.
@yunfangjuan you could change to accounts@idseq.net if you like
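The batch-plus-threads approach from the description can be sketched generically (batch size and worker count as stated above; `fetch_batch` is a stand-in for the actual Entrez call):

```python
from concurrent.futures import ThreadPoolExecutor

def batches(items, size=100):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def fetch_descriptions(taxids, fetch_batch, workers=16):
    # Download taxid descriptions in batches of 100 across 16 threads;
    # pool.map preserves the input order of the batches.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(fetch_batch, batches(taxids, 100))
    return [d for batch in results for d in batch]
```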
| gharchive/pull-request | 2018-09-27T00:47:41 | 2025-04-01T04:33:46.537518 | {
"authors": [
"jshoe",
"yunfangjuan"
],
"repo": "chanzuckerberg/idseq-dag",
"url": "https://github.com/chanzuckerberg/idseq-dag/pull/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
975926118 | feat: Add INSERT_ONLY option to streams
Add INSERT_ONLY as a DDL option to streams resource in support of insert only streams on external tables.
Test Plan
[x] unit tests
Note: I didn't see any validation to help avoid conflicting settings in other modules (e.g. APPEND_ONLY and INSERT_ONLY), so I didn't add any here, as I imagine that is a purposeful decision.
References
https://docs.snowflake.com/en/sql-reference/sql/create-stream.html#creating-an-insert-only-stream-on-an-external-table
/ok-to-test sha=5df4f89
Sorry about that - didn't realize there were other changes needed to be made in resources patching now.
@alldoami these tests should be ready trigger again:
➜ terraform-provider-snowflake git:(feature/add-insert-only-option-snowflake-streams) SKIP_MANAGED_ACCOUNT_TEST=1 TF_ACC=1 go test -run '.*?Stream.*' -v -coverprofile=coverage.txt -covermode=atomic ./...
? github.com/chanzuckerberg/terraform-provider-snowflake [no test files]
=== RUN TestAccStreams
=== PAUSE TestAccStreams
=== CONT TestAccStreams
streams_acceptance_test.go:17: Step 1/1 error: Error running pre-apply refresh: exit status 1
Error: Missing required argument
The argument "username" is required, but was not set.
Error: Missing required argument
The argument "account" is required, but was not set.
--- FAIL: TestAccStreams (0.50s)
FAIL
coverage: 4.2% of statements
FAIL github.com/chanzuckerberg/terraform-provider-snowflake/pkg/datasources 0.883s
? github.com/chanzuckerberg/terraform-provider-snowflake/pkg/db [no test files]
testing: warning: no tests to run
PASS
coverage: 0.0% of statements
ok github.com/chanzuckerberg/terraform-provider-snowflake/pkg/provider 0.252s coverage: 0.0% of statements [no tests to run]
=== RUN TestStreamIDFromString
--- PASS: TestStreamIDFromString (0.00s)
=== RUN TestStreamStruct
--- PASS: TestStreamStruct (0.00s)
=== RUN TestStreamOnTableIDFromString
--- PASS: TestStreamOnTableIDFromString (0.00s)
=== RUN TestAcc_Stream
=== PAUSE TestAcc_Stream
=== RUN TestAccStreamGrant_basic
stream_grant_acceptance_test.go:21: Step 1/1 error: Error running pre-apply refresh: exit status 1
Error: Missing required argument
The argument "username" is required, but was not set.
Error: Missing required argument
The argument "account" is required, but was not set.
--- FAIL: TestAccStreamGrant_basic (0.46s)
=== RUN TestAccStreamGrante_future
stream_grant_acceptance_test.go:44: Step 1/1 error: Error running pre-apply refresh: exit status 1
Error: Missing required argument
The argument "account" is required, but was not set.
Error: Missing required argument
The argument "username" is required, but was not set.
--- FAIL: TestAccStreamGrante_future (0.46s)
=== RUN TestStreamGrant
--- PASS: TestStreamGrant (0.00s)
=== RUN TestStreamGrantCreate
--- PASS: TestStreamGrantCreate (0.00s)
=== RUN TestStreamGrantRead
--- PASS: TestStreamGrantRead (0.00s)
=== RUN TestFutureStreamGrantCreate
--- PASS: TestFutureStreamGrantCreate (0.00s)
=== RUN TestStream
--- PASS: TestStream (0.00s)
=== RUN TestStreamCreate
--- PASS: TestStreamCreate (0.00s)
=== RUN TestStreamRead
--- PASS: TestStreamRead (0.00s)
=== RUN TestStreamDelete
--- PASS: TestStreamDelete (0.00s)
=== RUN TestStreamUpdate
--- PASS: TestStreamUpdate (0.00s)
=== CONT TestAcc_Stream
stream_acceptance_test.go:15: Step 1/1 error: Error running pre-apply refresh: exit status 1
Error: Missing required argument
The argument "username" is required, but was not set.
Error: Missing required argument
The argument "account" is required, but was not set.
--- FAIL: TestAcc_Stream (0.44s)
FAIL
coverage: 5.5% of statements
FAIL github.com/chanzuckerberg/terraform-provider-snowflake/pkg/resources 1.726s
=== RUN TestStreamCreate
--- PASS: TestStreamCreate (0.00s)
=== RUN TestStreamChangeComment
--- PASS: TestStreamChangeComment (0.00s)
=== RUN TestStreamRemoveComment
--- PASS: TestStreamRemoveComment (0.00s)
=== RUN TestStreamDrop
--- PASS: TestStreamDrop (0.00s)
=== RUN TestStreamShow
--- PASS: TestStreamShow (0.00s)
PASS
coverage: 1.7% of statements
ok github.com/chanzuckerberg/terraform-provider-snowflake/pkg/snowflake 0.298s coverage: 1.7% of statements
? github.com/chanzuckerberg/terraform-provider-snowflake/pkg/testhelpers [no test files]
testing: warning: no tests to run
PASS
coverage: 0.0% of statements
ok github.com/chanzuckerberg/terraform-provider-snowflake/pkg/validation 0.174s coverage: 0.0% of statements [no tests to run]
? github.com/chanzuckerberg/terraform-provider-snowflake/pkg/version [no test files]
FAIL
/ok-to-test sha=dd06679
Can you run make docs :)
Hey @alldoami ran make docs! Thanks again for bearing with me on this
/ok-to-test sha=84a6234
| gharchive/pull-request | 2021-08-20T21:15:44 | 2025-04-01T04:33:46.549453 | {
"authors": [
"alldoami",
"momer"
],
"repo": "chanzuckerberg/terraform-provider-snowflake",
"url": "https://github.com/chanzuckerberg/terraform-provider-snowflake/pull/655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
544637825 | Incompatibility with the experimental result in the paper
With the code in the 'init' branch, I get an RMSE on the delaney dataset of 0.9458+/-0.0266, which differs from the result of 0.769 in the original paper. I think that maybe it is the hyperparameter configuration that causes this incompatibility. So, what is the exact hyperparameter configuration for training and testing on the delaney dataset?
Hi @Dinxin
There could be two possible reasons.
As pointed out in #1 (just fixed), the graph_util.py is missing. It contains functions for data preprocessing. (which makes me wonder if you are using your own preprocessing functions?)
I added the optimal hyper-parameter for Delaney in the latest commit. Though it has only been tested on CPU (the GPU server hasn't been back yet), the results match with Table 13 in the paper.
The optimal hyper-parameters for other tasks will be added.
I am using the 'graph_util.py' from the supplement in the url 'https://papers.nips.cc/paper/9054-n-gram-graph-simple-unsupervised-representation-for-graphs-with-applications-to-molecules'.
@Dinxin
Hi, does this solve your problem? If so, I'll close this issue.
Otherwise, feel free to let me know if you have more questions.
Hi, I used the hyperparameters in the repo, but I can't reproduce the delaney result either. My result is 0.84.
| gharchive/issue | 2020-01-02T16:00:31 | 2025-04-01T04:33:46.554127 | {
"authors": [
"Dinxin",
"chao1224",
"kexul"
],
"repo": "chao1224/n_gram_graph",
"url": "https://github.com/chao1224/n_gram_graph/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
371292340 | [use_cases] Add directory and description for use cases.
This is a preliminary version, to start working with use cases.
Isn't this what we would put into the "Focus Areas" Folders if we are following the D&I Structure?
Following up on my comment just now: underneath "Focus Areas" I see the specific metrics defined. I know we have discussed using Use Cases, but I do not see them in the D&I Structure. @GeorgLink: Can you advise regarding how D&I is handling use case definitions and if any are committed to the repository yet? Or, if the use cases are embedded in your metrics definitions?
As far as I know, D&I are not using use cases in this way. The idea here came from several people in CHAOSS having interesting experiences using (or wanting to use) metrics, which could be useful for defining the questions and framing the metrics.
The idea is for these use cases to be orthogonal to the "focus area | goal | question | metric" structure (hence the different directory).
In D&I, we do not use the term 'use case'. What we include on our resource pages are 'objectives', which are not full fledged use cases but provide rationale for why to use metrics. Example
How does the use case compare to our idea of the blog post series Metrics in Use (which we still need to launch)?
How does the use case compare to our idea of the blog post series Metrics in Use (which we still need to launch)?
I think most use cases could be the basis for a blog post.
One suggestion: Point to our blog post series as a possibility for disseminating the use case. https://docs.google.com/document/d/1p9FZM6rixjiEsxXQ7Ij-mbGCJKm_OrOQ6nd3oIBRnto/edit#heading=h.i08ikslakwjv
Good idea. I'm going to do it.
| gharchive/pull-request | 2018-10-17T22:32:32 | 2025-04-01T04:33:46.568831 | {
"authors": [
"GeorgLink",
"jgbarah",
"sgoggins"
],
"repo": "chaoss/wg-gmd",
"url": "https://github.com/chaoss/wg-gmd/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1805400705 | 🛑 Silky.Network - ca-3 is down
In fb7930d, Silky.Network - ca-3 (https://ca-3-mirror.chaotic.cx/no-failover/chaotic-aur/lastupdate) was down:
HTTP code: 503
Response time: 72 ms
Resolved: Silky.Network - ca-3 is back up in 69a1569.
| gharchive/issue | 2023-07-14T19:07:59 | 2025-04-01T04:33:46.571506 | {
"authors": [
"Chaotic-Temeraire"
],
"repo": "chaotic-aur/chaotic-uptimes",
"url": "https://github.com/chaotic-aur/chaotic-uptimes/issues/2695",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1930231660 | 🛑 Silky.Network - br-4 is down
In 577b349, Silky.Network - br-4 (https://gru-br-mirror.silky.network) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Silky.Network - br-4 is back up in 0868266 after 57 minutes.
| gharchive/issue | 2023-10-06T13:53:56 | 2025-04-01T04:33:46.573973 | {
"authors": [
"Chaotic-Temeraire"
],
"repo": "chaotic-aur/chaotic-uptimes",
"url": "https://github.com/chaotic-aur/chaotic-uptimes/issues/5186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1125483953 | Precomputed flows for DAVIS 2016?
Hi,
I'm interested in your work but have some issues to run the code.
So should precomputed flows be generated for DAVIS 2016? In raft, when I run the code of run_inference.py, it only generates some empty folders inside DAVIS folder (for raft I have followed the requirement instruction via conda).
Could you check run_inference.py and predict.py to see why?
Thanks,
Hi, thanks for interest in our work.
Mine works fine. I suspect it might be directory?
Make sure your path structure is something like
/path/to/DAVIS2016/JPEGImages to where it contains the images
and you use
data_path = "/path/to/DAVIS2016" in line 4 of run_inference.py
Let me know if it doesn't work and I'll debug further.
Hi,
I think I've fixed it. Just add '/480p' or '/1080p' following '/JPEGImages' at line 7 of run_inference.py.
Thanks,
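For reference, the corrected path construction amounts to (the base path is the placeholder used earlier in the thread):

```python
import os

data_path = "/path/to/DAVIS2016"   # as in line 4 of run_inference.py
resolution = "480p"                # or "1080p"
img_dir = os.path.join(data_path, "JPEGImages", resolution)
```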
| gharchive/issue | 2022-02-07T04:12:33 | 2025-04-01T04:33:46.670339 | {
"authors": [
"charigyang",
"edizhuang"
],
"repo": "charigyang/motiongrouping",
"url": "https://github.com/charigyang/motiongrouping/issues/10",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
88190268 | Error -1
The call of new TesseractEngine(@"C:\Test", "en", EngineMode.Default))
cause the error -1. The content of "C:\Test" is the folder "tessdata" with english language files insides (version 3.02). As the assemblie in NuGet manager is the version 2.3, I tried with the version of 2.x language files, I removed the TESSERA_PREFIX variable but without any success
I work with Visual studi 2013 c# express for desktop.
Thank's for your help !
Can you please include the System.Diagnostics trace (see https://github.com/charlesw/tesseract/wiki/Error-2 for an example and instructions) and the console output if possible.
Also triple-check that TESSDATA_PREFIX is not defined on the system.
Here is the diagnostic trace:
===============================================================================
Tesseract Information: 0 : Current OS: Windows
Tesseract Information: 0 : Current platform: x86
Tesseract Information: 0 : Custom search path is not defined, skipping.
Tesseract Information: 0 : Checking executing application domain location 'C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug' for 'liblept168.dll' on platform x86.
Tesseract Information: 0 : Trying to load native library "C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug\x86\liblept168.dll"...
Tesseract Information: 0 : Successfully loaded native library "C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug\x86\liblept168.dll", handle = 260964352.
Tesseract Information: 0 : Trying to load native function "pixaReadMultipageTiff" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixaReadMultipageTiff", function handle = 261804656.
Tesseract Information: 0 : Trying to load native function "pixaGetCount" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixaGetCount", function handle = 261625632.
Tesseract Information: 0 : Trying to load native function "pixaGetPix" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixaGetPix", function handle = 261547568.
Tesseract Information: 0 : Trying to load native function "pixaDestroy" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixaDestroy", function handle = 261547024.
Tesseract Information: 0 : Trying to load native function "pixCreate" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixCreate", function handle = 261489248.
Tesseract Information: 0 : Trying to load native function "pixClone" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixClone", function handle = 261488736.
Tesseract Information: 0 : Trying to load native function "pixDestroy" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixDestroy", function handle = 261489792.
Tesseract Information: 0 : Trying to load native function "pixGetWidth" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetWidth", function handle = 261490720.
Tesseract Information: 0 : Trying to load native function "pixGetHeight" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetHeight", function handle = 261490304.
Tesseract Information: 0 : Trying to load native function "pixGetDepth" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetDepth", function handle = 261434448.
Tesseract Information: 0 : Trying to load native function "pixGetXRes" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetXRes", function handle = 261766688.
Tesseract Information: 0 : Trying to load native function "pixGetYRes" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetYRes", function handle = 261311952.
Tesseract Information: 0 : Trying to load native function "pixGetResolution" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetResolution", function handle = 261490640.
Tesseract Information: 0 : Trying to load native function "pixGetWpl" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetWpl", function handle = 261312096.
Tesseract Information: 0 : Trying to load native function "pixSetXRes" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSetXRes", function handle = 261491744.
Tesseract Information: 0 : Trying to load native function "pixSetYRes" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSetYRes", function handle = 261312528.
Tesseract Information: 0 : Trying to load native function "pixSetResolution" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSetResolution", function handle = 261491568.
Tesseract Information: 0 : Trying to load native function "pixScaleResolution" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixScaleResolution", function handle = 261491120.
Tesseract Information: 0 : Trying to load native function "pixGetData" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetData", function handle = 261490176.
Tesseract Information: 0 : Trying to load native function "pixGetInputFormat" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetInputFormat", function handle = 261490336.
Tesseract Information: 0 : Trying to load native function "pixSetInputFormat" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSetInputFormat", function handle = 261491536.
Tesseract Information: 0 : Trying to load native function "pixEndianByteSwap" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixEndianByteSwap", function handle = 261496224.
Tesseract Information: 0 : Trying to load native function "pixRead" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixRead", function handle = 261655600.
Tesseract Information: 0 : Trying to load native function "pixReadMemTiff" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixReadMemTiff", function handle = 261802048.
Tesseract Information: 0 : Trying to load native function "pixWrite" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixWrite", function handle = 261841648.
Tesseract Information: 0 : Trying to load native function "pixGetColormap" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixGetColormap", function handle = 261490144.
Tesseract Information: 0 : Trying to load native function "pixSetColormap" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSetColormap", function handle = 261491296.
Tesseract Information: 0 : Trying to load native function "pixDestroyColormap" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixDestroyColormap", function handle = 261489840.
Tesseract Information: 0 : Trying to load native function "pixConvertRGBToGray" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixConvertRGBToGray", function handle = 261587360.
Tesseract Information: 0 : Trying to load native function "pixDeskewGeneral" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixDeskewGeneral", function handle = 261785488.
Tesseract Information: 0 : Trying to load native function "pixRotate" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixRotate", function handle = 261680800.
Tesseract Information: 0 : Trying to load native function "pixRotateOrth" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixRotateOrth", function handle = 261693040.
Tesseract Information: 0 : Trying to load native function "pixOtsuAdaptiveThreshold" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixOtsuAdaptiveThreshold", function handle = 261011824.
Tesseract Information: 0 : Trying to load native function "pixSauvolaBinarize" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSauvolaBinarize", function handle = 261012720.
Tesseract Information: 0 : Trying to load native function "pixSauvolaBinarizeTiled" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixSauvolaBinarizeTiled", function handle = 261013264.
Tesseract Information: 0 : Trying to load native function "pixcmapCreate" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCreate", function handle = 261089968.
Tesseract Information: 0 : Trying to load native function "pixcmapCreateRandom" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCreateRandom", function handle = 261090224.
Tesseract Information: 0 : Trying to load native function "pixcmapCreateLinear" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCreateLinear", function handle = 261090080.
Tesseract Information: 0 : Trying to load native function "pixcmapCopy" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCopy", function handle = 261089696.
Tesseract Information: 0 : Trying to load native function "pixcmapDestroy" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapDestroy", function handle = 261090784.
Tesseract Information: 0 : Trying to load native function "pixcmapGetCount" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetCount", function handle = 261092192.
Tesseract Information: 0 : Trying to load native function "pixcmapGetFreeCount" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetFreeCount", function handle = 261092544.
Tesseract Information: 0 : Trying to load native function "pixcmapGetDepth" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetDepth", function handle = 261064432.
Tesseract Information: 0 : Trying to load native function "pixcmapGetMinDepth" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetMinDepth", function handle = 261092688.
Tesseract Information: 0 : Trying to load native function "pixcmapClear" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapClear", function handle = 261088688.
Tesseract Information: 0 : Trying to load native function "pixcmapAddColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapAddColor", function handle = 261088384.
Tesseract Information: 0 : Trying to load native function "pixcmapAddNewColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapAddNewColor", function handle = 261088576.
Tesseract Information: 0 : Trying to load native function "pixcmapAddNearestColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapAddNearestColor", function handle = 261088448.
Tesseract Information: 0 : Trying to load native function "pixcmapUsableColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapUsableColor", function handle = 261095072.
Tesseract Information: 0 : Trying to load native function "pixcmapAddBlackOrWhite" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapAddBlackOrWhite", function handle = 261088240.
Tesseract Information: 0 : Trying to load native function "pixcmapSetBlackAndWhite" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapSetBlackAndWhite", function handle = 261094224.
Tesseract Information: 0 : Trying to load native function "pixcmapGetColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetColor", function handle = 261091120.
Tesseract Information: 0 : Trying to load native function "pixcmapGetColor32" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetColor32", function handle = 261091232.
Tesseract Information: 0 : Trying to load native function "pixcmapResetColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapResetColor", function handle = 261093968.
Tesseract Information: 0 : Trying to load native function "pixcmapGetIndex" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetIndex", function handle = 261092576.
Tesseract Information: 0 : Trying to load native function "pixcmapHasColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapHasColor", function handle = 261093552.
Tesseract Information: 0 : Trying to load native function "pixcmapCountGrayColors" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCountGrayColors", function handle = 261089808.
Tesseract Information: 0 : Trying to load native function "pixcmapCountGrayColors" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapCountGrayColors", function handle = 261089808.
Tesseract Information: 0 : Trying to load native function "pixcmapGetNearestIndex" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetNearestIndex", function handle = 261092880.
Tesseract Information: 0 : Trying to load native function "pixcmapGetNearestGrayIndex" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetNearestGrayIndex", function handle = 261092768.
Tesseract Information: 0 : Trying to load native function "pixcmapGetComponentRange" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetComponentRange", function handle = 261091328.
Tesseract Information: 0 : Trying to load native function "pixcmapGetExtremeValue" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGetExtremeValue", function handle = 261092224.
Tesseract Information: 0 : Trying to load native function "pixcmapGrayToColor" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGrayToColor", function handle = 261093264.
Tesseract Information: 0 : Trying to load native function "pixcmapColorToGray" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapColorToGray", function handle = 261088720.
Tesseract Information: 0 : Trying to load native function "pixcmapColorToGray" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapColorToGray", function handle = 261088720.
Tesseract Information: 0 : Trying to load native function "pixcmapToRGBTable" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapToRGBTable", function handle = 261094896.
Tesseract Information: 0 : Trying to load native function "pixcmapSerializeToMemory" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapSerializeToMemory", function handle = 261094032.
Tesseract Information: 0 : Trying to load native function "pixcmapDeserializeFromMemory" from the library with handle 260964352...
'OCR.vshost.exe' (CLR v4.0.30319: OCR.vshost.exe): Loaded 'InteropRuntimeImplementer.TessApiSignaturesInstance'.
Tesseract Information: 0 : Successfully loaded native function "pixcmapDeserializeFromMemory", function handle = 261090544.
Tesseract Information: 0 : Trying to load native function "pixcmapGammaTRC" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapGammaTRC", function handle = 261090832.
Tesseract Information: 0 : Trying to load native function "pixcmapContrastTRC" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapContrastTRC", function handle = 261089072.
Tesseract Information: 0 : Trying to load native function "pixcmapShiftIntensity" from the library with handle 260964352...
Tesseract Information: 0 : Successfully loaded native function "pixcmapShiftIntensity", function handle = 261094352.
Tesseract Information: 0 : Current platform: x86
Tesseract Information: 0 : Custom search path is not defined, skipping.
Tesseract Information: 0 : Checking executing application domain location 'C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug' for 'libtesseract302.dll' on platform x86.
Tesseract Information: 0 : Trying to load native library "C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug\x86\libtesseract302.dll"...
Tesseract Information: 0 : Successfully loaded native library "C:\Users\e9913627\EMC Lab\Documents\Recherches\C#\OCR\OCR\bin\Debug\x86\libtesseract302.dll", handle = 107741184.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIAnalyseLayout" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIAnalyseLayout", function handle = 107868288.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIClear" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIClear", function handle = 107868304.
Tesseract Information: 0 : Trying to load native function "TessBaseAPICreate" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPICreate", function handle = 107868336.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIDelete" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIDelete", function handle = 107868368.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetBoolVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetBoolVariable", function handle = 107868688.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetDoubleVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetDoubleVariable", function handle = 107868864.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetHOCRText" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetHOCRText", function handle = 107868928.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetIntVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetIntVariable", function handle = 107868960.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetIterator" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetIterator", function handle = 107868992.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetPageSegMode" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetPageSegMode", function handle = 107869232.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetStringVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetStringVariable", function handle = 107869264.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetThresholdedImage" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetThresholdedImage", function handle = 107869376.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIGetUTF8Text" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIGetUTF8Text", function handle = 107869424.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIInit1" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIInit1", function handle = 107869504.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIMeanTextConf" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIMeanTextConf", function handle = 107869696.
Tesseract Information: 0 : Trying to load native function "TessBaseAPIRecognize" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPIRecognize", function handle = 107870096.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetDebugVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetDebugVariable", function handle = 107870528.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetImage2" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetImage2", function handle = 107870288.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetInputName" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetInputName", function handle = 107870304.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetPageSegMode" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetPageSegMode", function handle = 107870368.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetRectangle" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetRectangle", function handle = 107870432.
Tesseract Information: 0 : Trying to load native function "TessBaseAPISetVariable" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessBaseAPISetVariable", function handle = 107870528.
Tesseract Information: 0 : Trying to load native function "TessDeleteBlockList" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessDeleteBlockList", function handle = 107870592.
Tesseract Information: 0 : Trying to load native function "TessDeleteIntArray" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessDeleteIntArray", function handle = 107870608.
Tesseract Information: 0 : Trying to load native function "TessDeleteText" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessDeleteText", function handle = 107870608.
Tesseract Information: 0 : Trying to load native function "TessDeleteTextArray" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessDeleteTextArray", function handle = 107870624.
Tesseract Information: 0 : Trying to load native function "TessVersion" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessVersion", function handle = 107871520.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorBaseline" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorBaseline", function handle = 107870800.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorBegin" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorBegin", function handle = 107870848.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorBlockType" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorBlockType", function handle = 107870864.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorBoundingBox" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorBoundingBox", function handle = 107870880.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorCopy" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorCopy", function handle = 107871120.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorDelete" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorDelete", function handle = 107868368.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorGetBinaryImage" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorGetBinaryImage", function handle = 107870928.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorGetImage" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorGetImage", function handle = 107870944.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorIsAtBeginningOf" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorIsAtBeginningOf", function handle = 107870976.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorIsAtFinalElement" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorIsAtFinalElement", function handle = 107871008.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorNext" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorNext", function handle = 107871040.
Tesseract Information: 0 : Trying to load native function "TessPageIteratorOrientation" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessPageIteratorOrientation", function handle = 107871072.
Tesseract Information: 0 : Trying to load native function "TessResultIteratorCopy" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessResultIteratorCopy", function handle = 107871120.
Tesseract Information: 0 : Trying to load native function "TessResultIteratorDelete" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessResultIteratorDelete", function handle = 107868368.
Tesseract Information: 0 : Trying to load native function "TessResultIteratorConfidence" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessResultIteratorConfidence", function handle = 107871104.
Tesseract Information: 0 : Trying to load native function "TessResultIteratorGetPageIterator" from the library with handle 107741184...
Tesseract Information: 0 : Successfully loaded native function "TessResultIteratorGetPageIterator", function handle = 107871168.
Tesseract Information: 0 : Trying to load native function "TessResultIteratorGetUTF8Text" from the library with handle 107741184...
Tesseract Information: 0 : Success
A first chance exception of type 'Tesseract.TesseractException' occurred in Tesseract.dll
An unhandled exception of type 'Tesseract.TesseractException' occurred in Tesseract.dll
Additional information: Failed to initialise tesseract engine.. See https://github.com/charlesw/tesseract/wiki/Error-1 for details.
==========================================================================
No error until TessResultIteratorGetUTF8Text.
Thanks for your help!
I don't see anything in the log that could indicate what's wrong here.
Unfortunately I can't add much more diagnostic information, as all this
logic is in the tesseract native library, which uses standard output for
this purpose.
Just double checking, but are the language files located in c:/test/tessdata?
Also can you run the program from cmd prompt and post any tesseract output?
I tried with this project, which I think is the native library:
https://code.google.com/p/tesseractdotnet/
I downloaded tesseractdotnet_v301_r590.zip and tried the "tesseractconsole" project in the source folder.
When I ran the project, an "Access violation" error occurred. The cause is that even though the project is 3.01, the tessdata has to be version 3.0, from here: https://code.google.com/p/tesseract-ocr/downloads/detail?name=eng.traineddata.gz&can=2&q=
Also, the tessdata path has to be complete and end with a trailing '\'. After that, it works well.
So I tried the same thing with your project, but I still get the error... I don't know what else to try. Maybe I will use the Google project instead, sorry...
| gharchive/issue | 2015-06-14T14:13:15 | 2025-04-01T04:33:46.746290 | {
"authors": [
"charlesw",
"jojo150393"
],
"repo": "charlesw/tesseract",
"url": "https://github.com/charlesw/tesseract/issues/179",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1227245260 | Update wheel_lib .so files search
The current glob assumes that the '.so' files are in the root folder, but depending on the project config, they may be in subfolders. This PR makes the glob recursive.
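A minimal sketch of the difference (the file and folder names are hypothetical, not taken from this repo's config):

```python
import tempfile
from pathlib import Path

# Build a throwaway tree: one .so at the root, one in a subfolder.
root = Path(tempfile.mkdtemp())
(root / "top.so").touch()
(root / "pkg").mkdir()
(root / "pkg" / "nested.so").touch()

flat = sorted(p.name for p in root.glob("*.so"))      # root folder only
deep = sorted(p.name for p in root.glob("**/*.so"))   # recursive search

print(flat)  # ['top.so']
print(deep)  # ['nested.so', 'top.so']
```

The non-recursive pattern silently misses the nested extension module, which is the failure mode this PR fixes.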
Just a CI/CD problem; it looks as though you're missing the DOCKERHUB_TOKEN and DOCKERHUB_USERNAME secrets in your fork. I guess you could either add these to your fork or PR from a branch instead.
| gharchive/pull-request | 2022-05-05T22:45:30 | 2025-04-01T04:33:46.754629 | {
"authors": [
"charliebudd",
"wyli"
],
"repo": "charliebudd/torch-extension-builder",
"url": "https://github.com/charliebudd/torch-extension-builder/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1552693591 | Implement flynt's "join and concat to f-string" transforms
flynt is a specialized linter for f-string usage.
UP031 and UP032 implement flynt's core features, but the two extra transforms
-tc, --transform-concats
Replace string concatenations (defined as +
operations involving string literals) with
f-strings. Available only if flynt is
installed with 3.8+ interpreter.
-tj, --transform-joins
Replace static joins (where the joiner is a
string literal and the joinee is a static-
length list) with f-strings. Available only
if flynt is installed with 3.8+ interpreter.
i.e.
a = "Hello"
-msg = a + " World"
+msg = f"{a} World"
-msg2 = "Finally, " + a + " World"
+msg2 = f"Finally, {a} World"
and
a = "Hello"
-msg1 = " ".join([a, " World"])
-msg2 = "".join(["Finally, ", a, " World"])
-msg3 = "x".join(("1", "2", "3"))
-msg4 = "x".join({"4", "5", "yee"})
-msg5 = "y".join([1, 2, 3]) # Should be transformed
+msg1 = f"{a} World"
+msg2 = f"Finally, {a} World"
+msg3 = "1x2x3"
+msg4 = "4x5xyee"
+msg5 = f"{1}y{2}y{3}" # Should be transformed
msg6 = a.join(["1", "2", "3"]) # Should not be transformed (not a static joiner)
msg7 = "a".join(a) # Should not be transformed (not a static joinee)
msg8 = "a".join([a, a, *a]) # Should not be transformed (not a static length)
respectively could be implemented in Ruff too. (I'd like to work on them! 😄) Should these be FLY series, or should they be RUF?
[ ] (code): consider replacing string concatenation with f-string
[ ] (code): consider replacing static string joining with f-string
Refs https://github.com/charliermarsh/ruff/issues/2097 (relating to f-strings)
I have UP031 and UP032 enabled by way of select = "UP" and haven't seen any issues yet! 👍
Maybe (and this is tangential to the #2142 RUF005 discussion) UP03[12] should have a "suggestion" flag for % and .format they can't auto-fix, like "Consider using an f-string".
That said, I could see it being a nuisance if we had to noqa places like "foo {bar} {baz}".format(**something_dynamic) (though that's better served by format_map anyway), or "foo %(bar)s %(baz)s" % something_dynamic)...
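A quick sketch of the `format_map` alternative mentioned above (the keys are illustrative):

```python
d = {"bar": 1, "baz": 2}

via_unpack = "foo {bar} {baz}".format(**d)   # unpacks the dict into keyword args
via_map = "foo {bar} {baz}".format_map(d)    # passes the mapping directly

print(via_unpack)             # foo 1 2
print(via_unpack == via_map)  # True
```

Both produce the same string, but `format_map` avoids copying the mapping into keyword arguments and works with any mapping type, which is why it serves the dynamic case better.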
Idk if transform-concats/FLY001 is always desirable on only two items, especially given possible overrides of __add__ and __radd__.
But for 3 items, especially when a variable is inserted between two strings, it's definitely something I've been looking for.
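A minimal sketch (with a hypothetical class, not from flynt) of why an `__radd__` override makes the two-item concat rewrite unsafe: `+` dispatches to `__add__`/`__radd__`, while an f-string goes through `str()`/`format()`.

```python
class Shouty:
    def __init__(self, s):
        self.s = s

    def __radd__(self, other):
        # Custom right-hand addition: upper-cases the left operand.
        return other.upper() + self.s

a = Shouty("!")
concat = "hello" + a   # uses Shouty.__radd__ -> "HELLO!"
fstring = f"hello{a}"  # uses str(a) -> "hello<...Shouty object at 0x...>"

print(concat)             # HELLO!
print(concat == fstring)  # False: the rewrite is not behaviour-preserving
```

So the transform is only safe when the linter can prove the operands are plain strings — which is exactly the "string literals" restriction in flynt's `--transform-concats` description above.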
| gharchive/issue | 2023-01-23T08:28:00 | 2025-04-01T04:33:46.760768 | {
"authors": [
"Avasam",
"akx"
],
"repo": "charliermarsh/ruff",
"url": "https://github.com/charliermarsh/ruff/issues/2102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
35597334 | TypeError - no _dump_data is defined for class Binding:
I'm getting an issue when marshalling data in a Rails application.
We are using active_record_store for the session store:
Sample::Application.config.session_store :active_record_store
It's giving this error:
TypeError - no _dump_data is defined for class Binding:
config/initializers/active_record.rb:4:in `marshal'
activerecord (3.2.18) lib/active_record/session_store.rb:150:in `marshal_data!'
activesupport (3.2.18) lib/active_support/callbacks.rb:407:in `_run__343736847__save__402356227__callbacks'
activesupport (3.2.18) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.18) lib/active_support/callbacks.rb:385:in `_run_save_callbacks'
activesupport (3.2.18) lib/active_support/callbacks.rb:81:in `run_callbacks'
activerecord (3.2.18) lib/active_record/callbacks.rb:264:in `create_or_update'
activerecord (3.2.18) lib/active_record/persistence.rb:84:in `save'
activerecord (3.2.18) lib/active_record/validations.rb:50:in `save'
activerecord (3.2.18) lib/active_record/attribute_methods/dirty.rb:22:in `save'
activerecord (3.2.18) lib/active_record/transactions.rb:259:in `block (2 levels) in save'
activerecord (3.2.18) lib/active_record/transactions.rb:313:in `block in with_transaction_returning_status'
activerecord (3.2.18) lib/active_record/connection_adapters/abstract/database_statements.rb:192:in `transaction'
activerecord (3.2.18) lib/active_record/transactions.rb:208:in `transaction'
newrelic_rpm (3.8.1.221) lib/new_relic/agent/method_tracer.rb:497:in `block in transaction_with_trace_ActiveRecord_self_name_transaction'
newrelic_rpm (3.8.1.221) lib/new_relic/agent/method_tracer.rb:235:in `trace_execution_scoped'
newrelic_rpm (3.8.1.221) lib/new_relic/agent/method_tracer.rb:493:in `transaction_with_trace_ActiveRecord_self_name_transaction'
activerecord (3.2.18) lib/active_record/transactions.rb:311:in `with_transaction_returning_status'
activerecord (3.2.18) lib/active_record/transactions.rb:259:in `block in save'
activerecord (3.2.18) lib/active_record/transactions.rb:270:in `rollback_active_record_state!'
activerecord (3.2.18) lib/active_record/transactions.rb:258:in `save'
activerecord (3.2.18) lib/active_record/session_store.rb:323:in `block in set_session'
activesupport (3.2.18) lib/active_support/benchmarkable.rb:50:in `silence'
activerecord (3.2.18) lib/active_record/session_store.rb:320:in `set_session'
rack (1.4.5) lib/rack/session/abstract/id.rb:327:in `commit_session'
rack (1.4.5) lib/rack/session/abstract/id.rb:211:in `context'
rack (1.4.5) lib/rack/session/abstract/id.rb:205:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/cookies.rb:341:in `call'
activerecord (3.2.18) lib/active_record/query_cache.rb:64:in `call'
activerecord (3.2.18) lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'
activesupport (3.2.18) lib/active_support/callbacks.rb:405:in `_run__345051598__call__402356227__callbacks'
activesupport (3.2.18) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.18) lib/active_support/callbacks.rb:385:in `_run_call_callbacks'
activesupport (3.2.18) lib/active_support/callbacks.rb:81:in `run_callbacks'
actionpack (3.2.18) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/reloader.rb:65:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/remote_ip.rb:31:in `call'
better_errors (1.1.0) lib/better_errors/middleware.rb:84:in `protected_app_call'
better_errors (1.1.0) lib/better_errors/middleware.rb:79:in `better_errors_call'
better_errors (1.1.0) lib/better_errors/middleware.rb:56:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'
railties (3.2.18) lib/rails/rack/logger.rb:32:in `call_app'
railties (3.2.18) lib/rails/rack/logger.rb:16:in `block in call'
activesupport (3.2.18) lib/active_support/tagged_logging.rb:22:in `tagged'
railties (3.2.18) lib/rails/rack/logger.rb:16:in `call'
quiet_assets (1.0.2) lib/quiet_assets.rb:18:in `call_with_quiet_assets'
actionpack (3.2.18) lib/action_dispatch/middleware/request_id.rb:22:in `call'
rack (1.4.5) lib/rack/methodoverride.rb:21:in `call'
rack (1.4.5) lib/rack/runtime.rb:17:in `call'
activesupport (3.2.18) lib/active_support/cache/strategy/local_cache.rb:72:in `call'
rack (1.4.5) lib/rack/lock.rb:15:in `call'
actionpack (3.2.18) lib/action_dispatch/middleware/static.rb:63:in `call'
railties (3.2.18) lib/rails/engine.rb:484:in `call'
railties (3.2.18) lib/rails/application.rb:231:in `call'
rack (1.4.5) lib/rack/content_length.rb:14:in `call'
railties (3.2.18) lib/rails/rack/log_tailer.rb:17:in `call'
thin (1.6.2) lib/thin/connection.rb:86:in `block in pre_process'
thin (1.6.2) lib/thin/connection.rb:84:in `pre_process'
thin (1.6.2) lib/thin/connection.rb:53:in `process'
thin (1.6.2) lib/thin/connection.rb:39:in `receive_data'
eventmachine (1.0.3) lib/eventmachine.rb:187:in `run'
thin (1.6.2) lib/thin/backends/base.rb:73:in `start'
thin (1.6.2) lib/thin/server.rb:162:in `start'
rack (1.4.5) lib/rack/handler/thin.rb:13:in `run'
rack (1.4.5) lib/rack/server.rb:268:in `start'
railties (3.2.18) lib/rails/commands/server.rb:70:in `start'
railties (3.2.18) lib/rails/commands.rb:55:in `block in '
railties (3.2.18) lib/rails/commands.rb:50:in `'
script/rails:6:in `'
script/rails:0:in `'
After debugging, I see that something like a binding trace is added to the data before marshalling.
The data in YAML form:
--- !ruby/exception:Ty \nmessage: Invalid number\n__better_errors_bindings_stack: \n- !ruby/object:Binding \n frame_description: get_partner_rfc_output\n frame_type: :method\n- !ruby/object:Binding \n frame_description: execute\n frame_type: :method\n- !ruby/object:Binding \n frame_description: get_one\n frame_type: :method\n- !ruby/object:Binding \n frame_description: get_pat\n frame_type: :method\n- !ruby/object:Binding \n frame_description: manual_entry\n frame_type: :method\n- !ruby/object:Binding \n frame_description: populate_prerequisites\n frame_type: :method\n- !ruby/object:Binding \n frame_description: populate_prerequisites\n frame_type: :method\n- !ruby/object:Binding \n frame_description: shop\n frame_type: :method\n- !ruby/object:Binding \n frame_description: send_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in process_action\n frame_type: :block\n- !ruby/object:Binding \n frame_description: _run__816577052__process_action__906960469__callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: __run_callback\n frame_type: :method\n- !ruby/object:Binding \n frame_description: _run_process_action_callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: run_callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in process_action\n frame_type: :block\n- !ruby/object:Binding \n frame_description: block in instrument\n frame_type: :block\n- !ruby/object:Binding \n frame_description: instrument\n frame_type: :method\n- !ruby/object:Binding \n frame_description: instrument\n frame_type: :method\n- !ruby/object:Binding \n 
frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in process_action\n frame_type: :block\n- !ruby/object:Binding \n frame_description: perform_action_with_newrelic_trace\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process_action\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process\n frame_type: :method\n- !ruby/object:Binding \n frame_description: dispatch\n frame_type: :method\n- !ruby/object:Binding \n frame_description: dispatch\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in action\n frame_type: :block\n- !ruby/object:Binding \n frame_description: dispatch\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in call\n frame_type: :block\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- 
!ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: context\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in call\n frame_type: :block\n- !ruby/object:Binding \n frame_description: _run__913420421__call__417485616__callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: __run_callback\n frame_type: :method\n- !ruby/object:Binding \n frame_description: _run_call_callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: run_callbacks\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: protected_app_call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: better_errors_call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call_app\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in call\n frame_type: :block\n- !ruby/object:Binding \n frame_description: tagged\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call_with_quiet_assets\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n 
frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: call\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in pre_process\n frame_type: :block\n- !ruby/object:Binding \n frame_description: pre_process\n frame_type: :method\n- !ruby/object:Binding \n frame_description: process\n frame_type: :method\n- !ruby/object:Binding \n frame_description: receive_data\n frame_type: :method\n- !ruby/object:Binding \n frame_description: run\n frame_type: :method\n- !ruby/object:Binding \n frame_description: start\n frame_type: :method\n- !ruby/object:Binding \n frame_description: start\n frame_type: :method\n- !ruby/object:Binding \n frame_description: run\n frame_type: :method\n- !ruby/object:Binding \n frame_description: start\n frame_type: :method\n- !ruby/object:Binding \n frame_description: start\n frame_type: :method\n- !ruby/object:Binding \n frame_description: block in \n frame_type: :block\n- !ruby/object:Binding \n frame_description: \n frame_type: :top\n- !ruby/object:Binding \n frame_description: \n frame_type: :eval\n- !ruby/object:Binding \n frame_description: \n frame_type: :top\nid 453626456
Unable to figure out why "!ruby/object:Binding \n frame_description: start\n frame_type: :method" is getting into the data.
thanks for this awesome gem.
Hi @charliesome - I ran into this issue while working on the parallel gem for parallel processing, since parallel uses exceptions to kill or break the workers. I've sent them a PR with a workaround (https://github.com/grosser/parallel/pull/202), but would it also be easy to fix this in better_errors? Maybe just strip away the binding stuff and marshal the original exception?
(I'm testing on the latest version of better_errors: 2.1.1).
| gharchive/issue | 2014-06-12T15:54:22 | 2025-04-01T04:33:46.767440 | {
"authors": [
"ndbroadbent",
"yogendra689"
],
"repo": "charliesome/better_errors",
"url": "https://github.com/charliesome/better_errors/issues/254",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1409870717 | support for strict microk8s
With the new strictly confined microk8s, a different user group was used. This PR adds an if statement to cover that case.
#43 is a more complete solution. Closing this PR.
| gharchive/pull-request | 2022-10-14T21:12:21 | 2025-04-01T04:33:46.771486 | {
"authors": [
"agathanatasha"
],
"repo": "charmed-kubernetes/actions-operator",
"url": "https://github.com/charmed-kubernetes/actions-operator/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1816294884 | Add write_certificates and related helpers
The new write_certificates function writes certificates to a place where the Kubernetes services will be able to use them.
The other new functions are helper functions for determining certificate SANs.
yup. No trouble here
| gharchive/pull-request | 2023-07-21T19:32:18 | 2025-04-01T04:33:46.772785 | {
"authors": [
"Cynerva",
"addyess"
],
"repo": "charmed-kubernetes/charm-lib-kubernetes-snaps",
"url": "https://github.com/charmed-kubernetes/charm-lib-kubernetes-snaps/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
748261862 | Chart.js v3.0.0-beta.6 - Line chart - minimumFractionDigits value is out of range.
Expected Behavior
How can I find out how this error occurs. I suspect it has something to do with the labels, but I can't figure it out
Current Behavior
minimumFractionDigits value is out of range.
Context
{
"type": "line",
"data": {
"labels": [
[
"17. Nov.",
" 07:00"
],
[
"17. Nov.",
" 08:00"
],
[
"17. Nov.",
" 09:00"
],
[
"17. Nov.",
" 10:00"
],
[
"17. Nov.",
" 11:00"
],
[
"17. Nov.",
" 12:00"
],
[
"17. Nov.",
" 13:00"
],
[
"17. Nov.",
" 14:00"
],
[
"18. Nov.",
" 08:00"
],
[
"18. Nov.",
" 09:00"
],
[
"18. Nov.",
" 10:00"
],
[
"18. Nov.",
" 11:00"
],
[
"18. Nov.",
" 12:00"
],
[
"18. Nov.",
" 13:00"
],
[
"18. Nov.",
" 14:00"
],
[
"19. Nov.",
" 09:00"
],
[
"19. Nov.",
" 10:00"
],
[
"19. Nov.",
" 11:00"
],
[
"20. Nov.",
" 08:00"
],
[
"20. Nov.",
" 09:00"
],
[
"20. Nov.",
" 10:00"
],
[
"20. Nov.",
" 11:00"
],
[
"20. Nov.",
" 12:00"
],
[
"21. Nov.",
" 08:00"
],
[
"21. Nov.",
" 09:00"
],
[
"21. Nov.",
" 10:00"
],
[
"21. Nov.",
" 11:00"
],
[
"21. Nov.",
" 12:00"
],
[
"21. Nov.",
" 13:00"
],
[
"21. Nov.",
" 14:00"
],
[
"22. Nov.",
" 06:00"
],
[
"22. Nov.",
" 08:00"
],
[
"22. Nov.",
" 09:00"
],
[
"22. Nov.",
" 10:00"
],
[
"22. Nov.",
" 11:00"
],
[
"22. Nov.",
" 12:00"
],
[
"22. Nov.",
" 13:00"
],
[
"22. Nov.",
" 14:00"
]
],
"datasets": [
{
"label": "Verbrauch",
"unit": "kWh",
"borderWidth": 0.5,
"name": "Verbrauch",
"yAxisID": "left",
"backgroundColor": "#e74c3c",
"borderColor": "#e74c3c",
"data": [
"0.04",
"0.09",
"0.03",
"0.03",
"0.03",
"0.03",
"0.01",
"0.00",
"0.00",
"0.00",
"0.00",
"0.00",
"0.00",
"0.05",
"0.04",
"0.04",
"0.04",
"0.06",
"0.04",
"0.04",
"0.43",
"0.19",
"0.03",
"0.04",
"0.02",
"0.02",
"0.03",
"0.03",
"0.03",
"0.04",
"0.04",
"0.00",
"0.00",
"0.00",
"0.00",
"0.06",
"0.03",
"0.05",
"0.05",
"0.12",
"0.04",
"0.05",
"0.04",
"0.42",
"0.16",
"0.03",
"0.03",
"0.08",
"0.04",
"0.11",
"0.03",
"0.04",
"0.03",
"0.09",
"0.03",
"0.02",
"0.08",
"0.05",
"0.03",
"0.02",
"0.03",
"0.04",
"0.04",
"0.08",
"0.05",
"0.04",
"0.04",
"0.43",
"0.03",
"0.03",
"0.04",
"0.02",
"0.02",
"0.02",
"0.15",
"0.03",
"0.06",
"0.03",
"0.00",
"0.00",
"0.00",
"0.00",
"0.03",
"0.03",
"0.03",
"0.03",
"0.06",
"0.06",
"0.04",
"0.04",
"0.04",
"0.57",
"0.03",
"0.02",
"0.02",
"0.04",
"0.03",
"0.02",
"0.03",
"0.03",
"0.02",
"0.06",
"0.00",
"0.00",
"0.04",
"0.01",
"0.14",
"0.00",
"0.02",
"0.03",
"0.25",
"0.20",
"0.18",
"0.18",
"0.05",
"0.04",
"0.03",
"0.03",
"0.02",
"0.02",
"0.03",
"0.02",
"0.03",
"0.04",
"0.03",
"0.01",
"0.01",
"0.00",
"0.01",
"0.00",
"0.01",
"0.00",
"0.03",
"0.03"
]
},
{
"label": "Überschuss",
"unit": "kWh",
"borderWidth": 0.5,
"name": "Überschuss",
"yAxisID": "left",
"backgroundColor": "#8bc34a",
"borderColor": "#8bc34a",
"type": "bar",
"state": "0.0",
"data": [
"0.00",
"105.00",
"109.00",
"73.00",
"36.00",
"25.00",
"26.00",
"0.00",
"16.00",
"101.00",
"43.00",
"55.00",
"32.00",
"7.00",
"0.00",
"0.00",
"0.00",
"0.00",
"1.00",
"0.00",
"7.00",
"105.00",
"0.00",
"62.00",
"51.00",
"0.00",
"33.00",
"0.00",
"23.00",
"0.00",
"0.00",
"0.00",
"50.00",
"59.00",
"38.00",
"36.00",
"10.00",
"0.00"
]
}
]
},
"options": {
"color": "rgba(0,0,0,0.1)",
"elements": {
"arc": {
"borderAlign": "center",
"borderColor": "#fff",
"borderWidth": 0,
"offset": 0,
"backgroundColor": "rgba(0,0,0,0.1)"
},
"line": {
"borderCapStyle": "butt",
"borderDash": [],
"borderDashOffset": 0,
"borderJoinStyle": "miter",
"borderWidth": 3,
"capBezierPoints": true,
"fill": false,
"tension": 0,
"backgroundColor": "rgba(0,0,0,0.1)",
"borderColor": "rgba(0,0,0,0.1)"
},
"point": {
"borderWidth": 0,
"hitRadius": 8,
"hoverBorderWidth": 1,
"hoverRadius": 8,
"pointStyle": "circle",
"radius": 0.33,
"backgroundColor": "rgba(0,0,0,0.1)",
"borderColor": "rgba(0,0,0,0.1)"
},
"bar": {
"borderSkipped": "start",
"borderWidth": 0,
"borderRadius": 0,
"backgroundColor": "rgba(0,0,0,0.1)",
"borderColor": "rgba(0,0,0,0.1)"
}
},
"events": [
"mousemove",
"mouseout",
"click",
"touchstart",
"touchmove"
],
"font": {
"color": "#666",
"family": "'Helvetica Neue', 'Helvetica', 'Arial', sans-serif",
"size": 12,
"style": "normal",
"lineHeight": 1.2,
"weight": null,
"lineWidth": 0
},
"interaction": {
"mode": "index",
"intersect": true
},
"hover": {
"mode": "nearest",
"intersect": true,
"onHover": null
},
"maintainAspectRatio": false,
"onHover": null,
"onClick": null,
"responsive": true,
"showLine": true,
"plugins": {
"filler": {
"propagate": true
},
"legend": {
"display": true,
"position": "top",
"align": "center",
"fullWidth": true,
"reverse": false,
"weight": 1000,
"onHover": null,
"onLeave": null,
"labels": {
"boxWidth": 40,
"padding": 10
},
"title": {
"display": false,
"position": "center",
"text": ""
}
},
"title": {
"align": "center",
"display": false,
"font": {
"style": "bold"
},
"fullWidth": true,
"padding": 10,
"position": "top",
"text": "",
"weight": 2000
},
"tooltip": {
"enabled": true,
"custom": null,
"position": "average",
"backgroundColor": "rgba(0,0,0,0.8)",
"titleFont": {
"style": "bold",
"color": "#fff"
},
"titleSpacing": 2,
"titleMarginBottom": 6,
"titleAlign": "left",
"bodySpacing": 2,
"bodyFont": {
"color": "#fff"
},
"bodyAlign": "left",
"footerSpacing": 2,
"footerMarginTop": 6,
"footerFont": {
"color": "#fff",
"style": "bold"
},
"footerAlign": "left",
"yPadding": 6,
"xPadding": 6,
"caretPadding": 2,
"caretSize": 5,
"cornerRadius": 6,
"multiKeyBackground": "#fff",
"displayColors": true,
"borderColor": "rgba(0,0,0,0)",
"borderWidth": 0,
"animation": {
"duration": 400,
"easing": "easeOutQuart",
"numbers": {
"type": "number",
"properties": [
"x",
"y",
"width",
"height",
"caretX",
"caretY"
]
},
"opacity": {
"easing": "linear",
"duration": 200
}
},
"callbacks": {}
},
"chardbackground": {},
"gradient": {}
},
"layout": {
"padding": {
"top": 0,
"right": 16,
"bottom": 16,
"left": 16
}
},
"animation": false,
"locale": "de-DE",
"defaultFontColor": "#ecf0f1",
"defaultFontFamily": "Quicksand, Roboto, \"Open Sans\",\"Rubik\",sans-serif",
"datasetElementType": "line",
"datasetElementOptions": [
"backgroundColor",
"borderCapStyle",
"borderColor",
"borderDash",
"borderDashOffset",
"borderJoinStyle",
"borderWidth",
"capBezierPoints",
"cubicInterpolationMode",
"fill"
],
"dataElementType": "point",
"dataElementOptions": {
"backgroundColor": "pointBackgroundColor",
"borderColor": "pointBorderColor",
"borderWidth": "pointBorderWidth",
"hitRadius": "pointHitRadius",
"hoverHitRadius": "pointHitRadius",
"hoverBackgroundColor": "pointHoverBackgroundColor",
"hoverBorderColor": "pointHoverBorderColor",
"hoverBorderWidth": "pointHoverBorderWidth",
"hoverRadius": "pointHoverRadius",
"pointStyle": "pointStyle",
"radius": "pointRadius",
"rotation": "pointRotation"
},
"spanGaps": true,
"type": "line",
"units": "",
"title": {
"align": "center",
"display": true,
"font": {
"style": "normal",
"color": "#ecf0f1"
},
"fullWidth": true,
"padding": 10,
"position": "top",
"text": "Produktion vs. Verbrauch",
"weight": 2000
},
"chartArea": {
"backgroundColor": "rgba(0,0,0,0.5)"
},
"legend": {
"display": true,
"position": "top",
"lineWidth": 0,
"labels": {
"usePointStyle": true,
"boxWidth": 8
}
},
"tooltips": {
"mode": "nearest",
"intersect": true,
"enabled": true,
"custom": null,
"position": "nearest",
"backgroundColor": "#ecf0f1",
"titleFont": {
"style": "normal",
"color": "#2c3e50"
},
"titleSpacing": 2,
"titleMarginBottom": 6,
"titleAlign": "left",
"bodySpacing": 2,
"bodyFont": {
"color": "#2c3e50",
"style": "normal"
},
"bodyAlign": "left",
"footerSpacing": 2,
"footerMarginTop": 6,
"footerFont": {
"color": "#2c3e50",
"style": "normal"
},
"footerAlign": "left",
"yPadding": 6,
"xPadding": 6,
"caretPadding": 2,
"caretSize": 5,
"cornerRadius": 6,
"multiKeyBackground": "#fff",
"displayColors": true,
"borderColor": "rgba(0,0,0,0)",
"borderWidth": 0,
"animation": false,
"callbacks": {}
},
"scales": {
"left": {
"axis": "y",
"id": "left",
"type": "linear",
"position": "left",
"display": true,
"scaleLabel": {
"display": true,
"labelString": "Verbrauch / Überschuss (kWh)",
"padding": {
"top": 4,
"bottom": 4
}
},
"beginAtZero": true,
"ticks": {
"minRotation": 0,
"maxRotation": 50,
"mirror": false,
"lineWidth": 0,
"strokeStyle": "",
"padding": 0,
"display": true,
"autoSkip": true,
"autoSkipPadding": 0,
"labelOffset": 0,
"minor": {},
"major": {},
"align": "center",
"crossAlign": "near"
},
"offset": false,
"reverse": false,
"gridLines": {
"display": true,
"color": "#d3d7cf",
"lineWidth": 0.18,
"drawBorder": true,
"drawOnChartArea": true,
"drawTicks": true,
"tickMarkLength": 10,
"offsetGridLines": false,
"borderDash": [
2
],
"borderDashOffset": 0,
"zeroLineWidth": 8
}
},
"x": {
"axis": "x",
"time": {
"unit": "hour",
"locale": "de-DE"
},
"ticks": {
"autoSkip": true,
"maxTicksLimit": 8,
"minRotation": 0,
"maxRotation": 50,
"mirror": false,
"lineWidth": 0,
"strokeStyle": "",
"padding": 0,
"display": true,
"autoSkipPadding": 0,
"labelOffset": 0,
"minor": {},
"major": {},
"align": "center",
"crossAlign": "near"
},
"scaleLabel": {
"display": true,
"labelString": "Zeitraum",
"padding": {
"top": 4,
"bottom": 4
}
},
"type": "category",
"offset": true,
"gridLines": {
"offsetGridLines": true,
"display": true,
"color": "#d3d7cf",
"lineWidth": 0.18,
"drawBorder": true,
"drawOnChartArea": true,
"drawTicks": true,
"tickMarkLength": 10,
"borderDash": [
2
],
"borderDashOffset": 0,
"zeroLineWidth": 8
},
"display": true,
"reverse": false,
"beginAtZero": false,
"id": "x",
"position": "bottom"
},
"right": {
"axis": "r",
"scaleLabel": {
"display": true,
"labelString": "Produktion (kWh)",
"padding": {
"top": 4,
"bottom": 4
}
},
"type": "linear",
"ticks": {
"minRotation": 0,
"maxRotation": 50,
"mirror": false,
"lineWidth": 0,
"strokeStyle": "",
"padding": 0,
"display": true,
"autoSkip": true,
"autoSkipPadding": 0,
"labelOffset": 0,
"minor": {},
"major": {},
"align": "center",
"crossAlign": "near"
},
"display": true,
"offset": false,
"reverse": false,
"beginAtZero": false,
"gridLines": {
"display": true,
"color": "#d3d7cf",
"lineWidth": 0.18,
"drawBorder": true,
"drawOnChartArea": true,
"drawTicks": true,
"tickMarkLength": 10,
"offsetGridLines": false,
"borderDash": [
2
],
"borderDashOffset": 0,
"zeroLineWidth": 8
},
"id": "right",
"position": "chartArea"
}
}
}
}
Environment
Chart.js version: Chart.js v3.0.0-beta.6
Browser name and version: Chrome Version 86.0.4240.111
Link to your project:
https://github.com/zibous/lovelace-graph-chart-card
"right": {
"axis": "r",
...
"position": "chartArea"
See https://www.chartjs.org/docs/master/axes/cartesian/index#common-configuration and axis
@kurkle
Thanks. Strange, because before new Chart(this.ctx, this.config) there is no "axis": "r":
...
this.config = {
"type": "line",
"data": {
"labels": [
[
"17. Nov.",
" 07:00"
],
[
"17. Nov.",
" 08:00"
],
[
"17. Nov.",
" 09:00"
],
[
"17. Nov.",
" 10:00"
],
[
"17. Nov.",
" 11:00"
],
[
"17. Nov.",
" 12:00"
],
[
"17. Nov.",
" 13:00"
],
[
"17. Nov.",
" 14:00"
],
[
"18. Nov.",
" 08:00"
],
[
"18. Nov.",
" 09:00"
],
[
"18. Nov.",
" 10:00"
],
[
"18. Nov.",
" 11:00"
],
[
"18. Nov.",
" 12:00"
],
[
"18. Nov.",
" 13:00"
],
[
"18. Nov.",
" 14:00"
],
[
"19. Nov.",
" 09:00"
],
[
"19. Nov.",
" 10:00"
],
[
"19. Nov.",
" 11:00"
],
[
"20. Nov.",
" 08:00"
],
[
"20. Nov.",
" 09:00"
],
[
"20. Nov.",
" 10:00"
],
[
"20. Nov.",
" 11:00"
],
[
"20. Nov.",
" 12:00"
],
[
"21. Nov.",
" 08:00"
],
[
"21. Nov.",
" 09:00"
],
[
"21. Nov.",
" 10:00"
],
[
"21. Nov.",
" 11:00"
],
[
"21. Nov.",
" 12:00"
],
[
"21. Nov.",
" 13:00"
],
[
"21. Nov.",
" 14:00"
],
[
"22. Nov.",
" 06:00"
],
[
"22. Nov.",
" 08:00"
],
[
"22. Nov.",
" 09:00"
],
[
"22. Nov.",
" 10:00"
],
[
"22. Nov.",
" 11:00"
],
[
"22. Nov.",
" 12:00"
],
[
"22. Nov.",
" 13:00"
],
[
"22. Nov.",
" 14:00"
]
],
"datasets": [
{
"label": "Verbrauch",
"unit": "kWh",
"minval": 0,
"maxval": 0.57,
"sumval": 0,
"avgval": 0,
"current": "0.04",
"last_changed": "2020-11-22T19:00:10.176555+00:00",
"mode": "history",
"borderWidth": 0.5,
"entity": "sensor.energy_consumption",
"name": "Verbrauch",
"yAxisID": "left",
"backgroundColor": "#e74c3c",
"borderColor": "#e74c3c",
"state": "0.04",
"data": [
"0.04",
"0.09",
"0.03",
"0.03",
"0.03",
"0.03",
"0.01",
"0.00",
"0.00",
"0.00",
"0.00",
"0.00",
"0.00",
"0.05",
"0.04",
"0.04",
"0.04",
"0.06",
"0.04",
"0.04",
"0.43",
"0.19",
"0.03",
"0.04",
"0.02",
"0.02",
"0.03",
"0.03",
"0.03",
"0.04",
"0.04",
"0.00",
"0.00",
"0.00",
"0.00",
"0.06",
"0.03",
"0.05",
"0.05",
"0.12",
"0.04",
"0.05",
"0.04",
"0.42",
"0.16",
"0.03",
"0.03",
"0.08",
"0.04",
"0.11",
"0.03",
"0.04",
"0.03",
"0.09",
"0.03",
"0.02",
"0.08",
"0.05",
"0.03",
"0.02",
"0.03",
"0.04",
"0.04",
"0.08",
"0.05",
"0.04",
"0.04",
"0.43",
"0.03",
"0.03",
"0.04",
"0.02",
"0.02",
"0.02",
"0.15",
"0.03",
"0.06",
"0.03",
"0.00",
"0.00",
"0.00",
"0.00",
"0.03",
"0.03",
"0.03",
"0.03",
"0.06",
"0.06",
"0.04",
"0.04",
"0.04",
"0.57",
"0.03",
"0.02",
"0.02",
"0.04",
"0.03",
"0.02",
"0.03",
"0.03",
"0.02",
"0.06",
"0.00",
"0.00",
"0.04",
"0.01",
"0.14",
"0.00",
"0.02",
"0.03",
"0.25",
"0.20",
"0.18",
"0.18",
"0.05",
"0.04",
"0.03",
"0.03",
"0.02",
"0.02",
"0.03",
"0.02",
"0.03",
"0.04",
"0.03",
"0.01",
"0.01",
"0.00",
"0.01",
"0.00",
"0.01",
"0.00",
"0.03",
"0.04",
"0.09",
"0.04",
"0.04",
"0.04"
]
},
{
"label": "Überschuss",
"unit": "kWh",
"minval": 0,
"maxval": 109,
"sumval": 0,
"avgval": 0,
"current": "0.0",
"last_changed": "2020-11-22T14:20:09.887351+00:00",
"mode": "history",
"borderWidth": 0.5,
"entity": "sensor.energy_notused",
"name": "Überschuss",
"yAxisID": "left",
"backgroundColor": "#8bc34a",
"borderColor": "#8bc34a",
"type": "bar",
"state": "0.0",
"data": [
"0.00",
"105.00",
"109.00",
"73.00",
"36.00",
"25.00",
"26.00",
"0.00",
"16.00",
"101.00",
"43.00",
"55.00",
"32.00",
"7.00",
"0.00",
"0.00",
"0.00",
"0.00",
"1.00",
"0.00",
"7.00",
"105.00",
"0.00",
"62.00",
"51.00",
"0.00",
"33.00",
"0.00",
"23.00",
"0.00",
"0.00",
"0.00",
"50.00",
"59.00",
"38.00",
"36.00",
"10.00",
"0.00"
]
}
]
},
"options": {
"type": "line",
"units": "",
"font": {
"size": 12,
"style": "normal",
"lineHeight": 1.2,
"lineWidth": 0
},
"title": {
"display": true,
"text": "Produktion vs. Verbrauch",
"font": {
"style": "normal",
"color": "#ecf0f1"
}
},
"layout": {
"padding": {
"left": 16,
"right": 16,
"top": 0,
"bottom": 16
}
},
"chartArea": {
"backgroundColor": "rgba(0,0,0,0.5)"
},
"legend": {
"display": true,
"position": "top",
"lineWidth": 0,
"labels": {
"usePointStyle": true,
"boxWidth": 8
}
},
"tooltips": {
"enabled": true,
"mode": "nearest",
"position": "nearest",
"backgroundColor": "#ecf0f1",
"titleFont": {
"style": "normal",
"color": "#2c3e50"
},
"bodyFont": {
"style": "normal",
"color": "#2c3e50"
},
"footerFont": {
"style": "normal",
"color": "#2c3e50"
},
"animation": false
},
"hover": {
"mode": "nearest",
"intersect": true
},
"elements": {
"point": {
"radius": 0.33,
"hitRadius": 8
}
},
"spanGaps": true,
"plugins": {},
## ###################################
"scales": {
"left": {
"id": "left",
"type": "linear",
"position": "left",
"display": true,
"scaleLabel": {
"display": true,
"labelString": "Verbrauch / Überschuss (kWh)"
}
},
"x": {
"time": {
"unit": "hour",
"locale": "de-DE"
},
"ticks": {
"autoSkip": true,
"maxTicksLimit": 8
},
"scaleLabel": {
"display": true,
"labelString": "Zeitraum"
}
},
"right": {
"scaleLabel": {
"display": true,
"labelString": "Produktion (kWh)"
}
}
}
}
}
After this.chart = new Chart(this.ctx, ....
"right": {
"axis": "r", <----
"scaleLabel": {
"display": true,
"labelString": "Produktion (kWh)",
"padding": {
"top": 4,
"bottom": 4
}
},
"type": "linear",
"ticks": {
"minRotation": 0,
"maxRotation": 50,
"mirror": false,
"lineWidth": 0,
......
It currently defaults to the first letter of the axis id when everything else fails. We should remove that behavior.
Does adding axis: 'y' make it work?
@kurkle
Thanks 👍
position: 'right' should also work. Closing as solved.
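For readers hitting the same error, here is a minimal, self-contained sketch of the fallback described above. It is illustrative only, not the actual Chart.js source: the function name and the position-to-axis table are assumptions. The point is that a scale id like "right" falls through to its first letter, "r", unless an explicit axis or a recognized cartesian position pins it down.

```javascript
// Illustrative sketch of the v3 axis-resolution fallback (not Chart.js
// internals): an explicit `axis` option wins, then a recognized cartesian
// `position`, and as a last resort the first letter of the scale id.
function resolveAxis(id, options) {
  if (options.axis) {
    return options.axis;
  }
  const axisFromPosition = { left: 'y', right: 'y', top: 'x', bottom: 'x' };
  if (axisFromPosition[options.position]) {
    return axisFromPosition[options.position];
  }
  return id[0];
}

// The reported config: id "right" with position "chartArea" falls back
// to the first letter of the id, which is not a valid cartesian axis.
console.log(resolveAxis('right', { position: 'chartArea' })); // "r"

// Either suggested fix pins the axis down explicitly.
console.log(resolveAxis('right', { axis: 'y', position: 'chartArea' })); // "y"
console.log(resolveAxis('right', { position: 'right' }));                // "y"
```

With the axis resolved to "y", the scale is built as a normal value axis, avoiding the code path that ended up passing an out-of-range minimumFractionDigits to Intl.NumberFormat.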
I had this error too and I managed to fix it by adding:
scales: {
y: {
ticks: {
callback: function(val, index) {
return val;
},
}
},
I had this error too and I managed to fix it by adding:
scales: {
y: {
ticks: {
callback: function(val, index) {
return val;
},
}
},
VERY HELPFUL, I was getting this error:
chart.js:13 Uncaught RangeError: minimumFractionDigits value is out of range.
at new NumberFormat ()
at chart.js:13
at zi (chart.js:13)
at zs.numeric (chart.js:13)
at Q (chart.js:13)
at zs.generateTickLabels (chart.js:13)
at zs._convertTicksToLabels (chart.js:13)
at zs.update (chart.js:13)
at qe (chart.js:13)
at Object.update (chart.js:13)
and the code provided by deemeetree fixed it! THANK YOU
I had this error too and I managed to fix it by adding:
scales: {
y: {
ticks: {
callback: function(val, index) {
return val;
},
}
},
This is epic, thank you so much!
I slightly modified this to get the side ticks labeling correctly:
{
scales: {
y: {
ticks: {
callback: function(val, index) {
return this.getLabelForValue(Number(val));
},
}
}
}
}
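A small standalone sketch of why the two callbacks differ on a category scale. The labels array and scale object below are stand-ins (the real lookup lives on the Chart.js scale): tick values on a category axis are numeric indices, and getLabelForValue maps an index back to its label.

```javascript
// Stand-in for a category scale: tick values are indices into `labels`,
// and getLabelForValue maps an index back to its label (mirroring the
// scale method used in the callback above).
const labels = ['17. Nov. 07:00', '17. Nov. 08:00', '17. Nov. 09:00'];
const scale = {
  getLabelForValue(value) {
    return labels[value] !== undefined ? labels[value] : String(value);
  },
};

// Returning the raw value shows the numeric index...
const raw = function (val) { return val; }.call(scale, 1);
// ...while mapping through getLabelForValue shows the label.
const mapped = function (val) {
  return this.getLabelForValue(Number(val));
}.call(scale, 1);

console.log(raw);    // 1
console.log(mapped); // "17. Nov. 08:00"
```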
| gharchive/issue | 2020-11-22T15:04:23 | 2025-04-01T04:33:46.807880 | {
"authors": [
"awantoch",
"deemeetree",
"jpalacios333",
"kurkle",
"zibous"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/issues/8092",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
652290065 | PluginService using registry
TODO:
[x] migration guide
I think we'll also need some docs explaining to esm users that they need to register their scales, controllers, plugins, etc. now
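A minimal sketch of the registry idea, illustrative only and not the actual Chart.js implementation (the class shape and error message are assumptions): components are stored by id, and looking up something that was never registered fails loudly, which is why ESM users now have to register the scales, controllers, and plugins they use.

```javascript
// Illustrative registry sketch: register components by id, fail loudly
// on lookups for anything never registered. Not the real Chart.js code.
class Registry {
  constructor() {
    this.items = new Map();
  }
  register(...components) {
    for (const component of components) {
      this.items.set(component.id, component);
    }
  }
  get(id) {
    const component = this.items.get(id);
    if (!component) {
      throw new Error(`"${id}" is not a registered component`);
    }
    return component;
  }
}

const registry = new Registry();
registry.register({ id: 'linear' }, { id: 'category' });

console.log(registry.get('linear').id); // "linear"
// registry.get('time') throws, because "time" was never registered.
```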
Cool. Looks good to me. I think just some docs is all that's left
Can you add scaleService to the migration guide as well? @stockiNail suggested it's missing (https://github.com/chartjs/Chart.js/pull/7592#issuecomment-655319811)
@sgratzl @sebiniemann we're about to release alpha.2 with tree shaking after this PR is merged. Do you want to test it out at all before we do that?
@benmccann I'm already very busy this week, but I'd like to try it out in our test system afterwards.
| gharchive/pull-request | 2020-07-07T12:40:43 | 2025-04-01T04:33:46.813328 | {
"authors": [
"benmccann",
"kurkle",
"sebiniemann"
],
"repo": "chartjs/Chart.js",
"url": "https://github.com/chartjs/Chart.js/pull/7590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2013826893 | Bug Report >>>AxiosError: Request failed with status code 502
问题描述 / Problem Description
用简洁明了的语言描述这个问题 / Describe the problem in a clear and concise manner.
When adding a document in the web UI of the Docker deployment, an error is reported.
复现问题的步骤 / Steps to Reproduce
执行 '...' / Run '...'
点击 '...' / Click '...'
滚动到 '...' / Scroll to '...'
问题出现 / Problem occurs
预期的结果 / Expected Result
描述应该出现的结果 / Describe the expected result.
实际结果 / Actual Result
描述实际发生的结果 / Describe the actual result.
环境信息 / Environment Information
langchain-ChatGLM 版本/commit 号:v2.0.1
是否使用 Docker 部署(是/否):是
使用的模型(ChatGLM2-6B / Qwen-7B 等):ChatGLM-6B
使用的 Embedding 模型(moka-ai/m3e-base 等):m3e-large
使用的向量库类型 (faiss / milvus / pg_vector 等): faiss
操作系统及版本 / Operating system and version: CentOS7
Python 版本 / Python version:3.10
其他相关环境信息 / Other relevant environment information:
附加信息 / Additional Information
添加与问题相关的任何其他信息 / Add any other information related to the issue.
Solved: it was a problem with my own network; adding `--network host` to the docker command fixed it.
| gharchive/issue | 2023-11-28T07:32:03 | 2025-04-01T04:33:46.825837 | {
"authors": [
"Flagami"
],
"repo": "chatchat-space/Langchain-Chatchat",
"url": "https://github.com/chatchat-space/Langchain-Chatchat/issues/2199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Does this project not support the llama2-7b model?
I see there is no 7B entry in the config file.
It is supported, but you need the chat variant.
| gharchive/issue | 2023-12-07T01:06:38 | 2025-04-01T04:33:46.827173 | {
"authors": [
"Cartride",
"zRzRzRzRzRzRzR"
],
"repo": "chatchat-space/Langchain-Chatchat",
"url": "https://github.com/chatchat-space/Langchain-Chatchat/issues/2297",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
586696835 | [fluent-bit] create image
I've created an image of Fluent Bit with a plugin that's useful for working with AWS.
https://hub.docker.com/repository/docker/chatwork/fluent-bit
I have created a repository and set up permissions.
| gharchive/pull-request | 2020-03-24T05:10:57 | 2025-04-01T04:33:46.834989 | {
"authors": [
"cw-ozaki"
],
"repo": "chatwork/dockerfiles",
"url": "https://github.com/chatwork/dockerfiles/pull/277",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2423225044 | Replace req["user"] with req["userEntity"] in session-user.controller
Overview
Take a read through the umbrella ticket. As a sub task to this ticket we want to phase out the use of req["user"] from the session-user.controller.ts. The endpoints that need addressing are POST /session-user, POST /session-user/complete, POST /session-user/incomplete. Note that we haven't followed rest patterns here and hopefully this will be refactored at a later date but not part of this ticket.
Action Items
[ ] For each endpoint that uses req["user"], the service methods will need to be refactored to take the req["userEntity"] rather than req["user"]. This means the service methods will be taking userEntity rather than GetUserDto as its argument. The types need to be updated in the service method and adjustments will need to be made if they are used elsewhere.
[ ] Run all tests. This includes Cypress tests. You will need to load the frontend repository to get instructions about how this works. https://github.com/chaynHQ/bloom-frontend
Hi I would like to take up this issue, can you assign it to me? :)
Hi I created the pull request :) https://github.com/chaynHQ/bloom-backend/pull/567
| gharchive/issue | 2024-07-22T15:58:31 | 2025-04-01T04:33:46.838815 | {
"authors": [
"eleanorreem",
"leoseg"
],
"repo": "chaynHQ/bloom-backend",
"url": "https://github.com/chaynHQ/bloom-backend/issues/546",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1286712718 | question about the pretrain epoch
Hi,
Thanks for the great work!
I am confused about the training epoch of VoxelMAE. The pretrain epoch mentioned in the paper is 20 and 10 for Waymo and nuScenes. However, I find the epoch in config files is 30 for both Waymo and nuScenes. Do we just take the checkpoint of 20th epoch and load it to the model when fine-tuning?
Look forward to your reply.
Yes!
I did not do a lot of experiments on Waymo and nuScenes as it is time-consuming to train the models. Maybe you can get better results with another epoch of the pretraining model or with another masking ratio. Besides, I uploaded the wrong version of the paper to the arxiv, please download the newest version in the arxiv without the results of nuScenes.
Thanks for your reply!
@zwei-lin Hi, have you reproduced the results on Nuscenes?
| gharchive/issue | 2022-06-28T03:25:00 | 2025-04-01T04:33:46.841525 | {
"authors": [
"FrontierBreaker",
"chaytonmin",
"zwei-lin"
],
"repo": "chaytonmin/Voxel-MAE",
"url": "https://github.com/chaytonmin/Voxel-MAE/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1959128957 | first multistep docs draft
Affected Components
[ ] Content & Marketing
[ ] Pricing
[ ] Test
[X] Docs
[ ] Learn
[ ] Other
Pre-Requisites
[ ] Code is linted (npm run lint)
Notes for the Reviewer
New Dependency Submission
Screenshots
This is looking really good so far!
| gharchive/pull-request | 2023-10-24T12:15:36 | 2025-04-01T04:33:46.860285 | {
"authors": [
"drakirnosslin",
"ndom91"
],
"repo": "checkly/docs.checklyhq.com",
"url": "https://github.com/checkly/docs.checklyhq.com/pull/930",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
500994701 | systemd prevents splunk user from starting on port 443
Cookbook version
1.7.3
Chef-client version
12+
Platform Details
centos 7 and rhel 7
Scenario:
The systemd splunk.service config uses the splunk user to start/stop/restart the service. However, it does not have permission to use port 443 (privileged port).
Steps to Reproduce:
Run server-cluster-master-centos-7 test suite in Kitchen
kitchen test server-cluster-master-centos-7
Expected Result:
Test should pass
Actual Result:
The test fails and further debugging reveals that the splunk user does not have permission to use port 443.
================================================================================
Error executing action `restart` on resource 'service[splunk]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of /bin/systemctl --system restart splunk ----
STDOUT:
STDERR: Job for splunk.service failed because the control process exited with error code. See "systemctl status splunk.service" and "journalctl -xe" for details.
---- End output of /bin/systemctl --system restart splunk ----
Ran /bin/systemctl --system restart splunk returned 1
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/chef-splunk/recipes/service.rb
84: service 'splunk' do
85: supports status: true, restart: true
86: provider Chef::Provider::Service::Systemd
87: action [:enable, :start]
88: end
89: else
-- Subject: Unit splunk.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit splunk.service has begun starting up.
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com splunk[4123]: Splunk> Be an IT superhero. Go home early.
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com splunk[4123]: Checking prerequisites...
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com splunk[4123]: Checking http port [443]: not available
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com splunk[4123]: ERROR: http port [443] - no permission to use address/port combi
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com systemd[1]: splunk.service: control process exited, code=exited status=1
Oct 01 15:37:36 server-cluster-master-centos-7.vagrantup.com systemd[1]: Failed to start Splunk.
-- Subject: Unit splunk.service has failed
This was fixed in #119
| gharchive/issue | 2019-10-01T16:02:02 | 2025-04-01T04:33:46.934957 | {
"authors": [
"haidangwa"
],
"repo": "chef-cookbooks/chef-splunk",
"url": "https://github.com/chef-cookbooks/chef-splunk/issues/121",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
169969984 | Chef generate cookbook creates tests in a different location than where kitchen-inspec looks for tests
Issue Description
The default cookbook generators in chef-dk do not create the necessary scaffolding to work with kitchen-inspec.
By default, kitchen-inspec looks for tests in the cookbooks in these locations, in order of preference:
./test/recipes/default/
./test/integration/default/inspec/
However, chef generate cookbook lays down tests in /test/recipes/, without the default subdirectory.
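The lookup order above can be sketched as a simple first-existing-directory search (Python here purely for illustration; the helper name is made up and is not part of kitchen-inspec):

```python
import os

def first_existing(candidates):
    # Return the first directory that exists, mirroring the ordered
    # preference kitchen-inspec applies to its test locations.
    # (Hypothetical helper for illustration only.)
    for path in candidates:
        if os.path.isdir(path):
            return path
    return None

# Preference order from the issue:
lookup = ["test/recipes/default", "test/integration/default/inspec"]
```

With only `./test/recipes/default_test.rb` present, neither candidate directory exists, so the search comes up empty and no tests run.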
Current behavior
In order to run Inspec tests, people must either:
manually move the files to the proper directories required to work with kitchen-inspec;
manually edit the .kitchen.yml file to point to the default created tests in ./test/recipes/default_test.rb
Desired Behavior
Inspec tests can be run on a freshly generated cookbook without modification.
Error message when no tests are found by kitchen-inspec is clarified, so that it is obviously referring to a path, and not a filename.
ChefDK Version
Chef Development Kit Version: 0.16.28
Platform Version
Mac OS/X latest
Replication Case
chef generate cookbook foo
cd foo
delivery local smoke
Note the output:
-----> Setting up <default-ubuntu-1604>...
Finished setting up <default-ubuntu-1604> (0m0.00s).
-----> Verifying <default-ubuntu-1604>...
Use `/Users/charlesjohnson/foo/test/recipes/default` for testing
Summary: 0 successful, 0 failures, 0 skipped
Finished verifying <default-ubuntu-1604> (0m0.28s).
...
-----> Setting up <default-centos-72>...
Finished setting up <default-centos-72> (0m0.00s).
-----> Verifying <default-centos-72>...
Use `/Users/charlesjohnson/foo/test/recipes/default` for testing
Summary: 0 successful, 0 failures, 0 skipped
Finished verifying <default-centos-72> (0m0.26s).
mkdir test/recipes/default && mv test/recipes/default_test.rb test/recipes/default/default_test.rb
delivery local smoke
Note the output:
Chef Delivery
Running Smoke Phase
-----> Starting Kitchen (v1.10.2)
-----> Verifying <default-ubuntu-1604>...
Use `/Users/charlesjohnson/foo/test/recipes/default` for testing
Target: ssh://vagrant@127.0.0.1:2222
○ 1 skipped
This is an example test, replace with your own test.
○ 1 skipped
This is an example test, replace with your own test.
Summary: 2 successful, 0 failures, 2 skipped
Finished verifying <default-ubuntu-1604> (0m0.32s).
-----> Verifying <default-centos-72>...
Use `/Users/charlesjohnson/foo/test/recipes/default` for testing
Target: ssh://vagrant@127.0.0.1:2200
○ 1 skipped
This is an example test, replace with your own test.
○ 1 skipped
This is an example test, replace with your own test.
Summary: 2 successful, 0 failures, 2 skipped
Finished verifying <default-centos-72> (0m0.29s).
-----> Kitchen is finished. (0m1.67s)
As of Chef-DK 0.16, chef generate cookbook foo creates Inspec scaffolding in test/recipes/.
kitchen-inspec will check both locations, preferring test/recipes.
The challenge here is that kitchen-inspec is actually looking for tests in the ./test/recipes/default directory. chef generate cookbook foo creates a sample test in ./test/recipes, which still leaves the user to create the default subdirectory. As a user, I expect the chef command to create all the scaffolding necessary for my cookbook.
Test case:
```
09:51:46-rlupo~/cookbooks/dev/foo (master)$ kitchen verify
-----> Starting Kitchen (v1.10.2)
-----> Setting up ...
$$$$$$ Running legacy setup for 'Docker' Driver
Finished setting up (0m0.00s).
-----> Verifying ...
Use /Users/rlupo/cookbooks/dev/foo/test/recipes/default for testing
Summary: 0 successful, 0 failures, 0 skipped
Finished verifying (0m0.73s).
-----> Kitchen is finished. (0m2.65s)
09:51:57-rlupo~/cookbooks/dev/foo (master)$ mkdir /Users/rlupo/cookbooks/dev/foo/test/recipes/default
09:52:21-rlupo~/cookbooks/dev/foo (master)$ mv /Users/rlupo/cookbooks/dev/foo/test/recipes/default
default/ default_test.rb
09:52:21-rlupo~/cookbooks/dev/foo (master)$ mv /Users/rlupo/cookbooks/dev/foo/test/recipes/default
default/ default_test.rb
09:52:21-rlupo~/cookbooks/dev/foo (master)$ mv /Users/rlupo/cookbooks/dev/foo/test/recipes/default_test.rb /Users/rlupo/cookbooks/dev/foo/test/recipes/default/default_test.rb
/Users/rlupo/cookbooks/dev/foo/test/recipes/default_test.rb -> /Users/rlupo/cookbooks/dev/foo/test/recipes/default/default_test.rb
09:52:44-rlupo~/cookbooks/dev/foo (master)$ kitchen verify
-----> Starting Kitchen (v1.10.2)
-----> Verifying ...
Use /Users/rlupo/cookbooks/dev/foo/test/recipes/default for testing
Target: ssh://kitchen@192.168.99.100:32778
○ User root should exist; User root This is an example test, r... (1 skipped)
This is an example test, replace with your own test.
○ Port 80 should not be listening; Port 80 This is an example ... (1 skipped)
This is an example test, replace with your own test.
Summary: 2 successful, 0 failures, 2 skipped
Finished verifying (0m0.81s).
-----> Kitchen is finished. (0m2.32s)
```
Updated the title and description of this issue to be more descriptive. Thanks for reporting this, @ricardolupo!
add leading / to ensure no confusion that default is a directory
-----> Starting Kitchen (v1.10.2)
-----> Verifying <default-centos-72>...
Use `/Users/rlupo/cookbooks/dev/foo/test/recipes/default` for testing
Target: ssh://kitchen@192.168.99.100:32778```
Another side effect is #1027 which causes foodcritic to fail.
I think it makes the most sense for chef generate cookbook to put the inspec files in /test/integration/default/inspec/ as this will keep with the current structure allowing us to have different suites and will allow for easier transition from ServerSpec to InSpec.
Now the generator explicitly generates an inspec_tests key with a location and we've reverted the changes that made this confusing in the first place, closing.
| gharchive/issue | 2016-08-08T16:46:30 | 2025-04-01T04:33:46.950583 | {
"authors": [
"charlesjohnson",
"cheeseplus",
"qubitrenegade",
"ricardolupo"
],
"repo": "chef/chef-dk",
"url": "https://github.com/chef/chef-dk/issues/964",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
122796450 | Add ECS service and task definition resource type
It would be really nice to be able to use this with EC2 Container Service.
Bumping this.
| gharchive/issue | 2015-12-17T18:38:34 | 2025-04-01T04:33:46.951762 | {
"authors": [
"cwandrews",
"thehar"
],
"repo": "chef/chef-provisioning-aws",
"url": "https://github.com/chef/chef-provisioning-aws/issues/419",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
89349120 | [openstack] timeout calculation
I am sorry if I am not correct; I am not an everyday rubyist. We store 'starded_at' as Time.now.to_i but later we subtract it from Time.now.utc, assuming local time is in UTC as well.
https://github.com/chef/chef-provisioning-fog/blob/ebd78fae4b4582c5abd284b1cbe9954e425f2eda/lib/chef/provisioning/fog_driver/driver.rb#L212
http://ruby-doc.org/stdlib-2.1.1/libdoc/time/rdoc/Time.html
This is a clearer example, and does indeed look like it's wrong.
https://github.com/chef/chef-provisioning-fog/blob/ebd78fae4b4582c5abd284b1cbe9954e425f2eda/lib/chef/provisioning/fog_driver/driver.rb#L372
I'll get this PR in ASAP. this seems like pretty easy fix.
fix https://github.com/chef/chef-provisioning-fog/pull/133
| gharchive/issue | 2015-06-18T17:24:47 | 2025-04-01T04:33:46.954494 | {
"authors": [
"epcim",
"jjasghar",
"randomcamel"
],
"repo": "chef/chef-provisioning-fog",
"url": "https://github.com/chef/chef-provisioning-fog/issues/106",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
57525920 | Unable to reconfigure new chef-server install - postgres not accepting connections on 5432
On Ubuntu 14.04 Server.
psql is looking for the socket at /var/run/postgresql/.s.PGSQL.5432
If I edit /var/opt/opscode/postgresql/9.2/data/postgresql.conf to change the unix_socket_directory
(a) it is rolled back during the reconfigure
(b) postgres is found by part of the reconfigure
(c) reconfigure then dies looking for /tmp/.s.PGSQL.5432
So which socket should I use? Apparently /tmp. But then why does having the socket in /tmp not work for part of the Reconfigure?
---- Begin output of bundle exec rake db:migrate ----
STDOUT:
STDERR: rake aborted!
PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
/opt/opscode/embedded/service/gem/ruby/1.9.1/gems/activerecord-4.1.8/lib/active_record/connection_adapters/postgresql_adapter.rb:888:in `initialize'
Tried the following:
sudo ln -s /tmp/.s.PGSQL.5432 /var/run/postgresql/
Hoping that the socket would be the primary problem and this workaround would fix it.
Ran
sudo chef-server-ctl reconfigure
again and it died again. Checked the postgres logs and found:
2015-02-12_21:49:31.93866 LOG: database system is ready to accept connections
2015-02-12_21:49:31.94005 LOG: autovacuum launcher started
2015-02-12_22:02:42.37008 FATAL: no pg_hba.conf entry for host "[local]", user "postgres", database "postgres", SSL off
log file:
/var/log/opscode/postgresql/9.2/current
@davenorthcreek Thanks for the report. Do you mind providing the entire output of chef-server-ctl reconfigure as well as chef-server-ctl status? My hunch is that the problem isn't with postgresql's configuration, but with the configuration for the oc_id (one of the chef-server services) database migration. Could you also list any local environment variables before running those commands, using the env command?
https://gist.github.com/davenorthcreek/ea8987f251d999fd8f86
Output of env:
XDG_SESSION_ID=1
GEM_HOME=/home/dblock/.chefdk/gem/ruby/2.1.0
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=192.168.1.124 45465 22
OLDPWD=/var/opt/opscode
SSH_TTY=/dev/pts/0
ANT_HOME=/usr/share/ant
USER=dblock
MAIL=/var/mail/dblock
PATH=/opt/chefdk/bin:/home/dblock/.chefdk/gem/ruby/2.1.0/bin:/opt/chefdk/embedded/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/share/ant/bin
PWD=/home/dblock
JAVA_HOME=/usr/
LANG=en_CA.UTF-8
SHLVL=1
HOME=/home/dblock
LANGUAGE=en_CA:en
GEM_ROOT=/opt/chefdk/embedded/lib/ruby/gems/2.1.0
LOGNAME=dblock
GEM_PATH=/home/dblock/.chefdk/gem/ruby/2.1.0:/opt/chefdk/embedded/lib/ruby/gems/2.1.0
SSH_CONNECTION=192.168.1.124 45465 192.168.1.139 22
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
Apologies for not following up here. Is this still an issue for you?
Yes, I'm still stuck. If it doesn't get fixed I'll go in a different direction, but this is Chef's chance.
Thanks for not losing my ticket!
@davenorthcreek Your best bet might be to reach out to support@chef.io. I think I'm fairly time-zone-shifted from you and they will be able to respond more promptly. However, let's see if we can't knock this out quickly. I think I failed to read your original message closely enough.
In your original message you mentioned:
psql is looking for the socket at /var/run/postgresql/.s.PGSQL.5432
Were you trying to use psql from a distribution provided package? The one shipped in the chef-server package should look at /tmp:
root@vagrant:~# su - opscode-pgsql
$ bash
opscode-pgsql@vagrant:~$ strace /opt/opscode/embedded/bin/psql 2>&1 | grep /tmp
connect(3, {sa_family=AF_FILE, path="/tmp/.s.PGSQL.5432"}, 110) = 0
If this was the only problem before your initial attempt to reconfigure the unix socket location, I would suggest using the built-in psql binary rather than the distribution provided one.
Note that you usually shouldn't have to touch psql directly if all goes well. If your server is in a non-working state because of previous attempts to fix this issue, I would try the install from almost scratch:
chef-server-ctl cleanse
chef-server-ctl reconfigure
some more background
In an out-of-the-box install, the socket should end up in /tmp already without any reconfiguration:
vagrant@vagrant:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.4 LTS
Release: 12.04
Codename: precise
vagrant@vagrant:~$ ls -al /tmp/.s.PGSQL.5432
srwxrwxrwx 1 opscode-pgsql opscode-pgsql 0 Feb 25 08:14 /tmp/.s.PGSQL.5432
The oc_id migration uses the tcp socket rather than the unix socket:
sudo -E strace bundle exec rake db:migrate 2>&1 | grep 5432
connect(7, {sa_family=AF_INET, sin_port=htons(5432), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
If this migration fails even on a clean installation, check the postgres logs (as you did before) and double check the postgresql is listening on TCP port 5432.
Hi! Thanks!
You know how something twigs?
I was doing some testing of a facebook login snippet and had commented
out localhost 127.0.0.1 in /etc/hosts.
Uncomment.
Cleanse.
Reconfigure.
Everything works!
Have a great day down there! Enjoy summer!
-Dave
northcreek.ca
@northcreeksoft
Buried in winter :-)
Glad to hear this got sorted. Closing this ticket out. Thanks!
Having a similar problem! host is intact! getting the following error on
sudo chef-compliance-ctl cleanse
sudo chef-compliance-ctl reconfigure
The reconfiguration ends up with an error.
Even a complete purge and dpkg reinstallation followed by reconfigure ends up with the same error:
Running handlers:
[2016-06-24T20:56:24+05:30] ERROR: Running exception handlers
Running handlers complete
[2016-06-24T20:56:24+05:30] ERROR: Exception handlers complete
Chef Client failed. 23 resources updated in 01 minutes 37 seconds
[2016-06-24T20:56:24+05:30] FATAL: Stacktrace dumped to /opt/chef-compliance/embedded/cookbooks/cache/chef-stacktrace.out
[2016-06-24T20:56:24+05:30] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-06-24T20:56:24+05:30] FATAL: Mixlib::ShellOut::ShellCommandFailed: enterprise_pg_database[chef_compliance] (chef-compliance::_postgresql-enable line 47) had an error: Mixlib::ShellOut::ShellCommandFailed: execute[create_database_chef_compliance] (/opt/chef-compliance/embedded/cookbooks/cache/cookbooks/enterprise/providers/pg_database.rb line 19) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of createdb --template template0 --encoding UTF-8 chef_compliance ----
STDOUT:
STDERR: createdb: could not connect to database template1: FATAL: no pg_hba.conf entry for host "[local]", user "chef-pgsql", database "template1", SSL off
---- End output of createdb --template template0 --encoding UTF-8 chef_compliance ----
Ran createdb --template template0 --encoding UTF-8 chef_compliance returned 1
@arunachalaramana this appears to be a Chef Compliance install. Please contact support@chef.io for further assistance.
| gharchive/issue | 2015-02-12T21:57:38 | 2025-04-01T04:33:46.976376 | {
"authors": [
"arunachalaramana",
"davenorthcreek",
"marcparadise",
"stevendanna"
],
"repo": "chef/chef-server",
"url": "https://github.com/chef/chef-server/issues/89",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
78990050 | Tests for internal _identifiers endpoint
This forward-ports @marcparadise's tests for _identifiers with minor changes
to account for the new project structure.
cc @chef/lob
:+1:
| gharchive/pull-request | 2015-05-21T13:10:46 | 2025-04-01T04:33:46.978290 | {
"authors": [
"sdelano",
"stevendanna"
],
"repo": "chef/chef-server",
"url": "https://github.com/chef/chef-server/pull/284",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
686040239 | windows_path doesn't effect ISE
Description
This is somewhere between a bug and a enhancement request.
The windows_path resource is - at least it seems to me - somewhat useless on modern systems, because it sets the old kind of path which is not applicable to powershell or anything that leverages the so-called "ISE" framework. I think.
If you use windows_path to add an element to the path then pop open powershell and do echo $env:Path, it will not be there. If you dig up wherever cmd.exe is and do echo %PATH%, it will be there. How can this be? Well this page was the closest thing to an explanation I've found... it talks through the 6 different profiles - 3 types that affect "classic" stuff and then 3 types for "ISE" environments.
Again, Windows is not my strong-suit, and I could be missing things. But based on this I think it'd be useful if windows_path could set ISE paths.
Chef Version
15 and 16, various versions.
Platform Version
Windows 10, Windows Server 2016
I don't know how you set your path and all, but does your problem persist after a reboot?
@gaelcolas it's something-something WMI queries? https://github.com/chef/chef/blob/f05dc3618241eaac5755b9904fc9ceb43e59a455/lib/chef/provider/windows_env.rb#L199
FTR reboot does not help. Though, honestly, if I have to reboot a production system to add something to the search path for the system, we've already failed. It's not a kernel parameter, it's just a default for an environment variable.
As far as I can tell from my research, the old places path goes are seen by 'classic' applications like cmd.exe and friends. Powershell or anything that falls under ISE use new paths, but I could be totally wrong.
There is really just one definitive set of environment variables on windows. The set has 2 scopes. A "system" (or machine) and "user" scope. For the PATH variable, the "user" scope paths are appended to the "system" scoped paths to form the complete PATH for a given process environment.
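To illustrate the merge rule described above (a minimal sketch; the function is hypothetical and does not call any Windows API):

```python
def effective_path(machine_entries, user_entries):
    # Windows composes the PATH a new process inherits by appending
    # user-scope entries after machine-scope entries. An already-running
    # shell (such as an open ISE) keeps its stale copy until restarted.
    return ";".join(list(machine_entries) + list(user_entries))

print(effective_path(["C:\\Windows\\system32"], ["C:\\Users\\me\\bin"]))
# → C:\Windows\system32;C:\Users\me\bin
```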
Looking at the Chef resources, it looks like chef uses WMI to set the path at the "user" scope. This will update the variable in the Windows registry which is the authoritative location for all "persistent" environment variables. "Persistent" meaning that the variable is expected to be available to any process at startup. However, this will not export the value into the current process or shell. Maybe that is where the problem is occurring here? Once the converge completes, assuming you are running chef-client in a Powershell ISE environment, the PATH variable would look exactly the same as it did before chef-client ran. You would need to close the ISE and reopen to see the new value.
Even if you simple set $env:PATH using a powershell resource, it would not impact the PATH of the current ISE shell since that is the parent process of the chef process and would therefore not inherit any environment changes.
This all said, I might be completely misunderstanding your scenario. It would be helpful to get a more detailed set of repro steps to understand what you are seeing @jaymzh.
That sounds about right. How does one set the "system" scope? I can send a PR to fix it.
Going over the code again it looks like I missed a key point, the variable is scoped to the <SYSTEM> user which effectively scopes it for everyone.
I'm not certain what your repro steps are, but assuming you were in an ISE environment and converged a windows_path resource, I would not expect the environment to change until you closed and reopened the ISE.
| gharchive/issue | 2020-08-26T06:14:47 | 2025-04-01T04:33:46.990034 | {
"authors": [
"gaelcolas",
"jaymzh",
"mwrock"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/issues/10354",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
39230130 | "knife upload" doesn't freeze cookbooks "knife cookbook upload" does
Using the latest chef 11.12.8 - and using knife upload does not freeze the cookbooks when the option is chosen.
$ knife cookbook delete lvm
Do you really want to delete lvm version 1.2.2? (Y/N)y
Deleted cookbook[lvm version 1.2.2]
$ knife upload cookbooks --freeze true
Created cookbooks/lvm
$ knife cookbook show lvm latest
attributes:
chef_type: cookbook_version
cookbook_name: lvm
definitions:
files:
frozen?: false
json_class: Chef::CookbookVersion
knife cookbook upload does though.
knife cookbook upload --freeze lvm
Uploading lvm [1.2.2]
Uploaded 1 cookbook.
$ knife cookbook show lvm latest | grep frozen
frozen?: true
If you've got metadata.json with the frozen attribute set to true I think it probably will freeze it.
You might also be able to drop a frozen true tag into metadata.rb
This is still an issue as of knife 12.3.0
Any updates here? It would be pretty handy for this to work as described.
This 4 year old bug bit me this week.
| gharchive/issue | 2014-07-31T19:42:53 | 2025-04-01T04:33:46.993371 | {
"authors": [
"jekriske",
"lamont-granquist",
"luckymike",
"petecheslock",
"pmacdougall"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/issues/1728",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
56445250 | pin rspec to 3.1.x for now
rspec 3.2.0 release changed some private APIs in rspec that we were using for audit
mode and broke our master (kind of expected/understandable).
@chef/client-engineers @chef/client-core should fix audit mode failures on master.
:+1:
:+1: BTW the reasoning for changes in RSpec is here: http://rspec.info/blog/2015/02/rspec-3-2-has-been-released/ and https://github.com/rspec/rspec-core/pull/1808
:+1:
| gharchive/pull-request | 2015-02-03T22:07:52 | 2025-04-01T04:33:46.995920 | {
"authors": [
"danielsdeleo",
"jaymzh",
"lamont-granquist",
"tyler-ball"
],
"repo": "chef/chef",
"url": "https://github.com/chef/chef/pull/2854",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
68755149 | chef_wm_search dialyzer issue
sqerl records will come back as [[binary()]]
@chef/lob
:+1:
| gharchive/pull-request | 2015-04-15T18:27:04 | 2025-04-01T04:33:47.002310 | {
"authors": [
"joedevivo",
"tylercloke"
],
"repo": "chef/oc_erchef",
"url": "https://github.com/chef/oc_erchef/pull/134",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
433884638 | BUILD FAILED value.xml
Hi, I created a new project with Ionic 4 to test the messaging and background-mode functionality, but after installing cordova-plugin-firebase-messaging I can't build the project.
Note: I'm trying to build for Android.
error:
:app:splitsDiscoveryTaskRelease UP-TO-DATE
/Users/junior-anzolin/.gradle/caches/transforms-1/files-1.1/appcompat-v7-28.0.0.aar/3db66f8e32ade988d41d84026e4a634c/res/values/values.xml: AAPT: error: resource android:attr/fontVariationSettings not found.
/Users/junior-anzolin/.gradle/caches/transforms-1/files-1.1/appcompat-v7-28.0.0.aar/3db66f8e32ade988d41d84026e4a634c/res/values/values.xml: AAPT: error: resource android:attr/ttcIndex not found.
/Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/app/build/intermediates/incremental/mergeReleaseResources/merged.dir/values-v28/values-v28.xml:7: error: resource android:attr/dialogCornerRadius not found.
/Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/app/build/intermediates/incremental/mergeReleaseResources/merged.dir/values-v28/values-v28.xml:11: error: resource android:attr/dialogCornerRadius not found.
/Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/app/build/intermediates/incremental/mergeReleaseResources/merged.dir/values/values.xml:245: error: resource android:attr/fontVariationSettings not found.
/Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/app/build/intermediates/incremental/mergeReleaseResources/merged.dir/values/values.xml:245: error: resource android:attr/ttcIndex not found.
error: failed linking references.
Failed to execute aapt
com.android.ide.common.process.ProcessException: Failed to execute aapt
at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:796)
at com.android.build.gradle.tasks.ProcessAndroidResources.invokeAaptForSplit(ProcessAndroidResources.java:551)
at com.android.build.gradle.tasks.ProcessAndroidResources.doFullTaskAction(ProcessAndroidResources.java:285)
at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:794)
... 48 more
Caused by: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
at com.android.builder.internal.aapt.v2.QueueableAapt2.lambda$makeValidatedPackage$1(QueueableAapt2.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.android.builder.png.AaptProcess$NotifierProcessOutput.handleOutput(AaptProcess.java:454)
at com.android.builder.png.AaptProcess$NotifierProcessOutput.err(AaptProcess.java:411)
at com.android.builder.png.AaptProcess$ProcessOutputFacade.err(AaptProcess.java:332)
at com.android.utils.GrabProcessOutput$1.run(GrabProcessOutput.java:104)
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:processReleaseResources'.
> Failed to execute aapt
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
BUILD FAILED in 11s
:app:processReleaseResources FAILED
33 actionable tasks: 4 executed, 29 up-to-date
(node:1736) UnhandledPromiseRejectionWarning: Error: /Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/gradlew: Command failed with exit code 1 Error output:
at ChildProcess.whenDone (/Users/junior-anzolin/Projetos/Testando/BackgroundFirebase/platforms/android/cordova/node_modules/cordova-common/src/superspawn.js:169:23)
at ChildProcess.emit (events.js:193:13)
at maybeClose (internal/child_process.js:1001:16)
at Socket.stream.socket.on (internal/child_process.js:405:11)
at Socket.emit (events.js:193:13)
at Pipe._handle.close (net.js:614:12)
(node:1736) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:1736) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I tried various approaches and at one point resolved the problem, but I don't know what I did, and afterwards the Angular router stopped working; I reverted all the changes and am back to square one.
The solution to the problem is to add the latest version of the cordova-support-google-services plugin:
cordova plugin add cordova-support-google-services@1.3.1
| gharchive/issue | 2019-04-16T16:50:04 | 2025-04-01T04:33:47.024268 | {
"authors": [
"junior-anzolin"
],
"repo": "chemerisuk/cordova-plugin-firebase-messaging",
"url": "https://github.com/chemerisuk/cordova-plugin-firebase-messaging/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
172203164 | Support for biological molecules
I'm wondering if there's some space for an AminoAcid class. Objects of this class would be linked to each Frame object and 'contain' many objects of the Atom class. That is, a more 'OpenBabel' way of treating biological molecules and a more intuitive one.
I'm currently developing C++ software to use on biological molecules. Chemfiles is the only library that supports the major molecular dynamics trajectory formats and the PDB format at the same time. But the lack of a Residue concept is critical. On the other hand, OpenBabel can't read NetCDF, and my software became considerably slower when I switched away from chemfiles.
Support for AminoAcid or some kind of Residue class is something I have been thinking about for a long time.
I'd like to have it, but I see two issues to resolve:
how do we know which atoms are in which residue? For well-formed PDB that is easy, but for other formats, it is harder. Some kind of heuristic could be used, but might be complex to implement.
how do we define the API so that it stays easy to use for non-biochemistry files?
I am on the phone right now, so I'll expand on that next week, when I get back home.
Yes. I think both issues imply the same requisite:
define a non-mandatory Residue class. That is, a class that connects the Frame and Atom classes, but leaving enough room for these classes to interact independently, in case there's no residue info.
There should also be functions to 'pass' the Residue objects from a frame to other frames, so one can read the trajectory of a biomolecule in any of the available formats and also its PDB to get the residue info. I know it seems a bit of a 'hack' but I think is the easiest and safest way to get residue info for trajectories.
I'm not sure of any of this, so please tell me what you think (when you get back). I really want to use chemfiles, so I'll start writing some code. I'm far from being a good programmer but maybe I can write something useful.
There should also be functions to 'pass' the Residue objects from a frame to other frames, so one can read the trajectory of a biomolecule in any of the available formats and also its PDB to get the residue info. I know it seems a bit of a 'hack' but I think is the easiest and safest way to get residue info for trajectories.
This is already possible, using the Trajectory::set_topology function.
define a non-mandatory Residue class. That is, a class that connects the Frame and Atom classes, but leaving enough room for these classes to interact independently, in case there's no residue info.
Yes, I think the Residue class should live in the Topology class, which is already the way to connect atomic informations to positions. I started to write a simple version of the Residue class, like this:
class Residue {
public:
/// Constructor
Residue(std::vector<size_t> atoms, size_t id, std::string name = "");
/// Get the number of atoms contained in the residue
size_t size() const {
return atoms_.size();
}
/// Get the indexes of the atoms
const std::vector<size_t>& atoms() const {
return atoms_;
}
/// Get the name of the residue, if any.
const std::string& name() const {
return name_;
}
/// Get the id of this residue
size_t id() const {
return id_;
}
private:
std::vector<size_t> atoms_;
std::string name_;
size_t id_ = static_cast<size_t>(-1);
};
Then, Residue could be added to the Topology like this:
class Topology {
public:
// ... Existing methods
const std::vector<Residue>& residues() const {
return residues_;
}
bool are_bonded(const Residue& res1, const Residue& res2) const;
private:
// ... Existing members
std::vector<Residue> residues_;
};
Here we have name and id for each residue, list of atoms in the residue, and a way to check if two residues are bonded together. As I do not work with residues at all, what other methods do you think would be useful here?
Sorry for taking so long to answer, working on too many projects right now.
Great solution. When I started coding I thought there was no way to integrate the Residue concept without disturbing the library.
I was actually thinking of a template class called "Residue" and 2 derived classes called "AminoAcid" and "Nucleotide". But maybe this is not necessary.
Anyway, I think the Atom class should have variables indicating the id and the name of the residue to which it belongs:
/// Get the id of the corresponding residue
size_t resid() const {
return resid_;
}
/// Set the id of the corresponding residue
void set_resid(size_t resid) {
resid_ = resid;
}
/// Get the name of the corresponding residue
std::string resname() const {
    return resname_;
}
/// Set the name of the corresponding residue
void set_resname(std::string resname) {
    resname_ = resname;
}
private:
size_t resid_;
std::string resname_;
Both of them could be read from the PDB:
void PDBFormat::read_ATOM(Frame& frame, const std::string& line) {
...
atom.set_resname( trim(line.substr(18, 3)) );
atom.set_resid( std::stoi(line.substr(23, 4)); ); // maybe cast to size_t previously?
...
}
Then one could write a Topology function to iterate over all atoms, grouping consecutive atoms that share the same resid, and "build" the residue sequence. Such a function would only be used once per trajectory, and the resulting Topology applied to the rest of the frames.
size_t prev_resid = atoms_.front().resid();
std::string prev_resname = atoms_.front().resname();
std::vector<size_t> residue_atoms;
for (size_t i = 0; i < atoms_.size(); i++) {
    if (atoms_[i].resid() != prev_resid) {
        // close the previous residue and start a new one
        residues_.emplace_back(residue_atoms, prev_resid, prev_resname);
        residue_atoms.clear();
    }
    residue_atoms.push_back(i);
    prev_resid = atoms_[i].resid();
    prev_resname = atoms_[i].resname();
}
residues_.emplace_back(residue_atoms, prev_resid, prev_resname);
I'm not sure about this. It's probably just nonsense, haha. Perhaps you can come up with something better.
Other additions to Topology would be:
iterator begin() {return residues_.begin();}
const_iterator begin() const {return residues_.begin();}
const_iterator cbegin() const {return residues_.cbegin();}
iterator end() {return residues_.end();}
const_iterator end() const {return residues_.end();}
const_iterator cend() const {return residues_.cend();}
private:
size_t nresidues_; // number of residues in the topology
Iterators over Residue atoms are also useful. This way one could iterate over the atoms of specific residues, instead of iterating over every atom of the Frame.
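A minimal, self-contained sketch of that idea (not the actual chemfiles API; the class shape is extrapolated from the Residue sketch earlier in the thread): forwarding begin()/end() to the internal vector of atom indices is all it takes to make a Residue usable in a range-based for loop.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical Residue exposing iterators over its atom indices.
class Residue {
public:
    Residue(std::vector<size_t> atoms, size_t id, std::string name = "")
        : atoms_(std::move(atoms)), name_(std::move(name)), id_(id) {}

    using const_iterator = std::vector<size_t>::const_iterator;
    // Iterating a Residue yields the indices of its atoms in the Frame.
    const_iterator begin() const { return atoms_.begin(); }
    const_iterator end() const { return atoms_.end(); }

    size_t size() const { return atoms_.size(); }
    size_t id() const { return id_; }
    const std::string& name() const { return name_; }

private:
    std::vector<size_t> atoms_;
    std::string name_;
    size_t id_;
};

// Demo helper: sum of the atom indices, exercising the range-based loop.
inline size_t sum_indices(const Residue& residue) {
    size_t sum = 0;
    for (size_t i: residue) {
        sum += i;
    }
    return sum;
}
```

Given a frame, the indices obtained this way can be fed straight into the array returned by frame.positions() to get the coordinates of one residue only.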
Chain id could also be read. An AminoAcidType variable for distinguishing between regular and coarse-grained amino acids could also be useful (coarse-grained MD of macromolecules is becoming popular). But these could be added later, if you so desire.
That's all I can think of. I don't think there's much more to add for an I/O library. For example, OpenBabel's OBResidue class doesn't go much beyond this.
Again, I'm not sure the way to construct the Residue objects that I'm proposing is the most appropiate.
Hope some of this is useful to you.
Anyway, I think the Atom class should have a variable indicating the id and the name of residue to which they belong to:
I do not like reversing the knowledge: an Atom should not have knowledge of which Residue it lives in, a Residue should know which Atom it contains.
Looking for the Residue containing a specific atom can be provided by the topology, with a const Residue& Topology::residue(size_t atom_id) const function.
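A hedged sketch of how that lookup could work — a plain linear scan over the residues' atom-index lists. The free-function form and names here are illustrative, not the proposed chemfiles signature; a real implementation would live on Topology and return a const Residue&.

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Each residue is represented here as just its list of atom indices.
// Returns the index of the residue containing `atom_id`, or throws
// if no residue contains it.
inline size_t residue_for_atom(const std::vector<std::vector<size_t>>& residues,
                               size_t atom_id) {
    for (size_t r = 0; r < residues.size(); r++) {
        const auto& atoms = residues[r];
        if (std::find(atoms.begin(), atoms.end(), atom_id) != atoms.end()) {
            return r;
        }
    }
    throw std::out_of_range("no residue contains this atom");
}
```

For large topologies, a real Topology::residue could cache an atom-to-residue map to avoid the linear scan on every call.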
Such function would only be used once for each trajectory and then the resulting Topology applied to the rest of the frames.
This cannot be done, because some kinds of trajectory (for example the new GROMACS format, TNG) support grand-canonical simulations, where the topology changes during the simulation.
Other additions to Topology would be:
iterator begin() {return residues_.begin();}
const_iterator begin() const {return residues_.begin();}
const_iterator cbegin() const {return residues_.cbegin();}
iterator end() {return residues_.end();}
const_iterator end() const {return residues_.end();}
const_iterator cend() const {return residues_.cend();}
All these iterators could be accessed using the Topology::residues function, making things like this possible:
for (auto& residue: topology.residues()) {
// do stuff
}
size_t nresidues_; // number of residues in the topology
This is Topology::residues_.size().
Iterators over Residue atoms are also useful. This way one could iterate over the atoms of specific residues, instead of iterating over every atom of the Frame.
Yes, this would be nice!
Again, I'm not sure the way to construct the Residue objects that I'm proposing is the most appropriate.
Your implementation would work for PDB, using a temporary map atom id <=> res id, and then adding the residues to the topology. Something like:
void PDBFormat::read_ATOM(Frame& frame, const std::string& line) {
    std::map<size_t, Residue> residues;
    ...
    resname = trim(line.substr(18, 3));
    resid = std::stoul(line.substr(23, 4));
    // Get or create the residue with this id
    auto it = residues.find(resid);
    if (it == residues.end()) {
        it = residues.emplace(resid, Residue({}, resid, resname)).first;
    }
    it->second.add_atom(current_atom);
    ...
    for (auto& entry: residues) {
        // transfer residues from the map to the topology
        topology.add_residue(entry.second);
    }
    ...
}
So an initial version with PDB support is now merged in master and will make it to the next release!
See #38 for improvements to this feature, and please report any enhancement you can think of!
I've been using the residue implementation and just realized I overlooked something when you showed me the initial implementation.
One can iterate over residues and then iterate over each residue's atoms, but you can't get the positions from the atoms. Neither can you get the indices of the atoms you are iterating over.
An intuitive way to solve this would be to modify the Atom class to include this information:
private:
size_t id_ = static_cast<size_t>(-1);
Vector3D position_; // to differentiate position_ from positions_
Add functions to access these variables as const references, and then modify the Atom constructor to include id_ and position_. Then, modify the PDB format to construct each atom with this info. And probably more stuff that I can't think of right now.
But maybe you never intended to add position info to the Atom class (since you made the positions_ variable in the Frame class); if that's the case, the id_ will do. Once I have the id, I can plug it into the positions() function of the Frame class. Although, I must say, this seems a bit cumbersome.
You already have the indices, as the number returned by iteration over a residue are the indices in the corresponding Frame/Topology. So you can do something like this:
auto frame = file.read();
auto positions = frame.positions();
for (auto& residue: frame.topology().residues()) {
for (size_t i: residue) {
auto& position = positions[i];
// use the position
}
}
Although, I must say, this seems a bit cumbersome.
I agree. The two advantage of having the positions/velocities separated from the Atom are:
The C API (and thus all languages bindings) is easier to use, as there would be no way to access all positions in a single call, and one would have to create a copy of every atom at each step to extract the data;
The code can be more efficient, only reading the positions/velocities and leaving the whole Topology unchanged.
But there is a reason why chemfiles is still in beta! If this design is really too cumbersome to use, we can change it to something better, while keeping in mind the other constraints.
You already have the indices, as the number returned by iteration over a residue are the indices in the corresponding Frame/Topology.
Great. My mistake.
As for the advantages of not including positions/velocities in the Atom class, they seem quite convincing. id_ for the Atom class could still be included, though. It won't change during the run and would allow a more intuitive way to access atom coordinates.
I have another annoying remark to make.
I think the current name_ variable should be thought of more as element_, and a new name_ variable added. Atom names in the PDB format not only indicate the element, but also the position in the residue. So, for example, the name_ of the last hydrogen in glutamine would be "HE22", and the element_ would be "H". The name_ would be read from columns 13-16, and the element_ from columns 77-78 in the PDB format.
In case the format does not hold any element_ information, it could be assigned the same contents as the name_ variable.
I have another annoying remark to make.
Go on, feedback is very appreciated!
I think the current name_ variable should be thought of more as element_, and a new name_ variable added. Atom names in the PDB format not only indicate the element, but also the position in the residue. So, for example, the name_ of the last hydrogen in glutamine would be "HE22", and the element_ would be "H". The name_ would be read from columns 13-16, and the element_ from columns 77-78 in the PDB format.
Yes, this could be nice to have. I would use label_ and element_ but I agree on the idea. Would you have the time to implement this? In any case, feel free to open an issue about this to have a record of the work to do.
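To make the proposal concrete, a minimal sketch of the split (hypothetical member layout and constructors, not the actual chemfiles API) could look like this:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical sketch of the proposed split: 'name' is the PDB atom name
// (e.g. "HE22"), 'element' is the chemical element (e.g. "H").
class Atom {
public:
    Atom(std::string name, std::string element)
        : name_(std::move(name)), element_(std::move(element)) {}
    // When a format carries no element information, fall back to the name.
    explicit Atom(std::string name)
        : name_(name), element_(std::move(name)) {}

    const std::string& name() const { return name_; }
    const std::string& element() const { return element_; }

private:
    std::string name_;
    std::string element_;
};
```

A PDB reader could then fill name from the atom-name field and element from the element field, while formats without element data would use the one-argument constructor.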
id_ for the Atom class could still be included though. It won't change during the run and would allow a more intuitive way to access atoms coordinates.
I am more reticent about this: what would be the use case where you have the atom but not the id? My problem is that an Atom should not know about how it is being used, or where it is being stored. More generally, what would be the id of an atom created outside of a Frame? This is perfectly legal now:
auto atom = Atom("Zn");
auto mass = atom.mass();
auto radius = atom.vdw_radius();
// Use this data for an analysis algorithm
| gharchive/issue | 2016-08-19T19:27:38 | 2025-04-01T04:33:47.049714 | {
"authors": [
"Luthaf",
"pgbarletta"
],
"repo": "chemfiles/chemfiles",
"url": "https://github.com/chemfiles/chemfiles/issues/35",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2233079217 | Migrate to ESLint v9.0.0
https://eslint.org/docs/latest/use/migrate-to-9.0.0
Blocked on https://github.com/cheminfo/eslint-config/issues/49
| gharchive/issue | 2024-04-09T10:03:54 | 2025-04-01T04:33:47.054959 | {
"authors": [
"hamed-musallam",
"targos"
],
"repo": "cheminfo/eslint-config-cheminfo-react",
"url": "https://github.com/cheminfo/eslint-config-cheminfo-react/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1708446162 | 🛑 Homepage is down
In d1b8699, Homepage (https://larrychen.tech) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Homepage is back up in 43c232e.
| gharchive/issue | 2023-05-13T05:42:27 | 2025-04-01T04:33:47.061402 | {
"authors": [
"chendachao"
],
"repo": "chendachao/uptime",
"url": "https://github.com/chendachao/uptime/issues/761",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1875353578 | 🛑 tt is down
In cda3a53, tt (https://telebot-app-serverless-i7z8-chenhaonanniubi.vercel.app/checkhealth) was down:
HTTP code: 500
Response time: 2439 ms
Resolved: tt is back up in efedc8c after 11 minutes.
| gharchive/issue | 2023-08-31T11:37:46 | 2025-04-01T04:33:47.068712 | {
"authors": [
"chenhaonanniubi"
],
"repo": "chenhaonanniubi/upptime",
"url": "https://github.com/chenhaonanniubi/upptime/issues/2320",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1890897688 | 🛑 tt is down
In b87436c, tt (https://telebot-app-serverless-i7z8-chenhaonanniubi.vercel.app/checkhealth) was down:
HTTP code: 500
Response time: 4830 ms
Resolved: tt is back up in 876aaab after 10 minutes.
| gharchive/issue | 2023-09-11T16:49:26 | 2025-04-01T04:33:47.071087 | {
"authors": [
"chenhaonanniubi"
],
"repo": "chenhaonanniubi/upptime",
"url": "https://github.com/chenhaonanniubi/upptime/issues/2459",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2061409143 | 🛑 tt is down
In b6caa84, tt (https://telebot-app-serverless-i7z8-chenhaonanniubi.vercel.app/checkhealth) was down:
HTTP code: 500
Response time: 3197 ms
Resolved: tt is back up in 902504c after 12 minutes.
| gharchive/issue | 2024-01-01T08:22:50 | 2025-04-01T04:33:47.073581 | {
"authors": [
"chenhaonanniubi"
],
"repo": "chenhaonanniubi/upptime",
"url": "https://github.com/chenhaonanniubi/upptime/issues/3869",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2412070369 | 🛑 tt is down
In 0804aa8, tt (https://telebot-app-serverless-i7z8-chenhaonanniubi.vercel.app/checkhealth) was down:
HTTP code: 500
Response time: 2738 ms
Resolved: tt is back up in b3be39f after 11 minutes.
| gharchive/issue | 2024-07-16T21:26:03 | 2025-04-01T04:33:47.075908 | {
"authors": [
"chenhaonanniubi"
],
"repo": "chenhaonanniubi/upptime",
"url": "https://github.com/chenhaonanniubi/upptime/issues/6423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1690600921 | 🛑 tt is down
In 7b1be75, tt (https://telebot-app-serverless-i7z8-chenhaonanniubi.vercel.app/checkhealth) was down:
HTTP code: 500
Response time: 3084 ms
Resolved: tt is back up in 44db1ce.
| gharchive/issue | 2023-05-01T09:24:13 | 2025-04-01T04:33:47.078238 | {
"authors": [
"chenhaonanniubi"
],
"repo": "chenhaonanniubi/upptime",
"url": "https://github.com/chenhaonanniubi/upptime/issues/823",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
195598672 | Adding aliases and root in resolve will not work in tests.
The errors:
If you add any aliases or root to the webpack.config.base.js, it will not function the same in a test environment, and will not resolve appropriately. The error will be something like:
Error: Cannot find module 'app/utils/api'
How to reproduce:
Simply add a single alias or root to the webpack.config.base.js
export default {
// ... whatever
resolve: {
extensions: ['', '.js', '.jsx', '.json', '.css', '.scss'],
packageMains: ['webpack', 'browser', 'web', 'browserify', ['jam', 'main'], 'main'],
alias: { 'app': `${__dirname}/app` },
}
}
Then add this to the webpack.config.test.js
module.exports = {
// ...
resolve: devConfig.resolve
}
Use an aliased import anywhere in the application and import it in a test.
import 'app/utils/api'; // <- just an example.
Run tests.
$ npm run test
Writing resolve.alias in the test config and adding babel-plugin-webpack-aliases to the TEST env of .babelrc may be a simple workaround.
webpack.config.test.js
...
module.exports = validate({
...
+ resolve: {
+ alias: { app: `${__dirname}/app` },
+ }
});
.babelrc
{
...
"env": {
...
"test": {
"plugins": [
+ ["webpack-aliases", { "config": "webpack.config.test.js" }],
["webpack-loaders", { "config": "webpack.config.test.js", "verbose": false }]
]
}
}
}
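An alternative that avoids coupling the test run to the webpack config at all is babel-plugin-module-resolver, which declares the alias directly in .babelrc (a sketch; the plugin must be installed separately):

```json
{
  "env": {
    "test": {
      "plugins": [
        ["module-resolver", { "alias": { "app": "./app" } }]
      ]
    }
  }
}
```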
| gharchive/issue | 2016-12-14T17:46:20 | 2025-04-01T04:33:47.082582 | {
"authors": [
"AlexFrazer",
"jhen0409"
],
"repo": "chentsulin/electron-react-boilerplate",
"url": "https://github.com/chentsulin/electron-react-boilerplate/issues/620",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
214495457 | Webpack Code Split Problem
I tried to use webpack code splitting, it works on Development Mode, however in Package Mode, the Chunk is not detected, it throws a
file:///Users/name/Desktop/code/name/project-app/release/mac/APP.app/Contents/Resources/dist/0.bundle.js net::ERR_FILE_NOT_FOUND
The file is in the dist/ folder, which should be served to the package.
Nevermind, the problem was the public path in the webpack prod file. I expect to make a PR soon.
Thanks.
| gharchive/issue | 2017-03-15T19:09:40 | 2025-04-01T04:33:47.084601 | {
"authors": [
"amorino"
],
"repo": "chentsulin/electron-react-boilerplate",
"url": "https://github.com/chentsulin/electron-react-boilerplate/issues/817",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1586363840 | 🛑 Chequeo Colectivo is down
In 5a8fb67, Chequeo Colectivo (https://chequeado.com/colectivo) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Chequeo Colectivo is back up in 37fc7bf.
| gharchive/issue | 2023-02-15T18:45:53 | 2025-04-01T04:33:47.086992 | {
"authors": [
"giulianobrunero"
],
"repo": "chequeado/status",
"url": "https://github.com/chequeado/status/issues/1538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1339535095 | 🛑 Chequeado is down
In 569a153, Chequeado (https://chequeado.com) was down:
HTTP code: 502
Response time: 9084 ms
Resolved: Chequeado is back up in 99003cb.
| gharchive/issue | 2022-08-15T21:32:03 | 2025-04-01T04:33:47.089261 | {
"authors": [
"giulianobrunero"
],
"repo": "chequeado/status",
"url": "https://github.com/chequeado/status/issues/907",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1464554703 | [BUG] can't capture to an existing file
Describe the bug
Create a new capture item, set the path to an existing file (vault/Inbox/scratch.md), no template, create if it doesn't exist, write to the bottom of the file. The capture format is:
\n{{VALUE:test}}\n
When I run this, I get an error message about creating the capture path, but the file does exist.
Expected behavior
Prompt for a value and append it to the end of the file.
Desktop (please complete the following information):
windows 11
version 1.0.3
Additional context
I have another QuickAdd item that adds tasks to the bottom of my daily note; that file exists, and its file name format uses {{DATE}} values.
Restarted Obsidian and now it works. Weird.
| gharchive/issue | 2022-11-25T13:30:28 | 2025-04-01T04:33:47.106086 | {
"authors": [
"pcause"
],
"repo": "chhoumann/quickadd",
"url": "https://github.com/chhoumann/quickadd/issues/334",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
115778916 | Bump svgsaver
Bumped svgsaver to 0.4.0.
Some formatting changes.
Sorry I didn't see this before. Merging.
| gharchive/pull-request | 2015-11-09T02:05:23 | 2025-04-01T04:33:47.111788 | {
"authors": [
"Hypercubed",
"curran"
],
"repo": "chiasm-project/chiasm-svgsaver",
"url": "https://github.com/chiasm-project/chiasm-svgsaver/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2074797268 | feat: set global input state, synchronize methods
To enable users to test that operations happen at the time they're requested, rather than at an indeterminate point in the next couple frames, this PR removes async/await from as many driver functions as practicable. This is especially useful for operations that require same-frame checks, such as Input::IsActionJustPressed/Released(). This change also speeds up many tests, since there is less waiting on the engine to process frames. Users can continue to support multi-frame tests by using the waiting extension methods themselves.
Additionally, this PR adds support/workaround for the separation of input action events and input action state in Godot 4.2, so that StartAction() and EndAction() extension methods continue to set global input state (e.g., Input::IsActionPressed()).
Exceptions to synchronization
ActionsControlExtensions::HoldActionFor() - requires an await to hold the action
KeyboardControlExtensions::HoldKeyFor() - requires an await to hold the key press
Camera2DDriver::MoveIntoView() and Camera2DDriver::WaitUntilSteady() - because Camera2D incorporates its own motion-easing model, it's not straightforward to eliminate asynchrony in waiting for its motion to finish. I've added a method that starts moving the camera and returns immediately, but users who want to have the camera at its new position afterwards will need to use WaitUntilSteady() themselves.
Fixture::AddToRoot() and dependent methods - requires waiting a frame for a new Node's _Ready() method to be called
Fixture cleanup actions - may require asynchronous operations (e.g., deleting a file)
Other notes
I've made TabContainerDriver methods for selecting a tab synchronous, even though Godot takes an extra frame to update content visibility. In lieu of waiting for contents to become visible, I've added a note to the documentation for tab-selection methods about this, and updated the content-visibility test to manually wait a frame after selecting a tab and before checking visibility.
BREAKING CHANGES
Most driver methods are no longer async, are void instead of returning Task (or return T instead of Task<T>), and do not have a frame delay.
Closing in favor of #chickensoft-games/GodotTestDriver#1
| gharchive/pull-request | 2024-01-10T17:02:56 | 2025-04-01T04:33:47.117374 | {
"authors": [
"wlsnmrk"
],
"repo": "chickensoft-games/godot-test-driver",
"url": "https://github.com/chickensoft-games/godot-test-driver/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1526020359 | Possibly missing error reporting
I get status Failed for file-based transcriptions but there is no indication of what the cause is (in UI or logs). Running from main branch.
I didn't know the logs are written to a file. This is what I get:
[2023-01-09 21:19:02,656] model_loader.download_model:102 DEBUG -> Downloading model from https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt to /home/user/.cache/whisper/tiny.pt
[2023-01-09 21:19:03,046] transcriber.run:611 DEBUG -> Starting next transcription task
[2023-01-09 21:19:03,046] transcriber.run:358 DEBUG -> Starting whisper file transcription, task = FileTranscriptionTask(file_path='/home/user/Music/Recording 6.ogg', transcription_options=TranscriptionOptions(language=None, task=<Task.TRANSCRIBE: 'transcribe'>, model=TranscriptionModel(model_type=<ModelType.WHISPER: 'Whisper'>, whisper_model_size=<WhisperModelSize.TINY: 'tiny'>, hugging_face_model_id=None), word_level_timings=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), initial_prompt=''), file_transcription_options=FileTranscriptionOptions(file_paths=['/home/user/Music/Recording 6.ogg']), model_path='/home/user/.cache/whisper/tiny.pt', id=827779, segments=[], status=<Status.QUEUED: 'queued'>, error=None)
[2023-01-09 21:19:03,632] transcriber.read_line:417 DEBUG -> whisper (stderr): /home/user/.cache/pypoetry/virtualenvs/buzz-DqnAh-gc-py3.10/lib/python3.10/site-packages/whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
[2023-01-09 21:19:05,529] transcriber.read_line:417 DEBUG -> whisper (stderr):
[2023-01-09 21:19:05,530] transcriber.read_line:417 DEBUG -> whisper (stderr):
[2023-01-09 21:19:05,555] transcriber.run:380 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.508427, number of segments = 1
[2023-01-09 21:19:05,555] transcriber.run:591 DEBUG -> Waiting for next transcription task
It says whisper doesn't complete. One question then: is whisper output written to logs just as Buzz output or disregarded?
[2023-01-09 21:19:05,555] transcriber.run:380 DEBUG -> whisper process completed with code = 1, time taken = 0:00:02.508427, number of segments = 1
It's a bit weird that the process returned with code "1" but still had segments. I found this post which seems related: https://stackoverflow.com/a/67274925/9830227. Based on that, it looks like you can try either:
Adding this line to the if __name__ == "__main__" block:
multiprocessing.set_start_method('spawn')
Replacing this block in WhisperFileTranscriber:
if self.current_process.exitcode == 0:
self.completed.emit(self.segments)
else:
self.error.emit('Unknown error')
with:
# Allow Linux to fail with exitcode 1
if self.current_process.exitcode == 0 or (self.current_process.exitcode == 1 and platform.system() == 'Linux'):
self.completed.emit(self.segments)
else:
self.error.emit('Unknown error')
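For reference, option 1 boils down to something like the following standalone sketch (not Buzz's actual entry point):

```python
import multiprocessing

if __name__ == "__main__":
    # Must be called once, before any Process/Pool is created. 'spawn'
    # starts workers in a fresh interpreter instead of fork()ing, which
    # avoids the exit-code quirk described in the Stack Overflow answer.
    multiprocessing.set_start_method("spawn")
    print(multiprocessing.get_start_method())  # → spawn
```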
is whisper output written to logs just as Buzz output or disregarded?
The Whisper output is written to logs (see the lines that start with "whisper (stderr)", except the output that prints out the progress bar and transcription text.
multiprocessing.set_start_method('spawn')
This one worked. (haven't tried the other one)
Likely not, I don't have a proper setup for it
| gharchive/issue | 2023-01-09T17:32:22 | 2025-04-01T04:33:47.123504 | {
"authors": [
"chidiwilliams",
"faveoled"
],
"repo": "chidiwilliams/buzz",
"url": "https://github.com/chidiwilliams/buzz/issues/327",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
271526873 | React 16
Hi, @chiedo .
What do you think about supporting latest react version? Also halogen library with loaders freezed and doesn't support it.
I can do PR if you don't mind
That sounds great to me @ostapch
Thanks!
Hi @ostapch!
Did you already have a go at this? I've noticed that the two main issues seem to be that react-stl-viewer as well as halogen are using React.propTypes instead of prop-types (I only briefly tested though).
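For context, the React 16 part of the fix is usually mechanical (the component name here is illustrative):

```diff
-// React < 16: validators bundled with React (removed in 16)
-StlViewer.propTypes = { url: React.PropTypes.string };
+// React 16+: use the standalone prop-types package
+import PropTypes from 'prop-types';
+StlViewer.propTypes = { url: PropTypes.string };
```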
Hey @sonsoleslp I wasn't planning on doing so if I'm being honest, but I will now that you've brought it to my attention! I'm no longer working on the project I built this for, so it's fallen to the back of my mind.
That'd be great! I was planning on using it for a project and maybe even contributing to your library if it falls short of my requirements.
Thank you so much @chiedo !
| gharchive/issue | 2017-11-06T16:02:22 | 2025-04-01T04:33:47.126636 | {
"authors": [
"chiedo",
"kopfnuss",
"ostapch",
"sonsoleslp"
],
"repo": "chiedolabs/react-stl-viewer",
"url": "https://github.com/chiedolabs/react-stl-viewer/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
517449080 | Handle async pathRewrite function
Is this a feature request?
Yes. I would like a pathRewrite function that returns a Promise to be handled. Currently it can only return a string.
My use case is: when a request for /api/foo?auth_token=xxx arrives, I need to:
Grab auth_token and check its validity. This is an async operation
If it is valid, use the token to look up some additional information that token has permission to access in a database (also an async operation)
Assuming all goes well, return a rewritten path, like /other-api/bar?db-field-1=abc&db-field-2=abc. If it did not go well, the path will be rewritten to /other-api/404
Similar to #318, but this covers the case where more than just the response headers/status needs to be modified.
I have this working and tested in a local fork; please let me know if you would accept a PR.
Same request!
Workaround: I ended up just using the underlying http-proxy lib.
const http = require("http");
const httpProxy = require("http-proxy");
const url = require("url");
const proxy = httpProxy.createProxyServer({ target: "http://example.com" });
const server = http.createServer( /* handle HTTP requests */ );
server.on("upgrade", async (req, socket, head) => {
const parsedUrl = url.parse(req.url, true /* also parse query params */);
if (parsedUrl.pathname === "/api/foo") {
// this should be in a try/catch for any situation in which the Promise could
// reject
const usr = await someAuthService.validate(parsedUrl.query.auth_token);
const someVar = await someDbService.getField1(usr);
const otherVar = await someDbService.getField2(usr);
// Now do the path rewrite
req.url = `/other-api/bar?db-field-1=${someVar}&db-field2=${otherVar}`;
// let request through
proxy.ws(req, socket, head);
} else {
// Invalid path, close the WebSocket
socket.destroy();
// Alternatively, could choose some other path rewrite, like:
// req.url = "/other-api/404";
// proxy.ws(req, socket, head);
}
});
server.listen(3000);
I also had a need for this (in my case, the path rewriting behavior is dependent on its own HTTP request, so async is needed). As such I have created a pull request with the fix.
published http-proxy-middleware@0.21.0 with async pathRewrite
Thanks @rsethc !
| gharchive/issue | 2019-11-04T22:49:48 | 2025-04-01T04:33:47.145236 | {
"authors": [
"WestonThayer",
"chimurai",
"rsethc",
"sysoft"
],
"repo": "chimurai/http-proxy-middleware",
"url": "https://github.com/chimurai/http-proxy-middleware/issues/374",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
280110485 | any training script?
How can I train the model?
refer to https://github.com/peiyunh/tiny
My mxnet implementation will be released after I've tested it.
| gharchive/issue | 2017-12-07T12:22:43 | 2025-04-01T04:33:47.146528 | {
"authors": [
"IEEE-FELLOW",
"chinakook"
],
"repo": "chinakook/hr101_mxnet",
"url": "https://github.com/chinakook/hr101_mxnet/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1291848456 | Remove nowarn TransitName from Valid.scala
Follow-up from #2604; this @nowarn should have been removed then.
Contributor Checklist
[x] Did you add Scaladoc to every public function/method?
[N/A] Did you add at least one test demonstrating the PR?
[x] Did you delete any extraneous printlns/debugging code?
[x] Did you specify the type of improvement?
[x] Did you add appropriate documentation in docs/src?
[x] Did you state the API impact?
[x] Did you specify the code generation impact?
[x] Did you request a desired merge strategy?
[x] Did you add text to be included in the Release Notes for this change?
Type of Improvement
code cleanup
API Impact
No API impact
Backend Code Generation Impact
No impact
Desired Merge Strategy
Squash: The PR will be squashed and merged (choose this if you have no preference).
Release Notes
Removed a stray @nowarn annotation left over from removing TransitName
Reviewer Checklist (only modified by reviewer)
[ ] Did you add the appropriate labels?
[ ] Did you mark the proper milestone (Bug fix: 3.4.x, [small] API extension: 3.5.x, API modification or big change: 3.6.0)?
[ ] Did you review?
[ ] Did you check whether all relevant Contributor checkboxes have been checked?
[ ] Did you do one of the following when ready to merge:
[ ] Squash: You/ the contributor Enable auto-merge (squash), clean up the commit message, and label with Please Merge.
[ ] Merge: Ensure that contributor has cleaned up their commit history, then merge with Create a merge commit.
@mwachs5 need to run scalafmt.
| gharchive/pull-request | 2022-07-01T21:32:07 | 2025-04-01T04:33:47.165842 | {
"authors": [
"jackkoenig",
"mwachs5"
],
"repo": "chipsalliance/chisel3",
"url": "https://github.com/chipsalliance/chisel3/pull/2614",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1266745521 | Cannot run test suite in CentOS Linux 7.4
My current OS environment is:
cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
$ uname --kernel-name --kernel-release --machine
Linux 3.10.0-693.el7.x86_64 x86_64
I cannot use Docker, nor can I use sudo/root to install any packages.
When trying to run the directions from README.md, I get stuck while building the Python wheel for rapidyaml (log attached).
rapidyaml-failure.txt
I was able to get past that issue by replacing the line in requirements.txt:
git+https://github.com/litghost/rapidyaml.git@fixup_python_packaging#egg=rapidyaml
with
pyyaml
However, now I get an different error after invoking make all-tests:
/home/clavin/tools/lnx64/python-3.8.3_rdi/bin/python3: Error while finding module specification for 'fpga_interchange.patch' (ModuleNotFoundError: No module named 'fpga_interchange')
make[4]: *** [devices/xc7a35t/CMakeFiles/constraints-xc7a35t-device.dir/build.make:81: devices/xc7a35t/xc7a35t_constraints.device] Error 1
make[4]: Leaving directory '/home/clavin/fpga-interchange-tests/build'
make[3]: *** [CMakeFiles/Makefile2:3589: devices/xc7a35t/CMakeFiles/constraints-xc7a35t-device.dir/all] Error 2
make[3]: Leaving directory '/home/clavin/fpga-interchange-tests/build'
make[2]: *** [CMakeFiles/Makefile2:2597: CMakeFiles/all-tests.dir/rule] Error 2
(see full log for entire context: make-all-tests-failure.txt)
@clavin-xlnx We have added CentOS 7 to CI with this PR: https://github.com/chipsalliance/fpga-interchange-tests/pull/118.
This was done to verify whether everything actually runs correctly on multiple OSes, and it seems to be the case; running on a Docker image with CentOS 7.4.1708 does not cause any errors either, at least for the Xilinx devices.
I have also tested the Nexus ones, but there is a problem with glibc, which needs to be at least 2.18 while CentOS 7 ships with 2.17, so make all-tests will fail when trying to build the Nexus devices.
I suggest to run the following to build all the tests of a specific family:
make all-<device>-tests
Where device can be for instance xc7a35t or xc7a100t. You may see a list of supported devices in here.
On an additional note, what I ran on a local docker image with centOS 7 is the following:
cd fpga-interchange-tests
source .github/scripts/centos-setup.sh # This installs all the required packages in the system (it also updates the GNU make version to 4.2.1 from the shipped 3.8 to allow --output-sync to be used, mainly for CI purposes)
make env
source env/conda/bin/activate fpga-interchange
make build
make update
cd build
make all-xc7a35t-tests -j$(nproc)
Thanks @acomodi - I tried the device-specific calls, both xc7a35t and xc7a100t, but both return the same problem. It appears to have something to do with loading the YAML file itself (the error is emanating from a class called OpenSafeFile), but I can't really dig much further. Is there a way to simply disable bitstream generation?
Thanks @acomodi - I tried the device-specific calls, both xc7a35t and xc7a100t, but both return the same problem. It appears to have something to do with loading the YAML file itself (the error is emanating from a class called OpenSafeFile), but I can't really dig much further. Is there a way to simply disable bitstream generation?
Perhaps this is a side effect of dropping rapidyaml for pyyaml?
Perhaps this is a side effect of dropping rapidyaml for pyyaml?
@clavin-xlnx This could be the cause. I suggest rebasing onto the current main and trying everything once again from a clean build.
The rapidyaml package is now taken from upstream and now installs correctly in the conda environment on CentOS 7.4 (at least in the Docker image as well as in CI).
Is there a way to simply disable bitstream generation?
I have opened this PR, which adds a target to run all the DCP+bitstream generation tests; it can basically skip the bitstream generation from the FASM file and directly use the DCP from RapidWright.
I think the latest changes have allowed me to run what I needed to. Closing this issue for now.
| gharchive/issue | 2022-06-09T22:49:33 | 2025-04-01T04:33:47.175500 | {
"authors": [
"acomodi",
"clavin-xlnx"
],
"repo": "chipsalliance/fpga-interchange-tests",
"url": "https://github.com/chipsalliance/fpga-interchange-tests/issues/113",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |