| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
1216550251 | NATS access helper aids troubleshooting
Newest stemcells prevent processes from accessing NATS (port 4222) on the BOSH Director for security reasons; however, this can hamper troubleshooting.
This commit adds a helper script to enable an operator to access the NATS port on the director; it accomplishes this by adding the PID to the cgroup which is allowed to communicate with the Director's NATS server.
We think the documentation on bosh.io regarding getting access to NATS is simpler to use and maintain than this script, so we're going to simply close out this PR.
| gharchive/pull-request | 2022-04-26T22:36:26 | 2025-04-01T04:33:48.609884 | {
"authors": [
"cunnie",
"ragaskar"
],
"repo": "cloudfoundry/bosh-linux-stemcell-builder",
"url": "https://github.com/cloudfoundry/bosh-linux-stemcell-builder/pull/228",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
559362272 | Update cflocal plugins
Updating the cflocal plugin from 0.19.0 to 0.20.0 in conjunction with @sclevine
I approve this version bump.
(@jghiloni very kindly ported CF Local to cflinuxfs3 for this release!)
Hi @jghiloni,
Thanks for the PR! This is failing in Travis because when the tests downloaded the plugin binary and calculated its SHA-1 sum the value was different to that specified in the PR.
We noticed that the length of the checksum field changed in your PR and wondered if you'd used a different algorithm to generate the hash?
Thanks,
Andrew and @bwasmith
I will give that a look-see and figure it out, thanks for letting me know!
This may have gotten hairy as I tried to rebase my fork, yikes! Let me know if this is a problem and how we want to proceed if it is (it seems like it just got all the upstream commits into one because I squashed)
(and FWIW, it failed originally because I accidentally checked in MD5 sums)
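The mix-up above is easy to catch from digest length alone: an MD5 hex digest is 32 characters, while a SHA-1 hex digest is 40. A minimal sketch (the input bytes are stand-in data, not the real plugin binary):

```go
package main

import (
	"crypto/md5"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hexDigests returns the MD5 and SHA-1 hex digests of data.
func hexDigests(data []byte) (md5Hex, sha1Hex string) {
	m := md5.Sum(data)
	s := sha1.Sum(data)
	return hex.EncodeToString(m[:]), hex.EncodeToString(s[:])
}

func main() {
	md5Hex, sha1Hex := hexDigests([]byte("stand-in plugin bytes"))
	// MD5 hex is 32 chars, SHA-1 hex is 40 — a checksum field of the
	// wrong length is a strong hint the wrong algorithm was used.
	fmt.Printf("md5 len=%d sha1 len=%d\n", len(md5Hex), len(sha1Hex))
}
```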
Merged, thanks @jghiloni!
| gharchive/pull-request | 2020-02-03T22:05:05 | 2025-04-01T04:33:48.639133 | {
"authors": [
"JenGoldstrich",
"acrmp",
"jghiloni",
"sclevine"
],
"repo": "cloudfoundry/cli-plugin-repo",
"url": "https://github.com/cloudfoundry/cli-plugin-repo/pull/332",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
374087062 | Connection refused cf -api - AWS
I have deployed cf onto BOSH and am trying to see if at least the cf API works; the rest of the components are deployed.
ERROR START
ubuntu@ip-10-0-0-173:~/bosh$ cf api 34.239.203.121
Setting api endpoint to 34.239.203.121
Request error: Get https://34.239.203.121/v2/info: dial tcp 34.239.203.121:443: connect: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
ERROR END
deployed components
ubuntu@ip-10-0-0-173:~/bosh$ bosh -e vbox deployments
Using environment '34.239.203.121' as client 'admin'
Name Release(s) Stemcell(s) Team(s) Cloud Config
cf binary-buildpack/1.0.27 bosh-warden-boshlite-ubuntu-xenial-go_agent/97.28 - latest
bosh-dns-aliases/0.0.3
bpm/0.13.0
capi/1.70.0
cf-cli/1.9.0
cf-mysql/36.15.0
cf-networking/2.17.0
cf-smoke-tests/40.0.13
cf-syslog-drain/7.1
cflinuxfs2/1.242.0
credhub/2.1.1
diego/2.19.0
dotnet-core-buildpack/2.1.5
garden-runc/1.16.7
go-buildpack/1.8.28
java-buildpack/4.16.1
log-cache/2.0.0
loggregator/104.0
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.32
php-buildpack/4.3.61
python-buildpack/1.6.21
routing/0.182.0
ruby-buildpack/1.7.24
silk/2.17.0
staticfile-buildpack/1.4.32
statsd-injector/1.4.0
uaa/62.0
1 deployments
Hi @xavier007009. This isn't really a CLI issue per se – you might have more luck posting on the CF mailing list.
In terms of your issue: it looks like your BOSH Director and your CF API address are the same, so you must be running BOSH Lite. You won't be able to connect to the CF API using an IP – you need to hit a fully-qualified domain name. This guide describes using sslip.io to get an FQDN from the Director's elastic IP.
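The sslip.io mapping mentioned above is purely name-based: appending .sslip.io to a dotted IPv4 address yields a hostname that resolves back to that address. A minimal sketch:

```go
package main

import "fmt"

// sslipName maps a dotted IPv4 address to an sslip.io hostname;
// sslip.io is a public wildcard DNS service that resolves the
// resulting name back to the embedded address.
func sslipName(ip string) string {
	return ip + ".sslip.io"
}

func main() {
	fmt.Println(sslipName("34.239.203.121")) // 34.239.203.121.sslip.io
}
```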
| gharchive/issue | 2018-10-25T19:00:52 | 2025-04-01T04:33:48.647175 | {
"authors": [
"henryaj",
"xavier007009"
],
"repo": "cloudfoundry/cli",
"url": "https://github.com/cloudfoundry/cli/issues/1484",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
137551327 | Missing service/plan id for async last_operation call
The documentation at http://docs.cloudfoundry.org/services/api.html nicely describes all the API calls a broker needs to implement. Provision, deprovision, bind, and unbind all transport the service and plan IDs with each call. That is great and extremely useful. However, when the async feature was added, last_operation was left without that information in http://docs.cloudfoundry.org/services/api.html#polling.
That is a problem for us, as our broker has no way to find out which service/plan a service instance GUID belongs to. Maybe most brokers have no issue with this, but the generic https://github.com/cloudfoundry-community/cf-containers-broker can't be migrated to async calls, unfortunately. Our container manager abstraction has multiple implementations that depend on the plan, but without the service/plan ID we don't know which backend to contact (https://github.com/cloudfoundry-community/cf-containers-broker/blob/master/config/settings.yml#L47) for a given service instance GUID.
Would it be possible to resolve this issue? Maybe by adding additional query parameters like those for deprovision (http://docs.cloudfoundry.org/services/api.html#deprovisioning) and unbind (http://docs.cloudfoundry.org/services/api.html#unbinding).
Did anybody check the issue already? Does it make sense to add the seemingly missing information?
Hi @vlerenc,
This makes sense. I'll prioritize. cc @avade
Thanks,
Nick
@vlerenc we're hoping to clarify the problem that you are facing, so we can understand the best approach for solving it.
Do you have the service_id or plan_id of the related services? Or do you only have the guid of the service instance?
Are you expecting to hit the last_operation endpoint for each service_id, or were you expecting to hit it only for the guid, and then see a service_id in the response? Couldn't you hit GET /v2/service_instances/:guid to look up the service_plan_guid when the asynchronous operation is complete?
Hi,
Do you have the service_id or plan_id of the related services?
Yes, but since I don't have/generate the guid, I don't know how they relate.
Are you expecting to hit the last_operation endpoint for each service_id, or were you expecting to hit it only for the guid, and then see a service_id in the response?
Not sure how you mean that. I am not hitting anything (the CC calls the broker); I am the one being hit (by the CC). What I would expect is to see service_id AND plan_id (both) with the last_operation call.
Couldn't you hit GET /v2/service_instances/:guid to look up the service_plan_guid when the asynchronous operation is complete?
Again, I don't understand. I have a broker; I am called by the CC only with the guid. All other calls deliver service_id AND plan_id, but last_operation doesn't. That looks to me like someone forgot to add them to that call.
Yes, for now I call back to the CC to resolve the service_id AND plan_id for a guid whenever last_operation hits the broker, but a.) this adds complexity, b.) introduces a dependency you tried to remove (the CC should call the broker; only in this direction), c.) forces me to put the CC credentials into the broker, and d.) adds load unnecessarily.
@vlerenc I think there was a misunderstanding regarding your ask. We will add query params similar to the broker delete request.
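The change being discussed — mirroring the deprovision request's query parameters onto last_operation — would look roughly like this on the wire. A sketch of building the polling URL (broker host and IDs are placeholders):

```go
package main

import (
	"fmt"
	"net/url"
)

// lastOperationURL builds the polling URL with the service_id/plan_id
// query parameters proposed above (path shape per the v2 broker API).
func lastOperationURL(broker, instanceGUID, serviceID, planID string) string {
	// error ignored for brevity in this sketch; inputs are well-formed
	u, _ := url.Parse(broker + "/v2/service_instances/" + instanceGUID + "/last_operation")
	q := u.Query()
	q.Set("service_id", serviceID)
	q.Set("plan_id", planID)
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(lastOperationURL("https://broker.example.com", "iguid", "sid", "pid"))
}
```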
Hey @vlerenc,
Thanks for submitting this issue. Seems like the query params have been added as part of this story. We're closing this issue for now, feel free to reopen this if you feel like your issue hasn't been addressed.
Thanks,
CAPI Community Pair - @jberkhahn & @michaelxupiv
| gharchive/issue | 2016-03-01T11:43:02 | 2025-04-01T04:33:48.657118 | {
"authors": [
"SocalNick",
"michaelxupiv",
"sax",
"vlerenc"
],
"repo": "cloudfoundry/cloud_controller_ng",
"url": "https://github.com/cloudfoundry/cloud_controller_ng/issues/551",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
417785324 | Bump bits_service_client to 4.0.0
A short explanation of the proposed change:
Use bits_service_client 4.0.0 and adjust configuration to match this new major version.
An explanation of the use cases your change solves
bits_service_client 4.0.0 uses client side signed URLs instead of requesting the signed URLs from Bits-Service.
Links to any other associated PRs
https://github.com/cloudfoundry/capi-release/pull/129 should be merged first.
[x] I have reviewed the contributing guide
[x] I have viewed, signed, and submitted the Contributor License Agreement
[x] I have made this pull request to the master branch
[x] I have run all the unit tests using bundle exec rake
[x] I have run CF Acceptance Tests
Looks like there are some failing tests in https://travis-ci.org/cloudfoundry/cloud_controller_ng/jobs/506307269. Haven't dug into it, but maybe missing a require somewhere?
"uninitialized constant BitsService::ResourcePool::ConfigurationError"
@tcdowney please make sure you merge the referenced PR in capi-release first.
@petergtz I get that for our integration test suites, but the unit tests run outside of the context of anything changes in capi-release. I would expect them to be runnable regardless.
Closing since bits-service has been discontinued.
| gharchive/pull-request | 2019-03-06T12:44:55 | 2025-04-01T04:33:48.663205 | {
"authors": [
"petergtz",
"selzoc",
"tcdowney"
],
"repo": "cloudfoundry/cloud_controller_ng",
"url": "https://github.com/cloudfoundry/cloud_controller_ng/pull/1305",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2457640366 | Clear the app's current droplet guid when droplet is deleted
[x] I have reviewed the contributing guide
[x] I have viewed, signed, and submitted the Contributor License Agreement
[x] I have made this pull request to the main branch
[x] I have run all the unit tests using bundle exec rake
[ ] I have run CF Acceptance Tests
This would put the app into a state where it could no longer be restarted, right? Would it be better to block deletion of droplets if they are the current droplet for an app?
I like the idea of blocking droplet deletion as long as it used as current droplet of an app. This should be backed by an FK constraint.
Closed in favor of #3960.
| gharchive/pull-request | 2024-08-09T10:43:30 | 2025-04-01T04:33:48.666357 | {
"authors": [
"Gerg",
"philippthun",
"stephanme"
],
"repo": "cloudfoundry/cloud_controller_ng",
"url": "https://github.com/cloudfoundry/cloud_controller_ng/pull/3926",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2040113475 | Experiment: http.ResponseController.EnableFullDuplex()
Experiments with http.ResponseController.EnableFullDuplex().
EnableFullDuplex is currently failing on the ResponseController because the ResponseWriter received in ServeHTTP is a negroni.ResponseWriter, which does not provide EnableFullDuplex. There is an issue open about Negroni being outdated.
Yes, I opened that issue after experimenting with this branch. The PR was to share progress with @geofffranks and can be closed now. I will do that next week at the latest, when I'm back at work.
Closing this PR. It was an experiment with EnableFullDuplex that didn't work and resulted in https://github.com/cloudfoundry/routing-release/issues/373.
@peanball I am looking into this right now. I think I would make a PR to Negroni to add this.
Negroni looks like a dead project, though.
Before making a PR there we might consider a locally patched version, see if it gives us the gains that @geofffranks was hoping for, and only then consider resurrecting this project.
That said, there might be other features from newer Go versions, now or later, that we would have to add to Negroni ourselves instead of relying on the framework's maintainers.
@peanball so I looked into this more and negroni provides an Unwrap method to the ResponseWriter - https://github.com/urfave/negroni/commit/59d32aab419da1606c459c848c7586ef52b07a4b which will satisfy the latest ResponseController and eventually EnableFullDuplex will be called on the underlying http.ResponseWriter.
Negroni actually has v2 and v3 versions that are not visible on their releases page but can be imported as github.com/urfave/negroni/v3. The problem is that Unwrap has not been released on v3 yet. And looking at the issue, the last comment from December 2023 stated that the author is temporarily unavailable. So I would give it a couple more months to see if he comes back and cuts a new release. For now I pinned negroni to the latest v3.
Here is the proposed change for the Gorouter: https://github.com/cloudfoundry/gorouter/pull/395
Related change in routing-release: https://github.com/cloudfoundry/routing-release/pull/389
| gharchive/pull-request | 2023-12-13T16:51:22 | 2025-04-01T04:33:48.679629 | {
"authors": [
"mariash",
"peanball"
],
"repo": "cloudfoundry/gorouter",
"url": "https://github.com/cloudfoundry/gorouter/pull/380",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
163382891 | Syslog format (rfc) used
Hi,
I am trying to find out the RFC that is used to model the syslog messages, as configured here:
https://github.com/cloudfoundry/loggregator/blob/37043af8477dadbdcb3f6ab53880fa3bc63f76c3/jobs/metron_agent/templates/syslog_forwarder.conf.erb#L40-L55
This results in messages of the following form.
<pri>timestamp hostname app [key=value key=value] msg
This looks quite similar to rfc5424, though not quite. Following that RFC, the message should have looked more like the following.
<pri> timestamp hostname app [id key="value" key="value"] msg
(where I have omitted some optional fields to get it as close as possible)
Is some variant of rfc3164 used, with non-standardized timestamp and custom-format structured data placed at the beginning of msg, as part of msg?
I was searching around but couldn't find much information on the topic. If there is a standard, it might be good to have that documented somewhere so that consumers can rely on it.
Regards,
Momchil
@momchil-sap Our syslog messages are formatted using rfc5424. See our syslog sink writer.
@wfernandes , is this for application or system logs? (Seems to be the doppler sink for application logs).
I was asking for component logs, where the following template is used:
https://github.com/cloudfoundry/loggregator/blob/develop/jobs/metron_agent/templates/syslog_forwarder.conf.erb#L39-L48
This template is used to send logs to a remote syslog server, as configured here:
https://github.com/cloudfoundry/loggregator/blob/develop/jobs/metron_agent/spec#L29-L47
@momchil-sap Yes. The code reference I provided was for application logs.
Unfortunately, the syslog format for the component logs was added before my time and I can't seem to find a story relating to it either. We can work this issue as a bug to standardize the format.
Also just wanted to inform you that the Bosh team is actually taking responsibility for forwarding component logs to syslog in the future. This is their syslog release. We can coordinate with them to let them know of the format we standardize on since they pretty much used our syslog_forwarder.conf.erb.
Is the id field intended to be [msgid](https://tools.ietf.org/html/rfc5424#section-6.2.7)?
If so, are we OK with setting it and not letting it be dynamic?
@momchil-sap Can you clarify what the id field would represent for a component log?
Reading a bit into the rfc5424 specification, it seems that there can be more than one SD-ELEMENT ([id key="value"]) segment in a message.
STRUCTURED-DATA can contain zero, one, or multiple structured data
elements, which are referred to as "SD-ELEMENT" in this document.
It seems to serve the purpose of a flat metadata hash/object, where id is used to name the object. This would allow you to differentiate between multiple such objects in a syslog message and to know how to interpret them.
An SD-ELEMENT consists of a name and parameter name-value pairs. The
name is referred to as SD-ID. The name-value pairs are referred to
as "SD-PARAM".
SD-IDs are case-sensitive and uniquely identify the type and purpose
of the SD-ELEMENT. The same SD-ID MUST NOT exist more than once in a
message.
If we were to take the above into consideration, I guess the following structure would be valid and semantically correct.
[job name="api_z1" index="2"]
Note how id is set to job and job is renamed to name.
I'm no expert, so any better ideas are welcome.
P.S. To be aligned with BOSH 2.0, job could be changed to instance.
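Following that proposal, a minimal renderer for one SD-ELEMENT (the helper name is mine; RFC 5424 also requires escaping of \, " and ] inside param values, which this sketch omits):

```go
package main

import (
	"fmt"
	"strings"
)

// sdElement renders one RFC 5424 SD-ELEMENT: [SD-ID SD-PARAM...].
// Param values are assumed not to need escaping.
func sdElement(id string, params [][2]string) string {
	var b strings.Builder
	b.WriteString("[" + id)
	for _, p := range params {
		fmt.Fprintf(&b, " %s=%q", p[0], p[1])
	}
	b.WriteString("]")
	return b.String()
}

func main() {
	fmt.Println(sdElement("job", [][2]string{{"name", "api_z1"}, {"index", "2"}}))
}
```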
@momchil-sap Thanks for the suggestions. We'll discuss it and bring this up with the BOSH team since they own the syslog-release which will be the supported way of configuring component logs through rsyslog.
@keymon & @momchil-sap - I scheduled an item to review final changes and merge in under https://github.com/cloudfoundry/loggregator/pull/166
@ahevenor , I am sorry but I fail to see how the pull request #166 links to this issue.
This issue is concerned with the syslog format used to stream VM logs, whereas the pull request has to do with metrics.
I looked at the changes but could not find one that corresponds to the syslog config on the VM. It would be great if you could point to that change in case I have missed it.
@momchil-sap - that was a copy/paste error on my part. I meant to link this issue instead - https://github.com/cloudfoundry/syslog-release/issues/3
Sorry, I lost track here. Do you need my help or input?
All, I am closing this for housekeeping. Resubmit an issue if you are still having a problem.
Thanks,
Adam
| gharchive/issue | 2016-07-01T12:11:14 | 2025-04-01T04:33:48.694111 | {
"authors": [
"ahevenor",
"apoydence",
"keymon",
"momchil-sap",
"wfernandes"
],
"repo": "cloudfoundry/loggregator",
"url": "https://github.com/cloudfoundry/loggregator/issues/134",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
593556513 | How can we check the /health status of our UAA deployment
Is there an application /health URI that can be enabled, like in some Spring Boot applications, to show the basic status of the application? We need to register a reliable mechanism in Kubernetes to check the application status so it can detect when the application becomes unstable and may require a restart.
What version of UAA are you running?
"app":{"version":"74.7.0"}
How are you deploying the UAA?
Helm deploy into a Kubernetes 1.16 cluster
I am deploying the UAA
helm deploy command into a running Kubernetes cluster
/healthz is the uaa monitoring endpoint. It will check the tomcat is reachable, but will not check for uaa state or healthy DB connection.
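Given that answer, /healthz fits naturally into Kubernetes probes. A hedged sketch of the probe config (the container port and timings are assumptions, not values from the UAA chart):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080        # assumed UAA container port; adjust to your chart
  initialDelaySeconds: 60
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```

Note that since /healthz only checks Tomcat reachability, a passing probe does not guarantee a healthy DB connection.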
Thanks @mwdb!
| gharchive/issue | 2020-04-03T18:14:53 | 2025-04-01T04:33:48.696922 | {
"authors": [
"jdepaul-mx",
"mwdb",
"shamus"
],
"repo": "cloudfoundry/uaa",
"url": "https://github.com/cloudfoundry/uaa/issues/1253",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2572918295 | fix(issue): Issue List View Aggregation Query
Task Description
For performance reasons, an aggregation of the several counts is required. Review whether the existing aggregations work and whether their performance can be improved.
Acceptance Criteria:
[ ] Count activities
[ ] Count issue matches
[ ] Count AffectedServices
[ ] Count ComponentVersions
Expected Test:
Test aggregations in the respective layers
Moving to next sprint
| gharchive/issue | 2024-10-08T11:27:52 | 2025-04-01T04:33:48.760843 | {
"authors": [
"MR2011",
"lolaapenna"
],
"repo": "cloudoperators/heureka",
"url": "https://github.com/cloudoperators/heureka/issues/278",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2494205587 | feat(authN): Implement Token Based Authentication (#83)
Add middleware layer for API access.
Token auth can be enabled by setting AUTH_TYPE=token; when token-based authentication is used, AUTH_TOKEN_MAP has to be provided with scanner names and their secrets.
Please also pass the authentication information as context to the application, so that on the application layer we are later able to access e.g. the "username".
Here is some example how to do it: https://blog.meain.io/2024/golang-context/#context.withvalue
Please also make sure you merge the commits from main into this branch as well (before squash&merge into main).
| gharchive/pull-request | 2024-08-29T11:45:13 | 2025-04-01T04:33:48.763139 | {
"authors": [
"dorneanu",
"michalkrzyz"
],
"repo": "cloudoperators/heureka",
"url": "https://github.com/cloudoperators/heureka/pull/191",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
597650334 | Added Authentication Support for Private Repositories (Bitbucket, Github)
what
Added support for :
private repository auth
git_submodules_config
vpc_config
logs_config
git_clone_depth
source_version
Solved : #54
Added force_detach_policies = true into resource "aws_iam_role" "default"
Solved: #48
Added to main.tf
tags = {
for name, value in module.label.tags:
name => value
if length(value) > 0
}
why
Someone should do it 😄
/rebuild-readme
@brcnblc please fix conflicts
I couldn't resolve the conflict in README.yaml. Why does it give a conflict?
Also, when I run the tests I get the following error:
Running tests in ../
1..10
ok 1 check if terraform is installed
not ok 2 check if terraform code needs formatting
(from function `skip_if_disabled' in file test/.test-harness/test/terraform/lib.bash, line 12,
in test file test/.test-harness/test/terraform/lint.bats, line 4)
`skip_if_disabled' failed
/Users/bircan/develop/github/terraform-aws-codebuild/test/.test-harness/test/terraform/lib.bash: line 12: ${env^^}: bad substitution
ok 3 check if terraform modules are valid
not ok 4 check if terraform modules are properly pinned
(from function `setup' in test file test/.test-harness/test/terraform/module-pinning.bats, line 4)
`TMPFILE="$(mktemp /tmp/terraform-modules-XXXXXXXXXXX.txt)"' failed
mktemp: mkstemp failed on /tmp/terraform-modules-XXXXXXXXXXX.txt: File exists
ok 5 check if terraform plugins are valid
ok 6 check if terraform providers are properly pinned
ok 7 check if terraform code is valid
ok 8 check if terraform-docs is installed
ok 9 check if terraform inputs have descriptions
ok 10 check if terraform outputs have descriptions
make: *** [module] Error 1
/test all
/test all
/test all
@osterman If you can supply private bitbucket repository credentials having access only to a sample repository used for this bitbucket example, then we can add a test step to check the bitbucket example.
@osterman Resolved
Also, when I run the tests I get the following error:
Running tests in ../
not ok 2 check if terraform code needs formatting
(from function `skip_if_disabled' in file test/.test-harness/test/terraform/lib.bash, line 12,
in test file test/.test-harness/test/terraform/lint.bats, line 4)
`skip_if_disabled' failed
/Users/bircan/develop/github/terraform-aws-codebuild/test/.test-harness/test/terraform/lib.bash: line 12: ${env^^}: bad substitution
not ok 4 check if terraform modules are properly pinned
(from function `setup' in test file test/.test-harness/test/terraform/module-pinning.bats, line 4)
`TMPFILE="$(mktemp /tmp/terraform-modules-XXXXXXXXXXX.txt)"' failed
mktemp: mkstemp failed on /tmp/terraform-modules-XXXXXXXXXXX.txt: File exists
make: *** [module] Error 1
FYI:
I still get this after running test all in the locally cloned repo's test folder
/test all
@brcnblc what are your thoughts on if move this to a new module called terraform-aws-codebuild-bitbucket? That way we can keep the code paths separate and not complicate the modules trying to support both? Especially since we (cloudposse) have no way of testing bitbucket.
| gharchive/pull-request | 2020-04-10T01:58:01 | 2025-04-01T04:33:48.784752 | {
"authors": [
"brcnblc",
"jamengual",
"osterman"
],
"repo": "cloudposse/terraform-aws-codebuild",
"url": "https://github.com/cloudposse/terraform-aws-codebuild/pull/53",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
345997399 | Add module resources
My general thoughts are that it should include the following resources:
launch config
autoscaling group
security group
dns record
iam instance profile
iam role
elb (w/ variable for enabled)
eip (w/ variable for enabled)
Variables:
min/max size
enabled
volume size
vpc id
user data script var (this would need to be a path using ${path.module} syntax in local module)
elb enabled/disabled
eip enabled/disabled
dns zone id
security groups
Resources sound good, except I'm not sure how we can leverage the DNS record with an ASG?
We should also define the outputs. I think they would be
outputs
security_group_id
variables
instance_type
namespace, stage, name, attributes, delimiter - for label module
subnet_ids
ssh_key_pair
references
Refer to these modules/variables for inspiration:
https://github.com/cloudposse/terraform-aws-ec2-instance-group/blob/master/variables.tf
https://github.com/cloudposse/terraform-aws-ec2-instance/blob/master/variables.tf
That sounds good. I like that.
I was thinking the DNS record would be used only if enabled and an ELB and EIP was requested. Thoughts?
I think the ALB belongs outside of this module for greatest flexibility.
Check these out:
https://github.com/cloudposse/terraform-aws-alb-ingress
https://github.com/cloudposse/terraform-aws-alb
https://github.com/cloudposse/terraform-aws-alb-target-group-cloudwatch-sns-alarms
Perhaps the module can create the target groups ARNs. Then you can use the alb-ingress module to route traffic to that ARN.
https://www.terraform.io/docs/providers/aws/r/autoscaling_attachment.html
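Keeping the ALB outside the module as suggested, the target-group attachment linked above could be wired up like this (resource and output names are illustrative, not from an actual cloudposse module):

```hcl
resource "aws_autoscaling_attachment" "default" {
  autoscaling_group_name = aws_autoscaling_group.default.name
  lb_target_group_arn    = module.alb.default_target_group_arn # illustrative output name
}
```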
I like that, sounds good!
| gharchive/issue | 2018-07-31T01:11:12 | 2025-04-01T04:33:48.794566 | {
"authors": [
"MoonMoon1919",
"osterman"
],
"repo": "cloudposse/terraform-aws-ec2-autoscale-group",
"url": "https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
640462270 | IAM Policy cannot be created due to InvalidTypeException
Describe the Bug
ES was created without iam_role_arns. After adding it and applying, the run failed with:
module.elasticsearch.aws_iam_role.elasticsearch_user[0]: Creating...
module.elasticsearch.aws_iam_role.elasticsearch_user[0]: Creation complete after 1s [id=xxx-user]
module.elasticsearch.data.aws_iam_policy_document.default[0]: Refreshing state...
module.elasticsearch.aws_elasticsearch_domain_policy.default[0]: Creating...
Error: InvalidTypeException: Error setting policy: [{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"es:List*",
"es:ESHttpPut",
"es:ESHttpPost",
"es:ESHttpHead",
"es:ESHttpGet",
"es:Describe*"
],
"Resource": [
"arn:aws:es:us-east-2:xxx:domain/xxx/*",
"arn:aws:es:us-east-2:xxx:domain/xxx"
],
"Principal": {
"AWS": [
"arn:aws:iam::xxx:role/xxx-user",
"arn:aws:iam::xxx:role/xxx"
]
}
}
]
}]
on .terraform/modules/elasticsearch/main.tf line 227, in resource "aws_elasticsearch_domain_policy" "default":
227: resource "aws_elasticsearch_domain_policy" "default" {
This happens because IAM did not yet have the role's unique identifier available. Every ARN principal is converted to its unique identifier for security reasons.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids
Expected Behavior
It should simply apply changes. Second apply works fine.
Steps to Reproduce
Create cluster without iam_role_arns
Add iam_role_arns
It might be hard to reproduce due to a lot of factors.
Have the same issue. When I try to add this policy manually via the AWS Management Console I get the error:
This policy contains the following error: This policy contains invalid Json For more information about the IAM policy grammar, see AWS IAM Policies
I'm seeing this failure pretty consistently when the role and the domain policy are created at the same time. If I re-apply after the role has been created, the domain policy is created successfully.
I have the same problem. Any idea?
I'm having the same issue too :(
Maybe the role has to be created first and then the domain policy. Perhaps an explicit depends_on may solve this issue.
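The explicit ordering suggested above would look roughly like this (resource names follow the ones in the error output; note depends_on orders creation but cannot fully paper over IAM's eventual-consistency propagation delay):

```hcl
resource "aws_elasticsearch_domain_policy" "default" {
  # ... existing arguments ...

  # create the IAM role first so its unique ID exists when the
  # domain policy's principal ARNs are validated
  depends_on = [aws_iam_role.elasticsearch_user]
}
```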
The error still seems to be there. This was working before, so I'm trying to figure out what has changed.
│ Error: InvalidTypeException: Error setting policy: [{"Version":"2012-10-17"}]
│
│ with module.central_logs_opensearch.aws_elasticsearch_domain_policy.default[0],
│ on modules/aws-elasticsearch/main.tf line 287, in resource "aws_elasticsearch_domain_policy" "default":
│ 287: resource "aws_elasticsearch_domain_policy" "default" {
│
| gharchive/issue | 2020-06-17T13:53:46 | 2025-04-01T04:33:48.800222 | {
"authors": [
"3h4x",
"ByJacob",
"Warns",
"mmorejon",
"nitrocode",
"timcosta",
"xposix"
],
"repo": "cloudposse/terraform-aws-elasticsearch",
"url": "https://github.com/cloudposse/terraform-aws-elasticsearch/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
866308799 | fix ingress security groups
what
fix ingress security groups
if this module creates the security group then output the security_group_id
why
Having protocol set to "tcp" will set the to/from ports
to the "0" specified. Port 0 is not an ActiveMQ or
RabbitMQ broker port. Instead, use appropriate ActiveMQ or RabbitMQ ports
per allowed_security_groups and allowed_cidr_blocks.
references
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule
Closes https://github.com/cloudposse/terraform-aws-mq-broker/issues/29
@Nuru Would it be better to revert to this commit? Since the brokers are an AWS service it might be fine to just allow all traffic to the service since it's only going to respond on broker ports anyways. If you prefer the specific port options I'd be happy to add it...but I thought I'd suggest this option first.
@heathsnow How about we go with this for now:
from_port = 0
to_port = 65535
protocol = "tcp"
@Nuru Closing this PR and a new PR opened (https://github.com/cloudposse/terraform-aws-mq-broker/pull/31)
| gharchive/pull-request | 2021-04-23T17:38:45 | 2025-04-01T04:33:48.804943 | {
"authors": [
"Nuru",
"heathsnow"
],
"repo": "cloudposse/terraform-aws-mq-broker",
"url": "https://github.com/cloudposse/terraform-aws-mq-broker/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1269977195 | Update test-framework to current
what
Update test-framework to current
why
Enable parallel testing
Bug fixes
/test all
| gharchive/pull-request | 2022-06-13T21:27:30 | 2025-04-01T04:33:48.806714 | {
"authors": [
"Nuru"
],
"repo": "cloudposse/terraform-aws-rds-cluster",
"url": "https://github.com/cloudposse/terraform-aws-rds-cluster/pull/142",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
258848102 | Add parent_zone_name capability
What
Add parent_zone_name capability
Why
User should be able to specify parent_zone_id or parent_zone_name to tf_vanity
Tested with parent_zone_name.
Tested with parent_zone_id.
| gharchive/pull-request | 2017-09-19T14:45:21 | 2025-04-01T04:33:48.810425 | {
"authors": [
"SweetOps",
"comeanother"
],
"repo": "cloudposse/tf_s3_website",
"url": "https://github.com/cloudposse/tf_s3_website/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
929699656 | UVI Request
--- UVI JSON ---
{
"vendor_name": "Linux",
"product_name": "Kernel",
"product_version": "versions from to before v5.4.125",
"vulnerability_type": "unspecified",
"affected_component": "unspecified",
"attack_vector": "unspecified",
"impact": "unspecified",
"credit": "",
"references": [
"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=920697b004e49cb026e2e15fe91be065bf0741b7"
],
"extended_references": [
{
"type": "commit",
"value": "920697b004e49cb026e2e15fe91be065bf0741b7",
"note": "fixed"
}
],
"reporter": "joshbressers",
"reporter_id": 1692786,
"notes": "",
"description": "ext4: fix bug on in ext4_es_cache_extent as ext4_split_extent_at failed\n\nThis is an automated ID intended to aid in discovery of potential security vulnerabilities. The actual impact and attack plausibility have not yet been proven.\nThis ID is fixed in Linux Kernel version v5.4.125 by commit 920697b004e49cb026e2e15fe91be065bf0741b7, it was introduced in version by commit . For more details please see the references link."
}
--- UVI JSON ---
This issue has been assigned UVI-2021-1000753
| gharchive/issue | 2021-06-24T23:49:11 | 2025-04-01T04:33:48.827637 | {
"authors": [
"joshbressers"
],
"repo": "cloudsecurityalliance/security-database",
"url": "https://github.com/cloudsecurityalliance/security-database/issues/73",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
534385843 | Number are not passed correctly when using HTTP/JSON
If I pass a number value in the parameter of a command using HTTP/JSON then the value is always 0 in the command handler.
Using the akka/console with gRPC, we don't have the problem.
Could you submit a reproducer using the Shopping Cart service?
--
Cheers,
√
Sorry I found my mistake (I was not mentioning the body in the protobuf) => I close the issue
| gharchive/issue | 2019-12-07T10:43:49 | 2025-04-01T04:33:48.831685 | {
"authors": [
"domschoen",
"viktorklang"
],
"repo": "cloudstateio/cloudstate",
"url": "https://github.com/cloudstateio/cloudstate/issues/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
331646042 | Prevent Ref where Ref is not allowed
Ran into these today; making an issue here so they're not forgotten (I'd like to provide a patch+PR for this, but I can't take the time immediately).
[ ] GetAtt will not validate if the parameters are not strings. I had GetAtt(Ref(db_cluster), "Endpoint.Address"), which worked fine on the troposphere side, but failed on the cfn side. Need GetAtt(db_cluster, "Endpoint.Address")
[ ] similarly, DependsOn takes a string (or list of strings) and not a reference. DependsOn=Ref(db_cluster) fails on the cfn side; DependsOn=db_cluster works.
I see; it's only on certain kinds of Ref that GetAtt fails, then. That's fair. My use case was on a created resource, not a parameter.
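A validator of the kind being requested here might look like the following hypothetical Python sketch. The `Ref` stand-in and the `validate_depends_on` helper are illustrative only, not troposphere's actual code:

```python
class Ref:
    """Stand-in for an intrinsic-function reference object."""
    def __init__(self, target):
        self.target = target

def validate_depends_on(value):
    """Reject Ref objects where CloudFormation requires plain logical-ID strings."""
    items = value if isinstance(value, list) else [value]
    for item in items:
        if not isinstance(item, str):
            raise TypeError(
                "DependsOn expects a logical-ID string (or list of strings), "
                f"got {type(item).__name__}"
            )
    return value

validate_depends_on("DbCluster")           # OK
validate_depends_on(["DbCluster", "Vpc"])  # OK
```

Calling it with `Ref("DbCluster")` would raise a `TypeError` at template-build time instead of failing later on the CloudFormation side.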
| gharchive/issue | 2018-06-12T15:43:41 | 2025-04-01T04:33:48.834161 | {
"authors": [
"scoates"
],
"repo": "cloudtools/troposphere",
"url": "https://github.com/cloudtools/troposphere/issues/1067",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
172211633 | memory leaks when emitting to kafka
Description
We observed memory leaks in worker processes when emitting to kafka
How to Reproduce
configure crawler to emit to kafka and monitor process size.
@ricarkol and Sastry Duri are working on it
hey @sastryduri this is fixed right?
Fixed
| gharchive/issue | 2016-08-19T20:14:30 | 2025-04-01T04:33:48.838913 | {
"authors": [
"ricarkol",
"sastryduri"
],
"repo": "cloudviz/agentless-system-crawler",
"url": "https://github.com/cloudviz/agentless-system-crawler/issues/133",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1443416319 | feat: add round_robin and weight_round_robin algorithm for loadbalance
What type of PR is this?
feat
Check the PR title.
[x] This PR title match the format: <type>(optional scope): <description>
[x] The description of this PR title is user-oriented and clear enough for others to understand.
(Optional) Translate the PR title into Chinese.
为负载均衡增加轮询以及加权轮询算法
(Optional) More detail description for this PR(en: English/zh: Chinese).
Instance
insList := []discovery.Instance{
discovery.NewInstance("tcp", "127.0.0.1:8881", 10, nil),
discovery.NewInstance("tcp", "127.0.0.1:8882", 20, nil),
discovery.NewInstance("tcp", "127.0.0.1:8883", 50, nil),
discovery.NewInstance("tcp", "127.0.0.1:8884", 100, nil),
discovery.NewInstance("tcp", "127.0.0.1:8885", 200, nil),
discovery.NewInstance("tcp", "127.0.0.1:8886", 500, nil),
}
Use weight_random
127.0.0.1:8886
127.0.0.1:8886
127.0.0.1:8882
127.0.0.1:8885
127.0.0.1:8885
127.0.0.1:8885
127.0.0.1:8886
127.0.0.1:8886
127.0.0.1:8886
127.0.0.1:8885
Use round_robin
127.0.0.1:8881
127.0.0.1:8882
127.0.0.1:8883
127.0.0.1:8884
127.0.0.1:8885
127.0.0.1:8886
127.0.0.1:8881
127.0.0.1:8882
127.0.0.1:8883
127.0.0.1:8884
Use weight_round_robin
127.0.0.1:8886
127.0.0.1:8885
127.0.0.1:8886
127.0.0.1:8884
127.0.0.1:8886
127.0.0.1:8886
127.0.0.1:8885
127.0.0.1:8886
127.0.0.1:8883
127.0.0.1:8886
Which issue(s) this PR fixes:
Fixes #358
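For reference, a weighted distribution like the one shown above can be produced by the "smooth" weighted round-robin algorithm popularized by nginx. A minimal Python sketch of the idea (not the Go implementation in this PR):

```python
class SmoothWeightedRR:
    """Smooth weighted round-robin: on each pick, raise every node's
    current weight by its configured weight, choose the largest, then
    subtract the total configured weight from the winner."""

    def __init__(self, weights):
        self.weights = dict(weights)            # node -> configured weight
        self.current = {n: 0 for n in weights}  # node -> running weight
        self.total = sum(weights.values())

    def next(self):
        for node, w in self.weights.items():
            self.current[node] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= self.total
        return best

rr = SmoothWeightedRR({"127.0.0.1:8885": 200, "127.0.0.1:8886": 500})
picks = [rr.next() for _ in range(7)]  # 8886 is chosen 5 times, 8885 twice
```

Over any window of `total / gcd(weights)` picks, each node is selected in proportion to its weight, and the picks are interleaved rather than bunched together.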
I think Hertz-contrib is more suitable to put this code.
I think Hertz-contrib is more suitable to put this code. What's your opinion?
My opinion is that the load balancer which adapts to the third-party service registry can be placed in hertz-contrib, and this kind of load balancing algorithm is placed directly in the main library.
I don't think it should be placed in the main repository or Hertz-Contrib based on whether there are third-party dependencies. It should be decided by the design. The random implementation is placed in the main repo because it's just a simple implementation. Hertz is an HTTP framework, not a framework that provides load-balancing algorithms, so I think it's better placed in Hertz-Contrib.
Agreed, I'll put it in hertz-contrib.
Thx.
Please create pr in repo: https://github.com/hertz-contrib/loadbalance
Please create pr in repo: https://github.com/hertz-contrib/loadbalance
got it, thx.
| gharchive/pull-request | 2022-11-10T07:48:34 | 2025-04-01T04:33:48.847576 | {
"authors": [
"Duslia",
"L2ncE"
],
"repo": "cloudwego/hertz",
"url": "https://github.com/cloudwego/hertz/pull/361",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
GC problem
--master.lua
local mc = require"skynet.multicast"
local dc = require"skynet.datacenter"
local ch
skynet.start(function()
ch = mc.new()
dc.set("channel", ch.channel)
skynet.fork( function() -- keep publishing messages
while true do
ch:publish("xxxxxxxxxxxxxx")
skynet.sleep(100)
end
end)
end)
--agent.lua
local mc = require"skynet.multicast"
local dc = require"skynet.datacenter"
local ch
skynet.start(function()
ch = mc.new{
channel = dc.get("channel"),
dispatch = function ( channel, src, msg )
print("from channel:", msg) -- this never gets printed
end
}
ch:subscribe()
skynet.timeout(100, function() -- run GC after 1 second
collectgarbage()
end)
end)
My guess: after the agent's ch variable is assigned from nil to non-nil, it is never used anywhere else, so the GC decides the variable can be collected.
cloudwu, what counts as an "unreferenced channel"? Doesn't the agent's ch count?
It needs to be referenced, directly or indirectly, by a global variable.
That feels a bit odd. Can't the GC find a module's local variables?
The module itself doesn't reference local variables; functions do.
The main issue here is that the dispatch table is a weak table; changing it to a strong table lets it keep the reference itself. But the original consideration was to avoid resource leaks when a service exits and forgets to unsubscribe.
Oh, I see now. Thanks, cloudwu.
I took a look, and changing it to a strong table seems fine. I'll make the change now and revisit if any other problems come up later.
Was the earlier design for ease of use, so that resources could be released even without explicitly calling unsubscribe?
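The weak-table behavior discussed in this thread has a direct analogue in other languages. A small Python sketch using `weakref`, purely to illustrate why an entry keyed in a weak table disappears once nothing else strongly references the object:

```python
import gc
import weakref

class Channel:
    pass

# Analogue of skynet's dispatch table being a *weak* table: entries
# vanish once no strong reference to the channel object remains.
dispatch = weakref.WeakValueDictionary()

ch = Channel()
dispatch["room"] = ch
assert "room" in dispatch   # alive while a strong reference exists

ch = None                   # drop the only strong reference
gc.collect()
assert "room" not in dispatch  # collected, like the Lua weak-table case
```

Making the table strong (a plain dict here) is exactly the fix discussed above: the table itself then keeps the object alive until it is explicitly removed.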
| gharchive/issue | 2018-08-06T09:30:54 | 2025-04-01T04:33:48.852978 | {
"authors": [
"cloudwu",
"shizhenren"
],
"repo": "cloudwu/skynet",
"url": "https://github.com/cloudwu/skynet/issues/872",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1250630331 | MLM Pretraining missing bbox inputs
Hi, great work on the package.
It seems on some of the model classes, eg. BrosLMHeadModel, the code misses the bbox inputs. Example below. Correct me if I misunderstood, but I guess bbox should be added here.
If you would like, I can put in a PR to fix it here and in the other places like BrosForSequenceClassification and BrosPreTrainedModel.
MLM Model input
https://github.com/clovaai/bros/blob/eb3aa51ad7348444bafb06be64c4604182275edd/bros/modeling_bros.py#L1314-L1318
Bros Model call in MLM Model
https://github.com/clovaai/bros/blob/eb3aa51ad7348444bafb06be64c4604182275edd/bros/modeling_bros.py#L1378-L1381
Hi, thank you for your interest in our work!
You are right. Sorry for the confusion.
(When implementing the code, I copied and pasted the modeling_bert.py of Hugging Face's transformers and modified only a part...)
I'd appreciate if you could open the PR!
I've opened a PR for this :)
| gharchive/issue | 2022-05-27T10:55:53 | 2025-04-01T04:33:48.861834 | {
"authors": [
"darraghdog",
"logan-markewich",
"tghong"
],
"repo": "clovaai/bros",
"url": "https://github.com/clovaai/bros/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
547764062 | Doctests for Dependency Tree and more properties on Doc
Other than filling out a few doctests, this PR also simplifies the Doc data model so that all data are in the words attribute. There is now a property for accessing sentences that reads the words list to group words into sentences.
Note that this accessor is written defensively to make no assumptions about the order of words in the words list. There is some performance cost but presumably this is insignificant compared to the cost of running the Stanford statistical models.
The PR also adds new properties to Doc: pos and features.
Note that this accessor is written defensively to make no assumptions about the order of words in the words list.
There is some performance cost but presumably this is insignificant compared to the cost of running the Stanford statistical models.
This is a cleaner model for me to keep in mind, which I hope will be easier for outside contribs to grok, even at the expense of some speed. Thank you.
Let me know about the Doc.features name. Now might be the time to settle on something, before too much is built on top.
Doc.features name seems all right to me -- would you prefer something different?
Doc.features is very ambiguous if one is unaware of what the method is supposed to do. In general, and especially in the ML paradigm, all of the word information may be considered a "feature". What is unique about your dict here is that it is (a) exclusively morphological and (b) structured in a data container instead of being a concatenated str. See where I'm coming from?
In general and especially in the ML paradigm, all of the word information may be considered a "feature".
I see your point. Well, in Linguistics the term is more specialized: there are phonological, morphological, syntactic, and semantic features. When talking of the "features of a word", I think one often understands morphosyntactic features, which I suppose is why the Stanford people have just feats.
Shall we specify then like this? Doc.morphosyntactic_features ?
Crazy idea -- what if we dropped the str altogether and just returned the dict as Doc.pos? This seems aggressive but certainly would make by default the CLTK's "value add" of smartly processed data.
Would we go so far as to create a new morphosyntactic data type, as @free-variation you did for dependency parses?
When talking of the "features of a word", I think one often understands morphosyntactic features, which I suppose is why the Stanford people have just feats.
OK. I wasn't totally aware of this.
Shall we specify then like this? Doc.morphosyntactic_features ?
I like this, but wouldn't it make sense to have parallel names between this and what's in Doc.pos, since it's all the same info only in different form.
Doc.morphology -> dict
Doc.morphology_string -> str
John totally not meaning to be a pest about what is a small issue. If you feel strongly, I defer to your judgment!
If we create a data type, then I would propose going whole-hog and modeling a popular theory of morphosyntactic features, similar to what exists for phonology in the Orthophonology module in cltk. This would provide an interesting value-add, indeed. Shall I propose something?
similar to what exists for phonology in the Orthophonology module in cltk. This would provide an interesting value-add, indeed. Shall I propose something?
I'm definitely game. The key for me would be making such a new type immediately intuitive to philologically trained people like myself (this is why I love the dict you created).
Until then, for this PR how about we settle upon:
Doc.morphosyntactic_features -> dict
Doc.morphosyntactic_features_str-> str
The above has the benefit of nudging the user towards the more meaningful structured dict.
All right. What does Doc.morphosyntactic_features_str return? The unprocessed value that e.g. Stanford generates, like Case=Nom|Degree=Pos|Gender=Fem|Number=Sing ?
All right. What does Doc.morphosyntactic_features_str return? The unprocessed value that e.g. Stanford generates, like Case=Nom|Degree=Pos|Gender=Fem|Number=Sing ?
Yes. That's assuming you care to return this, at all. With the dict, it is now redundant and less convenient.
Wait,
I like this, but wouldn't it make sense to have parallel names between this and what's in Doc.pos, since it's all the same info only in different form.
In general it's not the same information, since the features don't specify the category ("part of speech") of the word. We could take a theoretical stand and assert that category is a morpho feature, but I think it may still be useful to allow for a Doc.pos separately.
Relatedly, I wonder if Doc.pos should return the Word.upos, so that instead of (in the Latin case) getting the rather opaque A1|grn1|casA|gen2|stAM , we just get NOUN.
Oh, and I just realized the obvious -- the unpacking of the features string should happen in the wrapper, which knows about the representation used by Stanford. Duh. I'll move it now.
You've illuminated what has confused me.
I didn't realize that the pos / feature distinction is coming straight out of stanford
In many treebanks (like English I have used), the POS tag is just NOUN, ADV, etc.. In the Greek and Latin treebanks, though, POS contains much more information (as in your doctest, ['A1|grn1|casA|gen2|stAM', 'N3|modA|tem1|gen6|stAV', 'C1|grn1|casA|gen2|stPV'])
Would it be wrong to have ALL of this info in one place.
Maybe related question: how come the features in the doctest look so different than POS:
>>> cltk_doc.pos[:3]
['A1|grn1|casA|gen2|stAM', 'N3|modA|tem1|gen6|stAV', 'C1|grn1|casA|gen2|stPV']
>>> cltk_doc.features[:3]
[{'Case': 'Nom', 'Degree': 'Pos', 'Gender': 'Fem', 'Number': 'Sing'}, {'Mood': 'Ind', 'Number': 'Sing', 'Person': '3', 'Tense': 'Pres', 'VerbForm': 'Fin', 'Voice': 'Act'}, {'Case': 'Nom', 'Degree': 'Pos', 'Gender': 'Fem', 'Number': 'Sing', 'PronType': 'Ind'}]
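The conversion between the two forms is mechanical. A hypothetical sketch of unpacking a CoNLL-U-style feats string into the dict shown above (the actual parsing lives in the CLTK's Stanford wrapper, which may differ):

```python
def parse_feats(feats: str) -> dict:
    """Unpack a CoNLL-U features string, e.g.
    'Case=Nom|Degree=Pos|Gender=Fem|Number=Sing', into a dict."""
    if not feats or feats == "_":
        return {}
    return dict(pair.split("=", 1) for pair in feats.split("|"))

parse_feats("Case=Nom|Degree=Pos|Gender=Fem|Number=Sing")
# → {'Case': 'Nom', 'Degree': 'Pos', 'Gender': 'Fem', 'Number': 'Sing'}
```

The pos strings like 'A1|grn1|casA|gen2|stAM' look different because they are a treebank-specific tagset, not the standardized Key=Value feats column.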
In many treebanks (like English I have used), the POS tag is just NOUN, ADV, etc.. In the Greek and Latin treebanks, though, POS contains much more information (as in your doctest,
Right, that's the issue. Since historically treebanks haven't worked hard at syncing their representations -- which themselves typically reflect some theory of grammar or some tradition -- POS can mean all sorts of things. That's why there's upos, which just gives one a basic category but not much else.
Even in English we see that under the impulse of providing more detailed information, features other than mere category start appearing in the POS tagset. So in the Penn treebank you have e.g. VBZ, which is verb; 3d person singular;present -- tense and person features have crept into the tag.
I believe that that POS string for Latin maps directly to the feats in the Stanford representation, except that the category feature is represented in upos. See the table here: https://universaldependencies.org/tagset-conversion/la-itconll-uposf.html
So what I'm suggesting is that we let users access the idiosyncratic representation via Word.pos if they want it, but in the Doc accessors, just provide the dict of the morpho features and the upos via `pos.
So what I'm suggesting is that we let users access the idiosyncratic representation via Word.pos if they want it, but in the Doc accessors, just provide the dict of the morpho features and the upos via `pos.
OK we're on the same page. I don't understand why conventions vary so much, but that's another story …
Ready for me to merge?
Hang on, let me make the changes we just discussed.
@kylepjohnson A bit of a messy problem. Currently Word.parent_token is getting set to a Stanford word type, so Stanford representations are leaking into the CLTK classes.
We could:
Just set this to the index of the word, perhaps renaming to Word.parent_index.
Stash the index, and on a second pass through the Doc's words, set parent_token to the correct CLTK Word, perhaps renaming the attribute to parent_word.
I think I'd vote for the second option.
I'm fine with the second, too.
Actually let's just do 1 for now. It's complicated: these parents are for multi-word tokens, which is tied to CoNLL files and whatnot. I'm not sure how widely MWT are used. So for now we'll just drop in an index (sentence scoped).
Of course none of this is documented in stanfordnlp.
I see the following error: https://travis-ci.org/cltk/cltkv1/builds/635140704#L495
If I can fix it in 5-10 mins, will edit it in browser :)
Your PR runs fine on my local with make test but you can see what the server says.
Also, a bunch of new warnings out of the NLTK, which would be good to rm. I have a few more mins but don't expect to solve this right now.
I know what it is, and the build server is right. I went ahead and bit the bullet: both governor and parent now are full CLTK Words, not just indices. The trouble is the latter causes a self reference, which I need to interrupt. Will get to it this evening.
What are the NLTK warnings?
What are the NLTK warnings?
Don't worry about those, I guess. As of Aug 26th they've been fixed, but the last release was Aug 20 (3.4.5).
The following pytorch (by way of stanfordnlp) one is annoying since it comes up even when I have warnings suppressed (as with make test):
/pytorch/aten/src/ATen/native/LegacyDefinitions.cpp:19: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
I haven't spent the time to figure out what is causing it. I did do similar with this: https://github.com/cltk/cltkv1/blob/master/src/cltkv1/wrappers/stanford.py#L124
as in this case the output was breaking all of our doctests.
Yeah, the PyTorch ones are annoying. I've been assuming they'll go away in some update of stanfordnlp but haven't investigated.
Tests pass. Ready now?
Yup, thanks.
That pytorch warning is really annoying. I'm trying to figure out where exactly it's coming from in Stanford, but they use masking everywhere so it's going to take a while. If I find it I'll submit a PR to them.
Wonderful, thank you.
Next, John, are you thinking of working on more access methods? If not, I will try my hand at a few (Doc.lemmata and, if I'm feeling saucy, Doc.embeddings). But if you want to move on these or have a vision, let us know and go for it.
Am wondering about your ideas for some kind of universal @dataclass to represent the entire mophosyntactic features of a word (as per discussion yesterday). Does this feel to you like a big experimental project or something we could start small on?
Sure, I can take a stab at the latter. For the former, there's a little utility Doc._get_words_attribute, if you find it useful.
Also, I did something last night that I forgot to signal: repackaged data_types.py and exceptions.py out of utils and into a new core package. Because these are the core structures of the system! I hope you don't mind.
(The warning is coming from the parser. Damn -- that bit is complex. We'll see.)
repackaged data_types.py and exceptions.py out of utils and into a new core package
You anticipated -- and solved -- something that was increasingly annoying me.
former, there's a little utility Doc._get_words_attribute. if you find it useful.
https://github.com/cltk/cltkv1/blob/master/src/cltkv1/core/data_types.py#L109
So handy, thank you. Little things like this help me.
Well, I fixed that warning and submitted a PR. I'm not sure how active Manning's grad students are in managing the project though.
I took a look at it, I hope they can get around to it. A test of how much we could rely on them.
| gharchive/pull-request | 2020-01-09T22:39:27 | 2025-04-01T04:33:48.904639 | {
"authors": [
"free-variation",
"kylepjohnson"
],
"repo": "cltk/cltkv1",
"url": "https://github.com/cltk/cltkv1/pull/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2579435358 | Update tailwind.config.js
Small changes for increased efficiency and uses latest Tailwind CSS configuration options and best practices. (I am early in development career, if I have messed up somewhere feel free to let me know please)
I don't think it's really needed.
| gharchive/pull-request | 2024-10-10T17:30:03 | 2025-04-01T04:33:48.912264 | {
"authors": [
"DevyRuxpin",
"jalaym825"
],
"repo": "clubgamma/club-gamma-frontend",
"url": "https://github.com/clubgamma/club-gamma-frontend/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
774631078 | Improve documentation readability
Fix typos
Fix inconsistency
Fix unclickable URLs
Fix unintentional rendering (e.g., collapsed lists and paragraphs)
Remove comments that shouldn't be in the docs (e.g., TODO rewrite examples)
Add more useful links to jump to
Didn't fix missing periods for now unless the line had something else to fix.
kind_k8s job is an obvious false negative. merging.
| gharchive/pull-request | 2020-12-25T05:10:32 | 2025-04-01T04:33:48.915594 | {
"authors": [
"clux",
"kazk"
],
"repo": "clux/kube-rs",
"url": "https://github.com/clux/kube-rs/pull/357",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
660023662 | Manually build using Emscripten toolchain
This more complicated that it sounds as the first thing needed to be done is building Boost using the Emscripten toolchain - non-trivial.
We don't need to worry about Boost libraries as I have removed them in #73 and #74. Instead we need to worry about Clang 11's lack of ranges support!
| gharchive/issue | 2020-07-18T09:06:24 | 2025-04-01T04:33:48.938882 | {
"authors": [
"cmannett85"
],
"repo": "cmannett85/malbolge",
"url": "https://github.com/cmannett85/malbolge/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2610005291 | refactor: publish to JSR
This PR makes this package publishable on JSR. This means that the module will no longer be available on https://deno.land/x and the corresponding workflow should be removed.
Further reading: https://jsr.io/docs/publishing-packages
@iuioiua I rebased the PR onto the updated master branch after merging #57 and added a few updates to the examples ( they're now all using jsr: imports instead of the import map) and updated a few links in the README.md to point to jsr.io instead of deno.land.
Apparently GitHub doesn't recognize this as a successful merge of your PR, so I'll just close it manually instead.
The package is now available at JSR!
I'll also try to find some time to test the package with some other runtimes to see if I can get that JSR score closer towards 100% :)
Ah, this also resolves #52
| gharchive/pull-request | 2024-10-23T23:04:07 | 2025-04-01T04:33:48.948860 | {
"authors": [
"cmd-johnson",
"iuioiua"
],
"repo": "cmd-johnson/deno-oauth2-client",
"url": "https://github.com/cmd-johnson/deno-oauth2-client/pull/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
792826808 | nginx errors
Blank page, and errors in the logs
help me please
2021/01/24 18:47:38 [error] 28634#28634: *305 open() "/var/www/commandment/commandment/static/app.js" failed (2: No such file or directory), client: 91.103.205.85, server: , request: "GET /static/app.js HTTP/1.1", host: "mdm.mydomain.pro", referrer: "https://mdm.mydomain.pro/"
2021/01/24 18:47:38 [error] 28634#28634: *306 open() "/var/www/commandment/commandment/static/css/app.css" failed (2: No such file or directory), client: 91.103.205.85, server: , request: "GET /static/css/app.css HTTP/1.1", host: "mdm.mydomain.pro", referrer: "https://mdm.mydomain.pro/"
Same here. These files seem to be missing from the distribution.
| gharchive/issue | 2021-01-24T15:54:32 | 2025-04-01T04:33:48.963024 | {
"authors": [
"AFabyTWE",
"zakachkin"
],
"repo": "cmdmnt/commandment",
"url": "https://github.com/cmdmnt/commandment/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
272045255 | Updating the README of /test/
Added info for Usage of TestingExecutorUtil to create tables
Coverage increased (+0.03%) to 75.293% when pulling 8528e6d1e17ecdee85c821d95cbde07df9a5069f on rohit-cmu-patch-1 into 82430b6ee33c07857d9c0b79bc885b658657d995 on master.
Coverage decreased (-0.02%) to 75.028% when pulling eb127d3210aa4e7ebf354b2b4cbaa679c5eb7cd3 on rohit-cmu-patch-1 into 2f8534183a6bf4d11ac2ef06d4d0b52fc7b98d22 on master.
| gharchive/pull-request | 2017-11-08T01:35:07 | 2025-04-01T04:33:49.313563 | {
"authors": [
"coveralls",
"rohit-cmu"
],
"repo": "cmu-db/peloton",
"url": "https://github.com/cmu-db/peloton/pull/883",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
723549475 | Refactor claims_hosp to use new geomapper functions
Part of #306
Summary of changes:
Use the latest geomapper functions
remove unused zips static file
Couple things to note here:
The previous functions, e.g. county_to_state(), don't exist in the GeoMapper package.
They do exist in emr_hosp, but with different arguments.
There are deprecated functions like convert_fips_to_state_id() in the GeoMapper package which contain the arguments called by this code, so maybe they used to be named county_to_state() but were renamed later. However, if you use these functions, the code doesn't work due to a non-unique multi-index.
After switching to the new functions all tests pass, but I can't verify against previous behaviour since the original code didn't run.
I don't believe this indicator makes any use of the GeoMapper population functions. Is that correct? If we use population data here, I should block this merge until we fix #325.
Yes, this indicator does not use population. So there shouldn't be any problem merging.
| gharchive/pull-request | 2020-10-16T21:44:44 | 2025-04-01T04:33:49.317090 | {
"authors": [
"chinandrew",
"krivard",
"mariajahja"
],
"repo": "cmu-delphi/covidcast-indicators",
"url": "https://github.com/cmu-delphi/covidcast-indicators/pull/322",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1080349869 | Big simulation bug
While comparing JEMRIS with MRIsim using the exact same phantom and sequence, I found a weird simulation bug when I simulated with some TE's:
There was a ghost appearing that vanished as I decreased dt.
The error was due to an incorrect interpretation of the current simulation time, fixed in https://github.com/cncastillo/MRIsim.jl/commit/7cf187d4b35aa1943641d1fd3db2b0b189f9827
| gharchive/issue | 2021-12-14T22:18:05 | 2025-04-01T04:33:49.344183 | {
"authors": [
"cncastillo"
],
"repo": "cncastillo/MRIsim.jl",
"url": "https://github.com/cncastillo/MRIsim.jl/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
903862716 | Add Caddy
Pre-submission checklist:
Please check each of these after submitting your pull request:
[x] Are you only including a repo_url if your project is 100% open source? If so, you need to pick the single best GitHub repository for your project, not a GitHub organization.
[x] Is your project closed source or, if it is open source, does your project have at least 300 GitHub stars?
[x] Have you picked the single best (existing) category for your project?
[x] Does it follow the other guidelines from the new entries section?
[x] Have you included a URL for your SVG or added it to hosted_logos and referenced it there?
[x] Does your logo clearly state the name of the project/product and follow the other logo guidelines?
[x] Does your project/product name match the text on the logo?
[ ] Have you verified that the Crunchbase data for your organization is correct (including headquarters and LinkedIn)?
[ ] ~15 minutes after opening the pull request, the CNCF-Bot will post the URL for your staging server. Have you confirmed that it looks good to you and then added a comment to the PR saying "LGTM"?
Build failed because of:
no headquarter addresses for caddy-2 at https://www.crunchbase.com/organization/caddy-2
No cached entry, and Caddy has issues with logo: caddy.svg, SVG file embeds a png. Please use a pure svg file
No cached entry, and Valve Software (member) has issues with twitter: https://twitter.com/valveoficial, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
No cached entry, and Sosivio (member) has issues with twitter: https://twitter.com/SosivioLtd, 404 - {"errors":[{"code":34,"message":"Sorry, that page does not exist."}]}
Empty twitter for Rancher Federal (member): https://twitter.com/rancherfederal
Empty twitter for Banzai Cloud (KCSP): https://twitter.com/banzaicloud
Empty twitter for Banzai Cloud Pipeline Kubernetes Engine (PKE): https://twitter.com/banzaicloud
Empty twitter for StackStorm: https://twitter.com/Stack_Storm
Caddy either has no crunchbase entry or it is invalid
:x: Deploy Preview for landscape failed.
:hammer: Explore the source changes: ad42baee2c1d0a14a04170739eef3eb8b419c9ca
:mag: Inspect the deploy log: https://app.netlify.com/sites/landscape/deploys/60afaf26be83390008058777
@ThewBear: This is asking for a better SVG, see: " Caddy has issues with logo: caddy.svg, SVG file embeds a png. Please use a pure svg file"
close due to inactivity
@ThewBear I had to remove your entry as it was blocking the pipeline. (I suggest getting a correct crunchbase listing that fixes these issues.) See issues listed here: https://github.com/cncf/landscape/pull/2480
| gharchive/pull-request | 2021-05-27T14:39:33 | 2025-04-01T04:33:49.403943 | {
"authors": [
"CNCF-Bot",
"Morriz",
"ThewBear",
"amye",
"caniszczyk"
],
"repo": "cncf/landscape",
"url": "https://github.com/cncf/landscape/pull/2116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2252818030 | [Action] How can you rightsize your Kubernetes workloads to optimize for environmental sustainability?
Description
There are multiple ways to scale your workloads inside Kubernetes.
There is a tension between scaling workloads horizontally and vertically. Being too conservative when scaling can lead to throttling and slow performance, with the risk of Pods being OOMKilled. Being too liberal, on the other hand, can lead to wasting resources by either running Pods that are too big, or running too many Pods.
What's the most environmentally efficient way to rightsize workloads running on Kubernetes? Can this be achieved with default Kubernetes behavior? Or is a custom autoscaler a necessary addition?
In another issue, contributors suggested looking into:
kube-green, which can scale workloads down to 0 when there's no activity,
rekuberate-io/sleepcycles, "which is similar to kube-green but it covers a broader range of Kubernetes resources: Deployments, CronJobs, StatefulSets and HorizontalPodAutoscalers."
In our brainstorming session, Max brought up Keda, which allows for event-driven autoscaling.
Note: this could turn out to be a substantial piece of work to do on one's own.
Outcome
A recommendation in our working document that helps the reader make a choice on how they can rightsize their workloads with environmental sustainability in mind. Would using kube-green, sleepcycles, or Keda offer benefits here? How would that look? It would be great to say a few words about expected effort if changes to the cluster are required, and how big that effort could be (small, medium, large). Additionally, if possible, it would be great to add optional, extra reading material with added context if the reader's interested and has time.
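As one toy illustration of what such a rightsizing computation can look like (this is a sketch under arbitrary assumptions, not a recommendation of the working group; the p90 percentile and 20% headroom below are made up):

```python
import math

# Hypothetical sketch only: the percentile (p90) and headroom (20%) are
# arbitrary assumptions, not a TAG recommendation.
def recommend_request(samples_millicores, percentile=0.9, headroom=1.2):
    """Suggest a CPU request from observed usage samples (in millicores)."""
    if not samples_millicores:
        raise ValueError("need at least one usage sample")
    ordered = sorted(samples_millicores)
    # Nearest-rank percentile: the sample at the p-th rank.
    rank = max(0, math.ceil(percentile * len(ordered)) - 1)
    return round(ordered[rank] * headroom)

# A pod idling around 100m with one spike; the spike is ignored by p90:
usage = [90, 100, 95, 110, 105, 100, 400, 98, 102, 97]
print(recommend_request(usage))  # → 132
```

Tools like kube-green, sleepcycles, or Keda operate at a different layer (scheduling workloads down to zero or scaling on events) and would complement, not replace, this kind of request-sizing logic.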
To-Do
[ ] research if this could be a worthy recommendation,
[ ] if yes, write a recommendation,
[ ] share it for review, implement feedback.
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Comments
@mkorbi, @JacobValdemar - I'd love your input on this task's wording and scoping. Do you think it's actionable enough, or too vague? Anything worth adding?
Hi, as my next contribution to the project I can start working on this issue by the end of this week and the start of the next one.
Is that ok for you?
@JacobValdemar @xamebax
@graz-dev Fine with me, but let's hear what @xamebax thinks. After all, she is the assignee on this issue ☺️
@graz-dev this is great, go for it! I'm assigned as the creator, and I haven't done any work on this other than create the ticket. :D I'm sorry it took me so long to reply, I am hoping this wasn't demotivating. I can go ahead and assign you!
Yep @xamebax assign it to me!
I can work on it in the following days 😁
@graz-dev done!
@akyriako do I remember correctly you were interested in participating in this one?
Hi @xamebax , yes you do remember correct :+1: Please let me know where/how I could assist.
| gharchive/issue | 2024-04-19T12:09:51 | 2025-04-01T04:33:49.413010 | {
"authors": [
"JacobValdemar",
"akyriako",
"graz-dev",
"xamebax"
],
"repo": "cncf/tag-env-sustainability",
"url": "https://github.com/cncf/tag-env-sustainability/issues/392",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
230257460 | Use the IMU to determine +Z
This feature removes the requirement that you calibrate
with the tracked object in any particular orientation.
The IMU on the tracked object will be used to determine
what direction is "up".
OH MY GOSH I CAN'T WAIT TO TRY THIS OUT TOMORROWWWWWWWWWWWW YEEEESSSSSSS
| gharchive/pull-request | 2017-05-21T23:43:53 | 2025-04-01T04:33:49.434135 | {
"authors": [
"cnlohr",
"mwturvey"
],
"repo": "cnlohr/libsurvive",
"url": "https://github.com/cnlohr/libsurvive/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
34041722 | too many concerns
IMO this comes across as a little confusing: it does IP denial and rate-limiting, which we already have, so two in this org might confuse people. Also, when mentioned in https://github.com/koajs/ratelimit/issues/5 I thought koa-limit would be request body limiting.
White/black listing is cool, but I think we should make it a separate middleware personally. Also, in-memory rate limiting isn't very useful since most if not all real API deployments will be multi-process if not multi-machine.
something like .use(allow(ips)) and .use(deny(ips)) would be cool
yeah we don't need this
i added blacklist/whitelist in https://github.com/koajs/ratelimit/pull/17
| gharchive/issue | 2014-05-22T00:42:26 | 2025-04-01T04:33:49.448701 | {
"authors": [
"niftylettuce",
"tj"
],
"repo": "cnpm/koa-limit",
"url": "https://github.com/cnpm/koa-limit/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1052833968 | [BUG] [cluster] master node internal ip address can not be empty"
Describe the bug
I want to create a k3s HA cluster with this command:
autok3s create \
--provider native \
--cluster \
--k3s-channel stable \
--k3s-install-mirror INSTALL_K3S_MIRROR=cn \
--k3s-install-script http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh \
--name zkr-tj \
--ssh-key-path ~/.ssh/id_rsa \
--ssh-port 22 \
--ssh-user root \
--master-ips 192.168.129.191,192.168.129.192,192.168.129.193 \
--worker-ips 192.168.129.191,192.168.129.192,192.168.129.193,192.168.129.194,192.168.129.195
but it gets stuck at:
time="2021-11-14T11:13:03+08:00" level=info msg="[native] executing create logic..."
time="2021-11-14T11:13:03+08:00" level=info msg="[native] executing init k3s cluster logic..."
time="2021-11-14T11:13:03+08:00" level=error msg="[native] failed to create cluster: [cluster] master node internal ip address can not be empty"
time="2021-11-14T11:13:03+08:00" level=info msg="[native] executing rollback logic..."
time="2021-11-14T11:13:03+08:00" level=info msg="[native] instances [192-168-129-191 192-168-129-192 192-168-129-193 192-168-129-194 192-168-129-195] will be rollback"
bash: kubectl: command not found
sh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录
bash: kubectl: command not found
sh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录
sh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录
time="2021-11-14T11:17:04+08:00" level=warning msg="[native] failed to uninstall k3s on worker node 192-168-129-191: [ssh-dialer] init dialer [192.168.129.191:22] error: timed out waiting for the condition"
time="2021-11-14T11:17:04+08:00" level=warning msg="[native] failed to uninstall k3s on worker node 192-168-129-192: Process exited with status 127: bash: kubectl: command not found\nsh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录\n"
time="2021-11-14T11:17:04+08:00" level=warning msg="[native] failed to uninstall k3s on worker node 192-168-129-193: Process exited with status 127: bash: kubectl: command not found\nsh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录\n"
time="2021-11-14T11:17:04+08:00" level=warning msg="[native] failed to uninstall k3s on worker node 192-168-129-194: [ssh-dialer] init dialer [192.168.129.194:22] error: timed out waiting for the condition"
time="2021-11-14T11:17:04+08:00" level=warning msg="[native] failed to uninstall k3s on worker node 192-168-129-195: Process exited with status 127: sh: /usr/local/bin/k3s-agent-uninstall.sh: 没有那个文件或目录\n"
time="2021-11-14T11:17:04+08:00" level=info msg="[native] successfully executed rollback logic"
FATA[0241] [cluster] master node internal ip address can not be empty
(没有那个文件或目录 = no such file or directory)
Environments (please complete the following information):
OS: CentOS 7.2
AutoK3s Version: v0.4.4
What did I miss?
Remove the master nodes from --worker-ips. You can try the commands below:
autok3s create \
--provider native \
--cluster \
--k3s-channel stable \
--k3s-install-mirror INSTALL_K3S_MIRROR=cn \
--k3s-install-script http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh \
--name zkr-tj \
--ssh-key-path ~/.ssh/id_rsa \
--ssh-port 22 \
--ssh-user root \
--master-ips 192.168.129.191,192.168.129.192,192.168.129.193 \
--worker-ips 192.168.129.194,192.168.129.195
@szthanatos Please feel free to reopen if this issue still exists after trying the command above
| gharchive/issue | 2021-11-14T03:20:10 | 2025-04-01T04:33:49.455958 | {
"authors": [
"JacieChao",
"szthanatos"
],
"repo": "cnrancher/autok3s",
"url": "https://github.com/cnrancher/autok3s/issues/352",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1521566115 | feat: allow numeric input for Content-Format and Accept options
This PR introduces the possibility of passing numeric Content-Formats as option values to the setHeader method. It is still in draft status, since I will also add a new test for the feature.
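To illustrate the idea (this is a hedged sketch, not node-coap's actual implementation; the registry subset comes from RFC 7252 and RFC 7049), accepting either numeric codes or media-type strings could look like:

```python
# Illustrative only: mirrors the idea of the PR (accepting numeric
# Content-Format option values), not node-coap's real code.
# Registry subset from RFC 7252 / RFC 7049 (code 0 is text/plain;charset=utf-8,
# shortened here to "text/plain" for simplicity).
CONTENT_FORMATS = {
    0: "text/plain",
    40: "application/link-format",
    41: "application/xml",
    42: "application/octet-stream",
    47: "application/exi",
    50: "application/json",
    60: "application/cbor",
}
NAME_TO_CODE = {v: k for k, v in CONTENT_FORMATS.items()}

def normalize_content_format(value):
    """Accept either a numeric code or a media-type string; return the code."""
    if isinstance(value, int):
        if value not in CONTENT_FORMATS:
            raise ValueError(f"unknown Content-Format code: {value}")
        return value
    return NAME_TO_CODE[value]

print(normalize_content_format(50))                  # → 50
print(normalize_content_format("application/json"))  # → 50
```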
Pull Request Test Coverage Report for Build 3850969466
17 of 19 (89.47%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.1%) to 91.719%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
lib/option_converter.ts
17
19
89.47%
Totals
Change from base Build 3820725058:
-0.1%
Covered Lines:
2854
Relevant Lines:
3069
💛 - Coveralls
All fine, ok for me to merge
Pull Request Test Coverage Report for Build 3852188808
Warning: This coverage report may be inaccurate.
This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
For more information on this, see Tracking coverage changes with pull request builds.
To avoid this issue with future PRs, see these Recommended CI Configurations.
For a quick fix, rebase this PR at GitHub. Your next report should be accurate.
Details
27 of 27 (100.0%) changed or added relevant lines in 2 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.02%) to 91.878%
Totals
Change from base Build 3820725058:
0.02%
Covered Lines:
2866
Relevant Lines:
3077
💛 - Coveralls
| gharchive/pull-request | 2023-01-05T22:46:57 | 2025-04-01T04:33:49.505180 | {
"authors": [
"Apollon77",
"JKRhb",
"coveralls"
],
"repo": "coapjs/node-coap",
"url": "https://github.com/coapjs/node-coap/pull/363",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
961877093 | update CockroachCloud FAQ / docs to include clearer load balancer & connection pooling information
Is your feature request related to a problem? Please describe.
It is not clear to customers (at least one) who is responsible for setting up load balancing and connection pooling.
In the CockroachCloud FAQ there is no mention of these topics; however, I will note that in the production checklist
https://www.cockroachlabs.com/docs/cockroachcloud/production-checklist.html#use-a-pool-of-persistent-connections
There is discussion of connection pooling under "Use a pool of persistent connections" however it is not clear if that is something meant to be configured on the client side or in CockroachCloud somehow.
Describe the solution you'd like
Because there is already a section for connection pooling as noted above, this section could be modified to clarify "connection pooling cannot be configured in CockroachCloud, this is configured on the client side environment" or something like that.
Regarding load balancing, I think this may be more appropriate for the FAQ, to explain how it works in CockroachCloud. This is the information that TSE provided me on the topic, could we add something like this to the FAQ perhaps?
Cockroach Cloud has load balancing built into it. For each region there is a regional load balancer. If you are using a multiregion deployment then you can choose which region to connect to, but if you're only on one region then you have one connection URL. That connection URL will choose the node for you.
Describe alternatives you've considered
N/A this could be approached differently in terms of what goes where in the docs, not sure what makes most sense to docs team.
Additional context
The above information was provided to me by TSE, and has not been verified by SRE team. Just being transparent as I am not 100% sure this is all correct and should be verified before putting into docs.
Thank you!
I think the CockroachCloud docs could link to this section from the CockroachDB docs: https://www.cockroachlabs.com/docs/v21.1/connection-pooling.html
the same guidance applies.
Thanks for the update @rafiss . Yes, the point of my docs request isn't to say that customers are unclear about how to use connection pooling, they are unclear about whether or not CockroachCloud offers connection pooling and load balancing for them. So the summary of my docs request is: making it clear to CC customers that they still need to configure connection pooling, and make it clear that CC does have a load balancer built into it.
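To make the client-side nature of pooling concrete, here is a minimal, generic sketch of a connection pool. In practice you would rely on your driver's or ORM's built-in pooling (for example SQLAlchemy's pool settings) rather than hand-rolling one; the object() factory below is a stand-in for a real connection:

```python
# A minimal client-side connection pool, sketched with stdlib queue.Queue.
# Illustrative only: real applications should use their driver's pooling.
import queue

class ConnectionPool:
    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=5):
        # Blocks until a pooled connection is free, instead of opening a new one.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Demo with a stand-in "connection" factory:
pool = ConnectionPool(factory=lambda: object(), size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()   # reuses the connection we just released
print(c is a)        # → True
```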
| gharchive/issue | 2021-08-05T14:17:56 | 2025-04-01T04:33:50.609066 | {
"authors": [
"rafiss",
"theodore-hyman"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/11012",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
526181759 | import: add an example of DELIMITED with escaping
We ran into an input file like this today, with | characters as delimiters, and \| representing a literal |:
k1|v1
k2|v2
k3\|k3|v3
k4|v4\|\|\|v4
The proper way to import this is something like:
import table kv (k string primary key, v string) delimited data ('nodelocal:///thing.psv') with fields_terminated_by='|', fields_escaped_by='\';. It's possible to piece this together from the documentation of delimited, but we should at least include one example of how this works.
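The escaping rule being described can be sketched as follows. This is not CockroachDB's parser, just an illustration of how fields_escaped_by makes \| a literal pipe instead of a field separator:

```python
# Sketch of delimiter escaping: '|' separates fields unless preceded by '\'.
def split_escaped(line, delim="|", esc="\\"):
    fields, cur, i = [], "", 0
    while i < len(line):
        ch = line[i]
        if ch == esc and i + 1 < len(line):
            cur += line[i + 1]   # escaped char is taken literally
            i += 2
        elif ch == delim:
            fields.append(cur)
            cur = ""
            i += 1
        else:
            cur += ch
            i += 1
    fields.append(cur)
    return fields

print(split_escaped(r"k3\|k3|v3"))      # → ['k3|k3', 'v3']
print(split_escaped(r"k4|v4\|\|\|v4"))  # → ['k4', 'v4|||v4']
```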
@rolandcrosby, for i/o topics, please assign directly to @lnhsingh.
| gharchive/issue | 2019-11-20T20:33:36 | 2025-04-01T04:33:50.611150 | {
"authors": [
"jseldess",
"rolandcrosby"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/5895",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
545783885 | [Docs] Improve MovR via Flask and SQLAlchemy [tutorial]
Background: https://airtable.com/tblD3oZPLJgGhCmch/viw1DKmbKhg2MIECH/recD6XfHRQKppYRWF
Description: MovR is a fictional vehicle-sharing company created to demonstrate CockroachDB's features. We expanded upon MovR to provide a more “real-world” example application for developers to evaluate. We now provide a code repo as well as a tutorial for application developers to better understand how to build an application using CockroachDB.
It will also serve as a template for expansion to other ORMs and languages.
The MovR example consists of the following:
The movr dataset, which contains rows of data that populate tables in the movr database. The movr dataset is built into cockroach demo and cockroach workload.
The MovR application, a fully-functional vehicle-sharing application, written in Python. All of the MovR application source code is open source, and available on the movr GitHub repository.
While a great example application for the database, MovR is not built as a robust full stack application.
Team: Andrew Woods, Rafi Shamim, Eric Harmeling
Github Tracking Issue: https://github.com/cockroachdb/docs/issues/5687 https://github.com/cockroachdb/docs/issues/5610
Dup of https://github.com/cockroachdb/docs/issues/6056
| gharchive/issue | 2020-01-06T15:24:21 | 2025-04-01T04:33:50.615036 | {
"authors": [
"jseldess"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/issues/6270",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
243507457 | Add version tags
@jseldess, here's an implementation of version tags based on @kuanluo's screens. <span class="version-tag"> is for in-content tags and <div class="version-tag"> is for tags in the table of contents, because the table of contents code seems to strip the text from spans. I added examples on the /dev/information-schema page, under user_privileges.
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/6a0ca272243256305256c39a3e8dcb34730e407b/
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/4b6f1d4626d8b9f21bba2701bf02c738436963f9/
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/e09eb957eb2b9ebc97a993b767d645fefa67bc3d/
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/815bb5d716117fb741d1bea4d4c228a2d82e8563/
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/dfc496cb64abc694186480ded3d83546fc57b9cf/
LGTM. Thanks @deronjohnson.
| gharchive/pull-request | 2017-07-17T19:51:51 | 2025-04-01T04:33:50.619567 | {
"authors": [
"cockroach-teamcity",
"deronjohnson",
"jseldess"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/pull/1721",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
349277009 | Update AS OF SYSTEM TIME
AS OF SYSTEM TIME can now use some more complex expressions to compute the desired timestamp
Closes #3455.
http://cockroach-docs-review.s3-website-us-east-1.amazonaws.com/33311bdd4af87601a8258ea724c7a3137edbe235/
One example: we highly recommend that you run backups 10 seconds in the past to avoid interfering with production traffic, and this is perfect for that.
I don't think you can have expressions like now() or any other time function in the AOST clause. I think that we should just close this PR and issue because this change doesn't need to be documented beyond the release note. I can't come up with a good example of why a user would care about this. We made the change for internal code reasons, not any user-facing need.
Ok, thanks for info. Will close the issue / PR!
| gharchive/pull-request | 2018-08-09T20:25:27 | 2025-04-01T04:33:50.622591 | {
"authors": [
"cockroach-teamcity",
"danhhz",
"lhirata",
"mjibson"
],
"repo": "cockroachdb/docs",
"url": "https://github.com/cockroachdb/docs/pull/3532",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1991109862 | Cluster SQL Activity CPU Overhead
Enabling the Cluster SQL Activity query incurs a very large CPU overhead on the CockroachDB cluster; this may not be the case for others.
We tested this on a 7-node CockroachDB cluster (virtual machines) running v22.1.20, where each node has 4 CPUs.
Here you can see the CPU increase from a mean of 7.5% to a mean of 49% then back down to the original mean.
You can also see massive network egress, which I think is due to the default interval of 10 seconds while the query executes for 16 seconds.
This example should either be improved, or removed from the examples as it sets a bad precedent. The rest of the queries incur a very small overhead.
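A quick back-of-envelope check on the numbers above: if the query takes longer than the collection interval, executions overlap and the query is effectively always running. A tiny sketch:

```python
# With a 10 s collection interval and a query that runs for ~16 s,
# a new execution starts before the previous one finishes.
def steady_state_concurrency(exec_seconds, interval_seconds):
    """Average number of in-flight executions once the schedule is saturated."""
    return exec_seconds / interval_seconds

print(steady_state_concurrency(16, 10))  # → 1.6 concurrent executions on average
```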
@Otterpohl - this query is meant to be executed at a cluster level and at a very low frequency. It does have a significant overhead otherwise.
If visus is deployed in a self-hosted cluster as a sidecar for each node, sql activity should be used instead.
I'll add a comment and change the scope to make this clear.
| gharchive/issue | 2023-11-13T17:21:57 | 2025-04-01T04:33:50.627798 | {
"authors": [
"Otterpohl",
"sravotto"
],
"repo": "cockroachlabs/visus",
"url": "https://github.com/cockroachlabs/visus/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
628081927 | Multiline label
Also, thinking about labels with more than one line, like: "label line1<br>label line2". Maybe some enhancements can be useful here.
https://github.com/cocopon/tweakpane/issues/46#issue-623518660
Added in 233b33f.
| gharchive/issue | 2020-06-01T00:44:05 | 2025-04-01T04:33:50.637866 | {
"authors": [
"cocopon"
],
"repo": "cocopon/tweakpane",
"url": "https://github.com/cocopon/tweakpane/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
365415652 | new features
Hi,
Any chance you could:
support composer?
support query parameterization
https://azure.microsoft.com/en-us/blog/announcing-sql-parameterization-in-documentdb/
Don't require pear.
@atrauzzi I have forked this repo and updated a lot of things, including the use of Guzzle instead of PEAR.
Nice! You should share a link here so that people who want to use/help can find it more easily. 😄
https://github.com/jupitern/cosmosdb
| gharchive/issue | 2018-10-01T11:19:28 | 2025-04-01T04:33:50.677805 | {
"authors": [
"atrauzzi",
"jondmcelroy",
"jupitern"
],
"repo": "cocteau666/AzureDocumentDB-PHP",
"url": "https://github.com/cocteau666/AzureDocumentDB-PHP/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
60300475 | Automating builds and deploys
This issue is in regard to the following:
[x] Hook up this repo to Sophicware hosted jenkins instance
[ ] Create jenkins job for building AMIs within C4N AWS account
[x] Setup continuous deployment
New builds are triggered via ppushing new tags.
| gharchive/issue | 2015-03-09T05:30:40 | 2025-04-01T04:33:50.747519 | {
"authors": [
"jcockhren"
],
"repo": "code-for-nashville/hrc-employment-diversity-report",
"url": "https://github.com/code-for-nashville/hrc-employment-diversity-report/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113619932 | Why is the color calibration in gray scale?
Hi, I am calibrating the Kinect, and when I run this line rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record color, I get a grayscale image for calibration. Should it be like this?
Thanks in advance. Also, I wanted to ask why there is no RGB-D ROS topic. It could really make it easy for Matlab scatter3 plotting.
Because calibration does not need color information.
What should an RGB-D topic be? A four-channel image with R, G, B, D where R, G, B are 8-bit and D is 16-bit? I think an RGB image and a separate depth image is the better way.
Ok, I understood the calibration method. Yeah, exactly, I meant 8-bit R, G, B and 16-bit depth. It makes it really easy for a Matlab user to segment the region of interest with such a cloud topic. Otherwise you have to use Cartesian coordinates and build the proper matrix from the camera intrinsics. For v1, Matlab has this toolbox. Or maybe it's better to say it like this: for people who use the ROS-to-Matlab bridge for Kinect v2, a cloud topic would be really helpful for further image processing. Look at it this way: now I have to subscribe to and time-sync the same topics to extract the color and depth to make a cloud, which will drain my CPU quite a lot. But with one cloud topic I get both at once. What is your idea, Thiemo?
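For reference, the intrinsics route mentioned above (computing Cartesian coordinates from depth pixels via the camera matrix) can be sketched with a pinhole model. The fx, fy, cx, cy values below are made up for illustration, not the Kinect v2's actual calibration:

```python
# Back-project a depth pixel (u, v, depth) to a 3D point with a pinhole model.
# fx, fy: focal lengths in pixels; cx, cy: principal point. Illustrative values.
def depth_to_point(u, v, depth_m, fx, fy, cx, cy):
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Principal point at the image center of a 512x424 depth frame:
print(depth_to_point(u=256, v=212, depth_m=1.5,
                     fx=365.0, fy=365.0, cx=256.0, cy=212.0))
# → (0.0, 0.0, 1.5)
```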
For a point cloud, you could simply launch the kinect2.launch. That will give a point cloud topic.
Oh really good thanks. I did not know that.
But what kind of command is that? As a rosrun? Like rosrun kinect2.launch, after running roscore?
roslaunch kinect2_bridge kinect2.launch
I get this error:
buttler@buttler-desktop:~/catkin_ws$ roslaunch kinect2_bridge kinect2.launch
[kinect2.launch] is neither a launch file in package [kinect2_bridge] nor is [kinect2_bridge] a launch file name
The traceback for the exception was written to the log file
Maybe you should get familiar with ROS first. Take a look at the getting started and tutorials on http://wiki.ros.org/.
Ok, sorry, I am not responsible for the ROS environment. I will ask the ROS guy to fix this and will come back to you soon. Thanks again, Thiemo.
@shamoorti
roslaunch only works if the launch file exists. If you are able to find a kinect2.launch file in kinect2_bridge/launch, then ros can execute it. It seems you don't have a kinect2.launch file.
I would guess the workspace is not sourced...
@shamoorti are you able to launch kinect2_bridge.launch?
roslaunch kinect2_bridge kienct2_bridge.launch
Actually the command is roslaunch kinect2_bridge kinect2_bridge.launch. I think it was a typo from Thiemo. For roslaunch kinect2_bridge kinect2.launch, there is no launch file called kinect2.launch. Thank you guys, now the points topic is working. :)
Wow, I can't even subscribe to this in Matlab. I get this error:
test = rossubscriber('/kinect2/sd/points');
Oct 28, 2015 11:31:06 AM org.jboss.netty.channel.DefaultChannelPipeline
WARNING: An exception was thrown by a user handler while handling an exception event ([id: 0x09edcacb, /127.0.0.1:52842 :> buttler-desktop/127.0.1.1:45131] EXCEPTION: java.lang.OutOfMemoryError: Java heap space)
org.ros.exception.RosRuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.ros.internal.transport.ConnectionTrackingHandler.exceptionCaught(ConnectionTrackingHandler.java:94)
at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:533)
at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:49)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:311)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:91)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:373)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:247)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.jboss.netty.buffer.LittleEndianHeapChannelBuffer.copy(LittleEndianHeapChannelBuffer.java:125)
at org.jboss.netty.buffer.AbstractChannelBuffer.copy(AbstractChannelBuffer.java:522)
at org.ros.internal.transport.queue.MessageReceiver.messageReceived(MessageReceiver.java:62)
... 13 more
It seems to be a heap memory overflow!
Maybe your heap size is too small? Probably is fixable, here.
Thanks a lot Thiemo for your good answers. I have another question. None of the topics have the actual size of the Kinect depth, which is 512*424, and I know why you have done that, since you wanted the depth to be synced with the RGB image. But what if I want to just subscribe to the hd depth and plot it? I did that and I'm not getting the actual depth, of course. Any idea how to fix that? Should I make my own topic for the actual depth size?
I don't really understand the question. The topics in sd will have the resolution of the ir sensor (512*424). What do you mean with you are not getting the actual depth?
Ok, sorry for confusing you. Look at my cloud. Do you think this is a calibration problem?
@shamoorti
Are you using it in addition to any model? If so, you need to specify the position and orientation of the sensor in your model.
Looks like a normal point cloud. The far away points are probably just noise. Just set max_depth to something smaller.
It works like a charm Thiemo :). Thanks a lot.
@GaiTech-Robotics
No it's stand alone software for image processing.
| gharchive/issue | 2015-10-27T16:06:39 | 2025-04-01T04:33:50.767851 | {
"authors": [
"GaiTech-Robotics",
"shamoorti",
"vishu2287",
"wiedemeyer"
],
"repo": "code-iai/iai_kinect2",
"url": "https://github.com/code-iai/iai_kinect2/issues/171",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1141675724 | Error
Hi, thank you for this wonderful project, but what is the second option?
Hello,
The image cannot be opened for some reason; can you specify the name of the option, please?
ok
The Rate option is an integer value that defines the rate of concurrency.
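Assuming the Rate option works like a typical concurrency limit (this is a guess at the behavior, not this repo's actual code), it can be sketched with a semaphore:

```python
# Sketch: run tasks with at most `rate` executing concurrently.
# The semaphore gates entry; a lock protects the shared results list.
import threading

def run_with_concurrency(tasks, rate):
    gate = threading.Semaphore(rate)
    results = []
    lock = threading.Lock()

    def worker(task):
        with gate:            # blocks while `rate` tasks are already running
            out = task()
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker, args=(t,)) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

vals = run_with_concurrency([lambda i=i: i * i for i in range(5)], rate=2)
print(sorted(vals))  # → [0, 1, 4, 9, 16]
```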
| gharchive/issue | 2022-02-17T18:03:42 | 2025-04-01T04:33:50.770974 | {
"authors": [
"O4dg",
"code-l0n3ly"
],
"repo": "code-l0n3ly/instagram-auto-claimer",
"url": "https://github.com/code-l0n3ly/instagram-auto-claimer/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
597000858 | [user1@bm2 ~]$ crc status ERRO The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port? - exit status 1
General information
OS: Linux /
Hypervisor: KVM /
Did you run crc setup before starting it (Yes/No)? yes
CRC version
# Put the output of `crc version`
CRC status
# Put the output of `crc status`
[user1@bm2 ~]$ crc status
ERRO The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
exit status 1
CRC config
# Put the output of `crc config view`
Host Operating System
# Put the output of `cat /etc/os-release` in case of Linux
[user1@bm2 ~]$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Steps to reproduce
all functions have same error with "api.crc.testing:6443" connection is refused.
Expected
show from browser properly
Actual
connection refused suddenly. Is there any expiration problem with certificate or secret ?
Logs
You can start crc with crc start --log-level debug to collect logs.
Please consider posting this on http://gist.github.com/ and post the link in the issue.
Too little information to assess what might cause this. Is this in a cloud environment?
Yes. It happened in IBM Cloud.
I configured it on a bare-metal server in IBM Cloud.
Suddenly I could not access the console or services, or run (oc login).
I just ran (oc status) as posted above.
This is the second time I have seen this message.
Whenever I see this message, I have to delete the cluster and start again.
If there is an expiration date, please let me know.
I need to manage my schedule.
I received the same error after I changed the image in the deployment descriptor.
This is the error:
[user1@bm2 ~]$ oc logs portal-3-pw2zv
The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
How can I fix this issue ?
We recently identified some issues when things were run on the IBM Cloud. Can you share information from the host machine, like lscpu?
[user1@bm2 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
Stepping: 9
CPU MHz: 799.938
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7584.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear spec_ctrl intel_stibp flush_l1d
if you need more information, please let me know
output of crc setup --log-level debug and crc start --log-level debug would be useful.
So this looks like a possible network issue. Please provide the full debug log as requested.
[user1@bm2 liferay]$ crc setup --log-level debug
INFO Checking if oc binary is cached
DEBU oc binary already cached
INFO Checking if podman remote binary is cached
DEBU podman remote binary already cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
DEBU Checking if the vmx/svm flags are present in /proc/cpuinfo
DEBU CPU virtualization flags are good
INFO Checking if KVM is enabled
DEBU Checking if /dev/kvm exists
DEBU /dev/kvm was found
INFO Checking if libvirt is installed
DEBU Checking if 'virsh' is available
DEBU 'virsh' was found in /bin/virsh
INFO Checking if user is part of libvirt group
DEBU Checking if current user is part of the libvirt group
DEBU Running '/bin/groups user1'
DEBU Current user is already in the libvirt group
INFO Checking if libvirt is enabled
DEBU Checking if libvirtd.service is enabled
DEBU Running '/bin/systemctl is-enabled libvirtd'
DEBU libvirtd.service is already enabled
INFO Checking if libvirt daemon is running
DEBU Checking if libvirtd.service is running
DEBU Running '/bin/systemctl is-active libvirtd'
DEBU libvirtd.service is already running
INFO Checking if a supported libvirt version is installed
DEBU Checking if libvirt version is >=3.4.0
DEBU Running 'virsh -v'
INFO Checking if crc-driver-libvirt is installed
DEBU Checking if crc-driver-libvirt is installed
DEBU Running '/home/user1/.crc/bin/crc-driver-libvirt version'
DEBU crc-driver-libvirt is already installed in /home/user1/.crc/bin/crc-driver-libvirt
INFO Checking for obsolete crc-driver-libvirt
DEBU Checking if an older libvirt driver crc-driver-libvirt is installed
DEBU No older crc-driver-libvirt installation found
INFO Checking if libvirt 'crc' network is available
DEBU Checking if libvirt 'crc' network exists
DEBU Running 'virsh --connect qemu:///system net-info crc'
DEBU Checking if libvirt 'crc' definition is up to date
DEBU Running 'virsh --connect qemu:///system net-dumpxml --inactive crc'
DEBU libvirt 'crc' network has the expected value
INFO Checking if libvirt 'crc' network is active
DEBU Checking if libvirt 'crc' network is active
DEBU Running 'virsh --connect qemu:///system net-info crc'
DEBU libvirt 'crc' network is already active
INFO Checking if NetworkManager is installed
DEBU Checking if 'nmcli' is available
DEBU 'nmcli' was found in /bin/nmcli
INFO Checking if NetworkManager service is running
DEBU Checking if NetworkManager.service is running
DEBU Running '/bin/systemctl is-active NetworkManager'
DEBU NetworkManager.service is already running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
DEBU Checking NetworkManager configuration
DEBU NetworkManager configuration is good
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
DEBU Checking dnsmasq configuration
DEBU dnsmasq configuration is good
Setup is complete, you can now run 'crc start' to start the OpenShift cluster
[user1@bm2 liferay]$ crc start --log-level debug
DEBU No new version available. The latest version is 1.8.0
INFO Checking if oc binary is cached
DEBU oc binary already cached
INFO Checking if podman remote binary is cached
DEBU podman remote binary already cached
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
DEBU Checking if the vmx/svm flags are present in /proc/cpuinfo
DEBU CPU virtualization flags are good
INFO Checking if KVM is enabled
DEBU Checking if /dev/kvm exists
DEBU /dev/kvm was found
INFO Checking if libvirt is installed
DEBU Checking if 'virsh' is available
DEBU 'virsh' was found in /bin/virsh
INFO Checking if user is part of libvirt group
DEBU Checking if current user is part of the libvirt group
DEBU Running '/bin/groups user1'
DEBU Current user is already in the libvirt group
INFO Checking if libvirt is enabled
DEBU Checking if libvirtd.service is enabled
DEBU Running '/bin/systemctl is-enabled libvirtd'
DEBU libvirtd.service is already enabled
INFO Checking if libvirt daemon is running
DEBU Checking if libvirtd.service is running
DEBU Running '/bin/systemctl is-active libvirtd'
DEBU libvirtd.service is already running
INFO Checking if a supported libvirt version is installed
DEBU Checking if libvirt version is >=3.4.0
DEBU Running 'virsh -v'
INFO Checking if crc-driver-libvirt is installed
DEBU Checking if crc-driver-libvirt is installed
DEBU Running '/home/user1/.crc/bin/crc-driver-libvirt version'
DEBU crc-driver-libvirt is already installed in /home/user1/.crc/bin/crc-driver-libvirt
INFO Checking if libvirt 'crc' network is available
DEBU Checking if libvirt 'crc' network exists
DEBU Running 'virsh --connect qemu:///system net-info crc'
DEBU Checking if libvirt 'crc' definition is up to date
DEBU Running 'virsh --connect qemu:///system net-dumpxml --inactive crc'
DEBU libvirt 'crc' network has the expected value
INFO Checking if libvirt 'crc' network is active
DEBU Checking if libvirt 'crc' network is active
DEBU Running 'virsh --connect qemu:///system net-info crc'
DEBU libvirt 'crc' network is already active
INFO Checking if NetworkManager is installed
DEBU Checking if 'nmcli' is available
DEBU 'nmcli' was found in /bin/nmcli
INFO Checking if NetworkManager service is running
DEBU Checking if NetworkManager.service is running
DEBU Running '/bin/systemctl is-active NetworkManager'
DEBU NetworkManager.service is already running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
DEBU Checking NetworkManager configuration
DEBU NetworkManager configuration is good
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
DEBU Checking dnsmasq configuration
DEBU dnsmasq configuration is good
Checking file: /home/user1/.crc/machines/crc/.crc-exist
Found binary path at /home/user1/.crc/bin/crc-driver-libvirt
Launching plugin server for driver libvirt
Plugin server listening at address 127.0.0.1:37228
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(crc) Calling .GetBundleName
(crc) Calling .GetState
(crc) DBG | Getting current state...
(crc) DBG | Fetching VM...
INFO A CodeReady Containers VM for OpenShift 4.3.8 is already running
Making call to close driver server
(crc) Calling .Close
Successfully made call to close driver server
Making call to close connection to plugin binary
(crc) DBG | Closing plugin on server side
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
I see the same message after I deploy liferay/portal:7.3.1-ga2 (https://hub.docker.com/r/liferay/portal).
How can I access the KVM VM?
What are the ID and password to access KVM?
[user1@bm2 portal]$ ssh coreos@api.crc.testing
The authenticity of host 'api.crc.testing (192.168.130.11)' can't be established.
ECDSA key fingerprint is SHA256:FvpOio1tMRj80gx8Hx/xy77ud81f/+gM35gGB3Y9Fkc.
ECDSA key fingerprint is MD5:08:80:96:bc:20:d3:49:40:b9:43:80:a2:5c:c5:3c:68.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'api.crc.testing' (ECDSA) to the list of known hosts.
Warning: the ECDSA host key for 'api.crc.testing' differs from the key for the IP address '192.168.130.11'
Offending key for IP in /home/user1/.ssh/known_hosts:1
Are you sure you want to continue connecting (yes/no)? yes
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
@doyoungim999 there is a private key at ~/.crc/machines/crc/id_rsa which you should use with ssh to log in to the VM, if that is what you want.
Hi,
I have a question regarding reverse DNS with CodeReady.
How can I configure reverse DNS in CodeReady?
Reverse DNS is not something CRC handles, but the DNS server.
DNSmasq is configured as a pattern, so that should be OK.
What would you like to achieve?
--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]
I am trying to deploy itop (https://hub.docker.com/r/vbkunin/itop) on OpenShift.
Using podman it works well.
When I try to create itop on OpenShift, I get a Gateway Time-out error during the initial configuration.
So I am wondering if it can be related to the DNS service.
If possible, can we have a conference call to demonstrate the error and find the issue?
Can you explain all the steps you perform to deploy the docker container?
Such as the yml you use to deploy the service and have you set up a route?
$oc new-app docker.io/vbkunin/itop:2.7.0-beta
$oc serviceaccount useroot
$oc adm policy add-scc-to-user anyuid -z useroot
$oc edit dc/itop ( add serviceAccountName: useroot)
$oc expose svc/itop
$oc get route itop
[user2@codeReady full]$ oc get route itop
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
itop itop-emsproject.apps-crc.testing itop 80-tcp None
// add itop-emsproject.apps-crc.testing in /etc/hosts
//from the browser,
configure mysql; localhost/root/[blankpassword]
// at the last steps, 504 GATEWAY Time-out.
The error happened regardless of your choice on the itop initialization page. It is very similar to WordPress.
To my original issue,
$crc stop
$crc start
then the issue is resolved.
I don't know the reason. It could be related to the KVM VM running out of resources, or something similar...
| gharchive/issue | 2020-04-09T03:50:07 | 2025-04-01T04:33:50.814908 | {
"authors": [
"cfergeau",
"doyoungim999",
"gbraad",
"praveenkumar"
],
"repo": "code-ready/crc",
"url": "https://github.com/code-ready/crc/issues/1151",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2314568711 | feature: Add Github Organization Link
I wanted to get to the repository via the website, however, there wasn't any link that navigated me here.
Solution:
Add the GitHub Icon link in the footer and a "Contribute" section in the User menu to let beginners navigate easily to this organization.
@hkirat sir, please review the code
Fixes/666 Add Keyboard Shortcut Guide and Custom Scrollbar #667
In this PR the GitHub link is available; waiting for @hkirat sir's review of this PR.
Please reference all the changes made; I will close this issue once the PR gets pulled.
Put the merged PR reference out here once done.
| gharchive/issue | 2024-05-24T06:53:00 | 2025-04-01T04:33:50.822978 | {
"authors": [
"20santi",
"pkmanas22",
"zeul22"
],
"repo": "code100x/cms",
"url": "https://github.com/code100x/cms/issues/701",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2295380839 | Tnc page and privacy-policy pages added
PR Fixes:
Terms and Conditions page added
Privacy-policy page added
Resolves #416
[x] I have performed a self-review of my code
[x] I assure there is no similar/duplicate pull request regarding same issue
I am providing a screen recording as well as screenshots of both pages below.
video for the same
Screencast from 2024-05-14 18-25-58.webm
@hkirat could you please check this PR, as these are important pages for any website: TnC and privacy policy.
/bounty $15
| gharchive/pull-request | 2024-05-14T12:57:10 | 2025-04-01T04:33:50.826606 | {
"authors": [
"hkirat",
"kethesainikhil"
],
"repo": "code100x/daily-code",
"url": "https://github.com/code100x/daily-code/pull/417",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
🌸 Fukui Traditional Crafts Quiz 🌸
https://mai-sakulight.github.io/traditionai-craft-idol/
I'm curious about the answer to the Echizen washi quiz.
Good luck with the app development!
That was kind of fun!
It won't tell me the answer!
It suddenly went silent and I laughed. lol
So it's not going to answer?! lol
I want to know the answer (;'∀')
I'm curious what happens next!
| gharchive/issue | 2022-06-27T06:24:45 | 2025-04-01T04:33:50.835009 | {
"authors": [
"azukian123",
"mai-sakulight",
"nao03118023",
"natsukokatou",
"obayashir",
"oka6tomo",
"sakulightkako",
"taisukef",
"wakana-macchari"
],
"repo": "code4fukui/fukui-kanko",
"url": "https://github.com/code4fukui/fukui-kanko/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1360908043 | Moment package size
The last commit updated moment-timezone and increased the size of app.js from 1.85 MB to 2.49 MB.
we should explore alternatives: luxon? vanilla js?
Timezone conversion does not seem possible in vanilla JS, but if we implement Luxon we should be able to get the package down to 1.44 MB 😅
| gharchive/issue | 2022-09-03T16:29:27 | 2025-04-01T04:33:50.838321 | {
"authors": [
"joshreisner"
],
"repo": "code4recovery/tsml-ui",
"url": "https://github.com/code4recovery/tsml-ui/issues/243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2617220163 | [Feat]: make the opportunity hub page consistent
What feature?
The current opportunity page does not have the same effect as it has on desktop. I will make it consistent so it looks good on the site.
Assign me this under hacktoberfest and gssoc-ext
Add screenshots
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Check out the #issue475 ⬇
https://github.com/codeaashu/DevDisplay/issues/475#issue-2617410305
Get a #level3 on your open-source contribution.
Remember that ⚠ This issue will be closed on 31 October
@codeaashu I don't understand #475; there is an issue with level 3 already assigned to someone. As this issue is closed, can you assign me some issues please?
| gharchive/issue | 2024-10-28T03:49:38 | 2025-04-01T04:33:50.842314 | {
"authors": [
"AE-Hertz",
"codeaashu"
],
"repo": "codeaashu/DevDisplay",
"url": "https://github.com/codeaashu/DevDisplay/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1062294441 | 🛑 CodeChef is down
In 9909b37, CodeChef (https://www.codechef.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CodeChef is back up in b7eb9e0.
| gharchive/issue | 2021-11-24T11:10:19 | 2025-04-01T04:33:50.935549 | {
"authors": [
"codechef-machine-user"
],
"repo": "codechef-org/status",
"url": "https://github.com/codechef-org/status/issues/160",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2664191839 | Layout By Convention not working in Grails 7
When adding <meta name="layout" content="layoutByConvention"/> the layout is applied.
But when using the layout by convention method, the layout is not applied.
See example project with tests: https://github.com/matrei/sitemesh-bug-layout-by-convention
what is layout by convention method? your link doesn't work.
Sorry, I have updated the link: https://gsp.grails.org/6.2.3/guide/layouts.html#layout_by_convention
@matrei thanks, yeah, that is not supported at the moment, but we can get to it.
This involves coming up with a solution that is efficient and doesn't check if a file exists every time. Perhaps bake it into controller logic and just set the request layout attribute if the file exists?
https://github.com/codeconsole/grails-plugin-sitemesh3/blob/84a0e32622cddef7e98ddc6aaeae557bf3a4af71/grails-app/controllers/org/sitemesh/grails/plugins/sitemesh3/DemoController.groovy#L6
| gharchive/issue | 2024-11-16T11:06:44 | 2025-04-01T04:33:50.943700 | {
"authors": [
"codeconsole",
"matrei"
],
"repo": "codeconsole/grails-plugin-sitemesh3",
"url": "https://github.com/codeconsole/grails-plugin-sitemesh3/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
269409740 | CircleCI: inconsistent args for build/job with codecov-python
https://github.com/codecov/codecov-bash/blob/7e6cfa3935aeb29de57298bda496dad27b24aa37/codecov#L541-L542
vs
https://github.com/codecov/codecov-python/blob/adc22c19f3e924cac15a7ee25edfb3a5b518bf7c/codecov/__init__.py#L345-L346
build=os.getenv('CIRCLE_BUILD_NUM') + "." + os.getenv('CIRCLE_NODE_INDEX'),
job=os.getenv('CIRCLE_BUILD_NUM') + "." + os.getenv('CIRCLE_NODE_INDEX'),
This may be a bug in the python uploader, or it is handled differently. The bash uploader correctly handles CircleCI build URLs.
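For context, the Python expression above composes the CircleCI build identifier like this (a minimal runnable sketch; the environment values here are hypothetical, for illustration only):

```python
import os

# Hypothetical CircleCI environment values, for illustration only.
os.environ["CIRCLE_BUILD_NUM"] = "123"
os.environ["CIRCLE_NODE_INDEX"] = "0"

# The Python uploader sets both `build` and `job` to the same
# "<build_num>.<node_index>" string (see the snippet above).
build = os.getenv("CIRCLE_BUILD_NUM") + "." + os.getenv("CIRCLE_NODE_INDEX")
job = os.getenv("CIRCLE_BUILD_NUM") + "." + os.getenv("CIRCLE_NODE_INDEX")

print(build)  # 123.0
print(job)    # 123.0
```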
@drazisil
Ok. Can you move the issue there then?
(This issue just documents / points out that they differ - I cannot say which is correct, but they should be the same)
| gharchive/issue | 2017-10-29T15:47:28 | 2025-04-01T04:33:50.946084 | {
"authors": [
"blueyed",
"drazisil"
],
"repo": "codecov/codecov-bash",
"url": "https://github.com/codecov/codecov-bash/issues/95",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1702960481 | [BUG] Invalid form values don't prevent account creation
Describe the bug
When using the signup form, validation errors don't prevent account creation.
To Reproduce
Steps to reproduce the behavior:
Go to '/register'
Fill in a last name with an _, or an invalid phone number
An error should appear, and you will remain on the same screen
Go to '/login'
Try to login with the credentials used during the signup step
Expected behavior
If you encounter an error in the signup form, the account should not be created until the user fixes the issues.
Screenshots
Brian S. is going to take a look.
Followed steps. Could not reproduce issue
Can't reproduce on https://dev.nationalpolicedata.org/ either, closing.
| gharchive/issue | 2023-05-10T01:24:05 | 2025-04-01T04:33:50.982617 | {
"authors": [
"mikeyavorsky",
"rosyntow"
],
"repo": "codeforboston/police-data-trust",
"url": "https://github.com/codeforboston/police-data-trust/issues/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
830163973 | Contributors section in README.md
Issue Number
fixes #51
Describe the changes you've made
Added a credits section in our README.md file like this to recognize contributors in the project.
Describe if there is any unusual behavior (Any Warning) of your code(Write NA if there isn't)
NA
Additional context (OPTIONAL)
Test plan (OPTIONAL)
A good test plan should give instructions that someone else can easily follow.
Checklist
[x] My code follows the code style of this project.
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[x] All new and existing tests passed.
[x] The title of my pull request is a short description of the requested changes.
Why are these tests not passing
Don't know, I only made changes in README.md file
Hi @Harshal0902,
I think the tests failed because of the eslint errors.
Just pull from the main branch and they will be resolved.
| gharchive/pull-request | 2021-03-12T14:20:51 | 2025-04-01T04:33:50.987967 | {
"authors": [
"Harshal0902",
"kaiwalyakoparkar",
"rajatgupta24"
],
"repo": "codeforcauseorg/edu-client",
"url": "https://github.com/codeforcauseorg/edu-client/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2074669224 | Content assist issues (punctuation characters)
Not sure if this is known, but I am experiencing two issues with content assist.
If you declare a procedure starting with '#', it seems to break the code completion.
Example
dcl-proc #testProc;
dcl-pi *n ind;
param char(30) const;
end-pi;
end-proc;
If you have a qualified ds that is an array, it seems to only pick up the reference to where the array is defined
Example
dcl-ds t_template template qualified;
elem likeds(t_array) dim(10);
end-ds;
dcl-ds t_array template qualified;
elem1 char(10);
elem2 char(3);
elem3 char(1);
end-ds;
dcl-ds ds_template likeds(t_template);
ds_template.elem(1)
@Wikus86
it seems to break the code completion.
What does this mean?
Hi @worksofliam ,
So 'break' is probably not the correct word :).
The first thing I noticed is that when you hover over a proc whose name starts with '#', the hover does not pop up, as below:
And here is an example with a proc without the '#':
Then also, when you hit "ctrl + space" it shows the proc in the list,
but when you start typing '#t', it goes missing.
Hope this clarifies it a bit.
Made an issue here: https://github.com/barrettotte/vscode-ibmi-languages/issues/136
@Wikus86
IMHO it's not a good practice to use the special characters #, @ and $ in variable and procedure names - actually, any name at all - due to the characters not being consistent across CCSID's.
For additional information, read this article by Bob Cozzi...
@worksofliam ,
I am happy with that. I will just remove it from my procedures.
Btw, the issue I raised about the qualified data structure with an array is not exactly accurate. It seems that if you have a qualified DS inside a qualified DS, it only picks up to the second level, i.e. ds_example.field.
Also, I picked up another small issue. Only in some of my internal procedures, if I declare a local work field, content assist does not pick up the work field, but in other internal procs it works. I tried looking for a pattern for why this happens, but no luck as yet. Should I open a separate issue for this?
Otherwise, the work you guys are doing is freaking awesome. Really enjoying using VS Code these days.
@Wikus86 no separate issue required, but if you could provide more examples I can test with that'd be great.
Here is an example of it not picking up the local work field.
Another thing I noticed: the linter reports that it has not been referenced.
@Wikus86 Sorry to be a pain. Would you actually put that into a new issue for me? :)
@Wikus86 Sorry to be a pain. Would you actually put that into a new issue for me? :)
No worries, will do
| gharchive/issue | 2024-01-10T15:53:39 | 2025-04-01T04:33:51.000235 | {
"authors": [
"Wikus86",
"chrjorgensen",
"worksofliam"
],
"repo": "codefori/vscode-rpgle",
"url": "https://github.com/codefori/vscode-rpgle/issues/285",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
258348111 | Pitch your CityCamp NC session here
If you plan on pitching a session this year for CityCamp NC, please add a comment to this issue with the proposed title and your name. Feel free to add a brief description. We will list the submitted proposals to the website schedule page so attendees can start to formulate a few topics that will be at the unconference.
Disclaimer: you must be present at CityCamp NC on Friday, September 29 to make your pitch in person at approximately 11:15 am at Nash Hall / Church on Morgan.
I am good to be there all day on Friday. Would love the opportunity to discuss Open Data and Alexa Skills. @jhibbets @ChrisTheDBA Please let me know how long the presentation should be.
Thanks.
Plan for 45 minutes. Note - we will not have a/v, but we've had people bring their own in the past.
Jason,
Looking at the schedule, I really don't see where there is a place for the Town of Cary's discussion regarding open data and developing Alexa skills in Friday's program.
Apparently, I am too late to sign up for a lightning talk. Should I just table it to the next event?
Please advise.
Janelle Bailey
Janelle,
It will be an unconference session. Do you have 40 minutes of material?
Jason,
Looking at the schedule, I really don't see where there is a place for the
Town of Cary's
Discussion regarding open data and developing Alexa skills in Friday's
program.
Apparently, I am too late to sign up for a lightning talk. Should I just
table it to the next event.
Please advise.
Janelle Bailey
Sent from my iPhone
On Sep 22, 2017, at 4:49 PM, Jason Hibbets <notifications@github.com
mailto:notifications@github.com> wrote:
Plan for 45 minutes. Note - we will not have a/v, but we've had people
bring their own in the past.
On Fri, Sep 22, 2017 at 4:47 PM, jlbaile1 <notifications@github.com
mailto:notifications@github.com> wrote:
I am good to be there all day on Friday. Would love the opportunity to
discuss Open Data and Alexa Skills. @jhibbets
https://github.com/jhibbets @ChrisTheDBA
https://github.com/christhedba Please let me know how long the
presentation should be.
Thanks.
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
<
https://github.com/codeforraleigh/NCOpenPass/issues/693#issuecomment-331556691>,
or mute the thread
<
https://github.com/notifications/unsubscribe-auth/ADgoV91kpU96qfdO_yHXOBaaOids8qYsks5slBzngaJpZM4PaVoG>
.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub<
https://github.com/codeforraleigh/NCOpenPass/issues/693#issuecomment-331556943>,
or mute the thread<
https://github.com/notifications/unsubscribe-auth/AQPlu-HyJGtvt-gLgUVDnGRAc0Wvv-woks5slB0wgaJpZM4PaVoG>.
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/codeforraleigh/NCOpenPass/issues/693#issuecomment-331559950,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AAOGtQiOAhyp2_K7t4JfI6gUjJjfi-tOks5slCCSgaJpZM4PaVoG
.
--
Chris Mathews
Hey there, @reidserozi just called me. He is going to follow-up with Janelle about this.
Janelle - if you read this post: http://citycampnc.org/2017/09/21/2017-unconference-pitches/ it should clear some things up for you. You will basically need to pitch your topic starting at 11:25 am, then we'll have at least 4 rounds of 4 sessions running at the same time in the afternoon. I had assumed that you read the post and came from there when you submitted the topic.
On Fri, Sep 22, 2017 at 5:07 PM, Chris Mathews notifications@github.com
wrote:
Janelle
It will be an unconference session. Do you have 40 minutes of material.
On Fri, Sep 22, 2017 at 5:03 PM jlbaile1 notifications@github.com wrote:
Jason,
Looking at the schedule, I really don't see where there is a place for
the
Town of Cary's
Discussion regarding open data and developing Alexa skills in Friday's
program.
Apparently, I am too late to sign up for a lightning talk. Should I just
table it to the next event.
Please advise.
Janelle Bailey
Sent from my iPhone
On Sep 22, 2017, at 4:49 PM, Jason Hibbets <notifications@github.com
mailto:notifications@github.com> wrote:
Plan for 45 minutes. Note - we will not have a/v, but we've had people
bring their own in the past.
On Fri, Sep 22, 2017 at 4:47 PM, jlbaile1 <notifications@github.com
mailto:notifications@github.com> wrote:
I am good to be there all day on Friday. Would love the opportunity to
discuss Open Data and Alexa Skills. @jhibbets
https://github.com/jhibbets @ChrisTheDBA
https://github.com/christhedba Please let me know how long the
presentation should be.
Thanks.
--
Chris Mathews
Hi Jason,
Here's a possible session proposal!
Title: Next Gen of Coders
Name: Shah
Description: Come check what students are doing with Code. We have inspired more than 500 kids through various Coding programs. But we would like do more and spread the creativity of Code across Wake county. Please bring ideas on how to further engage with the community and organizations.
Love this idea Shah. Adding this to the website now. Remember to pitch this
on Friday!
Here's another proposal:
Title: Sustainable energy networks
Name: Tal Yifat
Description: Big data and the IoT will open amazing opportunities to optimize cities' energy and water networks and create efficiencies. Network elements that communicate and adjust to each other can save a lot of money and reduce environmental footprint. How can cities develop their energy infrastructures so that they can make the most out of such opportunities?
Sounds like a great pitch!
On Fri, Sep 29, 2017 at 10:00 AM, tal-yifat notifications@github.com
wrote:
Here's another proposal:
Title: Sustainable energy networks
Name: Tal Yifat
Description: Big data and the IoT will open amazing opportunities to
optimize cities' energy and water networks and create efficiencies. Network
elements that communicate and adjust to each other can save a lot of money
and reduce environmental footprint. How can cities develop their energy
infrastructures so that they can make the most out of such opportunities?
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/codeforraleigh/NCOpenPass/issues/693#issuecomment-333134197,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AQ6Ixn87nXKF74f8IvlmMLZlyJEd0EWZks5snPf0gaJpZM4PaVoG
.
--
Zach Ambrose
Final sessions are posted: http://citycampnc.org/2017/09/29/friday-september-29-unconference-schedule-aka-the-grid/
| gharchive/issue | 2017-09-18T00:38:53 | 2025-04-01T04:33:51.052150 | {
"authors": [
"ChrisTheDBA",
"jhibbets",
"jlbaile1",
"shah-kunal",
"tal-yifat",
"zachambrose"
],
"repo": "codeforraleigh/NCOpenPass",
"url": "https://github.com/codeforraleigh/NCOpenPass/issues/693",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
898416867 | Create Pipeline metrics.json
New dashboard with aggregated metrics for a pipeline (taking into consideration builds of the same pipeline)
@francisco-codefresh please update the file https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/templates/dashboards.yaml.gotmpl
Done. I added the requested changes
I don't have write access to the repo in codefresh-io. Can you please merge it?
@palson-cf @vadimgusev-codefresh
| gharchive/pull-request | 2021-05-21T20:00:12 | 2025-04-01T04:33:51.054633 | {
"authors": [
"francisco-codefresh",
"palson-cf"
],
"repo": "codefresh-io/runtime-cluster-monitor",
"url": "https://github.com/codefresh-io/runtime-cluster-monitor/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
386877070 | tried to allocate
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 20480 bytes) in /var/www/web.ru/system/Config/BaseService.php on line 111
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 20480 bytes) in /var/www/web.ru/system/Debug/Exceptions.php on line 195
OR
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 262144 bytes) in /var/www/web.ru/application/Config/Services.php on line 224
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 262144 bytes) in Unknown on line 0
The problem appears when cURL or file_get_contents() is called inside a function. I think it is related to the debug toolbar: when it tries to collect all the variable data for its report, somewhere there is a memory leak. Parsing a site, for example, is unrealistic. This appeared just a couple of months ago; before that, everything was fine. I'll attach sample code.
`<?php namespace App\Controllers;
use CodeIgniter\Controller;
class Update extends Controller {
/* @var \Config\Database $db */
public $db;
public $l2oops_url = array(
'https://l2oops.com/',
'https://l2oops.com/chronicle/lineage-2-interlude',
'https://l2oops.com/chronicle/lineage-2-interlude-s-dopolneniyami',
'https://l2oops.com/chronicle/lineage-2-high-five',
'https://l2oops.com/chronicle/lineage-2-gracia-epilogue',
'https://l2oops.com/chronicle/lineage-2-classic',
'https://l2oops.com/chronicle/lineage-2-goddess-of-destruction',
'https://l2oops.com/chronicle/lineage-2-gracia-final',
'https://l2oops.com/chronicle/lineage-2-goddess-of-destruction-lindvior',
'https://l2oops.com/chronicle/lineage-2-freya',
'https://l2oops.com/chronicle/lineage-2-ertheia',
'https://l2oops.com/chronicle/lineage-2-chronicle-c4',
'https://l2oops.com/chronicle/lineage-2-infinite-odyssey',
'https://l2oops.com/chronicle/lineage-2-helios',
'https://l2oops.com/rates/lineage-2-GVE',
'https://l2oops.com/rates/lineage-2-RVR',
);
public $temp_base_server_list = array();
public function index()
{
$this->db = \Config\Database::connect();
$n = new \Simple_html_dom();
unset($n);
if(is_array($this->l2oops_url)) {
foreach ($this->l2oops_url as $url) {
$html = file_get_html($url); // - Simple_html_dom
$ul_server = $html->find('ul.server');
unset($html);
foreach ($ul_server as $article) {
$item = null;
$item['name'] = @$article->find('li.server_name', 0)->plaintext;
if(empty($item['name']))
continue;
if(strripos($item['name'], "PTS") OR strripos($item['name'], "pts"))
$pts = "[PTS] ";
else
$pts = "";
$item['name'] = @trim(str_replace(array("pts", "[pts]"), "" , $item['name']));
$url_site = mb_strtolower(trim(preg_replace('/\\(.*?\\)|\\[.*?\\]/s','',$item['name'])));
$item['name'] = $pts . $item['name'];
$item['rates'] = trim(str_replace(array("x"), "" , $article->find('li.rates', 0)->plaintext));
$item['chronicle'] = trim($article->find('li.chronicles', 0)->plaintext);
$item['date'] = trim($this->pars_time($article->find('li.date', 0)->plaintext));
unset($article);
$this->temp_base_server_list[$url_site][$item['date']] = $item;
}
unset($ul_server);
}
}
}
public function pars_time($time_formated){
if(strripos($time_formated, " в ")){
$temp = explode(" в " , $time_formated);
$date = $temp[0];
$time = $temp[1];
$time = explode(":" , $time);
// placeholder: zero out the parsed time values
$time[0] = 0;
$time[1] = 0;
switch ($date){
case "завтра";
$date = mktime((int)$time[0], (int)$time[1], 0, (int)date("m") , (int)date("d")+1, (int)date("Y"));
break;
case "сегодня";
$date = mktime((int)$time[0], (int)$time[1], 0, (int)date("m") , (int)date("d"), (int)date("Y"));
break;
case "вчера";
$date = mktime((int)$time[0], (int)$time[1], 0, (int)date("m") , (int)date("d")-1, (int)date("Y"));
break;
default:
$date = explode("." , $date);
$date = mktime((int)$time[0], (int)$time[1], 0, (int)$date[1] , (int)$date[0], (int)$date[2]);
break;
}
}else{
switch ($time_formated){
case "Завтра";
$date = mktime((int)0, (int)0, 0, (int)date("m") , (int)date("d")+1, (int)date("Y"));
break;
case "Сегодня";
$date = mktime((int)0, (int)0, 0, (int)date("m") , (int)date("d"), (int)date("Y"));
break;
case "Вчера";
$date = mktime((int)0, (int)0, 0, (int)date("m") , (int)date("d")-1, (int)date("Y"));
break;
default:
$date = explode("." , $time_formated);
$date = mktime((int)0, (int)0, 0, (int)$date[1] , (int)$date[0], (int)$date[2]);
break;
}
}
return $date;
}
}`
CodeIgniter 4 version
Latest build
I increased it to 15 gigabytes of RAM
Please update your code - /var/www/web.ru/system/Debug/Exceptions.php line 195 is not a valid code line, so it looks like your code is outdated. It's hard to make any assumptions without knowing your version.
https://github.com/codeigniter4/CodeIgniter4/blob/v4.0.0-alpha.2/system/Debug/Exceptions.php
https://github.com/codeigniter4/CodeIgniter4/blob/v4.0.0-alpha.3/system/Debug/Exceptions.php
Latest dev build
`Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 20480 bytes) in /var/www/web.ru/system/Config/BaseService.php on line 111
Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 20480 bytes) in /var/www/web.ru/system/Debug/Exceptions.php on line 195`
https://github.com/codeigniter4/CodeIgniter4/blob/9ca2e656ec73974e76a18823de62e861a0017f86/system/Debug/Exceptions.php#L195
I understood what the problem was: I did not pass the parameters in Services when calling return static::getSharedInstance('Telegram', $token);
Can be closed
Glad you found the problem! Closing, then.
| gharchive/issue | 2018-12-03T15:43:08 | 2025-04-01T04:33:51.065662 | {
"authors": [
"demortx",
"lonnieezell",
"puschie286"
],
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/issues/1578",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1242068313 | Bug: get_cookie() cookie prefix behavior
PHP Version
7.3
CodeIgniter4 Version
4.1.9
CodeIgniter4 Installation Method
Composer (as dependency to an existing project)
Which operating systems have you tested for this bug?
macOS, Linux
Which server did you use?
apache
Database
MySQL 8
What happened?
The cookie prefixing to prevent collisions doesn't exactly work as expected. There is an issue with at least the get_cookie() helper function. The cookie prefix is specifically touted as a way to prevent collisions; however, this function doesn't solve for collisions.
function get_cookie($index, bool $xssClean = false)
{
$prefix = isset($_COOKIE[$index]) ? '' : config(App::class)->cookiePrefix;
$request = Services::request();
$filter = $xssClean ? FILTER_SANITIZE_FULL_SPECIAL_CHARS : FILTER_DEFAULT;
return $request->getCookie($prefix . $index, $filter);
}
This function looks to see if the base cookie without the prefix exists, and if it does not exist then it looks for the prefixed cookie. It should look for the prefixed cookie and reject the un-prefixed cookie entirely, otherwise it creates collisions.
Steps to Reproduce
If you take the following example:
Config/App.php
public $cookiePrefix = 'prefix_';
Theoretical array of cookies:
$_COOKIES = [
'prefix_test' => 'Right Value',
'test' => 'Wrong Value',
];
get_cookie('test');
returns 'Wrong Value' by default; ideally it should return 'Right Value', and even if 'prefix_test' is not set, it should return null and never 'Wrong Value'
Expected Output
returns 'Wrong Value' by default; ideally it should return 'Right Value', and even if 'prefix_test' is not set, it should return null and never 'Wrong Value'
Anything else?
This could be fairly easily fixed by just enforcing the prefix:
$prefix = config(App::class)->cookiePrefix ?? '';
However, I'm not sure if this should be enforced at a deeper stage of getting the cookie, and/or the function should be extended to allow for a prefix to be set independently if desired like:
function get_cookie($index, bool $xssClean = false, string $prefix = null)
{
$prefix = $prefix !== null ? $prefix : config(App::class)->cookiePrefix;
$request = Services::request();
$filter = $xssClean ? FILTER_SANITIZE_FULL_SPECIAL_CHARS : FILTER_DEFAULT;
return $request->getCookie($prefix . $index, $filter);
}
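The collision can be made concrete outside of PHP. Here is a language-agnostic sketch in Python (illustrative only, not CodeIgniter code) of the current fallback lookup versus a strict, always-prefixed lookup:

```python
PREFIX = "prefix_"
cookies = {
    "prefix_test": "Right Value",  # issued by the framework
    "test": "Wrong Value",         # set by some other app on the domain
}

def get_cookie_fallback(index):
    # Mirrors the current helper: use the bare name if it exists,
    # otherwise fall back to the prefixed name.
    key = index if index in cookies else PREFIX + index
    return cookies.get(key)

def get_cookie_strict(index, prefix=None):
    # Always apply the configured prefix; pass prefix="" explicitly
    # to opt out and read a foreign, unprefixed cookie.
    prefix = PREFIX if prefix is None else prefix
    return cookies.get(prefix + index)

print(get_cookie_fallback("test"))    # Wrong Value  <- collision
print(get_cookie_strict("test"))      # Right Value
print(get_cookie_strict("test", ""))  # Wrong Value (explicit opt-out)
```

The strict version never silently returns a cookie the framework did not issue, which is the whole point of the prefix.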
I don't know whether this is a bug or not.
I came from CI3:
https://github.com/bcit-ci/CodeIgniter/blob/87f9ac6ca2eb125f5991243865e7e5e96fd3987b/system/helpers/cookie_helper.php#L89-L93
In my understanding of the function, $index is assumed to be the prefixed cookie name because of this check:
$prefix = isset($_COOKIE[$index]) ? '' : config(App::class)->cookiePrefix;
If a cookie with the name $index (assumed prefixed) already exists in the $_COOKIE array, then there's no need to get the prefix and prepend that to the name.
If we set prefix in the config, all issued cookies by CI have the prefixed names.
This helper function gives you friendlier syntax to get browser cookies. Refer to the IncomingRequest Library for detailed description of its use, as this function acts very similarly to IncomingRequest::getCookie(), except it will also prepend the Config\Cookie::$prefix that you might’ve set in your app/Config/Cookie.php file.
https://codeigniter4.github.io/CodeIgniter4/helpers/cookie_helper.html#get_cookie
Unlike the Cookie Helper function get_cookie(), this method does NOT prepend your configured Config\Cookie::$prefix value.
https://codeigniter4.github.io/CodeIgniter4/incoming/incomingrequest.html#getGetPost
It seems the user guide says get_cookie() always prepend the prefix.
When the prefix is prefix_ and
$_COOKIES = [
'prefix_test' => 'CI cookie',
'test' => 'Non CI cookie',
];
get_cookie('test'); // Non CI cookie
get_cookie('prefix_test'); // CI cookie
Why do we need to get the test cookie? It is not issued by CI.
Why do we need to use prefix_test to get the CI cookie?
If there is no way to reliably only get the prefixed cookie and reject all others, then there isn't really any point to the entire cookie prefixing setup as it would not save collisions between 2 otherwise similar apps installed on the same domain, and if its not solving this problem, then what problem is it solving exactly?
@colethorsen Good question!
The current my answer is "I don't know".
I thought about this.
It seems the current get_cookie() wants to get all cookies, including ones set outside of CI.
If we change the behavior, it can't get test cookie:
$_COOKIES = [
'prefix_test' => 'CI cookie',
'test' => 'Non CI cookie',
];
If you want to get CI cookie surely, you must call with prefixed name like get_cookie('prefix_test').
Even if the current behavior is correct, the user guide explanation is not correct.
Probably we have two way to go:
Fix the user guide
Change this behavior
If you were to "fix the guide", then there would be basically no useful way to globally prefix all the cookies, and that entire functionality wouldn't really be relevant. The current IncomingRequest::getCookie() already allows users to get any cookie, prefixed or not. get_cookie() should work in tandem with set_cookie(), which prefixes, or they should both be adjusted to be direct shortcuts to IncomingRequest::getCookie(), with additional functions added that work with the prefix setup. Given that the incoming request cookie functionality exists, I don't really see a reason why get_cookie() doesn't apply the prefix, as that's how it's always been documented, and without it the entire prefixing setup as designed doesn't really work.
set_cookie($name[, $value = ''[, $expire = ''[, $domain = ''[, $path = '/'[, $prefix = ''[, $secure = false[, $httpOnly = false[, $sameSite = '']]]]]]]])
https://codeigniter4.github.io/userguide/helpers/cookie_helper.html#set_cookie
This function has the parameter $prefix.
This is not consistent either.
I certainly agree that the current get_cookie() behavior is strange.
The set_cookie works a bit differently if you start digging into it. There's basically an option within the Cookie object that will either use the prefix that is manually passed when initializing the cookie (set_cookie is basically a shortcut to doing this), or it will fall back to the default global prefix (or the one defined using the static function setDefaults).
I'm not overly sure of what real world benefits being able to set other prefixes has at the get_cookie and set_cookie level given their use cases, but at the Cookie object level, it allows for multiple different prefixes to be used in the same app, which there could probably be a use case for at some point.
@colethorsen I sent another PR: #6082
How about this?
Sure, this appears to fix it in more or less the same way, except it adds support for a prefix from another spot where the prefix could be set, while not addressing the second part of the cookie prefix problem.
I'm using #6024 in production and will continue to do so as this only addresses part of the issue with cookie prefix inconsistencies.
After testing it out more, your version has a major implementation drawback: it can't actually check for an unprefixed cookie if you want to, whereas #6024 can because of the way it's configured:
using:
function get_cookie($index, bool $xssClean = false, ?string $prefix = '')
and then checking to see if it's empty would always lead to a different prefix, whereas using:
function get_cookie($index, bool $xssClean = false, ?string $prefix = null)
and then checking for null would allow you explicitly check for an unprefixed cookie by using get_cookie('whole_cookie_name', true, '')
Thank you for looking into https://github.com/codeigniter4/CodeIgniter4/pull/6082.
But sorry, I don't get what you say.
I think it can check for all cookies, and it solves all the cookie prefix problems.
What's the second part of the cookie prefix problem?
it can't actually check for an unprefixed cookie if you wanted to
get_cookie($index, $xssClean, null) returns unprefixed cookie.
Sorry, you're right; it would solve it, just in the opposite manner, by passing null instead of an empty string.
The second part of the problem is that the session cookie sometimes adds the prefix and sometimes doesn't, and you end up with multiple cookies and/or multiple sessions. This is solved in #6024.
Thank you for making it clear.
Session cookie should not have the prefix, so there is a bug in Session.
I've found the Session cookie name bug. #6091
| gharchive/issue | 2022-05-19T17:07:36 | 2025-04-01T04:33:51.088418 | {
"authors": [
"colethorsen",
"kenjis",
"paulbalandan"
],
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/issues/6009",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2185180205 | Bug: Debug toolbar hrefs redirect to /#
PHP Version
7.4, 8.1
CodeIgniter4 Version
4.4.6
CodeIgniter4 Installation Method
Composer (using codeigniter4/appstarter)
Which operating systems have you tested for this bug?
Windows
Which server did you use?
apache
Database
MySQL 8
What happened?
I use the toolbar to debug queries and variables. It's important for me to have it open to specific sections, but when I click on items in the toolbar, instead of just toggling that section, it interprets the click as a regular link click, and since those links have #, it always takes me to root/#. While the toolbar does re-open to where I clicked, I have to go back to where I was in order to review stuff.
This started recently, although not sure since which version.
Steps to Reproduce
Toggle toolbar or Toggle a section inside the toolbar
Expected Output
Toggled section instead of link redirect
Anything else?
No response
Duplicate of #8594
This bug has already been fixed in develop branch.
Please try it.
If there is still something wrong, feel free to reopen this.
| gharchive/issue | 2024-03-14T01:04:57 | 2025-04-01T04:33:51.093512 | {
"authors": [
"kenjis",
"mihail-minkov"
],
"repo": "codeigniter4/CodeIgniter4",
"url": "https://github.com/codeigniter4/CodeIgniter4/issues/8622",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
28807956 | Sites it doesn't work on
I've got a running list of URLs that newspaper doesn't work phenomenally against. Is there an open issue to catalogue these? In most cases, it's able to grab the list of articles from the home page, but completely unable to decipher each individual article into readable values.
For example, this link gets basically nothing:
http://www.empireonline.com/news/story.asp?NID=40344
@iwasrobbed The issue you reported with this URL: http://techcrunch.com/2011/06/13/tech-giant-eats-your-lunch/
has been fixed in: https://github.com/codelucas/newspaper/pull/106
It will be pushed and deployed to the Python 2 and 3 branches ASAP
Nice work, @codelucas!
Side note: I'm working with some others on an open source scraper as well, but more based on the original Instapaper recipes for each domain where you just specify xpaths instead of using heuristics. We're using Newspaper as a secondary scraper when we don't have a recipe or certain data. If it ever helps, here is more info about implementation: https://assembly.com/saulify/bounties/30 and https://github.com/asm-products/saulify-web/pull/7/files
@codelucas http://blogs.wsj.com/moneybeat/2015/02/23/apple-is-now-more-than-double-the-size-of-exxon-and-everyone-else/
Craigslist entries cause errors.
Hi, it doesn't seem to work with these,
http://www.risingkashmir.com/news-archive/01-August-2016
http://www.risingkashmir.com/
Any suggestions will be helpful. Thanks.
| gharchive/issue | 2014-03-05T17:22:29 | 2025-04-01T04:33:51.105645 | {
"authors": [
"acondiff",
"bendavies",
"bmelton",
"codelucas",
"iwasrobbed",
"zairah10"
],
"repo": "codelucas/newspaper",
"url": "https://github.com/codelucas/newspaper/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
245606654 | Readme
[x] Installation Instructions
[x] Links to other documentation
[x] Update installation docs with new script changes.
I will leave this open. I think the changes on this issue will be ongoing.
| gharchive/issue | 2017-07-26T05:00:34 | 2025-04-01T04:33:51.172003 | {
"authors": [
"piq9117"
],
"repo": "codergvbrownsville/code-rgv-pwa",
"url": "https://github.com/codergvbrownsville/code-rgv-pwa/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Contact page does not have a favicon; also, no message is shown after submitting details.
I will show a message after the details are submitted, under that div, with the help of JavaScript.
I will also add a favicon to this page.
@codervivek5 Please assign me this task.
| gharchive/issue | 2024-06-10T19:13:33 | 2025-04-01T04:33:51.178070 | {
"authors": [
"zalabhavy"
],
"repo": "codervivek5/VigyBag",
"url": "https://github.com/codervivek5/VigyBag/issues/1340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2425574326 | The filter is not properly styled on the products page
The filter div is not properly styled on the products page in mobile view.
It is overlaying the product cards.
This issue occurs in all product categories.
It should be styled properly in mobile view.
https://github.com/user-attachments/assets/c641be87-6947-4a8d-8f23-42960f91246c
I am a GSSoC'24 contributor.
I want to work on this issue.
Please assign this issue to me under GSSoC'24.
@codervivek5 Please assign this issue to me under GSSoC'24
| gharchive/issue | 2024-07-23T16:03:36 | 2025-04-01T04:33:51.180700 | {
"authors": [
"vishanurag"
],
"repo": "codervivek5/VigyBag",
"url": "https://github.com/codervivek5/VigyBag/issues/1965",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
344249566 | Bugfix/front matter
Fix several problems in the converted front matter YAML.
add missing trailing slash in converted URLs
fix loss of date-time precision after conversion
fix converted front matter YAML format errors
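The first two fixes amount to small normalization steps when writing the front matter. As an illustration only (the helper names below are hypothetical, not the PR's actual code), in Python:

```python
from datetime import datetime

def ensure_trailing_slash(url):
    # "add missing trailing slash in converted url"
    return url if url.endswith("/") else url + "/"

def format_front_matter_date(dt):
    # "fix losing date time precision": keep the full time-of-day,
    # not just the date, when serializing to YAML front matter.
    return dt.strftime("%Y-%m-%dT%H:%M:%S")

print(ensure_trailing_slash("/2018/07/25/convert-to-hugo"))
# /2018/07/25/convert-to-hugo/
print(format_front_matter_date(datetime(2018, 7, 25, 0, 30, 50)))
# 2018-07-25T00:30:50
```

Normalizing both values before emitting the YAML avoids Hugo treating the same post as two different URLs or dropping the publish time.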
Cool!
| gharchive/pull-request | 2018-07-25T00:30:50 | 2025-04-01T04:33:51.182026 | {
"authors": [
"coderzh",
"networm"
],
"repo": "coderzh/ConvertToHugo",
"url": "https://github.com/coderzh/ConvertToHugo/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
427431895 | Improvise ActiveRecord queries
ActiveRecord query fixes
@prathamesh-sonpatki Thoughts?
@schneems Can this PR be merged?
| gharchive/pull-request | 2019-03-31T19:09:07 | 2025-04-01T04:33:51.232381 | {
"authors": [
"anmolarora"
],
"repo": "codetriage/codetriage",
"url": "https://github.com/codetriage/codetriage/pull/945",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
420467807 | Update chromedriver-helper to fix Travis build
The old setup led to errors of the form
This version of ChromeDriver only supports Chrome version 74
Coverage increased (+5.0e-05%) to 99.286% when pulling 6f4093ac3055b2a92282c810b45a8e6914b89d10 on tf:update-chrome-driver into 1f8c55e786f323b8afc8e52137bc56783dace3b7 on codevise:master.
| gharchive/pull-request | 2019-03-13T12:14:45 | 2025-04-01T04:33:51.234213 | {
"authors": [
"coveralls",
"tf"
],
"repo": "codevise/pageflow",
"url": "https://github.com/codevise/pageflow/pull/1140",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
792010276 | Announcement: new moderation features
Not all features are of interest to regular users, but some of them are worth announcing, so users know that certain issues can now be addressed:
[...]
Let mods hide solution group from the list (audited)
Let mods invalidate solution group without verifying (audited)
Let mods recalculate buggy beta votes (audited)
Let mods unpublish beta kata (audited)
Let mods remove rank assessments and votes (audited)
[...]
inform users that some issues can now be handled
ask users to report problematic instances, so mods could handle them
say "thanks" to maintainers who did the hard work of implementing these
Additional point: link to docs on moderation:
https://docs.codewars.com/community/moderation/
the result of codewars/docs#250
My initial idea behind this post was that #18 could be published before https://github.com/codewars/docs/issues/250 gets realized, leaving the community without details. But now, since #18 has not been published yet, and it already links to the result of codewars/docs#251, a separate post announcing the specific features might not be necessary anymore (until new ones are provided, that is).
| gharchive/issue | 2021-01-22T13:36:13 | 2025-04-01T04:33:51.240347 | {
"authors": [
"hobovsky"
],
"repo": "codewars/blog",
"url": "https://github.com/codewars/blog/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1871246635 | TRY-0002 - Manage EVENT.BUTTON at metaprovider
What type of Pull Request is it?
[X] Improvements
[ ] Bug
[ ] Docs / tests
Description
Meta Provider
Adds EVENT.BUTTON and handles button presses
It is part of this project.
Discord
Twitter
Youtube
Telegram
@ozzyoss77
| gharchive/pull-request | 2023-08-29T09:23:18 | 2025-04-01T04:33:51.253512 | {
"authors": [
"Trystan4861",
"leifermendez"
],
"repo": "codigoencasa/bot-whatsapp",
"url": "https://github.com/codigoencasa/bot-whatsapp/pull/830",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1879849617 | Eigenvectors c++
📑 Description
🐞 Related Issue
Closes #2343
📋 Explanation of changes
Calculation of eigenvectors using Eigen library for C++. The FindTheEigenvectorsOfAMatrix.h header file contains four simple functions.
Function: eigenvectors
Calculate the eigenvectors and return the values as MatrixXd type.
Function: eigenvectors_solver
Calculate the eigenvectors and return the values as EigenSolver type.
Function: eigenvectors_help
Explain what it is intended to do.
Function: eigenvectors_example
Gives an example.
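The header relies on Eigen's EigenSolver for the actual computation. As a library-free illustration of what an eigenvector is (a direction the matrix only scales), here is a minimal power-iteration sketch in Python; it is not the PR's C++ code:

```python
import math

def dominant_eigenvector(matrix, iterations=100):
    # Repeatedly apply the matrix and renormalize; the vector
    # converges to the eigenvector of the largest-magnitude eigenvalue.
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# [[2, 1], [1, 2]] has eigenvalues 3 and 1; the dominant
# eigenvector is proportional to (1, 1).
v = dominant_eigenvector([[2.0, 1.0], [1.0, 2.0]])
print([round(x, 4) for x in v])  # [0.7071, 0.7071]
```

Eigen's EigenSolver computes all eigenpairs at once (and handles complex results), which is why the PR reaches for the library instead of iterating by hand.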
Thank you @anandfresh and @isyuricunha for your review. Apologies for the delay. It is fixed.
I probably misunderstood what was needed, so I preferred to change the file extension from header .h to .cpp to contain the main function. I am using the Eigen library, which I downloaded from its main page. I am wondering how it is managed, as it will give a compilation error if it is not somehow installed. Thank you!
| gharchive/pull-request | 2023-09-04T09:12:39 | 2025-04-01T04:33:51.258244 | {
"authors": [
"cshdev110"
],
"repo": "codinasion/codinasion",
"url": "https://github.com/codinasion/codinasion/pull/4606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2279380297 | 🛑 server is down
In f0040eb, server ($API_ENDPOINT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: server is back up in 88fdd8a after 9 minutes.
| gharchive/issue | 2024-05-05T07:55:49 | 2025-04-01T04:33:51.346913 | {
"authors": [
"wasdee"
],
"repo": "codustry/gebwai_status",
"url": "https://github.com/codustry/gebwai_status/issues/387",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2334840049 | 🛑 server is down
In c5cdb4c, server ($API_ENDPOINT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: server is back up in 0a0d661 after 7 minutes.
| gharchive/issue | 2024-06-05T03:52:15 | 2025-04-01T04:33:51.348996 | {
"authors": [
"wasdee"
],
"repo": "codustry/gebwai_status",
"url": "https://github.com/codustry/gebwai_status/issues/405",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
332781986 | Update to latest CoffeeNet parent version 0.31.0
New CoffeeNet parent version 0.31.0 was released! Please check https://github.com/coffeenet/coffeenet-starter/blob/master/CHANGELOG.md for more information
Pull Request Test Coverage Report for Build 180
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 87.778%
Totals
Change from base Build 178:
0.0%
Covered Lines:
158
Relevant Lines:
180
💛 - Coveralls
| gharchive/pull-request | 2018-06-15T13:49:37 | 2025-04-01T04:33:51.354155 | {
"authors": [
"coffeenetrelease",
"coveralls"
],
"repo": "coffeenet/coffeenet-frontpage",
"url": "https://github.com/coffeenet/coffeenet-frontpage/pull/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1204560788 | Fix masking error for complex multipolygon cutlines with holes #393
Hi there!
In reference to #393
Testing
from urllib.request import urlopen
import json
from rio_tiler_pds.sentinel.aws import S2JP2Reader
from matplotlib.pyplot import imshow
import os
os.environ['AWS_REQUEST_PAYER'] = 'requester'
url = 'https://lambdatentor.s3.amazonaws.com/Polygon_with_holes.json'
response = urlopen(url)
data_json = json.loads(response.read())
feature = data_json['features'][0]
scene = 'S2A_L2A_20220315_20HKH_0'
with S2JP2Reader(scene) as sentinel:
img , mask = sentinel.feature(feature, expression="B08")
imshow(img[0])
url = "https://lambdatentor.s3.amazonaws.com/multipoligono.geojson"
response = urlopen(url)
data_json = json.loads(response.read())
feature = data_json['features'][0]
scene = 'S2B_L1C_20220406_20HQF_0'
with S2JP2Reader(scene) as sentinel:
img , mask = sentinel.feature(feature, expression="B08")
imshow(img[0])
Hugs!
Fernando
thanks a lot @Fernigithub, I did some cleanup and I'll merge once the CI ✅
You're welcome @vincentsarago! Great job! 💪
| gharchive/pull-request | 2022-04-14T13:54:49 | 2025-04-01T04:33:51.358971 | {
"authors": [
"Fernigithub",
"vincentsarago"
],
"repo": "cogeotiff/rio-tiler",
"url": "https://github.com/cogeotiff/rio-tiler/pull/493",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1053911561 | Remove duplicate bounds checking logic
Since the endpoint options are numeric values instead of pointers, they default to 0 when initializing the struct and an explicit value is not specified. This means that all parameters are included in each request instead of omitting empty ones, and it forces us to ensure that those default values will not produce a 400 response. In the future, we might want to transition to using pointers for our options structs so that omitempty can leave out fields which are not specified allowing us to remove the duplicate bounds checking logic.
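As an illustration of the tradeoff described above (the repository itself is Go, where the fix would be pointer fields such as `*int` with `omitempty` tags), here is a hedged sketch of the same idea in Python — the `max_tokens` field name is purely illustrative, not an actual SDK field:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Value-typed field: an unspecified numeric option defaults to 0 and is
# always serialized, so the server sees max_tokens=0 instead of "absent".
@dataclass
class OptsByValue:
    max_tokens: int = 0  # illustrative field name, not the real SDK's

# Optional field: None marks "not specified" and can be dropped before
# serialization -- the analogue of Go's *int plus the `omitempty` tag.
@dataclass
class OptsByPointer:
    max_tokens: Optional[int] = None

def to_json(opts) -> str:
    # Skip fields that were never set, mimicking `omitempty`.
    return json.dumps({k: v for k, v in asdict(opts).items() if v is not None})

print(to_json(OptsByValue()))    # {"max_tokens": 0} -- leaks a default
print(to_json(OptsByPointer()))  # {} -- field genuinely omitted
```

With value-typed options every request carries the zero defaults, so the bounds checks must tolerate them; with optional/pointer fields, unset parameters simply never reach the server.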
Hey thanks for the issue. We now auto-generate the SDKs so the codebase is different and this issue no longer applies.
| gharchive/issue | 2021-11-15T17:19:47 | 2025-04-01T04:33:51.377814 | {
"authors": [
"aar10n",
"billytrend-cohere"
],
"repo": "cohere-ai/cohere-go",
"url": "https://github.com/cohere-ai/cohere-go/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1441852270 | Bug: Request method wallet_watchAsset is not supported
Describe the bug
If the Coinbase extension is enabled I get this error; with only MetaMask, it works well.
Steps
Enable coinbase extension and metamask
just open any site and paste this piece of code in console
ethereum.request({
method: 'wallet_watchAsset',
params: {
type: 'ERC20',
options: {
address: '0xb60e8dd61c5d32be8058bb8eb970870f07233155',
symbol: 'FOO',
decimals: 18,
image: 'https://foo.io/token-image.svg',
},
},
})
you should receive error: Request method wallet_watchAsset is not supported
then disable coinbase and try again - should work
Expected behavior
should work as metamask
Version
No response
Additional info
No response
Desktop
No response
Smartphone
No response
Make sure you are calling ethereum.request({ method: 'eth_requestAccounts', params: [] }) first before making any RPC calls. I was able to reproduce the error you were seeing when in a disconnected state. After calling eth_requestAccounts, the wallet_watchAsset call should work.
I'm using wagmi.js, and before I called wallet_watchAsset I had made a transaction request, so of course the account was connected.
Here is a minimal sample for reproducing:
https://codesandbox.io/s/hidden-snow-f66okv?file=/src/App.jsx
Please note that if either the Coinbase or MetaMask extension is disabled, the import works well.
If both are enabled, I receive errors.
Gotcha, will take another look next week
require 'coinbase/wallet'
client = Coinbase::Wallet::Client.new(api_key: ,
api_secret: ,
CB_VERSION: 'YYYY-MM-DD')
payment_methods = client.payment_methods
puts payment_methods
BNB Wallet ...59c625
hi. sorry for the belated response.
i tried using both coinbase wallet and metamask installed on my browser, and it's working fine on my end.
please try our latest SDK version 3.6.0. feel free to open a new one if needed. thanks.
@vishnumad reopen pls I'm using wagmi.js and before i called wallet_watchAsset i had called transaction request and of course account was connected. Here minimal sample for reproducing https://codesandbox.io/s/hidden-snow-f66okv?file=/src/App.jsx pls note what if some of coinbase or metamask extensions disabled - import works well If both enabled - i receive errors
Hi @ArtemFokin, I think I have a nice solution for you!
If you are using wagmi, don't use window.ethereum. You can just use the watchAsset method of the connector, taken for instance from useAccount.
Example below:
const { connector } = useAccount()
const addToken = async () => {
await connector?.watchAsset?.({ address, symbol, decimals, image })
}
Cheers :)
| gharchive/issue | 2022-11-09T10:40:51 | 2025-04-01T04:33:51.418180 | {
"authors": [
"ArtemFokin",
"Maryo77",
"bangtoven",
"mwx27",
"vishnumad"
],
"repo": "coinbase/coinbase-wallet-sdk",
"url": "https://github.com/coinbase/coinbase-wallet-sdk/issues/721",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
284359746 | Cannot use withdraw API using crypto_address
Hi,
When I try to withdraw ETH from GDAX to another crypto_address using the withdraw API, it throws the error "opts must include param coinbase_account_id".
The gdax-node documentation describes it as below:
// Withdraw from your Exchange BTC account to another BTC address.
const withdrawAddressParams = {
'amount': 10.00,
'currency': 'BTC',
'crypto_address': '15USXR6S4DhSWVHUxXRCuTkD1SA6qAdy'
}
authedClient.withdrawCrypto(withdrawAddressParams, callback);
This throws an error saying the withdrawCrypto method doesn't exist. It looks like a typo in the documentation.
If I use the following code:
const withdrawAddressParams = {
'amount': 0.002,
'currency': 'ETH',
'crypto_address': '03x245276ad8c2747ec69a6005970769280d'
}
authedClient.withdraw(withdrawAddressParams, function(response){
console.log(response);
});
it throws the error "opts must include param coinbase_account_id"
Thanks
Aditya
The method definitely exists: https://github.com/coinbase/gdax-node/blob/17619ca073a64f20674a2c125d6cbe7a43b2d420/lib/clients/authenticated.js#L260-L263
Not sure what your error is, would be great if you could provide some more context.
Had this error too.
In my case, it's because I used npm install gdax-node.
In that package, withdrawCrypto is not available.
Download directly from GitHub (zip file) and the problem goes away.
| gharchive/issue | 2017-12-24T12:23:02 | 2025-04-01T04:33:51.422110 | {
"authors": [
"adityamertia",
"fb55",
"wannesdemaeght"
],
"repo": "coinbase/gdax-node",
"url": "https://github.com/coinbase/gdax-node/issues/176",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
563456746 | Add 10PercentStepRolloutNoCanary strategy
This adds a variation on the existing 25% rollout strategy with a 10% rate.
Review Error for sanjayprabhu @ 2020-02-11 20:28:53 UTC
User must have write permissions to review
| gharchive/pull-request | 2020-02-11T20:26:28 | 2025-04-01T04:33:51.425063 | {
"authors": [
"dustMason",
"heimdall-asguard"
],
"repo": "coinbase/odin",
"url": "https://github.com/coinbase/odin/pull/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
316987175 | why does the balance query take two currencies in the url?
The example url in the account balance documentation is this: https://webapi.coinfloor.co.uk:8090/bist/XBT/GBP/balance/
Why are there two currencies? Can I add more? Can I specify only one?
Also, the docs state that I need these 3 parameters:
User ID - user ID
API key - api key
Passphrase - passphrase
That's not correct, is it?
Why are there two currencies? Can I add more? Can I specify only one?
It's because BIST is an emulation of Bitstamp's v1 API. Bitstamp traded only a single currency pair at that time, so their API did not support arbitrary numbers of currencies. When we implemented BIST, we had to replicate Bitstamp's API at multiple base URIs, one for each currency pair traded on Coinfloor. We don't recommend using BIST, as it's inefficient compared to our native WebSocket-based API.
Also, the docs states that I need these 3 parameters: […] That's not correct, is it?
The docs are correct. What is your doubt?
The GetBalances method in the WebSocket API returns all of your balances across all assets. With the JavaScript client library you would call this method like Coinfloor.getBalances(console.log), replacing console.log with your own callback function to receive the balances. There is no plain HTTP API method to get all balances at once.
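To make the per-pair duplication concrete, here is a small sketch (my own illustrative helper, not part of any official Coinfloor client) of how BIST endpoint URLs compose from the base URI shown above:

```python
# BIST replicates Bitstamp's single-pair v1 API once per traded pair,
# so every endpoint is repeated under a pair-specific base URI.
BASE = "https://webapi.coinfloor.co.uk:8090/bist"

def bist_url(base_asset: str, quote_asset: str, endpoint: str) -> str:
    # The two "currencies" in the path are the trading pair, e.g. XBT/GBP.
    return f"{BASE}/{base_asset}/{quote_asset}/{endpoint}/"

print(bist_url("XBT", "GBP", "balance"))
# -> https://webapi.coinfloor.co.uk:8090/bist/XBT/GBP/balance/
```

Each pair traded on the exchange gets its own copy of every endpoint, which is why the URL carries two currencies and why there is no single all-balances call in BIST.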
| gharchive/issue | 2018-04-23T21:18:06 | 2025-04-01T04:33:51.428572 | {
"authors": [
"npomfret",
"whitslack"
],
"repo": "coinfloor/API",
"url": "https://github.com/coinfloor/API/issues/21",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1187459926 | 🛑 Coinsamba API is down
In 18fd647, Coinsamba API (https://api.coinsamba.com/health) was down:
HTTP code: 521
Response time: 189 ms
Resolved: Coinsamba API is back up in 9b5d5ea.
| gharchive/issue | 2022-03-31T03:56:03 | 2025-04-01T04:33:51.432330 | {
"authors": [
"itxtoledo"
],
"repo": "coinsambacom/upptime",
"url": "https://github.com/coinsambacom/upptime/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
972246875 | Can't load plugins in angular "Error: Could not load ffmpeg.wasm plugin"
Description
Can't load plugins in Angular - 'could not load'
Steps to reproduce
Follow install instructions to get angular project up and running. Then same for the plugin (eg ffmpeg-wasm) as in docs.
Results
Expected
Example working with record/transcode
Actual
Error thrown on deviceReady, recorder unresponsive.
Error output
The raw error output is
The inner exception, thrown away in videojs.record.js L1699 is
It seems like the _engineLoader is returning null for the ConvertEngineClass.
Please bear with me as I am a .net dev new to js and angular and the way packages work here. It's possible that I'm not properly including something, but it all seems to be there - if videojs were missing then I'd expect the player/recorder not to work (everything is fine if no plugins are used). The line in the plugin docs:
isn't 100% clear to me in an Angular project. I think it seems unnecessary in Angular? I also tried bundling the script in the angular.json but that didn't fix anything. In any case, the error message displayed is not very informative.
Additional Information
Please include any additional information necessary here. Including the following:
versions
videojs
what version of videojs does this occur with?
+-- video.js@7.14.3
+-- videojs-record@4.5.0
+-- @ffmpeg/core@0.10.0
+-- @ffmpeg/ffmpeg@0.10.1
browsers
what browser(s) are affected? Make sure to test with all third-party browser extensions disabled.
Chrome
Firefox
Edge
OSes
what platforms (operating systems and devices) are affected?
Android
Windows 10
duplicate of #567?
In my case the error occurs when you load the player, #567 seems to be fine until the actual conversion starts.
I think it seems unnecessary in Angular?
It's definitely needed to include that script.
By unnecessary I mean to say that I think in angular the files' inclusion gets handled by the import process. The code in question is literally just a vanilla angular project set up (for the repro)
with vjs-record as in the docs, plus the plugin installation instructions (the import and then enabling it in the player options as in ffmpeg-wasm docs)
As for including the script explicitly, can anyone tell me how that's done in an Angular template? I can't just drop it into the component html as far as I know.
What I have tried is the following in my angular.json
"scripts": [
{
"input": "node_modules/@ffmpeg/ffmpeg/dist/ffmpeg.min.js",
"inject": true,
"bundleName": "ffmpg"
},]
and then in index.html's head:
This appears to import the script through the whole project, but doesn't solve the issue. Project (minus node_modules) attached.
vid.zip
I also found an old closed ticket that seems to be the same issue on its surface: #507
| gharchive/issue | 2021-08-17T02:39:38 | 2025-04-01T04:33:51.481368 | {
"authors": [
"stoogebag",
"thijstriemstra"
],
"repo": "collab-project/videojs-record",
"url": "https://github.com/collab-project/videojs-record/issues/601",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1071750137 | Recording does not work on iOS 15
Description
Unable to record a video from a device on iOS 15. This is reliably reproducible when recording a video that has a length of about 1.5-2 minutes.
I believe the issue lies in the RecordRTC library, as I have tested it on webrtc-experiment and it doesn't work there either. I added some more logs at https://github.com/muaz-khan/RecordRTC/issues/782#issuecomment-986439163
If you guys have any ideas on how to use the library for iOS devices considering the caveats in RecordRTC I would appreciate to hear it. Please feel free to contact me if more info is needed, thanks!
Steps to reproduce
Set up this repo for local development
Update the maxLength in https://github.com/collab-project/videojs-record/blob/a36ba88e74ecadbf765c2bf191fb094e724a75b2/examples/audio-video.html to ~360 for testing more long-lasting videos
Host it locally via HTTPS (need to update the start script with https key: npm run build && webpack serve --config ./build-config/webpack.dev.main.js --https)
Proceed to the local network address of your PC from an iOS device
Try recording a video for about 1.5-2 min.
Results
Expected
The replay via online-player should be working and the file should not be 0 bytes in length.
Actual
It both can't be replayed in the online player or to be downloaded and played since the file is broken. Also, the error below is thrown.
Error output
"ERROR:" – "(CODE:4 MEDIA_ERR_SRC_NOT_SUPPORTED)" – "The media could not be loaded, either because the server or network failed or because the format is not supported."
Additional Information
versions
videojs
7.14.3
recordrtc
5.6.2
browsers
Safari 15
Chrome 96
OSes
iOS 15.1.1
Same issue here
For anyone looking, I was able to fix the issue by monkey-patching the bitrate of the video on iOS devices:
you save my day @chernodub
@chernodub this is not working for more than 2 minute videos in iphone 11 with safari 15+
@chernodub this is not working for more than 2 minute videos in iphone 11 with safari 15+
Probably you could try reducing the bitrate even more, but I'm not sure whether it'll work
will reducing bitrate increase the filesize?
will reducing bitrate increase the filesize?
It will decrease the filesize.
| gharchive/issue | 2021-12-06T05:36:34 | 2025-04-01T04:33:51.490600 | {
"authors": [
"chernodub",
"dosdemon",
"markitosgv",
"pradeepaanumalla",
"prdip"
],
"repo": "collab-project/videojs-record",
"url": "https://github.com/collab-project/videojs-record/issues/627",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2129018938 | Use default python 3.11 Docker and include github action for building cpu image?
Hi there,
I was wondering why you chose to use ubuntu:focal as the base for the CPU Dockerfile. Why not use the default python:3.11 images? These should be based on Debian and already include all the build dependencies Python needs. Besides, at the moment the Dockerfile runs setup.sh, but if I'm not mistaken that is only needed for the client, not for the server.
Finally, why not include a GitHub Actions workflow to build and push containers, at least for the CPU image, to GCR? We could also utilize QEMU to build for multiple target architectures like arm64 or amd64.
Just a couple of ideas I had, let me know what you think. I'm happy to work on these and provide a PR.
@stkr22 If you can open a PR, that would be great. Thanks!
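For what it's worth, a minimal sketch of such a workflow might look like the following — the workflow name, Dockerfile path, registry (GHCR rather than GCR here), and tags are all assumptions for illustration, not the project's actual pipeline:

```yaml
# .github/workflows/docker-cpu.yml (hypothetical)
name: build-cpu-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # QEMU + Buildx enable multi-arch builds (amd64/arm64).
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          file: docker/Dockerfile.cpu   # assumed path
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/${{ github.repository }}:cpu-latest
```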
| gharchive/issue | 2024-02-11T14:49:44 | 2025-04-01T04:33:51.499239 | {
"authors": [
"makaveli10",
"stkr22"
],
"repo": "collabora/WhisperLive",
"url": "https://github.com/collabora/WhisperLive/issues/140",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
12470541 | Fixed deprecation warnings on Rails 4
Hey! Good news, your gem still works on Rails 4 with MiniTest::Unit. There was only one slight issue: a deprecation warning about ActiveRecord::Fixtures class being deprecated. They recommend using FixtureSet instead and it works fine.
Ping.
@colszowka Ping.
| gharchive/pull-request | 2013-03-26T20:49:35 | 2025-04-01T04:33:51.537476 | {
"authors": [
"Trevoke",
"isabanin",
"rmeyerpagerduty"
],
"repo": "colszowka/transactionata",
"url": "https://github.com/colszowka/transactionata/pull/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2683057398 | [FEAT] Handle entity attributes
Hello,
To begin, thanks for your integration; before this I used MQTT, but it's easier with your integration. I have a small request: can you add attribute management? I want to do something like this:
- name: Notify Home Assistant of Playbook Start
  hosts: localhost
  tasks:
    - name: Notify HA webhook of playbook start
      uri:
        url: "http://your-home-assistant-instance/api/webhook/ansible_playbook_monitor_webhook"
        method: POST
        headers:
          Authorization: "Bearer YOUR_API_KEY" # Replace with your actual API key
          Content-Type: "application/json"
        body: '{"playbook": "deploy_app", "status": "started", "attributes": {"result": 10, "datetime": "XXXXXXX"}}'
        body_format: json
And then find my attributes on the entity's attributes. It can be useful to pass results like the number of pending updates on a system, for example.
Thanks in advance,
I've added attribute support which will be available in the next release.
| gharchive/issue | 2024-11-22T12:29:29 | 2025-04-01T04:33:51.539454 | {
"authors": [
"coltondick",
"zoic21"
],
"repo": "coltondick/ansible_playbook_monitor",
"url": "https://github.com/coltondick/ansible_playbook_monitor/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1839517561 | Update scalafmt-core to 3.7.12
About this PR
📦 Updates org.scalameta:scalafmt-core from 3.7.3 to 3.7.12
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Superseded by #97.
| gharchive/pull-request | 2023-08-07T13:59:13 | 2025-04-01T04:33:51.544484 | {
"authors": [
"scala-steward"
],
"repo": "com-lihaoyi/acyclic",
"url": "https://github.com/com-lihaoyi/acyclic/pull/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1487583858 | Update dependency commons-io:commons-io to v20030203
This PR contains the following updates:
Package: commons-io:commons-io (source)
Type: dependencies
Update: major
Change: 2.11.0 -> 20030203.000550
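For context on why this update is flagged as "major": commons-io's earliest releases used date-stamped version numbers, and 20030203.000550 (a February 2003 build) compares numerically far above 2.11.0, so a naive newest-version resolver ranks the 2003 artifact as the latest. A quick sketch of that comparison:

```python
# Naive dotted-numeric comparison, as a resolver without special-casing
# legacy date-stamped releases would perform.
def version_key(v: str):
    return tuple(int(part) for part in v.split("."))

# The 2003 date-stamped release "outranks" the real latest release.
assert version_key("20030203.000550") > version_key("2.11.0")
print(sorted(["2.11.0", "20030203.000550"], key=version_key))
# -> ['2.11.0', '20030203.000550']
```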
⚠ Dependency Lookup Warnings ⚠
Warnings were logged while processing this repo. Please check the Dependency Dashboard for more information.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
⚠ Artifact update problem
Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.
♻ Renovate will retry this branch, including artifacts, only when one of the following happens:
any of the package files in this branch needs updating, or
the branch becomes conflicted, or
you click the rebase/retry checkbox if found above, or
you rename this PR's title to start with "rebase!" to trigger it manually
The artifact failure details are included below:
File name: undefined
Command failed: ./gradlew updateLicenses
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java:59: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java:60: error: cannot find symbol
import org.apache.commons.io.output.CloseShieldOutputStream;
^
symbol: class CloseShieldOutputStream
location: package org.apache.commons.io.output
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/DirectoryFactory.java:31: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrCore.java:67: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/response/BinaryResponseWriter.java:32: error: cannot find symbol
import org.apache.commons.io.output.ByteArrayOutputStream;
^
symbol: class ByteArrayOutputStream
location: package org.apache.commons.io.output
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:46: error: cannot find symbol
import org.apache.commons.io.input.CloseShieldInputStream;
^
symbol: class CloseShieldInputStream
location: package org.apache.commons.io.input
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/admin/SolrInfoMBeanHandler.java:25: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java:145: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/FieldAnalysisRequestHandler.java:23: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/CatStream.java:32: error: cannot find symbol
import org.apache.commons.io.LineIterator;
^
symbol: class LineIterator
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/CatStream.java:61: error: cannot find symbol
private LineIterator currentFileLines;
^
symbol: class LineIterator
location: class CatStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/configsets/UploadConfigSetAPI.java:31: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/configsets/UploadConfigSetFileAPI.java:23: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/CSVLoaderBase.java:28: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java:39: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java:40: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java:27: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:59: error: cannot find symbol
import org.apache.commons.io.FilenameUtils;
^
symbol: class FilenameUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:60: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/DumpRequestHandler.java:28: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/DocumentAnalysisRequestHandler.java:31: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/tagger/XmlOffsetCorrector.java:31: error: cannot find symbol
import org.apache.commons.io.input.ClosedInputStream;
^
symbol: class ClosedInputStream
location: package org.apache.commons.io.input
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:26: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:27: error: cannot find symbol
import org.apache.commons.io.output.CloseShieldOutputStream;
^
symbol: class CloseShieldOutputStream
location: package org.apache.commons.io.output
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/StandardDirectoryFactory.java:28: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java:41: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/backup/repository/LocalFileSystemRepository.java:31: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/ConfigSetProperties.java:23: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessor.java:29: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/update/processor/HTMLStripFieldUpdateProcessorFactory.java:25: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/FileUtils.java:28: error: cannot find symbol
import org.apache.commons.io.FileExistsException;
^
symbol: class FileExistsException
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SafeXMLParsing.java:29: error: cannot find symbol
import org.apache.commons.io.input.CloseShieldInputStream;
^
symbol: class CloseShieldInputStream
location: package org.apache.commons.io.input
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:75: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/packagemanager/PackageUtils.java:33: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/packagemanager/RepositoryManager.java:38: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/packagemanager/DefaultPackageRepository.java:29: error: cannot find symbol
import org.apache.commons.io.FilenameUtils;
^
symbol: class FilenameUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/search/SolrReturnFields.java:31: error: cannot find symbol
import org.apache.commons.io.FilenameUtils;
^
symbol: class FilenameUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/schema/AbstractSpatialPrefixTreeFieldType.java:24: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/schema/CollationField.java:29: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/cloud/ZkCLI.java:43: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionConfigSetProcessor.java:22: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java:37: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java:32: error: package org.apache.commons.io.file does not exist
import org.apache.commons.io.file.PathUtils;
^
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/request/json/RequestUtil.java:26: error: cannot find symbol
import org.apache.commons.io.IOUtils;
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java:31: error: cannot find symbol
import org.apache.commons.io.output.ByteArrayOutputStream;
^
symbol: class ByteArrayOutputStream
location: package org.apache.commons.io.output
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java:1665: error: cannot find symbol
out = new CloseShieldOutputStream(out);
^
symbol: class CloseShieldOutputStream
location: class ReplicationHandler.DirectoryFileStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java:1800: error: cannot find symbol
IOUtils.closeQuietly(inputStream);
^
symbol: variable IOUtils
location: class ReplicationHandler.LocalFsFileStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java:553: error: cannot find symbol
IOUtils.toByteArray(
^
symbol: variable IOUtils
location: class CollectionsHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java:563: error: cannot find symbol
IOUtils.toByteArray(
^
symbol: variable IOUtils
location: class CollectionsHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/search/SolrReturnFields.java:581: error: cannot find symbol
if (FilenameUtils.wildcardMatch(name, s)) {
^
symbol: variable FilenameUtils
location: class SolrReturnFields
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java:429: error: cannot find symbol
IOUtils.closeQuietly(selector);
^
symbol: variable IOUtils
location: class OverseerTaskProcessor
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/DirectoryFactory.java:399: error: cannot find symbol
PathUtils.deleteDirectory(dirToRm);
^
symbol: variable PathUtils
location: class DirectoryFactory
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java:2006: error: cannot find symbol
org.apache.commons.io.IOUtils.closeQuietly(is);
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/response/BinaryResponseWriter.java:199: error: cannot find symbol
ByteArrayOutputStream out = new ByteArrayOutputStream();
^
symbol: class ByteArrayOutputStream
location: class BinaryResponseWriter
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/response/BinaryResponseWriter.java:199: error: cannot find symbol
ByteArrayOutputStream out = new ByteArrayOutputStream();
^
symbol: class ByteArrayOutputStream
location: class BinaryResponseWriter
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrCore.java:3287: error: cannot find symbol
PathUtils.deleteDirectory(desc.getInstanceDir());
^
symbol: variable PathUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrCore.java:3307: error: cannot find symbol
PathUtils.deleteDirectory(dataDir);
^
symbol: variable PathUtils
location: class SolrCore
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrCore.java:3318: error: cannot find symbol
PathUtils.deleteDirectory(cd.getInstanceDir());
^
symbol: variable PathUtils
location: class SolrCore
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:571: error: cannot find symbol
return new CloseShieldInputStream(inputStream);
^
symbol: class CloseShieldInputStream
location: class HttpRequestContentStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:634: error: cannot find symbol
org.apache.commons.io.IOUtils.toString(new PartContentStream(part).getReader());
^
symbol: class IOUtils
location: package org.apache.commons.io
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:741: error: cannot find symbol
in == null ? new CloseShieldInputStream(req.getInputStream()) : in);
^
symbol: class CloseShieldInputStream
location: class FormDataRequestParser
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/admin/SolrInfoMBeanHandler.java:66: error: cannot find symbol
String content = IOUtils.toString(body.getReader());
^
symbol: variable IOUtils
location: class SolrInfoMBeanHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/FieldAnalysisRequestHandler.java:159: error: cannot find symbol
value = IOUtils.toString(reader);
^
symbol: variable IOUtils
location: class FieldAnalysisRequestHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/FieldAnalysisRequestHandler.java:163: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class FieldAnalysisRequestHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/CatStream.java:194: error: cannot find symbol
new LineIterator(
^
symbol: class LineIterator
location: class CatStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/CatStream.java:199: error: cannot find symbol
currentFileLines = FileUtils.lineIterator(currentFilePath.absolutePath.toFile(), "UTF-8");
^
symbol: method lineIterator(File,String)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/configsets/UploadConfigSetAPI.java:98: error: cannot find symbol
configSetName, filePath, IOUtils.toByteArray(zis), true);
^
symbol: variable IOUtils
location: class UploadConfigSetAPI
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/configsets/UploadConfigSetFileAPI.java:85: error: cannot find symbol
configSetName, fixedSingleFilePath, IOUtils.toByteArray(inputStream), allowOverwrite);
^
symbol: variable IOUtils
location: class UploadConfigSetFileAPI
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/CSVLoaderBase.java:414: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class CSVLoaderBase
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java:121: error: cannot find symbol
final byte[] body = IOUtils.toByteArray(is);
^
symbol: variable IOUtils
location: class XMLLoader
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java:129: error: cannot find symbol
IOUtils.closeQuietly(is);
^
symbol: variable IOUtils
location: class XMLLoader
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/XMLLoader.java:141: error: cannot find symbol
IOUtils.closeQuietly(is);
^
symbol: variable IOUtils
location: class XMLLoader
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java:155: error: cannot find symbol
String body = IOUtils.toString(reader);
^
symbol: variable IOUtils
location: class SingleThreadedJsonLoader
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java:165: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class SingleThreadedJsonLoader
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchemaFactory.java:360: error: cannot find symbol
IOUtils.closeQuietly(nonManagedSchemaInputStream);
^
symbol: variable IOUtils
location: class ManagedIndexSchemaFactory
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:1177: error: cannot find symbol
Files.createTempDirectory("schema-designer-" + FilenameUtils.getName(configId));
^
symbol: variable FilenameUtils
location: class SchemaDesignerConfigSetHelper
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/designer/SchemaDesignerConfigSetHelper.java:1215: error: cannot find symbol
PathUtils.deleteDirectory(tmpDirectory);
^
symbol: variable PathUtils
location: class SchemaDesignerConfigSetHelper
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/DumpRequestHandler.java:97: error: cannot find symbol
stream.add("stream", IOUtils.toString(reader));
^
symbol: variable IOUtils
location: class DumpRequestHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/DocumentAnalysisRequestHandler.java:182: error: cannot find symbol
IOUtils.closeQuietly(is);
^
symbol: variable IOUtils
location: class DocumentAnalysisRequestHandler
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/handler/tagger/XmlOffsetCorrector.java:60: error: cannot find symbol
return ClosedInputStream.CLOSED_INPUT_STREAM;
^
symbol: variable ClosedInputStream
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:70: error: cannot find symbol
CloseShieldOutputStream.wrap(response.getOutputStream()), StandardCharsets.UTF_8);
^
symbol: variable CloseShieldOutputStream
location: class LoadAdminUiServlet
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:74: error: cannot find symbol
IOUtils.toString(in, StandardCharsets.UTF_8)
^
symbol: variable IOUtils
location: class LoadAdminUiServlet
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:78: error: cannot find symbol
IOUtils.closeQuietly(in);
^
symbol: variable IOUtils
location: class LoadAdminUiServlet
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/servlet/LoadAdminUiServlet.java:79: error: cannot find symbol
IOUtils.closeQuietly(out);
^
symbol: variable IOUtils
location: class LoadAdminUiServlet
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/StandardDirectoryFactory.java:92: error: cannot find symbol
PathUtils.deleteDirectory(dirPath);
^
symbol: variable PathUtils
location: class StandardDirectoryFactory
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java:233: error: cannot find symbol
byte[] buf = IOUtils.toByteArray(is);
^
symbol: variable IOUtils
location: class SolrXmlConfig
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/backup/repository/LocalFileSystemRepository.java:105: error: cannot find symbol
PathUtils.deleteDirectory(Path.of(path));
^
symbol: variable PathUtils
location: class LocalFileSystemRepository
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/ConfigSetProperties.java:69: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class ConfigSetProperties
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/core/ConfigSetProperties.java:87: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class ConfigSetProperties
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/update/processor/RegexpBoostProcessor.java:156: error: cannot find symbol
IOUtils.closeQuietly(reader);
^
symbol: variable IOUtils
location: class RegexpBoostProcessor
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/update/processor/HTMLStripFieldUpdateProcessorFactory.java:75: error: cannot find symbol
IOUtils.closeQuietly(in);
^
symbol: variable IOUtils
location: class HTMLStripFieldUpdateProcessorFactory
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/FileUtils.java:100: error: cannot find symbol
throw new FileExistsException(
^
symbol: class FileExistsException
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SafeXMLParsing.java:109: error: cannot find symbol
.parse(new CloseShieldInputStream(in), SYSTEMID_UNTRUSTED);
^
symbol: class CloseShieldInputStream
location: class SafeXMLParsing
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:1817: error: cannot find symbol
FileUtils.copyDirectoryToDirectory(confDir, coreInstanceDir);
^
symbol: method copyDirectoryToDirectory(File,File)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:1822: error: cannot find symbol
FileUtils.copyDirectory(configSetDir, new File(coreInstanceDir, "conf"));
^
symbol: method copyDirectory(File,File)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:1859: error: cannot find symbol
PathUtils.deleteDirectory(coreInstanceDir.toPath());
^
symbol: variable PathUtils
location: class CreateCoreTool
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:3137: error: cannot find symbol
FileUtils.copyDirectory(node1Dir, nodeNDir);
^
symbol: method copyDirectory(File,File)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:4462: error: cannot find symbol
FileUtils.writeStringToFile(
^
symbol: method writeStringToFile(File,String,Charset)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:4577: error: cannot find symbol
List<String> includeFileLines = FileUtils.readLines(includeFile, StandardCharsets.UTF_8);
^
symbol: method readLines(File,Charset)
location: class FileUtils
/Users/janhoy/.renovate/repos/github/cominvent/solr-playground/solr/core/src/java/org/apache/solr/util/SolrCLI.java:4619: error: cannot find symbol
FileUtils.writeLines(includeFile, StandardCharsets.UTF_8.name(), includeFileLines);
^
symbol: method writeLines(File,String,List<String>)
location: class FileUtils
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
100 errors
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':solr:core:compileJava'.
> Compilation failed; see the compiler error output for details.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 3s
| gharchive/pull-request | 2022-12-09T22:55:09 | 2025-04-01T04:33:51.567255 | {
"authors": [
"janhoy"
],
"repo": "cominvent/solr-playground",
"url": "https://github.com/cominvent/solr-playground/pull/62",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1043145956 | CFD state machine as of version 0.1.1
Note that this state machine should be evolved during the refactoring of the model, so we keep track of the changes.
Ok, merging it! In case we notice that something is not depicted the way the code actually behaves, we can always open another PR.
bors r+
bors r-
I decided to add a bit more information to make things a bit clearer.
Sorry for the complexity, but without that it would have been impossible to validate it 😅
Complex?!? All the information is present and clear, I'd say this reduces complexity immensely!
bors r+
| gharchive/pull-request | 2021-11-03T07:03:20 | 2025-04-01T04:33:51.573684 | {
"authors": [
"DeliciousHair",
"da-kami"
],
"repo": "comit-network/hermes",
"url": "https://github.com/comit-network/hermes/pull/469",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1239248116 | Lower thumbnail resolution & frequency and remove from camerad
These things are in camerad and terrible. They run in the frame loop and waste CPU. My proposal is this:
Every 15 frames, we have an I-Frame in the qcamera stream. These are about 5kb and qcamera quality. We could add the EncodeData packets for these to the logs instead (maybe every 120 frames aka 6 seconds, do we really need 5?), then change UploadQlogJpegs to decode them in ffmpeg.
Current thumbnails are about 10kb. These I-Frames also have the advantage of being a subset of the qcamera.
Is CPU really a concern if we knock it down to one per segment? We discussed this previously and I believe we concluded we have way more than we need and they are way bigger than they need to be.
It's not just CPU, it's sporadic CPU. I believe most of the time is spent in the downscale actually, so smaller wouldn't even help.
Now that we don't upload qcams by default on cell, we can't use the I-frames to show thumbnails before qcams upload, which we still want.
generate thumbnails in encoderd or loggerd (so loop of camerad isn't variable depending on if it's making a thumbnail or not)
reduce resolution and frequency. 1/8th and 2 per segment? currently it's 1/4th and 12 per segment
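As a rough sketch of what that last option saves (assuming ~10 kB per current thumbnail, JPEG size scaling roughly with pixel area, and 60-second segments — all assumptions, not measured values):

```python
# Back-of-the-envelope comparison of thumbnail data rates per segment.
# Assumptions (not measured): current thumbnails are ~10 kB at 1/4
# resolution, and JPEG size scales roughly with pixel area.

CURRENT_KB = 10.0  # assumed size of one 1/4-resolution thumbnail

def per_segment_kb(count, scale, base_scale=1/4, base_kb=CURRENT_KB):
    """Approximate thumbnail kB per segment for `count` thumbnails at `scale`."""
    area_ratio = (scale / base_scale) ** 2
    return count * base_kb * area_ratio

current = per_segment_kb(12, 1/4)   # 12 per segment at 1/4 res
proposed = per_segment_kb(2, 1/8)   # 2 per segment at 1/8 res
print(current, proposed, current / proposed)  # 120.0 5.0 24.0
```

Even with generous error bars on the assumed sizes, the combined change cuts thumbnail data on the order of 20x.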
| gharchive/issue | 2022-05-17T23:06:25 | 2025-04-01T04:33:51.578470 | {
"authors": [
"geohot",
"gregjhogan",
"sshane"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/issues/24570",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
875113923 | 2021 Lexus RX350 FW versions
Add values for 2021 Lexus ES350
What's your dongle ID?
You added the value to the Lexus RX. Are you sure you didn't want to add it to the LEXUS_ES_TSS2 section?
You added the value to the Lexus RX. Are you sure you didn't want to add it to the LEXUS_ES_TSS2 section?
It was actually for the RX, I don't know why I said ES.
I assume this is working on your RX? We'll still need your Dongle ID too.
Tested on 5e6b6cb332f2cf8e|2021-05-02--21-45-26
| gharchive/pull-request | 2021-05-04T05:23:26 | 2025-04-01T04:33:51.580834 | {
"authors": [
"adeebshihadeh",
"jmoratti",
"pd0wm"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/pull/20813",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
325069573 | Little endian support
From the current Tesla work. We needed little endian support since Tesla uses a mix of little and big endian. Should be useful for many other ports; European cars mostly use little endian as well.
Just for the sake of credits, this work is 80% Robert Cotran and 20% me (jeankalud).
@jeankalud we are writing a few CAN parser unit tests first. When done, probably for the release after the next, we'll merge the PR. Thanks!
I merged the little endian related stuff (so not the Tesla stuff) to our internal codebase, and it will ship with next release.
Thanks for the PR!
| gharchive/pull-request | 2018-05-21T21:43:56 | 2025-04-01T04:33:51.582310 | {
"authors": [
"jeankalud",
"pd0wm",
"rbiasini"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/pull/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Discuss: don't dispatch tbuffer in CameraBuf::acquire
I don't think we can call tbuffer_dispatch in CameraBuf::acquire. We still use this buffer (cur_rgb_buffer) elsewhere, such as in camera_process_frame(); dispatching it early may cause problems (maybe the UI will also hang).
@ZwX1616 am I right?
related PR: https://github.com/commaai/openpilot/pull/2613
This is going away in #2668.
| gharchive/pull-request | 2020-12-12T07:12:58 | 2025-04-01T04:33:51.584225 | {
"authors": [
"adeebshihadeh",
"deanlee"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/pull/2759",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2499046286 | latcontrol_torque: deprecate yaw-based curvature
Description
The use_steering_angle = False path of the lateral accel torque controller is broken, possibly by #33283. The breakage escaped notice because this path isn't used by any supported cars.
We can either:
Fix it, the most obvious fix being b6166fc6b2ba815e976778f35119f332ee561d73.
Deprecate it since it's unused and has no CI coverage, this PR.
[ ] Requires commaai/opendbc#1214
Verification
CI testing.
Route
Route: 04836f13759962ab/00000097--450580dbd3
Additional Info
I drove on a fixed version earlier today, trying to gain insight into problems with roll compensation. When we need to hold a straight line on a variably-banked rural two-lane highway, it actually performs better than the vehicle-module curvature, but it's too noisy to turn smoothly or handle freeway speeds in its current form.
I would be a little sad to deprecate it, but perhaps the core idea can return in a better form someday.
Traceback (most recent call last):
File "capnp/lib/capnp.pyx", line 1377, in capnp.lib.capnp._DynamicStructBuilder.__setattr__
File "capnp/lib/capnp.pyx", line 1370, in capnp.lib.capnp._DynamicStructBuilder._set
File "capnp/lib/capnp.pyx", line 829, in capnp.lib.capnp._setDynamicField
capnp.lib.capnp.KjException: Tried to set field: 'error' with a value of: '-0.14431941964088504' which is an unsupported type: '<class 'numpy.float64'>'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/openpilot/openpilot/system/manager/process.py", line 40, in launcher
mod.main()
File "/data/openpilot/selfdrive/controls/controlsd.py", line 852, in main
controls.controlsd_thread()
File "/data/openpilot/selfdrive/controls/controlsd.py", line 842, in controlsd_thread
self.step()
File "/data/openpilot/selfdrive/controls/controlsd.py", line 814, in step
CC, lac_log = self.state_control(CS)
^^^^^^^^^^^^^^^^^^^^^^
File "/data/openpilot/selfdrive/controls/controlsd.py", line 587, in state_control
actuators.steer, actuators.steeringAngleDeg, lac_log = self.LaC.update(CC.latActive, CS, self.VM, lp,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/openpilot/openpilot/selfdrive/controls/lib/latcontrol_torque.py", line 71, in update
pid_log.error = torque_from_setpoint - torque_from_measurement
^^^^^^^^^^^^^
File "capnp/lib/capnp.pyx", line 1379, in capnp.lib.capnp._DynamicStructBuilder.__setattr__
File "capnp/lib/capnp.pyx", line 267, in capnp.lib.capnp.KjException._to_python
AttributeError: 'NoneType' object has no attribute 'type'
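The KjException above is pycapnp rejecting a numpy scalar: struct builders only accept built-in Python numbers. A minimal sketch of the obvious fix (the actual patch in b6166fc may differ) is to cast before assignment:

```python
# Hypothetical helper illustrating the fix: pycapnp struct builders raise
# KjException for numpy scalar types such as numpy.float64, so cast to a
# built-in float first. In latcontrol_torque.py this would look like:
#   pid_log.error = float(torque_from_setpoint - torque_from_measurement)

def as_builtin_float(x):
    """Coerce any numeric scalar (including numpy.float64) to a plain float."""
    return float(x)

print(type(as_builtin_float(3)).__name__)  # float
```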
Turns out RAM_HD was using it, as a positional arg, and we're not currently running CI on dashcam cars.
Choices:
Fix yaw-based curvature, and regain CI coverage by running on dashcam cars again
Check to see if RAM_HD really benefits from this. Yaw-based curvature doesn't seem to work that well, in its current form, but neither does the 1.0 degrees of RAM_HD steering angle deadzone.
Holding in Draft to get opinions.
This PR was closed because of a git history rewrite.
Please read https://github.com/commaai/openpilot/issues/33399 for what to do with your fork and your PRs.
| gharchive/pull-request | 2024-08-31T21:03:34 | 2025-04-01T04:33:51.590200 | {
"authors": [
"jyoung8607",
"maxime-desroches"
],
"repo": "commaai/openpilot",
"url": "https://github.com/commaai/openpilot/pull/33417",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |