2227115058 | OLS-413: Make LLMResponse include page titles for referenced documents
Description
LLMResponse now contains referenced documents that are pairs of (docs_url, title). Title comes from the "title" element of metadata in embedding nodes, added in https://github.com/openshift/lightspeed-rag-content/pull/9, which needs to merge first.
RAG content is now based on the sentence-transformers/all-mpnet-base-v2 embedding model, which necessitated RAG_SIMILARITY_CUTOFF_L2 increase and changes to a few asserts about what documents are retrieved for what queries.
Type of change
[ ] Refactor
[x] New feature
[ ] Bug fix
[ ] CVE fix
[ ] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
Related Tickets & Documents
Related Issue #
Closes # https://issues.redhat.com/browse/OLS-413
Checklist before requesting a review
[x] I have performed a self-review of my code.
[x] PR has passed all pre-merge test jobs.
[ ] If it is a core feature, I have added thorough tests.
Testing
Please provide detailed steps to perform tests related to this code change.
How were the fix/results from this change verified? Please provide relevant screenshots or results.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 94.84%. Comparing base (2a75f81) to head (573f39c).
Report is 20 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #710 +/- ##
==========================================
- Coverage 95.25% 94.84% -0.42%
==========================================
Files 53 53
Lines 1895 1880 -15
==========================================
- Hits 1805 1783 -22
- Misses 90 97 +7
Files                                        Coverage Δ
ols/app/endpoints/ols.py                     100.00% <100.00%> (ø)
ols/app/models/models.py                     100.00% <100.00%> (ø)
ols/src/query_helpers/docs_summarizer.py     100.00% <100.00%> (ø)
ols/utils/token_handler.py                   100.00% <100.00%> (ø)
... and 1 file with indirect coverage changes
Response format change LGTM. Merging shouldn't break anything, except that the reference docs won't show up until the frontend is updated to match. I'll make that change after this PR merges.
please add an e2e test for the api behavior.
also we should not merge this until @kyoto can confirm it won't break the console
whose confirmation we now have https://github.com/openshift/lightspeed-service/pull/710#issuecomment-2041904910
is there no e2e test we can update that confirms the responses have the full RAG reference (url and title)?
if there isn't one already, then we need to introduce one.
I'll modify https://github.com/openshift/lightspeed-service/blob/25f8fd71fed5549a6440e9007736b0fc4456d292/tests/e2e/test_api.py#L402:L404 and the other places in the e2e test.
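A sketch of the shape such an e2e assertion could take (plain Python; the referenced_documents/docs_url/title field names and the sample payload are assumptions based on this PR's description, not the confirmed API):

```python
# Hypothetical validation helper mirroring what the e2e test could assert:
# every referenced document carries both a non-empty docs_url and a title.
# Field names are assumptions based on this PR's description.

def check_referenced_documents(response_json):
    """Return True if every referenced document has a non-empty URL and title."""
    docs = response_json.get("referenced_documents", [])
    if not docs:
        return False
    for doc in docs:
        if not doc.get("docs_url", "").startswith("https://"):
            return False
        if not doc.get("title"):
            return False
    return True

# Example payload mimicking a summarizer response (docs_url is a placeholder):
sample = {
    "response": "To scale a deployment, ...",
    "referenced_documents": [
        {
            "docs_url": "https://docs.example.com/scaling.html",
            "title": "Scaling deployments",
        }
    ],
}
```

The e2e test would run the same kind of check against the live /v1/query response instead of a canned dict.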
/test e2e-ols-cluster
/test e2e-ols-cluster
@bparees PTAL
@asamal4 This is the PR that introduces the sentence-transformers/all-mpnet-base-v2 embedding model, so I bumped RAG_SIMILARITY_CUTOFF_L2 and changed a few asserts about what documents are retrieved for what queries.
| gharchive/pull-request | 2024-04-05T05:59:08 | 2025-04-01T06:45:16.856288 | {
"authors": [
"codecov-commenter",
"kyoto",
"syedriko"
],
"repo": "openshift/lightspeed-service",
"url": "https://github.com/openshift/lightspeed-service/pull/710",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2186382662 | Add vendor/ folder in the docker build context
Blocks https://github.com/openshift/release/pull/49812
/override ci/prow/e2e-gcp-multi-operator-olm
/override ci/prow/e2e-gcp-multi-operator
/override ci/prow/ci-index-multiarch-tuning-operator-bundle
| gharchive/pull-request | 2024-03-14T13:25:06 | 2025-04-01T06:45:16.870352 | {
"authors": [
"aleskandro"
],
"repo": "openshift/multiarch-tuning-operator",
"url": "https://github.com/openshift/multiarch-tuning-operator/pull/60",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
664216588 | Added ODO_S2I_CONVERTED_DEVFILE env variable
To enable easy migration from s2i components to devfile components, the ODO_S2I_CONVERTED_DEVFILE environment variable is injected into the component during odo utils convert-to-devfile. The script behaves differently when this flag is enabled.
/approve
/lgtm
ping approvers - @kadel
/approve
| gharchive/pull-request | 2020-07-23T05:42:16 | 2025-04-01T06:45:16.871995 | {
"authors": [
"adisky",
"amitkrout",
"dharmit",
"kadel"
],
"repo": "openshift/odo-init-image",
"url": "https://github.com/openshift/odo-init-image/pull/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2427086232 | Support arm64 platform release jobs
/assign @rioliu-rh
The stable part is missing; you can make it an empty list,
e.g. https://github.com/openshift/release-tests/blob/master/_releases/ocp-4.x-test-jobs-amd64.json
because the job controller will check out the stable build as well.
Updated.
/lgtm
/approve
/retitle OCPQE-24207 add 4.16 test job definition for arm64
| gharchive/pull-request | 2024-07-24T09:42:49 | 2025-04-01T06:45:17.078871 | {
"authors": [
"rioliu-rh",
"wangke19"
],
"repo": "openshift/release-tests",
"url": "https://github.com/openshift/release-tests/pull/236",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2251262936 | OCM-5247 | fix: block HCP operator-roles with unmanaged policies acco…
https://issues.redhat.com/browse/OCM-5247
/lgtm
| gharchive/pull-request | 2024-04-18T18:03:04 | 2025-04-01T06:45:17.125918 | {
"authors": [
"chenz4027",
"robpblake"
],
"repo": "openshift/rosa",
"url": "https://github.com/openshift/rosa/pull/1947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1486082409 | with latest built operator images, error logs reported every 10min
Setup:
openshift 4.11
Error logs,
I1209 02:36:06.851773 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 02:36:06.858126 1 base_controller.go:72] Caches are synced for LoggingSyncer
I1209 02:36:06.858143 1 base_controller.go:109] Starting #1 worker of LoggingSyncer controller ...
E1209 02:46:06.773198 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:06.781812 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:06.794765 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:06.963972 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:07.163888 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:07.562933 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
E1209 02:46:07.763993 1 target_config_reconciler.go:338] {configmap my-custom-scheduler-config} failed with : could not get configuration configmap: configmap "secondary-scheduler-config" not found
These errors are reported even though the configmap secondary-scheduler-config is there. We don't use that config, though; we use my-custom-scheduler-config.
Some investigation:
this change was added by #71; it uses a hard-coded name, "secondary-scheduler-config".
As I described above, even when secondary-scheduler-config exists, the error is still reported, which means the API below is not working correctly: https://github.com/openshift/secondary-scheduler-operator/blob/c6fe67f922d6ffacfb65ec9952e2df3717ae5379/pkg/operator/target_config_reconciler.go#L186
I built a test image that replaces it with the API below, and it works as expected:
https://github.com/openshift/secondary-scheduler-operator/blob/c6fe67f922d6ffacfb65ec9952e2df3717ae5379/pkg/operator/target_config_reconciler.go#L172
Need help:
What's the expected behavior? Is my test image above the correct behavior?
Should we also remove this line https://github.com/openshift/secondary-scheduler-operator/blob/c6fe67f922d6ffacfb65ec9952e2df3717ae5379/pkg/operator/target_config_reconciler.go#L133,
as it is misleading? With the new code, the forced redeployment may not happen.
@libzhang this is a valid bug. Thank you for reporting
Hi @ingvagabund, thanks for taking this issue. I still have some questions and need your help to understand them:
What's the difference between the two APIs below (sharedInformerFactory and kubeClient)? From my testing, it looks like the first one does not work properly, as I described above:
required, err := c.sharedInformerFactory.Core().V1().ConfigMaps().Lister().ConfigMaps(secondaryScheduler.Namespace).Get()
and
required, err = c.kubeClient.CoreV1().ConfigMaps(secondaryScheduler.Namespace).Get()
Why does the sync happen every 10 minutes even when the configmap hasn't changed? Where does this 10-minute timer come from?
Should we also remove this line https://github.com/openshift/secondary-scheduler-operator/blob/c6fe67f922d6ffacfb65ec9952e2df3717ae5379/pkg/operator/target_config_reconciler.go#L133,
as it is misleading? With the new code, the forced redeployment may not happen.
The sharedInformerFactory allows reading the config map from a local cache (which is continuously updated as config maps are created/updated/deleted), avoiding direct access to the kube-apiserver through the kubeClient and thus reducing traffic.
| gharchive/issue | 2022-12-09T06:00:15 | 2025-04-01T06:45:17.136194 | {
"authors": [
"ingvagabund",
"libzhang"
],
"repo": "openshift/secondary-scheduler-operator",
"url": "https://github.com/openshift/secondary-scheduler-operator/issues/72",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2501405677 | Hasura3 Dagster deploy
What is it?
Just wanna track when we get to a continuous deployment, potentially even triggered off of Dagster dependencies.
The problem right now is that the introspection grabs everything and I manually curate which tables I want to show at the moment.
Oh maybe we can just use this issue
https://github.com/opensource-observer/oso/issues/1861
But it doesn't address the problem with hooking it into Dagster dependencies.
| gharchive/issue | 2024-09-02T18:00:39 | 2025-04-01T06:45:17.177101 | {
"authors": [
"ryscheng"
],
"repo": "opensource-observer/oso",
"url": "https://github.com/opensource-observer/oso/issues/2037",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1761714860 | 🛑 Dev is down
In 69bbf81, Dev (https://dev.opensourcepos.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Dev is back up in 0032e4d.
| gharchive/issue | 2023-06-17T08:42:14 | 2025-04-01T06:45:17.181813 | {
"authors": [
"jekkos"
],
"repo": "opensourcepos/upptime",
"url": "https://github.com/opensourcepos/upptime/issues/630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1087496726 | Quest v0.2 / G6PD : View G6PD Status on the Patient List view
IMPORTANT: Where possible all PRs must be linked to a Github issue
Fixes #777
Type
Choose one: (Bug fix | Feature | Documentation | Testing | Code health | Release | Other)
Checklist
[x] I have written Unit tests for any new feature(s) and edge cases for bug fixes
[x] I have added any strings visible on UI components to the strings.xml file
[x] I have updated the CHANGELOG.md file for any notable changes to the codebase
[x] I have run ./gradlew spotlessApply and ./gradlew spotlessCheck to check my code follows the project's style guide
[x] I have built and run the fhircore app to verify my change fixes the issue and/or does not break the app
I just read the issue; this was incorrectly described there. I will comment there.
| gharchive/pull-request | 2021-12-23T08:38:35 | 2025-04-01T06:45:17.186094 | {
"authors": [
"owais-vd",
"pld"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/pull/899",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
578521371 | [FreeBSD] ld: error: ./libssl.so: undefined reference to secure_getenv
The build is OK, but there is a link error :(
./config --prefix=/usr --openssldir=/etc/pki/tls enable-ec_nistp_64_gcc_128 --system-ciphers-file=/etc/crypto-policies/back-ends/openssl.config zlib enable-camellia enable-seed enable-rfc3779 enable-sctp enable-cms enable-md2 enable-rc5 enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers no-mdc2 no-ec2m no-sm2 no-sm4 shared -Wa,--noexecstack -DPURIFY -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -fstack-protector-strong
Operating system: amd64-whatever-freebsd
...
ld: error: ./libssl.so: undefined reference to secure_getenv
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** Error code 1
Stop.
make[1]: stopped in /usr/home/zoujiaqing/rpmbuild/BUILD/openssl-1.1.1d
*** Error code 1
The code does pay attention to GLIBC versions, as you can see for yourself:
https://github.com/openssl/openssl/blob/99a16e0459e5089c2cfb92ee775f1221a51b8d05/crypto/getenv.c#L19-L24
It's possible, though, that we need to guard it a little more and check that __GNUC__ is defined as well. @bernd-edlinger, you are knowledgeable in this area, can you say a word or two?
Thanks @levitte !
But FreeBSD does not have glibc :(
You are using gcc, right?
Can you try this "gcc -g3 -x c -E /dev/null |grep GLIBC"
what does it output?
I used clang9 ..
Oh, and why are you using enable-ec_nistp_64_gcc_128 ?
It looks like FreeBSD ports passes this argument:
https://svnweb.freebsd.org/ports/head/security/openssl/Makefile?revision=521932&view=markup#l107
Do I need to remove this argument?
I will try removing it tomorrow.
No, I meant that as a joke.
If you were using gcc, I would suggest you configure with -g3 -save-temps and look at what defines __GLIBC_PREREQ in getenv.i.
But I don't know enough clang to tell if it supports the same debug options as gcc....
Thank you @bernd-edlinger :)
I will go to try at tomorrow. see you !
I can't resolve it.
I looked at FreeBSD's patch files and found nothing about __GLIBC_PREREQ.
https://svnweb.freebsd.org/ports/head/security/openssl/files/
Clearly it is wrong when __GLIBC_PREREQ is defined when this is not a glibc.
You need to look at the preprocessor output (add -dD -save-temps to your configure flags)
It did not generate *.i files:
ls -lh openssl-1.1.1d/crypto/getenv.*
-rw-r--r-- 1 zoujiaqing wheel 728B Sep 10 2019 openssl-1.1.1d/crypto/getenv.c
-rw-r--r-- 1 zoujiaqing wheel 504B Mar 11 19:44 openssl-1.1.1d/crypto/getenv.d
-rw-r--r-- 1 zoujiaqing wheel 2.8K Mar 11 19:44 openssl-1.1.1d/crypto/getenv.o
https://github.com/openssl/openssl/blob/99a16e0459e5089c2cfb92ee775f1221a51b8d05/ssl/ssl_ciph.c#L1419-L1420
Maybe try gcc then; at least it is able to output preprocessor files?
I'm confused about the reference to patches in FreeBSD ports, when the stated build path includes rpmbuild/BUILD/openssl-1.1.1d, which is not a path used internally in the FreeBSD ports collection.
I'm also forced to surmise that there are some additional patches in play, based on the --system-ciphers-file argument given to config, which is not recognized by openssl internally and, when passed through to the FreeBSD cc as is expected for options to openssl config, causes a fatal error early in the build. Please tell us more about where this source tree came from.
Yep, this looks like Fedora/RHEL patched sources being used. This is certainly something I would not try compiling on FreeBSD. Closing the issue.
| gharchive/issue | 2020-03-10T11:45:01 | 2025-04-01T06:45:17.195719 | {
"authors": [
"bernd-edlinger",
"kaduk",
"levitte",
"t8m",
"zoujiaqing"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/11295",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
638155995 | OpenSSL 1.0.2u allows the client to present certificates using 512-bit RSA keys.
Affected Versions
1.0.2u
Setup
OpenSSL has been compiled from source in an alpine docker container. The following excerpt of the used docker file shows the build related commands:
RUN ./config --prefix=/build/ --openssldir=/build/ no-async
RUN make -s && make install_sw -s
After building it the respective OpenSSL version and required libraries have been copied into an minimal docker container (from scratch).
The server is later started using the following command:
openssl s_server -accept 4433 -Verify 100 -cert /cert/inputCerts/rootv3.pem -key /cert/keys/rootv3.pem -CAfile /cert/inputCerts/root.pem -verify_return_error
All files mentioned in this report (certificates, keys, etc.) have been included in the attached zip archive.
Issue
OpenSSL 1.0.2u allows the client to present a certificate with a 512 bit RSA key. Keys of this length are considered insecure and shouldn't be allowed by default.
Reproduction
Uses OpenSSL 1.1.1 on Ubuntu 18.04.
Connect to the started server using the following command:
openssl s_client -connect localhost:4433 -cert ROOTv3_CAv3_LEAF_RSAv3_weakKey__leaf_certificate1.pem -key rsakey_weak512.pem -CAfile ROOTv3_CAv3_LEAF_RSAv3_weakKey__ca_certificate1.pem -cipher "DEFAULT@SECLEVEL=0" -tls1_1
Expected Result
OpenSSL should reject the certificate due to its weak key.
Actual Result
OpenSSL happily accepts the certificate and proceeds with the handshake.
Attachment
opensslWeakKey.zip
You state :
Affected Versions :
1.0.2u
...
Reproduction
Uses OpenSSL 1.1.1 on Ubuntu 18.04.
A writing mistake ?
No. I meant to say I use OpenSSL 1.1.1 for the client while 1.0.2u is the server.
It's very unlikely that the 1.0.2 behaviour will be changed. The security level support only got added in 1.1.0.
Since this is 1.0.2 it won't be fixed. Closing.
| gharchive/issue | 2020-06-13T12:44:30 | 2025-04-01T06:45:17.202959 | {
"authors": [
"FdaSilvaYY",
"Immortalem",
"kroeckx",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/12133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
761211128 | Mechanism to define custom propq in openssl apps
The app_get0_propq (from apps/lib/apps.c) is used throughout many openssl apps to define propq used to fetch the algorithms. Currently, it just returns NULL and its comment says "TODO(3.0): Make this an environment variable if required".
If a provider is used that does not implement all algorithms (e.g. reuses STORE from the default one) the propq needs to be defined to fetch the right implementation. (One such example has been discussed in #13539.) Therefore, the current app_get0_propq implementation is insufficient.
What should be the right way for a user to specify the propq when executing an openssl command? I'd prefer either a command line argument or (if no more args can be added) the config file, so that all configuration parameters are on a single place. Having providers defined as command line args (-provider) and then propq as an environment variable feels inconsistent.
doc/man5/config.pod says this:
EVP Configuration
The name alg_section in the initialization section names the section
containing algorithmic properties when using the EVP API.
Within the algorithm properties section, the following names have
meaning:
default_properties
The value may be anything that is acceptable as a property query
string for EVP_set_default_properties().
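Concretely, a minimal configuration using this mechanism might look like the sketch below. Only alg_section and default_properties are documented keys; the section names openssl_init and evp_properties are arbitrary labels, and fips=no is just one example of a valid property query:

```ini
# minimal openssl.cnf sketch
openssl_conf = openssl_init

[openssl_init]
alg_section = evp_properties

[evp_properties]
default_properties = fips=no
```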
Does that answer your question?
Oh, yes. Thank you! In that case the app_get0_propq needs to be updated to read this from the config. I will (try to) propose a PR.
IMO those two should be something different. There should be a default property query in the config file that is read when libcrypto is loaded and initialized. And then there should be an additional way to specify a non-default propq, in a similar way to how -provider can be specified for the applications. So yes, there should be a -propquery option that would allow you to override the default propquery from the config file.
In that case the app_get0_propq needs to be updated to read this from the config
The default propq from the config file should already be being used, so no changes should be needed to support it. Note that the implicit default propq is different to any explicit propq that is provided during individual fetches (which is what app_get0_propq is for). The default propq and any explicit propq are merged during the fetch.
IMO those two should be something different.
This is my view too. I think a "-propquery" option would be ideal.
Agreed, a -propquery command line option seems like the way forward. What about -prop-query ?
Either is fine by me.
This won't block beta1 but if a PR is created, it would likely get in.
| gharchive/issue | 2020-12-10T12:45:40 | 2025-04-01T06:45:17.208595 | {
"authors": [
"gotthardp",
"levitte",
"mattcaswell",
"paulidale",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/13656",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
383780063 | DTLS 1.2 strictly support
Hello,
How must OpenSSL be configured in order to strictly support DTLS 1.2,
i.e., not include DTLS 1.0 ciphers?
I just configure the SSL_CTX using the DTLSv1_2_server_method and
DTLSv1_2_client_method respectively.
But analyzing the capture in Wireshark I see that DTLS 1.0
ciphers are included in the ClientHello as well.
Is this right?
Moreover, how can we configure OpenSSL to fall back to a lower
version if the server does not support 1.2?
Thank you
George
Moreover, how can we configure OpenSSL to fall back to a lower version if the server does not support 1.2?
Use DTLS_server_method and DTLS_client_method instead. These are the preferred "version flexible" methods that will negotiate the highest version available on both client and server.
How must OpenSSL be configured in order to strictly support DTLS 1.2, i.e., not include DTLS 1.0 ciphers?
Probably you don't really want to do this. If you only configure DTLSv1.2 ciphers then you lose all the benefits of version flexibility since DTLSv1.0 will not work.
Nevertheless, the ciphersuites can be configured using SSL_CTX_set_cipher_list or SSL_set_cipher_list:
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_cipher_list.html
Thank you for your answer
| gharchive/issue | 2018-11-23T10:55:12 | 2025-04-01T06:45:17.213249 | {
"authors": [
"getsoubl",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/7694",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
427200276 | EC_GROUP_cmp implementation vs documentation
EC_GROUP_cmp() behaviour
Consider the following code snippet:
int nid, rv;
EC_GROUP *group1 = NULL, *group2 = NULL;
ECPARAMETERS *ecparameters = NULL;
BN_CTX *ctx = BN_CTX_new();
group1 = EC_GROUP_new_by_curve_name(nid);
ecparameters = EC_GROUP_get_ecparameters(group1, NULL);
group2 = EC_GROUP_new_from_ecparameters(ecparameters);
rv = EC_GROUP_cmp(group1, group2, NULL);
rv is going to be 0 for most curves, but not for all, e.g. the nistz256 case @slontis mentioned (and in general whenever there is a specialized EC_METHOD for a curve).
It fails to always give the same result because internally EC_GROUP_cmp(a,b,ctx), after matching all the other parameters, does EC_POINT_cmp(a, Ga, Gb), where Ga and Gb are the generator EC_POINTs of the two groups; if a and b have different EC_METHODs, that fails with EC_R_INCOMPATIBLE_OBJECTS.
Another example
The same exact behavior is observed if an application tries to do something similar to the following, inspired by #2161.
Imagine, for a given curve, generating a private key through the command line tools to obtain the following:
$curve.priv_named.pem (produced without fancy options)
$curve.priv_explicit.pem (derived from the first file, but encoding explicit parameters instead of the curve OID)
$curve.pub_named.pem (derived from the first file, including only the public key, no fancy options)
$curve.pub_explicit.pem (derived from the first file like the one above, but encoding explicit parameters)
Then an external testapp that takes as arguments a private and a public PEM file and tries to verify if they form a valid keypair through the following function:
static
int verify_ec_keypair(BIO *bio_priv, BIO *bio_pub, BN_CTX *ctx)
{
    int rv, ret = -1;
    EC_KEY *privk = NULL, *pubk = NULL;
    const EC_GROUP *g1 = NULL, *g2 = NULL;
    const EC_POINT *p1 = NULL, *p2 = NULL;
    BIGNUM *x1 = NULL, *y1 = NULL;
    BIGNUM *x2 = NULL, *y2 = NULL;

    /* start the BN_CTX frame up front so the BN_CTX_end() in the cleanup
     * path is always balanced, even when an early check fails */
    BN_CTX_start(ctx);

    if (NULL == (privk = PEM_read_bio_ECPrivateKey(bio_priv, NULL, NULL, NULL)))
        goto end;
    if (NULL == (pubk = PEM_read_bio_EC_PUBKEY(bio_pub, NULL, NULL, NULL)))
        goto end;

    if (!EC_KEY_check_key(privk))
        goto end;
    if (!EC_KEY_check_key(pubk))
        goto end;

    if (NULL == (g1 = EC_KEY_get0_group(privk)))
        goto end;
    if (NULL == (p1 = EC_KEY_get0_public_key(privk)))
        goto end;
    if (NULL == (g2 = EC_KEY_get0_group(pubk)))
        goto end;
    if (NULL == (p2 = EC_KEY_get0_public_key(pubk)))
        goto end;

    x1 = BN_CTX_get(ctx);
    y1 = BN_CTX_get(ctx);
    x2 = BN_CTX_get(ctx);
    y2 = BN_CTX_get(ctx);
    if (y2 == NULL)   /* only the last BN_CTX_get() needs checking */
        goto end;

    if (!EC_POINT_get_affine_coordinates(g1, p1, x1, y1, ctx))
        goto end;
    if (!EC_POINT_get_affine_coordinates(g2, p2, x2, y2, ctx))
        goto end;

    if (BN_cmp(x1, x2) != 0 || BN_cmp(y1, y2) != 0) {
        fprintf(stderr, "\tpublic keys do not match!\n");
        goto end;
    } else {
        fprintf(stderr, "\tpublic keys coordinates do match!\n");
    }

    rv = EC_GROUP_cmp(g1, g2, ctx);
    if (rv < 0)
        goto end;
    fprintf(stderr, "\tgroups %s\n", (rv == 0 ? "match" : "don't match"));
    ERR_print_errors_fp(stderr); /* print error stack after EC_GROUP_cmp() */

    /* groups of the private key and public key should represent the same curve */
    if (rv != 0)
        goto end;

    ret = 1;

 end:
    ERR_print_errors_fp(stderr);
    BN_CTX_end(ctx);
    EC_KEY_free(pubk);
    EC_KEY_free(privk);
    return ret;
}
Using different curves this process leads to different results:
./testapp secp112r1.priv_named.pem secp112r1.pub_named.pem
public keys coordinates do match!
groups match
PASS!
./testapp secp112r1.priv_expl.pem secp112r1.pub_expl.pem
public keys coordinates do match!
groups match
PASS!
./testapp secp112r1.priv_named.pem secp112r1.pub_expl.pem || true
public keys coordinates do match!
groups match
PASS!
./testapp secp112r1.priv_expl.pem secp112r1.pub_named.pem || true
public keys coordinates do match!
groups match
PASS!
./testapp prime256v1.priv_named.pem prime256v1.pub_named.pem
public keys coordinates do match!
groups match
PASS!
./testapp prime256v1.priv_expl.pem prime256v1.pub_expl.pem
public keys coordinates do match!
groups match
PASS!
./testapp prime256v1.priv_named.pem prime256v1.pub_expl.pem || true
public keys coordinates do match!
groups don't match
140395975592832:error:10071065:elliptic curve routines:EC_POINT_cmp:incompatible objects:../crypto/ec/ec_lib.c:861:
FAIL!
./testapp prime256v1.priv_expl.pem prime256v1.pub_named.pem || true
public keys coordinates do match!
groups don't match
140125474028416:error:10071065:elliptic curve routines:EC_POINT_cmp:incompatible objects:../crypto/ec/ec_lib.c:861:
FAIL!
I tested this on 1.0.2a, 1.0.2-stable, 1.1.0-stable, 1.1.1-stable and master, and at least the behavior is consistent across all the versions.
What the documentation states
EC_GROUP_cmp() documentation, that did not change since 1.0.2, states:
=head1 SYNOPSIS
int EC_GROUP_cmp(const EC_GROUP *a, const EC_GROUP *b, BN_CTX *ctx);
=head1 DESCRIPTION
EC_GROUP_cmp compares B<a> and B<b> to determine whether they represent the same curve or not.
=head1 RETURN VALUES
EC_GROUP_cmp returns 0 if the curves are equal, 1 if they are not equal, or -1 on error.
Description open to interpretation
My interpretation of its description is that EC_GROUP_cmp() should not care about the internal EC_METHOD, and should report equality if the two groups represent the same curve (e.g., an EC_GROUP derived from EC_GFp_nistz256_method() represents the same curve as one derived from EC_GFp_nistp256_method(), or as one derived from EC_GFp_simple_method()/EC_GFp_mont_method()/EC_GFp_nist_method() with the NIST P-256 parameters).
I could of course be mistaken in my interpretation, but the fact that EC_GROUP_cmp() evidently avoids comparing a->meth and b->meth directly, and that it only happens indirectly through EC_POINT_cmp() after checking everything else, suggests to me that at least when it was first implemented the intent was to compare the curves represented by the two groups and not the implementation details.
This is exactly why I wanted to iterate the tests on all the curves.
We have a few concurrent issues I believe:
different named curves could share the same set of parameters (aliases)
a given curve, even if it has no aliases, could have a specialized EC_METHOD used when the EC_GROUP is created with EC_GROUP_new_by_curve_name() or with EC_GROUP_new_from_ecpkparameters()
EC_GROUP_cmp() is tricky:
The problem
Since 1.0.2b (from 2015) EC_GROUP_cmp() has been throwing EC_R_INCOMPATIBLE_OBJECTS from the inner EC_POINT_cmp() when the two groups have different EC_METHODs internally.
There was a relevant discussion in #6302 about mixing EC_POINT P and EC_GROUP g in function calls, if P was not created by EC_POINT_new(g).
Is it a programmer error?
Do the API signatures suggest that it should be safe (at least as in no SEGFAULTs)?
There were different opinions, and that is where ec_point_is_compat() was born.
Nonetheless EC_GROUP_cmp() is an example of a place inside the library where we can end up calling an EC_POINT_*() function with a group that is not the creator for the given point.
@mattcaswell can share more insight about it, but at the end of the day documentation and implementation seem to diverge, and a decision should be taken whether to fix this by adjusting the documentation to the actual behavior or by fixing the implementation to match the description (and what to do about 1.0.2, 1.1.0 and 1.1.1). Both changes appear potentially breaking, but adding a caveat to the documentation, although less desirable, is probably the less disruptive change for existing applications.
Solutions
Fix the doc
We could update the documentation for the function clarifying when two EC_GROUPs are considered to represent the same curve, explicitly excluding EC_GROUPs that represent the same curve from a Math POV, but use different internal implementations.
Fix the behavior
We could replace the EC_POINT_cmp() inside EC_GROUP_cmp() (or have a separate EC_GROUP_cmp_math() -- ok, I suck at thinking about proper names!) by taking the affine representation of gen_a and gen_b and doing two BN_cmp() calls; this avoids mixing incompatible points and groups.
int EC_GROUP_cmp(const EC_GROUP *a, const EC_GROUP *b, BN_CTX *ctx)
{
    // [...]
    // BIGNUM *xa, *ya, *xb, *yb;
    // {xa,ya,xb,yb} = BN_CTX_get(ctx);
    // [...] all the stuff up to EC_POINT_cmp(), checking every other param
    // then, instead of EC_POINT_cmp(a, EC_GROUP_get0_generator(a), EC_GROUP_get0_generator(b), ctx):
    EC_POINT_get_affine_coordinates(a, EC_GROUP_get0_generator(a), xa, ya, ctx);
    EC_POINT_get_affine_coordinates(b, EC_GROUP_get0_generator(b), xb, yb, ctx);
    if (r || BN_cmp(xa, xb) != 0 || BN_cmp(ya, yb) != 0)
        r = 1;
    // [...]
This way ec_point_is_compat() is always happy because you never mix methods of group a on a b_point created from a (potentially) different EC_GROUP b.
Sidenotes
On a related note, reprising #6302, maybe it's worth exploring again the attempts that @mattcaswell did in the process of fixing that issue and improve the API documentation and the internals to clarify the API expectations about EC_GROUP and EC_POINT objects.
I think it's relevant because working on #8555 with @slontis it became clear that in existing tests (e.g., the very first test in parameter_test() from master:test/ectest.c) and in the library code (EC_GROUP_cmp() here is an example), we happen to sometimes mix and match EC_POINTs generated from different EC_GROUP objects.
A few discussion points follow
ec_point_is_compat(): EC_POINT does not contain a reference to the creating EC_GROUP
In #6302 there was some form of consensus on the fact that an EC_POINT should always only be used in the API with the associated EC_GROUP (i.e., the same one used when the point was created through EC_POINT_new(group)), and mixing them otherwise is a programmer error.
The agreement was that we should anyway avoid SEGFAULTs inside the library code when the programmer misuses the EC_POINT_*() API in such fashion, and so the internal ec_point_is_compat() was born and called by all the EC_POINT_*() wrappers to raise, consistently, an EC_R_INCOMPATIBLE_OBJECTS error.
The EC_POINT as a structure does not contain a reference to the EC_GROUP that created it, but only the direct EC_METHOD (copied from group->meth and required by the very few EC_POINT_*() methods that take as an argument only the EC_POINT itself, without the parent EC_GROUP) and a curve_name field (copied from group->name, which is NID_undef for all the EC_GROUPs created from explicit parameters).
As a result of this the ec_point_is_compat() implementation seems quirky (compared to storing an additional pointer to the creating EC_GROUP in the EC_POINT struct and directly checking for mismatches).
It's also clearly designed to be more permissive when either the tested point or group is not named.
Seems like 3.0.0 is a good place to do non-breaking rationalization of the internals of EC.
Most EC_POINT_*() functions require an EC_GROUP *
One could claim that the API of EC_POINT_*() is under-documented at the architectural level and has a suboptimal design: we assume, without stating it explicitly in the documentation, that an EC_POINT object should always be manipulated only through the corresponding creating EC_GROUP object, and that doing otherwise is a programmer error.
(It should also be noted that, at the moment, enforcing such an assumption more strictly inside the library reveals that we are already violating this assumption ourselves.)
Yet, the API signature for EC_POINT_*() functions shows that most of them require as the first argument the EC_GROUP to use to manipulate the EC_POINT; in the absence of frequent reminders, this could lead external developers without deep insight into the internals of the library to believe that an EC_POINT object does not strictly depend on the EC_GROUP that created it, and that mixing and matching is allowed (and probably useful in some cases, e.g., for performance one might want to reuse the same EC_POINT object across different EC_GROUPs instead of creating a new one, avoiding mallocs and frees).
While still providing a compatibility layer to maintain backward-compatibility, we might want to start deprecating the old methods and transition to a more rigorous EC_POINT API that simply does not allow mistakenly mixing EC_POINTs and EC_GROUPs.
Great description.
I'm leaning towards fixing the behaviour, but I'm not well versed in the EC internals and could be missing something major.
Agree that 3.0.0 is a good place to fix internals. That's what a lot of the FIPS work is actually about.
There is also an issue related to curve name aliases (i.e., multiple curve name NIDs map to the same curve parameters).
Currently ec_point_is_compat() compares curve_names, which will fail for aliased NIDs.
Maybe the curve list should be hashed (key = NID), so that it can check if the names are 'equal'.
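To make the aliasing concern concrete, here is a minimal sketch of comparing curve names through a canonical representative. The NID values and the alias are hypothetical, chosen for illustration only; real NIDs live in <openssl/obj_mac.h> and a real fix would derive the mapping from the built-in curve table rather than hard-coding it.

```c
#include <assert.h>

/* Hypothetical NID values for illustration only. */
enum {
    NID_demo_curve       = 415, /* canonical name for the curve */
    NID_demo_curve_alias = 927  /* hypothetical alias for the same curve */
};

/* Map an aliased NID to a canonical representative. A real fix could
 * precompute such a lookup table keyed by NID. */
static int canonical_nid(int nid)
{
    switch (nid) {
    case NID_demo_curve_alias:
        return NID_demo_curve;
    default:
        return nid;
    }
}

/* Alias-aware replacement for a raw curve_name comparison. */
static int curve_names_equal(int a, int b)
{
    return canonical_nid(a) == canonical_nid(b);
}
```

With this, two groups whose curve_name fields are aliases of the same curve would no longer fail the name comparison.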
If the index is by NID, I suggest using a sparse array instead of a hash.
It's designed to be particularly efficient in this case.
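A toy sketch of the sparse-array idea (a simplified stand-in, not necessarily matching OpenSSL's internal sparse array implementation): NIDs are small, densely packed integers, so a two-level block structure gives O(1) lookup without any hashing.

```c
#include <assert.h>
#include <stddef.h>

/* Two-level "sparse array" keyed by NID. The top level indexes fixed-size
 * blocks; a real implementation would allocate blocks lazily, while this
 * sketch just uses static storage. */
#define BLOCK_BITS 4
#define BLOCK_SIZE (1 << BLOCK_BITS)
#define NBLOCKS    64 /* supports NIDs in [0, NBLOCKS*BLOCK_SIZE) */

static void *blocks[NBLOCKS][BLOCK_SIZE];

static void sa_set(int nid, void *value)
{
    if (nid >= 0 && nid < NBLOCKS * BLOCK_SIZE)
        blocks[nid >> BLOCK_BITS][nid & (BLOCK_SIZE - 1)] = value;
}

static void *sa_get(int nid)
{
    if (nid < 0 || nid >= NBLOCKS * BLOCK_SIZE)
        return NULL;
    return blocks[nid >> BLOCK_BITS][nid & (BLOCK_SIZE - 1)];
}
```

Compared to a hash, there are no collisions and no hash function to evaluate, which is why this layout suits an integer key space like NIDs.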
(Note: In the very first code-snippet nid is uninitialized.)
I'm thinking that fixing EC_GROUP_cmp to remove the call to EC_POINT_cmp seems like a relatively good idea.
My worry though is - as you state above - we implicitly assume everywhere that whenever we do something with an EC_POINT the programmer must pass in the creating EC_GROUP. It would not be unreasonable for a programmer to think that if they had done an EC_GROUP_cmp with two groups and they end up being equal, then they could use either group in a subsequent EC_POINT operation. But that might not be the case .... right?
Initially I was thinking that if they have the same cmp function then they should be comparable.
It would not be unreasonable for a programmer to think that if they had done an EC_GROUP_cmp with two groups and they end up being equal, then they could use either group in a subsequent EC_POINT operation. But that might not be the case .... right?
Exactly: specifically, if EC_GROUP_cmp() returns true but the two groups have different EC_METHODs, then most EC_POINT_*() calls where the first argument is the "equivalent" EC_GROUP (instead of the EC_GROUP that generated the EC_POINT object to be manipulated) will fail due to the ec_point_is_compat() check.
Initially I was thinking that if they have the same cmp function then they should be comparable.
This is an interesting idea: basically if two groups have different group->meth but the same group->meth->point_cmp it seems reasonable to expect at least that the EC_POINT objects are storing X, Y and Z in the same representation.
For ec_GFp_simple_cmp() (that is shared by EC_GFp_simple_method as well as EC_GFp_nist{p224,p256,p521,z256}_method) the coordinate representation is either affine if Z == 1 or Jacobian projective (X/Z^2, Y/Z^3) otherwise.
The problem is that there is no guarantee that they use the same underlying field representation for the BIGNUMs that hold the coordinate values:
https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L1082-L1083
field_mul and field_sqr can still differ between the groups that generated the 2 points:
https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L51-L52
https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_nistz256.c#L1676-L1677
This means that it is not safe to use EC_POINT_cmp() if the EC_METHODs of the compared EC_POINTs do not match, even if they have the same meth->point_cmp implementation.
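To see concretely why the projective comparison only works inside a single field representation, here is a toy model over integers mod a small prime, with plain machine integers standing in for the method-specific BIGNUM encodings:

```c
#include <assert.h>
#include <stdint.h>

/* Toy field: integers mod a small prime, standing in for a
 * method-specific field representation. */
#define P 97ULL
static uint64_t fmul(uint64_t a, uint64_t b) { return (a * b) % P; }

/* A Jacobian point (X, Y, Z) represents the affine point (X/Z^2, Y/Z^3).
 * Two Jacobian points are equal iff
 *   Xa*Zb^2 == Xb*Za^2  and  Ya*Zb^3 == Yb*Za^3,
 * which avoids any field inversion -- but every operand must live in the
 * SAME field representation, which is exactly the assumption broken when
 * mixing EC_METHODs. */
struct jpt { uint64_t X, Y, Z; };

static int jac_eq(const struct jpt *a, const struct jpt *b)
{
    uint64_t zb2 = fmul(b->Z, b->Z), za2 = fmul(a->Z, a->Z);

    if (fmul(a->X, zb2) != fmul(b->X, za2))
        return 0;
    return fmul(a->Y, fmul(zb2, b->Z)) == fmul(b->Y, fmul(za2, a->Z));
}
```

If a's coordinates were stored in, say, Montgomery form while b's were not, the same cross-multiplications would compare meaningless values: that is the mixed-EC_METHOD hazard described above.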
@slontis, @mattcaswell , elaborating on the two answers above:
We have to consider that, even noticing that each EC_POINT holds a reference to its associated EC_METHOD, it's not trivial to refactor ec_GFp_simple_cmp() (and other functions in ecp_*.c) under the assumption that it's not a programmer error to mix EC_GROUPs and EC_POINTs.
For example here in ec_GFp_simple_cmp():
https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L1099-L1127
we cannot replace field_sqr at L1107 with
if (!(b->meth->field_sqr(group, Zb23, b->Z, ctx)))
because b->meth->field_sqr will likely use group->meth->field_* functions that are likely to differ from b->meth->field_* counterparts.
We could maybe circumvent this by doing something with "ephemeral" EC_GROUPs:
int ec_GFp_simple_cmp(const EC_GROUP *__unused, const EC_POINT *a,
const EC_POINT *b, BN_CTX *ctx)
{
/*
* [...]
*/
EC_GROUP *group_a = NULL, *group_b = NULL;
/*
* [...]
*/
/* first arg is discarded because it might differ for a and b */
(void) __unused;
group_a = EC_GROUP_new(a->meth);
group_b = EC_GROUP_new(b->meth);
/*
* NOTE: this alone won't work because we need to set all the
* parameters for `group_a` and `group_b`, but have no way to access
* them from `a` and `b`!
*/
/*
* [...]
*/
if (!(b->meth->field_sqr(group_b, Zb23, b->Z, ctx)))
/*
* [...]
*/
}
Notice the NOTE comment: "ephemeral" groups cannot really work, and going down that route we would probably be better off adding reference counting to EC_GROUP, associating each EC_POINT with its creating EC_GROUP, and retrieving a->group and b->group instead.
But even with this, we cannot rewrite field_mul at https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L1109 as we cannot multiply a->X with the square of b->Z if they are in 2 different representations!
As another workaround to fix this we could discard the trick explained at https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L1099-L1104 and perform expensive divisions/inversions to avoid mixing the coordinates of a with the coordinates of b.
Even after these changes, we would still be left with the problem of BN_cmp()ing two BIGNUMs potentially in 2 different binary representations at https://github.com/openssl/openssl/blob/e9a5932d04f6b7dd25b39a8ff9dc162d64a78c22/crypto/ec/ecp_smpl.c#L1123-L1124
Conclusions
I believe refactoring the EC code to add ref counting to EC_GROUP, associating each EC_POINT with its parent EC_GROUP, and ignoring the EC_GROUP argument of each EC_POINT_*() function (other than EC_POINT_new()) makes sense to ensure the API is not misused (check the Sidenotes section in this PR description).
We will probably still end up considering a programmer error to mix EC_POINTs from different EC_GROUPs in any EC_POINT_*() function that takes more than one EC_POINT * argument.
Let's say we decide then to fix the behaviour of EC_GROUP_cmp() so that it returns true as long as the two EC_GROUP objects represent the same mathematical group, no matter what specialized implementation they might be using.
For use cases like the one described in the Another example section of this PR description, we might want to add an EC_POINT_convert() function:
/*
 * Convert EC_POINT `src` into a new EC_POINT compatible with `group`.
*
* Return values:
* - NULL if any error occurs (including EC_R_INCOMPATIBLE_OBJECTS,
* see Notes below)
* - a _new_ EC_POINT based on `group` otherwise.
*
* Notes:
* - if EC_GROUP_cmp(group, src->group) != 0 an
* EC_R_INCOMPATIBLE_OBJECTS is raised
* - when a valid pointer is returned, the caller is expected to
* call EC_POINT_free() before discarding the pointer.
*
*/
EC_POINT *EC_POINT_convert(const EC_GROUP *group, const EC_POINT *src,
BN_CTX *ctx)
{
BN_CTX *newctx = NULL;
EC_POINT *ret = NULL, *p = NULL;
BIGNUM *x, *y;
if (group == NULL || src == NULL) {
ECerr(EC_F_EC_POINT_CONVERT, EC_R_PASSED_NULL_PARAMETER);
return NULL;
}
if (group == src->group)
return EC_POINT_dup(src, group);
if (ctx == NULL && (ctx = newctx = BN_CTX_secure_new()) == NULL)
return NULL;
if (EC_GROUP_cmp(group, src->group, ctx) != 0) {
ECerr(EC_F_EC_POINT_CONVERT, EC_R_INCOMPATIBLE_OBJECTS);
BN_CTX_free(newctx);
return NULL;
}
BN_CTX_start(ctx);
if (NULL == (x = BN_CTX_get(ctx))
|| NULL == (y = BN_CTX_get(ctx)))
goto end;
if (!EC_POINT_get_affine_coordinates(src->group, src, x, y, ctx))
goto end;
if (NULL == (p = EC_POINT_new(group)))
goto end;
if (!EC_POINT_set_affine_coordinates(group, p, x, y, ctx))
goto end;
ret = p;
end:
if (ret == NULL)
EC_POINT_free(p);
BN_CTX_end(ctx);
BN_CTX_free(newctx);
return ret;
}
That seems like quite a lot of complexity, all to support the notion that there are two different kinds of, say, P-256 EC_GROUPs. Is that actually what you all want? In particular, if there's some non-deprecated codepath that produces a funny P-256 EC_GROUP, you presumably want the more efficient one anyway.
It seems this could be solved largely by making EC_GROUP_new_from_ecparameters recognize the built-in curves. That's what we did for BoringSSL:
https://github.com/openssl/openssl/issues/9251#issuecomment-506534600
@davidben I believe that could be a way of fixing #9251 , but here the question is what should EC_GROUP_cmp() do?
Are these 2 objects compatible?
Compare the programming objects returning true when they are equivalent from an implementation point of view and could be swapped safely.
Are these 2 objects mathematically equivalent?
Compare the represented mathematical objects, returning true when they are equivalent from a math point of view, but say nothing about mixing them or their EC_POINT "children" in function calls.
A libcrypto user might want an answer to the first question:
We have an EC API (that we cannot change) that requires users manipulating EC_POINTs to always provide some EC_GROUP.
There is some kind of consensus that using those functions should require the EC_GROUP argument to always be the parent of the manipulated EC_POINTs, and treat everything else as a programmer error
Such consensus is not documented officially, so it's a bit unfair to "blame" external users for API misuse
The source code of the library itself contains such "programmer errors", e.g. in EC_GROUP_cmp(), and potentially elsewhere
A libcrypto user might want an answer to the second question, as the library was born as a toolkit and people are using OpenSSL for all kinds of experimental stuff: testing new primitives, parsing/serializing to random formats/protocols, and stuff I cannot even imagine.
I might be assuming too much (and I apologize in advance if that's the case!), but it seems you are suggesting, as a resolution to this issue, to ensure that there are no cases in which the two questions above have a different answer, but I don't see a way to achieve that without refactoring the EC API.
I'm saying focusing on EC_GROUP_cmp is a mistake. The prominence of the "equal but not compatible" state is the root problem here. EC_GROUP_cmp's oddities are a symptom of it.
EC_GROUP_cmp should answer the first question, as it's pertinent to whether you can actually use an EC_GROUP. Then reduce the cases where first and second differ by always returning the same version of named groups in supported paths. For paths where recognizing curves is difficult due to OpenSSL's API mistakes (#5158), document that manually passing in a named curve's parameters won't get you that curve. Then remove the unpredictability there by making EC_GROUP_cmp check always mismatch named and arbitrary curves (otherwise EC_GROUP_cmp's behavior depends on implementation details, as it does today).
If someone wants to know whether a named group and an arbitrary group share a curve equation, they can always EC_GROUP_get_curve and friends. If it turns out this is common, add a new function, but I cannot imagine any meaningful use case. Ultimately any instances of this would be because of API mistakes that make it difficult to recognize the parameters.
Then remove the unpredictability there by making EC_GROUP_cmp check always mismatch named and arbitrary curves (otherwise EC_GROUP_cmp's behavior depends on implementation details, as it does today).
By this I mean the difference between how these two implementations treat the curve name:
https://github.com/google/boringssl/blob/master/crypto/fipsmodule/ec/ec.c#L585
https://github.com/openssl/openssl/blob/master/crypto/ec/ec_lib.c#L502
In OpenSSL, a named and arbitrary P-256 compare as equal. In BoringSSL, they do not. This is important because the named P-256 may use a curve-specific EC_METHOD or it may not depending on what optimizations OpenSSL has enabled. Adding a new curve-specific EC_METHOD should not change behavior.
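A minimal sketch of those comparison semantics, using toy types rather than OpenSSL's or BoringSSL's actual structures: named groups compare by NID only, arbitrary groups compare by parameters, and the two kinds never compare equal.

```c
#include <assert.h>
#include <string.h>

#define NID_undef 0

/* Minimal stand-in for an EC_GROUP: a curve name plus toy parameters. */
struct toygroup {
    int nid;            /* NID_undef for arbitrary (explicit) parameters */
    unsigned params[3]; /* stands in for p, a, b, order, generator, ... */
};

/* Returns 0 if equal, 1 otherwise. A named group and an arbitrary group
 * NEVER compare equal, even when the parameters are identical, so the
 * result cannot depend on which EC_METHOD a named group happens to use. */
static int toygroup_cmp(const struct toygroup *a, const struct toygroup *b)
{
    if (a->nid != NID_undef || b->nid != NID_undef)
        return a->nid != b->nid; /* named vs named, or named vs arbitrary */
    return memcmp(a->params, b->params, sizeof a->params) != 0;
}
```

Under these semantics, adding or removing a curve-specific optimization cannot flip the comparison result, which is the predictability argument being made above.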
There is some kind of consensus that using those functions should require the EC_GROUP argument to always be the parent of the manipulated EC_POINTs, and treat everything else as a programmer error
To clarify, when you say it has to "be" the parent, do you mean by pointer equality, or by logical equality? I don't think pointer equality is workable in OpenSSL right now because EC_GROUPs are currently copied everywhere. It needs to be logical equality... which brings us back to this issue.
https://github.com/openssl/openssl/blob/master/crypto/ec/ec_key.c#L499
(BoringSSL calls EC_GROUP_cmp in EC_POINT operations, not an EC_METHOD check. We make this efficient by only making named groups fully static, and then ref-counting arbitrary groups. This way it's rare that you actually try to compare the parameters themselves. But we also consider arbitrary groups a legacy thing to help Conscrypt implement some Java API mistakes, so we don't care about its performance much.)
https://github.com/google/boringssl/blob/master/crypto/fipsmodule/ec/ec.c#L571
https://github.com/google/boringssl/blob/master/crypto/fipsmodule/ec/ec.c#L825
| gharchive/issue | 2019-03-29T22:25:25 | 2025-04-01T06:45:17.304338 | {
"authors": [
"davidben",
"lzsiga",
"mattcaswell",
"paulidale",
"romen",
"slontis"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/8615",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
877397535 | Unify parameter types in documentation
Checklist
[x] documentation is added or updated
There are two instances of int in doc/man7/EVP_KDF-KB.pod that should be integer.
Fixup pushed with 3 instances :grin: Still approved?
| gharchive/pull-request | 2021-05-06T11:29:08 | 2025-04-01T06:45:17.309615 | {
"authors": [
"paulidale",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/15178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1634171409 | Fix documentation of X509_VERIFY_PARAM_add0_policy() [3.1]
The function was incorrectly documented as enabling policy checking.
Fixes: CVE-2023-0466
Checklist
[x] documentation is added or updated
Rebased to fix the CHANGES.md conflict. Assuming approvals hold.
Merged to 3.1 branch. Thank you.
| gharchive/pull-request | 2023-03-21T15:32:01 | 2025-04-01T06:45:17.311823 | {
"authors": [
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/20562",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2370242091 | Fix SSL_select_next_proto (3.1/3.0)
Ensure that the provided client list is non-NULL and starts with a valid
entry. When called from the ALPN callback the client list should already
have been validated by OpenSSL so this should not cause a problem. When
called from the NPN callback the client list is locally configured and
will not have already been validated. Therefore SSL_select_next_proto
should not assume that it is correctly formatted.
We implement stricter checking of the client protocol list. We also do the
same for the server list while we are about it.
CVE-2024-5535
In addition we add numerous test cases related to NPN and ALPN. While there are some existing tests for this, we expand them considerably. Finally we also fix a number of related bugs that were discovered during the development of the tests and clarify the documentation.
This is a backport of https://github.com/openssl/openssl/pull/24716 to the 3.1/3.0 branches
@nhorman - please also approve #24716 and #24717 and well as premium backport.
This pull request is ready to merge
Pushed. Thanks.
| gharchive/pull-request | 2024-06-24T13:20:13 | 2025-04-01T06:45:17.315078 | {
"authors": [
"mattcaswell",
"openssl-machine"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/24718",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
423016476 | Updated doc for BN_clear, BN_CTX_end when param is NULL
Checklist
[x] documentation is added or updated
[ ] tests are added or updated
Merged
master:
138ef774fedb567b29d6e5a96541a396cadc6135 Updated doc for BN_clear, BN_CTX_end when param is NULL
1.1.1:
20a8bce4bb70a3c4bfc69035c703fcdf8dcbc6cf Updated doc for BN_clear, BN_CTX_end when param is NULL
| gharchive/pull-request | 2019-03-20T00:46:25 | 2025-04-01T06:45:17.317329 | {
"authors": [
"levitte",
"slontis"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/8532",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1657224651 | Add capability to use version 4.12
The new Microshift version has more pod security enhancements, that can break the operator deployment.
Until the new patches that fix the deployment are merged, let's add back the capability to use version 4.12 so we can continue developing the operator (sf-operator).
Could we merge it ?
| gharchive/pull-request | 2023-04-06T11:15:10 | 2025-04-01T06:45:17.318366 | {
"authors": [
"danpawlik",
"morucci"
],
"repo": "openstack-k8s-operators/ansible-microshift-role",
"url": "https://github.com/openstack-k8s-operators/ansible-microshift-role/pull/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2127840328 | Temp switch adoption nodeset back to CRC 2.19
Related CIX: OSPCIX-182
As a pull request owner and reviewers, I checked that:
[x] Appropriate testing is done and actually running
[x] Appropriate documentation exists and/or is up-to-date:
[x] README in the role
[x] Content of the docs/source is reflecting the changes
/lgtm
/hold
I think https://github.com/openstack-k8s-operators/install_yamls/pull/720 should fix the issue in 4.14
Testing here: https://github.com/openstack-k8s-operators/ci-framework/pull/1131
recheck
| gharchive/pull-request | 2024-02-09T21:01:28 | 2025-04-01T06:45:17.321648 | {
"authors": [
"lewisdenny",
"rlandy",
"viroel"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/1129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2150047116 | Add DesignateSpecCore struct
This version of the struct (called "core") is meant to be used via the OpenStackControlplane. It is the same as DesignateSpec, except that it omits the containerImages.
The Default() function for webhooks has been split accordingly.
Jira: OSPRH-4835
/test designate-operator-build-deploy-kuttl
/lgtm
| gharchive/pull-request | 2024-02-22T22:25:58 | 2025-04-01T06:45:17.323624 | {
"authors": [
"beagles",
"dprince"
],
"repo": "openstack-k8s-operators/designate-operator",
"url": "https://github.com/openstack-k8s-operators/designate-operator/pull/157",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1154897501 | Deploy to static site server
In an email conversation with Włodzimierz Bartczak I learned that OSM-PL is worried about attacks on the site once promotion starts. Static site hosting on a large service such as Amazon S3 with a CDN like CloudFront can protect against most foreseeable attacks. This draft PR demonstrates how the site can be deployed to a static site host.
In this example, we deploy to a bucket visible online at https://dopomoha.teczno.com.
See https://github.com/migurski/ua-2022-map/runs/5370557331 for the latest successful run.
To keep the data up-to-date, AWS and other cloud hosts support scheduled tasks that can run download_data.py periodically.
Hey, thanks for the example. We have managed to set up CloudFront in front of our server so we should be fine for now. We are generating OSM tiles with ukrainian language so we need the server anyway.
It's a good idea to trigger the script on build though, we'll need to add that
Great, thanks for the feedback! Closing this now.
| gharchive/pull-request | 2022-03-01T05:44:24 | 2025-04-01T06:45:17.405081 | {
"authors": [
"migurski",
"ttomasz"
],
"repo": "openstreetmap-polska/ua-2022-map",
"url": "https://github.com/openstreetmap-polska/ua-2022-map/pull/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
164726571 | POI presets are not recognized in interpolated address lines
If you use lines for interpolating house numbers and there are POIs on them, they will not be recognized with presets. This happens whether they are in the middle of the line or on the end nodes. Not only will they not have an icon on the map, they will also display an “Other” preset in the left sidebar if you click on them.
They are also not recognized if they are mapped on the edge of a building—a practice that I have noticed mainly in strip malls where I live. I don't know if this practice is correct in the first place though, so it may not be a problem there.
ohhh #3219 asked for this and I didn't see the need - but yes I agree it's important for address interpolation lines.
Thanks for this suggestion, I implemented it in 0d8fb87
We now treat entities on address interpolation lines as points, not vertices. This affects both the rendering and the preset matching and preset suggestions:
Great! Thanks for considering my suggestion.
| gharchive/issue | 2016-07-10T17:58:24 | 2025-04-01T06:45:17.408303 | {
"authors": [
"bhousel",
"virgilinojuca"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/3241",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
352081972 | Add Using GitHub and Git
See Issue #4324 (comment).
Very well written, and I like how it links out to other great git resources for beginners, thanks @manfredbrandl ! 👍
If I'm not completely mistaken, adding simple stuff like new presets, or fixing some documentation, could be done entirely from GitHub in your browser: this includes forking, creating a new branch, creating or updating some files, creating a pull request, etc.
One example might be this PR: https://github.com/openstreetmap/iD/pull/5233/files
@mmd-osm Can you write a minimal howto or a short step-by-step description of a GitHub-only PR?
Sure... a very high level summary would be as follows:
(Creating Github account as before)
Navigate to https://github.com/openstreetmap/iD
Click on "Fork"
Click on "Branch: master"
Enter the name of a new branch, create new branch
Navigate to the file you want to edit
Click on "Edit this file"
Apply your changes to your file
Alternatively, you could also "Create a new file"
When finished, enter a commit text and description, Commit directly to the (my-new-branch) branch.
Click on "Commit changes"
Navigate back to your "id" project - https://github.com/{{user}}/iD
Follow https://help.github.com/articles/about-pull-requests/ to create a new pull request for your change
Tested on https://github.com/mmd-osm/iD/tree/my-new-branch
@mmd-osm Can you please look at CONTRIBUTING.md if improvements are still possible?
LGTM!
One thing I'm not sure of, is if we should promote https://help.github.com/articles/editing-files-in-another-user-s-repository/ even more, as it would be easier for typical users (I only remembered that this is possible, after finishing the step-by-step list above).
| gharchive/pull-request | 2018-08-20T10:48:45 | 2025-04-01T06:45:17.415763 | {
"authors": [
"bhousel",
"manfredbrandl",
"mmd-osm"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/pull/5241",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
608635024 | Remove corridors from path category
This removes corridors from the path category, so that they are only affected by the indoor features category.
Fixes #7478
Looks good, @JamesKingdom, thanks!!
| gharchive/pull-request | 2020-04-28T21:42:39 | 2025-04-01T06:45:17.416922 | {
"authors": [
"JamesKingdom",
"quincylvania"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/pull/7548",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
1324064099 | fix: sync both diff editor to extension host
Types
[x] 🐛 Bug Fixes
Background or solution
https://user-images.githubusercontent.com/17701805/182111449-ad15c1e4-7dca-4eb6-9601-d7b9b37ac202.mp4
Changelog
Fix an issue where the left-hand original editor in the diff view could not display GitLens blame annotations
/publish
| gharchive/pull-request | 2022-08-01T08:53:24 | 2025-04-01T06:45:17.426288 | {
"authors": [
"Aaaaash"
],
"repo": "opensumi/core",
"url": "https://github.com/opensumi/core/pull/1452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2128355532 | [OSDEV-781] API. Add a flag on API Limit page to indicate if package renews monthly or yearly.
OSDEV-781 API. Add a flag on API Limit page to indicate if package renews monthly or yearly.
added field "renewal_period" & migration for it
renamed field "yearly_limit" to "period_limit"
updated logic to support monthly & yearly limitation count reset
if the permissions were set up on the 29th-31st of a month, then the permissions will reset on the 1st of the next month
The implementation is done, working on tests.
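The reset rule described above can be sketched as follows. This is an illustrative helper (written in C for concreteness — the actual project is Python/Django, and the names here are assumptions, not the real implementation): monthly limits renew on the same day of the next month, except that anchors on the 29th-31st roll over to the 1st of the next month; yearly limits renew on the same date next year.

```c
/* Illustrative sketch of the renewal rule; NOT the actual implementation. */
typedef struct {
    int year;
    int month; /* 1-12 */
    int day;   /* 1-31 */
} ymd;

static ymd next_renewal(ymd anchor, int yearly)
{
    ymd next = anchor;

    if (yearly) {
        next.year += 1; /* yearly packages renew on the same date next year */
        return next;
    }

    /* monthly: advance one calendar month */
    next.month += 1;
    if (next.month > 12) {
        next.month = 1;
        next.year += 1;
    }

    /* Days 29-31 may not exist in the next month, so such anchors
     * reset on the 1st of the next month instead. */
    if (anchor.day >= 29)
        next.day = 1;

    return next;
}
```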
| gharchive/pull-request | 2024-02-10T11:55:43 | 2025-04-01T06:45:17.428433 | {
"authors": [
"roman-stolar"
],
"repo": "opensupplyhub/open-supply-hub",
"url": "https://github.com/opensupplyhub/open-supply-hub/pull/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
200600378 | Config routes containing same prefix as components are never hit
Hi @matteofigus , just a small issue here
In my config, I have something like
prefix: '/abc/',
routes: [{
route: '/abc/<some_custom_route>',
method: 'get',
handler: function(req, res) {
//return data
}
}]
But this route is never hit as the routes added via config are registered after the default OC component routes
https://github.com/opentable/oc/blob/master/src/registry/router.js#L54
If you agree, more than happy to make a PR for a fix :)
Hi, thanks for your submission.
This is all for enforcing simplicity and keeping the API clear and easily understandable. In our case we have our prefix like //componentsurl.com/components/... and have all the custom routes outside of the prefix: //componentsurl.com/custom-route and it works well for us. I guess this is the recommended way and it is not very well documented (and not nicely handled as the API starts without any warning or error).
Consider that in the current setup you can still extend the components' namespace, for instance adding a /:component/:version/hello which would automatically be created for each component.
If you are still designing your API (as I guess you are, given you are now taking care of the extra endpoints), would it be too much of a compromise to follow the same convention? I mean having an extra prefix for the components and keeping the custom routes one level down or with a different prefix. If that's ok for you, we can still keep the issue open to clarify the docs and return an error in case somebody tries to add a route to the same components' namespace.
Hi, the namespace separation makes sense and agreed should return an explicit error if possible :)
Ok, I'll keep this issue opened to update docs and error handling ;)
Wiki updated: https://github.com/opentable/oc/wiki/Registry
| gharchive/issue | 2017-01-13T10:47:52 | 2025-04-01T06:45:17.432745 | {
"authors": [
"debopamsengupta",
"matteofigus"
],
"repo": "opentable/oc",
"url": "https://github.com/opentable/oc/issues/350",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2491463076 | fix: VariantIndex partitioned by chromosome has to be read without resursive file lookup flag
✨ Context
After https://github.com/opentargets/gentropy/pull/735 the VariantIndexStep produces the VariantIndex table that is partitioned by chromosome (previously the dataframe was collected by pandas and saved without partitioning).
To preserve the partitioning column, we need to drop the recursiveFileLookup flag when reading the variant index in LocusToGeneStep, otherwise the chromosome column is null.
🛠 What does this PR implement
🙈 Missing
🚦 Before submitting
[x] Do these changes cover one single feature (one change at a time)?
[x] Did you read the contributor guideline?
[ ] Did you make sure to update the documentation with your changes?
[x] Did you make sure there is no commented out code in this PR?
[x] Did you follow conventional commits standards in PR title and commit messages?
[x] Did you make sure the branch is up-to-date with the dev branch?
[ ] Did you write any new necessary tests?
[x] Did you make sure the changes pass local tests (make test)?
[x] Did you make sure the changes pass pre-commit rules (e.g poetry run pre-commit run --all-files)?
@ireneisdoomed I have changed the default to False as discussed.
| gharchive/pull-request | 2024-08-28T08:45:04 | 2025-04-01T06:45:17.440546 | {
"authors": [
"project-defiant"
],
"repo": "opentargets/gentropy",
"url": "https://github.com/opentargets/gentropy/pull/738",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1706733295 | [Genetics] Update Link component props to use ts types
Pull Request Template (PR Title)
Should match this format: [Scoped application (Genetics || Platform || Package || AppConfig)]: Short description mentioning the affected page and/or section component
Description
Please include a summary of the change and which issue is fixed. List any dependencies that are required for this change.
Issue: #2871
Deploy preview: (link)
PR change is recommended by react: https://react.dev/reference/react/Component#static-proptypes
Type of change
Please delete options that are not relevant.
[x] Refactor (non-breaking change)
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce.
[ ] Test A
[ ] Test B
Checklist:
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
[ ] Any dependent changes have been merged and published in downstream modules
Thanks a lot for your collaboration @riyavsinha.
PR tested and good to go 👍
| gharchive/pull-request | 2023-05-12T00:16:25 | 2025-04-01T06:45:17.446564 | {
"authors": [
"carcruz",
"riyavsinha"
],
"repo": "opentargets/ot-ui-apps",
"url": "https://github.com/opentargets/ot-ui-apps/pull/132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1918008512 | [Platform]: Fix display of empty results for Known Drugs widget filter
Description
Introduces a new property in the SectionItem component that allows rendering the body with empty data. When users enter in the search of the Known Drugs widget an input that generates empty results, they can still see the widget and try again with a new input.
Issue: https://github.com/opentargets/issues/issues/3084
Type of change
[X] Bug fix (non-breaking change which fixes an issue)
How Has This Been Tested?
Testing instructions:
run Platform app using this branch
visit http://localhost:3000/target/ENSG00000073756
in the Known Drugs widget, search for "asd"
empty results should be displayed as below
clearing search should return to previous, unfiltered results
Checklist:
[X] I have commented my code, particularly in hard-to-understand areas
[X] I have made corresponding changes to the documentation
[X] My changes generate no new warnings
[X] Any dependent changes have been merged and published in downstream modules
Thanks for your initiative @raskolnikov-rodion, but this is out of the scope of external collaborators since it requires a backend fix, a bigger refactor in the SectionIItem to ensure other side scenarios and prototype/design. This is why I point you directly to the JS to TS refactor. https://github.com/opentargets/issues/issues/2871
Hi @carcruz , sure, all good, thanks for the clarification!
| gharchive/pull-request | 2023-09-28T17:52:02 | 2025-04-01T06:45:17.453025 | {
"authors": [
"carcruz",
"raskolnikov-rodion"
],
"repo": "opentargets/ot-ui-apps",
"url": "https://github.com/opentargets/ot-ui-apps/pull/265",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
976804609 | Lf update probes
Update the chemical probes widget to the new design (new table) and api.
See https://github.com/opentargets/platform/issues/1677
Since the widget changed (i.e. not just data fields), will wait for the all clear from @andrewhercules or @d0choa before code review / merge.
Looks great @LucaFumis!
I agree with the points @d0choa raised in https://github.com/opentargets/platform/issues/1677#issuecomment-903674786.
Also, in the "Score" column, can we please style the "N/A" entries as chips and have a tooltip that says "No reported score"?
Current implementation:
Proposed implementation:
What do you think about "Not available" instead of "N/A" @andrewhercules ? The styling most definitely
Yes, agree @d0choa!
@LucaFumis, can we please update the chip to show "Not available" if there is no score?
Looks great @LucaFumis! Thank you!
@d0choa please review and if okay, the front-end team can review and merge into main.
Thanks for all the feedback: all points should now have been addressed - PR ready for code review.
It looks fantastic. Just 2 last requests:
[ ] Is it possible to add a second layer of sorting? Now, we are ordering by the column Quality. However, very often there are so many low-quality results that we would benefit to have more order. PIK3CA is a good example in which PICTILISIB is lost on the 4th page. Can we try to implement the next logic:
Sort by isHighQuality (as currently implemented)
Within each isHighQuality category, display first entries that contain experimental in the origin field.
[ ] Can we add an on-hover message in the star that says High quality or Low quality.
Quality tooltip and custom sorting now ready for testing :)
Awesome, one very minor thing. I promise it's the last:
[ ] Can you prevent the scores from showing a trailing zero decimal? Like in: 100.0, 35.0, ... of course without rounding 35.7 etc.
| gharchive/pull-request | 2021-08-23T09:07:31 | 2025-04-01T06:45:17.461507 | {
"authors": [
"LucaFumis",
"andrewhercules",
"d0choa"
],
"repo": "opentargets/platform-app",
"url": "https://github.com/opentargets/platform-app/pull/437",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
978918182 | Nginx keepalive_timeout update
Update setting to match recommendations from GCP as seen here
620 seconds is the recommended value
| gharchive/issue | 2021-08-25T09:22:24 | 2025-04-01T06:45:17.462735 | {
"authors": [
"mbdebian"
],
"repo": "opentargets/terraform-google-opentargets-platform",
"url": "https://github.com/opentargets/terraform-google-opentargets-platform/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2286138149 | chore(ci): Fix license check
Lets license check block merges and PRs
Adds make license command for easy testing from the command line
downstream projects will have to bump to Go 1.22.3 or keep go.mods as is
downstream projects will have to bump to Go 1.22.3 or keep go.mods as is
https://github.com/opentdf/platform/pull/773
@pflynn-virtru I've updated the mods to all require at least go 1.21, instead of 1.22.3
| gharchive/pull-request | 2024-05-08T17:54:42 | 2025-04-01T06:45:17.465594 | {
"authors": [
"dmihalcik-virtru",
"pflynn-virtru"
],
"repo": "opentdf/platform",
"url": "https://github.com/opentdf/platform/pull/771",
"license": "BSD-3-Clause-Clear",
"license_type": "permissive",
"license_source": "github-api"
} |
214814291 | Compiler vrs Compiler issues, vrs IDE issues.
Open thread relies upon a number of standard C header files that are different across compilers, and compiler releases.
For example:
#include <string.h> - depending on the compiler does not have some functions.
This also varies by compiler version, (ie: a fix was introduce in a later version)
In the GNU Autoconf world this is solved via either "libiberty" or various "#ifdefs"
GCC supports an #include_next scheme - other compilers do not.
a) This does not play nicely with IDEs that do not support Autoconf.
b) This also makes code "messy" with lots of #if/#else/#endif fixes.
I propose the following solution:
If a standard header files needs to be fixed for a specific tool.
Then - the following action is taken.
Step 1:
A "pseudo-replacement" header file is created in a new directory called: "fixed"
#include <openthread/platform/fixed/fixed_FILENAME.H>
Alternatively it could be placed in ${openthread}/src//fixed/fixed_FILENAME.H
The concern here is if Thread ever provides a C++ API - with functions defined within a class.
Note: specifically the original filename should not be used "as is", but a standard prefix or suffix be added to the original filename to make it unique. In this example I propose: "fixed_FILENAME.h"
Example: #include <stdio.h>
Becomes: #include <openthread/platform/fixed/stdio.h>
Step 2:
This new <openthread/platform/fixed/fixed_FILENAME.H>
Must eventually include the tool chain provided FILENAME.H
But also contains any required compiler/library fixes in the form of: #if/#else/#endif
Step 3: Problematic Thread code then includes the "fixed" header file in lieu of the standard header file.
These standard headers, and the function signatures, should be the same across compilers that support C99 (ISO/IEC 9899), shouldn't they? Since we are using C99, I think as long as we are careful to only use standard functions that are from C99 and not a later revision, we should be safe.
Question: Is strnlen() C99, or was it added at a later date?
I believe this is an online copy of the C99 standard:
http://cs.nyu.edu/courses/summer12/CSCI-GA.2110-001/downloads/C99.pdf
the function is not described in that document.
Other places I have tried/looked.
I do not see it here: http://en.cppreference.com/w/c/string/byte
I do not see it here: https://en.wikibooks.org/wiki/C_Programming/Strings
I do not see it here: https://www-s.acm.illinois.edu/webmonkeys/book/c_guide/2.14.html
It looks like size_t strnlen(const char *s, size_t maxlen); is actually a POSIX function. If we are using it in the code then it should be replaced with a C99-compliant alternative.
So, for strnlen it looks like this issue has already been addressed, with spinel providing its own implementation of strnlen for compilers that don't provide one.
I don't think there exists a "C99" compliant version.
I believe that the idea of "strnlen()" is generally from a security point of view. If the string is really super long (ie: buffer overflow attack) then stop now - do not search until the end of memory or crash.
the spinel solution does not solve the problem.
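For reference, a C99-only fallback is small enough to sketch here; it follows the shape and naming convention of the missing_strnlen() declaration proposed later in this thread (the name is just that convention, not an existing API):

```c
#include <stddef.h>

/* C99-only fallback for the POSIX strnlen(): return the length of s,
 * but never examine more than maxlen bytes. */
static size_t missing_strnlen(const char *s, size_t maxlen)
{
    size_t len = 0;

    while (len < maxlen && s[len] != '\0')
        len++;

    return len;
}
```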
I don't think: https://github.com/openthread/openthread/issues/279
completed/fixed the problem here - it addressed "spinel" only.
a) As Adam points out - an alternately named function is effectively implemented in the NCP library here: https://github.com/openthread/openthread/blob/master/src/ncp/spinel.c
b) Code such as:
https://github.com/openthread/openthread/blob/master/src/core/mac/mac_frame.hpp#L913
Problem 1: Does not have a header that it can include that provides this decl.
Problem 2: And unless an application links the NCP library, and uses that alternatively named function this will not solve the problem.
#737 is a somewhat unloved pull request that I need to get working again. It sets up a framework for pulling in implementations of functions which aren't present on some platforms. It could be easily extended to include strnlen().
Unfortunately that PR hasn't gotten a lot of love since I first proposed it, and I need to do some additional work on it. It also doesn't really take into considerations IDEs.
Keep in mind that these "replacement functions" aren't really intended to be exposed to the applications, they should only be exposed for OpenThread internal use only.
The Spinel strnlen() was just a quick implementation of a function that was conveniently small to implement. It is statically defined and so obviously not intended to be exported out of the file it is used in.
@jwhui, @darconeous, and @nibanks
I'd like to draw your attention to this issue, it is the next one I would like to better address.
This is the next large item I want to push into OpenThread.
Code in openthread currently does this:
#include <string.h>
To work around missing functions and other problems with string.h and related headers, platforms resort to some gymnastics.
In ./configure land, we "force feed" header files into the compiler.
In windows-land, the openthread-windows-config.h file includes these files.
Doing this header-file force-feed into the compiler gets ugly in IDE land. It's doable, but ugly.
Another aspect of this is the skill set of the target developer. Some of us propeller-heads know the internal workings of the compiler, the IDE configuration files, and so on; in the broad market, not all developers really understand this, especially newcomers. If the goal is to have a mass market for Thread, then doing things that are easy and quick to understand is the better way.
I don't think the gymnastics approach used right now is a good long term solution, and thus I want to put forth what I think is a better choice or method
It comes down to this:
STEP 1
OLD CODE:
#include <string.h> /* old code */
NEW CODE
#include <utils/wrap_string.h> /* use this name instead */
The same would apply to stdbool.h, and stdint.h for windows, and more importantly - the same model can be used for ALL future files in the same way, just create "wrap_SOMENAME.h" as needed.
In a perfect world, this new "wrap_string.h" - just includes <string.h>
But we don't live in that world.
An example implementation of 'wrap_string.h' might be like this:
Note: I have this ready to go now, we are using this internally now.
My only questions are, if I push this up, how should this be handled
a) Separate library? if so what name?
This would obviously need a bit of tweaking on the Windows side (a new library to make)
b) Or integrating into the existing openthread library
I think this is the correct/better choice, it is more transparent.
That said, it would require add/remove a few files to the vcxproject files.
c) Or nope, this is not the correct way to solve this.
/* copyright statement here */
#if !defined(WRAP_STRING_H)
#define WRAP_STRING_H
#include "wrap_common.h" // common wrapper macros */
/* include the system provided header */
#include <string.h>
/* these are the alternate implementations */
/* See: https://www.freebsd.org/cgi/man.cgi?query=strlcpy */
WRAP_EXTERN_C size_t missing_strlcpy(char *dst, const char *src, size_t dstsize);
/* See: https://www.freebsd.org/cgi/man.cgi?query=strlcat */
WRAP_EXTERN_C size_t missing_strlcat(char *dst, const char *src, size_t dstsize);
/* See: https://www.freebsd.org/cgi/man.cgi?query=strnlen */
WRAP_EXTERN_C size_t missing_strnlen(const char *s, size_t maxlen);
/* undefine any compiler supplied function. */
#undef strlcat
#undef strlcpy
#undef strnlen
/* Define these, it just works always by default ... */
#define strlcat missing_strlcat
#define strlcpy missing_strlcpy
#define strnlen missing_strnlen
/* Undefine below - as per platform */
#if defined(_MSC_VER)
#undef strnlen /* provided by visual studio */
#endif
#if _wrap_ti_arm
/* TI_ARM compiler is missing all */
#endif
#if __linux__
#undef strnlen /* provided by gcc */
/* strlcpy - missing */
/* strlcat - missing */
#endif
#if _wrap_iar_arm
/* IAR systems for ARM */
/* does not supply any in all cases */
#endif
#endif // WRAP_STRING_H
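The wrap_common.h included above is not spelled out in this proposal; a minimal sketch might look like the following. The feature-detection macros are assumptions for illustration only - each port would key off its own compiler's predefined macros.

```c
/* copyright statement here */
#if !defined(WRAP_COMMON_H)
#define WRAP_COMMON_H

/* Give C and C++ translation units the same linkage for the replacement
 * functions declared by the wrap_*.h headers. */
#if defined(__cplusplus)
#define WRAP_EXTERN_C extern "C"
#else
#define WRAP_EXTERN_C extern
#endif

/* Per-toolchain flags used by the wrap_*.h headers.  The detection
 * below is illustrative only. */
#if defined(__TI_COMPILER_VERSION__)
#define _wrap_ti_arm 1
#else
#define _wrap_ti_arm 0
#endif

#if defined(__IAR_SYSTEMS_ICC__)
#define _wrap_iar_arm 1
#else
#define _wrap_iar_arm 0
#endif

#endif /* WRAP_COMMON_H */
```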
I'm not opposed to it. Seems like a workable and convenient solution.
Regarding the library... I would assume this stuff would just go with the platform bits. I'm guessing that is option B? I would think we certainly wouldn't want to expose these headers to applications linking against us.
Technically it is doable (putting the library in platform) but currently things in "src" do not include headers from the platform directory, and there is no "platform include" directory. Is that what we want? I don't think so.
If we want these private to openthread, I think the headers should be inside the ${openthread}/src directory somewhere.
The question is really this: Do we want to require linking another library to link against? Yes or No.
Sort of: Always link: ${LIB_OPENTHREAD_PART1} and ${LIB_OPENTHREAD_PART2}
That part2 library would have the missing string functions... I think this is awkward and weird; it is, btw, what I have currently implemented internally. I'm not married to "two libraries"; I am questioning whether two libraries is the correct choice.
Hence I think Option (B), but as part of "libopenthread" [not platform], is the place it belongs.
Resolved by #1642.
| gharchive/issue | 2017-03-16T19:30:55 | 2025-04-01T06:45:17.513183 | {
"authors": [
"DuaneEllis-TI",
"aeliot",
"darconeous",
"jwhui"
],
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/issues/1476",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
244914043 | [spinel] support in place unpack.
This PR added a new API, spinel_datatype_unpack_in_place(), to the spinel parser to support parsing composite data and filling in supplied buffers.
The current parser API returns data that are only valid inside the frame handler. This makes it hard to implement a general property getter like GetProperty(spinel_key_t aKey, const char *aFormat, ...) where aFormat describes the structure of the expected response. The newly added API accepts a pointer to composite-type data instead of a pointer to a pointer to composite-type data, which saves repeated parsing code and simplifies implementing a general getter for spinel properties on the host side.
A lot of thought went into the original design of spinel_datatype_unpack(). There are two design goals that the current mechanism satisfies:
Argument symmetry with spinel_datatype_pack(). The only difference between the arguments you pass in to _unpack() versus _pack() is that _unpack() takes a pointer to the type that would be passed in to _pack(). So an int for _pack() becomes an int* for _unpack(). A char* for _pack() becomes a char** for _unpack(). And, importantly, a spinel_eui64_t* for _pack() becomes a spinel_eui64_t** for _unpack(). This helps reduce argument confusion.
Minimize unnecessary copies, to improve performance. If you need a copy of something outside of the context that the frame will exist, the intended solution is to make a copy of it after unpacking.
It is not terribly clear to me exactly what specific onerous problem this change is attempting to solve. Can you provide some more specific examples?
I'm implementing a general spinel property getter function with prototype like GetProperty(spinel_key_t aKey, const char *aFormat, ...).
The caller assumes all data are properly assigned when the function returns successfully. Instead of writing parsing code for each property, I reuse the spinel_datatype_vunpack*() API to implement a simple general property handler for all properties, with the supplied aFormat as the pack_format argument of spinel_datatype_vunpack*(). Composite data like strings returned from spinel_datatype_vunpack() are actually located in the HDLC frame, so they are only valid during the HDLC decoder's frame callback. Once GetProperty(spinel_key_t aKey, const char *aFormat, ...) returns, these data have already become invalid. So in this PR I implemented an alternative way of parsing the spinel data so that composite data are copied into caller-supplied buffers.
As for the symmetry design, it's really beautiful and I like it. I also noticed that scanf() and printf() are very similar to spinel's *unpack() and *pack(): reader and writer. The reader scanf(), just like *unpack(), accepts a pointer to the type that would be passed to the writer printf(), while for strings both scanf() and printf() accept a pointer to char and the format code is the same %s. I didn't feel confused, maybe because in C, a parameter is passed by pointer if:
the parameter is going to be changed by the function,
or the parameter is a composite data type and passing by pointer is more efficient than passing by value.
So maybe it's acceptable for composite data types that the reader and writer, i.e. *unpack() and *pack(), are not fully symmetric.
As for the performance concern, this PR didn't change the existing API, so the caller only uses the in-place parsing API if needed. Besides, making a copy after unpacking as suggested leads to writing specific parsing code for each spinel property, which may result in a lot of repeated code.
Thoughts? @darconeous
Can you paste in some code showing me how you are using this new API to make things nicer? I think that would make its utility more clear to me.
static va_list sArgs;
static const char* sFormat;
void GetEui64(uint8_t *aEui64)
{
    GetProp(SPINEL_PROP_HWADDR, SPINEL_DATATYPE_EUI64_S, aEui64);
}
int GetProp(spinel_key_t aKey, const char* aFormat, ...)
{
    sFormat = aFormat;
    va_start(sArgs, aFormat);
    int ret = SendGetPropAndWaitReply(aKey);
    va_end(sArgs);
    return ret;
}
int SendGetPropAndWaitReply(spinel_key_t aKey)
{
    int error = 0;
    // code to send the get property
    ...
    // loop waiting for serial port data and feed to HDLC decoder.
    char buf[1024];
    while (/* the property response is not received or not timeout */)
    {
        int rval = read(sSock, buf, sizeof(buf));
        sHdlcDecoder.Decode(buf, rval);
    }
    return error;
}
void HandleHdlcFrame(uint8_t *aBuffer, uint16_t aBufferLength)
{
    // some code to parse command
    ...
    // if this is the response of the get prop command, aBuffer has already been adjusted to point to the start of the property value.
    {
        int rval = spinel_datatype_vunpack_in_place(aBuffer, aBufferLength, sFormat, sArgs);
    }
    ...
}
Thinking about this, handling the failure case may require some care with the new model, especially for handling of structs and types that contain more than one value.
When spinel_datatype_vunpack_in_place is called, we don't yet know if the received spinel frame (content of aBuffer) is valid and can be parsed correctly for the given format. We need to ensure that in a failed case, the passed-in variables remain unchanged. This becomes tricky if the format contains structs or more than one value, as the unpack may be able to parse/decode some of the values but not all.
The existing model/API addresses this as follows:
The caller unpacks the content into local variables (gets pointers into the buffer where the data is, without copying it over),
Checks the status of the unpack and ensures the operation succeeded,
Only if successful, the caller copies/moves the content where they need to go.
With the new model, we may need to keep a copy of the original values of the parameters passed to int GetProp(spinel_key_t aKey, const char* aFormat, ...) to be able to revert back in case of failure...
Thanks @bukepo, that helps my understanding greatly. I see you are trying to write blocking API calls that combine both sending the request and parsing the response. Introducing this method makes for much more elegant code, so I can see your motivation.
I'm going to give a little more thought to this and respond more decisively later today.
Regarding @abtink's comment:
We need to ensure that for a failed case, the passed-in variables remain unchanged.
This was never a guarantee, even with the standard spinel_datatype_vunpack(). All arguments that were successfully parsed up to the point of the parse error will be changed. This generally isn't a problem as long as you can signal some sort of error condition that indicates that the frame was not parsed as expected. Unless I am missing something, I don't think there is a need to restore previous values.
@darconeous I meant from the perspective of the caller of GetProp(spinel_key_t aKey, const char* aFormat, ...).
Any chance I could get you to add unit tests (see bottom of file) that exercise the newly added code?
@darconeous I have added some unit tests.
| gharchive/pull-request | 2017-07-23T14:36:31 | 2025-04-01T06:45:17.528072 | {
"authors": [
"abtink",
"bukepo",
"darconeous"
],
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/pull/2025",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
FeignTracingAutoConfiguration.tracer causes application startup time to go up 10x
I'm not entirely sure where the issue is. But here is the behavior I'm observing when using the opentracing-spring-cloud-feign tracing module.
The Spring Boot application that consumes this module goes from a ~15 sec startup time to ~5 min startup time.
From the log files, it seems that excessive RMI activity is the culprit.
Renaming the tracer variable in FeignTracingAutoConfiguration.java to anything but tracer resolves the issue
I have good and bad trace level startup logs I can share for comparison.
The log files would be appreciated. Could you please also share a reproducer?
I was able to consistently reproduce and resolve the issue as noted above until I opened this issue! Now I can't reproduce it anymore. Yikes! But I do have the logs from a run earlier this morning that demonstrate the higher RMI activity. Can I ship them to a non-public email address?
it was a weird one :). But if it happens again I am happy to look at it. I just need more info on how to reproduce.
| gharchive/issue | 2018-05-03T15:05:56 | 2025-04-01T06:45:17.587887 | {
"authors": [
"pavolloffay",
"sudr"
],
"repo": "opentracing-contrib/java-spring-cloud",
"url": "https://github.com/opentracing-contrib/java-spring-cloud/issues/134",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1871338465 | 🛑 [CN] OpenUPM API /feeds/updates/rss is down
In 1a1097d, [CN] OpenUPM API /feeds/updates/rss (https://api.openupm.cn/feeds/updates/rss) was down:
HTTP code: 500
Response time: 212 ms
Resolved: [CN] OpenUPM API /feeds/updates/rss is back up in dbad036 after 14 minutes.
| gharchive/issue | 2023-08-29T10:16:25 | 2025-04-01T06:45:17.862324 | {
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/1539",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
861036344 | Finishing the job should take you to the most recent pagination page
My actions before raising this issue
[x] Read/searched the docs
[x] Searched past issues
Expected Behaviour
When pressing 'Finish the job', we should be returned to the job page in the task of the job we just finished, or proceed to the next job in the task (the latter is preferable). This is particularly important when there are 100s of jobs per task.
Current Behaviour
When pressing 'Finish the job', we are returned to the first job page in the task.
Possible Solution
Steps to Reproduce (for bugs)
Context
Your Environment
Git hash commit (git log -1):
Docker version docker version (e.g. Docker 17.0.05):
Are you using Docker Swarm or Kubernetes?
Operating System and version (e.g. Linux, Windows, MacOS):
Code example or link to GitHub repo or gist to reproduce problem:
Other diagnostic information / logs:
Logs from `cvat` container
Next steps
You may join our Gitter channel for community support.
@bsekachev Any guidance on the best way to fix this please? We are currently facing this issue from our annotators and would be happy to contribute with the PR.
Hi, @dreaquil
Thanks for your interest.
Antd has an API prop defaultCurrent for the Pagination element. We can use it, but we need to store the page somewhere in global storage (Redux is used).
Also keep in mind that the number of pages depends on applied filters and the order depends on applied sortings, so we probably need to store those in the storage as well and restore them after reopening a page.
Hi @bsekachev
Trying to figure out the best place to store the current job page. Does storing currentJobPage in the scope of the global store seem reasonable?
@dreaquil
I think we should store { currentJobsPage, appliedJobsFilters, appliedJobsSortings } in the array of current tasks (on the same level where instance and previews are stored).
I've started a PR that's a work in progress. Can you let me know if this seems reasonable or if I'm working in some kind of anti-pattern? Full disclosure: I haven't really done front-end dev before.
@dreaquil , is it the same as https://github.com/openvinotoolkit/cvat/issues/3144?
hi @nmanovic! These are different; one is about returning to the same job list page that contains the previously finished job. #3144 is about having a button that allows the user to finish a job and move directly to the next job in a list.
| gharchive/issue | 2021-04-19T08:16:07 | 2025-04-01T06:45:17.871684 | {
"authors": [
"bsekachev",
"dreaquil",
"nmanovic"
],
"repo": "openvinotoolkit/cvat",
"url": "https://github.com/openvinotoolkit/cvat/issues/3101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1132101499 | Tus for task annotations import
Motivation and context
Continue TUS integration to provide chunk upload for large annotation files in Tasks and Jobs
Resolved #964
How has this been tested?
Checklist
[x] I submit my changes into the develop branch
[x] I have added a description of my changes into CHANGELOG file
[ ] I have updated the documentation accordingly
[ ] I have added tests to cover my changes
[x] I have linked related issues (read github docs)
[x] I have increased versions of npm packages if it is necessary (cvat-canvas,
cvat-core, cvat-data and cvat-ui)
License
[x] I submit my code changes under the same MIT License that covers the project.
Feel free to contact the maintainers if that's a concern.
[x] I have updated the license header for each file (see an example below)
# Copyright (C) 2022 Intel Corporation
#
# SPDX-License-Identifier: MIT
@bsekachev @nmanovic Could you please take a look at this patch?
I've encountered a strange problem with the test_tasks_delete cli test. After the new functionality in this PR was added, self.mock_stdout.getvalue() in this test started returning:
on upload finished test_task
Task ID 1 deleted
instead of
Task ID 1 deleted
This test checked for the presence of test_task in those lines, so it failed. But the task was eventually deleted, so I think it's better to check for Task ID 1 deleted, and I've changed the test accordingly.
Hmm, uploading annotations to an empty task does not work for me. UI says "Annotations have been loaded", but the task is still empty.
Please, check.
@klakhov Could you please resolve conflicts?
Looks like the same issue is still reproducible for me, or maybe I missed something? (pull, restart server, restart UI, ...?)
@nmanovic I've removed data_type as we discussed and added several actions for each upload type
| gharchive/pull-request | 2022-02-11T08:47:22 | 2025-04-01T06:45:17.881050 | {
"authors": [
"bsekachev",
"klakhov"
],
"repo": "openvinotoolkit/cvat",
"url": "https://github.com/openvinotoolkit/cvat/pull/4327",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
647329454 | Pf/third party sanity
Added tests for third-party integration, rebased on develop
Jenkins please retry a build
Remove the tests/data/mock_datasets/glue/glue_data/MNLI/cached_dev_roberta_mnli_128_mnli and other cached* files.
Jenkins please retry a build
Jenkins please retry a build
Jenkins please retry a build
| gharchive/pull-request | 2020-06-29T12:24:31 | 2025-04-01T06:45:17.883205 | {
"authors": [
"pfinashx",
"vshampor"
],
"repo": "openvinotoolkit/nncf_pytorch",
"url": "https://github.com/openvinotoolkit/nncf_pytorch/pull/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1668231269 | Fixed crash on arm64 Linux
Description
Please include a summary of the change. Please also include relevant motivation and context. See contribution guidelines for more details. If the change fixes an issue not documented in the project's Github issue tracker, please document all steps necessary to reproduce it.
Fixes # (github issue)
Checklist
General
[ ] Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
[ ] Have you formatted the code using clang-format?
Performance improvements
[ ] Have you submitted performance data that demonstrates performance improvements?
New features
[ ] Have you published an RFC for the new feature?
[ ] Was the RFC approved?
[ ] Have you added relevant tests?
Bug fixes
[ ] Have you included information on how to reproduce the issue (either in a github issue or in this PR)?
[ ] Have you added relevant regression tests?
RFC PR
[ ] Does RFC document follow the template?
[ ] Have you added a link to the rendered document?
merged https://github.com/openvinotoolkit/oneDNN/commit/cd01f845f89844184f2e45982a12b4e327e573d5
| gharchive/pull-request | 2023-04-14T13:16:54 | 2025-04-01T06:45:17.888561 | {
"authors": [
"ilya-lavrenov"
],
"repo": "openvinotoolkit/oneDNN",
"url": "https://github.com/openvinotoolkit/oneDNN/pull/187",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2360441248 | memory leak
Hi, guys. Thanks for your great open-source work. When I use your rlds dataset built on tf.data.Dataset, I found there is a memory leak when train=True. Specifically, when it's true and shuffle is called without cache, the memory gradually increases as the iteration goes on.
https://github.com/openvla/openvla/blob/388244e88555150b67520419932189f459ad74cf/prismatic/vla/datasets/rlds/dataset.py#L572
However, I don't understand why it happens. In my understanding, shuffle preloads a fixed-size buffer of data before iteration and then replaces the used data with new data on the fly during iteration. So the memory usage should stay constant after preloading.
Could you provide some solution to the memory leak? If the leak is hard to solve, could you provide an approximate amount of memory needed to run the whole iteration during training?
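For intuition, the fixed-size buffer behavior described in the question can be sketched in plain Python. This is only an illustrative model of how a shuffle buffer works, not TensorFlow's actual implementation:

```python
import random

def shuffle_buffer(source, buffer_size, seed=0):
    """Model of a fixed-size shuffle buffer: fill it once up front, then
    replace each yielded sample with the next one from the source, so
    steady-state memory stays proportional to buffer_size."""
    rng = random.Random(seed)
    buffer = []
    iterator = iter(source)
    for item in iterator:  # preload phase
        buffer.append(item)
        if len(buffer) == buffer_size:
            break
    for item in iterator:  # steady state: yield one sample, backfill its slot
        idx = rng.randrange(len(buffer))
        yield buffer[idx]
        buffer[idx] = item
    rng.shuffle(buffer)  # source exhausted: drain the remaining items
    yield from buffer

samples = list(shuffle_buffer(range(10), buffer_size=4))
```

Under this model the asker's intuition is right: memory plateaus at roughly buffer_size samples, and any growth beyond that comes from machinery around the buffer (prefetching, decoding, etc.) rather than the buffer itself.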
Hi @yueyang130,
Thank you for the question. Technically, this is not a "memory leak" -- i.e., it is not a bug. The TFDS dataloader may use more memory than expected (likely due to aggressive prefetching or other operations happening under the hood in TFDS), and the memory utilization may look like it will increase forever, but it will actually stop increasing and stay constant at some point. If your system has limited RAM and the training shuffle buffer size is too large, your program may crash before you can observe that the memory utilization eventually plateaus.
Therefore, if you are running into out-of-memory issues while training, we recommend that you decrease the value of shuffle_buffer_size in your VLA training configuration (see openvla/prismatic/conf/vla.py). Every user has a different amount of RAM available, so we cannot recommend a default shuffle buffer size that will work for everyone. But just for reference, in some of our smaller-scale experiments during development, we had 1 TB of RAM available on a single node with 8 A100 GPUs, and we set shuffle_buffer_size to 256_000 (so our effective shuffle buffer size was 256K * 8 == ~2M samples, since we had spawned 8 processes that each had their own shuffle buffer; this is how our dataloader works by default). And this configuration would only be at ~70% system memory utilization even after 100K steps over 4-5 days, which was sufficient for our purposes. For your use case, you can experiment with the shuffle buffer size and try to reduce it while making sure that model training is still stable.
Once you decrease the shuffle_buffer_size to something that works for your machine, you will be able to confirm that there is no "memory leak" because the utilization will stabilize after a certain point.
Thank you for the question and hope this helps!
-Moo Jin
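As a back-of-envelope check on the sizing above (256K samples per process across 8 processes), the effective buffer is just the product, and total RAM scales with the per-sample footprint. The per-sample size used below is a hypothetical illustration, not a number from this thread:

```python
num_processes = 8
per_process_buffer = 256_000
effective_buffer = num_processes * per_process_buffer
assert effective_buffer == 2_048_000  # ~2M samples, matching the figure above

# Hypothetical average footprint per buffered sample (illustrative only):
bytes_per_sample = 300 * 1024  # assume ~300 KiB
ram_for_buffers = effective_buffer * bytes_per_sample
print(f"~{ram_for_buffers / 1024**3:.0f} GiB for shuffle buffers alone")
# prints: ~586 GiB for shuffle buffers alone
```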
Thanks for your kind response. I will try your advice later.
Another question: your dataloader for VLA training doesn't use DistributedSampler (since IterableDataset doesn't allow it...). However, will it result in each torch process having the same data, instead of each process having a separate subset of the dataset?
You're right! I use a machine with larger RAM. 8 processes with shuffle_size=256,000 finally utilize 870G of memory and stop increasing. Btw, could you please provide some explanation / experimental data to illustrate why shuffle is so important?
One question regarding the per-process dataloader setup: I currently use a very similar setup (same dataloader, because it is a very smooth way to load the OXE dataset) but with HF Accelerate instead of directly using torch.nn.parallel.DDP.
Why did you choose to initialize one dataloader per process and not share the memory?
My training speed is currently a little bit low, but I'm not sure if I configured Accelerate wrong or if it is coming from the dataloader. From your code I can see that you experimented with it a little in the run_training function in base strategy with workers = 2. Are there some findings you can report?
I'm currently doing my Master's thesis and still struggle with it, so it would be really nice to get some more insights :)
Hi @yueyang130,
You're right that DistributedSampler doesn't work well with IterableDataset by default. However, each process will not have the same sequence of data samples at training time. This is because when the dataset is initialized at training time, the sharded dataset files (TFRecords) are shuffled before they are loaded. Therefore, assuming that the dataset is sharded into multiple TFRecords and not just a single TFRecord, each process will have a dataloader with a different ordering of training samples. You can verify this by initializing the dataloader for a large dataset -- multiple times across multiple runs -- and observing that the first few samples differ every time. (Note, however, that samples within TFRecords are not shuffled during dataset initialization, so we still need to call dataset.shuffle(shuffle_buffer_size) to ensure that local shuffling is being done.)
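The two levels of shuffling described above (randomizing shard order at initialization, plus local shuffling through a buffer) can be sketched in plain Python. This is a simplified illustration of the idea, not the actual TFDS/TFRecord pipeline:

```python
import random

def shuffled_stream(shard_names, read_shard, buffer_size, seed=0):
    """Two-level shuffle: shard order is randomized once at init (level 1),
    then decoded samples pass through a fixed-size local shuffle buffer
    (level 2, the role played by dataset.shuffle(buffer_size))."""
    rng = random.Random(seed)
    shards = list(shard_names)
    rng.shuffle(shards)  # level 1: load shard files in random order

    buffer = []
    for shard in shards:
        for item in read_shard(shard):  # samples within a shard stay in order
            buffer.append(item)
            if len(buffer) > buffer_size:
                yield buffer.pop(rng.randrange(len(buffer)))
    rng.shuffle(buffer)  # drain whatever is left in the buffer
    yield from buffer

# Toy "shards": four files holding five consecutive sample ids each.
shards = {f"shard-{i}": list(range(i * 5, (i + 1) * 5)) for i in range(4)}
stream = list(shuffled_stream(shards, shards.__getitem__, buffer_size=8))
```

Seeding each process differently (here via seed) gives each process a different shard order and hence a different sample ordering, which is why multiple processes do not see identical data even without a DistributedSampler.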
Great to hear that! Regarding the importance of shuffling: We haven't conducted extensive analysis of the shuffle buffer size and its relation to policy performance for this project due to compute constraints. In the beginning, we simply chose to use a large shuffle buffer -- as much as we could fit in RAM: ~2M samples for single-node training with 8 GPUs, and ~16M for multi-node training with 8 nodes x 8 GPUs each. We did this because the Octo paper mentions that using a larger buffer (e.g., 500K samples) was important for improving policy performance. Also, intuitively, we want training samples to be as independent and identically distributed as possible.
Hi @Toradus,
We opted for the simplest implementation based on native PyTorch Distributed training logic, which automatically spawns one dataloader per process by default if you use torchrun. It is possible to initialize a single dataloader that is shared across processes (the Hugging Face Trainer library may be doinig this already, for example), but we found that this was not necessary to implement given that we observed good training performance without it.
That default number of workers set to 2 was just a default value that had worked well during our development, but it may be system-dependent, so I would recommend that you experiment with a different value if you find that dataloading is the bottleneck in your training speed. However, we observed that dataloading speed is usually not the bottleneck; dataloading is usually much faster than operations with these large models, such as computing gradients and taking gradient steps. I would recommend that you profile different parts of your training script to see what the current bottleneck is before experimenting further with the dataloader!
Hi @moojink,
Thanks a lot for your explanation!
Hi, I observed the memory keeps increasing until it crashed. The memory first quickly increases, and then keeps slowly increasing. Do you have any suggestions?
| gharchive/issue | 2024-06-18T18:20:05 | 2025-04-01T06:45:18.068731 | {
"authors": [
"Toradus",
"moojink",
"yueyang130",
"zhihou7"
],
"repo": "openvla/openvla",
"url": "https://github.com/openvla/openvla/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1590813205 | Figure out where to put the indy-sdk postgres plugin node.js bindings
This currently lives in the @aries-framework/node package, but ideally we extract this out of there, as the indy-sdk is not provided anymore by the agent dependencies. That said, it only adds a depedency on ffi-napi / ref-napi and not the indy-sdk, so we can take some time to figure out the best approach.
We could add it to the indy-sdk module, but we should make sure the code doesn't get bundled in react-native packages in that case
Indy SDK has been removed 🙌
| gharchive/issue | 2023-02-19T19:01:04 | 2025-04-01T06:45:18.075619 | {
"authors": [
"TimoGlastra"
],
"repo": "openwallet-foundation/credo-ts",
"url": "https://github.com/openwallet-foundation/credo-ts/issues/1323",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
822838906 | Unused figaro gem in Gemfile
Is it in the Gemfile for future usage, or should we remove it?
it will be replaced by peatio/app config store
| gharchive/issue | 2021-03-05T07:45:22 | 2025-04-01T06:45:18.076539 | {
"authors": [
"dapi",
"mnaichuk"
],
"repo": "openware/peatio",
"url": "https://github.com/openware/peatio/issues/2846",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
863286044 | [feature] Allow to easily find out REST auth token
We should add a way to easily find out the REST framework auth token of the user logged into the admin, and document this.
@nemesisdesign Where should we document the auth token? IMO we should show it in the navbar dropdown menu.
| gharchive/issue | 2021-04-20T22:16:57 | 2025-04-01T06:45:18.104632 | {
"authors": [
"aagman945",
"nemesisdesign"
],
"repo": "openwisp/openwisp-users",
"url": "https://github.com/openwisp/openwisp-users/issues/240",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
864511138 | [cleanup] Clean format=api HTTP_AUTHORIZATION and mixed quotes in test_filter_classes.py
The tests/testapp/tests/test_filter_classes.py suffers of the following problems:
calls to the browsable API are not done in a clean way; we should make those calls as indicated in https://github.com/openwisp/openwisp-users/pull/239#discussion_r618058751
the quoting style is mixed; we should keep consistency and use only single quotes whenever possible
I would like to work on this if it's not been taken
@hannasalam Yes, you can work on it.
@hannasalam go for it :+1:
| gharchive/issue | 2021-04-22T03:49:03 | 2025-04-01T06:45:18.107274 | {
"authors": [
"ManishShah120",
"hannasalam",
"nemesisdesign"
],
"repo": "openwisp/openwisp-users",
"url": "https://github.com/openwisp/openwisp-users/issues/242",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1741870432 | build cleanups
The mlir-print-op-on-diagnostic=false flag used to be passed separately in N places; some were dropped semi-consciously in #13889. I think this is really a useful flag to have, so let's pass it consistently in one place, in iree_bytecode_module.
Some comments were out of date.
In ukernel/arch/CMakeLists.txt, some if-else chain could be dropped.
We really need to do something that's llvm-cpu-specific though, so if not that, we would need a separate function like iree_llvmcpu_check_test, etc. Having backend-specific args is for now helping share more code.
My point is that this isn't something inherently llvm-cpu specific and just that it currently is constructed that way - layering it with a cpu-specific-wrapper calling the generic impls would be much nicer (call into a common base vs injecting bespoke code paths into the common code) but there are other ways to handle it that still keeps more code common. Anyway, just highlighting that this has a smell to it and that it exists like this today is not going to help the next person who comes along for a non-llvm-cpu target and has to do the same thing.
e.g., iree_trace_runner_test takes TARGET_CPU_FEATURES but all that does is "--iree-llvmcpu-target-cpu-features=${TARGET_CPU_FEATURES}" - that shouldn't be there and instead anything using that rule could just pass that flag as an item in COMPILER_FLAGS. Then iree_single_backend_generated_trace_runner_test that currently takes TARGET_CPU_FEATURES wouldn't need to take it either - it just passes it along and then adds a --requirements= flag - a REQUIREMENTS param could do the same thing and the target cpu features could come on the COMPILER_FLAGS it already takes and passes through. The only thing that seems to do anything with the features is the root iree_generated_trace_runner_test, and that could be forked for a CPU-specific one to start but I suspect it'd be pretty easy to generalize it to TARGET_FEATURES and use it on other targets as well - it's mostly just doing string parsing and concatenation and some normalization of labels/flags/etc could make it work for just about anything.
OK, I see. Yes, we can do that.
| gharchive/pull-request | 2023-06-05T14:07:27 | 2025-04-01T06:45:18.188634 | {
"authors": [
"benvanik",
"bjacob"
],
"repo": "openxla/iree",
"url": "https://github.com/openxla/iree/pull/13947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2530398693 | Remove QUERY_ENABLED environment variable in Dockerfile
query is now available
How about UI_ENABLED?
The image being pushed has the UI disabled, so should this property be explicitly set to true?
| gharchive/pull-request | 2024-09-17T08:22:44 | 2025-04-01T06:45:18.221804 | {
"authors": [
"making"
],
"repo": "openzipkin-contrib/zipkin-otel",
"url": "https://github.com/openzipkin-contrib/zipkin-otel/pull/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
323522101 | zipkin ui can not get service names from /api/v1/services
The number of service names is 320, and the zipkin UI can not get service names from /api/v1/services, but it can get them from /api/v2/services. When I reduced the number of service names, they could be loaded from /api/v1/services.
zipkin version: 2.7.1
Can you give me some help?
Can you try this with the latest Zipkin and check?
Since #1802, the UI now uses the v2 API endpoints. Therefore, I think it's no longer an issue for you, but if it is, please try with the latest version and come chat with us on Gitter: https://gitter.im/openzipkin/zipkin
| gharchive/issue | 2018-05-16T08:40:52 | 2025-04-01T06:45:18.223747 | {
"authors": [
"larrying",
"shakuzen",
"zeagord"
],
"repo": "openzipkin/zipkin",
"url": "https://github.com/openzipkin/zipkin/issues/2050",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
128614624 | Single-sided RPC spans are not recognized for dependency diagram
Specifically, in Cassandra implementation in zipkin-dependencies-spark, not sure about other stores.
There are situations where only the server or the client is instrumented, yet it is able to identify the peer and emit a ClientAddress or ServerAddress annotation within the single-sided span. I want to add a test to the DependencyStoreSpec that will expect the store to return a parent-child link even for this span.
I also want to extend the DependencyLink model to include a boolean flag singleSided that will indicate that even though this link has been detected from the span, the instrumentation is not actually complete, which is useful for adoption tracking stats. Such a link can be displayed in the UI as a dashed line. At minimum I would like to have this flag in the thrift class.
fyi, the change is meant to pass these two tests:
/**
 * In some cases an RPC call is made where one of the two services is not instrumented.
 * However, if the other service is able to emit "sa" or "ca" annotation with a service
 * name, the link can still be constructed.
 */
@Test def getDependenciesOnlyServerInstrumented() = {
  val client = Endpoint(127 << 24 | 1, 9410, "not-instrumented-client")
  val server = Endpoint(127 << 24 | 2, 9410, "instrumented-server")
  val trace = ApplyTimestampAndDuration(List(
    Span(10L, "get", 10L, annotations = List(
      Annotation((today + 100) * 1000, Constants.ServerRecv, Some(server)),
      Annotation((today + 350) * 1000, Constants.ServerSend, Some(server))),
      binaryAnnotations = List(
        BinaryAnnotation(Constants.ClientAddr, true, Some(client))))
  ))
  processDependencies(trace)
  val traceDuration = Trace.duration(trace).get
  result(store.getDependencies(today + 1000)).sortBy(_.parent) should be(
    List(
      new DependencyLink("not-instrumented-client", "instrumented-server", 1)
    )
  )
}

/**
 * In some cases an RPC call is made where one of the two services is not instrumented.
 * However, if the other service is able to emit "sa" or "ca" annotation with a service
 * name, the link can still be constructed.
 */
@Test def getDependenciesOnlyClientInstrumented() = {
  val client = Endpoint(127 << 24 | 1, 9410, "instrumented-client")
  val server = Endpoint(127 << 24 | 2, 9410, "not-instrumented-server")
  val trace = ApplyTimestampAndDuration(List(
    Span(10L, "get", 10L, annotations = List(
      Annotation((today + 100) * 1000, Constants.ClientSend, Some(client)),
      Annotation((today + 350) * 1000, Constants.ClientRecv, Some(client))),
      binaryAnnotations = List(
        BinaryAnnotation(Constants.ServerAddr, true, Some(server))))
  ))
  processDependencies(trace)
  val traceDuration = Trace.duration(trace).get
  result(store.getDependencies(today + 1000)).sortBy(_.parent) should be(
    List(
      new DependencyLink("instrumented-client", "not-instrumented-server", 1)
    )
  )
}
As I am not familiar with other span store implementations, the new test will be disabled there, for now.
Any objections / suggestions?
Actually, it might be better to use an enum {ParentAndChild, ParentOnly, ChildOnly} instead of a boolean flag.
This looks sensible. I'd just name the endpoints "client" and "server" so it is easier to understand (than the prior test).
Actually, it might be better to use an enum {ParentAndChild, ParentOnly, ChildOnly} instead of a boolean flag.

So the crux of this is the single-sided span, i.e. where these annotations are logged into separate span ids, vs the norm in zipkin where the same span includes both client and server sides of the operation. This would impact users who do this intentionally in zipkin, and also those who use HTrace, which only supports single-sided spans.

One key question here is whether or not this aspect should be the job of the aggregator vs the job of the data format.

For example, can you raise visibility of the impacts of having the aggregation job address this case? Ex. leave the result alone, and just address the collapsing etc. before the links are created, or by an intermediate step?
@eirslett there are two issues in this issue maybe you can help us think through.
one is supporting the parent/child case with the dependency link format as exists today, particularly when someone doesn't do the zipkin norm, which is to log client and server events in the same span.
the second issue is a proposal to add flags to the link data type itself. I'm usually sensitive to adding things that look complex, so probably more balanced for someone else to comment besides me.
PS: I deleted my above note, which worried about the {ParentAndChild, ParentOnly, ChildOnly} enum. I don't want to cause FUD. I think this is actually not risking the dependency link itself, and the use case of adoption tracking sounds quite interesting.
It might make the aggregation work more complex (ex more in a SQL query, or another intermediate state), but as far as I can tell doesn't change the cardinality of dependencyLinks (which was my first concern).
@yurishkuro mind bumping this into its own issue? I think I'm on board
I do not think it is necessary.
You can get all unique services by sr/ss, compare them with the unique services by ca, and there you have your stats.
@jcarres-mdsol to be clear, you are suggesting that the concern of reporting uninstrumented edges can be addressed in a different manner. Ex that the dependency graph doesn't need to hold this concern?
Just to clarify what I want to do. I think of Dependency Graph as representing the architecture of the system, showing which services talk to each other. That function should be independent of how different tracers report data on RPC spans - one span or two spans. But at least in the Cassandra Spark job it is implemented to only build a link between two services if their names come from two distinct spans. I want to change that, and allow the job to consider, for example, standalone spans where the other service is still known (from SA or CA), as that still represents a valid architectural link.
As an illustration, look at the issue https://github.com/openzipkin/zipkin/issues/917. For a human it is obvious that there are 3 services involved in that "headless" trace, but the test expects only one link in the output.
So that's the primary objective of this ticket. The enum field is just a bit of extra data to record in each link, since the job now knows which of the two services was actually sending spans that were responsible for the link.
@jcarres-mdsol I am not quite following your suggestion. There is no way to query Cassandra in bulk, a M/R job is the only way to process all the data. Per above, all the information is already present during the main Spark job, so the recording that information in the link seems like the best approach.
@adriancole @jcarres-mdsol I've updated the description with the actual two new tests I am using to test the behavior. So far they only account for the shape of the graph, not for the extra enum describing the edge origin.
@yurishkuro We have services (api gateway services) and databases which only are instrumented at either server or client side and they appear in the dependency overview when using zipkin-dependencies-spark.
@kristofa it does work if the client is instrumented and emits ServerAddress. It does not work if only the server is instrumented, see https://github.com/openzipkin/zipkin/issues/917
Seems to make sense to add support for one-sided spans (where client is instrumented and emits ServerAddress).
I'm hesitant about adding an enum though. I get the impression it's just added schema when it's otherwise implied when reading the dependency graph.
@michaelsembwever the graph only says a->b, it doesn't say if a and b have been instrumented, and that info is critical in driving the adoption efforts in the company.
@yurishkuro
that info is critical in driving the adoption efforts in the company.
that type of rationale bears no weight in any open source project.
and i think you know that.
either promote the idea for what it's worth, or fork the project.
Forking the project isn't always a negative thing. If you've got a company that requires a faster cadence than it can rationalise and get patches accepted at, then it's perfectly normal that the company keeps a fork so it gets what it wants, and can take patience in arguing for and moving ideas back into the main repository. Maintaining a forked repository so it stays close to the original involves a little effort, but a rather trivial amount if you ask me.
In this manner, a company fork that's kept close to the original, is a Capabilities Interface, without weighing the community with the effort of it.
It's nearing 5 years since this idea was floated. Apart from that, the idea of using the dependency diagram to track adoption stats has never really caught on.
Other than that, this issue is notable as we see @michaelsembwever foretell the Forking of Zipkin.
| gharchive/issue | 2016-01-25T19:15:53 | 2025-04-01T06:45:18.239184 | {
"authors": [
"adriancole",
"jcarres-mdsol",
"jorgheymans",
"kristofa",
"michaelsembwever",
"yurishkuro"
],
"repo": "openzipkin/zipkin",
"url": "https://github.com/openzipkin/zipkin/issues/914",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
updated gopsutil to fix Mac deprecation
Integrate changes from https://github.com/openziti/sdk-golang/pull/248
A small cert error check addition, also due to Mac.
| gharchive/pull-request | 2022-04-25T19:02:32 | 2025-04-01T06:45:18.241658 | {
"authors": [
"camotts"
],
"repo": "openziti/ziti",
"url": "https://github.com/openziti/ziti/pull/700",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1056417559 | Copy latest code from private to public repository
This PR updates the GitHub PROTEUS repository from the private DSWx-Optical repository after updates on top of the interface (IF) delivery.
Although these are really great comments, the sheer number of them seems overwhelming to me ;). I don't know how we, I mean @gshiroma ;), could handle them all in one PR. To me, a first working version is more important than a non-working version; although it could be further improved, it still deserves a place in the commit history. So maybe take care of the small changes, and leave the big ones (doc style, logging, unit tests, etc.) for another PR or issues?
Thank you, @yunjunz ! I created the Python module and named it modules. I also removed bin/__init__.py following your suggestion. Thanks!
Hi @gshiroma, the python module name should be more meaningful and unique, such as isce or compass. It is meant to be used in Python as import compass for example.
On the repo structure, I would recommend using https://github.com/opera-adt/sentinel1-reader as the reference: everything that the python module depended on should be inside one top-level folder, this includes the defaults and schemas. I would imagine a structure as:
PROTEUS
/bins #all scripts inside will be copied to "miniconda3/envs/opera/bin" folder after installation via pip/conda
/docs #for all non-jupyter-notebook documentations
/src/proteus #this folder will be copied to "miniconda3/envs/opera/lib/python3.8/site-packages" after installation via pip/conda
/defaults
/schemas
/tests #for unit tests
Thank you, @yunjunz ! I updated the code to follow your suggested directory structure. I also had to recreate bin/__init__.py because the command python3 setup.py sdist was complaining that the file didn't exist. I hope I didn't do anything wrong.
Hi @gshiroma, I committed a few changes to the code structure, as described above. There are a few more things to change:
We should use setuptools for code installation because distutils is deprecated and planned to be removed in python 3.12 (https://docs.python.org/3/library/distutils.html). For setuptools approach, only the following 3 files are needed to setup the installation:
MANIFEST.in
pyproject.toml
setup.cfg or setup.py
Please git rebase this PR against the latest main branch.
Thank you for the updates, @yunjunz ! Changes look great!
Hi all , I addressed all review comments that we intended to address on this initial release. Hopefully, we can merge it soon so I can move on to the next updates. Please, let me know if you have any other concerns or comments. Thank you.
Thank you @gshiroma for addressing all the comments. This PR is close to being merged. I put in one comment to move the thresholds back into the code instead of exposing them to users.
| gharchive/pull-request | 2021-11-17T17:46:49 | 2025-04-01T06:45:18.250705 | {
"authors": [
"gshiroma",
"hfattahi",
"yunjunz"
],
"repo": "opera-adt/PROTEUS",
"url": "https://github.com/opera-adt/PROTEUS/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
961402207 | No Longer needed?
I had the extension disabled and checked a chrome extension, and the "add to Opera" button was still there. And it worked too!
Then I tried uninstalling the "chrome webstore extension" and tried another chrome extension, and it still worked!
I'm using Opera GX LVL3 (core: 77.0.4054.275), if that is needed.
Yes, installing ~99% of Chrome extensions should be possible (don't install themes or web apps; these are not supported).
Rather, it has already been possible in "GX" since version 76, as the GX version appeared then:
https://blogs.opera.com/desktop/changelog-for-76/
https://techdows.com/2020/12/opera-gets-native-chrome-web-store-extensions-installation-support.html (https://blogs.opera.com/desktop/changelog-for-74/)
| gharchive/issue | 2021-08-05T03:54:07 | 2025-04-01T06:45:18.255537 | {
"authors": [
"Dividedby0KSJ",
"krystian3w"
],
"repo": "operasoftware/chrome-webstore-extension",
"url": "https://github.com/operasoftware/chrome-webstore-extension/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
831539146 | [DO NOT MERGE] pr in development
Signed-off-by: Martin Vala mavala@redhat.com
/ok-to-test
| gharchive/pull-request | 2021-03-15T08:05:22 | 2025-04-01T06:45:18.293718 | {
"authors": [
"mvalarh"
],
"repo": "operator-framework/community-operators",
"url": "https://github.com/operator-framework/community-operators/pull/3288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2157510838 | Add a check for new required field
This builds on top of #661 and focuses on adding a check to ensure that performing an update operation to a CustomResourceDefinition does not result in the addition of a new required field.
If the proposal for adding this functionality to carvel-dev/kapp has been accepted prior to starting this work, all changes should be made against carvel-dev/kapp; otherwise the changes should be made against https://github.com/everettraven/kapp/tree/feature/crd-upgrade-safety-preflight
The proposal has been merged and a new issue has been created in carvel-dev/kapp to track this change. In turn, this is now a placeholder/tracker issue for https://github.com/carvel-dev/kapp/issues/911
Identified as obsolete. Closing.
| gharchive/issue | 2024-02-27T20:05:15 | 2025-04-01T06:45:18.296582 | {
"authors": [
"everettraven"
],
"repo": "operator-framework/operator-controller",
"url": "https://github.com/operator-framework/operator-controller/issues/663",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
578857203 | [release-4.3] Bug 1807631: Duplicate packages in packageserver APIService response
Description of the change:
4.3 backport of bugfix - see BZ for details.
Motivation for the change:
Continuation of #1330
Reviewer Checklist
[ ] Implementation matches the proposed design, or proposal is updated to match implementation
[ ] Sufficient unit test coverage
[ ] Sufficient end-to-end test coverage
[ ] Docs updated or added to /docs
[ ] Commit messages sensible and descriptive
@exdx Looks like there was some bad merging in the e2e tests. Can you fix?
/retest
/retest
/retest
I am a little concerned that the console test failing might not be a flake, but let's see if it passes.
/test e2e-aws-console-olm
@spadgett could you take a look at this CI result and confirm that there is an issue with this patch going into 4.3 on the console side? I haven't seen this test pass so it doesn't seem like a flake. Thanks!
@exdx This doesn't look like it's just a flake. We expect to get the Jaeger Tracing operator form in the UI test:
https://github.com/openshift/console/blob/release-4.3/frontend/packages/operator-lifecycle-manager/integration-tests/scenarios/global-installmode.scenario.ts#L99
But we're seeing a form for the PlanetScale operator:
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/operator-framework_operator-lifecycle-manager/1363/pull-ci-operator-framework-operator-lifecycle-manager-release-4.3-e2e-aws-console-olm/94/artifacts/e2e-aws-console-olm/gui_test_screenshots/855255f4ed0cc57854a627f6f405b101.png
Interesting, thanks for the quick feedback. I guess I will look into this more. This did make it into 4.5 and 4.4 with no UI issues though, so I'm curious what the difference could be here.
In order to backport this fix to 4.3 I had to cherry-pick code that enabled the packageserver to List by a label selector. Is this UI code depending on that functionality? It's not apparent from the test, but if the console is trying to select based on name=jaeger and that label is not being propagated down to the packageserver, it's possible it's getting the wrong response?
It looks like we use fieldSelector and labelSelector. The request looks like
/apis/packages.operators.coreos.com/v1/namespaces/openshift-marketplace/packagemanifests?limit=250&labelSelector=catalog%3Dcommunity-operators&fieldSelector=metadata.name%3D3scale-community-operator
https://github.com/openshift/console/blob/release-4.3/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-subscribe.tsx#L437-L450
Interesting. Looks like the label selector is being used to specify the queried catalog as community-operators; it's the field selector that is filtering based on metadata.name and returning the PlanetScale operator instead of Jaeger. We haven't edited the field selector capability at all.
I think it's possible some of the code that was removed was supplying the correct response based on the field selectors https://github.com/operator-framework/operator-lifecycle-manager/pull/1363/files#diff-bf57e0360adde6e4fd72252ca39438c7L119 - I will have to look into it and potentially add some back.
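As an aside, the console request quoted earlier can be reconstructed programmatically. This is a minimal sketch (the packagemanifest_url helper is hypothetical; only the path and the selector parameters come from the request above):

```python
from urllib.parse import urlencode

def packagemanifest_url(namespace, catalog, name, limit=250):
    """Build the packagemanifests list path the console issues:
    labelSelector filters by catalog, fieldSelector by package name."""
    query = urlencode({
        "limit": limit,
        "labelSelector": "catalog=%s" % catalog,
        "fieldSelector": "metadata.name=%s" % name,
    })
    return ("/apis/packages.operators.coreos.com/v1"
            "/namespaces/%s/packagemanifests?%s" % (namespace, query))

url = packagemanifest_url("openshift-marketplace", "community-operators",
                          "3scale-community-operator")
# urlencode percent-encodes the selector values, e.g. '=' becomes %3D,
# which matches the shape of the request the console sends.
```

This makes it easy to see which part of the query is the label selector (catalog) and which is the field selector (package name) when comparing against the packageserver's behavior.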
/retest
/lgtm
/bugzilla refresh
| gharchive/pull-request | 2020-03-10T20:58:18 | 2025-04-01T06:45:18.308161 | {
"authors": [
"ecordell",
"exdx",
"njhale",
"spadgett"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/1363",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
602826258 | Catch failed Ginkgo assertion in test goroutine.
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/operator-framework_operator-lifecycle-manager/1462/pull-ci-operator-framework-operator-lifecycle-manager-master-e2e-aws-olm/5098
/lgtm
/lgtm
/retest
/retest
| gharchive/pull-request | 2020-04-19T21:36:47 | 2025-04-01T06:45:18.310050 | {
"authors": [
"benluddy",
"ecordell",
"kevinrizza",
"njhale"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/1465",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
341589862 | readme: add protocol to URL
If copy-pasted or typed verbatim, the clone would fail.
@fanminshi Done.
| gharchive/pull-request | 2018-07-16T16:30:32 | 2025-04-01T06:45:18.311108 | {
"authors": [
"rdeusser"
],
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/pull/348",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
170129086 | Print statement
Hi
Please add parentheses to all the print statements:
print("stuff to print")
not
print "stuff to print"
The second one makes everything crash with Python 3.x.
I replaced all of it in my current PR but please remember it for the future.
Do we need Python 3?
We're not the only ones using this, and some (will) use Python 3.
Anyway, a good point to make is that we shouldn't use print statements but the Devito logger class.
We should never use print. We should rather use the logger module. You can use parentheses in there if you think they are needed.
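A short sketch of that direction, using the stdlib logging module as a stand-in (Devito's own logger class may expose a different interface):

```python
import logging

logger = logging.getLogger("devito")

# Instead of:  print "stuff to print"     (Python 2 only; a SyntaxError on 3.x)
# or even:     print("stuff to print")
# route diagnostics through the logger:
logger.warning("stuff to print")
```

Besides avoiding the print-statement syntax problem entirely, this lets verbosity be controlled in one place via logging levels and handlers.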
| gharchive/issue | 2016-08-09T10:06:52 | 2025-04-01T06:45:18.313803 | {
"authors": [
"FabioLuporini",
"mloubout",
"vincepandolfo"
],
"repo": "opesci/devito",
"url": "https://github.com/opesci/devito/issues/79",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
261273331 | Travis: Get utility packages from conda-forge
This should get our Travis testing back online and green (he said hopefully). :wink:
Ok, merging....
| gharchive/pull-request | 2017-09-28T10:45:13 | 2025-04-01T06:45:18.314664 | {
"authors": [
"mlange05"
],
"repo": "opesci/devito",
"url": "https://github.com/opesci/devito/pull/363",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1335026796 | security: Move to vici instead of files
This commit moves away from writing strongSwan configuration files
and instead uses the vici API to program connections.
Related to #220
Signed-off-by: Kyle Mestery mestery@mestery.com
great work @mestery keep it going
| gharchive/pull-request | 2022-08-10T18:36:23 | 2025-04-01T06:45:18.392376 | {
"authors": [
"glimchb",
"mestery"
],
"repo": "opiproject/opi-poc",
"url": "https://github.com/opiproject/opi-poc/pull/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
346449896 | Wrong user / password after upgrade to 18.7
On one system (a VM on Proxmox), in transparent firewall mode: after upgrading to 18.7, the root user can't log in to the GUI or via SSH. Restoring 18.1.13 and creating an "admin" user (added to the admin group) before the upgrade gives the same error after upgrading to 18.7: unable to log in to the GUI or via SSH.
Also tried upgrading via the console: same problem. No errors were shown during the upgrade...
https://forum.opnsense.org/index.php?topic=9284.0
Thanks.
| gharchive/issue | 2018-08-01T05:08:18 | 2025-04-01T06:45:18.408277 | {
"authors": [
"abplfab",
"fichtner"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/2593",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1077936754 | Edit Alias enhancement suggestion
Rather than a content box full of addresses, a two-column table of addresses and descriptions would be more humanly useful.
This is a good idea, and would be helpful in better tracking what things are. However, this would require an ArrayField within an ArrayField which isn't possible given the design of the current model structure.
I did a mock up of what it might look like regardless though:
| gharchive/issue | 2021-12-12T23:20:18 | 2025-04-01T06:45:18.410149 | {
"authors": [
"NOYB",
"agh1467"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/5403",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2429566731 | API calls never succeed since 24.7
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[ X ] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[ X ] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug
The last release (24.7) introduced a bug, where API calls using an API token almost never work for POST requests.
The issue is most probably caused by this commit:
https://github.com/opnsense/core/commit/d7d016f400fbcc30c29973b1315db12055cce0f7
It deletes the call to $this->parseJsonBodyData(), which causes JSON data never to be read into the $_POST variable.
For example, $this->request->hasPost($post_field) will always be false.
Sample from: ApiMutableModelControllerBase.php
public function setBase($post_field, $path, $uuid, $overlay = null)
{
if ($this->request->isPost() && $this->request->hasPost($post_field) && $uuid != null) {
$mdl = $this->getModel();
To Reproduce
POST http://172.17.128.2/api/wireguard/server/setServer/f74b989b-a885-4600-9c55-a688bba87f69 HTTP/1.1
Host: 172.17.128.2
Content-Type: application/json
Authorization: Basic XXX
Accept: */*
Content-Length: 337
{
"server": {
"enabled": "1",
"name": "wg0",
"pubkey": "8mmkypKcr9mOgpA1/Lhj0Xk54WXrIj4eynwD43XL1H4=",
"privkey": "CM6fAXgN1Th7jz5m9vxNq0jT+Akzlfm4VVr+20AQC3k=",
"port": "443",
"mtu": "",
"dns": "",
"tunneladdress": "10.0.0.1,10.0.0.3",
"carp_depend_on": "",
"peers": "",
"disableroutes": "0",
"gateway": ""
}
}
which always responds with
{
"result": "failed"
}
The exact same request using interactive authentication succeeds.
Expected behavior
The API call should work regardless of the used authentication method.
Describe alternatives you considered
Doing an "interactive" login for services when API keys exist is not a very good way to go.
Screenshots
Not applicable
Relevant log files
Not applicable
Additional context
Not applicable
Environment
OPNsense 24.7-amd64
HyperV
looks like I cleaned up a bit too much here, will post a patch asap
https://github.com/opnsense/core/commit/9024abe3f8f2cec10c037f8a7d84bf20fa13b2d1 should fix this, easy to apply using:
opnsense-patch 9024abe
| gharchive/issue | 2024-07-25T10:06:58 | 2025-04-01T06:45:18.418342 | {
"authors": [
"AdSchellevis",
"svenso"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/7645",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2488431501 | firmware: os-cpu-microcode plugins cross install will force the other one as missing
Important notices
Our forum is located at https://forum.opnsense.org , please consider joining discussions there in stead of using GitHub for these matters.
Before you ask a new question, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
A minor annoyance is that installing the Intel version first and then AMD will mark the Intel one as missing instead of removing it from the config.xml. It doesn't know any better. Best to add more glue to register.php to handle such edge cases, like we have with opnsense/core#7195 now anyway.
Ok, vaguely recall AMD did not require the cpu_microcode_name but maybe it does for the early update now...
I'll leave all those things there for now for testing some magic hackery. Cheers! 😉
Correct. We have a testing branch and kernel for that: https://github.com/opnsense/src/tree/amd_early
First bit is 5d346589ed4431a048aeece4840c75d4bac9d753, now back to core lol
| gharchive/issue | 2024-08-26T18:10:59 | 2025-04-01T06:45:18.423618 | {
"authors": [
"doktornotor",
"fichtner"
],
"repo": "opnsense/plugins",
"url": "https://github.com/opnsense/plugins/issues/4202",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
794960506 | Replace nocoin-justdomains.txt with the original CoinBlockerLists
Hi @mimugmail,
I replaced the list because CoinBlockerLists are more up-to-date and contain more URLs than other lists, which are poor copies of CoinBlockerLists.
CoinBlockerLists Homepage:
https://zerodot1.gitlab.io/CoinBlockerListsWeb/index.html
If necessary, there are also other lists on the homepage see:
https://zerodot1.gitlab.io/CoinBlockerListsWeb/downloads.html
If you have any questions please let me know.
unbound plus is moved into core, only merges to the master branch are considered for adoption.
For me it all looks a bit messed up, in which repository should I put it?
https://github.com/opnsense/core
Please change it yourself, this is a unique mess (I could not find the corresponding file directly.). It is no fun to contribute here.
It is not against any of you, but the project should be sorted in a consistent way. Such a mess simply makes no sense.
Sorry.
Fair enough ;)
| gharchive/pull-request | 2021-01-27T10:12:34 | 2025-04-01T06:45:18.429485 | {
"authors": [
"AdSchellevis",
"ZeroDot1",
"fichtner"
],
"repo": "opnsense/plugins",
"url": "https://github.com/opnsense/plugins/pull/2206",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
58611371 | Removed password details from output
Changes to hide password from chef server installation output.
Looks good.
| gharchive/pull-request | 2015-02-23T16:36:42 | 2025-04-01T06:45:18.522960 | {
"authors": [
"adamedx",
"siddheshwar-more"
],
"repo": "opscode-cookbooks/chef-server-image",
"url": "https://github.com/opscode-cookbooks/chef-server-image/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
63736025 | need this attribute to be writable, from outside of the resource
Hi @martinb3
I thought I could work around needing to define this as a writable attribute, but it appears I really need it ;-)
Without this, I can't modify the resource for the zap provider when cleaning up the non-chef-defined rules.
Would you be so kind as to include this?
Thanks,
Ronald
Hi! It seems like bad practice to have both of these:
attr_accessor :raw
attribute(:raw, :kind_of => String)
I'm honestly not even sure what happens when you put these together; can you explain what these two lines do in combination? Can you post a code example or a failing test that would be fixed by merging this PR?
Hi @martinb3,
The way I understood it, attr_accessor makes the attribute readable and writable from outside the resource, whereas the attribute() command describes the type of the attribute.
If, however, these two lines can be combined into one via additional parameters, that would suit my purpose too. Having them defined separately will not cause errors, though, as they provision different settings.
Behind the scenes, it will allow something like:
r = Chef::Resource::FirewallRule.new("test")
r.raw = "-A INPUT -s 127.0.0.1 -j ALLOW"
@run_context.resource_collection << r
which is basically what the zap cookbook does in this piece of code (though the above is a simplification of what it's doing):
https://github.com/rdoorn/zap/blob/master/libraries/firewall_iptables.rb
It generates a new rule to remove the non-chef-defined rules based on the output, and then adds this to the run list for processing.
If the "raw" variable is not writable, you'll get the error:
NoMethodError: undefined method `raw=' for Chef::Resource::FirewallRule
as the current resource has no access to change the raw attribute without having the attr_accessor set.
If you know of alternatives, I'd be happy to hear them, of course!
Kind Regards,
Ronald
@rdoorn Can you try this instead? It's a method already (no equals sign):
r.raw rule
I was able to get this to work with no problem:
fr = Chef::Resource::FirewallRule.new('open port 9999', run_context)
fr.raw 'INPUT -s 1.2.3.4/32 -d 5.6.7.8/32 -i lo -p tcp -m tcp -m state --state NEW -m comment --comment "hello" -j DROP'
fr.run_action :allow
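The resolution above hinges on Chef's DSL attributes being methods that set when given an argument and read back when called without one. A minimal Python analog of that pattern (illustrative only, not Chef's actual implementation):

```python
class FirewallRule:
    """Minimal Python analog of a Chef-style DSL attribute:
    a single method that sets when called with a value and
    reads back when called without one."""

    _UNSET = object()  # sentinel so that None remains a storable value

    def __init__(self, name):
        self.name = name
        self._raw = None

    def raw(self, value=_UNSET):
        if value is self._UNSET:
            return self._raw   # getter: raw()
        self._raw = value      # setter: raw('...'), no '=' needed
        return self._raw


rule = FirewallRule("open port 9999")
rule.raw('INPUT -s 1.2.3.4/32 -p tcp --dport 9999 -j DROP')
print(rule.raw())  # prints the stored rule string
```

This is why `fr.raw rule` works without any attr_accessor: the "assignment" is just a method call with an argument.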
@martinb3, you're a star!
Apparently I've been breaking my head over something that's too simple to solve ;-)
Thanks!, I'll close the pull request :)
Kind Regards,
Ronald
| gharchive/pull-request | 2015-03-23T14:26:23 | 2025-04-01T06:45:18.529838 | {
"authors": [
"martinb3",
"rdoorn"
],
"repo": "opscode-cookbooks/firewall",
"url": "https://github.com/opscode-cookbooks/firewall/pull/46",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
55028606 | remove comment
Don't we have a policy for this?
No but I agree we should remove it.
| gharchive/pull-request | 2015-01-21T14:54:45 | 2025-04-01T06:45:18.530858 | {
"authors": [
"patrick-wright",
"schisamo"
],
"repo": "opscode/artifactory-client",
"url": "https://github.com/opscode/artifactory-client/pull/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
721698148 | Immutably owned clones
My cloned proxy deploys cost less than 1/10th of what they did before I found EIP 1167, so thanks for the awesome contract!
Some projects with vote locking (such as https://dao.curve.fi) require smart wallets to have restrictions on transfer in order to be approved. I've been thinking of how to accomplish this and I have a working, but likely not optimal, solution (which also uses CREATE2 and so is one way to solve #18).
At first I was just going to remove the functions for changing the owner, but because the proxy contract has a delegatecall, it would be able to change the state anyway.
Then I was going to try using the new immutable keyword, but we are using an init function instead of a standard constructor to do the setup.
Here's my current solution:
My factory contract appends the immutable owner's address to the end of the contract. This ensures that create2 gives a new address for every user.
https://github.com/SatoshiAndKin/argobytes-contracts-brownie/blob/slim/contracts/abstract/clonefactory/CloneFactory.sol#L56
And then my proxy contract gets its owner from the contract's code:
https://github.com/SatoshiAndKin/argobytes-contracts-brownie/blob/slim/contracts/abstract/ArgobytesAuth.sol#L41
My testnet deploy script works and I'm working on adding more tests this week. I wanted to post this here in case it's helpful for someone else (or in case it's a terrible idea and I should be stopped). I'm very new to assembly, so this is probably not perfect. Any suggestions are welcome.
Made a bunch of improvements.
https://github.com/SatoshiAndKin/argobytes-contracts-brownie/blob/master/contracts/abstract/clonefactory/CloneFactory.sol
https://github.com/SatoshiAndKin/argobytes-contracts-brownie/blob/master/contracts/abstract/clonefactory/CloneOwner.sol
https://github.com/SatoshiAndKin/argobytes-contracts-brownie/blob/master/contracts/abstract/ArgobytesAuth.sol
Hey @WyseNynja, thanks for sharing your findings here. I found myself needing immutable ownership as well so this is helpful.
Any chance you could update the links you shared? They lead to "404 Not Found" pages now. Pro tip: if you hit CTRL/CMD+Y while viewing a file in a repo, GitHub will add the current commit to the browser URL.
Then I was going to try using the new immutable keyword, but we are using an init function instead of a standard constructor to do the setup.
Could you shed some light on this? Is it that only your specific setup requires an init function, or that no clone deployed with EIP-1167 can use the usual Solidity constructor?
I'm still coming to grips with how this clone factory works.
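For anyone following along, the byte layout described in this thread can be sketched in Python. The 45-byte runtime code below is the standard EIP-1167 minimal proxy; appending the 20-byte owner at the end is the technique discussed above, and reading it back from offset 45 is an assumption about how the clone recovers its owner (the actual contracts live in the linked repo):

```python
def _addr_bytes(addr: str) -> bytes:
    # Accept "0x"-prefixed or bare hex; enforce a 20-byte address.
    h = addr[2:] if addr.startswith("0x") else addr
    b = bytes.fromhex(h)
    assert len(b) == 20, "expected a 20-byte address"
    return b

def clone_runtime_code(implementation: str, owner: str) -> bytes:
    """EIP-1167 minimal-proxy runtime code for `implementation`,
    with the immutable `owner` appended after the 45 standard bytes.
    Because the owner changes the deployed code, CREATE2 produces a
    distinct clone address for every owner."""
    prefix = bytes.fromhex("363d3d373d3d3d363d73")          # delegatecall preamble
    suffix = bytes.fromhex("5af43d82803e903d91602b57fd5bf3")
    return prefix + _addr_bytes(implementation) + suffix + _addr_bytes(owner)

code = clone_runtime_code("0x" + "11" * 20, "0x" + "22" * 20)
assert len(code) == 45 + 20   # 45-byte standard proxy + 20-byte owner
owner = code[45:]             # on-chain: EXTCODECOPY from offset 45
```

Since the proxy only ever reads the trailing bytes, the owner is effectively immutable even though the proxy delegatecalls arbitrary logic.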
| gharchive/issue | 2020-10-14T18:50:07 | 2025-04-01T06:45:18.566950 | {
"authors": [
"WyseNynja",
"paulrberg"
],
"repo": "optionality/clone-factory",
"url": "https://github.com/optionality/clone-factory/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
981882171 | Totsugeki not opening
I have downloaded totsugeki.exe, but when I try to run the executable nothing happens. I've tried disabling the firewall, and there is no anti-virus that could be interfering.
Which version of Totsugeki? What do you mean "nothing happens"? Does Totsugeki launch at all?
No Totsugeki process starts at all. I'm running version 1.3.0, released after the Jack-O release.
What happens if you try to launch Totsugeki through the command prompt? (https://www.howtogeek.com/235101/10-ways-to-open-the-command-prompt-in-windows-10/)
You can drag and drop Totsugeki into the command prompt (and then hit enter) to run it.
Glad it works for you, but kinda weird Totsugeki won't start normally. Not sure what's going on here.
Gonna close this out for now. Feel free to reopen if this is happening again.
| gharchive/issue | 2021-08-28T17:04:05 | 2025-04-01T06:45:18.571732 | {
"authors": [
"alexprimeiro",
"optix2000"
],
"repo": "optix2000/totsugeki",
"url": "https://github.com/optix2000/totsugeki/issues/45",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1898268384 | Error: 400-InvalidParameter, Compartment {ocid1.compartment.oc1....} does not exist or is not part of the policy compartment subtree
Hi,
I am trying to deploy the OELZ version https://github.com/oracle-quickstart/oci-landing-zones/releases/tag/v2.1.2
with Resource Manager.
It failed during the creation of "module.prod_environment.module.workload.module.workload_osms_dg_policy.oci_identity_policy.policy".
Below is the plan and error log.
PLAN:
module.prod_environment.module.workload.module.workload_osms_dg_policy.oci_identity_policy.policy will be created
resource "oci_identity_policy" "policy" {
ETag = (known after apply)
compartment_id = "ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq"
defined_tags = (known after apply)
description = "Workload OCI Landing Zone OS Management Service Dynamic Group Policy"
freeform_tags = (known after apply)
id = (known after apply)
inactive_state = (known after apply)
lastUpdateETag = (known after apply)
name = "OCI-P-ELZ-Workload1-OSMS-DG-Policy"
policyHash = (known after apply)
state = (known after apply)
statements = [
"Allow dynamic-group OCI-P-ELZ-Workload1-DG to read instance-family in compartment ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq",
"Allow dynamic-group OCI-P-ELZ-Workload1-DG to use osms-managed-instances in compartment ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq",
]
time_created = (known after apply)
version_date = (known after apply)
}
Error log:
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Error: 400-InvalidParameter, Compartment {ocid1.compartment.oc1..aaaaaaaaacxbinb6bzfo36g4i55radlsepp2pb53aekcx6x3cqoj72udt65q} does not exist or is not part of the policy compartment subtree
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Suggestion: Please update the parameter(s) in the Terraform config as per error message Compartment {ocid1.compartment.oc1..aaaaaaaaacxbinb6bzfo36g4i55radlsepp2pb53aekcx6x3cqoj72udt65q} does not exist or is not part of the policy compartment subtree
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/identity_policy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] API Reference: https://docs.oracle.com/iaas/api/#/en/identity/20160918/Policy/CreatePolicy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Request Target: POST https://identity.eu-milan-1.oci.oraclecloud.com/20160918/policies
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Provider version: 5.1.0, released on 2023-06-13. This provider is 13 Update(s) behind to current.
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Service: Identity Policy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Operation Name: CreatePolicy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] OPC request ID: 534cea82b65ffa33a5559b4898870d0b/FFDE662416A3FDCFE7F6BC31689BCF76/F0F0C07537FCD17CDFE25B91C0522329
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] with module.nonprod_environment.module.workload.module.workload_osms_dg_policy.oci_identity_policy.policy,
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] on ../../modules/policies/main.tf line 9, in resource "oci_identity_policy" "policy":
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] 9: resource "oci_identity_policy" "policy" {
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Error: 400-InvalidParameter, Compartment {ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq} does not exist or is not part of the policy compartment subtree
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Suggestion: Please update the parameter(s) in the Terraform config as per error message Compartment {ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq} does not exist or is not part of the policy compartment subtree
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/identity_policy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] API Reference: https://docs.oracle.com/iaas/api/#/en/identity/20160918/Policy/CreatePolicy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Request Target: POST https://identity.eu-milan-1.oci.oraclecloud.com/20160918/policies
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Provider version: 5.1.0, released on 2023-06-13. This provider is 13 Update(s) behind to current.
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Service: Identity Policy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] Operation Name: CreatePolicy
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] OPC request ID: bd820545a1472ab2b697394e0091ab98/C2CFD28C6FCD5C5381D4E0FE17109682/F3A8FC75899AB9D4307E7A9AB7300706
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] with module.prod_environment.module.workload.module.workload_osms_dg_policy.oci_identity_policy.policy,
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] on ../../modules/policies/main.tf line 9, in resource "oci_identity_policy" "policy":
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO] 9: resource "oci_identity_policy" "policy" {
2023/09/15 09:25:30[TERRAFORM_CONSOLE] [INFO]
2023/09/15 09:25:31[TERRAFORM_CONSOLE] [INFO]
The statements of the policy are wrong. The correct ones are:
statements = [
"Allow dynamic-group OCI-P-ELZ-Workload1-DG to read instance-family in compartment in ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq",
"Allow dynamic-group OCI-P-ELZ-Workload1-DG to use osms-managed-instances in compartment in ocid1.compartment.oc1..aaaaaaaawmbt6l4wgvf6zdwchpfds3pni2egwt5cujfxshgy4ulax2rb4dcq",
]
The correct syntax is explained here: https://docs.oracle.com/en-us/iaas/Content/Identity/Concepts/policysyntax.htm#:~:text=To specify a compartment by OCID
The policy statement needs to be changed in the "security.tf" file of the "elz-workload" module, specifically "local.osms_dg_policy_workload.statements".
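To make the fix explicit, the by-OCID form of these statements can be generated like this (an illustrative Python sketch of the string layout, not the actual Terraform local in security.tf; per OCI policy syntax, a compartment referenced by OCID uses `in compartment id <ocid>`):

```python
def osms_dg_statements(dynamic_group: str, compartment_ocid: str) -> list:
    """Render the two OSMS dynamic-group policy statements,
    referencing the compartment by OCID ('in compartment id <ocid>')."""
    scope = "in compartment id {}".format(compartment_ocid)
    return [
        "Allow dynamic-group {} to read instance-family {}".format(dynamic_group, scope),
        "Allow dynamic-group {} to use osms-managed-instances {}".format(dynamic_group, scope),
    ]

for s in osms_dg_statements("OCI-P-ELZ-Workload1-DG", "ocid1.compartment.oc1..example"):
    print(s)
```

The original failing statements omitted the `id` keyword, so the OCID was parsed as a compartment name that does not exist in the policy compartment subtree.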
Fixed the issue on latest release v2.2.0.
| gharchive/issue | 2023-09-15T11:51:26 | 2025-04-01T06:45:18.628582 | {
"authors": [
"Reply-emgargano",
"VinayKumar611",
"fpacilio"
],
"repo": "oracle-quickstart/oci-landing-zones",
"url": "https://github.com/oracle-quickstart/oci-landing-zones/issues/100",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
953002081 | Error executing start-db-service.sh Update 4 (Quick Start)
I'm doing the Quick Start, but I can't finish Update 4.
I'm getting two errors when I execute start-db-service.sh:
The first error I solved:
-bash: ./start-db-service.sh: /bin/bash^M: bad interpreter: No such file or directory
The first error occurs because I'm on Windows. I solved this problem with this command:
sed -i -e 's/\r$//' start-db-service.sh
Maybe it would be good to mention this in the documentation.
The second error, I didn't solve:
After executing the script start-db-service.sh, I'm getting "Status is NotReady Iter [N/60]", then the error "[ERROR] Unable to start the Pod".
Environment
Windows 10
WebLogic version: 12.2.1.4
Oracle Database: 12.2.0.1-slim
I have pushed from the master.
Git commit ID: 59b297b30de8cc773ee5c4175df8b86370e13dd6
Date: Tue Jul 20 11:09:03 2021
I discovered the cause.
The problem is that I don't have enough memory.
I fixed it by changing the memory properties in the configuration file "oracle.db.yaml".
| gharchive/issue | 2021-07-26T15:04:56 | 2025-04-01T06:45:18.692172 | {
"authors": [
"ThiagoBfim"
],
"repo": "oracle/weblogic-kubernetes-operator",
"url": "https://github.com/oracle/weblogic-kubernetes-operator/issues/2481",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
300144860 | Patch netcat command to make it compatible with CentOS stemcell
The Grafana admin password shell script uses the "nc -z" command to test Grafana TCP port availability (jobs/grafana/templates/bin/grafana-admin-password).
I propose to replace (line 8):
if nc -z 127.0.0.1 <%= p('grafana.server.http_port') %>; then
by
if nc 127.0.0.1 <%= p('grafana.server.http_port') %> < /dev/null; then
This command works on both Ubuntu and CentOS stemcells.
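For context, both nc invocations are just testing whether a TCP connection to the port succeeds; the same check can be written portably without netcat at all (an illustrative Python sketch, not part of the boshrelease):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Equivalent of `nc -z host port`: True if a TCP connection
    can be established within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `nc ... < /dev/null` form works on both stemcells because it avoids the `-z` flag, whose behavior differs between the BSD and traditional netcat variants shipped by Ubuntu and CentOS.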
LGTM
| gharchive/pull-request | 2018-02-26T08:21:00 | 2025-04-01T06:45:18.693965 | {
"authors": [
"aveyrenc",
"osaluden"
],
"repo": "orange-cloudfoundry/prometheus-boshrelease",
"url": "https://github.com/orange-cloudfoundry/prometheus-boshrelease/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1069067773 | Memory Target
Resolves #54.
Add test for oras.Copy() using the Memory target.
I believe this addresses @deitch's earlier questions. It would be good to have an LGTM from @deitch.
I was going to merge this in, but don't know if you are ready for it @shizhMSFT .
@deitch I'm ready to merge it.
| gharchive/pull-request | 2021-12-02T02:43:33 | 2025-04-01T06:45:18.704546 | {
"authors": [
"deitch",
"sajayantony",
"shizhMSFT"
],
"repo": "oras-project/oras-go",
"url": "https://github.com/oras-project/oras-go/pull/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1932877316 | Intent job blocking code inside another intent
Version orbit: 6.0.0
I want to call two intents in parallel inside another intent but wait until their jobs are done. After that, I call some methods, but the code stops after calling the 'join' function.
ViewModel
override val container: Container<SomeViewState, SomeSideEffect> =
createContainer(SomeViewState()) { onCreate() }
fun onCreate() = intent {
// Initiate calendarJob and screenStateJob concurrently
val calendarJob = initCalendar()
val screenStateJob = initScreenState()
// Wait for calendarJob and screenStateJob to complete
calendarJob.join() // We wait for calendarJob to complete, but it doesn't.
screenStateJob.join()
// After joining, the code doesn't continue working
someMethod()
}
fun initCalendar() = intent {
reduce { state.copy(today = "") }
}
fun initScreenState() = intent {
reduce { state.copy(inited = true) }
}
@drinko-dr
Thanks for the report, I'll investigate. In the meantime, this code can be rewritten like this, which should help:
override val container: Container<SomeViewState, SomeSideEffect> =
createContainer(SomeViewState()) {
// onCreate lambda is already an intent
coroutineScope {
val job1 = launch { initCalendar() }
val job2 = launch { initScreenState() }
}
}
private suspend fun SimpleSyntax<SomeViewState, SomeSideEffect>.initCalendar() {
reduce { state.copy(today = "") }
}
private suspend fun SimpleSyntax<SomeViewState, SomeSideEffect>.initScreenState() {
reduce { state.copy(inited = true) }
}
Closing due to lack of activity, feel free to reopen if this continues to be an issue
| gharchive/issue | 2023-10-09T11:41:09 | 2025-04-01T06:45:18.725594 | {
"authors": [
"Rosomack",
"drinko-dr"
],
"repo": "orbit-mvi/orbit-mvi",
"url": "https://github.com/orbit-mvi/orbit-mvi/issues/201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
218421597 | npm install don't work
Just did as described in README:
git clone....
cd orbit-db
npm install
Got a bunch of errors:
$ npm install
npm ERR! git rev-list -n1 feat/floodsub-rebase: fatal: ambiguous argument 'feat/floodsub-rebase': unknown revision or path not in the working tree.
npm ERR! git rev-list -n1 feat/floodsub-rebase: Use '--' to separate paths from revisions, like this:
npm ERR! git rev-list -n1 feat/floodsub-rebase: 'git <command> [<revision>...] -- [<file>...]'
npm ERR! git rev-list -n1 feat/floodsub-rebase:
npm ERR! git rev-list -n1 feat/floodsub-rebase: fatal: ambiguous argument 'feat/floodsub-rebase': unknown revision or path not in the working tree.
npm ERR! git rev-list -n1 feat/floodsub-rebase: Use '--' to separate paths from revisions, like this:
npm ERR! git rev-list -n1 feat/floodsub-rebase: 'git <command> [<revision>...] -- [<file>...]'
npm ERR! git rev-list -n1 feat/floodsub-rebase:
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e: Cloning into bare repository '/home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e'...
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e: Permission denied (publickey).
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e: fatal: Could not read from remote repository.
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e:
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e: Please make sure you have the correct access rights
npm ERR! git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e: and the repository exists.
npm ERR! Linux 4.8.0-45-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install"
npm ERR! node v6.10.1
npm ERR! npm v3.10.10
npm ERR! code 128
npm ERR! Command failed: git clone --template=/home/user/.npm/_git-remotes/_templates --mirror git@github.com:ipfs/js-ipfs-api.git /home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e
npm ERR! Cloning into bare repository '/home/user/.npm/_git-remotes/git-github-com-ipfs-js-ipfs-api-git-feat-floodsub-rebase-784cdc1e'...
npm ERR! Permission denied (publickey).
npm ERR! fatal: Could not read from remote repository.
npm ERR!
npm ERR! Please make sure you have the correct access rights
npm ERR! and the repository exists.
npm ERR!
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! <https://github.com/npm/npm/issues>
npm ERR! Please include the following file with any support request:
npm ERR! /home/user/Projects/orbit-db/npm-debug.log
Thanks for reporting this! This has been fixed now in master and in 0.17.3.
| gharchive/issue | 2017-03-31T07:38:31 | 2025-04-01T06:45:18.735039 | {
"authors": [
"gagarin55",
"haadcode"
],
"repo": "orbitdb/orbit-db",
"url": "https://github.com/orbitdb/orbit-db/issues/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2503323978 | Orb Migrate Docker via UI hangs when there is not enough disk space
Describe the bug
When I try to use the UI to migrate from Docker, it just hangs at "Gathering info":
Using the CLI shows the underlying issue:
➜ ~ orb migrate docker
INFO[0000] Starting Docker Desktop
INFO[0000] Gathering info
INFO[0000] progress=2.3255813953488373
Not enough free disk space in Docker Desktop: 5 GB required. Please free up space or increase virtual disk size in Docker Desktop settings.
➜ ~
To Reproduce
Ensure there's not enough disk space for this process (less than 5 GB).
Use the UI to migrate from Docker to Orb.
Expected behavior
Same error that shows in the CLI should be shown rather than hanging forever.
Diagnostic report (REQUIRED)
OrbStack info:
Version: 1.7.2
Commit: 50f93373f351fe839fd72948e6aad032774c0f6c (v1.7.2)
System info:
macOS: 14.6.1 (23G93)
CPU: arm64, 12 cores
CPU model: Apple M3 Pro
Model: Mac15,7
Memory: 36 GiB
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2024-09-03T16-38-25.731310Z.zip
Screenshots and additional context (optional)
No response
Fixed for the next version.
Released in v1.7.3.
It's still happening.
| gharchive/issue | 2024-09-03T16:39:32 | 2025-04-01T06:45:18.746411 | {
"authors": [
"FezVrasta",
"NatElkins",
"kdrag0n"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/1428",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2756003965 | Migration impossible
Describe the bug
Migration used to fail, as already reported, but since the new version tries to start Docker Desktop, the migration process is blocked because I don't have Docker anymore (OrbStack is much more useful).
To Reproduce
Start orbstack
Check for updates
Start mirgation process
Expected behavior
Is there a way to perform this migration manually, or to detect Docker Desktop before trying to migrate?
Diagnostic report (REQUIRED)
OrbStack info:
Version: 1.9.2
Commit: f56c5adaa796a0902c648f038307ed8d434b0522 (v1.9.2)
System info:
macOS: 15.1.1 (24B91)
CPU: arm64, 10 cores
CPU model: Apple M1 Max
Model: MacBookPro18,4
Memory: 32 GiB
Full report: https://orbstack.dev/_admin/diag/orbstack-diagreport_2024-12-23T12-43-54.196800Z.zip
Screenshots and additional context (optional)
No response
This feature is specifically for migrating from Docker Desktop. As per #1679, it shouldn't be showing up automatically and the bug is already fixed for the next version.
| gharchive/issue | 2024-12-23T12:45:02 | 2025-04-01T06:45:18.751027 | {
"authors": [
"kdrag0n",
"loranger"
],
"repo": "orbstack/orbstack",
"url": "https://github.com/orbstack/orbstack/issues/1682",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2060493392 | Autocompleting Public Properties - error on boolean false value
Describe the bug
If an item with the Boolean value false is returned in the array from a screen's query function, autocompletion of public properties fails.
To Reproduce
Create a new Screen class extending App\Orchid\Screens\BaseScreen
use Orchid\Screen\Screen as Screen;
class DemoScreen extends BaseScreen
Declare 3 public properties in the class
public $var1 = null;
public $var2 = null;
public $var3 = null;
Define the query function by returning an array with 3 keys corresponding to the names of the three properties, as described in the documentation.
The first element of the array must be a Boolean value false.
The elements after the first can be any value.
public function query(Request $request): array
{
return [
'var1' => false,
'var2' => "example1",
'var3' => "example2"
];
}
Define the layout function by printing the value of $this->var2
public function layout(): array
{
dd($this->var2);
}
Check the printed output: it will be null (the default value of the var2 variable).
Expected behavior
string "example1" is expected.
Screenshots
Server (please complete the following information):
Platfrom Version: 14.17.0
Laravel Version: 10.38.0
PHP Version: 8.1
Additional context
If the array returned from the query function contains a boolean false value, autocompletion of public properties does not work and the values are not assigned to the public properties.
In the Orchid\Screen\Screen class, the fillPublicProperty function uses the each helper to iterate over the properties.
Laravel's each function stops iterating if the callback returns false.
Thank you for reaching out, and I apologize for the delay in response. At the moment, I don't seem to notice the issue you mentioned. It's possible that I have already addressed it earlier but forgot to mention it here.
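The failure mode can be reproduced in a few lines with a Python analog of Laravel's each and Orchid's fillPublicProperty (names and shapes here are illustrative, not Orchid's actual code):

```python
def each(items, callback):
    """Analog of Laravel's Collection::each: iteration stops
    as soon as the callback returns False."""
    for key, value in items.items():
        if callback(key, value) is False:
            break

class Screen:
    var1 = None
    var2 = None
    var3 = None

def fill_buggy(screen, data):
    # Bug analog: the callback leaks the assigned value as its
    # return, so a stored False reads as "stop iterating".
    each(data, lambda k, v: setattr(screen, k, v) or v)

def fill_fixed(screen, data):
    # Fix analog: the callback never returns the value itself
    # (setattr returns None), so False values are stored safely.
    each(data, lambda k, v: setattr(screen, k, v))

data = {"var1": False, "var2": "example1", "var3": "example2"}

s = Screen()
fill_buggy(s, data)
print(s.var2)  # None: iteration stopped at var1's False

s = Screen()
fill_fixed(s, data)
print(s.var2)  # example1
```

The takeaway is the same as in the report: any property-filling loop built on each must make sure the callback's return value cannot be the stored value itself.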
| gharchive/issue | 2023-12-30T01:36:35 | 2025-04-01T06:45:18.780873 | {
"authors": [
"m3rlo87",
"tabuna"
],
"repo": "orchidsoftware/platform",
"url": "https://github.com/orchidsoftware/platform/issues/2783",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
227721361 | chore: check token exists before making any calls
F51-169
Next time can you be sure to put the description of your changes in the PR comments instead of just the task number?
description was in the title
| gharchive/pull-request | 2017-05-10T15:33:35 | 2025-04-01T06:45:18.782122 | {
"authors": [
"crhistianr",
"robertsoniv"
],
"repo": "ordercloud-api/angular-seller",
"url": "https://github.com/ordercloud-api/angular-seller/pull/108",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1108613213 | python3.7.12 user installation of NetfilterQueue in ubuntu20.04.3
A month ago, with Python 3.7.12, I was able to run the following without a problem:
python -m pip --proxy http://myproxy:80 install -U git+https://github.com/kti/python-netfilterqueue
Now it does not work; while trying to install NetfilterQueue, I get a lot of errors:
$ python -m pip --proxy http://myproxy:80 install NetfilterQueue
Defaulting to user installation because normal site-packages is not writeable
Collecting NetfilterQueue
Downloading NetfilterQueue-1.0.0.tar.gz (87 kB)
Installing build dependencies: started
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python /tmp/pip-standalone-pip-n1csqjap/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-q2bt9bw6/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel
cwd: None
Complete output (7 lines):
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7ff5daee01d0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7ff5daee05d0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7ff5daee0910>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7ff5daee0c50>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7ff5daee0fd0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
ERROR: Could not find a version that satisfies the requirement setuptools (from versions: none)
ERROR: No matching distribution found for setuptools
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/52/40/cc706275da4c9b968ce1223f586e0ab6ef20f3f6e840724b43070e85234e/NetfilterQueue-1.0.0.tar.gz#sha256=507be475d8c9f98834763aacf2f6cfe800b253ccd283f14f3d6f89a4f87a5878 (from https://pypi.org/simple/netfilterqueue/) (requires-python:>=3.6). Command errored out with exit status 1: /usr/bin/python /tmp/pip-standalone-pip-n1csqjap/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-q2bt9bw6/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel Check the logs for full command output.
Downloading NetfilterQueue-0.9.0.tar.gz (79 kB)
Installing build dependencies: started
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: still running...
Installing build dependencies: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python /tmp/pip-standalone-pip-w83tvr07/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-2d5444oc/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel
cwd: None
Complete output (7 lines):
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fefdd5eb210>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fefdd5eb610>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fefdd5eb990>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fefdd5ebcd0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fefdd047050>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/setuptools/
ERROR: Could not find a version that satisfies the requirement setuptools (from versions: none)
ERROR: No matching distribution found for setuptools
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/7d/34/27b2dafb00d6c4dd4c6e88cc6eaeba2e345e6d84f33748520dc3ebe813b6/NetfilterQueue-0.9.0.tar.gz#sha256=31c0bcddb72efba6d58c32cb5103c56206c7ddd55693f8eb2d990770ee4004ea (from https://pypi.org/simple/netfilterqueue/). Command errored out with exit status 1: /usr/bin/python /tmp/pip-standalone-pip-w83tvr07/__env_pip__.zip/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-2d5444oc/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel Check the logs for full command output.
Downloading NetfilterQueue-0.8.1.tar.gz (58 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: NetfilterQueue
Building wheel for NetfilterQueue (setup.py): started
Building wheel for NetfilterQueue (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"'; __file__='"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-pccui99e
cwd: /tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/
Complete output (94 lines):
running bdist_wheel
running build
running build_ext
skipping 'netfilterqueue.c' Cython extension (up-to-date)
building 'netfilterqueue' extension
creating build
creating build/temp.linux-x86_64-3.7
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-5GEFrE/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-5GEFrE/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c netfilterqueue.c -o build/temp.linux-x86_64-3.7/netfilterqueue.o
netfilterqueue.c: In function ‘__pyx_f_14netfilterqueue_6Packet_set_nfq_data’:
netfilterqueue.c:2150:68: warning: passing argument 2 of ‘nfq_get_payload’ from incompatible pointer type [-Wincompatible-pointer-types]
2150 | __pyx_v_self->payload_len = nfq_get_payload(__pyx_v_self->_nfa, (&__pyx_v_self->payload));
| ~^~~~~~~~~~~~~~~~~~~~~~~
| |
| char **
In file included from netfilterqueue.c:440:
/usr/include/libnetfilter_queue/libnetfilter_queue.h:122:67: note: expected ‘unsigned char **’ but argument is of type ‘char **’
122 | extern int nfq_get_payload(struct nfq_data *nfad, unsigned char **data);
| ~~~~~~~~~~~~~~~~^~~~
netfilterqueue.c: In function ‘__pyx_pf_14netfilterqueue_6Packet_4get_hw’:
netfilterqueue.c:2533:17: warning: implicit declaration of function ‘PyString_FromStringAndSize’; did you mean ‘PyBytes_FromStringAndSize’? [-Wimplicit-function-declaration]
2533 | __pyx_t_3 = PyString_FromStringAndSize(((char *)__pyx_v_self->hw_addr), 8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 111, __pyx_L1_error)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
| PyBytes_FromStringAndSize
netfilterqueue.c:2533:15: warning: assignment to ‘PyObject *’ {aka ‘struct _object *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
2533 | __pyx_t_3 = PyString_FromStringAndSize(((char *)__pyx_v_self->hw_addr), 8); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 111, __pyx_L1_error)
| ^
netfilterqueue.c: In function ‘__Pyx_PyCFunction_FastCall’:
netfilterqueue.c:6436:13: error: too many arguments to function ‘(PyObject * (*)(PyObject *, PyObject * const*, Py_ssize_t))meth’
6436 | return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs, NULL);
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
netfilterqueue.c: In function ‘__Pyx__ExceptionSave’:
netfilterqueue.c:7132:21: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?
7132 | *type = tstate->exc_type;
| ^~~~~~~~
| curexc_type
netfilterqueue.c:7133:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?
7133 | *value = tstate->exc_value;
| ^~~~~~~~~
| curexc_value
netfilterqueue.c:7134:19: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?
7134 | *tb = tstate->exc_traceback;
| ^~~~~~~~~~~~~
| curexc_traceback
netfilterqueue.c: In function ‘__Pyx__ExceptionReset’:
netfilterqueue.c:7141:24: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?
7141 | tmp_type = tstate->exc_type;
| ^~~~~~~~
| curexc_type
netfilterqueue.c:7142:25: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?
7142 | tmp_value = tstate->exc_value;
| ^~~~~~~~~
| curexc_value
netfilterqueue.c:7143:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?
7143 | tmp_tb = tstate->exc_traceback;
| ^~~~~~~~~~~~~
| curexc_traceback
netfilterqueue.c:7144:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?
7144 | tstate->exc_type = type;
| ^~~~~~~~
| curexc_type
netfilterqueue.c:7145:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?
7145 | tstate->exc_value = value;
| ^~~~~~~~~
| curexc_value
netfilterqueue.c:7146:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?
7146 | tstate->exc_traceback = tb;
| ^~~~~~~~~~~~~
| curexc_traceback
netfilterqueue.c: In function ‘__Pyx__GetException’:
netfilterqueue.c:7201:24: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?
7201 | tmp_type = tstate->exc_type;
| ^~~~~~~~
| curexc_type
netfilterqueue.c:7202:25: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?
7202 | tmp_value = tstate->exc_value;
| ^~~~~~~~~
| curexc_value
netfilterqueue.c:7203:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?
7203 | tmp_tb = tstate->exc_traceback;
| ^~~~~~~~~~~~~
| curexc_traceback
netfilterqueue.c:7204:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?
7204 | tstate->exc_type = local_type;
| ^~~~~~~~
| curexc_type
netfilterqueue.c:7205:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?
7205 | tstate->exc_value = local_value;
| ^~~~~~~~~
| curexc_value
netfilterqueue.c:7206:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?
7206 | tstate->exc_traceback = local_tb;
| ^~~~~~~~~~~~~
| curexc_traceback
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for NetfilterQueue
Running setup.py clean for NetfilterQueue
Failed to build NetfilterQueue
Installing collected packages: NetfilterQueue
Running setup.py install for NetfilterQueue: started
Running setup.py install for NetfilterQueue: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"'; __file__='"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r8h3mo0_/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/qemu/.local/include/python3.7m/NetfilterQueue
cwd: /tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/
Complete output (94 lines):
running install
running build
running build_ext
skipping 'netfilterqueue.c' Cython extension (up-to-date)
building 'netfilterqueue' extension
creating build
creating build/temp.linux-x86_64-3.7
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-5GEFrE/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-5GEFrE/python3.7-3.7.12=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c netfilterqueue.c -o build/temp.linux-x86_64-3.7/netfilterqueue.o
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"'; __file__='"'"'/tmp/pip-install-et6ey1bb/netfilterqueue_b1b84e577db948a1aa21f88d681a8497/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-r8h3mo0_/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/qemu/.local/include/python3.7m/NetfilterQueue Check the logs for full command output.
This appears to be a bug in pip: it doesn't pass its proxy server to the child process that it uses to perform an isolated build. Try:
pip install --proxy http://myproxy:80 wheel
pip install --proxy http://myproxy:80 --no-build-isolation NetfilterQueue
It might also work to do env https_proxy=http://myproxy:80 pip install NetfilterQueue, because perhaps the environment variable gets passed to the child process, but I don't have a proxy handy so I can't verify for sure.
Your proposal to install wheel separately and then install NetfilterQueue with --no-build-isolation solved the problem.
Thanks!
| gharchive/issue | 2022-01-19T22:14:11 | 2025-04-01T06:45:18.797256 | {
"authors": [
"nunoapaiva",
"oremanj"
],
"repo": "oremanj/python-netfilterqueue",
"url": "https://github.com/oremanj/python-netfilterqueue/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
NiFi nar isn't built
Hello,
I have an issue with the Nifi nar. The following error:
cp: cannot stat ‘/home/nifi/trucking-iot-master/nifi-bundle/nifi-trucking-nar/target/nifi-trucking-nar-0.5.4.nar’: No such file or directory.
The file isn't anywhere to be found. I just changed the list of drivers and the routes used by the getTruckingData processor.
Thanks in advance.
@igbayilola Heya! :) Have you tried building the nar first?
Run this script: https://github.com/orendain/trucking-iot/blob/master/scripts/builds/nifi-bundle.sh
@orendain Thank you for answering. I did. It's running the script that gives me the error.
sbt nifiBundle/compile shows success
cp -f $projDir/trucking-nifi-bundle/nifi-trucking-nar/target/nifi-trucking-nar-$projVer.nar $nifiLibDir fails with the following error: cp: cannot stat ‘/home/nifi/trucking-iot-master/nifi-bundle/nifi-trucking-nar/target/nifi-trucking-nar-0.5.4.nar’: No such file or directory.
I'm doing this because I changed some config (the drivers list and the routes) to match my country.
Thank you again for your work.
@igbayilola The project must not have built correctly. Can you cd into the root /trucking-iot directory and run sbt nifiBundle/compile
Hmm ... that directory isn't the one in the script. Could you make sure that the directory /home/nifi/trucking-iot-master/nifi-bundle/nifi-trucking-nar/ even exists? The directory in the script above expects /trucking-nifi-bundle/nifi-trucking-nar/
@igbayilola Ah, good catch - I can't believe I missed that.
Instead of sbt nifiBundle/compile, run sbt nifiBundle/package
@orendain Thank you. It worked. But the driver names and routes are still the same.
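Putting the thread's fix together, the rebuild sequence looks roughly like this. This is only a sketch: the nar version (0.5.4) and the paths are taken from the error message above, and $nifiLibDir is assumed to be set as in the project's nifi-bundle.sh script, so adjust them for your checkout.

```shell
# Use package (not compile) so the .nar artifact is actually produced
cd trucking-iot
sbt nifiBundle/package

# Copy the resulting nar into NiFi's lib directory
# (version and path taken from the thread; $nifiLibDir as in nifi-bundle.sh)
cp -f trucking-nifi-bundle/nifi-trucking-nar/target/nifi-trucking-nar-0.5.4.nar "$nifiLibDir"
```

After copying the nar, NiFi needs a restart to pick up the new processors.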
| gharchive/issue | 2018-05-23T10:30:18 | 2025-04-01T06:45:18.804093 | {
"authors": [
"igbayilola",
"orendain"
],
"repo": "orendain/trucking-iot",
"url": "https://github.com/orendain/trucking-iot/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1620423884 | p0stcapone's construction changes
rebase of https://github.com/orenyomtov/openordex/pull/29
I don't know what the construction is supposed to look like, so @p0stcapone please check the diffs for errors.
Reviewers please note that I haven't actually read the changes to the PSBT construction. I don't know whether it's good or bad. This PR just rebases #29 to make it reviewable.
The diff here is much more clear, #29 has been closed in favor of this for review
Resolves issue #2
| gharchive/pull-request | 2023-03-12T16:22:06 | 2025-04-01T06:45:18.807839 | {
"authors": [
"p0stcapone",
"rayonx",
"rot13maxi"
],
"repo": "orenyomtov/openordex",
"url": "https://github.com/orenyomtov/openordex/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2527072320 | Remove github, gitlab, bitbucket, gitea fields in the commit context
Is there an existing issue or pull request for this?
[X] I have searched the existing issues and pull requests
Feature description
https://github.com/orhun/git-cliff/pull/822 added the remote field and deprecated these fields. We want to remove them to optimize memory usage, as requested in https://github.com/orhun/git-cliff/pull/822#pullrequestreview-2305451241
Desired solution
These fields don't exist anymore
Alternatives considered
keep these fields for backward compatibility
Additional context
should we release git-cliff 3 after this change?
Or we are allowed to change this part of the changelog because it's considered "experimental"?
should we release git-cliff 3 after this change?
Or we are allowed to change this part of the changelog because it's considered "experimental"?
Probably not, and yes, should be fine :)
| gharchive/issue | 2024-09-15T17:53:02 | 2025-04-01T06:45:18.849355 | {
"authors": [
"MarcoIeni",
"orhun"
],
"repo": "orhun/git-cliff",
"url": "https://github.com/orhun/git-cliff/issues/856",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2139983647 | 🛑 TTS (Oribi Speak) is down
In 240abee, TTS (Oribi Speak) ($TTS_ENDPOINT/speak) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TTS (Oribi Speak) is back up in ffc7b58 after 32 minutes.
| gharchive/issue | 2024-02-17T11:27:12 | 2025-04-01T06:45:18.854297 | {
"authors": [
"nilslockean"
],
"repo": "oribisoftware/upptime",
"url": "https://github.com/oribisoftware/upptime/issues/762",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
65912678 | select from index:... throws java.lang.UnsupportedOperationException
#studio
create class Person
create property Person.propOne string
create index Person.propOne unique
insert into Person set propOne = 'propOne'
select from index:Person.propOne
# ok
# but
create property Person.propTwo string
create index Person.propTwo unique_hash_index
insert into Person set propTwo = 'propTwo'
select from index:Person.propTwo
#server
2015-04-02 13:08:53:840 SEVERE Internal server error:
java.lang.UnsupportedOperationException: firstKey [ONetworkProtocolHttpDb]
Same for notunique_hash_index, fulltext_hash_index and dictionary_hash_index.
Hash indexes aren't browsable by nature.
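Since hash indexes only support point lookups, queries should go through the indexed property (or an explicit key lookup) rather than browsing the index. A sketch, assuming the Person class and propTwo index from above:

```sql
-- Works: an equality lookup uses the hash index without browsing it
SELECT FROM Person WHERE propTwo = 'propTwo'

-- A direct key lookup against the index should also work,
-- since it doesn't require iterating the index entries
SELECT FROM index:Person.propTwo WHERE key = 'propTwo'
```

Only the unfiltered `SELECT FROM index:...` form requires iterating the index, which is what hash indexes cannot do.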
| gharchive/issue | 2015-04-02T12:05:34 | 2025-04-01T06:45:18.857713 | {
"authors": [
"lvca",
"vitorenesduarte"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/3859",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
CREATE INDEX doesn't automatically bind the index to the schema property in v3.0.0
OrientDB Version: v3.0.0
Java Version: 9.0.1
OS: MacOS
Expected behavior
CREATE INDEX should automatically bind the index to a property using the index name:
CREATE INDEX User.id UNIQUE /* throws error */
CREATE INDEX User.id ON User (id) UNIQUE /* working */
Actual behavior
It throws this error:
com.orientechnologies.orient.core.exception.ODatabaseException: Impossible to create an index without specify the key type or the associated property: CREATE INDEX User.id UNIQUE
Steps to reproduce
Run these commands (from https://orientdb.com/docs/3.0.x/sql/SQL-Create-Index.html):
CREATE PROPERTY User.id BINARY
CREATE INDEX User.id UNIQUE
Hi @dastoori
That syntax will be deprecated soon, it's there only for backward compatibility in the legacy SQL executor. Please use the full syntax.
Anyway, I'll give it a look, probably it's an easy fix
Thanks
Luigi
Hi @dastoori
I just pushed a fix for this, it will be released with v 3.0.2
Thanks
Luigi
| gharchive/issue | 2018-05-17T12:23:29 | 2025-04-01T06:45:18.861456 | {
"authors": [
"dastoori",
"luigidellaquila"
],
"repo": "orientechnologies/orientdb",
"url": "https://github.com/orientechnologies/orientdb/issues/8268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1758266712 | Issue running "run_due" notebook
The following error occurs when I follow the procedure in the readme
Thanks for reporting the issue, it should work now
| gharchive/issue | 2023-06-15T08:00:43 | 2025-04-01T06:45:18.862624 | {
"authors": [
"mohanrajroboticist",
"orientino"
],
"repo": "orientino/dum-components",
"url": "https://github.com/orientino/dum-components/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
210648586 | Task is not a function
I can't use Task without getting "Task is not a function" errors. I literally paste in the examples and they don't work.
Steps to reproduce
Just create the tests in a separate file and manually run them. You will get an error saying Task is not a function. I did download and run all of the source, and it worked fine. The tests are importing different tasks than the actual tasks.
Expected behaviour
I expect the Task to be the task instead of not a function.
Observed behaviour
I get Task is not a function
Environment
OS: MacOS
VM: js 4.3.2,
Folktale: ^2.0.0-alpha2
Additional information
It's very likely I am doing something wrong, but I can't figure out what. This library rules, sorry to be a pain in the ass.
I just rolled my code back to 1.0 and it's working fine. I'm not sure if that helps or not.
No worries, the issue tracker exists pretty much to help people getting the library to work :)
Can you give more information on what code you're running, and how you're running it? There are no examples of using Task (yet) in this repository ('sides https://github.com/origamitower/folktale/pull/50), and the test files for Task can't be ran directly as they use Babel for some non-JS-ish transformations.
Ah, I should note that the Task implemented in this repository is not compatible with the Task defined in https://github.com/folktale/data.task#datatask, which is what you'd find in older blog posts and stuff. Older examples (using new Task((reject, resolve) => { ... })) are for the Data.Task (or Folktale 1). The new API (Folktale 2, in this repository) changed this a bit to provide better support for cancellation and resource lifecycles.
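For reference, the API change mentioned above looks roughly like this. This is a sketch based on the Folktale 2 API described in the thread; module paths may differ slightly between alpha releases.

```javascript
// Folktale 1 / data.task — what older blog posts show:
//   const Task = require('data.task');
//   const one = new Task((reject, resolve) => resolve(1));

// Folktale 2 — a single resolver object, with cancellation support:
const { task } = require('folktale/concurrency/task');

const one = task(resolver => {
  resolver.resolve(1);   // or resolver.reject(...) / resolver.cancel()
});

// Running a task is explicit in Folktale 2:
one.run().listen({
  onResolved:  value  => console.log('ok:', value),
  onRejected:  reason => console.error('failed:', reason),
  onCancelled: ()     => console.log('cancelled')
});
```

The key difference is that the constructor takes one resolver object instead of separate reject/resolve callbacks, which is what makes the old `new Task((reject, resolve) => ...)` examples fail.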
Hah, that’s WAY better. I was hoping the answer would be that I’m doing it wrong. That link to #50 is all I needed to get it working. That example is actually perfect for the docs. Thanks for your help.
| gharchive/issue | 2017-02-28T00:16:28 | 2025-04-01T06:45:18.867253 | {
"authors": [
"robotlolita",
"smashedtoatoms"
],
"repo": "origamitower/folktale",
"url": "https://github.com/origamitower/folktale/issues/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
170599718 | Ability to add upload failed callback to options
Resolves: #350
Maybe we should talk about the parameters we could send to the callback
Todos
[ ] Update wiki and add callback to options
Could you not commit dist files please? Thanks
I removed the dist files. I wasn't sure about your work process. Thanks 👍
@dazorni Thanks! Can you update wiki to reflect the changes?
@orthes Done
@dazorni Thanks
| gharchive/pull-request | 2016-08-11T09:03:16 | 2025-04-01T06:45:18.898028 | {
"authors": [
"dazorni",
"j0k3r",
"orthes"
],
"repo": "orthes/medium-editor-insert-plugin",
"url": "https://github.com/orthes/medium-editor-insert-plugin/pull/377",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1075902966 | Use specific versions instead of latest for image tags
Closes #122
/approve
| gharchive/pull-request | 2021-12-09T19:05:50 | 2025-04-01T06:45:18.985415 | {
"authors": [
"MichaelClifford",
"chauhankaranraj"
],
"repo": "os-climate/aicoe-osc-demo",
"url": "https://github.com/os-climate/aicoe-osc-demo/pull/123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
734840193 | Unhandled Promise Rejection
Does this library support Expo?
I'm getting an "Unhandled Promise Rejection: null is not an object" error from:
ImageColorsModule.getColors
Expo is not supported. I think you have to ask Expo to add support for this library instead.
| gharchive/issue | 2020-11-02T21:33:41 | 2025-04-01T06:45:18.996340 | {
"authors": [
"Osamasomy",
"osamaq"
],
"repo": "osamaq/react-native-image-colors",
"url": "https://github.com/osamaq/react-native-image-colors/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Revokable option gets ignored on init
Needs more info. Closing this ticket due to insufficient info and staleness.
If you wish this ticket reopened, please use the following form to better help us reproduce your issue:
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
Scroll down to '....'
See error
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: [e.g. iOS]
Browser [e.g. chrome, safari]
Version [e.g. 22]
Smartphone (please complete the following information):
Device: [e.g. iPhone6]
OS: [e.g. iOS8.1]
Browser [e.g. stock browser, safari]
Version [e.g. 22]
Additional context
Add any other context about the problem here.
| gharchive/issue | 2018-07-24T10:01:47 | 2025-04-01T06:45:19.002645 | {
"authors": [
"Schuer84",
"relicmelex"
],
"repo": "osano/cookieconsent",
"url": "https://github.com/osano/cookieconsent/issues/423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
942183404 | image-info: cover situation when /boot is on a separate partition
Some images with an ESP, e.g. rhel-ec2-aarch64, have /boot on a separate partition. image-info currently produces a traceback on such images, e.g.:
Traceback (most recent call last):
File "/home/thozza/devel/osbuild-composer/./tools/image-info", line 1997, in <module>
main()
File "/home/thozza/devel/osbuild-composer/./tools/image-info", line 1991, in main
report = analyse_image(target)
File "/home/thozza/devel/osbuild-composer/./tools/image-info", line 1863, in analyse_image
append_partitions(report, device, loctl)
File "/home/thozza/devel/osbuild-composer/./tools/image-info", line 1849, in append_partitions
append_filesystem(report, tree)
File "/home/thozza/devel/osbuild-composer/./tools/image-info", line 1809, in append_filesystem
with open(f"{tree}/grub2/grubenv") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp3i__6m1w/grub2/grubenv'
The reason is that grub2/grubenv on the /boot partition is a symlink
to ../efi/EFI/redhat/grubenv. However, the efi directory on the
/boot partition is empty, and the ESP must be mounted there for the
expected path to exist.
Modify image-info to mount the ESP onto the efi directory if it exists on
the inspected partition.
This pull request includes:
[ ] adequate testing for the new functionality or fixed issue
[ ] adequate documentation informing people about the change such as
[ ] create a file in news/unreleased directory if this change should be mentioned in the release news
[ ] submit a PR for the guides repository if this PR changed any behavior described there: https://www.osbuild.org/guides/
Rebased on top of main to get the new GCP zone used in CI and get rid of failures...
| gharchive/pull-request | 2021-07-12T15:29:54 | 2025-04-01T06:45:19.017435 | {
"authors": [
"thozza"
],
"repo": "osbuild/osbuild-composer",
"url": "https://github.com/osbuild/osbuild-composer/pull/1546",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1455422832 | re-enable cs9 runner for simplified installer
Signed-off-by: Antonio Murdaca antoniomurdaca@gmail.com
This pull request includes:
[ ] adequate testing for the new functionality or fixed issue
[ ] adequate documentation informing people about the change such as
[ ] submit a PR for the guides repository if this PR changed any behavior described there: https://www.osbuild.org/guides/
@runcom We need a new CS9 repo to have new coreos-installer.
@runcom PR #3136 merged; I rebased on main to pick up the new CS9 repo. Let's see what happens.
@runcom The failure is due to bug https://bugzilla.redhat.com/show_bug.cgi?id=2123611. Yesterday's repo does not include the fix. We need the CS9 CentOS-Stream-9-20221121.0 compose to get this fix.
@runcom I found PR https://github.com/osbuild/osbuild-composer/pull/3155. Let's try the 20221124 repo and see what happens.
@runcom The 20221124 repo works. The only failing case is not ostree related.
works again now :)
@runcom We should switch the repo to 20221124. The 20221121 repo still hits the DNF error.
All green now!
| gharchive/pull-request | 2022-11-18T16:26:18 | 2025-04-01T06:45:19.023493 | {
"authors": [
"henrywang",
"runcom"
],
"repo": "osbuild/osbuild-composer",
"url": "https://github.com/osbuild/osbuild-composer/pull/3145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
955616913 | 🛑 kids-box-1 is down
In 7e85002, kids-box-1 (https://kids-box-1.tttwonder.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: kids-box-1 is back up in 06ba100.
| gharchive/issue | 2021-07-29T08:56:14 | 2025-04-01T06:45:19.047354 | {
"authors": [
"oseau"
],
"repo": "oseau/upptime",
"url": "https://github.com/oseau/upptime/issues/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1317281127 | When a user declines a charge, the return URL has no chargeId, causing 500 error
Expected Behavior
When a user declines a charge, they are redirected to the Shopify settings page, as charge declines have been handled by Shopify since 2018. When the user closes the settings popup, they should be redirected to the app home page.
Current Behavior
The problem is that, on closing the popup, the user is redirected to the return URL, i.e. billing/process/{plan_id}/?shop=myshop.myshopify.com. In the case of a declined charge, the return URL does not contain the charge ID. This causes the route to show an error.
Failure Information
The billing process page shows a 500 error if chargeId is missing from the query string.
Steps to Reproduce
Install a new app or open the change plan screen of your app
Select a different plan. You will be redirected to the Shopify charge page.
Do not approve the charge; decline it by clicking the cancel button.
You will be redirected to Shopify admin, and will see the new setting popup.
Close the popup, you will be redirected to /billing/process/{plan_id} page of the app
The page shows 500 error
Context
Package Version: v17.1.0
Laravel Version: v8.76
PHP Version: v7.4
Failure Logs
See the screencast here
https://gyazo.com/fae78b8d3653141dbdd0fcabef63924c
@osiset @Kyon147 Can you please look into this? Thanks
Closing this now it has been merged, thanks for the PR!
| gharchive/issue | 2022-07-25T19:39:05 | 2025-04-01T06:45:19.073136 | {
"authors": [
"Kyon147",
"usmanpakistan"
],
"repo": "osiset/laravel-shopify",
"url": "https://github.com/osiset/laravel-shopify/issues/1174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1331037878 | I am having hmac code in URL after authenticate
Expected Behavior
I expect the hmac code not to appear in the URL.
Actual URL:
https://shopifycurrencyapp.cordcomtechnologies.com/?hmac=84dc3ac9950610a1ac5842554dd152e832de94a59f9c25bb84e3ee6d2eaf&host=FwcHN0b3JlZm9yYXBwLm15c2hvcGlmeS5jb20vYWRtaW4&shop=p.myshopify.com&timestamp=1659168387
Expected URL:
https://shopifycurrencyapp.cordcomtechnologies.com
Current Behavior
After the app authenticates, it redirects outside the Shopify admin, which works, but the URL carries the hmac code, which shouldn't be appended to the URL.
"php": "^7.2.5|^8.0",
"fideloper/proxy": "^4.4",
"fruitcake/laravel-cors": "^2.0",
"guzzlehttp/guzzle": "^6.3.1|^7.0.1",
"laravel/framework": "^7.29",
"laravel/tinker": "^2.5",
"laravel/ui": "2.*",
"maatwebsite/excel": "^3.1",
"osiset/laravel-shopify": "^14.0"
Failure Logs
Please include any relevant log snippets or files here.
@talkwithdeveloper I'm unable to replicate this.
Does your app authenticate properly and land on the app homepage?
yes
It looks to me like a leftover from the previous route. Since you need to use App Bridge to update the route, you can just make sure to rewrite the history yourself that way.
is it possible can we make a zoom call ?
Hi @talkwithdeveloper,
Sadly I don't have time but you just need to update the route when you land on the homepage to overwrite the last history.
I tend to just put a watcher in the root Vue app to update it any time my route changes. This means you don't need to manually fire it on every view change.
// In a Vue Router watcher/afterEach hook; `actions` comes from
// @shopify/app-bridge and `to` is the target route object.
if (window.AppBridge) {
  var History = actions.History;
  var history = History.create(window.AppBridge);
  history.dispatch(History.Action.PUSH, to.path);
}
You could try something like that but I have seen other apps where this happens, so it's not always limited to this package.
I've taken a look at this and can confirm it happens because App Bridge sometimes needs to be used to update the history.
| gharchive/issue | 2022-08-07T14:49:07 | 2025-04-01T06:45:19.079170 | {
"authors": [
"Kyon147",
"talkwithdeveloper"
],
"repo": "osiset/laravel-shopify",
"url": "https://github.com/osiset/laravel-shopify/issues/1184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |