| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T04:35:01.709811
| 2024-02-01T21:41:58
|
2113537352
|
{
"authors": [
"peternied",
"shiv0408"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9404",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/issues/12137"
}
|
gharchive/issue
|
[BUG] Minimal approval workflow blocks PR merge
Describe the bug
The GitHub Actions Minimum Approval workflow blocks a PR merge when the PR already has approval but a new commit is pushed. Because the trigger for this action is "pull_request_review", it only runs when an approval is given or removed, or a review's description is edited.
https://github.com/opensearch-project/OpenSearch/blob/3c074617e6582d8347547df313cc50e7f2abfb36/.github/workflows/maintainer-approval.yml#L4
Related component
Other
To Reproduce
Raise a PR
Get approval on the PR from a maintainer
Push a commit to the PR branch
Expected behavior
The workflow should run every time the PR is updated, as well as after an approval is given.
Additional Details
No response
For example, #8218 currently cannot be merged because this expected check is not being triggered, so it is neither failing nor succeeding.
We should modify the trigger to use the "pull_request" webhook with the opened, reopened, and synchronize activity types.
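A minimal sketch of the suggested trigger change (illustrative only; this is not the actual contents of the linked maintainer-approval.yml, and the job body is omitted):

```yaml
# Illustrative trigger block for .github/workflows/maintainer-approval.yml.
# "synchronize" is the activity type fired when new commits are pushed to the PR.
on:
  pull_request:
    types: [opened, reopened, synchronize]
```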
@peternied as the original author of the workflow, do you have any comments or suggestions on this?
I believe the root cause of the issue is that PR checks are associated with a commit SHA, whereas PR approvals are associated with a PR. This puts us in a bad state because if the PR changes, the workflow is not triggered to dismiss or re-request approval, and it looks like the check has stalled.
Here are some possible approaches and issues they might encounter.
Use "pull_request_target" trigger
This would work, but it would only run after new commits have been pushed. If the code hasn't changed and a maintainer then approves it, the check would not be updated.
Use both "pull_request_target" & "pull_request_review" triggers
With how the backing approval check works, you'd get two maintainer-approvals checks, one for each trigger source. This would create a different and strange bottleneck where you'd need to make changes / get more approvals until both turn green.
[Recommendation] Separate the trigger source from the check
By decoupling the result of peternied/required-approval from the check on the PR, this would allow any number of sources to restart the check on the PR, with the workflow adding or updating an existing check. This requires making changes to that GitHub action.
I'd be happy to review a PR if there are other ideas, or on the required-approval GitHub action.
[Triage - attendees 1 2 3 4 5]
@shiv0408 Thanks for filing, there could be improvements in this space, we'd gladly review a pull request to improve.
|
2025-04-01T04:35:01.711932
| 2022-05-19T04:57:30
|
1241207332
|
{
"authors": [
"ashking94",
"sachinpkale"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9405",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/issues/3397"
}
|
gharchive/issue
|
[Remote Store] Handle transient and permanent unavailability of the remote store
Describe the solution you'd like
When the remote store is unavailable, we either need to stop ingestion in order to keep the durability guarantees, or let ingestion continue and back-fill the data once the remote store is available again. There are various scenarios that need to be considered while handling these failures. This task describes all such scenarios and provides the implementation.
We have implemented refresh retry and remote segment upload backpressure (#7363 and #6851).
@sachinpkale closing this issue for now. Let's think about whether there are more things we can add, and reopen if needed.
|
2025-04-01T04:35:01.715860
| 2022-10-21T09:54:07
|
1418074991
|
{
"authors": [
"minalsha",
"pranikum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9406",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/issues/4866"
}
|
gharchive/issue
|
[Enhancement]: Discuss the node DRAINING state feasibility.
Is your feature request related to a problem? Please describe.
As part of the decommission request, we added a DRAINING state to capture the exact state when the decommission process fails.
We should check whether we need this state at all.
Describe the solution you'd like
Remove the DRAINING state and use IN_PROGRESS
Describe alternatives you've considered
NA
Additional context
NA
@pranikum are you planning to contribute to this issue?
Closing the issue. Have pushed the changes as part of
https://github.com/opensearch-project/OpenSearch/pull/4586
|
2025-04-01T04:35:01.719471
| 2023-07-27T09:40:42
|
1824002099
|
{
"authors": [
"Bukhtawar",
"mch2",
"sachinpkale"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9407",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/issues/8917"
}
|
gharchive/issue
|
Make repository a first-class citizen for remote store backed cluster
Is your feature request related to a problem? Please describe.
With remote store, we need the repository to be bootstrapped with the cluster to ensure internal system indices are also durably backed up. Repositories are traditionally registered into the cluster state via an external API, which poses durability risks to the cluster if repositories are mutated in inconsistent ways.
Also, when we need to recover from quorum loss, we shouldn't have to rely on the cluster state to fetch the repository information, which might create a cyclic dependency during recovery.
The repository information should sit locally on the nodes, and the cluster should be able to bootstrap automatically with the repository as a first-class citizen.
Relates to #8623
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
related - https://github.com/opensearch-project/OpenSearch/issues/8158
@Bukhtawar I believe the change related to this issue has been made. Closing it now. Feel free to reopen if it is still pending.
|
2025-04-01T04:35:01.723425
| 2021-09-03T07:35:15
|
987498905
|
{
"authors": [
"itiyamas",
"saikaranam-amazon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9408",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/1210"
}
|
gharchive/pull-request
|
Changes to support retrieval of operations from translog based on specified range
Description
Changes to support retrieval of operations from translog based on specified range
Issues Resolved
https://github.com/opensearch-project/OpenSearch/issues/1100
Check List
[x] New functionality includes testing.
[x] All tests pass
[x] New functionality has been documented.
[x] New functionality has javadoc added
[x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
start gradle check
start gradle run
start gradle check
|
2025-04-01T04:35:01.733462
| 2024-05-13T22:43:23
|
2293968708
|
{
"authors": [
"dblock",
"peteralfonsi",
"sohami"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9409",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/13655"
}
|
gharchive/pull-request
|
[Tiered Caching] Additional ITs for cache stats
Description
Adds more ITs and UT coverage for cache stats in the tiered spillover cache. Also has 3 small bugfixes which were found during testing:
Cache clear API incorrectly wiped hits, misses, and eviction stats
Items evicted from the heap tier, but rejected from the disk tier due to policies, incorrectly weren't counted towards the total evictions for the cache
request_cache object in XContent response for the cache stats API was incorrectly at nodes.[node_id].request_cache instead of nodes.[node_id].caches.request_cache
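To illustrate the third fix, a sketch of the corrected response nesting (field contents are placeholders, not the actual stats payload):

```json
{
  "nodes": {
    "<node_id>": {
      "caches": {
        "request_cache": {}
      }
    }
  }
}
```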
Related Issues
Resolves https://github.com/opensearch-project/OpenSearch/issues/13455
Check List
[x] New functionality includes testing.
[x] All tests pass
[x] New functionality has been documented.
[x] New functionality has javadoc added
~- [N/A] API changes companion pull request created.~
[x] Failing checks are inspected and point to the corresponding known issue(s) (See: Troubleshooting Failing Builds)
[x] Commits are signed per the DCO using --signoff
~- [N/A] Commit changes are listed out in CHANGELOG.md file (See: Changelog)~
~- [N/A] Public documentation issue/PR created~
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
❌ Gradle check result for 54e396c: FAILURE
Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
org.opensearch.indices.IndicesRequestCacheIT.testStaleKeysCleanupWithMultipleIndices {p0={"search.concurrent_segment_search.enabled":"true"}}
Flaky test: https://github.com/opensearch-project/OpenSearch/issues/13600
❌ Gradle check result for bbedece: FAILURE
Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Known Failures:
https://github.com/opensearch-project/OpenSearch/issues/7791
https://github.com/opensearch-project/OpenSearch/issues/13939
|
2025-04-01T04:35:01.737164
| 2024-12-04T17:46:28
|
2718419807
|
{
"authors": [
"cwperks",
"sandeshkr419"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9410",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/16778"
}
|
gharchive/pull-request
|
Bump com.azure:azure-identity from 1.13.2 to 1.14.2 in /plugins/repository-azure
Description
Recreates https://github.com/opensearch-project/OpenSearch/pull/16772 and fixes precommit failure
Check List
[ ] Functionality includes testing.
[ ] API changes companion pull request created, if applicable.
[ ] Public documentation issue/PR created, if applicable.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Thanks @cwperks for looking into this. I guess you'll probably have to look at the failing tests after the dependency upgrade.
|
2025-04-01T04:35:01.744650
| 2023-06-21T23:36:31
|
1768672227
|
{
"authors": [
"kotwanikunal",
"reta"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9411",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/8208"
}
|
gharchive/pull-request
|
Add safeguard limits for file cache during node level allocation
Description
Related to #7713
Adds safeguards to prevent file cache over-subscription during allocation for individual node level decisions.
Fetches the file cache stats to get the node cache size, calculates the remote shard size on the node, and verifies whether the shard can be safely allocated to that node:
size of shard + sum(remote shards on the node) < 5 * (node cache size)
The constant value will be replaced by a setting in a following PR.
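The check above can be sketched as follows. The class and method names are hypothetical, not the actual OpenSearch identifiers; the ratio of 5 mirrors the constant described in the PR.

```java
// Sketch of the node-level file cache safeguard described above.
// FileCacheAllocationCheck and canAllocate are illustrative names,
// not the actual OpenSearch classes.
final class FileCacheAllocationCheck {
    // Fixed ratio for now; the PR notes this constant will become a setting.
    static final long DATA_TO_FILE_CACHE_RATIO = 5;

    // Allocation is safe when:
    //   shard size + sum(remote shards already on node) < ratio * node cache size
    static boolean canAllocate(long shardSizeBytes,
                               long remoteShardBytesOnNode,
                               long nodeCacheSizeBytes) {
        long projectedBytes = shardSizeBytes + remoteShardBytesOnNode;
        return projectedBytes < DATA_TO_FILE_CACHE_RATIO * nodeCacheSizeBytes;
    }
}
```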
Related Issues
Partially resolves #7713
Check List
[x] New functionality includes testing.
[x] All tests pass
[x] New functionality has been documented.
[x] New functionality has javadoc added
[x] Commits are signed per the DCO using --signoff
[x] Commit changes are listed out in CHANGELOG.md file (See: Changelog)
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Tagging previous reviewers: @andrross / @reta / @Bukhtawar
@reta Resolved your comments. 3 green gradle checks in a row might be a sign :)
@kotwanikunal my apologies, missed that somehow, will look first thing tomorrow morning
The backport to 2.x failed:
The process '/usr/bin/git' failed with exit code 128
To backport manually, run these commands in your terminal:
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add ../.worktrees/backport-2.x 2.x
# Navigate to the new working tree
pushd ../.worktrees/backport-2.x
# Create a new branch
git switch --create backport/backport-8208-to-2.x
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 91bfa01606974b947455fbc289e21a1aad096fa8
# Push it to GitHub
git push --set-upstream origin backport/backport-8208-to-2.x
# Go back to the original working tree
popd
# Delete the working tree
git worktree remove ../.worktrees/backport-2.x
Then, create a pull request where the base branch is 2.x and the compare/head branch is backport/backport-8208-to-2.x.
Will look into the backport after the followup PR.
|
2025-04-01T04:35:01.758581
| 2023-08-01T22:34:31
|
1832120984
|
{
"authors": [
"neetikasinghal",
"sohami"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9412",
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/9047"
}
|
gharchive/pull-request
|
Make MultiBucketConsumerService thread safe to use across slices during search
Description
The callCount variable needs to be made thread-safe because MultiBucketConsumerService periodically checks whether the CircuitBreaker (CB) has tripped. With concurrent search, each shard has multiple slices running on different threads that share one MultiBucketConsumer instance. Multiple threads within the same instance access callCount, and because the current logic checks for a CB trip only every 1024 calls to the accept function, there is a possibility that, if callCount is not thread-safe, the CB trip check might never happen.
This change initializes callCount as a LongAdder, making it thread-safe, and adds a volatile flag that trips the CB for the other threads once one thread's CB has already tripped.
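A minimal sketch of this pattern, assuming hypothetical class and method names (the real MultiBucketConsumer differs in detail):

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of a thread-safe call counter with a volatile trip flag;
// not the actual MultiBucketConsumer implementation.
final class ConcurrentBucketConsumer {
    private static final int CHECK_INTERVAL = 1024;
    private final int limit;
    private final LongAdder callCount = new LongAdder();
    // Once one thread trips the breaker, other threads fail fast on their next call.
    private volatile boolean tripped = false;

    ConcurrentBucketConsumer(int limit) {
        this.limit = limit;
    }

    void accept() {
        if (tripped) {
            throw new IllegalStateException("circuit breaker already tripped");
        }
        callCount.increment();
        long count = callCount.sum();
        // Periodic check every CHECK_INTERVAL calls; LongAdder keeps the count
        // accurate across slices, so increments are never lost.
        if (count % CHECK_INTERVAL == 0 && count > limit) {
            tripped = true;
            throw new IllegalStateException("circuit breaker tripped at " + count + " calls");
        }
    }
}
```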
Related Issues
Resolves #7785
Check List
[ ] New functionality includes testing.
[ ] All tests pass
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] Commits are signed per the DCO using --signoff
[ ] Commit changes are listed out in CHANGELOG.md file (See: Changelog)
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
@reta @sohami please review the changes.
Gradle Check (Jenkins) Run Completed with:
RESULT: FAILURE ❌
URL: https://build.ci.opensearch.org/job/gradle-check/21727/
CommitID: 3d02f73
Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green.
Is the failure a flaky test unrelated to your change?
@neetikasinghal There are some test failures. Can you please check those ?
@sohami these tests failures have nothing to do with the changes in the pr.
@sohami thanks for calling this out.
testAllocationBucketsBreaker seemed to be failing with the changes, i have fixed that now, MixedClusterClientYamlTestSuiteIT test seems unrelated to my changes.
@neetikasinghal Can you please rebase your branch ?
Done
Gradle Check (Jenkins) Run Completed with:
RESULT: FAILURE ❌
URL: https://build.ci.opensearch.org/job/gradle-check/21842/
CommitID: 4a20a20
Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green.
Is the failure a flaky test unrelated to your change?
Known Flaky test: https://github.com/opensearch-project/OpenSearch/issues/9034
@neetikasinghal Backport failed. Can you please do it manually?
|
2025-04-01T04:35:01.760746
| 2023-07-27T21:22:09
|
1825247678
|
{
"authors": [
"JacobCho-i",
"eirsep"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9413",
"repo": "opensearch-project/alerting",
"url": "https://github.com/opensearch-project/alerting/issues/1058"
}
|
gharchive/issue
|
[FEATURE] Need an API to disable/enable monitor
Making an update API call to disable or enable a monitor is not best practice and is also very cumbersome, as the user would need to be aware of the monitor document's mapping.
Since monitors are jobs and need to be enabled or disabled, we should provide one of the following:
PUT _plugins/_alerting/monitors/{monitor_id}/enable and PUT _plugins/_alerting/monitors/{monitor_id}/disable
PUT _plugins/_alerting/monitors/{monitor_id}/status {"enable": true/false}
I will work on this one
|
2025-04-01T04:35:01.778882
| 2022-06-24T19:18:25
|
1284105348
|
{
"authors": [
"amitgalitz",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9414",
"repo": "opensearch-project/anomaly-detection",
"url": "https://github.com/opensearch-project/anomaly-detection/pull/585"
}
|
gharchive/pull-request
|
Adding HCAD data ingestion script to AD
Signed-off-by: Amit Galitzky <EMAIL_ADDRESS>
Description
Initially want to give credit to @kaituo for a majority of the logic in this code.
First iteration of data ingestion script for AD testing.
This script generates cosine wave data with anomalies injected with 2 fields (potential features).
The current code is set to have two categorical fields where you can decide how many entities are created (must be at least 1), users also have control over the ingestion_frequency and how many points are created.
Other params are also explained with the help section seen in the code.
I want to initially add this script and I will be iterating over it with the ability to change the number of fields and easily create single entity data.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Codecov Report
Merging #585 (6e004e9) into main (7f3820a) will decrease coverage by 0.11%.
The diff coverage is 94.11%.
:exclamation: Current head 6e004e9 differs from pull request most recent head a4ef462. Consider uploading reports for the commit a4ef462 to get more accurate results
@@ Coverage Diff @@
## main #585 +/- ##
============================================
- Coverage 79.03% 78.92% -0.12%
- Complexity 4203 4204 +1
============================================
Files 296 296
Lines 17679 17684 +5
Branches 1879 1880 +1
============================================
- Hits 13973 13957 -16
- Misses 2806 2825 +19
- Partials 900 902 +2
Flag
Coverage Δ
plugin
78.92% <94.11%> (-0.12%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
...search/ad/cluster/ClusterManagerEventListener.java
94.59% <ø> (ø)
...opensearch/ad/indices/AnomalyDetectionIndices.java
71.93% <ø> (ø)
...n/java/org/opensearch/ad/ml/EntityColdStarter.java
81.92% <90.90%> (+0.14%)
:arrow_up:
...va/org/opensearch/ad/settings/AbstractSetting.java
90.90% <100.00%> (+0.90%)
:arrow_up:
...ava/org/opensearch/ad/settings/EnabledSetting.java
100.00% <100.00%> (ø)
...rch/ad/transport/ForwardADTaskTransportAction.java
94.06% <0.00%> (-3.39%)
:arrow_down:
...ain/java/org/opensearch/ad/task/ADTaskManager.java
75.62% <0.00%> (-0.91%)
:arrow_down:
...rch/ad/transport/AnomalyResultTransportAction.java
80.13% <0.00%> (-0.69%)
:arrow_down:
...java/org/opensearch/ad/task/ADBatchTaskRunner.java
81.91% <0.00%> (-0.61%)
:arrow_down:
...c/main/java/org/opensearch/ad/util/ParseUtils.java
77.77% <0.00%> (-0.08%)
:arrow_down:
... and 2 more
|
2025-04-01T04:35:01.782037
| 2021-09-23T07:26:56
|
1005100161
|
{
"authors": [
"Bukhtawar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9415",
"repo": "opensearch-project/asynchronous-search",
"url": "https://github.com/opensearch-project/asynchronous-search/pull/41"
}
|
gharchive/pull-request
|
Updating CI workflows
Description
[Describe what this change achieves]
Issues Resolved
[List any issues this PR will resolve]
Check List
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Supersedes https://github.com/opensearch-project/asynchronous-search/pull/39
|
2025-04-01T04:35:01.787357
| 2023-07-17T16:59:02
|
1808185404
|
{
"authors": [
"AntonEliatra",
"cwillum",
"seashman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9416",
"repo": "opensearch-project/documentation-website",
"url": "https://github.com/opensearch-project/documentation-website/issues/4578"
}
|
gharchive/issue
|
[DOC]Link keystore and truststore files documentation with related securityadmin script documentation
What do you want to do?
Before updating the Security plugin with configured keystore and truststore files using the securityadmin script, a user should be made aware of the actual keystore and truststore file settings that configure their location and passwords. These separate pages of documentation should be linked to facilitate the process.
When making these updates, it's probably a good idea to look into whether the upcoming Authorization in REST layer feature will have an impact on the current information and guidance.
[x] Request a change to existing documentation
[ ] Add new documentation
[ ] Report a technical problem with the documentation
[ ] Other
Tell us about your request.
Link these two sources of information and check whether the current information needs to be updated.
What other resources are available?
Authorization in the REST layer
It looks like the keystore CLI commands work about the same as elasticsearch-keystore: https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-keystore.html. It would be good to include how to use the keystore CLI to add credentials for plugins.
@hdhalter I'll pick this one up
PR raised https://github.com/opensearch-project/documentation-website/pull/7015
|
2025-04-01T04:35:01.798373
| 2022-12-16T09:12:35
|
1499848170
|
{
"authors": [
"SuZhou-Joe",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9417",
"repo": "opensearch-project/index-management-dashboards-plugin",
"url": "https://github.com/opensearch-project/index-management-dashboards-plugin/pull/472"
}
|
gharchive/pull-request
|
Some fixes after internal demo review
Signed-off-by: suzhou <EMAIL_ADDRESS>
Description
[Describe what this change achieves]
Issues Resolved
[List any issues this PR will resolve]
Check List
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Codecov Report
Merging #472 (c330ae1) into index-operation (54f6e41) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## index-operation #472 +/- ##
================================================
Coverage 60.69% 60.69%
================================================
Files 249 249
Lines 8228 8228
Branches 1444 1444
================================================
Hits 4994 4994
Misses 2892 2892
Partials 342 342
Impacted Files
Coverage Δ
...ublic/pages/Aliases/containers/Aliases/Aliases.tsx
89.89% <0.00%> (ø)
...pages/Templates/containers/Templates/Templates.tsx
82.25% <0.00%> (ø)
...CreateIndex/components/IndexDetail/IndexDetail.tsx
81.60% <0.00%> (ø)
...eateIndex/components/IndexMapping/IndexMapping.tsx
82.73% <0.00%> (ø)
...plate/containers/TemplateDetail/TemplateDetail.tsx
93.25% <0.00%> (ø)
|
2025-04-01T04:35:01.807492
| 2022-04-13T19:08:57
|
1203679555
|
{
"authors": [
"codecov-commenter",
"ylwu-amzn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9418",
"repo": "opensearch-project/ml-commons",
"url": "https://github.com/opensearch-project/ml-commons/pull/279"
}
|
gharchive/pull-request
|
support dispatching execute task; don't dispatch ML task again
Signed-off-by: Yaliang Wu <EMAIL_ADDRESS>
Description
support dispatching execute task;
don't dispatch ML task again
add request id to track the request easily
Check List
[ ] New functionality includes testing.
[ ] All tests pass
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Codecov Report
:exclamation: No coverage uploaded for pull request base (main@5eb38f7). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #279 +/- ##
=======================================
Coverage ? 92.16%
Complexity ? 360
=======================================
Files ? 51
Lines ? 1111
Branches ? 51
=======================================
Hits ? 1024
Misses ? 69
Partials ? 18
Flag
Coverage Δ
ml-commons
92.16% <0.00%> (?)
Flags with carried forward coverage won't be shown. Click here to find out more.
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5eb38f7...00eb304. Read the comment docs.
|
2025-04-01T04:35:01.811992
| 2023-11-27T19:05:51
|
2012916075
|
{
"authors": [
"Swiddis",
"jordarlu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9419",
"repo": "opensearch-project/opensearch-build",
"url": "https://github.com/opensearch-project/opensearch-build/issues/4238"
}
|
gharchive/issue
|
Output link to Cypress logs in integ-test-opensearch-dashboards
Is your feature request related to a problem? Please describe
When tests fail in Jenkins, there's no easy way to locate the logs associated with that run. My understanding is that there's some workflow involving copying various IDs to manually assemble that URL, but I'm not sure where it's documented.
Describe the solution you'd like
It would save a lot of debugging time if there were links to the Cypress logs (and video recordings!) within the Jenkins output itself, a simple message like 2023-12-25 12:00:00 INFO Saved Cypress logs to: https://.../output.log.
Describe alternatives you've considered
Linking to documentation on how to generate the link could also help, without as much string building complexity, but seems roundabout compared to just formatting a string to the correct place directly.
Additional context
No response
Hi @Swiddis, we currently have an AUTOCUT issue sent to plugin repos with the test-related links. Please refer to https://github.com/opensearch-project/OpenSearch-Dashboards/issues/5506, where you will find the "Test-report manifest" with all the links to the details. Hopefully that resolves this issue? Thanks,
cc : @kavilla
Thanks for the feedback, @Swiddis. We have the AUTOCUT mentioned here in the opensearch-build repo; the intention was to provide more visibility and easier access to the logs when people need them (by using the links in those AUTOCUT issues to access the manifest or the logs on Jenkins). Good to know that worked for you.
This issue should have been addressed by the instructions in the comments. Thanks!
|
2025-04-01T04:35:01.813061
| 2021-12-21T01:20:48
|
1085342834
|
{
"authors": [
"dblock"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9420",
"repo": "opensearch-project/opensearch-build",
"url": "https://github.com/opensearch-project/opensearch-build/pull/1389"
}
|
gharchive/pull-request
|
Revert "Remove plugins that still need a log4j version bump."
Reverts opensearch-project/opensearch-build#1387, re-adds sql and PA plugins.
GRRR DCO
|
2025-04-01T04:35:01.822146
| 2022-11-08T18:20:39
|
1440692203
|
{
"authors": [
"BSFishy",
"KrooshalUX"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9421",
"repo": "opensearch-project/oui",
"url": "https://github.com/opensearch-project/oui/pull/116"
}
|
gharchive/pull-request
|
Remove single letter beta badges
Description
Remove single letter beta badges, as mentioned here: https://github.com/opensearch-project/oui/issues/93#issuecomment-1306592845.
Note: This is a breaking change
Issues Resolved
[List any issues this PR will resolve]
Check List
[ ] New functionality includes testing.
[ ] New functionality has been documented.
[x] All tests pass
[x] yarn lint
[x] yarn test-unit
[x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
I thought we were going to make these changes in the experimental badge instead of the beta badge after previous discussions. Matt and I connected today on not making this a breaking change by leaving the beta badge as is, duplicating it, renaming the duplicate to experimental, and then making the changes to OUIExperimental.
On Dec 8, 2022, at 4:16 PM, Josh Romero @.***> wrote:
@joshuarrrr commented on this pull request.
Because we don't actually prevent the previous usage, I think there's an argument to be made that it's not really breaking (it didn't depend on a prop we're removing). But if it is still considered breaking, I'd suggest creating a label that marks it as such - it will make your future backporting/releasing much less painful.
In src/components/badge/beta_badge/_beta_badge.scsshttps://github.com/opensearch-project/oui/pull/116#discussion_r1043944310:
-.ouiBetaBadge--singleLetter {
padding: 0 0 0 1px;
width: $ouiSizeL;
&.ouiBetaBadge--small {
width: $ouiSize + $ouiSizeXS;
padding: 0 0 0 1px;
}
-}
Should we note somewhere that we're not actually preventing a user from providing a single character label - we've just removed custom styling for that scenario and encouragement of that usage pattern from the doc. But have we validated the look/behavior of a single letter label without these styles? Or do we actually want to add some minimum length enforcement within the component (that prevents rendering?)
Close to redo work to align with #161
|
2025-04-01T04:35:01.825247
| 2022-05-30T19:11:00
|
1253053669
|
{
"authors": [
"javimed",
"krisfreedain"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9422",
"repo": "opensearch-project/project-website",
"url": "https://github.com/opensearch-project/project-website/pull/848"
}
|
gharchive/pull-request
|
Replace Wazuh logos with the new company image
Description
This PR updates the Wazuh logo in:
Partners list
Community projects list
Issues Resolved
No issues involved.
Check List
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License.
Hi. Could a maintainer check this PR for me? This is my first PR here and it needs further review.
Hello @javimed - looks like DCO isn't signed - can you fix that and let us know: https://github.com/opensearch-project/project-website/pull/848/checks?check_run_id=6660105094
|
2025-04-01T04:35:01.827930
| 2018-11-19T07:04:41
|
382082664
|
{
"authors": [
"leonwanghui",
"norshtein"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9423",
"repo": "openservicebrokerapi/osb-checker",
"url": "https://github.com/openservicebrokerapi/osb-checker/pull/61"
}
|
gharchive/pull-request
|
Make schema extensible and flexible
Some service providers' broker implementations enhance features by adding additional fields to the schema. Currently the validateJSONSchema method rejects them because the schema definition is fixed, so this patch proposes making the schema more flexible so that service providers can implement advanced features while still passing the validation check.
Additional fields are enabled in v2.14: https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#vendor-extension-fields
https://github.com/openservicebrokerapi/servicebroker/blob/v2.14/spec.md#changes-since-v213
https://github.com/openservicebrokerapi/servicebroker/pull/436
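A minimal Go sketch of the distinction being discussed (hypothetical names, not the checker's actual code): a fixed-schema validator that disallows unknown fields rejects vendor extension fields, while a lenient decoder tolerates them, as OSB v2.14 permits.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// servicePlan models only the fields a fixed schema knows about.
type servicePlan struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// strictDecode fails on any field not declared in servicePlan,
// mimicking a fixed-schema validation check.
func strictDecode(data []byte) (servicePlan, error) {
	var p servicePlan
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	err := dec.Decode(&p)
	return p, err
}

// lenientDecode silently ignores vendor extension fields.
func lenientDecode(data []byte) (servicePlan, error) {
	var p servicePlan
	err := json.Unmarshal(data, &p)
	return p, err
}

func main() {
	payload := []byte(`{"id":"p1","name":"small","x-vendor-extra":true}`)
	if _, err := strictDecode(payload); err != nil {
		fmt.Println("strict: rejected vendor field")
	}
	if p, err := lenientDecode(payload); err == nil {
		fmt.Println("lenient: accepted", p.Name)
	}
}
```

The per-version folders mentioned below matter here: a v2.13 checker should keep the strict behavior, while a 2.14 folder can relax it.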
So v2.13 checker shouldn't contain this PR. You may want to create a new folder 2.14 and apply the PR to 2.14
Thanks, will close it for now
|
2025-04-01T04:35:01.845047
| 2023-03-18T08:48:33
|
1630235353
|
{
"authors": [
"codecov-commenter",
"mudit-01"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9424",
"repo": "openservicemesh/osm",
"url": "https://github.com/openservicemesh/osm/pull/5296"
}
|
gharchive/pull-request
|
Removed deprecated functionality
Removed usage of the io/ioutil package, which has been deprecated since Go 1.16.
Signed-off-by: mudit singh<EMAIL_ADDRESS>
Description:
Testing done: Yes
Affected area:
Functional Area
New Functionality
[ ]
CI System
[ ]
CLI Tool
[ ]
Certificate Management
[ ]
Control Plane
[ ]
Demo
[ ]
Documentation
[ ]
Egress
[ ]
Ingress
[ ]
Install
[ ]
Networking
[ ]
Observability
[ ]
Performance
[ ]
SMI Policy
[ ]
Security
[ ]
Sidecar Injection
[ ]
Tests
[X ]
Upgrade
[ ]
Other
[ ]
Please answer the following questions with yes/no.
Does this change contain code from or inspired by another project?
Did you notify the maintainers and provide attribution?
Is this a breaking change?
Has documentation corresponding to this change been updated in the osm-docs repo (if applicable)?
Codecov Report
Merging #5296 (0c0acac) into main (dc3f841) will decrease coverage by 0.02%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #5296 +/- ##
==========================================
- Coverage 69.53% 69.51% -0.02%
==========================================
Files 197 197
Lines 16070 16070
==========================================
- Hits 11174 11171 -3
- Misses 4839 4842 +3
Partials 57 57
Flag
Coverage Δ
unittests
69.51% <0.00%> (-0.02%)
:arrow_down:
Flags with carried forward coverage won't be shown.
Impacted Files
Coverage Δ
cmd/cli/version.go
43.96% <0.00%> (ø)
... and 1 file with indirect coverage changes
Hi @keithmattix @jaellio, the lint error is showing in files that are unchanged by me; is there any upgrade to the lint checker?
@trstringer @keithmattix @jaellio PTAL waiting for your review
|
2025-04-01T04:35:01.846560
| 2023-01-19T13:46:48
|
1549194730
|
{
"authors": [
"osherdp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9425",
"repo": "openshift-assisted/assisted-installer-deployment",
"url": "https://github.com/openshift-assisted/assisted-installer-deployment/pull/335"
}
|
gharchive/pull-request
|
Remove CI reporting code
Now that this is implemented in https://github.com/openshift-assisted/prow-jobs-scraper, we no longer need the daily report and its code.
/cc @adriengentil @danmanor
|
2025-04-01T04:35:01.850894
| 2023-01-18T12:32:00
|
1537924300
|
{
"authors": [
"aliok"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9426",
"repo": "openshift-knative/eventing-kafka-broker",
"url": "https://github.com/openshift-knative/eventing-kafka-broker/pull/555"
}
|
gharchive/pull-request
|
[release 1.6] Don't set ownerRefs on cluster scoped resources
Backport https://github.com/knative-sandbox/eventing-kafka-broker/pull/2911
/cherrypick release-v1.7
/hold
Hold until https://github.com/knative-sandbox/eventing-kafka-broker/pull/2911 is merged
/test 48-test-reconciler-aws-ocp-48
/unhold
https://github.com/knative-sandbox/eventing-kafka-broker/pull/2911 is merged
/override ci/prow/48-test-reconciler-aws-ocp-48
Gonna merge this one without 48-test-reconciler-aws-ocp-48 passing since 411-test-reconciler-aws-ocp-411 is passed.
|
2025-04-01T04:35:01.871264
| 2024-07-10T12:45:16
|
2400665979
|
{
"authors": [
"piyush-garg",
"savitaashture",
"vdemeester"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9427",
"repo": "openshift-pipelines/pipelines-as-code",
"url": "https://github.com/openshift-pipelines/pipelines-as-code/pull/1737"
}
|
gharchive/pull-request
|
.github/workflows: small "step" name fix…
This makes it a little clearer what this step does, compared to the previous one with the exact same name.
Signed-off-by: Vincent Demeester<EMAIL_ADDRESS>
/retest
/test linters
@enarha are you okay with the change
Shall we merge this PR?
|
2025-04-01T04:35:01.875681
| 2024-06-06T08:13:31
|
2337624859
|
{
"authors": [
"danmanor"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9428",
"repo": "openshift/assisted-image-service",
"url": "https://github.com/openshift/assisted-image-service/pull/213"
}
|
gharchive/pull-request
|
NO-ISSUE: [release-ocm-2.8] Use archived dnf repositories for centos8 as the current ones are no longer valid
CentOS Linux 8 had reached the End Of Life (EOL) on December 31st, 2021. It means that CentOS 8 will no longer receive development resources from the official CentOS project. After Dec 31st, 2021, if you need to update your CentOS, you need to change the mirrors to vault.centos.org where they will be archived permanently. Alternatively, you may want to upgrade to CentOS Stream.
from - https://techglimpse.com/failed-metadata-repo-appstream-centos-8/
Description
Assignees
/cc @gamli75
/cc @eifrach
Checklist
[x] Title and description added to both, commit and PR
[x] Relevant issues have been associated
[x] Reviewers have been listed
[x] This change does not require a documentation update (docstring, docs, README, etc)
[ ] Does this change include unit tests (note that code changes require unit tests)
/retitle [release-ocm-2.8] NO-ISSUE: Use archived dnf repositories for centos8 as the current ones are no longer valid
|
2025-04-01T04:35:01.887175
| 2023-03-20T21:57:38
|
1632923666
|
{
"authors": [
"codecov-commenter",
"dustman9000",
"mjlshen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9429",
"repo": "openshift/aws-vpce-operator",
"url": "https://github.com/openshift/aws-vpce-operator/pull/150"
}
|
gharchive/pull-request
|
Improve logging to help isolate a bug
{"level":"debug","ts":"2023-03-20T21:24:01Z","logger":"controller.VpcEndpoint","msg":"Found infrastructure name:","controller":"vpcendpoint","controllerGroup":"avo.openshift.io","controllerKind":"VpcEndpoint","VpcEndpoint":{"name":"service-cluster-endpoint","namespace":"openshift-aws-vpce-operator"},"namespace":"openshift-aws-vpce-operator","name":"service-cluster-endpoint","reconcileID":"46cc7b16-4208-40d6-aa36-47e17d4187d2","name":"hs-sc-ge77jaj60-cw28q"}
{"level":"debug","ts":"2023-03-20T21:24:01Z","logger":"controller.VpcEndpoint","msg":"Found cluster tag:","controller":"vpcendpoint","controllerGroup":"avo.openshift.io","controllerKind":"VpcEndpoint","VpcEndpoint":{"name":"service-cluster-endpoint","namespace":"openshift-aws-vpce-operator"},"namespace":"openshift-aws-vpce-operator","name":"service-cluster-endpoint","reconcileID":"46cc7b16-4208-40d6-aa36-47e17d4187d2","clusterTag":"kubernetes.io/cluster/hs-sc-ge77jaj60-cw28q"}
{"level":"debug","ts":"2023-03-20T21:24:01Z","logger":"controller.VpcEndpoint","msg":"Parsed region from infrastructure","controller":"vpcendpoint","controllerGroup":"avo.openshift.io","controllerKind":"VpcEndpoint","VpcEndpoint":{"name":"service-cluster-endpoint","namespace":"openshift-aws-vpce-operator"},"namespace":"openshift-aws-vpce-operator","name":"service-cluster-endpoint","reconcileID":"46cc7b16-4208-40d6-aa36-47e17d4187d2","region":"us-east-1"}
{"level":"debug","ts":"2023-03-20T21:24:02Z","logger":"controller.VpcEndpoint","msg":"Selecting vpc id","controller":"vpcendpoint","controllerGroup":"avo.openshift.io","controllerKind":"VpcEndpoint","VpcEndpoint":{"name":"service-cluster-endpoint","namespace":"openshift-aws-vpce-operator"},"namespace":"openshift-aws-vpce-operator","name":"service-cluster-endpoint","reconcileID":"46cc7b16-4208-40d6-aa36-47e17d4187d2","vpcId":""}
For reasons that aren't entirely clear yet, somehow we are selecting a vpcId: "", which shouldn't be possible. This PR just adds some additional logging to help isolate how this is happening.
OSD-15465
/lgtm
Codecov Report
Merging #150 (3f49ed7) into main (c312ced) will decrease coverage by 0.08%.
The diff coverage is 0.00%.
Additional details and impacted files
@@ Coverage Diff @@
## main #150 +/- ##
==========================================
- Coverage 41.82% 41.75% -0.08%
==========================================
Files 29 29
Lines 1585 1588 +3
==========================================
Hits 663 663
- Misses 843 846 +3
Partials 79 79
Impacted Files
Coverage Δ
controllers/vpcendpoint/helpers.go
36.36% <0.00%> (-0.08%)
:arrow_down:
pkg/aws_client/vpc_endpoint.go
40.62% <0.00%> (-0.87%)
:arrow_down:
|
2025-04-01T04:35:01.897493
| 2021-03-15T18:59:04
|
832103065
|
{
"authors": [
"adambkaplan",
"gabemontero",
"nalind",
"vikaslaad",
"xiuwang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9430",
"repo": "openshift/builder",
"url": "https://github.com/openshift/builder/pull/224"
}
|
gharchive/pull-request
|
Bug 1939218: bump(containers/*):
Bump containers/image and containers/storage to fix blobinfocache bugs
that caused builds to push image manifests that did not conform to docker
v2schema2. Removed the "replace" clause for containers/image because
buildah can support containers/image v5.10.5.
/assign @nalind
LGTM
/retest
/lgtm
/bugzilla refresh
/hold
Needs openshift/origin#25966 to merge for build suite to pass
@adambkaplan you are also hitting the 4.7 route name length issue with the image-eco tests.
error: Route.route.openshift.io "nodejs-postgresql-example" is invalid: spec.host: Invalid value: "nodejs-postgresql-example-e2e-test-nodejs-postgresql-repo-test-m6qv5.apps.ci-op-13w3dbgf-5958f.origin-ci-int-aws.dev.rhcloud.com": host must conform to DNS 1123 naming conventions: [spec.host: Invalid value: "nodejs-postgresql-example-e2e-test-nodejs-postgresql-repo-test-m6qv5": must be no more than 63 characters]
The PR I have up for that, https://github.com/openshift/origin/pull/25884 , is blocked by other unrelated failures (bz's for which I have noted in that PR).
depending on the urgency, perhaps more ci overrides in our future
/bugzilla cc-qa
/test e2e-aws-image-ecosystem
/retest e2e-aws-builds
/test e2e-aws-builds
Since cluster-bot doesn't work with PRs, QE needs to wait for the PR to merge, then validate on a nightly build.
Checking the failed e2e-aws-builds job, there are a lot of "failed to sync configmap cache" errors in the CI job logs:
https://search.ci.openshift.org/?search=failed+to+sync+configmap+cache&maxAge=48h&context=1&type=build-log&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
/bugzilla refresh
/retest
@vikaslaad e2e-aws-builds is in a state of perma-fail, blocked by https://github.com/openshift/origin/pull/25966
/retest
/retest
/hold cancel
|
2025-04-01T04:35:01.903087
| 2023-09-13T19:29:13
|
1895125176
|
{
"authors": [
"coreydaley",
"nalind"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9431",
"repo": "openshift/builder",
"url": "https://github.com/openshift/builder/pull/358"
}
|
gharchive/pull-request
|
WIP: mount read-only build volumes as "overlay" mounts
Mount build volumes which are marked read-only as "overlay" mounts, so that we can more easily bind mount them into our mount namespace, where attempting to change the mount flags while doing a regular bind mount in an unprivileged namespace would trigger errors. This extends the changes made in #349 to include configMaps and other read-only volumes.
/retest
/retest
/retest
/test all
/retitle bump github.com/containers/buildah
I don't want to have to work around not having https://github.com/openshift/origin/pull/28352, but we can go back to having RunGitClone() omit its name in its log message if we have to.
/retest
/skip
/approve
/lgtm
/label px-approved
/label docs-approved
/label qe-approved
@nalind This needs a bug in OCPBUGS
/title OCPBUGS-23128: bump github.com/containers/buildah to fix transient mounting in chroot isolation
/retitle OCPBUGS-23128: bump github.com/containers/buildah to fix transient mounting in chroot isolation
|
2025-04-01T04:35:01.911598
| 2023-11-30T03:09:38
|
2017790115
|
{
"authors": [
"TrilokGeer",
"lunarwhite",
"swghosh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9432",
"repo": "openshift/cert-manager-operator",
"url": "https://github.com/openshift/cert-manager-operator/pull/168"
}
|
gharchive/pull-request
|
CM-230: Add e2e test for operands override resources
Mainly refers to the override args logic.
adds util funcs addOverrideResources and verifyDeploymentResources
adds 1 valid and 1 invalid case for each operand: controller, cainjector, webhook
It's the initial attempt to add a new e2e test in the dev repo.
Test profile: aos-4_14/ipi-on-aws/versioned-installer-ci
Pass log
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding valid cert-manager controller override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:146
should add the resources to the cert-manager controller deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:148
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:25.332
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:04:25.332
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:04:26.399
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:26.874 (1.54s)
> Enter [It] should add the resources to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:148 @ 11/30/23 00:04:26.874
STEP: Adding cert-manager controller override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:150 @ 11/30/23 00:04:26.874
STEP: Waiting for cert-manager controller status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:164 @ 11/30/23 00:04:27.359
STEP: Waiting for the resources to be added to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:168 @ 11/30/23 00:04:27.601
< Exit [It] should add the resources to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:148 @ 11/30/23 00:04:27.84 (965ms)
• [2.505 seconds]
------------------------------
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding valid cert-manager webhook override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:174
should add the resources to the cert-manager webhook deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:176
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:27.84
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:04:27.84
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:04:28.918
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:31.667 (3.825s)
> Enter [It] should add the resources to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:176 @ 11/30/23 00:04:31.667
STEP: Adding cert-manager webhook override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:178 @ 11/30/23 00:04:31.668
STEP: Waiting for cert-manager webhook controller status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:192 @ 11/30/23 00:04:32.468
STEP: Waiting for the resources to be added to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:196 @ 11/30/23 00:04:44.045
< Exit [It] should add the resources to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:176 @ 11/30/23 00:04:44.284 (12.612s)
• [16.437 seconds]
------------------------------
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding valid cert-manager cainjector override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:202
should add the resources to the cert-manager cainjector deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:204
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:44.284
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:04:44.284
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:04:45.493
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:04:56.339 (12.053s)
> Enter [It] should add the resources to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:204 @ 11/30/23 00:04:56.339
STEP: Adding cert-manager cainjector override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:206 @ 11/30/23 00:04:56.339
STEP: Waiting for cert-manager cainjector controller status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:220 @ 11/30/23 00:04:56.817
STEP: Waiting for the resources to be added to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:224 @ 11/30/23 00:05:00.297
< Exit [It] should add the resources to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:204 @ 11/30/23 00:05:00.635 (4.296s)
• [16.349 seconds]
------------------------------
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding invalid cert-manager controller override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:230
should not add the resources to the cert-manager controller deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:232
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:00.636
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:05:00.636
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:05:02.048
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:04.628 (3.992s)
> Enter [It] should not add the resources to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:232 @ 11/30/23 00:05:04.628
STEP: Adding cert-manager controller override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:234 @ 11/30/23 00:05:04.628
STEP: Waiting for cert-manager controller status to become degraded - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:246 @ 11/30/23 00:05:05.245
STEP: Checking if the resources are not added to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:250 @ 11/30/23 00:05:05.551
< Exit [It] should not add the resources to the cert-manager controller deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:232 @ 11/30/23 00:05:05.799 (1.171s)
• [5.163 seconds]
------------------------------
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding invalid cert-manager webhook override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:256
should not add the resources to the cert-manager webhook deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:258
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:05.799
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:05:05.799
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:05:06.82
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:07.395 (1.595s)
> Enter [It] should not add the resources to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:258 @ 11/30/23 00:05:07.395
STEP: Adding cert-manager webhook override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:260 @ 11/30/23 00:05:07.395
STEP: Waiting for cert-manager webhook controller status to become degraded - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:272 @ 11/30/23 00:05:08.012
STEP: Checking if the resources are not added to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:276 @ 11/30/23 00:05:08.251
< Exit [It] should not add the resources to the cert-manager webhook deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:258 @ 11/30/23 00:05:08.492 (1.098s)
• [2.693 seconds]
------------------------------
Overrides test
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:16
When adding invalid cert-manager cainjector override resources
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:282
should not add the resources to the cert-manager cainjector deployment
/Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:284
> Enter [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:08.493
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:19 @ 11/30/23 00:05:08.493
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:23 @ 11/30/23 00:05:09.464
< Exit [BeforeEach] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:18 @ 11/30/23 00:05:09.954 (1.461s)
> Enter [It] should not add the resources to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:284 @ 11/30/23 00:05:09.954
STEP: Adding cert-manager cainjector override resources to the cert-managaer operator object - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:286 @ 11/30/23 00:05:09.954
STEP: Waiting for cert-manager cainjector controller status to become degraded - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:298 @ 11/30/23 00:05:10.442
STEP: Checking if the resources are not added to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:302 @ 11/30/23 00:05:10.681
< Exit [It] should not add the resources to the cert-manager cainjector deployment - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:284 @ 11/30/23 00:05:10.978 (1.025s)
> Enter [AfterAll] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:308 @ 11/30/23 00:05:10.979
STEP: Reset cert-manager state - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:309 @ 11/30/23 00:05:10.979
STEP: Waiting for operator status to become available - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:313 @ 11/30/23 00:05:12.076
< Exit [AfterAll] Overrides test - /Users/yuewu/Documents/workspace/fork/cert-manager-operator/test/e2e/overrides_test.go:308 @ 11/30/23 00:05:12.727 (1.749s)
• [4.235 seconds]
------------------------------
/cc @swghosh @xingxingxia
/uncc @deads2k @stlaz
/label tide/merge-method-squash
/lgtm
/label qe-approved
(this PR is from QE itself)
/label docs-approved
(no doc changes required by this PR)
/label px-approved
(no user facing changes introduced by this PR)
@TrilokGeer need your approval to merge this PR.
TIA!
/assign @TrilokGeer
/lgtm
/approve
|
2025-04-01T04:35:01.929764
| 2023-09-22T13:15:46
|
1908891909
|
{
"authors": [
"ardaguclu",
"dgrisonnet",
"gangwgr",
"kasturinarra",
"knelasevero",
"soltysh",
"tkashem"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9433",
"repo": "openshift/cluster-kube-apiserver-operator",
"url": "https://github.com/openshift/cluster-kube-apiserver-operator/pull/1556"
}
|
gharchive/pull-request
|
[release-4.14] OCPBUGS-19553: Update static pod manifests perms
This PR bumps library-go to get the latest static pod permissions (more specifically https://github.com/openshift/library-go/pull/1576).
@dgrisonnet could you also PTAL at this backport? (We will have to do this for 4.13 and 4.12.) Thanks.
@knelasevero could you also PTAL at this backport? (We will have to do this for 4.13 and 4.12.) Thanks.
@ardaguclu do you want me to update https://github.com/openshift/cluster-kube-apiserver-operator/pull/1544 and get it merged?
Thanks. You can update yours (and I'll tag it) or we can continue with the one I opened: https://github.com/openshift/cluster-kube-apiserver-operator/pull/1557
LGTM
/hold cancel
Have pre-merge tested and seen the right permissions, thanks!
/label cherry-pick-approved
/label backport-risk-assessed
/label cherry-pick-approved
maybe manual triggering can work;
/jira refresh
I'm not sure why @tkashem's label request above has been ignored;
/label backport-risk-assessed
/label backport-risk-assessed
|
2025-04-01T04:35:01.930787
| 2019-01-21T18:20:27
|
401465359
|
{
"authors": [
"deads2k",
"sttts"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9434",
"repo": "openshift/cluster-kube-apiserver-operator",
"url": "https://github.com/openshift/cluster-kube-apiserver-operator/pull/212"
}
|
gharchive/pull-request
|
move to openshift/api types
switches us to use the new API type
@tnozicka is there anything I have to do the makefile?
/retest
|
2025-04-01T04:35:01.984489
| 2020-08-31T10:25:53
|
689094996
|
{
"authors": [
"codecov-commenter",
"thiagoalessio",
"vfreex"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9435",
"repo": "openshift/doozer",
"url": "https://github.com/openshift/doozer/pull/271"
}
|
gharchive/pull-request
|
[ART-2186] Remove non-metadata files from manifests
Apparently some repos contain additional files (YAML or not) that are
present in the manifests for other purposes, irrelevant to appregistry,
and that is causing CVP tests to fail.
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@4a8f6a0). Click here to learn what that means.
The diff coverage is 44.44%.
@@ Coverage Diff @@
## master #271 +/- ##
=========================================
Coverage ? 27.36%
=========================================
Files ? 30
Lines ? 6410
Branches ? 1293
=========================================
Hits ? 1754
Misses ? 4594
Partials ? 62
Impacted Files
Coverage Δ
doozerlib/operator_metadata.py
79.06% <44.44%> (ø)
doozerlib/pushd.py
95.65% <0.00%> (ø)
doozerlib/runtime.py
27.62% <0.00%> (ø)
doozerlib/distgit.py
32.64% <0.00%> (ø)
doozerlib/rpmcfg.py
12.73% <0.00%> (ø)
doozerlib/source_modifications.py
68.29% <0.00%> (ø)
doozerlib/__init__.py
54.54% <0.00%> (ø)
doozerlib/brew.py
40.16% <0.00%> (ø)
doozerlib/config.py
0.00% <0.00%> (ø)
doozerlib/exectools.py
46.87% <0.00%> (ø)
... and 21 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4a8f6a0...9dcd297. Read the comment docs.
/lgtm
|
2025-04-01T04:35:01.989063
| 2019-11-13T19:46:27
|
522433720
|
{
"authors": [
"vfreex",
"yazug"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9436",
"repo": "openshift/elliott",
"url": "https://github.com/openshift/elliott/issues/84"
}
|
gharchive/issue
|
Python 3 support
In starting to look at elliott I noticed a number of possible issues with python3 support in the code base. Below are a few that I saw so far.
tox missing py3 target (available in the python-3 branch)
setup.py is missing test deps
print() issue
urlparse error
unicode
absolute vs relative imports
iteritems
dict values() is view not a list in python3
contextlib.nested was replaced by ExitStack (in a unittest)
unittests and elliottlib.assertions.FileNotFoundError and ChildProcessError are not working in py3
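A minimal sketch (illustrative only, not elliott code) of what several of the items above look like once ported to Python 3 — urlparse moved into urllib.parse, iteritems is gone, and values() returns a view:

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

d = {"a": 1, "b": 2}

# Python 2's d.iteritems() is gone; items() already iterates lazily in Python 3.
items = list(d.items())

# values() returns a view object in Python 3, so materialize it when a list is needed.
values = list(d.values())

print(urlparse("https://example.com/path").netloc)  # prints "example.com"
print(values)  # prints "[1, 2]"
```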
I did see pull request #82 started looking at this
Looks like there is topic branch of python-3 with some work
Fixed by #85 and #104.
|
2025-04-01T04:35:01.993462
| 2023-03-13T13:24:28
|
1621485709
|
{
"authors": [
"christianvogt",
"karthikjeeyar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9437",
"repo": "openshift/hac-dev",
"url": "https://github.com/openshift/hac-dev/pull/475"
}
|
gharchive/pull-request
|
Support tekton minimal status configuration
Fixes
https://issues.redhat.com/browse/HAC-3352
Description
When the embedded-status configuration is set to minimal, the status.taskruns field is removed from pipelinerun.status. This PR fetches the taskruns separately instead of reading from pipelinerun.status.taskruns.
Type of change
[x] Feature
Screen shots / Gifs for design review
No UI changes
Pipelinerun/Taskrun/logs should work as before.
https://user-images.githubusercontent.com/9964343/224714069-4579d68b-ffdd-44e8-a1e9-2dda642866d1.mov
Browser conformance:
[x] Chrome
[x] Firefox
[x] Safari
[ ] Edge
cc: @christianvogt
/retest
/lgtm
/retest
|
2025-04-01T04:35:01.995821
| 2019-04-26T12:25:16
|
437648302
|
{
"authors": [
"dgoodwin",
"jhernand",
"nimrodshn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9438",
"repo": "openshift/hive",
"url": "https://github.com/openshift/hive/pull/347"
}
|
gharchive/pull-request
|
Use +genclient for the configuration object
The description of the HiveConfig type has the
+genclient:nonNamespaced comment, but not +genclient. Both need to
be present, otherwise the code generation tools don't generate a client
for the type.
@dgoodwin @nimrodshn please review.
/retest
/test e2e
/test e2e
@jhernand LGTM :+1:
/lgtm
|
2025-04-01T04:35:01.996789
| 2019-10-15T15:02:16
|
507305573
|
{
"authors": [
"staebler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9439",
"repo": "openshift/hive",
"url": "https://github.com/openshift/hive/pull/597"
}
|
gharchive/pull-request
|
controllers: simplify conds changed check in controlplanecerts
The code in the controlplanecerts controller that sets the conditions does a deep copy of the clusterdeployment and a full reflect comparison. This is inefficient when there is already a function that can tell us whether the conditions changed.
/test e2e-gcp
|
2025-04-01T04:35:02.000672
| 2023-03-02T14:56:00
|
1606958623
|
{
"authors": [
"enxebre"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9440",
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/2239"
}
|
gharchive/pull-request
|
Skip pod restart check for NTO
What this PR does / why we need it:
NTO is restarting a few times, making all presubmits fail; skipping this check until it is solved. A discussion about a fix is taking place.
Which issue(s) this PR fixes (optional, use fixes #<issue_number>(, fixes #<issue_number>, ...) format, where issue_number might be a GitHub issue, or a Jira story:
Fixes #
Checklist
[ ] Subject and description added to both, commit and PR.
[ ] Relevant issues have been referenced.
[ ] This change includes docs.
[ ] This change includes unit tests.
/area hypershift-operator
ok, seems https://github.com/openshift/hypershift/pull/2239#issuecomment-1452031849 is not blocking at all.
|
2025-04-01T04:35:02.002132
| 2021-11-29T16:59:41
|
1066276713
|
{
"authors": [
"alvaroaleman",
"csrwng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9441",
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/722"
}
|
gharchive/pull-request
|
Hypershift CLI: Dont use json-encoding for logs
JSON-Encoding makes sense for long-running components but not for a CLI
that is only ever invoked by a human. Also changes the time encoding to
be RFC3339 instead of unix epoch.
/cc @csrwng
/lgtm
|
2025-04-01T04:35:02.027851
| 2024-12-02T07:45:43
|
2710915148
|
{
"authors": [
"codecov-commenter",
"onmete",
"tisnik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9442",
"repo": "openshift/lightspeed-service",
"url": "https://github.com/openshift/lightspeed-service/pull/1983"
}
|
gharchive/pull-request
|
Use datetime.now instead of utcnow
Description
Use datetime.now instead of utcnow
Python datetime objects can be naive or timezone-aware. While an aware
object represents a specific moment in time, a naive object does not
contain enough information to unambiguously locate itself relative to other
datetime objects. Since this can lead to errors, it is recommended to
always use timezone-aware objects.
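For illustration (a generic sketch, not code from this PR), the difference between the two kinds of objects:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # no tzinfo attached; ambiguous on its own
aware = datetime.now(timezone.utc)  # carries an explicit UTC offset

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC

# Comparing naive and aware objects raises TypeError, which is why
# mixing them is a common source of bugs.
```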
Type of change
[x] Refactor
[ ] New feature
[ ] Bug fix
[ ] CVE fix
[ ] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
[ ] Bump-up library or tool used for development (does not change the final image)
[ ] CI configuration change
[ ] Konflux configuration change
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 74.78%. Comparing base (e1ae8a7) to head (7486b55).
Report is 33 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #1983 +/- ##
===========================================
- Coverage 96.95% 74.78% -22.17%
===========================================
Files 69 68 -1
Lines 2886 2883 -3
===========================================
- Hits 2798 2156 -642
- Misses 88 727 +639
Files with missing lines
Coverage Δ
ols/app/endpoints/feedback.py
100.00% <100.00%> (ø)
... and 35 files with indirect coverage changes
/lgtm
|
2025-04-01T04:35:02.092446
| 2020-07-15T15:27:11
|
657442490
|
{
"authors": [
"Shraddhak22",
"groeges",
"jaideepr97",
"neeraj-laad",
"sbose78"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9443",
"repo": "openshift/odo",
"url": "https://github.com/openshift/odo/issues/3583"
}
|
gharchive/issue
|
Support Kubernetes build and deploy support in odo
/kind feature
Which functionality do you think we should add?
The following proposal adds support for building a container image and deploying it:
https://github.com/openshift/odo/blob/master/docs/proposals/odo-deploy.md
There is a PR open to support this capability on OpenShift: https://github.com/openshift/odo/pull/3478
We should add support for any Kubernetes cluster as a target. At a high level this will involve:
detect when we are not running on Openshift
use kaniko for building the container image
accept image registry credentials from the user as argument and use it to push the built image
support Ingress while detecting the deployed application URL
Why is this needed?
Provide generic Kubernetes support for odo deploy
@sbose78 @wtam2018 @jaideepr97 - Lets keep all discussions on Kubernetes path for odo deploy here.
As @EnriqueL8 has mentioned, you can use some aspects from https://github.com/EnriqueL8/odo/tree/buildah_deploy. This was the work we had started.
We started looking at buildah first , but were not sure if we could use it without privileged access, were looking to make a switch to kaniko.
Raising this comment as a potential change that might be needed when supporting Ingress on Kube (and/or OCP).
This was something we found when working on odo deploy while attempting to use a deployment manifest that contained an Ingress, ie
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
name: {{.COMPONENT_NAME}}
spec:
rules:
- host: {{.COMPONENT_NAME}}-{{.NAMESPACE}}-{{.ROUTE_SUFFIX}}
http:
paths:
- path: /
backend:
serviceName: {{.COMPONENT_NAME}}
servicePort: {{.PORT}}-tcp
The spec.rules.host seems to need to have a host value defined in order to correctly setup the remote URL.
The {{.COMPONENT_NAME}} and {{.NAMESPACE}} template variables are already available within odo but the {{.ROUTE_SUFFIX}} is not easily obtainable (specifically from OCP - not sure about Kube).
This may be OCP specific as I have only tried this on Kube running in my Docker Desktop (on Mac), and that Kube setup didn't need the host and just used localhost. Just highlighting this in case it is needed on a full Kube system.
There might be a need to include a --host (or some suitable named flag) on the odo deploy command in order to provide this info when using an Ingress.
@neeraj-laad @EnriqueL8 @groeges @sbose78 @wtam2018
Hey folks,
At the moment, we are leaning more towards going with Buildah as the first build strategy to support for the following reasons:
RedHat has an officially supported image on registry.redhat.io whereas this is not the case for Kaniko
If needed we can reach out to the Buildah team to request changes/features that align with our requirements, whereas it would
not be so with Kaniko
Buildah is working towards supporting unprivileged builds in the foreseeable future
Buildah has better compatibility with service accounts and it is relatively easier for buildah to read creds from the associated
service account the pod is running as
If there isn't an immediate need for unprivileged builds in our collective vision for what odo deploy should be doing in the near future, we think starting with Buildah makes sense for now. We could work to add Kaniko support at a later stage -- and we'll continue to work with the team to make unprivileged builds using Buildah work in the coming months
But it would be useful to get everybody's perspective, and understanding the nature of the use cases or other considerations being made on your side would help drive the decision that would be more in line with our vision
cc @Shraddhak22 @ranakan19 @reginapizza
+1
We started looking at buildah first , but were not sure if we could use it without privileged access,
Note, there are few other places we are trying to get buildah unprivileged work
https://github.com/redhat-developer/build/issues/134#issuecomment-619494883
Sounds good to me. Our preference was to start with buildah too. The only reason for switching from buildah to kaniko was need for privileged access. As long as we can solve that, this seems like a good plan.
@jaideepr97 I'm keen to hear about your plans on handling registry credentials. Have you started thinking about that yet?
@jaideepr97 I'm keen to hear about your plans on handling registry credentials. Have you started thinking about that yet?
@neeraj-laad haven't gotten into the weeds yet, but the initial idea is that the --credentials flag in odo deploy would point to the location of the dockerconfig file with external registry credentials, and odo deploy could leverage that file to create a secret in the specified namespace before spinning up the builder pod
The Buildah container in the pod can then just access this secret through a volume mount and use it to push the image to the registry
@jaideepr97 you might want to assign this issue to yourself, so odo team knows this is coming from you.
@jaideepr97 you might want to assign this issue to yourself, so odo team knows this is coming from you. Perhaps even put it under right milestone etc.
@neeraj-laad will do, thanks for all your help!
/assign jaideepr97
@kadel
@neeraj-laad @EnriqueL8 ,
Hello Folks,
After further discussion with team and research on buildah unprivileged build, we have decided to go ahead with,
Buildah strategy driven by BuildConfig on OpenShift clusters.
Kaniko dockerfile strategy for Kubernetes
Please let us know your thoughts.
cc @sbose78 @wtam2018 @jaideepr97 @reginapizza @ranakan19
|
2025-04-01T04:35:02.113787
| 2020-06-17T11:28:06
|
640366741
|
{
"authors": [
"adisky",
"girishramnani",
"jaideepr97",
"kadel",
"neeraj-laad",
"sbose78"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9444",
"repo": "openshift/odo",
"url": "https://github.com/openshift/odo/pull/3368"
}
|
gharchive/pull-request
|
Proposal for odo deploy
What type of PR is this?
/kind design
[skip ci]
What does this PR do / why we need it:
This is a design proposal for adding limited outer-loop capabilities to odo. Additional information on this is captured here: #3300
Which issue(s) this PR fixes:
Relates to: #3300
How to test changes / Special notes to the reviewer:
This is a design proposal, once the design is agreed upon we can submit a PR for the implementation. We might start making some progress on a prototype based on this proposal and then adjust as the design evolves.
Some questions -
Would the odo log work with the deployed slim container?
Would the odo exec command be allowed to run commands on the deployed prod container?
What kind of kubernetes resource would the Image be deployed as? Like a low level Pod resource or a replicated Deployment or something else?
Would we allow configure health checks (liveness probes)?
/ok-to-test
Would the odo log work with the deployed slim container?
yes, but we can address this later.
Would the odo exec command be allowed to run commands on the deployed prod container?
yes, but we can address this later.
What kind of kubernetes resource would the Image be deployed as? Like a low level Pod resource or a replicated Deployment or something else?
That will be complete under the devfile creator control. deployment-manifest: could point to anything. It can be Pod, Deployment, Knative Service, or CR, or even multiple resources-
Would we allow configure health checks (liveness probes)?
Same as above, it will depend on what will be defined in deployment-manifest should be able to add there almost anything they want.
@girishramnani
Would the odo log work with the deployed slim container?
I have to admit I had not thought about this up till now, so based on current thinking No. It would be good to not intermingle the commands for inner-loop/outer-loop too much. But we can put this as a future piece if we think it might be useful.
Would the odo exec command be allowed to run commands on the deployed prod container?
I would think No - If this is a production-like container, I do not see why you would need to do this. Can you think of a reason why this would be useful?
What kind of kubernetes resource would the Image be deployed as? Like a low level Pod resource or a replicated Deployment or something else?
We would like to support any Kubernetes resource. So devfile creator can decide what makes most sense. standard k8s deployment, knative service, operator custom resource etc.
Would we allow configure health checks (liveness probes)?
My current thought is that we will not do anything over and above what the devfile creator specifies. If they want specific health/liveness probes they need to provide those in the deployment manifest. or maybe consider this as a future piece.
@neeraj-laad Could you please address in your proposal why we aren't going to use the BuildConfig Dockerfile strategy in OpenShift ?
@girishramnani @kadel @dharmit
Would it be messy if we had a couple of if-else blocks where
if openshift, use buildconfig and push to imagestream
if non-openshift, use plain pods, and push to quay.io-ish registry with credentials.
Would it be messy if we had a couple of if-else blocks where
if openshift, use buildconfig and push to imagestream
if non-openshift, use plain pods, and push to quay.io-ish registry with credentials.
This would be ok. We already doing something similar for URLs where odo creates Routes when working with OpenShift and Ingress when working with Kubernetes
@sbose78 @kadel I have updated to proposal to use BuildConfig if present on cluster or use kaniko if not.
This will mean the first PR could provide OpenShift support using BuildConfig. and a follow on PR could add support for generic Kubernetes with Kaniko.
/lgtm
overriding CI, we don't need tests for this PR
/override ci/prow/v4.5-integration-e2e
/override ci/prow/v4.4-integration-e2e
/override ci/prow/v4.3-integration-e2e
/override ci/prow/v4.2-integration-e2e
Hi folks, based on my understanding of this proposal, I had a few questions (apologies for any trivial/previously answered questions) :-
odo deploy flags mentions multiple ways to handle flag inputs (registry credentials etc) depending on the type of target( k8/OS) - is the target specified in the devfile or supplied in some other way?
I understand that as of now odo deploy is intended to only be used for deployments in the dev environment. However, I was wondering, given enough time to mature through successive improvements, if there was any likelihood of configuring existing pipelines to leverage odo deploy internally to deploy applications to prod and other environments down the line
A couple peripheral questions that also seemed relevant to me :
Is the intention of having a devfile that the user would not have to worry about any of the deployment details at all? Because as per my understanding the devfile points to variable resources like the deployment manifest and source URL which would probably be specific to an application and can't just be picked up by odo deploy from a stock devfile for a particular language/framework
Has the structure of the devfile 2.0 been fixed or is it still being debated? The link for the documentation (https://devfile.github.io/website/) provided on the kubernetes-api github page seems to be broken (https://github.com/devfile/kubernetes-api)
Appreciate any inputs!
@jaideepr97 Sorry missed this message as the PR got merged.
odo deploy flags mentions multiple ways to handle flag inputs (registry credentials etc) depending on the type of target ( k8s/OS) - is the target specified in the devfile or supplied in some other way?
The intent was to use a fully qualified image name (registry, repo, name, version etc.) to be supplied as the --tag argument, with the corresponding credentials supplied via --credentials. That said, the first implementation will be using OpenShift BuildConfig, so we will not need --tag and --credentials for that case.
I understand that as of now odo deploy is intended to only be used for deployments in the dev environment. However, I was wondering, given enough time to mature through successive improvements, if there was any likelihood of configuring existing pipelines to leverage odo deploy internally to deploy applications to prod and other environments at some point down the line
It could be, though using a developer CLI tool in a pipeline would be a bit odd. Instead, odo pipelines init/bootstrap will use the same code to provide appropriate build/deploy artifacts that the pipelines can use. This will allow the pipelines to be generic and not have to rely on odo.
Is the intention of having a devfile that the user would not have to worry about any of the deployment details at all? Because as per my understanding the devfile points to variable resources like the deployment manifest and source URL which would probably be specific to an application and can't just be picked up by odo deploy from a stock devfile for a particular language/framework
The intent is that the devfile can provide default dockerfile/deployment manifests that will work with standard apps for that technology/framework. If the developer wants to override this, they do have the capability to do so by utilising standard inheritance and override capabilities that will be available to devfiles.
Has the structure of the devfile 2.0 been fixed or is it still being debated? The link for the documentation (https://devfile.github.io/website/) provided on the kubernetes-api github page seems to be broken (https://github.com/devfile/kubernetes-api)
I would think this would be temporary. It is close to being finalised, but this feature suggested no changes to the devfile spec; proper syntax and first-class support will be added as part of Devfile 2.1.0.
|
2025-04-01T04:35:02.188027
| 2016-11-28T17:41:46
|
192070097
|
{
"authors": [
"joelddiaz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9445",
"repo": "openshift/openshift-tools",
"url": "https://github.com/openshift/openshift-tools/pull/1645"
}
|
gharchive/pull-request
|
wait rather than exit while the iptables lock is held
rather than wait forever, bound the wait to 10 minutes
👍 #1644 in stg
|
2025-04-01T04:35:02.360305
| 2015-02-16T19:40:12
|
57843648
|
{
"authors": [
"mfojtik",
"rhcarvalho"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9446",
"repo": "openshift/source-to-image",
"url": "https://github.com/openshift/source-to-image/pull/128"
}
|
gharchive/pull-request
|
Fix typos
@mfojtik asked, so here we go!
typokiller found very few typos here, awesome!
awesome! thanks!
[merge]
|
2025-04-01T04:35:02.362231
| 2015-08-31T17:11:26
|
104093003
|
{
"authors": [
"bparees",
"hhorak",
"mfojtik"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9447",
"repo": "openshift/sti-base",
"url": "https://github.com/openshift/sti-base/pull/64"
}
|
gharchive/pull-request
|
Add entrypoint so the commands as CMD on non-standard path are found
With current sti-base image the following runs python 2.7, while one would expect python 3.3:
#> docker pull openshift3/python-34-rhel7
#> docker run --rm -ti openshift3/python-34-rhel7 python --version
Python 2.7.5
Adding an entrypoint ensures the correct PATH is also set at the time the CMD is resolved.
[test]
[test]
lgtm.
|
2025-04-01T04:35:02.420130
| 2017-07-17T19:31:14
|
243501548
|
{
"authors": [
"zanedb"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9448",
"repo": "openssf/open-journal-android",
"url": "https://github.com/openssf/open-journal-android/issues/1"
}
|
gharchive/issue
|
ListView displays over custom toolbar
On the dev branch, where there is a custom Toolbar on HomeActivity (with a search icon), adding a note causes the ListView to display over the custom Toolbar. I will fix this soon.
Fixed. Will be up in next commit.
Just pushed, fixed now!
|
2025-04-01T04:35:02.422245
| 2020-01-21T21:00:16
|
553126338
|
{
"authors": [
"kaduk",
"nhorman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9449",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/10922"
}
|
gharchive/issue
|
Add BIO_get_conn_mode() missing accessor
We have a BIO_set_conn_mode() API that directly sets the entire connect_mode word, but also have more fine-grained operations like BIO_C_SET_NBIO to tweak just a single flag.
Is the expectation that we have a dedicated control operation per flag (and that the generic setter is to be deprecated), or should we add a generic getter so that applications can control the behavior they're interested in?
This originally arose in the context of #8962 but is perhaps more general, so I am raising a separate issue for it.
I don't know the history, but my observation has been that generally all control operations are fine grained, using BIO_ctrl, with wrapper macros like BIO_set_conn_mode created where needed/requested to make their calling context a bit more readable.
So I think the answer to your question is that you can expect a mix of both.
Hope that helps
Marking as inactive, to be closed at the end of 3.4 dev, barring further input
|
2025-04-01T04:35:02.427104
| 2020-08-28T20:51:44
|
688348478
|
{
"authors": [
"Croydon",
"jeroen",
"kroeckx",
"levitte"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9450",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/12743"
}
|
gharchive/issue
|
configure script fails if path contains "@" symbol
In Homebrew, the formula will be called "openssl@3" and get installed in /usr/local/Cellar/openssl@3. I noticed that the @ symbol in the path seems to break the configure script as below. This did not happen for <EMAIL_ADDRESS>.
perl ./Configure <EMAIL_ADDRESS> --openssldir=/usr/local/etc/openssl3 no-ssl3 no-ssl3-method no-zlib darwin64-x86_64-cc enable-ec_nistp_64_gcc_128
make
#Last 15 lines from<EMAIL_ADDRESS>#2020-08-28 22:43:13 +0200
#
#make
#
#Makefile:2: *** missing separator. Stop.
mind giving us the output of head -5 Makefile?
Program fragment delivered error ``Can't locate platform.pm in @INC (you may need to install the platform module) (@INC contains: . /private/tmp/openssl-20200829-19268-1d88s2b/openssl-3.0.0-alpha6/Configurations /private/tmp/openssl-20200829-19268-1d88s2b/openssl-3.0.0-alpha6/util/perl<EMAIL_ADDRESS>/Library/Perl/5.18/darwin-thread-multi-2level /Library/Perl/5.18 /Network/Library/Perl/5.18/darwin-thread-multi-2level /Network/Library/Perl/5.18 /Library/Perl/Updates/5.18.4 /System/Library/Perl/5.18/darwin-thread-multi-2level /System/Library/Perl/5.18 /System/Library/Perl/Extras/5.18/darwin-thread-multi-2level /System/Library/Perl/Extras/5.18<EMAIL_ADDRESS>at (eval 6) line 5.
BEGIN failed--compilation aborted at (eval 6) line 5.''
##
## Makefile for OpenSSL
##
Seems to be a duplicate of #12078
Please try the fix in #13225
I don't think this fixed it. Now seeing this:
make
perl "-I." "-Idoc" -Mconfigdata -Mperlvars "util/dofile.pl" "-oMakefile" doc/man1/openssl-asn1parse.pod.in > doc/man1/openssl-asn1parse.pod
Can't locate platform.pm in @INC (you may need to install the platform module) (@INC contains: . /private/tmp/openssl.0-20201023-10411-hewbzl/openssl-3.0.0-alpha7/util/../Configurations<EMAIL_ADDRESS>doc /Library/Perl/5.18/darwin-thread-multi-2level /Library/Perl/5.18 /Network/Library/Perl/5.18/darwin-thread-multi-2level /Network/Library/Perl/5.18 /Library/Perl/Updates/5.18.4 /System/Library/Perl/5.18/darwin-thread-multi-2level /System/Library/Perl/5.18 /System/Library/Perl/Extras/5.18/darwin-thread-multi-2level /System/Library/Perl/Extras/5.18<EMAIL_ADDRESS><EMAIL_ADDRESS>at (eval 6) line 4.
BEGIN failed--compilation aborted at (eval 6) line
Please check if /private/tmp/openssl.0-20201023-14132-zhzxv1/openssl-3.0.0-alpha7/util/../Configurations contains platform.pm. It should...
I can confirm @jeroen's report
Logs:
https://github.com/conan-io/conan-center-index/pull/2054#issuecomment-714911728
Ah, found the culprit. #13225 is updated, please try once more?
Thanks, it now works with the latest #13225 patch!
Works for Conan too. All 88 configurations build now again 👍
|
2025-04-01T04:35:02.429100
| 2021-09-21T13:08:57
|
1002452020
|
{
"authors": [
"TheKinrar",
"t8m"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9451",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/16649"
}
|
gharchive/issue
|
File encrypted on x86 linux cannot be decrypted on M1 Mac
Hi!
Encrypting a file on ArchLinux x86:
openssl enc -aes-256-cfb -in file.txt -out file.txt.enc -k "enc_key"
Then decrypting on a Mac M1:
openssl enc -d -aes-256-cfb -in file.txt.enc -out file.txt -k "enc_key"
Results in garbled output in file.txt on the M1 side. Tried using base64. Encrypting then decrypting on the same machine works as intended.
Which openssl version(s) are they?
Closing for no response.
|
2025-04-01T04:35:02.436771
| 2022-02-07T20:18:38
|
1126453627
|
{
"authors": [
"mattcaswell",
"tomato42"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9452",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/17656"
}
|
gharchive/issue
|
Unexpected fragmentation with large writes and max_fragment_length=4096
Using current master (eafd3e9d07e9).
When using openssl s_server -rev to make openssl send large records, I've noticed that every line size sent to the server is handled as expected: when I send 11111 bytes, I get 11111 bytes back. Similarly, when I negotiate the max_fragment_length extension, the server answers with packets fragmented the same way the client sent them. Negotiate 4096-byte records, send two 4096-byte and one 512-byte record, and I receive three records back: two 4096 bytes long and one 512 bytes long.
That is with the exception of negotiation of 4096 byte max_fragment_length and 16KiB long application data line, then server replies with 5 records, four 3584 byte long and one 2048 byte long.
This does not happen without max_fragment_length, or with smaller fragment sizes like 2048, 1024 or 512.
It also happens with ciphers with different overhead, like TLS_RSA_WITH_AES_128_CBC_SHA, or TLS_RSA_WITH_AES_128_CBC_SHA256.
But not with TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Reproducer:
openssl req -x509 -newkey rsa -keyout localhost.key -out localhost.crt -subj /CN=localhost -nodes -batch
openssl s_server -key localhost.key -cert localhost.crt -rev -cipher @SECLEVEL=0:ALL -quiet
in another console:
git clone https://github.com/tomato42/tlsfuzzer
pushd tlsfuzzer
# won't be needed after https://github.com/tlsfuzzer/tlsfuzzer/pull/762 gets merged
git checkout length-testing
git clone https://github.com/tlsfuzzer/tlslite-ng .tlslite-ng
ln -s .tlslite-ng/tlslite tlslite
git clone https://github.com/tlsfuzzer/python-ecdsa .python-ecdsa
ln -s .python-ecdsa/src/ecdsa ecdsa
PYTHONPATH=. python scripts/test-lengths.py --size-limit 4096 -n 0 "length: $((1024*16))"
Use -C TLS_RSA_WITH_AES_128_CBC_SHA or -C TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 to test different ciphers (place before the name of the test, i.e. "length...")
Run without "length: $((1024*16))" to test all lengths.
Even more curiously if I add -trace onto the end of the s_server command line the test suddenly starts to pass!
The reproducer seems to run 3 tests 2 of which pass and 1 of which fails. Is it possible to get it to only run the 1 failing test?
It's a feature! Sort of.
This is actually due to the multiblock performance enhancement. This kicks in if you are sending more than a set amount of data and encrypt-then-mac is not being used, and the cipher supports it. In such a case some ciphers can actually create the whole TLS records themselves and can encrypt 4 or 8 records worth of data all in one go.
In this case, sending 16KiB data with 4096 max_fragment_length the code is detecting that it can do 4 records in one go via multiblock. The fragment size to send is actually reduced slightly for the period that multiblock has kicked in due to this code:
https://github.com/openssl/openssl/blob/eafd3e9d07e99583a1439bb027e4d6af43e2df27/ssl/record/rec_layer_s3.c#L448-L450
So when multiblock is operational we are actually using a fragment size of 4096 - 512 = 3584. So the 4 records you see of 3584 bytes have been encrypted using the multiblock code. Finally there is 2048 bytes remaining and it is sent in a fifth and final record.
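The arithmetic of that split can be checked directly; this is just shell arithmetic restating the explanation above, with 512 standing in for the overhead subtracted while multiblock is active:

```shell
frag=$((4096 - 512))            # effective fragment size while multiblock is active: 3584
total=$((16 * 1024))            # 16 KiB of application data
full=$((total / frag))          # number of full multiblock records
rest=$((total - full * frag))   # bytes left over for the final record
echo "${full} records of ${frag} bytes + 1 record of ${rest} bytes"
# prints: 4 records of 3584 bytes + 1 record of 2048 bytes
```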
I actually doubt that multiblock should be kicking in, in the case that max_fragment_length has been negotiated. Multiblock is optimised for full length records. I doubt we see the performance benefit for smaller fragment lengths, and it is probably a bug that it is kicking in for these smaller fragments.
Not sure if it's a bug though, looking at benchmark speeds at my 4-core 8-thread i7-8650U @ 1.9GHz I'm getting:
Thanks. Looks like we're best to keep that behaviour then.
|
2025-04-01T04:35:02.441892
| 2016-10-26T08:20:36
|
185326364
|
{
"authors": [
"EricDeveaud",
"levitte"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9453",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/1783"
}
|
gharchive/issue
|
Openssl 1.0.2j Not linking its own libcrypto correctly
Hello,
I compiled and installed openssl-1.0.2j with shared support, in a non-standard directory, and noticed that the installed libgost.so is not linked against the embedded libcrypto
openssl was compiled using following config options:
./config --prefix=/tmp/openssl threads zlib-dynamic shared -fPIC -Wl,--enable-new-dtags,-rpath,/tmp/openssl/lib
(see https://github.com/openssl/openssl/issues/1740)
see for example:
find /tmp/openssl/lib/ -name \*.so | xargs ldd | grep -v \(
/tmp/openssl/lib/libssl.so:
/tmp/openssl/lib/engines/libcswift.so:
/tmp/openssl/lib/engines/libpadlock.so:
/tmp/openssl/lib/engines/libaep.so:
/tmp/openssl/lib/engines/libsureware.so:
/tmp/openssl/lib/engines/lib4758cca.so:
/tmp/openssl/lib/engines/libatalla.so:
/tmp/openssl/lib/engines/libchil.so:
/tmp/openssl/lib/engines/libnuron.so:
/tmp/openssl/lib/engines/libubsec.so:
/tmp/openssl/lib/engines/libcapi.so:
/tmp/openssl/lib/engines/libgmp.so:
/tmp/openssl/lib/engines/libgost.so:
libcrypto.so.1.0.0 => not found
/tmp/openssl/lib/libcrypto.so:
Did I miss something?
best regards
Eric
PS attached is a Dockerfile that allows to reproduce the problem
Dockerfile-openssl.txt
I made a build according to your recipe and looked at /tmp/openssl/lib/engines/libgost.so with objdump -x. 'lo and behold, no RUNPATH set! That explains your problem.
I'll have a look in the Makefiles, that's probably where the problem lies.
Please try #1803
yes it fixes the problem.
let's wait for the next official 1.0.2 release
thanks
Eric
Grand. Thank you.
|
2025-04-01T04:35:02.451815
| 2023-01-27T12:21:08
|
1559674535
|
{
"authors": [
"1268",
"BugOfBugs",
"levicki",
"nhorman",
"paulidale"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9454",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/20154"
}
|
gharchive/issue
|
Provide OpenSSL binaries under GitHub Releases
I asked about the possibility to provide binaries under GitHub Releases in #20114 and @t8m provided some clarification but since it is a separate topic I am creating this feature request.
I understand that there are many possible build configurations and that not all of them can be provided but providing for a few major platforms would be nice.
On the other hand, if I am not mistaken this project uses automated builds as part of CI. For this to be useful they probably have to build binaries and run tests for a few platforms if not all of them. If the build process already produces binaries then I see no harm in releasing those.
Provided binaries don't need to come with support and/or warranty — they could have a disclaimer (e.g. "use at your own risk"), and if someone needs different build configuration than a couple of major ones provided, they can still clone and build OpenSSL themselves.
I would really appreciate OpenSSL maintainers' thoughts on this subject.
OMC: the project will investigate this.
OMC: we're looking into the possibility of a binary Windows release.
openssl/installer#2
not sure about author, but I would prefer just a normal Zip file against any other option. I would like to be able to just extract and run the program, similar to Linux. I dont need an installer.
Until then I am using this:
https://indy.fulgan.com/SSL
Make installer separate from archive. Would satisfy both: who likes to unpack, and who likes installers.
Publish binaries that other people contribute for various systems.
Post here requests to contribute binaries for missing systems.
Teach, and link how to configure. My tutorial:
https://github.com/openssl/openssl/issues/21643#issuecomment-1664927017
note that @BugOfBugs is a known spammer, please ignore
We are planning an installer binary that will be buildable via the repository https://github.com/openssl/installer. That's available now, and we are considering distributing a binary installer artifact for 3.5.
considering distributing a binary installer artifact
Do you mean ready to use archive, and installer? So there would be no need to compile.
User would simply copy them, and run.
|
2025-04-01T04:35:02.456887
| 2023-11-15T15:27:05
|
1995003150
|
{
"authors": [
"t8m",
"xiaodengchao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9455",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/22738"
}
|
gharchive/issue
|
OpenSSL 3.x: HTTP methods DELETE, TRACE, OPTIONS sent to an HTTPS port do not return the SSL_R_HTTP_REQUEST error
https://github.com/openssl/openssl/blob/72f98c5d5df345f7818d279c0623c95d87175d63/ssl/record/ssl3_record.c#L341
With OpenSSL 3.x, using the HTTP methods DELETE, TRACE, or OPTIONS against an HTTPS port does not return the SSL_R_HTTP_REQUEST error.
I have used curl to test the HTTPS port:
curl -ikv -X TRACE http://${ip}:8443/
curl -ikv -X OPTIONS http://${ip}:8443/
curl -ikv -X DELETE http://${ip}:8443/
the fail reason:
ErrorStack: lib: (SSL routines), func: (0), reason: (wrong version number), code: 167772427, line: 359, file: ssl/record/ssl3_record.c
and I have tested the other 4 methods:
curl -ikv -X POST http://${ip}:8443/
curl -ikv -X GET http://${ip}:8443/
curl -ikv -X PUT http://${ip}:8443/
curl -ikv -X HEAD http://${ip}:8443/
the fail reason:
ErrorStack: lib: (SSL routines), func: (0), reason: (http request), code: 167772316, line: 349, file: ssl/record/ssl3_record.c
So what is the difference between POST, GET, PUT, HEAD and TRACE, OPTIONS, DELETE?
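The split the reporter observes follows from how the record layer guesses at stray plaintext HTTP traffic: if memory serves, ssl3_record.c compares the first plaintext bytes against a short, hard-coded list of method prefixes (GET, POST, HEAD, PUT), so only those four map to SSL_R_HTTP_REQUEST; anything else, including DELETE, TRACE, and OPTIONS, falls through to the generic wrong-version-number error. A hedged shell sketch of that classification (the function name is made up for illustration):

```shell
# Mimics the hard-coded prefix test: only a handful of HTTP verbs are
# recognised as "http request"; every other start-of-record falls through
# to the generic "wrong version number" path.
classify_record() {
  case "$1" in
    "GET "*|"POST "*|"HEAD "*|"PUT "*) echo "http request" ;;
    *)                                 echo "wrong version number" ;;
  esac
}
classify_record "GET / HTTP/1.1"     # -> http request
classify_record "DELETE / HTTP/1.1"  # -> wrong version number
```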
I am sorry but this is not a question for OpenSSL but curl.
|
2025-04-01T04:35:02.482882
| 2024-01-26T17:17:43
|
2102571623
|
{
"authors": [
"paulidale",
"richsalz",
"slontis",
"t8m",
"wbl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9456",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/23400"
}
|
gharchive/issue
|
3.1 cannot install 3.0 FIPS module
With changes to the fipsinstall command we cannot install a 3.0 FIPS module when using the 3.1 binary as the configuration file comes out different from when the 3.0 fipsinstall is used. We naively assumed that module compatibility meant the FIPS install process could use future binaries to achieve the install: this seems not to be the case. Is there a compatibility flag we can pass? In the alternative we'd have to extensively rework our internal packaging and releasing.
Relatedly having to run the fipsinstall command on every machine seems suboptimal: will this be relaxed in 3.1?
In 3.1, running the fipsinstall command will not be mandated by the security policy. Just using the fipsmodule.cnf file as generated by the make install step will be sufficient.
Good to know that won't affect us going forward, but it's still an issue for installing 3.0. Would you want a compatibility mode for that? We're going to have to build one and I'd be happy to contribute it.
Yes, it would a feature for master branch though. It would be nice if we could detect the version of the provider and if it was 3.0.x it would use the compat mode.
I think if 3.1 requires 3.0 provider to have a 3.0 executable around, that is arguably a bug in 3.1 and you should fix it in 3.1
The project has said multiple times that you can use an older provider with a newer library, but breaking fipsinstall means that the promise doesn't really work.
@slontis or other @openssl/otc members Any opinions here?
The reason for the fipsinstall running on every machine was that it was running the self tests just once and setting the config to indicate this.
Due to changes in rules the self tests always run so that is no longer an issue.
As to the fipsinstall, are you using the 3.0 fips install or the 3.1 fips install exe with the FIPS 3.0 provider?
Let me explain a little more about how we package the software: we have a package that contains the FIPS provider and the configuration. However, we install a different package for the openssl executable and the openssl shared library. Right now that's 3.0.12 and 3.0.8. So the fipsinstall command gets run again, and if there is a difference between what it generates on the build machine and the install machine the installation errors.
What we'd like to do is run the 3.1 fipsinstall because that's the openssl binary with the FIPS 3.0 provider. Even if the security policy changes for the 3.1 provider, we still have this issue until that module goes through the process and we adopt it, and due to some patches we had to apply for performance this might get tricky.
So I guess we need to figure out if this is really allowed,
My personal opinion is that it feels a little bit wrong to be using the fips installer from 3.1 with the fips 3.0 provider. (Input from others welcome.) i.e. I don't think this fits in with a security policy.
@t-j-h do you have any thoughts on this?
Due to changes in rules the self tests always run so that is no longer an issue.
When did the rules change?
FIPS 140-3
But you don't yet have a validated 140-3 module, and according to https://csrc.nist.gov/CSRC/media/projects/cryptographic-module-validation-program/documents/security-policies/140sp4282.pdf fipsinstall is outside the module boundary, so there is no reason this cannot be made to work. As it stands right now, it is inconvenient to some users and, personal feelings aside, it doesn't square with the commitment that had been made.
It really depends on how the security policy is worded...
It really depends on how the security policy is worded...
Yes. And I've read it multiple times. And just now I re-read Appendix A.
My comments related to self tests changing refer to the 3.1 FIPS provider.. self test behaviour for FIPS 140-2 validations wont change.
The other way to avoid the fips install on each machine would be not to write out the field to the config saying that self tests have already run (this would of course mean they run every time on startup).
So does the current release support the currently-validated FIPS module or not? Right now it does not. It could. Fixing that would be more in line with the promise that the FIPS module will be forward-compatible, and it would be much less inconvenient to the current userbase.
OTC: we are currently looking into this issue.
The following steps appeared to work for me..
I just built 3.1.2 and 3.0.9 using the tar sources and then ran the following on both directories.
./config enable-fips
I then copied the 3.0.9/providers/fips.so file to 3.1.2/providers/fips.so
And then ran the following for 3.1.2
LD_LIBRARY_PATH=. ./apps/openssl fipsinstall -module providers/fips.so -out providers/fipsmodule.cnf
And then did
./util/wrap.pl -fips apps/openssl list -provider-path providers -provider default -provider fips -providers
Are you doing something different to this?
I would expect that to work. Did the config file get updated with the digest and verification flag?
The fields
install-mac and install-status are not currently written out by 3.1. (This is because the related OpenSSL provider 3.1 does not do self tests just once). Which means the self tests run every time. With this scenario the .cnf file can be copied without running fips install on every machine.
Can you explain what scenario doesn't work?
Well, aren't its contents different from what the install instructions say to produce? As it is we process the fips module with its fipsinstall and then check the file is unchanged on install and rerunning: if it was possible to comply with a file that didn't need this we would have done that in the first place, but that wasn't our read of the security policy. Maybe the recent certificate revision has changed it.
In terms of the actual FIPS 140-2 the requirement is that the self tests run before using the module.
The FIPS 140-2 IG also allows the scenario where the self tests are run just once (i.e. it is optional). So the instructions we have relate to running once. If these config items are not present then the self tests always run, which also meets the requirement..
So the options are either
We make 3.1 always generate these config items (they will just get ignored by 3.1 fips provider anyway).
The security policy for 3.0 is modified to allow this alternate setup. The option -self_test_onload does not write out the fields.
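For reference, a 3.0-style fipsmodule.cnf with the run-once fields present looks roughly like this (MAC values elided; the exact output comes from `openssl fipsinstall`, so treat the field list as an approximation):

```ini
[fips_sect]
activate = 1
install-version = 1
conditional-errors = 1
security-checks = 1
module-mac = ...
install-mac = ...
install-status = INSTALL_SELF_TEST_KAT_INIT
```

The 3.1 fipsinstall stopped emitting install-mac and install-status, which is the difference being discussed here.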
Just to be clear - you are not getting any error here? This is just about not having the field in the config file?
Not having that field and having two other fields that weren't in what came out in 3.0. I can upstream the patch to the behavior of the 3.1 fipsinstall so we can all see exactly what I did to make it work for us.
It is still not clear to me what steps you are doing, and what the result is.
Just showing me a patch doesn't give me any idea of why it is a problem. Especially since I can see it working without changing anything.
The problem is that the fips configuration generated by the 3.1 tool is different from the 3.0 tool, which suggests that it doesn't do what is required to comply with the security policy of 3.0.8 and 3.0.9. It's possible that we are being overly cautious here.
We want to run the selftest once at system install. Not every time a program starts.
We want to run the selftest once at system install. Not every time a program starts.
Then unfortunately you have to use the 3.0 fipsinstall (at least for now).
It also won't be possible with 3.1 FIPS provider at all.
We want to run the selftest once at system install. Not every time a program starts.
Then unfortunately you have to use the 3.0 fipsinstall (at least for now).
Is that your view or was that a formal OTC decision? (see https://github.com/openssl/openssl/issues/23400#issuecomment-1929221405)
Will you take a PR to bring back the current behavior for the 3.1 module?
Adding seconds of startup time to every OpenSSL FIPS application is not helpful to your community. In the 3.0 design meetings, being able to do this once was touted as a benefit after all.
It also won't be possible with 3.1 FIPS provider at all.
Sure, but it's not yet validated and we have years until the 140-2 validation expires.
Then unfortunately you have to use the 3.0 fipsinstall (at least for now).
Is that your view or was that a formal OTC decision? (see #23400 (comment))
This is a statement of the current state of the code. It does not say anything about possible future.
Will you take a PR to bring back the current behavior for the 3.1 module?
I assume you mean a PR that allows creating a fipsmodule.cnf by new fipsinstall when running against the 3.0 FIPS module in such way that it is fully equivalent to a fipsmodule.cnf file running by 3.0 fipsinstall. Yes, I think such PR would be welcome. Whether such PR would be acceptable only for the master branch or whether it would be also OK for backporting to 3.2 and 3.1 stable branches is a question that would have to be answered by OTC.
It would be ok for the 3.1 fips provider to still have these fields in since they would be ignored anyway.
@paulidale would have removed them since they were no longer needed (for the 3.1 version). So I dont see any harm in adding them back in to support the older 3.0, 3.08 fips providers. It would make sense to backport them to 3.1/3.2 if we are going to add this change to master.
Added the hold so it gets discussed next week.
It sure does seem that removing the fields was a mistake/bug that affected forward/backward compatibility.
OTC: If the patch autodetected the fips module version and did not require any additional options being added then the OTC hold would not apply and it could go through normal review proces.
The FIPS provider ought to be installed using the fipsinstall from the same version.
fipsinstall can and does change. I have no expectation that a new fipsinstall should work with an older FIPS provider.
In fact, we ought to not support this. The older version has been built to get the associated FIPS provider, use the same fipsinstall. Maintaining backward compatibility is only going to cause pain.
One could extend the argument to say the older FIPS provider should not be usable with the current library, and the project explicitly says otherwise. Requiring a mix of old provider, old library, old executable and new library is a burden on your downstream users that you should avoid. Fortunately, the OTC decided to allow that.
It will be up to the end user to decide if using a tool that is not part of the FIPS tarball to generate the config file is a good idea or not.
To be clear, we check the results of rerunning the tool against our generation with the version generated with the utility shipped with the FIPS module. Because of binary load paths and the like installing both the openssl utility and libraries from the fips module version and the new one is a real PITA, although potentially doable. I'd prefer to see a compatibility mode, and will work to autodetect the FIPS provider version.
The extra fields should make no difference at all, since the fips provider has no way of interpreting them, so it will silently ignore them (just like any other param that the provider doesnt know about).
The only change you really need to do is make sure that the self test fields are present..
|
2025-04-01T04:35:02.490889
| 2024-08-06T10:06:12
|
2450482096
|
{
"authors": [
"paulidale",
"prmjh4",
"tom-cosgrove-arm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9457",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/25104"
}
|
gharchive/issue
|
OpenSSL 0.9.8 on Vulnerability check gives critical threat warning.
Discussed in https://github.com/openssl/openssl/discussions/25103
Originally posted by prmjh4 August 6, 2024
I ran the vulnerability scan check on my application that uses OpenSSL 0.9.8 and it flags it as a critical threat to the application.
Description as given by the Scan:
The version of OpenSSL installed on the remote host is prior to 0.9.8m. It is, therefore, affected by multiple vulnerabilities as referenced in the 0.9.8m advisory.
Solution as given by the Scan:
Upgrade to OpenSSL version 0.9.8m or later.
Path : /opt/lib/extra/libcrypto.so.0.9.8
Reported version : 0.9.8i
Fixed version : 0.9.8m
Path : /opt/lib/extra/libssl.so.0.9.8
Reported version : 0.9.8i
Fixed version : 0.9.8m
I couldn't find the OpenSSL 0.9.8m version anywhere, so I instead downloaded OpenSSL 1.1.1w, but then it started displaying the below error message under critical vulnerability.
Path : /opt/lib/extra/libcrypto.so.1.1
Reported version : 1.1.1w
Fixed version : 1.1.1za
I couldn't find the OpenSSL 1.1.1za version anywhere, so what can I do now to resolve the error?
I have a constraint that I have to stick with the OpenSSL 1.x series because of many dependency issues; how can I make sure that the issue gets removed from the critical section? Please help me out with it!
This version is long long out of support. It won't be addressed.
I have a constraint that I have to stick with the OpenSSL 1.x series version because of many dependency issues
OpenSSL 1.1.1 was released in 2018 as an LTS release, to be supported for 5 years, and consequently became end-of-life on 11th September 2023. The EOL date was reconfirmed in March 2023. If you absolutely must have support for it, premium support contracts are available from the OpenSSL Corporation - see https://openssl-corporation.org/support/
|
2025-04-01T04:35:02.493592
| 2017-05-19T12:42:29
|
229963383
|
{
"authors": [
"dot-asm",
"levitte"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9458",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/3497"
}
|
gharchive/issue
|
'make test' says "Result: FAIL", yet CI is green...
To reproduce: 'make test TESTS=test_test; echo $?'. The last modification to test/run_tests.pl somehow makes it exit(0) regardless of the tests' outcome.
Ah, runtests returns a TAP::Parser::Aggregator, and we should check the status explicitly (method get_status).
I'm on the road right now, commenting from my mobile... Not exactly the means to produce a PR, but if you do, I can review.
Ref: http://search.cpan.org/~leont/Test-Harness-3.39/lib/TAP/Parser/Aggregator.pm
#3501
|
2025-04-01T04:35:02.495011
| 2016-04-18T16:20:44
|
149199590
|
{
"authors": [
"richsalz",
"stas730"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9459",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/973"
}
|
gharchive/issue
|
Add ASN.1 extension. Unsupported?
I need to add this logotype extension (http://www.ietf.org/rfc/rfc3709.txt). Can I do this? The mailing lists are not working for me, so I continue posting questions on GitHub…
I'm using "Example extension" but I don't know how to add this to CA/user certificate.
use the mailing lists.
|
2025-04-01T04:35:02.504509
| 2021-02-15T17:37:48
|
808717368
|
{
"authors": [
"beldmit",
"levitte",
"openssl-machine",
"paulidale",
"richsalz",
"slontis",
"t-j-h",
"t8m"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9460",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/14194"
}
|
gharchive/pull-request
|
Allow 'openssl enc' and 'openssl dgst' to use "unknown" ciphers and digests
'openssl enc' and 'openssl dgst' use opt_md() and opt_cipher() to get
the algorithms the user asks for, which only used EVP_get_cipherbyname()
and EVP_get_digestbyname(). That would only return legacy implementations
for things the libcrypto has prior knowledge of.
To allow all provider backed algorithms to be fully used, even without
libcrypto's prior knowledge, opt_md() and opt_cipher() now also use
EVP_MD_fetch() and EVP_CIPHER_fetch(), and return them in a second
pointer that our apps has to free. This is made in such a way that
the application can otherwise continue to use the constant EVP_MD and
EVP_CIPHER pointers.
As a discussion point, this reiterates that application must know what
they have fetched explicitly, and therefore must also free (with
EVP_MD_free() or EVP_CIPHER_free() in this case), and what they have
not (which includes all constant pointers they get from all sorts of
other functions, such as EVP_MD_CTX_md()), and therefore must NOT free.
-1
I believe this needs an OTC discussion and decision.
Er... ok
This is an alternative to the flags approach discussed in https://github.com/openssl/openssl/pull/14182
This code doesn't look too bad (without using any flags)..
This code doesn't look too bad (without using any flags)..
Thank you, I'm glad to see you say that
Yeah, I agree with that as well.
I'm not sure it's the best approach. We get two objects instead of one (which is not a problem) and should pass the proper one downwards (which is)...
should pass the proper one downwards (which is)...
So, uhm, do you mean to say that, for example, md should not be aliased to fetched_md when an explicit fetch was performed, and that every place that uses md should be changed to (md == NULL ? md : fetched_md). I'd say that only makes the code uglier, for no gain.
Sorry, I was wrong. Yes, it's a reasonable approach.
I think this sets a bad precedent for downstream users, requiring them to complicate their code more than is really necessary. OpenSSL is already complicated, and requiring developers to track lifetimes, as opposed to adding some code to OpenSSL to make it not necessary, is the wrong trade-off in my view. "Will nobody think of the users?" :)
I think this sets a bad precedent for downstream users, requiring them to complicate their code more than is really necessary.
It's actually not that complicated, it's a very simple "rule": if you fetch something explicitly, you must free it. If you don't, you don't have to. It does means that you have to keep track of the fetched pointers.
The complicating factor is when the application mixes two paradigms, which our app does.
"Will nobody think of the users?" :)
Oh I do, and I for one do not want to add fragility and corner cases that the users can't control into the library.
Since the project decided to use the proposed EVP_xxx_free semantics, this PR isn't necessary. Please don't merge it until after #14219.
@richsalz is right. I also wasn't the person behind the OTC hold and therefore shouldn't remove it.
24 hours has passed since 'approval: done' was set, but as this PR has been updated in that time the label 'approval: ready to merge' is not being automatically set. Please review the updates and set the label manually.
This PR is in a state where it requires action by @openssl/otc but the last update was 30 days ago
At this point, #14193 is merged, and #14219 is approved. I see no more reason to hang on to this PR.
|
2025-04-01T04:35:02.507672
| 2017-10-08T14:42:14
|
263722757
|
{
"authors": [
"mattcaswell",
"tatsuhiro-t"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9461",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/4490"
}
|
gharchive/pull-request
|
Don't change client random in Client Hello in its second flight
Addresses #4292
It looks like https://tools.ietf.org/html/draft-ietf-tls-tls13-21#section-4.1.2 does not explicitly allow a client to send a different client random in its second flight Client Hello.
At least picotls checks they are the same, and aborts a handshake if they are different.
Checklist
[ ] documentation is added or updated
[ ] tests are added or updated
Yes, this is a partial fix.
Should I push squashed commit?
Squashed and pushed. A test would be nice, but I've pushed this for now anyway
|
2025-04-01T04:35:02.509565
| 2019-04-13T08:14:07
|
432822374
|
{
"authors": [
"bernd-edlinger"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9462",
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/8739"
}
|
gharchive/pull-request
|
Fix a crash in the speed command with wrap ciphers
e.g. openssl speed -evp id-aes256-wrap-pad
was crashing because the return code from EVP_CipherInit_ex
was ignored.
Not going to allow that cipher mode because wrap ciphers
produce more output bytes than the input length
and EVP_Update_loop is not really prepared for that.
Ping?
Merged to master as 5d238a1, and 1.1.1 as 69fd7d1.
Thanks!
|
2025-04-01T04:35:02.514583
| 2023-05-18T11:42:55
|
1715492633
|
{
"authors": [
"bogdando",
"jpodivin",
"rebtoor"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9463",
"repo": "openstack-k8s-operators/edpm-ansible",
"url": "https://github.com/openstack-k8s-operators/edpm-ansible/pull/156"
}
|
gharchive/pull-request
|
Manage swap partition as a hard requirement
Add swap partition management into the bootstrap role
The swap file/partition management logic is carried over from the existing EDPM pre-adoption implementation, but moved out of the nova compute specific scope into generic tasks for EDPM.
Closes: OSPRH-133
The molecule testing is blocked as we cannot use swapon from podman containers, we'd need a delegated driver for that, likely. Any hints to make this unblocked, @raukadah perchance?
the rdo-project job failed, probably for an unrelated reason, because I can see the swap task completed there
https://logserver.rdoproject.org/56/156/814b11000ae0869a6e1559bee243ec11f332ff28/github-check/edpm-ansible-github-rdo-integration-centos-8-crc-singlenode-centos-9-external-compute/79eb0cc/controller/controller/pod/deploy-external-dataplane-compute-n4zc8-logs.txt
/test rdoproject.org/github-check
check-rdo
check-rdo
Just to clarify. The molecule in podman seems to work with this patch after all https://github.com/openstack-k8s-operators/edpm-ansible/actions/runs/5014407281/jobs/8988616358?pr=156 . Do you still want to replace it with delegated driver?
Ci failure looks unrelated https://github.com/openstack-k8s-operators/edpm-ansible/actions/runs/5199776741/jobs/9377637527#step:6:249
/recheck
@kajinamit PTAL
recheck
|
2025-04-01T04:35:02.519699
| 2022-11-04T12:50:26
|
1436027043
|
{
"authors": [
"fmount",
"fultonj"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9464",
"repo": "openstack-k8s-operators/glance-operator",
"url": "https://github.com/openstack-k8s-operators/glance-operator/pull/75"
}
|
gharchive/pull-request
|
Introduce Glance Extra Volumes support
The Glance operator can now get rid of the CephBackend specific implementation and rely on the general purpose approach provided by lib-common/modules/storage.
ExtraVolumes are now defined as part of the glance top level crd and can be propagated to the GlanceAPI pods according to the specified policy.
Depends-on: https://github.com/openstack-k8s-operators/lib-common/pull/88
Depends-on: openstack-operator#38
Signed-off-by: Francesco Pantano<EMAIL_ADDRESS>
@abays Not sure here I can really add a dependency on lib-common [1] that can be included in the CI pipeline: I suspect we should keep this change as it is until [1] is available.
[1] https://github.com/openstack-k8s-operators/lib-common/pull/88
This is now solved: lib-common PR merged and we updated it in go.mod
Patch looks fine to me. I tested this in my environment and it works.
|
2025-04-01T04:35:02.580263
| 2015-01-24T04:47:28
|
55358910
|
{
"authors": [
"1ec5",
"bhousel",
"pnorman",
"tas50",
"tristen"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9465",
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/2515"
}
|
gharchive/issue
|
Display location in the footer of interface
With tools like http://osmlab.github.io/to-fix, when you select an error to edit, it could be at any location in the world. It would be nice to get a sense of its geographic spot, i.e.:
+1 this would be incredibly useful
We're planning to add a small "map-in-map" popup pretty soon to help users better know where they are editing. (Think like a little zoomed out map with a square representing the current viewport bbox). I think I'll find a way to include the location text in this feature.
see #2554
A breadcrumb would be useful on the main map as well: openstreetmap/openstreetmap-website#848.
I have been using this gist to test out nominatim reverse geocode:
https://gist.github.com/bhousel/05be464d7f53c95c4eab
Even at zoom=10, which I would think should return 'cities', nominatim is still returning hamlets, and the hamlet data in the USA is pretty bad.
If I go out to zoom=9, it returns county/region data, which is less useful.
I think it would be useful to go ahead, even if there is bad data in a region which makes it less useful.
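For anyone wanting to experiment with the same lookup, here is a minimal sketch of building a Nominatim reverse-geocode request. The endpoint and query parameters are the public Nominatim reverse API; the function name is illustrative:

```python
import urllib.parse

NOMINATIM_REVERSE = "https://nominatim.openstreetmap.org/reverse"


def reverse_geocode_url(lat: float, lon: float, zoom: int = 10) -> str:
    """Build a Nominatim reverse-geocode request URL.

    zoom trades granularity for coverage; as noted above, zoom=10 can
    still return hamlets while zoom=9 falls back to county/region data.
    """
    params = urllib.parse.urlencode(
        {"format": "jsonv2", "lat": lat, "lon": lon, "zoom": zoom}
    )
    return f"{NOMINATIM_REVERSE}?{params}"
```

Fetching the URL (with a proper User-Agent header, per Nominatim's usage policy) returns JSON whose `display_name` field is the breadcrumb-style location text.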
Coming soon - A location pane that toggles with ⌘L
|
2025-04-01T04:35:02.586660
| 2020-10-30T00:50:45
|
732799324
|
{
"authors": [
"GregRetro",
"jidanni",
"quincylvania"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9466",
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/8138"
}
|
gharchive/issue
|
Can't copy text of relation member names
Here the text I want to copy, 天輪-翁子線, is sitting in front of me,
but there is no easy way to copy it with the mouse.
Sure, I could get it from elsewhere, but still...
Poking around in the "v" menus doesn't find it either.
@jidanni What browser are you using? I'm able to select and copy the text in Firefox and Safari on macOS.
Closing this in favor of #8136, but more info would still be useful if you have it @jidanni.
Even in Firefox,
Browse https://www.openstreetmap.org/edit?relation=5668778
Try to copy any text out of any of the Members
You end up grabbing the whole member and reordering it
That is because instead of just giving a "reordering handle to grab on"
on the edge of each member, somebody has made the whole member its own handle.
Looks like this is a "feature" of Power Route relations. I can reorder the relation members in this way with other power routes as well - and accordingly cannot copy the relation member names.
Definitely NOT the same issue as #8136, and this probably deserves to be reopened.
BUT... I really like the ability to reorder relation members here. Would be great if this state were toggle-able for all "route" relations. Flip a bit and you wouldn't be able to reorder the relation members, but you could now copy their names.
I defend the right of anybody to reorder relations.
They just need little handles added to them, for users to grab on and pull them around.
The 'lazy' way is to forget the handles, and make the whole item one big handle.
With the concurrent loss of Accessibility as the price.
Here on my Zenfone, grabbing the hamburger icon does the trick.
@jidanni Oh okay, I think I was looking at the Relations section by mistake, not the Members section.
|
2025-04-01T04:35:02.627497
| 2024-06-17T15:17:32
|
2357607100
|
{
"authors": [
"d0choa",
"project-defiant"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9467",
"repo": "opentargets/gentropy",
"url": "https://github.com/opentargets/gentropy/pull/647"
}
|
gharchive/pull-request
|
refactor: delete unnecessary config files
✨ Context
This is the beginning of a series of work intended to move the airflow layer outside the gentropy package.
🛠 What does this PR implement
This is a refactor to remove step configurations that were unnecessary because they were configured entirely in the Airflow layer.
🚦 Before submitting
[x] Do these changes cover one single feature (one change at a time)?
[x] Did you read the contributor guideline?
[x] Did you make sure to update the documentation with your changes?
[x] Did you make sure there is no commented out code in this PR?
[x] Did you follow conventional commits standards in PR title and commit messages?
[x] Did you make sure the branch is up-to-date with the dev branch?
[x] Did you write any new necessary tests?
[x] Did you make sure the changes pass local tests (make test)?
[x] Did you make sure the changes pass pre-commit rules (e.g poetry run pre-commit run --all-files)?
I just wonder why changing the default p-value does not break any test
The input parameters are poorly tested. We haven't reached that far, unfortunately.
|
2025-04-01T04:35:02.635626
| 2024-07-05T13:31:23
|
2392618991
|
{
"authors": [
"DSuveges"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9468",
"repo": "opentargets/gentropy",
"url": "https://github.com/opentargets/gentropy/pull/672"
}
|
gharchive/pull-request
|
feat(StudyIndex): validation for study type, disease, target etc
✨ Context
Logic to validate study index.
🛠 What does this PR implement
The following validation steps are implemented:
StudyIndex.validate_disease - validating diseases against the provided disease index (a view on the disease index).
StudyIndex.validate_study_type - flagging studies that are not gwas or some kind of qtls.
StudyIndex.validate_target - flagging QTL studies that don't have a valid Ensembl gene ID, against the target index.
StudyIndex.validate_unique_study_id - flagging studies with non-unique study identifiers.
Tests for all methods.
How it works:
qcd_study_index = (
    study_index
    .validate_disease(disease_map)
    .validate_target(target_index)
    .validate_study_type()
    .validate_unique_study_id()
)
Failing at any validation step leads to adding a corresponding flag into the qualityControls column.
Heads up! - to enable the use of the existing flagging instruments from StudyLocus class, the necessary function (update_quality_flag) is moved to Dataset class, so all of our datasets are QC-able the same way.
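As a rough illustration of that pattern (plain Python rather than Spark; apart from the "not unique" flag text quoted from the results below, all names and flag strings are made up for this sketch), each validation appends a quality-control flag and returns the dataset so the calls chain:

```python
from collections import Counter


class Dataset:
    """Toy stand-in for a dataset class with a shared flagging helper."""

    def __init__(self, records):
        self.records = records  # list of dicts; flags go in "qualityControls"

    def update_quality_flag(self, predicate, flag):
        for record in self.records:
            if predicate(record):
                record.setdefault("qualityControls", []).append(flag)
        return self  # returning self keeps the validations chainable


class StudyIndex(Dataset):
    def validate_study_type(self):
        valid = {"gwas", "eqtl", "pqtl", "sqtl"}
        return self.update_quality_flag(
            lambda r: r.get("studyType") not in valid,
            "This type of study is not supported.",
        )

    def validate_unique_study_id(self):
        counts = Counter(r["studyId"] for r in self.records)
        return self.update_quality_flag(
            lambda r: counts[r["studyId"]] > 1,
            "The identifier of this study is not unique.",
        )
```

Because the helper lives on the base class, any dataset subclass gets the same QC mechanics for free, which is the point of moving update_quality_flag up to Dataset.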
🙈 Missing
In this PR the logic IS NOT organised into a step. There's no orchestration, just the business logic.
Gentropy has no disease index dataset, so there's no point in ingesting the disease index as it is. So in the current implementation of the .validate_disease() method, a dataframe is expected with all the current and obsolete EFOs. Once we have a disease dataset class, this assumption can be changed.
🚦 Before submitting
[x] Do these changes cover one single feature (one change at a time)?
[x] Did you make sure there is no commented out code in this PR?
[x] Did you follow conventional commits standards in PR title and commit messages?
[x] Did you make sure the branch is up-to-date with the dev branch?
[x] Did you write any new necessary tests?
[x] Did you make sure the changes pass local tests (make test)?
[x] Did you make sure the changes pass pre-commit rules (e.g poetry run pre-commit run --all-files)?
Validation at work:
# Study index:
studies = (
    StudyIndex.from_parquet(session, "/Users/dsuveges/project_data/gentropy/study_index", recursiveFileLookup=True)
)

# Gene index:
gene_index = (
    GeneIndex.from_parquet(session, "/Users/dsuveges/project_data/gentropy/gene_index")
)

# Disease Index:
disease_map = (
    session.spark.read.parquet('/Users/dsuveges/project_data/gentropy/diseases')
    .select(
        f.col('id').alias('diseaseId'),
        f.explode_outer(
            f.when(
                f.col('obsoleteTerms').isNotNull(),
                f.array_union(
                    f.array('id'),
                    f.col('obsoleteTerms')
                )
            )
        ).alias('efo')
    )
    .withColumn(
        'efo',
        f.coalesce(f.col('efo'), f.col('diseaseId'))
    )
)

validated_studies = (
    studies
    .validate_disease(disease_map)
    .validate_target(gene_index)
    .validate_study_type()
    .validate_unique_study_id()
    .persist()
)
Out of 1,975,874, 14,871 studies are flagged:
+--------------+-----+
| projectId|count|
+--------------+-----+
| Nedelec_2016| 40|
| OneK1K| 21|
| Alasoo_2018| 30|
| GTEx| 1317|
| FINNGEN_R10| 4816|
| GCST| 7670|
| Nathan_2022| 38|
| FUSION| 69|
|Schmiedel_2018| 119|
| Cytoimmgen| 39|
| GENCORD| 25|
| BLUEPRINT| 82|
| GEUVADIS| 22|
| Lepik_2017| 26|
| Quach_2016| 104|
| Fairfax_2014| 15|
| ROSMAP| 56|
| HipSci| 40|
| BrainSeq| 53|
| TwinsUK| 83|
+--------------+-----+
Distribution of quality flags:
+----------------------------------------------------+-----+
|qualityControl |count|
+----------------------------------------------------+-----+
|Failed summary statistics quality control |472 |
|Target/gene identifier could not match to reference.|2385 |
|No valid disease identifier found. |11962|
|The identifier of this study is not unique. |4838 |
|Non-additive model |32 |
+----------------------------------------------------+-----+
As can be seen from the labels, the flags are carried over from the pre-validated study indices.
|
2025-04-01T04:35:02.637352
| 2020-04-29T16:01:58
|
609173718
|
{
"authors": [
"andrewhercules"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9469",
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/1002"
}
|
gharchive/issue
|
Integrate phenotypes from new API into widget
Data needs to be made available in GraphQL API as ETL pipeline has been updated in #924
Once data is available, please update the alpha branch and display the list of phenotypes similar to how it is currently displayed on the MVP branch
Ticket closed as changes merged into alpha branch
|
2025-04-01T04:35:02.662589
| 2021-02-24T14:48:48
|
815548742
|
{
"authors": [
"andrewhercules",
"d0choa",
"mkarmona"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9470",
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/1420"
}
|
gharchive/issue
|
Migrate graphQL queries in platform-app to GQL files
After discussion derived from #1418, it was concluded that we should migrate as many queries as possible to .gql files.
It was discussed that some queries were so concise that they were not moved to an outside file. We now see value in separating them from the rest of the code. Also, some of the queries are dynamically changed, so we will need to consider the dynamic parameters for future applications (e.g. testing).
A list of the currently implemented GQL files:
❯ find platform-app -type f -name "*.gql"
platform-app/src/sections/evidence/CancerGeneCensus/sectionQuery.gql
platform-app/src/sections/evidence/OTGenetics/sectionQuery.gql
platform-app/src/sections/evidence/GenomicsEngland/sectionQuery.gql
platform-app/src/sections/evidence/EuropePmc/sectionQuery.gql
platform-app/src/sections/evidence/Phenodigm/sectionQuery.gql
platform-app/src/sections/evidence/Gene2Phenotype/sectionQuery.gql
platform-app/src/sections/evidence/IntOgen/sectionQuery.gql
platform-app/src/sections/target/ProteinInformation/sectionQuery.gql
platform-app/src/sections/target/ProteinInformation/summaryQuery.gql
platform-app/src/sections/target/Safety/summaryQuery.gql
platform-app/src/sections/target/ProteinInteractions/sectionQuery.gql
platform-app/src/sections/target/ProteinInteractions/summaryQuery.gql
platform-app/src/components/Search/SearchQuery.gql
platform-app/src/pages/SearchPage/SearchPageQuery.gql
Next, the queries that need to be migrated, resulting from the next query:
❯ find platform-app -type f -name '*.js' -exec grep -H "= gql\`" {} \;
[ ] platform-app/src/sections/disease/RelatedDiseases/Body.js:const RELATED_DISEASES_QUERY = gql`
[ ] platform-app/src/sections/disease/RelatedDiseases/Summary.js:const RELATED_DISEASES_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/disease/Phenotypes/Body.js:const PHENOTYPES_BODY_QUERY = gql`
[ ] platform-app/src/sections/disease/Phenotypes/Summary.js:const PHENOTYPES_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/disease/KnownDrugs/Body.js:const KNOWN_DRUGS_BODY_QUERY = gql`
[ ] platform-app/src/sections/disease/KnownDrugs/Summary.js:const KNOWN_DRUGS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/disease/Ontology/Summary.js:const ONTOLOGY_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/ClinGen/Body.js:const CLINGEN_QUERY = gql`
[ ] platform-app/src/sections/evidence/ClinGen/Summary.js:const CLINGEN_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/EVA/Body.js:const EVA_QUERY = gql`
[ ] platform-app/src/sections/evidence/EVA/Summary.js:const EVA_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/PheWASCatalog/Body.js:const PHEWAS_CATALOG_QUERY = gql`
[ ] platform-app/src/sections/evidence/PheWASCatalog/Summary.js:const PHEWAS_CATALOG_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/CancerGeneCensus/Summary.js:const CANCER_GENE_CENSUS_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/OTGenetics/Summary.js:const OPEN_TARGETS_GENETICS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/GenomicsEngland/Summary.js:const GENOMICS_ENGLAND_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/CRISPR/Body.js:const CRISPR_QUERY = gql`
[ ] platform-app/src/sections/evidence/CRISPR/Summary.js:const CRISPR_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/UniProtLiterature/Body.js:const UNIPROT_LITERATURE_QUERY = gql`
[ ] platform-app/src/sections/evidence/UniProtLiterature/Summary.js:const UNIPROT_LITERATURE_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/ExpressionAtlas/Body.js:const EXPRESSION_ATLAS_QUERY = gql`
[ ] platform-app/src/sections/evidence/ExpressionAtlas/Summary.js:const EXPRESSION_ATLAS_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/SysBio/Body.js:const INTOGEN_QUERY = gql`
[ ] platform-app/src/sections/evidence/SysBio/Summary.js:const SYSBIO_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/Chembl/Body.js:const CHEMBL_QUERY = gql`
[ ] platform-app/src/sections/evidence/Chembl/Summary.js:const CHEMBL_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/EVASomatic/Body.js:const EVA_SOMATIC_QUERY = gql`
[ ] platform-app/src/sections/evidence/EVASomatic/Summary.js:const EVA_SOMATIC_SUMMARY = gql`
[ ] platform-app/src/sections/evidence/EuropePmc/Summary.js:const EUROPE_PMC_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/Progeny/Body.js:const PROGENY_QUERY = gql`
[ ] platform-app/src/sections/evidence/Progeny/Summary.js:const PROGENY_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/SlapEnrich/Body.js:const SLAPENRICH_QUERY = gql`
[ ] platform-app/src/sections/evidence/SlapEnrich/Summary.js:const SLAPENRICH_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/Phenodigm/Summary.js:const PHENODIGM_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/Gene2Phenotype/Summary.js:const GENE_2_PHENOTYPE_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/IntOgen/Summary.js:const INTOGEN_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/evidence/Reactome/Body.js:const REACTOME_QUERY = gql`
[ ] platform-app/src/sections/evidence/Reactome/Summary.js:const REACTOME_SUMMARY = gql`
[ ] platform-app/src/sections/target/Tep/Summary.js:const TEP_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/CancerHallmarks/Summary.js:const CANCER_HALLMARKS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/ProteinInformation/Summary.js:const PROTEIN_INFORMATION_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/ChemicalProbes/Summary.js:const CHEMICAL_PROBES_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/KnownDrugs/Body.js:const KNOWN_DRUGS_BODY_QUERY = gql`
[ ] platform-app/src/sections/target/KnownDrugs/Summary.js:const KNOWN_DRUGS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/RelatedTargets/Body.js:const RELATED_TARGETS_QUERY = gql`
[ ] platform-app/src/sections/target/RelatedTargets/Summary.js:const RELATED_TARGETS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/CancerBiomarkers/Body.js:const BIOMARKERS_QUERY = gql`
[ ] platform-app/src/sections/target/CancerBiomarkers/Summary.js:const CANCER_BIOMARKERS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/Tractability/Summary.js:const TRACTABILITY_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/MousePhenotypes/Summary.js:const MOUSE_PHENOTYPES_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/Expression/SummaryTab.js:const EXPRESSION_QUERY = gql`
[ ] platform-app/src/sections/target/Expression/Summary.js:const EXPRESSION_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/Pathways/Summary.js:const PATHWAYS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/target/GeneOntology/Summary.js:const GENE_ONTOLOGY_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/drug/KnownDrugs/Body.js:const KNOWN_DRUGS_BODY_QUERY = gql`
[ ] platform-app/src/sections/drug/KnownDrugs/Summary.js:const KNOWN_DRUGS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/drug/Indications/Body.js:const INDICATIONS_QUERY = gql`
[ ] platform-app/src/sections/drug/Indications/Summary.js:const INDICATIONS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/drug/AdverseEvents/Body.js:const ADVERSE_EVENTS_QUERY = gql`
[ ] platform-app/src/sections/drug/AdverseEvents/Summary.js:const ADVERSE_EVENTS_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/sections/drug/MechanismsOfAction/Body.js:const MECHANISMS_OF_ACTION_QUERY = gql`
[ ] platform-app/src/sections/drug/MechanismsOfAction/Summary.js:const MECHANISM_OF_ACTION_SUMMARY_FRAGMENT = gql`
[ ] platform-app/src/pages/DrugPage/Profile.js:const DRUG_PROFILE_QUERY = gql`
[ ] platform-app/src/pages/DrugPage/ProfileHeader.js:const DRUG_PROFILE_HEADER_FRAGMENT = gql`
[ ] platform-app/src/pages/DrugPage/DrugPage.js:const DRUG_PAGE_QUERY = gql`
[ ] platform-app/src/pages/TargetPage/ClassicAssociationsTable.js:const TARGET_ASSOCIATIONS_QUERY = gql`
[ ] platform-app/src/pages/TargetPage/Profile.js:const TARGET_PROFILE_QUERY = gql`
[ ] platform-app/src/pages/TargetPage/ProfileHeader.js:const TARGET_PROFILE_HEADER_FRAGMENT = gql`
[ ] platform-app/src/pages/TargetPage/TargetPage.js:const TARGET_PAGE_QUERY = gql`
[ ] platform-app/src/pages/TargetPage/Wrapper.js:const ASSOCIATIONS_VIZ_QUERY = gql`
[ ] platform-app/src/pages/TargetPage/ClassicAssociations.js:const TARGET_FACETS_QUERY = gql`
[ ] platform-app/src/pages/DiseasePage/ClassicAssociationsTable.js:const DISEASE_ASSOCIATIONS_QUERY = gql`
[ ] platform-app/src/pages/DiseasePage/Profile.js:const DISEASE_PROFILE_QUERY = gql`
[ ] platform-app/src/pages/DiseasePage/DiseasePage.js:const DISEASE_PAGE_QUERY = gql`
[ ] platform-app/src/pages/DiseasePage/ProfileHeader.js:const DISEASE_PROFILE_HEADER_FRAGMENT = gql`
[ ] platform-app/src/pages/DiseasePage/ClassicAssociations.js:const DISEASE_FACETS_QUERY = gql`
[ ] platform-app/src/pages/EvidencePage/Profile.js:const EVIDENCE_PROFILE_QUERY = gql`
[ ] platform-app/src/pages/EvidencePage/EvidencePage.js:const EVIDENCE_PAGE_QUERY = gql`
@d0choa thanks! this will be really convenient to help with systematic API testing.
Epic ticket closed as the work noted in tickets #1526, #1527, #1528, and #1529 has been completed and merged into main
|
2025-04-01T04:35:02.665115
| 2018-12-06T12:00:54
|
388182924
|
{
"authors": [
"andrewhercules",
"peatroot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9471",
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/354"
}
|
gharchive/issue
|
Allow search by UniProt keyword
It would be a useful feature to be able to search by a UniProt keyword. The action taken on successful search needs discussion, but we might direct the user to the batch search for all targets linked to the keyword.
Example: User types Acetylation or KW-0007.
This would require some work on the rest_api.
This issue has been closed as it has been captured on the Master Feature Prioritisation worksheet available on the team's Google Drive.
CC: @d0choa
|
2025-04-01T04:35:02.673326
| 2019-06-19T09:21:46
|
457900820
|
{
"authors": [
"ChristopherJamesTaylor",
"afaulconbridge"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9472",
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/638"
}
|
gharchive/issue
|
--gen no such file or directory h
When Running: sudo docker-compose run --rm mrtarget --gen --data-config https://storage.googleapis.com/open-targets-data-releases/19.04/input/mrtarget.data.19.04.5.yml
I receive the following issue:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/usr/local/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/src/app/mrtarget/CommandLine.py", line 225, in
sys.exit(main())
File "/usr/src/app/mrtarget/CommandLine.py", line 98, in main
process.merge_all(args.dry_run)
File "mrtarget/modules/GeneData.py", line 293, in merge_all
self.data_config, self.es_config)
File "/usr/src/app/mrtarget/plugins/gene/chembl.py", line 23, in merge_data
chembl_handler.download_molecules_linked_to_target()
File "mrtarget/common/chembl_lookup.py", line 68, in download_molecules_linked_to_target
with URLZSource(uri).open() as f_obj:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in enter
return self.gen.next()
File "/usr/local/lib/python2.7/site-packages/opentargets_urlzsource/init.py", line 89, in open
with self._open_local(file_to_open, mode) as fd:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in enter
return self.gen.next()
File "/usr/local/lib/python2.7/site-packages/opentargets_urlzsource/init.py", line 73, in _open_local
with open_f(filename) as fd:
IOError: [Errno 2] No such file or directory: 'h'
This occurs even with or without the other required indices. Any thoughts would be helpful.
This is a consequence of trying to run the master code against the previous data release - in 19.04.5 chembl molecules were a single file, but in master it accepts a list of files. Please ensure all versions of components match, i.e. use the 19.04.5 tag for the 19.04.5 data, and you should avoid this and other issues.
Depending on what you are trying to do, you may not need to run the pipeline at all. If you want to access the associations & evidence, these can be downloaded for processing at https://www.targetvalidation.org/downloads/data If you want to host a local copy of OpenTargets Platform, see https://docs.targetvalidation.org/faq/spin-your-own-instance
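For what it's worth, the error message is consistent with code that expects a list of URIs receiving a single string: iterating a bare string yields single characters, so the first "file" opened would be 'h' from "https://...". A hedged sketch of the kind of normalization that avoids this mismatch (the function name is illustrative, not taken from mrtarget):

```python
def as_uri_list(value):
    """Normalize a config value that may be a single URI or a list of URIs.

    Mirrors the schema change described above: chembl molecules were a
    single file in 19.04.5 but a list of files in master. Without this
    guard, iterating a string would treat each character as a filename.
    """
    if isinstance(value, str):
        return [value]
    return list(value)
```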
Brilliant! Just trying to build the data pipeline so that we can import our own data. But before we use that data I would like to have all the indices available and working. Do you think you could provide an example of the command that would include tags?
|
2025-04-01T04:35:02.675568
| 2024-11-22T20:33:06
|
2684467828
|
{
"authors": [
"abtink",
"jwhui"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9473",
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/pull/10960"
}
|
gharchive/pull-request
|
[num-utils] add DivideAndRoundUp() helper
This commit introduces the DivideAndRoundUp() method, which divides two given unsigned integers and always rounds the result up.
@abtink , please resolve conflicts :)
Rebased and fixed conflicts. Thanks.
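The helper implements the standard integer trick for ceiling division, sketched here in Python for illustration (the actual OpenThread helper lives in C++ under the num-utils module):

```python
def divide_and_round_up(numerator: int, denominator: int) -> int:
    """Divide two non-negative integers, rounding the result up.

    Equivalent to ceil(numerator / denominator) without floating point:
    adding (denominator - 1) pushes any nonzero remainder over to the
    next whole quotient before the integer division truncates.
    """
    return (numerator + denominator - 1) // denominator
```

In C/C++ the same expression is written `(a + b - 1) / b`, with the caveat that `a + b - 1` must not overflow the unsigned type.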
|
2025-04-01T04:35:02.678837
| 2016-09-27T09:59:15
|
179448797
|
{
"authors": [
"codecov-io",
"jwhui",
"xiaom-GitHub"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9474",
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/pull/696"
}
|
gharchive/pull-request
|
THCI: update thci with MGMT_ED_SCAN method.
9.2.13 four scenarios (DUT as Router, MED, SED, FED) pass.
Current coverage is 70.17% (diff: 100%)
Merging #696 into master will decrease coverage by <.01%
@@ master #696 diff @@
==========================================
Files 97 97
Lines 13680 13680
Methods 1883 1883
Messages 0 0
Branches 1604 1604
==========================================
- Hits 9601 9600 -1
Misses 3591 3591
- Partials 488 489 +1
Powered by Codecov. Last update fc20a0f...8622495
Looks good. Please resolve the conflict so we can merge.
|
2025-04-01T04:35:02.682294
| 2024-01-18T12:09:08
|
2088155520
|
{
"authors": [
"canisLupus1313",
"karthick-grl"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9475",
"repo": "openthread/ot-reference-release",
"url": "https://github.com/openthread/ot-reference-release/issues/72"
}
|
gharchive/issue
|
sdk-nrf : CONFIG_OPENTHREAD_BLE_TCAT kconfig flag error. Require latest commit id?
Error while attempting to build NCS with the TCAT kconfig flag CONFIG_OPENTHREAD_BLE_TCAT enabled; will updating to the latest sdk-nrf commit help? @canisLupus1313
@karthick-grl Yes, this flag and functionality is in a newer NCS. Unfortunately we are currently facing a few major bugs (also affecting certification), so I can update this repo when those are resolved.
@karthick-grl I have a new EXPERIMENTAL version of the reference SW; where can I upload it for you?
@canisLupus1313 Will share a Google Drive space with you offline.
@karthick-grl I have uploaded the packages.
Acknowledged! Thanks for the help @canisLupus1313
I would say it's up to you. Currently we are working on a few major changes in NCS, so the bump won't happen soon in this repo.
@karthick-grl Requested update of NCS #73
|
2025-04-01T04:35:02.715679
| 2024-10-21T11:47:04
|
2602298527
|
{
"authors": [
"MomoPoppy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9476",
"repo": "opentiny/tiny-vue",
"url": "https://github.com/opentiny/tiny-vue/pull/2340"
}
|
gharchive/pull-request
|
refactor(date-picker): [date-picker] modify variable names and styles for date-picker
…es for date-picker
PR
PR Checklist
Please check if your PR fulfills the following requirements:
[x] The commit message follows our Commit Message Guidelines
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
PR Type
What kind of change does this PR introduce?
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[x] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] CI related changes
[ ] Documentation content changes
[ ] Other... Please describe:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Does this PR introduce a breaking change?
[ ] Yes
[x] No
Other information
Changes to the PC template do not affect SaaS.
|
2025-04-01T04:35:02.858837
| 2023-11-01T02:52:24
|
1971600158
|
{
"authors": [
"belfner",
"blaz-r"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9481",
"repo": "openvinotoolkit/anomalib",
"url": "https://github.com/openvinotoolkit/anomalib/issues/1454"
}
|
gharchive/issue
|
[Task]: Prevent EfficientAD Looping Twice Over Teacher Outputs (Solutions included)
What is the motivation for this task?
This is a continuation of the discussion at #1301.
Recap: The "per-patch" mean and standard deviation of the teacher model needs to be calculated at the beginning of training. The initial solution was essentially to pass all batches from the dataset through the teacher model and collect the outputs. While collecting the outputs the "per-patch" mean is calculated. Then, the outputs are looped through a second time to calculate the "per-patch" standard deviation. This method is very memory-intensive since all outputs must be in memory at once. To reduce the memory consumption, the function was changed to no longer store the outputs but instead regenerate them for the standard deviation calculations lowering the memory usage at the cost of computation time.
It was suggested in the issue that an iterative solution should be implemented. This would prevent the recalculation of the outputs while also keeping the memory usage low. However, there were concerns over the stability and accuracy of an iterative solution.
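For reference, the general shape of such an iterative scheme is Welford's online algorithm: a single pass that keeps a running mean and a running sum of squared deviations, in O(1) memory. This is a scalar sketch for illustration only; the per-channel tensor versions discussed in this issue operate the same way elementwise:

```python
import math


class RunningMeanStd:
    """Incrementally track mean and (population) std over a stream of values.

    Welford-style update: one pass over the data, O(1) memory, and
    numerically stable compared to the naive sum-of-squares approach.
    """

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, value: float) -> None:
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)

    @property
    def std(self) -> float:
        return math.sqrt(self.m2 / self.count) if self.count else 0.0
```

Because each update only touches the running statistics, the teacher outputs never need to be stored or regenerated, which is exactly the trade-off being evaluated below.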
Describe the solution you'd like
I implemented 2 possible iterative algorithms and compared the results to the original non-iterative versions.
I first implemented the algorithm described here; I will call it algorithm A. In the issue, another algorithm was linked (link), which I also implemented; I will call it algorithm B.
There was discussion about whether these incremental methods would be stable and accurate. To test accuracy, I ran some tests where I would have the two new algorithms run alongside the existing algorithm in a standard training run. The default Efficient AD config was used for each run. This way I could directly compare the output of each algorithm. For the data, I just used the imagenet validation data that is automatically used at the beginning of training.
To compare accuracy, I use mean squared error between the different outputs. To determine memory added, I cleared the cache before each function, then tracked the change in allocated memory (using torch.cuda.memory_allocated(0)) from the start to the end of the function (before the return). Speed was calculated simply using time.perf_counter().
Below are the compiled statistics averaged over several runs (algorithm call order was randomized):
| Algorithm | Mean MSE | STD MSE | Added Memory | Speed Relative to Base Algorithm (Avg. 25 Runs) |
| --- | --- | --- | --- | --- |
| Base | - | - | 12.403 MB | 1x |
| A | 3.7644e-13 | 6.1821e-14 | 4.8292 MB | 1.993x |
| B | 3.6637e-13 | 6.5522e-14 | 14.460 MB | 1.974x |
These tests were performed using an RTX 3080 on Ubuntu 22.04 with an unaltered anomalib development python environment. Using driver version: 535.113.01.
Since both new algorithms loop through the data only once, rather than twice like the current algorithm, they are approximately two times faster; the majority of the time in each algorithm is spent loading data.
These tests are enough to convince me that these incremental algorithms are sufficiently accurate, but I understand if some want more rigorous testing. Since algorithm A was similar in accuracy to B and uses less memory than either of the other two, that is the algorithm I have committed for a pull request. I am open to suggestions or improvements to either algorithm.
Once I submit the PR I will link it below.
Implementations
Reference (Current)
```python
@torch.no_grad()
def teacher_channel_mean_std(self, dataloader: DataLoader) -> dict[str, Tensor]:
    """Calculate the mean and std of the teacher model's activations.

    Args:
        dataloader (DataLoader): Dataloader of the respective dataset.

    Returns:
        dict[str, Tensor]: Dictionary of channel-wise mean and std
    """
    y_means = []
    means_distance = []

    logger.info("Calculate teacher channel mean and std")
    for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean", position=0, leave=True):
        y = self.model.teacher(batch["image"].to(self.device))
        y_means.append(torch.mean(y, dim=[0, 2, 3]))

    channel_mean = torch.mean(torch.stack(y_means), dim=0)[None, :, None, None]

    for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel std", position=0, leave=True):
        y = self.model.teacher(batch["image"].to(self.device))
        distance = (y - channel_mean) ** 2
        means_distance.append(torch.mean(distance, dim=[0, 2, 3]))

    channel_var = torch.mean(torch.stack(means_distance), dim=0)[None, :, None, None]
    channel_std = torch.sqrt(channel_var)

    return {"mean": channel_mean, "std": channel_std}
```
Algorithm A
```python
@torch.no_grad()
def teacher_channel_mean_std(self, dataloader: DataLoader) -> dict[str, Tensor]:
    """Calculate the mean and std of the teacher model's activations.

    Adapted from https://math.stackexchange.com/a/2148949

    Args:
        dataloader (DataLoader): Dataloader of the respective dataset.

    Returns:
        dict[str, Tensor]: Dictionary of channel-wise mean and std
    """
    arrays_defined = False
    n: torch.Tensor | None = None
    channel_sum: torch.Tensor | None = None
    channel_sum_sqr: torch.Tensor | None = None

    for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean & std", position=0, leave=True):
        y = self.model.teacher(batch["image"].to(self.device))
        if not arrays_defined:
            _, num_channels, _, _ = y.shape
            n = torch.zeros((num_channels,), dtype=torch.int64, device=y.device)
            channel_sum = torch.zeros((num_channels,), dtype=torch.float64, device=y.device)
            channel_sum_sqr = torch.zeros((num_channels,), dtype=torch.float64, device=y.device)
            arrays_defined = True

        n += y[:, 0].numel()
        channel_sum += torch.sum(y, dim=[0, 2, 3])
        channel_sum_sqr += torch.sum(y ** 2, dim=[0, 2, 3])

    assert n is not None
    channel_mean = channel_sum / n
    channel_std = (torch.sqrt((channel_sum_sqr / n) - (channel_mean ** 2))).float()[None, :, None, None]
    channel_mean = channel_mean.float()[None, :, None, None]

    return {"mean": channel_mean, "std": channel_std}
```
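One caveat worth noting (my own observation, not from the issue): the E[x²] − E[x]² formula used by algorithm A can suffer catastrophic cancellation when the mean is large relative to the spread. Keeping the accumulators in float64, as the snippet above does, mitigates this. A pure-Python sketch of the failure mode, with Welford's stable update (the idea behind algorithm B) for comparison:

```python
import statistics

def naive_variance(values):
    """E[x^2] - E[x]^2 -- compact but prone to catastrophic cancellation."""
    n = len(values)
    mean = sum(values) / n
    return sum(v * v for v in values) / n - mean ** 2

def welford_variance(values):
    """Welford's update -- numerically stable single pass."""
    n = 0
    mean = 0.0
    m2 = 0.0
    for v in values:
        n += 1
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
    return m2 / n

# Small spread riding on a huge offset: the naive formula subtracts two
# nearly equal ~1e16 quantities, so most significant digits cancel.
data = [1e8 + v for v in (4.0, 7.0, 13.0, 16.0)]
print(abs(welford_variance(data) - statistics.pvariance(data)) < 1e-6)  # True
```

Teacher activations are unlikely to sit on an extreme offset like this, so the float64 accumulators should be more than enough in practice, but it explains why the stability concern was raised.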
Algorithm B
```python
@torch.no_grad()
def teacher_channel_mean_std_B(self, dataloader: DataLoader) -> dict[str, Tensor]:
    """Calculate the mean and std of the teacher model's activations.

    Adapted from https://math.stackexchange.com/a/1769248

    Args:
        dataloader (DataLoader): Dataloader of the respective dataset.

    Returns:
        dict[str, Tensor]: Dictionary of channel-wise mean and std
    """
    arrays_defined = False
    n: torch.Tensor | None = None
    channel_mean: torch.Tensor | None = None
    M2: torch.Tensor | None = None

    for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean & std", position=0, leave=True):
        y = self.model.teacher(batch["image"].to(self.device))
        if not arrays_defined:
            _, num_channels, _, _ = y.shape
            n = torch.zeros((1, num_channels, 1, 1), dtype=torch.int64, device=y.device)
            channel_mean = torch.zeros((1, num_channels, 1, 1), dtype=torch.float64, device=y.device)
            M2 = torch.zeros((1, num_channels, 1, 1), dtype=torch.float64, device=y.device)
            arrays_defined = True

        n += y[:, 0].numel()
        delta = y - channel_mean
        channel_mean += torch.sum(delta / n, dim=[0, 2, 3], keepdim=True)
        M2 += torch.sum(delta * (y - channel_mean), dim=[0, 2, 3], keepdim=True)

    assert n is not None
    channel_mean = channel_mean.float()
    channel_std = torch.sqrt(M2 / n).float()

    return {"mean": channel_mean, "std": channel_std}
```
Additional context
I have a version of lightning_model.py with the new function that passes all pre-commit checks here. If you deem my solution adequate, I am happy to submit a pull request (though I am unable to run the tests); if changes need to be made, I am also happy to work with you to get those implemented. Let me know if you have any questions.
I have also appended my lightning_model.py that I used to gather these statistics.
lightning_model.py
"""EfficientAd: Accurate Visual Anomaly Detection at Millisecond-Level Latencies.
https://arxiv.org/pdf/2303.14535.pdf
"""
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations
import logging
import secrets
import time
from functools import wraps
from pathlib import Path
import albumentations as A
import numpy as np
import torch
import tqdm
from albumentations.pytorch import ToTensorV2
from lightning_fabric import seed_everything
from omegaconf import DictConfig, ListConfig
from pytorch_lightning.utilities.types import STEP_OUTPUT
from torch import Tensor, optim
from torch.utils.data import DataLoader
from torchmetrics.functional import mean_squared_error
from torchvision.datasets import ImageFolder
from anomalib.data.utils import DownloadInfo, download_and_extract
from anomalib.models.components import AnomalyModule
from .torch_model import EfficientAdModel, EfficientAdModelSize, reduce_tensor_elems
torch.use_deterministic_algorithms(True)
logger = logging.getLogger(__name__)
IMAGENETTE_DOWNLOAD_INFO = DownloadInfo(
name="imagenette2.tgz",
url="https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz",
hash="fe2fc210e6bb7c5664d602c3cd71e612",
)
WEIGHTS_DOWNLOAD_INFO = DownloadInfo(
name="efficientad_pretrained_weights.zip",
url="https://github.com/openvinotoolkit/anomalib/releases/download/efficientad_pretrained_weights/efficientad_pretrained_weights.zip",
hash="ec6113d728969cd233271eeed7d692f2",
)
LOG_MEM = True
def profile(func):
@wraps(func)
def wrapper(*args, **kwargs):
s = time.perf_counter()
ret = func(*args, **kwargs)
e = time.perf_counter()
duration = e - s
print(f'{func.__name__} took {duration} s')
return ret, duration
return wrapper
class TransformsWrapper:
def __init__(self, t: A.Compose):
self.transforms = t
def __call__(self, img, *args, **kwargs):
return self.transforms(image=np.array(img))
class EfficientAd(AnomalyModule):
"""PL Lightning Module for the EfficientAd algorithm.
Args:
teacher_file_name (str): path to the pre-trained teacher model
teacher_out_channels (int): number of convolution output channels
image_size (tuple): size of input images
model_size (str): size of student and teacher model
lr (float): learning rate
weight_decay (float): optimizer weight decay
padding (bool): use padding in convolutional layers
pad_maps (bool): relevant if padding is set to False. In this case, pad_maps = True pads the
output anomaly maps so that their size matches the size in the padding = True case.
batch_size (int): batch size for imagenet dataloader
"""
def __init__(
self,
teacher_out_channels: int,
image_size: tuple[int, int],
model_size: EfficientAdModelSize = EfficientAdModelSize.S,
lr: float = 0.0001,
weight_decay: float = 0.00001,
padding: bool = False,
pad_maps: bool = True,
batch_size: int = 1,
) -> None:
super().__init__()
self.model_size = model_size
self.model: EfficientAdModel = EfficientAdModel(
teacher_out_channels=teacher_out_channels,
input_size=image_size,
model_size=model_size,
padding=padding,
pad_maps=pad_maps,
)
self.batch_size = batch_size
self.image_size = image_size
self.lr = lr
self.weight_decay = weight_decay
self.prepare_pretrained_model()
self.prepare_imagenette_data()
def prepare_pretrained_model(self) -> None:
pretrained_models_dir = Path("./pre_trained/")
if not pretrained_models_dir.is_dir():
download_and_extract(pretrained_models_dir, WEIGHTS_DOWNLOAD_INFO)
teacher_path = (
pretrained_models_dir / "efficientad_pretrained_weights" / f"pretrained_teacher_{self.model_size}.pth"
)
logger.info(f"Load pretrained teacher model from {teacher_path}")
self.model.teacher.load_state_dict(torch.load(teacher_path, map_location=torch.device(self.device)))
def prepare_imagenette_data(self) -> None:
self.data_transforms_imagenet = A.Compose(
[ # We obtain an image P ∈ R 3×256×256 from ImageNet by choosing a random image,
A.Resize(self.image_size[0] * 2, self.image_size[1] * 2), # resizing it to 512 × 512,
A.ToGray(p=0.3), # converting it to gray scale with a probability of 0.3
A.CenterCrop(self.image_size[0], self.image_size[1]), # and cropping the center 256 × 256 pixels
A.ToFloat(always_apply=False, p=1.0, max_value=255),
ToTensorV2(),
]
)
imagenet_dir = Path("./datasets/imagenette")
if not imagenet_dir.is_dir():
download_and_extract(imagenet_dir, IMAGENETTE_DOWNLOAD_INFO)
imagenet_dataset = ImageFolder(imagenet_dir, transform=TransformsWrapper(t=self.data_transforms_imagenet))
self.imagenet_loader = DataLoader(imagenet_dataset, batch_size=self.batch_size, shuffle=True, pin_memory=True)
self.imagenet_iterator = iter(self.imagenet_loader)
@profile
@torch.no_grad()
def teacher_channel_mean_std(self, dataloader: DataLoader) -> dict[str, Tensor]:
if LOG_MEM:
torch.cuda.empty_cache()
a = torch.cuda.memory_allocated(0)
"""Calculate the mean and std of the teacher models activations.
Args:
dataloader (DataLoader): Dataloader of the respective dataset.
Returns:
dict[str, Tensor]: Dictionary of channel-wise mean and std
"""
y_means = []
means_distance = []
logger.info("Calculate teacher channel mean and std")
for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean", position=0, leave=True):
y = self.model.teacher(batch["image"].to(self.device))
y_means.append(torch.mean(y, dim=[0, 2, 3]))
channel_mean = torch.mean(torch.stack(y_means), dim=0)[None, :, None, None]
for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel std", position=0, leave=True):
y = self.model.teacher(batch["image"].to(self.device))
distance = (y - channel_mean) ** 2
means_distance.append(torch.mean(distance, dim=[0, 2, 3]))
channel_var = torch.mean(torch.stack(means_distance), dim=0)[None, :, None, None]
channel_std = torch.sqrt(channel_var)
if LOG_MEM:
print(f'Allocated: {torch.cuda.memory_allocated(0) - a} bytes')
return {"mean": channel_mean, "std": channel_std}
@profile
@torch.no_grad()
def teacher_channel_mean_std_A(self, dataloader: DataLoader) -> dict[str, Tensor]:
if LOG_MEM:
torch.cuda.empty_cache()
a = torch.cuda.memory_allocated(0)
"""Calculate the mean and std of the teacher models activations.
Adapted from https://math.stackexchange.com/a/2148949
Args:
dataloader (DataLoader): Dataloader of the respective dataset.
Returns:
dict[str, Tensor]: Dictionary of channel-wise mean and std
"""
arrays_defined = False
n: torch.Tensor | None = None
channel_sum: torch.Tensor | None = None
channel_sum_sqr: torch.Tensor | None = None
for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean & std", position=0, leave=True):
y = self.model.teacher(batch["image"].to(self.device))
if not arrays_defined:
_, num_channels, _, _ = y.shape
n = torch.zeros((num_channels,), dtype=torch.int64, device=y.device)
channel_sum = torch.zeros((num_channels,), dtype=torch.float64, device=y.device)
channel_sum_sqr = torch.zeros((num_channels,), dtype=torch.float64, device=y.device)
arrays_defined = True
n += y[:, 0].numel()
channel_sum += torch.sum(y, dim=[0, 2, 3])
channel_sum_sqr += torch.sum(y ** 2, dim=[0, 2, 3])
assert n is not None
channel_mean = channel_sum / n
channel_std = (torch.sqrt((channel_sum_sqr / n) - (channel_mean ** 2))).float()[None, :, None, None]
channel_mean = channel_mean.float()[None, :, None, None]
if LOG_MEM:
print(f'Allocated: {torch.cuda.memory_allocated(0) - a} bytes')
return {"mean": channel_mean, "std": channel_std}
@profile
@torch.no_grad()
def teacher_channel_mean_std_B(self, dataloader: DataLoader) -> dict[str, Tensor]:
if LOG_MEM:
torch.cuda.empty_cache()
a = torch.cuda.memory_allocated(0)
"""Calculate the mean and std of the teacher models activations.
Adapted from https://math.stackexchange.com/a/1769248
Args:
dataloader (DataLoader): Dataloader of the respective dataset.
Returns:
dict[str, Tensor]: Dictionary of channel-wise mean and std
"""
arrays_defined = False
n: torch.Tensor | None = None
channel_mean: torch.Tensor | None = None
M2: torch.Tensor | None = None
for batch in tqdm.tqdm(dataloader, desc="Calculate teacher channel mean & std", position=0, leave=True):
y = self.model.teacher(batch["image"].to(self.device))
if not arrays_defined:
_, num_channels, _, _ = y.shape
n = torch.zeros((1, num_channels, 1, 1), dtype=torch.int64, device=y.device)
channel_mean = torch.zeros((1, num_channels, 1, 1), dtype=torch.float64, device=y.device)
M2 = torch.zeros((1, num_channels, 1, 1), dtype=torch.float64, device=y.device)
arrays_defined = True
n += y[:, 0].numel()
delta = y - channel_mean
channel_mean += torch.sum(delta / n, dim=[0, 2, 3], keepdim=True)
M2 += torch.sum(delta * (y - channel_mean), dim=[0, 2, 3], keepdim=True)
assert n is not None
channel_mean = channel_mean.float()
channel_std = torch.sqrt(M2 / n).float()
if LOG_MEM:
print(f'Allocated: {torch.cuda.memory_allocated(0) - a} bytes')
return {"mean": channel_mean, "std": channel_std}
@torch.no_grad()
def map_norm_quantiles(self, dataloader: DataLoader) -> dict[str, Tensor]:
"""Calculate 90% and 99.5% quantiles of the student(st) and autoencoder(ae).
Args:
dataloader (DataLoader): Dataloader of the respective dataset.
Returns:
dict[str, Tensor]: Dictionary of both the 90% and 99.5% quantiles
of both the student and autoencoder feature maps.
"""
maps_st = []
maps_ae = []
logger.info("Calculate Validation Dataset Quantiles")
for batch in tqdm.tqdm(dataloader, desc="Calculate Validation Dataset Quantiles", position=0, leave=True):
for img, label in zip(batch["image"], batch["label"]):
if label == 0: # only use good images of validation set!
output = self.model(img.to(self.device))
map_st = output["map_st"]
map_ae = output["map_ae"]
maps_st.append(map_st)
maps_ae.append(map_ae)
qa_st, qb_st = self._get_quantiles_of_maps(maps_st)
qa_ae, qb_ae = self._get_quantiles_of_maps(maps_ae)
return {"qa_st": qa_st, "qa_ae": qa_ae, "qb_st": qb_st, "qb_ae": qb_ae}
def _get_quantiles_of_maps(self, maps: list[Tensor]) -> tuple[Tensor, Tensor]:
"""Calculate 90% and 99.5% quantiles of the given anomaly maps.
If the total number of elements in the given maps is larger than 16777216
the returned quantiles are computed on a random subset of the given
elements.
Args:
maps (list[Tensor]): List of anomaly maps.
Returns:
tuple[Tensor, Tensor]: Two scalars - the 90% and the 99.5% quantile.
"""
maps_flat = reduce_tensor_elems(torch.cat(maps))
qa = torch.quantile(maps_flat, q=0.9).to(self.device)
qb = torch.quantile(maps_flat, q=0.995).to(self.device)
return qa, qb
def configure_optimizers(self) -> optim.Optimizer:
optimizer = optim.Adam(
list(self.model.student.parameters()) + list(self.model.ae.parameters()),
lr=self.lr,
weight_decay=self.weight_decay,
)
num_steps = min(
self.trainer.max_steps, self.trainer.max_epochs * len(self.trainer.datamodule.train_dataloader())
)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=int(0.95 * num_steps), gamma=0.1)
return {"optimizer": optimizer, "lr_scheduler": scheduler}
def compare_output(self):
seed_everything(1001)
baseline_0, _ = self.teacher_channel_mean_std(self.trainer.datamodule.train_dataloader())
seed_everything(1001)
baseline_1, _ = self.teacher_channel_mean_std(self.trainer.datamodule.train_dataloader())
print('Baseline comparison')
print(mean_squared_error(baseline_0['mean'], baseline_1['mean']))
print(mean_squared_error(baseline_0['std'], baseline_1['std']))
seed_everything(1001)
A_output, _ = self.teacher_channel_mean_std_A(self.trainer.datamodule.train_dataloader())
seed_everything(1001)
B_output, _ = self.teacher_channel_mean_std_B(self.trainer.datamodule.train_dataloader())
seed_everything(1001)
channel_mean_std, _ = self.teacher_channel_mean_std(self.trainer.datamodule.train_dataloader())
print('Full comparison')
print('A')
print(mean_squared_error(channel_mean_std['mean'], A_output['mean']))
print(mean_squared_error(channel_mean_std['std'], A_output['std']))
print('B')
print(mean_squared_error(channel_mean_std['mean'], B_output['mean']))
print(mean_squared_error(channel_mean_std['std'], B_output['std']))
def compare_speed(self):
funcs = [self.teacher_channel_mean_std, self.teacher_channel_mean_std_A, self.teacher_channel_mean_std_B]
for func in funcs:
func(self.trainer.datamodule.train_dataloader())
import random
results = {func.__name__: 0 for func in funcs}
for x in range(25):
seed = secrets.randbits(30)
random.seed(seed)
random.shuffle(funcs)
for func in funcs:
seed_everything(seed)
results[func.__name__] += func(self.trainer.datamodule.train_dataloader())[1]
print(results)
def on_train_start(self) -> None:
global LOG_MEM
"""Calculate or load the channel-wise mean and std of the training dataset and push to the model."""
if not self.model.is_set(self.model.mean_std):
# channel_mean_std = self.teacher_channel_mean_std(self.trainer.datamodule.train_dataloader())
# self.model.mean_std.update(channel_mean_std)
self.compare_output()
LOG_MEM = False
self.compare_speed()
exit()
def training_step(self, batch: dict[str, str | Tensor], *args, **kwargs) -> dict[str, Tensor]:
"""Training step for EfficientAd returns the student, autoencoder and combined loss.
Args:
batch (batch: dict[str, str | Tensor]): Batch containing image filename, image, label and mask
Returns:
Loss.
"""
del args, kwargs # These variables are not used.
try:
# infinite dataloader; [0] getting the image not the label
batch_imagenet = next(self.imagenet_iterator)[0]["image"].to(self.device)
except StopIteration:
self.imagenet_iterator = iter(self.imagenet_loader)
batch_imagenet = next(self.imagenet_iterator)[0]["image"].to(self.device)
loss_st, loss_ae, loss_stae = self.model(batch=batch["image"], batch_imagenet=batch_imagenet)
loss = loss_st + loss_ae + loss_stae
self.log("train_st", loss_st.item(), on_epoch=True, prog_bar=True, logger=True)
self.log("train_ae", loss_ae.item(), on_epoch=True, prog_bar=True, logger=True)
self.log("train_stae", loss_stae.item(), on_epoch=True, prog_bar=True, logger=True)
self.log("train_loss", loss.item(), on_epoch=True, prog_bar=True, logger=True)
return {"loss": loss}
def on_validation_start(self) -> None:
"""
Calculate the feature map quantiles of the validation dataset and push to the model.
"""
if (self.current_epoch + 1) == self.trainer.max_epochs:
map_norm_quantiles = self.map_norm_quantiles(self.trainer.datamodule.val_dataloader())
self.model.quantiles.update(map_norm_quantiles)
def validation_step(self, batch: dict[str, str | Tensor], *args, **kwargs) -> STEP_OUTPUT:
"""Validation Step of EfficientAd returns anomaly maps for the input image batch
Args:
batch (dict[str, str | Tensor]): Input batch
Returns:
Dictionary containing anomaly maps.
"""
del args, kwargs # These variables are not used.
batch["anomaly_maps"] = self.model(batch["image"])["anomaly_map"]
return batch
class EfficientAdLightning(EfficientAd):
"""PL Lightning Module for the EfficientAd Algorithm.
Args:
hparams (DictConfig | ListConfig): Model params
"""
def __init__(self, hparams: DictConfig | ListConfig) -> None:
super().__init__(
teacher_out_channels=hparams.model.teacher_out_channels,
model_size=hparams.model.model_size,
lr=hparams.model.lr,
weight_decay=hparams.model.weight_decay,
padding=hparams.model.padding,
pad_maps=hparams.model.pad_maps,
image_size=hparams.dataset.image_size,
batch_size=hparams.dataset.train_batch_size,
)
self.hparams: DictConfig | ListConfig # type: ignore
self.save_hyperparameters(hparams)
Hello.
Thanks for this 😄. I believe that going with algorithm A is a good decision, but others have to confirm this, and maybe some additional tests should be conducted. The code looks good to me, but I believe it's possible to init the arrays outside the loop. You can open a PR, and we'll discuss it there.
|
2025-04-01T04:35:02.871178
| 2021-09-03T23:26:09
|
988153306
|
{
"authors": [
"alalek",
"mmaaz60"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9482",
"repo": "openvinotoolkit/cvat",
"url": "https://github.com/openvinotoolkit/cvat/issues/3639"
}
|
gharchive/issue
|
How to host CVAT on local network
I will appreciate if anyone can share a detailed instruction to deploy CVAT on a local network. Thanks
Check "Installation guide" on the main page: https://github.com/openvinotoolkit/cvat
Hi @alalek,
I did that but was still not able to figure out the steps. Are you referring to the Deploying CVAT behind a proxy section in the installation guide? If that so, I tried it and it doesn't work
The current link points to this location: https://openvinotoolkit.github.io/cvat/docs/administration/basics/installation/
Start from the beginning. Don't jump to the end.
Provide your steps and test logs in case of troubles.
Thank You @alalek for your response,
In my case, what I want is to install CVAT on one machine and access it from other machines on the same network. For example, if I install CVAT on a machine with IP address <IP_ADDRESS>, I want to use it from any other machine on the same network by just typing <IP_ADDRESS>:8080 into that machine's browser.
I have followed the instructions at https://openvinotoolkit.github.io/cvat/docs/administration/basics/installation and was able to successfully run CVAT locally. But when I try to access it from another machine, it says
This site can’t be reached
<IP_ADDRESS> refused to connect.
Following the instructions under Deploying CVAT behind a proxy, I updated my ~/.docker/config.json file; it now looks like the following.
```json
{
    "proxies": {
        "default": {
            "httpProxy": "<IP_ADDRESS>:8080",
            "httpsProxy": "<IP_ADDRESS>:8080",
            "noProxy": "*"
        }
    }
}
```
Also, following some comments found on the web, I updated the cvat-ui/react_nginx.conf file as follows:
```nginx
server {
    root /usr/share/nginx/html;

    # Any route that doesn't have a file extension (e.g. /devices)
    location / {
        try_files $uri $uri/ /index.html;
        add_header Access-Control-Allow-Origin "*";
    }
}
```
I don't have any experience with web programming, so I may be overlooking something. Could you point out where I am making a mistake, or share instructions for deploying CVAT on one machine and accessing it from other machines on the same network? I am also attaching a screenshot of docker-compose logs -f in case it is helpful. Thanks
was able to successfully run CVAT locally
I want is to use it from any other machine on the same network
<IP_ADDRESS> refused to connect.
IMHO, this is off-topic. You have to learn how to configure your firewall and open ports properly on your own; we can't help you here.
It is better to ask your system administrator to configure that. We don't know your network configuration details or policies of your corporate network. So any suggestions would be inaccurate.
BTW, There are some guidelines in Internet:
https://askubuntu.com/questions/911765/open-port-on-ubuntu-16-04
https://www.bojankomazec.com/2019/12/how-to-open-ports-on-ubuntu.html
https://www.ibm.com/docs/en/spectrum-scale/4.2.2?topic=firewall-examples-how-open-ports
Caution: don't try to change network settings over a remote connection. You may break the network configuration and lose your remote connection without any chance to recover.
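As a rough illustration only (assuming an Ubuntu host with `ufw` as the firewall; adapt to your own distribution and network policy), opening CVAT's default port for inbound connections might look like:

```
# Allow inbound TCP traffic to the port CVAT listens on (8080 by default)
sudo ufw allow 8080/tcp
sudo ufw status verbose   # confirm the rule is active
```

Depending on the CVAT version, you may also need to set the host/allowed-hosts setting (e.g. a CVAT_HOST environment variable, if your version supports it) to the server's IP before bringing up docker-compose, so that requests addressed to that IP are accepted.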
Deploying CVAT behind a proxy
You don't need that at all if you already run CVAT locally (you have direct/transparent access to Internet).
|
2025-04-01T04:35:02.886523
| 2021-07-28T12:28:00
|
954804011
|
{
"authors": [
"aschernov",
"bsekachev",
"nmanovic"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9483",
"repo": "openvinotoolkit/cvat",
"url": "https://github.com/openvinotoolkit/cvat/pull/3473"
}
|
gharchive/pull-request
|
Semi-automatic tools enhancements (Non-blocking UI, tips)
Motivation and context
Related #2936
Resolved #2515
How has this been tested?
Manual testing
Checklist
[x] I submit my changes into the develop branch
[x] I have added description of my changes into CHANGELOG file
[ ] I have updated the documentation accordingly
[ ] I have added tests to cover my changes
[x] I have linked related issues (read github docs)
[x] I have increased versions of npm packages if it is necessary (cvat-canvas,
cvat-core, cvat-data and cvat-ui)
License
[x] I submit my code changes under the same MIT License that covers the project.
Feel free to contact the maintainers if that's a concern.
[x] I have updated the license header for each file (see an example below)
# Copyright (C) 2021 Intel Corporation
#
# SPDX-License-Identifier: MIT
@bsekachev , I like the experience, especially with DEXTR. But I'm missing a feature. When I'm annotating using DEXTR, it doesn't respect "selected opacity" parameter. In some cases it is really difficult to annotate if opacity is 0 during drawing.
Added feature: "Selected opacity" slider now defines opacity level of shapes being drawn (works when draw shapes, or work with interactors | trackers)
@aschernov @TOsmanov
Could you please update the user guide about using interactors according to new changes (non-blocking UI, Selected opacity slider)? Contact @azhavoro to get a CVAT instance with these changes.
Also, we expect a feedback from the DA team about the previous PR (with points minimizer) and this PR, since they were implemented to satisfy your requests.
@dvkruchinin
Could you please prepare a test for:
Added feature: "Selected opacity" slider now defines opacity level of shapes being drawn (works when drawing shapes)
@bsekachev , we will update the documentation.
Also, we will prepare a feedback about the implementations you mentioned.
@bsekachev , we made one more annotation experiment with another but similar annotation task we used before, and got the following conclusions:
After test annotation, comparing the traditional method (manual) and the semi-automatic (DEXTR) method with the points minimizer, we got the following data:
Manual annotation is faster than obtaining a polygon with the semi-automatic annotation tool, and in most cases we still have to correct the result.
Examples of annotation with DEXTR with and without points minimizer:
In this example, there are inaccuracies in the result of work of the semi-automatic annotation tool. If reduce the number of points with the points minimizer, we will need to correct only 3 points.
Often, a semi-automatic annotation tool annotated an object badly. A lot of corrections are required, even if we set a lot of points.
We can also use the minimizer to increase the number of points to get better polygon quality for objects that are well annotated with semi-automatic annotation tool, where no additional corrections are required, if the task requires it. It took 8 seconds to receive a response from the server, excluding the time to set the first 4 points. Annotating such objects could possibly take less time than manual annotating if the request to the server took less time.
Some complex objects require a lot of points. Using points minimizer in this case increased the time a bit for correcting the polygon. If the task does not require excessive accuracy, then using points minimizer will spend less time than without it.
In general, using the points minimizer in many cases reduces the time needed to correct the polygon created with the semi-automatic annotation tool. But manual annotation is faster overall; this is mainly related to the long wait for a response from the server. For simple objects, reducing the server response time would bring the speed of semi-automatic annotation closer to that of manual annotation.
Perhaps the absence of a blocking UI has made annotation with the semi-automatic tool a little faster. The blocking UI did not allow viewing the polygon while awaiting a response from the server. When working with a clear, large object and seeing inaccuracies in the polygon, we can add points to the places that were not annotated well. We can also spot an incorrectly placed point and remove it without waiting for a response from the server, which is convenient.
@aschernov , can we try the same experiments on objects like persons? Each DL model is trained on a specific dataset. DEXTR was trained on Pascal or COCO, which means the model hasn't seen the classes you are trying to annotate. This is why results are extremely bad (my guess). It should probably be mentioned in the documentation. When we are able to retrain the model on the fly, it makes sense to come back to the experiments described in the comment above.
@nmanovic , sure. We'll prepare a corresponding test task and repeat the experiment. I'll let you know about the results.
|
2025-04-01T04:35:02.890017
| 2022-12-16T02:06:49
|
1499446221
|
{
"authors": [
"harimkang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9484",
"repo": "openvinotoolkit/model_preparation_algorithm",
"url": "https://github.com/openvinotoolkit/model_preparation_algorithm/pull/112"
}
|
gharchive/pull-request
|
[OTX] Fix Classification Stage & Classifier for supporting public backbone
Summary
Changed the BACKBONE registry used in the Classification Stage from mmcls to mmcv
If the output of the backbone is a tuple (as with backbones from other tasks),
adjust the classification head to receive the output of the last layer
Through this work, I confirmed backbone replacement, build, and training for mmdet, mmseg, torchvision, and OMZ backbones based on SAMImageClassifier
latest otx(feature/otx) + this mpa branch
otx cli test for classification
@sungmanc @goodsong81 This only applies to the current SAMImageClassifier. Should it apply to other classifiers as well? (All currently used classification models use SAMImageClassifier.) What do you think?
|
2025-04-01T04:35:02.958950
| 2017-03-17T00:03:11
|
214873057
|
{
"authors": [
"csantanapr",
"pjdurai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9485",
"repo": "openwhisk/openwhisk-client-js",
"url": "https://github.com/openwhisk/openwhisk-client-js/issues/42"
}
|
gharchive/issue
|
Are multiple openwhisk() objects supported ?
Greetings
I am looking into the JS api and have a question.
var options = {apihost: 'openwhisk.ng.bluemix.net', api_key: '...'};
var ow = openwhisk(options);
Is it possible to have multiple 'ow' objects, with different API key/namespace combinations, in the same Node.js application?
Does the library have any global/singleton properties that preclude that kind of usage?
Thanks
pj
Yes, every openwhisk(options) call creates a new instance:
https://github.com/openwhisk/openwhisk-client-js/blob/master/lib/main.js#L12
|
2025-04-01T04:35:02.960127
| 2017-01-06T14:55:58
|
199212300
|
{
"authors": [
"mdeuser"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9486",
"repo": "openwhisk/openwhisk",
"url": "https://github.com/openwhisk/openwhisk/issues/1691"
}
|
gharchive/issue
|
wsk api 'list' command columnar output should dynamically handle large values
The column output currently looks like:
ok: APIs
Action Verb API Name URL
Handle long action names as well as long API names. Either adjust column sizes and/or judiciously truncate column value to fit nicely.
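The requested behavior (size columns to their content, capped and truncated) could be sketched as follows. This is a hypothetical illustration, not the actual wsk CLI code; the function names and the cap of 20 are assumptions.

```go
package main

import "fmt"

// columnWidth returns a width that fits the longest value, capped at max.
func columnWidth(values []string, max int) int {
	w := 0
	for _, v := range values {
		if len(v) > w {
			w = len(v)
		}
	}
	if w > max {
		return max
	}
	return w
}

// truncate shortens a value to fit a column, marking the cut with "...".
func truncate(v string, w int) string {
	if len(v) <= w {
		return v
	}
	return v[:w-3] + "..."
}

func main() {
	actions := []string{"shortAction", "aVeryLongActionNameThatWouldBreakTheLayout"}
	w := columnWidth(actions, 20)
	for _, a := range actions {
		fmt.Printf("%-*s\n", w, truncate(a, w))
	}
}
```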
Closing as duplicate of #1646
|
2025-04-01T04:35:02.961856
| 2017-02-01T22:04:29
|
204719372
|
{
"authors": [
"csantanapr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9487",
"repo": "openwhisk/openwhisk",
"url": "https://github.com/openwhisk/openwhisk/issues/1797"
}
|
gharchive/issue
|
Update CLI instructions to not set namespace anymore
With the new key auth scheme, the namespace no longer needs to be set.
https://github.com/openwhisk/openwhisk/blob/master/docs/README.md
found by @rabbah
fixed https://github.com/openwhisk/openwhisk/blob/master/docs/cli.md
|
2025-04-01T04:35:02.963479
| 2017-01-31T19:33:02
|
204401184
|
{
"authors": [
"dubeejw"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9488",
"repo": "openwhisk/openwhisk",
"url": "https://github.com/openwhisk/openwhisk/pull/1787"
}
|
gharchive/pull-request
|
Do not Print Trigger After Update
Just inform the user that a trigger has been updated instead of displaying the trigger in JSON format
Closes https://github.com/openwhisk/openwhisk/issues/1750
@rabbah, please review.
PG2 994
PG approved.
|
2025-04-01T04:35:03.047903
| 2024-03-05T21:19:41
|
2170161571
|
{
"authors": [
"MaheshRavishankar",
"benvanik",
"bjacob"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9489",
"repo": "openxla/iree",
"url": "https://github.com/openxla/iree/issues/16670"
}
|
gharchive/issue
|
Missing CPU features attributes on dispatch functions lead to UB / missed target instructions
Testcase: just a i8 x i8 -> i32 matmul:
func.func @matmul_dynamic(%lhs: tensor<?x?xi8>, %rhs: tensor<?x?xi8>, %acc: tensor<?x?xi32>) -> tensor<?x?xi32> {
%result = linalg.matmul ins(%lhs, %rhs: tensor<?x?xi8>, tensor<?x?xi8>) outs(%acc: tensor<?x?xi32>) -> tensor<?x?xi32>
return %result: tensor<?x?xi32>
}
Reproduce:
tools/iree-compile \
~/matmul_i8.mlir -o /tmp/a.vmfb \
--iree-hal-target-backends=llvm-cpu \
--iree-llvmcpu-target-cpu=znver4 \
--iree-llvmcpu-enable-ukernels=all \
--iree-hal-dump-executable-intermediates-to=/tmp \
-mlir-disable-threading \
-mlir-print-ir-after-all \
2>/tmp/log
Inspection of the generated assembly /tmp/module_matmul_i8_linked_llvm_cpu_embedded_elf_x86_64.s shows that baseline AVX-512 code is generated (VPMADDWD) instead of the expected AVX-512-VNNI code (VPDPWSSD):
matmul_dynamic_dispatch_3_mmt4d_DxDxDx16x16x2_i8xi8xi32:
[...]
vshufi64x2 $27, %zmm16, %zmm16, %zmm19
vpmaddwd %zmm16, %zmm21, %zmm24
vpmaddwd %zmm17, %zmm21, %zmm26
vpmaddwd %zmm18, %zmm21, %zmm25
vpmaddwd %zmm19, %zmm21, %zmm21
[...]
Why? The dumped intermediates show that all the way to the post-linking optimized IR (/tmp/module_matmul_i8_linked_llvm_cpu_embedded_elf_x86_64.optimized.ll), it was the expected AVX-512-VNNI intrinsic function:
define internal noundef i32 @matmul_dynamic_dispatch_3_mmt4d_DxDxDx16x16x2_i8xi8xi32(ptr noalias nocapture nonnull readonly align 16 %0, ptr noalias nocapture nonnull readonly align 16 %1, ptr noalias nocapture nonnull readonly align 16 %2) #1 !dbg !90 {
[...]
%358 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %334, <16 x i32> %354, <16 x i32> %347), !dbg !91
%359 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %333, <16 x i32> %354, <16 x i32> %348), !dbg !91
%360 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %332, <16 x i32> %354, <16 x i32> %349), !dbg !91
%361 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %331, <16 x i32> %354, <16 x i32> %350), !dbg !91
%362 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %330, <16 x i32> %355, <16 x i32> %347), !dbg !91
%363 = tail call <16 x i32> @llvm.x86.avx512.vpdpwssd.512(<16 x i32> %329, <16 x i32> %355, <16 x i32> %348), !dbg !91
[...]
But wait, what is that attribute #1 on that function? Does it have the required CPU feature enabled? Nope:
attributes #1 = { nofree norecurse nosync nounwind "frame-pointer"="all" "hot" "no-builtins" "nonlazybind" }
So our code here is Undefined Behavior, and indeed, while initially minimizing it with llc, I did run into should-not-get-here crashes in x86 instruction selection. And in our current e2e IREE use case, the Undefined Behavior, while not crashing or affecting correctness, is still causing us to miss the intended VNNI instruction.
"Of course" this dispatch function doesn't have the required +avx512vnni CPU feature attribute, since we never put it there. The only functions that have the +avx512vnni CPU feature attribute are the ukernel internal VNNI implementation functions, which are compiled with this CPU feature enabled in the first place.
I guess I was expecting the attribute to be propagated from callee to caller as the VNNI inner tile function gets inlined first into iree_uk_mmt4d and then into the dispatch function. It's not.
How do we resolve that in a way that doesn't violate the design with target specialization in LLVMCPUTarget? @benvanik
if the LLVM inliner is doing the inlining and not propagating the flag, that feels like an LLVM bug that needs to be fixed there - or we'd need to hook the inliner somehow and do the propagation ourselves
Oh right, that makes sense. I'll start by minimizing the .linked.ll.
When I run llc on the .linked.ll, even with -O3, I get no inlining at all (of the tile functions with CPU feature attributes, into the callers without these attributes).
So the inlining behavior of iree-compile here is a behavior departure from llc.
Incidentally, https://github.com/llvm/llvm-project/pull/83820 just went in and sheds light on the semantics of inlining vs CPU features on x86: "The caller features must be a superset of the callee features."
Notice that this logic (which says no inlining in this case) exists inside the X86 Target, while our logic (which does inline in this case) runs in a more middle-end-like pass manager, https://github.com/openxla/iree/blob/7782a414ea473c59f6d7a882cb510690ed666c79/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/LLVMIRPasses.cpp#L48 . I checked the llc source code and it does not use that.
Ultimately, the perfect inlining (of, say, that AVX-512-VNNI tile function into the dispatch function, and DCE'ing of everything else) is only possible if we are actually specializing the code for this specific CPU feature. So either we accept that, and then the easy fix is to add the target machine's CPU features to the dispatch function, or we don't accept that, and then we need to accept that we won't get the inlining and the subsequent optimizations. I'd love to hear that there's a third way, but I don't see it right now. @benvanik
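The rule from llvm/llvm-project#83820 quoted above ("the caller features must be a superset of the callee features") can be sketched as a simple set check. This is a deliberately simplified illustration; the real X86 `areInlineCompatible` logic has additional special cases.

```go
package main

import (
	"fmt"
	"strings"
)

// canInline reports whether callee may be inlined into caller under the
// simplified rule that the caller's target-features must be a superset of
// the callee's. Feature strings look like "+avx512f,+avx512vnni".
func canInline(callerFeatures, calleeFeatures string) bool {
	have := map[string]bool{}
	for _, f := range strings.Split(callerFeatures, ",") {
		have[strings.TrimSpace(f)] = true
	}
	for _, f := range strings.Split(calleeFeatures, ",") {
		f = strings.TrimSpace(f)
		if f != "" && !have[f] {
			return false
		}
	}
	return true
}

func main() {
	// A dispatch function lacking +avx512vnni must not absorb a VNNI tile
	// function; once the target machine's full feature string is attached
	// to the caller, inlining becomes legal.
	fmt.Println(canInline("+avx512f", "+avx512f,+avx512vnni"))
	fmt.Println(canInline("+avx512f,+avx512vnni", "+avx512f,+avx512vnni"))
}
```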
Some progress of sorts. I put together this patch to try locally perfectly aligning the caller and callee target-features:
diff --git a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/Builtins/UKernel.cpp b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/Builtins/UKernel.cpp
index 46d1978d00..c1da9812ab 100644
--- a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/Builtins/UKernel.cpp
+++ b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/Builtins/UKernel.cpp
@@ -9,7 +9,9 @@
#include "iree/builtins/ukernel/ukernel_bitcode.h"
#include "iree/compiler/Codegen/Utils/Utils.h"
#include "llvm/Bitcode/BitcodeReader.h"
+#include "llvm/IR/Attributes.h"
#include "llvm/Support/MemoryBufferRef.h"
+#include "mlir/IR/Builders.h"
#include "mlir/Support/LLVM.h"
namespace mlir::iree_compiler::IREE::HAL {
@@ -57,6 +59,11 @@ loadUKernelBitcode(llvm::TargetMachine *targetMachine,
// can result in a large penalty in both performance and code size.
for (auto &func : module.get()->functions()) {
func.addFnAttr(llvm::Attribute::AlwaysInline);
+ llvm::AttrBuilder builder(context);
+ func.removeFnAttr("target-cpu");
+ func.removeFnAttr("target-features");
+ func.addFnAttr("target-cpu", targetMachine->getTargetCPU());
+ func.addFnAttr("target-features", targetMachine->getTargetFeatureString());
}
return module;
}
diff --git a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/LLVMCPUTarget.cpp b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/LLVMCPUTarget.cpp
index f3b5311921..8c328e4176 100644
--- a/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/LLVMCPUTarget.cpp
+++ b/compiler/src/iree/compiler/Dialect/HAL/Target/LLVMCPU/LLVMCPUTarget.cpp
@@ -371,6 +371,12 @@ public:
// Our dispatches are all hot - that's kind of the point.
// This may favor more aggressive optimizations.
func.addFnAttr("hot");
+
+ func.addFnAttr("target-cpu", executableBuilder.getStringAttr(
+ targetMachine->getTargetCPU()));
+ func.addFnAttr("target-features",
+ executableBuilder.getStringAttr(
+ targetMachine->getTargetFeatureString()));
}
With that, I still get exactly the same problem with iree-compile's output, the vpmaddwd instruction instead of the vpdpwssd, but now this isn't UB anymore, as far as I can see. llc now processes the .optimized.ll without crashing and produces the same result, the unexpected vpmaddwd instruction, despite having the vpdpwssd intrinsics and now (unlike before) having all the right CPU feature attributes. So at least I can try to minimize that .optimized.ll now with llc. Before, I couldn't, due to the crashes.
This is the PR Ben was referring to https://github.com/openxla/iree/pull/16665
|
2025-04-01T04:35:03.050135
| 2023-04-12T21:37:53
|
1665298134
|
{
"authors": [
"hanhanW"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9490",
"repo": "openxla/iree",
"url": "https://github.com/openxla/iree/pull/13064"
}
|
gharchive/pull-request
|
Make pack/unpack benchmarks have same amount of elements for all benchmarks
The shape is derived from working_set_size. If the number is not a multiple of 4, the final number of elements can vary widely between the avx2 and avx512 configurations.
E.g., I tried benchmarking 384x512xf32; the working_set_size is 1572864 in this case. However, the total number of elements passed to avx512 is 1.5x that of avx2, which misleads the latency metrics. Now I realize that we should mostly look at bandwidth from the report, not latency. Maybe we should compute the shape correctly, so they pack the same number of elements? Or maybe it's just an approximation for measuring memory bandwidth and we don't really care about the actual number of elements passed to the benchmarks?
I'm fine if we don't land this PR, because I can get what I want now. I sent it out to check whether my understanding is correct and to expose what I found to public users.
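One way to make every configuration pack the same number of elements, as suggested above, is to round the element count derived from working_set_size down to a common multiple of all tile sizes. This is a sketch under assumptions: the tile sizes (8x8 vs 16x16) and function names are illustrative, not the actual IREE benchmark constants.

```go
package main

import "fmt"

func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

func lcm(a, b int) int { return a / gcd(a, b) * b }

// alignedElements rounds the element count derived from the working-set size
// (in bytes) down to a multiple of every configuration's tile size, so avx2
// and avx512 runs pack exactly the same number of elements.
func alignedElements(workingSetBytes, elemBytes int, tiles []int) int {
	n := workingSetBytes / elemBytes
	common := 1
	for _, t := range tiles {
		common = lcm(common, t)
	}
	return n / common * common
}

func main() {
	// 1572864 bytes of f32, aligned to both an 8x8 and a 16x16 inner tile.
	n := alignedElements(1572864, 4, []int{8 * 8, 16 * 16})
	fmt.Println(n) // same element count fed to both configurations
}
```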
|
2025-04-01T04:35:03.053794
| 2021-12-09T02:58:09
|
1075086332
|
{
"authors": [
"cuisongliu",
"rambohe-ch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9491",
"repo": "openyurtio/openyurt",
"url": "https://github.com/openyurtio/openyurt/pull/667"
}
|
gharchive/pull-request
|
ci(master): doc coredns min version tips
What type of PR is this?
Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:
/kind bug
/kind documentation
/kind enhancement
/kind good-first-issue
/kind feature
/kind question
/kind design
/sig ai
/sig iot
/sig network
/sig storage
What this PR does / why we need it:
coredns min version tips
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
other Note
/lgtm
/approve
|
2025-04-01T04:35:03.075010
| 2021-12-27T12:18:17
|
1089205500
|
{
"authors": [
"adamzhoul",
"rambohe-ch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9492",
"repo": "openyurtio/openyurt",
"url": "https://github.com/openyurtio/openyurt/pull/697"
}
|
gharchive/pull-request
|
remove k8s.io/kubernetes dependency from yurtctl join/reset
What type of PR is this?
Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespace from that line:
/kind bug
/kind documentation
/kind enhancement
/kind good-first-issue
/kind feature
/kind question
/kind design
/sig ai
/sig iot
/sig network
/sig storage
/kind enhancement
What this PR does / why we need it:
Remove k8s.io/kubernetes dependency from yurtctl join and yurtctl reset.
1. feature update points:
In order to reduce the complexity of the yurtctl join command and to integrate with sealer, the join-ControlPlane-node feature is removed, so yurtctl join can only join cloud or edge worker nodes; end users should use yurtctl init to install the ControlPlane through sealer.
yurtctl reset can only reset worker nodes that were joined by the yurtctl join command. If an end user uses yurtctl reset to reset other kinds of nodes, the result may be unknown.
2. yurtctl join command
implementation update points:
dependency on kubeadm copied from k8s.io/kubernetes/cmd/kubeadm to github.com/openyurtio/openyurt/pkg/yurtctl/kubernetes. so KubeletConfiguration has been left.
command line parameter:
In order to reduce the complexity of input parameter, kubeadmapi.JoinConfiguration and kubeadmapi.InitConfiguration have been removed. only command line parameters are supported in yurtctl join.
add YurtJoinData interface.
apiServerAddress and token parameter for yurtctl join command
organizations parameter is added for adding customized Organizations info into yurthub client certificate.
node-labels parameter is added for adding customized labels for worker node.
prepare phase(pkg/yurtctl/cmd/join/phases/prepage.go):
The kubelet service conf file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) is the same for cloud and edge worker nodes, so EdgeKubeletUnitConfig and CloudKubeletUnitConfig are merged into KubeletUnitConfig.
add /etc/kubernetes/kubelet.conf file preparation.
add /etc/kubernetes/pki/ca.crt file preparation.
preflight phase(pkg/yurtctl/cmd/join/phases/preflight.go)
add /etc/kubernetes/kubelet.conf and /etc/kubernetes/pki/ca.crt file existing check
add pause and yurthub image pull check
joinNode phase(pkg/yurtctl/cmd/join/phases/joinnode.go)
merge joinCloudNode/joindEdgeNode into joinNode because the join routine is the same except the WorkingMode parameter of yurthub.
openyurt.io/is-edge-worker label is added as kubelet command line parameter, so after kubelet startup, only CRISocket annotation is need to patch to node.
because kubelet uses HTTP to connect to yurthub, rotate-certificates=false is added to the kubelet command line parameters.
use templates.SubsituteTemplate to populate the yurthub yaml, so the format of fields in YurthubTemplate is modified, for example from __workingMode__ to {{.workingMode}}
postCheck phase(pkg/yurtctl/cmd/join/phases/postcheck.go)
remove nodeLabel patch.
3. yurtctl reset command
add resetData interface
add preflight phase
separate cleanfile phase into cleanupnode and cleanyurtfile phases.
Which issue(s) this PR fixes:
Fixes #671
Special notes for your reviewer:
Does this PR introduce a user-facing change?
other Note
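The placeholder change described above (from `__workingMode__`-style markers to `{{.workingMode}}` fields) follows Go's standard text/template syntax. Here is a minimal sketch; the template string and value names are illustrative stand-ins, not the actual yurthub manifest.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderYurthub populates a (hypothetical) yurthub manifest fragment using
// text/template fields instead of the old __placeholder__ substitution style.
func renderYurthub(values map[string]string) string {
	const yurthubTemplate = "command: yurthub --working-mode={{.workingMode}} --server-addr={{.serverAddr}}"
	tmpl := template.Must(template.New("yurthub").Parse(yurthubTemplate))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, values); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	out := renderYurthub(map[string]string{
		"workingMode": "edge",
		"serverAddr":  "https://1.2.3.4:6443",
	})
	fmt.Println(out)
}
```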
@adamzhoul @Peeknut @zyjhtangtang
@neo502721
hi @rambohe-ch
I see this is a big and important update; it may take us a while to fully understand.
Could you give a simple description of what you did, to help us understand? Something like:
copy k8s code to dir yurtctl/kubernetes
merge joinCloudNode/joinEdgeNode into one (I see you simplified the implementation; could you give some introduction?)
...
PS: could some of the node changes be implemented in node-servant, to keep updates to kubelet/yurthub the same?
Many thanks.
@adamzhoul Thanks for your suggestions. I have updated the pull request messages.
/lgtm
|
2025-04-01T04:35:03.128891
| 2019-04-25T17:30:05
|
437314781
|
{
"authors": [
"abesto",
"adriancole"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9493",
"repo": "openzipkin-contrib/apache-release-verification",
"url": "https://github.com/openzipkin-contrib/apache-release-verification/pull/17"
}
|
gharchive/pull-request
|
Add --github-reponame-template, generalize optional placeholders
Watch out, this is on top of #16, you probably only want to look at the last commit if reviewing before that's merged.
This allows verifying Apache Zipkin Layout Factory 0.0.5 currently up for vote as follows:
python src/main.py --project zipkin --module zipkin-layout-factory \
--version 0.0.5 --gpg-key 50D90C2C \
--git-hash 23dbddb426b4113c4b8633808b9ff0df3454e201 --repo dev \
--zipname-template 'apache-{module}{dash_incubating}-{version}-source-release' \
--github-reponame-template '{incubator_dash}{module}.git'
PS. Depending on the outcome of the discussion around naming the SVN path, this might be the trigger for shipping some template presets. I'm thinking it'd be nice if the script could figure out on its own which templates to use for a project. We have a bootstrapping problem fetching the repo though, which means it'd need to be "out of band", so either shipped with the script, or fetched from a known location managed separately from the script. Or otherwise it could "just" brute-force its way through all template presets, though that feels dirty and hacky and moderately confusion-inducing.
One way would be to remove zipkin- unless that results in an empty string.
Hm, that generalizes to "try to (add or) remove $PROJECT- to/from $MODULE". Sounds reasonable. It's still trial-and-error, which I'd like to avoid if possible. Let's not make a decision just yet, let's see what we learn from future migrations. I might also start looking at "reviewing" releases of other projects to gauge how much variability there is in the wild.
|
2025-04-01T04:35:03.130348
| 2017-11-23T14:19:29
|
276391845
|
{
"authors": [
"adriancole",
"nicmunroe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9494",
"repo": "openzipkin/brave",
"url": "https://github.com/openzipkin/brave/pull/539"
}
|
gharchive/pull-request
|
Fixes some state bugs in our servlet filter thanks Nic Munroe
@nicmunroe pointed out some problems in our filter and how to test for
them https://github.com/Nike-Inc/wingtips/pull/49#issuecomment-336529316
Looks good!
|
2025-04-01T04:35:03.132944
| 2022-05-24T11:36:54
|
1246409339
|
{
"authors": [
"mary-dcouto",
"qrkourier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9495",
"repo": "openziti/ziti-tunnel-sdk-c",
"url": "https://github.com/openziti/ziti-tunnel-sdk-c/pull/362"
}
|
gharchive/pull-request
|
Disable intercepts in host only mode, supports intercepts only mode
This PR contains the following changes
Disable intercepts while running the tunnel in host only mode.
New command run_intercepts is added that supports intercepts only mode
Run certain Windows scripts only if the user is running the tunnel with admin privileges
@mary-dcouto We need to eliminate two remaining dependencies when run-host mode is invoked:
ZET still requires the device /dev/net/tun is present
ZET still requires elevated privileges on that device
These need to be eliminated because ZET will never use them at all when using run-host mode, and they prevent deploying ZET in a container without special devices and without elevated privileges. This is one of the main goals of the feature, to run in a restricted container with bridge networking where neither of those two requirements can be satisfied.
|
2025-04-01T04:35:03.135306
| 2024-06-14T15:39:52
|
2353633908
|
{
"authors": [
"dovholuknf",
"michaelquigley",
"qrkourier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9496",
"repo": "openziti/zrok",
"url": "https://github.com/openziti/zrok/issues/653"
}
|
gharchive/issue
|
docker fails with configuration version '4', your configuration is version '3'
see: https://openziti.discourse.group/t/zrok-self-hosted-zrok-controller-doesnt-run/2721/2
With the latest zrok, the config version was moved to 4, but the config file shipped with Docker is still version 3.
@qrkourier Can you take a look at this for v0.4.32?
In the meantime, affected users can set the environment variable ZROK_CTRL_CONFIG_VERSION=4 to override the version of the file contained in the config.
Is bumping the spec version sufficient for this simple controller config?
https://github.com/openziti/zrok/blob/main/docker/compose/zrok-instance/zrok-controller-config.yml.envsubst
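The workaround mentioned above might look like this in a compose file. This is a hypothetical excerpt (the service name is an assumption); the ZROK_CTRL_CONFIG_VERSION variable itself is the one named in the thread.

```yaml
# Hypothetical excerpt of a zrok-instance compose file: force the controller
# to accept the shipped config until its spec version is bumped to 4.
services:
  zrok-controller:
    environment:
      ZROK_CTRL_CONFIG_VERSION: "4"
```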
|
2025-04-01T04:35:03.145003
| 2019-10-12T21:45:31
|
506241290
|
{
"authors": [
"ksatirli"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9497",
"repo": "operatehappy/terraform-aws-s3-buckets",
"url": "https://github.com/operatehappy/terraform-aws-s3-buckets/pull/3"
}
|
gharchive/pull-request
|
Clean up
This PR cleans up the development tooling
@kibertoad can you check this when you have some time?
|
2025-04-01T04:35:03.148995
| 2024-09-16T21:05:43
|
2529499251
|
{
"authors": [
"LalatenduMohanty",
"everettraven",
"joelanford"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9498",
"repo": "operator-framework/operator-controller",
"url": "https://github.com/operator-framework/operator-controller/pull/1276"
}
|
gharchive/pull-request
|
✨ use controller-runtime Terminal error instead of our custom Unrecoverable error
Fixes #1271
Description
Reviewer Checklist
[ ] API Go Documentation
[ ] Tests: Unit Tests (and E2E Tests, if appropriate)
[ ] Comprehensive Commit Messages
[ ] Links to related GitHub Issue(s)
This makes sense, I was not aware of func TerminalError.
I don't think it needs to block this PR since it could be done in a follow-up, but it seems like we never made use of the unrecoverable error in the ClusterExtensionReconciler and we should probably update that.
@everettraven the beauty of this error type is that controller-runtime type checks for it after we return from Reconcile and does the same logic that we did (but with a few extras like incrementing a different metric)
https://github.com/kubernetes-sigs/controller-runtime/blob/2eb879f25c4829825e6f2511cda035de41ec7030/pkg/internal/controller/controller.go#L306-L310
|
2025-04-01T04:35:03.154898
| 2023-07-27T07:36:07
|
1823793661
|
{
"authors": [
"24sama",
"kevinrizza"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9499",
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/issues/3002"
}
|
gharchive/issue
|
Why olm allows the same packagemanifest objects in the same namespace
Bug Report
What did you do?
Two packagemanifest objects with the same name are created in the same namespace.
What did you expect to see?
Usually, we use client-go to Get() a packagemanifest (treating the packagemanifest as a normal CR), but this bug (or feature? hhh) makes the Get() function return a random response.
What did you see instead? Under which circumstances?
kubectl get packagemanifest -n test
test etcd catalogsource 1m
test etcd catalogsource 2h
kubectl get packagemanifest -n test etcd
test etcd catalogsource 1m
kubectl get packagemanifest -n test etcd
test etcd catalogsource 2h
or use Get() in the project, both return a random response.
Environment
operator-lifecycle-manager version:
Kubernetes version information:
Kubernetes cluster kind:
Possible Solution
Additional context
Add any other context about the problem here.
This is definitely intended behavior. The packagemanifest API isn't backed by a CRD, it's an aggregated API service. If you're querying for these, you need to also get the catalog source name + namespace for a given packagemanifest -- package name uniqueness is enforced in the context of a single catalog.
Thanks @kevinrizza , if I want to query one packagemanifest by Get() or others, is there any suggestion? Using list() with labelSelector ?
|
2025-04-01T04:35:03.159413
| 2020-06-02T18:28:02
|
629421016
|
{
"authors": [
"ecordell",
"exdx",
"kevinrizza"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9500",
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/1564"
}
|
gharchive/pull-request
|
docs: add design docs for including additional objects in bundles
Description of the change:
Docs for including new objects in the bundle.
PodDisruptionBudget (policy/v1beta1)
PriorityClass (scheduling.k8s.io/v1alpha,v1beta1,v1)
VerticalPodAutoScaler
Docs were written in a generic way to target both upstream and downstream OpenShift audiences. The underlying assumption is that we will allow users to include these objects in the bundle with no limitations on the actual content of these objects.
Motivation for the change:
Reviewer Checklist
[ ] Implementation matches the proposed design, or proposal is updated to match implementation
[ ] Sufficient unit test coverage
[ ] Sufficient end-to-end test coverage
[ ] Docs updated or added to /docs
[ ] Commit messages sensible and descriptive
As this is doc-only we should be OK to merge without CI pending approval.
/approve
/lgtm
manually merging as this is doc-only.
|
2025-04-01T04:35:03.171260
| 2020-03-26T09:33:15
|
588277905
|
{
"authors": [
"dmvolod",
"joelanford",
"pzghost",
"surajssd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9501",
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/issues/2725"
}
|
gharchive/issue
|
operator-sdk fails to add an api on brand new project
Bug Report
What did you do?
I was following the Quick Start guide to create a project, and when adding an api it fails with the error Hit an unsupported type invalid type for invalid type, from ./pkg/apis/app/v1alpha1.AppService.
$ operator-sdk new app-operator --repo github.com/example-inc/app-operator
INFO[0000] Creating new Go operator 'app-operator'.
INFO[0000] Created go.mod
INFO[0000] Created tools.go
INFO[0000] Created cmd/manager/main.go
INFO[0000] Created build/Dockerfile
INFO[0000] Created build/bin/entrypoint
INFO[0000] Created build/bin/user_setup
INFO[0000] Created deploy/service_account.yaml
INFO[0000] Created deploy/role.yaml
INFO[0000] Created deploy/role_binding.yaml
INFO[0000] Created deploy/operator.yaml
INFO[0000] Created pkg/apis/apis.go
INFO[0000] Created pkg/controller/controller.go
INFO[0000] Created version/version.go
INFO[0000] Created .gitignore
INFO[0000] Validating project
INFO[0004] Project validation successful.
INFO[0004] Project creation complete.
$ cd app-operator
/tmp/app-operator
$ operator-sdk add api --api-version=app.example.com/v1alpha1 --kind=AppService
INFO[0000] Generating api version app.example.com/v1alpha1 for kind AppService.
INFO[0000] Created pkg/apis/app/group.go
INFO[0000] Created pkg/apis/app/v1alpha1/appservice_types.go
INFO[0000] Created pkg/apis/addtoscheme_app_v1alpha1.go
INFO[0000] Created pkg/apis/app/v1alpha1/register.go
INFO[0000] Created pkg/apis/app/v1alpha1/doc.go
INFO[0000] Created deploy/crds/app.example.com_v1alpha1_appservice_cr.yaml
INFO[0000] Running deepcopy code-generation for Custom Resource group versions: [app:[v1alpha1], ]
F0326 14:54:04.512973 3863302 deepcopy.go:885] Hit an unsupported type invalid type for invalid type, from ./pkg/apis/app/v1alpha1.AppService
What did you see instead? Under which circumstances?
The same error shown above:
F0326 14:54:04.512973 3863302 deepcopy.go:885] Hit an unsupported type invalid type for invalid type, from ./pkg/apis/app/v1alpha1.AppService
Environment
operator-sdk version:
operator-sdk version: "v0.16.0-dirty", commit: "55f1446c5f472e7d8e308dcdf36d0d7fc44fc4fd", go version: "go1.13.8 linux/amd64"
55f1446c5f472e7d8e308dcdf36d0d7fc44fc4fd
go version:
go version go1.14.1 linux/amd64
Kubernetes version information:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes cluster kind:
minikube
Are you writing your operator in ansible, helm, or go?
golang
Maybe operator-sdk can't get your GOROOT; you should set it explicitly.
e.g.
[pzghost@localhost app-operator]$ echo $GOROOT
[pzghost@localhost app-operator]$ operator-sdk add api --api-version=app.example.com/v1alpha1 --kind=AppService
INFO[0000] Generating api version app.example.com/v1alpha1 for kind AppService.
INFO[0000] Created pkg/apis/app/group.go
INFO[0000] Created pkg/apis/app/v1alpha1/appservice_types.go
INFO[0000] Created pkg/apis/addtoscheme_app_v1alpha1.go
INFO[0000] Created pkg/apis/app/v1alpha1/register.go
INFO[0000] Created pkg/apis/app/v1alpha1/doc.go
INFO[0000] Created deploy/crds/app.example.com_v1alpha1_appservice_cr.yaml
INFO[0000] Running deepcopy code-generation for Custom Resource group versions: [app:[v1alpha1], ]
F0326 17:43:26.216455 23113 deepcopy.go:885] Hit an unsupported type invalid type for invalid type, from app-operator/pkg/apis/app/v1alpha1.AppService
[pzghost@localhost app-operator]$ rm -rf pkg/apis/
[pzghost@localhost app-operator]$ export GOROOT=/usr/local/go
[pzghost@localhost app-operator]$ operator-sdk add api --api-version=app.example.com/v1alpha1 --kind=AppService
INFO[0000] Generating api version app.example.com/v1alpha1 for kind AppService.
INFO[0000] Created pkg/apis/app/group.go
INFO[0000] Created pkg/apis/app/v1alpha1/appservice_types.go
INFO[0000] Created pkg/apis/addtoscheme_app_v1alpha1.go
INFO[0000] Created pkg/apis/app/v1alpha1/register.go
INFO[0000] Created pkg/apis/app/v1alpha1/doc.go
INFO[0000] Created deploy/crds/app.example.com_v1alpha1_appservice_cr.yaml
INFO[0000] RBAC rules in deploy/role.yaml already up to date for the resource (app.example.com/v1alpha1, AppService)
INFO[0000] Running deepcopy code-generation for Custom Resource group versions: [app:[v1alpha1], ]
INFO[0008] Code-generation complete.
INFO[0008] Running CRD generator.
INFO[0008] CRD generation complete.
INFO[0008] API generation complete.
That is my GOROOT as well:
$ go env GOROOT
/usr/local/go
If that is the default, why do I have to set it again?
This is a known issue. See https://github.com/operator-framework/operator-sdk/issues/1854#issuecomment-525132306
If setting GOROOT doesn't resolve this, we can re-open.
@joelanford I can see that PR #2754 introduced setting GOROOT for k8s code generation. But it looks like we should set this env inside the operator-sdk code for all `gen` and `api add` commands.
What do you think about implementing this? Is it useful or not? I can take care of the implementation.
|
2025-04-01T04:35:03.175837
| 2019-12-19T07:29:00
|
540131618
|
{
"authors": [
"joelanford",
"t-matsuo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9502",
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/pull/2346"
}
|
gharchive/pull-request
|
add checking GOROOT environment variable
If there is no GOROOT environment variable, 'operator-sdk add api'
fails.
see https://github.com/operator-framework/operator-sdk/issues/1854#issuecomment-525132306
Description of the change:
Motivation for the change:
Technically, GOROOT only has to be explicitly set if the user's GOROOT is different than the GOROOT used to build the binary.
However, given that different maintainers create the release artifacts and that they have different environments themselves, there's no guarantee that the build GOROOT would be consistent release to release. There's also no way to determine what the build GOROOT was at runtime. So it seems to me that setting GOROOT is something most users who are not building from source are probably having to do.
There may be a way to solve this in the SDK binary itself, so that users do not need to worry about this at all. Something like the following is what I've been thinking of adding in the code generation code where this problem arises:
out, err := exec.Command("go", "env", "GOROOT").Output()
if err != nil {
// handle err
}
// Trim the trailing newline from `go env` output before setting the variable.
if err := os.Setenv("GOROOT", strings.TrimSpace(string(out))); err != nil {
// handle err
}
I think solving it in the SDK binary itself is better than checking GOROOT too, so I will close this PR.
Thanks.
@t-matsuo Would you be interested in working on a PR to solve it in the SDK binary?
@joelanford I'm interested in it, but I am a beginner with operator-sdk. I would like someone else to create the patch.
|
2025-04-01T04:35:03.177479
| 2020-03-30T12:00:53
|
590234407
|
{
"authors": [
"camilamacedo86"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9503",
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/pull/2743"
}
|
gharchive/pull-request
|
fix marker issue by testing just the doc folder
Description of the change:
Just to unblock the current PRs; it will not solve the issue in #2741.
Motivation for the change:
Related to #2741
I will move forward here as agreed for now in order to fix the broken CI for master.
However, we can change it in the future as well.
|
2025-04-01T04:35:03.184491
| 2020-07-07T20:09:47
|
652602621
|
{
"authors": [
"camilamacedo86",
"jmccormick2001"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9504",
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/pull/3352"
}
|
gharchive/pull-request
|
Scorecard kuttl image
Description of the change:
this PR adds the scorecard-test-kuttl image to the Travis configuration. It will deploy the scorecard-test-kuttl (amd64) image to quay.io. Other architectures will require kuttl base images that support more than just amd64, and will be added in follow-on work if required.
Motivation for the change:
a new scorecard-test-kuttl image is made available and meant to be released.
Checklist
If the pull request includes user-facing changes, extra documentation is required:
[ ] Add a new changelog fragment in changelog/fragments (see changelog/fragments/00-template.yaml)
[ ] Add or update relevant sections of the docs website in website/content/en/docs
https://travis-ci.com/github/jmccormick2001/operator-sdk/builds/174678026
this is the all green from the Travis CI build on the forked branch used for this PR.
Hi @jmccormick2001,
Just a few nits.
It looks like the other arch types are missing. See:
This image is useful to users, right? So shouldn't we have a changelog fragment and docs to let them know how and when to use it?
Hi @jmccormick2001,
Just a few nits.
It looks like the other arch types are missing. See:
This image is useful to users, right? So shouldn't we have a changelog fragment and docs to let them know how and when to use it?
The other architectures are not available at the moment because the kuttl image currently only supports amd64. The user docs were already merged. Since this was a new image I didn't think a changelog fragment was necessary given the new user-facing documentation, but I could add one if you think it is still needed.
https://github.com/operator-framework/operator-sdk/blob/master/changelog/fragments/3278-add-scorecard-kuttl-image.yaml
I had added that changelog; it appears in PR #3278.
|
2025-04-01T04:35:03.214909
| 2023-04-07T10:06:02
|
1658628408
|
{
"authors": [
"AdSchellevis",
"GottemHams",
"fichtner"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9505",
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/6481"
}
|
gharchive/issue
|
New IPSec configuration interface: XAuth PAM (group) constraints
Important notices
[X] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[X] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Is your feature request related to a problem? Please describe.
Using the new IPSec VPN configuration interface you can pretty much configure VPNs in any way you want, including support for mobile users. The only thing missing is a way to enforce group membership when using XAuth PAM. This option can still be found under Mobile Client Settings, but enabling that also means OPNsense will insert quite insecure phase configuration in swanctl.conf (probably a very conscious decision to remain compatible with many mobile clients). Also the way the new interface is set up might make people think everything should be configured there to begin with.
Describe alternatives you considered
See PR #6480.
@GottemHams https://github.com/opnsense/core/commit/621d1b015bc42f0e5e7fd0e1db30ed992d0ad143 should do the trick.
The now-separate Extended Authentication (Xauth) subsection doesn't have its own save/apply button. :D Other than that it seems to work fine, I also don't see additional phase config in strongswan.conf anymore.
How fast do things end up in an actual release, by the way?
I think this will be in 23.1.6. If we have a ticket and user confirmation, that does speed things up considerably.
|
2025-04-01T04:35:03.229902
| 2023-05-19T10:44:02
|
1717037909
|
{
"authors": [
"belerovon",
"doktornotor"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9506",
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/6572"
}
|
gharchive/issue
|
Strongswan curl fetcher
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug
I have the following situation:
Setting up an IPsec tunnel with an X.509 certificate
IPsec is working and up, but in the logs I can see the CRL and OCSP checks are failing due to "no capable fetcher found"
curl to the OCSP and CRL endpoints from my OPNsense works correctly
Risk: this means I cannot revoke certificates, which is in my view a security issue.
What I have also seen from swanctl --stats or ipsec statusall is that the following modules are loaded:
loaded plugins: charon aes des blowfish rc2 sha2 sha1 md4 md5 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs12 pgp dnskey sshkey pem openssl pkcs8 fips-prf curve25519 xcbc cmac hmac kdf gcm drbg attr kernel-pfkey kernel-pfroute resolve socket-default stroke vici updown eap-identity eap-md5 eap-mschapv2 eap-radius eap-tls eap-ttls eap-peap xauth-generic xauth-eap xauth-pam whitelist addrblock counters
To me, it seems the OPNsense strongSwan package isn't built with strongswan-mod-curl.
I think this would be easy to fix and would increase security.
To Reproduce
Using a PKI (e.g. openssl), sign an X.509 user certificate with a CRL and/or OCSP
Import the CA into the trust store of OPNsense so the certificates can be checked and trusted
VPN > IPsec > Tunnel Settings: add an IPsec VPN with the Mutual RSA authentication method and select the imported CA
Add a Phase 2
Open a tail -f in a terminal: tail -f /var/log/ipsec/latest.log | grep "con"
Connect to IPsec
In the logs you will see the connection come up, but with the hint:
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="293"] 05[LIB] <con4|52> unable to fetch from http://ocsp.xxxxxx.tld:8080, no capable fetcher found
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="294"] 05[CFG] <con4|52> ocsp request to http://ocsp.xxxxxx.tld:8080 failed
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="295"] 05[CFG] <con4|52> ocsp check failed, fallback to crl
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="296"] 05[CFG] <con4|52> fetching crl from 'http://crl.xxxxxx.tld/intermediate.crl.pem' ...
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="297"] 05[LIB] <con4|52> unable to fetch from http://crl.xxxxxx.tld/intermediate.crl.pem, no capable fetcher found
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="298"] 05[CFG] <con4|52> crl fetching failed
Expected behavior
strongSwan should be able to use curl to fetch the OCSP response or CRL.
strongSwan on a Linux system has strongswan-mod-curl installed, and there this seems to work correctly.
Relevant log files
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="293"] 05[LIB] <con4|52> unable to fetch from http://ocsp.xxxxxx.tld:8080, no capable fetcher found
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="294"] 05[CFG] <con4|52> ocsp request to http://ocsp.xxxxxx.tld:8080 failed
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="295"] 05[CFG] <con4|52> ocsp check failed, fallback to crl
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="296"] 05[CFG] <con4|52> fetching crl from 'http://crl.xxxxxx.tld/intermediate.crl.pem' ...
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="297"] 05[LIB] <con4|52> unable to fetch from http://crl.xxxxxx.tld/intermediate.crl.pem, no capable fetcher found
<30>1 2023-05-19T12:35:04+02:00 zh-fw.xxxxxx.tld charon 17843 - [meta sequenceId="298"] 05[CFG] <con4|52> crl fetching failed
Environment
OPNsense 23.1.7_3 (amd64).
strongswan | 5.9.10_1
Hey, it would be nice to hear from you …
This does not depend on #6838 at all. strongswan simply needs to be compiled with CURL support enabled. It grabs the OCSP/CRL location from certificate just fine, but cannot download it.
|
2025-04-01T04:35:03.233381
| 2023-06-20T23:13:28
|
1766331935
|
{
"authors": [
"RayllanSouza",
"g-a-c"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9507",
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/6627"
}
|
gharchive/issue
|
otp seed working on multiple devices
Important notices
Our forum is located at https://forum.opnsense.org , please consider joining discussions there instead of using GitHub for these matters.
Before you ask a new question, we ask you kindly to acknowledge the following:
[X] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[X] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Hey guys, I'm implementing MFA on OPNsense, but I realized that the seed is not invalidated when activating my device. Is there any way I can make the seed invalid after activation?
Could you elaborate more? Which sets of instructions are you following? What steps have you taken?
TOTP seeds work on multiple devices by design, the seeds are not "activated" and "deactivated", it's just a shared key that both sides of the transaction need to know about, and if both have the same seed and time, then they generate the same "random" code.
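The shared-seed behaviour described above can be sketched directly from RFC 4226 (HOTP) and RFC 6238 (TOTP) — a hedged illustration of the algorithm, not OPNsense's implementation:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/binary"
	"fmt"
)

// hotp implements RFC 4226: HMAC-SHA1 over the big-endian counter,
// followed by dynamic truncation to the requested number of digits.
func hotp(secret []byte, counter uint64, digits int) int {
	var msg [8]byte
	binary.BigEndian.PutUint64(msg[:], counter)
	mac := hmac.New(sha1.New, secret)
	mac.Write(msg[:])
	sum := mac.Sum(nil)
	// Dynamic truncation (RFC 4226 §5.3): low nibble of the last byte
	// selects a 4-byte window; mask the sign bit.
	offset := sum[len(sum)-1] & 0x0f
	code := binary.BigEndian.Uint32(sum[offset:offset+4]) & 0x7fffffff
	mod := 1
	for i := 0; i < digits; i++ {
		mod *= 10
	}
	return int(code) % mod
}

// totp derives the HOTP counter from a Unix timestamp using the
// default 30-second time step (RFC 6238).
func totp(secret []byte, unixTime int64, digits int) int {
	return hotp(secret, uint64(unixTime/30), digits)
}

func main() {
	seed := []byte("12345678901234567890") // RFC 6238 test seed
	// Two "devices" that share the seed and the clock always agree:
	fmt.Println(totp(seed, 59, 8), totp(seed, 59, 8))
}
```

Because the code is a pure function of (seed, time), there is nothing for the server to "deactivate" per device — which is why restricting who holds the seed is the only control.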
|