1036126763
Receipts can be invalid if issuing profile does not set their address

Who is your user? A financial contributor.

What are they trying to achieve? They need a valid receipt for accounting purposes.

How are they currently doing this? They get a receipt via email or download it from the transaction details, but if the issuing profile (host or independent collective) has not set its address, that area is blank on the receipt. So right now people are making do with such invalid receipts. I have seen multiple examples of this recently, such as the one below.

P4 - not super frequent or blocking essential functions, but we should try to guide people to set their address more clearly.

@Betree is this query correct if I want to email host admins without addresses set which have had at least one order (i.e. needed to generate a receipt)? https://opencollective-metabase.herokuapp.com/question/617-hosts-without-address

@alanna Sorry for the late reply. It's not:
- We're only looking at orders made directly to the fiscal host. We should also look at hosted collectives.
- Some accounts have their addresses set in data.location.structured (we're planning on unifying everything with #3494).

@alanna let me know if I should go ahead and write a query for this.
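As a rough illustration of the corrected query logic, here is a hedged Python sketch. The table and column names (collectives, orders, host_collective_id, the address fields) are assumptions for illustration, not Open Collective's actual schema:

```python
# Hypothetical sketch: find hosts without an address set, where the host
# (or one of its hosted collectives) has received at least one order.
import psycopg2

QUERY = """
SELECT DISTINCT h.id, h.slug
FROM collectives h
JOIN collectives c ON c.host_collective_id = h.id OR c.id = h.id
JOIN orders o ON o.collective_id = c.id
WHERE h.address IS NULL
  -- some accounts store it in data.location.structured instead
  AND h.data -> 'location' -> 'structured' IS NULL;
"""

with psycopg2.connect("dbname=opencollective") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for host_id, slug in cur.fetchall():
            print(host_id, slug)
```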
gharchive/issue
2021-10-26T10:38:30
2025-04-01T06:45:14.679847
{ "authors": [ "Betree", "alanna" ], "repo": "opencollective/opencollective", "url": "https://github.com/opencollective/opencollective/issues/4870", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1113584955
[DOCS] Documentation unusable on mobile Firefox

As a user of OpenCollective, I want to be able to access the documentation on any device. Right now it is not possible for me to use the documentation site in portrait mode in the mobile Mozilla Firefox app, or on desktop Firefox in mobile preview. Toggling Enhanced Tracking Protection has no effect. This is because the menu is stuck in the open state, and all I can do is navigate around without actually seeing any content. The issue can be replicated on the official GitBook site, and I am sending a link to this issue to their support e-mail. It appears on Firefox Daylight 96.2.0 on Android and Firefox 96.0 on Linux. I have also used LambdaTest to fire up a recent mobile and desktop Firefox (92.0) on Android and Windows to make sure this is not just my machine.

How to reproduce
1. Open a current version of Firefox on a mobile phone, or enable mobile preview on desktop ("Responsive Design Mode" - Ctrl-Shift-M or the icon in the dev tools panel), and make sure you are viewing the page in portrait orientation.
2. Navigate to https://docs.opencollective.com/
3. You will briefly see the content before it is replaced by the menu, which obscures the rest of the page. Tapping, sliding, and navigating all have no effect.

Screenshots

Related issues: #2595

Thank you for documenting this issue. I've been personally facing it for quite some time now, but I was hoping that GitBook would fix it given that it must be affecting a lot of users. I'll open a support ticket on their side now.

Got an answer from GitBook today: "Thanks for reporting this. This is something we are already tracking and I have linked your report to the related topic in our internal product knowledge base. I can't promise a date for the fix, but it should be quite soon!"

Hi folks. Could you double-check? This should no longer be an issue.

This is now fixed & looking good - thanks @petros
gharchive/issue
2022-01-25T08:39:26
2025-04-01T06:45:14.686418
{ "authors": [ "Betree", "loleg", "petros" ], "repo": "opencollective/opencollective", "url": "https://github.com/opencollective/opencollective/issues/5139", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1594023449
Update TE-11.1

Changes:
- Change the topology to have traffic flow through VRFs, instead of everything happening in the DEFAULT VRF.
- Remove the non-hierarchical scenario. Make all tests happen under the same hierarchical route resolution scenario.

Pull Request Test Coverage Report for Build 4236360796: 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 57.219%. Totals: change from base Build 4235503690: 0.0%. Covered Lines: 1181. Relevant Lines: 2064. 💛 - Coveralls

TE-11.1 currently has two test cases:
- case 1: TestDirectBackupNexthopGroup
- case 2: TestIndirectBackupNexthopGroup

As per the instructions in the README, we are only mapping to case 2 with some additional steps. Should we update the test cases with the provided instructions?
gharchive/pull-request
2023-02-21T19:57:18
2025-04-01T06:45:14.695798
{ "authors": [ "coveralls", "manan-patel", "xw-g" ], "repo": "openconfig/featureprofiles", "url": "https://github.com/openconfig/featureprofiles/pull/1164", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1735503160
RT-1.12 bgp_always_compare_med

Hi, please review the new automation for RT-1.12 BGP always-compare-MED. This will resolve https://github.com/openconfig/featureprofiles/issues/1556. Thanks, Prabha

Pull Request Functional Test Report for #1693 / 583a080bbf0feb22691f55ea483eb2cf26aca842: No tests identified for validation.

Pull Request Test Coverage Report for Build 5182675142: 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 47.896%. Totals: change from base Build 5179396083: 0.0%. Covered Lines: 1320. Relevant Lines: 2756. 💛 - Coveralls

Could you elaborate on the changes from the original test plan specified in #1556? It seems that you've moved from 1 port with 2x eBGP sessions to 3 ports with 2x eBGP + 1x iBGP.

/gcbrun

/gcbrun

> Could you elaborate on the changes from the original test plan specified in #1556? It seems that you've moved from 1 port with 2x eBGP sessions to 3 ports with 2x eBGP + 1x iBGP

To verify traffic on the best route selected based on MED, we need one traffic source port, hence port3 was added. The two eBGP peers are on different ports so that traffic can be verified on the best path.
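To make the MED-based selection under test concrete, here is a minimal Python sketch of the relevant best-path step. The Route fields and values are illustrative only; real BGP selection has many more tie-breakers (weight, local-pref, AS-path length, origin, ...):

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    neighbor_as: int
    med: int

def best_by_med(routes, always_compare_med=False):
    """Pick the route with the lowest MED.

    By default, MED is only compared between routes learned from the
    same neighboring AS. With always-compare-med enabled, MED is
    compared across all neighboring ASes.
    """
    if always_compare_med:
        return min(routes, key=lambda r: r.med)
    # Without the knob, only compare MED within one neighbor AS;
    # this simplified fallback keeps the first route otherwise.
    first_as = routes[0].neighbor_as
    if all(r.neighbor_as == first_as for r in routes):
        return min(routes, key=lambda r: r.med)
    return routes[0]

routes = [Route("10.0.0.0/24", 65001, med=100),
          Route("10.0.0.0/24", 65002, med=50)]
print(best_by_med(routes, always_compare_med=True).neighbor_as)  # 65002
```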
gharchive/pull-request
2023-06-01T05:37:14
2025-04-01T06:45:14.703234
{ "authors": [ "LimeHat", "OpenConfigBot", "arulkumarsekar", "coveralls", "cprabha" ], "repo": "openconfig/featureprofiles", "url": "https://github.com/openconfig/featureprofiles/pull/1693", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
246679126
Modify the ref option

Increase the search conditions so that descriptors can be retrieved more precisely. Resolves cases where the search is not unique or the reference does not exist. cc/ @opencontainers/image-tools-maintainers. Fixes #164. Signed-off-by: zhouhao zhouhao@cn.fujitsu.com

This PR partly duplicates #89.

ping @stevvooe @coolljt0725

@coolljt0725 @stevvooe PTAL

ping @coolljt0725 @xiekeyang @stevvooe

@stevvooe @xiekeyang @coolljt0725 @vbatts PTAL

@wking @coolljt0725 @xiekeyang @stevvooe @cyphar PTAL

reping @stevvooe @wking @coolljt0725 @cyphar @xiekeyang @vbatts

I need your advice.

Bump.

ping @wking @xiekeyang @stevvooe @vbatts @cyphar

I just think org.opencontainers.image.ref.name is too long for users to type. Can we add a short form like --ref name=xx, or just keep the old way, with --ref xxx standing for --ref org.opencontainers.image.ref.name=xxx?

@coolljt0725 updated. Thanks for your advice.

With this patch, when I validated an image, the output was:

$ oci-image-tool validate busybox/
oci-image-tool: reference "latest": OK
oci-image-tool: reference "linux": OK
oci-image-tool: reference "sha256:40a114053d955a2b80ee2cf6e13410b28b59594ceee9036b41e12c42d3e16615": OK
busybox/: OK
Validation succeeded

I think that is a bit redundant and confusing for users. And when I use oci-image-tool validate --ref name=latest busybox/, it doesn't work:

oci-image-tool validate --ref name=latest busybox/
1 errors detected:
busybox/: validation failed: reference name=latest not found

So oci-image-tool validate has a different UI for --ref? It's better to keep it consistent.

I modified it to produce the following output, which I think is better:

./oci-image-tool validate ../ubuntu
oci-image-tool: reference "name=latest": OK
oci-image-tool: reference "platform.os=linux": OK
oci-image-tool: reference "digest=sha256:ab8bbdc63526cb49eba4a9dc6833bc59ae978c0d0234726939ea762a26beb396": OK
../ubuntu: OK
Validation succeeded

./oci-image-tool validate --ref name=latest ../ubuntu
oci-image-tool: reference "name=latest": OK
../ubuntu: OK
Validation succeeded

@coolljt0725 @wking @stevvooe @xiekeyang @vbatts @cyphar Any other comments?

ping @coolljt0725 @xiekeyang @stevvooe

@coolljt0725 @xiekeyang PTAL

ping @coolljt0725 @xiekeyang

I just think the output is a bit redundant if we don't specify any ref:

$ oci-image-tool validate busybox/
oci-image-tool: reference "latest": OK
oci-image-tool: reference "linux": OK
oci-image-tool: reference "sha256:40a114053d955a2b80ee2cf6e13410b28b59594ceee9036b41e12c42d3e16615": OK
busybox/: OK
Validation succeeded

All three OKs are for the same image, but this output makes users think there are three images. Can we display only one OK if we don't specify a ref, or make the output more user-friendly?

@coolljt0725 updated, PTAL.

ping @jonboulle

ping @coolljt0725 @xiekeyang

ping @opencontainers/image-tools-maintainers

@coolljt0725 @stevvooe @xiekeyang @cyphar @vbatts PTAL

ping @opencontainers/image-tools-maintainers

@opencontainers/image-tools-maintainers @vbatts @coolljt0725 @stevvooe @Mashimiao I updated this PR; it now changes the following:
- Validation validates each file only once.
- References are found more accurately through multiple --ref inputs.

Example run:

oci-image-tool validate --ref name=latest --ref platform.os=linux ubuntu.tar
oci-image-tool: reference [name=latest platform.os=linux]: OK
ubuntu.tar: OK
Validation succeeded

LGTM

ping @opencontainers/image-tools-maintainers PTAL. This is a UI change; I think it needs more LGTMs.
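For readers following along, here is a hedged Python sketch of the kind of key=value matching discussed above. The descriptor fields mirror an OCI image index, but the helper names are illustrative, not the tool's actual code:

```python
# Each --ref flag contributes one key=value filter; a manifest
# descriptor must satisfy all of them to match.
def parse_ref(flag):
    key, _, value = flag.partition("=")
    return key, value

def matches(descriptor, filters):
    for key, value in filters:
        if key == "name":
            actual = descriptor.get("annotations", {}).get(
                "org.opencontainers.image.ref.name")
        elif key == "digest":
            actual = descriptor.get("digest")
        elif key.startswith("platform."):
            actual = descriptor.get("platform", {}).get(key.split(".", 1)[1])
        else:
            return False  # unknown filter key
        if actual != value:
            return False
    return True

descriptor = {
    "digest": "sha256:ab8bbdc63526cb49eba4a9dc6833bc59ae978c0d0234726939ea762a26beb396",
    "platform": {"os": "linux"},
    "annotations": {"org.opencontainers.image.ref.name": "latest"},
}
filters = [parse_ref("name=latest"), parse_ref("platform.os=linux")]
print(matches(descriptor, filters))  # True
```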
gharchive/pull-request
2017-07-31T08:22:05
2025-04-01T06:45:14.724067
{ "authors": [ "Mashimiao", "coolljt0725", "q384566678" ], "repo": "opencontainers/image-tools", "url": "https://github.com/opencontainers/image-tools/pull/169", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
158469285
Migrate to urfave/cli

GitHub is redirecting codegangsta/cli to urfave/cli:

$ curl -sI https://github.com/codegangsta/cli | grep Location
Location: https://github.com/urfave/cli

👍
gharchive/issue
2016-06-03T22:52:48
2025-04-01T06:45:14.725322
{ "authors": [ "mrunalp", "wking" ], "repo": "opencontainers/ocitools", "url": "https://github.com/opencontainers/ocitools/issues/96", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
831568481
Allow SiamMask active tracker to run for more frames

My actions before raising this issue
- [✓] Read/searched the docs
- [✓] Searched past issues

Expected behaviour

Would like to suggest a new feature that allows the user to set the number of frames the SiamMask active tracker runs for. The user should be able to:
- choose the number of frames to run the active tracker
- run up to the next keyframe
- run to the end of the current track

Current behaviour

Currently the active tracker is fixed to running for 10 frames; to continue active tracking the user has to run the tracker again.

Context

I am annotating a long video (900 frames) to create a tracking dataset. The active tracker works alright but has to be rerun every 10 frames. I would like to be able to set the active tracker to run to the end of the video.

Your environment
- cvat/develop 86eef84
- Docker version 20.10.5, build 55c4c88

Next steps

You may join our Gitter channel for community support.

I think this exists when you use the CVAT SiamMask tracker (the "tracking frames" option). You should also be able to set SiamMask to run until the end of the video by just setting the tracking frames to a large number (larger than the number of frames in the video) and it'll run until the end of the video.

Is it possible to specify the number of frames on which the tracker has to run automatically? Do I have to specify this in the function.yaml of SiamMask or TransT? Please help!

> I think this exists when you use the cvat siammask tracker (tracking frames option). You should also be able to set siammask to run until the end of the video by just setting the tracking frames to a large number (larger than the number of frames in the video) and it'll run until the end of the video.

Where is this setting? I've looked in the siammask repo's yaml files and in the CVAT GUI, and haven't found it there.
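For what the requested behavior would look like, here is a hedged Python sketch; `tracker` and `frames` are stand-ins for CVAT's actual SiamMask plumbing, not its real API:

```python
# Run a tracker until a frame budget, the next keyframe, or the end of
# the sequence, whichever comes first (the three modes requested above).
def track(tracker, frames, start, max_frames=None, keyframes=()):
    predicted = {}
    stop = len(frames)
    if max_frames is not None:        # mode 1: fixed number of frames
        stop = min(stop, start + 1 + max_frames)
    for i in range(start + 1, stop):
        if i in keyframes:            # mode 2: stop at the next keyframe
            break
        predicted[i] = tracker.update(frames[i])
    return predicted                  # mode 3: default runs to the end
```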
gharchive/issue
2021-03-15T08:39:35
2025-04-01T06:45:14.756720
{ "authors": [ "Hiddenfire21", "LucaBallan96", "lucidBrot", "plato-ron" ], "repo": "opencv/cvat", "url": "https://github.com/opencv/cvat/issues/2949", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
437628529
Can't find a DetectionOutput layer in the topology

I'm using an NCS2 with a Raspberry Pi 3 B+. I followed the instructions from the official page to install the latest version of the OpenVINO toolkit on my Raspbian OS Stretch. The face detection example works fine with object_detection_sample_ssd. So, I downloaded facial-landmarks-35-adas-0002.bin and .xml in the same manner and tried running it with the following command:

./armv7l/Release/object_detection_sample_ssd -m facial-landmarks-35-adas-0002.xml -d MYRIAD -i ~/image.png

It gives the following output:

[ INFO ] InferenceEngine: API version ............ 1.6 Build .................. 22443
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/pi/image.png
[ INFO ] Loading plugin API version ............ 1.6 Build .................. 22443 Description ....... myriadPlugin
[ INFO ] Loading network files: facial-landmarks-35-adas-0002.xml facial-landmarks-35-adas-0002.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Can't find a DetectionOutput layer in the topology

I got the same error with the 2018 OpenVINO toolkit.

Hey, @Manjot-Singh-Randhawa! facial-landmarks-35-adas-0002 is not an object detection network, so it doesn't have any DetectionOutput layers. To use it, you have to first call a face detection model, crop the face of interest, and feed it to facial-landmarks-35-adas-0002. You can use the interactive_face_detection sample as a reference.

Hey, @snosov1! I'm sorry for asking on a closed thread. How do I use the interactive_face_detection sample on an image? I find that this sample only works on videos.

VideoCapture can read a folder of images if you specify the correct filename pattern. Please refer to the documentation: https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-videocapture

Thank you @snosov1!
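For reference, a hedged sketch of the two-stage pipeline described above, using OpenCV's dnn module with OpenVINO IR files. The detector filename, confidence threshold, and input sizes are assumptions to check against each model's documentation:

```python
import cv2

# Stage 1: face detection; stage 2: landmarks on the cropped face.
# "face-detection.xml/.bin" is a placeholder for a real detector IR.
detector = cv2.dnn.readNet("face-detection.xml", "face-detection.bin")
landmarks = cv2.dnn.readNet("facial-landmarks-35-adas-0002.xml",
                            "facial-landmarks-35-adas-0002.bin")

# VideoCapture can also iterate a folder of images via a name pattern.
cap = cv2.VideoCapture("frames/img_%03d.png")
ok, frame = cap.read()
h, w = frame.shape[:2]

detector.setInput(cv2.dnn.blobFromImage(frame, size=(300, 300)))
# SSD-style output rows: [image_id, label, conf, x_min, y_min, x_max, y_max]
for det in detector.forward().reshape(-1, 7):
    if det[2] < 0.5:
        continue
    x0, y0, x1, y1 = (det[3:7] * [w, h, w, h]).astype(int)
    face = frame[max(y0, 0):y1, max(x0, 0):x1]
    landmarks.setInput(cv2.dnn.blobFromImage(face, size=(60, 60)))
    points = landmarks.forward().reshape(-1, 2)  # normalized (x, y) pairs
```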
gharchive/issue
2019-04-26T11:27:59
2025-04-01T06:45:14.762482
{ "authors": [ "Manjot-Singh-Randhawa", "satriaadhii", "snosov1" ], "repo": "opencv/open_model_zoo", "url": "https://github.com/opencv/open_model_zoo/issues/103", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
635579322
Update ci/requirements-*.txt using a prerelease build of OpenVINO 2020.4

Don't bump the versions, just update the lists.

@Wovchena I just added a workaround to make the YOLO Python demo work with the new version of OpenVINO. Could you take a look?
gharchive/pull-request
2020-06-09T16:25:01
2025-04-01T06:45:14.763631
{ "authors": [ "IRDonch" ], "repo": "opencv/open_model_zoo", "url": "https://github.com/opencv/open_model_zoo/pull/1211", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1054566680
Add YoutuReID for person ReID

- [x] wrapper
- [x] demo
- [x] benchmark impl
- [x] benchmark results
  - [x] CPU x86_64
  - [x] CPU ARM
  - [x] GPU CUDA

cc @kaingwade @zihaomu
gharchive/pull-request
2021-11-16T07:58:23
2025-04-01T06:45:14.849795
{ "authors": [ "fengyuentau" ], "repo": "opencv/opencv_zoo", "url": "https://github.com/opencv/opencv_zoo/pull/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
536925461
XML file format for evaluation.py

@AlexanderDokuchaev what is the .xml annotation file format used for evaluating the model in evaluation.py?

@omair18 It looks like this:

<?xml version='1.0' encoding='utf-8'?>
<opencv_storage>
  <image000000>
    <path>1.jpg</path>
    <object000000>
      <type>pedestrian</type>
      <id>0</id>
      <bbox>295 14 61 175</bbox>
    </object000000>
    <object000001>
      <type>pedestrian</type>
      <id>1</id>
      <bbox>355 47 44 186</bbox>
    </object000001>
  </image000000>
</opencv_storage>

The format of a box is <bbox>X Y W H</bbox>. Don't ask why several different formats were used; it was a hard time :disappointed: And it will not be fixed because we don't use Caffe anymore.

Great. Thanks. :)
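A minimal Python sketch for reading that format with the standard library, assuming the tag layout shown above (the filename is hypothetical):

```python
import xml.etree.ElementTree as ET

root = ET.parse("annotations.xml").getroot()   # <opencv_storage>
for image in root:                             # <image000000>, ...
    path = image.findtext("path")
    for obj in image:
        if obj.tag == "path":
            continue                           # skip the non-object child
        x, y, w, h = map(int, obj.findtext("bbox").split())
        print(path, obj.findtext("type"), obj.findtext("id"), (x, y, w, h))
```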
gharchive/issue
2019-12-12T11:39:04
2025-04-01T06:45:14.852825
{ "authors": [ "AlexanderDokuchaev", "omair18" ], "repo": "opencv/training_toolbox_caffe", "url": "https://github.com/opencv/training_toolbox_caffe/issues/18", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2515850636
[Documentation] Support of Machine-to-Machine (M2M) token

We need to update the documentation according to the changes made in https://github.com/opendatadiscovery/odd-platform/pull/1646 to facilitate the machine-to-machine (M2M), a.k.a. system-to-system (S2S), communication raised in https://github.com/opendatadiscovery/odd-platform/issues/1639. There is a blog post to be used for reference: https://blog.opendatadiscovery.org/odd-update-support-for-machine-to-machine-m2m-tokens-c1e2bf71c566

Documentation has been updated:
- Features: https://docs.opendatadiscovery.org/features#machine-to-machine-m2m-tokens
- ODD Platform configuration: https://docs.opendatadiscovery.org/configuration-and-deployment/odd-platform#machine-to-machine-m2m-tokens-configuration
gharchive/issue
2024-09-10T09:03:34
2025-04-01T06:45:14.858095
{ "authors": [ "RamanDamayeu" ], "repo": "opendatadiscovery/odd-platform", "url": "https://github.com/opendatadiscovery/odd-platform/issues/1704", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2604683273
add ADR for auth CRD

Description: Add an ADR recording the decision to add an auth CRD.

How Has This Been Tested?

Merge criteria:
- [x] The commits are squashed in a cohesive manner and have meaningful messages.
- [ ] Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
- [ ] The developer has manually tested the changes and verified that the changes work.

In terms of naming, as part of RHOAIENG-10498 we are going to introduce a new API group, services.opendatahub.io, for shared services / concerns (RHOAIENG-13009), so it could be something like:

apiVersion: services.opendatahub.io/v1alpha1
name: Auth

The ADR is for "we need a new CRD in the Operator to handle auth-related work", but nothing defines what needs to or should be captured in this CRD for now, except that it will handle adminGroup. Is my understanding correct?

> In terms of naming, as part of RHOAIENG-10498 we are going to introduce a new API group, services.opendatahub.io, for shared services / concerns (RHOAIENG-13009), so it could be something like:
>
> apiVersion: services.opendatahub.io/v1alpha1
> name: Auth

This makes sense, I'll add that. Moving it under this services API will reduce friction later.

LGTM
gharchive/pull-request
2024-10-22T08:29:57
2025-04-01T06:45:14.863267
{ "authors": [ "StevenTobin", "lburgazzoli", "zdtsw" ], "repo": "opendatahub-io/architecture-decision-records", "url": "https://github.com/opendatahub-io/architecture-decision-records/pull/70", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1947281992
Update OVMS and add Caikit Custom Serving Runtime

Closes: #1940 #1911

Description: Add support for KServe to the OVMS OOTB runtime and add Caikit as an OOTB Custom Serving Runtime.

How Has This Been Tested?

Apply oc apply -k manifests/modelserving -n <namespace> and check that all the models are present in the Custom Serving Runtime page.

Test Impact: Not applicable; it's a manifest change.

Request review criteria:

Self checklist (all need to be checked):
- [X] The developer has manually tested the changes and verified that the changes work
- [X] Commits have been squashed into descriptive, self-contained units of work (e.g. 'WIP' and 'Implements feedback' style messages have been removed)
- [X] Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
- [X] The developer has added tests or explained why testing cannot be added (unit tests & storybook for related changes)

If you have UI changes:
- [ ] Included any necessary screenshots or gifs if it was a UI change.
- [ ] Included tags to the UX team if it was a UI/UX change (find relevant UX in the SMEs section).

After the PR is posted & before it merges:
- [ ] The developer has tested their solution on a cluster by using the image produced by the PR to main

wow, very nice, thank you all

Tested on #1969

/approve

FWIW, when you need to convert a JSON blob to a string, use JSON.stringify rather than manipulating it yourself. @lucferbux @DaoDaoNoCode @andrewballantyne

That's what I am doing in the PRs to add the annotation.

Ah, apologies. I saw the JSON blob in my email and didn't consider it would be for a YAML file. Never mind me!
gharchive/pull-request
2023-10-17T12:20:22
2025-04-01T06:45:14.870807
{ "authors": [ "DaoDaoNoCode", "andrewballantyne", "lucferbux", "shalberd" ], "repo": "opendatahub-io/odh-dashboard", "url": "https://github.com/opendatahub-io/odh-dashboard/pull/1972", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1137410873
TestWebhookInterceptor fails if defining ods.yaml with further tasks

After providing the fix in https://github.com/opendevstack/ods-pipeline/pull/445 we have seen that the test does work with a pipeline like:

version: 2022.2.0
branchToEnvironmentMapping:
  - branch: master
    environment: dev
environments:
  - name: dev
    stage: dev
pipeline: {}

but not with:

version: 2022.2.0
branchToEnvironmentMapping:
  - branch: master
    environment: dev
environments:
  - name: dev
    stage: dev
pipeline:
  tasks:
    - name: package-image
      taskRef:
        kind: ClusterTask
        name: ods-package-image
      workspaces:
        - name: source
          workspace: shared-workspace

In the second case, it looks like the method waitForPipelineRunToBeTriggered never reaches the desired state, reporting a "context deadline exceeded" error.

Just pushed b26b1fe <- I don't understand why it is now working on my host, so let's see if it also works in GitHub Actions... if it works I will close this issue.

Closing since https://github.com/opendevstack/ods-pipeline/pull/445/commits/b26b1fe466b66f9e31eda16314cccb941a819800 passed 👍
gharchive/issue
2022-02-14T14:47:06
2025-04-01T06:45:14.898662
{ "authors": [ "gerardcl" ], "repo": "opendevstack/ods-pipeline", "url": "https://github.com/opendevstack/ods-pipeline/issues/446", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1747401118
feat(lvol): snapshots enumeration method for lvol store

The Lvol object now has a dedicated method for listing all snapshots that reside on the pool.

bors try

bors try

borse merge

bors merge
gharchive/pull-request
2023-06-08T08:48:33
2025-04-01T06:45:14.905268
{ "authors": [ "mtzaurus" ], "repo": "openebs/mayastor", "url": "https://github.com/openebs/mayastor/pull/1403", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1818210719
chore(nexus/channel): reconfigure only the device which is involved in the DR event

bors try

bors try

Opening for review. The extended test run of "change volumes and replicas", which disrupts channels quite a bit, has been running on this change for more than 12 hours.

Another run of the extended test framework test "Change Replicas and Volume" has been running for more than 7 hours on this change.

bors try

Changes merged via #1506 to handle the situation differently.
gharchive/pull-request
2023-07-24T11:20:00
2025-04-01T06:45:14.907242
{ "authors": [ "dsharma-dc" ], "repo": "openebs/mayastor", "url": "https://github.com/openebs/mayastor/pull/1469", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
307244041
ndm should support dynamically attaching external disks to k8s nodes.

Feature Request. Example use case: let us say a Kubernetes cluster is running in the Amazon cloud. As the stateful workloads increase in number or there is an increase in data/capacity usage, ndm should provide an option to increase the storage available on the local nodes, triggering provisioning of an EBS disk and attaching it to the Kubernetes node. Now that CSI drivers are prevalent, to provision dynamic disks onto Kubernetes nodes we can use the CSI driver of the external system. Some examples of interest are:
- Cinder CSI driver
- OpenSDS storage dock

Is there a rough estimate on when this will be available?

@muenchdo this is something that I have been currently working on. It's very alpha & is in my personal GitHub, i.e. https://github.com/AmitKumarDas/storage-provisioner. Do review, and you can suggest ways to integrate with autoscalers. NDM will be used only for device discovery on the nodes.
gharchive/issue
2018-03-21T13:28:03
2025-04-01T06:45:14.910466
{ "authors": [ "AmitKumarDas", "akhilerm", "kmova", "muenchdo", "umamukkara" ], "repo": "openebs/node-disk-manager", "url": "https://github.com/openebs/node-disk-manager/issues/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
367432590
fix cstyle error in src/replication.h

Repo: openebs/istgt
Branch: replication
File: src/replication.h
Steps to compile in: README.md

PR subject should be like: fmt(cstyle): cstyle fixes in

./src/replication.h: 2: #define followed by space instead of tab
./src/replication.h: 21: #define followed by space instead of tab
./src/replication.h: 22: #define followed by space instead of tab
./src/replication.h: 23: #define followed by space instead of tab
./src/replication.h: 24: #define followed by space instead of tab
./src/replication.h: 25: #define followed by space instead of tab
./src/replication.h: 34: #define followed by space instead of tab
./src/replication.h: 36: missing space around relational operator
./src/replication.h: 36: comma or semicolon followed by non-blank
./src/replication.h: 36: #define followed by space instead of tab
./src/replication.h: 87: line > 80 characters
./src/replication.h: 140: line > 80 characters
./src/replication.h: 175: whitespace before right paren
./src/replication.h: 186: #define followed by space instead of tab
./src/replication.h: 212: #define followed by space instead of tab

Resources:
-> Contributors guide: https://github.com/openebs/openebs/blob/master/CONTRIBUTING.md

Complexity: Easy

Can I take this up?

sure @kmjayadeep go ahead
gharchive/issue
2018-10-06T07:28:58
2025-04-01T06:45:14.915550
{ "authors": [ "ashishranjan738", "kmjayadeep" ], "repo": "openebs/openebs", "url": "https://github.com/openebs/openebs/issues/2007", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
369764432
Fix golint issues in select_disk.go

The following file has lint issues: maya/cmd/maya-apiserver/spc-watcher/select_disk.go

The lint issues are:

Line 118: warning: error should be the last type when returning multiple items (golint)
Line 222: warning: error should be the last type when returning multiple items (golint)
Line 242: warning: error should be the last type when returning multiple items (golint)

Help links: Contributors guide

I am unable to fix the sign-off. Should I create another pull request?

#707 fixed this!

https://github.com/openebs/maya/pull/707 fixes this!
gharchive/issue
2018-10-13T04:02:45
2025-04-01T06:45:14.918330
{ "authors": [ "damsehgal", "satyamz" ], "repo": "openebs/openebs", "url": "https://github.com/openebs/openebs/issues/2092", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1178834652
fix: make sure attachment beans use the real version

Checklist
- [x] the [contributor license agreement][] is signed
- [x] commit message follows [commit guidelines][]
- [ ] tests are included
- [ ] screenshots are included showing significant UI changes

Description of change: Make sure the attachment beans used to get the thumbnail link use the real version rather than 0.

Hmm, CodeBuild failed a NewSearchPage test - searchWithLessACLS.

> Hmm, CodeBuild failed a NewSearchPage test - searchWithLessACLS.

An NPE caused by bean.getAttachments.
gharchive/pull-request
2022-03-24T01:31:45
2025-04-01T06:45:15.028865
{ "authors": [ "PenghaiZhang", "edalex-ian" ], "repo": "openequella/openEQUELLA", "url": "https://github.com/openequella/openEQUELLA/pull/3971", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
305984137
formatMoney broken precision (if bigger than 7)

I think there is a bug related to formatMoney and precision:

formatMoney(10, { symbol: '฿', precision: 8 });

results in ฿10.00000001.

Such a big precision is needed for Bitcoin and possibly other cryptocurrencies.

I have the same problem. If the precision is 8 (which most cryptocurrency exchanges use), it shows a weird 1 at the end.

Hello, if anyone is still experiencing this problem, I believe I have managed to resolve it. The toFixed function (of the component) uses a technique of adding a theoretically insignificant value to the end of the number (1e-8). This is done so that rounding numbers like 2.22385, for example, is done to 2.2239 and not to 2.2238, avoiding problems with financial systems. However, the added value (1e-8) becomes significant when we work with many decimal places (when more than 7 are used). Thus, the sum needs to use an even smaller value instead, in order to accommodate a larger number of decimal places (or the implementation for financial systems needs to be rethought). So, change

(Math.round((value + 1e-8) * power) / power).toFixed(precision)

to

(Math.round((value + 1e-31) * power) / power).toFixed(precision)
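To see the failure mode concretely, here is a quick Python re-creation of that rounding trick (Python's round uses banker's rounding where JS's Math.round is half-up, which does not change these particular examples):

```python
def to_fixed(value, precision, nudge):
    # Mirror of the library's trick: nudge, scale, round, unscale, format.
    power = 10 ** precision
    return f"{round((value + nudge) * power) / power:.{precision}f}"

print(to_fixed(10, 8, 1e-8))        # 10.00000001  <- the reported bug
print(to_fixed(10, 8, 1e-31))       # 10.00000000  <- with the smaller nudge
print(to_fixed(2.22385, 4, 1e-8))   # 2.2239       <- why the nudge exists
```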
gharchive/issue
2018-03-16T15:59:54
2025-04-01T06:45:15.031886
{ "authors": [ "arekstryjski", "lucasgehl3n", "piavgh" ], "repo": "openexchangerates/accounting.js", "url": "https://github.com/openexchangerates/accounting.js/issues/192", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
456420876
security fixes

Should fix #232 and #351. #232 is a little bit problematic, as the same overflow will probably happen in applications using code similar to exrmakepreview or exrmaketiled around setFrameBuffer(), for example kimageformats and vigra. Perhaps some other check could be introduced in the library itself?

Hey, I apologize, we needed to do a force push to fix up a couple of historical commits that were merged prior to cleanup. Could you rebase / cherry-pick your commits against the new master and re-push this? And thanks for trying to fix these! Once we get our history cleaned up we will start the review. I have only briefly looked, but I think there are a couple of modifications for other corner cases we should add to make the fixes more complete.

@kdt3rd, should I do that right now?

Hey, it looks like we were modifying some common places. I've added a new commit on my PR #414 that should include your fixes (although it actually moves the range checks you added to the common sanityCheck in ImfHeader), and then adds a utility function to fix the pointer math in a few places. Could you test that it still fixes the issues you were addressing? Thanks in advance.

@kdt3rd, I will be happy to test. However, I need a compilable patch on top of the openexr-2.3.0 sources. See https://github.com/kdt3rd/openexr/blob/address_part_232/OpenEXR/exrmaketiled/Image.h#L197 for example; w was removed by the fix.

Ah, sorry, this is not the 2.3.0 branch - this was against the development (master) branch. I have merged the fixes (your original ones plus a more complete version in all the tools I have done) to both master and release/2.3. If you could validate that I pulled in (or have a more general version of) your fixes, I would appreciate it, and will close this one out. Thank you again for testing and helping!

CVE-2018-18444

BEFORE:

$ valgrind -q exrmultiview left poc right AllHalfValues.exr 12.exr
==11719== Invalid write of size 8
==11719==    at 0x483D604: memset (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11719==    by 0x10C398: UnknownInlinedFun (makeMultiView.cpp:142)
==11719==    by 0x10C398: main (main.cpp:251)
==11719==  Address 0x5153c50 is 0 bytes after a block of size 16,000 alloc'd
==11719==    at 0x483750F: operator new[](unsigned long) (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11719==    by 0x10DF01: UnknownInlinedFun (ImfArray.h:277)
==11719==    by 0x10DF01: TypedImageChannel<half>::resize() (Image.h:222)
==11719==    by 0x10C4E2: UnknownInlinedFun (Image.h:162)
==11719==    by 0x10C4E2: UnknownInlinedFun (Image.cpp:100)
==11719==    by 0x10C4E2: UnknownInlinedFun (makeMultiView.cpp:141)
==11719==    by 0x10C4E2: main (main.cpp:251)
==11719==
==11719== Invalid write of size 8
==11719==    at 0x483D607: memset (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11719==    by 0x10C398: UnknownInlinedFun (makeMultiView.cpp:142)
==11719==    by 0x10C398: main (main.cpp:251)
==11719==  Address 0x5153c58 is 8 bytes after a block of size 16,000 alloc'd
==11719==    at 0x483750F: operator new[](unsigned long) (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11719==    by 0x10DF01: UnknownInlinedFun (ImfArray.h:277)
==11719==    by 0x10DF01: TypedImageChannel<half>::resize() (Image.h:222)
==11719==    by 0x10C4E2: UnknownInlinedFun (Image.h:162)
==11719==    by 0x10C4E2: UnknownInlinedFun (Image.cpp:100)
==11719==    by 0x10C4E2: UnknownInlinedFun (makeMultiView.cpp:141)
==11719==    by 0x10C4E2: main (main.cpp:251)
[..]
$

AFTER:

$ valgrind -q exrmultiview left poc right AllHalfValues.exr 12.exr
Error reading pixel data from image file "poc". Unexpected data block y coordinate.
$

CVE-2017-9111

BEFORE:

$ exrmakepreview id:000087,sig:11,src:000562+000300,op:splice,rep:2 foo
Segmentation fault (core dumped)
$

AFTER:

$ valgrind -q exrmakepreview id:000087,sig:11,src:000562+000300,op:splice,rep:2 foo
Cannot read image file "id:000087,sig:11,src:000562+000300,op:splice,rep:2". Data window [ (808464432, 808464432) - (808478000, 808478000) ] offset / size will overflow pointer calculations
$

CVE-2017-9113

BEFORE:

$ exrmakepreview id:000131,sig:11,src:000514+002831,op:splice,rep:16 foo
Segmentation fault (core dumped)
$

AFTER:

$ valgrind -q exrmakepreview id:000131,sig:11,src:000514+002831,op:splice,rep:16 foo
Cannot read image file "id:000131,sig:11,src:000514+002831,op:splice,rep:16". Data window [ (-858993460, -858993460) - (-858993430, -858993430) ] offset / size will overflow pointer calculations
$

CVE-2017-9115

BEFORE:

$ exrmakepreview id:000104,sig:11,src:001329+000334,op:splice,rep:2 foo
Segmentation fault (core dumped)
$

AFTER:

$ valgrind -q exrmakepreview id:000104,sig:11,src:001329+000334,op:splice,rep:2 foo
Cannot read image file "id:000104,sig:11,src:001329+000334,op:splice,rep:2". Data window [ (808464384, 808464384) - (808464432, 808464432) ] offset / size will overflow pointer calculations
$

From my point of view, it is fixed in openexr-2.3.0 plus 45f9912, a7eec54 and ec64836. I would not claim to be a coauthor of any of the changes; there's no line left from the original patch, so consider it rather as a hint.

Thanks for confirming. I believe I still used your fix for the image::black function, but all good - appreciate the help. Hopefully with the revived project under the ASWF, we will handle these kinds of things a bit more expediently in the future. Closing this one out for now.
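As a rough illustration of the kind of sanity check those error messages come from, here is a hedged Python sketch. The limits and names are illustrative, not OpenEXR's exact code:

```python
INT_MAX = 2**31 - 1
MAX_BYTES_PER_PIXEL = 8   # illustrative upper bound on per-pixel size

def check_data_window(x_min, y_min, x_max, y_max):
    if x_max < x_min or y_max < y_min:
        raise ValueError("invalid data window")
    # Base-pointer math uses coordinates times a stride, so huge
    # coordinates overflow even when the window itself is small.
    for v in (x_min, y_min, x_max, y_max):
        if abs(v) * MAX_BYTES_PER_PIXEL > INT_MAX:
            raise ValueError("data window offset / size will overflow "
                             "pointer calculations")
    width, height = x_max - x_min + 1, y_max - y_min + 1
    if width * height * MAX_BYTES_PER_PIXEL > INT_MAX:
        raise ValueError("data window offset / size will overflow "
                         "pointer calculations")
    return width, height

check_data_window(0, 0, 1919, 1079)   # a sane window passes
try:
    check_data_window(808464432, 808464432, 808478000, 808478000)
except ValueError as e:
    print(e)   # the malformed file's window is rejected
```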
gharchive/pull-request
2019-06-14T20:26:45
2025-04-01T06:45:15.040909
{ "authors": [ "kdt3rd", "pgajdos" ], "repo": "openexr/openexr", "url": "https://github.com/openexr/openexr/pull/401", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2036480951
Breaking Change in Proto Field Ordering

This proto field re-ordering breaks compatibility. The user and object fields get swapped, and tuple checks are just entirely misinterpreted: https://github.com/openfga/api/commit/5daf658e21c2aa392532f7ab1e0549a877f48eeb#diff-2a88655b667aad16ec564eded7b5739e88e7d8b9da8a5231008519c3d3b80bb9L28 PR here: https://github.com/openfga/api/pull/97#

It looks like assertions were similarly affected and that change was reverted. Was there some kind of safe, backwards-compatible migration that I missed? I would have assumed more people would be affected by this breaking change.

@alee792 releases v1.3.8 through v1.3.10 have been building up in preparation for a bigger feature release, v1.4.0, which we just released today. As part of this development we had to make some changes to the protobuf API to better model the changes we introduced to support a new feature we call Conditional Relationship Tuples. One of the most notable changes, with an impact pervasively throughout the whole protobuf API, was the change to TupleKey. Namely, we introduced a new protobuf message called TupleKeyWithoutCondition, and we repurposed TupleKey to contain an optional condition, because tuples can now be optionally conditioned upon something. To accommodate these changes in a way that we felt made the API more uniform, we decided to make some changes across the board that involve both field-level and message-level breaking changes. Since we had decided to make message-level breaking changes, which would impact compatibility with old clients anyway, we also decided to re-order the fields of the new message structure so that it better matched the order of our documentation.

We had intended to include these details about client compatibility in the v1.3.8 release notes, but it appears we failed to do so. I will be sure we get that retroactively updated, and I sincerely apologize for any disruption that may have caused. In the v1.4.0 release notes we issued a warning not to roll back to a release prior to v1.3.9 of OpenFGA if you've upgraded to v1.4.0, to avoid some side effects, and part of that includes the incompatibilities between older clients and servers. I will make sure we update the documentation to reflect the concern with older gRPC clients as well. Note that the OpenAPI (HTTP API) was not impacted, but gRPC clients were. We will soon be releasing some official documentation on the new Conditional Relationship Tuples feature, and as part of that documentation we will have a more detailed overview of the impact of upgrades from v1.3.7 to v1.3.8 and up to v1.4.0.

Our guidance/advice at this time would be either a) stay on v1.3.7 and use a gRPC client that references the older protobuf definition(s) until you can upgrade client application code paths to use the newer protobuf definitions introduced since the v1.3.9 release of OpenFGA, or b) upgrade your gRPC clients to the newer protobuf definitions and make the upgrade to v1.4.0. If you want to be able to use the newer feature of Conditional Relationship Tuples, then we want developers to update their clients.

I appreciate the prompt response! Breaking changes with gRPC are severe, but common, and I completely understand the desire to re-align your API. As long as we are given appropriate notification of breaking changes we can do our best to accommodate, but we were left a bit blindsided by this one.

Once a server has been upgraded to v1.4.0 or v1.3.9, is it possible to downgrade to v1.3.7? Ideally, we'd like to go back to pre-v1.3.3 to test a full migration path with a forwards/backwards-compatible client I've cobbled together. The release notes for v1.4.0 suggest that a downgrade will be compatible for all but conditional components of APIs, but I wanted to double-check, given that there seem to be schema changes throughout several v1.3.* patches.

> We will soon be releasing some official documentation on the new Conditional Relationship Tuples feature and as part of that documentation we will have a more detailed overview of the impact of upgrades from v1.3.7 to v1.3.8 and up to v1.4.0.

Do we already have this?

> a) stay on v1.3.7 and use a gRPC client that references the older protobuf definition(s) until you can upgrade client application code paths to use the newer protobuf definitions introduced since the v1.3.9 release of OpenFGA

Since the newer protobuf definitions are not compatible with OpenFGA v1.3.7, it'd be unfeasible to upgrade without causing downtime. Any hints?

@wilerson the upgrade from v1.3.7 to >= v1.4.0 is, unfortunately, a big upgrade with incompatibility on the gRPC client front. If you want a zero-downtime rolling deploy for the upgrade, then you could temporarily switch to use the HTTP API (I realize that is non-ideal, but it is an option). That's probably the quickest way to unblock the upgrade. Other ways could include temporarily deploying two instances of OpenFGA with their own backing datastores and replicating between the two, but that would be a much larger effort. Would the HTTP route work for your use case?

> Would the HTTP route work for your use case?

It'd be quite cumbersome, as we've set up a good chunk of our infrastructure around OpenFGA to use gRPC, and we would need to rewrite a few wrappers around the client to use the HTTP API.
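To illustrate why field re-ordering is wire-breaking, here is a small Python sketch; the field numbers and names are a simplified stand-in for the real TupleKey message, not its actual definition:

```python
# Protobuf identifies fields on the wire by number, not by name, so a
# message encoded with the old numbering decodes into the wrong fields
# under the new one.
OLD = {1: "object", 2: "relation", 3: "user"}   # writer's schema
NEW = {1: "user", 2: "relation", 3: "object"}   # reader's schema (swapped)

wire = {1: "document:readme", 2: "viewer", 3: "user:anne"}  # encoded w/ OLD

decoded = {NEW[num]: value for num, value in wire.items()}
print(decoded)
# {'user': 'document:readme', 'relation': 'viewer', 'object': 'user:anne'}
# user and object have silently traded places -> misinterpreted checks
```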
gharchive/issue
2023-12-11T20:34:06
2025-04-01T06:45:15.080497
{ "authors": [ "alee792", "jon-whit", "wilerson" ], "repo": "openfga/api", "url": "https://github.com/openfga/api/issues/126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1370226763
refactor: update makefile

Description

Now that we are adding other storage engines, I think it would be useful to have commands in the Makefile that help with local development of OpenFGA for these engines. Previously, to start Postgres we could rely on the docker-compose file, but I don't suppose this will be the case with MySQL. So to keep things consistent, this PR adds a start-postgres-container command to the Makefile which can be used with the migrate-postgres and run-postgres commands. I envision we will have similar commands for MySQL.

References

Review Checklist
- [ ] I have clicked on "allow edits by maintainers".
- [ ] I have added documentation for new/changed functionality in this PR or in a PR to openfga.dev [Provide a link to any relevant PRs in the references section above]
- [ ] The correct base branch is being used, if not main
- [ ] I have added tests to validate that the change in functionality is working as expected

Codecov Report: Merging #227 (b45470e) into main (8edfaca) will decrease coverage by 0.05%. The diff coverage is n/a.

@@ Coverage Diff @@
## main #227 +/- ##
==========================================
- Coverage 76.96% 76.90% -0.06%
==========================================
Files 80 80
Lines 8847 8847
==========================================
- Hits 6809 6804 -5
- Misses 1687 1690 +3
- Partials 351 353 +2

Impacted Files (Coverage Δ):
- server/errors/errors.go 80.26% (-1.32%)
- server/commands/check_utils.go 93.27% (-1.27%)
- storage/postgres/utils.go 88.65% (-0.71%)
- storage/postgres/postgres.go 65.28% (-0.63%)
- server/commands/check.go 86.55% (-0.57%)
- pkg/testutils/testutils.go 76.92% (+19.23%)

I'm happy to make them less chatty. Anything in particular?
gharchive/pull-request
2022-09-12T16:53:44
2025-04-01T06:45:15.094293
{ "authors": [ "codecov-commenter", "craigpastro" ], "repo": "openfga/openfga", "url": "https://github.com/openfga/openfga/pull/227", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
127720881
Safari embeded images not loaded Embeded images working in Safari with Lime 2.8.2, but broken in 2.8.3. Culprit is image.crossOrigin = "Anonymous"; in lime/graphics/Image.hx, see this commit: https://github.com/openfl/lime/commit/603457892c3c71e1b6a06a9cad575c2a6b67698c Linked to https://github.com/openfl/lime/pull/697
gharchive/issue
2016-01-20T16:25:49
2025-04-01T06:45:15.110948
{ "authors": [ "ibilon", "iskolbin" ], "repo": "openfl/lime", "url": "https://github.com/openfl/lime/issues/675", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
305862969
Offline mode is slow

Saving a product in offline mode is slow. It should be instant, even if work is done in the background: people want to scan / save as many products as possible, very quickly.

@teolemon I think saving the products in the local database does not take time, but uploading does. Currently, when the user saves the product we upload it, and if there is no network or any other failure occurs, we save it in offline edit. The update we can make is to save the product without uploading, and then add a scheduler which will upload the products from time to time, similar to what is done in #887. I will work on this after #1228 gets merged; if not, there will be many merge conflicts.

@jaztriumph are you working on this?

@Karljoones Currently I am not working on this.

Thank you for confirming.

Closed in #1712
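The save-locally-then-upload-later pattern being proposed looks roughly like this hedged Python sketch (the app itself is Android; names like upload and network_ok are placeholders, not its real API):

```python
import queue
import threading
import time

pending = queue.Queue()

def save_product(product, local_db):
    local_db.append(product)   # local save is instant for the user
    pending.put(product)       # network work is deferred

def uploader(upload, network_ok):
    # Background scheduler: drain the queue whenever the network allows.
    while True:
        product = pending.get()
        try:
            if network_ok():
                upload(product)
            else:
                pending.put(product)   # keep it for the next pass
                time.sleep(60)         # back off before retrying
        finally:
            pending.task_done()

threading.Thread(target=uploader, args=(print, lambda: True),
                 daemon=True).start()
```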
gharchive/issue
2018-03-16T09:50:48
2025-04-01T06:45:15.116373
{ "authors": [ "Karljoones", "huzaifaiftikhar", "jaztriumph", "teolemon" ], "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/issues/1258", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
180450856
[Suggestion] Use hosted.weblate.org as translation system

You can add your project for free to Weblate (https://hosted.weblate.org/) and get translations, and verification of translations, in many languages from contributors all over the world. Making fixes and/or adding new languages is automated by Weblate. Please consider doing it :)

We use Launchpad right now because it has a community of translators (especially from Ubuntu). Is there a chance to get translations for languages we don't have yet? I guess we could do a double sync if that is the case. translations.launchpad.net/openfoodfacts

I have very good experience with Weblate; a few times it happened that people came to my projects and just translated them into some languages. I didn't have to create a file for a language or anything; they just opened the project, added the language missing for them, did the translations and left. Weblate is generic for open-source projects, not tied to any community or project.

I've started a migration to Crowdin.

The README.md still mentions Launchpad; I reckon that is obsolete now (maybe also a2po?).
gharchive/issue
2016-10-01T10:57:53
2025-04-01T06:45:15.120316
{ "authors": [ "agilob", "mikini", "teolemon" ], "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/issues/144", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
314983094
Each time a product is searched, the number of results is shown together with an error page. The user has to tap once again to see the products list.

I think this error is long gone. Closing.
gharchive/issue
2018-04-17T09:43:13
2025-04-01T06:45:15.121422
{ "authors": [ "VaiTon", "yeldartoktasynov" ], "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/issues/1456", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
302940254
Fixed #1109: Make country clickable on Product Page

Description: Made the country clickable; the products of a country open in a recycler view.

Related issues and discussion: #1109

Screen-shots, if any

@teolemon @Karljoones Can you merge this quickly, so that we can use it to add other facets like labels, categories, etc.? I have made BrandActivity.java such that new facets can be easily added.

Merging @PrajwalM2212
gharchive/pull-request
2018-03-07T02:18:19
2025-04-01T06:45:15.123790
{ "authors": [ "PrajwalM2212", "teolemon" ], "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/pull/1140", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1247148524
Wrong or missing miniature in product addition

What
- [ ] Wrong or missing miniatures in product addition (related to #1706)
- [ ] At some point some are switched
- [ ] At some point they also go missing

Screenshot

Already fixed.

Probably not relevant by now. Feel free to reopen or refresh with up-to-date screenshots if needed.
gharchive/issue
2022-05-24T21:33:16
2025-04-01T06:45:15.129293
{ "authors": [ "M123-dev", "monsieurtanuki", "teolemon" ], "repo": "openfoodfacts/smooth-app", "url": "https://github.com/openfoodfacts/smooth-app/issues/1974", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1660792979
allow deploying to internal from PR for select users

What: allow deploying to internal from a PR for select users on a protected branch. This is to facilitate work on PRs that require signed builds to work (e.g. the deep links one).

👌 thank you Pierre!
gharchive/pull-request
2023-04-10T13:33:17
2025-04-01T06:45:15.130544
{ "authors": [ "g123k", "teolemon" ], "repo": "openfoodfacts/smooth-app", "url": "https://github.com/openfoodfacts/smooth-app/pull/3867", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1081335989
CEO-369: Move delete institution into the edit page.

Purpose: Moves the delete institution button into the edit institution page.

Related Issues: Closes CEO-369

Submission Checklist
- [x] Included Jira issue in the PR title (e.g. CEO-### <title>)
- [x] Code passes linter rules (npm run eslint/clj-kondo --lint src)

Testing: Clicking the "Delete Institution" button from inside of the edit institution page should properly delete the institution.

Screenshots

In src/js/reviewInstitution.js at lines 286, 301, and 316 you add this space a bunch. Consider a new PR to pass text into ButtonSvgIcon and use padding / margin instead of &nbsp;. Also, why are you using ButtonSvgIcon when it's already inside a button? I thought ButtonSvgIcon was meant to also look like a button.

The space has been removed in the latest commit to #1414
gharchive/pull-request
2021-12-15T18:09:36
2025-04-01T06:45:15.198732
{ "authors": [ "Oliver-BE", "sirmspencer" ], "repo": "openforis/collect-earth-online", "url": "https://github.com/openforis/collect-earth-online/pull/1416", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
681151590
🐛(core) create only required acme issuer CM

Purpose

As mentioned in openshift-acme's documentation, Let's Encrypt provides two environments: live and staging. The environment is chosen based on the issuer ConfigMap that is created. Hence, when both live and staging CM issuers are created, in particular cases openshift-acme can issue staging certificates for production environments.

Proposal
- [x] only create the required issuer CM for the given environment

Hat tip to @lunika for this suggestion. :tophat: :muscle:
gharchive/pull-request
2020-08-18T15:53:11
2025-04-01T06:45:15.201255
{ "authors": [ "jmaupetit", "lunika" ], "repo": "openfun/arnold", "url": "https://github.com/openfun/arnold/pull/534", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
953164604
Section 5.3.3 CRSum... is this an error? Where does this come from?

@ayoumans it is a mistake.

@joanma747 At least this sentence needs to be rewritten, in my opinion, as I cannot make sense of it grammatically: "Specifically, Ordered list of names of the dimensions defined in the CRSum rectangular bounding region surrounding a geometry whose set of points span the value discontinuity in an angular coordinate axis."

The CRSum mistake was introduced by https://github.com/opengeospatial/2D-Tile-Matrix-Set/commit/b0b6cbeea61291c379f5e4a918368287979c396b#diff-0add2601d8308c9591e3a5cf929e41a16c4e9c1d7916e8888e7eea9aa49c8815L124

The sentence starts with "specifically", but I have no idea what this "specifically" refers to. It talks about an "Ordered list of names", but that seems unrelated to the rest of the sentence. This paragraph and the next seem to be talking about intricacies with angular units and some kind of projection near the poles when specifying the bounding box of the TileMatrixSet, but I am not clear on why that bounding box would not be a minimum rectangular bounding region. Perhaps with version 2, which has a separate bounding box for the TileSet, this is no longer relevant, since the bounding box here is not about the data but about the definitions of the tile matrix set itself?

Thanks for spotting the mistake and for identifying when it was introduced. It is clear to me that this was a flawed editorial process. I have rolled the sentence back.
gharchive/issue
2021-07-26T18:18:28
2025-04-01T06:45:15.206731
{ "authors": [ "ayoumans", "jerstlouis", "joanma747" ], "repo": "opengeospatial/2D-Tile-Matrix-Set", "url": "https://github.com/opengeospatial/2D-Tile-Matrix-Set/issues/39", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1513657400
Which license should this project have?

I added an MIT license for the software. Is that appropriate for the building blocks register? Should the building blocks live somewhere else, separated from the code?

@doublebyte1 Why an MIT Licence? Why not a Creative Commons Licence, which may have wider acceptance on the Web?

@chris-little although CC is good for content on the web, it lacks some software-specific provisions, like what to do with the source code or patents. I am open to using any license from the Free Software Foundation or Open Source Initiative which specifies these aspects.
gharchive/issue
2022-12-29T11:45:34
2025-04-01T06:45:15.248488
{ "authors": [ "chris-little", "doublebyte1" ], "repo": "opengeospatial/bblocks", "url": "https://github.com/opengeospatial/bblocks/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
423274869
OpenAPI basics: what does "/" mean?

If in a URL, say "http://....../..../a/b/c", what does "a/b" actually mean? Naturally, I would expect that "b is a sub-resource of a", but that does not always apply. Before we start building a whole framework on this we may want to clarify it.

Dear all, my deep regrets: due to jetlag I totally forgot about the telecon. I'll duly study any minutes available. Any next telecon around the corner? -Peter

Hi @pebau, yes, as agreed we have weekly telecons at least until the hackathon.

2019-04-12 WCS.SWG telecon: The overall URI/IRI is hierarchical. We don't need this discussion right now and agree to close the issue (at least for now).
gharchive/issue
2019-03-20T14:15:06
2025-04-01T06:45:15.252888
{ "authors": [ "Schpidi", "pebau" ], "repo": "opengeospatial/ogc_api_coverages", "url": "https://github.com/opengeospatial/ogc_api_coverages/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
981417859
Modified dashboard text

I've modified the dashboard text a little. Some bigger suggestions:
- I think we could do with a panel which illustrates the model on the Explainer page. I've added an outline element for this (not sure about the syntax... sorry!). It needs a figure/animation though; I will work on this with @rt17603.
- I've added a file (FAQ.txt) with some suggested text that we could use as an FAQ page. I think it might be nice to add an FAQ tab on the left. What do you think?
- We need some hyperlinks in there. What's the syntax for that?
- I've added some text that would be better as bullet points, but I don't know how to do that.
- Can we introduce subscripts for CO2 and CH4? Again, not sure how to do it myself!

Looks good overall. As you said, a few things would need to be formatted:
- Hyperlinks
- Bullet points
- Incorporate createModelExplainer into the render() function if we want this to display
- Presumably we would also need to add more content to incorporate an image or a map?
- New FAQ page (with bullet points etc.)

For the new createModelExplainer panel - what were you thinking of displaying alongside this?
gharchive/pull-request
2021-08-27T16:42:27
2025-04-01T06:45:15.258332
{ "authors": [ "mrghg", "rt17603" ], "repo": "openghg/dashboard", "url": "https://github.com/openghg/dashboard/pull/36", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1568427956
Distinguish between "levels" of data release Is your feature request related to a problem? At the moment, in addition to versions of a data file, there can be several functional levels of data depending on the stages in the processing. This could include: "raw" data with no calibration applied (NRT?) partially finalised data but not fully signed off (or would this be closer to NRT?) public access data from e.g. CEDA repository (QA/QC?) This data will need to be distinguished and sometimes compared. c.f. ICOS data level Describe the solution you'd like Some way to distinguish between these different "data levels" using the metadata e.g. add data_level key or otherwise to observation metadata. Features needed: This will need to be decided in some way in advance (e.g. 2 = public facing data, 1 = local processing) Could allow for flexibility (e.g. 1.1, 1.2, 1.3 etc. or could this be covered by existing versioning?) Can be related to where the data originates OR an input e.g. 2 = from external resource / matches to data level on that platform. Can tie into ranking where data level could be used as a distinguisher to specify a preference between "degenerate" data Describe alternatives you've considered No response Additional context No response I like the idea of flexibility. It probably makes sense to keep the "L1" and "L2" definitions the same as ICOS. So having the ability to specify sublevels (e.g. L1.1, L1.2) could help with this? Then all L2 data could match a publicly-accessible data repository (and specify the source in the metadata?). @joe-pitt I suppose the main benefit of the "sub-levels" would be to allow easy comparison between pre-finalised versions of the data? Is that the use case you would have in mind for this? Yes - to allow comparison, and also to allow the NRT data to be L1, the final archived data to be L2, and the partially-finalised data to have some level in between. From a pure openghg perspective, it generally seems useful to allow for this flexibility in level naming (i.e. level numbers don't have to be integers). Then in the "acrg implementation" of openghg (i.e. the object store that we maintain on BP1) we can decide exactly how we want to define each sub-level (but in principle another group could maintain a local object store and define different sub-levels). Great, that makes sense. We could either do this (A) as proposed above with non-integer levels in some way or (B) have an additional label (e.g. data_sub_level). Either way this could be an extra way to distinguish which can be set by the data owners. Another question around this is if we would want a search for "L1" to return ALL data labelled with L1 (even if this has a sub-level) OR only the data labelled as "L1" with no sub-level. Yeah good point - an additional data_sub_level label works just as well. It could potentially help to clarify things too - especially if there was a comment associated with each data_sub_level explaining what it refers to. I guess in that case it would make sense for a search of level=L1 to return all the sublevels? 
But I don't have a strong view on this - people with more end-user experience are probably better able to see the different pros/cons
For OpenGHG:
- Can add new options: "data level", "data sub level" - check alignment with ICOS Carbon Portal
- also "data sub level comment" - or something less cumbersome - Joe: something to help describe what is in the sub level
- sub level can also be highly flexible as needed (could be numeric or otherwise)
- level should be more strict - has to be an integer, >=0
- Need to consider how this ties into ranking for both us and for the general product
(ADVANCED) Do we want this to be completely empty or could this be shipped with a few basic rules? Could allow these rules to be set up by choice? Using a command line interface could be good, e.g. $ openghg --setup-ranking Could present each basic rule and ask if the user wants this to be included? Use highest height when there is a choice? Use highest level when there is a choice? Would need to include a "highest" keyword and set this up in some way.
From Architecture planning meeting on 07/09/2023. Still add data_level but rather than adding a data_sublevel keyword could use the versioning for this, moving to the following.
Planning:
- Initial Step: Add 'data_level' as a general key for all objects
- Next phase: Change how versioning is detailed
  - Include details of versions within the metastore
  - Full UUID becomes uuid/version e.g. 4d5-63u-rr3/v1
  - Could split this out to be within a version column (just numerical)
  - [Longer term could de-normalise - linking to an entry per datasource containing all the versions]
- Need to add a layer within searching to group together details linked to the same Datasource and return the latest version
  - Group on everything except the specific version and version-related details.
  - Allow search for specific versions. "version" becomes special essentially
- Can also include version-specific metadata
  - "label" / "comment" as a free-form text input for each added piece of data - this can be different per version and added by the user
  - "date" / "timestamp" for when the data was added to the object store. This would be automatically generated.
- Consider how to make sure this is not a breaking change - need to ensure backwards compatibility, e.g. don't worry about entries in the metastore with no version entry for instance and check there's only one entry.
Can I check I've understood - now the version number will replace the data_sublevel? One of my use cases is the AGAGE data, where we have L1 data (NRT), L2 data (from the archive), and also some data that is halfway between. This data has been checked by station operators and has been through the European data review process, but has not yet been signed off for release by the wider AGAGE group. It is this data that Alistair uses in his annual report/NIR inversions. I had in mind that I would call this something like: data_level=1, data_sublevel=Euro_QC. Now should I use: data_level=1, version=Euro_QC? I think the idea is that if data at level X is updated, it is a new version, but it is still level X. If you want to add data at level X+1, it is a new "datasource", starting at version 0, and the data at level X is unaffected. So maybe we need L0 (NRT), L1 (some checks, not signed off), and L2 (signed off for release). (Or L1, L1.5, L2 if NRT must be L1.) For versions, I think the plan was, for each level, start the version numbers at 0 and just use the next whole number when a version is added. This made sense to us for cases when a new version was just a minor correction to the data.
It sounds like NRT and Euro_QC should be different levels. (So if you decide to update the data after it has had some checks, but it is still not ready to be L2, then it would be a new version of the Euro_QC level.) I think with Rachel's versioning updates, you can decide if a new version will replace the old version, or if both versions will be kept. Although for observations, it might be worth it to just keep most copies, since the data is small. (Then if the data is updated after someone has used it for an inversion, we would know exactly what data they used.) We also discussed adding an option to include a message when data is updated. So you could have L1 include NRT and Euro_QC, and mention in the comment that the new version added is Euro_QC. I don't think we planned to make the comments searchable, but the default behaviour would be to return the latest version, so once NRT is updated to Euro_QC as a new version, the Euro_QC would be returned by default. Does that make sense/seem like it would work for you? Can I check I've understood - now the version number will replace the data_sublevel? One of my use cases is the AGAGE data, where we have L1 data (NRT), L2 data (from the archive), and also some data that is halfway between. This data has been checked by station operators and has been through the European data review process, but has not yet been signed off for release by the wider AGAGE group. It is this data that Alistair uses in his annual report/NIR inversions. I had in mind that I would call this something like: data_level=1, data_sublevel=Euro_QC. Now should I use: data_level=1, version=Euro_QC? Happy to set up a chat about this so we can make sure this doesn't move too far away from what you were thinking (or just stick with the previously discussed plan)? The idea of folding this into the versioning was so this didn't create additional work for you when adding new data (i.e. you always have to label it), but if it isn't mapping to what would work practically then we can re-tool it. Ok cool - yeah essentially what I want is the freedom to set arbitrary levels. When we discussed this before, I think the following thoughts came up:
- It would be nice to stick to the ICOS convention for data levels (NRT=1, final archive=2)
- Having non-integer data levels might be confusing
- Therefore to accommodate arbitrary levels (like my Euro_QC) we would have a data_sublevel
I'm not too concerned about non-integer data levels - it might have been @mrghg who made this suggestion? Either allowing flexibility in the data levels or having a sublevel would be fine by me. Comment on the current "icos_data_level" key is that we need to consider how this relates to the more generic data_level key. Suggestion for this would be:
- retire `icos_data_level` and just use `data_level` instead. When pulling from ICOS, what was `icos_data_level` would be used for `data_level` instead, but it could also be set in other circumstances as well.
Would there be any reason not to do this? We need to consider and ensure this still works well enough with a current object store which already used icos_data_level. Retiring icos_data_level sounds fine to me.
New issue created for this - #1067. Keys have now been added.
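To make the agreed keys concrete, here is a minimal, self-contained sketch (plain Python, not the real openghg API; the record layout and version numbers below are illustrative assumptions):

```python
# Sketch of level-aware lookup over metadata records (NOT the openghg API).
# It only illustrates the keys discussed above: a strict "data_level" plus a
# free-text per-version label, with the latest version returned by default.
records = [
    {"site": "MHD", "data_level": 1, "version": 1, "label": "NRT"},
    {"site": "MHD", "data_level": 1, "version": 2, "label": "Euro_QC"},
    {"site": "MHD", "data_level": 2, "version": 1, "label": "final archive"},
]

def search(records, data_level=None, version=None):
    """Filter by level; return only the latest version unless one is pinned."""
    hits = [r for r in records if data_level is None or r["data_level"] == data_level]
    if version is not None:
        return [r for r in hits if r["version"] == version]
    latest = {}
    for r in hits:  # keep the highest version per (site, level)
        key = (r["site"], r["data_level"])
        if key not in latest or r["version"] > latest[key]["version"]:
            latest[key] = r
    return list(latest.values())

print(search(records, data_level=1))             # -> the Euro_QC record (v2)
print(search(records, data_level=1, version=1))  # -> the NRT record is still retrievable
```

Searching a level returns its latest version by default, while older versions (e.g. the NRT data someone used for an inversion) remain addressable by pinning the version.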
gharchive/issue
2023-02-02T17:05:47
2025-04-01T06:45:15.282213
{ "authors": [ "brendan-m-murphy", "joe-pitt", "rt17603" ], "repo": "openghg/openghg", "url": "https://github.com/openghg/openghg/issues/551", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
662294951
Wrong VM options for Windows Under "JavaFX and IntelliJ," "5. Add VM options" of "Modular from IDE" shows % for Windows instead of $:
--module-path "%PATH_TO_FX%;mods\production"
vs.
--module-path "$PATH_TO_FX$;mods\production"
or
--module-path "${PATH_TO_FX};mods\production"
Resolved by #180
gharchive/issue
2020-07-20T21:43:49
2025-04-01T06:45:15.551332
{ "authors": [ "Imericxu" ], "repo": "openjfx/openjfx-docs", "url": "https://github.com/openjfx/openjfx-docs/issues/157", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
920951124
The Android emulator opens a blank screen, and debugging shows dlopen failed: library "libkraken_websocket_jsc.so" not found
What version of kraken are you using
Steps To Reproduce: git clone kraken, open the Android emulator (Android 9.0), run npm run start. The page opens blank, and the debug output shows:
Warning: You are using these overridden dependencies:
! kraken 0.8.0-dev.1 from path ..
Running "flutter pub get" in example... 4.9s
Using hardware rendering with device AOSP on IA Emulator. If you notice graphics artifacts, consider enabling software rendering with "--enable-software-rendering".
Launching lib/main.dart on AOSP on IA Emulator in debug mode...
Note: /Users/yichao/flutter/.pub-cache/hosted/pub.dartlang.org/jsc-0.2.0/android/src/main/java/com/example/jsc/JscPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: /Users/yichao/workplace/kraken/kraken/android/src/main/java/com/openkraken/kraken/KrakenPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: /Users/yichao/flutter/.pub-cache/hosted/pub.dartlang.org/kraken_websocket-0.2.0-dev.2/android/src/main/java/com/example/kraken_websocket/KrakenWebsocketPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: /Users/yichao/flutter/.pub-cache/hosted/pub.dartlang.org/kraken_devtools-0.2.0-dev.1/android/src/main/java/com/example/kraken_devtools/KrakenDevtoolsPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Running Gradle task 'assembleDebug'...
Running Gradle task 'assembleDebug'... Done 40.1s
✓ Built build/app/outputs/flutter-apk/app-debug.apk.
Installing build/app/outputs/flutter-apk/app.apk... 8.6s
E/flutter (10808): [ERROR:flutter/lib/ui/ui_dart_state.cc(199)] Unhandled Exception: Invalid argument(s): Failed to load dynamic library 'libkraken_websocket_jsc.so': dlopen failed: library "libkraken_websocket_jsc.so" not found
E/flutter (10808): #0 _open (dart:ffi-patch/ffi_dynamic_library_patch.dart:11:55)
E/flutter (10808): #1 new DynamicLibrary.open (dart:ffi-patch/ffi_dynamic_library_patch.dart:20:12)
E/flutter (10808): #2 nativeDynamicLibrary (package:kraken_websocket/platform.dart:20:20)
E/flutter (10808): #3 nativeDynamicLibrary (package:kraken_websocket/platform.dart)
E/flutter (10808): #4 _initBridge (package:kraken_websocket/kraken_websocket.dart:10:1)
E/flutter (10808): #5 _initBridge (package:kraken_websocket/kraken_websocket.dart)
E/flutter (10808): #6 initBridge (package:kraken_websocket/kraken_websocket.dart:13:3)
E/flutter (10808): #7 KrakenWebsocket.initialize (package:kraken_websocket/kraken_websocket.dart:18:5)
E/flutter (10808): #8 main (package:kraken_example/main.dart:8:19)
E/flutter (10808): #9 _runMainZoned.<anonymous closure>.<anonymous closure> (dart:ui/hooks.dart:142:25)
E/flutter (10808): #10 _rootRun (dart:async/zone.dart:1354:13)
E/flutter (10808): #11 _CustomZone.run (dart:async/zone.dart:1258:19)
E/flutter (10808): #12 _runZoned (dart:async/zone.dart:1789:10)
E/flutter (10808): #13 runZonedGuarded (dart:async/zone.dart:1777:12)
E/flutter (10808): #14 _runMainZoned.<anonymous closure> (dart:ui/hooks.dart:138:5)
E/flutter (10808): #15 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:283:19)
E/flutter (10808): #16 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:184:12)
E/flutter (10808):
Syncing files to device AOSP on IA Emulator... 399ms
Flutter run key commands.
r Hot reload. 🔥🔥🔥
R Hot restart.
h Repeat this help message.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
💪 Running with sound null safety 💪
An Observatory debugger and profiler on AOSP on IA Emulator is available at: http://127.0.0.1:50117/FXCWznGWwGM=/
The Flutter DevTools debugger and profiler on AOSP on IA Emulator is available at: http://127.0.0.1:9100?uri=http%3A%2F%2F127.0.0.1%3A50117%2FFXCWznGWwGM%3D%2F
How should I handle this situation?
Code example: Expected results: Actual results:
The Android emulator is not supported; please use a real device. That makes debugging a bit of a hassle - is there any other way? iOS supports the simulator. Our users are mainly on Android - will this be supported in the future, and roughly when? There are no plans for now; using a real device is fine. OK.
gharchive/issue
2021-06-15T03:33:44
2025-04-01T06:45:15.714288
{ "authors": [ "andycall", "cielyic" ], "repo": "openkraken/kraken", "url": "https://github.com/openkraken/kraken/issues/415", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1627609599
how to filter data from a VectorTileLayer Describe the bug My data is about 2 million (200w) features, and I published a Tile Caching layer based on GeoServer. I use a VectorTileLayer based on OpenLayers 6, but I just want to get part of the data to display. To Reproduce
```ts
const aSource = new VectorTileSource({
  name: info.node.layerName,
  format: new MVT(),
  tileGrid: createXYZ({
    extent: olProj.get('EPSG:900913')!.getExtent(),
    maxZoom: MAXZOOM,
    tileSize: 256,
  }),
  url: `/geoserver2/gwc/service/tms/1.0.0/${info.node.workspace}:${info.node.layerName}@EPSG%3A900913@pbf/{z}/{x}/{-y}.pbf`,
  tilePixelRatio: 2,
  minZoom: MINZOOM,
  maxZoom: MAXZOOM,
});
const layer = new VectorTileLayer({
  declutter: true,
  source: aSource,
  opacity: DefaultOpacity,
  style: function (feature) {
    // style.getText().setText(feature.get('name'));
    if (feature.getProperties().province_name === orgParams.province) {
      const nodeStyle =
        nodeListStyles &&
        nodeListStyles.length &&
        nodeListStyles.filter((item: any) => item.layerName === info.node.layerName) &&
        nodeListStyles.filter((item: any) => item.layerName === info.node.layerName).length &&
        nodeListStyles.filter((item: any) => item.layerName === info.node.layerName)[0]
          ? nodeListStyles.filter((item: any) => item.layerName === info.node.layerName)[0]
          : {};
      const style =
        info.node.geomType === 'line'
          ? renderLineStyle(feature, nodeStyle, _)
          : info.node.geomType === 'polygon'
          ? renderStyle(feature, nodeStyle, _)
          : renderPointStyle(feature, nodeStyle, _);
      return style;
    } else {
      return null;
    }
  },
});
mapRef.current?.addLayer(layer);
```
Expected behavior I display part of the data via the style function, but the page gets stuck. I want to filter the data by province instead of using the style method. How? If you need to dynamically show or hide features based on their properties, your approach is correct. If not, you could filter the features read from the tiles, for example:
```ts
format: new MVT({layerName: 'layerName', layers: [info.node.layerName]}),
```
gharchive/issue
2023-03-16T14:22:04
2025-04-01T06:45:15.717931
{ "authors": [ "chuweiyan", "mike-000" ], "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/issues/14577", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
232030588
Webgl vector improvements Some minor improvements in rendering vectors with the WebGL renderer. Fixes #6823's rendering problems. Makes WebGL vector replays compatible with ol.render.Feature. Maps flat coordinates to numbers (see also #6823). How did the string values get into the flat coordinates? I tried to trace it back, but failed to catch anything suspicious. Thus, I have absolutely no idea. How should we deal with the rendering tests? There are different threshold errors on my local CI test. Should we raise the threshold? Then the WebGL rendering tests will be practically useless (not that they are so useful in their current form). @GaborFarkas, regarding the failing test, there is actually a noticeable difference between the rendered image and the expected one (screenshots not preserved here). Thank you @gberaudo, I must have tested another branch. This PR is ready for review. @GaborFarkas about .map(Number), in what condition did you have non-number values? (example, unit test, ...) @fredj with the example data @ondrejlaga uploaded in issue #6823. When you zoom around that polygon, it disappears on higher zoom levels. If you log the flat coordinates in debug mode, you will notice that, when the polygon disappears, the coordinates are strings. Thank you @gberaudo for your review! @GaborFarkas that's because the data in #6823 is not valid GeoJSON; the coordinates are encoded as strings instead of numbers (["-585733.80000000", "-1138268.90000000"] instead of [-585733.80000000, -1138268.90000000]). I've uploaded a fixed version: https://gist.github.com/fredj/ea28e945e49bc93e26339fdafb99c300 Please test with this file and remove ea4d451 if it works. Thank you @fredj, it works as expected with your GeoJSON. Before merging I would like to fix another edge case I just noticed. I will remove ea4d451 in the final rebase. Sorry for the mess. Now the PR is ready for review again.
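As a side note on the root cause, here is a minimal Python sketch of the data repair (OpenLayers itself is JavaScript; the file names are assumptions, and the real fix was simply re-encoding the GeoJSON):

```python
# Coerce string-encoded GeoJSON coordinates (["-585733.8", ...]) to numbers.
# Illustration only - this mirrors the fix applied to the data in #6823.
import json

def coerce(node):
    if isinstance(node, list):
        return [coerce(c) for c in node]
    if isinstance(node, str):
        return float(node)  # "-585733.80000000" -> -585733.8
    return node

with open("invalid.geojson") as fh:  # hypothetical input file
    gj = json.load(fh)
for feature in gj["features"]:
    geom = feature["geometry"]
    geom["coordinates"] = coerce(geom["coordinates"])
with open("fixed.geojson", "w") as fh:
    json.dump(gj, fh)
```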
gharchive/pull-request
2017-05-29T13:58:36
2025-04-01T06:45:15.724902
{ "authors": [ "GaborFarkas", "fredj", "gberaudo", "tschaub" ], "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/pull/6858", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
821754379
add partprobe before retry getting devNode Signed-off-by: SharpRazor bjcb@cn.ibm.com @dongyanyang @jackydalong @bjhuangr please help review, thanks. @jackydalong @bjhuangr please check this, thanks lgtm @dongyanyang @bjhuangr @jichenjc FYI: in my env, I moved the probePartition code out of the if statement to manually trigger this code, and deployed VMs; the output log:
Mar 8 00:44:26 113-cmp-cb root[1241067]: 1238469.0 refresh_bootmap: refresh_bootmap: refreshZIPL: Begin to refreshZIPL.
Mar 8 00:44:26 113-cmp-cb root[1241071]: 1238469.0 refresh_bootmap: refresh_bootmap: refreshZIPL: multipath is enabled, so probe partition on dev of wwid: 3600507640083826de0000000000034d5
Mar 8 00:44:26 113-cmp-cb root[1241078]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: Before partprobe, /dev/disk/by-path/ folder content: ccw-0.0.0100 ccw-0.0.0100-part1 ccw-0.0.1a14-fc-0x500507680b21bac6-lun-0 ccw-0.0.1a14-fc-0x500507680b21bac6-lun-0-part1 ccw-0.0.1a14-fc-0x500507680b21bac7-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac6-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac7-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac7-lun-0-part1 ccw-0.0.1a14-zfcp-0x500507680b21bac6:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b21bac6:0x0000000000000000-part1 ccw-0.0.1a14-zfcp-0x500507680b21bac7:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac6:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac7:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac7:0x0000000000000000-part1 ccw-0.0.1b18-fc-0x500507680b21bac6-lun-0 ccw-0.0.1b18-fc-0x500507680b21bac7-lun-0 ccw-0.0.1b18-fc-0x500507680b22bac6-lun-0 ccw-0.0.1b18-fc-0x500507680b22bac7-lun-0 ccw-0.0.1b18-zfcp-0x500507680b21bac6:0x0000000000000000 ccw-0.0.1b18-zfcp-0x500507680b21bac7:0x00000000000000
Mar 8 00:44:26 113-cmp-cb root[1241083]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: Before partprobe, /dev/disk/by-id/ folder content: dm-name-mpathbp dm-name-mpathbq dm-name-mpathbq1 dm-uuid-mpath-3600507640083826de00000000000331b dm-uuid-mpath-3600507640083826de0000000000034d5 dm-uuid-part1-mpath-3600507640083826de0000000000034d5 scsi-3600507640083826de00000000000331b scsi-3600507640083826de0000000000034d5 scsi-3600507640083826de0000000000034d5-part1 scsi-SIBM_2145_010020e09b78XX00 scsi-SIBM_2145_010020e09b78XX00-part1 wwn-0x600507640083826de00000000000331b wwn-0x600507640083826de0000000000034d5 wwn-0x600507640083826de0000000000034d5-part1.
Mar 8 00:44:27 113-cmp-cb root[1241088]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: Before partprobe, /dev/mapper/ folder content: control mpathbp mpathbq mpathbq1.
Mar 8 00:44:27 113-cmp-cb root[1241100]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: Manually run command partprobe on /dev/mapper/mpathbq of wwid 3600507640083826de0000000000034d5 with return code: 0
Mar 8 00:44:27 113-cmp-cb root[1241107]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: After partprobe, /dev/disk/by-path/ folder content: ccw-0.0.0100 ccw-0.0.0100-part1 ccw-0.0.1a14-fc-0x500507680b21bac6-lun-0 ccw-0.0.1a14-fc-0x500507680b21bac6-lun-0-part1 ccw-0.0.1a14-fc-0x500507680b21bac7-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac6-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac7-lun-0 ccw-0.0.1a14-fc-0x500507680b22bac7-lun-0-part1 ccw-0.0.1a14-zfcp-0x500507680b21bac6:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b21bac6:0x0000000000000000-part1 ccw-0.0.1a14-zfcp-0x500507680b21bac7:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac6:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac7:0x0000000000000000 ccw-0.0.1a14-zfcp-0x500507680b22bac7:0x0000000000000000-part1 ccw-0.0.1b18-fc-0x500507680b21bac6-lun-0 ccw-0.0.1b18-fc-0x500507680b21bac7-lun-0 ccw-0.0.1b18-fc-0x500507680b22bac6-lun-0 ccw-0.0.1b18-fc-0x500507680b22bac7-lun-0 ccw-0.0.1b18-zfcp-0x500507680b21bac6:0x0000000000000000 ccw-0.0.1b18-zfcp-0x500507680b21bac7:0x000000000000000
Mar 8 00:44:27 113-cmp-cb root[1241116]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: After partprobe, /dev/disk/by-id/ folder content: dm-name-mpathbp dm-name-mpathbq dm-name-mpathbq1 dm-uuid-mpath-3600507640083826de00000000000331b dm-uuid-mpath-3600507640083826de0000000000034d5 dm-uuid-part1-mpath-3600507640083826de0000000000034d5 scsi-3600507640083826de00000000000331b scsi-3600507640083826de0000000000034d5 scsi-3600507640083826de0000000000034d5-part1 scsi-SIBM_2145_010020e09b78XX00 scsi-SIBM_2145_010020e09b78XX00-part1 wwn-0x600507640083826de00000000000331b wwn-0x600507640083826de0000000000034d5 wwn-0x600507640083826de0000000000034d5-part1.
Mar 8 00:44:27 113-cmp-cb root[1241121]: 1238469.0 refresh_bootmap: refresh_bootmap: probePartition: After partprobe, /dev/mapper/ folder content: control mpathbp mpathbq mpathbq1.
Mar 8 00:44:27 113-cmp-cb root[1241126]: 1238469.0 refresh_bootmap: refresh_bootmap: refreshZIPL: devNode is: /dev/disk/by-id/dm-uuid-part1-mpath-3600507640083826de0000000000034d5 point: /mnt/OOvBK
Only one question: Is partprobe really required? At least one reason to keep partprobe is that we need to manually load the partition in case of 'skip_kpartx = yes' in multipath.conf @bjhuangr Do you agree? Yes, just like Jacky said, we found that manually running partprobe helps to reduce the issue of the devNode not existing.
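For illustration, the retry idea discussed here as a small Python sketch (the real implementation lives in Feilong's refresh_bootmap tooling, not Python; the paths below are taken from the log above):

```python
# Retry getting the devNode, running partprobe between attempts. partprobe
# forces the kernel to re-read the partition table, which is needed e.g.
# when 'skip_kpartx = yes' is set in multipath.conf.
import os
import subprocess
import time

def wait_for_devnode(dev_node, mpath_dev, retries=3, delay=2.0):
    for _ in range(retries):
        if os.path.exists(dev_node):
            return True
        subprocess.run(["partprobe", mpath_dev], check=False)
        time.sleep(delay)
    return os.path.exists(dev_node)

ok = wait_for_devnode(
    "/dev/disk/by-id/dm-uuid-part1-mpath-3600507640083826de0000000000034d5",
    "/dev/mapper/mpathbq",
)
print("devNode present:", ok)
```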
gharchive/pull-request
2021-03-04T05:00:27
2025-04-01T06:45:15.741160
{ "authors": [ "SharpRazor", "bjhuangr", "jackydalong", "jichenjc" ], "repo": "openmainframeproject/feilong", "url": "https://github.com/openmainframeproject/feilong/pull/430", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1493571881
MmapFileList.truncateOffset's else block may be in the wrong place. branch: master class: io.openmessaging.storage.dledger.store.file.MmapFileList method: io.openmessaging.storage.dledger.store.file.MmapFileList#truncateOffset description: truncateOffset should truncate data whose position is after the offset, so the else block may be in the wrong place. origin code:
```java
// method truncateOffset
for (int i = 0; i < mfs.length; i++) {
    MmapFile file = (MmapFile) mfs[i];
    long fileTailOffset = file.getFileFromOffset() + this.mappedFileSize;
    if (fileTailOffset > offset) {
        if (offset >= file.getFileFromOffset()) {
            file.setWrotePosition((int) (offset % this.mappedFileSize));
            file.setCommittedPosition((int) (offset % this.mappedFileSize));
            file.setFlushedPosition((int) (offset % this.mappedFileSize));
        } else {
            willRemoveFiles.add(file);
        }
    }
}
```
suggested code:
```java
for (int i = 0; i < mfs.length; i++) {
    MmapFile file = (MmapFile) mfs[i];
    long fileTailOffset = file.getFileFromOffset() + this.mappedFileSize;
    if (fileTailOffset > offset) {
        if (offset >= file.getFileFromOffset()) {
            file.setWrotePosition((int) (offset % this.mappedFileSize));
            file.setCommittedPosition((int) (offset % this.mappedFileSize));
            file.setFlushedPosition((int) (offset % this.mappedFileSize));
        }
    } else {
        willRemoveFiles.add(file);
    }
}
```
I think the original code is right~ In this case: we have two mmap files, each one 1024 B. We want to truncate the offset to 1100, which means we need to keep the first mmap file and just reset the three positions in the second mmap file. But your suggested code will delete the first mmap file, because its fileTailOffset (1024) is less than the offset (1100), so the first mmap file is added to willRemoveFiles.
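To see the difference concretely, here is a tiny Python simulation of the offset arithmetic from the example above (illustration only, not the DLedger implementation):

```python
# Two mmap files of 1024 B at offsets 0 and 1024; truncate to offset 1100.
MAPPED_FILE_SIZE = 1024
files = [0, 1024]  # fileFromOffset of each mmap file

def truncate(files, offset, original=True):
    removed, truncated = [], []
    for start in files:
        tail = start + MAPPED_FILE_SIZE
        if tail > offset:
            if offset >= start:
                truncated.append((start, offset % MAPPED_FILE_SIZE))
            elif original:            # original: remove files entirely AFTER offset
                removed.append(start)
        elif not original:            # suggested patch: removes files entirely
            removed.append(start)     # BEFORE the offset, i.e. valid data
    return truncated, removed

print(truncate(files, 1100, original=True))   # ([(1024, 76)], [])
print(truncate(files, 1100, original=False))  # ([(1024, 76)], [0]) <- first file lost
```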
gharchive/issue
2022-12-13T07:25:44
2025-04-01T06:45:15.777549
{ "authors": [ "TheR1sing3un", "c1258445690" ], "repo": "openmessaging/dledger", "url": "https://github.com/openmessaging/dledger/issues/264", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2164894069
feat(quickstart): use unified meters Update quickstart and example meters to follow common meters: https://openmeter.io/docs/getting-started/meter-examples @sagikazarmark I created a separate issue about updating examples: https://github.com/openmeterio/openmeter/issues/657 Please make sure to squash commits (that merge commit triggers me)
gharchive/pull-request
2024-03-02T17:12:42
2025-04-01T06:45:15.779492
{ "authors": [ "hekike", "sagikazarmark" ], "repo": "openmeterio/openmeter", "url": "https://github.com/openmeterio/openmeter/pull/652", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2222837934
Create a separate service to manage a LedgerCtx in a separate thread. This PR adds a ledger_manager module, which defines LedgerManager, a service which launches a thread keeping track of a LedgerCtx and handling requests querying its state and making changes. All operations are performed asynchronously in a thread separate from the caller's, but synchronous wrappers for these calls are provided to avoid dramatic changes to the ledger service API yet. This new service is not used anywhere yet. As a next step I plan to remove the ledger field from NodeService and replace it with a LedgerManager. Then implementations of ledger-related traits on NodeService will be updated to call this external service and forward results to callers. @tizoc, thanks. I removed the shared references from the API. I will continue in this branch to actually use the code then.
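For reference, the same pattern (a worker thread owning the state, plus synchronous wrappers for callers) in a minimal Python sketch - the real LedgerManager is Rust, and every name below is a stand-in:

```python
import queue
import threading

class LedgerManager:
    """Owns the ledger state in its own thread; callers use sync wrappers."""
    def __init__(self):
        self._requests = queue.Queue()
        self._ledger = {}  # stand-in for LedgerCtx
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, args, reply = self._requests.get()
            if op == "get":
                reply.put(self._ledger.get(args))
            else:  # "set"
                key, value = args
                self._ledger[key] = value
                reply.put(None)

    def _call(self, op, args):  # synchronous wrapper over the async channel
        reply = queue.Queue(maxsize=1)
        self._requests.put((op, args, reply))
        return reply.get()

    def get(self, key):
        return self._call("get", key)

    def set(self, key, value):
        return self._call("set", (key, value))

mgr = LedgerManager()
mgr.set("height", 42)
print(mgr.get("height"))  # 42
```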
gharchive/pull-request
2024-04-03T12:44:05
2025-04-01T06:45:15.802530
{ "authors": [ "Sventimir" ], "repo": "openmina/openmina", "url": "https://github.com/openmina/openmina/pull/322", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1087467009
Introduce pending_commit This PR does a few things.
CoreGroup:
- It disallows the processing of own commit messages via stage_commit. Instead, clients now have to process the StagedCommit resulting from their own create_commit calls directly via merge_staged_commit.
- It gets rid of the logic in stage_commit that covers own commits (and some code that was unused as a consequence). There is probably more to be cleaned up, but I'll leave that as a follow-up (see below).
- It fixes all tests that use the core_group interface to implement the flow described above. In some cases, this means that a new party had to be added to the group, because alice can't process her own commits anymore to test things.
MlsGroup:
- It stores the StagedCommit resulting from the most recent call to create_commit in the MlsGroup as the pending_commit and implements functions for inspecting and clearing it.
- It forces the consumer to clear the pending commit explicitly before creating another commit. If this is not done, commit-creating functions will return an error.
- If a commit is merged (be it the pending one, or one from another group member), the pending_commit is cleared.
- It introduces a merge_pending_commit function that merges the pending_commit and consumes it in the process.
- It fixes all tests using the MlsGroup interface. As with the CoreGroup, this means that in some cases, an extra group member had to be added.
TODOs
- [x] explicit tests for the new API flows in MlsGroup
Follow-ups
- [ ] clean up unneeded code/logic. For example, apply_own_update_path doesn't need to return a KPB anymore.
Thanks for the review! I'm not sure if mixing the two state variables (is_active and pending_commit) is a good idea. It helps us combine two checks in the beginning of the committing functions, but otherwise the states partially overlap. For example, a group is active with or without a pending commit. Also, it's conceivable that there's still a pending commit when the client is removed from the group (although we could probably clear it at that point). Not a super strong opinion here, but I'd prefer to keep the states separated at the cost of an extra explicit check at the start of the committing functions. On the logic side: Shouldn't we also reject the creation of proposals when there is a pending commit? Generating application messages might still be ok I guess. Yes, that makes sense. I'll put it in.
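A minimal sketch of the pending-commit rule described above, written as Python pseudocode (OpenMLS itself is Rust; the method names mirror the PR description, everything else is illustrative):

```python
class PendingCommitExists(Exception):
    pass

class Group:
    def __init__(self):
        self.pending_commit = None
        self.epoch = 0

    def create_commit(self, proposals):
        if self.pending_commit is not None:  # must clear or merge first
            raise PendingCommitExists("clear or merge the pending commit first")
        self.pending_commit = {"proposals": proposals, "epoch": self.epoch + 1}
        return self.pending_commit            # the StagedCommit

    def merge_pending_commit(self):           # consumes the pending commit
        staged, self.pending_commit = self.pending_commit, None
        self.epoch = staged["epoch"]

    def merge_staged_commit(self, staged):    # commit from another member
        self.pending_commit = None            # any own pending commit is cleared
        self.epoch = staged["epoch"]

g = Group()
g.create_commit(["add bob"])
try:
    g.create_commit(["update"])               # rejected while one is pending
except PendingCommitExists as err:
    print(err)
g.merge_pending_commit()
print(g.epoch)  # 1
```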
gharchive/pull-request
2021-12-23T07:54:44
2025-04-01T06:45:15.824036
{ "authors": [ "kkohbrok" ], "repo": "openmls/openmls", "url": "https://github.com/openmls/openmls/pull/659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2076532132
Add Video Element with tag Hello, I'm sending this pull request to propose adding a new video component to the project. This component uses the native HTML <video> tag to provide an improved user experience when playing videos. I tested the new component on several modern browsers on Windows to ensure compatibility. I believe this new component will add significant value to the project, offering a native solution for video playback. I am open to feedback and suggestions to further improve this implementation. The motivation to suggest the component arises from the need to add videos to the documentation of an internal process that we are building with Hyperbook. Please review the changes when possible and let me know if there are any questions or adjustments needed. Thanks! Yours sincerely, Eliel Martins Looks great. Can you please update the pnpm.lock file by running pnpm i and committing the new file. Hi, pnpm lock file updated. Perfect. Thank you :)
gharchive/pull-request
2024-01-11T12:38:30
2025-04-01T06:45:15.960476
{ "authors": [ "elielmartinsbr", "mikebarkmin" ], "repo": "openpatch/hyperbook", "url": "https://github.com/openpatch/hyperbook/pull/847", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1103353675
Team scJoint Hi, I've added predict modality train and test scripts for team scJoint. Thanks for the contribution! :relaxed:
gharchive/pull-request
2022-01-14T09:38:17
2025-04-01T06:45:16.003163
{ "authors": [ "itscassie", "rcannood" ], "repo": "openproblems-bio/neurips2021_multimodal_topmethods", "url": "https://github.com/openproblems-bio/neurips2021_multimodal_topmethods/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
329800632
A694379226563213 transferring Add tests for transferring Depends on: https://github.com/openprocurement/openregistry.lots.core/pull/51
Pull Request Test Coverage Report for Build 173
- 47 of 55 (85.45%) changed or added relevant lines in 2 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage decreased (-0.2%) to 96.042%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
openregistry/lots/loki/tests/transferring.py | 30 | 38 | 78.95%
Totals: Change from base Build 171: -0.2% | Covered Lines: 2499 | Relevant Lines: 2602
💛 - Coveralls
gharchive/pull-request
2018-06-06T10:02:36
2025-04-01T06:45:16.019186
{ "authors": [ "Scandie", "coveralls" ], "repo": "openprocurement/openregistry.lots.loki", "url": "https://github.com/openprocurement/openregistry.lots.loki/pull/48", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
932634816
More Output Formatting Tweaks
- Put the violations above the unchecked rules as they are more important.
- Remove Rule GUID from outputs and replace with rule filename.
Already Done.
gharchive/issue
2021-06-29T13:00:04
2025-04-01T06:45:16.028170
{ "authors": [ "belosh59", "kickroot" ], "repo": "openraven/magpie", "url": "https://github.com/openraven/magpie/issues/166", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
248446080
Add IntelliJ run steps ...as this did not work out of the box. This is cool. Just a few small comments I'll put in the appropriate places now...
gharchive/pull-request
2017-08-07T15:24:05
2025-04-01T06:45:16.029035
{ "authors": [ "gidsg", "karlbaker02" ], "repo": "openregister/openregister-java", "url": "https://github.com/openregister/openregister-java/pull/317", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
653344241
Remove legacy classes row and panel that conflict with bootstrap CSS This is important for https://github.com/openreview/openreview-web/pull/63; however, it could be deployed before the migration is complete as it does not contain any breaking changes. Should we migrate current webfields when we switch to openreview-web? Well, the existing recruitment webfields will not be viewed again, so we don't need to migrate those, but it would be nice to migrate some of the active PC console webfields. The only downside to not migrating them is that the table won't quite fill the whole width of the page; it will have some extra padding on the sides. So I'd say it's not totally necessary. It looks good to me, I prefer to merge this before sending invitations to ICLR reviewers. Waiting for @mohituniyal to review.
gharchive/pull-request
2020-07-08T14:28:12
2025-04-01T06:45:16.086653
{ "authors": [ "melisabok", "zbialecki" ], "repo": "openreview/openreview-py", "url": "https://github.com/openreview/openreview-py/pull/712", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2745706855
Major adjustments to recipes with certain licenses We need to make significant changes to recipes based on their licenses. Any recipes that are proprietary to Moderne should have most of their instructions removed (except for the CLI) and shouldn't have any YAML stuff. They also should have information about how to become a Moderne customer to get the recipes. We also need to figure out if there are any additional recipes to include that we aren't already -- and how we can securely get the information for all of this. Additional context in the Moderne Slack: https://moderneinc.slack.com/archives/C01VADFPJQZ/p1733866200482319 I'll keep this open for now as I think there are still quite a few more things to do unless I've missed something. Things I am thinking about include:
- Making a license section or note in the individual recipe pages.
- Removing links to GitHub or issues for recipes that are proprietary, as they will just result in 404s for people.
- Cleaning up instructions to say that certain recipes can only be used if you're a Moderne customer.
gharchive/issue
2024-12-17T18:25:38
2025-04-01T06:45:16.089442
{ "authors": [ "mike-solomon" ], "repo": "openrewrite/rewrite-recipe-markdown-generator", "url": "https://github.com/openrewrite/rewrite-recipe-markdown-generator/issues/147", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1602106451
RSPEC-6104 - Map computeIfAbsent() and computeIfPresent() should not be used to add null values. See https://rules.sonarsource.com/java/RSPEC-6104 @timtebeek I can try this one
gharchive/issue
2023-02-27T23:17:48
2025-04-01T06:45:16.090876
{ "authors": [ "yeikel" ], "repo": "openrewrite/rewrite", "url": "https://github.com/openrewrite/rewrite/issues/2902", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
927495326
TabsAndIndentsVisitor sets the wrong column level for Tabs to Spaces. Discovered while running the SaaS on MorganStanley repos. The code base has a mix of Tabs and Spaces, and I'm not sure what the exact cause is yet. But from the look of the output, the convention is set to Spaces, but the indent seems to be set based on the number of tabs. In some cases the column is set to 0 and the entire file is aligned to the left. MorganStanley repos were ingested a long time ago, and may be ignored for now.
gharchive/issue
2021-06-22T17:56:50
2025-04-01T06:45:16.092256
{ "authors": [ "traceyyoshima" ], "repo": "openrewrite/rewrite", "url": "https://github.com/openrewrite/rewrite/issues/696", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1620468236
Reader Package Deprecation Implementation (npm) FIXES #15 Approach: Will try to implement all the methods that use the reader package and then refactor and deprecate the reader's methods. npm implementation for the deprecating package (btw, we expect some tests to still fail as we are repairing them). That is fine, but I want to make sure we strip the module. Ok, so there are two occurrences of the reader package, yarn and npm. I can make one PR for yarn and then a PR for npm. Since I have made changes in npm right now, this PR is now in the context of npm. Thanks.
- [ ] Squashed previous commits into major commit
- [ ] Added error messages
- [ ] npm methods done, reader import removed.
@puerco Can you give it a look? @puerco @nishakm Can you please give it a look? Can someone please review this PR? Hi @Ash-KODES, would it be possible for you to rebase your changes so we can test it? Sure @nishakm. @puerco @nishakm Please review, I have rebased it. Hey guys @nishakm @puerco, I want to ask a few questions regarding these projects. Do we have a channel, or how can I connect with you guys? Thanks & Regards
gharchive/pull-request
2023-03-12T18:42:11
2025-04-01T06:45:16.143851
{ "authors": [ "Ash-KODES", "nishakm", "puerco" ], "repo": "opensbom-generator/parsers", "url": "https://github.com/opensbom-generator/parsers/pull/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1095112728
got error dial tcp: lookup host.docker.internal: no such host Is this address unusable? It depends on the docker version; if it cannot be reached, just fill in the IP of the host machine on the docker network. It is models reporting the error - models.Setup failed; the database settings are wrong. Filling in the service name defined in docker-compose also lets you reach the specified container service.
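As a quick way to check which names actually resolve from inside the container, a small Python sketch (the service name "mysql" is an assumption - substitute whatever service is defined in your docker-compose file):

```python
# Probe name resolution inside the container: host.docker.internal vs. a
# compose service name. Unresolvable names reproduce the "no such host" error.
import socket

for host in ("host.docker.internal", "mysql"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as err:
        print(host, "-> unresolved:", err)
```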
gharchive/issue
2022-01-06T09:08:50
2025-04-01T06:45:16.168386
{ "authors": [ "GargantuaX", "daleiyinsi" ], "repo": "openscrm/api-server", "url": "https://github.com/openscrm/api-server/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
325439562
Installation of OpenSDS through docker-compose hangs Is this a BUG REPORT or FEATURE REQUEST?: Uncomment only one, leave it on its own line: /kind bug /kind feature What happened: I was following steps in the opensds installation guide for using docker-compose to install. It seems to work but keeps timing out. Not sure if it really did start up and the docker-compose just didn't finish, or if it's really trying to wait for something that isn't happening. Let me know if there's something else I can provide to give more detail. Thanks. sudo docker-compose up bill_osdsdb_1 is up-to-date bill_osdslet_1 is up-to-date bill_osdsdock_1 is up-to-date Attaching to bill_osdsdb_1, bill_osdslet_1, bill_osdsdock_1 osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5 osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620 osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6 osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{} osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040 osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64 osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1 osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380 osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050 osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379 osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000 osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379 osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380 osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380 osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdb_1 | 2018-05-22 19:16:00.608813 I 
| etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32 osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0 osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1 osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided] osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10) osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1 osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2 osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2 osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2 osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2 osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32 osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3 osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged! 
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3 osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3 osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02 osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01 osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02 ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information. If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60). root@ubuntu:~# COMPOSE_HTTP_TIMEOUT=300 root@ubuntu:~# sudo docker-compose up bill_osdsdb_1 is up-to-date bill_osdslet_1 is up-to-date bill_osdsdock_1 is up-to-date Attaching to bill_osdsdb_1, bill_osdslet_1, bill_osdsdock_1 osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5 osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620 osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6 osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64 osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1 osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{} osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040 osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380 osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379 osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000 osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379 osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380 osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380 osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32 osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! 
```
Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
... (the same two "discovered pool sample-pool-01/02" lines repeat every minute through 12:33)
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
```

```
root@ubuntu:~# sudo COMPOSE_HTTP_TIMEOUT=300 docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdsdock_1 is up-to-date
bill_osdslet_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdsdock_1, bill_osdslet_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
... (same etcd/raft startup sequence as above, plus etcdserver settings: data dir = default.etcd, heartbeat = 100ms, election = 1000ms, snapshot count = 100000, advertise client URLs = http://localhost:2379, initial cluster = default=http://localhost:2380)
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
... (the two "discovered pool sample-pool-01/02" lines repeat every minute through 12:39)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 300).
```

What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Hotpot(release/branch) version:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:

Ubuntu 16.04

Hi @wjeiv , can you also update the status of this issue? Thanks!

Hi @wjeiv , I'll just close this issue for now, please be free to reopen it if you still have any question : )
gharchive/issue
2018-05-22T19:53:41
2025-04-01T06:45:16.238256
{ "authors": [ "leonwanghui", "wjeiv" ], "repo": "opensds/opensds", "url": "https://github.com/opensds/opensds/issues/399", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
328967881
Overlay over overlay selection

How to reproduce the issue:
1. Create a small overlay (using addOverlay()).
2. Add a MouseTracker() to the small overlay element.
3. Create a large overlay over the first small overlay.
4. Add a MouseTracker() to the large overlay element.
5. The first small overlay can no longer be selected or deleted (via the top-right x).

Is there a way to configure OpenSeadragon to check for all overlays regardless of the creation / depth ordering?

Can you make a CodePen (or similar) to illustrate this issue?
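A minimal sketch of the reproduction described above, assuming a viewer instance and two positioned elements (the variable names and coordinates are placeholders, not from the original report):

```js
// Small overlay with its own MouseTracker
viewer.addOverlay({ element: smallEl, location: new OpenSeadragon.Rect(0.2, 0.2, 0.1, 0.1) });
new OpenSeadragon.MouseTracker({
    element: smallEl,
    clickHandler: () => console.log('small overlay clicked')
}).setTracking(true);

// Larger overlay added afterwards, covering the small one
viewer.addOverlay({ element: largeEl, location: new OpenSeadragon.Rect(0.1, 0.1, 0.4, 0.4) });
new OpenSeadragon.MouseTracker({
    element: largeEl,
    clickHandler: () => console.log('large overlay clicked')
}).setTracking(true);

// Because the large overlay is added later, it sits above the small one
// and intercepts its pointer events, matching the reported behavior.
```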
gharchive/issue
2018-06-04T09:03:46
2025-04-01T06:45:16.241383
{ "authors": [ "AlexDarbyFujitsuGmail", "iangilman" ], "repo": "openseadragon/openseadragon", "url": "https://github.com/openseadragon/openseadragon/issues/1476", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1610373425
Work in progress: Refactor drawing code to allow plugin renderers, with three.js WebGL drawer as a demo

This draft/WIP PR is related to conversations had in https://github.com/openseadragon/openseadragon/issues/2294 about the benefits of using webgl for rendering in some circumstances (nearly all?), rather than context2d. A demo page showing the changes is available when built/testing locally at http://localhost:8000/test/demo/webgl.html

Rather than jumping to a pure webgl implementation built-in to OSD, as a first step my approach was to allow a custom drawer (rendering) implementation to be passed into the viewer as an option on construction. A few things had to change to enable this. The context2d drawing operations in core OpenSeadragon have been consolidated from the TiledImage and Tile classes into the Drawer class, which inherits from a new DrawerBase base class. The TiledImage and Tile classes now handle the logic of managing tile positioning and image data, with cleaner separation from the details of the rendering process. DrawerBase defines a public API that core OpenSeadragon code uses to interact with the drawer implementation. To use a custom drawer/renderer, define a new class that inherits from DrawerBase and implements the public API. The constructor of this class can be passed in during construction of the viewer using the new customDrawer option.

My goal while refactoring core OSD (to establish separation of functionality, with tile fetching and positioning being separated from drawing instead of mashed all together in various places) was to leave the existing drawing pipeline unchanged. However, a lot of logic and lines of code needed to be moved around! Hopefully the various tests and demos can help ensure nothing was inadvertently broken. I tried to leave all existing "public" functions unchanged in signature and functionality (where possible).

In addition to testing and review, there are a lot of parts of this that will need to be modified before it could be merged. E.g. the webgl renderer I've created is included in this PR for simplicity of testing, but ultimately would need to be a separate project and would serve as a plugin. Similarly, I've included three.js source code in the lib folder, and this will need to be removed. All that said, I wanted to get this out so people can start testing it out, thinking about it, and to get a conversation going.

Very cool!! You also probably don't want to include package.lock in the PR...

I also don't know any more about the correct way to create a built-in webgl renderer. Lately, I was thinking maybe a minimalistic implementation would be better, and let plugins handle more features; people would probably go to a three.js renderer anyway if they need to render stuff flexibly. I am all for breaking changes in this implementation (related to the low-level API), simply because if someone is using such functions related to Tile drawing etc., then they are going to need to adjust their libs sooner or later. Keeping compatibility with all the drawing calls that are IMHO design-wise misplaced would only pollute the library, and 99% of use cases do not call them. Optionally we could create a DeprecatedTile class to temporarily allow keeping the old API manually, but honestly, this IMHO needs a clean cut.
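Going back to the customDrawer mechanism from the PR description, a rough sketch of what a plugin drawer could look like. The only method the discussion confirms is a draw entry point; every other name and signature here is an assumption, not the PR's actual API:

```js
// Hypothetical plugin drawer; DrawerBase and customDrawer come from the PR,
// the draw() argument list is assumed for illustration.
class MyCustomDrawer extends OpenSeadragon.DrawerBase {
    draw(tiledImages) {
        // Render each TiledImage's loaded tiles with any technology
        // (context2d, WebGL, SVG, ...) using the positions that core
        // OpenSeadragon has already computed.
    }
}

const viewer = OpenSeadragon({
    id: 'viewer',             // placeholder element id
    tileSources: 'image.dzi', // placeholder tile source
    customDrawer: MyCustomDrawer
});
```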
I agree, I would be in favor of cleaning this up too. Especially those methods that are not part of the officially documented public API but which hadn't previously been marked as private seem like it would be OK to modify as part of a big breaking-change release like 5.0.0. Getting rid of previously deprecated methods would also be reasonable to me.

> Looking at the implementation, DrawerBase and Drawer would need some polishing.

Yes, there are quite a few parts of this that need polishing. Before going further, though, I want to have a discussion including input from maintainers about what approaches we should take at this point. I don't want to make huge changes away from the existing structure, and then have to try to add them back in later.

I understand; still, I want to have it written here explicitly so it won't be forgotten.

This is very exciting! I agree with the general direction and the discussion here. I'll try to have more detailed thoughts to share as we go along. One thing that surprises me is that none of drawTiles is being used for the WebGL version. In retrospect, it makes sense, especially because you're using Three.js, which is a scene-graph-based renderer, not an immediate-mode renderer like canvas is. That said, I hope we can come to a state where as much logic as possible (or at least sensible) can be shared between the drawers, so when we want to add new features, we don't have to do it separately for each drawer. Who knows, there may be nothing like that where it makes sense to share, but we should at least look for such commonalities. As for making a pure WebGL renderer that doesn't use Three.js, I know it's a bit daunting, but it shouldn't be too terrible. This is a good starting point: https://webglfundamentals.org/

> This is very exciting! I agree with the general direction and the discussion here. I'll try to have more detailed thoughts to share as we go along.

Thanks!

> I hope we can come to a state where as much logic as possible (or at least sensible) can be shared between the drawers, so when we want to add new features, we don't have to do it separately for each drawer.

Yeah, that would be nice. When I was getting started I thought perhaps a more direct port to webgl would be easier. However, at least the way I was envisioning it, the approach would only easily work for fully opaque images. The problem I was running into is that when you add the possibility of transparency, you can't just add a new texture with the higher-res data - you need to be able to remove any existing image data from the same TiledImage at the location where the new tile needs to be drawn. Otherwise, the lower-res layers will be visible underneath and mess up the image. Fixing this problem would mean having to add a hole somehow into the geometry of the lower-res tile. The context2d drawer does this with clearRect, which is the root of the ongoing seams issue for rotated images with transparency. Probably doable in webgl, but it is not something I know how to implement without digging into it a lot further. If anyone has ideas on how to implement this, I'd love to hear them!

Treating each TiledImage as a scene graph poses its own challenges - and may not be ideal - but it was a more tractable problem for me. My goal was to create a proof-of-principle implementation more than anything else, to show that it fixes problems with stitching artifacts and can be used for all existing features. I'm certainly not opposed to a more direct mapping of existing drawing logic, as long as it works well and avoids artifacts in the images.
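For readers unfamiliar with the clearRect "hole punching" described above, a simplified illustration (not actual OpenSeadragon source; position/size are placeholders):

```js
// Before blending a higher-res tile into a semi-transparent image, the
// destination region is cleared so lower-resolution levels don't show
// through. Under rotation, the cleared rectangle is rasterized on output
// pixel boundaries, which is where the seam artifacts come from.
context.clearRect(position.x, position.y, size.x, size.y); // punch a hole in lower-res data
context.drawImage(tileCanvas, position.x, position.y, size.x, size.y);
```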
I'm currently working on cleaning up the core code further and making the internal low-level class APIs more explicitly private and streamlined. I'll push the new changes up to the PR soon, hopefully.

@iangilman I want to create the WebGL renderer; I will just have time in June at the earliest, unfortunately. I have pretty much most of the implementation (including fallback support for WebGL 1.0) in my custom renderer; the only thing I would have to modify would be the vertex shader. Reusing it would be IMHO a good idea because I have the interfaces ready for a custom shader rendering implementation for tiles, and it's polished code. It would need some feature trimming, but (with the exception of the vertex shader) it can be pretty much plugged into the drawing routine right away.

> I want to create the WebGL renderer, I will just have time in June at the earliest, unfortunately.

That's awesome that you have something nearly ready to go! June will be here before we know it.

Not sure there wouldn't be any seam problems, since it has to do some aliasing on the borders of the regions. I guess we have to try it out.

Depending on the implementation, I could see this working or not. If this is done in output pixel space - as is done by the current drawer - I wouldn't be surprised if there are tiling artifacts despite it being done via webgl. Composing the tiles in pixel-perfect image space, and letting the viewport transform matrix calculate output pixel values by accounting for data from all tiles, avoids this.

As I said, the only thing I am concerned with is loading a bunch of textures to the GPU each animation frame.

This is also something that might, or might not, be an issue with the approach of directly mimicking the current drawing pipeline. But it might work fine! Lots of things to test out.

> I hope we can come to a state where as much logic as possible (or at least sensible) can be shared between the drawers, so when we want to add new features, we don't have to do it separately for each drawer. Who knows, there may be nothing like that where it makes sense sharing them, but we should at least look for such commonalities.

I've been thinking about this more. I think an important principle is to keep two major components of the pipeline as separate as possible:
1. the logic of how the images/tiles are positioned, and ideally composited
2. the details of actually rendering the pixels to the screen

In the existing Drawer, virtually all of drawTile has to do with figuring out how to use context2d operations to get a rectangle of image data into the right place - not within the world, just within the canvas. That, to me, is an implementation detail specific to the rendering technique, and it doesn't make sense to share it with renderers that don't make use of it. In fact, a ton of logic in the existing code is wrapped in if( this.useCanvas ){} blocks, because it only makes sense for one of the two current rendering strategies - the HTML approach doesn't use it at all. Along those lines, I don't think it's a great design to have both approaches lumped into a single implementation, because it makes the code messier and harder to follow. Therefore, I'm working on separating the two methods out into CanvasDrawer and HTMLDrawer, each of which inherits from DrawerBase.

I had moved blendTile into Drawer, but now I'm realizing it probably makes more sense to move that back to TiledImage (or even Tile?), since that's a detail about the desired state of each tile that is independent of the specifics of rendering.

I think it's hard to get away from the fact that different drawer implementations will inherently support or not support specific features - like how the HTML drawing method doesn't support rotation, or how the CanvasDrawer would find it hard to support a 3D "page turn" transition (for sequence mode) that a Three.js-based implementation might fairly easily enable. Keeping logic about desired state in Tile/TiledImage and leaving only the rendering details to the Drawer(s) should minimize duplication of code across drawer implementations. Making use of inheritance in the Drawer class hierarchy to share methods where it makes sense to do so could also be useful. Just thinking out loud... thanks everyone for the ideas and discussion and help with implementing this change!

I will have a look next week when I will be less busy. I just want to drop a note that I would like to have this flexibility also reflected in the OpenSeadragon configuration - e.g. instantiate the viewer with a config subobject which is dependent on what rendering engine is used, i.e. refactor the configurations as well. Some might be solely drawer-dependent. E.g., debugMode could move to this new configurator so that we could do new OpenSeadragon({drawer: 'canvas', drawerOptions: {debugMode: true}}):

| Name | Type | Attributes | Default | Description |
| --- | --- | --- | --- | --- |
| drawer | Object / String | | 'canvas' | -- |
| drawerOptions | Object | | | Drawer-dependent options object. Options supported are dependent on the used drawer engine. |

Something along these lines.

> I want to create the WebGL renderer, I will just have time in June at the earliest, unfortunately. I have pretty much most of the implementation (including fallback support for WebGL 1.0) in my custom renderer, the only thing I would have to modify would be the vertex shader.

Wonderful! And yeah, we've been waiting 10 years so far (#68 was filed April 16, 2013), so what's a few more months??

> I think an important principle is to keep two major components of the pipeline as separate as possible: the logic of how the images/tiles are positioned, and ideally composited; the details of actually rendering the pixels to the screen

Yeah, I think you're right! It just struck me by surprise at the beginning. One issue I have with your current Three.js implementation is that the draw function itself doesn't actually do anything; instead it's handled by various events. Is that fundamental to the approach, or just something you did for expedience?

> I'm working on separating the two methods out into CanvasDrawer and HTMLDrawer, each of which inherit from DrawerBase.

Awesome, yes! I see there's still a Drawer class… I assume that's going away?

> I think it's hard to get away from the fact that different drawer implementations will inherently support or not support specific features

True. We need to get better at making the distinctions clear, something we're currently doing very badly with the HTML renderer.

> Instantiate the viewer with a config subobject which is dependent on what rendering engine is used

Seems like this could be a good way to draw attention to the fact that the different drawers are capable of different things. Of course we'll need to keep the old flags alive for a while for backwards compatibility. I wonder, might we end up needing something like this at the TiledImage level as well? We may want to engage features that only work in some drawers on a per-image basis (like we already do with rotation, for instance). Sounds messy, though... I'm concerned it would push implementation details into the API too much. Why should people have to remember that rotation uses a different API because it's drawer specific? Maybe there are other ways to make these distinctions clear. Now I'm leaning back towards maybe not having a drawerOptions, or at least being very specific about what goes in it.

> One issue I have with your current Three.js implementation is that the draw function itself doesn't actually do anything; instead it's handled by various events. Is that fundamental to the approach, or just something you did for expedience?

@iangilman I did it that way originally so that I could directly compare the original rendering pipeline to the new one (as in the demo page), and using events to trigger the second renderer to draw the scene was more straightforward than modifying core code to use two drawers and two canvases. Eventually I think it would make sense to move away from events and back towards using the draw function and other aspects of the API, now that things have matured a bit in the design/refactoring process.

> I see there's still a Drawer class… I assume that's going away?

Yup! It is gone now. I was just keeping it around as an easy reference while cleaning up CanvasDrawer.

What would you think about adding getter/setter methods for each drawing-related config option into DrawerBase, which by default generate a warning (or error?) to the console if a feature is requested (as a truthy value, for example, but it could vary by property)? Child implementations would override the getter/setter for supported properties, which would make it explicit (from a developer perspective, at least) which options are supported or not. It probably wouldn't automatically help with the documentation, though...

> Eventually I think it would make sense to move away from events and back towards using the draw function and other aspects of the API, now that things have matured a bit in the design/refactoring process.

The latest version of the three-based renderer uses draw() to trigger rendering if it is the main renderer, and falls back to event handlers otherwise, to preserve the ability to mirror the main viewer in the demo.

> What would you think about adding getter/setter methods for each drawing-related config option into DrawerBase, which by default generate a warning (or error?) to the console if a feature is requested

That seems like a good piece of the puzzle, at least!

> The latest version of the three-based renderer uses draw() to trigger rendering if it is the main renderer, and falls back to event handlers otherwise, to preserve the ability to mirror the main viewer in the demo.

Cool :-)

I was just getting started on this, but it feels like something that would better be done post-release 4.1.0, when modern syntax options for this type of thing are (or, can be) enabled. I'll be happy to get back to it at that point.

I feel like the best way forward at this point is to wait for 4.1.0, handle some of the option documentation stuff, and then move forward with getting the refactoring part of the PR finished/merged (i.e. ignoring/removing the three.js-based plugin renderer and associated files). This will set the groundwork for @Aiosa to integrate webgl rendering mimicking the existing drawing pipeline (when there's time), or for alternative implementations that don't have external dependencies to be developed. In the meantime, it would add support for plugin renderers. @Aiosa @iangilman @msalsbery Thoughts?

@pearcetm Sounds like a plan! Thank you for getting this off to a great start! At this point, the big thing that's holding back 4.0.1 is https://github.com/openseadragon/openseadragon/pull/2287 (as you can see in https://github.com/openseadragon/openseadragon/milestone/13).

@iangilman I'm getting back to working on this now. I wanted to pull in the current version of the library to make sure the new fixes and features that have been implemented since I started working on this patch were not broken by these changes. I ran into some git difficulties and ended up doing a rebase to make it work right, but now the history is not exactly clean... I'm not sure what the best way to clean this up is. Maybe I need to create a new branch from master and merge this one with it? I'm open to suggestions...

@maxbogue I'm getting back to it now that 4.0.1 has been released and we can start using ES6 features. While the three.js implementation works well for almost everything, it still has the problem of external dependencies and will be a plugin instead of a built-in option. I still need someone to write a native webgl version that renders individual tiles (or who can provide some example code to get me started with that).

@pearcetm I'd prefer not to have things like rebases in PRs, just so I can look at each new set of changes without having to review all of the stuff that came before, but in a case like this where it was necessary (and besides, it's been so long I'll probably have to reacquaint myself with everything) it's fine. As long as the code at the head is where you want it to be, I don't think we need to worry about the history.

Patience, I am almost finished; just one month and a bit and I start working on it :D

It's pretty obnoxious to view/review changed files with all these other commits included. I created a new branch from master and merged the current changes. I have a PR on my fork here: https://github.com/pearcetm/openseadragon/pull/1 It would be nice if there was a way I could do the same thing for this branch, which would keep this PR and all the comments in place. I'm not really sure how to go about doing that, or if it's even possible. I could also close this PR and start a new one from the cleaner branch. Thoughts? @iangilman or anyone else...

Not sure why the build is failing here; the same branch is passing elsewhere and on my machine.

This PR will fix https://github.com/openseadragon/openseadragon/issues/2363. As part of this refactoring, I added logic to allow coverage by existing loaded levels that are higher than the otherwise-highest level that would be requested.
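Returning to the DrawerBase getter/setter idea discussed above, a minimal sketch. The property names and warning behavior are illustrative only, not an agreed-upon API:

```js
// Base class: requesting an unsupported feature produces a console warning.
class DrawerBase {
    get rotationSupported() {
        return false;
    }
    set rotationSupported(value) {
        if (value) {
            console.warn('This drawer does not support rotation');
        }
    }
}

// A child implementation overrides the accessors for features it supports.
class WebGLDrawer extends DrawerBase {
    get rotationSupported() {
        return true;
    }
    set rotationSupported(value) {
        // supported: nothing to warn about
    }
}
```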
@pearcetm Can you take your new branch and overwrite rotation-seams somehow? I'm afraid this is beyond my git/GitHub knowledge. I'd say it's certainly fine to start a fresh PR, though... Just link back to this one for the history.

I think I figured out how to do this. I reverted to the initial commit, pulled all the current changes from the upstream repo, merged the drawer-refactoring branch, and pushed to my fork on GitHub with -f. This seems to have done the trick; now the only changes showing up here are actually related to this PR :)

Great! :)

> I still need someone to write a native webgl version that renders individual tiles (or who can provide some example code to get me started with that).

I might be able to help with this! I have a basic knowledge of WebGL now after doing some work to run tiles through filters using fragment shaders.

The problem is that this is not just about implementing a renderer, but doing so in a flexible way:
- supporting custom data loaders (textures)
- supporting multiple data sources for a single layer
- supporting different WebGL versions
- lifecycle-based components you can bind to / override

I don't want to discourage, but this is not just about WebGL. The design is important too :)

I also started with 'viaWebGL' and it took me two years to get something usable out of it.

For whatever it's worth, I just completely rewrote viaWebGL in a couple weeks, commented every line of code in what was left, and got it doing exactly what I wanted. I wouldn't consider myself an expert, but I feel like at this point I could help @pearcetm figure out the solutions to those problems you listed.

That would be great! I'd appreciate your input. @Aiosa has also offered to help with this process.

> The problem is that this is not just about implementing a renderer, but doing so in a flexible way: supporting custom data loaders (textures); supporting multiple data sources for a single layer (basic usecase - drawing debug info, but also support for advanced rendering); supporting different WebGL versions; lifecycle-based components you can bind to / override

I agree all of this will be great. We can start incrementally though - a "simple" WebGL drawer without these features would be valuable as an option for some use cases, and the existing canvas drawer can be used in the meantime until more features are added.

My idea was skipping the incremental development, since I did that already three years ago and now I have a similar renderer at my disposal. The design considerations are just the renderer features, not some future goals :)

Understood! Sometimes other things get in the way though, and if real work doesn't allow enough time for you to get to it, we could make incremental progress in the meantime @Aiosa.

Ok. Btw, is the drawer class refactoring still on this branch, or already merged?
WebGL needs internally some considerations and in order to enable users to work internally with generic data type (which is supported officially as of 4.0), you need the power to do so. Don't worry, you basically say 'type XY is loaded onto GPU like this'. Should be modular enough. supporting multiple data sources for a single layer (basic usecase - drawing debug info, but also support for advanced rendering) Multiple data sources, including debug info, should be drawn as layers (or at least supported to be drawn as layers) instead of forcing it to be built-in to the rendering of the basic image data. Advanced rendering options, such as webgl filters etc, could possibly be done on a per-tile basis (like multiple plugins do now) but would largely be superior on a per-viewport basis, to avoid edge artifacts due to tiling (this is a main advantage of the webgl rendering). This still needs some design considerations but I would like to have the option for user to say 'I have several data caches for layer XY available. I want this data to arrive at the rendering as textures and I want to use this data in my own way.' This comes with the previous point: to be able to use arbitrary data amount per TileSource for rendering. Naturally, this could drive the debug info data rendering too. supporting different WebGL versions I would advocate for different WebGL versions being implemented as distinct drawer classes, which would make explicit what each one supports, and which can be selected individually or as part of a priority list (see current implementation). I was not sure whether detached renderers will be able to gracefully degrade, e.g. fallback to older version if not supported. I will have a look. My current implementation supports a priority list of versions and treats it similarly how CSS treats multiple font specifications. lifecycle-based components you can bind to / override Sounds good, but I'm not 100% sure what you mean by this... if it obeys separation of concerns and doesn't unnecessarily complicate the basic drawing process I think I'd be in favor. Shader communication has several stages. Events are good to some point, but here I think the ability to inherit from a generic 'Shader-like' class and provide custom logic is way more flexible. The idea is that the basic rendering is driven by a shader base class. If you want custom drawing, you register your renderer and override things you need to change. The shader layer class has some additional OSD GLSL API available that basically implements proxy for GLSL version agnostic implementation. This is the second reason why I want to keep this in a single system - the renderer can share a lot of code and only provide proxy functions here and there. Then your shader 'just work' on any machine, regardless of the version (possibly with the limit of texturing units available). That's what I have right now. I could start looking at the changes, and start working on a prototype around 10.7. then we could use it as a basis for further discussions... Brief update - I'd dived into getting a webgl renderer going. It's nearly working! I'm holding off on making a commit until a couple more things get sorted out. @pearcetm Exciting! I still have to read through the latest version... I'll get around to it one of these days! I'm delighted at the forward progress though. 😊 I still have to read through the latest version... 
@iangilman That's probably good thing, and it's why I mentioned the incoming changes - there are lots of new updates in the latest commits, including reverting some of the previous changes that weren't needed anymore. This is turning into a beast of a refactor-enhancement PR - apologies for the amount of changes to review! There were a ton of interconnected parts related to drawing vs world composition that really needed to be untangled. Then, a bunch of stuff that was only tangentially related to the actual PR came up while testing. I'm happy to clarify changes as the review progresses... I'll try to remember why I did things along the way :) Since there's a functional built-in webgl drawer now with no external dependencies I've gotten rid of the three.js-based plugin renderer and some associated stuff. I'm happy to remove more changes that were originally intended to support that, if they are no longer needed. For the purposes of testing at least, I've made the default renderer the new webgl version, so all the demo pages should use it. Drawers can be selected in the viewer options as a simple string (i.e. webgl, context2d or html), as a constructor (e.g. OpenSeadragon.WebGLDrawer) or as an array of drawers in priority order (e.g. the default value of ['webgl', 'context2d', 'html'] which are tried in order until one is supported by the browser. (n.b. I'm not sure of a scenario where support for webgl and context2d would be different, but I think the mechanism is valuable for future uses too.) The current version uses webgl to compose the tiles for each TiledImage but doesn't composite those together (when there are multiple images), and instead uses existing context2d compositing (and clipping/cropping) methods, to maximize backwards compatibility. Likewise, the actual output canvas is context2d so downstream apps can access pixel data the same as they always do. For now at least, if a plugin and/or app wants to modify tile pixel data on the fly ( ?only after the initial loading process?) it would require the drawerOptions.webgl.continuousTileRefresh: true option so tile data would be sent to the graphics card on each draw - without this, tile data is sent over when it's loaded and not updated per-draw for performance reasons. The to-do list still includes changes that support plugin webgl programs to modify the rendered scene before it's drawn to the output canvas. I'm happy to add to this list as well, either as part of this PR or future improvements. I can confirm based on testing so far that, as anticipated, composing tiles with webgl gets rid of seams and all related tiling artifacts :) Note that opacity is currently disabled. I need to figure out how to deal with tile overlap still. I've figured out how to avoid drawing the overlap regions of tiles, so opacity works now as it should. I haven't dealt yet with this bit of code in TiledImage: https://github.com/openseadragon/openseadragon/blob/ffbd8f985a4dab4fdde855458383f50f6973eb7a/src/tiledimage.js#L1847-L1849 I feel like introducing overlap when there is none in the underlying tile data will cause subtle distortions/artifacts when rendering with webgl, so I wonder if we should make this conditional on which drawer is in use. I need to find a test image with zero overlap to work with still - perhaps one of the existing demo data files will work, I haven't looked yet. Suggestions are welcome. I renamed the demo file to drawercomparison.html (http://localhost:8000/test/demo/drawercomparison.html) just FYI for anyone testing this out. 
Another issue I'm wrestling with is that there are lots of warnings being generated in the tests because of too many active WebGL contexts, but calling viewer.destroy() causes other errors in some of the tests. The current version uses webgl to compose the tiles for each TiledImage but doesn't composite those together (when there are multiple images), and instead uses existing context2d compositing (and clipping/cropping) methods, to maximize backwards compatibility. Likewise, the actual output canvas is context2d so downstream apps can access pixel data the same as they always do. Interesting! Does that require multiple canvasses or can you have both 3D and 2D contexts on the same canvas? The benefits to this seem great if it can work. I'm curious about the performance... Would it be faster to do it all in WebGL? I feel like introducing overlap when there is none in the underlying tile data will cause subtle distortions/artifacts when rendering with webgl, so I wonder if we should make this conditional on which drawer is in use. Yes! Getting rid of the forced overlap is definitely on the wishlist for the WebGL drawer! Another issue I'm wrestling with is that there are lots of warnings being generated in the tests because of too many active WebGL contexts, but calling viewer.destroy() causes other errors in some of the tests. I understand WebGL contexts are potentially challenging to clean up. What sorts of problems is destroy causing? Perhaps worth fixing anyway. I'm playing with the demo... The new drawer seems nice and solid! Very cool 🎉 The only issue I've found is this one with the canvas drawer: I actually don't know if that's already an existing bug in the 4.1.0 canvas drawer. If it is, I wouldn't consider fixing it a priority, but if it is a new bug, it's worth addressing. Looks like webglfiltering.html is currently broken. What's the purpose of test/demo/webgldemodrawer.js? Looks like webglfiltering.html is currently broken. Yes, that's a work in progress still. I'm trying to figure out how to support adding filters and overlays, which is not entirely trivial - though hopefully once I get an API sorted out it will be closer to trivial to implement them :). What's the purpose of test/demo/webgldemodrawer.js? It lets me iterate on code changes faster because I don't have to wait for grunt to rebuild the library including eslint etc every time I save changes. Once things are settled I move the updated code into the file in src. I actually don't know if that's already an existing bug in the 4.1.0 canvas drawer. If it is, I wouldn't consider fixing it a priority, but if it is a new bug, it's worth addressing. Wrapping works right with context2d for the first image (i.e. world.getItemAt(0)) but doesn't seem to work for additional images - whether or not there is rotation like you are trying here. I only noticed that this afternoon myself, coincidentally! I'm not entirely sure whether it works or not in 4.1.0 when you try to wrap additional tiled images... would be worth testing. If not, I agree its not a priority to fix - perhaps documenting the limitation would be of use though. Interesting! Does that require multiple canvasses or can you have both 3D and 2D contexts on the same canvas? The benefits to this seem great if it can work. I'm curious about the performance... Would it be faster to do it all in WebGL? 
It does require multiple canvases unfortunately (much like the "sketch canvas" in the context2d implementation) - as far as I know it's still not possible to get both context types from the same canvas. I'm not sure what the performance impact is. The call itself is very simple, you can pass a canvas (whether webgl or context2d) to canvas.drawImage directly and it will copy the pixels over, which I imagine would be pretty highly optimized by browsers. If you have a preferred way to test performance of this type of thing I'd be happy to look into it. Re: doing it all in WebGL, I think eventually that would be great. I don't think it is worth holding up a first version of webgl-based drawing to accomplish that, though. For example, unless someone has already released a library to replicate all of the context2d composite operations in webgl, that would be a huge undertaking. Handling clipping and cropping without relying on context2d operations would be more tractable, but still quite a bit of work to do. At least if clip and crop are not specified, one of the copy operations is skipped. Getting rid of the forced overlap is definitely on the wishlist for the WebGL drawer! Great! I'll look into conditionally skipping this step if the WebGL drawer is being used. I understand WebGL contexts are potentially challenging to clean up. What sorts of problems is destroy causing? Perhaps worth fixing anyway. I think the contexts are being cleaned up appropriately in destroy() - the issue is that destroying viewers after each test causes other errors like the following: Running that particular test on it's own (as the first test) does not cause problems. Relatedly (I think), destroying some of the viewers causes global MouseTracker errors when the tests are run in the browser: On the plus sized, by destroying all viewers, all the warnings about too many webgl contexts go away. :) I actually don't know if that's already an existing bug in the 4.1.0 canvas drawer. I played around in the debugger on the documentation site and found it's an existing bug: I also discovered it only happens when the image source.hasTransparency() == true. In the drawer comparison I defined that to be true even when the image itself does not, because it helped show the problem with tiling and seams more clearly. Turning off "transparency" in the image fixes the problem in the new canvas drawer: Those are all excellent points about using multiple canvasses... I think the benefits definitely outweigh the potential downsides! Especially being able to maintain compatibility with existing plugins and with things like all of the different transfer modes is appealing. I actually don't have a good way to test performance here. Unfortunately, WebGL performance tracking in the browser doesn't seem to be very robust. The best I know to do is to use an FPS counter (Chrome has one built in, but there are also libraries). You could make 100 viewers all stacked on top of each other in z-space and have them constantly animating. Anything to get the FPS below 60, because otherwise, you don't know what effect you're having. In addition to performance, the other possible concern with the multiple canvas approach is memory, especially on mobile. Again, I don't really have a good way to test it, though I do think Chrome has some memory tracking capabilities. Hopefully the smaller screen on mobile means the extra canvas isn't that big (especially compared to all of the image tiles). Thank you for looking at the wrap stuff... 
The fact that they don't work above the base layer sounds familiar. Nice that they work with transparency off! Maybe that means it has something to do with the sketch canvas? Anyway, not a big deal as long as we're not introducing new bugs. I think the contexts are being cleaned up appropriately in destroy() - the issue is that destroying viewers after each test causes other errors like the following: Interesting. Presumably, these are race conditions that have to do with destroying and re-creating so quickly, or something? It would be great to fix them if possible, since they might be bugs that come up in real life too. Of course, I realize that's not really part of doing drawer code... I suppose any big project like this is inevitably going to uncover other things like this. Thank you for continuing to charge forward on this! 😊 You could make 100 viewers all stacked on top of each other in z-space and have them constantly animating. Anything to get the FPS below 60, because otherwise, you don't know what effect you're having. Unfortunately, this straightforward approach of making a bunch of viewers wouldn't really work with webgl since browsers limit the number of contexts that can be active at any time (to 8 or 16). Thank you for looking at the wrap stuff... The fact that they don't work above the base layer sounds familiar. Nice that they work with transparency off! Maybe that means it has something to do with the sketch canvas? Anyway, not a big deal as long as we're not introducing new bugs. Yes, it definitely does have to do with the sketch canvas, specifically the size of it. Even though it is an existing bug, I'll probably try to look into it before not too long, as I suspect it is the same underlying problem that causes the issue recently raised in https://github.com/openseadragon/openseadragon/issues/2375. Speaking of, I enabled setting clipping in the demo for this PR. The behavior of the existing canvas drawer is quite funky when clipping and cropping are both used! Fortunately, the new webgl drawer already works correctly. Still, it'd be good to fix this at some point. Presumably, these are race conditions that have to do with destroying and re-creating so quickly, or something? It would be great to fix them if possible, since they might be bugs that come up in real life too. Of course, I realize that's not really part of doing drawer code... I suppose any big project like this is inevitably going to uncover other things like this. I haven't yet had a chance to look further into these issues, but I agree it would be nice to fix them if possible. Unfortunately, this straightforward approach of making a bunch of viewers wouldn't really work with webgl since browsers limit the number of contexts that can be active at any time (to 8 or 16). Good point! I suppose you could stress the system with a single viewer by just adding a ton of images to it. Yes, it definitely does have to do with the sketch canvas, specifically the size of it. Even though it is an existing bug, I'll probably try to look into it before not too long, as I suspect it is the same underlying problem that causes the issue recently raised in https://github.com/openseadragon/openseadragon/issues/2375. Speaking of, I enabled setting clipping in the demo for this PR. The behavior of the existing canvas drawer is quite funky when clipping and cropping are both used! Fortunately, the new webgl drawer already works correctly. Still, it'd be good to fix this at some point. Nice! 
Glad to hear the new drawer continues to solve bugs 😄 I am free to code BTW, shall I wait till this PR gets merged or...? I would like to contribute my own ideas as well as connect it with the design of the newer cache system I am in the progress of designing... @Aiosa That's great! I think it'll be a while still before this PR gets merged. I've never done this before, but if you forked my fork, would that let you integrate your changes into this PR by making a PR to my forked repo? I know you have a lot of fancy rendering stuff already working. What would you think of turning that into a separate drawer implementation, for now at least? class AiosasCoolDrawer extends DrawerBase{ ... } for example :) Ultimately we could consolidate implementations into a single version if that makes sense, or leave them as separate options if they do different jobs well. You could also start integrating with your changes to the cache as well. I would like to integrate into the WebGL drawer directly, possibly create a visualisation plugin atop of it. I was thinking about integrating only essential bits like texture loading utilities, and WebGL independency. I still think the webgl drawer should support version-less rendering in one instance and allow for custom shaders integration. Default shaders can be simple, and we can allow users to override them as they like. People then do not care about what drawer version they need, they just write a single thing that works everywhere. To do so, of course, we need to provide proxy functions for texture manipulation and color output (and possibly more, the time will show) so people can use GLSL + the proxy API to write versionless universal code. These two are crucial features and IMHO follow design of all major rendering libraries. @Aiosa What you're proposing sounds good, though I worry the API for this could become quite complicated. Hopefully we can keep it clean, simple, and easy to understand. I'm interested to hear your thoughts (and see your implementation) of how/where to add custom shaders. Composing the tiles correctly into the overall frame is something I think should be implemented in the core code and not have to be touched by any custom code. Part of the reason I proposed starting by creating a separate drawer is that I've been playing around a lot with the current WebGLDrawer still, working on cleaning up the code and trying to improve performance. I'll have another update pushed in the next couple of days, which will include a new demo page that shows frame rate with different numbers of images. @iangilman I've been looking into performance by following your suggestion of adding a bunch of images to a single viewer. It turns out that copying the image data from a canvas with a webgl context over to one with a context2d canvas is pretty costly. Even adding some optimizations to the drawing pipeline to reduce/minimize the number of gl.drawArrays() calls (not pushed yet, but will be soon), performance is worse than the canvas drawer. Without copying pixel data over per-image, performance is better. Perhaps it would be a good idea to have this be conditional on an option or something like that. Thoughts? @pearcetm Do you have a functioning version that doesn't copy over the pixels? Does that mean there's no 2D canvas and it's all done in WebGL? What are the downsides to that version? The big one I can think of is none of the canvas overlay stuff will work. Oh, and I guess it would require our own implementation of things like blend modes. 
@iangilman I have blend modes already implemented as in my visualisation I allow rendering of multiple layers within a single TileSource - not an issue, we got the code. I have blend modes already implemented as in my visualisation @Aiosa the issue of backwards compatibility isn't just blend modes generically, it's that all of the globalCompositeOperation values supported by 2D canvas would need to be fully implemented, or else it could lead to different behavior versus existing applications. @iangilman Maintaining the output as a context2d canvas allows the following: globalCompositeOperation, accessing pixel data for things like colorpickers and snapshot tools, and shape-based cropping. Plus, integration with existing canvas-based overlays. The existing debug info layer is drawn onto the canvas this way, even in the new webgl viewer currently. There may be other things I'm not thinking of as well. Yeah, but all the different blending modes are mostly modifications of a colour mixing linear equation, IMHO quite easy to do.. all the different blending modes are just some combinations of a colour mixing linear equation arguments, pretty similar and well documented stuff, IMHO quite easy to do There are libraries out there that try to implement many features of context2d in webgl contexts. https://github.com/jagenjo/Canvas2DtoWebGL does a lot of it, but doesn't support globalCompositeOperation. Neither does https://github.com/play-co/webgl-2d. I'm not opposed to incorporating some of these concepts into the drawer using webgl operations, but fully recapitulating everything is going to be a major undertaking, especially if we don't utilize external libraries. @pearcetm So is maintaining the output as a context2d canvas the thing that's causing the slowdown, or is it some other middle step? And have you implemented a version that doesn't include that, or are we just discussing it? Just trying to get a sense of the state of play. So is maintaining the output as a context2d canvas the thing that's causing the slowdown, or is it some other middle step? @iangilman It's complicated. Copying data to a 2D canvas contributes to the slowdown, but isn't the whole story. In the latest push, I have added a demo page where drawer performance can be tested, which can be found at http://localhost:8000/test/demo/drawerperformance.html I used this page to do some testing. When adding a lot (e.g. 100) of tiled images, if every tiled image needs to be copied individually over to the output canvas, that's 100 full-size canvas copies. That definitely contributes to decreased performance, but it isn't the only thing (e.g. 100 full-size draw calls in webgl, 1 per tiled image, is roughly as expensive). Only copying once at the end of the draw operation (once ALL tiled images have been rendered) doesn't have much of a performance hit at all. I changed the drawing pipeline a bit to improve performance in the following ways: minimize the number of actual drawing operations while composing individual tiled images by batching as many textures as the underlying webgl implementation allows into a single draw, only copy to the 2D canvas output layer when needed (e.g. 
if cropping polygons are defined) or after all tiled images have been drawn, implementing a two-pass rendering pipeline in webgl instead of relying on the 2D canvas for that, and only use two-pass rendering when actually needed (if an image doesn't have transparency or other special cases, tiles can just be drawn to the rendering layer directly) These changes have improved performance quite a bit. Different combos of requirements based on how images are selected can lead to the webgl drawer having either better or worse performance than the canvas drawer. Overall I think they're roughly similar at this point. And have you implemented a version that doesn't include that, or are we just discussing it? I briefly modified the current version of the WebGLDrawer to not copy data to a 2D canvas just to test out performance. That wasn't a full implementation though, just skipping the final output step. It would be possible with small tweaks to allow the drawer to be configured to skip the copy to 2D canvas. This does improve performance (at least, when the viewer has tons of images) but then the drawer would not support all the functions we've discussed already. Memory usage could also be improved a bit by not having an extra canvas. When adding a lot (e.g. 100) of tiled images, if every tiled image needs to be copied individually over to the output canvas, that's 100 full-size canvas copies. I should note that this test involved all of those stacked on top of each other and semi-transparent so when zoomed in it really would require full-size draw operations. Thinking about this more, though, a more typical scenario would be many small non-overlapping (or minimally overlapping) images. A further optimization would be to only draw the bounding box of the tiled image in pixel space when doing both the second-pass in webgl and when copying over to 2d canvas. Noting this idea here for future reference. @pearcetm Those all sound like good optimizations. I imagine there are also things throughout the system that could be done to improve performance, not just the drawing itself. Do you have a sense for what percentage of the frame rate is based on actual drawing (versus all the other bookkeeping the system does)? The drawer performance page is great! Surely there are other scenarios that could be tested, but this is a good one, and it's a huge jump over what we've had before. Based on my testing, it looks like the WebGL pipeline is a little faster than canvas, but not a huge amount. That seems like a fine starting point. And yeah, of course I'm all for finding the right balance where we can continue to support all of the existing features and extensibility. Seems like you're doing a great job of that! @pearcetm I started to program then my own renderer with the inspiration from your implementation. I go with WebGL 2.0, modify my renderer, first, and see where I get. I also add comments to the sourcecode where I notice something worth discussing. Update: I have already a first working version (did not try cropping, drawing debug info or other advanced features yet)... @pearcetm tile positioning looks good, was able to reuse it pretty nicely @Aiosa Awesome! I look forward to seeing what you end up implementing. Let me know if anything comes up that would be good to discuss or if you have questions. @Aiosa Awesome! I look forward to seeing what you end up implementing. Let me know if anything comes up that would be good to discuss or if you have questions. 
I left some comments up in the code review, so you can have a look :) Should I push my stuff to this PR somehow? I made a few changes here and there (like exposing matrix implementation outside for reuse), not sure how and whether to push it here Can you target this branch with a PR with the changes you made? I left some comments up in the code review, so you can have a look Not sure where I should be looking for these, when I scroll up I don't see new comments Here (latest) and up. I prefer writing it directly on the reffered code. Can you target this branch with a PR with the changes you made? I will try. I will also include my webgl version but it still needs a lot of trimming optimization etc... Here (latest) and up. I prefer writing it directly on the reffered code. I don't see any code review by you. Maybe it's hidden from me for some reason? I don't see any code review by you. Maybe it's hidden from me for some reason? That would explain why reviews from March are still without a response :D Hmm I have to actually submit it to be visible for anyone. Good to know. Some might be outdated :) That would explain why reviews from March are still without a response :D @Aiosa Indeed! I wasn't ignoring you! But now many of those comments are quite outdated so I'm ignoring some of them. Great to see all this productive conversation! I'm keeping up with it, but don't have much to add at the moment. One thought: Looking at https://caniuse.com/webgl2, it seems we should be fine considering WebGL 2 our bottom-end. All the current browsers support it. We don't support IE, and Opera Mini doesn't have any version of WebGL. @iangilman even with the newest browsers, I had a laptop on which WebGL 2.0 just did not run and 1.0 was used instead. Idk why, but I would keep 1.0 compatibility - it's not necessarily just a browser thing. I imagine there are also things throughout the system that could be done to improve performance, not just the drawing itself. Do you have a sense for what percentage of the frame rate is based on actual drawing (versus all the other bookkeeping the system does)? @iangilman From what I've seen so far, the non-drawing parts of the calculations do play a significant role. These numbers are just ballpark, but when I was playing around a couple of weeks ago, with 100 tiled images in the viewport (zoomed in, ~1500 tiles or so), when I skipped rendering entirely the framerate dropped from 120 fps to ~60fps. Adding the rendering back in and it dropped further to 40-50fps. These numbers aren't necessarily to be trusted though. Perhaps the performance demo could be modified to try to keep track of the different parts of the pipeline to give a better idea. Even with the newest browsers, I had a laptop on which WebGL 2.0 just did not run and 1.0 was used instead. Idk why, but I would keep 1.0 compatibility - it's not necessarily just a browser thing, it depends on the device too. Excellent point! From what I've seen so far, the non-drawing parts of the calculations do play a significant role. Good to know. Yeah, if the performance demo could help shed light on this, it would be great. Don't get distracted optimizing the rest of the pipeline, though... You've got plenty on your plate making the WebGL drawer! If you do run across things that seem worth addressing in the future, though, please file issues as you find them. 😊 Yeah, if the performance demo could help shed light on this, it would be great. 
@iangilman I added a couple of extra panels to the performance monitor, which capture separately the time to perform the viewer.world.update() and viewer.world.draw() stages. The output is "milliseconds per second" - that is, in the last 1 second, how many milliseconds were spent on each operation, cumulative over all frames that were drawn. @pearcetm Great! Wonderful to have this window into the performance. On my computer with 200 images I'm seeing the drawing drop from 380 to 170 when I go from Canvas to WebGL. In both cases the update is around 240. Oddly, though, with Canvas I'm getting 40fps and with WebGL I'm getting 30. Perhaps some of the WebGL drawing isn't showing up in the "draw" stats? I know that some of the GPU processing is hard to nail down the timing on, since it doesn't block the JavaScript. I also noticed that the FPS dropped even further if I let it run for a while on WebGL. I haven't tried that on Canvas. @iangilman Thanks for taking a look. I just pushed an updated demo with a different performance monitor for further granularity. I also made some changes so it's more readily reproducible and less variable. Perhaps some of the WebGL drawing isn't showing up in the "draw" stats? I know that some of the GPU processing is hard to nail down the timing on, since it doesn't block the JavaScript. I agree with you - that's what I think is going on too. In the updated demo, I added an "other" category that measures from when the javascript draw() call returns until the beginning of the next update() call. With lots of images, the ratio of time spent in "other" drops to nearly zero in the canvas drawer while the draw and update steps grow. In contrast, for the webgl drawer, "other" gets large instead of draw. I haven't come across a way to directly measure how much time it's taking for the GPU side of things to complete. I think we can infer somewhat based on "other" however - there's minimal extra processing on the javascript side during this time. Under high-FPS scenarios, "other" just captures however long it takes until the next frame (from requestAnimationFrame) is rendered. It's interesting, on my machine, webgl FPS seems to correlate pretty consistently with the number of images, but canvas performance is quite weird - e.g. 150 images is somehow worse FPS than 180 or 200 images... I also noticed that the FPS dropped even further if I let it run for a while on WebGL. I haven't tried that on Canvas. That's strange. I just let webgl run for over an hour with 800 images loaded and the FPS stayed totally constant. I haven't seen time-based changes in canvas either, though I haven't let it run that long. One nice thing about the new demo page is that it shows the number of webgl resources in use. Those also don't change over time (which is comforting :) ) Wow, fancy! Great to have all this info! Where did the code come from? I'm assuming you borrowed some of it from somewhere, but I didn't see a link (I may have just missed it). Having the "other" is great... I agree that's probably the WebGL processing. I'll take another look at letting it run. I had it in the background for a while so it may just have been getting throttled or something. I've been running it for a while now and so far no change, so it may have just been a glitch. Thank you for going down this road! Good to get some real numbers on the perf. Now that we know what we know, I'm not sure what the next steps would be.
Perhaps just return to finishing up the refactor, and we can focus on perf more in the future? @iangilman I added attribution in the demo source, but your comment made me realize I hadn't pushed those changes to github yet. Pushed now. It's from https://github.com/spite/rstats. I think where we stand right now is that the webgl drawer solves certain problems nicely (seams, tiling artifacts) but may come with a (relatively small overall) FPS performance penalty relative to the canvas drawer in certain circumstances. I think for a first pass this would be nice to have available even if it isn't more performant (yet) than the canvas implementation, so I agree with returning to finishing up the refactor. Maybe @Aiosa can come up with a way to further streamline the pipeline... We also need to figure out why the tests are failing. @Aiosa has added the beginnings of a feature-rich webgl implementation. There are still a number of parts that sound like work-in-progress. See https://github.com/pearcetm/openseadragon/pull/4 for a complete description. I've merged it into this PR, for now at least, to enable testing and to see how it fits into the overall framework. We can pull it back out into a separate PR in the future if we want to just wrap up the refactor and basic webgl drawer here first. As for whether to merge @pearcetm's implementation soon vs figuring out how to integrate and finish up @Aiosa's, I'm fine with waiting... After all, this is a major change and something we've been looking forward to for a long time. A little more time is fine 😄 @Aiosa Any thoughts on how much time/effort it will take for you to get your implementation ready? @iangilman I should have some time in the next few weeks to get back to this. Do you happen to have any idea why the remaining tests would be failing? I'm looking for some pointers... I also need to figure out which parts of the refactoring still need work, aside from @Aiosa's implementation. I should have some time in the next few weeks to get back to this. Lovely 😊 Do you happen to have any idea why the remaining tests would be failing? I'm looking for some pointers... Looks like they are all the same error?

>> Error: TypeError: Cannot read properties of null (reading 'ok')
>> at Tile.getUrl

I'm not familiar enough with that area to know what it might be. Bad data somehow, presumably, but why? I can poke into it more if you'd like… I also need to figure out which parts of the refactoring still need work, aside from Aiosa's implementation. Yeah, good idea to take stock… I've certainly lost track at this point! Maybe start a checklist in the top comment here? @pearcetm Honestly I am not sure. First, I would like to finish the cache overhaul since it impacts, unifies, and fixes the way of accessing the data. It should simplify data access and manipulation. Merging with this PR will be a bit painful. Also, with the 'rich renderer' there is a problem: the texture cpu->gpu loading is for some reason slow, and I did not yet find out why (although in one of the implementations I do almost the same thing as your renderer does). Maybe I will get some student working on this within a uni course. If it is necessary, I can give this a bit more priority. @iangilman the test itself is buggy - it calls the .ok() assertion on a null reference. I think I fixed this in the cache overhaul PR. @pearcetm @Aiosa As you've probably noticed, I'm back from my vacation. How are things going on this patch? Where do we stand?
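(An aside on the timing panels discussed above: the "milliseconds per second" idea boils down to accumulating wall-clock time per stage and flushing the counters once a second. A rough sketch; the demo itself uses the rstats library, and OpenSeadragon drives world.update() internally, so the wrapping here is purely illustrative:)

let updateMs = 0;
const originalUpdate = viewer.world.update.bind(viewer.world);
viewer.world.update = function (...args) {
    const t0 = performance.now();
    const result = originalUpdate(...args); // bookkeeping: springs, tile loading, etc.
    updateMs += performance.now() - t0;
    return result;
};
// world.draw() can be wrapped the same way; note that GPU-side work
// still won't show up here, which is why the demo's "other" category exists.
setInterval(() => {
    console.log(`update: ${updateMs.toFixed(1)} ms/s`);
    updateMs = 0;
}, 1000);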
Aiosa, sounds like you're suggesting we finish up the cache overhaul before proceeding here? Welcome back, @iangilman! @Aiosa and I have been continuing to work on this over at https://github.com/pearcetm/openseadragon/pull/5 - you're welcome to take a look at that thread for updates on the process. Would love to get your thoughts on the design process. Aiosa, sounds like you're suggesting we finish up the cache overhaul before proceeding here? At first I thought so, since the webgl renderer can just say what formats it can receive and the cache system will take care of it. On the other hand, this is a much bigger PR, and leaving the data access dirty for now, then adding the cache overhaul and finalizing the renderer, might prove easier. Oh, I mentioned this in the cache patch, but it's true here as well: we're going to want a page of documentation for the new APIs that this patch brings! I'm getting back to this now - I'm hoping we can get a basic first version of WebGL merged soon! Is it worth trying to incorporate a mechanism for filtering or other custom shaders into this version? I had some skeleton code here, but how it would actually work isn't fleshed out: https://github.com/openseadragon/openseadragon/blob/a578b97d96571431405d993a027492f818bfcd3b/src/webgldrawer.js#L349C21-L357C22 This would provide a hook into the drawing pipeline, adding flexibility, but I would rather just not support such things at all right now rather than add a new but incomplete API, since then it may need to be supported moving forward... Thoughts? @iangilman @Aiosa The purpose of this option was to enable the webgl drawer to work with things like the filtering plugin, which may modify the tile data on the fly, thus necessitating re-uploading the texture to the graphics card before every draw call. However, I'd prefer to drop support for this, at least for the first iteration of the webgl drawer. Performance would suck, and it would be better to move forward with @Aiosa's modular drawer for this type of thing. Would you be OK with that @iangilman? Honestly, I would not support this explicitly, and would instead let users program the tile refresh when they really want it. The cache overhaul will define both type conversions and type destructors, so when they use tile.getData() and tile.setData() the system will automatically recognize a change in type and re-upload the item to the GPU without user intervention, since the drawer must define how the texture is loaded and deleted within the type definition (see the current implementation). So with the cache PR, the filtering plugin will work out of the box (supposing the tile-drawing event is preserved here, which actually might be at least optional to turn off; we discussed removing this event at the cache PR but I think it makes sense to keep it for the canvas drawer IMHO). I had to modify a couple of tests to get them all to pass - I'd appreciate some other eyes on these changes to make sure I'm not missing something wrong with the code that the tests were trying to point out. Also, I had to manually merge changes in TiledImage related to https://github.com/openseadragon/openseadragon/pull/2387 so it would be great if @jetic83 could make sure this is still working right. @pearcetm Tests imho look good, especially when we talk about deprecating the tile-drawing event. Btw do you have these performance demo pages around somewhere? I would like to have them when testing the async cache system. I will have to manually merge also both of these PRs :D @pearcetm Exciting!
I'm in favor of dropping those things you suggested dropping, so the first version is simple and clean. We can build on it from there. Btw do you have these performance demo pages around somewhere? I would like to have them when testing the async cache system. Yes, they're in the test/demo folder. There are two demo pages, drawercomparison and drawerperformance, with an HTML file and a javascript file for each. @Aiosa the framerate stuff is in the drawerperformance demo @iangilman this is ready for review, and I'm looking for guidance on anything else that needs to happen as part of this PR. You'd mentioned new documentation pages; however, the jsdoc comments should be updated (this should be part of the review of course, I may have missed some things) and from what I can tell with this basic version we aren't really adding new APIs that should need more extensive documentation pages. @pearcetm Awesome! I'll have to chew on it a bit 😄 As promised, I'm chewing on this! I'm going to take it a bit at a time so my eyes don't glaze over too much Totally understand! Thanks for starting to dive in. Doing it a bit at a time will also give me a chance to respond, implement the fixes, and ask followup questions along the way. When going through the docs I noticed that the Mat3 class should be fully documented, so I've tried to do that. However I'm running into some syntax issues with jsdoc that I'm not sure how to resolve. Specifically, the static members are getting a . (period) before their names: I don't know where this is coming from or how to fix it. Pointers would be very useful! I don't know what the best practice is, but I messed around a little and this seems to work:

* @function makeTranslation
* @memberof OpenSeadragon.Mat3
* @static

BTW, we should think about the naming of the matrix file. I assume we don't foresee needing anything other than a matrix3, but what if we were to add a matrix2 or a matrix4? Would they all go in the same file? That's different than how we've broken up the classes so far. So maybe it would be good for the file to be named matrix3.js? Looking at the current ReferenceStrip implementation, I see that potentially many viewers get created. It is possible this will cause issues with the WebGLDrawer since there are limited webgl contexts that can be active at the same time. However, presumably the draw operations aren't happening often in the reference strip so it might work OK. It needs to be tested, but I'm not familiar with using the reference strip. In addition, there's the open PR from @msalsbery (https://github.com/openseadragon/openseadragon/pull/1961) - I'm not sure if anything in that PR would impact this at all. I don't know what the best practice is, but I messed around a little and this seems to work:

* @function makeTranslation
* @memberof OpenSeadragon.Mat3
* @static

This worked for the static methods (I swear I tried it with this combo before and it didn't work, but clearly something was different...) However, for the non-static method multiply, when I used this syntax but without the @static tag it still inappropriately marked it as static. So now there's a combo of @alias and @function that at least makes it look correct on the built docs page. BTW, we should think about the naming of the matrix file. I assume we don't foresee needing anything other than a matrix3, but what if we were to add a matrix2 or a matrix4? Would they all go in the same file? That's different than how we've broken up the classes so far.
So maybe it would be good for the file to be named matrix3.js? Sure, I can rename it. Hmm... Now that you mention it, the reference strip should probably always use canvas even if the main viewer is using WebGL. Regarding the JSDoc, yeah, whatever works! Hmm... Now that you mention it, the reference strip should probably always use canvas even if the main viewer is using WebGL. I was thinking about that too. It would only make sense to use webgl in the reference strip if there was something the canvas or html drawers can't do - like custom shaders or such. Since that's not really part of this "first release" of webgl support, maybe we can just use drawer: ['canvas', 'html'] in the reference strip? That should provide existing functionality including fallback in the event canvas isn't supported (really old browsers, I guess). I doubt there is a browser supporting newer es features (const, class..) and not implementing canvas. Maybe some less popular and powerful devices... I doubt there is a browser supporting newer es features (const, class..) and not implementing canvas. Maybe some less popular and powerful devices... Oh, good point @Aiosa. Sticking to canvas should be just fine then. Are there other big changes in TiledImage I should be aware of? That was the gist of it. I tried to leave the logic as untouched as possible, while pulling out the drawing-related code. I fixed one source of a spontaneously failing test (springs not quite done when the animation-finished event was triggered), but now another weird test failure cropped up in the Travis build. When I run the tests on my machine, there are no errors... maybe it will just work next time. Hmm... I'm not getting the build log at all from Travis at the moment. Perhaps the problem is on their end. We'll see with the next one! General thought on tests: I feel like we should be more explicitly testing all 3 of the drawers, like running the drawer tests on WebGL, then doing it again on canvas and again on HTML. Can that be set up? @iangilman Sure, this can probably be set up somehow. I guess we could run the entire test suite three times (once per drawer implementation) just to be sure. I'm not intimately familiar with how the entire test suite is set up. I can look into it but if you have tips for configuring this, that would be very useful. @pearcetm I'm not aware of a way to just tell the whole suite to run 3 times with different configurations. QUnit is a pretty basic test system. It may be possible, but nothing comes to mind. One could retrofit individual files like so:

const drawerTypes = ['webgl', 'canvas', 'html'];
drawerTypes.forEach((drawerType) => {
    QUnit.test(`basics-${drawerType}`, function(assert) {
        createViewer(drawerType);
        // Testing code...
    });
});

That should definitely be done for the drawer tests. Perhaps some other modules? It wouldn't be necessary for all of them, though. I suppose another, blanket approach would be to modify the grunt file to run the tests three times, each time with a different URL parameter that specified the drawer type. That would certainly involve less fiddly work within the tests. It would mean if you ran the tests in a browser you would only get one drawer, but you could change the URL param to test the others. BTW, are there any tests that should be added for the WebGL drawer? That should definitely be done for the drawer tests. Perhaps some other modules? It wouldn't be necessary for all of them, though. Yeah, it's really overkill to do all of it three times.
The point of separating the drawing from the rest of the logic was to decouple these things; the other aspects can be tested independently of the drawer, so they can be done once and use whatever drawer. So I changed the approach here. We're coming down the home stretch here! Just a few more little testing things. Looks like the Travis build is failing, but at the moment I'm just seeing "We're sorry, but this data is not available. Please check the repository settings in Travis CI." I haven't changed the settings... It may be an intermittent thing. I assume the tests are passing locally for you? The other thing on my mind is I still think it would be good to have a comment block somewhere, maybe on the WebGL draw function or maybe at the top of the file, that goes into the overall rendering strategy, including the first and second passes, and the 2D pipeline. I think that's what we're down to unless there's anything else on your mind! @Aiosa what do you think? One thing I noticed and don't like is that DrawerBase calls the target drawing element 'canvas' which collides with canvas API name (e.g. CanvasDrawer). And then you have a html drawer calling something like let canvas = $.makeNeutralElement("div");. This is confusing for me. BTW, are there any tests that should be added for the WebGL drawer? The other thing on my mind is I still think it would be good to have a comment block somewhere, maybe on the WebGL draw function or maybe at the top of the file, that goes into the overall rendering strategy, including the first and second passes, and the 2D pipeline. I just added a description into the documentation for the entire class under the @description block for jsdocs. Let me know if this captures the info you were looking for. BTW, are there any tests that should be added for the WebGL drawer? I'm going to run the multi-image tests for both the webgl and canvas drawers separately, since these tests do actually test the drawing process. I don't think there's anything more specific that would be particularly useful to test at this point. Looks like the Travis build is failing, but at the moment I'm just seeing "We're sorry, but this data is not available. Please check the repository settings in Travis CI." I haven't changed the settings... It may be an intermittent thing. I assume the tests are passing locally for you? Yes, they're passing locally both in the console and in the browser. I'm not sure why they're failing on github, though it has been happening for the past few weeks. One thing I noticed and don't like is that DrawerBase calls the target drawing element 'canvas' which collides with canvas API name (e.g. CanvasDrawer). And then you have a html drawer calling something like let canvas = $.makeNeutralElement("div");. This is confusing for me. I agree it is kind of confusing. However, viewer.drawer.canvas has been a public API for a long time, and I think it would probably be a mistake to break it at this point. In fact, it has been called canvas forever even when actual canvas elements aren't supported (or are disabled) and the tiles are assembled using HTML only - that's not new in the HTMLDrawer, it is also in the current code base. Since we decided to go with CanvasDrawer as the terminology, I think we're kind of stuck. If there are specific places in the code that you think we could document this better to avoid confusion, I'd be happy to do so. 
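(Worth noting for this naming discussion: in the new drawer, drawer.canvas can remain a 2D canvas because each frame the internal WebGL canvas is blitted onto it - roughly the following, with illustrative names rather than the PR's actual fields:)

const outputContext = drawer.canvas.getContext('2d');   // the long-standing public 'canvas'
outputContext.save();
outputContext.globalCompositeOperation = 'source-over'; // or the tiled image's own setting
outputContext.drawImage(internalWebglCanvas, 0, 0);     // full-size copy - the costly step discussed earlier
outputContext.restore();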
Well, why not do something like using get canvas that on the first call issues a deprecation warning and then replaces itself with the object (so that the warning does not fire every time) and add a new replacement api? And remove it in a later version? Well, why not do something like using get canvas that on the first call issues a deprecation warning and then replaces itself with the object (so that the warning does not fire every time) and add a new replacement api? And remove it in a later version? That's true, we could deprecate it with a warning and replace it with a differently-named API that does the same thing. I'll defer to @iangilman on whether we should do that or not (and if so, what the new API should be named). When I dig into the Travis error I see this:

Error: RangeError: offset is out of bounds
at Float32Array.set ()
at w.WebGLDrawer._getTileData (http://localhost:8000/build/openseadragon/openseadragon.min.js:8:205681)
at http://localhost:8000/build/openseadragon/openseadragon.min.js:8:203077
at Array.forEach ()
at w.WebGLDrawer.draw (http://localhost:8000/build/openseadragon/openseadragon.min.js:8:202034)
at g.World.draw (http://localhost:8000/build/openseadragon/openseadragon.min.js:78:58760)
at http://localhost:8000/build/openseadragon/openseadragon.min.js:8:109897
at http://localhost:8000/build/openseadragon/openseadragon.min.js:8:109939
at k (http://localhost:8000/build/openseadragon/openseadragon.min.js:8:110149)
at http://localhost:8000/build/openseadragon/openseadragon.min.js:8:97438

Why this is happening is not clear to me. The size of the array is defined like so:

let maxTextures = this._gl.getParameter(this._gl.MAX_TEXTURE_IMAGE_UNITS);
let texturePositionArray = new Float32Array(maxTextures * 12); // 6 vertices (2 triangles) x 2 coordinates per vertex

When looping over the tiles, the index to use is calculated like this:

for(let tileIndex = 0; tileIndex < tilesToDraw.length; tileIndex++){
    ...
    let indexInDrawArray = tileIndex % maxTextures;

So, the index should always be between 0 and maxTextures-1, meaning that index * 12 should always be in range (since the size of the array is 12 * maxTextures). Then, the Float32Array.set() operation is here:

texturePositionArray.set(textureQuad, index * 12);

This is what's throwing the error. But why? I'm not sure. It might have something to do with some of the GL warnings that are thrown - a few of these:

[.WebGL-0xe9c006ada00]GL Driver Message (OpenGL, Performance, GL_CLOSE_PATH_NV, High): GPU stall due to ReadPixels

and these:

WARNING: Too many active WebGL contexts. Oldest context will be lost.

Finally, there's a second error that is happening again:

TiledImage > animation
>> Message: current bounds after animation
>> Actual: {
>>   "x": 1,
>>   "y": 2,
>>   "width": 2.9999999999999996,
>>   "height": 2.9999999999999996,
>>   "degrees": 0
>> }
>> Expected: {
>>   "x": 1,
>>   "y": 2,
>>   "width": 3,
>>   "height": 3,
>>   "degrees": 0
>> }

I thought I had fixed this already but maybe I inadvertently reverted that change somehow... In viewport.update the code that lets the viewer know we're still animating is here:

var isAnimating = changed ||
    !this.zoomSpring.isAtTargetValue() ||
    !this.centerSpringX.isAtTargetValue() ||
    !this.centerSpringY.isAtTargetValue() ||
    !this.degreesSpring.isAtTargetValue();
return isAnimating;

The return value from this is used to determine whether to fire the animation-finished event. I don't understand how in the Travis test the spring values aren't at their final value when that event is fired...
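(The self-replacing getter Aiosa suggests above could look roughly like this - a sketch only; 'element' as the replacement API name and '_canvas' as the backing field are assumptions, not actual OpenSeadragon code:)

Object.defineProperty(DrawerBase.prototype, 'canvas', {
    configurable: true,
    get() {
        console.warn('drawer.canvas is deprecated; use drawer.element instead.');
        // Shadow the prototype accessor on this instance so the warning fires only once:
        Object.defineProperty(this, 'canvas', { value: this._canvas, configurable: true });
        return this._canvas;
    }
});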
@pearcetm Are you seeing reasonable output from Travis? Sounds like you're seeing actual error messages. I'm still seeing just "We're sorry, but this data is not available. Please check the repository settings in Travis CI." Wait a second... I just tried it in Chrome and I can see the full log there. I guess Travis just doesn't like Firefox anymore. I tried running it locally in my command line and I'm seeing the TiledImage > animation error there as well. I figure for that one we just need to include an epsilon in our test so it accepts values within rounding error distance. For Error: RangeError: offset is out of bounds maybe the headless browser Travis is using is giving us some weird values for MAX_TEXTURE_IMAGE_UNITS? Might there be some way to have it be more conservative during the tests? Or add a little buffer to the array to see if that fixes it? How about doing some console.log-ing to see what those values are during the test run? I agree the code looks solid. Btw, I tried running the tests in the browser (Chrome) and I got a zillion errors: Probably unrelated. In fact it looks like it's MouseTracker stuff... Paging @msalsbery :) So, probably nothing to worry about at the moment, but I mentioned it just in case it's relevant. Regarding renaming canvas, I'm of two minds... On the one hand, it's a weirdness we've had for a long time and it would be lovely to sort it out, and while making this big drawer change is a pretty good time to do it (as far as disruptions are concerned). On the other hand, it kind of seems unrelated to the thrust of this patch and I don't want to expand the scope any further. Probably the best thing to do would be to file an issue for it and do it as a follow-up patch (if there's interest). That way it can still be part of the 5.0 release but doesn't have to hold up this PR. Probably unrelated. In fact it looks like it's MouseTracker stuff... It looks like getActivePointersListByType() is being called after a viewer (and mousetracker) is destroyed, probably related to the mousetracker test. This patch destroys viewers when tests are completed (instead of just closing them) to release resources, so I think that introduced the problem. I added a check to getActivePointersListByType() so that it doesn't throw an error but just returns the default value of an empty list if the tracker has been destroyed. Probably the best thing to do would be to file an issue for it and do it as a follow-up patch (if there's interest). That way it can still be part of the 5.0 release but doesn't have to hold up this PR. Seems to me like a good plan. That can be its own separate PR to introduce the deprecation of the drawer.canvas property and provide a more explicit interface to do the same thing, but depending on drawer type. I tried running it locally in my command line and I'm seeing the TiledImage > animation error there as well. I figure for that one we just need to include an epsilon in our test so it accepts values within rounding error distance. Used Util.assertRectangleEquals for this test, which seems to have fixed this. Everything looks great! I added a check to abort the drawing if MAX_TEXTURE_IMAGE_UNITS <= 0, which seems to have fixed the error. Seems sensible. How many tests does this end up skipping? Would you say we still have good coverage on the WebGL drawer despite this issue? I think that's the last thing, unless I'm forgetting something? Seems sensible. How many tests does this end up skipping? 
Would you say we still have good coverage on the WebGL drawer despite this issue? I'm not sure how to write a message to the Travis log, which I think is what would be needed to know this. Since I can't reproduce it locally it is difficult to know when and why it is happening. I'm open to suggestions. I think it should be ok to just write into console.error or console.warn in the test suite for this purpose. Maybe wrap it into some utility function logCIMessage(...) so that the purpose is clear. Yeah, QUnit seems to suppress console.logging, which is annoying. I'm not sure how to turn that off. Anyway, here's an approach that works:

let webglStrangenessCount = 0;
QUnit.done(details => {
    console.log('webglStrangenessCount', webglStrangenessCount);
});

... And inside any test you can increment webglStrangenessCount. All of these appear to allow console.log in them: https://api.qunitjs.com/callbacks/ With some testing, I figured out that console.log works to send messages to the command line within the tests (including in the afterEach hook). OpenSeadragon.console.log doesn't work for some reason, but I didn't look further into that. I added a counter to the webgl drawer that increments each time the MAX_TEXTURE_IMAGE_UNITS value is bad, and updated the tests to print a message when this happens. And... no messages. So I'm still not sure what was happening with the Travis CI tests before or why it is better now. I will note that there were periods in the past where the tests were passing in Travis even with the webgl drawer. So maybe it is just sporadic for some reason. I guess the question is, do we want to leave the tests updated like this or roll them back. What about QUnit.log? From what I found, QUnit is not particularly good in this use case, unlike mocha... Plain console.log was working to print test messages to the console, so I'm not sure what QUnit.log would add. Funny, I hadn't thought to just try the bare console.log until I tested the QUnit.done approach, but then when it worked I didn't connect it with the fact that I switched log functions. Just to be sure: did you include some console.log calls that always happen (not just conditionally) when you tested on Travis, to make sure that they actually work in that environment? Also, I'm interested in knowing how many times MAX_TEXTURE_IMAGE_UNITS had a good value... In other words, what percentage of tests are we missing? If it's bad once, is it always bad? I feel it's worth pursuing this a little further at least, but honestly if we find we're missing a lot of tests because of this issue I'm not sure what we would do about that; we'll probably just have to move forward anyways. I just figure it's good to know if we can know. Thank you for pursuing this last detail! Here's a build that prints a message with the number of times the drawing pipeline was used without errors too: https://app.travis-ci.com/github/openseadragon/openseadragon/builds/268403499 This shows two things: 1) plain console.log is working in Travis, and 2) there were no GL errors. I'm going to remove that guard and all the logging and see what happens... Okay, I think this is ready to merge! Sound good to you? One last question: now that we found the cause of the MAX_TEXTURE_IMAGE_UNITS and fixed the error, do we really want to be letting the drawer "silently" fail with just a console error message if this situation happens?
Although the original illegal access error message wasn't the most helpful in immediately tracking down the root of the problem, we could throw a more informative error instead. This way if something gets messed up again, the tests will actually catch it (which seems beneficial). They will have to use canvas. This restriction will be removed with the cache overhaul. Random question... I feel like we've discussed this but I don't know the answer. How will the new drawer affect plugins that expect the 2D canvas? For instance https://github.com/joshua-gould/OpenSeadragonCanvasOverlayHd? Will they work with the WebGL drawer or does one need to choose the canvas drawer if using them? Well, the output canvas of the WebGLDrawer is still a context2d canvas - so, I don't think plugins would necessarily break. The filtering plugin (https://github.com/usnistgov/OpenSeadragonFiltering) should still work too, because the filter is applied before the data is used for drawing. Overall, plugins would need to be tested and perhaps some of them would need the drawer: canvas option to be used, but I expect most should work OK. The canvas overlay has two modes. If it uses async functions, any refresh on the tile data replaces the tiled image, because the OSD API cannot handle async (vis-à-vis the cache overhaul). If it does not, it tries to apply the changes in place in the tile-drawing event, which will not work. So it will sort of work if you force any filter or its param update to throw away all the data and download it again. How will the new drawer affect plugins that expect the 2D canvas? For instance https://github.com/joshua-gould/OpenSeadragonCanvasOverlayHd? @iangilman Looking at this plugin more closely, it doesn't appear to touch the OSD drawer at all - it just creates a second canvas element on top that is synced to the size of the viewer and provides a place for context2d drawing operations to happen under user control. If it [does not use async], it tries to apply the changes in place in the tile-drawing event, which will not work. @Aiosa you're saying the filtering plugin won't work in sync mode, right?
I guess this would make sense because the data for a given tile would have already been sent to the graphics card, and we don't have a mechanism (yet) to tell the drawer that the data needs to be refreshed. Come to think of it, this plugin won't work at all with WebGLDrawer because it depends on the tile-drawing event, which is only fired by the CanvasDrawer. I suppose we could add a check at the time that addHandler is called, and throw an error/warning if someone subscribes to an event that doesn't make sense for the drawer type... Edit: other plugins will not break unless they try to use the context2d API on the drawing canvas, which will not work. @Aiosa Since the drawer.canvas is a context2d canvas still (only the first pass of the pipeline uses webgl) even context2d operations on the final output should still work. It is only if a plugin touches the tile data (like the filtering plugin does) that I'd expect things to break. I was designing the overhaul so it will work out of the box - but you will have to (re)design the plugin to conform to the new api, so when I say the plugin will work, I mean once it stops using the hacky things the cache overhaul will deprecate anyway. @Aiosa you're saying the filtering plugin won't work in sync mode, right? I guess this would make sense because the data for a given tile would have already been sent to the graphics card, and we don't have a mechanism (yet) to tell the drawer that the data needs to be refreshed. The overhaul will allow any type conversion and ensure the data will get converted to a supported format for the target drawer once it attempts to draw the tile and the type does not conform. Which means uploading to the GPU here - even during tile drawing. @Aiosa Since the drawer.canvas is a context2d canvas still (only the first pass of the pipeline uses webgl) even context2d operations on the final output should still work. It is only if a plugin touches the tile data (like the filtering plugin does) that I'd expect things to break. I think you cannot request a different context from a canvas that has already handed out one context. So I think trying the context2d getter on a webgl canvas will throw. I think you cannot request a different context from a canvas that has already handed out one context. So I think trying the context2d getter on a webgl canvas will throw. If it always makes the second pass without webgl then that's a different story. But I do have use cases where the second pass is also a webgl pass, which I will try to add support for. In the current implementation of WebGLDrawer, the final output is always a context2d canvas, regardless of how many passes are done with the internal webgl canvas:

_setupCanvases(){
    ...
    this._outputCanvas = this.canvas; //output canvas
    this._outputContext = this._outputCanvas.getContext('2d');

This is good for backwards compatibility, obviously. As we iterate more on the drawer, I think it would be good to keep this part in place. Looking at this plugin more closely, it doesn't appear to touch the OSD drawer at all - it just creates a second canvas element on top that is synced to the size of the viewer and provides a place for context2d drawing operations to happen under user control. Oh I see! For some reason I thought it was drawing directly on OSD's canvas. Turns out https://github.com/altert/OpenSeadragonCanvasOverlay doesn't do that either. Probably sensible, really. I suppose we could add a check at the time that addHandler is called, and throw an error/warning if someone subscribes to an event that doesn't make sense for the drawer type... That's a good idea! Better to have the errors than just mysteriously failing. I added validation to EventSource.addHandler that checks for a compatible drawer for the tile-drawing and tile-drawn events. This could be extended to other events and event sources, but for now I think it at least addresses potential new incompatibilities introduced by this patch. My one thought is whether we'll run into trouble if there's ever a viewer that switches between drawers... Is that even something that's possible? If so, we would want to be able to unreject events (if we switched from WebGL to canvas, for instance). If it's not possible, we don't need to worry about it. Although this scenario isn't currently used anywhere in core OSD, it would be theoretically possible to do something like this at any point: viewer.drawer = new CanvasDrawer({viewer, ...}); There are probably all sorts of unexpected things that would go wrong (extra HTML elements, etc). However, it is pretty trivial to provide a mechanism to re-enable an event, so I've done that now. Maybe it'll be useful in the future. So, beyond that, once again, I think maybe this is ready to merge? I appreciate all of the attention to detail, locking down all of these last bits!
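(The addHandler validation described just above might look roughly like the following - an illustrative sketch, not the PR's actual code; only viewer.addHandler itself is an existing API here:)

const CANVAS_ONLY_EVENTS = ['tile-drawing', 'tile-drawn'];
function validatedAddHandler(viewer, eventName, handler) {
    // Note: a class-name check is fragile under minification; the real
    // implementation would track the drawer type explicitly.
    const usingCanvasDrawer = viewer.drawer && viewer.drawer.constructor.name === 'CanvasDrawer';
    if (CANVAS_ONLY_EVENTS.includes(eventName) && !usingCanvasDrawer) {
        console.error('"' + eventName + '" is only fired by the canvas drawer; this handler would never run.');
        return;
    }
    viewer.addHandler(eventName, handler);
}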
I pushed one last tiny tweak to the docs (an outdated @link Viewer.Drawer...). I'm guessing there may be other things, especially in the documentation, that have flown under the radar, but I think these can be fixed in small subsequent PRs as they crop up. So I think it is finally ready! How I personally like to think about it is to use _underscoreMethods when a function shouldn't be called outside of the class they are defined on (even by other parts of the overall library), while @private nonUnderscoreMethods are meant to be used as a "public" API within the library but aren't intended for use by outside code. That seems like a fine way to think about it! I'm guessing there may be other things, especially in the documentation, that have flown under the radar, but I think these can be fixed in small subsequent PRs as they crop up. Absolutely. Okay, let's do this! This patch is an amazing piece of work! Thank you so much for making it happen and sticking with it through all the twists and turns. I think it's safe to say this is the biggest PR we've had, certainly in a while if ever! It's been almost a year in the making, 113 commits, 498 comments... And it's looking great! And now finally we get to close issue https://github.com/openseadragon/openseadragon/issues/68, filed 11 years ago 😄 Viva WebGL! 🎉 🎉 🎉 @pearcetm Now that this has landed, I've updated various related issues. Are there others you know of that should be closed or updated? Also, I'm just noticing this unchecked checkbox from the description: Add support for placeholderFillStyle for images during loading (including solid color plus gradients/patterns) Did that get addressed? If not, we should file a follow-up issue for it. I've added the changelog (https://github.com/openseadragon/openseadragon/commit/f9a8b97cf965f4625977021859d7aaa0cb6a272f). I tagged both @pearcetm and @Aiosa on it, as you both contributed. Thank you again! This is awesome! I've been waiting for WebGL in openseadragon for years! And now finally we get to close issue #68, filed 11 years ago 😄 Viva WebGL! 🎉 🎉 🎉 Hooray! Re: issue #68, this is really only the first step to all the fancy business discussed there :) Also, I'm just noticing this unchecked checkbox from the description: Add support for placeholderFillStyle for images during loading (including solid color plus gradients/patterns) Did that get addressed? If not, we should file a follow-up issue for it. Let's open a follow-up issue for it. I think some of the changes related to https://github.com/openseadragon/openseadragon/pull/2407 will greatly impact how this should best be implemented, which is part of why I didn't address it yet. In the meantime, the canvas drawer still supports this property. BTW, I've announced this PR on Discord and Twitter https://twitter.com/openseadragon/status/1752752055272505670). 😊 @pearcetm Cool... I've filed the follow-up :) Looking at the current ReferenceStrip implementation, I see that potentially many viewers get created. It is possible this will cause issues with the WebGLDrawer since there are limited webgl contexts that can be active at the same time. However, presumably the draw operations aren't happening often in the reference strip so it might work OK. It needs to be tested, but I'm not familiar with using the reference strip. In addition, there's the open PR from @msalsbery (#1961) - I'm not sure if anything in that PR would impact this at all. @pearcetm I can use drawer 'canvas' for the reference strip viewers, right? Any caveats? 
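(For reference, the configuration being asked about amounts to something like the following - a sketch; the drawer option strings follow the ones used earlier in this thread, while the reference strip's internal plumbing is not shown:)

const viewer = OpenSeadragon({
    id: 'osd-container',
    sequenceMode: true,
    showReferenceStrip: true,
    drawer: 'webgl',                        // the main viewer can use the new drawer
    tileSources: ['a.dzi', 'b.dzi', 'c.dzi']
});
// ...while each small reference-strip viewer is created with
// drawer: ['canvas', 'html'], falling back gracefully and avoiding
// the browser's cap on simultaneous WebGL contexts.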
I can use drawer 'canvas' for the reference strip viewers (like it's currently implemented), right? Any caveats? Yes, using drawer: 'canvas' for the reference strip viewers should work fine, and would avoid creating too many webgl contexts. The only caveat (in the future) would be that if a viewer used a custom webgl pipeline to create a filter/effect of some type, the thumbnails in the reference strip wouldn't look the same if they used the canvas drawer. The ability to do this isn't implemented yet though, so not a big concern. I can use drawer 'canvas' for the reference strip viewers (like it's currently implemented), right? Any caveats? @msalsbery Yes, using drawer: 'canvas' for the reference strip viewers should work fine, and would avoid creating too many webgl contexts. The only caveat (in the future) would be that if a viewer used a custom webgl pipeline to create a filter/effect of some type, the thumbnails in the reference strip wouldn't look the same if they used the canvas drawer. The ability to do this isn't implemented yet though, so not a big concern. @pearcetm Excellent thank you!
gharchive/pull-request
2023-03-05T21:32:01
2025-04-01T06:45:16.449220
{ "authors": [ "Aiosa", "iangilman", "joedf", "maxbogue", "msalsbery", "pearcetm" ], "repo": "openseadragon/openseadragon", "url": "https://github.com/openseadragon/openseadragon/pull/2310", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1193801585
[BUG] Aggregation on termX field gets 400 status code. & failed to create query: [nested] nested object under path [my_nested_object] is not of nested type Describe the bug 2 requests that got 200 response now get 400 response from core. Branch:https://github.com/cliu123/security/tree/upgrade_to_opensearch_2.0.0_alpha1 The test passed until OpenSearch 2.0.0-SNAPSHOT, but starts failing since 2.0.0-alpha1. {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [termX] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"logs","node":"wdsYHp_iRLi3gQHdAP0mQg","reason":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [termX] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}],"caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [termX] in order to load field data by uninverting the inverted index. Note that this can use significant memory.","caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [termX] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}},"status":400} Another test starts failing since 2.0.0-alpha1 too. Response "error" : { "root_cause" : [ { "type" : "query_shard_exception", "reason" : "failed to create query: [nested] nested object under path [my_nested_object] is not of nested type", "index" : "deals", "index_uuid" : "hR3R1jlwRz63UOMbbDoQhw" } ], "type" : "search_phase_execution_exception", "reason" : "all shards failed", "phase" : "query", "grouped" : true, "failed_shards" : [ { "shard" : 0, "index" : "deals", "node" : "05SczuM4SpWsP-GURvChGw", "reason" : { "type" : "query_shard_exception", "reason" : "failed to create query: [nested] nested object under path [my_nested_object] is not of nested type", "index" : "deals", "index_uuid" : "hR3R1jlwRz63UOMbbDoQhw", "caused_by" : { "type" : "illegal_state_exception", "reason" : "[nested] nested object under path [my_nested_object] is not of nested type" } } } Did the change removing mapping types get in after that old 2.0.0-SNAPSHOT? this smells like a mapping type issue Did the change removing mapping types get in after that old 2.0.0-SNAPSHOT? this smells like a mapping type issue 2.0.0-SNAPSHOT/ Wed Mar 09 01:41:04 UTC 2022 2.0.0-alpha1-SNAPSHOT/ Tue Apr 05 21:48:15 UTC 2022 Some of the removing types changes got in after 2.0.0-SNAPSHOT(Tue Apr 05 21:48:15 UTC 2022). 
I locally updated the security plugin to 2.0.0-alpha1 and, after tidying up some build failures from missed type removals and other Lucene 9.x changes, the offending test passes. I did not run the entire test suite, but all of DlsTest.java passes for me. BUILD SUCCESSFUL in 2m 1s Unrelated: I noticed security isn't following the SPDX header standard adopted by OpenSearch and is instead using the old ALv2 header. Probably something we want to remedy sooner rather than later. Note also: a quick indication of the stale OpenSearch 2.0.0 build was the Lucene 8.10 transitive dependency pulled in by Gradle. 2.0.0 is now running on Lucene 9.1, yet Gradle pulled in an old 8.10 build. Let's make sure no other plugin repos are accidentally running w/ old versions. @nknize Thanks for checking! The failures are in this branch: https://github.com/cliu123/security/tree/upgrade_to_opensearch_2.0.0_alpha1 GitHub workflow where the failures are: https://github.com/opensearch-project/security/runs/5839382149?check_suite_focus=true The failures are in this branch... I bumped the main branch locally and tests pass. Maybe check if there is something else changed on the feature branch causing the failure. In DlsTest.java#L276, set the mapping as follows: client.admin().indices().create(new CreateIndexRequest("logs").simpleMapping("termX", "type=keyword")).actionGet(); The mainline code here is different from your branch, which is why this test is passing in my local change. @nknize That's a good way to verify. I wonder how you bumped the OpenSearch core version to 2.0.0-alpha1-SNAPSHOT on the main branch, though. I bumped the version and verified that the new version is alpha1, but I got compilation errors, as you can see in the screenshots, because of new changes in core that got in after 2.0.0-SNAPSHOT, which is why I made those changes. I wonder how you bumped the OpenSearch core version to 2.0.0-alpha1-SNAPSHOT on the main branch, though. I checked out the main branch, bumped the version in the build Gradle file, cleared the Gradle cache, then fixed the errors locally and ran the test. In DlsTest.java#L276, set the mapping as follows: client.admin().indices().create(new CreateIndexRequest("logs").simpleMapping("termX", "type=keyword")).actionGet(); The mainline code here is different from your branch, which is why this test is passing in my local change. Nice! This fixes the test. I applied a similar fix to the other test. It also passes now. Thanks for the guide! @nknize
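For anyone hitting the same 400 outside the Java test harness, the equivalent keyword-mapping fix with the OpenSearch JavaScript client looks roughly like this (a sketch using the demo endpoint and credentials; the index and field names come from the thread):

```typescript
import { Client } from '@opensearch-project/opensearch';

async function createLogsIndex(): Promise<void> {
  const client = new Client({
    node: 'https://admin:admin@localhost:9200',
    ssl: { rejectUnauthorized: false }, // demo cluster with self-signed certs
  });

  // Map termX as keyword so terms aggregations work without enabling fielddata.
  await client.indices.create({
    index: 'logs',
    body: {
      mappings: {
        properties: {
          termX: { type: 'keyword' },
        },
      },
    },
  });
}
```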
gharchive/issue
2022-04-05T22:48:10
2025-04-01T06:45:16.497507
{ "authors": [ "cliu123", "davidlago", "nknize" ], "repo": "opensearch-project/OpenSearch", "url": "https://github.com/opensearch-project/OpenSearch/issues/2776", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1268081710
Created a helper method to consolidate GetAlerts API calls. Modified the AcknowledgeAlertsModal GetAlerts call. Signed-off-by: Richard Fu richfu@amazon.com Description Created a helper method within the helpers class that can be used to consolidate GetAlerts API calls. Modified the AcknowledgeAlertsModal GetAlerts call, and will update other calls to the GetAlerts API in the future. The existing tests for AcknowledgeAlertsModal passed locally. Issues Resolved https://github.com/opensearch-project/alerting-dashboards-plugin/issues/252 Check List [ ] New functionality includes testing. [ ] All tests pass [ ] New functionality has been documented. [ ] New functionality has javadoc added [x] Commits are signed per the DCO using --signoff By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. Codecov Report Merging #272 (c95223f) into main (01d9c09) will decrease coverage by 0.23%. The diff coverage is n/a. @@ Coverage Diff @@ ## main #272 +/- ## ========================================== - Coverage 52.94% 52.71% -0.24% ========================================== Files 209 209 Lines 5445 5452 +7 Branches 762 763 +1 ========================================== - Hits 2883 2874 -9 - Misses 2560 2576 +16 Partials 2 2 Impacted Files Coverage Δ .../FormControls/FormikFieldRadio/FormikFieldRadio.js 40.00% <0.00%> (-40.00%) :arrow_down: ...alerting-dashboards-plugin/public/utils/helpers.js 47.36% <0.00%> (-34.45%) :arrow_down: ...CreateTrigger/components/Action/actions/Message.js 53.33% <0.00%> (-11.58%) :arrow_down: ...ateMonitor/containers/MonitorIndex/MonitorIndex.js 90.52% <0.00%> (-0.97%) :arrow_down: ...er/containers/ConfigureActions/ConfigureActions.js 7.40% <0.00%> (-0.07%) :arrow_down: ...s/AcknowledgeAlertsModal/AcknowledgeAlertsModal.js 1.94% <0.00%> (+0.10%) :arrow_up: Continue to review full report at Codecov. Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 01d9c09...c95223f.
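A sketch of what such a shared helper might look like (the route path and parameter names are assumptions, not the actual PR code):

```typescript
// Shared helper so every component issues GetAlerts calls the same way.
interface GetAlertsQueryParams {
  from: number;
  size: number;
  search?: string;
  sortField?: string;
  sortDirection?: 'asc' | 'desc';
  severityLevel?: string;
  alertState?: string;
  monitorIds?: string[];
}

export async function getAlerts(
  httpClient: { get: (path: string, options?: object) => Promise<any> },
  params: GetAlertsQueryParams
): Promise<any[]> {
  try {
    const response = await httpClient.get('../api/alerting/alerts', { query: params });
    // Return an empty list on failure so callers don't have to null-check.
    return response.ok ? response.alerts : [];
  } catch (err) {
    console.error('Failed to retrieve alerts:', err);
    return [];
  }
}
```

Callers such as AcknowledgeAlertsModal could then drop their local fetch logic and call this helper with their own query parameters.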
gharchive/pull-request
2022-06-10T22:52:31
2025-04-01T06:45:16.513883
{ "authors": [ "codecov-commenter", "rf5138" ], "repo": "opensearch-project/alerting-dashboards-plugin", "url": "https://github.com/opensearch-project/alerting-dashboards-plugin/pull/272", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1526528906
[2.x] Manually backport to 2.x from main Signed-off-by: Junqiu Lei junqiu@amazon.com Description Manually backport changes to 2.x from main. By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. @junqiu-lei please put the list of all the PRs that are included as backports in this PR. @junqiu-lei please put the list of all the PRs that are included as backports in this PR. Yes, thanks for the reminder.
gharchive/pull-request
2023-01-10T00:03:11
2025-04-01T06:45:16.516804
{ "authors": [ "junqiu-lei", "navneet1v" ], "repo": "opensearch-project/dashboards-maps", "url": "https://github.com/opensearch-project/dashboards-maps/pull/180", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1819300702
[BUG] Integration asset parsing behavior doesn't match Saved Object API What is the bug? While integration assets and the saved object API both use NDJSON for sharing objects, the two have different behavior. In particular, it seems like the saved object API can handle empty lines, while the integration asset parser raises an exception. These are both in line with the spec, but they should not disagree with each other. How can one reproduce the bug? Steps to reproduce the behavior: Take any valid NDJSON Saved Object export. Add a newline to the end. Import that same export with the newline and observe that the import is successful. Try to load the same export as an integration asset and the integration fails to validate. What is the expected behavior? The saved object API and the integration asset parser should have the same behavior. What is your host/environment? Mac, commit dd278ade Do you have any screenshots? N/A Do you have any additional context? Documentation on how the saved object API parses NDJSON would be helpful; in addition to the newline issue, there could be other implementation quirks that we might want to copy. It makes for a better user experience to have the same parser behavior everywhere. See: Source code for current saved object collection
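One way to make the two parsers agree is for the integration asset loader to adopt the saved object API's tolerance for blank lines. A sketch (not the plugin's actual code):

```typescript
// Parse NDJSON the way the saved object import API tolerates it:
// skip empty lines (e.g. a trailing newline) instead of throwing.
export function parseNdjsonAssets(raw: string): object[] {
  return raw
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.length > 0) // blank lines are ignored, matching the saved object API
    .map((line) => JSON.parse(line));
}
```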
gharchive/issue
2023-07-24T23:41:54
2025-04-01T06:45:16.521617
{ "authors": [ "Swiddis" ], "repo": "opensearch-project/dashboards-observability", "url": "https://github.com/opensearch-project/dashboards-observability/issues/743", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2145396849
Fix undefined date error while generating CSV report Description Added a condition to check if the date is undefined. Issues Resolved https://github.com/opensearch-project/dashboards-reporting/issues/308 Check List [ ] New functionality includes testing. [ ] All tests pass, including unit test, integration test and doctest [ ] New functionality has been documented. [ ] New functionality has javadoc added [ ] New functionality has user manual doc added [ ] Commits are signed per the DCO using --signoff By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. Possible to add test coverage in https://github.com/opensearch-project/dashboards-reporting/blob/87e0934fb87d3af25a262666f2e8267be1ac098d/server/routes/utils/tests/savedSearchReportHelper.test.ts? Will raise a separate PR to increase test coverage.
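The change described amounts to a guard along these lines (function and field names are assumptions; the real fix lives in savedSearchReportHelper.ts, and moment is assumed to be available, as it commonly is in Dashboards plugins):

```typescript
import moment from 'moment';

function formatDateField(value: string | number | undefined, dateFormat: string): string {
  // Guard against documents that are missing the date field entirely;
  // emit an empty CSV cell instead of throwing on an undefined date.
  if (value === undefined || value === null) {
    return '';
  }
  return moment(value).format(dateFormat);
}
```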
gharchive/pull-request
2024-02-20T22:26:20
2025-04-01T06:45:16.526463
{ "authors": [ "rupal-bq" ], "repo": "opensearch-project/dashboards-reporting", "url": "https://github.com/opensearch-project/dashboards-reporting/pull/309", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
992646713
[Improvement] add OpenSearch Dashboards plugin development docs We have removed the docs related to the plugin manifest and don't have a replacement for external plugin development (APIs, how to build them, how to install). External plugin development might be painful with the lack of docs, so it would be beneficial to add this. This would also help if we plan on adding new properties, for example: https://github.com/opensearch-project/OpenSearch-Dashboards/pull/761. If that is merged, there will be no way of knowing about the new property unless you are familiar with the code. Related issue: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/778 The manifest used to look like the following (note the links are not accurate, but the properties and descriptions are):

[Home](./index.md) > [opensearch-dashboards-plugin-core-server](./opensearch-dashboards-plugin-core-server.md) > [PluginManifest](./opensearch-dashboards-plugin-core-server.pluginmanifest.md)

## PluginManifest interface

Describes the set of required and optional properties a plugin can define in its mandatory JSON manifest file.

<b>Signature:</b>

```typescript
export interface PluginManifest
```
Someone reached out to me and I typed this out I will just copy-pasta what I sent them: To get you started, you can start out with using the scaffolding process of building a plugin: node scripts/generate_plugin This will ask you a series of questions: Plugin name (use camelCase) {yourPluginName} Will this plugin be part of the OpenSearch Dashboards repository? # if Yes then the plugin will be added to src/plugins if N then the plugin will be added to plugins Should an UI plugin be generated? # if Yes then the plugin will have a public folder with components you can see in the browser if N then no components in the browser Should a server plugin be generated? # if Yes then the plugin will have a server folder that will be started with the Node layer that can host routes where you can transform or interact with different services or interact with the OpenSearch Client, if No then no server folder will be started for that specific plugin). Once built you should have your plugin with some basic functionality, you can start OpenSearch Dashboards and you can hit the server for that plugin or see the component in the UI. You are ready for adding more features! Please note, if you say no to 'Will this plugin be part of the OpenSearch Dashboards repository', this will build your plugin to plugins/. This folder is not tracked in git for OpenSearch Dashboards if you delete your plugin within plugins/ it will be lost forever if you do not specifically go into that folder and git init there. It is common to delete the plugin if it is in the plugins/ folder because running the integration tests clears /plugins. (This is terrible and was inherited from the legacy application and we haven't gotten around to prioritizing the issue). So that said, once you run through the process of using the CLI to scaffold the plugin, I'd recommend creating a new repo and navigating to your plugin and pushing that plugin to your repo. That way you guarantee not to lose anywork. Another piece of information is that the data plugin is the plugin that does the most interaction with OpenSearch Client which is the npm package that actually sends the requests to OpenSearch so if you are thinking about interacting with OpenSearch with your plugin then consider adding that plugin to your plugins 'opensearch_dashboards.json' plugin under requiredPlugins. 'opensearch_dashboards.json' is the JSON file that OpenSearch Dashboards will read while installing the plugin so you can define the version of the plugin and OpenSearch Dashboards Version. For example, if your plugin version is 1.2.3 but you built it for OpenSearch Dashboards 2.0.0, otherwise if you don't provide the version of OpenSearch Dashboards then it will use your plugin version. You can also provide configPath within opensearch_dashboards.json. This will allow you to to provide configuration settings within the root layer config/opensearch_dashboards.yml file for example, if you set the config path to be myPluginConfig and a setting like "isDev", in the opensearch_dashboards.yml file you can define myPluginConfig.isDev: true and your plugin will get the setting. Closing this issue in favor of #492. Please add anything that's missing to that issue. Thanks.
gharchive/issue
2021-09-09T21:27:30
2025-04-01T06:45:16.546041
{ "authors": [ "hdhalter", "kavilla" ], "repo": "opensearch-project/documentation-website", "url": "https://github.com/opensearch-project/documentation-website/issues/171", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1058028789
Add documentation support for opensearch-php client Add documentation to help users install/connect to OpenSearch clusters using opensearch-php, by creating a file "php" at https://github.com/opensearch-project/documentation-website/tree/main/_clients and using the following Java client doc as a template. Java client (Template) The OpenSearch Java client allows you to interact with your OpenSearch clusters through Java methods and data structures rather than HTTP methods and raw JSON. For example, you can submit requests to your cluster using objects to create indices, add data to documents, or complete some other operation using the client's built-in methods. Install the client To start using the OpenSearch Java client, ensure that you have the following dependency in your project's pom.xml file: <dependency> <groupId>org.opensearch.client</groupId> <artifactId>opensearch-java</artifactId> <version>0.1.0</version> </dependency> If you're using Gradle, add the following dependencies to your project. dependencies { implementation 'org.opensearch.client:opensearch-rest-client:1.1.0' implementation 'org.opensearch.client:opensearch-java:0.1.0' } You can now start your OpenSearch cluster. Security This code example uses basic credentials that come with the default OpenSearch configuration. If you're using the OpenSearch Java client with your own OpenSearch cluster, be sure to change the code to use your own credentials. The following sample demonstrates how to point your client to a keystore and set basic authentication credentials that can access a secure cluster. System.setProperty("javax.net.ssl.trustStore", "/full/path/to/keystore"); System.setProperty("javax.net.ssl.trustStorePassword", "password-to-keystore"); //Only for demo purposes. Don't specify your credentials in code. final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("admin", "admin")); Store data This section uses a class called IndexData, which is a simple Java class that stores basic data and methods. For your own OpenSearch cluster, you might find that you need a more robust class to store your data.
IndexData class static class IndexData { private String firstName; private String lastName; public IndexData(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } @Override public String toString() { return String.format("IndexData{first name='%s', last name='%s'}", firstName, lastName); } } Initialize the client with SSL and TLS enabled The following sample code initializes a client with SSL and TLS enabled: import org.apache.http.HttpHost; import org.apache.http.auth.AuthScope; import org.apache.http.auth.UsernamePasswordCredentials; import org.apache.http.client.CredentialsProvider; import org.apache.http.impl.client.BasicCredentialsProvider; import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.opensearch.client.RestClient; import org.opensearch.client.RestClientBuilder; import org.opensearch.clients.base.RestClientTransport; import org.opensearch.clients.base.Transport; import org.opensearch.clients.json.jackson.JacksonJsonpMapper; import org.opensearch.clients.opensearch.OpenSearchClient; import org.opensearch.clients.opensearch._global.IndexRequest; import org.opensearch.clients.opensearch._global.IndexResponse; import org.opensearch.clients.opensearch._global.SearchResponse; import org.opensearch.clients.opensearch.indices.*; import org.opensearch.clients.opensearch.indices.put_settings.IndexSettingsBody; import java.io.IOException; public class OpenSearchClientExample { public static void main(String[] args) { try{ System.setProperty("javax.net.ssl.trustStore", "/full/path/to/keystore"); System.setProperty("javax.net.ssl.trustStorePassword", "password-to-keystore"); //Only for demo purposes. Don't specify your credentials in code. final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("admin", "admin")); //Initialize the client with SSL and TLS enabled RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "https")). setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { @Override public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); } }).build(); Transport transport = new RestClientTransport(restClient, new JacksonJsonpMapper()); OpenSearchClient client = new OpenSearchClient(transport); OpenSearch client example This section has sample code that shows you how to create an index with non-default settings, add a document to the index, search for the document, delete the document, and finally delete the index. Create an index with non-default settings The following sample code creates an index with non-default settings. 
String index = "sample-index"; CreateRequest createIndexRequest = new CreateRequest.Builder().index(index).build(); client.indices().create(createIndexRequest); IndexSettings indexSettings = new IndexSettings.Builder().autoExpandReplicas("0-all").build(); IndexSettingsBody settingsBody = new IndexSettingsBody.Builder().settings(indexSettings).build(); PutSettingsRequest putSettingsRequest = new PutSettingsRequest.Builder().index(index).value(settingsBody).build(); client.indices().putSettings(putSettingsRequest); Index some data The following sample code adds a document to the index: IndexData indexData = new IndexData("first_name", "Bruce"); IndexRequest<IndexData> indexRequest = new IndexRequest.Builder<IndexData>().index(index).id("1").value(indexData).build(); client.index(indexRequest); Search for the document The following sample code searches for the document: SearchResponse<IndexData> searchResponse = client.search(s -> s.index(index), IndexData.class); for (int i = 0; i< searchResponse.hits().hits().size(); i++) { System.out.println(searchResponse.hits().hits().get(i).source()); } Delete the document The following sample code deletes the document: client.delete(b -> b.index(index).id("1")); Delete the index The following sample code deletes the index: DeleteRequest deleteRequest = new DeleteRequest.Builder().index(index).build(); DeleteResponse deleteResponse = client.indices().delete(deleteRequest); restClient.close(); } catch (IOException e){ System.out.println(e.toString()); } finally { try { if (client != null) { client.close(); } } catch (IOException e) { System.out.println(e.toString()); } } } } Complete code sample import org.apache.http.HttpHost; import org.apache.http.auth.AuthScope; import org.apache.http.auth.UsernamePasswordCredentials; import org.apache.http.client.CredentialsProvider; import org.apache.http.impl.client.BasicCredentialsProvider; import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.opensearch.client.RestClient; import org.opensearch.client.RestClientBuilder; import org.opensearch.clients.base.RestClientTransport; import org.opensearch.clients.base.Transport; import org.opensearch.clients.json.jackson.JacksonJsonpMapper; import org.opensearch.clients.opensearch.OpenSearchClient; import org.opensearch.clients.opensearch._global.IndexRequest; import org.opensearch.clients.opensearch._global.IndexResponse; import org.opensearch.clients.opensearch._global.SearchResponse; import org.opensearch.clients.opensearch.indices.*; import org.opensearch.clients.opensearch.indices.put_settings.IndexSettingsBody; import java.io.IOException; public class OpenSearchClientExample { public static void main(String[] args) { try{ System.setProperty("javax.net.ssl.trustStore", "/full/path/to/keystore"); System.setProperty("javax.net.ssl.trustStorePassword", "password-to-keystore"); //Only for demo purposes. Don't specify your credentials in code. final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("admin", "admin")); //Initialize the client with SSL and TLS enabled RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "https")). 
setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { @Override public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) { return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); } }).build(); Transport transport = new RestClientTransport(restClient, new JacksonJsonpMapper()); OpenSearchClient client = new OpenSearchClient(transport); //Create the index String index = "sample-index"; CreateRequest createIndexRequest = new CreateRequest.Builder().index(index).build(); client.indices().create(createIndexRequest); //Add some settings to the index IndexSettings indexSettings = new IndexSettings.Builder().autoExpandReplicas("0-all").build(); IndexSettingsBody settingsBody = new IndexSettingsBody.Builder().settings(indexSettings).build(); PutSettingsRequest putSettingsRequest = new PutSettingsRequest.Builder().index(index).value(settingsBody).build(); client.indices().putSettings(putSettingsRequest); //Index some data IndexData indexData = new IndexData("first_name", "Bruce"); IndexRequest<IndexData> indexRequest = new IndexRequest.Builder<IndexData>().index(index).id("1").value(indexData).build(); client.index(indexRequest); //Search for the document SearchResponse<IndexData> searchResponse = client.search(s -> s.index(index), IndexData.class); for (int i = 0; i< searchResponse.hits().hits().size(); i++) { System.out.println(searchResponse.hits().hits().get(i).source()); } //Delete the document client.delete(b -> b.index(index).id("1")); // Delete the index DeleteRequest deleteRequest = new DeleteRequest.Builder().index(index).build(); DeleteResponse deleteResponse = client.indices().delete(deleteRequest); restClient.close(); } catch (IOException e){ System.out.println(e.toString()); } finally { try { if (client != null) { client.close(); } } catch (IOException e) { System.out.println(e.toString()); } } } } Closed with #285
gharchive/issue
2021-11-19T01:20:42
2025-04-01T06:45:16.557250
{ "authors": [ "VijayanB", "keithhc2" ], "repo": "opensearch-project/documentation-website", "url": "https://github.com/opensearch-project/documentation-website/issues/282", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2323763093
Unable to execute the Agent flow. Describe the bug Hi, I have setup the opensearch cluster to run on single instance. I have created the embedding model and setup the pipeline to do the text-embedding on indexing the documents. I am able to query the index to get the records. To setup RAG, I have created the remote connector and able to create the model and deployed it. Using the VectorDBTool and MLModelTool, I have created the agent. On executing the error, I am observing the following error: "{"error":{"reason":"Invalid Request","details":"Error from remote service: ","type":"OpenSearchStatusException"},"status":400}" All the above, I have tried it in DevTools of the dashboard web. Related component Plugins To Reproduce Below is the list of configurations I have enabled: Cluster settings: _GET /cluster/settings { "persistent": { "plugins": { "ml_commons": { "rag_pipeline_feature_enabled": "true", "agent_framework_enabled": "true", "memory_feature_enabled": "true", "only_run_on_ml_node": "false", "trusted_connector_endpoints_regex": [ """^https://runtime\.sagemaker\..*[a-z0-9-]\.amazonaws\.com/.*$""", """^https://api\.openai\.com/.*$""", """^https://api\.cohere\.ai/.*$""", ], "model_access_control_enabled": "true", "native_memory_threshold": "99", "allow_registering_model_via_local_file": "true", "connector_access_control_enabled": "true", "allow_registering_model_via_url": "true" }, "index_state_management": { "template_migration": { "control": "-1" } } } }, "transient": {} } Embedding model settings: _GET /_plugins/ml/models/cyEKpY8BQusrTe_CRBnv { "name": "huggingface/sentence-transformers/all-MiniLM-L12-v2", "model_group_id": "4BQThY8B2f3NeJALZupD", "algorithm": "TEXT_EMBEDDING", "model_version": "7", "model_format": "TORCH_SCRIPT", "model_state": "DEPLOYED", "model_content_size_in_bytes": 134568911, "model_content_hash_value": "f8012a4e6b5da1f556221a12160d080157039f077ab85a5f6b467a47247aad49", "model_config": { "model_type": "bert", "embedding_dimension": 384, "framework_type": "SENTENCE_TRANSFORMERS", "all_config": """{"_name_or_path":"microsoft/MiniLM-L12-H384-uncased","attention_probs_dropout_prob":0.1,"gradient_checkpointing":false,"hidden_act":"gelu","hidden_dropout_prob":0.1,"hidden_size":384,"initializer_range":0.02,"intermediate_size":1536,"layer_norm_eps":1e-12,"max_position_embeddings":512,"model_type":"bert","num_attention_heads":12,"num_hidden_layers":12,"pad_token_id":0,"position_embedding_type":"absolute","transformers_version":"4.8.2","type_vocab_size":2,"use_cache":true,"vocab_size":30522}""" }, "created_time": 1716460864731, "last_updated_time": 1716797846671, "last_registered_time": 1716460884584, "last_deployed_time": 1716797846671, "auto_redeploy_retry_times": 0, "total_chunks": 14, "planning_worker_node_count": 1, "current_worker_node_count": 1, "planning_worker_nodes": [ "SgpCQj20SGWdpTIL0-bXGg" ], "deploy_to_all_nodes": true, "is_hidden": false } Index pipeline settings: _GET /ingest/pipeline/song_lyrics_pipeline { "song_lyrics_pipeline": { "description": "Song Lyrics pipeline", "processors": [ { "text_embedding": { "model_id": "cyEKpY8BQusrTe_CRBnv", "field_map": { "lyrics": "vector_lyrics" } } } ] } } Index settings: _GET /song_lyrics_index/settings { "song_lyrics_index": { "settings": { "index": { "replication": { "type": "DOCUMENT" }, "number_of_shards": "1", "provided_name": "song_lyrics_index", "knn.space_type": "cosinesimil", "default_pipeline": "song_lyrics_pipeline", "knn": "true", "creation_date": "1716470171990", "number_of_replicas": "1", "uuid": 
"hFv5slJlRr-9DVkYCxSH1A", "version": { "created": "136347827" } } } } } Index mappings: _GET /song_lyrics_index/mapping { "song_lyrics_index": { "mappings": { "properties": { "album": { "type": "text" }, "artist": { "type": "text" }, "id": { "type": "text" }, "lyrics": { "type": "text" }, "meta_data": { "properties": { "media": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "release_date": { "type": "date" }, "url": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "writers": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "year": { "type": "float" } } }, "vector_lyrics": { "type": "knn_vector", "dimension": 384 } } } } } Sample Index data - vector_lyrics field - purposely removed that data as it is huge (embedding field)...: { "_index": "song_lyrics_index", "_id": "1caeb10095ab4c419eae32d6dab2cb0e", "_score": 1, "_source": { "artist": "Connie Francis", "album": "Rock ‘n’ Roll Million Sellers", "meta_data": { "release_date": "1959-01-01", "year": 1959, "media": "[{'provider': 'youtube', 'start': 0, 'type': 'video', 'url': 'http://www.youtube.com/watch?v=V8x5cUFoDnU'}]", "writers": "[]", "url": "https://genius.com/Connie-francis-lipstick-on-your-collar-lyrics" }, "lyrics": "When you left me all alone At the record hop Told me you were going out For a soda pop You were gone for quite awhile Half an hour or more You came back And man, oh man This is what I saw Chorus Lipstick on your collar Told a tale on you Lipstick on your collar Said you were untrue Bet your bottom dollar You and I are through Cause lipstick on your collar Told a tale on you, yeah [Instrumental Interlude] You said it belonged to me Made me stop and think And then I noticed yours was red Mine was baby pink Who walked in but Mary Jane Lipstick all a mess Were you smoochin' my best friend? Guess the answer's yes Repeat Chorus Cause lipstick on your collar Told a tale on you, boy Told a tale on you, man Told a tale on you, yeah", "vector_lyrics": [...] } }, { "_index": "song_lyrics_index", "_id": "d10f9cf494bc4b688016929682598fc6", "_score": 1, "_source": { "artist": "Drifters", "album": "Under the Boardwalk", "meta_data": { "release_date": "1959-04-24", "year": 1959, "media": "[{'provider': 'youtube', 'start': 0, 'type': 'video', 'url': 'http://www.youtube.com/watch?v=noFS4_oEakI'}]", "writers": "[{'api_path': '/artists/19688', 'header_image_url': 'https://images.genius.com/469flyng83mlexo1nrpbo5qsn.500x418x1.jpg', 'id': 19688, 'image_url': 'https://images.genius.com/469flyng83mlexo1nrpbo5qsn.500x418x1.jpg', 'is_meme_verified': False, 'is_verified': False, 'name': 'Ben E. 
King', 'url': 'https://genius.com/artists/Ben-e-king'}, {'api_path': '/artists/41574', 'header_image_url': 'https://images.genius.com/f06bc78c0be8d48120eeecc8318fd359.204x264x1.jpg', 'id': 41574, 'image_url': 'https://images.genius.com/f06bc78c0be8d48120eeecc8318fd359.204x264x1.jpg', 'is_meme_verified': False, 'is_verified': False, 'name': 'Mike Stoller', 'url': 'https://genius.com/artists/Mike-stoller'}, {'api_path': '/artists/41573', 'header_image_url': 'https://images.genius.com/36315105955fb3074737590940b80fb5.300x272x1.jpg', 'id': 41573, 'image_url': 'https://images.genius.com/36315105955fb3074737590940b80fb5.300x272x1.jpg', 'is_meme_verified': False, 'is_verified': False, 'name': 'Jerry Leiber', 'url': 'https://genius.com/artists/Jerry-leiber'}, {'api_path': '/artists/557305', 'header_image_url': 'https://assets.genius.com/images/default_avatar_300.png?1582151316', 'id': 557305, 'image_url': 'https://assets.genius.com/images/default_avatar_300.png?1582151316', 'is_meme_verified': False, 'is_verified': False, 'name': 'George Treadwell', 'url': 'https://genius.com/artists/George-treadwell'}, {'api_path': '/artists/557304', 'header_image_url': 'https://assets.genius.com/images/default_avatar_300.png?1582151316', 'id': 557304, 'image_url': 'https://assets.genius.com/images/default_avatar_300.png?1582151316', 'is_meme_verified': False, 'is_verified': False, 'name': 'Lover Patterson', 'url': 'https://genius.com/artists/Lover-patterson'}]", "url": "https://genius.com/The-drifters-there-goes-my-baby-lyrics" }, "lyrics": "[Intro] (Bo-bo) (doo-doot-doo-doo-doo-doo) (There she goes) (doo-doot-doo-doo-doo-doo) (There she goes) (doo-doot-doo-doo-doo-doo) (Bo-bo) (doo-doot-doo-doo) (Bo-bo) (doo-doo-doo-doo) There goes my baby Movin' on down the line Wonder where, wonder where Wonder where she is bound? I broke her heart And made her cry Now I'm alone, so all alone What can I do, what can I do? (There goes my baby) Whoa-oh-oh-oh-oh (There goes my baby) Yeah, yeah, yeah, yeah (There goes my baby) Whoa-oh-oh-oh (There she goes) Yeah! (There she goes) I wanna know if she loved me Did she really love me? Was she just playing Me for a fool? I wonder why she left me Why did she leave me So all alone So all alone? I was gonna tell her that I loved her And that I need her Beside my side To be my guide I wanna know where is my (doo-doot-doo-doo-doo-doo) Where is my baby? (doo-doot-doo-doo-doo-doo) I want my baby (doo-doot-doo-doo-doo-doo) I need my baby Yeah, whoa-oh-oh (There goes my baby) Whoa-oh-oh-oh-oh (There goes my baby) Whoa-oh-oh-oh-oh There goes my baby) Whoa-oh-oh-oh-oh (There goes my baby) Whoa-oh-oh-oh-oh", "vector_lyrics": [...] } }, { "_index": "song_lyrics_index", "_id": "6c6eb041cf4f4ce7b19207a6ffc452b1", "_score": 1, "_source": { "artist": "Elvis Presley", "album": "50,000,000 Elvis Fans Can’t Be Wrong (Elvis’ Gold Records, Vol. 
2)", "meta_data": { "release_date": "1959-06-23", "year": 1959, "media": "[{'provider': 'youtube', 'start': 0, 'type': 'video', 'url': 'http://www.youtube.com/watch?v=HnrZ7MQsT0s'}]", "writers": "[{'api_path': '/artists/1014527', 'header_image_url': 'https://images.genius.com/d2fcd79abf3d5d805bb61d555cb141a4.200x200x1.jpg', 'id': 1014527, 'image_url': 'https://images.genius.com/d2fcd79abf3d5d805bb61d555cb141a4.200x200x1.jpg', 'is_meme_verified': False, 'is_verified': False, 'name': 'Sidney Wyche', 'url': 'https://genius.com/artists/Sidney-wyche'}, {'api_path': '/artists/368580', 'header_image_url': 'https://images.genius.com/33fad1faa4fef3a4605899e42fa642fc.220x220x1.jpg', 'id': 368580, 'image_url': 'https://images.genius.com/33fad1faa4fef3a4605899e42fa642fc.220x220x1.jpg', 'is_meme_verified': False, 'is_verified': False, 'name': 'Aaron Schroeder', 'url': 'https://genius.com/artists/Aaron-schroeder'}]", "url": "https://genius.com/Elvis-presley-a-big-hunk-o-love-lyrics" }, "lyrics": "[Intro] Hey baby, I ain't asking much of you No no no no no no no no baby, I ain't asking much of you Just a big-a big-a big-a hunk of love will do [Verse 1] Don't be a stingy little mama You're about to starve me half to death Well you can spare a kiss or two and Still have plenty left, no no no Baby, I ain't asking much of you Just a big-a big-a big-a hunk of love will do [Verse 2] You're just a natural born beehive Filled with honey to the top Well I ain't greedy baby All I want is all you got, no no no I ain't asking much of you Just a big-a big-a big-a hunk of love will do [Verse 3] I got wishbone in my pocket I got a rabbit's foot around my wrist You know I'd have all the things these lucky charms could bring If you'd give me just one sweet kiss, no no no no no no no Baby, I ain't asking much of you Just a big-a big-a big-a hunk of love will do", "vector_lyrics": [ ... 
] } }, { "_index": "song_lyrics_index", "_id": "6555a24228c74579ac9fe7c6737adb54", "_score": 1, "_source": { "artist": "Johnny and The Hurricanes", "album": "Red River Rock", "meta_data": { "release_date": null, "year": 1959, "media": "[]", "writers": "[]", "url": "https://genius.com/Johnny-and-the-hurricanes-red-river-rock-lyrics" }, "lyrics": """Unknown Miscellaneous Paddy And The Whale PADDY AND THE WHALE Paddy O'Brian left Ireland in glee; He had a strong notion old England to see; He shipped in the Nellie, for England was bound And the whiskey he drank made his head go around Cho: Laddy whack, fol de dol, fol de rol I dee dee * O, Paddy been never sailing before; It made his heart ache when he heard the loud roar; For the glance of his eye, a whale he did spy: "I'm going to be ate," says Paddy,"by-and-by" O, Paddy run forward and caught hold of the mast He grasped his arms round and there he held fast The boat gave a tip, and, losing his grip Down in the whale's belly poor Paddy did slip He was down in the whale six months and five days Till luck one day to his throat he did pop The whale give a snort and then give a blow And out on the land poor Paddy did go O, Paddy is landed and safe on the shore; He swears that he 'll never go to sea any more The next time he wishes old England to see It will be when the railroad runs over the sea Note: Alternate chorus I've heard is: Caterwaulin', Tarpaulin', Harpoonin' and all Tune is another Derry Down variant RG From Ballads and Sea Songs of Newfoundland, Greenleaf Collected from John Edison, Fleur de Lys, 1929 @sailor @fish Filename[ PADWHAL Play.exe DERRDWN2 RG ===DOCUMENT BOUNDARY===""", "vector_lyrics": [ ... ] } } Remote Connector details: _GET /_plugins/ml/connectors/WVgvuY8Ba1cmZFt-2qvW { "name": "DevToolsRemoteConnector7", "version": "1", "description": "This is remote connector configurator created from Open Search Lib at runtime", "protocol": "http", "parameters": { "endpoint": "****************", "deployment_name": "*******", "temperature": "0.0", "model": "****", "api_version": "**********" }, "actions": [ { "action_type": "PREDICT", "method": "POST", "url": "https://${parameters.endpoint}/openai/deployments/${parameters.deployment_name}/chat/completions?api-version=${parameters.api_version}", "headers": { "api-key": "${credential.openAI_key}" }, "request_body": """{ "messages": "${parameters.messages}", "temperature": ${parameters.temperature} }""" } ], "owner": { "name": "admin", "backend_roles": [ "admin" ], "roles": [ "own_index", "all_access" ], "custom_attribute_names": [], "user_requested_tenant": "__user__" }, "access": "private" } Remote model details _GET /_plugins/ml/models/XlgwuY8Ba1cmZFt-Nquz { "name": "devtools_remote_model6", "model_group_id": "OCW-hY8BT-AsUikAscDk", "algorithm": "REMOTE", "model_version": "8", "description": "Description about the remote model", "model_state": "DEPLOYED", "created_time": 1716798895794, "last_updated_time": 1716798975296, "last_deployed_time": 1716798975296, "auto_redeploy_retry_times": 0, "planning_worker_node_count": 1, "current_worker_node_count": 1, "planning_worker_nodes": [ "SgpCQj20SGWdpTIL0-bXGg" ], "deploy_to_all_nodes": true, "is_hidden": false, "connector_id": "WVgvuY8Ba1cmZFt-2qvW" } Agent Configuration: _GET /_plugins/ml/agents/ZVgwuY8Ba1cmZFt-3qvO { "name": "Song Lyrics Agent opensearch-project/OpenSearch#1", "type": "flow", "description": "Description about the agent", "tools": [ { "type": "VectorDBTool", "parameters": { "input": "${parameters.messages}", 
"source_field": """["artist","album","meta_data"]""", "embedding_field": "vector_lyrics", "index": "song_lyrics_index", "model_id": "cyEKpY8BQusrTe_CRBnv", "k": "4" }, "include_output_in_agent_response": false }, { "type": "MLModelTool", "description": "Tool for answering the question", "parameters": { "model_id": "XlgwuY8Ba1cmZFt-Nquz", "message": """This is the context:\n${parameters.VectorDBTool.output}\nThis is your question:\n${parameters.messages}\n. Give an answer based on the given context\n.""", "reponse_field": "result" }, "include_output_in_agent_response": false } ], "created_time": 1716798938827, "last_updated_time": 1716798938827, "is_hidden": false } Execute the agent with the below: POST /_plugins/_ml/agents/ZVgwuY8Ba1cmZFt-3qvO/_execute { "parameters" : { "messages" : "List the songs which are very peppy beat songs from the lyrics" } } Following response is observed: "{\"error\":{\"reason\":\"Invalid Request\",\"details\":\"Error from remote service: \",\"type\":\"OpenSearchStatusException\"},\"status\":400}" Expected behavior As per the request made to agent, the syntensized response has to be the output. Instead error is observed. Additional Details Plugins Please list all plugins currently enabled. Screenshots If applicable, add screenshots to help explain your problem. uname -a Darwin EPINCHEW00ED 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020 arm64 Additional context Add any other context about the problem here. Catch All Triage - 1 2 3 4 5 6
gharchive/issue
2024-05-27T11:00:38
2025-04-01T06:45:16.578223
{ "authors": [ "admin-techheralds", "dblock" ], "repo": "opensearch-project/ml-commons", "url": "https://github.com/opensearch-project/ml-commons/issues/2482", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1467079967
[FEATURE] Support GPU servers for model serving framework We released the model serving framework in 2.4 as an experimental feature (doc link). In 2.4, we have limited support for running PyTorch models on GPU ML nodes. In 2.5, we plan to support popular GPU instances: NVIDIA GPU, AWS Inferentia Instance. We have added a doc to guide users on how to prepare a GPU ML node to run the model serving framework, in regard to this issue #677, so we think we can close it.
gharchive/issue
2022-11-28T20:56:57
2025-04-01T06:45:16.582502
{ "authors": [ "b4sjoo", "ylwu-amzn" ], "repo": "opensearch-project/ml-commons", "url": "https://github.com/opensearch-project/ml-commons/issues/576", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1012432516
Ensure DCO Workflow Check Coming from https://github.com/opensearch-project/project-meta/issues/17 A Developer Certificate of Origin is required on commits in the OpenSearch Project. See doc.yml for an example workflow. Ensure CONTRIBUTING.md has a section on the DCO per the project template. [x] DCO Check Workflow [x] CONTRIBUTING.md DCO Section Resolved with https://github.com/opensearch-project/opensearch-ci/pull/16
gharchive/issue
2021-09-30T17:21:24
2025-04-01T06:45:16.585864
{ "authors": [ "peternied" ], "repo": "opensearch-project/opensearch-ci", "url": "https://github.com/opensearch-project/opensearch-ci/issues/19", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1254335214
[Site] add test explore page Description Basic HTML and JS to explore known tests. There can be improvements made, but this is just initial work with hardcoded values to help plugin developers find their results. If this is merged and GitHub Pages is enabled for this repo, then it should be accessible via https://opensearch-project.github.io/opensearch-dashboards-functional-test/site/ Signed-off-by: Kawika Avilla kavilla414@gmail.com Issues Resolved https://github.com/opensearch-project/opensearch-dashboards-functional-test/issues/217 Check List [x] Commits are signed per the DCO using --signoff By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. Click here to test out the site: https://kavilla.github.io/opensearch-dashboards-functional-test/site/
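A sketch of the kind of hardcoded page the PR describes; the suite names and the results URL below are placeholders, not the PR's actual values:

```typescript
// Render links that take plugin developers to their suite's test results.
const knownSuites: string[] = ['alerting', 'anomaly-detection', 'index-management'];

function renderTestLinks(container: HTMLElement): void {
  for (const suite of knownSuites) {
    const link = document.createElement('a');
    link.textContent = `${suite} test results`;
    link.href = `https://example.org/test-results?suite=${encodeURIComponent(suite)}`; // placeholder URL
    container.appendChild(link);
    container.appendChild(document.createElement('br'));
  }
}

renderTestLinks(document.getElementById('results')!);
```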
gharchive/pull-request
2022-05-31T19:33:49
2025-04-01T06:45:16.589677
{ "authors": [ "kavilla" ], "repo": "opensearch-project/opensearch-dashboards-functional-test", "url": "https://github.com/opensearch-project/opensearch-dashboards-functional-test/pull/235", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2367138573
[RELEASE] Release v. Next Can we get a release of 2.10.5 with the fix for https://github.com/opensearch-project/opensearch-java/issues/1042? That would be a 2.11.1; we did a 2.11 release a few days ago. ETA? I will try to get to this in the next couple of days unless someone beats me to it. There are a bunch of open bugs and issues in this project; we could use some help! @BrendonFaleiro v2.11.1 has been released to Maven, including this fix. Please let me know if you run into any issues with it!
gharchive/issue
2024-06-21T19:34:34
2025-04-01T06:45:16.592181
{ "authors": [ "BrendonFaleiro", "Xtansia", "dblock" ], "repo": "opensearch-project/opensearch-java", "url": "https://github.com/opensearch-project/opensearch-java/issues/1047", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1605944823
[BUG] Issue with consuming the new version of opensearch-py 2.2.0 What is the bug? When consuming the new version of opensearch-py 2.2.0 in opensearch-dsl-py, the workflows are failing for some Python versions: https://github.com/opensearch-project/opensearch-dsl-py/actions/runs/4309402957/jobs/7516755816 Error: ERROR: No matching distribution found for certifi>=2022.12.07 (from opensearch-py>=2.0.0->opensearch-dsl==2.0.1) How can one reproduce the bug? See the workflow link https://github.com/opensearch-project/opensearch-dsl-py/actions/runs/4309402957/jobs/7516755816. What is the expected behavior? The workflows should succeed. What is your host/environment? Operating system, version. Do you have any screenshots? If applicable, add screenshots to help explain your problem. Do you have any additional context? Regarding PR #295 @saimedhi Can you take a look? @saimedhi Can you take a look? OK, certifi seems to no longer support Python 2.7. Should I exclude it from CI runs if confirmed? Please double-check me. certifi seems to no longer support Python 2.7. Should I exclude it from CI runs if confirmed? Please double-check me. Hello Yury-Fridlyand, as you said, only Python 2.7 is failing. But I think Python 2.7 supports certifi, because tests in opensearch-py are not failing for the Python 2.7 version. There might be something else causing this failure. The certifi page says Requires: Python >=3.6 Got it. I think we need to skip CI runs for the Python 2.7 version. @Yury-Fridlyand Are you working on it? If not, I will make the changes :) @saimedhi, Will do Fixed in https://github.com/opensearch-project/opensearch-dsl-py/pull/105 @VachaShah, I think we can close this issue This still breaks older Python 2 versions when using the command: pip install opensearch-py Fixing the version to 2.1.1 is a workaround but not ideal. Changes in the setup file seem to be ignored in older Python versions. I believe I have the last pip version to support Python 2 installed. Obviously Python 2 is EOL, but I'm unable to upgrade in this instance. (qa_venv) [root@env_1r1dc6n-node1 ~]# pip --version pip 20.3.4 from /root/qa_venv/lib/python2.7/site-packages/pip (python 2.7) (qa_venv) [root@env_1r1dc6n-node1 ~]# python --version Python 2.7.5 (qa_venv) [root@env_1r1dc6n-node1 ~]# pip show opensearch-py DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality. Name: opensearch-py Version: 2.1.1 Summary: Python low-level client for OpenSearch Home-page: https://github.com/opensearch-project/opensearch-py Author: Aleksei Atavin, Denis Zalevskiy, Rushi Agrawal, Shephali Mittal Author-email: axeo@aiven.io, dez@aiven.io, rushi.agr@gmail.com, shephalm@amazon.com License: Apache-2.0 Location: /root/qa_venv/lib/python2.7/site-packages Requires: urllib3, certifi, requests Required-by: (qa_venv) [root@env_1r1dc6n-node1 ~]# pip install -U opensearch-py DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021.
More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality. Collecting opensearch-py Using cached opensearch_py-2.2.0-py2.py3-none-any.whl (291 kB) Requirement already satisfied, skipping upgrade: urllib3<2,>=1.21.1 in ./qa_venv/lib/python2.7/site-packages (from opensearch-py) (1.26.15) Collecting python-dateutil Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) Collecting ipaddress; python_version < "3.3" Using cached ipaddress-1.0.23-py2.py3-none-any.whl (18 kB) ERROR: Could not find a version that satisfies the requirement certifi>=2022.12.07 (from opensearch-py) (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 1.0.0, 1.0.1, 14.5.14, 2015.4.28, 2015.9.6, 2015.9.6.1, 2015.9.6.2, 2015.11.20, 2015.11.20.1, 2016.2.28, 2016.8.2, 2016.8.8, 2016.8.31, 2016.9.26, 2017.1.23, 2017.4.17, 2017.7.27, 2017.7.27.1, 2017.11.5, 2018.1.18, 2018.4.16, 2018.8.13, 2018.8.24, 2018.10.15, 2018.11.29, 2019.3.9, 2019.6.16, 2019.9.11, 2019.11.28, 2020.4.5, 2020.4.5.1, 2020.4.5.2, 2020.6.20, 2020.11.8, 2020.12.5, 2021.5.30, 2021.10.8) ERROR: No matching distribution found for certifi>=2022.12.07 (from opensearch-py) Related: a lot of this mess is solved with virtual environments, I believe; I've been using pipenv for projects. The problem seems to be related to the certifi package and its Python 2.7 support. opensearch-py requires a newer version of certifi (certifi>=2022.12.07), which is not supported in Python 2.7. As a result, workflows do not work for some Python versions. The error is related to the compatibility issue of the certifi package with Python 2.7 when installing opensearch-py. It is recommended to upgrade to a newer version of Python. Shall we close this? Anything we can do in the client? Shall we close this? Anything we can do in the client? We can close this issue. But I just have a doubt: this PR fixes GitHub Actions, but local tests are still not fixed. We need to exclude CI tests for Python versions less than 3.6 in the nox file. Can we still do that? Or has the final version of opensearch-dsl-py already been released?
gharchive/issue
2023-03-02T01:08:44
2025-04-01T06:45:16.604490
{ "authors": [ "ReinGrad", "VachaShah", "Yury-Fridlyand", "dblock", "janderson-cloudian", "saimedhi" ], "repo": "opensearch-project/opensearch-py", "url": "https://github.com/opensearch-project/opensearch-py/issues/309", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1519729497
OUI combo box refine Description Added the prop clearOnBlur to make the combo box input text clear when the user focuses out of the text box. Issues Resolved #127 Check List [ ] New functionality includes testing. [ ] New functionality has been documented. [x] All tests pass [x] yarn lint [x] yarn test-unit [x] Commits are signed per the DCO using --signoff By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. @AbhishekReddy1127 Is it possible to put a demo together of the changes? I would like to interact with it before approving. Love the video, very nice!
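Usage looks roughly like the following (a sketch; the option data is made up, and the import path follows OUI conventions):

```tsx
import React, { useState } from 'react';
import { OuiComboBox, OuiComboBoxOptionOption } from '@opensearch-project/oui';

export const ClearOnBlurExample = () => {
  const [selected, setSelected] = useState<OuiComboBoxOptionOption[]>([]);
  return (
    <OuiComboBox
      clearOnBlur={true} // new prop: clears uncommitted search text when focus leaves the input
      options={[{ label: 'Option A' }, { label: 'Option B' }]}
      selectedOptions={selected}
      onChange={(options) => setSelected(options)}
    />
  );
};
```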
gharchive/pull-request
2023-01-04T23:28:29
2025-04-01T06:45:16.608673
{ "authors": [ "AbhishekReddy1127", "KrooshalUX", "ashwin-pc" ], "repo": "opensearch-project/oui", "url": "https://github.com/opensearch-project/oui/pull/183", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
860090698
OpenSearch fork changes Fixes #: Description of changes: OpenSearch fork changes Tests: ./gradlew build locally If new tests are added, how long do the new ones take to complete Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. It looks like this PR breaks the ShardBulkDocs metrics... Now it's always displaying weird / wrong values 😢
gharchive/pull-request
2021-04-16T19:01:59
2025-04-01T06:45:16.612813
{ "authors": [ "craph", "sruti1312" ], "repo": "opensearch-project/performance-analyzer-rca", "url": "https://github.com/opensearch-project/performance-analyzer-rca/pull/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1201873247
Fixes several occurrences of "the the" Minor/trivial grammatical fix, removing occurrences of "the the" in prose. By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License. Withdrawing PR until I figure out how to navigate the process correctly. @rbowen We do want this! To fix the DCO, commit with -s and have your git name/email set locally; that's about it.
gharchive/pull-request
2022-04-12T13:42:36
2025-04-01T06:45:16.614515
{ "authors": [ "dblock", "rbowen" ], "repo": "opensearch-project/project-website", "url": "https://github.com/opensearch-project/project-website/pull/761", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1695632736
Add a details button to open the findings flyout from the correlations page. Description Add a details button to open the findings flyout from the correlations page. Issues Resolved Resolves #564 Screenshots Check List [ ] Commits are signed per the DCO using --signoff By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here. Codecov Report Merging #572 (f3e04cf) into main (3d95270) will increase coverage by 0.00%. The diff coverage is n/a. @@ Coverage Diff @@ ## main #572 +/- ## ======================================= Coverage 32.66% 32.66% ======================================= Files 135 136 +1 Lines 4020 4059 +39 Branches 649 652 +3 ======================================= + Hits 1313 1326 +13 - Misses 2567 2594 +27 + Partials 140 139 -1 see 11 files with indirect coverage changes
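The pattern the PR describes, a per-row details button that reuses the findings flyout, looks roughly like this (component choice and handler names are assumptions, not the plugin's actual code):

```tsx
import React from 'react';
import { EuiButtonIcon, EuiToolTip } from '@elastic/eui';

export const FindingDetailsButton = ({
  finding,
  openFlyout,
}: {
  finding: object;
  openFlyout: (finding: object) => void;
}) => (
  <EuiToolTip content="View details">
    <EuiButtonIcon
      iconType="expand"
      aria-label="View finding details"
      onClick={() => openFlyout(finding)} // opens the same flyout used on the findings page
    />
  </EuiToolTip>
);
```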
gharchive/pull-request
2023-05-04T09:33:01
2025-04-01T06:45:16.620966
{ "authors": [ "codecov-commenter", "jovancacvetkovic" ], "repo": "opensearch-project/security-analytics-dashboards-plugin", "url": "https://github.com/opensearch-project/security-analytics-dashboards-plugin/pull/572", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1589863179
Release version 2.6.0

This is a component issue for 2.6.0. Coming from opensearch-build #3081. Please follow the following checklist. Please refer to the DATES in that post.

How to use this issue

This Component Release Issue
This issue captures the state of the OpenSearch release on the component/plugin level; its assignee is responsible for driving the release. Please contact them or @mention them on this issue for help. Any release-related work can be linked to this issue or added as comments to create visibility into the release status.

Release Steps
There are several steps to the release process; these steps are completed as the whole component release, and components that are behind present risk to the release. The component owner resolves the tasks in this issue and communicates with the overall release owner to make sure each component is moving along as expected. Steps have completion dates for coordinating efforts between the components of a release; components can start as soon as they are ready, far in advance of a future release. The most current set of dates is on the overall release issue linked at the top of this issue.

The Overall Release Issue
Linked at the top of this issue, the overall release issue captures the state of the entire OpenSearch release, including references to this issue; the release owner, which is the assignee, is responsible for communicating the release status broadly. Please contact them or @mention them on that issue for help.

What should I do if my plugin isn't making any changes?
If including changes in this release, increment the version on the 2.x branch to 2.6.0 for Min/Core, and 2.6.0.0 for components. Otherwise, keep the version number unchanged for both.

Preparation
[x] Assign this issue to a release owner.
[x] Finalize scope and feature set and update the Public Roadmap.
[x] All the tasks in this issue have been reviewed by the release owner.
[x] Create, update, triage and label all features and issues targeted for this release with v2.6.0.

CI/CD
[x] All code changes for 2.6.0 are complete.
[x] Ensure working and passing CI.
[x] Check that this repo is included in the distribution manifest.

Pre-Release
[x] Update to the 2.6 release branch in the distribution manifest.
[x] Increment the version on the parent branch to the next development iteration.
[x] Gather, review and publish release notes following the rules and backport them to the release branch. git-release-notes may be used to generate release notes from your commit history.
[x] Confirm that all changes for 2.6.0 have been merged.
[x] Add this repo to the manifest for the next developer iteration.

Release Testing
[x] Find/fix bugs using the latest tarball and docker image provided in the parent release issue and update the release notes if necessary.
[x] Code Complete: Test within the distribution, ensuring integration, backwards compatibility, and performance tests pass.
[x] Sanity Testing: Sanity testing and fixing of critical issues found.
[x] File issues for all intermittent test failures.

Release
[x] Complete documentation.
[x] Verify all issues labeled for this release are closed or labeled for the next release.
[x] Verify the release date mentioned in release notes is correct and matches the actual release date.

Post Release
[x] Prepare for an eventual security fix development iteration by incrementing the version on the release branch to the next eventual patch version.
[x] Add this repo to the manifest of the next eventual security patch version.
[x] Suggest improvements to this template.
[ ] Conduct a retrospective, and publish its results.

@cwperks I think we are done with this release issue, can you confirm / close out?
gharchive/issue
2023-02-17T19:31:42
2025-04-01T06:45:16.634630
{ "authors": [ "gaiksaya", "peternied" ], "repo": "opensearch-project/security", "url": "https://github.com/opensearch-project/security/issues/2446", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1070045790
Add unit tests for backend models

Description
- add unit tests for backend models
- fix transport communication for ObservabilityObjectDoc

Issues Resolved
[List any issues this PR will resolve]

Check List
[ ] New functionality includes testing.
[ ] All tests pass, including unit test, integration test and doctest
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] New functionality has user manual doc added
[x] Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Codecov Report
Merging #283 (503ed47) into main (140f2e1) will increase coverage by 7.32%. The diff coverage is 100.00%.

@@              Coverage Diff               @@
##               main     #283      +/-    ##
=============================================
+ Coverage     56.46%   63.78%    +7.32%
- Complexity      197      213       +16
=============================================
  Files            37       37
  Lines          2026     2027        +1
  Branches        231      231
=============================================
+ Hits           1144     1293      +149
+ Misses          727      578      -149
- Partials        155      156        +1

Flag                        Coverage Δ
opensearch-observability    63.78% <100.00%> (+7.32%) :arrow_up:

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files                                           Coverage Δ
...arch/observability/model/ObservabilityObjectDoc.kt    85.13% <100.00%> (+28.97%) :arrow_up:
...ability/model/ObservabilityObjectDataProperties.kt    61.53% <0.00%> (+3.84%) :arrow_up:
...opensearch/observability/model/OperationalPanel.kt    74.43% <0.00%> (+18.18%) :arrow_up:
...lin/org/opensearch/observability/model/Notebook.kt    79.47% <0.00%> (+18.94%) :arrow_up:
...n/org/opensearch/observability/model/SavedQuery.kt    80.60% <0.00%> (+19.39%) :arrow_up:
...in/org/opensearch/observability/model/Timestamp.kt    83.33% <0.00%> (+21.42%) :arrow_up:
...ensearch/observability/model/SavedVisualization.kt    85.07% <0.00%> (+25.37%) :arrow_up:

Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 140f2e1...503ed47. Read the comment docs.
gharchive/pull-request
2021-12-02T22:41:13
2025-04-01T06:45:16.653700
{ "authors": [ "codecov-commenter", "joshuali925" ], "repo": "opensearch-project/trace-analytics", "url": "https://github.com/opensearch-project/trace-analytics/pull/283", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
721824761
CLI delete behavior undefined when more than one OSM w same name on cluster

This GitHub Issue is a request to improve the osm mesh delete experience. The CLI tool must provide additional context for the user to understand exactly which OSM deployment is going to be deleted.

Current State
For example, on the same cluster I have 2 OSM controllers installed. The OSMs have the same mesh name but in different namespaces (perhaps that should not be allowed):

$ ./bin/osm mesh list
MESH NAME   NAMESPACE
osm         mesh-system
osm         osm-system

Issuing the osm mesh delete command, I am asked whether I want to delete osm, but I am unclear exactly which instance is going to be deleted:

$ ./bin/osm mesh delete
Uninstall OSM [mesh name: osm] ? [y/n]: y
OSM [mesh name: osm] uninstalled

Outcome

$ ./bin/osm mesh list
MESH NAME   NAMESPACE
osm         mesh-system

It seems that the osm deleted was the one in the osm-system namespace.

Suggested Improvement
To improve this I suggest we change the behaviour of the osm mesh delete command. It should list all OSM meshes on the current Kubernetes cluster and ask for a particular index / row number to be deleted. A sketch of such a prompt is shown after this example. For example:

$ osm mesh delete
There are 2 OSM controller deployments running on cluster osm-abcd.westcentralus.azmk8s.io
N:  MESH NAME   NAMESPACE     CONTROLLER PODS
1:  osm         mesh-system   osm-controller-abc,osm-controller-efg
2:  osm         osm-system    osm-controller-xyz
Which one [1, 2]:
Uninstall OSM [mesh name: osm in namespace mesh-system] ? [y/n]: y
OSM [mesh name: osm in namespace mesh-system] uninstalled

I would like to see:
- the cluster which I am about to affect
- the OSM name
- the namespaces where these OSM controllers live
- the pod(s) of the OSM controller to be deleted

With this context there would be no ambiguity about which mesh I am deleting.

Scope (please mark with X where applicable)
New Functionality
[ ] Install
[ ] SMI Traffic Access Policy
[ ] SMI Traffic Specs Policy
[ ] SMI Traffic Split Policy
[ ] Permissive Traffic Policy
[ ] Ingress
[ ] Egress
[ ] Envoy Control Plane
[ ] CLI Tool
[x] Metrics
[ ] Certificate Management
[ ] Sidecar Injection
[ ] Logging
[ ] Debugging
[ ] Tests
[ ] CI System
[ ] Project Release
[ ]

I don't think this should be possible in the current state because the mutating webhook name relies on the mesh name and it is a cluster-scoped resource. If you try to install a control plane with the same name, the installation will fail. However, this issue exposes ambiguity around mesh name that can be cleaned up. What do you think about adding validation to ensure that a control plane should be unique to a cluster? cc/ @shashankram @ksubrmnn

+1 on "adding validation to ensure that a control plane should be unique to a cluster" - would solve this issue
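For concreteness, here is a minimal Go sketch of the interactive selection proposed above. It is purely illustrative: the meshInfo type, the prompt wording, and the function names are assumptions, not the actual osm CLI code.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// meshInfo is a hypothetical stand-in for whatever struct `osm mesh list` builds.
type meshInfo struct {
	Name, Namespace, ControllerPods string
}

// pickMesh prints a numbered table of meshes and returns the one the user chose.
func pickMesh(meshes []meshInfo) (meshInfo, error) {
	fmt.Printf("There are %d OSM controller deployments running on this cluster\n", len(meshes))
	fmt.Println("N:  MESH NAME  NAMESPACE    CONTROLLER PODS")
	for i, m := range meshes {
		fmt.Printf("%d:  %-9s  %-11s  %s\n", i+1, m.Name, m.Namespace, m.ControllerPods)
	}
	fmt.Printf("Which one [1, %d]: ", len(meshes))
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return meshInfo{}, err
	}
	n, err := strconv.Atoi(strings.TrimSpace(line))
	if err != nil || n < 1 || n > len(meshes) {
		return meshInfo{}, fmt.Errorf("invalid selection %q", strings.TrimSpace(line))
	}
	return meshes[n-1], nil
}

func main() {
	m, err := pickMesh([]meshInfo{
		{"osm", "mesh-system", "osm-controller-abc,osm-controller-efg"},
		{"osm", "osm-system", "osm-controller-xyz"},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("Uninstall OSM [mesh name: %s in namespace %s] ? [y/n]: ", m.Name, m.Namespace)
}
```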
gharchive/issue
2020-10-14T22:05:58
2025-04-01T06:45:16.665567
{ "authors": [ "draychev", "michelleN" ], "repo": "openservicemesh/osm", "url": "https://github.com/openservicemesh/osm/issues/1839", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2390543771
lca: cnf: wait for ibu idle stage after finalize cgu

After the finalize CGU reports the Completed state, wait for the IBU Idle condition to report True as well. Also remove 'Wait until node is reporting as Ready' as it can sometimes prematurely fail if the kube api is not available.

Looks good to me. /Peri M

Few nit comments from my side, otherwise looks good to me. /Peri M

Looks good to me. /Peri M
gharchive/pull-request
2024-07-04T10:40:08
2025-04-01T06:45:16.673214
{ "authors": [ "mcornea", "mpmaruthu" ], "repo": "openshift-kni/eco-gotests", "url": "https://github.com/openshift-kni/eco-gotests/pull/79", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
977792260
New role gpu_operator_get_csv_version and other improvements

- entitlement_test_wait_deployment: do not check openssl x509 -enddate if entitlement not deployed
- gpu_operator_deploy_from_operatorhub: deploy_from_bundle: give 5min to deploy instead of 2min
- gpu_operator_get_csv_version: new role to simplify version-conditional execution of ansible tests
- toolbox: _common.py: allow overriding ARTIFACT_EXTRA_LOGS_DIR
- toolbox/gpu-operator/must-gather.sh: capture GPU Operator CSV version
- CI: GPU Operator: store in ${ARTIFACT_DIR} the operator/cluster/ci-artifacts version, for display in ci-dashboard
- nfd_operator_undeploy_from_operatorhub: undeploy from the right namespace
- gpu_operator_run_gpu-burn: extend wait time to 10min
- CI: ci_entrypoint_gpu-operator.sh: mute 'entitlement undeploy' finalizer

/test gpu-operator-e2e

Other than comments, looks good

/test gpu-operator-e2e
/test gpu-operator-e2e
/test gpu-operator-e2e
/test gpu-operator-e2e

/lgtm

thanks @omer, /approve

> On Wed, Aug 25, 2021 at 11:46 AM Omer Tuchfeld wrote: /lgtm

/approve cancel
there is a race condition in gpu-operator-e2e caused by 013__entitlement__undeploy (in presubmit-operatorhub), which deletes the entitlement but doesn't wait for all the nodes to be rebooted. After that, in presubmit-master, 001__entitlement__test_cluster passes, but the driver deployment fails because the entitlement is gone ...

> there is a race condition in gpu-operator-e2e caused by 013__entitlement__undeploy (in presubmit-operatorhub), which deletes the entitlement, but doesn't wait for all the nodes to be rebooted. After that, in presubmit-master, 001__entitlement__test_cluster passes, but the driver deployment fails because the entitlement is gone ...

I added a commit to wait for the proper undeployment of the MachineConfig before running the presubmit-master test.

/test gpu-operator-e2e
/lgtm
/test gpu-operator-e2e

@omertuc any clue why `| jq` works on my system, but not in the CI (it needs `| jq .`)? version mismatch?
gharchive/pull-request
2021-08-24T07:26:31
2025-04-01T06:45:16.685591
{ "authors": [ "kpouget", "omertuc" ], "repo": "openshift-psap/ci-artifacts", "url": "https://github.com/openshift-psap/ci-artifacts/pull/244", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
274543251
Update copr link in Makefile comments

Describe what this PR does and why we need it:
Update a comment in the Makefile. The non-suffixed repo is redundant and I want to remove it. Keeping it up to date doubles the number of builds we're doing, and copr is already under a crushing load, so we're not helping them or us to enjoy a stable service.

Changes Unknown when pulling 590e8dcf6b060bb9f441c94eb3dda6a112132381 on jmontleon:update-copr-link into openshift:master.
gharchive/pull-request
2017-11-16T14:52:26
2025-04-01T06:45:16.688181
{ "authors": [ "coveralls", "jmontleon" ], "repo": "openshift/ansible-service-broker", "url": "https://github.com/openshift/ansible-service-broker/pull/559", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2463383114
METAL-1114: featuregates: add metal jobs for test analyzer

Ensure we're looking at ipv6, ipv4, and dualstack metal techpreview jobs.

/lgtm
/approve
/hold
release the hold when you're ready.
/hold cancel
/hold
dualstack is categorized wrong in sippy
/lgtm
/hold cancel
gharchive/pull-request
2024-08-13T13:39:10
2025-04-01T06:45:16.690257
{ "authors": [ "deads2k", "stbenjam" ], "repo": "openshift/api", "url": "https://github.com/openshift/api/pull/1998", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
863369713
Bug 1951853: operator/dns: Describe default toleration

Update the godoc for the dnses.operator.openshift.io resource's spec.nodePlacement.tolerations field to state that the DNS operator adds a toleration for the node-role.kubernetes.io/master taint, that omitting the toleration would be risky, and that the daemon controller adds some additional tolerations.

- operator/v1/types_dns.go (DNSNodePlacement): Fix description of the Tolerations field.
- operator/v1/0000_70_dns-operator_00-custom-resource-definition.yaml, operator/v1/zz_generated.swagger_doc_generated.go: Regenerate.

/retest

verify job is consistently failing? @Miciah looks like a make update might be in order.

Looks like a problem with CI, not with the PR:

INFO[2021-05-10T13:07:01Z] W0510 13:06:52.895478 17577 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
INFO[2021-05-10T13:07:01Z] make: Target `verify' not remade because of errors.
INFO[2021-05-10T13:07:01Z] {"component":"entrypoint","error":"wrapped process failed: exit status 2","file":"prow/entrypoint/run.go:80","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2021-05-10T13:06:52Z"}

/test verify

> Looks like a problem with CI, not with the PR: [CI log as above] /test verify

iirc the failure case for the verify job normally outputs confusing errors like this. Is the verify job passing for you locally?

> iirc the failure case for the verify job normally outputs confusing errors like this. Is the verify job passing for you locally?

Nope.

Ah, there was a spurious space after a //. Fixed! In my defense, make verify doesn't pass for me locally even after deleting the spurious space, but make verify did point me to the problem. Anyway, the important thing is that the verify CI job is passing now. :tada:

/lgtm
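As a point of reference, the default toleration being documented here looks roughly like the following in the Go types; this is a hedged sketch using k8s.io/api/core/v1, and the exact Effect the operator applies is an assumption, not taken from the source.

```go
package example

import corev1 "k8s.io/api/core/v1"

// defaultDNSTolerations sketches what the DNS operator applies when
// spec.nodePlacement.tolerations is omitted; the additional tolerations
// added by the daemon controller are not shown here.
var defaultDNSTolerations = []corev1.Toleration{
	{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,   // tolerate the taint regardless of its value
		Effect:   corev1.TaintEffectNoSchedule, // assumption; the operator may tolerate all effects
	},
}
```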
gharchive/pull-request
2021-04-21T01:34:51
2025-04-01T06:45:16.696575
{ "authors": [ "Miciah", "sgreene570" ], "repo": "openshift/api", "url": "https://github.com/openshift/api/pull/904", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2434643779
[release-ocm-2.10] MGMT-18313: Replace golang base image as it is based on CentOS Linux 7

This PR
- Bumps the golang version to 1.20 (as some dependencies require)
- Replaces the golang base image with UBI (as the CI golang base image is based on CentOS Linux 7)

/retest
/retest
/hold
/unhold
/lgtm
/approve
/approve
/approve
/override "Red Hat Konflux / cluster-api-provider-agent-mce-25-on-pull-request"
/refresh status
/refresh-status
gharchive/pull-request
2024-07-29T07:16:57
2025-04-01T06:45:16.728970
{ "authors": [ "danmanor", "eifrach" ], "repo": "openshift/cluster-api-provider-agent", "url": "https://github.com/openshift/cluster-api-provider-agent/pull/124", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1123221751
Bug 2048214: Alibaba: adding permissions for using KMS encryption

Adding necessary permissions to enable KMS key encryption in the registry.

/bugzilla refresh
/bugzilla refresh
Thank you!
/lgtm
gharchive/pull-request
2022-02-03T15:26:04
2025-04-01T06:45:16.738158
{ "authors": [ "dmage", "kwoodson" ], "repo": "openshift/cluster-image-registry-operator", "url": "https://github.com/openshift/cluster-image-registry-operator/pull/751", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1229756409
Shibumi/fix dockerfile location

This PR fixes our Dockerfile location.

/lgtm
let's see if this works correctly too
gharchive/pull-request
2022-05-09T13:56:14
2025-04-01T06:45:16.742067
{ "authors": [ "georgettica", "shibumi" ], "repo": "openshift/configuration-anomaly-detection", "url": "https://github.com/openshift/configuration-anomaly-detection/pull/57", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2461441211
fix image references in 4.17

Fix incorrect image references in release-4.17. Please verify and follow up if other places need to be modified as well.

Raised it correctly to main, since TP automation does not forward commits to 4.17 now: https://github.com/openshift/dpu-operator/pull/127
gharchive/pull-request
2024-08-12T16:30:55
2025-04-01T06:45:16.795529
{ "authors": [ "ashwindasr" ], "repo": "openshift/dpu-operator", "url": "https://github.com/openshift/dpu-operator/pull/125", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1326055604
Add Integration test step in App creation wizard

Fixes https://issues.redhat.com/browse/HACBS-815

This PR is built on top of https://github.com/openshift/hac-dev/pull/154. This commit b456b7888128939fcf3ea6ac8c6bf4309072a86d contains my changes.

Description
Contains the Add Integration test scenario step

Type of change
[x] Feature

Screen shots / Gifs for design review

Test cases
Create Utils
  ✓ Should call k8sCreateResource with integration test data (3 ms)
  ✓ Should contain the display name in the annotations (1 ms)
  ✓ Should contain kubernetes formatted resource name (1 ms)

How to test or reproduce?
This step can be found in the Create Application wizard.

Browser conformance:
[x] Chrome
[x] Firefox
[ ] Safari
[ ] Edge

cc: @Ranelim @MariaLeonova

/assign @christianvogt
/hold for https://github.com/openshift/hac-dev/pull/154

@karthikjeeyar Looks great! One small thing I noticed: there's a typo here:

Thank you @karthikjeeyar! The step looks excellent. We need to add an external link icon to the "Learn more" link above.

@karthikjeeyar @christianvogt Will we make the 'Cancel' button a 'Go to application' once we pass a step where we created something? I think we landed on creating the app after the first step. Will that be tackled in a separate PR? Thank you!

/hold cancel

@Ranelim see https://github.com/openshift/hac-dev/pull/170

E2E tests are passing locally

Thanks @Ranelim @MariaLeonova Updated the PR.

@karthikjeeyar @rohitkrai03 If I'm on the integration page and fill it out, then go back to the previous page and then forward to the next page, the form seems to get submitted.

> @karthikjeeyar @rohitkrai03 If I'm on the integration page and fill it out, then go back to the previous page and then forward to the next page, the form seems to get submitted

First time landing on the step and also clicking on the current step link is submitting the form; I have logged an issue for this: https://github.com/patternfly-labs/formik-pf/issues/19

/lgtm
/approve
/lgtm cancel
/lgtm
gharchive/pull-request
2022-08-02T15:51:04
2025-04-01T06:45:16.806611
{ "authors": [ "MariaLeonova", "Ranelim", "christianvogt", "karthikjeeyar" ], "repo": "openshift/hac-dev", "url": "https://github.com/openshift/hac-dev/pull/166", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1303912623
bundle-gen: Add --image-only and --image-repo (for MCE)

Add support to hack/bundle-gen.py for skipping all the OperatorHub pieces and only building/pushing the operator image. At the same time, add the ability to override the image repository to which that image gets pushed.

The --image-only, --dummy-bundle, and --branch arguments are mutually exclusive.

Example usage below. Note that the argument to --image-only can be any commit-ish recognized by git, including a raw SHA.

[efried@efried hive]$ ./hack/bundle-gen.py --image-only master --image-repo quay.io/2uasimojo/hive
Cloning git@github.com:openshift/hive.git to /tmp/hive-repo-49b2mwqo
Working in /tmp/hive-repo-49b2mwqo
Checking out 52ff9fde3e9272015ff66c809503f14c38bb9889
Building container quay.io/2uasimojo/hive:v0.0.3739-52ff9fd
Sending build context to Docker daemon  269.1MB
Step 1/19 : FROM registry.ci.openshift.org/openshift/release:golang-1.18 as builder
<snip>
Step 19/19 : ENTRYPOINT ["/opt/services/manager"]
 ---> Running in 492c2cad5b22
Removing intermediate container 492c2cad5b22
 ---> 6606a8b9d676
Successfully built 6606a8b9d676
Successfully tagged quay.io/2uasimojo/hive:v0.0.3739-52ff9fd
Pushing container
The push refers to repository [quay.io/2uasimojo/hive]
4ce8414ebfc5: Pushed
303981fa49b3: Pushed
5c579d91f1f0: Pushed
cdc174b9727e: Pushed
eb88336f5364: Pushed
d136ebfc6fba: Pushed
d33feb9e8957: Pushed
49a343c84da3: Mounted from openshift-hive/hive
d392da935ab9: Mounted from openshift-hive/hive
2e5b35302773: Mounted from openshift-hive/hive
a18e420160ca: Mounted from openshift-hive/hive
39f0f49cf7b6: Mounted from openshift-hive/hive
773711fd02f0: Mounted from openshift-hive/hive
5bf135c4a0de: Mounted from openshift-hive/hive
v0.0.3739-52ff9fd: digest: sha256:c084fbed2f7cba14d23c46807abc4a307acb10067a5df27edcc3515eaa6fb623 size: 3266

HIVE-1954

/assign @gurnben
/cc @joeg-pro

@2uasimojo I was able to test last Thurs. Results and an RFE:

After resolving local issues with my system, --image-only worked ok when using buildah as the build engine.

FYI, in my first attempt, I didn't have buildah installed but rather had only podman installed together with the podman-docker compatibility shim. Hence, the script used docker as the build engine and it got most of the way through, but fails when trying to push the image, because it supplies a --config option that the docker-compatibility shim apparently doesn't support:

Successfully tagged quay.io/joeg-pro/hive:v0.0.3739-52ff9fd
4fc52b26e262a91060045231335511e5dfbba0ace311f2c763a3e5c005bc3cf0
Pushing container
Error: unknown flag: --config
Traceback (most recent call last):
  File "./bundle-gen.py", line 545, in <module>
    build_and_push_image(args.registry_auth_file, args.image_repo, hive_version, args.dry_run or args.dummy_bundle, args.build_engine)
  File "./bundle-gen.py", line 191, in build_and_push_image
    check=True,
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', '--config', '/home/jmg/.docker', 'push', 'quay.io/joeg-pro/hive:v0.0.3739-52ff9fd']' returned non-zero exit status 125.

Since it works with buildah this isn't a blocking issue for us, but thought I'd mention it anyway.

RFE: In thinking about our use scenario, it would be handy if we could control the tag on the image. I saw that the arg to --image-repo was to be without the tag. So could you add an optional --tag to allow us to override the tag on the image? Or alternatively, allow the arg to --image-repo to include the tag (in which case maybe a name like --image-ref might be better since it would be more than specifying the registry and repo).

@joeg-pro How pervasive would you like that tag override to be? Right now we use the same string in:
- The image tag, and the reference to same in the CSV
- The name of the clusterserviceversion.yaml file
- The .metadata.name field in the CSV, and the reference to same in the package file
- The .spec.version field in the CSV

Okay, I added a commit with --image-tag-override. To answer my own question above: I did what made the most sense for the code, which was to substitute the override wherever the string had the v prefix, and not where it didn't. Since y'all don't care about the version field in the CSV (right?) this is hopefully what you need. PTAL @joeg-pro.

@2uasimojo Sorry for the lack of attention to your questions and updates. Your decisions on where to apply the tag override are good. I just tested the updated bundle-gen using buildah as the build engine and it looks good to me. Thanks.

/lgtm
gharchive/pull-request
2022-07-13T20:34:19
2025-04-01T06:45:16.815774
{ "authors": [ "2uasimojo", "joeg-pro" ], "repo": "openshift/hive", "url": "https://github.com/openshift/hive/pull/1819", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2347297200
NOISSUE: Revert "Reuse restConfigClientGetter when creating cmdutil.Factory" (HIVE-2399) This reverts commit 110c817776fa2d3892dabcdfdde6b7d43e36320e. (#2246) /assign @2uasimojo /hold Clean revert. /lgtm Unhold when ready /unhold
gharchive/pull-request
2024-06-11T20:55:52
2025-04-01T06:45:16.818537
{ "authors": [ "2uasimojo", "dlom" ], "repo": "openshift/hive", "url": "https://github.com/openshift/hive/pull/2304", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1250082982
Remove monitoring config reconciliation

What this PR does / why we need it:
Instead of hypershift syncing a config into the guest cluster, the CMO should default to the proper node selector for the prometheus operator deployment. This allows the config inside the guest cluster to remain entirely user-driven. This commit removes the code to reconcile the configuration.

Which issue(s) this PR fixes
Fixes #2089224

Checklist
[x] Subject and description added to both, commit and PR.
[x] Relevant issues have been referenced.

/hold
We first need a change in the CMO that will set the default node selector, as in https://github.com/openshift/cluster-monitoring-operator/pull/1679

/bugzilla refresh

Looks like there's more stuff to fix up:

E0527 02:31:24.458727 1 run.go:74] "command failed" err="invalid argument \"CSIMigrationOpenStack=false\" for \"--feature-gates\" flag: cannot set feature gate CSIMigrationOpenStack to false, feature is locked to true"

/hold cancel
The CMO change is now in

/lgtm
gharchive/pull-request
2022-05-26T21:02:42
2025-04-01T06:45:16.823031
{ "authors": [ "csrwng", "sjenning" ], "repo": "openshift/hypershift", "url": "https://github.com/openshift/hypershift/pull/1420", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2104731675
MULTIARCH-4092: Rebase with upstream: v0.6.0 release

What type of PR is this?

What this PR does / why we need it:
- Upgrades k8s to 1.29.1
- Use Golang 1.21

Which issue(s) this PR fixes:
Fixes #

Special notes for your reviewer:
Steps Followed:

# Remote upstream: git@github.com:kubernetes-sigs/ibm-powervs-block-csi-driver.git
git fetch upstream
git merge upstream/main
<<<resolve conflicts if any>>> && git commit
git push origin <feature_branch>

Release note:
none

/retest
/jira refresh
gharchive/pull-request
2024-01-29T06:15:08
2025-04-01T06:45:16.826575
{ "authors": [ "Karthik-K-N", "yussufsh" ], "repo": "openshift/ibm-powervs-block-csi-driver", "url": "https://github.com/openshift/ibm-powervs-block-csi-driver/pull/74", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1402894941
UPSTREAM: 99: Update golint to 1.50

Bumping golangci-lint to the latest release to bring go 1.19 support.

/hold for the upstream PR to merge.
cc @openshift/storage

/hold cancel

/label docs-approved
/label px-approved
/label qe-approved
Just fixing golint
gharchive/pull-request
2022-10-10T09:50:35
2025-04-01T06:45:16.828473
{ "authors": [ "jsafrane" ], "repo": "openshift/ibm-vpc-block-csi-driver", "url": "https://github.com/openshift/ibm-vpc-block-csi-driver/pull/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
719477256
Support comments in .dockerignore

When reading the contents of .dockerignore, filter out empty lines and lines that have a '#' as their first character. This will address https://github.com/containers/buildah/issues/2686, which also affects us.

LGTM. Other than perhaps adding one more test.

/lgtm
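The filtering described above is straightforward; here is a minimal Go sketch of the behavior, where filterDockerignore is a hypothetical name rather than the actual imagebuilder function.

```go
package main

import (
	"fmt"
	"strings"
)

// filterDockerignore drops empty lines and comment lines (those whose first
// character is '#') from the raw contents of a .dockerignore file.
func filterDockerignore(contents string) []string {
	var patterns []string
	for _, line := range strings.Split(contents, "\n") {
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blank lines and comments
		}
		patterns = append(patterns, line)
	}
	return patterns
}

func main() {
	raw := "# build artifacts\nbin/\n\n# editor files\n*.swp\n"
	fmt.Println(filterDockerignore(raw)) // prints: [bin/ *.swp]
}
```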
gharchive/pull-request
2020-10-12T15:40:42
2025-04-01T06:45:16.830061
{ "authors": [ "nalind", "rhatdan" ], "repo": "openshift/imagebuilder", "url": "https://github.com/openshift/imagebuilder/pull/177", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2514399361
Fix failures while executing make test-e2e

Observed that the make test-e2e command fails due to unavailability of the cert-manager-webhook service with the following error log snippet.

[FAILED] Unexpected error:
    <*errors.errorString | 0xc00012c550>:
    make deploy IMG=quay.io/amalvank/instaslicev2-controller:latest failed with error: (exit status 2)
make[1]: Entering directory '/home/svanka/go/src/instaslice-operator'
test -s /home/svanka/go/src/instaslice-operator/bin/controller-gen && /home/svanka/go/src/instaslice-operator/bin/controller-gen --version | grep -q v0.14.0 || \
GOBIN=/home/svanka/go/src/instaslice-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0
/home/svanka/go/src/instaslice-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /home/svanka/go/src/instaslice-operator/bin/kustomize edit set image controller=quay.io/amalvank/instaslicev2-controller:latest daemonset=quay.io/amalvank/instaslicev2-daemonset:latest
/home/svanka/go/src/instaslice-operator/bin/kustomize build config/default | kubectl apply -f -
Warning: resource namespaces/instaslice-operator-system is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/instaslice-operator-system configured
customresourcedefinition.apiextensions.k8s.io/instaslices.inference.codeflare.dev configured
serviceaccount/instaslice-operator-controller-manager created
role.rbac.authorization.k8s.io/instaslice-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/instaslice-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/instaslice-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/instaslice-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/instaslice-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/instaslice-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/instaslice-operator-proxy-rolebinding created
service/instaslice-operator-controller-manager-metrics-service created
service/instaslice-operator-webhook-service created
deployment.apps/instaslice-operator-controller-manager created
daemonset.apps/instaslice-operator-controller-daemonset created
mutatingwebhookconfiguration.admissionregistration.k8s.io/instaslice-operator-mutating-webhook-configuration created
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": dial tcp 10.96.164.237:443: connect: connection refused
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/validate?timeout=30s": dial tcp 10.96.164.237:443: connect: connection refused
make[1]: *** [Makefile:230: deploy] Error 1
make[1]: Leaving directory '/home/svanka/go/src/instaslice-operator'

This change waits until the cert-manager-webhook pod is in Running state before performing the e2e test.

/cc @harche @asm582 @mamy-CS
/lgtm
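For illustration, a wait of the kind described might look like the following in Go using client-go; this is a sketch only, and the namespace, label selector, polling interval, and timeout are assumptions rather than the values in the actual change.

```go
package e2e

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCertManagerWebhook polls until at least one cert-manager webhook pod
// reports phase Running, so that `make deploy` does not race the webhook.
func waitForCertManagerWebhook(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods("cert-manager").List(ctx, metav1.ListOptions{
				LabelSelector: "app.kubernetes.io/name=webhook", // assumed label
			})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}
```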
gharchive/pull-request
2024-09-09T16:28:34
2025-04-01T06:45:16.837646
{ "authors": [ "asm582", "sairameshv" ], "repo": "openshift/instaslice-operator", "url": "https://github.com/openshift/instaslice-operator/pull/72", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
427739561
resourcesynccontroller: use cached get to confirm resource exists before deleting target

/cc @deads2k
/approve
/lgtm
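For context, the pattern being described checks an informer-backed lister instead of issuing a live GET before each delete. A hedged Go sketch of the idea, with illustrative names and types rather than the actual library-go change:

```go
package sync

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	corev1listers "k8s.io/client-go/listers/core/v1"
)

// deleteIfSourceAbsent removes the synced target ConfigMap only when the
// cached lister confirms the source no longer exists, avoiding a live API
// GET on every sync loop.
func deleteIfSourceAbsent(
	ctx context.Context,
	sourceLister corev1listers.ConfigMapLister,
	client kubernetes.Interface,
	sourceNS, sourceName, targetNS, targetName string,
) error {
	_, err := sourceLister.ConfigMaps(sourceNS).Get(sourceName)
	switch {
	case apierrors.IsNotFound(err):
		// Source is gone per the cache; delete the synced copy.
		delErr := client.CoreV1().ConfigMaps(targetNS).Delete(ctx, targetName, metav1.DeleteOptions{})
		if apierrors.IsNotFound(delErr) {
			return nil // target already gone
		}
		return delErr
	case err != nil:
		return err // unexpected cache error
	default:
		return nil // source still exists; nothing to delete
	}
}
```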
gharchive/pull-request
2019-04-01T14:31:46
2025-04-01T06:45:16.838892
{ "authors": [ "mfojtik", "sttts" ], "repo": "openshift/library-go", "url": "https://github.com/openshift/library-go/pull/324", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }