2025-04-01T04:34:59.276738
2023-04-03T07:19:24
1651488552
{ "authors": [ "aniketkatkar97", "harshach" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9304", "repo": "open-metadata/OpenMetadata", "url": "https://github.com/open-metadata/OpenMetadata/issues/10888" }
gharchive/issue
backend: aggregate and suggest APIs not working properly Affected module Does it impact the UI, backend or Ingestion Framework? backend Describe the bug A clear and concise description of what the bug is. Both the aggregate and suggest APIs are not working for the container entity when trying to search through the columns field. https://user-images.githubusercontent.com/51777795/229433682-54b0964a-bd83-44f2-bf66-eda35f63df03.mov Aggregate: Request URL: /api/v1/search/aggregate?index=container_search_index&field=dataModel.columns.name Suggest: Request URL: /api/v1/search/suggest?index=container_search_index&field=column_suggest&q=d The Suggest API is not working properly for the Dashboard entity for the chart_suggest field. As shown in the video, the dashboard FCC New Coder Survey 2018 should return two charts in suggestions when searched with the query bo (i.e. Box Plot and Boy Name Cloud), but the response object contains only one option, although the length of results in the object is 2. https://user-images.githubusercontent.com/51777795/229471413-c8e2f52a-fdb9-473b-b148-207fd8deb4cf.mov Request URL: /api/v1/search/suggest?index=dashboard_search_index&field=chart_suggest&q=bo Response: { "text": "bo", "offset": 0, "length": 2, "options": [ { "text": "Box plot", "_index": "dashboard_search_index", "_type": "_doc", "_id": "0021151a-65bd-4638-8717-86546cfd9bab", "_score": 10, "_source": { "id": "0021151a-65bd-4638-8717-86546cfd9bab", "name": "FCC New Coder Survey 2018", "displayName": "FCC New Coder Survey 2018", "fullyQualifiedName": "sample_superset.11", "description": "", "version": 0.1, "updatedAt":<PHONE_NUMBER>906, "updatedBy": "admin", "dashboardUrl": "http://localhost:808/superset/dashboard/7/", "charts": [ { "id": "3194d51e-b75e-4062-901f-20047982d13e", "type": "chart", "name": "170", "fullyQualifiedName": "sample_superset.170", "description": "", "displayName": "Box plot", "deleted": false }, { "id": "648d9946-1c7b-4dd0-ad5e-fdfdd1c389e1", "type": "chart", "name": "180", "fullyQualifiedName": "sample_superset.180", "description": "", "displayName": "Boy Name Cloud", "deleted": false }, { "id": "074dff51-2310-4030-b70c-41c9720faa30", "type": "chart", "name": "183", "fullyQualifiedName": "sample_superset.183", "description": "", "displayName": "Average and Sum Trends", "deleted": false }, { "id": "6b4a27bc-ea81-445c-b3b1-da7ddf2923c1", "type": "chart", "name": "197", "fullyQualifiedName": "sample_superset.197", "description": "", "displayName": "Arcs", "deleted": false } ], "dataModels": [], "href": "http://localhost:8585/api/v1/dashboards/0021151a-65bd-4638-8717-86546cfd9bab", "followers": [], "tags": [], "service": { "id": "6a375a27-0a10-4661-8636-ea29bc254d1f", "type": "dashboardService", "name": "sample_superset", "fullyQualifiedName": "sample_superset", "deleted": false }, "serviceType": "Superset", "usageSummary": { "dailyStats": { "count": 0, "percentileRank": 0 }, "weeklyStats": { "count": 0, "percentileRank": 0 }, "monthlyStats": { "count": 0, "percentileRank": 0 }, "date": "2023-04-03" }, "deleted": false, "tier": null, "suggest": [ { "input": "sample_superset.11", "weight": 5 }, { "input": "FCC New Coder Survey 2018", "weight": 10 } ], "chart_suggest": [ { "input": "Box plot", "weight": 5 }, { "input": "Boy Name Cloud", "weight": 5 }, { "input": "Average and Sum Trends", "weight": 5 }, { "input": "Arcs", "weight": 5 } ], "service_suggest": [ { "input": "sample_superset", "weight": 5 } ], "entityType": "dashboard" } } ] } To Reproduce Screenshots or steps to reproduce Expected
behavior A clear and concise description of what you expected to happen. Version: OS: [e.g. iOS] Python version: OpenMetadata version: [e.g. 0.8] OpenMetadata Ingestion package version: [e.g. openmetadata-ingestion[docker]==XYZ] Additional context Add any other context about the problem here. @aniketkatkar97 always add .keyword for a field at the end. cc @chirag-madlani http://localhost:8585/api/v1/search/aggregate?index=container_search_index&field=dataModel.columns.name.keyword "took": 14, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 7, "relation": "eq" }, "max_score": null, "hits": [] }, "aggregations": { "sterms#dataModel.columns.name.keyword": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "approved", "doc_count": 1 }, { "key": "budget_executor", "doc_count": 1 }, { "key": "budget_total_value", "doc_count": 1 }, { "key": "department_id", "doc_count": 2 }, { "key": "fraudulent_claims", "doc_count": 1 }, { "key": "merchant", "doc_count": 1 }, { "key": "notes", "doc_count": 1 }, { "key": "total_value_for_current_month", "doc_count": 1 }, { "key": "transaction_id", "doc_count": 1 }, { "key": "transaction_time", "doc_count": 1 } ] } } } Suggest: http://localhost:8585/api/v1/search/suggest?index=container_search_index&field=column_suggest&q=d Response: { "suggest": { "metadata-suggest": [ { "text": "d", "offset": 0, "length": 1, "options": [ { "text": "department_id", "_index": "container_search_index", "_type": "_doc", "_id": "fbb18d09-b4d9-455b-8186-e5132ebe2e18", "_score": 5.0, "_source": { "id": "fbb18d09-b4d9-455b-8186-e5132ebe2e18", "name": "finance", "fullyQualifiedName": "s3_object_store_sample.departments.finance", "displayName": "Finance department", "description": "Bucket containing finance department information", "version": 0.1, "updatedAt":<PHONE_NUMBER>774, "updatedBy": "admin", "href": "http://localhost:8585/api/v1/containers/fbb18d09-b4d9-455b-8186-e5132ebe2e18", "service": { "id": "d448d780-52ab-4254-bc85-061ecb0040c1", "type": "objectStoreService", "name": "s3_object_store_sample", "fullyQualifiedName": "s3_object_store_sample", "deleted": false, "href": "http://localhost:8585/api/v1/services/objectStoreServices/d448d780-52ab-4254-bc85-061ecb0040c1" }, "parent": { "id": "c9624db8-06d4-44e4-8bd7-89cda9d19b20", "type": "container", "name": "departments", "fullyQualifiedName": "s3_object_store_sample.departments", "description": "Bucket containing company department information", "displayName": "Company departments", "deleted": false, "href": "http://localhost:8585/api/v1/containers/c9624db8-06d4-44e4-8bd7-89cda9d19b20" }, "dataModel": { "isPartitioned": false, "columns": [ { "name": "department_id", "dataType": "NUMERIC", "dataTypeDisplay": "numeric", "description": "The ID of the department.
This column is the primary key for this table.", "fullyQualifiedName": "s3_object_store_sample.departments.finance.department_id", "tags": [], "constraint": "PRIMARY_KEY", "ordinalPosition": 1 }, { "name": "budget_total_value", "dataType": "NUMERIC", "dataTypeDisplay": "numeric", "description": "The department's budget for the current year.", "fullyQualifiedName": "s3_object_store_sample.departments.finance.budget_total_value", "tags": [], "ordinalPosition": 2 }, { "name": "notes", "dataType": "VARCHAR", "dataLength": 100, "dataTypeDisplay": "varchar", "description": "Notes concerning sustainability for the budget.", "fullyQualifiedName": "s3_object_store_sample.departments.finance.notes", "tags": [], "ordinalPosition": 3 }, { "name": "budget_executor", "dataType": "VARCHAR", "dataTypeDisplay": "varchar", "description": "The responsible finance lead for the budget execution", "fullyQualifiedName": "s3_object_store_sample.departments.finance.budget_executor", "tags": [], "ordinalPosition": 4 } ] }, "prefix": "/departments/finance/", "numberOfObjects": 75.0, "size": 286720.0, "fileFormats": [ "zip", "csv" ], "serviceType": "S3", "deleted": false, "tags": [], "tier": null, "followers": [], "suggest": [ { "input": "s3_object_store_sample.departments.finance", "weight": 5 }, { "input": "finance", "weight": 10 } ], "service_suggest": [ { "input": "s3_object_store_sample", "weight": 5 } ], "column_suggest": [ { "input": "department_id", "weight": 5 }, { "input": "budget_total_value", "weight": 5 }, { "input": "notes", "weight": 5 }, { "input": "budget_executor", "weight": 5 } ], "entityType": "container" } } ] } ] } } @aniketkatkar97 the suggest keywords are used to retrieve a document that matches the keywords in suggest. It won't give you two different documents, as the Box Plot and Boy Name Cloud charts belong to the same document. Whether you search with "Box" or "Boy", you hit one source document, which is the dashboard doc that contains both of these charts. The response you are getting is correct.
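For reference, the corrected aggregate call from the comment above can be reproduced with a short script. This is a hedged sketch using Python's requests library; the host, index, and field are taken from the URLs quoted in this thread, and nothing else is assumed about the OpenMetadata API:

```python
# Hedged sketch: reproduces the working aggregate call quoted above.
import requests

resp = requests.get(
    "http://localhost:8585/api/v1/search/aggregate",
    params={
        "index": "container_search_index",
        "field": "dataModel.columns.name.keyword",  # the .keyword suffix is the fix
    },
)
resp.raise_for_status()
print(resp.json()["aggregations"])
```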
2025-04-01T04:34:59.279608
2020-11-01T02:15:38
733847932
{ "authors": [ "MRJasonP", "XiaohangZhan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9305", "repo": "open-mmlab/OpenSelfSup", "url": "https://github.com/open-mmlab/OpenSelfSup/issues/64" }
gharchive/issue
InterCLR implementation Hi Xiaohang, Thank you for sharing the amazing work. I would like to check with you whether this repo contains the implementation & pretrained models of the InterCLR method. I read your recent paper "Delving into Inter-Image Invariance for Unsupervised Visual Representations" and would like to further study the method. Thank you. The code of InterCLR will be released once it is accepted.
2025-04-01T04:34:59.280715
2019-12-18T09:28:49
539558696
{ "authors": [ "Youggie", "hellock" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9306", "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/1831" }
gharchive/issue
RuntimeError: expand(torch.cuda.FloatTensor{[256, 100, 100]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (3) When multiplying two tensors, I get this error. How do you solve it? Does anyone have any good advice? Thank you very much. Please follow the Error report issue template.
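For context, this error means expand() was asked to broadcast a 3-D tensor to a 0-D target, which PyTorch forbids. A minimal sketch with hypothetical shapes reproduces it and shows the usual fix of letting broadcasting add dimensions to the smaller operand instead:

```python
# Minimal reproduction sketch (shapes are hypothetical): expand() can
# add leading dimensions but never reduce them, so expanding a
# [256, 100, 100] tensor to a 0-dim target raises the error above.
import torch

a = torch.randn(256, 100, 100)
scalar = torch.tensor(2.0)  # 0-dim tensor

try:
    a.expand(scalar.shape)  # size=[] -> "number of sizes provided (0)..."
except RuntimeError as e:
    print(e)

# Fix: rely on broadcasting (it pads the *smaller* operand with
# leading dims), or unsqueeze the smaller tensor explicitly.
out = a * scalar              # works: the scalar broadcasts to a's shape
b = torch.randn(100)
out2 = a * b.view(1, 1, 100)  # explicit broadcast over the last dim
```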
2025-04-01T04:34:59.282768
2019-01-18T07:53:04
400610068
{ "authors": [ "alexwq100", "kakaluote", "tdiekel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9307", "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/271" }
gharchive/issue
workflow does not work when I configure workflow = [('train', 2), ('val', 1)]: training crashes on the assert len(data_loaders) == len(workflow), because len(data_loaders) is always 1. Hi @hellock, you say "In detection tasks we implement mAP evaluation during training to serve as validation, so the val phase is unnecessary." Where is it? I haven't found it in tools/train.py. @alexwq100 Look here
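A hedged sketch of the configuration the maintainers imply: drop the ('val', 1) entry and let the evaluation hook provide validation-style mAP numbers. The field names follow standard mmdetection configs; the interval and metric values are assumptions, not taken from this thread:

```python
# Config-file sketch: training-only workflow, with periodic mAP
# evaluation standing in for an explicit val phase.
workflow = [('train', 1)]                      # matches the single data loader
evaluation = dict(interval=1, metric='bbox')   # mAP eval during training
```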
2025-04-01T04:34:59.287703
2020-11-19T10:43:19
746459791
{ "authors": [ "CLAassistant", "OpenMMLab-Assistant-007", "ZwwWayne", "renjithbaby23" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9308", "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/pull/4145" }
gharchive/pull-request
Update get_started.md Added instructions related to the docker version to ensure that users do not run into the issues mentioned here https://github.com/NVIDIA/nvidia-docker/issues/1165 Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it. Thanks for your contribution. Hi! @renjithbaby23 First of all, we want to express our gratitude for your significant PR in the OpenMMLab project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR. We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences, ideas, and build connections with like-minded peers. To join the SIG channel, simply message the moderator (OpenMMLab) on Discord or briefly share your open-source contributions in the #introductions channel and we will assist you. Look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA If you have a WeChat account, welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + Github ID" as a remark when adding friends :) Thank you again for your contribution❤
2025-04-01T04:34:59.301350
2023-10-18T13:21:11
1949745956
{ "authors": [ "Lemonade24510", "Zzzouzou", "atinfinity" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9309", "repo": "open-mmlab/mmdetection3d", "url": "https://github.com/open-mmlab/mmdetection3d/issues/2779" }
gharchive/issue
[Bug] FCOS3D train on kitti dataset Prerequisite [X] I have searched Issues and Discussions but cannot get the expected help. [X] I have read the FAQ documentation but cannot get the expected help. [X] The bug has not been fixed in the latest version (dev-1.x) or latest version (dev-1.0). Task I have modified the scripts/configs, or I'm working on my own tasks/models/datasets. Branch 1.x branch https://github.com/open-mmlab/mmdetection3d/tree/dev-1.x Environment win11 CUDA11.5 torch1.11 Reproduces the problem - code sample python tools/train.py /mmdetection3d/configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_kitti-mono3d.py When I use this command, I find that model training is not calling the GPU. Reproduces the problem - command or script Refer to this document for detailed configuration https://github.com/open-mmlab/mmdetection3d/issues/865 Reproduces the problem - error message Additional information I want to know if it's because Windows doesn't support it... If it is supported, please help me; I want to know how to enable the GPU. I'm facing the same issue. What should I do? @mickeyouyou @lbin @atinfinity @Zzzouzou @Lemonade24510 Please check the information of your environment. python mmdet3d/utils/collect_env.py @Zzzouzou @Lemonade24510 Please check the information of your environment. python mmdet3d/utils/collect_env.py OK, it looks like this. @atinfinity @Zzzouzou You use a GeForce RTX 4080. The GPU compute capability of this GPU is 8.9 (https://developer.nvidia.com/cuda-gpus). On the other hand, I found the following information in your log. PyTorch was built with CUDA 11.5 There is no compute=89, sm_89 in the NVCC architecture flags PyTorch needs to be built with CUDA 11.8+ for your GPU. https://developer.nvidia.com/blog/cuda-toolkit-11-8-new-features-revealed/ I found a Docker image on Docker Hub. But I'm not sure if mmdetection3d supports PyTorch 2.x. https://hub.docker.com/r/pytorch/pytorch/tags?name=11.8 Thanks for the reply! I tried the same thing on another computer (NVIDIA GeForce RTX 2060); this is my environment configuration, and it seems to have the same GPU-unused problem. @atinfinity @Zzzouzou It seems that your MMDetection3D version is a little old. Did you try the latest version? How do you check the usage of the GPU while training? You can use nvidia-smi. Are there any error messages while training? And I found the following message in https://mmdetection3d.readthedocs.io/en/v1.3.0/get_started.html. MMDetection3D works on Linux, Windows (experimental support) and macOS. So, you may try the following approaches: use WSL2, use Ubuntu. OK, I see. Thank you again; there aren't any error messages. Hmm, maybe this version does not work on Windows (C+G). @Zzzouzou It seems that the training script does use the NVIDIA GPU. The "C" means "Compute". C = Compute, which defines the processes that use the compute mode of Nvidia GPUs which use CUDA libraries, used in deep learning training and inferencing using Tensorflow-GPU, Pytorch, etc. https://stackoverflow.com/a/59375300
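The environment checks discussed above can also be done with a few standard PyTorch calls (these are not mmdet3d-specific); a minimal sketch:

```python
# Quick sanity checks for the GPU-not-used symptom described above.
import torch

print(torch.cuda.is_available())       # False means training falls back to CPU
print(torch.version.cuda)              # CUDA version the wheel was built with
if torch.cuda.is_available():
    # e.g. (8, 9) for an RTX 4080; the wheel's arch list must cover it
    print(torch.cuda.get_device_capability(0))
    print(torch.cuda.get_arch_list())  # needs 'sm_89' for compute capability 8.9
```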
2025-04-01T04:34:59.308605
2021-07-02T01:33:44
935334692
{ "authors": [ "chetanmreddy", "filaPro" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9310", "repo": "open-mmlab/mmdetection3d", "url": "https://github.com/open-mmlab/mmdetection3d/issues/694" }
gharchive/issue
Inference using existing models and standard datasets - imvoxelnet - Kitti Hello @Tai-Wang @filaPro I am trying to do inference using the instructions here python tools/test.py configs/imvoxelnet/imvoxelnet_kitti-3d-car.py checkpoints/imvoxelnet_kitti-3d-car_20210610_152323-b9abba85.pth --show --show-dir ./data/kitti/show_results/ I have prepared the KITTI dataset as required, but I am getting the following error: `(open-mmlab) root@cmudi001-nx-interactive-pod-1:/cmudi001-nx-1/mmdetection3d# python tools/test.py configs/imvoxelnet/imvoxelnet_kitti-3d-car.py checkpoints/imvoxelnet_kitti-3d-car_20210610_152323-b9abba85.pth --show --show-dir ./data/kitti/show_results/ /opt/conda/envs/open-mmlab/lib/python3.7/site-packages/mmdet/core/anchor/builder.py:16: UserWarning: build_anchor_generator would be deprecated soon, please use build_prior_generator 'build_anchor_generator would be deprecated soon, please use ' Use load_from_local loader [ ] 0/3769, elapsed: 0s, ETA:/opt/conda/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448224956/work/c10/core/TensorImpl.h:1156.) return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode) Traceback (most recent call last): File "tools/test.py", line 214, in main() File "tools/test.py", line 184, in main outputs = single_gpu_test(model, data_loader, args.show, args.show_dir) File "/cmudi001-nx-1/mmdetection3d/mmdet3d/apis/test.py", line 51, in single_gpu_test if batch_size == 1 and isinstance(data['img'][0], TypeError: 'DataContainer' object is not subscriptable` Can you please help me with this? Thank you UPDATE: When I use --out instead of --show and --show-dir as shown above, it works fine. Hi @chetanmreddy, Unfortunately the current version of ImVoxelNet is not ready for visualization, as it is not inherited from Base3DDetector and does not have a show_results method.
2025-04-01T04:34:59.488100
2018-10-02T15:02:00
365941420
{ "authors": [ "tommyJimmy87", "tsandall" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9311", "repo": "open-policy-agent/opa-istio-plugin", "url": "https://github.com/open-policy-agent/opa-istio-plugin/issues/59" }
gharchive/issue
Inject Sidecar per Pod It would be nice to have the injection with annotations. Like in Istio, you can define the injection per namespace or per pod with the annotation: sidecar.istio.io/inject: "true" The quickstart.yaml file shows how you can enable OPA sidecar injection. https://github.com/open-policy-agent/opa-istio-plugin/blob/master/quick_start.yaml. The example injects OPA into pods in namespaces labeled with opa-istio-injection=enabled. If you wanted finer-grained injection, you could customize the mutating admission control policy (or write your own, like in that file). Let me know if this does not answer your question.
2025-04-01T04:34:59.520455
2021-06-17T01:10:15
923283927
{ "authors": [ "bhess", "gpadisal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9312", "repo": "open-quantum-safe/liboqs-java", "url": "https://github.com/open-quantum-safe/liboqs-java/issues/17" }
gharchive/issue
Different secret key size (Dilithium3) and signature size (Falcon-1024) Dilithium3 secret key size is 4000 vs NIST 4016 Falcon-1024 signature size is 1330 vs NIST 1280 The Dilithium3 secret key size was changed to 4000 bytes as part of the version 3.1 specification update. Falcon-1024 defines 1330 bytes for signatures in their reference implementation. I believe it is because of the encoding used for the signature blob in the NIST API. Hi, thanks for your quick reply. I'd appreciate your help clarifying a few points: 1. Could you confirm which updated params OQS is referring to? 2. Where can I find these updated params in the NIST submissions/package? 3. Does the OQS lib have the original version of Dilithium3 with secret key size 4016? Thanks. 1. Could you confirm which updated params OQS is referring to? You can find which upstream versions OQS uses and the key/signature sizes in the docs. For Dilithium and Falcon: https://openquantumsafe.org/liboqs/algorithms/sig/dilithium https://openquantumsafe.org/liboqs/algorithms/sig/falcon 2. Where can I find these updated params in the NIST submissions/package? They should be available on the authors' websites for the submissions (available under the links above). 3. Does the OQS lib have the original version of Dilithium3 with secret key size 4016? OQS uses the latest Dilithium3 version with secret key size 4000 bytes. See the version 3.1 spec on the Dilithium site: https://pq-crystals.org/dilithium/resources.shtml
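As a hedged way to verify these numbers locally, assuming the liboqs Python binding (liboqs-python) is installed; the Java binding exposes the same values through its details objects:

```python
# Sketch: print the secret-key and (maximum) signature lengths liboqs
# reports for the two algorithms discussed above.
import oqs  # liboqs-python

for alg in ("Dilithium3", "Falcon-1024"):
    with oqs.Signature(alg) as sig:
        d = sig.details
        print(alg, d["length_secret_key"], d["length_signature"])
# Expected per this thread: Dilithium3 secret key = 4000 bytes,
# Falcon-1024 signature (maximum length) = 1330 bytes.
```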
2025-04-01T04:34:59.524516
2023-10-11T05:35:02
1936863063
{ "authors": [ "RohitArora7", "dstebila", "jimouris" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9313", "repo": "open-quantum-safe/liboqs-java", "url": "https://github.com/open-quantum-safe/liboqs-java/issues/21" }
gharchive/issue
Issue with decap_secret In order to supply client_secret_key from outside, I modified KEMExample.java and KeyEncapsulation.java, and I am getting the following error. KEMExample.java byte[] shared_secret_client = client.decap_secret(ciphertext,client_secret_key); KeyEncapsulation.java public byte[] decap_secret(byte[] ciphertext, byte[] secret_key_r) throws RuntimeException { if (ciphertext.length != alg_details_.length_ciphertext) { throw new RuntimeException("Incorrect ciphertext length"); } if (secret_key_r.length != alg_details_.length_secret_key) { throw new RuntimeException("Incorrect secret key length, " + "make sure you specify one in the " + "constructor or run generate_keypair()"); } byte[] shared_secret = new byte[(int)alg_details_.length_shared_secret]; int rv_ = decap_secret(shared_secret, ciphertext, secret_key_r); if (rv_ != 0) throw new RuntimeException("Cannot decapsulate secret"); return shared_secret; } ERROR examples/KEMExample.java:31: error: no suitable method found for decap_secret(byte[],byte[]) byte[] shared_secret_client = client.decap_secret(ciphertext,client_secret_key); ^ method KeyEncapsulation.decap_secret(byte[],byte[],byte[]) is not applicable (actual and formal argument lists differ in length) method KeyEncapsulation.decap_secret(byte[]) is not applicable (actual and formal argument lists differ in length) 1 error I don't understand what you're asking. You modified the code and then your modifications didn't work? We're not able to help with that. OK, forget about the modification. We have a situation where we are saving the client's c_public_key and c_secret_key. The question is: decap_secret(ciphertext) takes only one argument, so if I want to retrieve the shared secret I have to give both the ciphertext and c_secret_key to decap_secret(). How do I do that? I believe you'll have to use the KeyEncapsulation constructor to read the secret key back in. @RohitArora7 Take a closer look at the KEM example; it separates the client from the server functionality.
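The pattern the maintainers point to is to pass the stored secret key to the KeyEncapsulation constructor, so that decap_secret() only needs the ciphertext. Below is a hedged sketch using the Python binding (liboqs-python); liboqs-java's KeyEncapsulation(String alg_name, byte[] secret_key) constructor serves the same purpose. The algorithm choice is a placeholder:

```python
# Sketch of the constructor-based pattern: rebuild the KEM object from
# a stored secret key, then decapsulate with only the ciphertext.
import oqs  # liboqs-python

kem_alg = "Kyber512"

# Client generates and persists its keypair.
client = oqs.KeyEncapsulation(kem_alg)
c_public_key = client.generate_keypair()
c_secret_key = client.export_secret_key()   # store this externally

# Server encapsulates against the client's public key.
server = oqs.KeyEncapsulation(kem_alg)
ciphertext, shared_secret_server = server.encap_secret(c_public_key)

# Later: restore the client side from the stored secret key.
restored = oqs.KeyEncapsulation(kem_alg, secret_key=c_secret_key)
shared_secret_client = restored.decap_secret(ciphertext)
assert shared_secret_client == shared_secret_server
```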
2025-04-01T04:34:59.536448
2023-07-15T20:45:33
1806325733
{ "authors": [ "babblebey", "bdougie" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9314", "repo": "open-sauced/app", "url": "https://github.com/open-sauced/app/pull/1371" }
gharchive/pull-request
feat: collaboration request tab enhancement Description This pull request introduces a new feature to display the Request Tab regardless of the collaboration status. Additionally, a div is added at the top part of the Request Tab Content, actively indicating the user's collaboration status with a corresponding message for true or false cases. Changes Made Modified the implementation to always display the Request Tab, regardless of the collaboration status. Added a div at the top of the Request Tab Content to showcase the user's collaboration status. If the collaboration status is set to false (receive_collaboration is false), a message is displayed indicating that collaboration requests are not being accepted: "You are not accepting Collaboration Requests." If the collaboration status is set to true (receive_collaboration is true), a message is displayed indicating that collaboration requests are being accepted: "You are currently accepting Collaboration Requests." Implemented a ToggleSwitch component within the div to allow users to instantly toggle the collaboration status between true and false. What type of PR is this? (check all applicable) [x] 🍕 Feature [ ] 🐛 Bug Fix [ ] 📝 Documentation Update [ ] 🎨 Style [x] 🧑‍💻 Code Refactor [ ] 🔥 Performance Improvements [ ] ✅ Test [ ] 🤖 Build [ ] 🔁 CI [ ] 📦 Chore (Release) [ ] ⏩ Revert Related Tickets & Documents Fixes #1310 Mobile & Desktop Screenshots/Recordings screencast-localhost_3000-2023.07.16-11_49_21.webm Added tests? [ ] 👍 yes [x] 🙅 no, because they aren't needed [ ] 🙋 no, because I need help Added to documentation? [ ] 📜 README.md [ ] 📓 docs.opensauced.pizza [ ] 🍕 dev.to/opensauced [ ] 📕 storybook [x] 🙅 no documentation needed [optional] Are there any post-deployment tasks we need to perform? NA [optional] What gif best describes this PR or how it makes you feel? But, I have this question @bdougie ...and not show the pending section until the feature is enabled. In cases where there are pending requests, are you suggesting that we should hide the Collaboration Requests when receive_collaboration is false? In cases where there are pending requests, are you suggesting that we should hide the Collaboration Requests when receive_collaboration is false? If the logged-in user is not accepting requests, they will not have an active button on their profile to receive collaborations. Any pending requests should be declined. We need to rethink this. Moving forward, we should consider making the collaboration feature viewable for all to see, and encourage individuals to upgrade for more collaboration opportunities. cc @open-sauced/engineering & @isabensusan for context. Closing for now.
2025-04-01T04:34:59.543709
2023-09-01T21:15:09
1878052525
{ "authors": [ "bdougie", "brandonroberts" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9315", "repo": "open-sauced/app", "url": "https://github.com/open-sauced/app/pull/1653" }
gharchive/pull-request
chore: add redirect for insight subdomain Description Adds configuration for a domain-level redirect from insight.opensauced.pizza to insights.opensauced.pizza What type of PR is this? (check all applicable) [ ] 🍕 Feature [ ] 🐛 Bug Fix [ ] 📝 Documentation Update [ ] 🎨 Style [ ] 🧑‍💻 Code Refactor [ ] 🔥 Performance Improvements [ ] ✅ Test [ ] 🤖 Build [ ] 🔁 CI [x] 📦 Chore (Release) [ ] ⏩ Revert Related Tickets & Documents Mobile & Desktop Screenshots/Recordings Added tests? [ ] 👍 yes [ ] 🙅 no, because they aren't needed [ ] 🙋 no, because I need help Added to documentation? [ ] 📜 README.md [ ] 📓 docs.opensauced.pizza [ ] 🍕 dev.to/opensauced [ ] 📕 storybook [ ] 🙅 no documentation needed [optional] Are there any post-deployment tasks we need to perform? [optional] What gif best describes this PR or how it makes you feel? I removed the subdomain app from the original opensauced.pizza site on Netlify. I have added a 3rd alias to the site in preparation for this. We may be able to add the app redirect in this and then make it the primary subdomain in the settings
2025-04-01T04:34:59.571295
2024-03-19T20:32:09
2195950689
{ "authors": [ "arminru", "gdfast", "tigrannajaryan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9316", "repo": "open-telemetry/opamp-go", "url": "https://github.com/open-telemetry/opamp-go/pull/264" }
gharchive/pull-request
Upgrade OTel SDK dependencies in the internal/example module Problem The example code has a number of dependencies on otel packages that are either deprecated or very old. This makes developing and improving the examples challenging, and also doesn't make for the best examples for new users. Solution Upgrade the OTel packages to v1.24.0 (the latest version) Remove old and deprecated dependent modules where possible Thank you @gdfast If you haven't already please also run go mod tidy in the example directory. @arminru we have a codecov report failing with some message about keys. I remember you were doing something with tokens a few days ago. Can this be the reason it fails? https://github.com/open-telemetry/opamp-go/actions/runs/8349792083/job/22871138122?pr=264 @tigrannajaryan the token should be available via the org secret already but the workflow probably needs to be configured to actually use it. The action might also need an update at some point. https://docs.codecov.com/docs/adding-the-codecov-token @tigrannajaryan I just checked the workflow yaml and it's already set up correctly. The token itself was the problem, seems like the repo had its own invalid or expired token configured locally that was overriding the org secret. I deleted it, now it should work. Great, thanks @arminru ! Thank you @gdfast If you haven't already please also run go mod tidy in the example directory. ✔️ I did this before pushing
2025-04-01T04:34:59.642922
2023-03-22T16:31:11
1636123096
{ "authors": [ "Allex1", "Saki042", "TylerHelmuth", "pseymournutanix", "seb-835", "sreejesh-radhakrishnan-db" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9317", "repo": "open-telemetry/opentelemetry-helm-charts", "url": "https://github.com/open-telemetry/opentelemetry-helm-charts/issues/696" }
gharchive/issue
imagePullSecrets on the collector image Hello, The helm chart now allows imagePullSecrets for the operator image, but this doesn't fall into the collector (unless I am missing something). Perhaps it would be easier to add them to the service accounts if set in .Values.imagePullSecrets @pseymournutanix That's a good point and we are aware of it. As we don't actually deploy the cr as part of this chart we should at least update the examples. Can you help with that? Thanks I am following https://opentelemetry.io/docs/k8s-operator/automatic/#create-an-opentelemetry-collector-optional and my deployment is failing for same reason. as its private reg for image. I can see deployment this is the service account used., are you saying we should add this serviceAccount with a imagePullSecret ? serviceAccount: care-optl-collector serviceAccountName: care-optl-collector @pseymournutanix did you make it work? please give some pointers if the case. Using https://opentelemetry.io/docs/k8s-operator/automatic/#create-an-opentelemetry-collector-optional as the starting point, I believe you can do kubectl apply -f - <<EOF apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: demo spec: imagePullPolicy: THE_VALUE_YOU_WANT config: | receivers: otlp: protocols: grpc: http: processors: memory_limiter: check_interval: 1s limit_percentage: 75 spike_limit_percentage: 15 batch: send_batch_size: 10000 timeout: 10s exporters: logging: service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, batch] exporters: [logging] metrics: receivers: [otlp] processors: [memory_limiter, batch] exporters: [logging] logs: receivers: [otlp] processors: [memory_limiter, batch] exporters: [logging] EOF And it will use that value in the deployment it makes for the collector. Since this object is created outside the opentelemetry-operator chart the chart cannot be used to set this value. I believe this is another reason to look at https://github.com/open-telemetry/opentelemetry-helm-charts/issues/562 and https://github.com/open-telemetry/opentelemetry-helm-charts/issues/69 sorry the issue is with imagePullSecret not present on the deployment and not imagePullPolicy? how will having imagePullPolicy in this way help solve the issue if secret not present in deployment and hence cannot pull image from private reg? Just to make anyone trying to do what I have been doing, I did use imagePulllSecrets in the spec, and has to use --validate=false for k8s apply not to fail rather throw warning Warning: unknown field "spec.imagePullSecrets" and Deployment came up fine. Oh sorry, I mixed up the fields. It looks like the OpenTelemetryCollector custom resource does not accept imagePullSecrets :( The stuff about the operator chart not being able to accomplish this is still accurate. Based on https://github.com/open-telemetry/opentelemetry-operator/issues/846 the current solution is to create and manage your own service account that has the imagePullSecrets set, and then use the name of that service account in .Spec. serviceAccount on the OpenTelemetryCollector CR. That all kinda complicated, so I think it is worth commenting on that issue and asking that imagePullSecrets also be available on the CR directly. Hi there some plan to add the imagePullSecrets on the CR directly ? or do we have to consider to always put it in the ServiceAccount ? @TylerHelmuth , is this issue still open? Can I get more info about it? 
The desired state would be for the setting to be exposed on the OpenTelemetryCollector CR, so a change needs to be made in the otel operator repo. @TylerHelmuth, I am working in open source for the first time and might require more context. Can you confirm this is the repo link: https://github.com/open-telemetry/opentelemetry-operator
2025-04-01T04:34:59.669474
2023-02-09T13:49:16
1577926015
{ "authors": [ "Flarna", "dyladan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9318", "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/issues/3598" }
gharchive/issue
Consider making internal SDK components private Currently we export a lot of functions, types, objects, and classes from our core packages that we consider to be internal. This leads people to depend on these components, which we do not recommend (example: https://github.com/prisma/prisma/blob/9c6e80e6fadc3068a54bc6cbff2edb218e88d345/packages/engine-core/src/tracing/createSpan.ts). As much as possible we should encourage users to use the API interfaces only. This could be done by making them private, making the constructors private, or making all properties not included in the interface private (maybe there is some way to exclude some properties from the published types while keeping them internally?) non-exhaustive list: SDK Span and Tracer SDK Meter and Metrics TracerProvider (still need a way to configure it though) Everything in core package We could export interfaces instead of classes where possible. This has the additional benefit that typescript doesn't complain about incompatible types just because private fields differ.
2025-04-01T04:34:59.680474
2024-02-29T15:49:26
2161566951
{ "authors": [ "Amit-mycedar", "alexghr", "axe-me", "luskin", "pichlermarc" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9319", "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/issues/4518" }
gharchive/issue
Package subpath './build/src/trace/NoopTracer' is not defined by "exports" What happened? Steps to Reproduce Install the latest version of OTEL (1.8.0) Expected Result Work with no errors Actual Result Throws an error Additional Details we have 2 different installations of OTEL 1 is a first-party use (in our code) where it was easy to overcome the issue by locking the version to 1.7.0 the other one is made by another package (temporal.io) which requires a minimum version of ^1.3.0, and is installing the latest 1.8.0 version which causes the bug. OpenTelemetry Setup Code No response package.json { "main": "dist/index.js", "types": "dist/index.d.ts", "scripts": { "build": "rimraf dist && tsc", "postinstall": "npm run build", "prepublish": "npm run build" }, "private": true, "dependencies": { <EMAIL_ADDRESS>"^2.0.0-alpha.2", <EMAIL_ADDRESS>"^2.0.0-alpha.14", "@nestjs/common": "^9.4.2", "@nestjs/core": "^10.0.5", "@nestjs/swagger": "^7.2.0", "@nestjs/terminus": "^10.0.1", "@opentelemetry/api": "1.7.0", "@opentelemetry/auto-instrumentations-node": "^0.35.0", "@opentelemetry/exporter-trace-otlp-proto": "^0.34.0", "@opentelemetry/sdk-node": "^0.39.1", "@temporalio/client": "1.7.0", "@temporalio/worker": "1.7.0", "axios": "^1.4.0", "class-transformer": "^0.5.1", "class-validator": "^0.14.0", "decimal.js": "^10.4.3", "dinero.js": "^2.0.0-alpha.14", "dotenv": "^16.1.1", "iterare": "^1.2.1", "libphonenumber-js": "^1.10.56", "lodash": "^4.17.21", "rxjs": "^7.8.1" }, "devDependencies": { "@cedar/eslint-config-global": "file:../eslint", "@temporalio/client": "1.7.0", "@types/lodash": "^4.14.195", "@types/node": "^20.2.5", "@typescript-eslint/parser": "^5.59.7", "prettier": "^2.8.8", "typescript": "^5.0.4" } } Relevant log output Error: Package subpath './build/src/trace/NoopTracer' is not defined by "exports" in /node_modules/<EMAIL_ADDRESS> at new NodeError (node:internal/errors:406:5) at exportsNotFound (node:internal/modules/esm/resolve:268:10) at packageExportsResolve (node:internal/modules/esm/resolve:598:9) at resolveExports (node:internal/modules/cjs/loader:547:36) at Function.Module._findPath (node:internal/modules/cjs/loader:621:31) at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1034:27) at Module.Hook._require.Module.require (/node_modules/require-in-the-middle/index.js:81:25) at require (node:internal/modules/helpers:130:18) at Object.<anonymous> (node_modules/@temporalio/worker/src/tracing.ts:2:1) at Module._compile (node:internal/modules/cjs/loader:1241:14) having the same issue here I was not able to reproduce this on my own. I can see however, that there's a deep-import used in that version of the @temporalio/worker package. See https://github.com/temporalio/sdk-typescript/blob/v1.7.0/packages/worker/src/tracing.ts#L2 The current version of that package does not seem to do that anymore. Internal package structure may change at any time, therefore we don't consider deep imports like this as part of the stable API (as is the case for most typescript packages). Are you using a deep-import in your package too by any chance? Closing as I think that that's caused by using a non-stable API. Please let me know if that was not the reason, I'll then re-open the issue. This is problem for us as well @pichlermarc. 
We utilize the NoopTracerProvider which is now not exported in the package.json and therefore resulting in these errors: Module not found: Package path ./build/src/trace/NoopTracerProvider is not exported from package /Users/gregg/Code/web/node_modules/.pnpm/@mothership+service-kit@2.5.1/node_modules/@opentelemetry/api (see exports field in /Users/gregg/Code/web/node_modules/.pnpm/@mothership+service-kit<EMAIL_ADDRESS> Import trace for requested module: ./node_modules/.pnpm/@mothership+service-kit<EMAIL_ADDRESS>./src/instrumentation.ts @luskin this looks to be the same deep-import issue as mentioned above. Somewhere in that package there must be a deep import like @opentelemetry/api/build/src/trace/NoopTracerProvider. The same reason as stated above applies. Internal package structure may change at any time, so we don't consider deep-imports like this to be part of the stable API. Hello, would it be possible to export a createNoopTracer function from the API-package, similar to the createNoopMeter that's already part of the public API of the package? I realise the code already exists and would just have to be packaged nicely for public consumption? My use case is to build my own no-op SDK, without relying on the fact that an unintiliazed API is using noop components. My use case is to build my own no-op SDK, without relying on the fact that an unintiliazed API is using noop components. I'd rather not provide another public API in the @opentelemetry/api package unless it's required by the spec. Currently the specification does not leave any room for us to publish a 2.0 API package, which means that we have to be extremely careful with what we add to the API as we may be stuck with it for a very, very long time. createNoopMeter() exists as there's certain cases in the spec that require us to return a Noop meter. Anyway - a feature request like this should be discussed in a separate issue for visibility reasons.
2025-04-01T04:34:59.694238
2019-11-15T22:07:19
523712728
{ "authors": [ "codecov-io", "mayurkale22", "xiao-lix" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9320", "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/pull/542" }
gharchive/pull-request
feat: add example for postgres plugin Which problem is this PR solving? Requires #501 Closes #491 Short description of the changes Add an express app example using postgres plugin Codecov Report Merging #542 into master will increase coverage by 6.06%. The diff coverage is n/a. @@ Coverage Diff @@ ## master #542 +/- ## ========================================== + Coverage 90.4% 96.46% +6.06% ========================================== Files 144 124 -20 Lines 7261 5877 -1384 Branches 642 530 -112 ========================================== - Hits 6564 5669 -895 + Misses 697 208 -489 Impacted Files Coverage Δ ...ages/opentelemetry-plugin-http/test/utils/utils.ts 33.33% <0%> (-26.67%) :arrow_down: ...ckages/opentelemetry-core/src/common/NoopLogger.ts 33.33% <0%> (-16.67%) :arrow_down: ...metry-core/src/trace/instrumentation/BasePlugin.ts 80.55% <0%> (-5.56%) :arrow_down: ...res/opentelemetry-plugin-pg/test/assertionUtils.ts 96.29% <0%> (-3.71%) :arrow_down: ...core/src/context/propagation/BinaryTraceContext.ts 97.5% <0%> (-0.84%) :arrow_down: ...core/src/context/propagation/NoopHttpTextFormat.ts 100% <0%> (ø) :arrow_up: ...telemetry-plugin-grpc/test/utils/assertionUtils.ts 100% <0%> (ø) :arrow_up: .../opentelemetry-core/src/trace/spancontext-utils.ts 100% <0%> (ø) :arrow_up: ...ages/opentelemetry-core/src/internal/validators.ts 100% <0%> (ø) :arrow_up: ...telemetry-scope-base/test/NoopScopeManager.test.ts 100% <0%> (ø) :arrow_up: ... and 36 more @open-telemetry/javascript-approvers Please review and approve if looks good. The dependent PR (#501) is already merged, merging this one now. Thanks for the work!
2025-04-01T04:34:59.709041
2020-01-10T17:40:44
548207119
{ "authors": [ "codecov-io", "dyladan", "mayurkale22" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9321", "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/pull/683" }
gharchive/pull-request
chore: 0.3.3 release proposal Which problem is this PR solving? Weekly patch release Release draft note: https://github.com/open-telemetry/opentelemetry-js/releases/edit/untagged-f1f83ee5eb03f516352b Total 70 files updated: 28 packages (version.ts + package.json) = 56 test-utils package (only package.json) = 1 12 examples (package.json) = 12 1 lerna.json = 1 Examples in getting-started/ guide are out-of-date(#682), hence didn't update package version there. Blocked on https://github.com/open-telemetry/opentelemetry-js/pull/681 Codecov Report Merging #683 into master will decrease coverage by 1.67%. The diff coverage is 45.83%. @@ Coverage Diff @@ ## master #683 +/- ## ========================================= - Coverage 91.58% 89.9% -1.68% ========================================= Files 217 214 -3 Lines 10156 10233 +77 Branches 916 932 +16 ========================================= - Hits 9301 9200 -101 - Misses 855 1033 +178 Impacted Files Coverage Δ packages/opentelemetry-plugin-dns/src/version.ts 100% <ø> (ø) :arrow_up: packages/opentelemetry-plugin-mysql/src/version.ts 100% <ø> (ø) :arrow_up: ...ges/opentelemetry-scope-async-hooks/src/version.ts 0% <0%> (ø) :arrow_up: packages/opentelemetry-plugin-https/src/version.ts 0% <0%> (ø) :arrow_up: ...kages/opentelemetry-exporter-jaeger/src/version.ts 0% <0%> (ø) :arrow_up: packages/opentelemetry-tracing/src/version.ts 0% <0%> (ø) :arrow_up: packages/opentelemetry-web/src/version.ts 0% <0%> (ø) :arrow_up: packages/opentelemetry-metrics/src/version.ts 0% <0%> (ø) :arrow_up: packages/opentelemetry-scope-base/src/version.ts 0% <0%> (ø) :arrow_up: ...s/opentelemetry-exporter-prometheus/src/version.ts 0% <0%> (ø) :arrow_up: ... and 64 more The build failure is a SIGHUP received during the bootstrap phase of the build. Is it possible we are running out of memory on our build node?
2025-04-01T04:34:59.752980
2022-06-17T17:55:07
1275318366
{ "authors": [ "arielvalentin", "plantfansam" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9322", "repo": "open-telemetry/opentelemetry-ruby-contrib", "url": "https://github.com/open-telemetry/opentelemetry-ruby-contrib/pull/51" }
gharchive/pull-request
SQS processing spans This PR was originally in opentelemetry-ruby. Migrated it over to -contrib because we moved instrumentation here. Original description from @YanivD follows: This PR was discussed as part of https://github.com/open-telemetry/opentelemetry-ruby/pull/1026 What's added in this PR? Added a config option extract_messaging_context, disabled by default. Extracting context propagation for the SQS.ReceiveMessage method. In addition to the receive span, it creates an empty process span for every processed message, linking to the producer span. Context propagation strategy Currently SQS doesn't natively support propagating the OTEL context. We used the JS aws-sdk instrumentation as inspiration for this PR. It uses message attributes to populate propagator fields. Limitations SQS supports up to 10 message attributes. Context fields will not be injected into the SQS message attributes if doing so would cause the limit to be exceeded.
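An illustrative sketch of the propagation strategy described here, written against the Python OTel SDK and boto3 rather than the Ruby instrumentation itself; the queue URL and message body are placeholders:

```python
# Hedged sketch: inject the current trace context into SQS message
# attributes, one attribute per propagator field, mirroring what the
# JS/Ruby instrumentations described above do.
import boto3
from opentelemetry.propagate import inject

sqs = boto3.client("sqs")  # assumes AWS region/credentials in the env
carrier = {}
inject(carrier)  # fills e.g. {"traceparent": "00-..."}

# SQS allows at most 10 message attributes; per the limitation above,
# skip injection if the context fields would exceed that limit.
attributes = {
    key: {"DataType": "String", "StringValue": value}
    for key, value in carrier.items()
}
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MessageBody="hello",
    MessageAttributes=attributes,
)
```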
2025-04-01T04:34:59.813642
2022-08-22T18:07:34
1346798347
{ "authors": [ "chalin" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9323", "repo": "open-telemetry/opentelemetry.io", "url": "https://github.com/open-telemetry/opentelemetry.io/pull/1646" }
gharchive/pull-request
Blog Go Web-apps repackaging and copyedits Followup to #1581 Preview: https://deploy-preview-1646--opentelemetry.netlify.app/blog/2022/go-web-app-instrumentation/ [x] Create page bundle with local & compressed images [ ] Get approval from @NavehMevorach @svrnm @cartermp [ ] Remove draft status This is ready for review @NavehMevorach @svrnm @cartermp. I won't have time to make a full pass for copyediting, but this article is fine to publish as is. @svrnm - is this good to go?
2025-04-01T04:34:59.816356
2023-01-13T13:46:35
1532329960
{ "authors": [ "Vibaswan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9324", "repo": "open-traffic-generator/openapiart", "url": "https://github.com/open-traffic-generator/openapiart/pull/386" }
gharchive/pull-request
Segregate proto generation resolving issue #385 For some reason (I'm not sure why) Ranga made the change, which I am reverting right now: https://github.com/open-traffic-generator/openapiart/commit/8fc7b8d1e559dcf9f759cb67bf277fd4531ef065#diff-c4e7b5e541655c6a631951b4e8fe368a052b0132fe605d13fbad12689fb0c416 But I don't see any problem in keeping it separate. Made changes in CI so that the generated proto, yml, and json are validated.
2025-04-01T04:34:59.817357
2022-02-21T01:38:08
1145218756
{ "authors": [ "greenkiwi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9325", "repo": "open-turo/actions-gha", "url": "https://github.com/open-turo/actions-gha/issues/8" }
gharchive/issue
feat: add node-check-dist Add new action that will check that the dist folder has no changes. JS GitHub actions should have the current build of code checked in. This action will ensure that the code is checked in and has not changed. Closed with #9
2025-04-01T04:34:59.833413
2024-05-16T08:03:13
2299657085
{ "authors": [ "le-h-des-iles", "tjbck" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9326", "repo": "open-webui/open-webui", "url": "https://github.com/open-webui/open-webui/issues/2305" }
gharchive/issue
enhancement: auth ollama connection Is your feature request related to a problem? Please describe. I'm hosting my Ollama on OVH Public Cloud. My application is private, so I need to use token authentication to access it. But, unfortunately, I didn't find the possibility to pass a token in the open-webui connection settings. Describe the solution you'd like Allow the user to pass a token per Ollama Base URL. Additional context The feature is already done and it looks like this: The token field is hidden by default. You can show it by clicking on the key: Added to dev
2025-04-01T04:34:59.836398
2024-11-25T16:52:30
2691454113
{ "authors": [ "PatBQc", "tjbck" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9327", "repo": "open-webui/open-webui", "url": "https://github.com/open-webui/open-webui/issues/7350" }
gharchive/issue
Customized model creation accessible to users Discussed in https://github.com/open-webui/open-webui/discussions/4589 Originally posted by Genai-labs August 14, 2024 Your feature request related to a problem? Please describe. It’s not a problem, but rather related to the idea of making this app available to a group of people. Describe the solution you’d like In the case of a large number of users, it could be interesting if they could create their own customized models without having to go through the admins. Already possible with 0.4 Wow That was fast 😄 Thanks a lot !
2025-04-01T04:34:59.884335
2023-06-15T18:45:20
1759364248
{ "authors": [ "vanithavalluripalli9" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9328", "repo": "openBackhaul/OperationKeyManagement", "url": "https://github.com/openBackhaul/OperationKeyManagement/issues/214" }
gharchive/issue
Request to add description of callbacks that end the subscription of the old release, if the new application is of the same release and same application name, for the /v1/bequeath-your-data-and-die service Problem Description: Scenario: the same release of an application, but deployed at a different address and port. When the same release of an existing application, deployed at a different address and port, is registered, it will update the http-c and tcp-c of the old release in the config file of the new application with the preceding-application information provided in the request body. Once the application is approved and receives "embed-yourself", a callback will be triggered to the old application's "/v1/bequeath-your-data-and-die". When /v1/bequeath-your-data-and-die is executed, it will update the NewRelease http-c and tcp-c with the new application details. Then, during the series of callbacks, the following callbacks should not be sent: PromptForBequeathingDataCausesRObeingRequestedToStopNotificationsToOldRelease PromptForBequeathingDataCausesALTbeingRequestedToStopNotificationsToOldRelease promptForBequeathingDataCausesRequestForDeregisteringOfOldRelease as sending them would end the subscriptions of the same application. Solution: add a description for the callbacks mentioned below: PromptForBequeathingDataCausesRObeingRequestedToStopNotificationsToOldRelease PromptForBequeathingDataCausesALTbeingRequestedToStopNotificationsToOldRelease promptForBequeathingDataCausesRequestForDeregisteringOfOldRelease The callback will not be sent if new-application-name == application name in *-http-s-000 and new-application-address == address in *-http-s-000. Closing the issue, based on the decision made in the issue https://github.com/openBackhaul/ExecutionAndTraceLog/issues/260
2025-04-01T04:34:59.887903
2019-11-21T14:49:38
526642385
{ "authors": [ "demx8as6", "openBackhaul" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9329", "repo": "openBackhaul/equipment", "url": "https://github.com/openBackhaul/equipment/issues/9" }
gharchive/issue
UUID Definitions of the UUID, respectively universal-id, in the ONF Core IM must be checked for completeness and potentially completed. Inserted values must be unique at least in the network domain, which is managed by the SDN Controller. Local uniqueness, inside the Device, is not sufficient. Further characteristics (e.g. persistence after system restart or soft reset) and the information structure of the ID shall be analysed and defined (if required). Uniqueness on the SDN Controller is ensured by concatenation of the NETCONF-SERVER identifier (mountpoint, node-id) and the device-local ids. Note: the device does not know its outside world when generating UUIDs. Q: Where can such a statement be documented? I: to improve, we could define that the OAM MAC address is part of the UUID when UUIDs are generated by the device - issue: What happens when the Backplane is removed? Proposal to the 5G-xhaul call on 19th of October 2022: the issue should be closed without further activity. Note: If a native NETCONF server implementation exists, RFC 4122 section 4.4 should be implemented - such a generation process guarantees uniqueness across the universe. In the case of Mediators as part of the OAM implementation, it is necessary that the Mediator be able to map the UUID values of the CoreModel to the identifiers of the NetworkElement. As a consequence, the UUID values do not follow the format of RFC 4122 and can guarantee uniqueness only at NetworkElement level. Uniqueness on the SDN Controller is ensured by concatenation of the NETCONF-SERVER identifier (mountpoint, node-id) and the device-local ids.
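A small illustrative sketch of the two mechanisms above, using Python's standard uuid module (the node-id value is hypothetical): RFC 4122 section 4.4 generation on the device, and controller-level uniqueness by prefixing the NETCONF mountpoint identifier:

```python
# Sketch: device-local RFC 4122 v4 UUID (section 4.4, random-based)
# plus controller-side disambiguation via the mountpoint/node-id.
import uuid

device_local_uuid = str(uuid.uuid4())   # unique "across the universe"
node_id = "ne-berlin-007"               # hypothetical NETCONF node-id
controller_unique_id = f"{node_id}/{device_local_uuid}"
print(controller_unique_id)
```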
2025-04-01T04:34:59.934067
2021-01-04T15:58:09
778192259
{ "authors": [ "ax3l", "cosmonaut-ok" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9330", "repo": "openPMD/openPMD-standard", "url": "https://github.com/openPMD/openPMD-standard/issues/236" }
gharchive/issue
Proposed: make option "recordBased" for iterationEncoding It's not convenient to create a separate file/group for each iteration when actively working with time series (e.g. if you need to select a single point from every iteration). It is much better to create an n+1-dimensional array and write data to it. But this breaks openPMD compatibility. Hi @cosmonaut-ok, Thank you for the idea! We plan something very similar, currently drafted as "variableBased" encoding related to #221. In that, we will use the intrinsic capabilities of e.g. the ADIOS2 format to encode steps inside a data set (variable) if the file format supports this. n+1 dimensionality in other formats, e.g. HDF5, is thinkable as a work-around for those formats. Yet we need to be aware that, contrary to ADIOS1/2 "step" encoding, HDF5 would then be limited to staying with the same shape of the data set for all iterations, which is limiting for some applications (e.g. particle data shape changes over time).
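For illustration, the n+1-dimensional HDF5 work-around mentioned above could look like the following hedged h5py sketch (the dataset path and sizes are hypothetical); note how the fixed trailing shape embodies the limitation for particle data:

```python
# Sketch: iterations appended along axis 0 of a single resizable
# HDF5 dataset instead of one group/file per iteration.
import h5py
import numpy as np

nx = 128
with h5py.File("series.h5", "w") as f:
    dset = f.create_dataset(
        "data/E_x", shape=(0, nx), maxshape=(None, nx), dtype="f8"
    )
    for step in range(10):
        dset.resize(dset.shape[0] + 1, axis=0)   # grow the time axis
        dset[-1] = np.sin(np.linspace(0, 2 * np.pi, nx) + 0.1 * step)
# Caveat from the discussion: every iteration must keep the same
# trailing shape (nx here), which breaks down for particle data whose
# shape changes over time.
```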
2025-04-01T04:34:59.980190
2024-10-02T20:44:32
2562609593
{ "authors": [ "AriesAlex", "fulldecent", "khorwood-openai" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9331", "repo": "openai/openai-realtime-console", "url": "https://github.com/openai/openai-realtime-console/issues/15" }
gharchive/issue
Host a demo instance Can you please host a demo instance? This will allow us to quickly evaluate this API and see if it is worth building on. Hey there! As we roll out the Realtime API, when you have access you'll be able to play with it directly here: https://platform.openai.com/playground/realtime I have forked the repository and set up GitHub Pages. The online instance is available at: https://ariesalex.github.io/openai-realtime-console fork repo: https://github.com/AriesAlex/openai-realtime-console Cool thank you
2025-04-01T04:34:59.986145
2023-07-04T02:15:39
1787028738
{ "authors": [ "Huntrr", "gongyaguang-tal", "lxww302" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9332", "repo": "openai/prm800k", "url": "https://github.com/openai/prm800k/issues/7" }
gharchive/issue
Questions about implementation detail. Thanks for your excellent work on open-sourcing the data. I have a few questions regarding the implementation details. You mentioned "We train like a normal LM, learning to predict a single rating token (-1, 0, 1) given a problem and the solution up to the current step". In my understanding, the input is something like [question_tokens, sep_token, solution_step1_tokens, rating_token_for_1, solution_step2_tokens, rating_token_for_0, solution_step3_tokens, rating_token_for_-1]. You use a different token for each score, and when predicting the rating_token_for_step3, the model has access to the rating scores for step1 and step2 because they are present in the context. The training loss is the log-likelihood of these rating tokens over the whole vocabulary; am I correct? You also mentioned "it suffices to perform a single PRM forward pass over the whole solution". I am not sure how that works out. My best guess is that a rating placeholder token is appended at the end of every step. The input is something like [question_tokens, sep_token, solution_step1_tokens, rating_placeholder_token, solution_step2_tokens, rating_placeholder_token, solution_step3_tokens, rating_placeholder_token]. You take the log-likelihood of these rating tokens and normalize it over 3 tokens (rating_token_for_-1, rating_token_for_0, rating_token_for_1). Is that correct? We did an even simpler setup! The examples look like: prompt=[question_tokens, solution_step1_tokens, request_rating_token, placeholder_token, solution_step2_tokens, request_rating_token, placeholder_token, solution_step3_tokens, request_rating_token] completion=[rating_for_step_3] and train log likelihood over the whole vocab for just the completion. The model doesn't get to see the step1, step2 ratings; it just sees the placeholders. Basically correct! We just put in all the placeholders, do a forward pass, and normalize the ratings from the log probs on the placeholder tokens. @Huntrr I have several questions. Hope for your reply. How are the placeholder_token and request_rating_token defined? Will they have a negative impact on the continuity of the total solution? Which would be better: adding a new classifier head, or training the way an LM does? I think most open source LLMs have an idea of "special tokens" you can use as placeholder tokens. No negative impact on "continuity" because we don't sample solutions from the PRM model. These should be equivalent for the purposes of this project. Please open a new issue if you have questions.
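A schematic sketch of the layout described above; the token names are stand-ins for whatever special tokens a given tokenizer defines:

```python
# Illustrative only: real ids depend on the tokenizer's special tokens.
REQUEST_RATING = "<request_rating>"
PLACEHOLDER = "<placeholder>"

def build_prm_example(question, steps):
    """Training example rating the final step; earlier ratings are masked."""
    prompt = list(question)
    for i, step in enumerate(steps):
        prompt += list(step)
        prompt.append(REQUEST_RATING)
        if i < len(steps) - 1:
            # Earlier steps get a placeholder: the model never sees
            # their true ratings in context.
            prompt.append(PLACEHOLDER)
    completion = ["<rating_for_final_step>"]  # LM loss over the full vocab
    return prompt, completion

prompt, completion = build_prm_example(
    ["Q:", "2+2?"], [["step1"], ["step2"], ["step3"]]
)
print(prompt, completion)
```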
2025-04-01T04:34:59.991042
2023-11-17T09:32:08
1998677975
{ "authors": [ "Blason", "SpiderD555", "bmbeverst", "orianelou" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9333", "repo": "openappsec/openappsec", "url": "https://github.com/openappsec/openappsec/issues/77" }
gharchive/issue
Applying local policy goes into a continuous loop Hi Team, I am trying to create a local policy and have attached my policy here (see [local_policy.yaml.txt](https://github.com/openappsec/openappsec/files/13390005/local_policy.yaml.txt)). However, when I apply the policy it goes on and on. Any idea how I can debug this or check what the errors in the file are? open-appsec-ctl -ap local_policy.yaml Applying new policy. Policy path: local_policy.yaml ........................................^C Hi @Blason, Thank you for attaching the policy! We have successfully recreated the issue and are looking into it. I will update you once we find a solution. Thanks man!! That means the policy is properly created. Hey folks, any luck with the policy? Hi @Blason, We're still looking into a fix, but to get you up and running you can try this workaround to reload the policy: cpnano -qs orchestration cpnano -rs orchestration I see - that is a good workaround. Just a small note about the bug: I think I have also encountered the same bug - in my case the problem was wrong yaml policy formatting. Once corrected, it all started to work correctly again. I think I have run into the same issue. I also had a continuous loop even after I corrected the policy. I copied the correct policy from a working node. @Blason does your open-appsec pod have any values in the /etc/cp/conf/policy.json file? Furthermore, how about the output of /usr/sbin/open-appsec-ctl -s? Run commands on the open-appsec pod with this command: kubectl exec -it <YOUR_OPEN_APPSEC_K8S_NGINX_INGRESS_POD_NAME> -- sh Hi, the issue was solved, my apologies for not updating earlier!
2025-04-01T04:35:00.053392
2021-01-01T02:28:12
777179797
{ "authors": [ "SeregaYakovlev", "guimkwon" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9334", "repo": "openaps/oref0", "url": "https://github.com/openaps/oref0/issues/1390" }
gharchive/issue
BG below 68 mg/dl not reported to rig or nightscout Describe the bug The communication between rig and pump fails when the BG level goes below 68 mg/dl. I see BG levels below 68 mg/dl on the pump screen fine. However, no BG is reported on nightscout and PuTTY shows a 'BG too old' error. When BG goes back up above 70 mg/dl, everything goes back to normal; looping works and BG is reported on nightscout. Please see the nightscout screenshot below. Another related or separate problem is that the BG trend arrow does not show on nightscout, although the pump screen shows the arrows. To Reproduce Steps to reproduce the behavior: Run oref0 0.7.0 and Nightscout 14.0.7 using a medtronic pump and enlite sensor. When BG goes below 68 mg/dl, no BG is reported on nightscout and a 'BG too old' error appears on PuTTY. I have used two different medtronic pumps and two different edison/explorer boards and noticed the same problem. Screenshots https://files.gitter.im/5473937fdb8155e6700d7bec/LlCs/image.png PuTTY output below Checking deliverAt: 2020-12-21T03:52:54.603Z is within 1m of current time: Sun Dec 20 21:52:54 CST 2020 and that smb-suggested.json is less than 1m old enact/smb-suggested.json: {"deliverAt":"2020-12-21T03:52:54.603Z","temp":"absolute","duration":30,"rate":0} "If current system time Sun Dec 20 2020 21:52:54 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 12.9m ago at Sun Dec 20 2020 21:40:00 GMT-0600 (CST). Shortening 106m long zero temp to 30m. " Temp refreshed: monitor/temp_basal.json: {"duration":106,"temp":"absolute","rate":0} enact/smb-enacted.json: "Rate: 0 Duration: 30" Checking pump status (suspended/bolusing): {"status":"normal","bolusing":false,"suspended":false} Temp refreshed: monitor/temp_basal.json: {"duration":30,"temp":"absolute","rate":0} No bolus needed. Settings less than 15 minutes old. grep: enact/bolused.json: No such file or directory Refreshing pumphistory because: enacted, No deliverAt found. {"reason":"If current system time Sun Dec 20 2020 21:55:15 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 15.3m ago at Sun Dec 20 2020 21:40:00 GMT-0600 (CST). Temp 0 <= current basal 0.9U/hr; doing nothing. "} Couldn't smb_verify_suggested Setup Information (please complete the following information): Pump type: Medtronic 723 CGM type: medtronic (mdt) Rig type: Edison/Explorer Board rig oref0 version: oref0 0.7.0 master nightscout version: 14.0.7 I updated Oref0 and re-ran oref0 setup. However, I still see the same problem. The full script output while it is not looping and when it comes back to looping is below. When BG is not reported to nightscout and PuTTY shows the 'BG too old' error: Starting oref0-pump-loop at Sat Jan 2 16:27:03 CST 2021 with 4 second wait_for_silence: MDT CGM configured; not waiting Listening for 4s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:27:08 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid.
Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Not enough glucose data to calculate carb absorption; found: 20 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":-0.41,"maxDeviation":0,"minDeviation":-1.66,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":0.218,"allDeviations":[0,0,-2],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 8s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:27:53 CST 2021 Checking that pump clock: "2021-01-02T16:27:11-06:00" is within 90s of current time: 2021-01-02T16:27:55-0600 Temp refreshed: monitor/temp_basal.json: {"duration":23,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":-0.41,"maxDeviation":0,"minDeviation":-1.66,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":0.218,"allDeviations":[0,0,-2],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. {"iob":-0.796,"activity":-0.004,"basaliob":-0.796,"bolusiob":0,"netbasalinsulin":-1,"bolusinsulin":0,"time":"2021-01-02T22:27:13.000Z","iobWithZeroTemp":{"iob":-0.796,"activity":-0.004,"basaliob":-0.796,"bolusiob":0,"netbasalinsulin":-1,"bolusinsulin":0,"time":"2021-01-02T22:27:13.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":8.73}} {"delta":0,"glucose":64,"noise":null,"short_avgdelta":0,"long_avgdelta":-1.96,"date":1609625280000,"last_cal":0,"device":"openaps://kwon"} null No deliverAt found. {"reason":"If current system time Sat Jan 02 2021 16:28:04 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 20.1m ago at Sat Jan 02 2021 16:08:00 GMT-0600 (CST). Temp 0 <= current basal 0.9U/hr; doing nothing. "} Couldn't smb_verify_suggested oref0-pump-loop failed. MDT CGM configured; not waiting Unsuccessful oref0-pump-loop (BG too old) at Sat Jan 2 16:28:05 CST 2021 Starting oref0-pump-loop at Sat Jan 2 16:39:03 CST 2021 with 24 second wait_for_silence: MDT CGM configured; not waiting Listening for 24s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:39:29 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid. Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Warning: setting mealCOB to 0 because currentDeviation is null/undefined Not enough glucose data to calculate carb absorption; found: 20 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":null,"maxDeviation":0,"minDeviation":999,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":999,"allDeviations":[],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 9s: .No interfering pump comms detected from other rigs (this is a good thing!) 
Continuing oref0-pump-loop at Sat Jan 2 16:40:07 CST 2021 Checking that pump clock: "2021-01-02T16:39:25-06:00" is within 90s of current time: 2021-01-02T16:40:09-0600 Temp refreshed: monitor/temp_basal.json: {"duration":11,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":null,"maxDeviation":0,"minDeviation":999,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":999,"allDeviations":[],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. {"iob":-0.894,"activity":-0.0049,"basaliob":-0.894,"bolusiob":0,"netbasalinsulin":-1.15,"bolusinsulin":0,"time":"2021-01-02T22:39:27.000Z","iobWithZeroTemp":{"iob":-0.894,"activity":-0.0049,"basaliob":-0.894,"bolusiob":0,"netbasalinsulin":-1.15,"bolusinsulin":0,"time":"2021-01-02T22:39:27.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":20.97}} {"delta":0,"glucose":64,"noise":null,"short_avgdelta":0,"long_avgdelta":-1.96,"date":1609625280000,"last_cal":0,"device":"openaps://kwon"} null No deliverAt found. {"reason":"If current system time Sat Jan 02 2021 16:40:17 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 32.3m ago at Sat Jan 02 2021 16:08:00 GMT-0600 (CST). Temp 0 <= current basal 0.9U/hr; doing nothing. "} Couldn't smb_verify_suggested oref0-pump-loop failed. MDT CGM configured; not waiting Unsuccessful oref0-pump-loop (BG too old) at Sat Jan 2 16:40:17 CST 2021 When BG is above 70 mg/dl and looping works again and BG is reported to nightscout Starting oref0-pump-loop at Sat Jan 2 16:45:03 CST 2021 with 28 second wait_for_silence: MDT CGM configured; not waiting Listening for 28s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:45:32 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid. Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Not enough glucose data to calculate carb absorption; found: 22 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":6.89,"maxDeviation":0.13,"minDeviation":-0.05,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":4.91,"allDeviations":[7,0,0,0,0],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 1s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:46:05 CST 2021 Checking that pump clock: "2021-01-02T16:45:23-06:00" is within 90s of current time: 2021-01-02T16:46:07-0600 Temp refreshed: monitor/temp_basal.json: {"duration":5,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":6.89,"maxDeviation":0.13,"minDeviation":-0.05,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":4.91,"allDeviations":[7,0,0,0,0],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. 
{"iob":-0.963,"activity":-0.0054,"basaliob":-0.963,"bolusiob":0,"netbasalinsulin":-1.25,"bolusinsulin":0,"time":"2021-01-02T22:45:25.000Z","iobWithZeroTemp":{"iob":-0.963,"activity":-0.0054,"basaliob":-0.963,"bolusiob":0,"netbasalinsulin":-1.25,"bolusinsulin":0,"time":"2021-01-02T22:45:25.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":26.93}} {"delta":22,"glucose":90,"noise":null,"short_avgdelta":22,"long_avgdelta":3.48,"date":1609627380000,"last_cal":0,"device":"openaps://kwon"} Autosens ratio: 1; Basal unchanged: 0.9; ISF unchanged: 30; CR: 5 currenttemp: { duration: 5, temp: 'absolute', rate: 0 } lastTempAge: 27 m tempModulus: 2 m SMB disabled (!microBolusAllowed) profile.sens: 30 sens: 30 CSF: 6 Limiting carb impact from 21.2 to 15 mg/dL/5m ( 30 g/h ) Carb Impact: 15 mg/dL per 5m; CI Duration: 0 hours; remaining CI ( 3 peak): 0 mg/dL per 5m UAM Impact: 21.2 mg/dL per 5m; UAM Duration: 1.1 hours minPredBG: 191 minIOBPredBG: 189 minZTGuardBG: 91 minUAMPredBG: 228 avgPredBG: 224 COB: 0 / 0 maxDelta 22 > 20% of BG 90 - disabling SMB BG projected to remain above 110 for 0 minutes BG projected to remain above 75 for 240 minutes naive_eventualBG: 119 bgUndershoot: -44 zeroTempDuration: 240 zeroTempEffect: 108 carbsReq: -25 2021-01-02T22:46:15.325Z Checking deliverAt: 2021-01-02T22:46:15.325Z is within 1m of current time: Sat Jan 2 16:46:15 CST 2021 and that smb-suggested.json is less than 1m old enact/smb-suggested.json: {"temp":"absolute","bg":90,"tick":"+22","eventualBG":246,"insulinReq":2.7,"reservoir":"40.15\n","deliverAt":"2021-01-02T22:46:15.325Z","sensitivityRatio":1,"COB":0,"IOB":-0.963,"BGI":0.81,"deviation":127,"ISF":30,"CR":5,"target_bg":110,"duration":30,"rate":2.7} "minPredBG 191, minGuardBG 110, IOBpredBG 201, UAMpredBG 246; maxDelta 22 > 20% of BG 90: SMB disabled; Eventual BG 246 >= 110, adj. req. rate: 6.3 to maxSafeBasal: 2.7, temp 0<2.7U/hr. " UAM: [90,110,129,146,162,176,188,199,208,216,221,226,228,229,230,231,232,233,234,234,235,236,237,237,238,239,239,240,240,241,241,242,242,243,243,243,244,244,244,244,245,245,245,245,245,245,246] IOB: [90,105,118,130,141,151,159,166,172,177,181,183,184,185,186,187,188,188,189,190,191,192,192,193,194,194,195,196,196,197,197,197,198,198,199,199,199,200,200,200,200,201] ZT: [90,91,92,93,94,95,96,97,98,99,101,102,104,105,107,108,110] Temp refreshed: monitor/temp_basal.json: {"duration":5,"temp":"absolute","rate":0} enact/smb-enacted.json: "Rate: 2.7 Duration: 30" Checking pump status (suspended/bolusing): {"status":"normal","bolusing":false,"suspended":false} Temp refreshed: monitor/temp_basal.json: {"duration":30,"temp":"absolute","rate":2.7} No bolus needed. Pump profile refreshed; Could not parse autotune_data Could not parse temptargets_data. No temptargets found. Settings refreshed; grep: enact/bolused.json: No such file or directory Refreshing pumphistory because: enacted, Settings less than 3 minutes old. Pump history update Screen shot of nightscout https://user-images.githubusercontent.com/70490258/103560761-f0a29f00-4e7d-11eb-8aa8-40adaa225cd8.png photo of pump screen when looping is not working https://user-images.githubusercontent.com/70490258/103560854-25165b00-4e7e-11eb-84ed-1a3466001c46.png I updated Oref0 and re-ran oref0 setup. However, I still see the same problem. The full script while it is not running and when it comes back to rerunning are below. 
When BG is not reported to nightscout and putty shows 'BG tood old' error. Starting oref0-pump-loop at Sat Jan 2 16:27:03 CST 2021 with 4 second wait_for_silence: MDT CGM configured; not waiting Listening for 4s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:27:08 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid. Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Not enough glucose data to calculate carb absorption; found: 20 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":-0.41,"maxDeviation":0,"minDeviation":-1.66,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":0.218,"allDeviations":[0,0,-2],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 8s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:27:53 CST 2021 Checking that pump clock: "2021-01-02T16:27:11-06:00" is within 90s of current time: 2021-01-02T16:27:55-0600 Temp refreshed: monitor/temp_basal.json: {"duration":23,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":-0.41,"maxDeviation":0,"minDeviation":-1.66,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":0.218,"allDeviations":[0,0,-2],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. {"iob":-0.796,"activity":-0.004,"basaliob":-0.796,"bolusiob":0,"netbasalinsulin":-1,"bolusinsulin":0,"time":"2021-01-02T22:27:13.000Z","iobWithZeroTemp":{"iob":-0.796,"activity":-0.004,"basaliob":-0.796,"bolusiob":0,"netbasalinsulin":-1,"bolusinsulin":0,"time":"2021-01-02T22:27:13.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":8.73}} {"delta":0,"glucose":64,"noise":null,"short_avgdelta":0,"long_avgdelta":-1.96,"date":1609625280000,"last_cal":0,"device":"openaps://kwon"} null No deliverAt found. {"reason":"If current system time Sat Jan 02 2021 16:28:04 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 20.1m ago at Sat Jan 02 2021 16:08:00 GMT-0600 (CST). Temp 0 <= current basal 0.9U/hr; doing nothing. "} Couldn't smb_verify_suggested oref0-pump-loop failed. MDT CGM configured; not waiting Unsuccessful oref0-pump-loop (BG too old) at Sat Jan 2 16:28:05 CST 2021 Starting oref0-pump-loop at Sat Jan 2 16:39:03 CST 2021 with 24 second wait_for_silence: MDT CGM configured; not waiting Listening for 24s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:39:29 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid. 
Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Warning: setting mealCOB to 0 because currentDeviation is null/undefined Not enough glucose data to calculate carb absorption; found: 20 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":null,"maxDeviation":0,"minDeviation":999,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":999,"allDeviations":[],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 9s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:40:07 CST 2021 Checking that pump clock: "2021-01-02T16:39:25-06:00" is within 90s of current time: 2021-01-02T16:40:09-0600 Temp refreshed: monitor/temp_basal.json: {"duration":11,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":null,"maxDeviation":0,"minDeviation":999,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":999,"allDeviations":[],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. {"iob":-0.894,"activity":-0.0049,"basaliob":-0.894,"bolusiob":0,"netbasalinsulin":-1.15,"bolusinsulin":0,"time":"2021-01-02T22:39:27.000Z","iobWithZeroTemp":{"iob":-0.894,"activity":-0.0049,"basaliob":-0.894,"bolusiob":0,"netbasalinsulin":-1.15,"bolusinsulin":0,"time":"2021-01-02T22:39:27.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":20.97}} {"delta":0,"glucose":64,"noise":null,"short_avgdelta":0,"long_avgdelta":-1.96,"date":1609625280000,"last_cal":0,"device":"openaps://kwon"} null No deliverAt found. {"reason":"If current system time Sat Jan 02 2021 16:40:17 GMT-0600 (CST) is correct, then BG data is too old. The last BG data was read 32.3m ago at Sat Jan 02 2021 16:08:00 GMT-0600 (CST). Temp 0 <= current basal 0.9U/hr; doing nothing. "} Couldn't smb_verify_suggested oref0-pump-loop failed. MDT CGM configured; not waiting Unsuccessful oref0-pump-loop (BG too old) at Sat Jan 2 16:40:17 CST 2021 When BG is above 70 mg/dl and looping works again and BG is reported to nightscout Starting oref0-pump-loop at Sat Jan 2 16:45:03 CST 2021 with 28 second wait_for_silence: MDT CGM configured; not waiting Listening for 28s: .No interfering pump comms detected from other rigs (this is a good thing!) Continuing oref0-pump-loop at Sat Jan 2 16:45:32 CST 2021 Preflight OK. Attempting to retrieve MDT CGM data from pump MDT CGM data retrieved Profile less than 60m old; Profile valid. Pump history updated through 2021-01-02T16:19:29-06:00 with 0 new records; meal.json Warning: could not parse monitor/carbhistory.json Not enough glucose data to calculate carb absorption; found: 22 refreshed: {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":6.89,"maxDeviation":0.13,"minDeviation":-0.05,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":4.91,"allDeviations":[7,0,0,0,0],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Listening for 1s: .No interfering pump comms detected from other rigs (this is a good thing!) 
Continuing oref0-pump-loop at Sat Jan 2 16:46:05 CST 2021 Checking that pump clock: "2021-01-02T16:45:23-06:00" is within 90s of current time: 2021-01-02T16:46:07-0600 Temp refreshed: monitor/temp_basal.json: {"duration":5,"temp":"absolute","rate":0} {"carbs":0,"nsCarbs":0,"bwCarbs":0,"journalCarbs":0,"mealCOB":0,"currentDeviation":6.89,"maxDeviation":0.13,"minDeviation":-0.05,"slopeFromMaxDeviation":0,"slopeFromMinDeviation":4.91,"allDeviations":[7,0,0,0,0],"lastCarbTime":0,"bwFound":false,"reason":"not enough glucose data to calculate carb absorption"} Warning: Autotune has not been run. All microboluses will be disabled until you manually run autotune or add it to run nightly in your loop. {"iob":-0.963,"activity":-0.0054,"basaliob":-0.963,"bolusiob":0,"netbasalinsulin":-1.25,"bolusinsulin":0,"time":"2021-01-02T22:45:25.000Z","iobWithZeroTemp":{"iob":-0.963,"activity":-0.0054,"basaliob":-0.963,"bolusiob":0,"netbasalinsulin":-1.25,"bolusinsulin":0,"time":"2021-01-02T22:45:25.000Z"},"lastBolusTime":0,"lastTemp":{"rate":0,"timestamp":"2021-01-02T16:19:29-06:00","started_at":"2021-01-02T22:19:29.000Z","date":1609625969000,"duration":26.93}} {"delta":22,"glucose":90,"noise":null,"short_avgdelta":22,"long_avgdelta":3.48,"date":1609627380000,"last_cal":0,"device":"openaps://kwon"} Autosens ratio: 1; Basal unchanged: 0.9; ISF unchanged: 30; CR: 5 currenttemp: { duration: 5, temp: 'absolute', rate: 0 } lastTempAge: 27 m tempModulus: 2 m SMB disabled (!microBolusAllowed) profile.sens: 30 sens: 30 CSF: 6 Limiting carb impact from 21.2 to 15 mg/dL/5m ( 30 g/h ) Carb Impact: 15 mg/dL per 5m; CI Duration: 0 hours; remaining CI ( 3 peak): 0 mg/dL per 5m UAM Impact: 21.2 mg/dL per 5m; UAM Duration: 1.1 hours minPredBG: 191 minIOBPredBG: 189 minZTGuardBG: 91 minUAMPredBG: 228 avgPredBG: 224 COB: 0 / 0 maxDelta 22 > 20% of BG 90 - disabling SMB BG projected to remain above 110 for 0 minutes BG projected to remain above 75 for 240 minutes naive_eventualBG: 119 bgUndershoot: -44 zeroTempDuration: 240 zeroTempEffect: 108 carbsReq: -25 2021-01-02T22:46:15.325Z Checking deliverAt: 2021-01-02T22:46:15.325Z is within 1m of current time: Sat Jan 2 16:46:15 CST 2021 and that smb-suggested.json is less than 1m old enact/smb-suggested.json: {"temp":"absolute","bg":90,"tick":"+22","eventualBG":246,"insulinReq":2.7,"reservoir":"40.15\n","deliverAt":"2021-01-02T22:46:15.325Z","sensitivityRatio":1,"COB":0,"IOB":-0.963,"BGI":0.81,"deviation":127,"ISF":30,"CR":5,"target_bg":110,"duration":30,"rate":2.7} "minPredBG 191, minGuardBG 110, IOBpredBG 201, UAMpredBG 246; maxDelta 22 > 20% of BG 90: SMB disabled; Eventual BG 246 >= 110, adj. req. rate: 6.3 to maxSafeBasal: 2.7, temp 0<2.7U/hr. " UAM: [90,110,129,146,162,176,188,199,208,216,221,226,228,229,230,231,232,233,234,234,235,236,237,237,238,239,239,240,240,241,241,242,242,243,243,243,244,244,244,244,245,245,245,245,245,245,246] IOB: [90,105,118,130,141,151,159,166,172,177,181,183,184,185,186,187,188,188,189,190,191,192,192,193,194,194,195,196,196,197,197,197,198,198,199,199,199,200,200,200,200,201] ZT: [90,91,92,93,94,95,96,97,98,99,101,102,104,105,107,108,110] Temp refreshed: monitor/temp_basal.json: {"duration":5,"temp":"absolute","rate":0} enact/smb-enacted.json: "Rate: 2.7 Duration: 30" Checking pump status (suspended/bolusing): {"status":"normal","bolusing":false,"suspended":false} Temp refreshed: monitor/temp_basal.json: {"duration":30,"temp":"absolute","rate":2.7} No bolus needed. 
Pump profile refreshed; Could not parse autotune_data Could not parse temptargets_data. No temptargets found. Settings refreshed; grep: enact/bolused.json: No such file or directory Refreshing pumphistory because: enacted, Settings less than 3 minutes old. Pump history update Screen shot of nightscout https://user-images.githubusercontent.com/70490258/103560761-f0a29f00-4e7d-11eb-8aa8-40adaa225cd8.png photo of pump screen when looping is not working https://user-images.githubusercontent.com/70490258/103560854-25165b00-4e7e-11eb-84ed-1a3466001c46.png I caught the same problem. I have: Medtronic 722 pump Enlite sensors (MDT sensors). Rig type: Edison/Explorer Board rig oref0 version: oref0 0.7.0 master nightscout version 14.2.3 OpenAPS does not see sugar below 3.6 mmol/l. And stops working until the sugar is 3.6 mmol / l or higher.
2025-04-01T04:35:00.058690
2017-02-20T05:35:38
208787918
{ "authors": [ "Bender1061", "Kdisimone", "moomoobloo" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9335", "repo": "openaps/oref0", "url": "https://github.com/openaps/oref0/pull/409" }
gharchive/pull-request
Spidev already in use reboot I'm not entirely sure that this change is entered correctly... trying to resolve the "spidev already in use" error by adding a crontab line. PLEASE take a look at this carefully, as I'm a complete noob and was basically working off existing syntax to help me add the line to the script. Are you sure you want to use 50 lines? I know you have it waiting for 10 mins, but it matters how long your pump loop is. With the Enlite people, it can take about 7 mins to complete a loop, so I would be a bit more comfortable with 30 lines myself. I'm totally open to suggestions... I'm definitely not an expert on what would work best here for the greatest good. I'm not sure what the context for this is, but why would we want to reboot the device just because spidev is already in use? Getting that error is normal and expected in many situations. For instance, any time a pump command is manually invoked, it's expected that the loop would produce the 'already in use' error, and I don't think it would be helpful to reboot in that situation. Also, I have similar concerns to @Bender1061 that triggering a reboot based on log output is not very robust. It seems to create the risk of infinite reboot loops, or at least a series of several consecutive reboots, if the error is not cleared from the logs in time. Is there a more specific condition that can be detected instead of the 'already in use' error? Is there a way to detect it independent of the log output? It is related to this issue: https://github.com/openaps/oref0/issues/406 I see now that @scottleibrand commented that this cron line won't work well as the solution... I'd missed that prior to submitting this. I will close this out... but the issue still needs resolution. If one of you can suggest a better way and make a PR, that would be much appreciated. I have been using a different solution currently, but do not know how to PR it in. I do know, however, that the BT tethering definitely hangs us up every day when my daughter goes from the car ride (BT use) to school (school wifi). It will not automatically hop off the BT and get on the school wifi. It gets stuck with this error and only a reboot will solve the issue.
2025-04-01T04:35:00.062460
2017-11-26T20:59:46
276856035
{ "authors": [ "Ebgineer", "scottleibrand", "tim2000s" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9336", "repo": "openaps/oref0", "url": "https://github.com/openaps/oref0/pull/817" }
gharchive/pull-request
Simple Remote Access Scripts Test regenerate-index.sh creates a new file index.html every time it is run. This contains simple HTML formatting with example data for now (the intent is to add additional parameter-value pairs for display after approval of the proof of concept). setup-http.sh is intended to be called from oref0-setup.sh and adds cron entries to start SimpleHTTPServer at reboot and run regenerate-index.sh periodically. Both files should be in the ~/myopenaps/enact directory. From a browser on any device, browse to http://[RIG_IP_ADDRESS]/index.html and the page will self-refresh periodically. Should we put the index.html into the www/ folder instead of enact/ along with the other web stuff we've done? I would do. The old solution of running from enact was simply a workaround. It's better by far to have any web services running from their own sandboxed space with a user that has no ability to read and write to the APS file system (only to the web server one). Especially if someone can interact with them.
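A minimal Python sketch of the same idea (the HTML contents, refresh interval, and port are illustrative stand-ins for what the shell scripts actually do):

```python
# Illustrative stand-in for regenerate-index.sh + the SimpleHTTPServer cron jobs.
import datetime
import http.server
import socketserver

def regenerate_index(path="index.html"):
    # Example data only; the real script would pull live loop status.
    html = (
        "<html><head><meta http-equiv='refresh' content='60'></head>"
        f"<body><h1>Rig status</h1><p>Updated: {datetime.datetime.now()}</p>"
        "</body></html>"
    )
    with open(path, "w") as f:
        f.write(html)

if __name__ == "__main__":
    regenerate_index()
    # Serve the current directory; port 8000 avoids needing root for port 80.
    with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
        httpd.serve_forever()
```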
2025-04-01T04:35:00.086064
2016-03-27T13:46:58
143810267
{ "authors": [ "muralisrini", "rilaaax" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9337", "repo": "openblockchain/obc-peer", "url": "https://github.com/openblockchain/obc-peer/issues/882" }
gharchive/issue
Unable to invoke/query another chaincode I tried to invoke/query another chaincode as shown in the examples github.com/openblockchain/obc-peer/openchain/example/chaincode/chaincode_example04 and github.com/openblockchain/obc-peer/openchain/example/chaincode/chaincode_example05. In both cases it doesn't work. I also tried to do it with my own chaincodes but I get the same problem. Does anyone know what's wrong with these features? Thanks. @rilaaax : Unit test executetransactions_test.go exercises chaincode_example04 and chaincode_example05. If they run successfully then it's probably user error of some sort. If they didn't, I'd check to make sure chaincode_example02 is not modified and chaincode_example02/ contains only chaincode_example02.go (if something is modified, the chaincode name will change; you'll have to use the deployed name). To test them cd $GOPATH/src/github.com/openblockchain/obc-peer/openchain/chaincode go test -run ChaincodeInvokeChaincode go test -run ChaincodeQueryChaincode Some things to check: is chaincode_example02 deployed and running? Are you running in --peer-chaincodedev mode? This is OK, but you have to uniformly use the name you provided to start chaincode_example02. I tried the first test and it seems that it worked fine (my console looked like the Matrix)... But the goal is just to do a simple test by myself to see how it works and use these features in other chaincodes. And when I try to do it I get an error. To be more precise I will explain my process, using chaincode_example05 which (if I understood the chaincode correctly) queries chaincode_example02 upon invoke and computes a + b to store the result in the sum state. (I have a peer running in --peer-chaincodedev mode with security enabled, running the CA server.) First I set up chaincode_example02: OPENCHAIN_CHAINCODE_ID_NAME=cc_ex02 OPENCHAIN_PEER_ADDRESS=<IP_ADDRESS>:30303 ./example02 then I deploy it: ./obc-peer chaincode deploy -u jim -n cc_ex02 -c '{"Function":"init", "Args": ["a","100", "b", "200"]}' and I get: [cc_ex02]Received INIT, initializing chaincode Aval = 100, Bval = 200 In the same way for chaincode_example05: OPENCHAIN_CHAINCODE_ID_NAME=cc_ex05 OPENCHAIN_PEER_ADDRESS=<IP_ADDRESS>:30303 ./example05 then I deploy it: ./obc-peer chaincode deploy -u jim -n cc_ex05 -c '{"Function":"init", "Args": ["sum", "0"]}' and I get: [cc_ex05]Received INIT, initializing chaincode sumVal = 0 So far all is OK!
Now when I try to invoke the invoke function in cc_ex05, which queries cc_ex02's a and b states and does the computation: ./obc-peer chaincode invoke -u jim -l golang -n cc_ex05 -c '{"Function": "invoke", "Args": ["cc_ex02", "sum"]}' I get an error: cc_ex02 console: 2016/03/27 22:22:36 [e577f315]Received message QUERY from shim 2016/03/27 22:22:36 [e577f315]Handling ChaincodeMessage of type: QUERY(state:ready) 2016/03/27 22:22:36 [e577f315]Sending GET_STATE 2016/03/27 22:22:36 [e577f315]Received message ERROR from shim 2016/03/27 22:22:36 [e577f315]Handling ChaincodeMessage of type: ERROR(state:ready) Error starting Simple chaincode: Error handling message: [e577f315-4428-49fa-ba12-f97cb2e91cbb]Chaincode handler FSM cannot handle message (ERROR) with payload size (82) while in state: ready cc_ex05 console: 2016/03/27 22:22:36 [e577f315]Received message TRANSACTION from shim 2016/03/27 22:22:36 [e577f315]Handling ChaincodeMessage of type: TRANSACTION(state:ready) 2016/03/27 22:22:36 [e577f315]Received TRANSACTION, invoking transaction on chaincode(Src:ready, Dst:transaction) 2016/03/27 22:22:36 [e577f315]Sending INVOKE_QUERY 2016/03/27 22:23:06 [e577f315]Received message ERROR from shim 2016/03/27 22:23:06 [e577f315]Handling ChaincodeMessage of type: ERROR(state:transaction) 2016/03/27 22:23:06 [e577f315]before send 2016/03/27 22:23:06 [e577f315]after send 2016/03/27 22:23:06 [e577f315]Error received from validator ERROR, communicated(state:ready) 2016/03/27 22:23:06 [e577f315]Received ERROR. Failed to query chaincode. Got error: Timeout expired while executing transaction Sorry for this long post, but I hope it will help you understand my concern. Thank you again for your help! See issue #905
2025-04-01T04:35:00.090084
2023-06-09T08:51:16
1749445575
{ "authors": [ "gkeishin", "kaospwnz", "rahulmah" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9338", "repo": "openbmc/openbmc-test-automation", "url": "https://github.com/openbmc/openbmc-test-automation/issues/2211" }
gharchive/issue
Error: WebDriver.init() when starting GUI tests Hello everyone. I used Ubuntu 22.04 (VMware 16) and Geckodriver 33.0, with the IPMI tool already installed, and I ran into trouble with the GUI tests. They aren't working; I get this error on every test: Verify Navigation To Sensors Page :: Verify navigation to Sensors ... | FAIL | Parent suite setup failed: Keyword 'Retry Browser Login Attempts' failed after retrying for 2 minutes 10 seconds. The last error was: TypeError: WebDriver.init() got an unexpected keyword argument 'firefox_profile' Can you help me, please? What do I need to do? Thanks a lot! @kaospwnz Can you check this setup and see if it matches or is similar for running Web UI test bucket runs: https://github.com/openbmc/openbmc-test-automation/blob/master/docs/gui_setup_reference.md Adding @rahulmah as well to give his opinion on the above. Just try providing OPENBMC_USERNAME and OPENBMC_PASSWORD like below while running your suite and see if that works out for you. python3 -m robot -v OPENBMC_HOST:x.x.x.x -v OPENBMC_USERNAME:${BMC_USERNAME} -v OPENBMC_PASSWORD:${BMC_PASSWORD} \ -v GUI_MODE:header gui/gui_test/<Path_of_your_suite> @kaospwnz all good? Closing the ticket. If it's not working please re-open. Thanks
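For context, this TypeError is characteristic of newer Selenium releases (4.10 and later removed the firefox_profile keyword from WebDriver.__init__), so pinning an older Selenium is one likely fix. A minimal sketch of the Selenium 4 style, assuming Firefox and geckodriver are installed (the URL is a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
# Selenium 4 attaches the profile to Options instead of accepting a
# firefox_profile= keyword on webdriver.Firefox().
options.profile = webdriver.FirefoxProfile()

driver = webdriver.Firefox(options=options)
driver.get("https://bmc.example/")  # placeholder BMC address
driver.quit()
```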
2025-04-01T04:35:00.117783
2022-11-12T21:22:02
1446609494
{ "authors": [ "chinedu117" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9339", "repo": "opencdms/opencdms-app-shell", "url": "https://github.com/opencdms/opencdms-app-shell/pull/9" }
gharchive/pull-request
Feature/configurable app This makes the vuejs app shell configurable. Closes #5 @isedwards I have added the configuration guideline.
2025-04-01T04:35:00.119169
2022-02-08T16:16:31
1127467698
{ "authors": [ "skorasaurus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9340", "repo": "opencleveland/refundcleveland", "url": "https://github.com/opencleveland/refundcleveland/issues/120" }
gharchive/issue
Update the Open Graph Image Update the open graph image, which is a hard-coded image containing data from 2021: https://github.com/opencleveland/refundcleveland/blob/f755f73f30f09ebc2280d5695825c9927f217ad4/static/images/meta-img.jpg We should really do this before the soft launch today.
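For reference, the Open Graph image is whatever the page advertises in its og:image meta tag, roughly like this (URL illustrative):

```html
<meta property="og:image" content="https://example.org/static/images/meta-img.jpg" />
```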
2025-04-01T04:35:00.120674
2023-12-14T08:07:38
2041162551
{ "authors": [ "criticic", "rakim-0" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9341", "repo": "opencodeiiita/CyberRepo-2077", "url": "https://github.com/opencodeiiita/CyberRepo-2077/pull/29" }
gharchive/pull-request
Complete Task 6 Issue: #6 Make sure to add all the required items as per each issue. Please include a screenshot with a timestamp. Either system clock or just run timedatectl on the terminal @rakim-0 I have updated the PR, kindly take a look
2025-04-01T04:35:00.128537
2016-07-05T23:27:51
163968525
{ "authors": [ "ThePurpleRabbits", "clarete", "piamancini" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9342", "repo": "opencollective/opencollective-api", "url": "https://github.com/opencollective/opencollective-api/issues/474" }
gharchive/issue
Couple of issues with Koajs OpenCollective (money not arriving at intended destination) Couple of issues: https://github.com/koajs/koa/issues/770 and https://github.com/koajs/koa/issues/767 Could you sort it out so the money gets to the right people as soon as possible? Thanks. Also, it's nice having our name and Clown-Fox's blog on the backers list, but our priority is that the coders get the money I paid to them. Hi @ThePurpleRabbits The way OpenCollective works is that TJ and the rest of the Koa team need to submit an expense/invoice in order to retrieve the money. We don't make automatic transfers. The reason they use OpenCollective is that they don't want to receive the money directly into their accounts and then manage it themselves. So the funds are stored in their public page https://opencollective.com/koajs and when they submit an expense or an invoice to be paid, the funds are sent via paypal. I'll look now into why you guys are not showing up on their readme. That's a good idea. When they need it they'll know why they need it. Important. I am hopeful about this issue now. Exactly! I'll close this issue for lack of activity, but please feel free to reopen if there's anything else that we missed!
2025-04-01T04:35:00.133064
2019-05-15T19:27:33
444606176
{ "authors": [ "Betree", "coveralls", "eloyekunle", "znarf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9343", "repo": "opencollective/opencollective-api", "url": "https://github.com/opencollective/opencollective-api/pull/1988" }
gharchive/pull-request
Add ability to create private updates Related: opencollective/opencollective-frontend#1796 Ref: opencollective/opencollective#1923 Coverage decreased (-0.02%) to 66.228% when pulling 461e7bd6fda2f50c4e289aed2cb2f4a6cbfe8a66 on feat/private-updates into b19039e357f9f156f0713871f8ef7cb85ea90e44 on master. @eloyekunle Why have you chosen the "stripContent" approach instead of just restricting access to the update? Is there a requirement to see the title? @znarf From the original specifications and discussions in the issue, there's a requirement to show the title while hiding the actual content. @eloyekunle Great, got it. What about implementing the permission to read the content in the GraphQL getter instead of manipulating the object with stripContent? @znarf I'll look into that. @znarf I've removed the stripContent approach in favor of modifying GraphQL getters. @Betree ok for you? Let's merge this ASAP so we can properly test https://github.com/opencollective/opencollective-frontend/pull/1796
2025-04-01T04:35:00.135395
2021-12-20T16:35:17
1084988484
{ "authors": [ "kewitz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9344", "repo": "opencollective/opencollective-api", "url": "https://github.com/opencollective/opencollective-api/pull/6983" }
gharchive/pull-request
Block CNY currency payout on business hosts A lagging problem related to Wise internal restrictions that are leaking into our platform. Despite AliPay appearing as the only viable Transfer method in the Bank Account payout form, this is not actually allowed by Wise. Expense error: TransferWise validation error: Selected recipient type CHINESE_ALIPAY is not supported for this payment References: https://wise.com/help/articles/2955298/guide-to-cny-transfers https://wise.com/us/blog/new-send-cny-instantly-to-alipay-users @znarf we don't. I feel like we should; even though we're just enforcing Wise's business rules, we abstract them enough that it looks like we're the ones responsible for blocking these. I think we need to tweak this UX a little bit to reduce the burden. Or accept all currencies and automatically set these expenses as "Custom Payout Method", as we do with hosts that are not connected to Wise.
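A hypothetical sketch of enforcing this restriction platform-side; the rule table and function names are illustrative, not Wise's or Open Collective's actual API:

```python
# Illustrative only: mirrors the Wise restriction quoted above.
UNSUPPORTED_BUSINESS_PAYOUTS = {("CNY", "CHINESE_ALIPAY")}

def is_payout_allowed(currency: str, recipient_type: str, is_business_profile: bool) -> bool:
    """Reject currency/recipient combinations Wise rejects for business profiles."""
    if is_business_profile and (currency, recipient_type) in UNSUPPORTED_BUSINESS_PAYOUTS:
        return False
    return True

assert not is_payout_allowed("CNY", "CHINESE_ALIPAY", True)
assert is_payout_allowed("CNY", "CHINESE_ALIPAY", False)
```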
2025-04-01T04:35:00.148555
2021-03-23T15:30:17
838849186
{ "authors": [ "SudharakaP", "znarf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9345", "repo": "opencollective/opencollective", "url": "https://github.com/opencollective/opencollective/issues/4111" }
gharchive/issue
Inconsistency in the Add Funds Modal Describe the bug I think there's an inconsistency in the Add Funds modal between the collective page and the transaction/expenses pages. For example, on the collective home page the Add Funds modal looks like this, Whereas on the transaction and expenses pages it looks like this, Notice that the host fee checkbox is missing on the transactions and expenses pages. Click on the action menu and Add Funds. Do the same on the transaction page for the collective. Expected behavior My understanding is that the modal should show the same design everywhere it is shown within the collective. Let me know otherwise. 🤔 @piamancini : Oops, I somehow missed this one. Will work on it tomorrow. 😄 Looks like host.plan.hostFees might be unavailable, preventing canAddHostFee from being computed properly. So it might just be a matter of making sure the field is requested in the GraphQL query.
2025-04-01T04:35:00.153313
2023-01-13T05:40:53
1531738211
{ "authors": [ "mimir-d" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9346", "repo": "opencomputeproject/ocp-diag-core", "url": "https://github.com/opencomputeproject/ocp-diag-core/issues/632" }
gharchive/issue
[output spec] validator object field "value" should allow a list The validator reference value field currently has a primitive type (bool, numeric, string), but for some of the validator types (set in, not in) it should also be a list of primitives. The change is trivial; in the Validator object attributes, change <tr> <td><em>value</em></td> <td>JSON: string, number, boolean</td> <td><strong>Yes</strong></td> <td>The value to use on the right side of the arithmetic comparison.</td> </tr> to <tr> <td><em>value</em></td> <td>JSON: string, number, boolean, array</td> <td><strong>Yes</strong></td> <td>The value to use on the right side of the validation. If an array, it must be homogeneous and contain supported primitive types.</td> </tr> @dhawkes thanks for signalling this
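An illustrative validator instance using the array form (the field values are invented for the example, and the set-membership type name is assumed here to be IN_SET):

```json
{
  "name": "speed-is-known",
  "type": "IN_SET",
  "value": ["slow", "medium", "fast"]
}
```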
2025-04-01T04:35:00.167702
2022-04-28T16:56:08
1219011583
{ "authors": [ "dplore", "liulk", "sezhang2", "smulky" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9347", "repo": "openconfig/featureprofiles", "url": "https://github.com/openconfig/featureprofiles/issues/173" }
gharchive/issue
gNMI-1.8: annotation_test.go : Configuration Metadata-only Retrieve and Replace Issue: Encoding JSON used instead of JSON_IETF in GetRequest Question: Can you please clarify whether the script has missed setting the "Encoding" field for GetRequest in this testcase? I believe Google would expect JSON_IETF to be used as the standard. Testcase: gNMI-1.8: Configuration Metadata-only Retrieve and Replace Code reference: https://github.com/openconfig/featureprofiles/blob/fe7a8c25dcaf8cc3e57e942b153398c34066c595/feature/system/gnmi/metadata/tests/annotation_test/annotation_test.go#L109 Analysis: It seems that the script is not setting any value (json/json_ietf/proto/…) for the 'Encoding' field of the gNMI GetRequest. getResponse, err := gnmiClient.Get(context.Background(), &gpb.GetRequest{ Path: []*gpb.Path{{ Elem: []*gpb.PathElem{}, }}, Type: gpb.GetRequest_CONFIG, }) So the Encoding value is getting set internally to "0" (which maps to the JSON type) and not JSON_IETF (Encoding: "4"). If the script is modified to set the "Encoding" field to 4 (JSON_IETF), the test passes. getResponse, err := gnmiClient.Get(context.Background(), &gpb.GetRequest{ Path: []*gpb.Path{{ Elem: []*gpb.PathElem{}, }}, Type: gpb.GetRequest_CONFIG, Encoding: 4, <<<<< }) The recommendation is that the featureprofiles tests should request JSON IETF encoding instead of leaving it to the default. @liulk any thoughts here? Seems like we should explicitly use JSON IETF for encoding? I added Encoding: gpb.Encoding_JSON_IETF under GetRequest for a couple of tests. We can set the default value to JSON IETF in the gnmi proto file to avoid specifying it explicitly for each GetRequest. Yes, it seems that we should specify the JSON_IETF encoding in GetRequest. FYI - I submitted the fix at https://github.com/openconfig/featureprofiles/blob/main/feature/system/gnmi/metadata/tests/annotation_test/annotation_test.go#L114 Type: gpb.GetRequest_CONFIG, + Encoding: gpb.Encoding_JSON_IETF, Thanks, Sean Close the issue as the fix is committed.
2025-04-01T04:35:00.180859
2023-08-15T21:04:34
1852126116
{ "authors": [ "OpenConfigBot", "alshabib", "cfernanz", "trathod1" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9348", "repo": "openconfig/featureprofiles", "url": "https://github.com/openconfig/featureprofiles/pull/2025" }
gharchive/pull-request
DP-1.11: Adding test support for Nokia The following changes are made in this PR: 1. Updating deviations in the metadata file 2. Updating the test script to support Nokia "This code is a Contribution to the OpenConfig Feature Profiles project ("Work") made under the Google Software Grant and Corporate Contributor License Agreement ("CLA") and governed by the Apache License 2.0. No other rights or licenses in or to any of Nokia's intellectual property are granted for any other purpose. This code is provided on an "as is" basis without any warranties of any kind." Pull Request Functional Test Report for #2025 / 39ad90677f1466657ddf18bc1ccf654285366df9 Virtual devices, each running "DP-1.11: Bursty traffic test": Arista cEOS, Cisco 8000E, Cisco XRd, Juniper cPTX, Nokia SR Linux, Openconfig Lemming Help /gcbrun /fptest all
2025-04-01T04:35:00.186873
2024-08-13T01:44:48
2462198054
{ "authors": [ "OpenConfigBot", "coveralls", "dplore" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9349", "repo": "openconfig/featureprofiles", "url": "https://github.com/openconfig/featureprofiles/pull/3375" }
gharchive/pull-request
TE-18.2 gRIBI and Scheduler mpls-in-udp scale test Note, this test depends on https://github.com/openconfig/featureprofiles/pull/3375 Pull Request Functional Test Report for #3375 / 6098636ee83a818884655972c0a17292d5f4c9f6 No tests identified for validation. Help Pull Request Test Coverage Report for Build<PHONE_NUMBER>8 Details 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 55.252% Totals Change from base Build<PHONE_NUMBER>4: 0.0% Covered Lines: 1983 Relevant Lines: 3589 💛 - Coveralls
2025-04-01T04:35:00.193462
2016-04-06T22:28:26
146452986
{ "authors": [ "liangchenye", "mrunalp" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9350", "repo": "opencontainers/ocitools", "url": "https://github.com/opencontainers/ocitools/pull/29" }
gharchive/pull-request
Add support for sysctls to generate Signed-off-by: Mrunal Patel <EMAIL_ADDRESS> https://groups.google.com/forum/#!topic/golang-announce/JMs4_CGbXWk Russ Cox, Mar 1, Re: [golang-dev] Re: A friendly heads-up: Deletion of Go1.4-dependent code in x/tools repo On Tue, Mar 1, 2016 at 5:53 AM, Konstantin Shaposhnikov <EMAIL_ADDRESS> wrote: Will cmd/vet be deleted as well? I've seen quite a lot of projects that use Go 1.4 and "go get" vet from the x/tools repository during the build. Since it depends on those, yes, it will. (See https://golang.org/doc/devel/release.html#policy for details.) Because we knew this deletion would affect use cases such as the one you mentioned, we waited an extra six months (one release cycle) to give users additional time to update, and we pre-announced the change in October (see Robert's original email for the link). That's a significant grace period with significant advance notice. We can't keep the code running indefinitely. People who need to keep running Go 1.4 can in the worst case make their own copy of the code, but we'd also love to hear from them about why they're still on Go 1.4 so that we can address those reasons. Thanks. Russ The Go community suggests updating the usage of the vet package. It seems the x/tools repo change landed just between PR #28 and this one. Fixed in https://github.com/opencontainers/ocitools/pull/28. PS: runc is also affected by this. @liangchenye Thanks for the info! I have merged your PR and rebuilt this one. I have also opened a PR to fix this in runc. LGTM
2025-04-01T04:35:00.204788
2019-04-22T00:33:09
435562916
{ "authors": [ "afeld", "joshuamckenty" ], "license": "cc0-1.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9351", "repo": "opencontrol/opencontrol-website", "url": "https://github.com/opencontrol/opencontrol-website/pull/37" }
gharchive/pull-request
convert site to Jekyll

This pull request is an alternative to #34 - kudos to @shawndwells for the idea. Preview.

I tried to keep the changes as minimal as possible by:
- Using uswds-jekyll as a remote theme, rather than including all the files directly
- Avoiding content changes

The site could use a lot of work content-wise, but I wanted to save all that thinking for separate pull request(s). I created the Jekyll site in a GitHub Pages-compatible way, though I think we should still consider whether the benefits of something like Netlify would be worth it. Since GitHub Pages has a strict subset of Netlify features, this still leaves that door open for later if we want. Also, since the site doesn't seem to be continuously deploying right now (there are changes on master that aren't live at https://open-control.org), this is safe to merge without DNS changes or anything.

Lgtm.
2025-04-01T04:35:00.377149
2023-02-24T06:56:32
1598015575
{ "authors": [ "fengyuentau" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9352", "repo": "opencv/opencv_zoo", "url": "https://github.com/opencv/opencv_zoo/pull/138" }
gharchive/pull-request
add Yunet C++ demo The first C++ demo for the models in the zoo. Related issue: https://github.com/opencv/opencv_zoo/issues/135. /cc @ShiqiYu
2025-04-01T04:35:00.397828
2020-04-14T01:34:29
599234588
{ "authors": [ "DeZhao-Zhang", "jeremyh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9353", "repo": "opendatacube/datacube-explorer", "url": "https://github.com/opendatacube/datacube-explorer/issues/120" }
gharchive/issue
Relation "cubedash.product" does not exist While i visit the web, the server give the wrong of sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "cubedash.product" does not exist LINE 2: FROM cubedash.product ^ [SQL: SELECT cubedash.product.dataset_count, cubedash.product.time_earliest, cubedash.product.time_latest, now() - cubedash.product.last_refresh AS last_refresh_age, cubedash.product.id AS id_, cubedash.product. the datacube is ok, while i check datacube-explore is also run ok I do not see any config about the database in datacube-explorer, Did I miss something Hello, thanks for the report. Did you run cubedash-gen? It usually creates those tables See here: https://github.com/opendatacube/datacube-explorer#summary-generation It works, thanks very much. It was beacase I do not have postgis extension, but while I run nohup cubedash-gen --init --all &>> summary-gen.log &, there was no wrong message. while i use cubedash-gen --init, it gives me wrong message, may be it can be added in readme for user to test environment. Thanks!
2025-04-01T04:35:00.417832
2024-03-28T12:11:53
2213075150
{ "authors": [ "christianvogt", "pnaik1", "ppadti" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9354", "repo": "opendatahub-io/odh-dashboard", "url": "https://github.com/opendatahub-io/odh-dashboard/pull/2643" }
gharchive/pull-request
Add chai-subset for simpler object assertions Closes: RHOAIENG-4441 Description This PR aims to add chai-subset to perform a single subset comparison of the object. How Has This Been Tested? npm run test Test Impact Request review criteria: Self checklist (all need to be checked): [x] The developer has manually tested the changes and verified that the changes work [x] Commits have been squashed into descriptive, self-contained units of work (e.g. 'WIP' and 'Implements feedback' style messages have been removed) [x] Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious). [ ] The developer has added tests or explained why testing cannot be added (unit or cypress tests for related changes) If you have UI changes: [ ] Included any necessary screenshots or gifs if it was a UI change. [ ] Included tags to the UX team if it was a UI/UX change (find relevant UX in the SMEs section). After the PR is posted & before it merges: [ ] The developer has tested their solution on a cluster by using the image produced by the PR to main /lgtm /approve
2025-04-01T04:35:00.465136
2019-06-13T20:39:12
455942282
{ "authors": [ "clarlars", "leiyiz", "wbrunette" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9355", "repo": "opendatakit/services", "url": "https://github.com/opendatakit/services/pull/134" }
gharchive/pull-request
Development

Adds the ability to sync reduced-size image attachments with the server, and to re-download all reduced-size images if ever wanted.

Depends on:
https://github.com/opendatakit/sync-endpoint/pull/20
https://github.com/opendatakit/androidlibrary/pull/127

runtests
runtests
runtests
runtests
2025-04-01T04:35:00.468241
2024-05-06T18:06:13
2281443011
{ "authors": [ "adamkorynta", "dezidizon" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9356", "repo": "opendcs/rest_api", "url": "https://github.com/opendcs/rest_api/issues/160" }
gharchive/issue
Prototype Implementation - Stand up Embedded Jetty operating environment

Includes the OpenDCS Web Client and the OpenDCS API. Investigate how CWMS AAA fits into this operating environment.

The prototype works with embedded Jetty and within Tomcat.

Closed with PR: https://github.com/opendcs/rest_api/pull/186/
2025-04-01T04:35:00.484786
2020-10-14T22:33:00
721836846
{ "authors": [ "vamshin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9357", "repo": "opendistro-for-elasticsearch/k-NN", "url": "https://github.com/opendistro-for-elasticsearch/k-NN/pull/250" }
gharchive/pull-request
PostingsFormat Fix for odfe 1.9 postings list fix

Issue #, if available: https://github.com/opendistro-for-elasticsearch/k-NN/issues/227
Description of changes: https://github.com/opendistro-for-elasticsearch/k-NN/pull/236

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Manually verified the suggest field to check the PostingsFormat fix:

```
curl -X PUT "localhost:9200/music?pretty" -H 'Content-Type: application/json' -d'
{
  "settings": { "index": { "knn": true } },
  "mappings": { "properties": { "suggest": { "type": "completion" } } }
}
'

curl -X POST "localhost:9200/music/_doc/1?refresh=true" -H 'Content-Type: application/json' -d'
{
  "suggest": { "input": [ "Nevermind", "Nirvana" ], "weight": 34 }
}
'

curl -X POST "localhost:9200/music/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "suggest": { "song-suggest": { "prefix": "nir", "completion": { "field": "suggest" } } }
}
'
```

Output:

```json
{
  "took" : 92,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : { "total" : { "value" : 0, "relation" : "eq" }, "max_score" : null, "hits" : [ ] },
  "suggest" : {
    "song-suggest" : [
      {
        "text" : "nir",
        "offset" : 0,
        "length" : 3,
        "options" : [
          {
            "text" : "Nirvana",
            "_index" : "music",
            "_type" : "_doc",
            "_id" : "1",
            "_score" : 34.0,
            "_source" : { "suggest" : { "input" : [ "Nevermind", "Nirvana" ], "weight" : 34 } }
          }
        ]
      }
    ]
  }
}
```
2025-04-01T04:35:00.500326
2015-11-03T10:32:34
114785595
{ "authors": [ "eaglerayp", "jiakai1000" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9358", "repo": "opendp/dpdk-nginx", "url": "https://github.com/opendp/dpdk-nginx/issues/2" }
gharchive/issue
nginx cannot connect

I have built and run nginx with opendp successfully. However, once a client connects to http://<IP_ADDRESS>:80/ (no matter whether by browser or ab), the nginx program just shuts down without any sign (opendp keeps running). Below is my nginx start log:

```
2015/11/03 18:18:58 [alert] 8889#0: listen() to <IP_ADDRESS>:80, backlog 511 failed, ignored (57: Invalid slot)
2015/11/03 18:18:58 [notice] 8889#0: using the "epoll" event method
2015/11/03 18:18:58 [notice] 8889#0: nginx/1.9.5
2015/11/03 18:18:58 [notice] 8889#0: built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
2015/11/03 18:18:58 [notice] 8889#0: OS: Linux 3.13.0-37-generic
2015/11/03 18:18:58 [notice] 8889#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
$ // you can see the program shut down without ctrl+c
```

Is this a problem with the listen backlog?

I haven't met this problem before; you can run nginx with gdb to see what happened if nginx crashed.

After installing a fresh OS (Ubuntu 14.04), I can build successfully. Maybe there was some driver/kernel problem, since I have tested some libraries. Let's close the issue.
2025-04-01T04:35:00.515583
2021-08-04T04:18:07
959868205
{ "authors": [ "codecov-commenter", "mynktl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9359", "repo": "openebs/dynamic-nfs-provisioner", "url": "https://github.com/openebs/dynamic-nfs-provisioner/pull/81" }
gharchive/pull-request
feat(volume-event): marking nfs-resources to send volume provisioning events

What this PR does?: This PR adds the following changes:

- Mark NFS resources to send volume events on NFS volume provisioning. To mark the NFS resources, the provisioner adds:
  - the annotation events.openebs.io/required:"true" on the NFS PV
  - the finalizer nfs.events.openebs.io/finalizer on the backend PVC, backend PV, and NFS PV resources
- Added an option to configure the timeout for the backend PVC bound phase by setting the env variable OPENEBS_IO_NFS_SERVER_BACKEND_PVC_TIMEOUT in the nfs provisioner deployment. If not set, a default of 60 seconds is used for the backend PVC bound timeout.

Does this PR require any upgrade changes?: No

If the changes in this PR are manually verified, list down the scenarios covered::

Checklist:
[ ] Fixes #
[x] PR Title follows the convention of <type>(<scope>): <subject>
[ ] Has the change log section been updated?
[x] Commit has unit tests
[x] Commit has integration tests
[ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
[ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

Codecov Report

Merging #81 (bb2ae19) into develop (d65ac1f) will decrease coverage by 0.06%. The diff coverage is 37.50%.

```
@@             Coverage Diff             @@
##           develop      #81      +/-   ##
===========================================
- Coverage    45.19%   45.13%   -0.07%
===========================================
  Files           27       27
  Lines         2239     2300      +61
===========================================
+ Hits          1012     1038      +26
- Misses        1157     1191      +34
- Partials        70       71       +1
```

Impacted Files (Coverage Δ):
- provisioner/config.go 0.00% <ø> (ø)
- provisioner/env.go 22.22% <0.00%> (-6.35%) :arrow_down:
- provisioner/provisioner.go 0.00% <0.00%> (ø)
- provisioner/helper_kernel_nfs_server.go 73.53% <54.54%> (-2.58%) :arrow_down:

Continue to review full report at Codecov. Legend - Click here to learn more. Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 3de660a...bb2ae19. Read the comment docs.

Closing this PR since the changes are available in a different PR: https://github.com/openebs/dynamic-nfs-provisioner/pull/93
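As a rough illustration of the timeout option above, the env variable could be set on a running install with `kubectl set env`; the deployment name and namespace here are assumptions, adjust them to your cluster:

```bash
# Raise the backend PVC bound timeout from the 60s default to 120 seconds.
kubectl set env deployment/openebs-nfs-provisioner \
  -n openebs \
  OPENEBS_IO_NFS_SERVER_BACKEND_PVC_TIMEOUT=120
```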
2025-04-01T04:35:00.529386
2022-10-17T19:02:04
1412066056
{ "authors": [ "niladrih" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9360", "repo": "openebs/mayastor-extensions", "url": "https://github.com/openebs/mayastor-extensions/pull/56" }
gharchive/pull-request
Cherry-pick #55 into release/2.0 branch Signed-off-by: Niladri Halder<EMAIL_ADDRESS> bors try bors merge bors merge
2025-04-01T04:35:00.531404
2018-03-22T11:43:13
307608866
{ "authors": [ "a4abhishek", "pawanpraka1" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9361", "repo": "openebs/node-disk-manager", "url": "https://github.com/openebs/node-disk-manager/pull/21" }
gharchive/pull-request
Changed ndmctl to ndm Changed Makefile and hack/build.sh to create binary named "ndm" instead of "ndmctl" Changed all references of "ndmctl" to "ndm" in code. Changed Makefile to build docker image Signed-off-by: Abhishek Kashyap<EMAIL_ADDRESS> looks good to me.
2025-04-01T04:35:00.533325
2017-10-01T07:53:11
261899206
{ "authors": [ "kmova", "ksatchit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9362", "repo": "openebs/openebs", "url": "https://github.com/openebs/openebs/pull/444" }
gharchive/pull-request
Setup Travis CI for documentation changes

Fixes #443

- .travis.yaml: adds sphinx build steps that convert warnings into errors (inspiration: https://coderwall.com/p/wws2uq/have-travis-ci-test-your-sphinx-docs)
- requirements.txt: sphinx-related packages

:+1:
2025-04-01T04:35:00.536348
2017-10-15T11:38:24
265565520
{ "authors": [ "abhisheklakra007", "nitisuryawanshi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9363", "repo": "openebs/openebs", "url": "https://github.com/openebs/openebs/pull/636" }
gharchive/pull-request
Update resources for CI test suite

Close: #621

What this PR does / why we need it: Gives 2 virtual CPUs & 2048 MB RAM to VMs

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #621

Special notes for your reviewer: Ref: https://www.vagrantup.com/docs/virtualbox/configuration.html#vboxmanage-customizations

@abhisheklakra007 thank you for your PR contribution. We would like to mention you in our tweets, thanking you for your contribution. Can you please mail me your twitter handle here<EMAIL_ADDRESS>?
2025-04-01T04:35:00.539701
2020-05-21T06:16:00
622275188
{ "authors": [ "kmova", "mittachaitu", "shubham14bajpai" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9364", "repo": "openebs/upgrade", "url": "https://github.com/openebs/upgrade/pull/7" }
gharchive/pull-request
chore(docs): add example yamls and docs for cstor upgrades/migration

Signed-off-by: shubham<EMAIL_ADDRESS>

This PR adds sample upgrade job yamls for cstor CSPC and Volume upgrades. This PR also adds documentation for the migration of cstor pools and volumes.

@shubham14bajpai - If there are no further updates required, please remove the hold-merge label.

Can we add the below note for rancher-based clusters: Note: If the Kubernetes cluster is on rancher and iscsi is running inside the kubelet container then it is mandatory to install the iscsi service on the nodes and add extra binds to the kubelet container as mentioned here.

> Can we add the below note for rancher-based clusters: Note: If the Kubernetes cluster is on rancher and iscsi is running inside the kubelet container then it is mandatory to install the iscsi service on the nodes and add extra binds to the kubelet container as mentioned here.

Added
2025-04-01T04:35:00.634514
2022-09-12T13:52:59
1369951233
{ "authors": [ "kdmccormick", "ziafazal" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9365", "repo": "openedx/xblock-sdk", "url": "https://github.com/openedx/xblock-sdk/pull/241" }
gharchive/pull-request
build: bump version to 0.5.2 Release notes: fix: not found error while loading file from media storage (by @ziafazal from Edly) Python dependency upgrades (by 2U) Full changeset: https://github.com/openedx/xblock-sdk/compare/v0.5.1...f1f5e172d13d8783ba7ce60472d8d23d1aaabba6 @ziafazal Could you review this? @kdmccormick we need to update version here too.
2025-04-01T04:35:00.659342
2017-10-26T20:06:39
268894107
{ "authors": [ "ahalterman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9366", "repo": "openeventdata/mordecai", "url": "https://github.com/openeventdata/mordecai/issues/29" }
gharchive/issue
Test place name resolution accuracy

Before merging the v2 branch, we need to assess the accuracy of the geonames lookup call. This will inform whether we need a feature type detection step.

I use some heuristics to get the top 4 potential results, then have a model pick which of those 4 it is. After annotating a couple thousand, this is what I get on the test set:

```
             precision    recall  f1-score   support

          0       0.89      0.91      0.90       251
          1       0.77      0.66      0.71        89
          2       0.81      0.83      0.82        30
          3       0.00      0.00      0.00         2

avg / total       0.85      0.84      0.84       372
```
2025-04-01T04:35:00.675922
2022-12-08T16:19:58
1484972265
{ "authors": [ "adriantam", "jon-whit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9367", "repo": "openfga/openfga.dev", "url": "https://github.com/openfga/openfga.dev/pull/304" }
gharchive/pull-request
chore: clean up migrating schema guide

Description

Clean up the migration guide. More specifically:
- add a warning that users must delete and re-add all public access tuples
- update the summary list so that it matches the section subheadings
- move the migration guide to the correct folder

References

Review Checklist
[x] I have clicked on "allow edits by maintainers".
[ ] I have added documentation for new/changed functionality in this PR or in a PR to openfga.dev [Provide a link to any relevant PRs in the references section above]
[x] The correct base branch is being used, if not main
[ ] I have added tests to validate that the change in functionality is working as expected

I noticed the "Migrations" section in the sidebar doesn't have a box around the arrow like the other sections.

> I noticed the "Migrations" section in the sidebar doesn't have a box around the arrow like the other sections.

We will need to add an overview page similar to building blocks etc. We will add it when we want to enable this page. Otherwise, it is an overview page of 1 page.
2025-04-01T04:35:00.679628
2022-06-16T19:29:15
1273997251
{ "authors": [ "CLAassistant", "elbuo8" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9368", "repo": "openfga/openfga", "url": "https://github.com/openfga/openfga/pull/40" }
gharchive/pull-request
docs(readme): use pwd when downloading release from github

Description

When downloading the binary from GitHub, assume it's in the current working directory, since the user is not creating a ./bin directory. When downloading it (or compiling from source), the binary should be git-ignored in the repo.

References

Review Checklist
[ ] I have added documentation for new/changed functionality in this PR or in openfga.dev
[x] The correct base branch is being used, if not main

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
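For illustration, a minimal sketch of the download-and-run flow the README change describes; the release version and asset name below are hypothetical, check the releases page for the real ones:

```bash
# Download and unpack a release tarball into the current directory.
curl -sSL "https://github.com/openfga/openfga/releases/download/v0.1.0/openfga_0.1.0_linux_amd64.tar.gz" | tar xz

# The binary now sits in $PWD rather than ./bin, so run it from there:
./openfga run

# Keep the binary out of version control, as the PR suggests.
echo "openfga" >> .gitignore
```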
2025-04-01T04:35:00.698945
2019-06-09T07:34:57
453872341
{ "authors": [ "RajaVamsi11" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9369", "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/pull/2726" }
gharchive/pull-request
Fixes #2723: Documented the files in openfood/repositories package Description Documented the files in openfood/repositories package Related issues and discussion Fixes #2723 @teolemon @deniger Can you please review this PR.
2025-04-01T04:35:00.701139
2021-02-28T19:59:56
818295219
{ "authors": [ "naivekook" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9370", "repo": "openfoodfacts/openfoodfacts-androidapp", "url": "https://github.com/openfoodfacts/openfoodfacts-androidapp/pull/3865" }
gharchive/pull-request
Remove CameraSelectorDialogFragment.kt

Description

We have a bunch of deprecation issues related to CameraSelectorDialogFragment.kt, but this class is not used anywhere in the code anymore, so I just deleted it.

Related issues: Part of https://github.com/openfoodfacts/openfoodfacts-androidapp/wiki/Build-results
Related PRs: none
Screenshots: none

ptal @teolemon @VaiTon
2025-04-01T04:35:00.717219
2022-01-23T19:47:57
1111967848
{ "authors": [ "M123-dev", "VaiTon", "monsieurtanuki", "sgayangi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9371", "repo": "openfoodfacts/smooth-app", "url": "https://github.com/openfoodfacts/smooth-app/issues/1005" }
gharchive/issue
Update generic widgets to follow mocks more closely

What

We got a list of "final?" mock components on Figma; it would be good to update the generic_lib (former smooth_ui_library) to represent these.

Screenshot

Is this issue free to work on? I was thinking of adding all the colors as static variables to a file so they can be used from anywhere in the project.

Heyyy @sgayangi, sure, globalizing the colors would be great; it could be similar to #1179, but just a static const for each color could lead to problems regarding dark mode. @monsieurtanuki used different alpha values for dark mode, if I remember right. Any suggestions?

Actually @M123-dev, the more I think about it, the more I think dark mode does not make that much sense for this specific app, where we display label icons with already set-up colors (e.g. the light green "European organic AB") and product pictures with their own luminosity. That said, in some cases I might have used alpha in dark mode, but it should have been rather for backgrounds. For foreground colors that's not appropriate: imagine in day mode red on a white screen (readable) and in dark mode red with alpha on a black screen (hard to read).

How shall I proceed? I have added the colors in the Figma file similar to https://github.com/openfoodfacts/smooth-app/blob/develop/packages/smooth_app/lib/widgets/attribute_helper.dart. Shall I start replacing the colors used throughout the project with those?

Yes, sounds good @sgayangi

Are we still going to do this? @teolemon
2025-04-01T04:35:00.733129
2023-11-05T15:21:13
1977834815
{ "authors": [ "codecov-commenter", "monsieurtanuki" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9372", "repo": "openfoodfacts/smooth-app", "url": "https://github.com/openfoodfacts/smooth-app/pull/4768" }
gharchive/pull-request
fix: 1815 - around context.mounted

What

The PR is about two related topics:
- Removing all the ignore: use_build_context_synchronously comments
- Replacing all the State<StatefulWidget> widget parameters with BuildContext context, now that it's possible in Flutter

Fixes bug(s): Fixes #1815

Codecov Report

Merging #4768 (ba07b7a) into develop (21469af) will decrease coverage by 0.04%. The diff coverage is 0.00%.

```
@@             Coverage Diff             @@
##           develop    #4768      +/-   ##
==========================================
- Coverage     9.89%    9.86%   -0.04%
==========================================
  Files          312      312
  Lines        15809    15863     +54
==========================================
  Hits          1565     1565
- Misses       14244    14298     +54
```

Files (Coverage Δ):
- .../cards/data_cards/product_image_carousel_item.dart 0.00% <ø> (ø)
- ...e_panel/knowledge_panels/knowledge_panel_page.dart 0.00% <ø> (ø)
- ...ckages/smooth_app/lib/pages/offline_data_page.dart 1.40% <ø> (ø)
- ...th_app/lib/pages/product/add_new_product_page.dart 0.00% <ø> (ø)
- ...smooth_app/lib/pages/product/new_product_page.dart 0.00% <ø> (ø)
- .../lib/pages/product/product_image_gallery_view.dart 0.00% <ø> (ø)
- .../lib/pages/product/product_image_local_button.dart 0.00% <ø> (ø)
- .../smooth_app/lib/data_models/onboarding_loader.dart 0.00% <0.00%> (ø)
- ...es/smooth_app/lib/pages/hunger_games/congrats.dart 1.42% <0.00%> (-0.03%) :arrow_down:
- ...ooth_app/lib/pages/hunger_games/question_page.dart 2.27% <0.00%> (-0.02%) :arrow_down:
- ... and 34 more

:mega: Codecov offers a browser extension for seamless coverage viewing on GitHub. Try it in Chrome or Firefox today!

Thank you @g123k for your review!
2025-04-01T04:35:00.743583
2024-06-15T20:31:24
2355213332
{ "authors": [ "codecov-commenter", "g123k" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9373", "repo": "openfoodfacts/smooth-app", "url": "https://github.com/openfoodfacts/smooth-app/pull/5384" }
gharchive/pull-request
feat: Extract ingredients/packaging: loading / loaded / extracting states

Hi everyone,

Extract ingredients and extract packaging share the same screen. However, there are two issues:
- We don't clearly understand what's happening with the progress bar
- There is an empty container while we extract the data

I've made things clearer, as you can see in this video: https://github.com/openfoodfacts/smooth-app/assets/246838/215d07b6-3269-4a89-a763-384a9b8d49d9

Codecov Report

Attention: Patch coverage is 0% with 78 lines in your changes missing coverage. Please review. Project coverage is 7.32%. Comparing base (4d9c7fc) to head (d854301). Report is 196 commits behind head on develop.

Files with missing patch coverage:
- ...es/smooth_app/lib/pages/product/edit_ocr_page.dart 0.00% (69 missing) :warning:
- ..._app/lib/pages/product/ocr_ingredients_helper.dart 0.00% (5 missing) :warning:
- ...th_app/lib/pages/product/ocr_packaging_helper.dart 0.00% (4 missing) :warning:

Additional details and impacted files

```
@@             Coverage Diff             @@
##           develop    #5384      +/-   ##
==========================================
- Coverage     9.54%    7.32%   -2.23%
==========================================
  Files          325      386      +61
  Lines        16411    19847    +3436
==========================================
- Hits          1567     1454     -113
- Misses       14844    18393    +3549
```

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
2025-04-01T04:35:00.808446
2024-08-05T12:25:36
2448427907
{ "authors": [ "codecov-commenter", "dfguerrerom" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9374", "repo": "openforis/sepal_ui", "url": "https://github.com/openforis/sepal_ui/pull/926" }
gharchive/pull-request
Main pre release merge

This PR will merge sepal_pre_release into main.

:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 96.89%. Comparing base (27c8069) to head (832a930). Report is 61 commits behind head on main.

:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #926      +/-   ##
==========================================
+ Coverage   96.61%   96.89%   +0.27%
==========================================
  Files          39       39
  Lines        3903     3960     +57
==========================================
+ Hits         3771     3837     +66
+ Misses        132      123      -9
```

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
2025-04-01T04:35:00.813097
2018-10-12T12:46:37
369536254
{ "authors": [ "jmaupetit", "sampaccoud" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9375", "repo": "openfun/arnold", "url": "https://github.com/openfun/arnold/issues/140" }
gharchive/issue
Add support for Ansible vault password GPG encryption

Purpose

Managing Ansible vault passwords for each customer/environment can be a tough task when these passwords are shared among a team of developers. Dealing with those passwords is not easy, because we want the whole process to be painless and secure: two words that can appear contradictory and thus force us to make compromises.

One reasonable choice is to store the Ansible vault password for a customer in a particular environment in an encrypted file that is committed and versioned in a private git repository along with the customer's configurations.

Proposal

Using GnuPG is a classical approach to encrypting those passwords, as we can use the GPG public keys of the whole developer team to make sure only those identities will be able to decrypt them. Once decrypted, the Ansible vault password can be injected into Arnold's container (via an environment variable) and exposed through a small bash script pointed to by ANSIBLE_VAULT_PASSWORD_FILE (see Ansible's documentation). By doing so, the developer doesn't have to know what the password is (or even type it), as everything relies on their local GPG configuration.

Constraint

This feature should be optional, as GnuPG would be a strong requirement that may not fit with companies' security policies.

@jmaupetit I propose to close this issue, as we are now comfortable that this is best located in Matsuo.

Maybe we can leave this issue open and change the label as a reminder to integrate this in the documentation?
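A minimal sketch of the small bash script the proposal mentions; the encrypted file location and variable name are assumptions, not Arnold's actual layout:

```bash
#!/usr/bin/env bash
# Decrypt the vault password and print it on stdout, which is what Ansible
# expects from the script ANSIBLE_VAULT_PASSWORD_FILE points to.
set -euo pipefail

# Hypothetical location of the GPG-encrypted vault password committed
# alongside the customer's configuration.
ENCRYPTED_FILE="${VAULT_PASSWORD_GPG_FILE:-group_vars/customer/vault-password.gpg}"

gpg --quiet --batch --decrypt "${ENCRYPTED_FILE}"
```

Since gpg resolves the private key from the developer's local keyring, nobody has to type, or even know, the cleartext password.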
2025-04-01T04:35:00.822195
2022-04-11T15:32:38
1200147690
{ "authors": [ "gareth-j" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9376", "repo": "openghg/openghg", "url": "https://github.com/openghg/openghg/issues/307" }
gharchive/issue
Synonyms - how to handle At the moment we use the synonym function with get_obs_surface https://github.com/openghg/openghg/blob/db131709cea7f07ae29e99b69741d13120b784b2/openghg/retrieve/_access.py#L393 but if this fails to find a synonym it just falls over. This is another part of keeping the metadata updated issue, if someone passes in a species we don't recognise should we just continue when failing to find a synonym? I feel that'd be easier. See line 126 in _search.py for the species translator.
2025-04-01T04:35:00.823336
2022-12-09T12:21:24
1486664534
{ "authors": [ "SutarPrasad", "gareth-j", "rt17603" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9377", "repo": "openghg/openghg", "url": "https://github.com/openghg/openghg/issues/497" }
gharchive/issue
Remove hardcoded URLs from populate functions

At the moment there are quite a few hard-coded URLs for the example data in the code. These should be removed and read from somewhere else.

Any suggestions on ways to read them? One way could be the use of YAML files.

I'd like some more details on specific cases so we can resolve this. If we don't have any to hand, then I suggest this issue should be closed.
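A sketch of the YAML-file suggestion above; the file name, key, and the use of `yq` are all assumptions, not the project's actual layout:

```bash
# example_urls.yaml would map dataset names to their download URLs, e.g.:
#   surface_data: "https://example.com/openghg/tutorial/surface.tar.gz"
# A populate function (or a shell helper) could then look the URL up
# instead of hard-coding it:
yq '.surface_data' example_urls.yaml
```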
2025-04-01T04:35:01.060179
2019-11-14T21:17:52
523109654
{ "authors": [ "boothym", "russss" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9378", "repo": "openinframap/styles", "url": "https://github.com/openinframap/styles/issues/61" }
gharchive/issue
Increase visibilty of railway lines on base map Hope this is the correct place to report this issue. I noticed on the vector base map for OIM, railway lines do not appear until z13 - and even then they are not as easy to see compared to the much more prominent white roads. I was looking at the proximity of high voltage power lines to unelectrified railway lines, but it was difficult to work out where the railway lines are located at z10-12. Or would it be possible to offer the default OSM standard layer (and/or Mapnik grayscale like OpenRailwayMap) as a base map choice? I've made these a bit more visible now. Excellent, thanks! 👍
2025-04-01T04:35:01.298498
2020-05-05T22:25:56
612932935
{ "authors": [ "jdville03" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9379", "repo": "openlawteam/openlaw-website", "url": "https://github.com/openlawteam/openlaw-website/pull/110" }
gharchive/pull-request
static website redesign Preview of new static website. TODOs: [ ] link to actual medium post in "Conquer Legal Complexities" section [ ] add images for partners/clients in "Uncompromising Standards" section [ ] copy edit for entire site Updated TODOs: [ ] copy edit for entire site We will hide these for now in order to deploy the updated static site soon. [ ] link to actual medium post in "Conquer Legal Complexities" section [ ] add images for partners/clients in "Uncompromising Standards" section
2025-04-01T04:35:01.301413
2016-03-14T09:40:43
140623370
{ "authors": [ "mstrop", "tsauerwein" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9380", "repo": "openlayers/ol3", "url": "https://github.com/openlayers/ol3/issues/5029" }
gharchive/issue
forEachFeatureAtPixel doesn't work correctly for Point geometry

A feature is not found when I click far enough from the centre of the Point geometry (close enough to its border, but still inside it). I have created an example here: https://jsfiddle.net/mstrop/4gvLhfje/7/. When you start clicking on the border and continue towards the centre of the circle, you will see that the circle is only found quite far from the border. Tested on IE, FF and Chrome.

The property renderBuffer (see ol.layer.Vector) is taken into account for the hit detection. By default it is set to 100 pixels. If you have larger symbols, you will have to adjust it accordingly, e.g.:

```js
var vectorLayer = new ol.layer.Vector({
  source: vectorSource,
  style: styleFunction,
  renderBuffer: 200
});
```

https://jsfiddle.net/4gvLhfje/10/

Thanks a lot and sorry for bothering.
2025-04-01T04:35:01.302481
2015-04-20T14:57:46
69606387
{ "authors": [ "elemoine", "fredj" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9381", "repo": "openlayers/ol3", "url": "https://github.com/openlayers/ol3/pull/3603" }
gharchive/pull-request
Reformat upgrade-notes.md This PR fixes some formatting issues in the upgrade notes. Please merge, thanks
2025-04-01T04:35:01.308172
2021-04-30T11:08:48
872344982
{ "authors": [ "ahocevar", "ecxod", "mprins", "simonseyock" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9382", "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/issues/12261" }
gharchive/issue
OpenLayers missing on Composer

I like to use Composer, so I searched for OpenLayers on Composer but I cannot find it there.

```
Warning: We had problems parsing your composer.json file, the parser reports:
The "https://api.github.com/repos/openlayers/openlayers/contents/composer.json?ref=main" file could not be downloaded (HTTP/2 404 )
* https://packagist.org/packages/submit *
```

Why would a JavaScript library (that needs compiling/treeshaking/bundling that cannot be done with Composer) be available in a PHP package repository?

It's so disappointing when you ask a legitimate question in a new forum, and then get bumped upside the head by an unfriendly person who in addition answers the question with a counter-question. Because with Composer I could always download the latest version automatically; so I'll probably have to rig it up some other way. Because many web hosters don't allow executable files; the PHP CLI, however, is allowed.

A common approach for web apps in PHP is to use both Composer and npm (or yarn). The JavaScript part can then be processed and bundled on the development machine and uploaded to a web hoster. You can look for example at the Symfony project, which uses an approach like this.

@ecxod Like @simonseyock already pointed out, if you want to use OpenLayers (and other JavaScript libraries that are published as npm packages) in a PHP project, you'll probably want to make webpack or a similar bundler part of your local development setup. This Stack Overflow thread may give you some hints. Also note that such a setup does not add a requirement for your web hoster to support executable files. The JavaScript bundle created by webpack (or a similar bundler) is just added as static file(s) to your build artifacts.
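A sketch of the split workflow described in the answers above, assuming a standard `build` script exists in package.json (the script name is an assumption):

```bash
# PHP dependencies via Composer (this is where Composer belongs).
composer install --no-dev

# JavaScript dependencies, including the "ol" npm package, bundled locally
# with webpack or a similar bundler; the resulting static files are what
# gets uploaded to the web hoster.
npm ci
npm run build   # hypothetical script name; it should invoke your bundler
```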
2025-04-01T04:35:01.340854
2017-10-30T13:21:45
269597140
{ "authors": [ "Ksoso", "Primajin", "aalbericio", "ahocevar", "ale-cristofori", "aloulouamine", "bartvde", "findawayer", "jalik", "jktravis", "kbroncel", "lduros", "nikolas", "orpheus", "sanderdesnaijer", "smtalha", "sookoll" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9383", "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/issues/7401" }
gharchive/issue
Testing react components that use OL

I'm trying to test a map component in my application (created with the 'create-react-app' starter). Tests are crashing at "node_modules\ol\map.js:1" with the error "SyntaxError: Unexpected token import", because ol is published as ES2015 modules and is not being transformed with Babel. Someone posted a similar case in the create-react-app repo: link. They stated that:

> Packages should always be pre-compiled to their lowest target environment. In CRA apps, we enforce an IE 9+ compatibility requirement. Compiling dependencies is slow and unreliable; code should be shipped in both common js and esmodules, or only common js. There are fields in package.json specifying the differing builds.

Can you consider adding a CommonJS build? What is your position on that matter?

I'm at the beginning of a quite big enterprise solution based on React and OpenLayers 4.x and also got problems testing my modules (which contain OL4 code) with Jest. The problem is caused by Node.js, which doesn't support the ES2015 module system, and the tests run on Node.js.

The ol npm package should be transpiled to ES5.

I'm also using Jest to create unit tests, but the mere fact that I import the ol lib causes the tests to fail, and I don't see how to solve this.

This Jest config works for me in my package.json:

```json
"jest": {
  "setupFiles": [
    "<rootDir>/env-setup.js"
  ],
  "transformIgnorePatterns": [
    "/node_modules/(?!(ol|labelgun|mapbox-to-ol-style|ol-mapbox-style)/).*/"
  ],
  "coveragePathIgnorePatterns": [
    "/node_modules/",
    "env-setup.js"
  ]
}
```

Isn't it the case that you have to "eject" from create-react-app to make this config work?

Thanks @bartvde for giving a working solution, but wouldn't it be nice if Jest + OpenLayers worked out of the box, as @kbroncel said? I dug into the ol npm sources and the GitHub repo and saw that it was compiled using a makefile; that's not very common, right? I mean, usually there are tools like gulp, grunt, webpack... that allow executing complex tasks (transpiling, renaming, stripping comments, bundling...). I am already doing such things with one of my npm packages that is exploded into multiple files and is ES5 compatible; have a look at the small gulpfile: https://github.com/jalik/cuic.js/blob/master/gulpfile.js

I personally agree on publishing as ES5; see also Dan Abramov from Facebook here: https://twitter.com/dan_abramov/status/923600212798722051

I'm a bit puzzled right now. Is OL published as ES5 right now, or am I imagining things? What can we do to make an npm package published as ES5 work with Jest without ejecting from create-react-app? Is it even possible?

AFAIK the ol package is ES2015.

@kbroncel the released code of ol is ES2015, also known as ES6. Don't mix up ES5 and ES2015; they're not the same. ES5 was released in 2009, while ES6 was released in 2015. To put it simply, ES5 is the old classic JavaScript that everyone knows and ES6 is the new one.

@jalik @bartvde Ok, sorry. I mixed up ES2015 with ES5. So OL is published as ES2015. But that was not the point of this thread. What can we do to run Jest tests while using OL with create-react-app?

It only works if you use transformIgnorePatterns, but that will raise the following error: so you need to eject; then you can use the advanced Jest config and it will run through. That's how it worked for us at least. For further reading: https://github.com/facebook/create-react-app/issues/2537

We managed to get around it with: https://github.com/timarney/react-app-rewired

May help someone.
@kbroncel Do you have details of what you added in the config-overrides.js file and package.json to get this working? I suspect you only have to change the "test" entry of your scripts in package.json, correct?

In package.json change:

```diff
"scripts": {
-  "test": "react-scripts test --env=jsdom",
+  "test": "react-app-rewired test --env=jsdom"
},
```

And in config-overrides.js:

```js
module.exports = {
  webpack: function (config, env) {
    return config;
  },
  jest: function (config) {
    config.testEnvironment = "node";
    config.transformIgnorePatterns = [
      "/node_modules/(?!(ol|labelgun|mapbox-to-ol-style|ol-mapbox-style)/).*/"
    ];
    config.coveragePathIgnorePatterns = [
      "/node_modules/",
      "env-setup.js"
    ];
    config.snapshotSerializers = [
      "enzyme-to-json/serializer"
    ];
    return config;
  }
}
```

It is a more general issue than just React/Jest. Yes, in Jest we can hack the problem away, but we can't in every other test framework. I use tape, and I haven't found a sane working solution (currently I build a test bundle with webpack and then run it). I tried to teach Babel to transpile ol, without any success.

Closing this issue, because using the transformIgnorePatterns config option is well-established practice and more and more npm packages are published as ES modules.

Without any general solution? Or is your solution that everybody has to use Jest as their testing framework?

Solutions will be different for each test framework/bundler. For tape, you could take a look at https://www.npmjs.com/package/babel-tape-runner.

@ahocevar Excuse me, you said that transformIgnorePatterns is well-established practice, but as @sookoll pointed out, it's a Jest-related configuration and there are tons of testing libs out there. I personally use Jest, but not everybody uses it. It's a bit sad that such a great JS lib is not usable "out of the box" with a testing lib without digging through the internet to find how to fix the conflict... Is it too much work to precompile to ES5 using tools like Babel? At least, could it be mentioned somewhere in the docs that testing libs may cause errors, and how to fix this for the best-known libs? Thank you.

I'm having the same problem, months later it seems, and simply cannot get the provided solution to work. I'm not using CRA or an ejected version of CRA.
The error:

```
Details: /path/to/Projects/project/node_modules/ol/Map.js:4
    import PluggableMap from './PluggableMap.js';
    ^^^^^^
    SyntaxError: Unexpected token import

      1 | import React, { Component, createRef } from "react";
      2 | import { compose, map, find, equals, prop } from "ramda";
    > 3 | import OlMap from "ol/Map";
        | ^
      4 | import View from "ol/View";
      5 | import OL_STATE from "ol/source/State";
      6 | import OSM from "ol/source/OSM";

      at ScriptTransformer._transformAndBuildScript (node_modules/jest-runtime/build/script_transformer.js:403:17)
      at Object.<anonymous> (Scripts/Controllers/Tools/Map/Components/Map.tsx:3:1)
      at Object.<anonymous> (Scripts/Controllers/Tools/Map/Components/__tests__/Map.test.tsx:3:1)
```

My config:

```js
module.exports = {
  setupFiles: [
    "./testConfig/test-shim.js",
    "./testConfig/test-setup.js"
  ],
  transform: {
    "^.+\\.tsx?$": "ts-jest"
  },
  transformIgnorePatterns: [
    "/node_modules/(?!(ol)/).*/",
    "node_modules/(?!(ol)/)"
  ],
  testRegex: "(/__tests__/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$",
  moduleNameMapper: {
    "^(Controllers|Api|Utilities)/(.*)$": "<rootDir>Scripts/$1/$2"
  },
  moduleFileExtensions: ["ts", "tsx", "js", "jsx", "json", "node"],
  coverageReporters: ["text", "text-summary", "html"],
  coverageDirectory: "testConfig/coverageReport",
  collectCoverageFrom: ["**/Scripts/{App,Controllers,Utilities,Localization,EntryPoints}/**/*.{ts,tsx}"],
  coverageThreshold: {
    global: {
      branches: 0,
      functions: 0,
      lines: 0,
      statements: 0
    }
  }
};
```

We basically could only work around it in the end by overwriting the OL components in the test runner with blank functions returning either undefined or another empty function (based on what was expected).

Thanks, @Primajin. I started down this path this morning, and things were going okay. But I noticed that I created the first mock as a .js file instead of a .ts file, and got a similar error, but the output was slightly different. And then it hit me that I'm probably not transforming any .js files. Long story short, I added babel-jest and jest-canvas-mock to the project. Here are the changes I made to the config:

```js
setupFiles: [
  "./testConfig/test-shim.js",
  "jest-canvas-mock", // <- mocking lib so your console won't bleed
  "./testConfig/test-setup.js"
],
transform: {
  "^.+\\.tsx?$": "ts-jest",
  "^.+\\.jsx?$": "babel-jest", // <- transform JS files
},
testRegex: "(/__tests__/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$",
transformIgnorePatterns: [
  "node_modules/(?!(ol)/)", // <- transform the ol library instead of ignoring it
],
```

I was more about adding lines like these to your test file (ugly but works):

```js
jest.mock('../../../../node_modules/ol/format/GeoJSON', () => function() {});
jest.mock('../../../../node_modules/ol/geom/LineString', () => {});
// more stuff here, whatever you need...
```

Here's my two cents...
https://medium.com/@compatt84/how-to-test-open-layers-react-components-with-mocha-part-i-9a2ca0458ba1
https://medium.com/@compatt84/how-to-test-open-layers-react-components-with-mocha-part-ii-d91d65145bce

Yeah, it indeed looks like the solution I provided does not work with newer Jest versions.

The solution of @jktravis worked for me, thanks! I only had to rename .babelrc to babel.config.js to make it work.

> The solution of @jktravis worked for me, thanks! I only had to rename .babelrc to babel.config.js to make it work.

When I did this and added:
"transformIgnorePatterns": [ "node_modules/(?!(ol)/)" ], After adding this block test time get increased by 5 minutes for every file. I had the same issue. I thought it was just my implementation. But yeah. It’s super slow. I'm running into this problem even with the transformIgnorePatterns fix. Here's the jest section of my package.json: "jest": { "testMatch": [ "<rootDir>/media/js/src/**/?(*.)(spec|test).js?(x)" ], "testEnvironment": "node", "testURL": "http://localhost", "transform": { "^.+\\.jsx?$": "babel-jest" }, "transformIgnorePatterns": [ "/node_modules/(?!(ol)/)" ] } And my jest output: $ npm run test ><EMAIL_ADDRESS>test /home/nik/src/d/mediathread > jest FAIL media/js/src/GridAsset.test.jsx ● Test suite failed to run /home/nik/src/d/mediathread/node_modules/ol/Feature.js:29 import { assert } from './asserts.js'; ^^^^^^ SyntaxError: Cannot use import statement outside a module 4 | import AnnotationScroller from './AnnotationScroller'; 5 | > 6 | import Feature from 'ol/Feature'; | ^ 7 | import Map from 'ol/Map'; 8 | import View from 'ol/View'; 9 | import GeoJSON from 'ol/format/GeoJSON'; at ScriptTransformer._transformAndBuildScript (node_modules/@jest/transform/build/ScriptTransformer.js:537:17) at ScriptTransformer.transform (node_modules/@jest/transform/build/ScriptTransformer.js:579:25) at Object.<anonymous> (media/js/src/GridAsset.jsx:6:1) Test Suites: 1 failed, 1 total Okay well, using this babel.config.js fixed my problem: module.exports = { "presets": [ ['@babel/preset-env', { targets: { node: 'current' } }], "@babel/preset-react" ] } from https://github.com/facebook/jest/issues/6229#issuecomment-551916758 Since CLI options take precedence over custom jest config, adding the --transformIgnorePatterns option to the test script command worked for me without ejecting the create-react-app ... "test": "CI=true react-scripts test --transformIgnorePatterns \"node_modules/(?!ol)/\"", ... I came up with a different approach, which is to transpile the whole package to commonjs once then use this version for testing. Below is the minimal reproduction of my settings in package.json. { "scripts": { "test": "jest", "test:watch": "jest --watch", "test:prepare": "babel node_modules/ol --extensions .js --out-dir node_modules/.compiled/ol --presets @babel/preset-env --source-maps" }, "dependencies": { "ol": "^6.4.3" }, "devDependencies": { "@babel/cli": "^7.12.10", "@babel/core": "^7.11.6", "@babel/preset-env": "^7.11.5", "jest": "^26.6.3", }, "jest": { "moduleDirectories": [ "node_modules/.compiled", "node_modules" ], } } Run test:prepare to make a test-ready version of OpenLayers in node_modules/.compiled and your node will not complain about ES module syntax. Pros OpenLayers does not get transformed on the fly. -> It's faster Cons You need to manually run test:prepare when you move to a different version of OpenLayers. I came up with a different approach, which is to transpile the whole package to commonjs once then use this version for testing. Below is the minimal reproduction of my settings in package.json. 
{ "scripts": { "test": "jest", "test:watch": "jest --watch", "test:prepare": "babel node_modules/ol --extensions .js --out-dir node_modules/.compiled/ol --presets @babel/preset-env --source-maps" }, "dependencies": { "ol": "^6.4.3" }, "devDependencies": { "@babel/cli": "^7.12.10", "@babel/core": "^7.11.6", "@babel/preset-env": "^7.11.5", "jest": "^26.6.3", }, "jest": { "moduleDirectories": [ "node_modules/.compiled", "node_modules" ], } } Run test:prepare to make a test-ready version of OpenLayers in node_modules/.compiled and your node will not complain about ES module syntax. Pros OpenLayers does not get transformed on the fly. -> It's faster Cons You need to manually run test:prepare when you move to a different version of OpenLayers. Thank you, @iStefo I really prefer your solution over the babel based one. It's faster and simpler to configure but I'm getting this configuration error now: Could not locate module ol/format mapped as: C:\Projects\Source\html5-console\packages\core\node_modules\.compiled\ol\ol.js. This comes from an utility importing Openlayers stuff, of course (import { GeoJSON, WKT } from 'ol/format'). I mean, the .compiled/ol/ol.js is there but it keeps reporting that configuration error. Thanks Hello @iStefo , First of all, thanks for your solution. ESBUILD is really fast and I thought it was all fixed using this approach but then some other errors started to arise: They were all related to the ol/style package that could not find constructors for classes like: Fill, Stroke, etc I decided to move to: babel node_modules/ol --extensions .js --out-dir node_modules/.compiled/ol --presets @babel/preset-env and a jest module mapper: '^ol/(.*)$': 'node_modules/.compiled/ol/$1.js', And that made the trick. However, I still want to use ESBUILD but I think that my tests don't like its single-file output or something inside that ol.js file is not really compliant with Jest. I've been reading wether ESBUILD can keep file structure but I think it can not. I'd appreciate if you have any other tip I can use. Thanks in advance.
2025-04-01T04:35:01.344109
2022-02-05T18:47:04
1125003887
{ "authors": [ "tschaub" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9384", "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/pull/13332" }
gharchive/pull-request
Avoid duplicate imports

This pulls in the latest eslint-config-openlayers config and fixes the duplicate imports.

Hmm. Some trouble with actions.

> Hmm. Some trouble with actions.

Our actions failed to run at the same time this incident was occurring: https://www.githubstatus.com/incidents/vvthhf8gxt80. I assumed that was the reason the latest commit didn't run. Instead (or in addition), it looks like missing yaml quotes were causing trouble. I fixed that in 459cd51ae2ae7fded1f1ede5215248f9537f4e7c.
2025-04-01T04:35:01.345010
2017-04-02T12:27:12
218761380
{ "authors": [ "ahocevar", "dhivehi" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9385", "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/pull/6662" }
gharchive/pull-request
dhivehi-maps.html

As suggested, I have made an example instead of a source.

Just an XYZ source. I think we have enough examples already showing how to work with these.
2025-04-01T04:35:01.350149
2017-10-19T12:40:54
266828029
{ "authors": [ "fredj", "gberaudo", "jbo023" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9386", "repo": "openlayers/openlayers", "url": "https://github.com/openlayers/openlayers/pull/7376" }
gharchive/pull-request
changed visibility of overlay properties to protected

Hi, we would like to change the visibility of the Overlay properties and functions to protected. In the ol-cesium project we created a SynchronizedOverlay class which inherits from the original ol.Overlay class, and which will only work after this PR. We also added a function "getOptions" to retrieve the original options the overlay was created with. There is also a discussion in the original pull request to ol-cesium here: https://github.com/openlayers/ol-cesium/pull/498#discussion_r140455509

Jannes

@jbo023, you should remove the trailing underscores when changing from private to protected. Otherwise it looks good to me.

@gberaudo I removed the trailing underscores.

Sorry @jbo023, I did not see your message. There is now a conflict; could you please rebase your branch?

@gberaudo I rebased and squashed the changes.

Thanks @jbo023; it looks good to me. @fredj, what do you think about it, is it good to merge? (waiting for travis)

Travis appears stuck. @jbo023, could you please try the following:

```
git commit --amend --allow-empty
```

Then force-push the branch.

@gberaudo done

@jbo023 it was still stuck, can you please try to force-push again?

@fredj done. If it helps, I could also create a new PR?

All tests pass locally, merging.
2025-04-01T04:35:01.377413
2022-06-16T13:16:43
1273552947
{ "authors": [ "jmertic", "yarille" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9387", "repo": "openmainframeproject/omp-landscape", "url": "https://github.com/openmainframeproject/omp-landscape/pull/554" }
gharchive/pull-request
Web landscape 2022 06 08 t13 51

Pre-submission checklist: Please check each of these after submitting your pull request:
[ ] Are you including a repo_url? You need to pick the single best GitHub repository for your project, not a GitHub organization.
[ ] Does the project meet the guidelines for new entries section?
[ ] Have you included a URL for your SVG or added it to hosted_logos and referenced it there?
[ ] Does your logo clearly state the name of the project/product and follow the other logo guidelines?
[ ] Does your project name match the text on the logo?
[ ] Have you verified that the Crunchbase data for your organization is correct (including headquarters and LinkedIn)?
[ ] ~5 minutes after opening the pull request, the CNCF-Bot will post the URL for your staging server. Have you confirmed that it looks good to you and then added a comment to the PR saying "LGTM"?

Hey @yarille - this looks to add View and Changeman - anything else?

I'm adding another shortly: API State Monitor for Zowe API ML.
2025-04-01T04:35:01.394202
2016-12-23T02:17:04
197302768
{ "authors": [ "joshmoore", "manics" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9388", "repo": "openmicroscopy/infrastructure", "url": "https://github.com/openmicroscopy/infrastructure/pull/236" }
gharchive/pull-request
Disable nginx port_in_redirect (IDR-0.3.1)

This is an attempt to make Nginx redirects relative to host:port instead of the default, which is to use the host only. It didn't have the desired effect with OMERO.web, but it has been deployed anyway.
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name_in_redirect

nods Can you give an example of a URL that you were looking to have redirect differently?

ssh idr-gateway-proxy -L 12345:localhost:80
curl -IL localhost:12345/webclient
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.11.7
Date: Fri, 06 Jan 2017 11:26:15 GMT
Content-Type: text/html
Content-Length: 161
Location: http://localhost/webclient/userdata/?experimenter=-1
Connection: keep-alive
curl: (7) Failed to connect to localhost port 80: Connection refused
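The Location header above drops the tunnelled port (localhost instead of localhost:12345), which is why curl's follow-up request fails. The same check can be scripted; a minimal Python sketch, assuming the SSH tunnel above is open on localhost:12345:

import requests  # third-party HTTP client

# Stop at the first redirect so the Location header can be inspected directly.
resp = requests.head("http://localhost:12345/webclient", allow_redirects=False)
print(resp.status_code)               # expect 302
print(resp.headers.get("Location"))   # shows whether the port survived the redirect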
2025-04-01T04:35:01.427735
2022-08-08T08:05:10
1331489508
{ "authors": [ "franziskuskiefer", "kkohbrok", "raphaelrobert" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9389", "repo": "openmls/openmls", "url": "https://github.com/openmls/openmls/issues/916" }
gharchive/issue
[MLS Spec change] Remove AppAck proposal. (#654)

Link to the exact changes: https://github.com/mlswg/mls-protocol/pull/654

Description of the changes: AppAck is only partially implemented so far. Since the AppAck proposal moves to draft-mls-extensions, we might still implement it, and therefore we should not remove it from the code just yet.

I moved this out of spec changes to extension work. When fully implementing AppAck, it should be added as a test case to test_valsem242.
2025-04-01T04:35:01.466790
2015-11-17T14:36:36
117367134
{ "authors": [ "khenderick", "ringods" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9390", "repo": "openmotics/3rd-party", "url": "https://github.com/openmotics/3rd-party/issues/1" }
gharchive/issue
OpenTherm?

Hello, thinking of using OpenMotics in a renovation project. Today, I bumped into this announcement (Dutch): http://www.onemorething.nl/2015/11/nieuwe-nest-thermostaat-voor-europa-beschikbaar/ It seems to support the OpenTherm "standard": https://en.wikipedia.org/wiki/OpenTherm How do these compare regarding heating: is a Nest thermostat competing with OpenMotics functionality, or can they augment each other?

Hi, OpenMotics does not support OpenTherm at this moment. It cannot work together with Nest controlling the heating (only one of the two should control heating). But there's nothing wrong with OpenMotics being used for lights, inputs, outputs, ... and the Nest for controlling the heating.

From what I know of Nest, it talks to the boiler just like a regular room thermostat would. However, it's smart in the sense that it learns when it should heat to which temperature. The downside is that if, for example, your living room is at the desired temperature but you want to heat up the bathroom, you have to do some manual tweaking: temporarily increase the set room temperature and close all thermostatic radiator valves in the living room to prevent that one from heating up.

OpenMotics is a bit different there. It is not (yet) smart enough to learn when it should heat to which temperature, so you have to program when to heat up to which temperatures (you can of course, just like with any other thermostat, temporarily increase the temperature). However, when using OpenMotics, the thermostatic radiator valves that are usually located at the radiator are removed and electric valves are placed on the heating distributors. This, together with a temperature sensor in every room, enables OpenMotics to control the temperature for each individual room, each room with its own temperatures and timings. So you can turn on the heating for the bathroom alone, or for a bedroom when one of the kids is sick.

I hope this more or less answers all your questions. There's a lot of information in our wiki. In any case, do not hesitate to let us know if more things would be unclear.
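To make the per-room control concrete, here is a rough sketch of the hysteresis (on/off) loop such a system runs for every room; read_temperature() and set_valve() are hypothetical stand-ins, not the OpenMotics API:

# Hypothetical per-room heating loop; the sensor and valve functions
# are illustrative stand-ins, not real OpenMotics calls.
HYSTERESIS = 0.5  # degrees C of deadband around the setpoint, to avoid valve chatter

def control_room(room, setpoint, read_temperature, set_valve):
    current = read_temperature(room)
    if current < setpoint - HYSTERESIS:
        set_valve(room, opened=True)   # too cold: open this room's valve
    elif current > setpoint + HYSTERESIS:
        set_valve(room, opened=False)  # warm enough: close it again

# Each room gets its own setpoint (and, in practice, its own schedule):
setpoints = {"living_room": 21.0, "bathroom": 23.0, "bedroom": 17.0}

The deadband is the design point: without it, a room hovering at the setpoint would open and close its valve on every reading.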
2025-04-01T04:35:01.520355
2020-06-08T19:58:25
634894590
{ "authors": [ "iabdalkader", "kwagyeman" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9391", "repo": "openmv/openmv-ide", "url": "https://github.com/openmv/openmv-ide/issues/81" }
gharchive/issue
Please add any type of progress indicator when updating IDE resources.

This just keeps running, and it's hard to tell if it's stuck or downloading; if you have a slow connection, it's a real issue.

Fixed
2025-04-01T04:35:01.525026
2024-09-20T18:06:32
2539379402
{ "authors": [ "gab-arrobo", "mbilal92" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9392", "repo": "opennetworkinglab/aether-onramp", "url": "https://github.com/opennetworkinglab/aether-onramp/pull/50" }
gharchive/pull-request
Update Charts version and gnbsim image

This PR includes:
- Update SD-Core Helm Charts version to 1.0.18, which includes improvements in several CP images
- Update gnbsim image tag to rel-1.4.5
- Other minor comment edits so that the comments are aligned across all blueprints, with the goal of showing only "true" changes in git status/diff after copying a blueprint into the main.yml blueprint

These changes were tested using the OnRamp quickstart blueprint with all profiles enabled.

NOTE: main.yml and main-quickstart.yml blueprints are identical. What about deleting main-quickstart.yml and using main.yml as the default and for quickstart documentation/tests?

@llpeterson @mbilal92 please help review this PR. Thanks!

It looks good.
2025-04-01T04:35:01.535705
2017-05-24T00:33:22
230887377
{ "authors": [ "fcapovilla", "komizutama" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9393", "repo": "openpantry/open_pantry", "url": "https://github.com/openpantry/open_pantry/issues/106" }
gharchive/issue
Move svgs into Objects

The object format is the best way to link to SVGs. Determine the correct way to do this and convert all SVGs to use it.

I did a pull request to fix this issue, but I had to use img tags instead of object tags. Object tags forced a rerender of SVG images on every modification, which means images flickered every time I added or removed an item in the cart. This problem doesn't seem to occur when using img tags.

The pull request was merged. Can we close this ticket?
2025-04-01T04:35:01.573081
2023-04-04T22:03:20
1654639535
{ "authors": [ "woolfg" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9394", "repo": "openpodcast/pipelines", "url": "https://github.com/openpodcast/pipelines/issues/48" }
gharchive/issue
logging is broken

openpodcast/apple-connector:sha-2b2db94 no longer shows log output, just the cron log line:

docker service logs abc_apple-connector
uc0cqtjc06rs@worker1 | starting cron (10 08 * * *)

openpodcast/apple-connector:sha-5e341c1 still has log output.

Even the init print is not shown, so I guess it isn't the logger. Will check the logs again tomorrow; maybe it wasn't started at all today.

No logs, but it seems that it is not running at all; some problem with cron?
2025-04-01T04:35:01.581257
2018-06-07T11:49:00
330237322
{ "authors": [ "coveralls", "oleksiyVeretiuk" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9395", "repo": "openprocurement/openprocurement.api", "url": "https://github.com/openprocurement/openprocurement.api/pull/346" }
gharchive/pull-request
Validation that depends on doc types

Coverage increased (+0.3%) to 74.371% when pulling 2fe2fb2ec97f585f087dacf43b950b62e3b2b71e on oleksiyVeretiuk:ea_core_master into a8f8af0a2d3bdcf968795f9f0337e29fe2d3327b on openprocurement:ea_core_master.
2025-04-01T04:35:01.622145
2020-09-28T14:53:41
710341339
{ "authors": [ "melisabok", "zbialecki" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9396", "repo": "openreview/openreview-py", "url": "https://github.com/openreview/openreview-py/pull/756" }
gharchive/pull-request
Fix/iclr setup

- Use tqdm for all the iterations.
- Fix the authorids field description.
- Make the pdf field mandatory for the revision invitation.

Does dev have the current webfield code?

Thanks @zbialecki, I just deployed the changes on the dev site; could you take another look? I want to make sure the approach of reloading the whole tab panel is correct.

I think it's working correctly; the only thing I noticed is that it's missing the referrer banner.
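For reference, the tqdm pattern mentioned above looks roughly like this (an illustrative sketch, not the actual openreview-py code):

from tqdm import tqdm  # progress-bar wrapper around any iterable

items = range(1000)  # stand-in for the notes/invitations being iterated
for item in tqdm(items, desc="processing"):
    pass  # per-item work goes here; tqdm prints progress as a side effect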
2025-04-01T04:35:01.633324
2021-02-15T13:26:54
808539231
{ "authors": [ "jkschneider", "knutwannheden" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9397", "repo": "openrewrite/rewrite", "url": "https://github.com/openrewrite/rewrite/issues/283" }
gharchive/issue
ChangeMethodName doesn't refactor method references

If I modify the test ChangeMethodNameTest#changeMethodNameForMethodWithSingleArgDeclarative() as follows, it fails because the recipe didn't refactor the method reference as expected.

@Test
fun changeMethodNameForMethodWithSingleArgDeclarative(jp: JavaParser) = assertChanged(
    jp,
    dependsOn = arrayOf(b),
    recipe = ChangeMethodName("com.abc.B singleArg(String)", "bar"),
    before = """
        package com.abc;
        class A {
            public void test() {
                new B().singleArg("boo");
                new java.util.ArrayList<String>().forEach(new B()::singleArg);
            }
        }
    """,
    after = """
        package com.abc;
        class A {
            public void test() {
                new B().bar("boo");
                new java.util.ArrayList<String>().forEach(new B()::bar);
            }
        }
    """
)

Note: I tested this against the latest master of the repo.

By the way, it might be nice to have a Java-specific RecipeTest type, where the method parameters are annotated with @org.intellij.lang.annotations.Language("java") so that in the tests the Java code is properly syntax-highlighted and the editor will do code completion etc.

By the way... @knutwannheden That's a huge "btw" :). Done in #285.
2025-04-01T04:35:01.681003
2020-02-26T21:56:38
571685425
{ "authors": [ "PerilousApricot", "juztas", "kreczko" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9398", "repo": "opensciencegrid/xrootd-hdfs", "url": "https://github.com/opensciencegrid/xrootd-hdfs/issues/25" }
gharchive/issue
Store checksum information as xattr on hdfs

Since the 2.5.0 release, Hadoop supports xattrs, and it could set the checksum values as xattrs instead of files under the /cksums dir as it is right now.

Like @bbockelm mentioned -- you can't access xattrs from the libhdfs C library, even in the latest trunk, so it will be difficult to access them from this plugin (see https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h)

While it is not available in the libhdfs C library, XRootD now allows for drop-in checksum plugins. I've written such a plugin in Python, which stores the checksum results in the extended attributes. It is currently under test. It heavily borrows from the cephsum plugin. To try it out, you will need Python >=3.8:

pip install xrdsum[hdfs]

Usage example:

/usr/bin/time -v xrdsum --verbose --debug get <HDFS path to file> --read-size 128

xrootd config:

# ensure cksum adler32 is included in the tpc directive, in order to calculate by default on transfer
ofs.tpc cksum adler32 fcreds ?gsi =X509_USER_PROXY autorm xfr 40 pgm /etc/xrootd/xrdcp-tpc.sh
# add this line to trigger external checksum calculation. Would be overwritten by other xrootd.chksum lines
xrootd.chksum max 50 adler32 /etc/xrootd/xrdsum.sh

with /etc/xrootd/xrdcp-tpc.sh containing:

#!/bin/sh
# from https://github.com/snafus/cephsum/blob/master/scripts/xrdcp-tpc.sh
# Original code
#/usr/bin/xrdcp --server -f $1 root://$XRDXROOTD_PROXY/$2
# Get the last two variables as SRC and DST, all others are assumed as additional arguments
OTHERARGS="${@:1:$#-2}"
DSTFILE="${@:$#:1}"
SRCFILE="${@:$#-1:1}"
/usr/bin/xrdcp $OTHERARGS --server -f $SRCFILE root://$XRDXROOTD_PROXY/$DSTFILE

and with /etc/xrootd/xrdsum.sh containing:

#!/usr/bin/env bash
RESULT=$(xrdsum get --store-result --chunk-size 64 --verbose --storage-catalog /etc/xrootd/storage.xml "$1")
ECODE=$?
# XRootD expects return on stdout - checksum followed by a new line
printf "%s\n" "$RESULT"
exit "$ECODE"
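The core of such a plugin fits in a few lines of standard-library Python. A minimal sketch, assuming a local file and Linux xattrs for illustration (xrdsum itself reads the blocks over HDFS, and the attribute name used here is hypothetical, not necessarily xrdsum's key):

import os
import zlib

def adler32_of(path, chunk_size=64 * 1024 * 1024):
    # Fold the file into the checksum chunk by chunk; 1 is adler32's seed value.
    checksum = 1
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # walrus operator: needs Python >= 3.8
            checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF

def store_checksum(path):
    # Persist the result as an extended attribute instead of a /cksums file.
    value = f"{adler32_of(path):08x}"
    os.setxattr(path, "user.xrdsum.adler32", value.encode())  # Linux only; name is illustrative
    return value

Chunked reading keeps memory flat regardless of file size, which matters for the multi-GB files a storage element typically holds.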
2025-04-01T04:35:01.682265
2015-03-11T16:19:59
60684949
{ "authors": [ "reesjones", "timm" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9399", "repo": "opensciences/opensciences.github.io", "url": "https://github.com/opensciences/opensciences.github.io/issues/195" }
gharchive/issue
Links to contributors should work

e.g. the link to "Birgit Hofer" is broken: http://openscience.us/repo/spreadsheet/faultsfw.html

All links changed to the corresponding people-page articles.
2025-04-01T04:35:01.683427
2015-05-01T16:48:22
72472491
{ "authors": [ "CarterPape", "WeiFoo" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9400", "repo": "opensciences/opensciences.github.io", "url": "https://github.com/opensciences/opensciences.github.io/issues/421" }
gharchive/issue
Green streams for data-intensive software

http://dl.acm.org/citation.cfm?id=2486859

This paper tests their implementation on some benchmarks: beamformer, DCT, DES, ...
2025-04-01T04:35:01.685735
2020-09-12T04:35:41
700045223
{ "authors": [ "znicholls" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9401", "repo": "openscm/scmdata", "url": "https://github.com/openscm/scmdata/pull/119" }
gharchive/pull-request
7.0.0 hotfixes

Pull request. Please confirm that this pull request has done the following:
[x] Tests added
[x] Documentation added (where applicable) (N/A)
[x] Example added (either to an existing notebook or as a new notebook, where applicable) (N/A)
[x] Description in CHANGELOG.rst added

Adding to CHANGELOG.rst: please add a single line in the changelog notes similar to one of the following:

- (`#XX <https://github.com/openscm/scmdata/pull/XX>`_) Added feature which does something
- (`#XX <https://github.com/openscm/scmdata/pull/XX>`_) Fixed bug identified in (`#YY <https://github.com/openscm/scmdata/issues/YY>`_)

@lewisjared also fyi
2025-04-01T04:35:01.694707
2021-07-11T12:52:42
941446773
{ "authors": [ "qu1ck", "rahuldhebri" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9402", "repo": "openscopeproject/InteractiveHtmlBom", "url": "https://github.com/openscopeproject/InteractiveHtmlBom/issues/259" }
gharchive/issue
New error with V5.99

I have cloned the latest master branch, and I am using V5.99.0-11022. When I try to run Generate BOM, it gives me this error: "unsupported drawing class PCB_DIM_ALIGNED"

Application: KiCad PCB Editor (64-bit)
Version: (5.99.0-11022-g0527dc6fe0), release build
Libraries: wxWidgets 3.1.5 libcurl/7.74.0-DEV Schannel zlib/1.2.11
Platform: Windows 10 (build 19043), 64-bit edition, 64 bit, Little endian, wxMSW
Build Info:
Date: Jun 14 2021 08:59:27
wxWidgets: 3.1.5 (wchar_t,STL containers)
Boost: 1.75.0
OCC: 7.5.0
Curl: 7.74.0-DEV
ngspice: 34
Compiler: Visual C++ 1928 without C++ ABI
Build settings: KICAD_USE_OCC=ON KICAD_SPICE=ON

Update KiCad, this was fixed in https://gitlab.com/kicad/code/kicad/-/commit/fbbe72381150c1995c401f318786d2fb643113c7

Thanks for your quick reply, solved!
2025-04-01T04:35:01.702268
2023-10-07T08:46:05
1931262014
{ "authors": [ "ashking94", "gulgulni" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:9403", "repo": "opensearch-project/OpenSearch", "url": "https://github.com/opensearch-project/OpenSearch/issues/10486" }
gharchive/issue
Evaluate response for indexing request if the remote store interaction fails

Is your feature request related to a problem? Please describe.
We should evaluate whether the response thrown at indexing time during a remote store outage shares only the required information.

@ashking94 please add details around the proposed solution

Sure @gulgulni. So, let's assume we are using S3 as the underlying remote store. If S3 were to throw a 500 or 503, then the exception that the SDK throws for these status codes would also be shown in the response message of the bulk or index API.
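A sketch of the kind of sanitization being discussed. This is illustrative only (OpenSearch itself is Java, and every name below is hypothetical): map the remote store's status code to a generic reason instead of echoing the raw SDK exception text into the bulk/index response.

# Illustrative sketch; all names are hypothetical, not OpenSearch code.
SAFE_REASONS = {500: "remote store unavailable", 503: "remote store throttled"}

def sanitize_remote_error(status_code, raw_exception_message):
    # Surface only a generic reason plus the status code; the raw SDK
    # message may leak internals (bucket names, request IDs, endpoints).
    reason = SAFE_REASONS.get(status_code, "remote store request failed")
    return {"status": status_code, "reason": reason}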