Dataset schema (column: dtype, value range or distinct values):

Unnamed: 0: int64, 0 to 832k
id: float64, 2.49B to 32.1B
type: stringclasses, 1 value
created_at: stringlengths, 19 to 19
repo: stringlengths, 7 to 112
repo_url: stringlengths, 36 to 141
action: stringclasses, 3 values
title: stringlengths, 1 to 744
labels: stringlengths, 4 to 574
body: stringlengths, 9 to 211k
index: stringclasses, 10 values
text_combine: stringlengths, 96 to 211k
label: stringclasses, 2 values
text: stringlengths, 96 to 188k
binary_label: int64, 0 to 1
Unnamed: 0: 19,804
id: 26,187,469,134
type: IssuesEvent
created_at: 2023-01-03 03:45:20
repo: MarkBind/markbind
repo_url: https://api.github.com/repos/MarkBind/markbind
action: opened
title: Create Default Debugging Configurations - VS Code
labels: p.Low a-Process d.easy
### Please confirm that you have searched existing issues in the repo

Yes, I have searched the existing issues

### Any related issues?

#1346

### What is the area that this feature belongs to?

Testing

### Is your feature request related to a problem? Please describe.

Some sample IDE debugging configurations have been mentioned in our dev docs [here](https://markbind.org/devdocs/devGuide/workflow.html#:~:text=IDE%20debugger!%20We%20provide%20several%20sample%20configurations%20for%20WebStorm%20and%20VS%20Code.). While it is good to have a reference, we can certainly create and commit a set of default debugging configurations that can be used out of the box. This helps improve the developer experience.

### Describe the solution you'd like

As I am using VS Code for MarkBind development, I would like to have a default VS Code config to do debugging easily. This should require creating and committing:

- `.vscode/launch.json`
- `.vscode/tasks.json`
- some instructions in the dev docs on this
- and/or other files as appropriate

### Describe alternatives you've considered

Can/should also do it for other IDEs such as WebStorm, if possible.

### Additional context

_No response_
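A minimal sketch of what such a committed default `.vscode/launch.json` could look like, assuming the CLI is launched through `node` with the debugger attached. The `"program"` path and configuration name here are illustrative assumptions, not taken from the MarkBind repo:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug MarkBind serve (sketch)",
      "type": "node",
      "request": "launch",
      "program": "${workspaceFolder}/packages/cli/index.js",
      "args": ["serve"],
      "cwd": "${workspaceFolder}",
      "console": "integratedTerminal"
    }
  ]
}
```

The actual entry point would come from the repo's package layout; a companion `.vscode/tasks.json` could then wire up build steps as a `preLaunchTask`.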
1.0
process
create default debugging configurations vs code please confirm that you have searched existing issues in the repo yes i have searched the existing issues any related issues what is the area that this feature belongs to testing is your feature request related to a problem please describe some sample ide debugging configurations have been mentioned in our dev docs while it is a good to have a reference we certainly can create and commit a set of default debugging configuration that can be used out of the box this helps with improving developer experience describe the solution you d like as i am using vscode for markbind development i would like to have a default vscode config to do debugging easily this should require creating and committing vscode launch json vscode tasks json some instructions in dev docs on this and or other files as appropriate describe alternatives you ve considered can should also do it for other ides such as webstorm if possible additional context no response
1
Unnamed: 0: 6,071
id: 8,909,366,194
type: IssuesEvent
created_at: 2019-01-18 05:54:13
repo: mick-warehime/sixth_corp
repo_url: https://api.github.com/repos/mick-warehime/sixth_corp
action: closed
title: All mods should be defined from data.
labels: development process
This allows us to store initial mods as character data. We can replace the currently implemented mods with enums.
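The proposal above (mods defined from data, referenced via enums) could be sketched like this; the mod names and stats table are hypothetical, not taken from the sixth_corp codebase:

```python
from enum import Enum

# Hypothetical data table: each mod's stats live in data, not in code,
# so initial mods can be stored as character data.
MOD_DATA = {
    "SHIELD": {"armor": 5},
    "BLASTER": {"damage": 3},
}

# Build an enum of mod identifiers from the data keys, so character data
# refers to mods by name while code uses enum members.
ModID = Enum("ModID", [(name, name) for name in MOD_DATA])

def mod_stats(mod: ModID) -> dict:
    # Look up a mod's stats from the data table via its enum name.
    return MOD_DATA[mod.name]
```

Replacing the currently implemented mod classes then becomes a matter of editing `MOD_DATA` rather than writing new code.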
1.0
process
all mods should be defined from data this allows us to store initial mods as character data we can replace the currently implemented mods with enums
1
Unnamed: 0: 17,861
id: 23,807,738,855
type: IssuesEvent
created_at: 2022-09-04 09:50:15
repo: CGAL/cgal
repo_url: https://api.github.com/repos/CGAL/cgal
action: closed
title: question about scale in pointmatcher registration
labels: Pkg::Point_set_processing_3
I'm using the registration with pointmatcher to register two point clouds with different scales. To make the similarity transform valid, I changed the ICP chain's default from RigidTransformation to SimilarityTransformation. However, when I set the error minimizer to PointToPointSimilarityErrorMinimizer, the rotation matrix part I get is still a normal rotation matrix without scale. How do I solve this?

1. cgal: registration_with_pointmatcher.cpp

```
// Prepare error minimizer
ICP_config error_minimizer { /*.name=*/"PointToPointSimilarityErrorMinimizer", /*.params=*/{ } };
```

2. libpointmatcher/pointmatcher/ICP.cpp

```
void PointMatcher<T>::ICPChainBase::setDefault()
{
    this->cleanup();
    this->transformations.push_back(std::make_shared<typename TransformationsImpl<T>::SimilarityTransformation>());
    this->readingDataPointsFilters.push_back(std::make_shared<typename DataPointsFiltersImpl<T>::RandomSamplingDataPointsFilter>());
    this->referenceDataPointsFilters.push_back(std::make_shared<typename DataPointsFiltersImpl<T>::SamplingSurfaceNormalDataPointsFilter>());
    this->outlierFilters.push_back(std::make_shared<typename OutlierFiltersImpl<T>::TrimmedDistOutlierFilter>());
    this->matcher = std::make_shared<typename MatchersImpl<T>::KDTreeMatcher>();
    this->errorMinimizer = std::make_shared<PointToPointSimilarityErrorMinimizer<T> >();
    this->transformationCheckers.push_back(std::make_shared<typename TransformationCheckersImpl<T>::CounterTransformationChecker>());
    this->transformationCheckers.push_back(std::make_shared<typename TransformationCheckersImpl<T>::DifferentialTransformationChecker>());
    this->inspector = std::make_shared<typename InspectorsImpl<T>::NullInspector>();
}
```
1.0
process
question about scale in pointmatcher registation i m using the registration with pointmatcher to registrate two point clouds with different scales to make similarity transform valid i change the icp chain s default from rigidtransformation to similaritytransformation however when i set errorminimizer to pointtopointsimilarityerrorminimizer the rotation matrix part i get is still an normal rotation matrix without scale how to solve this cgal registration with pointmatcher cpp prepare error minimizer icp config error minimizer name pointtopointsimilarityerrorminimizer params libpointmatcher pointmatcher icp cpp void pointmatcher icpchainbase setdefault this cleanup this transformations push back std make shared similaritytransformation this readingdatapointsfilters push back std make shared randomsamplingdatapointsfilter this referencedatapointsfilters push back std make shared samplingsurfacenormaldatapointsfilter this outlierfilters push back std make shared trimmeddistoutlierfilter this matcher std make shared kdtreematcher this errorminimizer std make shared this transformationcheckers push back std make shared countertransformationchecker this transformationcheckers push back std make shared differentialtransformationchecker this inspector std make shared nullinspector
1
Unnamed: 0: 15,715
id: 19,848,831,079
type: IssuesEvent
created_at: 2022-01-21 09:58:25
repo: ooi-data/CE04OSPS-PC01B-4A-DOSTAD109-streamed-do_stable_sample
repo_url: https://api.github.com/repos/ooi-data/CE04OSPS-PC01B-4A-DOSTAD109-streamed-do_stable_sample
action: opened
title: 🛑 Processing failed: ValueError
labels: process
## Overview

`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:58:24.940676.

## Details

Flow name: `CE04OSPS-PC01B-4A-DOSTAD109-streamed-do_stable_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)

<details>
<summary>Traceback</summary>

```
Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
    final_path = finalize_data_stream(
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
    append_to_zarr(mod_ds, final_store, enc, logger=logger)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
    _append_zarr(store, mod_ds)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
    existing_arr.append(var_data.values)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
    return self._write_op(self._append_nosync, data, axis=axis)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
    return self._synchronized_op(f, *args, **kwargs)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
    result = f(*args, **kwargs)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
    self[append_selection] = data
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
    self.set_basic_selection(selection, value, fields=fields)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
    return self._set_basic_selection_nd(selection, value, fields=fields)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
    self._set_selection(indexer, value, fields=fields)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
    self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
    cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
    cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
    chunk = self._decode_chunk(cdata)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
    chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```

</details>
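The failing reshape at the bottom of the traceback is a pure size check: a decoded chunk can only be viewed in the expected chunk shape when the element counts match. A minimal illustration of that invariant, using the numbers from the error message:

```python
from math import prod

def can_reshape(n_elements: int, shape: tuple) -> bool:
    # A reshape succeeds only when the total element count is unchanged.
    return n_elements == prod(shape)

# Numbers from the error message: the stored chunk decodes to 1,209,600
# elements, but the array metadata expects chunks of shape (2,777,778,).
# A mismatch like this typically suggests the appended data does not
# match the store's declared chunking/encoding (an assumption here, not
# a confirmed diagnosis of this run).
print(can_reshape(1_209_600, (2_777_778,)))  # prints False
```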
1.0
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed do stable sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val fields
fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape
1
Unnamed: 0: 9,556
id: 12,517,290,671
type: IssuesEvent
created_at: 2020-06-03 10:52:37
repo: GoogleCloudPlatform/dotnet-docs-samples
repo_url: https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
action: opened
title: [Spanner] Deactivate TestGetBackupOperations
labels: api: spanner priority: p2 type: process
The test times out so we're skipping this for now ([example](https://source.cloud.google.com/results/invocations/f255d775-e3ca-4231-b34d-8b70b669a0d7/targets/github%2Fdotnet-docs-samples%2Fspanner%2Fapi%2FSpannerTest/tests)). We should look into this and re-enable it when we figure it out.
1.0
process
deactivate testgetbackupoperations the test times out so we re skipping this for now we should look into this and re enable when we figure it out
1
Unnamed: 0: 18,127
id: 24,167,165,368
type: IssuesEvent
created_at: 2022-09-22 15:54:38
repo: streamnative/flink
repo_url: https://api.github.com/repos/streamnative/flink
action: closed
title: [SQL Connector] Minor improvement ticket
labels: compute/data-processing type/enhancement
- [x] add config groups to PulsarTableOptions
- [x] check if we can use the PulsarConfigValidator for pulsar table options.
- [x] review `properties` metadata fields, add test and add documentation
- CLOSED: [x] add flink-avro in the pulsar-flink docker image using the recommended way: not sure what it means. Closing because no recent updates.
- [x] insert into..select from ... What schema should the sink use?
- [x] Documentation: why SQL connector only supports Exclusive and Shared subscription Type?
- [x] consider refactor the PulsarTableTestUtils on how the rows are collected.
- [x] where to put the testing pojo TestingUser
- [x] check on how the keyBytes are used.
- [ ] fix daily e2e tests

## catalog related

- [ ] catalog: check the data fields mapping and the format mapping semantics.
- [ ] revisit the error handling mechanism of PulsarCatalog, like `e.printStackTrace()`
- [ ] add more schema tests for PulsarCatalog based on SchemaData
- [ ] add auth tests for PulsarCatalog, check if auth can be used manually. Add authStandalone.conf in the test files.
1.0
process
minor improvement ticket add config groups to pulsartableoptions check if we can use the pulsarconfigvalidator for pulsar table options review properties metadata fields add test and add documentation closed add flink avro in the pulsar flink docker image using the recommended way not sure what it means closing because no recent updates insert into select from what schema should the sink use documentation why sql connector only supports exclusive and shared subscription type consider refactor the pulsartabletestutils on how the rows are collected where to put the testing pojo testinguser check on how the keybytes are used fix daily tests catalog related catalog check the data fields mapping and the format mapping semantics revisit the error handling mechanism of pulsarcatalog like e printstacktrace add more schema tests for pulsarcatalog based on schemadata add auth tests for pulsarcatalog check if auth can be used manually add authstandalone conf in the test files
1
Unnamed: 0: 19,542
id: 25,864,263,203
type: IssuesEvent
created_at: 2022-12-13 19:25:24
repo: pytorch/pytorch
repo_url: https://api.github.com/repos/pytorch/pytorch
action: closed
title: DISABLED test_success_non_blocking (__main__.ForkTest)
labels: module: multiprocessing triaged module: flaky-tests skipped
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_success_non_blocking&suite=ForkTest&file=test_multiprocessing_spawn.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7286770748). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. cc @VitalyFedyunin
1.0
process
disabled test success non blocking main forktest platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green cc vitalyfedyunin
1
Unnamed: 0: 256,934
id: 19,477,267,644
type: IssuesEvent
created_at: 2021-12-24 15:20:13
repo: adafruit/Adafruit_CircuitPython_Display_Text
repo_url: https://api.github.com/repos/adafruit/Adafruit_CircuitPython_Display_Text
action: closed
title: Missing Type Annotations
labels: good first issue documentation
There are missing type annotations for some functions in this library. The `typing` module does not exist on CircuitPython devices, so the import needs to be wrapped in try/except to catch the error for the missing import. There is an example of how that is done here:

```python
try:
    from typing import List, Tuple
except ImportError:
    pass
```

Once imported, the typing annotations for the argument type(s) and return type(s) can be added to the function signature. Here is an example of a function that has had this done already:

```python
def wrap_text_to_pixels(
    string: str, max_width: int, font=None, indent0: str = "", indent1: str = ""
) -> List[str]:
```

If you are new to Git or GitHub we have a guide about contributing to our projects here: https://learn.adafruit.com/contribute-to-circuitpython-with-git-and-github

There is also a guide that covers our CI utilities and how to run them locally to ensure they will pass in GitHub Actions here: https://learn.adafruit.com/creating-and-sharing-a-circuitpython-library/check-your-code

In particular the pages `Sharing docs on ReadTheDocs` and `Check your code with pre-commit` contain the tools to install and commands to run locally to run the checks.

If you are attempting to resolve this issue and need help, you can post a comment on this issue and tag both @foamyguy and @kattni, or reach out to us on Discord: https://adafru.it/discord in the `#circuitpython-dev` channel.

The following locations are reported by mypy to be missing type annotations:

- [ ] adafruit_display_text/\_\_init\_\_.py:41
- [ ] adafruit_display_text/\_\_init\_\_.py:48
- [ ] adafruit_display_text/\_\_init\_\_.py:118
- [ ] adafruit_display_text/\_\_init\_\_.py:312
- [ ] adafruit_display_text/\_\_init\_\_.py:438
- [ ] adafruit_display_text/bitmap_label.py:522
1.0
non_process
missing type annotations there are missing type annotations for some functions in this library the typing module does not exist on circuitpython devices so the import needs to be wrapped in try except to catch the error for missing import there is an example of how that is done here python try from typing import list tuple except importerror pass once imported the typing annotations for the argument type s and return type s can be added to the function signature here is an example of a function that has had this done already python def wrap text to pixels string str max width int font none str str list if you are new to git or github we have a guide about contributing to our projects here there is also a guide that covers our ci utilities and how to run them locally to ensure they will pass in github actions here in particular the pages sharing docs on readthedocs and check your code with pre commit contain the tools to install and commands to run locally to run the checks if you are attempting to resolve this issue and need help you can post a comment on this issue and tag both foamyguy and kattni or reach out to us on discord in the circuitpython dev channel the following locations are reported by mypy to be missing type annotations adafruit display text init py adafruit display text init py adafruit display text init py adafruit display text init py adafruit display text init py adafruit display text bitmap label py
0
Unnamed: 0: 11,882
id: 14,680,053,533
type: IssuesEvent
created_at: 2020-12-31 08:55:04
repo: zammad/zammad
repo_url: https://api.github.com/repos/zammad/zammad
action: closed
title: Japanese emails incorrectly converted
labels: bug mail processing prioritised by payment verified
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓

Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21

Please post:
- Feature requests
- Development questions
- Technical questions

on the board -> https://community.zammad.org !

If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate

Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).

* The upper textblock will be removed automatically when you submit your issue *
-->

### Infos:

* Used Zammad version:
* Installation method (source, package, ..):
* Operating system:
* Database + version:
* Elasticsearch version:
* Browser + version:
* Ticket-ID: #1077036

*Note: This is **no regression** of issue #2498 - the emails are entirely different.*

You can find a sample mail in the above-mentioned ticket (see my internal note), as the content of the mail is too sensitive. The customer tried to clean the mail up, which resulted in a technically different mail.

### Expected behavior:

Zammad is capable of importing Japanese mails and, if needed, fixes the encoding to ensure it can cleanly import the mail.

### Actual behavior:

Zammad fails to import mails with specific combinations of UTF-8 encoding and Japanese. In this example the content type is stated as UTF-8:

```
Content-Type: multipart/mixed; boundary="--==_mimepart_5e24add243fce_3b9752e77a03209d0"; charset=UTF-8
```

and then put into mimeparts:

```
[...]
----==_mimepart_5e24add243fce_3b9752e77a03209d0
Content-Type: multipart/alternative; boundary="--==_mimepart_5e24add243db0_3b9752e77a03207a7"; charset=UTF-8
Content-Transfer-Encoding: 7bit

----==_mimepart_5e24add243db0_3b9752e77a03207a7
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: base64
[...]
```

![image](https://user-images.githubusercontent.com/6549061/85568555-0fec8400-b632-11ea-808e-bf5a78a56636.png)

### Sidenotes from customer:

Maybe relevant, as I just received this information: the people in Japan sending these mails seem to send them via Microsoft Office, by typing a Word document which they then send out from Word (?) via Outlook. Not really sure how this works and if we can completely reproduce that.

### Steps to reproduce the behavior:

* have a specific combination of mail content and Japanese stuff inside and try to import it

Yes I'm sure this is a bug and no feature request or a general question.
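For reproduction attempts, the charset/transfer-encoding combination shown above can be exercised with Python's stdlib `email` module. This is a synthetic single-part message (not the customer's mail) carrying base64-encoded UTF-8 Japanese text:

```python
import base64
from email import message_from_string
from email.policy import default

# Synthetic message mimicking the reported combination:
# UTF-8 charset with base64 Content-Transfer-Encoding.
payload = base64.b64encode("こんにちは、世界".encode("utf-8")).decode("ascii")
raw = (
    "Content-Type: text/plain; charset=UTF-8\n"
    "Content-Transfer-Encoding: base64\n"
    "\n"
    + payload + "\n"
)

msg = message_from_string(raw, policy=default)
# get_content() undoes the base64 transfer encoding and decodes the
# declared charset, yielding the original Japanese text.
print(msg.get_content())
```

A fuller reproduction would need the multipart/mixed and multipart/alternative nesting from the report, but even this minimal part shows where charset decoding happens.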
1.0
Japanese emails incorrectly converted - <!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: * Installation method (source, package, ..): * Operating system: * Database + version: * Elasticsearch version: * Browser + version: * Ticket-ID: #1077036 *Note: This is **no regression** of issue #2498 - the emails are entirely different.* You can find a sample mail in above mentioned ticket (see my internal note), as the content of the mail is too sensitive. Customer tried to clean the mail up which ended in a technical different mail. ### Expected behavior: Zammad is capable of importing japanese mails and, if needed, fix the encoding to ensure it can cleanly import the mail. 
### Actual behavior: Zammad fails to import mails with specific combinations between UTF8 encoding and japanese. In this example the content type is state as UTF8: ``` Content-Type: multipart/mixed; boundary="--==_mimepart_5e24add243fce_3b9752e77a03209d0"; charset=UTF-8 ``` and then put into mimeparts: ``` [...] ----==_mimepart_5e24add243fce_3b9752e77a03209d0 Content-Type: multipart/alternative; boundary="--==_mimepart_5e24add243db0_3b9752e77a03207a7"; charset=UTF-8 Content-Transfer-Encoding: 7bit ----==_mimepart_5e24add243db0_3b9752e77a03207a7 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: base64 [...] ``` ![image](https://user-images.githubusercontent.com/6549061/85568555-0fec8400-b632-11ea-808e-bf5a78a56636.png) ### Sidenotes from customer: Maybe relevant as I just received these information: The people in japan sending these mails seem to send them via Microsoft Office by typing a word document which they then sendout from Word (?) via Outlook. Not really sure how this works and if we can completely reproduce that. ### Steps to reproduce the behavior: * have a specific combination of mail content and japanese stuff inside and try to import it Yes I'm sure this is a bug and no feature request or a general question.
process
japanese emails incorrectly converted hi there thanks for filing an issue please ensure the following things before creating an issue thank you ๐Ÿค“ since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package operating system database version elasticsearch version browser version ticket id note this is no regression of issue the emails are entirely different you can find a sample mail in above mentioned ticket see my internal note as the content of the mail is too sensitive customer tried to clean the mail up which ended in a technical different mail expected behavior zammad is capable of importing japanese mails and if needed fix the encoding to ensure it can cleanly import the mail actual behavior zammad fails to import mails with specific combinations between encoding and japanese in this example the content type is state as content type multipart mixed boundary mimepart charset utf and then put into mimeparts mimepart content type multipart alternative boundary mimepart charset utf content transfer encoding mimepart content 
type text plain charset utf content transfer encoding sidenotes from customer maybe relevant as i just received these information the people in japan sending these mails seem to send them via microsoft office by typing a word document which they then sendout from word via outlook not really sure how this works and if we can completely reproduce that steps to reproduce the behavior have a specific combination of mail content and japanese stuff inside and try to import it yes i m sure this is a bug and no feature request or a general question
1
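The mail quoted in the record above declares `charset=UTF-8` with a base64 transfer encoding on its `text/plain` part. As a minimal sketch of the decoding such an importer has to perform (this is not Zammad's actual code, and the fallback encodings are assumptions), the part can be parsed and decoded like this:

```python
# Minimal sketch: decode a base64-encoded text/plain MIME part whose
# declared charset is UTF-8, as in the headers quoted in the record above.
# Illustrative only -- not Zammad's import code; fallbacks are assumptions.
import base64
from email import message_from_string

raw = (
    'Content-Type: text/plain; charset=UTF-8\n'
    'Content-Transfer-Encoding: base64\n'
    '\n'
    + base64.b64encode('こんにちは'.encode('utf-8')).decode('ascii') + '\n'
)

msg = message_from_string(raw)
charset = msg.get_content_charset() or 'utf-8'
payload = msg.get_payload(decode=True)  # undoes the base64 transfer encoding
try:
    text = payload.decode(charset)
except UnicodeDecodeError:
    # Fall back to common Japanese encodings when the declared charset lies.
    for fallback in ('cp932', 'iso-2022-jp', 'euc-jp'):
        try:
            text = payload.decode(fallback)
            break
        except UnicodeDecodeError:
            continue
    else:
        text = payload.decode(charset, errors='replace')
```

A bug of the kind described would surface exactly at the `payload.decode(charset)` step, when the declared charset does not match the bytes actually sent.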
43,919
23,422,069,604
IssuesEvent
2022-08-13 21:19:51
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
storage: `MVCCGet` showing significant `MVCCIterator.Stats()` CPU time
C-performance A-storage T-storage
`MVCCIterator.Stats()` has been seen to use ~4% of CPU during `MVCCGet` benchmarks: <img width="1646" alt="Screenshot 2022-08-13 at 17 15 06" src="https://user-images.githubusercontent.com/644420/184500272-7d34082c-64d3-4cad-9233-c19fd1cede3e.png"> This is used to attach iterator stats to a running trace, but the stats collection is called unconditionally (regardless of whether an actual trace is in progress): https://github.com/cockroachdb/cockroach/blob/9fb8d361ecb349dda4bfee10a17a6e19277762f0/pkg/storage/mvcc.go#L999-L1000 Jira issue: CRDB-18562
True
storage: `MVCCGet` showing significant `MVCCIterator.Stats()` CPU time - `MVCCIterator.Stats()` has been seen to use ~4% of CPU during `MVCCGet` benchmarks: <img width="1646" alt="Screenshot 2022-08-13 at 17 15 06" src="https://user-images.githubusercontent.com/644420/184500272-7d34082c-64d3-4cad-9233-c19fd1cede3e.png"> This is used to attach iterator stats to a running trace, but the stats collection is called unconditionally (regardless of whether an actual trace is in progress): https://github.com/cockroachdb/cockroach/blob/9fb8d361ecb349dda4bfee10a17a6e19277762f0/pkg/storage/mvcc.go#L999-L1000 Jira issue: CRDB-18562
non_process
storage mvccget showing significant mvcciterator stats cpu time mvcciterator stats has been seen to use of cpu during mvccget benchmarks img width alt screenshot at src this is used to attach iterator stats to a running trace but the stats collection is called unconditionally regardless of whether an actual trace is in progress jira issue crdb
0
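The fix the record above implies — collect iterator stats only when a trace is actually recording — can be sketched as a hypothetical Python analogue (the real code is Go inside `pkg/storage`; all names below are invented for illustration):

```python
# Hypothetical sketch of the proposed fix: pay for expensive stats
# collection only when a trace is actually recording. Names are invented;
# the real implementation lives in Go in pkg/storage/mvcc.go.

class Iterator:
    def __init__(self):
        self.stats_calls = 0  # counts how often stats() ran

    def stats(self):
        self.stats_calls += 1
        return {"steps": 42}  # stand-in for the real stats struct

def mvcc_get(it, trace_recording):
    # ... perform the actual read here ...
    if trace_recording:        # gate the expensive call on an active trace
        return it.stats()
    return None

it = Iterator()
mvcc_get(it, trace_recording=False)  # common path: no stats collected
mvcc_get(it, trace_recording=True)   # traced path: stats collected once
```

On the untraced common path the stats call is skipped entirely, which is where the ~4% CPU would be recovered.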
8,512
11,694,538,659
IssuesEvent
2020-03-06 04:27:18
nanoframework/Home
https://api.github.com/repos/nanoframework/Home
closed
Generating Stubs (interop) for functions with int and short return types
Area: Metadata Processor Priority: Low Type: Bug trivial up-for-grabs
VS 2019 VS Extension 1.17.0.13 When generating the stubs for interop assemblies with functions with return values of type int or short, entries are SetResult_int16_t or SetResult_in32_t in the ... mshl file. When using gcc 9.2.1, however, SetResult_INT16 or SetResult_INT32 must be created!
1.0
Generating Stubs (interop) for functions with int and short return types - VS 2019 VS Extension 1.17.0.13 When generating the stubs for interop assemblies with functions with return values of type int or short, entries are SetResult_int16_t or SetResult_in32_t in the ... mshl file. When using gcc 9.2.1, however, SetResult_INT16 or SetResult_INT32 must be created!
process
generating stubs interop for functions with int and short return types vs vs extension when generating the stubs for interop assemblies with functions with return values of type int or short entries are setresult t or setresult t in the mshl file when using gcc however setresult or setresult must be created
1
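The rename requested in the record above amounts to a small spelling map in the stub generator; a hypothetical sketch (the mapping and fallback are assumptions, not the extension's real code):

```python
# Hypothetical sketch of the mapping the issue asks for: emit the
# SetResult_INT16 / SetResult_INT32 spellings gcc 9.2.1 accepts instead
# of the SetResult_int16_t / SetResult_int32_t forms. Not the real code.
RETURN_SETTERS = {
    'int16_t': 'SetResult_INT16',
    'int32_t': 'SetResult_INT32',
}

def setter_for(return_type):
    # Fall back to the old naming for types the issue does not mention.
    return RETURN_SETTERS.get(return_type, 'SetResult_' + return_type)
```

For example, `setter_for('int16_t')` yields the `SetResult_INT16` spelling the toolchain expects.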
16,990
22,355,341,116
IssuesEvent
2022-06-15 15:12:07
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
"echo" in powershell is an alias. for clarity, one otta use "Write-Output" instead
doc-enhancement devops/prod Pri1 devops-cicd-process/tech
howdy y'all, many of your examples use "echo" for the "script:" code. that has caused relatively new users to try to use "Write-Host" instead of "Write-Output". i recommend changing from the alias to the real cmdlet. aliases should NEVER be used for anything other than one-off, throwaway code. if it will be reused or read by anyone ... the full cmdlet otta be used. take care, lee --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
"echo" in powershell is an alias. for clarity, one otta use "Write-Output" instead - howdy y'all, many of your examples use "echo" for the "script:" code. that has caused relatively new users to try to use "Write-Host" instead of "Write-Output". i recommend changing from the alias to the real cmdlet. aliases should NEVER be used for anything other than one-off, throwaway code. if it will be reused or read by anyone ... the full cmdlet otta be used. take care, lee --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
echo in powershell is an alias for clarity one otta use write output instead howdy y all many of your examples use echo for the script code that has caused relatively new users to try to use write host instead of write output i recommend changing from the alias to the real cmdlet aliases should never be used for anything other than one off throwaway code if it will be reused or read by anyone the full cmdlet otta be used take care lee document details โš  do not edit this section it is required for docs microsoft com โžŸ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
98,099
4,017,454,186
IssuesEvent
2016-05-16 04:11:54
Thaerious/NERVE
https://api.github.com/repos/Thaerious/NERVE
opened
Remove right click menu from reading pane unless functionality is restored
bug - high priority
Right now, on my mac, using duncsa-short, none of the three items works, and the Customize Tag throws up an exception error.
1.0
Remove right click menu from reading pane unless functionality is restored - Right now, on my mac, using duncsa-short, none of the three items works, and the Customize Tag throws up an exception error.
non_process
remove right click menu from reading pane unless functionality is restored right now on my mac using duncsa short none of the three items works and the customize tag throws up an exception error
0
20,135
26,682,013,604
IssuesEvent
2023-01-26 18:29:17
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Flaky tests may be due to limited shared memory
module: multiprocessing module: dataloader module: ci triaged
As of August 18, 2022, pytorch/pytorch had 240 flaky tests in its system, the majority of them owned by the distributed, multiprocessing, and dataloader teams. @NivekT suggested that the dataloader tests are likely flaky due to lack of shared memory, and thus inspired the following action items we can take: 1. Track shared memory usage with `free` on Linux. Whenever a test case fails, we can print the amount of used + free memory at that moment. We may also want to check before the test + after the failure. 2. Use free before every test to see if we have enough memory at the start of the test. If not, we should skip the test with a message. **We should:** proceed with 1 to verify that lack of shared memory is the culprit, and if so, then do 2. Tagging @ejguan and @VitalyFedyunin or @vitaly-fedyunin for more suggestions + confirmation that this is a reasonable next step. cc @VitalyFedyunin @SsnL @ejguan @NivekT @seemethere @malfet @pytorch/pytorch-dev-infra
1.0
Flaky tests may be due to limited shared memory - As of August 18, 2022, pytorch/pytorch had 240 flaky tests in its system, the majority of them owned by the distributed, multiprocessing, and dataloader teams. @NivekT suggested that the dataloader tests are likely flaky due to lack of shared memory, and thus inspired the following action items we can take: 1. Track shared memory usage with `free` on Linux. Whenever a test case fails, we can print the amount of used + free memory at that moment. We may also want to check before the test + after the failure. 2. Use free before every test to see if we have enough memory at the start of the test. If not, we should skip the test with a message. **We should:** proceed with 1 to verify that lack of shared memory is the culprit, and if so, then do 2. Tagging @ejguan and @VitalyFedyunin or @vitaly-fedyunin for more suggestions + confirmation that this is a reasonable next step. cc @VitalyFedyunin @SsnL @ejguan @NivekT @seemethere @malfet @pytorch/pytorch-dev-infra
process
flaky tests may be due to limited shared memory as of august pytorch pytorch had flaky tests in its system the majority of them owned by the distributed multiprocessing and dataloader teams nivekt suggested that the dataloader tests are likely flaky due to lack of shared memory and thus inspired the following action items we can take track shared memory usage with free on linux whenever a test case fails we can print the amount of used free memory at that moment we may also want to check before the test after the failure use free before every test to see if we have enough memory at the start of the test if not we should skip the test with a message we should proceed with to verify that lack of shared memory is the culprit and if so then do tagging ejguan and vitalyfedyunin or vitaly fedyunin for more suggestions confirmation that this is a reasonable next step cc vitalyfedyunin ssnl ejguan nivekt seemethere malfet pytorch pytorch dev infra
1
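Action item 2 in the record above — skip a test when shared memory is scarce — could look roughly like the sketch below. The path and threshold are assumptions (on Linux the check would target `/dev/shm`), not values from the issue:

```python
# Rough sketch of action item 2 above: skip a test when the shared-memory
# filesystem is low on space. On Linux the path would be '/dev/shm'; here
# a temp directory stands in so the sketch runs anywhere.
import shutil
import tempfile

def has_free_space(path, min_free_bytes):
    """Return True if `path`'s filesystem has at least `min_free_bytes` free."""
    usage = shutil.disk_usage(path)
    return usage.free >= min_free_bytes

# A test harness would call this before each test and skip (with a message)
# when it returns False; a zero threshold is trivially satisfiable.
enough = has_free_space(tempfile.gettempdir(), 0)
```

Action item 1 (printing used + free memory around a failing test) falls out of the same `shutil.disk_usage` call by also reporting `usage.used`.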
6,408
23,100,606,693
IssuesEvent
2022-07-27 02:04:32
longhorn/longhorn
https://api.github.com/repos/longhorn/longhorn
closed
[IMPROVEMENT] Purging a volume before rebuilding starts
area/engine priority/1 require/automation-e2e require/doc area/stability kind/improvement backport-needed/1.2.5 area/replica backport-needed/1.3.1
## Is your improvement request related to a feature? Please describe If we can do snapshot purge before rebuilding, we would eliminate 1x space used by the system snapshots for the case we mentioned in https://longhorn.io/docs/1.3.0/volumes-and-nodes/volume-size/#space-configuration-suggestions-for-volumes ## Describe the solution you'd like Before creating the system snapshot for the rebuilding replica, volume should do snapshot purge automatically ## Additional context Some users/customers are complaining about the extra space used by Longhorn volumes in the worst case, this improvement would help reduce the overhead.
1.0
[IMPROVEMENT] Purging a volume before rebuilding starts - ## Is your improvement request related to a feature? Please describe If we can do snapshot purge before rebuilding, we would eliminate 1x space used by the system snapshots for the case we mentioned in https://longhorn.io/docs/1.3.0/volumes-and-nodes/volume-size/#space-configuration-suggestions-for-volumes ## Describe the solution you'd like Before creating the system snapshot for the rebuilding replica, volume should do snapshot purge automatically ## Additional context Some users/customers are complaining about the extra space used by Longhorn volumes in the worst case, this improvement would help reduce the overhead.
non_process
purging a volume before rebuilding starts is your improvement request related to a feature please describe if we can do snapshot purge before rebuilding we would eliminate space used by the system snapshots for the case we mentioned in describe the solution you d like before creating the system snapshot for the rebuilding replica volume should do snapshot purge automatically additional context some users customers are complaining about the extra space used by longhorn volumes in the worst case this improvement would help reduce the overhead
0
15,055
18,763,074,795
IssuesEvent
2021-11-05 19:00:57
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
opened
Process Heat - Moisture
bug Process Heating important
![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/ac17a549-6c62-48d6-a484-fb1f356b3644) either need a /100 before sending to suite or change the unit label to "lb/lb dry air" (and kg/kg dry air for metric - no unit conversions are needed) If you do the second one, you will need a /100 from when the 'calculate' button puts it into the field (or remove a x100)
1.0
Process Heat - Moisture - ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/ac17a549-6c62-48d6-a484-fb1f356b3644) either need a /100 before sending to suite or change the unit label to "lb/lb dry air" (and kg/kg dry air for metric - no unit conversions are needed) If you do the second one, you will need a /100 from when the 'calculate' button puts it into the field (or remove a x100)
process
process heat moisture either need a before sending to suite or change the unit label to lb lb dry air and kg kg dry air for metric no unit conversions are needed if you do the second one you will need a from when the calculate button puts it into the field or remove a
1
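The "/100" fix described in the record above is a plain unit conversion: the suite expects a dimensionless humidity ratio (lb/lb or kg/kg dry air) while the UI field holds a percentage. A minimal sketch of both directions:

```python
# The fix described above: divide the UI percentage by 100 before sending
# it to the suite (lb/lb or kg/kg dry air), and multiply by 100 when the
# 'calculate' button writes a ratio back into the field.
def percent_to_ratio(percent):
    return percent / 100.0

def ratio_to_percent(ratio):
    return ratio * 100.0
```

Relabeling the field "lb/lb dry air" instead, as the record's second option suggests, would remove the `/100` on input but require dropping the `x100` on the calculated write-back.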
14,882
18,287,302,902
IssuesEvent
2021-10-05 11:47:33
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
dsPIC33E analyze not working
Feature: Analysis Feature: Processor/PIC
**Describe the bug** ``` java.lang.reflect.InvocationTargetException at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyTo(AutoAnalysisManager.java:1713) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyToWithTransaction(AutoAnalysisManager.java:1655) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.scheduleWorker(AutoAnalysisManager.java:1364) at ghidra.app.cmd.formats.CoffBinaryAnalysisCommand.applyTo(CoffBinaryAnalysisCommand.java:90) at ghidra.app.analyzers.AbstractBinaryFormatAnalyzer.added(AbstractBinaryFormatAnalyzer.java:39) at ghidra.app.plugin.core.analysis.AnalysisScheduler.runAnalyzer(AnalysisScheduler.java:186) at ghidra.app.plugin.core.analysis.AnalysisTask.applyTo(AnalysisTask.java:39) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisTaskWrapper.run(AutoAnalysisManager.java:688) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:788) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:667) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:632) at ghidra.app.plugin.core.analysis.AnalysisBackgroundCommand.applyTo(AnalysisBackgroundCommand.java:58) at ghidra.framework.plugintool.mgr.BackgroundCommandTask.run(BackgroundCommandTask.java:102) at ghidra.framework.plugintool.mgr.ToolTaskManager.run(ToolTaskManager.java:315) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.io.IOException: Unable to read bytes at rom:100770 at ghidra.app.util.bin.MemoryByteProvider.readBytes(MemoryByteProvider.java:129) at ghidra.app.util.bin.BinaryReader.readInt(BinaryReader.java:682) at ghidra.app.util.bin.BinaryReader.readNextInt(BinaryReader.java:316) at ghidra.app.util.bin.format.coff.CoffLineNumber.<init>(CoffLineNumber.java:32) at ghidra.app.util.bin.format.coff.CoffSectionHeader.parseLineNumbers(CoffSectionHeader.java:280) at 
ghidra.app.util.bin.format.coff.CoffSectionHeader.parse(CoffSectionHeader.java:267) at ghidra.app.util.bin.format.coff.CoffFileHeader.parse(CoffFileHeader.java:207) at ghidra.app.cmd.formats.CoffBinaryAnalysisCommand.analysisWorkerCallback(CoffBinaryAnalysisCommand.java:71) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyTo(AutoAnalysisManager.java:1707) ... 14 more ``` **Environment (please complete the following information):** - OS: [e.g. Linux, Windows] - Java Version: [e.g. 11.0.12] - Ghidra Version: [e.g. 10.0.2] - Ghidra Origin: [e.g. official ghidra-sre.org distro, third party distro, locally built]
1.0
dsPIC33E analyze not working - **Describe the bug** ``` java.lang.reflect.InvocationTargetException at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyTo(AutoAnalysisManager.java:1713) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyToWithTransaction(AutoAnalysisManager.java:1655) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.scheduleWorker(AutoAnalysisManager.java:1364) at ghidra.app.cmd.formats.CoffBinaryAnalysisCommand.applyTo(CoffBinaryAnalysisCommand.java:90) at ghidra.app.analyzers.AbstractBinaryFormatAnalyzer.added(AbstractBinaryFormatAnalyzer.java:39) at ghidra.app.plugin.core.analysis.AnalysisScheduler.runAnalyzer(AnalysisScheduler.java:186) at ghidra.app.plugin.core.analysis.AnalysisTask.applyTo(AnalysisTask.java:39) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisTaskWrapper.run(AutoAnalysisManager.java:688) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:788) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:667) at ghidra.app.plugin.core.analysis.AutoAnalysisManager.startAnalysis(AutoAnalysisManager.java:632) at ghidra.app.plugin.core.analysis.AnalysisBackgroundCommand.applyTo(AnalysisBackgroundCommand.java:58) at ghidra.framework.plugintool.mgr.BackgroundCommandTask.run(BackgroundCommandTask.java:102) at ghidra.framework.plugintool.mgr.ToolTaskManager.run(ToolTaskManager.java:315) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: java.io.IOException: Unable to read bytes at rom:100770 at ghidra.app.util.bin.MemoryByteProvider.readBytes(MemoryByteProvider.java:129) at ghidra.app.util.bin.BinaryReader.readInt(BinaryReader.java:682) at ghidra.app.util.bin.BinaryReader.readNextInt(BinaryReader.java:316) at ghidra.app.util.bin.format.coff.CoffLineNumber.<init>(CoffLineNumber.java:32) at 
ghidra.app.util.bin.format.coff.CoffSectionHeader.parseLineNumbers(CoffSectionHeader.java:280) at ghidra.app.util.bin.format.coff.CoffSectionHeader.parse(CoffSectionHeader.java:267) at ghidra.app.util.bin.format.coff.CoffFileHeader.parse(CoffFileHeader.java:207) at ghidra.app.cmd.formats.CoffBinaryAnalysisCommand.analysisWorkerCallback(CoffBinaryAnalysisCommand.java:71) at ghidra.app.plugin.core.analysis.AutoAnalysisManager$AnalysisWorkerCommand.applyTo(AutoAnalysisManager.java:1707) ... 14 more ``` **Environment (please complete the following information):** - OS: [e.g. Linux, Windows] - Java Version: [e.g. 11.0.12] - Ghidra Version: [e.g. 10.0.2] - Ghidra Origin: [e.g. official ghidra-sre.org distro, third party distro, locally built]
process
analyze not working describe the bug java lang reflect invocationtargetexception at ghidra app plugin core analysis autoanalysismanager analysisworkercommand applyto autoanalysismanager java at ghidra app plugin core analysis autoanalysismanager analysisworkercommand applytowithtransaction autoanalysismanager java at ghidra app plugin core analysis autoanalysismanager scheduleworker autoanalysismanager java at ghidra app cmd formats coffbinaryanalysiscommand applyto coffbinaryanalysiscommand java at ghidra app analyzers abstractbinaryformatanalyzer added abstractbinaryformatanalyzer java at ghidra app plugin core analysis analysisscheduler runanalyzer analysisscheduler java at ghidra app plugin core analysis analysistask applyto analysistask java at ghidra app plugin core analysis autoanalysismanager analysistaskwrapper run autoanalysismanager java at ghidra app plugin core analysis autoanalysismanager startanalysis autoanalysismanager java at ghidra app plugin core analysis autoanalysismanager startanalysis autoanalysismanager java at ghidra app plugin core analysis autoanalysismanager startanalysis autoanalysismanager java at ghidra app plugin core analysis analysisbackgroundcommand applyto analysisbackgroundcommand java at ghidra framework plugintool mgr backgroundcommandtask run backgroundcommandtask java at ghidra framework plugintool mgr tooltaskmanager run tooltaskmanager java at java base java lang thread run thread java caused by java io ioexception unable to read bytes at rom at ghidra app util bin memorybyteprovider readbytes memorybyteprovider java at ghidra app util bin binaryreader readint binaryreader java at ghidra app util bin binaryreader readnextint binaryreader java at ghidra app util bin format coff cofflinenumber cofflinenumber java at ghidra app util bin format coff coffsectionheader parselinenumbers coffsectionheader java at ghidra app util bin format coff coffsectionheader parse coffsectionheader java at ghidra app util bin format coff 
cofffileheader parse cofffileheader java at ghidra app cmd formats coffbinaryanalysiscommand analysisworkercallback coffbinaryanalysiscommand java at ghidra app plugin core analysis autoanalysismanager analysisworkercommand applyto autoanalysismanager java more environment please complete the following information os java version ghidra version ghidra origin
1
10,295
13,148,126,401
IssuesEvent
2020-08-08 19:33:46
OUDcollective/Qualitative-Self
https://api.github.com/repos/OUDcollective/Qualitative-Self
closed
Google Domains - 3
Creative Strategy process implementation
![Screen Shot from awesomescreenshot.com](https://www.awesomescreenshot.com/api/v1/destination/image/show?ImageKey=tm-3957-16879-9eb1aa899d4272428f87d32f8a4f0d39) --- **Source URL**: [https://domains.google.com/m/registrar/cart](https://domains.google.com/m/registrar/cart) <table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.89</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>3840x1405</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@0.6666666865348816x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
1.0
Google Domains - 3 - ![Screen Shot from awesomescreenshot.com](https://www.awesomescreenshot.com/api/v1/destination/image/show?ImageKey=tm-3957-16879-9eb1aa899d4272428f87d32f8a4f0d39) --- **Source URL**: [https://domains.google.com/m/registrar/cart](https://domains.google.com/m/registrar/cart) <table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.89</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>3840x1405</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@0.6666666865348816x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
process
google domains source url browser chrome os windows bit screen size viewport size pixel ratio zoom level
1
6,056
8,880,024,733
IssuesEvent
2019-01-14 03:01:16
nodejs/node
https://api.github.com/repos/nodejs/node
opened
child_process: child_process.spawn ENOMEM on Windows
child_process
<!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 11 * **Platform**: Windows Server 2016 * **Subsystem**: child_process <!-- Please provide more details below this comment. --> Windows Server 2016 appears to run into memory issues when spawning multiple subprocesses. Here's the test in question in nyc: ````js const path = require('path') const assert = require('assert') const {spawnSync} = require('child_process') const time = process.hrtime() const workerPath = path.join(__dirname, './cache-collision-worker.js') function doFork (message) { const output = spawnSync(process.execPath, [workerPath, String(time[0]), String(time[1]), message]) assert.equal(output.status, 0, 'received non-zero exit code ' + output.status) } doFork('foo') doFork('bar') doFork('baz') doFork('quz') doFork('nada') ``` which spawns the fairly boring subprocess: ```js var assert = require('assert') var start = [ parseInt(process.argv[2], 10), parseInt(process.argv[3], 10) ] var message = process.argv[4] var diff = process.hrtime(start) while (diff[0] * 1e9 + diff[1] < 3e9) { diff = process.hrtime(start) } assert.strictEqual(require('./cache-collision-target')(message), message === 'nada' ? undefined : 'this is a ' + message) //assert.strictEqual(process.env.NYC_CWD, __dirname) ``` which in turn requires: ```js module.exports = function (foo) { if (foo === 'foo') { return 'this is a foo' } if (foo === 'bar') { return 'this is a bar' } if (foo === 'baz') { return 'this is a baz' } if (foo === 'quz') { return 'this is a quz' } } ``` I've tried using both `spawn` and `spawnSync` and the issue crops up in both cases. 
I also note that this behavior is new to Node 11: <img width="909" alt="screen shot 2019-01-13 at 6 52 05 pm" src="https://user-images.githubusercontent.com/194609/51094863-595e5180-1765-11e9-92d5-528d5c7279dd.png"> At a glance, this issue seems similar to https://github.com/nodejs/node/issues/25382; but I note that tests run fine on Node 8 and Node 10. @nodejs/process
1.0
child_process: child_process.spawn ENOMEM on Windows - <!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 11 * **Platform**: Windows Server 2016 * **Subsystem**: child_process <!-- Please provide more details below this comment. --> Windows Server 2016 appears to run into memory issues when spawning multiple subprocesses. Here's the test in question in nyc: ````js const path = require('path') const assert = require('assert') const {spawnSync} = require('child_process') const time = process.hrtime() const workerPath = path.join(__dirname, './cache-collision-worker.js') function doFork (message) { const output = spawnSync(process.execPath, [workerPath, String(time[0]), String(time[1]), message]) assert.equal(output.status, 0, 'received non-zero exit code ' + output.status) } doFork('foo') doFork('bar') doFork('baz') doFork('quz') doFork('nada') ``` which spawns the fairly boring subprocess: ```js var assert = require('assert') var start = [ parseInt(process.argv[2], 10), parseInt(process.argv[3], 10) ] var message = process.argv[4] var diff = process.hrtime(start) while (diff[0] * 1e9 + diff[1] < 3e9) { diff = process.hrtime(start) } assert.strictEqual(require('./cache-collision-target')(message), message === 'nada' ? 
undefined : 'this is a ' + message) //assert.strictEqual(process.env.NYC_CWD, __dirname) ``` which in turn requires: ```js module.exports = function (foo) { if (foo === 'foo') { return 'this is a foo' } if (foo === 'bar') { return 'this is a bar' } if (foo === 'baz') { return 'this is a baz' } if (foo === 'quz') { return 'this is a quz' } } ``` I've tried using both `spawn` and `spawnSync` and the issue crops up in both cases. I also note that this behavior is new to Node 11: <img width="909" alt="screen shot 2019-01-13 at 6 52 05 pm" src="https://user-images.githubusercontent.com/194609/51094863-595e5180-1765-11e9-92d5-528d5c7279dd.png"> At a glance, this issue seems similar to https://github.com/nodejs/node/issues/25382; but I note that tests run fine on Node 8 and Node 10. @nodejs/process
process
child process child process spawn enomem on windows thank you for reporting a possible bug in node js please fill in as much of the template below as you can version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version platform windows server subsystem child process windows server appears to run into memory issues when spawning multiple subprocesses here s the test in question in nyc js const path require path const assert require assert const spawnsync require child process const time process hrtime const workerpath path join dirname cache collision worker js function dofork message const output spawnsync process execpath string time message assert equal output status received non zero exit code output status dofork foo dofork bar dofork baz dofork quz dofork nada which spawns the fairly boring subprocess js var assert require assert var start parseint process argv parseint process argv var message process argv var diff process hrtime start while diff diff diff process hrtime start assert strictequal require cache collision target message message nada undefined this is a message assert strictequal process env nyc cwd dirname which in turn requires js module exports function foo if foo foo return this is a foo if foo bar return this is a bar if foo baz return this is a baz if foo quz return this is a quz i ve tried using both spawn and spawnsync and the issue crops up in both cases i also note that this behavior is new to node img width alt screen shot at pm src at a glance this issue seems similar to but i note that tests run fine on node and node nodejs process
1
342,687
10,320,515,548
IssuesEvent
2019-08-30 20:46:29
GTNewHorizons/NewHorizons
https://api.github.com/repos/GTNewHorizons/NewHorizons
closed
Hope we get a download for TickProfiler
very low priority
Since this is the best tool to identify lag sources on 1.7, I made a ticket to hopefully get another download link, since the old page is down: https://github.com/MinimallyCorrect/TickProfiler/issues/110
1.0
Hope we get a download for TickProfiler - Since this is the best tool to identify lag sources on 1.7, I made a ticket to hopefully get another download link, since the old page is down: https://github.com/MinimallyCorrect/TickProfiler/issues/110
non_process
hope we get a download for tickprofiler since this is the best tool to identify lag sources on i made a ticket to hopefully get another download link since the old page is down
0
5,162
7,933,998,906
IssuesEvent
2018-07-08 14:03:44
frc4571/The-Beelzebub
https://api.github.com/repos/frc4571/The-Beelzebub
closed
Convert GRIP output into hood position
drive-subsystem shooter-subsystem vision-processing
Given we know how far the target is, determine the position of the hood such that we can shoot from the farthest point ( without having to drive too close to the boiler)
1.0
Convert GRIP output into hood position - Given we know how far the target is, determine the position of the hood such that we can shoot from the farthest point ( without having to drive too close to the boiler)
process
convert grip output into hood position given we know how far the target is determine the position of the hood such that we can shoot from the farthest point without having to drive too close to the boiler
1
2,832
5,785,978,106
IssuesEvent
2017-05-01 07:46:11
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
closed
Customised Search input
inprocess
@AllenFang Do you please have an example of using a custom searchField ? I can't get my input text field to call the right onSearchChange function onKeyUp.
1.0
Customised Search input - @AllenFang Do you please have an example of using a custom searchField ? I can't get my input text field to call the right onSearchChange function onKeyUp.
process
customised search input allenfang do you please have an example of using a custom searchfield i can t get my input text field to call the right onsearchchange function onkeyup
1
691,397
23,695,958,300
IssuesEvent
2022-08-29 14:40:08
ooni/probe
https://api.github.com/repos/ooni/probe
closed
oohelperd: add support for prometheus metrics
priority/medium
We discussed which metrics today with @FedericoCeratto and @hellais and we mentioned: - [x] time it takes to serve a request [https://github.com/ooni/probe-cli/pull/897] - [x] number of errors [https://github.com/ooni/probe-cli/pull/897] - [x] number of running workers [https://github.com/ooni/probe-cli/pull/897] - [x] additionally, it should not listen on *:8080 (not strictly prometheus but completementary) [https://github.com/ooni/probe-cli/pull/887] - [x] additionally, we should also have logs (not strictly prometheus but completementary) [https://github.com/ooni/probe-cli/pull/896]
1.0
oohelperd: add support for prometheus metrics - We discussed which metrics today with @FedericoCeratto and @hellais and we mentioned: - [x] time it takes to serve a request [https://github.com/ooni/probe-cli/pull/897] - [x] number of errors [https://github.com/ooni/probe-cli/pull/897] - [x] number of running workers [https://github.com/ooni/probe-cli/pull/897] - [x] additionally, it should not listen on *:8080 (not strictly prometheus but completementary) [https://github.com/ooni/probe-cli/pull/887] - [x] additionally, we should also have logs (not strictly prometheus but completementary) [https://github.com/ooni/probe-cli/pull/896]
non_process
oohelperd add support for prometheus metrics we discussed which metrics today with federicoceratto and hellais and we mentioned time it takes to serve a request number of errors number of running workers additionally it should not listen on not strictly prometheus but completementary additionally we should also have logs not strictly prometheus but completementary
0
280,535
8,683,438,721
IssuesEvent
2018-12-02 18:09:02
NGO-DB/ndb-core
https://api.github.com/repos/NGO-DB/ndb-core
opened
Bug: Adding and deleting processes
Priority: High Type: Bug
Processes cannot be deleted (causes runtime error). Generally not very user-friendly
1.0
Bug: Adding and deleting processes - Processes cannot be deleted (causes runtime error). Generally not very user-friendly
non_process
bug adding and deleting processes processes cannot be deleted causes runtime error generally not very user friendly
0
68,460
3,288,316,425
IssuesEvent
2015-10-29 14:44:55
LetterboxDev/backend
https://api.github.com/repos/LetterboxDev/backend
closed
Add support for perfect match
endpoints priority-high
Fields for perfect match have already been added to the user model. Just need to check if the letter is a perfect match or not.
1.0
Add support for perfect match - Fields for perfect match have already been added to the user model. Just need to check if the letter is a perfect match or not.
non_process
add support for perfect match fields for perfect match have already been added to the user model just need to check if the letter is a perfect match or not
0
3,039
6,039,801,001
IssuesEvent
2017-06-10 07:34:05
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Use scripts to control lobby + bots, drop 'screens' (ui) support
discussion type: process
I am proposing we remove the UI that is used to launch and manage bots and the lobby in favor of just having management/admin scripts. I think we can get to full feature parity without too much difficult and for much lower cost taking this approach. For example, it's pretty easy to build a 'restart_lobby', 'restart_bot' script, etc... Listed below are some of the drawbacks and costs to having a UI / 'screens' support: - requires a shared session, not good for server management - greatly complicates deployment, interactions have to be done from a UI. This adds time and effort to starting and managing N bots. Ultimately not very sustainable since instances need to be hand managed effectively. - server requires extra memory and CPU resources needed to run the UI. - lots of code is used to support these features. Removing this code would lower development costs.
1.0
Use scripts to control lobby + bots, drop 'screens' (ui) support - I am proposing we remove the UI that is used to launch and manage bots and the lobby in favor of just having management/admin scripts. I think we can get to full feature parity without too much difficult and for much lower cost taking this approach. For example, it's pretty easy to build a 'restart_lobby', 'restart_bot' script, etc... Listed below are some of the drawbacks and costs to having a UI / 'screens' support: - requires a shared session, not good for server management - greatly complicates deployment, interactions have to be done from a UI. This adds time and effort to starting and managing N bots. Ultimately not very sustainable since instances need to be hand managed effectively. - server requires extra memory and CPU resources needed to run the UI. - lots of code is used to support these features. Removing this code would lower development costs.
process
use scripts to control lobby bots drop screens ui support i am proposing we remove the ui that is used to launch and manage bots and the lobby in favor of just having management admin scripts i think we can get to full feature parity without too much difficult and for much lower cost taking this approach for example it s pretty easy to build a restart lobby restart bot script etc listed below are some of the drawbacks and costs to having a ui screens support requires a shared session not good for server management greatly complicates deployment interactions have to be done from a ui this adds time and effort to starting and managing n bots ultimately not very sustainable since instances need to be hand managed effectively server requires extra memory and cpu resources needed to run the ui lots of code is used to support these features removing this code would lower development costs
1
50,578
6,102,921,280
IssuesEvent
2017-06-20 17:34:15
dydrich/e-school
https://api.github.com/repos/dydrich/e-school
closed
web reports for parents
request testing urgent
viewing and downloading of public reports in the parents' area
1.0
web reports for parents - viewing and downloading of public reports in the parents' area
non_process
web reports for parents viewing and downloading of public reports in the parents area
0
155,306
24,443,341,718
IssuesEvent
2022-10-06 15:58:32
IATI/IATI-Pattern-Library
https://api.github.com/repos/IATI/IATI-Pattern-Library
closed
Remove the publisher count from the homepage
visual design
We've had a request from Annaliese to remove the active publisher count from the homepage. She has said "you can move it somewhere else, but not on the front page where it has such a prominent focus". This is because we want to move to counting only our active publishers, but its all a bit complicated.
1.0
Remove the publisher count from the homepage - We've had a request from Annaliese to remove the active publisher count from the homepage. She has said "you can move it somewhere else, but not on the front page where it has such a prominent focus". This is because we want to move to counting only our active publishers, but its all a bit complicated.
non_process
remove the publisher count from the homepage we ve had a request from annaliese to remove the active publisher count from the homepage she has said you can move it somewhere else but not on the front page where it has such a prominent focus this is because we want to move to counting only our active publishers but its all a bit complicated
0
4,476
7,341,427,226
IssuesEvent
2018-03-07 02:01:30
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
MFA Server blade - only for on-premises MFA server?
cxp in-process multi-factor-authentication product-question triaged
Are all settings in the "MFA Server" blade ("Unlock account", "Block/unblock users", "Caching", etc.) only meant for use with the on-premises MFA server or are these settings also meant for use with the MFA in cloud deployment flavor? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 56319b59-e0f6-f0a7-c853-99cebcb5711a * Version Independent ID: d9e50b54-3eee-d0cf-5fcf-706e06b94fab * [Content](https://docs.microsoft.com/en-us/azure/multi-factor-authentication/multi-factor-authentication-whats-next) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md) * Service: multi-factor-authentication
1.0
MFA Server blade - only for on-premises MFA server? - Are all settings in the "MFA Server" blade ("Unlock account", "Block/unblock users", "Caching", etc.) only meant for use with the on-premises MFA server or are these settings also meant for use with the MFA in cloud deployment flavor? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 56319b59-e0f6-f0a7-c853-99cebcb5711a * Version Independent ID: d9e50b54-3eee-d0cf-5fcf-706e06b94fab * [Content](https://docs.microsoft.com/en-us/azure/multi-factor-authentication/multi-factor-authentication-whats-next) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/multi-factor-authentication/multi-factor-authentication-whats-next.md) * Service: multi-factor-authentication
process
mfa server blade only for on premises mfa server are all settings in the mfa server blade unlock account block unblock users caching etc only meant for use with the on premises mfa server or are these settings also meant for use with the mfa in cloud deployment flavor document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service multi factor authentication
1
252,414
21,576,742,794
IssuesEvent
2022-05-02 14:27:57
rasterio/rasterio
https://api.github.com/repos/rasterio/rasterio
opened
Allow testing against GDAL dev branch
testing
Since switching to using GDAL's docker images for testing we don't have a great way to test against unreleased versions of GDAL. For example, we don't know if rasterio will have problems with GDAL 3.4.3.
1.0
Allow testing against GDAL dev branch - Since switching to using GDAL's docker images for testing we don't have a great way to test against unreleased versions of GDAL. For example, we don't know if rasterio will have problems with GDAL 3.4.3.
non_process
allow testing against gdal dev branch since switching to using gdal s docker images for testing we don t have a great way to test against unreleased versions of gdal for example we don t know if rasterio will have problems with gdal
0
224,931
17,782,841,654
IssuesEvent
2021-08-31 07:32:50
elastic/kibana
https://api.github.com/repos/elastic/kibana
reopened
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/timelion/feature_controls/timelion_securityยทts - Timelion feature controls security global timelion all privileges allows a timelion sheet to be created
Team:KibanaApp failed-test Feature:TimelionApp
A test failed on a tracked branch ``` Error: expected testSubject(timelionSaveSuccessToast) to exist at TestSubjects.existOrFail (/dev/shm/workspace/parallel/22/kibana/test/functional/services/common/test_subjects.ts:45:13) at TimelionPageObject.saveTimelionSheet (/dev/shm/workspace/parallel/22/kibana/test/functional/page_objects/timelion_page.ts:67:5) at Context.<anonymous> (test/functional/apps/timelion/feature_controls/timelion_security.ts:87:9) at Object.apply (/dev/shm/workspace/parallel/22/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/16578/) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/timelion/feature_controls/timelion_securityยทts","test.name":"Timelion feature controls security global timelion all privileges allows a timelion sheet to be created","test.failCount":5}} -->
1.0
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/timelion/feature_controls/timelion_securityยทts - Timelion feature controls security global timelion all privileges allows a timelion sheet to be created - A test failed on a tracked branch ``` Error: expected testSubject(timelionSaveSuccessToast) to exist at TestSubjects.existOrFail (/dev/shm/workspace/parallel/22/kibana/test/functional/services/common/test_subjects.ts:45:13) at TimelionPageObject.saveTimelionSheet (/dev/shm/workspace/parallel/22/kibana/test/functional/page_objects/timelion_page.ts:67:5) at Context.<anonymous> (test/functional/apps/timelion/feature_controls/timelion_security.ts:87:9) at Object.apply (/dev/shm/workspace/parallel/22/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/16578/) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/timelion/feature_controls/timelion_securityยทts","test.name":"Timelion feature controls security global timelion all privileges allows a timelion sheet to be created","test.failCount":5}} -->
non_process
failing test chrome x pack ui functional tests x pack test functional apps timelion feature controls timelion securityยทts timelion feature controls security global timelion all privileges allows a timelion sheet to be created a test failed on a tracked branch error expected testsubject timelionsavesuccesstoast to exist at testsubjects existorfail dev shm workspace parallel kibana test functional services common test subjects ts at timelionpageobject savetimelionsheet dev shm workspace parallel kibana test functional page objects timelion page ts at context test functional apps timelion feature controls timelion security ts at object apply dev shm workspace parallel kibana node modules kbn test target node functional test runner lib mocha wrap function js first failure
0
13,398
15,873,603,321
IssuesEvent
2021-04-09 02:51:19
Azure/bicep
https://api.github.com/repos/Azure/bicep
opened
Release checklist
process
The following steps must be performed for an official release: - [ ] Run official build - [ ] Get version number from official build and push a new tag - [ ] Create draft release for the new tag - [ ] Use `git shortlog <previous tag>..<new tag>` as the draft release description - [ ] Run `./scripts/UploadSignedReleaseArtifacts.ps1` to add official artifacts to the release. - [ ] Clean up release notes - [ ] Validate VSIX and Bicep CLI manually on most common platforms - [ ] Upload VSIX to VS gallery - [ ] Upload NuGet packages to nuget.org via `./scripts/PublishPackages.ps1` - [ ] [Update homebrew](https://github.com/Azure/homebrew-bicep/actions/workflows/update-homebrew.yml)
1.0
Release checklist - The following steps must be performed for an official release: - [ ] Run official build - [ ] Get version number from official build and push a new tag - [ ] Create draft release for the new tag - [ ] Use `git shortlog <previous tag>..<new tag>` as the draft release description - [ ] Run `./scripts/UploadSignedReleaseArtifacts.ps1` to add official artifacts to the release. - [ ] Clean up release notes - [ ] Validate VSIX and Bicep CLI manually on most common platforms - [ ] Upload VSIX to VS gallery - [ ] Upload NuGet packages to nuget.org via `./scripts/PublishPackages.ps1` - [ ] [Update homebrew](https://github.com/Azure/homebrew-bicep/actions/workflows/update-homebrew.yml)
process
release checklist the following steps must be performed for an official release run official build get version number from official build and push a new tag create draft release for the new tag use git shortlog as the draft release description run scripts uploadsignedreleaseartifacts to add official artifacts to the release clean up release notes validate vsix and bicep cli manually on most common platforms upload vsix to vs gallery upload nuget packages to nuget org via scripts publishpackages
1
163,033
6,188,562,727
IssuesEvent
2017-07-04 10:29:02
zero-os/0-orchestrator
https://api.github.com/repos/zero-os/0-orchestrator
closed
Install script should only check validity of the given zerotier network
priority_major state_verification type_bug
In this line in the install script : https://github.com/zero-os/0-orchestrator/blob/master/scripts/install-orchestrator.sh#L82 We should also grep on the zerotier network id. Cause the docker could also have some other network but that should not impact on the install of the orchetrator
1.0
Install script should only check validity of the given zerotier network - In this line in the install script : https://github.com/zero-os/0-orchestrator/blob/master/scripts/install-orchestrator.sh#L82 We should also grep on the zerotier network id. Cause the docker could also have some other network but that should not impact on the install of the orchetrator
non_process
install script should only check validity of the given zerotier network in this line in the install script we should also grep on the zerotier network id cause the docker could also have some other network but that should not impact on the install of the orchetrator
0
229
2,653,378,701
IssuesEvent
2015-03-16 22:58:30
camsci/meteor-pi
https://api.github.com/repos/camsci/meteor-pi
closed
WP1.5 - Camera location detection
domain:Image Processing Top Level Functionality
Use astrometrics to determine the camera orientation along with GPS for absolute position.
1.0
WP1.5 - Camera location detection - Use astrometrics to determine the camera orientation along with GPS for absolute position.
process
camera location detection use astrometrics to determine the camera orientation along with gps for absolute position
1
450,522
13,012,844,181
IssuesEvent
2020-07-25 08:11:24
buttercup/buttercup-mobile
https://api.github.com/repos/buttercup/buttercup-mobile
opened
Remove dependency on Google APIs for sign-in
Effort: High Platform: Android Priority: Medium Status: Available Type: Enhancement Type: Maintenance
Currently (via react-native-google-signin) Google APIs are required when signing-in to Google Drive. Either find a way to remove this dependency (the Google APIs), or find a way to strip it at build time to produce a second copy of the app without Google Drive support.
1.0
Remove dependency on Google APIs for sign-in - Currently (via react-native-google-signin) Google APIs are required when signing-in to Google Drive. Either find a way to remove this dependency (the Google APIs), or find a way to strip it at build time to produce a second copy of the app without Google Drive support.
non_process
remove dependency on google apis for sign in currently via react native google signin google apis are required when signing in to google drive either find a way to remove this dependency the google apis or find a way to strip it at build time to produce a second copy of the app without google drive support
0
5,140
7,923,463,624
IssuesEvent
2018-07-05 14:08:54
SlicerIGT/SlicerIGT
https://api.github.com/repos/SlicerIGT/SlicerIGT
closed
Allow 'None' selections in TransformProcessor
TransformProcessor bug
Treat 'None' as the world (Ras) coordinate system.
1.0
Allow 'None' selections in TransformProcessor - Treat 'None' as the world (Ras) coordinate system.
process
allow none selections in transformprocessor treat none as the world ras coordinate system
1
352,339
10,540,602,046
IssuesEvent
2019-10-02 08:48:19
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.nature.com - design is broken
browser-firefox engine-gecko os-linux priority-important
<!-- @browser: Firefox 71.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.nature.com/immersive/d41586-019-02711-4/index.html **Browser / Version**: Firefox 71.0 **Operating System**: Ubuntu **Tested Another Browser**: Yes **Problem type**: Design is broken **Description**: Images appear blurry and essential content is missing **Steps to Reproduce**: I tried the site on both Firefox Nightly and Chromium. The content appears properly on Chromium and appears broken in Nightly. [![Screenshot Description](https://webcompat.com/uploads/2019/9/54916ae6-bcb7-4e81-b085-f76cb64c33f5-thumb.jpeg)](https://webcompat.com/uploads/2019/9/54916ae6-bcb7-4e81-b085-f76cb64c33f5.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190913054852</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> <p>Console Messages:</p> <pre> [{'level': 'error', 'log': ['EncodingError: Invalid image request.'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': ['Loading mixed (insecure) display content http://www.nature.com/favicon.ico?foxtrotcallback=true on a secure page'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenchange is deprecated.'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenerror is deprecated.'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.nature.com - design is broken - <!-- @browser: Firefox 71.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://www.nature.com/immersive/d41586-019-02711-4/index.html **Browser / Version**: Firefox 71.0 **Operating System**: Ubuntu **Tested Another Browser**: Yes **Problem type**: Design is broken **Description**: Images appear blurry and essential content is missing **Steps to Reproduce**: I tried the site on both Firefox Nightly and Chromium. The content appears properly on Chromium and appears broken in Nightly. [![Screenshot Description](https://webcompat.com/uploads/2019/9/54916ae6-bcb7-4e81-b085-f76cb64c33f5-thumb.jpeg)](https://webcompat.com/uploads/2019/9/54916ae6-bcb7-4e81-b085-f76cb64c33f5.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190913054852</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> <p>Console Messages:</p> <pre> [{'level': 'error', 'log': ['EncodingError: Invalid image request.'], 'uri': '', 'pos': '0:0'}, {'level': 'warn', 'log': ['Loading mixed (insecure) display content http://www.nature.com/favicon.ico?foxtrotcallback=true on a secure page'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenchange is deprecated.'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenerror is deprecated.'], 'uri': 'https://www.nature.com/immersive/d41586-019-02711-4/index.html', 'pos': '0:0'}] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
design is broken url browser version firefox operating system ubuntu tested another browser yes problem type design is broken description images appear blurry and essential content is missing steps to reproduce i tried the site on both firefox nightly and chromium the content appears properly on chromium and appears broken in nightly browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false console messages uri pos level warn log uri pos level warn log uri pos level warn log uri pos from with ❤️
0
3,440
6,537,317,049
IssuesEvent
2017-08-31 21:51:49
pburns96/Revature-VenderBender
https://api.github.com/repos/pburns96/Revature-VenderBender
closed
As a customer, I want to be able to view my shopping cart history
Medium Priority Work In Process
Prerequisites: -Shopping cart Task List: -view all purchases -sort by different periods of time -each purchase will have each item I have purchased and the quantity
1.0
As a customer, I want to be able to view my shopping cart history - Prerequisites: -Shopping cart Task List: -view all purchases -sort by different periods of time -each purchase will have each item I have purchased and the quantity
process
as a customer i want to be able to view my shopping cart history prerequisites shopping cart task list view all purchases sort by different periods of time each purchase will have each item i have purchased and the quantity
1
461,390
13,229,517,922
IssuesEvent
2020-08-18 08:18:20
yalla-coop/presspad
https://api.github.com/repos/yalla-coop/presspad
closed
Create Organisation Settings
3-points Backend Frontend backlog priority-5
- [ ] Set up Settings route - [ ] Settings page in line with wireframes: https://www.figma.com/file/CMkMSsbTLjpitcetLUunz9/PressPad?node-id=2879%3A65 **Across all** - [ ] Any changes show changes saved when user clicks save button - [ ] Client side validation to ensure user doesn't delete any fields that are required **My Account** - [ ] Clicking change my password shows two inputs for user to enter old and new password - [ ] Client side validation to ensure old and new passwords aren't the same - [ ] Password validation - At least 8 characters, 1 upper case, 1 lower case and 1 number **Details** This is the section in the sign up account details - [ ] Pre-fill inputs with answers from when the user signed up **Profile** This is the section in the sign up profile - [ ] Pre-fill inputs with answers from when the user signed up
1.0
Create Organisation Settings - - [ ] Set up Settings route - [ ] Settings page in line with wireframes: https://www.figma.com/file/CMkMSsbTLjpitcetLUunz9/PressPad?node-id=2879%3A65 **Across all** - [ ] Any changes show changes saved when user clicks save button - [ ] Client side validation to ensure user doesn't delete any fields that are required **My Account** - [ ] Clicking change my password shows two inputs for user to enter old and new password - [ ] Client side validation to ensure old and new passwords aren't the same - [ ] Password validation - At least 8 characters, 1 upper case, 1 lower case and 1 number **Details** This is the section in the sign up account details - [ ] Pre-fill inputs with answers from when the user signed up **Profile** This is the section in the sign up profile - [ ] Pre-fill inputs with answers from when the user signed up
non_process
create organisation settings set up settings route settings page in line with wireframes across all any changes show changes saved when user clicks save button client side validation to ensure user doesn t delete any fields that are required my account clicking change my password shows two inputs for user to enter old and new password client side validation to ensure old and new passwords aren t the same password validation at least characters upper case lower case and number details this is the section in the sign up account details pre fill inputs with answers from when the user signed up profile this is the section in the sign up profile pre fill inputs with answers from when the user signed up
0
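The password rule listed in the record above (at least 8 characters, 1 upper case, 1 lower case and 1 number) can be sketched as a client-side check. The function name below is hypothetical and not from the PressPad codebase:

```javascript
// Hypothetical validator for the rule described in the issue:
// at least 8 characters, 1 upper case, 1 lower case and 1 number.
function isValidPassword(pw) {
  return pw.length >= 8
    && /[A-Z]/.test(pw)   // at least one upper case letter
    && /[a-z]/.test(pw)   // at least one lower case letter
    && /[0-9]/.test(pw);  // at least one digit
}
```

The record's other rule (old and new passwords must differ) reduces to a plain `oldPw !== newPw` comparison before submitting.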
647,437
21,103,591,992
IssuesEvent
2022-04-04 16:30:46
zeyneplervesarp/swe574-javagang
https://api.github.com/repos/zeyneplervesarp/swe574-javagang
reopened
Backend for flagging feature
backend medium priority difficulty-medium
new table for flagging + service and endpoint (just for flagging).
1.0
Backend for flagging feature - new table for flagging + service and endpoint (just for flagging).
non_process
backend for flagging feature new table for flagging service and endpoint just for flagging
0
24,011
4,055,684,613
IssuesEvent
2016-05-24 16:10:48
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
circleci: failed tests: TestAdminLossOfQuorum
Robot test-failure
The following test appears to have failed: [#17811](https://circleci.com/gh/cockroachdb/cockroach/17811): ``` I160513 21:50:30.410286 acceptance/cluster/docker.go:328 unable to create container volumes: Error response from daemon: Conflict. The name "/volumes" is already in use by container cc2fce78b101c70b94319f590cdd70169ca06c913097c56dd9fc7917840b1b7e. You have to remove (or rename) that container to be able to reuse that name. W160513 21:50:30.410831 acceptance/cluster/docker.go:355 error indicated existing container volumes, but none found: error: Error response from daemon: Conflict. The name "/volumes" is already in use by container cc2fce78b101c70b94319f590cdd70169ca06c913097c56dd9fc7917840b1b7e. You have to remove (or rename) that container to be able to reuse that name. containers: [{ID:d8803bdf5016c35ae34b34dcc82f8e1bdfc1c9ddf3b91423a39f7fa1314cae32 Names:[/condescending_heyrovsky] Image:cockroachdb/builder:20160305-182433 ImageID:sha256:26823585ed7ee28fa547f4a91d16d6f26765bfcc3b49e8befcd6f67189ad4617 Command:make test 'TESTFLAGS=-v --verbosity=1 --vmodule=monitor=2,tracer=2' Created:1463175962 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State: Status:Exited (0) 10 seconds ago HostConfig:{NetworkMode:default} NetworkSettings:0xc820118328 Mounts:[]} {ID:588ab0fe0e54046690570c19706e067ca63bdbe2f7d74f74409ba52b9970d20b Names:[/sharp_liskov] Image:cockroachdb/builder:20160305-182433 ImageID:sha256:26823585ed7ee28fa547f4a91d16d6f26765bfcc3b49e8befcd6f67189ad4617 Command:build/circle-deps.sh docker Created:1463175105 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State: Status:Exited (0) 4 minutes ago HostConfig:{NetworkMode:default} NetworkSettings:0xc820118338 Mounts:[]}] I160513 21:50:30.410876 acceptance/cluster/localcluster.go:658 stopping --- FAIL: TestAdminLossOfQuorum (16.37s) panic: ContainerCreate: exceeded 10 tries with a 10s timeout [recovered] panic: ContainerCreate: exceeded 10 tries with a 10s timeout [recovered] panic: ContainerCreate: exceeded 10 
tries with a 10s timeout goroutine 24 [running]: panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:464 +0x3e6 testing.tRunner.func1(0xc8203f8cf0) /usr/local/go/src/testing/testing.go:467 +0x192 panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:426 +0x4e9 github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).stopOnPanic(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:238 +0x9b panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:426 +0x4e9 github.com/cockroachdb/cockroach/acceptance/cluster.maybePanic(0x7f35a7469028, 0xc8202ab050) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/docker.go:166 +0x4b github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).initCluster(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:366 +0x11ff github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).Start(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:589 +0xd3 github.com/cockroachdb/cockroach/acceptance.StartCluster(0xc8203f8cf0, 0xc8204152d0, 0x9, 0xc820428bc0, 0x1, 0x1, 0x12a05f200, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/acceptance/util_test.go:187 +0x3d1 github.com/cockroachdb/cockroach/acceptance.runTestOnConfigs.func1(0xc8203f8cf0, 0xc82004deb8, 0x1bd49e0) -- testing.tRunner(0xc8203f8cf0, 0x235a780) /usr/local/go/src/testing/testing.go:473 +0x98 created by testing.RunTests /usr/local/go/src/testing/testing.go:582 +0x892 goroutine 1 [chan receive]: testing.RunTests(0x1bd6b08, 0x235a780, 0x18, 0x18, 0xc82025bd01) /usr/local/go/src/testing/testing.go:583 +0x8d2 testing.(*M).Run(0xc8204d7eb8, 0x1bd4958) /usr/local/go/src/testing/testing.go:515 +0x81 github.com/cockroachdb/cockroach/acceptance.TestMain(0xc8204d7eb8) /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:48 +0x3e main.main() 
github.com/cockroachdb/cockroach/acceptance/_test/_testmain.go:98 +0x114 goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 20 [chan receive]: github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x2634900) /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64 created by github.com/cockroachdb/cockroach/util/log.init.1 /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a goroutine 7 [select, locked to thread]: runtime.gopark(0x1bd7358, 0xc820022f28, 0x184f610, 0x6, 0x18, 0x2) /usr/local/go/src/runtime/proc.go:262 +0x163 runtime.selectgoImpl(0xc820022f28, 0x0, 0x18) /usr/local/go/src/runtime/select.go:392 +0xa67 runtime.selectgo(0xc820022f28) /usr/local/go/src/runtime/select.go:215 +0x12 runtime.ensureSigM.func1() /usr/local/go/src/runtime/signal1_unix.go:279 +0x358 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 22 [syscall]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:116 +0x132 os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:22 +0x18 created by os/signal.init.1 /usr/local/go/src/os/signal/signal_unix.go:28 +0x37 goroutine 23 [chan receive]: github.com/cockroachdb/cockroach/acceptance.TestMain.func1() /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:39 +0xd8 created by github.com/cockroachdb/cockroach/acceptance.TestMain /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:47 +0x30 goroutine 36 [IO wait]: net.runtime_pollWait(0x7f35a743af68, 0x72, 0xc820430000) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc8203383e0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc8203383e0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820338380, 0xc820430000, 0x1000, 0x1000, 0x0, 0x7f35a7469050, 0xc820014200) /usr/local/go/src/net/fd_unix.go:250 +0x23a 
net.(*conn).Read(0xc820118098, 0xc820430000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 -- net/http.(*persistConn).readLoop(0xc8204080d0) /usr/local/go/src/net/http/transport.go:1069 +0x177 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:853 +0x10a6 goroutine 37 [select]: net/http.(*persistConn).writeLoop(0xc8204080d0) /usr/local/go/src/net/http/transport.go:1273 +0x472 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:854 +0x10cb ok github.com/cockroachdb/cockroach/acceptance 1337s ``` Please assign, take a look and update the issue accordingly.
1.0
circleci: failed tests: TestAdminLossOfQuorum - The following test appears to have failed: [#17811](https://circleci.com/gh/cockroachdb/cockroach/17811): ``` I160513 21:50:30.410286 acceptance/cluster/docker.go:328 unable to create container volumes: Error response from daemon: Conflict. The name "/volumes" is already in use by container cc2fce78b101c70b94319f590cdd70169ca06c913097c56dd9fc7917840b1b7e. You have to remove (or rename) that container to be able to reuse that name. W160513 21:50:30.410831 acceptance/cluster/docker.go:355 error indicated existing container volumes, but none found: error: Error response from daemon: Conflict. The name "/volumes" is already in use by container cc2fce78b101c70b94319f590cdd70169ca06c913097c56dd9fc7917840b1b7e. You have to remove (or rename) that container to be able to reuse that name. containers: [{ID:d8803bdf5016c35ae34b34dcc82f8e1bdfc1c9ddf3b91423a39f7fa1314cae32 Names:[/condescending_heyrovsky] Image:cockroachdb/builder:20160305-182433 ImageID:sha256:26823585ed7ee28fa547f4a91d16d6f26765bfcc3b49e8befcd6f67189ad4617 Command:make test 'TESTFLAGS=-v --verbosity=1 --vmodule=monitor=2,tracer=2' Created:1463175962 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State: Status:Exited (0) 10 seconds ago HostConfig:{NetworkMode:default} NetworkSettings:0xc820118328 Mounts:[]} {ID:588ab0fe0e54046690570c19706e067ca63bdbe2f7d74f74409ba52b9970d20b Names:[/sharp_liskov] Image:cockroachdb/builder:20160305-182433 ImageID:sha256:26823585ed7ee28fa547f4a91d16d6f26765bfcc3b49e8befcd6f67189ad4617 Command:build/circle-deps.sh docker Created:1463175105 Ports:[] SizeRw:0 SizeRootFs:0 Labels:map[] State: Status:Exited (0) 4 minutes ago HostConfig:{NetworkMode:default} NetworkSettings:0xc820118338 Mounts:[]}] I160513 21:50:30.410876 acceptance/cluster/localcluster.go:658 stopping --- FAIL: TestAdminLossOfQuorum (16.37s) panic: ContainerCreate: exceeded 10 tries with a 10s timeout [recovered] panic: ContainerCreate: exceeded 10 tries with a 10s timeout 
[recovered] panic: ContainerCreate: exceeded 10 tries with a 10s timeout goroutine 24 [running]: panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:464 +0x3e6 testing.tRunner.func1(0xc8203f8cf0) /usr/local/go/src/testing/testing.go:467 +0x192 panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:426 +0x4e9 github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).stopOnPanic(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:238 +0x9b panic(0x162b6e0, 0xc8202ab050) /usr/local/go/src/runtime/panic.go:426 +0x4e9 github.com/cockroachdb/cockroach/acceptance/cluster.maybePanic(0x7f35a7469028, 0xc8202ab050) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/docker.go:166 +0x4b github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).initCluster(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:366 +0x11ff github.com/cockroachdb/cockroach/acceptance/cluster.(*LocalCluster).Start(0xc8203b0540) /go/src/github.com/cockroachdb/cockroach/acceptance/cluster/localcluster.go:589 +0xd3 github.com/cockroachdb/cockroach/acceptance.StartCluster(0xc8203f8cf0, 0xc8204152d0, 0x9, 0xc820428bc0, 0x1, 0x1, 0x12a05f200, 0x0, 0x0) /go/src/github.com/cockroachdb/cockroach/acceptance/util_test.go:187 +0x3d1 github.com/cockroachdb/cockroach/acceptance.runTestOnConfigs.func1(0xc8203f8cf0, 0xc82004deb8, 0x1bd49e0) -- testing.tRunner(0xc8203f8cf0, 0x235a780) /usr/local/go/src/testing/testing.go:473 +0x98 created by testing.RunTests /usr/local/go/src/testing/testing.go:582 +0x892 goroutine 1 [chan receive]: testing.RunTests(0x1bd6b08, 0x235a780, 0x18, 0x18, 0xc82025bd01) /usr/local/go/src/testing/testing.go:583 +0x8d2 testing.(*M).Run(0xc8204d7eb8, 0x1bd4958) /usr/local/go/src/testing/testing.go:515 +0x81 github.com/cockroachdb/cockroach/acceptance.TestMain(0xc8204d7eb8) /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:48 +0x3e main.main() 
github.com/cockroachdb/cockroach/acceptance/_test/_testmain.go:98 +0x114 goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 20 [chan receive]: github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x2634900) /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64 created by github.com/cockroachdb/cockroach/util/log.init.1 /go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a goroutine 7 [select, locked to thread]: runtime.gopark(0x1bd7358, 0xc820022f28, 0x184f610, 0x6, 0x18, 0x2) /usr/local/go/src/runtime/proc.go:262 +0x163 runtime.selectgoImpl(0xc820022f28, 0x0, 0x18) /usr/local/go/src/runtime/select.go:392 +0xa67 runtime.selectgo(0xc820022f28) /usr/local/go/src/runtime/select.go:215 +0x12 runtime.ensureSigM.func1() /usr/local/go/src/runtime/signal1_unix.go:279 +0x358 runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 goroutine 22 [syscall]: os/signal.signal_recv(0x0) /usr/local/go/src/runtime/sigqueue.go:116 +0x132 os/signal.loop() /usr/local/go/src/os/signal/signal_unix.go:22 +0x18 created by os/signal.init.1 /usr/local/go/src/os/signal/signal_unix.go:28 +0x37 goroutine 23 [chan receive]: github.com/cockroachdb/cockroach/acceptance.TestMain.func1() /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:39 +0xd8 created by github.com/cockroachdb/cockroach/acceptance.TestMain /go/src/github.com/cockroachdb/cockroach/acceptance/main_test.go:47 +0x30 goroutine 36 [IO wait]: net.runtime_pollWait(0x7f35a743af68, 0x72, 0xc820430000) /usr/local/go/src/runtime/netpoll.go:160 +0x60 net.(*pollDesc).Wait(0xc8203383e0, 0x72, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a net.(*pollDesc).WaitRead(0xc8203383e0, 0x0, 0x0) /usr/local/go/src/net/fd_poll_runtime.go:78 +0x36 net.(*netFD).Read(0xc820338380, 0xc820430000, 0x1000, 0x1000, 0x0, 0x7f35a7469050, 0xc820014200) /usr/local/go/src/net/fd_unix.go:250 +0x23a 
net.(*conn).Read(0xc820118098, 0xc820430000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:172 +0xe4 -- net/http.(*persistConn).readLoop(0xc8204080d0) /usr/local/go/src/net/http/transport.go:1069 +0x177 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:853 +0x10a6 goroutine 37 [select]: net/http.(*persistConn).writeLoop(0xc8204080d0) /usr/local/go/src/net/http/transport.go:1273 +0x472 created by net/http.(*Transport).dialConn /usr/local/go/src/net/http/transport.go:854 +0x10cb ok github.com/cockroachdb/cockroach/acceptance 1337s ``` Please assign, take a look and update the issue accordingly.
non_process
circleci failed tests testadminlossofquorum the following test appears to have failed acceptance cluster docker go unable to create container volumes error response from daemon conflict the name volumes is already in use by container you have to remove or rename that container to be able to reuse that name acceptance cluster docker go error indicated existing container volumes but none found error error response from daemon conflict the name volumes is already in use by container you have to remove or rename that container to be able to reuse that name containers image cockroachdb builder imageid command make test testflags v verbosity vmodule monitor tracer created ports sizerw sizerootfs labels map state status exited seconds ago hostconfig networkmode default networksettings mounts id names image cockroachdb builder imageid command build circle deps sh docker created ports sizerw sizerootfs labels map state status exited minutes ago hostconfig networkmode default networksettings mounts acceptance cluster localcluster go stopping fail testadminlossofquorum panic containercreate exceeded tries with a timeout panic containercreate exceeded tries with a timeout panic containercreate exceeded tries with a timeout goroutine panic usr local go src runtime panic go testing trunner usr local go src testing testing go panic usr local go src runtime panic go github com cockroachdb cockroach acceptance cluster localcluster stoponpanic go src github com cockroachdb cockroach acceptance cluster localcluster go panic usr local go src runtime panic go github com cockroachdb cockroach acceptance cluster maybepanic go src github com cockroachdb cockroach acceptance cluster docker go github com cockroachdb cockroach acceptance cluster localcluster initcluster go src github com cockroachdb cockroach acceptance cluster localcluster go github com cockroachdb cockroach acceptance cluster localcluster start go src github com cockroachdb cockroach acceptance cluster localcluster go 
github com cockroachdb cockroach acceptance startcluster go src github com cockroachdb cockroach acceptance util test go github com cockroachdb cockroach acceptance runtestonconfigs testing trunner usr local go src testing testing go created by testing runtests usr local go src testing testing go goroutine testing runtests usr local go src testing testing go testing m run usr local go src testing testing go github com cockroachdb cockroach acceptance testmain go src github com cockroachdb cockroach acceptance main test go main main github com cockroachdb cockroach acceptance test testmain go goroutine runtime goexit usr local go src runtime asm s goroutine github com cockroachdb cockroach util log loggingt flushdaemon go src github com cockroachdb cockroach util log clog go created by github com cockroachdb cockroach util log init go src github com cockroachdb cockroach util log clog go goroutine runtime gopark usr local go src runtime proc go runtime selectgoimpl usr local go src runtime select go runtime selectgo usr local go src runtime select go runtime ensuresigm usr local go src runtime unix go runtime goexit usr local go src runtime asm s goroutine os signal signal recv usr local go src runtime sigqueue go os signal loop usr local go src os signal signal unix go created by os signal init usr local go src os signal signal unix go goroutine github com cockroachdb cockroach acceptance testmain go src github com cockroachdb cockroach acceptance main test go created by github com cockroachdb cockroach acceptance testmain go src github com cockroachdb cockroach acceptance main test go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go net http persistconn readloop usr local go src net http transport go created by net http transport dialconn 
usr local go src net http transport go goroutine net http persistconn writeloop usr local go src net http transport go created by net http transport dialconn usr local go src net http transport go ok github com cockroachdb cockroach acceptance please assign take a look and update the issue accordingly
0
24,741
12,393,284,077
IssuesEvent
2020-05-20 15:11:51
fieldenms/tg
https://api.github.com/repos/fieldenms/tg
opened
On-demand dialog creation
P1 Performance UI / UX
Every master / centre has its own dialog instance (`tg-custom-action-dialog`) that is used for opening of another masters / centres. This includes masters for actions, masters for creating / editing of persistent entities, centres with details etc. Centres even have two dialogs: one for configuration adjustments (resize columns etc.) and other for standard editing / actioning purposes. These all dialogs contribute to DOM size / memory footprint and potentially lead to worse overall performance of client application. The dialog itself is not too big to be created on-demand. But now it is created when parent component attaches. This leads to greater DOM node count and unnecessary dialogs present on all open centres / masters. It would be great to avoid excessive `tg-custom-action-dialog` creation. ### Expected outcome Simpler DOM and lower memory footprint with potentially better overall performance.
True
On-demand dialog creation - Every master / centre has its own dialog instance (`tg-custom-action-dialog`) that is used for opening of another masters / centres. This includes masters for actions, masters for creating / editing of persistent entities, centres with details etc. Centres even have two dialogs: one for configuration adjustments (resize columns etc.) and other for standard editing / actioning purposes. These all dialogs contribute to DOM size / memory footprint and potentially lead to worse overall performance of client application. The dialog itself is not too big to be created on-demand. But now it is created when parent component attaches. This leads to greater DOM node count and unnecessary dialogs present on all open centres / masters. It would be great to avoid excessive `tg-custom-action-dialog` creation. ### Expected outcome Simpler DOM and lower memory footprint with potentially better overall performance.
non_process
on demand dialog creation every master centre has its own dialog instance tg custom action dialog that is used for opening of another masters centres this includes masters for actions masters for creating editing of persistent entities centres with details etc centres even have two dialogs one for configuration adjustments resize columns etc and other for standard editing actioning purposes these all dialogs contribute to dom size memory footprint and potentially lead to worse overall performance of client application the dialog itself is not too big to be created on demand but now it is created when parent component attaches this leads to greater dom node count and unnecessary dialogs present on all open centres masters it would be great to avoid excessive tg custom action dialog creation expected outcome simpler dom and lower memory footprint with potentially better overall performance
0
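The on-demand creation the record above asks for can be sketched as a lazy factory. The names below are hypothetical, not the actual `tg-custom-action-dialog` API: the dialog is built on first access rather than when the parent component attaches, and the same instance is reused afterwards.

```javascript
// Minimal sketch of on-demand dialog creation: no DOM node or memory cost
// is paid until the dialog is first needed.
function makeLazyDialog(createDialog) {
  let dialog = null; // nothing built at attach time
  return function getDialog() {
    if (dialog === null) {
      dialog = createDialog(); // created exactly once, on first access
    }
    return dialog;
  };
}
```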
16,293
20,921,865,930
IssuesEvent
2022-03-24 18:10:31
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Launching node through the dynamic linker breaks fork()
child_process
### Version v14.17.5 ### Platform Linux (RHEL) 3.10.0-862.el7.x86_64 ### Subsystem _No response_ ### What steps will reproduce the bug? parent.js ```javascript const cp = require("child_process"); console.log("parent.js") cp.fork("child.js") ``` child.js ```javascript console.log("child.js") ``` ```bash /lib64/ld-linux-x86-64.so.2 $(which node) parent.js ``` ### How often does it reproduce? Is there a required condition? Always ### What is the expected behavior? Should print ```console parent.js child.js ``` ### What do you see instead? Prints ```console parent.js child.js: error while loading shared libraries: child.js: cannot open shared object file ``` ### Additional information The error message seems to come from the dynamic linker, as it is what you'd get if you try to run ```bash /lib64/ld-linux-x86-64.so.2 child.js ``` `spawn()`, `exec` and `execFile` work as expected. Problem was reproduced with glibc 2.12, 2.14, 2.17 and 2.35. I assumed the problem came from argv[0] being different from what the function would expect but setting it manually with the `--argv0` option had no effect.
1.0
Launching node through the dynamic linker breaks fork() - ### Version v14.17.5 ### Platform Linux (RHEL) 3.10.0-862.el7.x86_64 ### Subsystem _No response_ ### What steps will reproduce the bug? parent.js ```javascript const cp = require("child_process"); console.log("parent.js") cp.fork("child.js") ``` child.js ```javascript console.log("child.js") ``` ```bash /lib64/ld-linux-x86-64.so.2 $(which node) parent.js ``` ### How often does it reproduce? Is there a required condition? Always ### What is the expected behavior? Should print ```console parent.js child.js ``` ### What do you see instead? Prints ```console parent.js child.js: error while loading shared libraries: child.js: cannot open shared object file ``` ### Additional information The error message seems to come from the dynamic linker, as it is what you'd get if you try to run ```bash /lib64/ld-linux-x86-64.so.2 child.js ``` `spawn()`, `exec` and `execFile` work as expected. Problem was reproduced with glibc 2.12, 2.14, 2.17 and 2.35. I assumed the problem came from argv[0] being different from what the function would expect but setting it manually with the `--argv0` option had no effect.
process
launching node through the dynamic linker breaks fork version platform linux rhel subsystem no response what steps will reproduce the bug parent js javascript const cp require child process console log parent js cp fork child js child js javascript console log child js bash ld linux so which node parent js how often does it reproduce is there a required condition always what is the expected behavior should print console parent js child js what do you see instead prints console parent js child js error while loading shared libraries child js cannot open shared object file additional information the error message seems to come from the dynamic linker as it is what you d get if you try to run bash ld linux so child js spawn exec and execfile work as expected problem was reproduced with glibc and i assumed the problem came from argv being different from what the function would expect but setting it manually with the option had no effect
1
10,039
13,044,161,609
IssuesEvent
2020-07-29 03:47:24
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `SubDateDurationReal` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `SubDateDurationReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `SubDateDurationReal` from TiDB - ## Description Port the scalar function `SubDateDurationReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function subdatedurationreal from tidb description port the scalar function subdatedurationreal from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
1
44,697
12,338,391,268
IssuesEvent
2020-05-14 16:22:44
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
closed
Floating-point error when summing Daylighting Zone Fractions
Defect
We are running into floating-point rounding errors when setting the fraction of a zone controlled by the daylighting reference. Although the decimal representation of these fractions given in the IDF sums to exactly 1, it seems that the floating point representation in EnergyPlus is > 1, which causes a Fatal Error. Would it be possible to tolerate values slightly larger than 1 to accomodate for this case ? Because we are not sure how to proceed when filing the IDF otherwise. ### Details ``` ** Severe ** GetDaylightingControls: Fraction of Zone controlled by the Daylighting reference points is > 1.0. ** ~~~ ** ..discovered in "Daylighting:Controls" for Zone="Z-F0ON27MII", trying to control 1.00 of the zone. ``` Sample Daylighting:Controls that causes EnergyPlus to fail: ``` Daylighting:Controls, z-f0on27mii, ! Name z-f0on27mii, ! Zone Name splitflux, ! Daylighting Method on, ! Availability Schedule Name continuous, ! Lighting Control Type 0.0, ! Minimum Input Power Fraction for Continuous or ContinuousOff Dimming Control 0.0, ! Minimum Light Output Fraction for Continuous or ContinuousOff Dimming Control 1, ! Number of Stepped Control Steps 1.0, ! Probability Lighting will be Reset When Needed in Manual Stepped Control , ! Glare Calculation Daylighting Reference Point Name 0.0, ! Glare Calculation Azimuth Angle of View Direction Clockwise from Zone y-Axis 22.0, ! Maximum Allowable Discomfort Glare Index , ! DElight Gridding Resolution z-f0on27mii|jo_ecl_circu+0, ! Daylighting Reference Point 0 Name 0.1053, ! Fraction of Zone Controlled by Reference Point 0 200.0, ! Illuminance Setpoint at Reference Point 0 z-f0on27mii|jo_ecl_circu+1, ! Daylighting Reference Point 1 Name 0.0936, ! Fraction of Zone Controlled by Reference Point 1 200.0, ! Illuminance Setpoint at Reference Point 1 z-f0on27mii|jo_ecl_circu+2, ! Daylighting Reference Point 2 Name 0.1213, ! Fraction of Zone Controlled by Reference Point 2 200.0, ! 
Illuminance Setpoint at Reference Point 2 z-f0on27mii|jo_ecl_circu+3, ! Daylighting Reference Point 3 Name 0.1018, ! Fraction of Zone Controlled by Reference Point 3 200.0, ! Illuminance Setpoint at Reference Point 3 z-f0on27mii|jo_ecl_circu+4, ! Daylighting Reference Point 4 Name 0.0893, ! Fraction of Zone Controlled by Reference Point 4 200.0, ! Illuminance Setpoint at Reference Point 4 z-f0on27mii|jo_ecl_circu+5, ! Daylighting Reference Point 5 Name 0.0842, ! Fraction of Zone Controlled by Reference Point 5 200.0, ! Illuminance Setpoint at Reference Point 5 z-f0on27mii|jo_ecl_circu+6, ! Daylighting Reference Point 6 Name 0.0882, ! Fraction of Zone Controlled by Reference Point 6 200.0, ! Illuminance Setpoint at Reference Point 6 z-f0on27mii|jo_ecl_circu+7, ! Daylighting Reference Point 7 Name 0.1026, ! Fraction of Zone Controlled by Reference Point 7 200.0, ! Illuminance Setpoint at Reference Point 7 z-f0on27mii|jo_ecl_circu+8, ! Daylighting Reference Point 8 Name 0.1134, ! Fraction of Zone Controlled by Reference Point 8 200.0, ! Illuminance Setpoint at Reference Point 8 z-f0on27mii|jo_ecl_circu+9, ! Daylighting Reference Point 9 Name 0.1003, ! Fraction of Zone Controlled by Reference Point 9 200.0; ! Illuminance Setpoint at Reference Point 9 ``` (note that `1053+0936+1213+1018+0893+0842+0882+1026+1134+1003 = 10000`) Some additional details for this issue (if relevant): - Platform: Ubuntu 18.04 - Version of EnergyPlus: 9.2 ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [x] Defect file added (list location of defect file here): Post has enough info to produce a unit test - [x] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
1.0
Floating-point error when summing Daylighting Zone Fractions - We are running into floating-point rounding errors when setting the fraction of a zone controlled by the daylighting reference. Although the decimal representation of these fractions given in the IDF sums to exactly 1, it seems that the floating point representation in EnergyPlus is > 1, which causes a Fatal Error. Would it be possible to tolerate values slightly larger than 1 to accomodate for this case ? Because we are not sure how to proceed when filing the IDF otherwise. ### Details ``` ** Severe ** GetDaylightingControls: Fraction of Zone controlled by the Daylighting reference points is > 1.0. ** ~~~ ** ..discovered in "Daylighting:Controls" for Zone="Z-F0ON27MII", trying to control 1.00 of the zone. ``` Sample Daylighting:Controls that causes EnergyPlus to fail: ``` Daylighting:Controls, z-f0on27mii, ! Name z-f0on27mii, ! Zone Name splitflux, ! Daylighting Method on, ! Availability Schedule Name continuous, ! Lighting Control Type 0.0, ! Minimum Input Power Fraction for Continuous or ContinuousOff Dimming Control 0.0, ! Minimum Light Output Fraction for Continuous or ContinuousOff Dimming Control 1, ! Number of Stepped Control Steps 1.0, ! Probability Lighting will be Reset When Needed in Manual Stepped Control , ! Glare Calculation Daylighting Reference Point Name 0.0, ! Glare Calculation Azimuth Angle of View Direction Clockwise from Zone y-Axis 22.0, ! Maximum Allowable Discomfort Glare Index , ! DElight Gridding Resolution z-f0on27mii|jo_ecl_circu+0, ! Daylighting Reference Point 0 Name 0.1053, ! Fraction of Zone Controlled by Reference Point 0 200.0, ! Illuminance Setpoint at Reference Point 0 z-f0on27mii|jo_ecl_circu+1, ! Daylighting Reference Point 1 Name 0.0936, ! Fraction of Zone Controlled by Reference Point 1 200.0, ! Illuminance Setpoint at Reference Point 1 z-f0on27mii|jo_ecl_circu+2, ! Daylighting Reference Point 2 Name 0.1213, ! 
Fraction of Zone Controlled by Reference Point 2 200.0, ! Illuminance Setpoint at Reference Point 2 z-f0on27mii|jo_ecl_circu+3, ! Daylighting Reference Point 3 Name 0.1018, ! Fraction of Zone Controlled by Reference Point 3 200.0, ! Illuminance Setpoint at Reference Point 3 z-f0on27mii|jo_ecl_circu+4, ! Daylighting Reference Point 4 Name 0.0893, ! Fraction of Zone Controlled by Reference Point 4 200.0, ! Illuminance Setpoint at Reference Point 4 z-f0on27mii|jo_ecl_circu+5, ! Daylighting Reference Point 5 Name 0.0842, ! Fraction of Zone Controlled by Reference Point 5 200.0, ! Illuminance Setpoint at Reference Point 5 z-f0on27mii|jo_ecl_circu+6, ! Daylighting Reference Point 6 Name 0.0882, ! Fraction of Zone Controlled by Reference Point 6 200.0, ! Illuminance Setpoint at Reference Point 6 z-f0on27mii|jo_ecl_circu+7, ! Daylighting Reference Point 7 Name 0.1026, ! Fraction of Zone Controlled by Reference Point 7 200.0, ! Illuminance Setpoint at Reference Point 7 z-f0on27mii|jo_ecl_circu+8, ! Daylighting Reference Point 8 Name 0.1134, ! Fraction of Zone Controlled by Reference Point 8 200.0, ! Illuminance Setpoint at Reference Point 8 z-f0on27mii|jo_ecl_circu+9, ! Daylighting Reference Point 9 Name 0.1003, ! Fraction of Zone Controlled by Reference Point 9 200.0; ! Illuminance Setpoint at Reference Point 9 ``` (note that `1053+0936+1213+1018+0893+0842+0882+1026+1134+1003 = 10000`) Some additional details for this issue (if relevant): - Platform: Ubuntu 18.04 - Version of EnergyPlus: 9.2 ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [x] Defect file added (list location of defect file here): Post has enough info to produce a unit test - [x] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
non_process
floating point error when summing daylighting zone fractions we are running into floating point rounding errors when setting the fraction of a zone controlled by the daylighting reference although the decimal representation of these fractions given in the idf sums to exactly it seems that the floating point representation in energyplus is which causes a fatal error would it be possible to tolerate values slightly larger than to accomodate for this case because we are not sure how to proceed when filing the idf otherwise details severe getdaylightingcontrols fraction of zone controlled by the daylighting reference points is discovered in daylighting controls for zone z trying to control of the zone sample daylighting controls that causes energyplus to fail daylighting controls z name z zone name splitflux daylighting method on availability schedule name continuous lighting control type minimum input power fraction for continuous or continuousoff dimming control minimum light output fraction for continuous or continuousoff dimming control number of stepped control steps probability lighting will be reset when needed in manual stepped control glare calculation daylighting reference point name glare calculation azimuth angle of view direction clockwise from zone y axis maximum allowable discomfort glare index delight gridding resolution z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point 
illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point z jo ecl circu daylighting reference point name fraction of zone controlled by reference point illuminance setpoint at reference point note that some additional details for this issue if relevant platform ubuntu version of energyplus checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here post has enough info to produce a unit test ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
0
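The floating-point summation problem described in the EnergyPlus record above is easy to reproduce outside EnergyPlus. The sketch below (plain Python, not EnergyPlus code) shows why a strict `total <= 1.0` check can reject zone fractions whose decimal sum is exactly 1, and how a small tolerance, as the reporter suggests, avoids the false rejection; the `1e-6` tolerance is an illustrative choice, not EnergyPlus's actual behavior.

```python
# Zone fractions from the Daylighting:Controls sample above. Their decimal
# sum is exactly 1.0000, but none of them is exactly representable in
# binary floating point, so the accumulated total can land a few ULPs
# above or below 1.0 depending on summation order and rounding.
fractions = [0.1053, 0.0936, 0.1213, 0.1018, 0.0893,
             0.0842, 0.0882, 0.1026, 0.1134, 0.1003]

total = sum(fractions)

# A strict check like the one in the error message can fail even though
# the input is valid:
strict_ok = total <= 1.0          # may be False purely due to rounding

# A tolerance-based check accepts any total within rounding noise of 1.0:
tolerant_ok = total <= 1.0 + 1e-6

print(total, strict_ok, tolerant_ok)
```

Whether `strict_ok` comes out `True` depends on the platform and summation order; the point is that `tolerant_ok` is robust to that variation.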
85,837
16,748,007,694
IssuesEvent
2021-06-11 18:13:54
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
Loop alignment issue in Array2 benchmark
area-CodeGen-coreclr
A recent change #51901 leading to a regression in the Benchstone.BenchI.Array2 benchmark on Ubuntu (but not Windows): #52316. The core of the benchmark is the `Bench` function inner loop: ``` for (; loop != 0; loop--) { for (int i = 0; i < 10; i++) { for (int j = 0; j < 10; j++) { for (int k = 0; k < 10; k++) { d[i][j][k] = s[i][j][k]; } } } } ``` The code of this loop is almost equivalent, modulo register allocation, before and after #51901. The difference is loop alignment: before #51901, the loop fits in 2 32-byte chunks; after, it is in 3 32-byte chunks. On Ubuntu, this leads to about a 50% performance regression. Simply setting `COMPlus_JitAlignLoopAdaptive=0` changes the alignment such that the inner loop fits in 2 32-byte chunks, recovering the performance. This is a high weight basic block; perhaps the alignment heuristics should "try harder" and be willing to insert more alignment padding in case it might be profitable?
1.0
Loop alignment issue in Array2 benchmark - A recent change #51901 leading to a regression in the Benchstone.BenchI.Array2 benchmark on Ubuntu (but not Windows): #52316. The core of the benchmark is the `Bench` function inner loop: ``` for (; loop != 0; loop--) { for (int i = 0; i < 10; i++) { for (int j = 0; j < 10; j++) { for (int k = 0; k < 10; k++) { d[i][j][k] = s[i][j][k]; } } } } ``` The code of this loop is almost equivalent, modulo register allocation, before and after #51901. The difference is loop alignment: before #51901, the loop fits in 2 32-byte chunks; after, it is in 3 32-byte chunks. On Ubuntu, this leads to about a 50% performance regression. Simply setting `COMPlus_JitAlignLoopAdaptive=0` changes the alignment such that the inner loop fits in 2 32-byte chunks, recovering the performance. This is a high weight basic block; perhaps the alignment heuristics should "try harder" and be willing to insert more alignment padding in case it might be profitable?
non_process
loop alignment issue in benchmark a recent change leading to a regression in the benchstone benchi benchmark on ubuntu but not windows the core of the benchmark is the bench function inner loop for loop loop for int i i i for int j j j for int k k k d s the code of this loop is almost equivalent modulo register allocation before and after the difference is loop alignment before the loop fits in byte chunks after it is in byte chunks on ubuntu this leads to about a performance regression simply setting complus jitalignloopadaptive changes the alignment such that the inner loop fits in byte chunks recovering the performance this is a high weight basic block perhaps the alignment heuristics should try harder and be willing to insert more alignment padding in case it might be profitable
0
5,067
7,868,659,825
IssuesEvent
2018-06-24 01:54:39
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
Get rid of HOC "best-ever" and HOC 12/17/2005 lines on nightly plots. (Trac #734)
Migrated from Trac betlej@uwm.edu post_processing task
The HOC "best-ever" lines on the nightly plots are from a tuning in October 2005. The HOC 12/17/2005 lines are from the time of the submission of the DYCOMS-II RF02 SCM intercomparison. At this point, the quality of CLUBB results have long exceeded the quality of results found in both of these "benchmark" sets. All they do is serve to clutter the nightly plots for some cases. We should remove them from the nightly plots and get rid of the options to plot them from the plotgen code. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/734 ```json { "status": "closed", "changetime": "2014-09-18T19:50:30", "description": "The HOC \"best-ever\" lines on the nightly plots are from a tuning in October 2005. 
The HOC 12/17/2005 lines are from the time of the submission of the DYCOMS-II RF02 SCM intercomparison. At this point, the quality of CLUBB results have long exceeded the quality of results found in both of these \"benchmark\" sets. All they do is serve to clutter the nightly plots for some cases. We should remove them from the nightly plots and get rid of the options to plot them from the plotgen code.", "reporter": "bmg2@uwm.edu", "cc": "vlarson@uwm.edu, bmg2@uwm.edu", "resolution": "Verified by V. Larson", "_ts": "1411069830326367", "component": "post_processing", "summary": "Get rid of HOC \"best-ever\" and HOC 12/17/2005 lines on nightly plots.", "priority": "trivial", "keywords": "HOC \"best-ever\" HOC 12/17/2005", "time": "2014-08-31T01:15:32", "milestone": "Miscellaneous projects", "owner": "betlej@uwm.edu", "type": "task" } ```
1.0
Get rid of HOC "best-ever" and HOC 12/17/2005 lines on nightly plots. (Trac #734) - The HOC "best-ever" lines on the nightly plots are from a tuning in October 2005. The HOC 12/17/2005 lines are from the time of the submission of the DYCOMS-II RF02 SCM intercomparison. At this point, the quality of CLUBB results have long exceeded the quality of results found in both of these "benchmark" sets. All they do is serve to clutter the nightly plots for some cases. We should remove them from the nightly plots and get rid of the options to plot them from the plotgen code. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/734 ```json { "status": "closed", "changetime": "2014-09-18T19:50:30", "description": 
"The HOC \"best-ever\" lines on the nightly plots are from a tuning in October 2005. The HOC 12/17/2005 lines are from the time of the submission of the DYCOMS-II RF02 SCM intercomparison. At this point, the quality of CLUBB results have long exceeded the quality of results found in both of these \"benchmark\" sets. All they do is serve to clutter the nightly plots for some cases. We should remove them from the nightly plots and get rid of the options to plot them from the plotgen code.", "reporter": "bmg2@uwm.edu", "cc": "vlarson@uwm.edu, bmg2@uwm.edu", "resolution": "Verified by V. Larson", "_ts": "1411069830326367", "component": "post_processing", "summary": "Get rid of HOC \"best-ever\" and HOC 12/17/2005 lines on nightly plots.", "priority": "trivial", "keywords": "HOC \"best-ever\" HOC 12/17/2005", "time": "2014-08-31T01:15:32", "milestone": "Miscellaneous projects", "owner": "betlej@uwm.edu", "type": "task" } ```
process
get rid of hoc best ever and hoc lines on nightly plots trac the hoc best ever lines on the nightly plots are from a tuning in october the hoc lines are from the time of the submission of the dycoms ii scm intercomparison at this point the quality of clubb results have long exceeded the quality of results found in both of these benchmark sets all they do is serve to clutter the nightly plots for some cases we should remove them from the nightly plots and get rid of the options to plot them from the plotgen code attachments migrated from json status closed changetime description the hoc best ever lines on the nightly plots are from a tuning in october the hoc lines are from the time of the submission of the dycoms ii scm intercomparison at this point the quality of clubb results have long exceeded the quality of results found in both of these benchmark sets all they do is serve to clutter the nightly plots for some cases we should remove them from the nightly plots and get rid of the options to plot them from the plotgen code reporter uwm edu cc vlarson uwm edu uwm edu resolution verified by v larson ts component post processing summary get rid of hoc best ever and hoc lines on nightly plots priority trivial keywords hoc best ever hoc time milestone miscellaneous projects owner betlej uwm edu type task
1
794
3,274,949,209
IssuesEvent
2015-10-26 13:40:22
google/personfinder
https://api.github.com/repos/google/personfinder
closed
Acquire and set new reCaptcha key for production before the next push
release & process
Set config captcha_site_key / captcha_secret_key.
1.0
Acquire and set new reCaptcha key for production before the next push - Set config captcha_site_key / captcha_secret_key.
process
acquire and set new recaptcha key for production before the next push set config captcha site key captcha secret key
1
1,671
4,308,666,644
IssuesEvent
2016-07-21 13:47:22
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
Add preprocessor interface for setting perfect foresight tolerance
preprocessor
Currently, one must manually set ```options_.dynatol.f```. I would suggest to use the name `TolFun` for the option. While we are at it, I would also suggest to use an option `TolFun` for the `steady` command to set `options_.solve_tolf`
1.0
Add preprocessor interface for setting perfect foresight tolerance - Currently, one must manually set ```options_.dynatol.f```. I would suggest to use the name `TolFun` for the option. While we are at it, I would also suggest to use an option `TolFun` for the `steady` command to set `options_.solve_tolf`
process
add preprocessor interface for setting perfect foresight tolerance currently one must manually set options dynatol f i would suggest to use the name tolfun for the option while we are at it i would also suggest to use an option tolfun for the steady command to set options solve tolf
1
8,851
11,952,920,494
IssuesEvent
2020-04-03 19:49:41
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
Create sample format guidelines for Python-docs-samples
type: process
The python reviewer group required to approve PRs has only 4 people, be nice to expand that quickly. Also probably having a canonical sample in Python that is agreed upon would be nice too! (To protobuf or to dictionary that is the question, I vote protobuf objects)
1.0
Create sample format guidelines for Python-docs-samples - The python reviewer group required to approve PRs has only 4 people, be nice to expand that quickly. Also probably having a canonical sample in Python that is agreed upon would be nice too! (To protobuf or to dictionary that is the question, I vote protobuf objects)
process
create sample format guidelines for python docs samples the python reviewer group required to approve prs has only people be nice to expand that quickly also probably having a canonical sample in python that is agreed upon would be nice too to protobuf or to dictionary that is the question i vote protobuf objects
1
920
3,381,144,514
IssuesEvent
2015-11-26 00:10:30
GsDevKit/gsDevKitHome
https://api.github.com/repos/GsDevKit/gsDevKitHome
closed
startTopaz script would be useful ......
in process
[Suggested by Dario](http://forum.world.st/Glass-Open-topaz-on-Development-Kit-gemstone-instance-td4796421.html) ... it's a good idea ... add an option for defining which topazini file to use would allow you to switch users easily .... add option for scripts to run ...
1.0
startTopaz script would be useful ...... - [Suggested by Dario](http://forum.world.st/Glass-Open-topaz-on-Development-Kit-gemstone-instance-td4796421.html) ... it's a good idea ... add an option for defining which topazini file to use would allow you to switch users easily .... add option for scripts to run ...
process
starttopaz script would be useful it s a good idea add an option for defining which topazini file to use would allow you to switch users easily add option for scripts to run
1
319,873
9,761,643,354
IssuesEvent
2019-06-05 09:16:24
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.techadvisor.co.uk - site is not usable
browser-focus-geckoview engine-gecko priority-normal
<!-- @browser: Firefox Focus --> <!-- @ua_header: Mozilla/5.0 (Linux; Android 8.0.0; SM-G950U) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Mobile Safari/537.36 --> <!-- @reported_with: --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.techadvisor.co.uk/how-to/desktop-pc/how-install-ssd-in-your-pc-3374767/ **Browser / Version**: Firefox Focus **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: Searching in URL does not work after too many queries **Steps to Reproduce**: Search for one thing. Hit enter and Google loads. Search for another, change mind and search for another, occasionally those queries will not be sent. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.techadvisor.co.uk - site is not usable - <!-- @browser: Firefox Focus --> <!-- @ua_header: Mozilla/5.0 (Linux; Android 8.0.0; SM-G950U) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Mobile Safari/537.36 --> <!-- @reported_with: --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.techadvisor.co.uk/how-to/desktop-pc/how-install-ssd-in-your-pc-3374767/ **Browser / Version**: Firefox Focus **Operating System**: Android 8.0.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: Searching in URL does not work after too many queries **Steps to Reproduce**: Search for one thing. Hit enter and Google loads. Search for another, change mind and search for another, occasionally those queries will not be sent. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox focus operating system android tested another browser yes problem type site is not usable description searching in url does not work after too many queries steps to reproduce search for one thing hit enter and google loads search for another change mind and search for another occasionally those queries will not be sent browser configuration none from with ❤️
0
9,799
12,813,083,623
IssuesEvent
2020-07-04 10:43:32
ontop/ontop
https://api.github.com/repos/ontop/ontop
closed
Join over two columns with same name
status: fixed topic: mapping processing topic: r2rml compatibility
<!-- Do you want to ask a question? Are you looking for support? We have also a mailing list https://groups.google.com/d/forum/ontop4obda Have a look at our guidelines on how to submit a bug report https://ontop-vkg.org/community/contributing/bug-report --> ### Description R2RML is not translated well to OBDA mappings when a `rr:joinCondition` is specified between two columns with the same name ### Steps to Reproduce 1. Create an R2ML mapping with a `rr:joinCondition` where the `rr:child` and `rr:parent` columns are named equal 2. Translate the R2RML mapping to OBDA mapping 3. Run a query that exploits the `rr:joinCondition` **Expected behaviour:** Ontop should return the result-set of the input query **Actual behaviour:** The engine does not know which column has to take from the `rr:joinCondition` as the SQL query returns two columns that are named equal. Part of the log: ``` source query: SELECT * FROM (SELECT * FROM example1) AS child, (SELECT * FROM example2) AS parent WHERE child.id=parent.id org.semanticweb.owlapi.reasoner.IllegalConfigurationException: it.unibz.inf.ontop.exception.InvalidMappingSourceQueriesException: Error: The source query does not provide the attribute id (variable id) required by the target atom. Problem location: source query of the mapping assertion ``` **Reproduces how often:** Every Time ### Attached material [example-ontop.zip](https://github.com/ontop/ontop/files/4185609/example-error-ontop.zip) ### Versions Tested in Ontop v3.0.1 (Cli) over MySQL but also PostgreSQL ### Aditional Information The only possible solution at this moment is to run first a translation from R2RML to OBDA mapping and edit the mapping manually which enforces the user to understand the syntax of the OBDA mappings instead of the R2RML W3C recommendation, this generates difficulties in order to use the engine
1.0
Join over two columns with same name - <!-- Do you want to ask a question? Are you looking for support? We have also a mailing list https://groups.google.com/d/forum/ontop4obda Have a look at our guidelines on how to submit a bug report https://ontop-vkg.org/community/contributing/bug-report --> ### Description R2RML is not translated well to OBDA mappings when a `rr:joinCondition` is specified between two columns with the same name ### Steps to Reproduce 1. Create an R2ML mapping with a `rr:joinCondition` where the `rr:child` and `rr:parent` columns are named equal 2. Translate the R2RML mapping to OBDA mapping 3. Run a query that exploits the `rr:joinCondition` **Expected behaviour:** Ontop should return the result-set of the input query **Actual behaviour:** The engine does not know which column has to take from the `rr:joinCondition` as the SQL query returns two columns that are named equal. Part of the log: ``` source query: SELECT * FROM (SELECT * FROM example1) AS child, (SELECT * FROM example2) AS parent WHERE child.id=parent.id org.semanticweb.owlapi.reasoner.IllegalConfigurationException: it.unibz.inf.ontop.exception.InvalidMappingSourceQueriesException: Error: The source query does not provide the attribute id (variable id) required by the target atom. Problem location: source query of the mapping assertion ``` **Reproduces how often:** Every Time ### Attached material [example-ontop.zip](https://github.com/ontop/ontop/files/4185609/example-error-ontop.zip) ### Versions Tested in Ontop v3.0.1 (Cli) over MySQL but also PostgreSQL ### Aditional Information The only possible solution at this moment is to run first a translation from R2RML to OBDA mapping and edit the mapping manually which enforces the user to understand the syntax of the OBDA mappings instead of the R2RML W3C recommendation, this generates difficulties in order to use the engine
process
join over two columns with same name do you want to ask a question are you looking for support we have also a mailing list have a look at our guidelines on how to submit a bug report description is not translated well to obda mappings when a rr joincondition is specified between two columns with the same name steps to reproduce create an mapping with a rr joincondition where the rr child and rr parent columns are named equal translate the mapping to obda mapping run a query that exploits the rr joincondition expected behaviour ontop should return the result set of the input query actual behaviour the engine does not know which column has to take from the rr joincondition as the sql query returns two columns that are named equal part of the log source query select from select from as child select from as parent where child id parent id org semanticweb owlapi reasoner illegalconfigurationexception it unibz inf ontop exception invalidmappingsourcequeriesexception error the source query does not provide the attribute id variable id required by the target atom problem location source query of the mapping assertion reproduces how often every time attached material versions tested in ontop cli over mysql but also postgresql aditional information the only possible solution at this moment is to run first a translation from to obda mapping and edit the mapping manually which enforces the user to understand the syntax of the obda mappings instead of the recommendation this generates difficulties in order to use the engine
1
435,490
30,503,277,031
IssuesEvent
2023-07-18 15:10:43
vijayk3327/Lightning-Aura-Component
https://api.github.com/repos/vijayk3327/Lightning-Aura-Component
opened
How to Create Custom Accordion Expand, Collapse and Toggle Section in Lightning Component
documentation question
In this post we are going to learn about [how to create a custom accordion Expand Collapse and Toggle section using JavaScript ](https://www.w3web.net/custom-accordion-expand-collapse-and-toggle-lightning-component/)in Salesforce lightning component. **[โ†’ Get source code live demo link:-](https://www.w3web.net/custom-accordion-expand-collapse-and-toggle-lightning-component/)** <img src="https://www.w3web.net/wp-content/uploads/2020/08/accordionComponent-min.gif"/> **Step 1:- Create Lightning Application : customAccordionApp.app** ` <aura:application extends="force:slds"> <c:customAccordionCmp/> </aura:application>` **Step 2:- Create Lightning Component : customAccordionCmp.cmp** ` <aura:component implements="force:appHostable,flexipage:availableForAllPageTypes,flexipage:availableForRecordHome,force:hasRecordId,forceCommunity:availableForAllPageTypes,force:lightningQuickAction" access="global" > <div class="slds"> <div class="slds-grid slds-wrap"> <div class="slds-p-horizontal--medium slds-col slds-size_6-of-12"> <ul class="slds-accordion w3webAccordion" id="w3webAccordionListOver"> <li class="slds-accordion__list-item" id="w3webListItem0"> <div class="slds-accordion__summary"> <h3 class="slds-summary-heading" name="w3webListItem0" onclick="{!c.accordionAction}">Editing, Saving and Removing rows Dynamically in Lightning component</h3> <div class="accordionContent"> <div class="postImage"> <a href="https://www.w3web.net/edit-save-and-remove-rows-dynamically-in-lightning-component/"> <img src="https://www.w3web.net/wp-content/uploads/2020/07/editDeleteSave.png" width="200" height="150"/> </a> </div> <p>In this post we are going to learn about that how to edit row, saving row or removing row dynamically in Salesforce lightning component.</p><br/> <p>In this example we will customize the same component and achieve to the editing row, saving row and removing rows functionality of dynamically on Custom sObject by help of wrapper apex class and JavaScript Controller in 
lightning component...<span class="readMore"><a href="https://www.w3web.net/edit-save-and-remove-rows-dynamically-in-lightning-component/">Read more...</a></span></p> </div> </div> </li> <li class="slds-accordion__list-item" id="w3webListItem1"> <h3 class="slds-summary-heading" name="w3webListItem1" onclick="{!c.accordionAction}"> How to validate child component from parent component</h3> <div class="accordionContent"> <div class="postImage"> <a href="https://www.w3web.net/how-to-validate-child-component-from-parent-component/"> <img src="https://www.w3web.net/wp-content/uploads/2020/08/stylishFormValidation.png" width="200" height="150"/> </a> </div> <p>In this post we are going to learn about how to validate child component from parent component on click button using aura method in Salesforce lightning component.</p><br/> <p>Real time scenarios:- Create a custom and stylish form validation and validate child component from parent component using aura method in lightning component...<span class="readMore"><a href="https://www.w3web.net/how-to-validate-child-component-from-parent-component/">Read more...</a></span> </p> </div> </li> <li class="slds-accordion__list-item" id="w3webListItem2"> <h3 class="slds-summary-heading" name="w3webListItem2" onclick="{!c.accordionAction}">Trigger to update count of child records with custom field of parent object.</h3> <div class="accordionContent"> <div class="postImage"> <a href="https://www.w3web.net/roll-up-summary-trigger-on-custom-object/"> <img src="https://www.w3web.net/wp-content/uploads/2020/08/employeeSizeTrigger.png" width="200" height="150"/> </a> </div> <p>In this post we are going to learn about how to update count of child records with custom field value into parent object using trigger map list in Salesforce.</p><br/> <p><strong>Real time scenarios:-</strong> Write a trigger on parent object where create a <strong>custom field</strong> as Employee (Number type).</p><br/> <p>Or if user update the value of 
employee <strong>less then</strong> the total number of child records, in this case the child records should be <strong>exist only equal to </strong> employee size, rest records of child object should be automatic removed. <span class="readMore"><a href="https://www.w3web.net/update-count-of-child-record-based-on-parent-object-value/">Read more...</a></span></p> </div> </li> <li class="slds-accordion__list-item" id="w3webListItem3"> <h3 class="slds-summary-heading" name="w3webListItem3" onclick="{!c.accordionAction}">Create rollup summary using Apex trigger on custom object</h3> <div class="accordionContent"> <div class="postImage"> <a href="https://www.w3web.net/roll-up-summary-trigger-on-custom-object/"> <img src="https://www.w3web.net/wp-content/uploads/2020/08/rollupSummary-trigger.png" width="200" height="150"/> </a> </div> <p>In this post we are going to learn about How to create roll-up summary trigger for <b>count child records</b> on custom object using Apex trigger in Salesforce.</p><br/> <p><strong>Real time scenarios:-</strong> Write a trigger on parent object where create a <strong>custom field</strong> as Employee (Number type).</p><br/> <p><strong>Real time scenarios:-</strong> Write a roll-up summary trigger for count child records on custom parent object. 
Create a custom field (Number Type) on parent object, <strong>calculate the total number</strong> of related <strong>child records</strong> and put into them...<span class="readMore"><a href="https://www.w3web.net/roll-up-summary-trigger-on-custom-object/">Read more...</a></span></p> </div> </li> <li class="slds-accordion__list-item" id="w3webListItem4"> <h3 class="slds-summary-heading" name="w3webListItem4" onclick="{!c.accordionAction}">How to fetch picklist values from apex controller in lightning component</h3> <div class="accordionContent"> <div class="postImage"> <a href="https://www.w3web.net/roll-up-summary-trigger-on-custom-object/"> <img src="https://www.w3web.net/wp-content/uploads/2020/07/picklistValue.png" width="200" height="150"/> </a> </div> <p>In this post we are going to learn about how to retrieve Picklist values from Apex controller in Lightning Component...<span class="readMore"><a href="https://www.w3web.net/fetch-picklist-values-dynamically/">Read more...</a></span></p> </div> </li> </ul> </div> </div> <br/> <br/> <!--Start RelatedTopics Section--> <div style="border:1px #ddd solid; padding:10px; background:#eee; margin:40px 0;"> <p data-aura-rendered-by="435:0"><img src="https://www.w3web.net/wp-content/uploads/2021/05/thumbsUpLike.png" width="25" height="25" style="vertical-align:top; margin-right:10px;" data-aura-rendered-by="436:0"><strong data-aura-rendered-by="437:0"><span style="font-size:16px; font-style:italic; display:inline-block; margin-right:5px;">Don't forget to check out:-</span><a href="https://www.w3web.net/" target="_blank" rel="noopener noreferrer" style="text-decoration:none;" data-aura-rendered-by="440:0">An easy way to learn step-by-step online free Salesforce tutorial, To know more Click <span style="color:#ff8000; font-size:18px;" data-aura-rendered-by="442:0">Here..</span></a></strong></p> <br/><br/> <p data-aura-rendered-by="435:0"><img src="https://www.w3web.net/wp-content/uploads/2021/07/tickMarkIcon.png" 
width="25" height="25" style="vertical-align:top; margin-right:10px;" data-aura-rendered-by="436:0"><strong data-aura-rendered-by="437:0"><span style="font-size:17px; font-style:italic; display:inline-block; margin-right:5px; color:rgb(255 128 0);">You May Also Like โ†’</span> </strong></p> <div style="display:block; overflow:hidden;"> <div style="width: 50%; float:left; display:inline-block"> <ul style="list-style-type: square; font-size: 16px; margin: 0 0 0 54px; padding: 0;"> <li><a href="https://www.w3web.net/lwc-get-set-lightning-checkbox-value/" target="_blank" rel="noopener noreferrer">How to get selected checkbox value in lwc</a></li> <li><a href="https://www.w3web.net/display-account-related-contacts-in-lwc/" target="_blank" rel="noopener noreferrer">how to display account related contacts based on AccountId in lwc</a></li> <li><a href="https://www.w3web.net/create-lightning-datatable-row-actions-in-lwc/" target="_blank" rel="noopener noreferrer">how to create lightning datatable row actions in lwc</a></li> <li><a href="https://www.w3web.net/if-and-else-condition-in-lwc/" target="_blank" rel="noopener noreferrer">how to use if and else condition in lwc</a></li> <li><a href="https://www.w3web.net/get-selected-radio-button-value-and-checked-default-in-lwc/" target="_blank" rel="noopener noreferrer">how to display selected radio button value in lwc</a></li> </ul> </div> <div style="width: 50%; float:left; display:inline-block"> <ul style="list-style-type: square; font-size: 16px; margin: 0 0 0 54px; padding: 0;"> <li><a href="https://www.w3web.net/display-account-related-contacts-lwc/" target="_blank" rel="noopener noreferrer">display account related contacts based on account name in lwc</a></li> <li><a href="https://www.w3web.net/create-lightning-datatable-row-actions-in-lwc/" target="_blank" rel="noopener noreferrer">how to insert a record of account Using apex class in LWC</a></li> <li><a href="https://www.w3web.net/fetch-picklist-values-dynamic-in-lwc/" 
target="_blank" rel="noopener noreferrer">how to get picklist values dynamically in lwc</a></li> <li><a href="https://www.w3web.net/edit-save-and-remove-rows-dynamically-in-lightning-component/" target="_blank" rel="noopener noreferrer">how to edit/save row dynamically in lightning component</a></li> <li><a href="https://www.w3web.net/update-parent-object-from-child/" target="_blank" rel="noopener noreferrer">update parent field from child using apex trigger</a></li> </ul> </div> <div style="clear:both;"></div> <br/> <div class="youtubeIcon"> <a href="https://www.youtube.com/channel/UCW62gTen2zniILj9xE6LmOg" target="_blank" rel="noopener noreferrer"><img src="https://www.w3web.net/wp-content/uploads/2021/11/youtubeIcon.png" width="25" height="25" style="vertical-align:top; margin-right:10px;"/> <strong>TechW3web:-</strong> To know more, Use this <span style="color: #ff8000; font-weight: bold;">Link</span> </a> </div> </div> </div> <!--End RelatedTopics Section--> </div> </aura:component>` **Step 3:- Create Lightning Component : customAccordionCmpcontroller.js** ` ({ accordionAction : function(component, event, helper) { var thisObj = event.target.name; var w3webAccordionListOver = document.getElementById('w3webAccordionListOver'); var accordionListAll = w3webAccordionListOver.querySelectorAll('.slds-accordion__list-item'); //alert(accordionListAll.length); var conainActive = document.getElementById(thisObj).classList.contains('activeRow'); for(var i=0; i<accordionListAll.length; i++){ accordionListAll[i].classList.remove('activeRow'); } if(conainActive == true){ document.getElementById(thisObj).classList.remove('activeRow'); }else{ document.getElementById(thisObj).classList.toggle('activeRow'); } } })` **Step 4:- Create Lightning Component Style: customAccordionCmp.CSS** ` .THIS { background:#fff !important; } .THIS .w3webAccordion {margin:0; padding:0; list-style:none;} .THIS .w3webAccordion li{padding:5px 0 5px 0;} .THIS .w3webAccordion li 
.slds-summary-heading{display:block; padding:0 0 0 20px; font-size:17px; position:relative;} .THIS .w3webAccordion li .slds-summary-heading:before {content:''; width:17px; height:17px; display:inline-block; background:url(/resource/SLDS2016/assets/icons/utility/chevronright_60.png) no-repeat left top; background-size:cover; cursor: pointer; position:absolute; left:0; top:5px;} .THIS .w3webAccordion li .slds-summary-heading:hover{color:#04a5ca; cursor:pointer;} .THIS .w3webAccordion .slds-accordion__summary{display:initial;} .THIS .w3webAccordion .slds-accordion__list-item .accordionContent{display:none; overflow:hidden;} .THIS .w3webAccordion li.activeRow .slds-summary-heading{color:#04a5ca;} .THIS .w3webAccordion li.activeRow .accordionContent{display:block; padding:5px 0 5px 20px; font-size:14px;} .THIS .w3webAccordion li.activeRow .accordionContent .postImage{display: inline-block; float: left; margin-right: 10px;} .THIS .w3webAccordion li.activeRow .slds-summary-heading:before {content:''; width:18px; height:17px; display:inline-block; background:url(/resource/SLDS2016/assets/icons/utility/chevrondown_60.png) no-repeat left top; background-size:cover; cursor: pointer; position:absolute; left:0; top:5px;} .THIS .readMore{font-size:14px; font-weight:bold; display:inline-block; padding:0 0 0 10px;} .THIS .readMore a{color:#ff0000; text-decoration:none;} .THIS .readMore a:hover{color:#04a5ca; text-decoration:underline;}` **[โ†’ Get source code live demo link:-](https://www.w3web.net/custom-accordion-expand-collapse-and-toggle-lightning-component/)**
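The `accordionAction` controller in Step 3 implements an exclusive toggle: clear the active state from every item, then activate the clicked item unless it was already active. A DOM-free JavaScript sketch of that rule (the `toggleAccordion` helper is hypothetical, written here only to illustrate the logic; it is not part of the component):

```javascript
// Exclusive-toggle rule used by accordionAction in Step 3:
// clear 'activeRow' from every item, then set it on the clicked
// item only if that item was not already active.
function toggleAccordion(openFlags, clickedIndex) {
    const wasOpen = openFlags[clickedIndex];
    const next = openFlags.map(() => false); // the classList.remove('activeRow') loop
    if (!wasOpen) {
        next[clickedIndex] = true;           // the classList.toggle('activeRow') call
    }
    return next;
}

console.log(toggleAccordion([false, true, false], 0)); // [ true, false, false ]
console.log(toggleAccordion([true, false, false], 0)); // [ false, false, false ]
```

Opening one item therefore always closes the others, and clicking an already-open item collapses the whole accordion — the same behavior the `conainActive` check produces in the component.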
14,059
16,870,568,459
IssuesEvent
2021-06-22 03:38:48
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
opened
point2grid does not support double data type latitude/longitude variables
MET: Point PreProcessing Tools alert: NEED ACCOUNT KEY component: CI/CD priority: medium requestor: METplus Team required: FOR OFFICIAL RELEASE type: bug
*Replace italics below with details for this issue.* ## Describe the Problem ## This was found while supporting the helpdesk ticket (https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=100088). The input data file "VOLCAT_HIMAWARI-8_FLDK_s2020296_050000_v300250_VCB_w167_FLDK_b2020295_204000_g001_pc_rg.nc (downloaded at kiowa:/d1/personal/hsoh/data/Binyu/wrong_latitude_units/)" is a gridded NetCDF file. regrid_data_plane is the right tool for regridding. In general, point2grid can handle this kind of file (CF-compliant NetCDF with lat/lon variables), but it supports fewer regridding methods. This NetCDF file stores the latitude and longitude variables as type double, and regridding by point2grid does not work. ### Expected Behavior ### point2grid should support double-type latitude and longitude variables. ### Environment ### Describe your runtime environment: *1. Machine: (Linux Workstation)* *2. OS: (e.g. RedHat Linux)* *3. MET v10.0* ### To Reproduce ### Describe the steps to reproduce the behavior: *1. Run the following command at kiowa `/usr/local/met-10.0.0/bin/point2grid /d1/personal/hsoh/data/Binyu/wrong_latitude_units/VOLCAT_HIMAWARI-8_FLDK_s2020296_050000_v300250_VCB_w167_FLDK_b2020295_204000_g001_pc_rg.nc "latlon 200 200 45 153 0.1 0.1" bezy_2020296_05_regrided_p2g.nc -field 'name="ash_mass_loading"; level="(0,*,*)";' -v 4 ` *2. Check the log messages and the regridded NetCDF file* `DEBUG 4: regrid_nc_variable() -> [Count] data cells: 0, missing: 0, non_missing: 0, non mapped cells: 40000 out of 40000 DEBUG 4: Range: data: [1e+11 - -1e+11] WARNING: WARNING: regrid_nc_variable() -> There are no matching cells between input and the target grid. 
WARNING: ` ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required - [x] Select **scientist(s)** or **no scientist** required: No scientist ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [ ] Select **Organization** level **Project** for support of the current coordinated release - [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [ ] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) ## Bugfix Checklist ## See the [METplus Workflow](https://dtcenter.github.io/METplus/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. 
Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
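As a side note on the symptom in the log above: zero matched cells together with a nonsensical data range is what you would expect when coordinate values are interpreted at the wrong floating-point width. The snippet below is only an illustration of such a type mismatch, not MET's actual read path, and 153.25 is an arbitrary example longitude: the low four bytes of a little-endian 64-bit double hold the least-significant mantissa bits, so reading them as a 32-bit float yields a uselessly wrong coordinate.

```javascript
// Store a longitude as a 64-bit IEEE double (NetCDF "double" type),
// then reinterpret the same bytes as a 32-bit float.
const buf = new ArrayBuffer(8);
const view = new DataView(buf);
view.setFloat64(0, 153.25, true);         // little-endian double, 8 bytes
const misread = view.getFloat32(0, true); // a float-width read sees only 4 bytes

console.log(view.getFloat64(0, true)); // 153.25
console.log(misread);                  // 0 -- the low mantissa bytes are all zero
```

Handling the double type explicitly (or casting lat/lon to a common type on read) avoids this class of failure.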
for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and linked issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
2,211
2,587,915,347
IssuesEvent
2015-02-17 21:28:21
azavea/nyc-trees
https://api.github.com/repos/azavea/nyc-trees
closed
Progress Map action bar
user testing
The progress map should have an action bar at the bottom, similar to the event map, that shows information about a selected block face. - Available: you would see a button that would take you to the reservations page - Already mapped: "Mapped by [group name]" or if no group "Mapped by an individual mapper" - Unavailable: "This is reserved by `[group name]` (with link to group page)" or "This is reserved by an individual mapper" When the user is zoomed far out, the action bar should have a message like "zoom in to select a block face".
1.0
Progress Map action bar - The progress map should have an action bar at the bottom, similar to the event map, that shows information about a selected block face. - Available: you would see a button that would take you to the reservations page - Already mapped: "Mapped by [group name]" or if no group "Mapped by an individual mapper" - Unavailable: "This is reserved by `[group name]` (with link to group page)" or "This is reserved by an individual mapper" When the user is zoomed far out, the action bar should have a message like "zoom in to select a block face".
non_process
progress map action bar the progress map should have an action bar at the bottom similar to the event map that shows information about a selected block face available you would see a button that would take you to the reservations page already mapped mapped by or if no group mapped by an individual mapper unavailable this is reserved by with link to group page or this is reserved by an individual mapper when the user is zoomed far out the action bar should have a message like zoom in to select a block face
0
15,210
19,042,291,713
IssuesEvent
2021-11-25 00:17:44
km4ack/patmenu2
https://api.github.com/repos/km4ack/patmenu2
closed
Auto update Winlink forms?
enhancement in process
Is it possible to include a function to update the Winlink forms the same way we do the Winlink gateway list with Pat?
1.0
Auto update Winlink forms? - Is it possible to include a function to update the Winlink forms the same way we do the Winlink gateway list with Pat?
process
auto update winlink forms is it possible to include a function to update the winlink forms the same way we do the winlink gateway list with pat
1
198
2,609,098,887
IssuesEvent
2015-02-26 12:26:36
cs2103jan2015-f10-3c/main
https://api.github.com/repos/cs2103jan2015-f10-3c/main
opened
Data processing component for add task feature2
component.DataProcessing Deadline.week7 priority.high type.task
@YangXiaozhou @kevin-christian maybe you guys can decide more specifically the diff thing you guys do for data processing
1.0
Data processing component for add task feature2 - @YangXiaozhou @kevin-christian maybe you guys can decide more specifically the diff thing you guys do for data processing
process
data processing component for add task yangxiaozhou kevin christian maybe you guys can decide more specifically the diff thing you guys do for data processing
1
233,798
25,770,227,704
IssuesEvent
2022-12-09 07:13:09
sourceplusplus/sourceplusplus
https://api.github.com/repos/sourceplusplus/sourceplusplus
closed
graphql-java-18.1.jar: 1 vulnerabilities (highest severity is: 7.5) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>graphql-java-18.1.jar</b></p></summary> <p>GraphqL Java</p> <p>Path to dependency file: /platform/processor/live-instrument/build.gradle.kts</p> <p>Path to vulnerable library: /caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/sourceplusplus/sourceplusplus/commit/dbb0c37636e6e66a5de8b2fe3945098ca275d4b8">dbb0c37636e6e66a5de8b2fe3945098ca275d4b8</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (graphql-java version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-37734](https://www.mend.io/vulnerability-database/CVE-2022-37734) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | graphql-java-18.1.jar | Direct | 18.3 | &#10060; | ## Details <details> <summary><img 
src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37734</summary> ### Vulnerable Library - <b>graphql-java-18.1.jar</b></p> <p>GraphqL Java</p> <p>Path to dependency file: /platform/processor/live-instrument/build.gradle.kts</p> <p>Path to vulnerable library: /caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar</p> <p> Dependency Hierarchy: - :x: **graphql-java-18.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sourceplusplus/sourceplusplus/commit/dbb0c37636e6e66a5de8b2fe3945098ca275d4b8">dbb0c37636e6e66a5de8b2fe3945098ca275d4b8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> graphql-java before19.0 is vulnerable to Denial of Service. An attacker can send a malicious GraphQL query that consumes CPU resources. The fixed versions are 19.0 and later, 18.3, and 17.4, and 0.0.0-2022-07-26T05-45-04-226aabd9. 
<p>Publish Date: 2022-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37734>CVE-2022-37734</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-09-12</p> <p>Fix Resolution: 18.3</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
graphql-java-18.1.jar: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>graphql-java-18.1.jar</b></p></summary> <p>GraphqL Java</p> <p>Path to dependency file: /platform/processor/live-instrument/build.gradle.kts</p> <p>Path to vulnerable library: /caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/sourceplusplus/sourceplusplus/commit/dbb0c37636e6e66a5de8b2fe3945098ca275d4b8">dbb0c37636e6e66a5de8b2fe3945098ca275d4b8</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (graphql-java version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-37734](https://www.mend.io/vulnerability-database/CVE-2022-37734) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | 
graphql-java-18.1.jar | Direct | 18.3 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37734</summary> ### Vulnerable Library - <b>graphql-java-18.1.jar</b></p> <p>GraphqL Java</p> <p>Path to dependency file: /platform/processor/live-instrument/build.gradle.kts</p> <p>Path to vulnerable library: /caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar,/caches/modules-2/files-2.1/com.graphql-java/graphql-java/18.1/cdac2372878a8db6fbd1b6b7ba0b55e5ba7a717e/graphql-java-18.1.jar</p> <p> Dependency Hierarchy: - :x: **graphql-java-18.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sourceplusplus/sourceplusplus/commit/dbb0c37636e6e66a5de8b2fe3945098ca275d4b8">dbb0c37636e6e66a5de8b2fe3945098ca275d4b8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> graphql-java before19.0 is vulnerable to Denial of Service. An attacker can send a malicious GraphQL query that consumes CPU resources. The fixed versions are 19.0 and later, 18.3, and 17.4, and 0.0.0-2022-07-26T05-45-04-226aabd9. 
<p>Publish Date: 2022-09-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37734>CVE-2022-37734</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-09-12</p> <p>Fix Resolution: 18.3</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
graphql java jar vulnerabilities highest severity is autoclosed vulnerable library graphql java jar graphql java path to dependency file platform processor live instrument build gradle kts path to vulnerable library caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in graphql java version remediation available high graphql java jar direct details cve vulnerable library graphql java jar graphql java path to dependency file platform processor live instrument build gradle kts path to vulnerable library caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar caches modules files com graphql java graphql java graphql java jar dependency hierarchy x graphql java jar vulnerable library found in head commit a href found in base branch master vulnerability details graphql java is vulnerable to denial of service an attacker can send a malicious graphql query that consumes cpu resources the fixed versions are and later and and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics 
confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend
0
245,550
7,887,544,340
IssuesEvent
2018-06-27 18:50:07
raster-foundry/raster-foundry
https://api.github.com/repos/raster-foundry/raster-foundry
opened
Implement navbar search component for orgs and users
frontend priority
https://share.goabstract.com/f381825d-8454-497d-99fb-d2008d689f63 The endpoints for org and user search already exist. The search bar should not be visible when in the project editor.
1.0
Implement navbar search component for orgs and users - https://share.goabstract.com/f381825d-8454-497d-99fb-d2008d689f63 The endpoints for org and user search already exist. The search bar should not be visible when in the project editor.
non_process
implement navbar search component for orgs and users the endpoints for org and user search already exist the search bar should not be visible when in the project editor
0
16,635
21,707,260,258
IssuesEvent
2022-05-10 10:45:55
sjmog/smartflix
https://api.github.com/repos/sjmog/smartflix
opened
Showing a single show
Rails/REST 04-background-processing
In the previous project, you built a tested, styled homepage showing database-backed shows, deployed with Docker on Heroku. In this project, we're going to enrich the basic show data in our database by leveraging the Open Movie Database API. In this challenge, we'll set up a `show` route to display details about a single show. Here's a user story: ``` As a user, So I can dive deeper into details about my shows, I want to click on a show and see more details about it. ``` ## To complete this challenge, you will have to: - [ ] Write an acceptance test for clicking on a single show and seeing some details about it. - [ ] Implement a `show` interface by defining a RESTful route, controller, and view to show a single Show. - [ ] Make sure that clicking on a single show will take you to the `show` page for any given show. ## Tips - You can lay out the show page however you think best: feel free to use any of the database fields to populate it. Remember to test for them! - Make sure you're following the [Rails conventions](https://guides.rubyonrails.org/routing.html#crud-verbs-and-actions)!
1.0
Showing a single show - In the previous project, you built a tested, styled homepage showing database-backed shows, deployed with Docker on Heroku. In this project, we're going to enrich the basic show data in our database by leveraging the Open Movie Database API. In this challenge, we'll set up a `show` route to display details about a single show. Here's a user story: ``` As a user, So I can dive deeper into details about my shows, I want to click on a show and see more details about it. ``` ## To complete this challenge, you will have to: - [ ] Write an acceptance test for clicking on a single show and seeing some details about it. - [ ] Implement a `show` interface by defining a RESTful route, controller, and view to show a single Show. - [ ] Make sure that clicking on a single show will take you to the `show` page for any given show. ## Tips - You can lay out the show page however you think best: feel free to use any of the database fields to populate it. Remember to test for them! - Make sure you're following the [Rails conventions](https://guides.rubyonrails.org/routing.html#crud-verbs-and-actions)!
process
showing a single show in the previous project you built a tested styled homepage showing database backed shows deployed with docker on heroku in this project we re going to enrich the basic show data in our database by leveraging the open movie database api in this challenge we ll set up a show route to display details about a single show here s a user story as a user so i can dive deeper into details about my shows i want to click on a show and see more details about it to complete this challenge you will have to write an acceptance test for clicking on a single show and seeing some details about it implement a show interface by defining a restful route controller and view to show a single show make sure that clicking on a single show will take you to the show page for any given show tips you can lay out the show page however you think best feel free to use any of the database fields to populate it remember to test for them make sure you re following the
1
158,396
12,413,439,260
IssuesEvent
2020-05-22 12:43:22
eclipse/openj9
https://api.github.com/repos/eclipse/openj9
opened
java/lang/ref/FinalizeOverride timeout running jstack
test failure
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_Nightly/61 java/lang/ref/FinalizeOverride.java ``` 00:41:03 ACTION: main -- Error. Agent error: java.lang.Exception: Agent 35 timed out with a timeout of 960 seconds; check console log for any additional details 00:41:03 REASON: Assumed action based on file name: run main FinalizeOverride 00:41:03 TIME: 962.632 seconds 00:41:03 messages: 00:41:03 command: main FinalizeOverride 00:41:03 reason: Assumed action based on file name: run main FinalizeOverride 00:41:03 Mode: agentvm 00:41:03 Agent id: 35 00:41:03 Timeout refired 960 times 00:41:03 Timeout information: 00:41:03 Running jstack on process 31842 00:41:03 2020-05-22T01:39:09.394211345 00:41:03 Virtual machine: 31842 JVM information: 00:41:03 JRE 11 Linux ppc64le-64-Bit Compressed References 20200521_395 (JIT enabled, AOT enabled) 00:41:03 OpenJ9 - 561026fca 00:41:03 OMR - 00689235c 00:41:03 JCL - cfce36dfff5 based on jdk-11.0.8+3 00:41:03 00:41:03 "main" prio=5 Id=1 WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.join(Thread.java:716) 00:41:03 - locked java.lang.Thread@4043657b 00:41:03 at app//com.sun.javatest.regtest.agent.MainActionHelper.runClass(MainActionHelper.java:184) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.doMain(AgentServer.java:301) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.run(AgentServer.java:232) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.main(AgentServer.java:69) 00:41:03 00:41:03 "JIT Compilation Thread-000" prio=10 Id=3 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-001 Suspended" prio=10 Id=4 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-002 Suspended" prio=10 Id=5 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-003 Suspended" prio=10 Id=6 RUNNABLE 00:41:03 00:41:03 "JIT Compilation 
Thread-004 Suspended" prio=10 Id=7 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-005 Suspended" prio=10 Id=8 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-006 Suspended" prio=10 Id=9 RUNNABLE 00:41:03 00:41:03 "JIT Diagnostic Compilation Thread-007 Suspended" prio=10 Id=10 RUNNABLE 00:41:03 00:41:03 "JIT-SamplerThread" prio=10 Id=11 TIMED_WAITING 00:41:03 00:41:03 "IProfiler" prio=5 Id=12 RUNNABLE 00:41:03 00:41:03 "Common-Cleaner" prio=8 Id=2 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:138) 00:41:03 - locked java.lang.ref.ReferenceQueue@e8d74150 00:41:03 at java.base@11.0.8-internal/jdk.internal.ref.CleanerImpl.run(CleanerImpl.java:148) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 at java.base@11.0.8-internal/jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:134) 00:41:03 00:41:03 "Concurrent Mark Helper" prio=1 Id=13 RUNNABLE 00:41:03 00:41:03 "Finalizer thread" prio=5 Id=14 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=15 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=16 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=17 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=18 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=19 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=20 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=21 RUNNABLE 00:41:03 00:41:03 "Attach API wait loop" prio=10 Id=24 RUNNABLE 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.IPC.waitSemaphore(Native Method) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.CommonDirectory.waitSemaphore(CommonDirectory.java:259) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.WaitLoop.waitForNotification(WaitLoop.java:66) 00:41:03 at 
java.base@11.0.8-internal/openj9.internal.tools.attach.target.WaitLoop.run(WaitLoop.java:154) 00:41:03 00:41:03 "pool-1-thread-1" prio=5 Id=25 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/jdk.internal.misc.Unsafe.park(Native Method) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 00:41:03 "AgentVMThread" prio=5 Id=26 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.sleep(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.sleep(Thread.java:966) 00:41:03 at app//FinalizeOverride.test(FinalizeOverride.java:75) 00:41:03 at app//FinalizeOverride.main(FinalizeOverride.java:52) 00:41:03 at java.base@11.0.8-internal/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 00:41:03 at java.base@11.0.8-internal/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 00:41:03 at java.base@11.0.8-internal/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 00:41:03 at 
java.base@11.0.8-internal/java.lang.reflect.Method.invoke(Method.java:566) 00:41:03 at app//com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:298) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 00:41:03 "Attachment 44538" prio=10 Id=27 RUNNABLE 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.dumpAllThreadsImpl(Native Method) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.getThreadInfo(DiagnosticUtils.java:233) 00:41:03 at app//openj9.internal.tools.attach.target.DiagnosticUtils$$Lambda$66/00000000F4005C80.apply(Unknown Source) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.executeDiagnosticCommand(DiagnosticUtils.java:169) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.Attachment.doCommand(Attachment.java:249) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.Attachment.run(Attachment.java:160) 00:41:03 00:41:03 "file lock watchdog" prio=10 Id=28 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.util.TimerThread.mainLoop(Timer.java:553) 00:41:03 - locked java.util.TaskQueue@793acdc7 00:41:03 at java.base@11.0.8-internal/java.util.TimerThread.run(Timer.java:506) 00:41:03 00:41:03 00:41:03 --- Timeout information end. ```
1.0
java/lang/ref/FinalizeOverride timeout running jstack - https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_Nightly/61 java/lang/ref/FinalizeOverride.java ``` 00:41:03 ACTION: main -- Error. Agent error: java.lang.Exception: Agent 35 timed out with a timeout of 960 seconds; check console log for any additional details 00:41:03 REASON: Assumed action based on file name: run main FinalizeOverride 00:41:03 TIME: 962.632 seconds 00:41:03 messages: 00:41:03 command: main FinalizeOverride 00:41:03 reason: Assumed action based on file name: run main FinalizeOverride 00:41:03 Mode: agentvm 00:41:03 Agent id: 35 00:41:03 Timeout refired 960 times 00:41:03 Timeout information: 00:41:03 Running jstack on process 31842 00:41:03 2020-05-22T01:39:09.394211345 00:41:03 Virtual machine: 31842 JVM information: 00:41:03 JRE 11 Linux ppc64le-64-Bit Compressed References 20200521_395 (JIT enabled, AOT enabled) 00:41:03 OpenJ9 - 561026fca 00:41:03 OMR - 00689235c 00:41:03 JCL - cfce36dfff5 based on jdk-11.0.8+3 00:41:03 00:41:03 "main" prio=5 Id=1 WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.join(Thread.java:716) 00:41:03 - locked java.lang.Thread@4043657b 00:41:03 at app//com.sun.javatest.regtest.agent.MainActionHelper.runClass(MainActionHelper.java:184) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.doMain(AgentServer.java:301) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.run(AgentServer.java:232) 00:41:03 at app//com.sun.javatest.regtest.agent.AgentServer.main(AgentServer.java:69) 00:41:03 00:41:03 "JIT Compilation Thread-000" prio=10 Id=3 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-001 Suspended" prio=10 Id=4 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-002 Suspended" prio=10 Id=5 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-003 Suspended" 
prio=10 Id=6 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-004 Suspended" prio=10 Id=7 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-005 Suspended" prio=10 Id=8 RUNNABLE 00:41:03 00:41:03 "JIT Compilation Thread-006 Suspended" prio=10 Id=9 RUNNABLE 00:41:03 00:41:03 "JIT Diagnostic Compilation Thread-007 Suspended" prio=10 Id=10 RUNNABLE 00:41:03 00:41:03 "JIT-SamplerThread" prio=10 Id=11 TIMED_WAITING 00:41:03 00:41:03 "IProfiler" prio=5 Id=12 RUNNABLE 00:41:03 00:41:03 "Common-Cleaner" prio=8 Id=2 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:138) 00:41:03 - locked java.lang.ref.ReferenceQueue@e8d74150 00:41:03 at java.base@11.0.8-internal/jdk.internal.ref.CleanerImpl.run(CleanerImpl.java:148) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 at java.base@11.0.8-internal/jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:134) 00:41:03 00:41:03 "Concurrent Mark Helper" prio=1 Id=13 RUNNABLE 00:41:03 00:41:03 "Finalizer thread" prio=5 Id=14 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=15 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=16 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=17 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=18 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=19 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=20 RUNNABLE 00:41:03 00:41:03 "GC Slave" prio=5 Id=21 RUNNABLE 00:41:03 00:41:03 "Attach API wait loop" prio=10 Id=24 RUNNABLE 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.IPC.waitSemaphore(Native Method) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.CommonDirectory.waitSemaphore(CommonDirectory.java:259) 00:41:03 at 
java.base@11.0.8-internal/openj9.internal.tools.attach.target.WaitLoop.waitForNotification(WaitLoop.java:66) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.WaitLoop.run(WaitLoop.java:154) 00:41:03 00:41:03 "pool-1-thread-1" prio=5 Id=25 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/jdk.internal.misc.Unsafe.park(Native Method) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) 00:41:03 at java.base@11.0.8-internal/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 00:41:03 "AgentVMThread" prio=5 Id=26 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.sleep(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.sleep(Thread.java:966) 00:41:03 at app//FinalizeOverride.test(FinalizeOverride.java:75) 00:41:03 at app//FinalizeOverride.main(FinalizeOverride.java:52) 00:41:03 at java.base@11.0.8-internal/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 00:41:03 at java.base@11.0.8-internal/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 00:41:03 at 
java.base@11.0.8-internal/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 00:41:03 at java.base@11.0.8-internal/java.lang.reflect.Method.invoke(Method.java:566) 00:41:03 at app//com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:298) 00:41:03 at java.base@11.0.8-internal/java.lang.Thread.run(Thread.java:836) 00:41:03 00:41:03 "Attachment 44538" prio=10 Id=27 RUNNABLE 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.dumpAllThreadsImpl(Native Method) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.getThreadInfo(DiagnosticUtils.java:233) 00:41:03 at app//openj9.internal.tools.attach.target.DiagnosticUtils$$Lambda$66/00000000F4005C80.apply(Unknown Source) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.DiagnosticUtils.executeDiagnosticCommand(DiagnosticUtils.java:169) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.Attachment.doCommand(Attachment.java:249) 00:41:03 at java.base@11.0.8-internal/openj9.internal.tools.attach.target.Attachment.run(Attachment.java:160) 00:41:03 00:41:03 "file lock watchdog" prio=10 Id=28 TIMED_WAITING 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Native Method) 00:41:03 at java.base@11.0.8-internal/java.lang.Object.wait(Object.java:221) 00:41:03 at java.base@11.0.8-internal/java.util.TimerThread.mainLoop(Timer.java:553) 00:41:03 - locked java.util.TaskQueue@793acdc7 00:41:03 at java.base@11.0.8-internal/java.util.TimerThread.run(Timer.java:506) 00:41:03 00:41:03 00:41:03 --- Timeout information end. ```
non_process
java lang ref finalizeoverride timeout running jstack java lang ref finalizeoverride java action main error agent error java lang exception agent timed out with a timeout of seconds check console log for any additional details reason assumed action based on file name run main finalizeoverride time seconds messages command main finalizeoverride reason assumed action based on file name run main finalizeoverride mode agentvm agent id timeout refired times timeout information running jstack on process virtual machine jvm information jre linux bit compressed references jit enabled aot enabled omr jcl based on jdk main prio id waiting at java base internal java lang object wait native method at java base internal java lang object wait object java at java base internal java lang thread join thread java locked java lang thread at app com sun javatest regtest agent mainactionhelper runclass mainactionhelper java at app com sun javatest regtest agent agentserver domain agentserver java at app com sun javatest regtest agent agentserver run agentserver java at app com sun javatest regtest agent agentserver main agentserver java jit compilation thread prio id runnable jit compilation thread suspended prio id runnable jit compilation thread suspended prio id runnable jit compilation thread suspended prio id runnable jit compilation thread suspended prio id runnable jit compilation thread suspended prio id runnable jit compilation thread suspended prio id runnable jit diagnostic compilation thread suspended prio id runnable jit samplerthread prio id timed waiting iprofiler prio id runnable common cleaner prio id timed waiting at java base internal java lang object wait native method at java base internal java lang object wait object java at java base internal java lang ref referencequeue remove referencequeue java locked java lang ref referencequeue at java base internal jdk internal ref cleanerimpl run cleanerimpl java at java base internal java lang thread run thread java at 
java base internal jdk internal misc innocuousthread run innocuousthread java concurrent mark helper prio id runnable finalizer thread prio id runnable gc slave prio id runnable gc slave prio id runnable gc slave prio id runnable gc slave prio id runnable gc slave prio id runnable gc slave prio id runnable gc slave prio id runnable attach api wait loop prio id runnable at java base internal internal tools attach target ipc waitsemaphore native method at java base internal internal tools attach target commondirectory waitsemaphore commondirectory java at java base internal internal tools attach target waitloop waitfornotification waitloop java at java base internal internal tools attach target waitloop run waitloop java pool thread prio id timed waiting at java base internal jdk internal misc unsafe park native method at java base internal java util concurrent locks locksupport parknanos locksupport java at java base internal java util concurrent locks abstractqueuedsynchronizer conditionobject awaitnanos abstractqueuedsynchronizer java at java base internal java util concurrent scheduledthreadpoolexecutor delayedworkqueue take scheduledthreadpoolexecutor java at java base internal java util concurrent scheduledthreadpoolexecutor delayedworkqueue take scheduledthreadpoolexecutor java at java base internal java util concurrent threadpoolexecutor gettask threadpoolexecutor java at java base internal java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base internal java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base internal java lang thread run thread java agentvmthread prio id timed waiting at java base internal java lang thread sleep native method at java base internal java lang thread sleep thread java at app finalizeoverride test finalizeoverride java at app finalizeoverride main finalizeoverride java at java base internal jdk internal reflect nativemethodaccessorimpl native method at java base 
internal jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base internal jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base internal java lang reflect method invoke method java at app com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper java at java base internal java lang thread run thread java attachment prio id runnable at java base internal internal tools attach target diagnosticutils dumpallthreadsimpl native method at java base internal internal tools attach target diagnosticutils getthreadinfo diagnosticutils java at app internal tools attach target diagnosticutils lambda apply unknown source at java base internal internal tools attach target diagnosticutils executediagnosticcommand diagnosticutils java at java base internal internal tools attach target attachment docommand attachment java at java base internal internal tools attach target attachment run attachment java file lock watchdog prio id timed waiting at java base internal java lang object wait native method at java base internal java lang object wait object java at java base internal java util timerthread mainloop timer java locked java util taskqueue at java base internal java util timerthread run timer java timeout information end
0
56,260
8,059,403,478
IssuesEvent
2018-08-02 21:50:33
gatsbyjs/gatsby
https://api.github.com/repos/gatsbyjs/gatsby
closed
[v2] Document how plugins can make sure they're compatible with v2
๐Ÿท type: documentation
Most plugins shouldn't have to do anything to upgrade to v2, and that needs to be clear. Also worth considering: - how do we signal version compatibility in the plugin library? - Kyle said they already updated the plugins in the repo to say “2.0”, how can we make this either automated OR make sure the authors do it so it doesn't have to be done manually by someone internal?
1.0
[v2] Document how plugins can make sure they're compatible with v2 - Most plugins shouldn't have to do anything to upgrade to v2, and that needs to be clear. Also worth considering: - how do we signal version compatibility in the plugin library? - Kyle said they already updated the plugins in the repo to say “2.0”, how can we make this either automated OR make sure the authors do it so it doesn't have to be done manually by someone internal?
non_process
document how plugins can make sure they re compatible with most plugins shouldn t have to do anything to upgrade to and that needs to be clear also worth considering how do we signal version compatibility in the plugin library kyle said they already updated the plugins in the repo to say how can we make this either automated or make sure the authors do it so it doesn t have to be done manually by someone internal
0
13,142
15,559,062,797
IssuesEvent
2021-03-16 11:01:52
CGAL/cgal
https://api.github.com/repos/CGAL/cgal
closed
Eigen error in "registration_with_OpenGR"
Pkg::Point_set_processing_3
I reopen [issue 5167](https://github.com/CGAL/cgal/issues/5167). ## Issue Details When compiling the `registration_with_OpenGR` example of the `Point_set_processing_3` package, I got 36 Eigen errors. I don't have compiled Eigen because it's a header-only library. OpenGR was successfully built. Do you know where it could come from? ## Environment - Operating system: Windows 10, 64 bits - Compiler: MSVC 14.2 - Release or debug mode: Release - CGAL version: 5.1.1 - Boost version: 1.74.0 - Eigen version: 3.3.9 ## Output errors ``` Description File 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Eigen::internal::dense_xpr_base': too few template arguments C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'ReturnType': is not a member of 'Eigen::ScalarBinaryOpTraits<float,double,Eigen::internal::scalar_product_op<ScalarA,ScalarB>>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 
'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\MatrixBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\MatrixCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\MatrixBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<XprType>' C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h 'type': is not a member of 'Eigen::internal::plain_matrix_type_dense<T,Eigen::internal::traits<T>::XprKind,0>' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h 'type': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h missing type specifier - int assumed. 
Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h syntax error: missing '>' before identifier 'Scalar' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h type 'unknown-type' unexpected C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h unexpected token(s) preceding ';' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h ```
1.0
Eigen error in "registration_with_OpenGR" - I reopen [issue 5167](https://github.com/CGAL/cgal/issues/5167). ## Issue Details When compiling the `registration_with_OpenGR` example of the `Point_set_processing_3` package, I got 36 Eigen errors. I don't have compiled Eigen because it's a header-only library. OpenGR was successfully built. Do you know where it could come from? ## Environment - Operating system: Windows 10, 64 bits - Compiler: MSVC 14.2 - Release or debug mode: Release - CGAL version: 5.1.1 - Boost version: 1.74.0 - Eigen version: 3.3.9 ## Output errors ``` Description File 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Eigen::internal::dense_xpr_base': too few template arguments C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'ReturnType': is not a member of 'Eigen::ScalarBinaryOpTraits<float,double,Eigen::internal::scalar_product_op<ScalarA,ScalarB>>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is 
not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\DenseBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\MatrixBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\CommonCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\plugins\MatrixCwiseBinaryOps.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\MatrixBase.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<Derived>' C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': is not a member of 'Eigen::internal::traits<XprType>' C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h 'Scalar': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h 'type': is not a member of 'Eigen::internal::plain_matrix_type_dense<T,Eigen::internal::traits<T>::XprKind,0>' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h 'type': unknown override specifier C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h missing type specifier - int assumed. 
Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\Product.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\DenseCoeffsBase.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h missing type specifier - int assumed. Note: C++ does not support default-int C:\dev\eigen-3.3.9\Eigen\src\Core\Block.h syntax error: missing '>' before identifier 'Scalar' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h type 'unknown-type' unexpected C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h unexpected token(s) preceding ';' C:\dev\eigen-3.3.9\Eigen\src\Core\util\XprHelper.h ```
process
eigen error in registration with opengr i reopen issue details when compiling the registration with opengr example of the point set processing package i got eigen errors i don t have compiled eigen because it s a header only library opengr was successfully built do you know where it could come from environment operating system windows bits compiler msvc release or debug mode release cgal version boost version eigen version output errors description file scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h eigen internal dense xpr base too few template arguments c dev eigen eigen src core product h returntype is not a member of eigen scalarbinaryoptraits c dev eigen eigen src core product h scalar is not a member of eigen internal traits c dev eigen eigen src core densecoeffsbase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core util xprhelper h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core densebase h scalar is not a member of eigen internal traits c dev eigen eigen src core matrixbase h scalar is not a member of eigen internal traits c dev eigen eigen src plugins commoncwisebinaryops h scalar is not a member of eigen internal traits c dev eigen eigen src plugins commoncwisebinaryops h scalar 
is not a member of eigen internal traits c dev eigen eigen src plugins commoncwisebinaryops h scalar is not a member of eigen internal traits c dev eigen eigen src plugins commoncwisebinaryops h scalar is not a member of eigen internal traits c dev eigen eigen src plugins matrixcwisebinaryops h scalar is not a member of eigen internal traits c dev eigen eigen src core matrixbase h scalar is not a member of eigen internal traits c dev eigen eigen src core product h scalar is not a member of eigen internal traits c dev eigen eigen src core product h scalar is not a member of eigen internal traits c dev eigen eigen src core block h scalar unknown override specifier c dev eigen eigen src core product h scalar unknown override specifier c dev eigen eigen src core densecoeffsbase h scalar unknown override specifier c dev eigen eigen src core block h type is not a member of eigen internal plain matrix type dense xprkind c dev eigen eigen src core util xprhelper h type unknown override specifier c dev eigen eigen src core util xprhelper h missing type specifier int assumed note c does not support default int c dev eigen eigen src core product h missing type specifier int assumed note c does not support default int c dev eigen eigen src core densecoeffsbase h missing type specifier int assumed note c does not support default int c dev eigen eigen src core util xprhelper h missing type specifier int assumed note c does not support default int c dev eigen eigen src core block h syntax error missing before identifier scalar c dev eigen eigen src core util xprhelper h type unknown type unexpected c dev eigen eigen src core util xprhelper h unexpected token s preceding c dev eigen eigen src core util xprhelper h
1
1,698
2,660,188,241
IssuesEvent
2015-03-19 03:38:12
NREL/OpenStudio
https://api.github.com/repos/NREL/OpenStudio
closed
Every simulation will have at least 4 errors (Bugzilla #1068)
component - Code severity - Major Bug
On 2013-02-11 11:29:53, @axelstudios wrote: > With 0.10.3, every simulation is generating four errors during ModelToIDF: > > Unknown IddObjectType: 'OS:ZoneAirContaminantBalance' > Unknown IddObjectType: 'OS:ZoneCapacitanceMultiplier:ResearchSpecial' > Unknown IddObjectType: 'OS:ZoneAirContaminantBalance' > Unknown IddObjectType: 'OS:ZoneCapacitanceMultiplier:ResearchSpecial' On 2013-02-11 12:19:51, @axelstudios wrote: > With commit 10936, I added no-ops for these IDD objects in the forward translator. That will work for now, but hopefully we can figure out why these objects have started to be inserted in every new/saved osm.
1.0
Every simulation will have at least 4 errors (Bugzilla #1068) - On 2013-02-11 11:29:53, @axelstudios wrote: > With 0.10.3, every simulation is generating four errors during ModelToIDF: > > Unknown IddObjectType: 'OS:ZoneAirContaminantBalance' > Unknown IddObjectType: 'OS:ZoneCapacitanceMultiplier:ResearchSpecial' > Unknown IddObjectType: 'OS:ZoneAirContaminantBalance' > Unknown IddObjectType: 'OS:ZoneCapacitanceMultiplier:ResearchSpecial' On 2013-02-11 12:19:51, @axelstudios wrote: > With commit 10936, I added no-ops for these IDD objects in the forward translator. That will work for now, but hopefully we can figure out why these objects have started to be inserted in every new/saved osm.
non_process
every simulation will have at least errors bugzilla on axelstudios wrote with every simulation is generating four errors during modeltoidf unknown iddobjecttype os zoneaircontaminantbalance unknown iddobjecttype os zonecapacitancemultiplier researchspecial unknown iddobjecttype os zoneaircontaminantbalance unknown iddobjecttype os zonecapacitancemultiplier researchspecial on axelstudios wrote with commit i added no ops for these idd objects in the forward translator that will work for now but hopefully we can figure out why these objects have started to be inserted in every new saved osm
0
22,747
32,063,837,079
IssuesEvent
2023-09-24 23:48:01
h4sh5/npm-auto-scanner
https://api.github.com/repos/h4sh5/npm-auto-scanner
opened
@pandacss/dev 0.15.1 has 2 guarddog issues
npm-silent-process-execution
```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/cli-default.js:18428","message":"This package is silently executing another executable"},{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/cli-main.js:18312","message":"This package is silently executing another executable"}]}```
1.0
@pandacss/dev 0.15.1 has 2 guarddog issues - ```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/cli-default.js:18428","message":"This package is silently executing another executable"},{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/cli-main.js:18312","message":"This package is silently executing another executable"}]}```
process
pandacss dev has guarddog issues npm silent process execution n detached true n stdio ignore n unref location package dist cli default js message this package is silently executing another executable code import node child process spawn import node default execpath n detached true n stdio ignore n unref location package dist cli main js message this package is silently executing another executable
1
149,870
11,939,020,004
IssuesEvent
2020-04-02 14:37:30
ansible/awx
https://api.github.com/repos/ansible/awx
closed
tower_workflow_template does not accept a survey spec
component:awx_collection priority:high state:needs_test type:bug
##### ISSUE TYPE - Bug Report ##### SUMMARY The tower_workflow_template module does not accept a survey specification (in JSON or YAML format). ##### ENVIRONMENT * AWX version: 9.2.0 * AWX install method: Kubernetes 1.14.8 * Ansible version: 2.9.3 * Operating System: MacOS 10.15.1 * Web Browser: Chrome for MacOS 80.0.3987.100 ##### STEPS TO REPRODUCE * Create a Playbook for building a Workflow Template with a survey ```yaml --- - hosts: localhost gather_facts: false tasks: - name: Ensure Workflow Template exists tower_workflow_template: name: R Workflow description: A new workflow template schema: "{{ lookup('file', '{{ playbook_dir }}/surveys/workflow_schema.yml') }}" state: present survey_enabled: true survey: "{{ lookup('file', '{{ playbook_dir }}/surveys/survey_spec.json') }}" tower_username: admin tower_password: password tower_host: http://localhost:8052 tower_verify_ssl: false ``` * Create Survey Specification file (survey_spec.json) ```json { "name": "", "description": "", "spec": [ { "question_name": "Which AWS Account would you like to query?", "question_description": "Enter Account ID", "required": true, "type": "text", "variable": "account", "min": 0, "max": 20, "default": "", "choices": "", "new_question": true } ] } ``` * Launch the Playbook to build Workflow Template from AWX. ##### EXPECTED RESULTS * The job completes successfully without error. * A Workflow Template is created with the provided survey specification, the same way tower_job_template does. 
##### ACTUAL RESULTS Playbook always fails with error: `json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\n",` Full error: ``` { "module_stdout": "", "module_stderr": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 102, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.web_infrastructure.ansible_tower.tower_workflow_template', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_tower_workflow_template_payload_8a_qpqdy/ansible_tower_workflow_template_payload.zip/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py\", line 203, in <module>\n File \"/tmp/ansible_tower_workflow_template_payload_8a_qpqdy/ansible_tower_workflow_template_payload.zip/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py\", line 187, in main\n File \"/usr/local/lib/python3.6/site-packages/tower_cli/models/base.py\", line 720, in modify\n return self.write(pk, create_on_missing=create_on_missing, force_on_exists=True, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/tower_cli/models/base.py\", line 1191, in write\n survey_input = 
json.loads(survey_input.strip(' '))\n File \"/usr/lib64/python3.6/json/__init__.py\", line 354, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib64/python3.6/json/decoder.py\", line 339, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib64/python3.6/json/decoder.py\", line 355, in raw_decode\n obj, end = self.scan_once(s, idx)\njson.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\n", "exception": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 102, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1581716550.1960776-123130320192707/AnsiballZ_tower_workflow_template.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.web_infrastructure.ansible_tower.tower_workflow_template', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_tower_workflow_template_payload_8a_qpqdy/ansible_tower_workflow_template_payload.zip/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py\", line 203, in <module>\n File \"/tmp/ansible_tower_workflow_template_payload_8a_qpqdy/ansible_tower_workflow_template_payload.zip/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py\", line 187, in main\n File 
\"/usr/local/lib/python3.6/site-packages/tower_cli/models/base.py\", line 720, in modify\n return self.write(pk, create_on_missing=create_on_missing, force_on_exists=True, **kwargs)\n File \"/usr/local/lib/python3.6/site-packages/tower_cli/models/base.py\", line 1191, in write\n survey_input = json.loads(survey_input.strip(' '))\n File \"/usr/lib64/python3.6/json/__init__.py\", line 354, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib64/python3.6/json/decoder.py\", line 339, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib64/python3.6/json/decoder.py\", line 355, in raw_decode\n obj, end = self.scan_once(s, idx)\njson.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1, "ansible_facts": { "discovered_interpreter_python": "/usr/libexec/platform-python" }, "_ansible_no_log": false, "changed": false } ``` ##### ADDITIONAL INFORMATION * If the attributes `survey` and `survey_enabled` are not included in the module invocation, the Playbook creates the Workflow Template successfully. * This module is not consistent with similar modules (namely tower_job_template). The survey specification attribute for tower_workflow_template is `survey` not `survey_spec`, and the [module documentation for tower_workflow_template](https://docs.ansible.com/ansible/latest/modules/tower_workflow_template_module.html) does not specify a variable type for this attribute (tower_job_template specifies dict). * The same error occurs when providing either a YAML or JSON formatted survey specification. Though neither is specified in the module documentation, tower_job_template allows for either a JSON or YAML formatted survey spec. * Given the error, it seems that this module only expects JSON, and that should be noted in the documentation if true.
* The JSON code in `survey_spec.json` can be used with tower_job_template successfully using the same lookup plugin retrieval method.
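The decoder message in the traceback ("Expecting property name enclosed in double quotes: line 1 column 2 (char 1)") is characteristic of `json.loads` being handed a Python-dict-style string with single-quoted keys — which is what would happen if Ansible templating parses the looked-up file into a dict and the value is then stringified with `str()`/`repr()` before `tower_cli` calls `json.loads(survey_input)`. A minimal sketch (plain Python, not module code; the `survey` value is a hypothetical stand-in for the spec above) reproduces the failure and the round-trip that avoids it:

```python
import json

# Hypothetical survey spec mirroring the structure of survey_spec.json above.
survey = {"name": "", "description": "", "spec": [{"required": True, "type": "text"}]}

# Stringifying the dict with str() gives single-quoted keys...
python_repr = str(survey)

# ...which json.loads rejects at char 1 (the single quote right after "{").
try:
    json.loads(python_repr)
except json.JSONDecodeError as err:
    print(err)  # Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

# Serializing with json.dumps first round-trips cleanly.
assert json.loads(json.dumps(survey)) == survey
```

If this is indeed the failure mode, it would also be consistent with the observation that the same file works with `tower_job_template`, whose documentation declares the survey parameter as a dict.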
1.0
tower_workflow_template does not accept a survey spec
non_process
0
18,600
24,573,606,941
IssuesEvent
2022-10-13 10:32:18
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Complete mobile details need to be captured from Android and iOS devices
Feature request Process: Fixed Process: Tested dev
Currently, only mobile platform information is collected in the database, but this is not sufficient to identify the participant's device. Please collect at least the details below so that they are useful for debugging issues: 1. Mobile platform (iOS/Android) - already captured 2. Mobile device type (iPhone 11, iPhone 12, OnePlus 8, etc.) 3. Mobile device OS version @blnkumar-btc Please assign this ticket to the right developers
2.0
process
1
7,914
11,095,295,043
IssuesEvent
2019-12-16 08:47:18
mattermost/mattermost-developer-documentation
https://api.github.com/repos/mattermost/mattermost-developer-documentation
closed
Hidden Environment Variables are Unavailable in Builds Triggered by PRs from Forks
Process
Our build process utilizes hidden environment variables to protect private information like our AWS, GitHub, and YouTube API credentials. As a security precaution, these variables [are not available in builds that are triggered by a PR submitted from a fork of the repo](https://docs.travis-ci.com/user/pull-requests/#Pull-Requests-and-Security-Restrictions). We'll need to modify the build process to avoid executing the steps that require these variables in this case. See #77 for details.
1.0
process
1
14,988
18,785,053,174
IssuesEvent
2021-11-08 11:12:56
Leaflet/Leaflet
https://api.github.com/repos/Leaflet/Leaflet
closed
404 in Mobile Example
bug good first issue compatibility
- [โœ”๏ธ] I've looked at the [documentation](http://leafletjs.com/reference.html) to make sure the behavior is documented and expected - [โœ”๏ธ] I'm sure this is a Leaflet code issue, not an issue with my own code nor with the framework I'm using (Cordova, Ionic, Angular, Reactโ€ฆ) - [โœ”๏ธ] I've searched through the issues to make sure it's not yet reported **Steps to reproduce** Steps to reproduce the behavior: -step1: Visit: https://leafletjs.com/examples/mobile/example.html -step2: Open your browser developer tools and look at the Javascript console. -step3: Locate the 404 message saying: GEThttps://api.mapbox.com/styles/v1/mapbox/streets-v11/tiles/-1/0/0?access_token=pk.eyJ1IjoibWFwYm94IiwiYSI6ImNpejY4NXVycTA2emYycXBndHRqcmZ3N3gifQ.rJcFIG214AriISLbB6B5aw [HTTP/1.1 404 Not Found 26ms] **Expected behavior** The expected behavior would be no 404 errors with the Mapbox API such as in the quickstart example found at: https://leafletjs.com/examples/quick-start/example.html **Current behavior** The current behavior displays a 404 error complaining that the API URL cannot be found which I also believe is causing a significant delay on the map loading while it somehow recovers from the crash/404. **Environment** - Leaflet version: leaflet@1.7.1 - Browser (with version): Firefox 87.0 - OS/Platform (with version): Ubuntu 20.04 **Additional context** No further context Example: https://leafletjs.com/examples/mobile/example.html - [โœ”๏ธ] this example is as simple as possible - [โœ”๏ธ] this example does not rely on any third party code
True
non_process
0
230,599
7,612,022,746
IssuesEvent
2018-05-01 16:00:57
AffiliateWP/affiliatewp-checkout-referrals
https://api.github.com/repos/AffiliateWP/affiliatewp-checkout-referrals
closed
Checkout referrals are not being generated
Has HS to-notify Priority: High bug
Referrals are not being generated properly on checkout. Ticket: https://secure.helpscout.net/conversation/570770402/81272?folderId=634609.
1.0
non_process
0
15,659
19,846,975,703
IssuesEvent
2022-01-21 07:54:14
ooi-data/RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam
https://api.github.com/repos/ooi-data/RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam
opened
๐Ÿ›‘ Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T07:54:14.001509. ## Details Flow name: `RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, 
fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
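The final frame pinpoints the failure: `zip(*indexer)` yields nothing when the indexer produces no chunk selections (e.g. when a zero-length slice is appended), so the three-way unpacking fails. The core Python behavior, isolated from zarr (the empty `indexer` here is a hypothetical stand-in, not the actual object):

```python
indexer = []  # stands in for an indexer that produced no chunk selections

try:
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)
```

A plausible guard, then, is to skip the append entirely when the new data slice has zero length, rather than letting it reach the zarr selection machinery.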
1.0
๐Ÿ›‘ Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T07:54:14.001509. ## Details Flow name: `RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File 
"/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__ return self.func(self.array) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask data = np.asarray(data, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, 
fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed vadcp velocity beam task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core 
py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
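The `ValueError: not enough values to unpack (expected 3, got 0)` in the record above is raised on zarr's `lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)` line. A minimal pure-Python sketch of why an empty indexer produces exactly this error (the function name and the empty-selection fallback here are illustrative, not zarr's actual fix):

```python
def unpack_chunk_projections(indexer):
    """Transpose (chunk_coords, chunk_selection, out_selection) triples.

    Mirrors the failing line in zarr.core._get_selection:
        lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    When the indexer yields no chunks, zip(*[]) produces nothing, so the
    3-way unpack raises "not enough values to unpack (expected 3, got 0)".
    This sketch falls back to empty lists instead of propagating it.
    """
    try:
        coords, chunk_sels, out_sels = zip(*indexer)
    except ValueError:  # empty selection -> no chunks to read
        return [], [], []
    return list(coords), list(chunk_sels), list(out_sels)
```

Guarding against zero-length selections before calling `Array.append` (or handling this `ValueError`) is one way a pipeline like the one in the traceback could avoid the crash.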
22,015
30,520,267,247
IssuesEvent
2023-07-19 07:35:22
q191201771/lal
https://api.github.com/repos/q191201771/lal
closed
OnAvPacket callback
#Question *In process
Hello, thanks for the project. Is there a way to specify onAvPacket callback? In addition, I could not find a documentation on how to start the RTMP server without config file.
1.0
OnAvPacket callback - Hello, thanks for the project. Is there a way to specify onAvPacket callback? In addition, I could not find a documentation on how to start the RTMP server without config file.
process
onavpacket callback hello thanks for the project is there a way to specify onavpacket callback in addition i could not find a documentation on how to start the rtmp server without config file
1
3,334
6,460,450,411
IssuesEvent
2017-08-16 04:10:09
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Glitch (Bug) Reporting - Where? Who?
discussion type: process
Following up from testing group forum thread: https://forums.triplea-game.org/topic/268/volunteers-needed-early-release-testing-group/13 The question is raised: where to report game glitches? Candidate choices are to keep using github issues or to use the nodeBB forum. -------------- Requirements from a game dev point of view: * Need to be able to see which bugs are open and be able to 'close' bugs to remove them from that view. Ideally would be able to track status as well (assignee + in-progress) ---------------------- ## Problems to Solve * Lack of process clarity for bug reports, where to report them. * Lack of decisive bug reporting tool/instructions. We're close to having this, but the bug reporting links are not all working across all game versions, the instructions are not necessarily decisive with a simple and minimal: "this is how you submit bug reports". ---------------------- ## Some Considerations * Minimal process overhead, we don't want anyone responsible for hand moving content. Ideally we would be able to enlist additional support and not just rely on admins alone for managing bugs. * Tooling integration, github issues and PRs link to each other -------------------- To discuss: if github issues should be the final home for where we track bugs, or if we want to consolidate to the NodeBB forum, and we should discuss the pro's/cons and implications of the migration so we know we are doing something worthwhile.
1.0
Glitch (Bug) Reporting - Where? Who? - Following up from testing group forum thread: https://forums.triplea-game.org/topic/268/volunteers-needed-early-release-testing-group/13 The question is raised: where to report game glitches? Candidate choices are to keep using github issues or to use the nodeBB forum. -------------- Requirements from a game dev point of view: * Need to be able to see which bugs are open and be able to 'close' bugs to remove them from that view. Ideally would be able to track status as well (assignee + in-progress) ---------------------- ## Problems to Solve * Lack of process clarity for bug reports, where to report them. * Lack of decisive bug reporting tool/instructions. We're close to having this, but the bug reporting links are not all working across all game versions, the instructions are not necessarily decisive with a simple and minimal: "this is how you submit bug reports". ---------------------- ## Some Considerations * Minimal process overhead, we don't want anyone responsible for hand moving content. Ideally we would be able to enlist additional support and not just rely on admins alone for managing bugs. * Tooling integration, github issues and PRs link to each other -------------------- To discuss: if github issues should be the final home for where we track bugs, or if we want to consolidate to the NodeBB forum, and we should discuss the pro's/cons and implications of the migration so we know we are doing something worthwhile.
process
glitch bug reporting where who following up from testing group forum thread the question is raised where to report game glitches candidate choices are to keep using github issues or to use the nodebb forum requirements from a game dev point of view need to be able to see which bugs are open and be able to close bugs to remove them from that view ideally would be able to track status as well assignee in progress problems to solve lack of process clarity for bug reports where to report them lack of decisive bug reporting tool instructions we re close to having this but the bug reporting links are not all working across all game versions the instructions are not necessarily decisive with a simple and minimal this is how you submit bug reports some considerations minimal process overhead we don t want anyone responsible for hand moving content ideally we would be able to enlist additional support and not just rely on admins alone for managing bugs tooling integration github issues and prs link to each other to discuss if github issues should be the final home for where we track bugs or if we want to consolidate to the nodebb forum and we should discuss the pro s cons and implications of the migration so we know we are doing something worthwhile
1
2,353
5,164,011,827
IssuesEvent
2017-01-17 09:13:42
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
closed
[subtitles] [eng] Jean-Luc Mélenchon dénonce l'évasion fiscale au Parlement européen
Language: English Process: [6] Approved
# Video title Jean-Luc Mélenchon dénonce l'évasion fiscale au Parlement européen # URL https://www.youtube.com/watch?v=Jkojd3SskSY Youtube subtitle language Anglais Duration 1:14 URL subtitles https://www.youtube.com/timedtext_editor?ref=player&tab=captions&v=Jkojd3SskSY&lang=en&action_mde_edit_form=1&ui=hd&bl=vmp
1.0
[subtitles] [eng] Jean-Luc Mélenchon dénonce l'évasion fiscale au Parlement européen - # Video title Jean-Luc Mélenchon dénonce l'évasion fiscale au Parlement européen # URL https://www.youtube.com/watch?v=Jkojd3SskSY Youtube subtitle language Anglais Duration 1:14 URL subtitles https://www.youtube.com/timedtext_editor?ref=player&tab=captions&v=Jkojd3SskSY&lang=en&action_mde_edit_form=1&ui=hd&bl=vmp
process
jean luc mélenchon dénonce l évasion fiscale au parlement européen video title jean luc mélenchon dénonce l évasion fiscale au parlement européen url youtube subtitle language anglais duration url subtitles
1
19,420
25,565,598,035
IssuesEvent
2022-11-30 14:05:43
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Support more tail-sampling scenarios
enhancement priority:p3 spec:trace release:after-ga processor/tailsampling
**Is your feature request related to a problem? Please describe.** Current tail-sampling policies can only support simple scenarios, for instance: ```yaml policies: [ { type: numeric_attribute, numeric_attribute: { key: key1, min_value: 50, max_value: 100 }, }, { type: string_attribute, string_attribute: { key: key2, values: [value1, value2] }, }, { type: rate_limiting, rate_limiting: { spans_per_second: 35 }, }, ] ``` It means if a tracing data matches one of these policies, it will be marked as sampled. I think it should support complicated policies to obtain more benefits from tail-sampling. Here is the scenario. I wouldn't say I like to use a global sample rate to make sampling decide since it can miss important traces that we are interested in, and those traces are rare, such as traces with high latencies. Therefore, I need some policies to filter out those traces, other traces which not match those policies will be decided by a global sample rate. Example policies: a trace will be sampled once it matches these policies: - duration > 10s **OR** - (http.latency > 1s **AND** rate_limit = 1000 QPS) **OR** - (error = true **AND** sample_rate = 1%) **OR** - sample_rate = 0.01% In this example, all traces that duration is higher than 10s will be sampled. Traces that HTTP latency is higher than 1s will be sampled with 1000 QPS rate limiter. Error traces have a 1% sample rate. Other traces have a 0.01% sample rate. **Describe the solution you'd like** Extend tail-sampling policies. 
For instance: ```yaml # type: expression # operator: OR policies: - type: numeric_attribute numeric_attribute: { key: duration, min_value: 10000 } - type: expression operator: AND policies: - type: numeric_attribute numeric_attribute: { key: http.latency, min_value: 1000 } - type: rate_limiting rate_limiting: { spans_per_second: 1000 } - type: expression operator: AND policies: - type: string_attribute string_attribute: { key: error, values: [true] } - type: probabilistic_sampler sampling_percentage: 1 - type: probabilistic_sampler sampling_percentage: 0.01 ``` Also, it will allow complicated attribute matching become reality, like `app=foo && error=true`
1.0
Support more tail-sampling scenarios - **Is your feature request related to a problem? Please describe.** Current tail-sampling policies can only support simple scenarios, for instance: ```yaml policies: [ { type: numeric_attribute, numeric_attribute: { key: key1, min_value: 50, max_value: 100 }, }, { type: string_attribute, string_attribute: { key: key2, values: [value1, value2] }, }, { type: rate_limiting, rate_limiting: { spans_per_second: 35 }, }, ] ``` It means if a tracing data matches one of these policies, it will be marked as sampled. I think it should support complicated policies to obtain more benefits from tail-sampling. Here is the scenario. I wouldn't say I like to use a global sample rate to make sampling decide since it can miss important traces that we are interested in, and those traces are rare, such as traces with high latencies. Therefore, I need some policies to filter out those traces, other traces which not match those policies will be decided by a global sample rate. Example policies: a trace will be sampled once it matches these policies: - duration > 10s **OR** - (http.latency > 1s **AND** rate_limit = 1000 QPS) **OR** - (error = true **AND** sample_rate = 1%) **OR** - sample_rate = 0.01% In this example, all traces that duration is higher than 10s will be sampled. Traces that HTTP latency is higher than 1s will be sampled with 1000 QPS rate limiter. Error traces have a 1% sample rate. Other traces have a 0.01% sample rate. **Describe the solution you'd like** Extend tail-sampling policies. 
For instance: ```yaml # type: expression # operator: OR policies: - type: numeric_attribute numeric_attribute: { key: duration, min_value: 10000 } - type: expression operator: AND policies: - type: numeric_attribute numeric_attribute: { key: http.latency, min_value: 1000 } - type: rate_limiting rate_limiting: { spans_per_second: 1000 } - type: expression operator: AND policies: - type: string_attribute string_attribute: { key: error, values: [true] } - type: probabilistic_sampler sampling_percentage: 1 - type: probabilistic_sampler sampling_percentage: 0.01 ``` Also, it will allow complicated attribute matching become reality, like `app=foo && error=true`
process
support more tail sampling scenarios is your feature request related to a problem please describe current tail sampling policies can only support simple scenarios for instance yaml policies type numeric attribute numeric attribute key min value max value type string attribute string attribute key values type rate limiting rate limiting spans per second it means if a tracing data matches one of these policies it will be marked as sampled i think it should support complicated policies to obtain more benefits from tail sampling here is the scenario i wouldn t say i like to use a global sample rate to make sampling decide since it can miss important traces that we are interested in and those traces are rare such as traces with high latencies therefore i need some policies to filter out those traces other traces which not match those policies will be decided by a global sample rate example policies a trace will be sampled once it matches these policies duration or http latency and rate limit qps or error true and sample rate or sample rate in this example all traces that duration is higher than will be sampled traces that http latency is higher than will be sampled with qps rate limiter error traces have a sample rate other traces have a sample rate describe the solution you d like extend tail sampling policies for instance yaml type expression operator or policies type numeric attribute numeric attribute key duration min value type expression operator and policies type numeric attribute numeric attribute key http latency min value type rate limiting rate limiting spans per second type expression operator and policies type string attribute string attribute key error values type probabilistic sampler sampling percentage type probabilistic sampler sampling percentage also it will allow complicated attribute matching become reality like app foo error true
1
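The nested `expression` policies proposed in the record above can be prototyped in a few lines. This is a hypothetical sketch of the evaluation semantics only — rate limiting and probabilistic sampling are stateful and omitted here — not the actual tailsamplingprocessor implementation:

```python
def evaluate(policy, trace):
    """Evaluate a nested tail-sampling policy tree against a trace dict.

    'expression' nodes combine child policies with AND/OR, mirroring the
    YAML proposal above; leaf types check trace attributes.  Keys such as
    'key', 'min_value', and 'values' follow the issue's config names.
    """
    ptype = policy["type"]
    if ptype == "expression":
        results = (evaluate(p, trace) for p in policy["policies"])
        return all(results) if policy["operator"] == "AND" else any(results)
    if ptype == "numeric_attribute":
        value = trace.get(policy["key"])
        return value is not None and value >= policy["min_value"]
    if ptype == "string_attribute":
        return trace.get(policy["key"]) in policy["values"]
    raise ValueError(f"unknown policy type: {ptype}")
```

For example, an AND of `http.latency >= 1000` and `error in ["true"]` samples a trace only when both leaves match, which is the composite behavior the issue asks for.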
12,973
15,353,258,023
IssuesEvent
2021-03-01 08:19:45
topcoder-platform/community-app
https://api.github.com/repos/topcoder-platform/community-app
opened
Recommender tool: Matched skills should not be clickable in listings page.
P4 ShapeupProcess challenge- recommender-tool
Description: Matched Skills shown on Challenge listings page after toggling on Recommender tool should not be clickable. ![image](https://user-images.githubusercontent.com/55895538/109470045-36f93080-7ab2-11eb-8776-d856363e9257.png) https://user-images.githubusercontent.com/55895538/109470063-3d87a800-7ab2-11eb-9b25-eb389b0a47b0.mp4
1.0
Recommender tool: Matched skills should not be clickable in listings page. - Description: Matched Skills shown on Challenge listings page after toggling on Recommender tool should not be clickable. ![image](https://user-images.githubusercontent.com/55895538/109470045-36f93080-7ab2-11eb-8776-d856363e9257.png) https://user-images.githubusercontent.com/55895538/109470063-3d87a800-7ab2-11eb-9b25-eb389b0a47b0.mp4
process
recommender tool matched skills should not be clickable in listings page description matched skills shown on challenge listings page after toggling on recommender tool should not be clickable
1
30,560
6,155,980,255
IssuesEvent
2017-06-28 15:45:18
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
p-confirmdialog is not centered to browser
defect
<!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/dvC3zTPeKKQ1XQ25wF5C?p=preview **Current behavior** <!-- Describe how the bug manifests. --> When width of the p-confirmdialog is not set, the dialog is not centered to the page. If you resize your browser, the dialog will be properly repositioned to center. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> The dialog should be centered to the browser. **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Press the `Show Dialog` button in the plunker. **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... 
--> Chrome Version 56.0.2924.87 Windows 10 * **Angular version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> "@angular/core": "^2.3.1", * **PrimeNG version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> Tried both 2.0.1 and 2.0.0 * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced --> Chrome Version 56.0.2924.87, Microsoft Edge 38.14393.0.0 * **Language:** [all | TypeScript X.X | ES6/7 | ES5] "typescript": "~2.0.3" * **Node (for AoT issues):** `node --version` = v6.9.2
1.0
p-confirmdialog is not centered to browser - <!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/dvC3zTPeKKQ1XQ25wF5C?p=preview **Current behavior** <!-- Describe how the bug manifests. --> When width of the p-confirmdialog is not set, the dialog is not centered to the page. If you resize your browser, the dialog will be properly repositioned to center. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> The dialog should be centered to the browser. **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Press the `Show Dialog` button in the plunker. **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... 
--> Chrome Version 56.0.2924.87 Windows 10 * **Angular version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> "@angular/core": "^2.3.1", * **PrimeNG version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> Tried both 2.0.1 and 2.0.0 * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced --> Chrome Version 56.0.2924.87, Microsoft Edge 38.14393.0.0 * **Language:** [all | TypeScript X.X | ES6/7 | ES5] "typescript": "~2.0.3" * **Node (for AoT issues):** `node --version` = v6.9.2
non_process
p confirmdialog is not centered to browser if you don t fill out the following information we might close your issue without investigating if you d like to secure our response you may consider primeng pro support where support is provided within hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please fork the plunkr below and create a case demonstrating your bug report issues without a plunkr have much less possibility to be reviewed current behavior when width of the p confirmdialog is not set the dialog is not centered to the page if you resize your browser the dialog will be properly repositioned to center expected behavior the dialog should be centered to the browser minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point press the show dialog button in the plunker what is the motivation use case for changing the behavior please tell us about your environment chrome version windows angular version x angular core primeng version x tried both and browser chrome version microsoft edge language typescript node for aot issues node version
0
7,255
10,419,249,205
IssuesEvent
2019-09-15 15:11:21
PHPSocialNetwork/phpfastcache
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
closed
TLS connexion for (P)Redis
7.1 8.0 >_< Working & Scheduled [-_-] In Process ^_^ Improvement
**Configuration** - **PhpFastCache version:** 7.0.5 - **PhpFastCache API version:** 2.0.4 - **PHP version:** `PHP 7.1.27-1+0~20190307202204.14+stretch~1.gbp7163d5 (cli) (built: Mar 7 2019 20:22:04) ( NTS )` - **Operating system:** Debian Stretch **Is your feature request related to a problem? Please describe.** I would like to use phpfastcache with a Redis server configurated with a TLS connexion. **Describe the solution you'd like** Add a `scheme` optionnal option to Redis and Predis configurations. **Describe alternatives you've considered** Add a `tls` boolean option to Redis and Predis configurations to replace tcp scheme by tls. **Additional context** Part of Predis README for Redis server connexion: https://github.com/nrk/predis/#connecting-to-redis
1.0
TLS connexion for (P)Redis - **Configuration** - **PhpFastCache version:** 7.0.5 - **PhpFastCache API version:** 2.0.4 - **PHP version:** `PHP 7.1.27-1+0~20190307202204.14+stretch~1.gbp7163d5 (cli) (built: Mar 7 2019 20:22:04) ( NTS )` - **Operating system:** Debian Stretch **Is your feature request related to a problem? Please describe.** I would like to use phpfastcache with a Redis server configurated with a TLS connexion. **Describe the solution you'd like** Add a `scheme` optionnal option to Redis and Predis configurations. **Describe alternatives you've considered** Add a `tls` boolean option to Redis and Predis configurations to replace tcp scheme by tls. **Additional context** Part of Predis README for Redis server connexion: https://github.com/nrk/predis/#connecting-to-redis
process
tls connexion for p redis configuration phpfastcache version phpfastcache api version php version php stretch cli built mar nts operating system debian stretch is your feature request related to a problem please describe i would like to use phpfastcache with a redis server configurated with a tls connexion describe the solution you d like add a scheme optionnal option to redis and predis configurations describe alternatives you ve considered add a tls boolean option to redis and predis configurations to replace tcp scheme by tls additional context part of predis readme for redis server connexion
1
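The feature request above asks for a `scheme` option so (P)Redis connections can use TLS instead of plain TCP. A tiny sketch of how such an option might map onto a Predis-style connection URI (the `scheme` values are the ones proposed in the issue; the helper function itself is hypothetical):

```python
def build_redis_uri(host, port, scheme="tcp"):
    """Build a Predis-style connection URI.

    'tcp' is the plain default; 'tls' (the proposed option) selects an
    encrypted connection, matching Predis' scheme-based configuration.
    """
    if scheme not in ("tcp", "tls"):
        raise ValueError(f"unsupported scheme: {scheme!r}")
    return f"{scheme}://{host}:{port}"
```

The alternative floated in the issue — a boolean `tls` flag — would reduce to choosing between the same two schemes.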
87,517
25,137,814,620
IssuesEvent
2022-11-09 20:10:06
talent-connect/connect
https://api.github.com/repos/talent-connect/connect
opened
[TP:] Creating Figma Foundational Designs
User story Task Build UI/UX Priority: Medium
## User story As a member of the design team, I want the designs available in Figma to follow what is available in the production of our product, so that we can minimise the existing challenges. The challenges for team members are: - It's hard for the team members to execute new designs/re-designs; - New-joiners are not able to identify what is/is not available in the product, being this a relevant obstacle in the speed of the onboarding process. ## Tasks 1. Creating Figma Foundational Designs - Talent Pool - Jobseeker - [ ] Refer to the diagram that is available in the costumer journey map file - [ ] Design the screens, using the styles and components from the design system (in case something is not available, inform the mentor @RitaSousa and the PO, so that a ticket can be created) - [ ] Present the diagram to the team in the design meeting - [ ] Get feedback from the peers - [ ] Iterate on feedback if needed - [ ] Handover design file 2. Building User Flow in Figma - Talent Pool - Company - [ ] Refer to the diagram that is available in the costumer journey map file - [ ] Design the screens, using the styles and components from the design system (in case something is not available, inform the mentor @RitaSousa and the PO, so that a ticket can be created) - [ ] Present the diagram to the team in the design meeting - [ ] Get feedback from the peers - [ ] Iterate on feedback if needed - [ ] Handover design fileEven though this is currently a _Build UI/UX_ issue, it will in the future become a _User Story_ issue once the UI/UX assets are ready. Therefore, please write up the User Story below.
1.0
[TP:] Creating Figma Foundational Designs - ## User story As a member of the design team, I want the designs available in Figma to follow what is available in the production of our product, so that we can minimise the existing challenges. The challenges for team members are: - It's hard for the team members to execute new designs/re-designs; - New-joiners are not able to identify what is/is not available in the product, being this a relevant obstacle in the speed of the onboarding process. ## Tasks 1. Creating Figma Foundational Designs - Talent Pool - Jobseeker - [ ] Refer to the diagram that is available in the costumer journey map file - [ ] Design the screens, using the styles and components from the design system (in case something is not available, inform the mentor @RitaSousa and the PO, so that a ticket can be created) - [ ] Present the diagram to the team in the design meeting - [ ] Get feedback from the peers - [ ] Iterate on feedback if needed - [ ] Handover design file 2. Building User Flow in Figma - Talent Pool - Company - [ ] Refer to the diagram that is available in the costumer journey map file - [ ] Design the screens, using the styles and components from the design system (in case something is not available, inform the mentor @RitaSousa and the PO, so that a ticket can be created) - [ ] Present the diagram to the team in the design meeting - [ ] Get feedback from the peers - [ ] Iterate on feedback if needed - [ ] Handover design fileEven though this is currently a _Build UI/UX_ issue, it will in the future become a _User Story_ issue once the UI/UX assets are ready. Therefore, please write up the User Story below.
non_process
creating figma foundational designs user story as a member of the design team i want the designs available in figma to follow what is available in the production of our product so that we can minimise the existing challenges the challenges for team members are it s hard for the team members to execute new designs re designs new joiners are not able to identify what is is not available in the product being this a relevant obstacle in the speed of the onboarding process tasks creating figma foundational designs talent pool jobseeker refer to the diagram that is available in the costumer journey map file design the screens using the styles and components from the design system in case something is not available inform the mentor ritasousa and the po so that a ticket can be created present the diagram to the team in the design meeting get feedback from the peers iterate on feedback if needed handover design file building user flow in figma talent pool company refer to the diagram that is available in the costumer journey map file design the screens using the styles and components from the design system in case something is not available inform the mentor ritasousa and the po so that a ticket can be created present the diagram to the team in the design meeting get feedback from the peers iterate on feedback if needed handover design fileeven though this is currently a build ui ux issue it will in the future become a user story issue once the ui ux assets are ready therefore please write up the user story below
0
14,661
17,785,315,443
IssuesEvent
2021-08-31 10:18:26
googleapis/python-storage-transfer
https://api.github.com/repos/googleapis/python-storage-transfer
closed
Release as GA
type: process api: storagetransfer
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287) ## Required - [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: August 27 2021** - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA
1.0
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287) ## Required - [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: August 27 2021** - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA
process
release as ga required days elapsed since last beta release with new api surface release on after august server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
1
327,807
28,083,705,055
IssuesEvent
2023-03-30 08:22:21
teemtee/tmt
https://api.github.com/repos/teemtee/tmt
opened
centos-7 image doesn't get ip assigned with '-c session' (the default)
testcloud
Reproduced with https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz tmt-1.22.dev-1.20230327184022335541.main.55.g550bd2b.fc37.noarch python3-testcloud-0.9.2-1.fc37.noarch ` tmt run provision -h virtual -i https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz` fails to boot with `Failed to connect in 60s.` The machine is spawned and run as visible in virsh: `6 tmt-091-FBRgDtfg running` and console login there works. There are two NICs and only one of them has IP assigned: ``` $ virsh console 6 --safe Connected to domain 'tmt-091-FBRgDtfg' Escape character is ^] (Ctrl + ]) CentOS Linux 7 (Core) Kernel 3.10.0-1127.el7.x86_64 on an x86_64 default-0 login: root Password: [root@default-0 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:5f:e5:5e brd ff:ff:ff:ff:ff:ff inet 172.17.2.15/24 brd 172.17.2.255 scope global dynamic eth0 valid_lft 86281sec preferred_lft 86281sec inet6 fec0::5054:ff:fe5f:e55e/64 scope site mngtmpaddr dynamic valid_lft 86281sec preferred_lft 14281sec inet6 fe80::5054:ff:fe5f:e55e/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff ``` By `virsh dumpxml 6`: The `52:54:00:5f:e5:5e` (the one with IP) is defined by `<interface>` however there is no note about the other one `52:54:00:12:34:56` - It likely comes from ``` <qemu:commandline> <qemu:arg value='-netdev'/> <qemu:arg value='user,id=testcloud_net.10056,hostfwd=tcp::10056-:22'/> <qemu:arg value='-device'/> <qemu:arg value='e1000,addr=1e.0,netdev=testcloud_net.10056'/> </qemu:commandline> ``` Now what to do? Workaround would be to run 'dhclient' as part of cloud-init.
1.0
centos-7 image doesn't get ip assigned with '-c session' (the default) - Reproduced with https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz tmt-1.22.dev-1.20230327184022335541.main.55.g550bd2b.fc37.noarch python3-testcloud-0.9.2-1.fc37.noarch ` tmt run provision -h virtual -i https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz` fails to boot with `Failed to connect in 60s.` The machine is spawned and run as visible in virsh: `6 tmt-091-FBRgDtfg running` and console login there works. There are two NICs and only one of them has IP assigned: ``` $ virsh console 6 --safe Connected to domain 'tmt-091-FBRgDtfg' Escape character is ^] (Ctrl + ]) CentOS Linux 7 (Core) Kernel 3.10.0-1127.el7.x86_64 on an x86_64 default-0 login: root Password: [root@default-0 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:5f:e5:5e brd ff:ff:ff:ff:ff:ff inet 172.17.2.15/24 brd 172.17.2.255 scope global dynamic eth0 valid_lft 86281sec preferred_lft 86281sec inet6 fec0::5054:ff:fe5f:e55e/64 scope site mngtmpaddr dynamic valid_lft 86281sec preferred_lft 14281sec inet6 fe80::5054:ff:fe5f:e55e/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff ``` By `virsh dumpxml 6`: The `52:54:00:5f:e5:5e` (the one with IP) is defined by `<interface>` however there is no note about the other one `52:54:00:12:34:56` - It likely comes from ``` <qemu:commandline> <qemu:arg value='-netdev'/> <qemu:arg value='user,id=testcloud_net.10056,hostfwd=tcp::10056-:22'/> <qemu:arg value='-device'/> <qemu:arg value='e1000,addr=1e.0,netdev=testcloud_net.10056'/> </qemu:commandline> ``` Now what to do? Workaround would be to run 'dhclient' as part of cloud-init.
non_process
centos image doesn t get ip assigned with c session the default reproduced with tmt dev main noarch testcloud noarch tmt run provision h virtual i fails to boot with failed to connect in the machine is spawned and run as visible in virsh tmt fbrgdtfg running and console login there works there are two nics and only one of them has ip assigned virsh console safe connected to domain tmt fbrgdtfg escape character is ctrl centos linux core kernel on an default login root password ip a lo mtu qdisc noqueue state unknown group default qlen link loopback brd inet scope host lo valid lft forever preferred lft forever scope host valid lft forever preferred lft forever mtu qdisc pfifo fast state up group default qlen link ether brd ff ff ff ff ff ff inet brd scope global dynamic valid lft preferred lft ff scope site mngtmpaddr dynamic valid lft preferred lft ff scope link valid lft forever preferred lft forever mtu qdisc noop state down group default qlen link ether brd ff ff ff ff ff ff by virsh dumpxml the the one with ip is defined by however there is no note about the other one it likely comes from now what to do workaround would be to run dhclient as part of cloud init
0
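The tmt/testcloud record above ends with a suggested workaround: run `dhclient` as part of cloud-init so the second NIC gets a DHCP lease. A minimal sketch of what such a user-data fragment could look like — the file name and the `#cloud-config` content here are illustrative assumptions, not taken from the issue itself:

```shell
# Hypothetical cloud-init user-data (assumed content, not from the issue):
# run dhclient at first boot so the unconfigured NIC also requests an address.
cat > user-data <<'EOF'
#cloud-config
runcmd:
  - [ sh, -c, "dhclient || true" ]
EOF
cat user-data
```

When such a file is handed to cloud-init, `runcmd` entries execute late in first boot; the `|| true` keeps a dhclient failure from aborting the remaining boot scripts.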
69,510
7,136,668,747
IssuesEvent
2018-01-23 08:13:39
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
GKE clusters should not be in Add Node Cluster drop-down
area/cluster kind/bug status/to-test version/2.0
**Rancher versions:** 2.0 master 1/16 **Steps to Reproduce:** 1. Create a GKE cluster (Wait for it to be active) 2. Go to Node page and click Add node 3. Click on Cluster drop-down in Cluster options section **Results:** GKE clusters show up in list ![image](https://user-images.githubusercontent.com/11514927/35018994-a5f5e234-fae1-11e7-9c00-e009fbb29680.png)
1.0
GKE clusters should not be in Add Node Cluster drop-down - **Rancher versions:** 2.0 master 1/16 **Steps to Reproduce:** 1. Create a GKE cluster (Wait for it to be active) 2. Go to Node page and click Add node 3. Click on Cluster drop-down in Cluster options section **Results:** GKE clusters show up in list ![image](https://user-images.githubusercontent.com/11514927/35018994-a5f5e234-fae1-11e7-9c00-e009fbb29680.png)
non_process
gke clusters should not be in add node cluster drop down rancher versions master steps to reproduce create a gke cluster wait for it to be active go to node page and click add node click on cluster drop down in cluster options section results gke clusters show up in list
0
613,546
19,093,340,028
IssuesEvent
2021-11-29 14:23:32
Ksyu-Smorkalova/Neoflex-git
https://api.github.com/repos/Ksyu-Smorkalova/Neoflex-git
opened
ะžะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹
priority:low severity: minor
Issue description: ะะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ ------------------------------------- ------------------------------------- Configuration under test: ------------------------------------- Build version: ะฒะตั€ัะธั ั€ะตะปะธะทะฐ ะฝะฐ 28.11.2021 System: Ubuntu 20.04, Firefox 89.0 (64-ะฑะธั‚ะฝั‹ะน) ------------- Actions: -------------- 1.ะ—ะฐั…ะพะดะธะผ ะฝะฐ ะณะปะฐะฒะฝั‹ะน ัะฐะนั‚ http://ossetianpie.ru/ 2.ะžั‚ะบั€ั‹ะฒะฐะตะผ ัั‚ั€ะฐะฝะธั†ัƒ "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" 3.ะกะผะพั‚ั€ะธะผ ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะฐ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ------------------------- Expected results: -------------------------- 1.OK 2.OK 3.ะžะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะธ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะทะฐะณะปะฐะฒะฝะพะน ะฑัƒะบะฒั‹ ------------------- Real results: ------------------- 1.OK 2.OK 3.ะžะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะน ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ ![ะผััะฝ ะฑะปัŽะดะฐ](https://user-images.githubusercontent.com/27230450/143884693-bd37f148-7555-4197-96e5-bdfb1a79ad6c.png)
1.0
ะžะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ - Issue description: ะะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ ------------------------------------- ------------------------------------- Configuration under test: ------------------------------------- Build version: ะฒะตั€ัะธั ั€ะตะปะธะทะฐ ะฝะฐ 28.11.2021 System: Ubuntu 20.04, Firefox 89.0 (64-ะฑะธั‚ะฝั‹ะน) ------------- Actions: -------------- 1.ะ—ะฐั…ะพะดะธะผ ะฝะฐ ะณะปะฐะฒะฝั‹ะน ัะฐะนั‚ http://ossetianpie.ru/ 2.ะžั‚ะบั€ั‹ะฒะฐะตะผ ัั‚ั€ะฐะฝะธั†ัƒ "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" 3.ะกะผะพั‚ั€ะธะผ ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะฐ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต "ะœััะฝั‹ะต ะฑะปัŽะดะฐ" ------------------------- Expected results: -------------------------- 1.OK 2.OK 3.ะžะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะธ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะทะฐะณะปะฐะฒะฝะพะน ะฑัƒะบะฒั‹ ------------------- Real results: ------------------- 1.OK 2.OK 3.ะžะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะน ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ ![ะผััะฝ ะฑะปัŽะดะฐ](https://user-images.githubusercontent.com/27230450/143884693-bd37f148-7555-4197-96e5-bdfb1a79ad6c.png)
non_process
ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต ะผััะฝั‹ะต ะฑะปัŽะดะฐ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ issue description ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต ะผััะฝั‹ะต ะฑะปัŽะดะฐ ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะพะฒ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹ configuration under test build version ะฒะตั€ัะธั ั€ะตะปะธะทะฐ ะฝะฐ system ubuntu firefox ะฑะธั‚ะฝั‹ะน actions ะทะฐั…ะพะดะธะผ ะฝะฐ ะณะปะฐะฒะฝั‹ะน ัะฐะนั‚ ะพั‚ะบั€ั‹ะฒะฐะตะผ ัั‚ั€ะฐะฝะธั†ัƒ ะผััะฝั‹ะต ะฑะปัŽะดะฐ ัะผะพั‚ั€ะธะผ ะพะฟะธัะฐะฝะธะต ั‚ะพะฒะฐั€ะฐ ะฝะฐ ัั‚ั€ะฐะฝะธั†ะต ะผััะฝั‹ะต ะฑะปัŽะดะฐ expected results ok ok ะพะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะธ ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะทะฐะณะปะฐะฒะฝะพะน ะฑัƒะบะฒั‹ real results ok ok ะพะฟะธัะฐะฝะธะต ะบะฐะถะดะพะน ะฟะพะทะธั†ะธะน ะฝะฐั‡ะธะฝะฐะตั‚ัั ั ะผะฐะปะตะฝัŒะบะพะน ะฑัƒะบะฒั‹
0
208,327
23,595,744,303
IssuesEvent
2022-08-23 19:03:47
MatBenfield/news
https://api.github.com/repos/MatBenfield/news
closed
[SecurityWeek] Novant Health Says Malformed Tracking Pixel Exposed Health Data to Meta
SecurityWeek Stale
**Healthcare services provider Novant Health has sent notifications to more than 1.3 million individuals that their protected health information (PHI) might have been inadvertently exposed to Facebook parent company Meta.** [read more](https://www.securityweek.com/novant-health-says-malformed-tracking-pixel-exposed-health-data-meta) <https://www.securityweek.com/novant-health-says-malformed-tracking-pixel-exposed-health-data-meta>
True
[SecurityWeek] Novant Health Says Malformed Tracking Pixel Exposed Health Data to Meta - **Healthcare services provider Novant Health has sent notifications to more than 1.3 million individuals that their protected health information (PHI) might have been inadvertently exposed to Facebook parent company Meta.** [read more](https://www.securityweek.com/novant-health-says-malformed-tracking-pixel-exposed-health-data-meta) <https://www.securityweek.com/novant-health-says-malformed-tracking-pixel-exposed-health-data-meta>
non_process
novant health says malformed tracking pixel exposed health data to meta healthcare services provider novant health has sent notifications to more than million individuals that their protected health information phi might have been inadvertently exposed to facebook parent company meta
0
211,033
16,415,996,541
IssuesEvent
2021-05-19 06:48:52
tardis2snej/hobbymates
https://api.github.com/repos/tardis2snej/hobbymates
closed
ะคะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั ั‚ะฐ ะดะพะดะฐั‚ะบะธ
documentation
- [x] ะ—ะผั–ัั‚ - [x] ะกะฟะธัะพะบ ะฒะธะบะพั€ะธัั‚ะฐะฝะธั… ะดะถะตั€ะตะป - [x] ะ”ะพะดะฐั‚ะบะธ - [x] ะŸะตั€ะตะณะปัะฝัƒั‚ะธ ั„ะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั
1.0
ะคะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั ั‚ะฐ ะดะพะดะฐั‚ะบะธ - - [x] ะ—ะผั–ัั‚ - [x] ะกะฟะธัะพะบ ะฒะธะบะพั€ะธัั‚ะฐะฝะธั… ะดะถะตั€ะตะป - [x] ะ”ะพะดะฐั‚ะบะธ - [x] ะŸะตั€ะตะณะปัะฝัƒั‚ะธ ั„ะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั
non_process
ั„ะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั ั‚ะฐ ะดะพะดะฐั‚ะบะธ ะทะผั–ัั‚ ัะฟะธัะพะบ ะฒะธะบะพั€ะธัั‚ะฐะฝะธั… ะดะถะตั€ะตะป ะดะพะดะฐั‚ะบะธ ะฟะตั€ะตะณะปัะฝัƒั‚ะธ ั„ะพั€ะผะฐั‚ัƒะฒะฐะฝะฝั
0
59,156
14,368,098,966
IssuesEvent
2020-12-01 07:51:51
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
Vulnerability roundup 96: gitlab-13.0.14: 2 advisories [9.1]
1.severity: security
[search](https://search.nix.gsc.io/?q=gitlab&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=gitlab+in%3Apath&type=Code) * [ ] [CVE-2020-13347](https://nvd.nist.gov/vuln/detail/CVE-2020-13347) CVSSv3=9.1 (nixos-20.09, nixos-unstable) * [ ] [CVE-2020-13335](https://nvd.nist.gov/vuln/detail/CVE-2020-13335) CVSSv3=4.3 (nixos-20.09, nixos-unstable) Scanned versions: nixos-20.09: d105075a1fd; nixos-unstable: 34ad166a830. Cc @fpletz Cc @globin Cc @krav Cc @talyz
True
Vulnerability roundup 96: gitlab-13.0.14: 2 advisories [9.1] - [search](https://search.nix.gsc.io/?q=gitlab&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=gitlab+in%3Apath&type=Code) * [ ] [CVE-2020-13347](https://nvd.nist.gov/vuln/detail/CVE-2020-13347) CVSSv3=9.1 (nixos-20.09, nixos-unstable) * [ ] [CVE-2020-13335](https://nvd.nist.gov/vuln/detail/CVE-2020-13335) CVSSv3=4.3 (nixos-20.09, nixos-unstable) Scanned versions: nixos-20.09: d105075a1fd; nixos-unstable: 34ad166a830. Cc @fpletz Cc @globin Cc @krav Cc @talyz
non_process
vulnerability roundup gitlab advisories nixos nixos unstable nixos nixos unstable scanned versions nixos nixos unstable cc fpletz cc globin cc krav cc talyz
0
20,144
26,693,710,386
IssuesEvent
2023-01-27 08:26:59
FranklinDM/Swarth
https://api.github.com/repos/FranklinDM/Swarth
closed
Swarth + uBlock Origin causes entire page to be black while picking elements for a custom filter
Bug Stylesheet Processor
Occurs with "stylesheet processor" and "simple stylesheet". Interestingly, using "color inversion" results in a different issue - the text box instead appears in the bottom right corner of the page itself rather than in your browser (meaning you have to scroll to the bottom of web page first). ![screenshot](https://user-images.githubusercontent.com/11896003/212419802-79161fe8-bbb7-41ab-95c1-c40f68b10fb2.png)
1.0
Swarth + uBlock Origin causes entire page to be black while picking elements for a custom filter - Occurs with "stylesheet processor" and "simple stylesheet". Interestingly, using "color inversion" results in a different issue - the text box instead appears in the bottom right corner of the page itself rather than in your browser (meaning you have to scroll to the bottom of web page first). ![screenshot](https://user-images.githubusercontent.com/11896003/212419802-79161fe8-bbb7-41ab-95c1-c40f68b10fb2.png)
process
swarth ublock origin causes entire page to be black while picking elements for a custom filter occurs with stylesheet processor and simple stylesheet interestingly using color inversion results in a different issue the text box instead appears in the bottom right corner of the page itself rather than in your browser meaning you have to scroll to the bottom of web page first
1
72,722
8,768,100,958
IssuesEvent
2018-12-17 21:52:21
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
opened
Nullable: support `var?`
Area-Compilers Area-Language Design New Language Feature - Nullable Reference Types
We need to discuss in LDM whether we want this feature, and if so, when. Also, there are open questions on its semantics.
1.0
Nullable: support `var?` - We need to discuss in LDM whether we want this feature, and if so, when. Also, there are open questions on its semantics.
non_process
nullable support var we need to discuss in ldm whether we want this feature and if so when also there are open questions on its semantics
0
21,280
28,442,550,518
IssuesEvent
2023-04-16 04:03:02
cse442-at-ub/project_s23-team-infinity
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
closed
Create documentation on the transferring of data from the backend database to the frontend to display events created on the calendar.
Processing Task Sprint 3
**Task tests** *Test 1* 1) To navigate to the Google Docs click this provided link: https://docs.google.com/document/d/1ARcj1f70tV1m-r1JaOqr0CcV139BuB_-1rERJiW-2CI/edit 2) Verify there is the title: Connecting the PHP to react for create events 3) Verify the documentation gives an overview of how to facilitate the sending of data by React over to the PHP and from the PHP back to the React.
1.0
Create documentation on the transferring of data from the backend database to the frontend to display events created on the calendar. - **Task tests** *Test 1* 1) To navigate to the Google Docs click this provided link: https://docs.google.com/document/d/1ARcj1f70tV1m-r1JaOqr0CcV139BuB_-1rERJiW-2CI/edit 2) Verify there is the title: Connecting the PHP to react for create events 3) Verify the documentation gives an overview of how to facilitate the sending of data by React over to the PHP and from the PHP back to the React.
process
create documentation on the transferring of data from the backend database to the frontend to display events created on the calendar task tests test to navigate to the google docs click this provided link verify there is the title connecting the php to react for create events verify the documentation gives an overview of how to facilitate the sending of data by react over to the php and from the php back to the react
1
15,511
19,703,266,561
IssuesEvent
2022-01-12 18:52:17
googleapis/java-deploy
https://api.github.com/repos/googleapis/java-deploy
opened
Your .repo-metadata.json file has a problem ๐Ÿค’
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan ๐Ÿ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'deploy' invalid in .repo-metadata.json โ˜๏ธ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem ๐Ÿค’ - You have a problem with your .repo-metadata.json file: Result of scan ๐Ÿ“ˆ: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'deploy' invalid in .repo-metadata.json โ˜๏ธ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem ๐Ÿค’ you have a problem with your repo metadata json file result of scan ๐Ÿ“ˆ release level must be equal to one of the allowed values in repo metadata json api shortname deploy invalid in repo metadata json โ˜๏ธ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
5,325
8,139,925,116
IssuesEvent
2018-08-20 19:20:08
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
Audit the entire catalog for iOS 12 bugs
type:Process
Definition of done: ## Every component example works as expected on iOS 12 non-X device - [x] ActivityIndicator - [x] AnimationTiming - [x] AppBar - [ ] Backdrop - [ ] Banner - [x] BottomAppBar - [x] BottomNavigation - [x] BottomSheet - [ ] ButtonBar - [x] Buttons - [x] Cards - [ ] Checkboxes - [x] Chips - [x] CollectionCells - [ ] CollectionLayoutAttributes - [x] Collections - [ ] DataTables - [x] Dialogs - [x] Dividers - [x] FeatureHighlight - [x] FlexibleHeader - [ ] HeaderStackView - [ ] ImageList - [x] Ink - [ ] LibraryInfo - [x] List - [x] MaskedTransition - [ ] Menus - [x] NavigationBar - [ ] NavigationDrawer - [ ] OverlayWindow - [x] PageControl - [x] Palettes - [x] ProgressView - [ ] RadioButton - [x] ShadowElevations - [x] ShadowLayer - [ ] SideSheet - [x] Slider - [x] Snackbar - [x] Switch - [ ] Tabs - [x] TextFields - [x] Themes - [ ] Tooltips - [x] Typography ## Every component example works as expected on an iOS 12 iPhone X - [x] ActivityIndicator - [x] AnimationTiming - [x] AppBar - [ ] Backdrop - [ ] Banner - [x] BottomAppBar - [x] BottomNavigation - [x] BottomSheet - [ ] ButtonBar - [x] Buttons - [x] Cards - [ ] Checkboxes - [x] Chips - [x] CollectionCells - [ ] CollectionLayoutAttributes - [x] Collections - [ ] DataTables - [x] Dialogs - [x] Dividers - [x] FeatureHighlight - [x] FlexibleHeader - [ ] HeaderStackView - [ ] ImageList - [x] Ink - [ ] LibraryInfo - [x] List - [x] MaskedTransition - [ ] Menus - [x] NavigationBar - [ ] NavigationDrawer - [ ] OverlayWindow - [x] PageControl - [x] Palettes - [x] ProgressView - [ ] RadioButton - [x] ShadowElevations - [x] ShadowLayer - [ ] SideSheet - [x] Slider - [x] Snackbar - [x] Switch - [ ] Tabs - [x] TextFields - [x] Themes - [ ] Tooltips - [x] Typography
1.0
Audit the entire catalog for iOS 12 bugs - Definition of done: ## Every component example works as expected on iOS 12 non-X device - [x] ActivityIndicator - [x] AnimationTiming - [x] AppBar - [ ] Backdrop - [ ] Banner - [x] BottomAppBar - [x] BottomNavigation - [x] BottomSheet - [ ] ButtonBar - [x] Buttons - [x] Cards - [ ] Checkboxes - [x] Chips - [x] CollectionCells - [ ] CollectionLayoutAttributes - [x] Collections - [ ] DataTables - [x] Dialogs - [x] Dividers - [x] FeatureHighlight - [x] FlexibleHeader - [ ] HeaderStackView - [ ] ImageList - [x] Ink - [ ] LibraryInfo - [x] List - [x] MaskedTransition - [ ] Menus - [x] NavigationBar - [ ] NavigationDrawer - [ ] OverlayWindow - [x] PageControl - [x] Palettes - [x] ProgressView - [ ] RadioButton - [x] ShadowElevations - [x] ShadowLayer - [ ] SideSheet - [x] Slider - [x] Snackbar - [x] Switch - [ ] Tabs - [x] TextFields - [x] Themes - [ ] Tooltips - [x] Typography ## Every component example works as expected on an iOS 12 iPhone X - [x] ActivityIndicator - [x] AnimationTiming - [x] AppBar - [ ] Backdrop - [ ] Banner - [x] BottomAppBar - [x] BottomNavigation - [x] BottomSheet - [ ] ButtonBar - [x] Buttons - [x] Cards - [ ] Checkboxes - [x] Chips - [x] CollectionCells - [ ] CollectionLayoutAttributes - [x] Collections - [ ] DataTables - [x] Dialogs - [x] Dividers - [x] FeatureHighlight - [x] FlexibleHeader - [ ] HeaderStackView - [ ] ImageList - [x] Ink - [ ] LibraryInfo - [x] List - [x] MaskedTransition - [ ] Menus - [x] NavigationBar - [ ] NavigationDrawer - [ ] OverlayWindow - [x] PageControl - [x] Palettes - [x] ProgressView - [ ] RadioButton - [x] ShadowElevations - [x] ShadowLayer - [ ] SideSheet - [x] Slider - [x] Snackbar - [x] Switch - [ ] Tabs - [x] TextFields - [x] Themes - [ ] Tooltips - [x] Typography
process
audit the entire catalog for ios bugs definition of done every component example works as expected on ios non x device activityindicator animationtiming appbar backdrop banner bottomappbar bottomnavigation bottomsheet buttonbar buttons cards checkboxes chips collectioncells collectionlayoutattributes collections datatables dialogs dividers featurehighlight flexibleheader headerstackview imagelist ink libraryinfo list maskedtransition menus navigationbar navigationdrawer overlaywindow pagecontrol palettes progressview radiobutton shadowelevations shadowlayer sidesheet slider snackbar switch tabs textfields themes tooltips typography every component example works as expected on an ios iphone x activityindicator animationtiming appbar backdrop banner bottomappbar bottomnavigation bottomsheet buttonbar buttons cards checkboxes chips collectioncells collectionlayoutattributes collections datatables dialogs dividers featurehighlight flexibleheader headerstackview imagelist ink libraryinfo list maskedtransition menus navigationbar navigationdrawer overlaywindow pagecontrol palettes progressview radiobutton shadowelevations shadowlayer sidesheet slider snackbar switch tabs textfields themes tooltips typography
1
98,564
30,004,362,457
IssuesEvent
2023-06-26 11:26:36
4paradigm/OpenMLDB
https://api.github.com/repos/4paradigm/OpenMLDB
closed
cmake configure: INTERFACE_LIBRARY targets may only have whitelisted properties. The property "LOCATION" is not allowed
bug build
reproduced with cmake 3.18 on debian: ``` CMake Error at src/sdk/CMakeLists.txt:269 (get_property): e INTERFACE_LIBRARY targets may only have whitelisted properties. The e property "LOCATION" is not allowed. ```
1.0
cmake configure: INTERFACE_LIBRARY targets may only have whitelisted properties. The property "LOCATION" is not allowed - reproduced with cmake 3.18 on debian: ``` CMake Error at src/sdk/CMakeLists.txt:269 (get_property): e INTERFACE_LIBRARY targets may only have whitelisted properties. The e property "LOCATION" is not allowed. ```
non_process
cmake configure interface library targets may only have whitelisted properties the property location is not allowed reproduced with cmake on debian cmake error at src sdk cmakelists txt get property e interface library targets may only have whitelisted properties the e property location is not allowed
0
11,402
14,237,737,634
IssuesEvent
2020-11-18 17:37:11
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
opened
PH integration into TH
Calculator Process Heating Treasure Hunt
Each calc (except flue gas) will have the following potential utilities: * Natural Gas * Electricity * Other Fuel * Steam If Natural Gas or Other fuel - field called "Available Heat" with link to "Flue Gas Calc Modal" If Electricity - field called "Thermal Efficiency" If Steam - no field Help Text for new field field TO DO - Kristina Steam - needs reminder that this is just for if steam is coming into your system as a utility NG/Other fuel - note if steam is the thermal transfer medium, Available heat = boiler efficiency Electricity - if hybrid furnace, should use "Other fuel" and estimate the total usage and unit costs based on weighted average of energy use
1.0
PH integration into TH - Each calc (except flue gas) will have the following potential utilities: * Natural Gas * Electricity * Other Fuel * Steam If Natural Gas or Other fuel - field called "Available Heat" with link to "Flue Gas Calc Modal" If Electricity - field called "Thermal Efficiency" If Steam - no field Help Text for new field field TO DO - Kristina Steam - needs reminder that this is just for if steam is coming into your system as a utility NG/Other fuel - note if steam is the thermal transfer medium, Available heat = boiler efficiency Electricity - if hybrid furnace, should use "Other fuel" and estimate the total usage and unit costs based on weighted average of energy use
process
ph integration into th each calc except flue gas will have the following potential utilities natural gas electricity other fuel steam if natural gas or other fuel field called available heat with link to flue gas calc modal if electricity field called thermal efficiency if steam no field help text for new field field to do kristina steam needs reminder that this is just for if steam is coming into your system as a utility ng other fuel note if steam is the thermal transfer medium available heat boiler efficiency electricity if hybrid furnace should use other fuel and estimate the total usage and unit costs based on weighted average of energy use
1
21,657
30,106,433,907
IssuesEvent
2023-06-30 02:00:09
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 30 Jun 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events

### GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps
- **Authors:** Minsoo Kang, Suhyun Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.16612
- **Pdf link:** https://arxiv.org/pdf/2306.16612
- **Abstract** Data augmentation is now an essential part of the image training process, as it effectively prevents overfitting and makes the model more robust against noisy datasets. Recent mixing augmentation strategies have advanced to generate the mixup mask that can enrich the saliency information, which is a supervisory signal. However, these methods incur a significant computational burden to optimize the mixup mask. From this motivation, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead. We develop an efficient pairing algorithm that pursues to minimize the conflict of salient regions of paired images and achieve rich saliency in mixup images. Moreover, GuidedMixup controls the mixup ratio for each pixel to better preserve the salient region by interpolating two paired images smoothly. The experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance on classification datasets. In addition, our method shows good performance in experiments with corrupted or reduced datasets.

### Streaming egocentric action anticipation: An evaluation scheme and approach
- **Authors:** Antonino Furnari, Giovanni Maria Farinella
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.16682
- **Pdf link:** https://arxiv.org/pdf/2306.16682
- **Abstract** Egocentric action anticipation aims to predict the future actions the camera wearer will perform from the observation of the past. While predictions about the future should be available before the predicted events take place, most approaches do not pay attention to the computational time required to make such predictions. As a result, current evaluation schemes assume that predictions are available right after the input video is observed, i.e., presuming a negligible runtime, which may lead to overly optimistic evaluations. We propose a streaming egocentric action evaluation scheme which assumes that predictions are performed online and made available only after the model has processed the current input segment, which depends on its runtime. To evaluate all models considering the same prediction horizon, we hence propose that slower models should base their predictions on temporal segments sampled ahead of time. Based on the observation that model runtime can affect performance in the considered streaming evaluation scenario, we further propose a lightweight action anticipation model based on feed-forward 3D CNNs which is optimized using knowledge distillation techniques with a novel past-to-future distillation loss. Experiments on the three popular datasets EPIC-KITCHENS-55, EPIC-KITCHENS-100 and EGTEA Gaze+ show that (i) the proposed evaluation scheme induces a different ranking on state-of-the-art methods as compared to classic evaluations, (ii) lightweight approaches tend to outmatch more computationally expensive ones, and (iii) the proposed model based on feed-forward 3D CNNs and knowledge distillation outperforms current art in the streaming egocentric action anticipation scenario.
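The per-pixel interpolation described in the GuidedMixup entry above can be sketched in a few lines. This is only an illustration of the idea, not the paper's method: the saliency maps are taken as given inputs, and the paper's efficient pairing algorithm is not reproduced.

```python
import numpy as np

def saliency_guided_mixup(img_a, img_b, sal_a, sal_b, eps=1e-8):
    """Blend two images with a per-pixel ratio favoring the more salient source.

    img_a, img_b: float arrays of shape (H, W, C)
    sal_a, sal_b: non-negative saliency maps of shape (H, W)
    Returns the mixed image and the per-pixel mixing mask.
    """
    # Per-pixel mixup ratio: how much of img_a to keep at each location.
    lam = sal_a / (sal_a + sal_b + eps)  # shape (H, W), values in [0, 1]
    mixed = lam[..., None] * img_a + (1.0 - lam[..., None]) * img_b
    return mixed, lam

# Tiny demo: with equal constant saliency this reduces to a plain 50/50 mixup.
a = np.ones((4, 4, 3))
b = np.zeros((4, 4, 3))
mixed, lam = saliency_guided_mixup(a, b, np.ones((4, 4)), np.ones((4, 4)))
```

Because the mixing mask varies smoothly with the saliency maps, regions that are salient in only one of the paired images are preserved almost unchanged in the mixed output.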
### Spectral Batch Normalization: Normalization in the Frequency Domain
- **Authors:** Rinor Cakaj, Jens Mehnert, Bin Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.16999
- **Pdf link:** https://arxiv.org/pdf/2306.16999
- **Abstract** Regularization is a set of techniques that are used to improve the generalization ability of deep neural networks. In this paper, we introduce spectral batch normalization (SBN), a novel effective method to improve generalization by normalizing feature maps in the frequency (spectral) domain. The activations of residual networks without batch normalization (BN) tend to explode exponentially in the depth of the network at initialization. This leads to extremely large feature map norms even though the parameters are relatively small. These explosive dynamics can be very detrimental to learning. BN makes weight decay regularization on the scaling factors $\gamma, \beta$ approximately equivalent to an additive penalty on the norm of the feature maps, which prevents extremely large feature map norms to a certain degree. However, we show experimentally that, despite the approximate additive penalty of BN, feature maps in deep neural networks (DNNs) tend to explode at the beginning of the network and that feature maps of DNNs contain large values during the whole training. This phenomenon also occurs in a weakened form in non-residual networks. SBN addresses large feature maps by normalizing them in the frequency domain. In our experiments, we empirically show that SBN prevents exploding feature maps at initialization and large feature map values during the training. Moreover, the normalization of feature maps in the frequency domain leads to more uniform distributed frequency components. This discourages the DNNs to rely on single frequency components of feature maps. These, together with other effects of SBN, have a regularizing effect on the training of residual and non-residual networks. We show experimentally that using SBN in addition to standard regularization methods improves the performance of DNNs by a relevant margin, e.g. ResNet50 on ImageNet by 0.71%.

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

There is no result

## Keyword: ISP

### Unsupervised 3D registration through optimization-guided cyclical self-training
- **Authors:** Alexander Bigalke, Lasse Hansen, Tony C. W. Mok, Mattias P. Heinrich
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.16997
- **Pdf link:** https://arxiv.org/pdf/2306.16997
- **Abstract** State-of-the-art deep learning-based registration methods employ three different learning strategies: supervised learning, which requires costly manual annotations, unsupervised learning, which heavily relies on hand-crafted similarity metrics designed by domain experts, or learning from synthetic data, which introduces a domain shift. To overcome the limitations of these strategies, we propose a novel self-supervised learning paradigm for unsupervised registration, relying on self-training. Our idea is based on two key insights. Feature-based differentiable optimizers 1) perform reasonable registration even from random features and 2) stabilize the training of the preceding feature extraction network on noisy labels. Consequently, we propose cyclical self-training, where pseudo labels are initialized as the displacement fields inferred from random features and cyclically updated based on more and more expressive features from the learning feature extractor, yielding a self-reinforcement effect.
We evaluate the method for abdomen and lung registration, consistently surpassing metric-based supervision and outperforming diverse state-of-the-art competitors. Source code is available at https://github.com/multimodallearning/reg-cyclical-self-train.

### Deep Ensemble for Rotorcraft Attitude Prediction
- **Authors:** Hikmat Khan, Nidhal Carla Bouaynaya, Ghulam Rasool, Tyler Travis, Lacey Thompson, Charles C. Johnson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.17104
- **Pdf link:** https://arxiv.org/pdf/2306.17104
- **Abstract** Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record the flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems could accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm could correctly identify rotorcraft attitude at an accuracy in the range of 80\%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94\%.
In this paper, five onboard camera views included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained various convolutional neural networks (CNNs), which achieved prediction accuracy in the range of 79\% to 90\%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensembled accuracy of 93.3\%.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### End-to-End Learnable Multi-Scale Feature Compression for VCM
- **Authors:** Yeongwoong Kim, Hyewon Jeong, Janghyun Yu, Younhee Kim, Jooyoung Lee, Se Yoon Jeong, Hui Yong Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.16670
- **Pdf link:** https://arxiv.org/pdf/2306.16670
- **Abstract** The proliferation of deep learning-based machine vision applications has given rise to a new type of compression, so called video coding for machine (VCM). VCM differs from traditional video coding in that it is optimized for machine vision performance instead of human visual quality. In the feature compression track of MPEG-VCM, multi-scale features extracted from images are subject to compression. Recent feature compression works have demonstrated that the versatile video coding (VVC) standard-based approach can achieve a BD-rate reduction of up to 96% against MPEG-VCM feature anchor. However, it is still sub-optimal as VVC was not designed for extracted features but for natural images. Moreover, the high encoding complexity of VVC makes it difficult to design a lightweight encoder without sacrificing performance.
To address these challenges, we propose a novel multi-scale feature compression method that enables both the end-to-end optimization on the extracted features and the design of lightweight encoders. The proposed model combines a learnable compressor with a multi-scale feature fusion network so that the redundancy in the multi-scale features is effectively removed. Instead of simply cascading the fusion network and the compression network, we integrate the fusion and encoding processes in an interleaved way. Our model first encodes a larger-scale feature to obtain a latent representation and then fuses the latent with a smaller-scale feature. This process is successively performed until the smallest-scale feature is fused and then the encoded latent at the final stage is entropy-coded for transmission. The results show that our model outperforms previous approaches by at least 52% BD-rate reduction and has $\times5$ to $\times27$ times less encoding time for object detection. It is noteworthy that our model can attain near-lossless task performance with only 0.002-0.003% of the uncompressed feature data size.

### Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation
- **Authors:** Hanqiu Chen, Hang Yang, Stephen BR Fitzmeyer, Cong Hao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.16699
- **Pdf link:** https://arxiv.org/pdf/2306.16699
- **Abstract** Implicit Neural Representation (INR) is an innovative approach for representing complex shapes or objects without explicitly defining their geometry or surface structure. Instead, INR represents objects as continuous functions. Previous research has demonstrated the effectiveness of using neural networks as INR for image compression, showcasing comparable performance to traditional methods such as JPEG.
However, INR holds potential for various applications beyond image compression. This paper introduces Rapid-INR, a novel approach that utilizes INR for encoding and compressing images, thereby accelerating neural network training in computer vision tasks. Our methodology involves storing the whole dataset directly in INR format on a GPU, mitigating the significant data communication overhead between the CPU and GPU during training. Additionally, the decoding process from INR to RGB format is highly parallelized and executed on-the-fly. To further enhance compression, we propose iterative and dynamic pruning, as well as layer-wise quantization, building upon previous work. We evaluate our framework on the image classification task, utilizing the ResNet-18 backbone network and three commonly used datasets with varying image sizes. Rapid-INR reduces memory consumption to only 5% of the original dataset size and achieves a maximum 6$\times$ speedup over the PyTorch training pipeline, as well as a maximum 1.2x speedup over the DALI training pipeline, with only a marginal decrease in accuracy. Importantly, Rapid-INR can be readily applied to other computer vision tasks and backbone networks with reasonable engineering efforts. Our implementation code is publicly available at https://anonymous.4open.science/r/INR-4BF7.

## Keyword: RAW

There is no result

## Keyword: raw image

There is no result
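The frequency-domain normalization described in the Spectral Batch Normalization entry above can be illustrated with a short sketch. This is an assumption-laden reading of the abstract, not the paper's implementation: the learnable scale/shift parameters and the exact choice of spectral statistics are omitted.

```python
import numpy as np

def spectral_batch_norm(x, eps=1e-5):
    """Illustrative sketch: normalize feature maps in the frequency domain.

    x: real feature maps of shape (N, C, H, W).
    Each map is moved to the spectral domain with a 2D FFT, its complex
    coefficients are standardized per frequency across the batch, and the
    result is transformed back to the spatial domain.
    """
    f = np.fft.fft2(x, axes=(-2, -1))              # spectral representation
    mean = f.mean(axis=0, keepdims=True)           # per-frequency batch mean
    var = np.mean(np.abs(f - mean) ** 2, axis=0, keepdims=True)
    f_norm = (f - mean) / np.sqrt(var + eps)       # standardized spectrum
    return np.real(np.fft.ifft2(f_norm, axes=(-2, -1)))

x = np.random.default_rng(0).normal(size=(8, 4, 16, 16))
y = spectral_batch_norm(x)
```

Standardizing every frequency bin to comparable magnitude is one way to obtain the more uniformly distributed frequency components the abstract describes, since no single spectral component can dominate the normalized feature map.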
feature anchor however it is still sub optimal as vvc was not designed for extracted features but for natural images moreover the high encoding complexity of vvc makes it difficult to design a lightweight encoder without sacrificing performance to address these challenges we propose a novel multi scale feature compression method that enables both the end to end optimization on the extracted features and the design of lightweight encoders the proposed model combines a learnable compressor with a multi scale feature fusion network so that the redundancy in the multi scale features is effectively removed instead of simply cascading the fusion network and the compression network we integrate the fusion and encoding processes in an interleaved way our model first encodes a larger scale feature to obtain a latent representation and then fuses the latent with a smaller scale feature this process is successively performed until the smallest scale feature is fused and then the encoded latent at the final stage is entropy coded for transmission the results show that our model outperforms previous approaches by at least bd rate reduction and has to times less encoding time for object detection it is noteworthy that our model can attain near lossless task performance with only of the uncompressed feature data size rapid inr storage efficient cpu free dnn training using implicit neural representation authors hanqiu chen hang yang stephen br fitzmeyer cong hao subjects computer vision and pattern recognition cs cv artificial intelligence cs ai hardware architecture cs ar machine learning cs lg arxiv link pdf link abstract implicit neural representation inr is an innovative approach for representing complex shapes or objects without explicitly defining their geometry or surface structure instead inr represents objects as continuous functions previous research has demonstrated the effectiveness of using neural networks as inr for image compression showcasing comparable performance 
to traditional methods such as jpeg however inr holds potential for various applications beyond image compression this paper introduces rapid inr a novel approach that utilizes inr for encoding and compressing images thereby accelerating neural network training in computer vision tasks our methodology involves storing the whole dataset directly in inr format on a gpu mitigating the significant data communication overhead between the cpu and gpu during training additionally the decoding process from inr to rgb format is highly parallelized and executed on the fly to further enhance compression we propose iterative and dynamic pruning as well as layer wise quantization building upon previous work we evaluate our framework on the image classification task utilizing the resnet backbone network and three commonly used datasets with varying image sizes rapid inr reduces memory consumption to only of the original dataset size and achieves a maximum times speedup over the pytorch training pipeline as well as a maximum speedup over the dali training pipeline with only a marginal decrease in accuracy importantly rapid inr can be readily applied to other computer vision tasks and backbone networks with reasonable engineering efforts our implementation code is publicly available at keyword raw there is no result keyword raw image there is no result
1
7,905
3,114,804,251
IssuesEvent
2015-09-03 11:04:32
NLeSC/Xenon
https://api.github.com/repos/NLeSC/Xenon
closed
Ant test fails with "failed to create task or type sshexec"
Documentation Testing
Running test with "ant test" gave the following error: ```` BUILD FAILED /home/stefanv/git/Xenon/build.xml:22: The following error occurred while executing this line: /home/stefanv/git/Xenon/test/build.xml:78: Problem: failed to create task or type sshexec Cause: the class org.apache.tools.ant.taskdefs.optional.ssh.SSHExec was not found. This looks like one of Ant's optional components. Action: Check that the appropriate optional JAR exists in -/usr/share/ant/lib -/home/stefanv/.ant/lib -a directory added on the command line with the -lib argument ```` I got it working by installing 'ant-optional' package and running tests with `ant -lib /usr/share/java`. This should be documented.
1.0
Ant test fails with "failed to create task or type sshexec" - Running test with "ant test" gave the following error: ```` BUILD FAILED /home/stefanv/git/Xenon/build.xml:22: The following error occurred while executing this line: /home/stefanv/git/Xenon/test/build.xml:78: Problem: failed to create task or type sshexec Cause: the class org.apache.tools.ant.taskdefs.optional.ssh.SSHExec was not found. This looks like one of Ant's optional components. Action: Check that the appropriate optional JAR exists in -/usr/share/ant/lib -/home/stefanv/.ant/lib -a directory added on the command line with the -lib argument ```` I got it working by installing 'ant-optional' package and running tests with `ant -lib /usr/share/java`. This should be documented.
non_process
ant test fails with failed to create task or type sshexec running test with ant test gave the following error build failed home stefanv git xenon build xml the following error occurred while executing this line home stefanv git xenon test build xml problem failed to create task or type sshexec cause the class org apache tools ant taskdefs optional ssh sshexec was not found this looks like one of ant s optional components action check that the appropriate optional jar exists in usr share ant lib home stefanv ant lib a directory added on the command line with the lib argument i got it working by installing ant optional package and running tests with ant lib usr share java this should be documented
0
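The workaround recorded in the row above (install the 'ant-optional' package, then run Ant with `-lib /usr/share/java`) can be sketched as a small helper that assembles the documented invocation. This is a Python stand-in for the reporter's shell command; the helper name and the wrapper itself are illustrative, not part of any project.

```python
import os
import shlex

# Directories Ant searches for optional JARs, as listed in the error
# message above; the extra system path is the reporter's workaround.
DEFAULT_ANT_LIB_DIRS = ["/usr/share/ant/lib", os.path.expanduser("~/.ant/lib")]
EXTRA_LIB_DIR = "/usr/share/java"  # where the 'ant-optional' package puts its JARs

def build_ant_command(target, extra_lib_dir=EXTRA_LIB_DIR):
    # Equivalent to the documented fix: ant -lib /usr/share/java <target>
    return ["ant", "-lib", extra_lib_dir, target]

print(shlex.join(build_ant_command("test")))  # ant -lib /usr/share/java test
```

Building the argv list and joining it with `shlex.join` keeps the command copy-pasteable while avoiding any shell quoting surprises.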
85,860
15,755,295,011
IssuesEvent
2021-03-31 01:31:30
Baneeishaque/wp4
https://api.github.com/repos/Baneeishaque/wp4
opened
CVE-2020-11023 (Medium) detected in jquery-3.3.1.min.js, jquery-1.11.3.min.js
security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-1.11.3.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p> <p>Path to vulnerable library: wp4/wp-includes/sodium_compat/vendor/phpunit/php-code-coverage/src/Report/Html/Renderer/Template/js/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to dependency file: wp4/wp-content/plugins/unyson/framework/static/libs/unycon/index.html</p> <p>Path to vulnerable library: wp4/wp-content/plugins/unyson/framework/static/libs/unycon/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. 
<p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11023 (Medium) detected in jquery-3.3.1.min.js, jquery-1.11.3.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-1.11.3.min.js</b></p></summary> <p> <details><summary><b>jquery-3.3.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p> <p>Path to vulnerable library: wp4/wp-includes/sodium_compat/vendor/phpunit/php-code-coverage/src/Report/Html/Renderer/Template/js/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-3.3.1.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.11.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p> <p>Path to dependency file: wp4/wp-content/plugins/unyson/framework/static/libs/unycon/index.html</p> <p>Path to vulnerable library: wp4/wp-content/plugins/unyson/framework/static/libs/unycon/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.11.3.min.js** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. 
<p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to vulnerable library wp includes sodium compat vendor phpunit php code coverage src report html renderer template js jquery min js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file wp content plugins unyson framework static libs unycon index html path to vulnerable library wp content plugins unyson framework static libs unycon index html dependency hierarchy x jquery min js vulnerable library vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
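The affected range in the advisory row above — jQuery versions greater than or equal to 1.0.3 and before 3.5.0 — is easy to get wrong when auditing bundled copies by hand. A minimal sketch of the range check, assuming plain dotted version strings with no pre-release suffixes:

```python
def parse_version(v):
    # "3.3.1" -> (3, 3, 1): a tuple of ints that compares lexicographically.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable_to_cve_2020_11023(version):
    # Affected range per the advisory text: >= 1.0.3 and < 3.5.0.
    return parse_version("1.0.3") <= parse_version(version) < parse_version("3.5.0")

# Both bundled copies named in the issue title fall inside the range.
for v in ("3.3.1", "1.11.3"):
    print(v, is_vulnerable_to_cve_2020_11023(v))
```

Tuple comparison is what makes this correct: a naive string comparison would wrongly rank "1.11.3" below "1.2.0".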
17,348
23,172,940,282
IssuesEvent
2022-07-31 01:32:44
bitnami/vms
https://api.github.com/repos/bitnami/vms
closed
[ProcessMaker] PHP exec() issue
tech-issues triage stale processmaker
### Platform Installers ### bndiagnostic ID [know more about bndiagnostic ID](https://docs.bitnami.com/general/how-to/understand-bndiagnostic/) 4792b768-57cb-88d7-a8a5-476d7cda4ea9 ### bndiagnostic output pdfunite: /opt/processmaker-3.2.1-0/common/lib/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib64/libpng16.so.16) ### bndiagnostic was not useful. Could you please tell us why? Do not know where to fix the missing / unmatch version from Bitnami. ### Describe your issue as much as you can I'm running Bitnami ProcessMaker 3.2.1 on OpenSuSe Leap 15.1. Debugging information of my PHP program executing in a ProcessMaker trigger are as follows. ``` $doc1Path = /opt/processmaker-3.2.1-0/apps/processmaker/htdocs/shared/sites/workflow/files/658/949/791/62bde58ed8edb6031387589/outdocs/70980437362bde5bb068965026481252_1.pdf; $doc2Path = /opt/processmaker-3.2.1-0/apps/processmaker/htdocs/shared/sites/workflow/files/658/949/791/62bde58ed8edb6031387589/outdocs/31336435762bde5bb5c1567011102787_1.pdf; $joinedFilePath = /tmp/joined_34vTZO.pdf; exec("pdfunite $doc1Path $doc2Path $joinedFilePath", @=aOutput); ProcessMaker Error Message: Unable to upload joined file '/tmp/joined_34vTZO.pdf' to case. ``` However, an empty file /tmp/joined_34vTZO (without the extension ".pdf") is found in /tmp On the other hand, there is no problem on executing the command "pdfunite $doc1Path $doc2Path $joinedFilePath" in a console.
1.0
[ProcessMaker] PHP exec() issue - ### Platform Installers ### bndiagnostic ID [know more about bndiagnostic ID](https://docs.bitnami.com/general/how-to/understand-bndiagnostic/) 4792b768-57cb-88d7-a8a5-476d7cda4ea9 ### bndiagnostic output pdfunite: /opt/processmaker-3.2.1-0/common/lib/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib64/libpng16.so.16) ### bndiagnostic was not useful. Could you please tell us why? Do not know where to fix the missing / unmatch version from Bitnami. ### Describe your issue as much as you can I'm running Bitnami ProcessMaker 3.2.1 on OpenSuSe Leap 15.1. Debugging information of my PHP program executing in a ProcessMaker trigger are as follows. ``` $doc1Path = /opt/processmaker-3.2.1-0/apps/processmaker/htdocs/shared/sites/workflow/files/658/949/791/62bde58ed8edb6031387589/outdocs/70980437362bde5bb068965026481252_1.pdf; $doc2Path = /opt/processmaker-3.2.1-0/apps/processmaker/htdocs/shared/sites/workflow/files/658/949/791/62bde58ed8edb6031387589/outdocs/31336435762bde5bb5c1567011102787_1.pdf; $joinedFilePath = /tmp/joined_34vTZO.pdf; exec("pdfunite $doc1Path $doc2Path $joinedFilePath", @=aOutput); ProcessMaker Error Message: Unable to upload joined file '/tmp/joined_34vTZO.pdf' to case. ``` However, an empty file /tmp/joined_34vTZO (without the extension ".pdf") is found in /tmp On the other hand, there is no problem on executing the command "pdfunite $doc1Path $doc2Path $joinedFilePath" in a console.
process
php exec issue platform installers bndiagnostic id bndiagnostic output pdfunite opt processmaker common lib libz so version zlib not found required by usr so bndiagnostic was not useful could you please tell us why do not know where to fix the missing unmatch version from bitnami describe your issue as much as you can i m running bitnami processmaker on opensuse leap debugging information of my php program executing in a processmaker trigger are as follows opt processmaker apps processmaker htdocs shared sites workflow files outdocs pdf opt processmaker apps processmaker htdocs shared sites workflow files outdocs pdf joinedfilepath tmp joined pdf exec pdfunite joinedfilepath aoutput processmaker error message unable to upload joined file tmp joined pdf to case however an empty file tmp joined without the extension pdf is found in tmp on the other hand there is no problem on executing the command pdfunite joinedfilepath in a console
1
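The symptom in the row above — an empty `/tmp/joined_…` file with the `.pdf` extension missing — is the kind of truncation that shell re-parsing of an interpolated command string can produce. One defensive pattern is to build the argument vector explicitly so each path stays a single argument. This is a Python sketch of that pattern, not the reporter's PHP trigger, and the paths are illustrative:

```python
import shlex

def build_pdfunite_command(input_paths, output_path):
    # Each path is its own argv element, so the shell never re-splits it;
    # run with subprocess.run(cmd, check=True) where pdfunite is installed.
    return ["pdfunite", *input_paths, output_path]

cmd = build_pdfunite_command(
    ["/tmp/doc1.pdf", "/tmp/doc2.pdf"],  # stand-ins for the real output documents
    "/tmp/joined.pdf",
)
print(shlex.join(cmd))  # pdfunite /tmp/doc1.pdf /tmp/doc2.pdf /tmp/joined.pdf
```

The same idea carries back to PHP: passing pre-escaped arguments (e.g. via `escapeshellarg`) rather than interpolating raw variables into one command string avoids this class of failure.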
14,846
18,239,572,146
IssuesEvent
2021-10-01 11:13:16
googleapis/python-bigquery
https://api.github.com/repos/googleapis/python-bigquery
opened
Keep legacy generated SQL types available for import
type: process semver: major
As a follow-up to #814 and as discussed with @tswast on the chat, some users who could otherwise upgrade to a `3.x` version would still need the legacy generated SQL model types that the `3.x` version will remove. We should add these models back to the codebase, but with the following restrictions: - These legacy files would be copy-pasted to their old place, but will not be generated anymore. - The legacy models would not be exposed in the library's top-level namespace, nor would they be imported or used anywhere else in the code. Users will have to import them using the full module path, and at their own risk. - These types will not be maintained in any way, and will eventually get out of sync with what the code generator would produce. This is intentional. - When importing anything from that legacy sub-package, a warning should be issued. @tswast Feel free to update/correct the requirements above in case I misunderstood/misremembered them.
1.0
Keep legacy generated SQL types available for import - As a follow-up to #814 and as discussed with @tswast on the chat, some users who could otherwise upgrade to a `3.x` version would still need the legacy generated SQL model types that the `3.x` version will remove. We should add these models back to the codebase, but with the following restrictions: - These legacy files would be copy-pasted to their old place, but will not be generated anymore. - The legacy models would not be exposed in the library's top-level namespace, nor would they be imported or used anywhere else in the code. Users will have to import them using the full module path, and at their own risk. - These types will not be maintained in any way, and will eventually get out of sync with what the code generator would produce. This is intentional. - When importing anything from that legacy sub-package, a warning should be issued. @tswast Feel free to update/correct the requirements above in case I misunderstood/misremembered them.
process
keep legacy generated sql types available for import as a follow up to and as discussed with tswast on the chat some users that otherwise could upgrade to a x version would still need the legacy generated sql model types that the x version will remove we should add these models back to the codebase but with the following restrictions these legacy files would be copy pasted to their old place but will not be generated anymore the legacy models would not be exposed in the library s top level namespace nor would they be imported or used anywhere else in the code users will have to import them using the full module path and at their own risk these types will not be maintained in any way and will eventually get out of sync with what the code generator would produce this is intentional when importing anything from that legacy sub package a warning should be issued tswast feel free to update correct the requirements above in case i misunderstood misremembered them
1
101,794
4,140,866,180
IssuesEvent
2016-06-14 00:58:59
yoshiquest/forge-clj
https://api.github.com/repos/yoshiquest/forge-clj
opened
Add Advanced World Generation and Custom Dimensions
enhancement Low Priority
I need to go back and rewrite the chunk providing code again, and I also need to implement everything else required for a Custom Dimension. Low priority right now, as I am not looking forward to doing this again, and there are other, hopefully-less-difficult things that I could do first. If you would like to help, this would be an acceptable place to help out at.
1.0
Add Advanced World Generation and Custom Dimensions - I need to go back and rewrite the chunk providing code again, and I also need to implement everything else required for a Custom Dimension. Low priority right now, as I am not looking forward to doing this again, and there are other, hopefully-less-difficult things that I could do first. If you would like to help, this would be an acceptable place to help out at.
non_process
add advanced world generation and custom dimensions i need to go back and rewrite the chunk providing code again and i also need to implement everything else required for a custom dimension low priority right now as i am not looking forward to doing this again and there are other hopefully less difficult things that i could do first if you would like to help this would be an acceptable place to help out at
0