Column schema (name, dtype, observed range or cardinality):

  Unnamed: 0      int64          0 .. 832k
  id              float64        2.49B .. 32.1B
  type            stringclasses  1 value
  created_at      stringlengths  19 .. 19
  repo            stringlengths  5 .. 112
  repo_url        stringlengths  34 .. 141
  action          stringclasses  3 values
  title           stringlengths  1 .. 757
  labels          stringlengths  4 .. 664
  body            stringlengths  3 .. 261k
  index           stringclasses  10 values
  text_combine    stringlengths  96 .. 261k
  label           stringclasses  2 values
  text            stringlengths  96 .. 232k
  binary_label    int64          0 .. 1
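The `label`/`binary_label` pairing and the `stringlengths` statistics above can be illustrated with a small sketch. This is plain Python over toy records abridged from the sample rows below, not the real data pipeline:

```python
# Toy records mirroring two of the preview rows (titles abridged, values illustrative).
records = [
    {"title": "Switching video room -> video room is broken", "label": "defect"},
    {"title": "[Testerina] Function pointer argument support", "label": "non_defect"},
]

# binary_label looks like a 0/1 encoding of label: defect -> 1, non_defect -> 0.
for r in records:
    r["binary_label"] = 1 if r["label"] == "defect" else 0

# A "stringlengths 1 .. 757" entry is just the min/max string length seen in a column.
title_lengths = [len(r["title"]) for r in records]
print(min(title_lengths), max(title_lengths))
print([r["binary_label"] for r in records])
```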
Row 1
  Unnamed: 0:   68,767
  id:           21,883,645,800
  type:         IssuesEvent
  created_at:   2022-05-19 16:22:41
  repo:         vector-im/element-web
  repo_url:     https://api.github.com/repos/vector-im/element-web
  action:       opened
  title:        Switching video room -> video room is broken
  labels:       T-Defect A-Video-Rooms
  body:
### Steps to reproduce 1. Join a public video room 2. Click on a private video room 3. PIP for public room closes, no new PIP appears, private room isn't joined 4. Navigate away, see PIP for private video room 5. Click on private video room, see the Join screen ### Outcome #### What did you expect? Can switch video rooms and join other video rooms #### What happened instead? See video, especially PIP and switching rooms https://user-images.githubusercontent.com/51663/169349622-c5c6881e-524a-4048-89a1-08dd631fe010.mp4 ### Operating system _No response_ ### Browser information Chromium 101.0.4951.64 (Official Build) Arch Linux (64-bit) ### URL for webapp develop.element.io ### Application version Element version: b2d057b7c34c-react-efc36acf9334-js-81d884f899eb Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? Yes
  index:        1.0
  text_combine:
Switching video room -> video room is broken - ### Steps to reproduce 1. Join a public video room 2. Click on a private video room 3. PIP for public room closes, no new PIP appears, private room isn't joined 4. Navigate away, see PIP for private video room 5. Click on private video room, see the Join screen ### Outcome #### What did you expect? Can switch video rooms and join other video rooms #### What happened instead? See video, especially PIP and switching rooms https://user-images.githubusercontent.com/51663/169349622-c5c6881e-524a-4048-89a1-08dd631fe010.mp4 ### Operating system _No response_ ### Browser information Chromium 101.0.4951.64 (Official Build) Arch Linux (64-bit) ### URL for webapp develop.element.io ### Application version Element version: b2d057b7c34c-react-efc36acf9334-js-81d884f899eb Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? Yes
  label:        defect
  text:
switching video room video room is broken steps to reproduce join a public video room click on a private video room pip for public room closes no new pip appears private room isn t joined navigate away see pip for private video room click on private video room see the join screen outcome what did you expect can switch video rooms and join other video rooms what happened instead see video especially pip and switching rooms operating system no response browser information chromium official build arch linux bit url for webapp develop element io application version element version react js olm version homeserver matrix org will you send logs yes
  binary_label: 1
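Comparing `text_combine` with `text` in the row above suggests the `text` column is a normalized form: lowercased, with URLs, digits, and punctuation dropped and whitespace collapsed. The actual cleaning code is not given in this preview; the following is a sketch consistent with the sample rows (the function name is mine):

```python
import re

def normalize(text_combine: str) -> str:
    """Plausible reconstruction of the text_combine -> text cleaning step."""
    s = text_combine.lower()
    s = re.sub(r"(?:https?|ftp)://\S+", " ", s)  # drop URLs before stripping punctuation
    s = re.sub(r"[^a-z\s]", " ", s)              # digits and punctuation become spaces
    return " ".join(s.split())                   # collapse runs of whitespace

print(normalize("Switching video room -> video room is broken"))
# switching video room video room is broken
```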
Row 2
  Unnamed: 0:   39,238
  id:           9,333,395,977
  type:         IssuesEvent
  created_at:   2019-03-28 14:20:35
  repo:         OpenMS/OpenMS
  repo_url:     https://api.github.com/repos/OpenMS/OpenMS
  action:       closed
  title:        ResidueDB not thread safe.
  labels:       OpenMS - library defect major
  body:
e.g. parsing of AASequences containing modifications with multiple threads causes segmentation faults. Reason is that modified residues are implicitly added to the ResidueDB singleton if they don't exist. #5 0x00007ffff6b95586 in OpenMS::Map<OpenMS::String, OpenMS::Map<OpenMS::String, OpenMS::Residue*> >::operator[](this=0x7678e8, key=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/include/OpenMS/DATASTRUCTURES/Map.h:127 ``` #6 0x00007ffff6b92179 in OpenMS::ResidueDB::addResidue_ (this=0x7670b0, r=0x7fffdc004400) at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/ResidueDB.cpp:161 #7 0x00007ffff6b94aae in OpenMS::ResidueDB::getModifiedResidue (this=0x7670b0, residue=0x846070, modification=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/ResidueDB.cpp:483 ``` #8 0x00007ffff6b5b00a in OpenMS::AASequence::setModification (this=0x7fffdc004ad0, index=16, modification=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/AASequence.cpp:902 ``` #9 0x00007ffff70a8be9 in OpenMS::ModifiedPeptideGenerator::applyFixedModifications (fixed_mods_begin=..., fixed_mods_end=..., peptide=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/ANALYSIS/RNPXL/ModifiedPeptideGenerator.cpp:71 ``` #10 0x0000000000497534 in SimpleSearchEngine::main_ (.omp_data_i=0x7fffffffaf40) at /abi-data/sachsenb/OpenMS_IDE/src/utils/SimpleSearchEngine.cpp:582 #11 0x00007ffff400ae3a in gomp_thread_start (xdata=<value optimized out>) at ../.././libgomp/team.c:115 #12 0x00000033854079d1 in start_thread () from /lib64/libpthread.so.0 #13 0x00000030bd2e8b6d in clone () from /lib64/libc.so.6
  index:        1.0
  text_combine:
ResidueDB not thread safe. - e.g. parsing of AASequences containing modifications with multiple threads causes segmentation faults. Reason is that modified residues are implicitly added to the ResidueDB singleton if they don't exist. #5 0x00007ffff6b95586 in OpenMS::Map<OpenMS::String, OpenMS::Map<OpenMS::String, OpenMS::Residue*> >::operator[](this=0x7678e8, key=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/include/OpenMS/DATASTRUCTURES/Map.h:127 ``` #6 0x00007ffff6b92179 in OpenMS::ResidueDB::addResidue_ (this=0x7670b0, r=0x7fffdc004400) at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/ResidueDB.cpp:161 #7 0x00007ffff6b94aae in OpenMS::ResidueDB::getModifiedResidue (this=0x7670b0, residue=0x846070, modification=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/ResidueDB.cpp:483 ``` #8 0x00007ffff6b5b00a in OpenMS::AASequence::setModification (this=0x7fffdc004ad0, index=16, modification=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/CHEMISTRY/AASequence.cpp:902 ``` #9 0x00007ffff70a8be9 in OpenMS::ModifiedPeptideGenerator::applyFixedModifications (fixed_mods_begin=..., fixed_mods_end=..., peptide=...) ``` at /abi-data/sachsenb/OpenMS_IDE/src/openms/source/ANALYSIS/RNPXL/ModifiedPeptideGenerator.cpp:71 ``` #10 0x0000000000497534 in SimpleSearchEngine::main_ (.omp_data_i=0x7fffffffaf40) at /abi-data/sachsenb/OpenMS_IDE/src/utils/SimpleSearchEngine.cpp:582 #11 0x00007ffff400ae3a in gomp_thread_start (xdata=<value optimized out>) at ../.././libgomp/team.c:115 #12 0x00000033854079d1 in start_thread () from /lib64/libpthread.so.0 #13 0x00000030bd2e8b6d in clone () from /lib64/libc.so.6
  label:        defect
  text:
residuedb not thread safe e g parsing of aasequences containing modifications with multiple threads causes segmentation faults reason is that modified residues are implicitly added to the residuedb singleton if they don t exist in openms map operator this key at abi data sachsenb openms ide src openms include openms datastructures map h in openms residuedb addresidue this r at abi data sachsenb openms ide src openms source chemistry residuedb cpp in openms residuedb getmodifiedresidue this residue modification at abi data sachsenb openms ide src openms source chemistry residuedb cpp in openms aasequence setmodification this index modification at abi data sachsenb openms ide src openms source chemistry aasequence cpp in openms modifiedpeptidegenerator applyfixedmodifications fixed mods begin fixed mods end peptide at abi data sachsenb openms ide src openms source analysis rnpxl modifiedpeptidegenerator cpp in simplesearchengine main omp data i at abi data sachsenb openms ide src utils simplesearchengine cpp in gomp thread start xdata at libgomp team c in start thread from libpthread so in clone from libc so
  binary_label: 1
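The stack trace in the row above describes an unsynchronized check-and-insert on a singleton map reached from multiple OpenMP threads. The standard fix is to serialize that check-and-insert; here is a generic sketch of the pattern as a Python stand-in (not OpenMS code, and all names are illustrative):

```python
import threading

class ResidueRegistry:
    """Toy analogue of a lazily populated singleton registry like ResidueDB."""

    def __init__(self):
        self._lock = threading.Lock()
        self._residues = {}

    def get_modified_residue(self, residue: str, modification: str) -> str:
        key = (residue, modification)
        with self._lock:  # one lock around check-and-insert prevents the race
            if key not in self._residues:
                self._residues[key] = f"{residue}[{modification}]"
            return self._residues[key]

reg = ResidueRegistry()
threads = [
    threading.Thread(target=reg.get_modified_residue, args=("M", "Oxidation"))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(reg.get_modified_residue("M", "Oxidation"))
```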
Row 3
  Unnamed: 0:   23,203
  id:           3,776,138,374
  type:         IssuesEvent
  created_at:   2016-03-17 15:45:45
  repo:         obophenotype/upheno
  repo_url:     https://api.github.com/repos/obophenotype/upheno
  action:       closed
  title:        Add MedGen to disease importer
  labels:       Priority-Medium Status-Accepted Type-Defect
  body:
Originally reported on Google Code with ID 21 ``` for better or worse, medgen exists and has mappings to HPO, SNOMED, OMIM, and orphanet. I am not sure how these mappings relate to those that are assigned to the "phenotypes" in ClinVar, which can also be construed as mappings. More info here: ftp://ftp.ncbi.nlm.nih.gov/pub/medgen/ http://www.ncbi.nlm.nih.gov/medgen/docs/faq/ http://www.ncbi.nlm.nih.gov/projects/clinvar/ClinVarDataDictionary.pdf note that we basically get a bunch of mappings via our ClinVar ingest, that may or may not match our own DO-miner mappings. it would be good to get a report on this too. ``` Reported by `haendel@ohsu.edu` on 2014-05-08 16:07:25
  index:        1.0
  text_combine:
Add MedGen to disease importer - Originally reported on Google Code with ID 21 ``` for better or worse, medgen exists and has mappings to HPO, SNOMED, OMIM, and orphanet. I am not sure how these mappings relate to those that are assigned to the "phenotypes" in ClinVar, which can also be construed as mappings. More info here: ftp://ftp.ncbi.nlm.nih.gov/pub/medgen/ http://www.ncbi.nlm.nih.gov/medgen/docs/faq/ http://www.ncbi.nlm.nih.gov/projects/clinvar/ClinVarDataDictionary.pdf note that we basically get a bunch of mappings via our ClinVar ingest, that may or may not match our own DO-miner mappings. it would be good to get a report on this too. ``` Reported by `haendel@ohsu.edu` on 2014-05-08 16:07:25
  label:        defect
  text:
add medgen to disease importer originally reported on google code with id for better or worse medgen exists and has mappings to hpo snomed omim and orphanet i am not sure how these mappings relate to those that are assigned to the phenotypes in clinvar which can also be construed as mappings more info here ftp ftp ncbi nlm nih gov pub medgen note that we basically get a bunch of mappings via our clinvar ingest that may or may not match our own do miner mappings it would be good to get a report on this too reported by haendel ohsu edu on
  binary_label: 1
Row 4
  Unnamed: 0:   190,165
  id:           14,535,015,754
  type:         IssuesEvent
  created_at:   2020-12-15 04:30:58
  repo:         ballerina-platform/ballerina-lang
  repo_url:     https://api.github.com/repos/ballerina-platform/ballerina-lang
  action:       closed
  title:        [Testerina] Function pointer argument support for Function mocking
  labels:       Component/Testerina Team/TestFramework Type/Improvement
  body:
**Description:** Function mocking needs to support having function pointers in the arguments. Consider the following function to be mocked. ``` public isolated function print(string msg, *KeyValues keyValues) { ``` Here `keyValues` being the new data type to be supported ``` public type KeyValues record {| never msg?; Value...; |}; ``` The current implementation does not support mocking the `keyValues` argument. **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
  index:        2.0
  text_combine:
[Testerina] Function pointer argument support for Function mocking - **Description:** Function mocking needs to support having function pointers in the arguments. Consider the following function to be mocked. ``` public isolated function print(string msg, *KeyValues keyValues) { ``` Here `keyValues` being the new data type to be supported ``` public type KeyValues record {| never msg?; Value...; |}; ``` The current implementation does not support mocking the `keyValues` argument. **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
  label:        non_defect
  text:
function pointer argument support for function mocking description function mocking needs to support having function pointers in the arguments consider the following function to be mocked public isolated function print string msg keyvalues keyvalues here keyvalues being the new data type to be supported public type keyvalues record never msg value the current implementation does not support mocking the keyvalues argument steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
  binary_label: 0
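Ballerina specifics aside, the general problem in the row above is mocking a function whose signature accepts arbitrary key/value arguments. As a language-neutral illustration using Python's `unittest.mock` (this is not Testerina; the module and function names are made up):

```python
import types
from unittest.mock import patch

# Hypothetical function analogous to the Ballerina `print(string msg, *KeyValues keyValues)`:
# a message plus arbitrary key/value pairs.
def log_print(msg, **key_values):
    return f"{msg} {key_values}"

mod = types.ModuleType("demo_logger")  # stand-in module so patch.object has a target
mod.log_print = log_print

with patch.object(mod, "log_print", return_value="mocked") as fake:
    result = mod.log_print("hello", level="INFO")

# The mock records the key/value arguments just like positional ones.
fake.assert_called_once_with("hello", level="INFO")
print(result)
```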
Row 5
  Unnamed: 0:   12,368
  id:           2,694,263,626
  type:         IssuesEvent
  created_at:   2015-04-01 19:16:16
  repo:         google/google-api-go-client
  repo_url:     https://api.github.com/repos/google/google-api-go-client
  action:       opened
  title:        [FR] Support the discovery document "supportsMediaDownload" flag
  labels:       new priority-medium type-defect
  body:
**alainv@google.com** on 25 Oct 2012 at 11:11: ``` Google APIs can set a flag in their discovery document to clients know that a method supports the "?alt=media" query parameter to download media content instead of JSON metadata. An example can be seen on the storage.objects.get endpoint from the Cloud Storage API: "get": { "id": "storage.objects.get", "path": "b/{bucket}/o/{object}", "httpMethod": "GET", "description": "Retrieves objects or their associated metadata.", "parameters": { "bucket": { "type": "string", "description": "Name of the bucket in which the object resides.", "required": true, "location": "path" }, "object": { "type": "string", "description": "Name of the object.", "required": true, "location": "path" }, "projection": { "type": "string", "description": "Set of properties to return. Defaults to no_acl.", "enum": [ "full", "no_acl" ], "enumDescriptions": [ "Include all properties.", "Omit the acl property." ], "location": "query" } }, "parameterOrder": [ "bucket", "object" ], "response": { "$ref": "Object" }, "scopes": [ "https://www.googleapis.com/auth/devstorage.full_control", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/devstorage.read_write" ], "supportsMediaDownload": true } [from https://www.googleapis.com/discovery/v1/apis/storage/v1beta1/rest] If such flag is set, the generator could generate a Download method that would set the "?alt=" query parameter to "media" and return an http.Response object or a simpler object containing: * an io.Reader for the content. * a string for the content-type. This makes it easier for client to download content instead of manually building URLs and HTTP requests. ```
  index:        1.0
  text_combine:
[FR] Support the discovery document "supportsMediaDownload" flag - **alainv@google.com** on 25 Oct 2012 at 11:11: ``` Google APIs can set a flag in their discovery document to clients know that a method supports the "?alt=media" query parameter to download media content instead of JSON metadata. An example can be seen on the storage.objects.get endpoint from the Cloud Storage API: "get": { "id": "storage.objects.get", "path": "b/{bucket}/o/{object}", "httpMethod": "GET", "description": "Retrieves objects or their associated metadata.", "parameters": { "bucket": { "type": "string", "description": "Name of the bucket in which the object resides.", "required": true, "location": "path" }, "object": { "type": "string", "description": "Name of the object.", "required": true, "location": "path" }, "projection": { "type": "string", "description": "Set of properties to return. Defaults to no_acl.", "enum": [ "full", "no_acl" ], "enumDescriptions": [ "Include all properties.", "Omit the acl property." ], "location": "query" } }, "parameterOrder": [ "bucket", "object" ], "response": { "$ref": "Object" }, "scopes": [ "https://www.googleapis.com/auth/devstorage.full_control", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/devstorage.read_write" ], "supportsMediaDownload": true } [from https://www.googleapis.com/discovery/v1/apis/storage/v1beta1/rest] If such flag is set, the generator could generate a Download method that would set the "?alt=" query parameter to "media" and return an http.Response object or a simpler object containing: * an io.Reader for the content. * a string for the content-type. This makes it easier for client to download content instead of manually building URLs and HTTP requests. ```
  label:        defect
  text:
support the discovery document supportsmediadownload flag alainv google com on oct at google apis can set a flag in their discovery document to clients know that a method supports the alt media query parameter to download media content instead of json metadata an example can be seen on the storage objects get endpoint from the cloud storage api get id storage objects get path b bucket o object httpmethod get description retrieves objects or their associated metadata parameters bucket type string description name of the bucket in which the object resides required true location path object type string description name of the object required true location path projection type string description set of properties to return defaults to no acl enum full no acl enumdescriptions include all properties omit the acl property location query parameterorder bucket object response ref object scopes supportsmediadownload true if such flag is set the generator could generate a download method that would set the alt query parameter to media and return an http response object or a simpler object containing an io reader for the content a string for the content type this makes it easier for client to download content instead of manually building urls and http requests
  binary_label: 1
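The feature request above asks the generator to emit a Download method that sets `?alt=media`. Stripped of generator details, the core of such a method just rewrites the request URL; a minimal sketch (the helper name and example URL are hypothetical, not the google-api-go-client API):

```python
from urllib.parse import urlencode

def media_download_url(base_url, path, params=None):
    """Build a request URL with alt=media so the API returns content, not JSON metadata."""
    query = {"alt": "media", **(params or {})}
    return f"{base_url}/{path}?{urlencode(query)}"

url = media_download_url("https://www.googleapis.com", "storage/v1beta1/b/my-bucket/o/my-object")
print(url)
```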
Row 6
  Unnamed: 0:   300,965
  id:           26,006,419,544
  type:         IssuesEvent
  created_at:   2022-12-20 19:51:26
  repo:         neondatabase/neon
  repo_url:     https://api.github.com/repos/neondatabase/neon
  action:       closed
  title:        pg_ctl failed PID file does not exist at tear down of many tests
  labels:       t/bug a/test/flaky
  body:
https://github.com/neondatabase/neon/actions/runs/3656200757/jobs/6179034142 ``` 2022-12-09T10:14:58.4744182Z ==================================== ERRORS ==================================== 2022-12-09T10:14:58.4744882Z ________ ERROR at teardown of test_branching_with_pgbench[cascade-1-10] ________ 2022-12-09T10:14:58.4745495Z [gw3] linux -- Python 3.9.2 /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/bin/python 2022-12-09T10:14:58.4747223Z subprocess.CalledProcessError: Command '['/tmp/neon/bin/neon_local', 'pg', 'stop', '--tenant-id', 'cc5b2cfa553215779a417069e9424c61', 'b3_pg_node']' returned non-zero exit status 1. 2022-12-09T10:14:58.4747650Z 2022-12-09T10:14:58.4747822Z The above exception was the direct cause of the following exception: 2022-12-09T10:14:58.4748401Z /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/lib/python3.9/site-packages/allure_commons/_allure.py:200: in __call__ 2022-12-09T10:14:58.4751796Z return self._fixture_function(*args, **kwargs) 2022-12-09T10:14:58.4752776Z test_runner/fixtures/neon_fixtures.py:1004: in neon_simple_env 2022-12-09T10:14:58.4753245Z _shared_simple_env.postgres.stop_all() 2022-12-09T10:14:58.4753678Z test_runner/fixtures/neon_fixtures.py:2463: in stop_all 2022-12-09T10:14:58.4754293Z pg.stop() 2022-12-09T10:14:58.4754707Z test_runner/fixtures/neon_fixtures.py:2344: in stop 2022-12-09T10:14:58.4755160Z self.env.neon_cli.pg_stop( 2022-12-09T10:14:58.4755576Z test_runner/fixtures/neon_fixtures.py:1693: in pg_stop 2022-12-09T10:14:58.4756003Z return self.raw_cli(args, check_return_code=check_return_code) 2022-12-09T10:14:58.4756429Z test_runner/fixtures/neon_fixtures.py:1366: in raw_cli 2022-12-09T10:14:58.4756880Z raise Exception(msg) from subprocess.CalledProcessError( 2022-12-09T10:14:58.4757542Z E Exception: Run ['/tmp/neon/bin/neon_local', 'pg', 'stop', '--tenant-id', 'cc5b2cfa553215779a417069e9424c61', 'b3_pg_node'] failed: 2022-12-09T10:14:58.4757959Z E stdout: 
2022-12-09T10:14:58.4759025Z E stderr: command failed: pg_ctl failed, exit code: exit status: 1, stdout: , stderr: pg_ctl: PID file "/tmp/test_output/test_branching_with_pgbench[cascade-1-10]/repo/pgdatadirs/tenants/cc5b2cfa553215779a417069e9424c61/b3_pg_node/postmaster.pid" does not exist 2022-12-09T10:14:58.4759754Z E Is server running? ``` Unfortunately, allure failed for unclear reasons, so no pg logs apparently.
  index:        1.0
  text_combine:
pg_ctl failed PID file does not exist at tear down of many tests - https://github.com/neondatabase/neon/actions/runs/3656200757/jobs/6179034142 ``` 2022-12-09T10:14:58.4744182Z ==================================== ERRORS ==================================== 2022-12-09T10:14:58.4744882Z ________ ERROR at teardown of test_branching_with_pgbench[cascade-1-10] ________ 2022-12-09T10:14:58.4745495Z [gw3] linux -- Python 3.9.2 /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/bin/python 2022-12-09T10:14:58.4747223Z subprocess.CalledProcessError: Command '['/tmp/neon/bin/neon_local', 'pg', 'stop', '--tenant-id', 'cc5b2cfa553215779a417069e9424c61', 'b3_pg_node']' returned non-zero exit status 1. 2022-12-09T10:14:58.4747650Z 2022-12-09T10:14:58.4747822Z The above exception was the direct cause of the following exception: 2022-12-09T10:14:58.4748401Z /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/lib/python3.9/site-packages/allure_commons/_allure.py:200: in __call__ 2022-12-09T10:14:58.4751796Z return self._fixture_function(*args, **kwargs) 2022-12-09T10:14:58.4752776Z test_runner/fixtures/neon_fixtures.py:1004: in neon_simple_env 2022-12-09T10:14:58.4753245Z _shared_simple_env.postgres.stop_all() 2022-12-09T10:14:58.4753678Z test_runner/fixtures/neon_fixtures.py:2463: in stop_all 2022-12-09T10:14:58.4754293Z pg.stop() 2022-12-09T10:14:58.4754707Z test_runner/fixtures/neon_fixtures.py:2344: in stop 2022-12-09T10:14:58.4755160Z self.env.neon_cli.pg_stop( 2022-12-09T10:14:58.4755576Z test_runner/fixtures/neon_fixtures.py:1693: in pg_stop 2022-12-09T10:14:58.4756003Z return self.raw_cli(args, check_return_code=check_return_code) 2022-12-09T10:14:58.4756429Z test_runner/fixtures/neon_fixtures.py:1366: in raw_cli 2022-12-09T10:14:58.4756880Z raise Exception(msg) from subprocess.CalledProcessError( 2022-12-09T10:14:58.4757542Z E Exception: Run ['/tmp/neon/bin/neon_local', 'pg', 'stop', '--tenant-id', 'cc5b2cfa553215779a417069e9424c61', 'b3_pg_node'] 
failed: 2022-12-09T10:14:58.4757959Z E stdout: 2022-12-09T10:14:58.4759025Z E stderr: command failed: pg_ctl failed, exit code: exit status: 1, stdout: , stderr: pg_ctl: PID file "/tmp/test_output/test_branching_with_pgbench[cascade-1-10]/repo/pgdatadirs/tenants/cc5b2cfa553215779a417069e9424c61/b3_pg_node/postmaster.pid" does not exist 2022-12-09T10:14:58.4759754Z E Is server running? ``` Unfortunately, allure failed for unclear reasons, so no pg logs apparently.
  label:        non_defect
  text:
pg ctl failed pid file does not exist at tear down of many tests errors error at teardown of test branching with pgbench linux python github home cache pypoetry virtualenvs neon pxwmzvk bin python subprocess calledprocesserror command returned non zero exit status the above exception was the direct cause of the following exception github home cache pypoetry virtualenvs neon pxwmzvk lib site packages allure commons allure py in call return self fixture function args kwargs test runner fixtures neon fixtures py in neon simple env shared simple env postgres stop all test runner fixtures neon fixtures py in stop all pg stop test runner fixtures neon fixtures py in stop self env neon cli pg stop test runner fixtures neon fixtures py in pg stop return self raw cli args check return code check return code test runner fixtures neon fixtures py in raw cli raise exception msg from subprocess calledprocesserror e exception run failed e stdout e stderr command failed pg ctl failed exit code exit status stdout stderr pg ctl pid file tmp test output test branching with pgbench repo pgdatadirs tenants pg node postmaster pid does not exist e is server running unfortunately allure failed for unclear reasons so no pg logs apparently
  binary_label: 0
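The `raise Exception(msg) from subprocess.CalledProcessError(...)` lines in the traceback above show the fixture's wrapper pattern: run the CLI and, on a non-zero exit, re-raise with captured stdout/stderr attached so failures like the missing postmaster.pid stay visible. A self-contained sketch of that pattern (the command here is illustrative, not neon_local):

```python
import subprocess

def raw_cli(args):
    """Run a command; on failure raise with the captured output attached."""
    res = subprocess.run(args, capture_output=True, text=True)
    if res.returncode != 0:
        msg = f"Run {args} failed:\nstdout: {res.stdout}\nstderr: {res.stderr}"
        raise Exception(msg) from subprocess.CalledProcessError(res.returncode, args)
    return res

print(raw_cli(["echo", "ok"]).stdout.strip())
```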
Row 7
  Unnamed: 0:   322,667
  id:           27,623,554,006
  type:         IssuesEvent
  created_at:   2023-03-10 03:36:43
  repo:         TencentBlueKing/bk-ci
  repo_url:     https://api.github.com/repos/TencentBlueKing/bk-ci
  action:       closed
  title:        [蓝盾, no review meeting needed] Sort the pipeline group list alphabetically A–Z
  labels:       for gray kind/enhancement area/ci/frontend area/ci/backend tested grayed streams/for gray streams/grayed streams/done approved accepted
  body:
[Background] The pipeline group list currently displays as follows: <img width="120" alt="image" src="https://user-images.githubusercontent.com/54432927/215653016-b3a6ec2d-a75c-45e0-92fe-354076b52f7f.png"> There are two problems: 1. The pipeline group list added by users is not sorted. 2. When a pipeline group is added, it is appended to the end of the list by default, but its position changes after a manual refresh. After adding: <img width="150" alt="image" src="https://user-images.githubusercontent.com/54432927/215653313-8bcbf3bf-cde3-4492-97c4-6b597df09b7e.png"> After a manual refresh: <img width="150" alt="image" src="https://user-images.githubusercontent.com/54432927/215653402-7524a9bc-04b0-4139-8214-52282c8307b9.png"> [Requirements] 1. Sort the pipeline group list alphabetically A–Z. Sorting applies within each sub-section; the section order is: system pipeline groups, pinned user pipeline groups, unpinned user pipeline groups. 2. After a pipeline group is added successfully, refresh the pipeline group list automatically (a partial refresh: e.g. if a project pipeline group was added, refresh the "project pipeline groups" list).
  index:        1.0
  text_combine:
[蓝盾, no review meeting needed] Sort the pipeline group list alphabetically A–Z - [Background] The pipeline group list currently displays as follows: <img width="120" alt="image" src="https://user-images.githubusercontent.com/54432927/215653016-b3a6ec2d-a75c-45e0-92fe-354076b52f7f.png"> There are two problems: 1. The pipeline group list added by users is not sorted. 2. When a pipeline group is added, it is appended to the end of the list by default, but its position changes after a manual refresh. After adding: <img width="150" alt="image" src="https://user-images.githubusercontent.com/54432927/215653313-8bcbf3bf-cde3-4492-97c4-6b597df09b7e.png"> After a manual refresh: <img width="150" alt="image" src="https://user-images.githubusercontent.com/54432927/215653402-7524a9bc-04b0-4139-8214-52282c8307b9.png"> [Requirements] 1. Sort the pipeline group list alphabetically A–Z. Sorting applies within each sub-section; the section order is: system pipeline groups, pinned user pipeline groups, unpinned user pipeline groups. 2. After a pipeline group is added successfully, refresh the pipeline group list automatically (a partial refresh: e.g. if a project pipeline group was added, refresh the "project pipeline groups" list).
  label:        non_defect
  text:
蓝盾 no review meeting needed sort the pipeline group list alphabetically a z background the pipeline group list currently displays as follows img width alt image src there are two problems the pipeline group list added by users is not sorted when a pipeline group is added it is appended to the end of the list by default but its position changes after a manual refresh after adding img width alt image src after a manual refresh img width alt image src requirements sort the pipeline group list alphabetically a z sorting applies within each section section order system pipeline groups pinned user pipeline groups unpinned user pipeline groups after a pipeline group is added successfully refresh the pipeline group list automatically partial refresh e g if a project pipeline group was added refresh the project pipeline group list
  binary_label: 0
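The ordering requested above (a fixed section order, then alphabetical A–Z inside each section) is a straightforward composite sort key; a sketch with made-up group names:

```python
# Illustrative pipeline groups; "system" and "pinned" mirror the three sections
# described above: system groups, pinned user groups, then unpinned user groups.
groups = [
    {"name": "beta", "system": False, "pinned": False},
    {"name": "Alpha", "system": False, "pinned": True},
    {"name": "All pipelines", "system": True, "pinned": False},
    {"name": "alpha2", "system": False, "pinned": False},
]

# Section first (False sorts before True, so negate the flags), then case-insensitive name.
ordered = sorted(groups, key=lambda g: (not g["system"], not g["pinned"], g["name"].lower()))
print([g["name"] for g in ordered])
```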
Row 8
  Unnamed: 0:   33,958
  id:           14,239,233,238
  type:         IssuesEvent
  created_at:   2020-11-18 19:49:25
  repo:         hashicorp/terraform-provider-aws
  repo_url:     https://api.github.com/repos/hashicorp/terraform-provider-aws
  action:       closed
  title:        Terraform constantly updates resource policy on API Gateway
  labels:       bug new-resource service/apigateway
  body:
We're seeing an issue where Terraform constantly updates the resource policy of an API gateway: ``` module.apps.aws_api_gateway_rest_api.segmentation-etl-creation-api: Modifying... (ID: x023a0eez5) policy: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"execute-api:Invoke\",\"Resource\":\"arn:aws:execute-api:us-west-1:999999999999:0123456789a/*\",\"Condition\":{\"StringEquals\":{\"aws:SourceVpc\":\"vpc-bcdef123\"}}}]}" => "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"\",\n \"Effect\": \"Allow\",\n \"Action\": \"execute-api:Invoke\",\n \"Resource\": \"execute-api:/*\",\n \"Principal\": \"*\",\n \"Condition\": {\n \"StringEquals\": {\n \"aws:SourceVpc\": \"vpc-bcdef123\"\n }\n }\n }\n ]\n}" ``` In our terraform module we want to apply a resource policy our API gateway, so we have the following: ```hcl data "aws_iam_policy_document" "resource-policy" { statement { principals { type = "*" identifiers = ["*"] } actions = [ "execute-api:Invoke", ] resources = [ "execute-api:/*", ] condition { test = "StringEquals" variable = "aws:SourceVpc" values = ["${var.vpc_id}"] } } } resource "aws_api_gateway_rest_api" "rest-api" { name = "rest-api" description = "API to manage things" endpoint_configuration { types = ["PRIVATE"] } policy = "${data.aws_iam_policy_document.resource-policy.json}" } ``` According to the [Amazon docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-create-attach.html) we can use this `execute-api:/stage/method/part` short hand and AWS will expand this to the full ARN of the `aws_api_gateway_rest_api` instance. 
Ideally we'd like to be able to change the `resources` part of the resource policy to reference the ARN of the `aws_api_gateway_rest_api` instance directly, like so: ```hcl resources = [ "${aws_api_gateway_rest_api.rest-api.execution_arn}:/*", ] ``` except that this introduces a cycle between the resource policy and the REST API as both require each other to exist before they can be created: ``` Error: Cycle: module.apps.data.aws_iam_policy_document.resource-policy, module.apps.aws_api_gateway_rest_api.rest-api ``` This appears to be due to the unfortunate way that resource policies are stored in AWS. Reading the [docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-create-attach.html) it looks like resource policies don't exist as entities themselves but only as things that hang off a REST API. At the moment we're working around this by ignoring policy changes - it'd be great if there were a nicer way to do this. ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform Version * Terraform version `0.11.7` * AWS provider version `1.31.0` ### Affected Resource(s) * aws_api_gateway_rest_api
  index:        1.0
  text_combine:
Terraform constantly updates resource policy on API Gateway - We're seeing an issue where Terraform constantly updates the resource policy of an API gateway: ``` module.apps.aws_api_gateway_rest_api.segmentation-etl-creation-api: Modifying... (ID: x023a0eez5) policy: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"execute-api:Invoke\",\"Resource\":\"arn:aws:execute-api:us-west-1:999999999999:0123456789a/*\",\"Condition\":{\"StringEquals\":{\"aws:SourceVpc\":\"vpc-bcdef123\"}}}]}" => "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"\",\n \"Effect\": \"Allow\",\n \"Action\": \"execute-api:Invoke\",\n \"Resource\": \"execute-api:/*\",\n \"Principal\": \"*\",\n \"Condition\": {\n \"StringEquals\": {\n \"aws:SourceVpc\": \"vpc-bcdef123\"\n }\n }\n }\n ]\n}" ``` In our terraform module we want to apply a resource policy our API gateway, so we have the following: ```hcl data "aws_iam_policy_document" "resource-policy" { statement { principals { type = "*" identifiers = ["*"] } actions = [ "execute-api:Invoke", ] resources = [ "execute-api:/*", ] condition { test = "StringEquals" variable = "aws:SourceVpc" values = ["${var.vpc_id}"] } } } resource "aws_api_gateway_rest_api" "rest-api" { name = "rest-api" description = "API to manage things" endpoint_configuration { types = ["PRIVATE"] } policy = "${data.aws_iam_policy_document.resource-policy.json}" } ``` According to the [Amazon docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-create-attach.html) we can use this `execute-api:/stage/method/part` short hand and AWS will expand this to the full ARN of the `aws_api_gateway_rest_api` instance. 
Ideally we'd like to be able to change the `resources` part of the resource policy to reference the ARN of the `aws_api_gateway_rest_api` instance directly, like so: ```hcl resources = [ "${aws_api_gateway_rest_api.rest-api.execution_arn}:/*", ] ``` except that this introduces a cycle between the resource policy and the REST API as both require each other to exist before they can be created: ``` Error: Cycle: module.apps.data.aws_iam_policy_document.resource-policy, module.apps.aws_api_gateway_rest_api.rest-api ``` This appears to be due to the unfortunate way that resource policies are stored in AWS. Reading the [docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-create-attach.html) it looks like resource policies don't exist as entities themselves but only as things that hang off a REST API. At the moment we're working around this by ignoring policy changes - it'd be great if there were a nicer way to do this. ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform Version * Terraform version `0.11.7` * AWS provider version `1.31.0` ### Affected Resource(s) * aws_api_gateway_rest_api
non_defect
terraform constantly updates resource policy on api gateway we re seeing an issue where terraform constantly updates the resource policy of an api gateway module apps aws api gateway rest api segmentation etl creation api modifying id policy version statement n version n statement n in our terraform module we want to apply a resource policy our api gateway so we have the following hcl data aws iam policy document resource policy statement principals type identifiers actions execute api invoke resources execute api condition test stringequals variable aws sourcevpc values resource aws api gateway rest api rest api name rest api description api to manage things endpoint configuration types policy data aws iam policy document resource policy json according to the we can use this execute api stage method part short hand and aws will expand this to the full arn of the aws api gateway rest api instance ideally we d like to be able to change the resources part of the resource policy to reference the arn of the aws api gateway rest api instance directly like so hcl resources aws api gateway rest api rest api execution arn except that this introduces a cycle between the resource policy and the rest api as both require each other to exist before they can be created error cycle module apps data aws iam policy document resource policy module apps aws api gateway rest api rest api this appears to be due to the unfortunate way that resource policies are stored in aws reading the it looks like resource policies don t exist as entities themselves but only as things that hang off a rest api at the moment we re working around this by ignoring policy changes it d be great if there were a nicer way to do this community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are 
interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform version aws provider version affected resource s aws api gateway rest api
0
7,576
2,610,406,117
IssuesEvent
2015-02-26 20:11:58
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
opened
Medical Droids + Bunkers
auto-migrated Priority-Medium Type-Defect
``` A user writes: The [medical droids] are healing the infantry bunker when they are inside....then is impossible to destroy it...I dont know if is it correct that they have to heal the bunkers, but I think thats not necessary. And I think that when you have higher number of repair droids in the bunker then it makes it harder to destroy it. ``` ----- Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 20 Aug 2011 at 4:59
1.0
Medical Droids + Bunkers - ``` A user writes: The [medical droids] are healing the infantry bunker when they are inside....then is impossible to destroy it...I dont know if is it correct that they have to heal the bunkers, but I think thats not necessary. And I think that when you have higher number of repair droids in the bunker then it makes it harder to destroy it. ``` ----- Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 20 Aug 2011 at 4:59
defect
medical droids bunkers a user writes the are healing the infantry bunker when they are inside then is impossible to destroy it i dont know if is it correct that they have to heal the bunkers but i think thats not necessary and i think that when you have higher number of repair droids in the bunker then it makes it harder to destroy it original issue reported on code google com by killerhurdz netscape net on aug at
1
411,800
12,032,205,588
IssuesEvent
2020-04-13 11:32:59
dom96/choosenim
https://api.github.com/repos/dom96/choosenim
reopened
OS error: Permission denied at version `0.5`
Feature High Priority
`Spawning of process failed. (Error was: Additional info: "Could not find command: \'~/.choosenim/toolchains/nim-1.0.4/bin/nimble\'. OS error: Permission denied`
1.0
OS error: Permission denied at version `0.5` - `Spawning of process failed. (Error was: Additional info: "Could not find command: \'~/.choosenim/toolchains/nim-1.0.4/bin/nimble\'. OS error: Permission denied`
non_defect
os error permission denied at version spawning of process failed error was additional info could not find command choosenim toolchains nim bin nimble os error permission denied
0
548,447
16,064,667,348
IssuesEvent
2021-04-23 17:08:36
trailofbits/deepstate
https://api.github.com/repos/trailofbits/deepstate
closed
Fuzzers are slow
HIGH PRIORITY bug fuzzing
Fuzzing deepstate tests is much slower than normally instrumented binaries. ## AFL Control sample (fast.cpp): ```cpp #include <cstdio> #include <deepstate/DeepState.hpp> int main() { char x[100]; scanf("%100s", x); if (x[0] == 'l') if (x[1] == 'u') if (x[2] == 'l') if (x[3] == 'z') { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } return 0; } ``` ```shell afl-clang++ ./fast.cpp -o fast.afl mkdir input && echo X > input/X # same seed as deepstate's default afl-fuzz -i input -o out_fast ./fast.afl # check exec/s ``` Research sample (slow.cpp): ```cpp #include <deepstate/DeepState.hpp> TEST(a,b) { char *x = DeepState_CStr_C(100, 0); if (x[0] == 'l') if (x[1] == 'u') if (x[2] == 'l') if (x[3] == 'z') { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } } ``` ```shell deepstate-afl --compile_test ./slow.cpp --out_test_name slow deepstate-afl -o out_slow ./slow.afl --fuzzer_out # check exec/s ``` ==================== ## libFuzzer Control sample (fast.cpp): ```cpp #include <cinttypes> #include <cstring> extern "C" int LLVMFuzzerTestOneInput(const uint8_t *x, std::size_t Size) { if (Size < 8) return 0; if (strcmp((char*)x, "lalakoko") == 0) { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } return 0; } ``` ```shell clang++ ./fast.cpp -fsanitize=fuzzer,undefined -o fast ./fast # check exec/s ``` Research sample (slow.cpp): ```cpp #include <cstring> #include <deepstate/DeepState.hpp> TEST(A, B) { char *x = DeepState_CStr_C(100, 0); ASSUME_GT(strlen(x), 7); if (strcmp(x, "lalakoko") == 0) { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } } ``` ```shell deepstate-libfuzzer --compile_test ./slow.cpp --out_test_name slow ./slow.libfuzzer # or mkdir out_slow && deepstate-libfuzzer -o out_slow ./slow.libfuzzer --fuzzer_out # check exec/s ``` ==================== AFL control sample: around `4500` exec/s. AFL research sample: around `2500` exec/s. libFuzzer control sample: around `1720000` exec/s (yes, its more than 1.5 million). 
libFuzzer research sample: around `30000` exec/s (yes, its 30 thousand). Possible causes I may think of: * deepstate produces files, which are temporarily saved to disk (not sure about that) * deepstate library itself **is** instrumented, which is most probably excessive * maybe deepstate forks in wrong moment? And fuzzers execute whole `main` function (which is produced by deepstate, is huge and is instrumented).
1.0
Fuzzers are slow - Fuzzing deepstate tests is much slower than normally instrumented binaries. ## AFL Control sample (fast.cpp): ```cpp #include <cstdio> #include <deepstate/DeepState.hpp> int main() { char x[100]; scanf("%100s", x); if (x[0] == 'l') if (x[1] == 'u') if (x[2] == 'l') if (x[3] == 'z') { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } return 0; } ``` ```shell afl-clang++ ./fast.cpp -o fast.afl mkdir input && echo X > input/X # same seed as deepstate's default afl-fuzz -i input -o out_fast ./fast.afl # check exec/s ``` Research sample (slow.cpp): ```cpp #include <deepstate/DeepState.hpp> TEST(a,b) { char *x = DeepState_CStr_C(100, 0); if (x[0] == 'l') if (x[1] == 'u') if (x[2] == 'l') if (x[3] == 'z') { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } } ``` ```shell deepstate-afl --compile_test ./slow.cpp --out_test_name slow deepstate-afl -o out_slow ./slow.afl --fuzzer_out # check exec/s ``` ==================== ## libFuzzer Control sample (fast.cpp): ```cpp #include <cinttypes> #include <cstring> extern "C" int LLVMFuzzerTestOneInput(const uint8_t *x, std::size_t Size) { if (Size < 8) return 0; if (strcmp((char*)x, "lalakoko") == 0) { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } return 0; } ``` ```shell clang++ ./fast.cpp -fsanitize=fuzzer,undefined -o fast ./fast # check exec/s ``` Research sample (slow.cpp): ```cpp #include <cstring> #include <deepstate/DeepState.hpp> TEST(A, B) { char *x = DeepState_CStr_C(100, 0); ASSUME_GT(strlen(x), 7); if (strcmp(x, "lalakoko") == 0) { int *p = (int*)0xcafeeeee; *(p+1) = 0xfafaafaf; } } ``` ```shell deepstate-libfuzzer --compile_test ./slow.cpp --out_test_name slow ./slow.libfuzzer # or mkdir out_slow && deepstate-libfuzzer -o out_slow ./slow.libfuzzer --fuzzer_out # check exec/s ``` ==================== AFL control sample: around `4500` exec/s. AFL research sample: around `2500` exec/s. libFuzzer control sample: around `1720000` exec/s (yes, its more than 1.5 million). 
libFuzzer research sample: around `30000` exec/s (yes, its 30 thousand). Possible causes I may think of: * deepstate produces files, which are temporarily saved to disk (not sure about that) * deepstate library itself **is** instrumented, which is most probably excessive * maybe deepstate forks in wrong moment? And fuzzers execute whole `main` function (which is produced by deepstate, is huge and is instrumented).
non_defect
fuzzers are slow fuzzing deepstate tests is much slower than normally instrumented binaries afl control sample fast cpp cpp include include int main char x scanf x if x l if x u if x l if x z int p int p return shell afl clang fast cpp o fast afl mkdir input echo x input x same seed as deepstate s default afl fuzz i input o out fast fast afl check exec s research sample slow cpp cpp include test a b char x deepstate cstr c if x l if x u if x l if x z int p int p shell deepstate afl compile test slow cpp out test name slow deepstate afl o out slow slow afl fuzzer out check exec s libfuzzer control sample fast cpp cpp include include extern c int llvmfuzzertestoneinput const t x std size t size if size return if strcmp char x lalakoko int p int p return shell clang fast cpp fsanitize fuzzer undefined o fast fast check exec s research sample slow cpp cpp include include test a b char x deepstate cstr c assume gt strlen x if strcmp x lalakoko int p int p shell deepstate libfuzzer compile test slow cpp out test name slow slow libfuzzer or mkdir out slow deepstate libfuzzer o out slow slow libfuzzer fuzzer out check exec s afl control sample around exec s afl research sample around exec s libfuzzer control sample around exec s yes its more than million libfuzzer research sample around exec s yes its thousand possible causes i may think of deepstate produces files which are temporarily saved to disk not sure about that deepstate library itself is instrumented which is most probably excessive maybe deepstate forks in wrong moment and fuzzers execute whole main function which is produced by deepstate is huge and is instrumented
0
52,504
13,224,792,307
IssuesEvent
2020-08-17 19:51:29
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[trac] Can't paste error messages (Trac #2352)
Incomplete Migration Migrated from Trac defect infrastructure
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2352">https://code.icecube.wisc.edu/projects/icecube/ticket/2352</a>, reported by olivas</summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T06:12:03", "_ts": "1568787123679098", "description": "I get the following error when I try to create a ticket with a traceback:\n\nGenshi UnicodeEncodeError error while rendering template (unknown template location)\n\nAwesome.", "reporter": "olivas", "cc": "", "resolution": "worksforme", "time": "2019-09-10T02:35:14", "component": "infrastructure", "summary": "[trac] Can't paste error messages", "priority": "normal", "keywords": "", "milestone": "Long-Term Future", "owner": "", "type": "defect" } ``` </p> </details>
1.0
[trac] Can't paste error messages (Trac #2352) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2352">https://code.icecube.wisc.edu/projects/icecube/ticket/2352</a>, reported by olivas</summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T06:12:03", "_ts": "1568787123679098", "description": "I get the following error when I try to create a ticket with a traceback:\n\nGenshi UnicodeEncodeError error while rendering template (unknown template location)\n\nAwesome.", "reporter": "olivas", "cc": "", "resolution": "worksforme", "time": "2019-09-10T02:35:14", "component": "infrastructure", "summary": "[trac] Can't paste error messages", "priority": "normal", "keywords": "", "milestone": "Long-Term Future", "owner": "", "type": "defect" } ``` </p> </details>
defect
can t paste error messages trac migrated from json status closed changetime ts description i get the following error when i try to create a ticket with a traceback n ngenshi unicodeencodeerror error while rendering template unknown template location n nawesome reporter olivas cc resolution worksforme time component infrastructure summary can t paste error messages priority normal keywords milestone long term future owner type defect
1
56,073
14,919,881,962
IssuesEvent
2021-01-23 01:49:22
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
Trouble Importing with -T option: one or more devices is currently unavailable
Status: Triage Needed Type: Defect
### System information Type | Version/Name --- | --- Distribution Name | CentOS Distribution Version | 6.7 (I know! it's an old system on an inside system not accessible directly) Linux Kernel | 2.6.32-696.1.1 Architecture | x86_64 ZFS Version | 0.6.5.9-1 SPL Version | 0.6.5.9-1 I'm in a similar boat as #7808. Trying to revert back to a previous TXG but getting the message `cannot import pool40: one or more devices is currently unavailable` I destroyed some snapshots a couple of days ago on a backup system and now I realize that I'd like to get them back. I have found a txg from the command: `zdb -hhe pool40` and it shows: ``` ... history zone: 'linux' history who: 0 history time: 1611072080 history hostname: 'nfs4.localdomain' unrecognized record: dsname: 'pool40/home@2020-09-01' dsid: 6654 history internal str: '' internal_name: 'destroy' history txg: **38984040** history time: 1611072081 history hostname: 'nfs4.localdomain' unrecognized record: ioctl: 'destroy_snaps' in_nvl: snaps: pool40/home@2020-09-01 history zone: 'linux' history who: 0 history time: 1611072083 history hostname: 'nfs4.localdomain' 2021-01-19.11:01:23 zfs destroy pool40/home@2020-09-01 history command: 'zfs destroy pool40/home@2020-09-01' ... ``` I tried reverting to it by exporting the pool and then importing with -T for that txg but it gave the dreaded `cannot import pool40: one or more devices is currently unavailable` message. 
So, I wondered about the Uber blocks and tried to list them with ``` [root@nfs4 /]# zdb -e pool40 -ul -------------------------------------------- LABEL 0 -------------------------------------------- failed to read label 0 -------------------------------------------- LABEL 1 -------------------------------------------- failed to read label 1 -------------------------------------------- LABEL 2 -------------------------------------------- failed to read label 2 -------------------------------------------- LABEL 3 -------------------------------------------- failed to read label 3 ``` Then I tried listing them from devices and got better results with four LABEL sections and 32 Uberblocks but they are mostly from today (maybe from exporting and importing?) but there are actually a few from September. Does that give hope that there is a chance I can roll back to a transaction from January 19?: ``` Uberblock[20] magic = 0000000000bab10c version = 5000 txg = 39037716 guid_sum = 8620789707633366463 timestamp = 1611358064 UTC = Fri Jan 22 18:27:44 2021 Uberblock[21] magic = 0000000000bab10c version = 5000 txg = 37162741 guid_sum = 8620789707633366463 timestamp = 1601393839 UTC = Tue Sep 29 11:37:19 2020 Uberblock[22] magic = 0000000000bab10c version = 5000 txg = 37162774 guid_sum = 8620789707633366463 timestamp = 1601394021 UTC = Tue Sep 29 11:40:21 2020 Uberblock[23] magic = 0000000000bab10c version = 5000 txg = 39037655 guid_sum = 8620789707633366463 timestamp = 1611357737 UTC = Fri Jan 22 18:22:17 2021 Uberblock[24] magic = 0000000000bab10c version = 5000 txg = 39037720 guid_sum = 8620789707633366463 timestamp = 1611358085 UTC = Fri Jan 22 18:28:05 2021 Uberblock[25] magic = 0000000000bab10c version = 5000 txg = 39037689 guid_sum = 8620789707633366463 ``` This backup system hasn't been updated in quite a while. As in: 0.6.5.9-1. Would it make sense to update now? Or try to deal with it with the tools that were available then? 
I didn't see any answer to the -T problem so I'm wondering if it has been fixed and maybe I can try a newer version of ZFS to make it work. Thanks for your help.
1.0
Trouble Importing with -T option: one or more devices is currently unavailable - ### System information Type | Version/Name --- | --- Distribution Name | CentOS Distribution Version | 6.7 (I know! it's an old system on an inside system not accessible directly) Linux Kernel | 2.6.32-696.1.1 Architecture | x86_64 ZFS Version | 0.6.5.9-1 SPL Version | 0.6.5.9-1 I'm in a similar boat as #7808. Trying to revert back to a previous TXG but getting the message `cannot import pool40: one or more devices is currently unavailable` I destroyed some snapshots a couple of days ago on a backup system and now I realize that I'd like to get them back. I have found a txg from the command: `zdb -hhe pool40` and it shows: ``` ... history zone: 'linux' history who: 0 history time: 1611072080 history hostname: 'nfs4.localdomain' unrecognized record: dsname: 'pool40/home@2020-09-01' dsid: 6654 history internal str: '' internal_name: 'destroy' history txg: **38984040** history time: 1611072081 history hostname: 'nfs4.localdomain' unrecognized record: ioctl: 'destroy_snaps' in_nvl: snaps: pool40/home@2020-09-01 history zone: 'linux' history who: 0 history time: 1611072083 history hostname: 'nfs4.localdomain' 2021-01-19.11:01:23 zfs destroy pool40/home@2020-09-01 history command: 'zfs destroy pool40/home@2020-09-01' ... ``` I tried reverting to it by exporting the pool and then importing with -T for that txg but it gave the dreaded `cannot import pool40: one or more devices is currently unavailable` message. 
So, I wondered about the Uber blocks and tried to list them with ``` [root@nfs4 /]# zdb -e pool40 -ul -------------------------------------------- LABEL 0 -------------------------------------------- failed to read label 0 -------------------------------------------- LABEL 1 -------------------------------------------- failed to read label 1 -------------------------------------------- LABEL 2 -------------------------------------------- failed to read label 2 -------------------------------------------- LABEL 3 -------------------------------------------- failed to read label 3 ``` Then I tried listing them from devices and got better results with four LABEL sections and 32 Uberblocks but they are mostly from today (maybe from exporting and importing?) but there are actually a few from September. Does that give hope that there is a chance I can roll back to a transaction from January 19?: ``` Uberblock[20] magic = 0000000000bab10c version = 5000 txg = 39037716 guid_sum = 8620789707633366463 timestamp = 1611358064 UTC = Fri Jan 22 18:27:44 2021 Uberblock[21] magic = 0000000000bab10c version = 5000 txg = 37162741 guid_sum = 8620789707633366463 timestamp = 1601393839 UTC = Tue Sep 29 11:37:19 2020 Uberblock[22] magic = 0000000000bab10c version = 5000 txg = 37162774 guid_sum = 8620789707633366463 timestamp = 1601394021 UTC = Tue Sep 29 11:40:21 2020 Uberblock[23] magic = 0000000000bab10c version = 5000 txg = 39037655 guid_sum = 8620789707633366463 timestamp = 1611357737 UTC = Fri Jan 22 18:22:17 2021 Uberblock[24] magic = 0000000000bab10c version = 5000 txg = 39037720 guid_sum = 8620789707633366463 timestamp = 1611358085 UTC = Fri Jan 22 18:28:05 2021 Uberblock[25] magic = 0000000000bab10c version = 5000 txg = 39037689 guid_sum = 8620789707633366463 ``` This backup system hasn't been updated in quite a while. As in: 0.6.5.9-1. Would it make sense to update now? Or try to deal with it with the tools that were available then? 
I didn't see any answer to the -T problem so I'm wondering if it has been fixed and maybe I can try a newer version of ZFS to make it work. Thanks for your help.
defect
trouble importing with t option one or more devices is currently unavailable system information type version name distribution name centos distribution version i know it s an old system on an inside system not accessible directly linux kernel architecture zfs version spl version i m in a similar boat as trying to revert back to a previous txg but getting the message cannot import one or more devices is currently unavailable i destroyed some snapshots a couple of days ago on a backup system and now i realize that i d like to get them back i have found a txg from the command zdb hhe and it shows history zone linux history who history time history hostname localdomain unrecognized record dsname home dsid history internal str internal name destroy history txg history time history hostname localdomain unrecognized record ioctl destroy snaps in nvl snaps home history zone linux history who history time history hostname localdomain zfs destroy home history command zfs destroy home i tried reverting to it by exporting the pool and then importing with t for that txg but it gave the dreaded cannot import one or more devices is currently unavailable message so i wondered about the uber blocks and tried to list them with zdb e ul label failed to read label label failed to read label label failed to read label label failed to read label then i tried listing them from devices and got better results with four label sections and uberblocks but they are mostly from today maybe from exporting and importing but there are actually a few from september does that give hope that there is a chance i can roll back to a transaction from january uberblock magic version txg guid sum timestamp utc fri jan uberblock magic version txg guid sum timestamp utc tue sep uberblock magic version txg guid sum timestamp utc tue sep uberblock magic version txg guid sum timestamp utc fri jan uberblock magic version txg guid sum timestamp utc fri jan uberblock magic version txg guid sum this backup system 
hasn t been updated in quite a while as in would it make sense to update now or try to deal with it with the tools that were available then i didn t see any answer to the t problem so i m wondering if it has been fixed and maybe i can try a newer version of zfs to make it work thanks for your help
1
28,086
5,185,360,462
IssuesEvent
2017-01-20 10:10:05
extnet/Ext.NET
https://api.github.com/repos/extnet/Ext.NET
closed
Chart Label: Default display value of 'None' prevents hiding labels
4.x breaking-change defect
Ext.NET has the default value for Label's `Display` property as `None`. When defining a label on a chart's series, it is by default displayed instead, so anytime an user specifies a series' label and set its display status to `None`, Ext.NET will not output the setting (as assuming its default value) and the label will not be applied. This issue is related to #1379 and is logged here to avoid forgetting to fix this issue when ExtJS 6.0.3 or newer is merged to Ext.NET.
1.0
Chart Label: Default display value of 'None' prevents hiding labels - Ext.NET has the default value for Label's `Display` property as `None`. When defining a label on a chart's series, it is by default displayed instead, so anytime an user specifies a series' label and set its display status to `None`, Ext.NET will not output the setting (as assuming its default value) and the label will not be applied. This issue is related to #1379 and is logged here to avoid forgetting to fix this issue when ExtJS 6.0.3 or newer is merged to Ext.NET.
defect
chart label default display value of none prevents hiding labels ext net has the default value for label s display property as none when defining a label on a chart s series it is by default displayed instead so anytime an user specifies a series label and set its display status to none ext net will not output the setting as assuming its default value and the label will not be applied this issue is related to and is logged here to avoid forgetting to fix this issue when extjs or newer is merged to ext net
1
22,421
3,645,250,226
IssuesEvent
2016-02-15 13:51:37
GoldenSoftwareLtd/gedemin
https://api.github.com/repos/GoldenSoftwareLtd/gedemin
closed
Авансовый отчет
Bank Gorodok Priority-High Type-Defect
Originally reported on Google Code with ID 1383 ``` Не возможно правильно оформить авансовый отчет так как негде ввести сумму полученную в кассе наличными. ``` Reported by `PogoSG` on 2009-05-26 12:18:26
1.0
Авансовый отчет - Originally reported on Google Code with ID 1383 ``` Не возможно правильно оформить авансовый отчет так как негде ввести сумму полученную в кассе наличными. ``` Reported by `PogoSG` on 2009-05-26 12:18:26
defect
авансовый отчет originally reported on google code with id не возможно правильно оформить авансовый отчет так как негде ввести сумму полученную в кассе наличными reported by pogosg on
1
19,339
10,373,382,073
IssuesEvent
2019-09-09 07:06:32
widelands/widelands-issue-migration2
https://api.github.com/repos/widelands/widelands-issue-migration2
opened
Improve routing for wares to perform better on large networks
Confirmed Medium performance
Here are the valgrind profiling file that I spoke about in https://wl.widelands.org/forum/topic/1464/?page=1#post-10721 For the record, the AI logic is disabled from this study (through replaying a previous game), and the UI is disabled too (through SDL_VIDEODRIVER=dummy). The result is that the routing computations (Transfere::next_step or Economy::find_best_supply) cost between 1/3 and 1/2 of all instructions during the replay. That may be related to the fact that it was a 512x512 board. So I was wrong, the most urgent optimization target is not the future event set ;) I will investiguate further.
True
Improve routing for wares to perform better on large networks - Here are the valgrind profiling file that I spoke about in https://wl.widelands.org/forum/topic/1464/?page=1#post-10721 For the record, the AI logic is disabled from this study (through replaying a previous game), and the UI is disabled too (through SDL_VIDEODRIVER=dummy). The result is that the routing computations (Transfere::next_step or Economy::find_best_supply) cost between 1/3 and 1/2 of all instructions during the replay. That may be related to the fact that it was a 512x512 board. So I was wrong, the most urgent optimization target is not the future event set ;) I will investiguate further.
non_defect
improve routing for wares to perform better on large networks here are the valgrind profiling file that i spoke about in for the record the ai logic is disabled from this study through replaying a previous game and the ui is disabled too through sdl videodriver dummy the result is that the routing computations transfere next step or economy find best supply cost between and of all instructions during the replay that may be related to the fact that it was a board so i was wrong the most urgent optimization target is not the future event set i will investiguate further
0
6,950
2,610,319,125
IssuesEvent
2015-02-26 19:43:00
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Text
auto-migrated Priority-Medium Type-Defect
``` * Income increase upgrade for Space Station (Republic, Skirmish, Geonosis) missing description * Prototype Designs 2 upgrade missing text (Space Skirmish, Republic) * CIS Proton Bomb 2 upgrade text missing (CIS, Space, Korriban, Outer Rim Sieges) ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 May 2011 at 9:49
1.0
Text - ``` * Income increase upgrade for Space Station (Republic, Skirmish, Geonosis) missing description * Prototype Designs 2 upgrade missing text (Space Skirmish, Republic) * CIS Proton Bomb 2 upgrade text missing (CIS, Space, Korriban, Outer Rim Sieges) ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 May 2011 at 9:49
defect
text income increase upgrade for space station republic skirmish geonosis missing description prototype designs upgrade missing text space skirmish republic cis proton bomb upgrade text missing cis space korriban outer rim sieges original issue reported on code google com by gmail com on may at
1
10,738
2,622,182,695
IssuesEvent
2015-03-04 00:19:30
byzhang/leveldb
https://api.github.com/repos/byzhang/leveldb
opened
Db::Open() takes a long time
auto-migrated Priority-Medium Type-Defect
``` We have a heavily used database, which stores 100G+ data, and is frequently updated. When we close and reopen the database, it takes up to hours to open it. Is it possible to config(Options) to make Db::Open() returns quickly? What steps will reproduce the problem? 1. Open a database 2. Doing a lot Sets/Deletes... 3. Close data 4. Reopen it(Db::Open()) What is the expected output? What do you see instead? Expecting Db::Open() returns within one second. But it blocks for an hour. What version of the product are you using? On what operating system? 1.14 Please provide any additional information below. ``` Original issue reported on code.google.com by `wuzuy...@gmail.com` on 8 Dec 2013 at 2:51
1.0
Db::Open() takes a long time - ``` We have a heavily used database, which stores 100G+ data, and is frequently updated. When we close and reopen the database, it takes up to hours to open it. Is it possible to config(Options) to make Db::Open() returns quickly? What steps will reproduce the problem? 1. Open a database 2. Doing a lot Sets/Deletes... 3. Close data 4. Reopen it(Db::Open()) What is the expected output? What do you see instead? Expecting Db::Open() returns within one second. But it blocks for an hour. What version of the product are you using? On what operating system? 1.14 Please provide any additional information below. ``` Original issue reported on code.google.com by `wuzuy...@gmail.com` on 8 Dec 2013 at 2:51
defect
db open takes a long time we have a heavily used database which stores data and is frequently updated when we close and reopen the database it takes up to hours to open it is it possible to config options to make db open returns quickly what steps will reproduce the problem open a database doing a lot sets deletes close data reopen it db open what is the expected output what do you see instead expecting db open returns within one second but it blocks for an hour what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by wuzuy gmail com on dec at
1
35,376
7,722,563,235
IssuesEvent
2018-05-24 09:37:33
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
MimeType for .bmp is missing
Defect
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.6.4 * Platform and Target: Apache, MariaDB ### What you did Create a file download of an ```.bmp``` file: ```php $response = $this->response->withFile('path/file.bmp', [ 'download' => true, 'name' => 'file.bmp', ]); ``` ### What happened File is downloaded as ```Content-Type: text/html; charset=UTF-8``` ### What you expected to happen File should be downloaded with ```image/bmp```. ### Possible fix Add Mimetype for bmp files in https://github.com/cakephp/cakephp/blob/master/src/Http/Response.php#L120
1.0
MimeType for .bmp is missing - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.6.4 * Platform and Target: Apache, MariaDB ### What you did Create a file download of an ```.bmp``` file: ```php $response = $this->response->withFile('path/file.bmp', [ 'download' => true, 'name' => 'file.bmp', ]); ``` ### What happened File is downloaded as ```Content-Type: text/html; charset=UTF-8``` ### What you expected to happen File should be downloaded with ```image/bmp```. ### Possible fix Add Mimetype for bmp files in https://github.com/cakephp/cakephp/blob/master/src/Http/Response.php#L120
defect
mimetype for bmp is missing this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target apache mariadb what you did create a file download of an bmp file php response this response withfile path file bmp download true name file bmp what happened file is downloaded as content type text html charset utf what you expected to happen file should be downloaded with image bmp possible fix add mimetype for bmp files in
1
41,246
10,343,466,313
IssuesEvent
2019-09-04 09:01:01
carbon-design-system/ibm-security
https://api.github.com/repos/carbon-design-system/ibm-security
closed
Header tests generate failed prop type errors
Defect severity 4
## Bug - Header tests generate failed prop type errors **Expected behavior -** Tests do not generate a prop type error. **Actual behavior -** Header tests generate the following error: ``` console.error node_modules/prop-types/checkPropTypes.js:20 Warning: Failed prop type: Invalid prop `accounts[0].id` of type `number` supplied to `Header`, expected `string`. in Header ``` And related: ``` console.error node_modules/prop-types/checkPropTypes.js:20 Warning: Failed prop type: Invalid prop `profile.account.id` of type `number` supplied to `Header`, expected `string`. in Header ``` ### Steps for reproducing `yarn test` or review CircleCI logs from recent builds.
1.0
Header tests generate failed prop type errors - ## Bug - Header tests generate failed prop type errors **Expected behavior -** Tests do not generate a prop type error. **Actual behavior -** Header tests generate the following error: ``` console.error node_modules/prop-types/checkPropTypes.js:20 Warning: Failed prop type: Invalid prop `accounts[0].id` of type `number` supplied to `Header`, expected `string`. in Header ``` And related: ``` console.error node_modules/prop-types/checkPropTypes.js:20 Warning: Failed prop type: Invalid prop `profile.account.id` of type `number` supplied to `Header`, expected `string`. in Header ``` ### Steps for reproducing `yarn test` or review CircleCI logs from recent builds.
defect
header tests generate failed prop type errors bug header tests generate failed prop type errors expected behavior tests do not generate a prop type error actual behavior header tests generate the following error console error node modules prop types checkproptypes js warning failed prop type invalid prop accounts id of type number supplied to header expected string in header and related console error node modules prop types checkproptypes js warning failed prop type invalid prop profile account id of type number supplied to header expected string in header steps for reproducing yarn test or review circleci logs from recent builds
1
17,317
2,998,309,290
IssuesEvent
2015-07-23 13:29:07
gtnx/fb2psql
https://api.github.com/repos/gtnx/fb2psql
closed
Перенос foreign key
auto-migrated Priority-Low Type-Defect
``` 1. Разобраться как хранятся в fb 2. Разобраться как добавить в psql ``` Original issue reported on code.google.com by `vitaliy.chekhunov` on 9 Feb 2012 at 6:39
1.0
Перенос foreign key - ``` 1. Разобраться как хранятся в fb 2. Разобраться как добавить в psql ``` Original issue reported on code.google.com by `vitaliy.chekhunov` on 9 Feb 2012 at 6:39
defect
перенос foreign key разобраться как хранятся в fb разобраться как добавить в psql original issue reported on code google com by vitaliy chekhunov on feb at
1
34,332
7,447,011,375
IssuesEvent
2018-03-28 11:01:15
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
Isikud ja organisatsioonid: lehitsemise pisifix
P: high R: fixed T: defect
**Reported by sven syld on 6 Aug 2012 13:59 UTC** '''Object''' Isikud-organisatsioonid: lehitsemine '''!ToDo''' Organisatsiooni radio button-i kõrval on lahter -> üleval tuleb panna lahtri nimetus "Nimi" (vt., lk. 57)
1.0
Isikud ja organisatsioonid: lehitsemise pisifix - **Reported by sven syld on 6 Aug 2012 13:59 UTC** '''Object''' Isikud-organisatsioonid: lehitsemine '''!ToDo''' Organisatsiooni radio button-i kõrval on lahter -> üleval tuleb panna lahtri nimetus "Nimi" (vt., lk. 57)
defect
isikud ja organisatsioonid lehitsemise pisifix reported by sven syld on aug utc object isikud organisatsioonid lehitsemine todo organisatsiooni radio button i kõrval on lahter üleval tuleb panna lahtri nimetus nimi vt lk
1
41,757
10,593,073,767
IssuesEvent
2019-10-09 14:15:05
octavian-paraschiv/protone-suite
https://api.github.com/repos/octavian-paraschiv/protone-suite
closed
Wrong cache folder locations
Category-Suite OS-All Priority-P2 ReportSource-DevQA Type-Defect
dzrcache and imgcache folders should be subfolders of PathUtils.ProgramDataDir, not PathUtils.LocalAppDataFolder.
1.0
Wrong cache folder locations - dzrcache and imgcache folders should be subfolders of PathUtils.ProgramDataDir, not PathUtils.LocalAppDataFolder.
defect
wrong cache folder locations dzrcache and imgcache folders should be subfolders of pathutils programdatadir not pathutils localappdatafolder
1
50,831
26,804,056,585
IssuesEvent
2023-02-01 16:58:33
scylladb/scylladb
https://api.github.com/repos/scylladb/scylladb
closed
Latency degradation on read workload during decommission
performance latency Master/Triage waiting-reproduction/QA
This is Scylla's bug tracker, to be used for reporting bugs only. If you have a question about Scylla, and not a bug, please ask it in our mailing-list at scylladb-dev@googlegroups.com or in our slack channel. - [] I have read the disclaimer above, and I am reporting a suspected malfunction in Scylla. *Installation details* Scylla version (or git commit hash): 5.2.0~dev.20221219.3e6ddf21bc0f with build-id 4b2025074efc29f0de734ae31e056c6ac1563dea Cluster size: 3 (but the decommission happens in 3 nodes that were added in the operation before that, so there are 3 decommission in a row) OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-0c7844f06f7081bb3 (eu-west-1) - i3.2xlarge during this test we have "only" 65 reactor stalls, but the decommission degradation is twice than last good known one (the average, because it was `5.75 ms`, `7.11 ms` and `8 ms` and now it is `8.8 ms`, `13.09 ms` and `13.63 ms`) the last good one ran `5.2.0~dev.20220921.1ba78ac35b3e with build-id e8b404519bb0cdfa562e6ad63954cc56e52333b4` and has `test_id: e5d019b7-265f-454b-9ede-8c13a067565d`, and had 488 reactor stalls, meaning that maybe the problem here is not related to reactor stalls last good one: ![e5d019b7-265f-454b-9ede-8c13a067565d](https://user-images.githubusercontent.com/32832995/209699469-8a38be0f-a764-42ac-bc38-4b70b8c214d1.png) last run: ![30107bfb-509b-458c-8d5e-335ecf492de2](https://user-images.githubusercontent.com/32832995/209700053-1a1e7208-fb0b-4756-aeab-708e9ed32a50.png) and searching for the first degradation during decommission for this workload we have: `5.2.0~dev.20220929.c194c811df01 with build-id 3e922fa2f3dd7546827b4bb02ec4cebd184e062b` with `test_id: 5bbe03b4-09be-4cea-a32b-d393f80dd7fa` (it had 172 reactor stalls): ![5bbe03b4-09be-4cea-a32b-d393f80dd7fa](https://user-images.githubusercontent.com/32832995/209700121-dfd95a2c-6f83-4725-be2a-71118ddceb6f.png) we also have a degradation, introduce on the same build (also for decommission) on mixed workload, reported here --> #12239 and it may be related
True
Latency degradation on read workload during decommission - This is Scylla's bug tracker, to be used for reporting bugs only. If you have a question about Scylla, and not a bug, please ask it in our mailing-list at scylladb-dev@googlegroups.com or in our slack channel. - [] I have read the disclaimer above, and I am reporting a suspected malfunction in Scylla. *Installation details* Scylla version (or git commit hash): 5.2.0~dev.20221219.3e6ddf21bc0f with build-id 4b2025074efc29f0de734ae31e056c6ac1563dea Cluster size: 3 (but the decommission happens in 3 nodes that were added in the operation before that, so there are 3 decommission in a row) OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-0c7844f06f7081bb3 (eu-west-1) - i3.2xlarge during this test we have "only" 65 reactor stalls, but the decommission degradation is twice than last good known one (the average, because it was `5.75 ms`, `7.11 ms` and `8 ms` and now it is `8.8 ms`, `13.09 ms` and `13.63 ms`) the last good one ran `5.2.0~dev.20220921.1ba78ac35b3e with build-id e8b404519bb0cdfa562e6ad63954cc56e52333b4` and has `test_id: e5d019b7-265f-454b-9ede-8c13a067565d`, and had 488 reactor stalls, meaning that maybe the problem here is not related to reactor stalls last good one: ![e5d019b7-265f-454b-9ede-8c13a067565d](https://user-images.githubusercontent.com/32832995/209699469-8a38be0f-a764-42ac-bc38-4b70b8c214d1.png) last run: ![30107bfb-509b-458c-8d5e-335ecf492de2](https://user-images.githubusercontent.com/32832995/209700053-1a1e7208-fb0b-4756-aeab-708e9ed32a50.png) and searching for the first degradation during decommission for this workload we have: `5.2.0~dev.20220929.c194c811df01 with build-id 3e922fa2f3dd7546827b4bb02ec4cebd184e062b` with `test_id: 5bbe03b4-09be-4cea-a32b-d393f80dd7fa` (it had 172 reactor stalls): ![5bbe03b4-09be-4cea-a32b-d393f80dd7fa](https://user-images.githubusercontent.com/32832995/209700121-dfd95a2c-6f83-4725-be2a-71118ddceb6f.png) we also have a degradation, introduce on the same build (also for decommission) on mixed workload, reported here --> #12239 and it may be related
non_defect
latency degradation on read workload during decommission this is scylla s bug tracker to be used for reporting bugs only if you have a question about scylla and not a bug please ask it in our mailing list at scylladb dev googlegroups com or in our slack channel i have read the disclaimer above and i am reporting a suspected malfunction in scylla installation details scylla version or git commit hash dev with build id cluster size but the decommission happens in nodes that were added in the operation before that so there are decommission in a row os rhel centos ubuntu aws ami ami eu west during this test we have only reactor stalls but the decommission degradation is twice than last good known one the average because it was ms ms and ms and now it is ms ms and ms the last good one ran dev with build id and has test id and had reactor stalls meaning that maybe the problem here is not related to reactor stalls last good one last run and searching for the first degradation during decommission for this workload we have dev with build id with test id it had reactor stalls we also have a degradation introduce on the same build also for decommission on mixed workload reported here and it may be related
0
42,940
12,965,138,472
IssuesEvent
2020-07-20 21:45:42
jtimberlake/griffin
https://api.github.com/repos/jtimberlake/griffin
opened
WS-2019-0231 (Medium) detected in adm-zip-0.4.4.tgz
security vulnerability
## WS-2019-0231 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p></summary> <p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p> <p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/griffin/ui/angular/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/griffin/ui/angular/node_modules/webdriver-js-extender/node_modules/adm-zip/package.json</p> <p> Dependency Hierarchy: - protractor-5.1.2.tgz (Root Library) - webdriver-js-extender-1.0.0.tgz - selenium-webdriver-2.53.3.tgz - :x: **adm-zip-0.4.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jtimberlake/griffin/commit/7b8d4cb53c4eab239eecb18da5b2a6048b2fce60">7b8d4cb53c4eab239eecb18da5b2a6048b2fce60</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames <p>Publish Date: 2018-04-22 <p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p> <p>Release Date: 2019-09-09</p> <p>Fix Resolution: 0.4.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"adm-zip","packageVersion":"0.4.4","isTransitiveDependency":true,"dependencyTree":"protractor:5.1.2;webdriver-js-extender:1.0.0;selenium-webdriver:2.53.3;adm-zip:0.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.4.9"}],"vulnerabilityIdentifier":"WS-2019-0231","vulnerabilityDetails":"adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames","vulnerabilityUrl":"https://hackerone.com/reports/362118","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
True
WS-2019-0231 (Medium) detected in adm-zip-0.4.4.tgz - ## WS-2019-0231 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p></summary> <p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p> <p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/griffin/ui/angular/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/griffin/ui/angular/node_modules/webdriver-js-extender/node_modules/adm-zip/package.json</p> <p> Dependency Hierarchy: - protractor-5.1.2.tgz (Root Library) - webdriver-js-extender-1.0.0.tgz - selenium-webdriver-2.53.3.tgz - :x: **adm-zip-0.4.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jtimberlake/griffin/commit/7b8d4cb53c4eab239eecb18da5b2a6048b2fce60">7b8d4cb53c4eab239eecb18da5b2a6048b2fce60</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames <p>Publish Date: 2018-04-22 <p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p> <p>Release Date: 2019-09-09</p> <p>Fix Resolution: 0.4.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"adm-zip","packageVersion":"0.4.4","isTransitiveDependency":true,"dependencyTree":"protractor:5.1.2;webdriver-js-extender:1.0.0;selenium-webdriver:2.53.3;adm-zip:0.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.4.9"}],"vulnerabilityIdentifier":"WS-2019-0231","vulnerabilityDetails":"adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames","vulnerabilityUrl":"https://hackerone.com/reports/362118","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> -->
non_defect
ws medium detected in adm zip tgz ws medium severity vulnerability vulnerable library adm zip tgz a javascript implementation of zip for nodejs allows user to create or extract zip files both in memory or to from disk library home page a href path to dependency file tmp ws scm griffin ui angular package json path to vulnerable library tmp ws scm griffin ui angular node modules webdriver js extender node modules adm zip package json dependency hierarchy protractor tgz root library webdriver js extender tgz selenium webdriver tgz x adm zip tgz vulnerable library found in head commit a href vulnerability details adm zip versions before are vulnerable to arbitrary file write due to extraction of a specifically crafted archive that contains path traversal filenames publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails adm zip versions before are vulnerable to arbitrary file write due to extraction of a specifically crafted archive that contains path traversal filenames vulnerabilityurl
0
118,620
11,984,660,648
IssuesEvent
2020-04-07 16:11:48
jeremydmoore/coding4ch
https://api.github.com/repos/jeremydmoore/coding4ch
opened
Install instructions for macOS Mojave
documentation
We need completed instructions for setting up an initial development environment for macOS Mojave
1.0
Install instructions for macOS Mojave - We need completed instructions for setting up an initial development environment for macOS Mojave
non_defect
install instructions for macos mojave we need completed instructions for setting up an initial development environment for macos mojave
0
96,956
12,194,322,986
IssuesEvent
2020-04-29 15:38:30
liqd/a4-opin
https://api.github.com/repos/liqd/a4-opin
closed
Link to organisation page in dashboard
Dev: Needs design Prio: Medium
**URL:** any dashboard page **expected behaviour:** that there is a link in the dashboard to the organisation home page **behaviour:** we could use the name of the organistaion to link to home page as already has hover behaviour or we add a view button on org settings page like we do with projects **Comment/Question:** frustrating when updating a project to not be able to click and see the updates
1.0
Link to organisation page in dashboard - **URL:** any dashboard page **expected behaviour:** that there is a link in the dashboard to the organisation home page **behaviour:** we could use the name of the organistaion to link to home page as already has hover behaviour or we add a view button on org settings page like we do with projects **Comment/Question:** frustrating when updating a project to not be able to click and see the updates
non_defect
link to organisation page in dashboard url any dashboard page expected behaviour that there is a link in the dashboard to the organisation home page behaviour we could use the name of the organistaion to link to home page as already has hover behaviour or we add a view button on org settings page like we do with projects comment question frustrating when updating a project to not be able to click and see the updates
0
202,717
15,296,716,389
IssuesEvent
2021-02-24 07:19:03
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: sqlsmith/setup=seed/setting=no-mutations failed
C-test-failure O-roachtest O-robot branch-master release-blocker
[(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=buildLog) on [master@ec011620c7cf299fdbb898db692b36454defc4a2](https://github.com/cockroachdb/cockroach/commits/ec011620c7cf299fdbb898db692b36454defc4a2): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1 sqlsmith.go:198,sqlsmith.go:228,test_runner.go:767: error: pq: internal error: crdb_internal.complete_stream_ingestion_job(): get-stream-ingestion-job-metadata: UpdateDeadlineMaybe() called on leaf txn stmt: WITH with_4711 (col_27818) AS ( SELECT * FROM ( VALUES ((-8311897396447942706):::INT8), (8370906667990257790:::INT8), ((-4911692604465806628):::INT8), (4868999840479281033:::INT8), ((-7636973884350956732):::INT8) ) AS tab_11473 (col_27818) ) SELECT (('13:18:19.26659+12:49:00':::TIMETZ::TIMETZ + '1979-05-28':::DATE::DATE)::TIMESTAMPTZ::TIMESTAMPTZ - tab_11474._interval::INTERVAL)::TIMESTAMPTZ AS col_27819, crdb_internal.complete_stream_ingestion_job(tab_11474._int2::INT8, tab_11474._timestamptz::TIMESTAMPTZ)::INT8 AS col_27820, e'\U00002603':::STRING AS col_27821, tab_11474._date AS col_27822 FROM defaultdb.public.seed@seed__int8__float8__date_idx AS tab_11474; ``` <details><summary>More</summary><p> Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: sqlsmith/setup=seed/setting=no-mutations failed - [(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=buildLog) on [master@ec011620c7cf299fdbb898db692b36454defc4a2](https://github.com/cockroachdb/cockroach/commits/ec011620c7cf299fdbb898db692b36454defc4a2): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1 sqlsmith.go:198,sqlsmith.go:228,test_runner.go:767: error: pq: internal error: crdb_internal.complete_stream_ingestion_job(): get-stream-ingestion-job-metadata: UpdateDeadlineMaybe() called on leaf txn stmt: WITH with_4711 (col_27818) AS ( SELECT * FROM ( VALUES ((-8311897396447942706):::INT8), (8370906667990257790:::INT8), ((-4911692604465806628):::INT8), (4868999840479281033:::INT8), ((-7636973884350956732):::INT8) ) AS tab_11473 (col_27818) ) SELECT (('13:18:19.26659+12:49:00':::TIMETZ::TIMETZ + '1979-05-28':::DATE::DATE)::TIMESTAMPTZ::TIMESTAMPTZ - tab_11474._interval::INTERVAL)::TIMESTAMPTZ AS col_27819, crdb_internal.complete_stream_ingestion_job(tab_11474._int2::INT8, tab_11474._timestamptz::TIMESTAMPTZ)::INT8 AS col_27820, e'\U00002603':::STRING AS col_27821, tab_11474._date AS col_27822 FROM defaultdb.public.seed@seed__int8__float8__date_idx AS tab_11474; ``` <details><summary>More</summary><p> Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2712399&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_defect
roachtest sqlsmith setup seed setting no mutations failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup seed setting no mutations run sqlsmith go sqlsmith go test runner go error pq internal error crdb internal complete stream ingestion job get stream ingestion job metadata updatedeadlinemaybe called on leaf txn stmt with with col as select from values as tab col select timetz timetz date date timestamptz timestamptz tab interval interval timestamptz as col crdb internal complete stream ingestion job tab tab timestamptz timestamptz as col e string as col tab date as col from defaultdb public seed seed date idx as tab more artifacts powered by
0
47,483
13,056,204,915
IssuesEvent
2020-07-30 03:59:15
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
boost port needs to fail, if it can't find python "devel" parts (Trac #625)
Migrated from Trac defect tools/ports
Migrated from https://code.icecube.wisc.edu/ticket/625 ```json { "status": "closed", "changetime": "2014-10-22T17:41:41", "description": "", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1413999701734819", "component": "tools/ports", "summary": "boost port needs to fail, if it can't find python \"devel\" parts", "priority": "minor", "keywords": "", "time": "2011-04-28T20:34:33", "milestone": "", "owner": "nega", "type": "defect" } ```
1.0
boost port needs to fail, if it can't find python "devel" parts (Trac #625) - Migrated from https://code.icecube.wisc.edu/ticket/625 ```json { "status": "closed", "changetime": "2014-10-22T17:41:41", "description": "", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1413999701734819", "component": "tools/ports", "summary": "boost port needs to fail, if it can't find python \"devel\" parts", "priority": "minor", "keywords": "", "time": "2011-04-28T20:34:33", "milestone": "", "owner": "nega", "type": "defect" } ```
defect
boost port needs to fail if it can t find python devel parts trac migrated from json status closed changetime description reporter nega cc resolution wontfix ts component tools ports summary boost port needs to fail if it can t find python devel parts priority minor keywords time milestone owner nega type defect
1
776,435
27,260,325,923
IssuesEvent
2023-02-22 14:29:08
GoogleCloudPlatform/golang-samples
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
closed
compute/instances/create-start-instance: TestComputeCreateInstanceFromSnapshotSnippets failed
type: bug priority: p1 api: compute samples flakybot: issue flakybot: flaky
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: e888c56cb843f475db4f79b391be999518e63db4 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/172a4863-d9b6-4384-bfe3-0d98afa4f069), [Sponge](http://sponge2/172a4863-d9b6-4384-bfe3-0d98afa4f069) status: failed <details><summary>Test output</summary><br><pre> create_instance_from_snapshot_test.go:136: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-1567092704765984485' was not found create_instance_from_snapshot_test.go:140: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-5615385495034323152' was not found create_instance_from_snapshot_test.go:153: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-1567092704765984485' was not found</pre></details>
1.0
compute/instances/create-start-instance: TestComputeCreateInstanceFromSnapshotSnippets failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: e888c56cb843f475db4f79b391be999518e63db4 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/172a4863-d9b6-4384-bfe3-0d98afa4f069), [Sponge](http://sponge2/172a4863-d9b6-4384-bfe3-0d98afa4f069) status: failed <details><summary>Test output</summary><br><pre> create_instance_from_snapshot_test.go:136: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-1567092704765984485' was not found create_instance_from_snapshot_test.go:140: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-5615385495034323152' was not found create_instance_from_snapshot_test.go:153: deleteInstance got err: googleapi: Error 404: The resource 'projects/golang-samples-tests-6/zones/europe-central2-b/instances/test-1567092704765984485' was not found</pre></details>
non_defect
compute instances create start instance testcomputecreateinstancefromsnapshotsnippets failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output create instance from snapshot test go deleteinstance got err googleapi error the resource projects golang samples tests zones europe b instances test was not found create instance from snapshot test go deleteinstance got err googleapi error the resource projects golang samples tests zones europe b instances test was not found create instance from snapshot test go deleteinstance got err googleapi error the resource projects golang samples tests zones europe b instances test was not found
0
48,891
13,184,768,060
IssuesEvent
2020-08-12 20:03:29
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
photonics-service tests fail on all version of OSX (Trac #391)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/391 , reported by nega and owned by nega_</summary> <p> ```json { "status": "closed", "changetime": "2012-05-14T20:26:25", "description": "Tests fail w/ the following:\n\n{{{\n[badtz:~/i3/icerec/build] ./env-shell.sh bin/photonics-service-test -a\nLogging configured from file ./log4cplus.conf\nRunning all tests:\nI3PhotoSplineTest.cxx...\n SamplingAtProblematicCoordinates............................ ok\nIceTrayTest.cxx...\n cascades....................................................Photonics Level1: Version '1.70: pyrosoma r1', verbosity 0 \nPhotonics Level1: Error: This seems to be a level2 driver file: '/Users/nega/i3/icerec/build/photonics-service/resources/tables/level1_shower.list'\n}}}\n\nThis is because the test is calling functions from the photonics libs and not their \"dummy\" equivalents in photonics-service.\n\nThis is not:\n * a photonics issue (reverting produced the same result)\n * a photonics-service issue (reverting produced the same result)\n * an Xcode issue (multiple versions of Xcode tested)\n * an OS X version issue (Leopard, Snow Leopard, and Lion all fail)\n\nThis is ''possibly'' related to the removal of \"flat namespaces\" from OS X linking, in order to address link issues w/ python on Lion. I haven't tested this.\n", "reporter": "nega", "cc": "nwhitehorn jvansanten olivas", "resolution": "fixed", "_ts": "1337027185000000", "component": "combo reconstruction", "summary": "photonics-service tests fail on all version of OSX", "priority": "normal", "keywords": "osx photonics-service photonics linking", "time": "2012-05-11T23:10:17", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
photonics-service tests fail on all version of OSX (Trac #391) - <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/391 , reported by nega and owned by nega_</summary> <p> ```json { "status": "closed", "changetime": "2012-05-14T20:26:25", "description": "Tests fail w/ the following:\n\n{{{\n[badtz:~/i3/icerec/build] ./env-shell.sh bin/photonics-service-test -a\nLogging configured from file ./log4cplus.conf\nRunning all tests:\nI3PhotoSplineTest.cxx...\n SamplingAtProblematicCoordinates............................ ok\nIceTrayTest.cxx...\n cascades....................................................Photonics Level1: Version '1.70: pyrosoma r1', verbosity 0 \nPhotonics Level1: Error: This seems to be a level2 driver file: '/Users/nega/i3/icerec/build/photonics-service/resources/tables/level1_shower.list'\n}}}\n\nThis is because the test is calling functions from the photonics libs and not their \"dummy\" equivalents in photonics-service.\n\nThis is not:\n * a photonics issue (reverting produced the same result)\n * a photonics-service issue (reverting produced the same result)\n * an Xcode issue (multiple versions of Xcode tested)\n * an OS X version issue (Leopard, Snow Leopard, and Lion all fail)\n\nThis is ''possibly'' related to the removal of \"flat namespaces\" from OS X linking, in order to address link issues w/ python on Lion. I haven't tested this.\n", "reporter": "nega", "cc": "nwhitehorn jvansanten olivas", "resolution": "fixed", "_ts": "1337027185000000", "component": "combo reconstruction", "summary": "photonics-service tests fail on all version of OSX", "priority": "normal", "keywords": "osx photonics-service photonics linking", "time": "2012-05-11T23:10:17", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
photonics service tests fail on all version of osx trac migrated from reported by nega and owned by nega json status closed changetime description tests fail w the following n n n env shell sh bin photonics service test a nlogging configured from file conf nrunning all tests cxx n samplingatproblematiccoordinates ok nicetraytest cxx n cascades photonics version pyrosoma verbosity nphotonics error this seems to be a driver file users nega icerec build photonics service resources tables shower list n n nthis is because the test is calling functions from the photonics libs and not their dummy equivalents in photonics service n nthis is not n a photonics issue reverting produced the same result n a photonics service issue reverting produced the same result n an xcode issue multiple versions of xcode tested n an os x version issue leopard snow leopard and lion all fail n nthis is possibly related to the removal of flat namespaces from os x linking in order to address link issues w python on lion i haven t tested this n reporter nega cc nwhitehorn jvansanten olivas resolution fixed ts component combo reconstruction summary photonics service tests fail on all version of osx priority normal keywords osx photonics service photonics linking time milestone owner nega type defect
1
161,671
12,558,526,083
IssuesEvent
2020-06-07 16:09:48
WoWManiaUK/Redemption
https://api.github.com/repos/WoWManiaUK/Redemption
opened
[Spell/Warlock] Demonic Knowledge
Fix - Ready to Test
**What is Happening:** Talent description (rank3): Description: Increases your spell damage by an amount equal to 12% of the total of your active demon's Stamina plus Intellect. Problem is: in wow mania it not automatic update when intelect or stamina change. **What Should happen:** Should update warlock spell power, based in how much stamina / intelect pet has. Showcase: https://drive.google.com/file/d/15ilU-iDPWgAmuBpDUTQ7QlMVNRyLhvi0/view?usp=sharing
1.0
[Spell/Warlock] Demonic Knowledge - **What is Happening:** Talent description (rank3): Description: Increases your spell damage by an amount equal to 12% of the total of your active demon's Stamina plus Intellect. Problem is: in wow mania it not automatic update when intelect or stamina change. **What Should happen:** Should update warlock spell power, based in how much stamina / intelect pet has. Showcase: https://drive.google.com/file/d/15ilU-iDPWgAmuBpDUTQ7QlMVNRyLhvi0/view?usp=sharing
non_defect
demonic knowledge what is happening talent description description increases your spell damage by an amount equal to of the total of your active demon s stamina plus intellect problem is in wow mania it not automatic update when intelect or stamina change what should happen should update warlock spell power based in how much stamina intelect pet has showcase
0
17,698
3,012,932,326
IssuesEvent
2015-07-29 04:19:24
yawlfoundation/yawl
https://api.github.com/repos/yawlfoundation/yawl
closed
ruleseditor saves ruleset with a different name than the specification uri
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. open yawlbookWorklet.yawl in the ruleseditor 2. create a ruleset (selection) for onderzoeken_starten 3. save everything What is the expected output? What do you see instead? A rule is now created with name yawlbookWorklet.xrs. However, when you would execute a case for yawlbookWorklet.yawl and execute task examinations, the worklet service is not able to start a worklet as it searches for a ruleset having name gynonc.xrs. Actually, the problem here is that the workletservice searches for a rulesset using the specification uri and that the ruleseditor saves a rulesset which has the name of the file that is opened and not the specification uri. I would propose to make this consistent. What version of the product are you using? On what operating system? windows 2003 server, java1.6.0_03 ``` Original issue reported on code.google.com by `Ronny.M...@gmail.com` on 4 Nov 2008 at 10:32 Attachments: * [yawlbookWorklet.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-208/comment-0/yawlbookWorklet.yawl)
1.0
ruleseditor saves ruleset with a different name than the specification uri - ``` What steps will reproduce the problem? 1. open yawlbookWorklet.yawl in the ruleseditor 2. create a ruleset (selection) for onderzoeken_starten 3. save everything What is the expected output? What do you see instead? A rule is now created with name yawlbookWorklet.xrs. However, when you would execute a case for yawlbookWorklet.yawl and execute task examinations, the worklet service is not able to start a worklet as it searches for a ruleset having name gynonc.xrs. Actually, the problem here is that the workletservice searches for a rulesset using the specification uri and that the ruleseditor saves a rulesset which has the name of the file that is opened and not the specification uri. I would propose to make this consistent. What version of the product are you using? On what operating system? windows 2003 server, java1.6.0_03 ``` Original issue reported on code.google.com by `Ronny.M...@gmail.com` on 4 Nov 2008 at 10:32 Attachments: * [yawlbookWorklet.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-208/comment-0/yawlbookWorklet.yawl)
defect
ruleseditor saves ruleset with a different name than the specification uri what steps will reproduce the problem open yawlbookworklet yawl in the ruleseditor create a ruleset selection for onderzoeken starten save everything what is the expected output what do you see instead a rule is now created with name yawlbookworklet xrs however when you would execute a case for yawlbookworklet yawl and execute task examinations the worklet service is not able to start a worklet as it searches for a ruleset having name gynonc xrs actually the problem here is that the workletservice searches for a rulesset using the specification uri and that the ruleseditor saves a rulesset which has the name of the file that is opened and not the specification uri i would propose to make this consistent what version of the product are you using on what operating system windows server original issue reported on code google com by ronny m gmail com on nov at attachments
1
184,611
14,985,360,656
IssuesEvent
2021-01-28 19:51:42
GoogleContainerTools/kpt
https://api.github.com/repos/GoogleContainerTools/kpt
opened
mdtogo Generates Duplicate Properties
cleanup documentation good first issue
When `mdtogo` is run via `make generate` it generates go properties that contain the text contents of the Markdown files. Right now when `mdtogo` is run it is generating multiple properties with the same name causing a compile error. This seems to be caused by multiple [`kpt live` docs](https://github.com/GoogleContainerTools/kpt/tree/master/site/content/en/reference/live) using the same name in the same directory. Moving these files or changing their name should correct this.
1.0
mdtogo Generates Duplicate Properties - When `mdtogo` is run via `make generate` it generates go properties that contain the text contents of the Markdown files. Right now when `mdtogo` is run it is generating multiple properties with the same name causing a compile error. This seems to be caused by multiple [`kpt live` docs](https://github.com/GoogleContainerTools/kpt/tree/master/site/content/en/reference/live) using the same name in the same directory. Moving these files or changing their name should correct this.
non_defect
mdtogo generates duplicate properties when mdtogo is run via make generate it generates go properties that contain the text contents of the markdown files right now when mdtogo is run it is generating multiple properties with the same name causing a compile error this seems to be caused by multiple using the same name in the same directory moving these files or changing their name should correct this
0
36,563
7,992,427,242
IssuesEvent
2018-07-20 01:22:25
fieldenms/tg
https://api.github.com/repos/fieldenms/tg
closed
Entity Centre: custom autocompleter fetching
Defect Entity centre P1 Pull request Selection criteria
### Description It appears that autocompleters with custom matchers in entity centres do not use predictable fetching strategy for loading autocompleted values. At this stage, default fetching strategy (all regular properties with key and desc) is always used. This sometimes even fits a custom set of properties that need to be shown. This has two drawbacks: 1. when using dot-notated properties such autocompleters will just fail 2. some (potentially heavy) properties will be loaded where it is unnecessary. - [ ] There is a need to fetch only those properties that were added to the list of visible properties. Otherwise only key and desc should be fetched if visible properties were not specified. - [ ] Also an unnecessary creation of standard autocompleter matcher occurs even if custom matcher is specified. This should be avoided. ### Expected outcome Reliable and transparent fetching of entity centre custom autocompleter's values.
1.0
Entity Centre: custom autocompleter fetching - ### Description It appears that autocompleters with custom matchers in entity centres do not use predictable fetching strategy for loading autocompleted values. At this stage, default fetching strategy (all regular properties with key and desc) is always used. This sometimes even fits a custom set of properties that need to be shown. This has two drawbacks: 1. when using dot-notated properties such autocompleters will just fail 2. some (potentially heavy) properties will be loaded where it is unnecessary. - [ ] There is a need to fetch only those properties that were added to the list of visible properties. Otherwise only key and desc should be fetched if visible properties were not specified. - [ ] Also an unnecessary creation of standard autocompleter matcher occurs even if custom matcher is specified. This should be avoided. ### Expected outcome Reliable and transparent fetching of entity centre custom autocompleter's values.
defect
entity centre custom autocompleter fetching description it appears that autocompleters with custom matchers in entity centres do not use predictable fetching strategy for loading autocompleted values at this stage default fetching strategy all regular properties with key and desc is always used this sometimes even fits a custom set of properties that need to be shown this has two drawbacks when using dot notated properties such autocompleters will just fail some potentially heavy properties will be loaded where it is unnecessary there is a need to fetch only those properties that were added to the list of visible properties otherwise only key and desc should be fetched if visible properties were not specified also an unnecessary creation of standard autocompleter matcher occurs even if custom matcher is specified this should be avoided expected outcome reliable and transparent fetching of entity centre custom autocompleter s values
1
45,925
24,277,057,416
IssuesEvent
2022-09-28 14:32:58
xtermjs/xterm.js
https://api.github.com/repos/xtermjs/xterm.js
opened
serious perf issues on master
important area/performance
Master currently has serious perf issues (comparing `ls -lR /usr` on my machine, all with webgl renderer): - master ![grafik](https://user-images.githubusercontent.com/6193135/192805964-afe17dd6-4d27-4d5a-b761-06cb12b44bb0.png) - version 5.0 ![grafik](https://user-images.githubusercontent.com/6193135/192803167-103d43c7-0820-4c1b-8264-07ef346d8959.png) - version 4.19 ![grafik](https://user-images.githubusercontent.com/6193135/192804223-49efda80-3c91-47c5-96c8-f107f932eb2f.png) All were measured after several runs to give JIT a chance to do its work. It seems the chance causing this came after 5.0.0.
True
serious perf issues on master - Master currently has serious perf issues (comparing `ls -lR /usr` on my machine, all with webgl renderer): - master ![grafik](https://user-images.githubusercontent.com/6193135/192805964-afe17dd6-4d27-4d5a-b761-06cb12b44bb0.png) - version 5.0 ![grafik](https://user-images.githubusercontent.com/6193135/192803167-103d43c7-0820-4c1b-8264-07ef346d8959.png) - version 4.19 ![grafik](https://user-images.githubusercontent.com/6193135/192804223-49efda80-3c91-47c5-96c8-f107f932eb2f.png) All were measured after several runs to give JIT a chance to do its work. It seems the chance causing this came after 5.0.0.
non_defect
serious perf issues on master master currently has serious perf issues comparing ls lr usr on my machine all with webgl renderer master version version all were measured after several runs to give jit a chance to do its work it seems the chance causing this came after
0
569,313
17,011,710,048
IssuesEvent
2021-07-02 06:07:55
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
Protocol tab crashes in both "my account" and "console" application(s) in IS-console.
Priority/Highest Severity/Critical bug console ui
**Describe the issue:** When trying to access the protocol tab of myaccount application and console application in IS-console, getting crash caused by "TypeError: Cannot read property 'id' of undefined" ``` TypeError: Cannot read property 'id' of undefined at main.11cee388.js?d6e49a8fae5bb95e2834:1 at Array.filter (<anonymous>) at main.11cee388.js?d6e49a8fae5bb95e2834:1 at main.11cee388.js?d6e49a8fae5bb95e2834:1 at vr (main.11cee388.js?d6e49a8fae5bb95e2834:1) at ea (636.439a593e.js?d6e49a8fae5bb95e2834:2) at ks (636.439a593e.js?d6e49a8fae5bb95e2834:2) at bu (636.439a593e.js?d6e49a8fae5bb95e2834:2) at yu (636.439a593e.js?d6e49a8fae5bb95e2834:2) at lu (636.439a593e.js?d6e49a8fae5bb95e2834:2) ``` * * * **How to reproduce:** * login to product is console (127.0.0.1:9443/console) * Go to Develop tab * Go to Application under Develop tab * Click either Console application or MyAccount Application * Go to the Protocol tab * * * [1] ![Screenshot from 2021-06-25 13-03-45](https://user-images.githubusercontent.com/20130001/123387986-f6b84380-d5b5-11eb-8125-3a51b833759a.png) [2] ![Screenshot from 2021-06-25 13-04-00](https://user-images.githubusercontent.com/20130001/123388094-0e8fc780-d5b6-11eb-85a9-271f84f60c66.png) [3] ![Screenshot from 2021-06-25 13-04-16](https://user-images.githubusercontent.com/20130001/123388125-151e3f00-d5b6-11eb-995d-31780fc70976.png)
1.0
Protocol tab crashes in both "my account" and "console" application(s) in IS-console. - **Describe the issue:** When trying to access the protocol tab of myaccount application and console application in IS-console, getting crash caused by "TypeError: Cannot read property 'id' of undefined" ``` TypeError: Cannot read property 'id' of undefined at main.11cee388.js?d6e49a8fae5bb95e2834:1 at Array.filter (<anonymous>) at main.11cee388.js?d6e49a8fae5bb95e2834:1 at main.11cee388.js?d6e49a8fae5bb95e2834:1 at vr (main.11cee388.js?d6e49a8fae5bb95e2834:1) at ea (636.439a593e.js?d6e49a8fae5bb95e2834:2) at ks (636.439a593e.js?d6e49a8fae5bb95e2834:2) at bu (636.439a593e.js?d6e49a8fae5bb95e2834:2) at yu (636.439a593e.js?d6e49a8fae5bb95e2834:2) at lu (636.439a593e.js?d6e49a8fae5bb95e2834:2) ``` * * * **How to reproduce:** * login to product is console (127.0.0.1:9443/console) * Go to Develop tab * Go to Application under Develop tab * Click either Console application or MyAccount Application * Go to the Protocol tab * * * [1] ![Screenshot from 2021-06-25 13-03-45](https://user-images.githubusercontent.com/20130001/123387986-f6b84380-d5b5-11eb-8125-3a51b833759a.png) [2] ![Screenshot from 2021-06-25 13-04-00](https://user-images.githubusercontent.com/20130001/123388094-0e8fc780-d5b6-11eb-85a9-271f84f60c66.png) [3] ![Screenshot from 2021-06-25 13-04-16](https://user-images.githubusercontent.com/20130001/123388125-151e3f00-d5b6-11eb-995d-31780fc70976.png)
non_defect
protocol tab crashes in both my account and console application s in is console describe the issue when trying to access the protocol tab of myaccount application and console application in is console getting crash caused by typeerror cannot read property id of undefined typeerror cannot read property id of undefined at main js at array filter at main js at main js at vr main js at ea js at ks js at bu js at yu js at lu js how to reproduce login to product is console console go to develop tab go to application under develop tab click either console application or myaccount application go to the protocol tab
0
38,039
8,639,723,448
IssuesEvent
2018-11-23 21:02:32
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
Error in multiple interface implementation in generic class with Plain autoproperty
defect
A description of the issue. ### Steps To Reproduce https://deck.net/7dc82afdb5c34c8f335e68b9a4fb0bc6 In bridge.json we have rule (`"autoProperty": "Plain"`), but i can't configure it in deck.net, thats why i use `RulesAttribute` ```csharp public class Program { public static void Main() { } public interface Interface1 { [Rules(AutoProperty = AutoPropertyRule.Plain)] string Name {get;set;} } public interface Interface2 : Interface1 { } public interface Interface3 : Interface1 { } public class Class1<T> : Interface1 { [Rules(AutoProperty = AutoPropertyRule.Plain)] public string Name {get;set;} } public class Class2 : Class1<string>, Interface2, Interface3 { } } ``` ### Expected Result ```js Bridge.define("Demo.Program.Class2", { inherits: [Demo.Program.Class1$1(System.String),Demo.Program.Interface2,Demo.Program.Interface3], $kind: "nested class", alias: [ "Name", "Demo$Program$Interface1$Name", ] }); ``` ### Actual Result ```js Bridge.define("Demo.Program.Class2", { inherits: [Demo.Program.Class1$1(System.String),Demo.Program.Interface2,Demo.Program.Interface3], $kind: "nested class", alias: [ "Name", "Demo$Program$Interface1$Name", "Name", "Demo$Program$Interface1$Name" ] }); ``` Alias `Name` is duplicated and i have error in console (obviously)
1.0
Error in multiple interface implementation in generic class with Plain autoproperty - A description of the issue. ### Steps To Reproduce https://deck.net/7dc82afdb5c34c8f335e68b9a4fb0bc6 In bridge.json we have rule (`"autoProperty": "Plain"`), but i can't configure it in deck.net, thats why i use `RulesAttribute` ```csharp public class Program { public static void Main() { } public interface Interface1 { [Rules(AutoProperty = AutoPropertyRule.Plain)] string Name {get;set;} } public interface Interface2 : Interface1 { } public interface Interface3 : Interface1 { } public class Class1<T> : Interface1 { [Rules(AutoProperty = AutoPropertyRule.Plain)] public string Name {get;set;} } public class Class2 : Class1<string>, Interface2, Interface3 { } } ``` ### Expected Result ```js Bridge.define("Demo.Program.Class2", { inherits: [Demo.Program.Class1$1(System.String),Demo.Program.Interface2,Demo.Program.Interface3], $kind: "nested class", alias: [ "Name", "Demo$Program$Interface1$Name", ] }); ``` ### Actual Result ```js Bridge.define("Demo.Program.Class2", { inherits: [Demo.Program.Class1$1(System.String),Demo.Program.Interface2,Demo.Program.Interface3], $kind: "nested class", alias: [ "Name", "Demo$Program$Interface1$Name", "Name", "Demo$Program$Interface1$Name" ] }); ``` Alias `Name` is duplicated and i have error in console (obviously)
defect
error in multiple interface implementation in generic class with plain autoproperty a description of the issue steps to reproduce in bridge json we have rule autoproperty plain but i can t configure it in deck net thats why i use rulesattribute csharp public class program public static void main public interface string name get set public interface public interface public class public string name get set public class expected result js bridge define demo program inherits kind nested class alias name demo program name actual result js bridge define demo program inherits kind nested class alias name demo program name name demo program name alias name is duplicated and i have error in console obviously
1
59,424
14,589,148,093
IssuesEvent
2020-12-19 00:41:36
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
opened
Illegal instruction on older CPUs under version 2.4.0
type:build/install
**System information** - Ubuntu 18.04 and 20.04, Scientific Linux 7 - binary installed via pip - version 2.4.0 - Python 3.8 - installed via pip (either inside or not inside a Conda environment) - various CPU-only and GPU-hosting machines **Describe the problem** `import tensorflow` produces "Illegal instruction (core dumped)" on older machines (seemingly those that do not support AVX2 instructions). There is no problem on new machines (seemingly those that support AVX2 instructions). **Provide the exact sequence of commands / steps that you executed before running into the problem** ``` pip install tensorflow python -c "import tensorflow" ``` **Any other info / logs** The core dump occurs on various machines with various types of CPUs. The common thread seems to be that it occurs on machines that don't support AVX2 instructions. Most of the machines on which this occurs do support AVX instructions. The issue does not occur with Tensorflow 2.3.1 nor with Tensorflow 2.5.0 installed via tf-nightly. Any chance Tensorflow 2.4.0 was built in a way (perhaps unintentionally) that requires AVX2 instructions or some other requirement that causes it to fail on somewhat older (but not really old) machines? Based on what I am seeing, it seems that using 2.4.0 on many machines will fail. The same issue occurs when running in the official Tensorflow Docker container. This seems related to issue #44668.
1.0
Illegal instruction on older CPUs under version 2.4.0 - **System information** - Ubuntu 18.04 and 20.04, Scientific Linux 7 - binary installed via pip - version 2.4.0 - Python 3.8 - installed via pip (either inside or not inside a Conda environment) - various CPU-only and GPU-hosting machines **Describe the problem** `import tensorflow` produces "Illegal instruction (core dumped)" on older machines (seemingly those that do not support AVX2 instructions). There is no problem on new machines (seemingly those that support AVX2 instructions). **Provide the exact sequence of commands / steps that you executed before running into the problem** ``` pip install tensorflow python -c "import tensorflow" ``` **Any other info / logs** The core dump occurs on various machines with various types of CPUs. The common thread seems to be that it occurs on machines that don't support AVX2 instructions. Most of the machines on which this occurs do support AVX instructions. The issue does not occur with Tensorflow 2.3.1 nor with Tensorflow 2.5.0 installed via tf-nightly. Any chance Tensorflow 2.4.0 was built in a way (perhaps unintentionally) that requires AVX2 instructions or some other requirement that causes it to fail on somewhat older (but not really old) machines? Based on what I am seeing, it seems that using 2.4.0 on many machines will fail. The same issue occurs when running in the official Tensorflow Docker container. This seems related to issue #44668.
non_defect
illegal instruction on older cpus under version system information ubuntu and scientific linux binary installed via pip version python installed via pip either inside or not inside a conda environment various cpu only and gpu hosting machines describe the problem import tensorflow produces illegal instruction core dumped on older machines seemingly those that do not support instructions there is no problem on new machines seemingly those that support instructions provide the exact sequence of commands steps that you executed before running into the problem pip install tensorflow python c import tensorflow any other info logs the core dump occurs on various machines with various types of cpus the common thread seems to be that it occurs on machines that don t support instructions most of the machines on which this occurs do support avx instructions the issue does not occur with tensorflow nor with tensorflow installed via tf nightly any chance tensorflow was built in a way perhaps unintentionally that requires instructions or some other requirement that causes it to fail on somewhat older but not really old machines based on what i am seeing it seems that using on many machines will fail the same issue occurs when running in the official tensorflow docker container this seems related to issue
0
2,679
2,702,164,833
IssuesEvent
2015-04-06 02:45:52
exercism/exercism.io
https://api.github.com/repos/exercism/exercism.io
closed
Create Powershell script for Windows Users to install the CLI
documentation topic: getting started windows support
Since exercism is on [Chocolatey](https://chocolatey.org/packages/exercism-io-cli) it should be pretty simple to create an install "dotfile" for new users to use to easily install the CLI.
1.0
Create Powershell script for Windows Users to install the CLI - Since exercism is on [Chocolatey](https://chocolatey.org/packages/exercism-io-cli) it should be pretty simple to create an install "dotfile" for new users to use to easily install the CLI.
non_defect
create powershell script for windows users to install the cli since exercism is on it should be pretty simple to create an install dotfile for new users to use to easily install the cli
0
20,466
3,358,730,567
IssuesEvent
2015-11-19 10:58:49
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
[TEST-FAILURE] ClientPartitionLostListenerTest.test_partitionLostListener_removed
Team: Client Type: Defect
``` java.lang.AssertionError: expected:<0> but was:<1> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest$1.run(ClientPartitionLostListenerTest.java:131) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:825) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:839) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest.assertRegistrationsSizeEventually(ClientPartitionLostListenerTest.java:123) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest.test_partitionLostListener_removed(ClientPartitionLostListenerTest.java:75) ``` https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-ZuluJDK6/com.hazelcast$hazelcast-client-legacy/7/testReport/junit/com.hazelcast.client.partitionservice/ClientPartitionLostListenerTest/test_partitionLostListener_removed/
1.0
[TEST-FAILURE] ClientPartitionLostListenerTest.test_partitionLostListener_removed - ``` java.lang.AssertionError: expected:<0> but was:<1> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest$1.run(ClientPartitionLostListenerTest.java:131) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:825) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:839) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest.assertRegistrationsSizeEventually(ClientPartitionLostListenerTest.java:123) at com.hazelcast.client.partitionservice.ClientPartitionLostListenerTest.test_partitionLostListener_removed(ClientPartitionLostListenerTest.java:75) ``` https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-ZuluJDK6/com.hazelcast$hazelcast-client-legacy/7/testReport/junit/com.hazelcast.client.partitionservice/ClientPartitionLostListenerTest/test_partitionLostListener_removed/
defect
clientpartitionlostlistenertest test partitionlostlistener removed java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast client partitionservice clientpartitionlostlistenertest run clientpartitionlostlistenertest java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast client partitionservice clientpartitionlostlistenertest assertregistrationssizeeventually clientpartitionlostlistenertest java at com hazelcast client partitionservice clientpartitionlostlistenertest test partitionlostlistener removed clientpartitionlostlistenertest java
1
16,356
2,613,888,326
IssuesEvent
2015-02-28 00:46:51
boxkite/ckanext-donneesqctheme
https://api.github.com/repos/boxkite/ckanext-donneesqctheme
closed
Index the organization name in the dataset
Medium Priority
Since there will be several similar datasets with the same name, it would be useful that putting the name of the organisation with the researched item put the result of that organization first. Example: currently there are 2 datasets names "pistes cyclables", one for organization City of Montréal, one from Nord Ouvert. If I search "Piste cyclable Montréal", the dataset from Montréal comes first (logical since the dataset has a tag "Montréal") If I search "Piste cyclable Nord Ouvert" I don't even get any piste cyclable dataset. We can expect that some people will use a very naive approach when searching data using something list "what-i-am-looking-for city-name". If adding the city name somehow excludes results because the organisation's name is mention nowhere else than at the organisation level, it's a issue. It would be possible to ask organization to manually add the name of the organization either in the title, description or keyword, but chances are good that some will forget. Option I see (besides the manual thing) - Ask Solr to index the organization name (which does not seem to be the case when looking at the behaviour) - Generate a tag that always add the organization name
1.0
Index the organization name in the dataset - Since there will be several similar datasets with the same name, it would be useful that putting the name of the organisation with the researched item put the result of that organization first. Example: currently there are 2 datasets names "pistes cyclables", one for organization City of Montréal, one from Nord Ouvert. If I search "Piste cyclable Montréal", the dataset from Montréal comes first (logical since the dataset has a tag "Montréal") If I search "Piste cyclable Nord Ouvert" I don't even get any piste cyclable dataset. We can expect that some people will use a very naive approach when searching data using something list "what-i-am-looking-for city-name". If adding the city name somehow excludes results because the organisation's name is mention nowhere else than at the organisation level, it's a issue. It would be possible to ask organization to manually add the name of the organization either in the title, description or keyword, but chances are good that some will forget. Option I see (besides the manual thing) - Ask Solr to index the organization name (which does not seem to be the case when looking at the behaviour) - Generate a tag that always add the organization name
non_defect
index the organization name in the dataset since there will be several similar datasets with the same name it would be useful that putting the name of the organisation with the researched item put the result of that organization first example currently there are datasets names pistes cyclables one for organization city of montréal one from nord ouvert if i search piste cyclable montréal the dataset from montréal comes first logical since the dataset has a tag montréal if i search piste cyclable nord ouvert i don t even get any piste cyclable dataset we can expect that some people will use a very naive approach when searching data using something list what i am looking for city name if adding the city name somehow excludes results because the organisation s name is mention nowhere else than at the organisation level it s a issue it would be possible to ask organization to manually add the name of the organization either in the title description or keyword but chances are good that some will forget option i see besides the manual thing ask solr to index the organization name which does not seem to be the case when looking at the behaviour generate a tag that always add the organization name
0
67,564
20,995,666,581
IssuesEvent
2022-03-29 13:19:11
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Result::formatXML and Result::formatJSON do not escape type names
T: Defect C: Functionality P: Medium E: All Editions
A recent fix produced type names containing double quotes in `formatXML()` and `formatJSON()` without escaping them: https://github.com/jOOQ/jOOQ/issues/9347, when the export uses headers. E.g. ```xml <field schema="public" table="t_book" name="status" type=""PUBLIC"."U_BOOK_STATUS""/> ``` We shouldn't quote those names, I think? ```xml <field schema="public" table="t_book" name="status" type="PUBLIC.U_BOOK_STATUS"/> ```
1.0
Result::formatXML and Result::formatJSON do not escape type names - A recent fix produced type names containing double quotes in `formatXML()` and `formatJSON()` without escaping them: https://github.com/jOOQ/jOOQ/issues/9347, when the export uses headers. E.g. ```xml <field schema="public" table="t_book" name="status" type=""PUBLIC"."U_BOOK_STATUS""/> ``` We shouldn't quote those names, I think? ```xml <field schema="public" table="t_book" name="status" type="PUBLIC.U_BOOK_STATUS"/> ```
defect
result formatxml and result formatjson do not escape type names a recent fix produced type names containing double quotes in formatxml and formatjson without escaping them when the export uses headers e g xml we shouldn t quote those names i think xml
1
20,658
3,392,406,003
IssuesEvent
2015-11-30 19:28:29
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
Bridge compiler generates not required Bridge.cast instruction for user defined conversion
defect
Test case ``` using System; using System.Collections.Generic; using System.Linq; using Bridge; using Bridge.Html5; namespace Demo { public class App { [Ready] public static void Main() { var items = new[] { new SubReactElement(), new SubReactElement() }; var a = items.Select(d => (ReactElementOrText)d).ToArray(); } } public class SubReactElement : ReactElement { } public class ReactElement { } public sealed class ReactElementOrText { private ReactElementOrText() { } public static implicit operator ReactElementOrText(string text) { return new ReactElementOrText(); } public static implicit operator ReactElementOrText(ReactElement element) { return new ReactElementOrText(); } } } ``` Such code `(ReactElementOrText)d` is emitted as `Bridge.cast(Demo.ReactElementOrText.op_Implicit(d), Demo.ReactElementOrText)` Please note that `Bridge.cast` is not required in this case, correct emitted script is `Demo.ReactElementOrText.op_Implicit(d)`
1.0
Bridge compiler generates not required Bridge.cast instruction for user defined conversion - Test case ``` using System; using System.Collections.Generic; using System.Linq; using Bridge; using Bridge.Html5; namespace Demo { public class App { [Ready] public static void Main() { var items = new[] { new SubReactElement(), new SubReactElement() }; var a = items.Select(d => (ReactElementOrText)d).ToArray(); } } public class SubReactElement : ReactElement { } public class ReactElement { } public sealed class ReactElementOrText { private ReactElementOrText() { } public static implicit operator ReactElementOrText(string text) { return new ReactElementOrText(); } public static implicit operator ReactElementOrText(ReactElement element) { return new ReactElementOrText(); } } } ``` Such code `(ReactElementOrText)d` is emitted as `Bridge.cast(Demo.ReactElementOrText.op_Implicit(d), Demo.ReactElementOrText)` Please note that `Bridge.cast` is not required in this case, correct emitted script is `Demo.ReactElementOrText.op_Implicit(d)`
defect
bridge compiler generates not required bridge cast instruction for user defined conversion test case using system using system collections generic using system linq using bridge using bridge namespace demo public class app public static void main var items new new subreactelement new subreactelement var a items select d reactelementortext d toarray public class subreactelement reactelement public class reactelement public sealed class reactelementortext private reactelementortext public static implicit operator reactelementortext string text return new reactelementortext public static implicit operator reactelementortext reactelement element return new reactelementortext such code reactelementortext d is emitted as bridge cast demo reactelementortext op implicit d demo reactelementortext please note that bridge cast is not required in this case correct emitted script is demo reactelementortext op implicit d
1
81,347
30,816,014,808
IssuesEvent
2023-08-01 13:31:27
fecgov/fecfile-web-app
https://api.github.com/repos/fecgov/fecfile-web-app
closed
Defect - Functionality is broken for - System to prevent user from having multiple reports of the same type within calendar year.
defect
This defect is being written to where functionality is broken for: "The System to prevent user from having multiple reports of same type within calendar year". (Tested on both Stage and DEV environments.) This functionality was developed in Sprint 23 ticket #361 to where this functionality was working and passed. https://app.zenhub.com/workspaces/fecfile-online-619e578e68408b001c831251/issues/gh/fecgov/fecfile-web-app/361 This defect was found in Sprint 28 during e2e testing. Below are screenshots to show when selecting create a new report and existing report within same calendar year can be selected, added and saved. Existing report: ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/312ccaba-8f03-4cc5-a83e-07cd0bd43a87) Created new report April 15 Quarterly Report (Q1) selectable and able to create and save same report and same calendar year. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/b2f94916-34cb-4ac3-8fc2-f43794f034cf) Per @MitchellTCG @mjtravers assigned to Sprint 28 pointed at 3 points for fix.
1.0
Defect - Functionality is broken for - System to prevent user from having multiple reports of the same type within calendar year. - This defect is being written to where functionality is broken for: "The System to prevent user from having multiple reports of same type within calendar year". (Tested on both Stage and DEV environments.) This functionality was developed in Sprint 23 ticket #361 to where this functionality was working and passed. https://app.zenhub.com/workspaces/fecfile-online-619e578e68408b001c831251/issues/gh/fecgov/fecfile-web-app/361 This defect was found in Sprint 28 during e2e testing. Below are screenshots to show when selecting create a new report and existing report within same calendar year can be selected, added and saved. Existing report: ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/312ccaba-8f03-4cc5-a83e-07cd0bd43a87) Created new report April 15 Quarterly Report (Q1) selectable and able to create and save same report and same calendar year. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/b2f94916-34cb-4ac3-8fc2-f43794f034cf) Per @MitchellTCG @mjtravers assigned to Sprint 28 pointed at 3 points for fix.
defect
defect functionality is broken for system to prevent user from having multiple reports of the same type within calendar year this defect is being written to where functionality is broken for the system to prevent user from having multiple reports of same type within calendar year tested on both stage and dev environments this functionality was developed in sprint ticket to where this functionality was working and passed this defect was found in sprint during testing below are screenshots to show when selecting create a new report and existing report within same calendar year can be selected added and saved existing report created new report april quarterly report selectable and able to create and save same report and same calendar year per mitchelltcg mjtravers assigned to sprint pointed at points for fix
1
11,708
2,663,926,860
IssuesEvent
2015-03-20 10:48:20
janniklaval/phratch
https://api.github.com/repos/janniklaval/phratch
closed
bounce block
invalid Priority-Medium Type-Defect
Original [issue 183](https://code.google.com/p/phratch/issues/detail?id=183) created by janniklaval on 2015-03-12T10:47:10.000Z: the bounce blocks works well when the movement is horizontal. We should do the same vertically.
1.0
bounce block - Original [issue 183](https://code.google.com/p/phratch/issues/detail?id=183) created by janniklaval on 2015-03-12T10:47:10.000Z: the bounce blocks works well when the movement is horizontal. We should do the same vertically.
defect
bounce block original created by janniklaval on the bounce blocks works well when the movement is horizontal we should do the same vertically
1
719,512
24,762,543,052
IssuesEvent
2022-10-22 04:48:45
pystardust/ani-cli
https://api.github.com/repos/pystardust/ani-cli
closed
Hex string is too short, padding with zero bytes to length - bad decrypt
type: bug priority 2: medium
**Metadata (please complete the following information)** Version: 3.4 OS: Windows 10 Shell: Git Bash **Describe the bug** Doesn't open, because of hex string errors. `hex string is too short, padding with zero bytes to length` **Steps To Reproduce** 1. Run `ani-cli` 2. Search for "kodocha" 3. Choose 2 (kodomo no omocha tv dub) 4. Choose episode 21 **Expected behavior** The hex string doesn't do errors and the episode is working **Screenshots (if applicable; you can just drag the image onto github)** ![image](https://user-images.githubusercontent.com/40833244/197245736-9d6dc0d6-a3cb-48a7-924d-9626f53760e3.png) **Additional context**
1.0
Hex string is too short, padding with zero bytes to length - bad decrypt - **Metadata (please complete the following information)** Version: 3.4 OS: Windows 10 Shell: Git Bash **Describe the bug** Doesn't open, because of hex string errors. `hex string is too short, padding with zero bytes to length` **Steps To Reproduce** 1. Run `ani-cli` 2. Search for "kodocha" 3. Choose 2 (kodomo no omocha tv dub) 4. Choose episode 21 **Expected behavior** The hex string doesn't do errors and the episode is working **Screenshots (if applicable; you can just drag the image onto github)** ![image](https://user-images.githubusercontent.com/40833244/197245736-9d6dc0d6-a3cb-48a7-924d-9626f53760e3.png) **Additional context**
non_defect
hex string is too short padding with zero bytes to length bad decrypt metadata please complete the following information version os windows shell git bash describe the bug doesn t open because of hex string errors hex string is too short padding with zero bytes to length steps to reproduce run ani cli search for kodocha choose kodomo no omocha tv dub choose episode expected behavior the hex string doesn t do errors and the episode is working screenshots if applicable you can just drag the image onto github additional context
0
72,740
24,264,690,045
IssuesEvent
2022-09-28 04:31:02
panoply/prettify
https://api.github.com/repos/panoply/prettify
closed
New Style Rule: forceValue
Defect Boycott Liquid CSS SCSS Critical
### Description This new rule helps tame CSS code and allows for a persistent style across your project and be very helpful in cases where you are infusing Liquid into style language and the rule has been introduced for such situations. While I personally consider this a lazy, elementary and novice tactic, It's not uncommon for Liquid to be infused with CSS and Shopify actually advocates such a practice. It is apparent that folks regularly employ the approach in their projects, especially when CSS values are pushing wrap limits. If you are not infusing Liquid into your styles, then this option while perfectly fine to use, it's generally frowned upon, however if your code style tastes prefer this, nothing would stop you from leveraging it. This is a *style* exposed option and provides 3 style choices. The option will indent CSS selector property values onto newlines in CSS, SCSS or LESS languages. The optional is particularly helpful when Liquid tags are used as property values as in most cases CSS property value lengths will rarely exceed wraps. ### Goals The goals of this beautification option is to elegantly apply a consistent code style in CSS, SCSS or LESS languages. The rule places property values onto new lines and while not a common practice, it can be exceptionally helpful for cases where Liquid tokens use long naming conventions or you are infusing Liquid conditions, both situations are common in Shopify themes. As aforementioned, this option is not specific to Liquid infused styles, it can be used in styles that do not contain any liquid too. ### Context This option requires refactors only minor augmentation to be applied in the style beautification process. When a user defined `wrap` context is imperative and the logic for this is handled in lexical scopes. If the user has defined `wrap` but has not provided a word `wrap` limit then the rule will fallback to `preserve`. 
Both the `preserve` and `collapse` styles can also be supported with not too much complexity and heavy lifting. ## Ruleset The option will provide multiple beautification style choices. The initial rollout will include the following: - `preserve` - `collapse` - `wrap` ### Definition The option is set to `preserve` by default and made available to `style` rules. It can be defined as followed: ```ts prettify.options({ wrap: 80, // Define a wrap limit if using wrap as forceValue beautification style style: { forceValue: 'preserve' | 'collapse' | 'wrap' } }) ``` ## Preserve (default) The below examples showcases how the default `preserve` style will behave. Notice how there is no difference between _before_ and _after_ formatting. The structure is left intact. ### Before ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ## Collapse The `collapse` option is ideal and recommended when you are infusing Liquid into styles. 
This is exceptionally helpful if you are a heathen that inlines styles within a `<style>` embedded tag and sprinkles Liquid code within. Notice how _before_ formatting all selector property values are expressed inline, but _after_ formatting they will output onto new lines. Another important takeaway here is the white space dash (trim) delimiters applied to Liquid tokens. When using this style choice with `correct` enabled, Prettify will reason with the input and apply the space trims where necessary, cool heh? ### Before ```liquid :root { --media-padding: {{ settings.media_padding }}px --font-body-family: {{ settings.type_body_font.family }}, {{ settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {% if some_condition %} {{ settings.font_small }}px{% else %} {{ settings.font_large }}px{% endif %}; font-family: {{ something.prop | filter: 'foo' | default: settings.type_body_font.family }}; background: #ffffff; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; background: #ffffff; } ``` ## Wrap The `wrap` style choice requires you to define a word `wrap` limit in the global options. This style choice will only apply new line indentation on values which exceed the wrap limit. Notice how only a couple of values in the _before_ and _after_ examples are output to new lines. 
This option will rarely newline pure style selector property values given the tiny length of the values but helpful when you need to new line large values, typical of Liquid tags. ### Before ```liquid :root { --media-padding: {{ settings.media_padding }}px --font-body-family: {{ settings.type_body_font.family }}, {{ settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {% if some_condition %} {{ settings.font_small }}px{% else %} {{ settings.font_large }}px{% endif %}; font-family: {{ something.prop | filter: 'foo' | default: settings.type_body_font.family }}; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ## Conditionals In situations where you conditionally output selector property values, the control structures will behave the same way as output structures. Indentations are applied in the expressions. Below is an example of beautified conditional values. ```liquid .selector { color: rgb(211, 211, 211); font-size: {%- if some_condition -%} {{- settings.font_small }}px {%- else -%} {{- settings.font_large }}px {%- endif %}; } ```
1.0
New Style Rule: forceValue - ### Description This new rule helps tame CSS code and allows for a persistent style across your project and be very helpful in cases where you are infusing Liquid into style language and the rule has been introduced for such situations. While I personally consider this a lazy, elementary and novice tactic, It's not uncommon for Liquid to be infused with CSS and Shopify actually advocates such a practice. It is apparent that folks regularly employ the approach in their projects, especially when CSS values are pushing wrap limits. If you are not infusing Liquid into your styles, then this option while perfectly fine to use, it's generally frowned upon, however if your code style tastes prefer this, nothing would stop you from leveraging it. This is a *style* exposed option and provides 3 style choices. The option will indent CSS selector property values onto newlines in CSS, SCSS or LESS languages. The optional is particularly helpful when Liquid tags are used as property values as in most cases CSS property value lengths will rarely exceed wraps. ### Goals The goals of this beautification option is to elegantly apply a consistent code style in CSS, SCSS or LESS languages. The rule places property values onto new lines and while not a common practice, it can be exceptionally helpful for cases where Liquid tokens use long naming conventions or you are infusing Liquid conditions, both situations are common in Shopify themes. As aforementioned, this option is not specific to Liquid infused styles, it can be used in styles that do not contain any liquid too. ### Context This option requires refactors only minor augmentation to be applied in the style beautification process. When a user defined `wrap` context is imperative and the logic for this is handled in lexical scopes. If the user has defined `wrap` but has not provided a word `wrap` limit then the rule will fallback to `preserve`. 
Both the `preserve` and `collapse` styles can also be supported with not too much complexity and heavy lifting. ## Ruleset The option will provide multiple beautification style choices. The initial rollout will include the following: - `preserve` - `collapse` - `wrap` ### Definition The option is set to `preserve` by default and made available to `style` rules. It can be defined as followed: ```ts prettify.options({ wrap: 80, // Define a wrap limit if using wrap as forceValue beautification style style: { forceValue: 'preserve' | 'collapse' | 'wrap' } }) ``` ## Preserve (default) The below examples showcases how the default `preserve` style will behave. Notice how there is no difference between _before_ and _after_ formatting. The structure is left intact. ### Before ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ## Collapse The `collapse` option is ideal and recommended when you are infusing Liquid into styles. 
This is exceptionally helpful if you are a heathen that inlines styles within a `<style>` embedded tag and sprinkles Liquid code within. Notice how _before_ formatting all selector property values are expressed inline, but _after_ formatting they will output onto new lines. Another important takeaway here is the white space dash (trim) delimiters applied to Liquid tokens. When using this style choice with `correct` enabled, Prettify will reason with the input and apply the space trims where necessary, cool heh? ### Before ```liquid :root { --media-padding: {{ settings.media_padding }}px --font-body-family: {{ settings.type_body_font.family }}, {{ settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {% if some_condition %} {{ settings.font_small }}px{% else %} {{ settings.font_large }}px{% endif %}; font-family: {{ something.prop | filter: 'foo' | default: settings.type_body_font.family }}; background: #ffffff; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; background: #ffffff; } ``` ## Wrap The `wrap` style choice requires you to define a word `wrap` limit in the global options. This style choice will only apply new line indentation on values which exceed the wrap limit. Notice how only a couple of values in the _before_ and _after_ examples are output to new lines. 
This option will rarely newline pure style selector property values given the tiny length of the values but helpful when you need to new line large values, typical of Liquid tags. ### Before ```liquid :root { --media-padding: {{ settings.media_padding }}px --font-body-family: {{ settings.type_body_font.family }}, {{ settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {% if some_condition %} {{ settings.font_small }}px{% else %} {{ settings.font_large }}px{% endif %}; font-family: {{ something.prop | filter: 'foo' | default: settings.type_body_font.family }}; } ``` ### After ```liquid :root { --media-padding: {{- settings.media_padding }}px --font-body-family: {{- settings.type_body_font.family }}, {{- settings.type_body_font.fallback_families }}; --font-body-weight-bold: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; } .selector { color: rgb(211, 211, 211); font-size: {{- settings.type_body_font.size | plus: 5 | at_most: 30 }}; font-weight: {{- settings.type_body_font.weight | plus: 300 | at_most: 1000 }}; font-family: {{- settings.prop | default: settings.type_body_font.family }}; } ``` ## Conditionals In situations where you conditionally output selector property values, the control structures will behave the same way as output structures. Indentations are applied in the expressions. Below is an example of beautified conditional values. ```liquid .selector { color: rgb(211, 211, 211); font-size: {%- if some_condition -%} {{- settings.font_small }}px {%- else -%} {{- settings.font_large }}px {%- endif %}; } ```
defect
new style rule forcevalue description this new rule helps tame css code and allows for a persistent style across your project and be very helpful in cases where you are infusing liquid into style language and the rule has been introduced for such situations while i personally consider this a lazy elementary and novice tactic it s not uncommon for liquid to be infused with css and shopify actually advocates such a practice it is apparent that folks regularly employ the approach in their projects especially when css values are pushing wrap limits if you are not infusing liquid into your styles then this option while perfectly fine to use it s generally frowned upon however if your code style tastes prefer this nothing would stop you from leveraging it this is a style exposed option and provides style choices the option will indent css selector property values onto newlines in css scss or less languages the optional is particularly helpful when liquid tags are used as property values as in most cases css property value lengths will rarely exceed wraps goals the goals of this beautification option is to elegantly apply a consistent code style in css scss or less languages the rule places property values onto new lines and while not a common practice it can be exceptionally helpful for cases where liquid tokens use long naming conventions or you are infusing liquid conditions both situations are common in shopify themes as aforementioned this option is not specific to liquid infused styles it can be used in styles that do not contain any liquid too context this option requires refactors only minor augmentation to be applied in the style beautification process when a user defined wrap context is imperative and the logic for this is handled in lexical scopes if the user has defined wrap but has not provided a word wrap limit then the rule will fallback to preserve both the preserve and collapse styles can also be supported with not too much complexity and heavy lifting 
ruleset the option will provide multiple beautification style choices the initial rollout will include the following preserve collapse wrap definition the option is set to preserve by default and made available to style rules it can be defined as followed ts prettify options wrap define a wrap limit if using wrap as forcevalue beautification style style forcevalue preserve collapse wrap preserve default the below examples showcases how the default preserve style will behave notice how there is no difference between before and after formatting the structure is left intact before liquid root media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size settings type body font size plus at most font weight settings type body font weight plus at most font family settings prop default settings type body font family after liquid root media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size settings type body font size plus at most font weight settings type body font weight plus at most font family settings prop default settings type body font family collapse the collapse option is ideal and recommended when you are infusing liquid into styles this is exceptionally helpful if you are a heathen that inlines styles within a embedded tag and sprinkles liquid code within notice how before formatting all selector property values are expressed inline but after formatting they will output onto new lines another important takeaway here is the white space dash trim delimiters applied to liquid tokens when using this style choice with correct enabled prettify will reason with the input and apply the space trims where necessary cool heh before liquid root 
media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size if some condition settings font small px else settings font large px endif font family something prop filter foo default settings type body font family background ffffff after liquid root media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size settings type body font size plus at most font weight settings type body font weight plus at most font family settings prop default settings type body font family background ffffff wrap the wrap style choice requires you to define a word wrap limit in the global options this style choice will only apply new line indentation on values which exceed the wrap limit notice how only a couple of values in the before and after examples are output to new lines this option will rarely newline pure style selector property values given the tiny length of the values but helpful when you need to new line large values typical of liquid tags before liquid root media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size if some condition settings font small px else settings font large px endif font family something prop filter foo default settings type body font family after liquid root media padding settings media padding px font body family settings type body font family settings type body font fallback families font body weight bold settings type body font weight plus at most selector color rgb font size settings type body font size plus at most font weight settings type body font weight 
plus at most font family settings prop default settings type body font family conditionals in situations where you conditionally output selector property values the control structures will behave the same way as output structures indentations are applied in the expressions below is an example of beautified conditional values liquid selector color rgb font size if some condition settings font small px else settings font large px endif
1
95,674
8,570,388,045
IssuesEvent
2018-11-11 19:44:23
RegaledSeer/MusicMaze
https://api.github.com/repos/RegaledSeer/MusicMaze
opened
Work on some method for a text maze.
enhancement priority unit tests
At the moment, testing the entire maze itself is a difficult process: There is no clean way to directly observe the effects of moving a player, nor do we have a clean way of representing the state of the maze for unit tests. Construct either a method that will directly observe the maze game, or perhaps work on a small view class that can be used to directly observe the maze game which can later be used for primarily testing purposes.
1.0
Work on some method for a text maze. - At the moment, testing the entire maze itself is a difficult process: There is no clean way to directly observe the effects of moving a player, nor do we have a clean way of representing the state of the maze for unit tests. Construct either a method that will directly observe the maze game, or perhaps work on a small view class that can be used to directly observe the maze game which can later be used for primarily testing purposes.
non_defect
work on some method for a text maze at the moment testing the entire maze itself is a difficult process there is no clean way to directly observe the effects of moving a player nor do we have a clean way of representing the state of the maze for unit tests construct either a method that will directly observe the maze game or perhaps work on a small view class that can be used to directly observe the maze game which can later be used for primarily testing purposes
0
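The MusicMaze issue above asks for a way to observe maze state directly in unit tests, for example via a small view class or method. A minimal sketch of that idea in TypeScript, assuming a plain grid-plus-player model (`Maze`, `renderText`, and the `#`/`.`/`P` glyphs are all illustrative names, not the project's actual API):

```typescript
// Hypothetical minimal model of a maze: the real MusicMaze classes are not
// shown in the issue, so this shape is illustrative only.
interface Maze {
  grid: boolean[][];                     // true = wall, false = open cell
  player: { row: number; col: number };  // current player position
}

// Render the maze as plain text so a unit test can observe the entire game
// state with a single string comparison: "#" wall, "." open, "P" player.
function renderText(maze: Maze): string {
  return maze.grid
    .map((row, r) =>
      row
        .map((wall, c) =>
          r === maze.player.row && c === maze.player.col ? "P" : wall ? "#" : "."
        )
        .join("")
    )
    .join("\n");
}
```

Tests could then assert on the rendered string after each move, giving one observable representation of the whole game state without exposing the maze's internals.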
452,843
32,071,277,952
IssuesEvent
2023-09-25 08:11:08
rerun-io/rerun
https://api.github.com/repos/rerun-io/rerun
opened
Clean up the Python API docs
📖 documentation enhancement 🐍 python API
Go through the top-level docs and see if it works for the new API. Also remove all references to the old APIs.
1.0
Clean up the Python API docs - Go through the top-level docs and see if it works for the new API. Also remove all references to the old APIs.
non_defect
clean up the python api docs go through the top level docs and see if it works for the new api also remove all references to the old apis
0
57,495
15,816,974,720
IssuesEvent
2021-04-05 13:56:43
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
508-defect-3 [SCREENREADER]: Screen reader users should hear an alert through `aria-live` after making a selection
508-defect-2 508-issue-screenreader 508/Accessibility BDD
# [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3) ## Feedback framework - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Definition of done 1. Review and acknowledge feedback. 1. Fix and/or document decisions made. 1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix. <hr/> ## Point of Contact **VFS Point of Contact:** _Josh_ ## User Story or Problem Statement As a screen reader user, I need to know that there is an alert after specific radio button selections. Since I may be accustomed to navigating forms using `tab` , I may accidentally skip relevant information. ## Details Upon selecting "no" on the supporting evidence page, an alert will appear notifying the user that they need to submit their service treatment records as soon as possible. Although this is currently being announced to screen reader users via `aria-describedby` as accomplished in [ticket 19660](https://github.com/department-of-veterans-affairs/va.gov-team/issues/19660), it may be skipped over by advanced screen reader users who may not wait to hear the full announcement. ## Acceptance Criteria - [ ] The alert is announced to the screen reader user through `aria-live="polite"` ## Environment * Operating System: any * Browser: any * Screenreading device: any * Server destination: production ## Steps to Recreate 1. Reach step 3 of 5: Supporting evidence 2. Start your screen reader device of choice 2. Select "no" 3. While the "no" option is being read out, press `tab` again to jump to the continue button 4. Confirm that the alert is not read out ## Solution An empty live-region could be added to exist on the page at render. 
Upon selecting "no" it can be populated with the alert, which would then be announced to the user even after they `tab` off the "no" option. This may not be technically feasible with the form system, but would be a more appropriate use of aria in this scenario. It also better ensures that the user will hear the alert. <img width="1397" alt="Screen Shot 2021-02-09 at 11 14 03 AM" src="https://user-images.githubusercontent.com/14154792/107393011-46dfaf00-6ac8-11eb-8550-12e36fa727d9.png"> ## WCAG or Vendor Guidance (optional) * [WCAG 2.1 Level A - 1.3.2 Meaningful Sequence](https://www.wuhcag.com/meaningful-sequence/) - Users who rely on assistive technology (such as a screen reader) to interpret content, require content to be presented in a meaningful order. If this is presented out of sequence, users may become disorientated and will not understand the content. * [Gov.uk's check a service is eligible pattern](https://design-system.service.gov.uk/patterns/check-a-service-is-suitable/) would be a significantly more accessible and usable pattern to address this need. Until more testing is conducted with screen readers, the wizard pattern should be used cautiously.
1.0
508-defect-3 [SCREENREADER]: Screen reader users should hear an alert through `aria-live` after making a selection - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3) ## Feedback framework - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Definition of done 1. Review and acknowledge feedback. 1. Fix and/or document decisions made. 1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix. <hr/> ## Point of Contact **VFS Point of Contact:** _Josh_ ## User Story or Problem Statement As a screen reader user, I need to know that there is an alert after specific radio button selections. Since I may be accustomed to navigating forms using `tab` , I may accidentally skip relevant information. ## Details Upon selecting "no" on the supporting evidence page, an alert will appear notifying the user that they need to submit their service treatment records as soon as possible. Although this is currently being announced to screen reader users via `aria-describedby` as accomplished in [ticket 19660](https://github.com/department-of-veterans-affairs/va.gov-team/issues/19660), it may be skipped over by advanced screen reader users who may not wait to hear the full announcement. ## Acceptance Criteria - [ ] The alert is announced to the screen reader user through `aria-live="polite"` ## Environment * Operating System: any * Browser: any * Screenreading device: any * Server destination: production ## Steps to Recreate 1. Reach step 3 of 5: Supporting evidence 2. Start your screen reader device of choice 2. Select "no" 3. While the "no" option is being read out, press `tab` again to jump to the continue button 4. Confirm that the alert is not read out ## Solution An empty live-region could be added to exist on the page at render. 
Upon selecting "no" it can be populated with the alert, which would then be announced to the user even after they `tab` off the "no" option. This may not be technically feasible with the form system, but would be a more appropriate use of aria in this scenario. It also better ensures that the user will hear the alert. <img width="1397" alt="Screen Shot 2021-02-09 at 11 14 03 AM" src="https://user-images.githubusercontent.com/14154792/107393011-46dfaf00-6ac8-11eb-8550-12e36fa727d9.png"> ## WCAG or Vendor Guidance (optional) * [WCAG 2.1 Level A - 1.3.2 Meaningful Sequence](https://www.wuhcag.com/meaningful-sequence/) - Users who rely on assistive technology (such as a screen reader) to interpret content, require content to be presented in a meaningful order. If this is presented out of sequence, users may become disorientated and will not understand the content. * [Gov.uk's check a service is eligible pattern](https://design-system.service.gov.uk/patterns/check-a-service-is-suitable/) would be a significantly more accessible and usable pattern to address this need. Until more testing is conducted with screen readers, the wizard pattern should be used cautiously.
defect
defect screen reader users should hear an alert through aria live after making a selection feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact josh user story or problem statement as a screen reader user i need to know that there is an alert after specific radio button selections since i may be accustomed to navigating forms using tab i may accidentally skip relevant information details upon selecting no on the supporting evidence page an alert will appear notifying the user that they need to submit their service treatment records as soon as possible although this is currently being announced to screen reader users via aria describedby as accomplished in it may be skipped over by advanced screen reader users who may not wait to hear the full announcement acceptance criteria the alert is announced to the screen reader user through aria live polite environment operating system any browser any screenreading device any server destination production steps to recreate reach step of supporting evidence start your screen reader device of choice select no while the no option is being read out press tab again to jump to the continue button confirm that the alert is not read out solution an empty live region could be added to exist on the page at render upon selecting no it can be populated with the alert which would then be announced to the user even after they tab off the no option this may not be technically feasible with the form system but would be a more appropriate use of aria in this scenario it also better ensures that the user will hear the alert img width alt screen shot at am src wcag or vendor guidance optional users who rely on assistive technology such 
as a screen reader to interpret content require content to be presented in a meaningful order if this is presented out of sequence users may become disorientated and will not understand the content would be a significantly more accessible and usable pattern to address this need until more testing is conducted with screen readers the wizard pattern should be used cautiously
1
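The solution section of the 508 issue above proposes rendering an empty `aria-live` region at page load and populating it when "no" is selected, so the alert is announced even after the user tabs away. A rough sketch of that pattern, with the DOM element modeled as a plain object so it runs outside a browser (the function names and alert wording are illustrative, paraphrased from the issue, not the VA form system's API):

```typescript
// Minimal stand-in for a DOM element; in the real page this would be an
// element such as <div aria-live="polite"></div> fetched via the DOM API.
interface LiveRegion {
  textContent: string;
}

// Render the region empty at page load: live regions only announce *changes*,
// so the element must already exist before any alert text is inserted.
function createLiveRegion(): LiveRegion {
  return { textContent: "" };
}

// Populate the pre-existing region when the user selects "no". Screen readers
// watching the aria-live container announce the new text even if focus has
// already moved on (e.g. the user tabbed ahead to the Continue button).
function onEvidenceSelection(region: LiveRegion, choice: "yes" | "no"): void {
  region.textContent =
    choice === "no"
      ? "Please submit your service treatment records as soon as possible."
      : "";
}
```

Because the announcement is driven by a content change rather than by `aria-describedby` on the focused control, it does not depend on the user waiting through the full read-out of the radio option.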
108,789
11,610,911,892
IssuesEvent
2020-02-26 04:46:34
DylanBulmer/EngineeringEngineers
https://api.github.com/repos/DylanBulmer/EngineeringEngineers
closed
Create Team member report
documentation
Team Member Report Document in Word or PDF format – With 1 – 2 pages per team member for every sprint – Must be created during sprint 1 and updated for every sprint. It includes: * Team member name. * Roles played this week. * Role duties and work performed this week, artifacts created/updated/reviewed. * Role duties and work to be performed next week. * Issues encountered. * Issues resolved. * Percentage of contributions of each team member in the deliverable. Note that the percentage should add up to 100%. The percentage will be checked against GitHub as well. * Up to one page: Weaknesses and Strengths of the student from peers’ point of view. Reports on the improvements compared to the previous sprint and the plan for the next sprint’s improvement.
1.0
Create Team member report - Team Member Report Document in Word or PDF format – With 1 – 2 pages per team member for every sprint – Must be created during sprint 1 and updated for every sprint. It includes: * Team member name. * Roles played this week. * Role duties and work performed this week, artifacts created/updated/reviewed. * Role duties and work to be performed next week. * Issues encountered. * Issues resolved. * Percentage of contributions of each team member in the deliverable. Note that the percentage should add up to 100%. The percentage will be checked against GitHub as well. * Up to one page: Weaknesses and Strengths of the student from peers’ point of view. Reports on the improvements compared to the previous sprint and the plan for the next sprint’s improvement.
non_defect
create team member report team member report document in word or pdf format – with – pages per team member for every sprint – must be created during sprint and updated for every sprint it includes team member name roles played this week role duties and work performed this week artifacts created updated reviewed role duties and work to be performed next week issues encountered issues resolved percentage of contributions of each team member in the deliverable note that the percentage should add up to the percentage will be checked against github as well up to one page weaknesses and strengths of the student from peers’ point of view reports on the improvements compared to the previous sprint and the plan for the next sprint’s improvement
0
644,332
20,974,217,614
IssuesEvent
2022-03-28 13:59:44
center-for-threat-informed-defense/attack-workbench-frontend
https://api.github.com/repos/center-for-threat-informed-defense/attack-workbench-frontend
closed
Matrix ATT&CK IDs shouldn't be validated
bug priority/high Points: 1
Since domains can have multiple matrices and the ATT&CK ID of a matrix is the domain identifier, the ATT&CK ID should not be validated for this object type.
1.0
Matrix ATT&CK IDs shouldn't be validated - Since domains can have multiple matrices and the ATT&CK ID of a matrix is the domain identifier, the ATT&CK ID should not be validated for this object type.
non_defect
matrix att ck ids shouldn t be validated since domains can have multiple matrices and the att ck id of a matrix is the domain identifier the att ck id should not be validated for this object type
0
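One way the exemption described in the ATT&CK Workbench issue above could look, sketched in TypeScript (the object-type names follow STIX/ATT&CK conventions, but the patterns and function are illustrative assumptions, not Workbench's actual validation code):

```typescript
// Illustrative ID patterns per object type, following common ATT&CK ID
// conventions (T#### techniques with optional .### sub-technique, G#### groups).
const ATTACK_ID_PATTERNS: Record<string, RegExp> = {
  "attack-pattern": /^T\d{4}(\.\d{3})?$/,
  "intrusion-set": /^G\d{4}$/,
};

function isValidAttackId(objectType: string, attackId: string): boolean {
  // Matrices carry the domain identifier (e.g. "enterprise-attack") in this
  // field, so they are exempted from pattern validation entirely.
  if (objectType === "x-mitre-matrix") return true;
  const pattern = ATTACK_ID_PATTERNS[objectType];
  return pattern ? pattern.test(attackId) : true;
}
```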
72,294
9,564,439,678
IssuesEvent
2019-05-05 03:31:54
ts-react/react-admin-template
https://api.github.com/repos/ts-react/react-admin-template
closed
Tab Page implementation approach
😸Documentation
This project is positioned as a development template for admin management systems. All pages (except the user-related pages) should use the basic-layout layout, so the tab page implementation is added inside the basic-layout layout component. To keep the tab page feature as pluggable as possible, the TabPages component and the tabs model are encapsulated independently. The implementation approach is as follows: 1. Use localStorage to store the data needed for the open tabs. 2. Listen for address-bar changes in the basic-layout layout component and perform the corresponding tab data storage actions. The data structure is as follows: ``` export interface ITab { id?: string; location?: H.Location, menuData?: IMenu } ITab[] ``` For the concrete implementation, see the related files: [tab-pages](https://github.com/ts-react/react-admin-template/blob/master/src/components/tab-pages/tab-pages.tsx) [tabs model](https://github.com/ts-react/react-admin-template/blob/master/src/models/tabs.ts) [basic-layout](https://github.com/ts-react/react-admin-template/blob/master/src/layouts/basic-layout.tsx) Some details are still unfinished; if you have suggestions, please reply.
1.0
Tab Page implementation approach - This project is positioned as a development template for admin management systems. All pages (except the user-related pages) should use the basic-layout layout, so the tab page implementation is added inside the basic-layout layout component. To keep the tab page feature as pluggable as possible, the TabPages component and the tabs model are encapsulated independently. The implementation approach is as follows: 1. Use localStorage to store the data needed for the open tabs. 2. Listen for address-bar changes in the basic-layout layout component and perform the corresponding tab data storage actions. The data structure is as follows: ``` export interface ITab { id?: string; location?: H.Location, menuData?: IMenu } ITab[] ``` For the concrete implementation, see the related files: [tab-pages](https://github.com/ts-react/react-admin-template/blob/master/src/components/tab-pages/tab-pages.tsx) [tabs model](https://github.com/ts-react/react-admin-template/blob/master/src/models/tabs.ts) [basic-layout](https://github.com/ts-react/react-admin-template/blob/master/src/layouts/basic-layout.tsx) Some details are still unfinished; if you have suggestions, please reply.
non_defect
tab page implementation approach this project is positioned as a development template for admin management systems all pages except the user related pages should use basic layout so the tab page implementation is added inside the basic layout component to keep the tab page feature as pluggable as possible the tabpages component and the tabs model are encapsulated independently the implementation approach is as follows use localstorage to store the data needed for the open tabs listen for address bar changes in the basic layout component and perform the corresponding tab data storage actions the data structure is as follows export interface itab id string location h location menudata imenu itab for the concrete implementation see the related files some details are still unfinished if you have suggestions please reply
0
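The tab-persistence approach described in the react-admin-template issue above (store open-tab data in localStorage, update it from the layout's URL-change listener) can be sketched as follows. The `ITab` shape mirrors the issue's interface with `H.Location`/`IMenu` reduced to `unknown` to stay self-contained; the storage key and function names are hypothetical, not taken from the project:

```typescript
// Data shape from the issue, with the route/menu typings reduced to `unknown`.
interface ITab {
  id?: string;
  location?: unknown;
  menuData?: unknown;
}

// Subset of the browser Storage interface, so window.localStorage (or an
// in-memory stub in tests) can be passed in.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const TABS_KEY = "openTabs"; // hypothetical key

function loadTabs(storage: StorageLike): ITab[] {
  const raw = storage.getItem(TABS_KEY);
  return raw ? (JSON.parse(raw) as ITab[]) : [];
}

// Called from the layout's URL-change listener: append the tab for the new
// route if it is not already open, then persist the whole list.
function openTab(storage: StorageLike, tab: ITab): ITab[] {
  const tabs = loadTabs(storage);
  if (!tabs.some((t) => t.id === tab.id)) {
    tabs.push(tab);
    storage.setItem(TABS_KEY, JSON.stringify(tabs));
  }
  return tabs;
}
```

Accepting a `StorageLike` rather than reaching for `window.localStorage` directly keeps the tabs model testable outside a browser, which matches the issue's goal of a pluggable implementation.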
14,891
2,831,390,182
IssuesEvent
2015-05-24 15:55:02
nobodyguy/dslrdashboard
https://api.github.com/repos/nobodyguy/dslrdashboard
closed
nikon D 3200 no live view
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `f.ami...@gmail.com` on 2 Jul 2013 at 1:25
1.0
nikon D 3200 no live view - ``` What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `f.ami...@gmail.com` on 2 Jul 2013 at 1:25
defect
nikon d no live view what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by f ami gmail com on jul at
1
50,986
13,188,018,217
IssuesEvent
2020-08-13 05:19:13
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
Bug in timeshifter (Trac #1778)
Migrated from Trac cmake defect
We have a custom key in our simulation called Generation Spec, I receive the following error: ```text ERROR (I3Module): <class 'icecube.trigger_sim.modules.time_shifter.I3TimeShifter'>_0000: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "corsika.py", line 24, in <module> detsim.ExecuteOpts(stats) File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/ipmodule.py", line 219, in ExecuteOpts retval = self.Execute(stats) File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/modules/detectors.py", line 54, in Execute tray.Execute() File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/I3Tray.py", line 234, in Execute super(I3Tray, self).Execute() File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/trigger_sim/modules/time_shifter.py", line 55, in DAQ for trigger_hierarchy in [f for key,f in frame.items() \ RuntimeError: Frame caught exception "unregistered class" for key "GenerationSpec" of type MuonInjectionConfiguration ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1778">https://code.icecube.wisc.edu/ticket/1778</a>, reported by saxani and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2016-07-15T16:06:04", "description": "We have a custom key in our simulation called Generation Spec, I receive the following error:\n\n{{{\nERROR (I3Module): <class 'icecube.trigger_sim.modules.time_shifter.I3TimeShifter'>_0000: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"corsika.py\", line 24, in <module>\n detsim.ExecuteOpts(stats)\n File 
\"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/ipmodule.py\", line 219, in ExecuteOpts\n retval = self.Execute(stats)\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/modules/detectors.py\", line 54, in Execute\n tray.Execute()\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/I3Tray.py\", line 234, in Execute\n super(I3Tray, self).Execute()\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/trigger_sim/modules/time_shifter.py\", line 55, in DAQ\n for trigger_hierarchy in [f for key,f in frame.items() \\\nRuntimeError: Frame caught exception \"unregistered class\" for key \"GenerationSpec\" of type MuonInjectionConfiguration\n}}}", "reporter": "saxani", "cc": "", "resolution": "fixed", "_ts": "1468598764273384", "component": "cmake", "summary": "Bug in timeshifter", "priority": "normal", "keywords": "", "time": "2016-07-15T15:00:25", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
Bug in timeshifter (Trac #1778) - We have a custom key in our simulation called Generation Spec, I receive the following error: ```text ERROR (I3Module): <class 'icecube.trigger_sim.modules.time_shifter.I3TimeShifter'>_0000: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "corsika.py", line 24, in <module> detsim.ExecuteOpts(stats) File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/ipmodule.py", line 219, in ExecuteOpts retval = self.Execute(stats) File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/modules/detectors.py", line 54, in Execute tray.Execute() File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/I3Tray.py", line 234, in Execute super(I3Tray, self).Execute() File "/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/trigger_sim/modules/time_shifter.py", line 55, in DAQ for trigger_hierarchy in [f for key,f in frame.items() \ RuntimeError: Frame caught exception "unregistered class" for key "GenerationSpec" of type MuonInjectionConfiguration ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1778">https://code.icecube.wisc.edu/ticket/1778</a>, reported by saxani and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2016-07-15T16:06:04", "description": "We have a custom key in our simulation called Generation Spec, I receive the following error:\n\n{{{\nERROR (I3Module): <class 'icecube.trigger_sim.modules.time_shifter.I3TimeShifter'>_0000: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"corsika.py\", line 24, in <module>\n detsim.ExecuteOpts(stats)\n File 
\"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/ipmodule.py\", line 219, in ExecuteOpts\n retval = self.Execute(stats)\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/simprod/modules/detectors.py\", line 54, in Execute\n tray.Execute()\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/I3Tray.py\", line 234, in Execute\n super(I3Tray, self).Execute()\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v2/RHEL_6_x86_64/metaprojects/simulation/V05-00-00/lib/icecube/trigger_sim/modules/time_shifter.py\", line 55, in DAQ\n for trigger_hierarchy in [f for key,f in frame.items() \\\nRuntimeError: Frame caught exception \"unregistered class\" for key \"GenerationSpec\" of type MuonInjectionConfiguration\n}}}", "reporter": "saxani", "cc": "", "resolution": "fixed", "_ts": "1468598764273384", "component": "cmake", "summary": "Bug in timeshifter", "priority": "normal", "keywords": "", "time": "2016-07-15T15:00:25", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
bug in timeshifter trac we have a custom key in our simulation called generation spec i receive the following error text error exception thrown cxx in void do void traceback most recent call last file corsika py line in detsim executeopts stats file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube simprod ipmodule py line in executeopts retval self execute stats file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube simprod modules detectors py line in execute tray execute file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib py line in execute super self execute file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube trigger sim modules time shifter py line in daq for trigger hierarchy in f for key f in frame items runtimeerror frame caught exception unregistered class for key generationspec of type muoninjectionconfiguration migrated from json status closed changetime description we have a custom key in our simulation called generation spec i receive the following error n n nerror exception thrown cxx in void do void ntraceback most recent call last n file corsika py line in n detsim executeopts stats n file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube simprod ipmodule py line in executeopts n retval self execute stats n file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube simprod modules detectors py line in execute n tray execute n file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib py line in execute n super self execute n file cvmfs icecube opensciencegrid org rhel metaprojects simulation lib icecube trigger sim modules time shifter py line in daq n for trigger hierarchy in f for key f in frame items nruntimeerror frame caught exception unregistered class for key generationspec of type muoninjectionconfiguration n reporter saxani cc resolution fixed ts component cmake summary bug in 
timeshifter priority normal keywords time milestone owner olivas type defect
1
9,384
2,615,146,842
IssuesEvent
2015-03-01 06:22:59
chrsmith/html5rocks
https://api.github.com/repos/chrsmith/html5rocks
closed
Can't subscribe to RSS feed
auto-migrated Maintenance Priority-Low Type-Defect
``` Please describe the issue: I'm using a RSS app on my iPhone and already have some feeds. But cannot get the feed on www.html5rocks.com may need the direct RSS feed URL. Please provide any additional information below. ``` Original issue reported on code.google.com by `E.S.G0...@gmail.com` on 26 Aug 2010 at 9:08
1.0
Can't subscribe to RSS feed - ``` Please describe the issue: I'm using a RSS app on my iPhone and already have some feeds. But cannot get the feed on www.html5rocks.com may need the direct RSS feed URL. Please provide any additional information below. ``` Original issue reported on code.google.com by `E.S.G0...@gmail.com` on 26 Aug 2010 at 9:08
defect
can t subscribe to rss feed please describe the issue i m using a rss app on my iphone and already have some feeds but cannot get the feed on may need the direct rss feed url please provide any additional information below original issue reported on code google com by e s gmail com on aug at
1
35,316
7,697,830,223
IssuesEvent
2018-05-18 20:18:39
megandalster/bmac-warehouse
https://api.github.com/repos/megandalster/bmac-warehouse
reopened
fix format of inventory display table
Priority-Medium Type-Defect auto-migrated
``` show unit count, in addition to number of receipts and total weight. Same for shipments. ``` Original issue reported on code.google.com by `al...@bowdoin.edu` on 26 Apr 2015 at 6:12
1.0
fix format of inventory display table - ``` show unit count, in addition to number of receipts and total weight. Same for shipments. ``` Original issue reported on code.google.com by `al...@bowdoin.edu` on 26 Apr 2015 at 6:12
defect
fix format of inventory display table show unit count in addition to number of receipts and total weight same for shipments original issue reported on code google com by al bowdoin edu on apr at
1
72,601
24,197,560,541
IssuesEvent
2022-09-24 04:47:48
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
AdvancedExtruderGenerator Documentation Update to Cover Triple-Indexed Parameter
T: defect P: normal
## Bug Description `AdvancedExtruderGenerator` was recently updated to use a triple-indexed input parameter, but the documentation has not been updated accordingly. ## Steps to Reproduce The documentation page of `AdvancedExtruderGenerator` still uses the stacked double-indexed input parameter. ## Impact The documentation is confusing due to the inconsistency.
1.0
AdvancedExtruderGenerator Documentation Update to Cover Triple-Indexed Parameter - ## Bug Description `AdvancedExtruderGenerator` was recently updated to use a triple-indexed input parameter, but the documentation has not been updated accordingly. ## Steps to Reproduce The documentation page of `AdvancedExtruderGenerator` still uses the stacked double-indexed input parameter. ## Impact The documentation is confusing due to the inconsistency.
defect
advancedextrudergenerator documentation update to cover triple indexed parameter bug description advancedextrudergenerator was recently updated to use a triple indexed input parameter but the documentation has not been updated accordingly steps to reproduce the documentation page of advancedextrudergenerator still uses the stacked double indexed input parameter impact the documentation is confusing due to the inconsistency
1
212,691
16,493,388,296
IssuesEvent
2021-05-25 07:40:16
bmedicke/quantum_cryptography
https://api.github.com/repos/bmedicke/quantum_cryptography
closed
add docstrings for laser module
documentation
- [x] Evaluate documentation solutions (GH pages support, format) - [x] Sphynx (rst) - [x] ~~Pydoc~~ - [x] ~~Doxygen (doxypy)~~ - [x] document module
1.0
add docstrings for laser module - - [x] Evaluate documentation solutions (GH pages support, format) - [x] Sphynx (rst) - [x] ~~Pydoc~~ - [x] ~~Doxygen (doxypy)~~ - [x] document module
non_defect
add docstrings for laser module evaluate documentation solutions gh pages support format sphynx rst pydoc doxygen doxypy document module
0
8,118
2,611,453,103
IssuesEvent
2015-02-27 05:00:36
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Chat -> Info version
auto-migrated Priority-Low Type-Defect
``` What steps will reproduce the problem? 1. start game 2. enter in Offical Server 3. Click right mouse key on my self nick or other, who have version >=0.9.14 4. Click to "Info" 5. See - Nick [IP] Unknown [state] What is the expected output? What do you see instead? I see "Unknown" version of user client. I want see 0.9.14.1 :) What version of the product are you using? On what operating system? I use 0.9.14. OS Gentoo ``` Original issue reported on code.google.com by `longer...@gmail.com` on 14 Nov 2010 at 9:38
1.0
Chat -> Info version - ``` What steps will reproduce the problem? 1. start game 2. enter in Offical Server 3. Click right mouse key on my self nick or other, who have version >=0.9.14 4. Click to "Info" 5. See - Nick [IP] Unknown [state] What is the expected output? What do you see instead? I see "Unknown" version of user client. I want see 0.9.14.1 :) What version of the product are you using? On what operating system? I use 0.9.14. OS Gentoo ``` Original issue reported on code.google.com by `longer...@gmail.com` on 14 Nov 2010 at 9:38
defect
chat info version what steps will reproduce the problem start game enter in offical server click right mouse key on my self nick or other who have version click to info see nick unknown what is the expected output what do you see instead i see unknown version of user client i want see what version of the product are you using on what operating system i use os gentoo original issue reported on code google com by longer gmail com on nov at
1
84,272
16,478,106,470
IssuesEvent
2021-05-24 08:24:14
log2timeline/plaso
https://api.github.com/repos/log2timeline/plaso
closed
Double check if parser and plugins that process input in local time pass time zone to event
code health parsers testing
Per https://github.com/log2timeline/plaso/issues/3280 double check if parser and plugins that process input in local time pass time zone to event Also add unit tests to catch this
1.0
Double check if parser and plugins that process input in local time pass time zone to event - Per https://github.com/log2timeline/plaso/issues/3280 double check if parser and plugins that process input in local time pass time zone to event Also add unit tests to catch this
non_defect
double check if parser and plugins that process input in local time pass time zone to event per double check if parser and plugins that process input in local time pass time zone to event also add unit tests to catch this
0
300,065
25,943,732,119
IssuesEvent
2022-12-16 21:24:36
hashicorp/terraform-provider-google
https://api.github.com/repos/hashicorp/terraform-provider-google
opened
Failing test(s): TestAccContainer* location_policy permadiff
test failure
<!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. ---> <!-- i.e. "Consistently since X date" or "X% failure in MONTH" --> Failure rate: 100% since Dec 13 <!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. --> Impacted tests: - TestAccContainerCluster_withNodePoolAutoscaling - TestAccContainerNodePool_regionalAutoscaling - TestAccContainerNodePool_autoscaling <!-- Link to the nightly build(s), ideally with one impacted test opened --> Nightly builds: - [Link](https://ci-oss.hashicorp.engineering/project.html?projectId=GoogleCloud&testNameId=-5799556305855094305&tab=testDetails) <!-- The error message that displays in the tests tab, for reference --> Message: ``` provider_test.go:307: Step 1/6 error: After applying this test step, the plan was not empty. stdout: Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: ~ update in-place Terraform will perform the following actions: # google_container_cluster.with_node_pool will be updated in-place ~ resource "google_container_cluster" "with_node_pool" { id = "projects/ci-test-project-188019/locations/us-central1-a/clusters/tf-test-cluster-nodepool-ogxe7sdw1b" name = "tf-test-cluster-nodepool-ogxe7sdw1b" # (24 unchanged attributes hidden) ~ node_pool { name = "tf-test-cluster-nodepool-g1ubleuetg" # (7 unchanged attributes hidden) ~ autoscaling { - location_policy = "BALANCED" -> null # (4 unchanged attributes hidden) } # (4 unchanged blocks hidden) } # (15 unchanged blocks hidden) } Plan: 0 to add, 1 to change, 0 to destroy. --- FAIL: TestAccContainerCluster_withNodePoolAutoscaling (643.34s) ```
1.0
Failing test(s): TestAccContainer* location_policy permadiff - <!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. ---> <!-- i.e. "Consistently since X date" or "X% failure in MONTH" --> Failure rate: 100% since Dec 13 <!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. --> Impacted tests: - TestAccContainerCluster_withNodePoolAutoscaling - TestAccContainerNodePool_regionalAutoscaling - TestAccContainerNodePool_autoscaling <!-- Link to the nightly build(s), ideally with one impacted test opened --> Nightly builds: - [Link](https://ci-oss.hashicorp.engineering/project.html?projectId=GoogleCloud&testNameId=-5799556305855094305&tab=testDetails) <!-- The error message that displays in the tests tab, for reference --> Message: ``` provider_test.go:307: Step 1/6 error: After applying this test step, the plan was not empty. stdout: Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: ~ update in-place Terraform will perform the following actions: # google_container_cluster.with_node_pool will be updated in-place ~ resource "google_container_cluster" "with_node_pool" { id = "projects/ci-test-project-188019/locations/us-central1-a/clusters/tf-test-cluster-nodepool-ogxe7sdw1b" name = "tf-test-cluster-nodepool-ogxe7sdw1b" # (24 unchanged attributes hidden) ~ node_pool { name = "tf-test-cluster-nodepool-g1ubleuetg" # (7 unchanged attributes hidden) ~ autoscaling { - location_policy = "BALANCED" -> null # (4 unchanged attributes hidden) } # (4 unchanged blocks hidden) } # (15 unchanged blocks hidden) } Plan: 0 to add, 1 to change, 0 to destroy. --- FAIL: TestAccContainerCluster_withNodePoolAutoscaling (643.34s) ```
non_defect
failing test s testacccontainer location policy permadiff failure rate since dec impacted tests testacccontainercluster withnodepoolautoscaling testacccontainernodepool regionalautoscaling testacccontainernodepool autoscaling nightly builds message provider test go step error after applying this test step the plan was not empty stdout terraform used the selected providers to generate the following execution plan resource actions are indicated with the following symbols update in place terraform will perform the following actions google container cluster with node pool will be updated in place resource google container cluster with node pool id projects ci test project locations us a clusters tf test cluster nodepool name tf test cluster nodepool unchanged attributes hidden node pool name tf test cluster nodepool unchanged attributes hidden autoscaling location policy balanced null unchanged attributes hidden unchanged blocks hidden unchanged blocks hidden plan to add to change to destroy fail testacccontainercluster withnodepoolautoscaling
0
11,373
8,375,695,245
IssuesEvent
2018-10-05 17:16:26
tempesta-tech/tempesta
https://api.github.com/repos/tempesta-tech/tempesta
closed
Sticky cookie module can redirect user to insecure protocol or other server
bug crucial security
As noticed in https://github.com/tempesta-tech/tempesta/pull/1047#discussion_r210718508 `Location: ` header in redirect response is filled with _http_ protocol. The module doesn't honour original client protocol. E.g.: 1. A client opens secure connection (_httpS_); 2. The client sends a request; 3. Sticky cookie module redirects the client to the same resource, but protocol is hardcoded to be _http_; 4. The client opens a new insecure connection and requests the same resource. This means that the sticky module breaks secure connections. Thus I mark the issue as crucial. How to reproduce: 1. Configure tempesta to use both HTTP and HTTPS: ``` server 127.0.0.1:8080; listen 80; listen 443 proto=https; tls_certificate /etc/tfw-root.crt; tls_certificate_key /etc/tfw-root.key; ``` 2. Use curl to send a request: ``` curl -vkL -b cookies.txt https://192.168.122.12/ ``` 3. Use wireshark to see the packet flow. Not only protocol selection is broken. If non-default port is used, it's not added to the `Location:` header. E.g. `GET http://natsys-lab.com:8080/cgi-bin/show.pl HTTP/1.1\r\n` request will be redirected to `http://natsys-lab.com/cgi-bin/show.pl`. Nobody guaranties `natsys-lab.com:8080` and `natsys-lab.com` to be the same resource.
True
Sticky cookie module can redirect user to insecure protocol or other server - As noticed in https://github.com/tempesta-tech/tempesta/pull/1047#discussion_r210718508 `Location: ` header in redirect response is filled with _http_ protocol. The module doesn't honour original client protocol. E.g.: 1. A client opens secure connection (_httpS_); 2. The client sends a request; 3. Sticky cookie module redirects the client to the same resource, but protocol is hardcoded to be _http_; 4. The client opens a new insecure connection and requests the same resource. This means that the sticky module breaks secure connections. Thus I mark the issue as crucial. How to reproduce: 1. Configure tempesta to use both HTTP and HTTPS: ``` server 127.0.0.1:8080; listen 80; listen 443 proto=https; tls_certificate /etc/tfw-root.crt; tls_certificate_key /etc/tfw-root.key; ``` 2. Use curl to send a request: ``` curl -vkL -b cookies.txt https://192.168.122.12/ ``` 3. Use wireshark to see the packet flow. Not only protocol selection is broken. If non-default port is used, it's not added to the `Location:` header. E.g. `GET http://natsys-lab.com:8080/cgi-bin/show.pl HTTP/1.1\r\n` request will be redirected to `http://natsys-lab.com/cgi-bin/show.pl`. Nobody guaranties `natsys-lab.com:8080` and `natsys-lab.com` to be the same resource.
non_defect
sticky cookie module can redirect user to insecure protocol or other server as noticed in location header in redirect response is filled with http protocol the module doesn t honour original client protocol e g a client opens secure connection https the client sends a request sticky cookie module redirects the client to the same resource but protocol is hardcoded to be http the client opens a new insecure connection and requests the same resource this means that the sticky module breaks secure connections thus i mark the issue as crucial how to reproduce configure tempesta to use both http and https server listen listen proto https tls certificate etc tfw root crt tls certificate key etc tfw root key use curl to send a request curl vkl b cookies txt use wireshark to see the packet flow not only protocol selection is broken if non default port is used it s not added to the location header e g get http r n request will be redirected to nobody guaranties natsys lab com and natsys lab com to be the same resource
0
79,673
15,253,538,518
IssuesEvent
2021-02-20 08:10:56
creativecommons/creativecommons.github.io-source
https://api.github.com/repos/creativecommons/creativecommons.github.io-source
opened
[BUG] Image not fit properly in the article's panel
💻 aspect: code 🚦 status: awaiting triage 🛠 goal: fix 🟧 priority: high
**What is the growth opportunity you want to see solved?** [The organization](https://creativecommons.org/) is the most easily accessible and is considered as the most reliable [source](https://www.smartinsights.com/user-experience/website-design/what-is-the-importance-of-web-design-for-your-audience/#:~:text=A%20well%2Ddesigned%20website%20can,navigate%20your%20website%20with%20ease.). ![image](https://user-images.githubusercontent.com/48223671/108588159-bc2b6980-737d-11eb-8fc2-b378ff16e195.png) The red marked square shows the bug, I am talking about. The complete logo is not visible, the upper and bottom corners are cropped in the image **How do you know that this problem exists today? Why is this important?** After visiting [Creative Commons](https://creativecommons.org/), I realized that one bug caught my attention more than the complete website and then carried out having a [Questionnaire testing](https://www.pewresearch.org/methods/u-s-survey-research/questionnaire-design/) on the same to classmates. **How to measure design's effectiveness?** [A/B testing](https://www.optimizely.com/optimization-glossary/ab-testing/) - A quick A/B with my acquaintances (who cover major sections of people using the internet) with a high-fidelity version.
1.0
[BUG] Image not fit properly in the article's panel - **What is the growth opportunity you want to see solved?** [The organization](https://creativecommons.org/) is the most easily accessible and is considered as the most reliable [source](https://www.smartinsights.com/user-experience/website-design/what-is-the-importance-of-web-design-for-your-audience/#:~:text=A%20well%2Ddesigned%20website%20can,navigate%20your%20website%20with%20ease.). ![image](https://user-images.githubusercontent.com/48223671/108588159-bc2b6980-737d-11eb-8fc2-b378ff16e195.png) The red marked square shows the bug, I am talking about. The complete logo is not visible, the upper and bottom corners are cropped in the image **How do you know that this problem exists today? Why is this important?** After visiting [Creative Commons](https://creativecommons.org/), I realized that one bug caught my attention more than the complete website and then carried out having a [Questionnaire testing](https://www.pewresearch.org/methods/u-s-survey-research/questionnaire-design/) on the same to classmates. **How to measure design's effectiveness?** [A/B testing](https://www.optimizely.com/optimization-glossary/ab-testing/) - A quick A/B with my acquaintances (who cover major sections of people using the internet) with a high-fidelity version.
non_defect
image not fit properly in the article s panel what is the growth opportunity you want to see solved is the most easily accessible and is considered as the most reliable the red marked square shows the bug i am talking about the complete logo is not visible the upper and bottom corners are cropped in the image how do you know that this problem exists today why is this important after visiting i realized that one bug caught my attention more than the complete website and then carried out having a on the same to classmates how to measure design s effectiveness a quick a b with my acquaintances who cover major sections of people using the internet with a high fidelity version
0
321,531
9,805,102,332
IssuesEvent
2019-06-12 08:11:34
Dacaspex/Exhibit
https://api.github.com/repos/Dacaspex/Exhibit
closed
Make entire settings row clickable
enhancement low-priority
This can probably be achieved with `for=` property on the `html` tag
1.0
Make entire settings row clickable - This can probably be achieved with `for=` property on the `html` tag
non_defect
make entire settings row clickable this can probably be achieved with for property on the html tag
0
88,288
8,137,502,579
IssuesEvent
2018-08-20 12:02:39
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
pkg/apis/core/validation/validation_test.go need a spliting and refactoring
area/test kind/cleanup lifecycle/rotten sig/api-machinery
1) The file with tests is huge (and the `validation.go` too): ```console $ wc -l pkg/apis/core/validation/validation_test.go 12227 pkg/apis/core/validation/validation_test.go $ wc -l pkg/apis/core/validation/validation.go 4929 pkg/apis/core/validation/validation.go ``` I'd suggest to split them to a set of files. 2) Many tests could continue to work even if the code is broken. In many places we have a check like this: `len(errs) == 0` that always succeed if at least one, not matter which one, error is present. We should test for the expected error message instead. Related to https://github.com/kubernetes/kubernetes/issues/56230 @thockin WDYT? If you agree, I'll probably could start to slowly refactor them. /kind cleanup /area test
1.0
pkg/apis/core/validation/validation_test.go need a spliting and refactoring - 1) The file with tests is huge (and the `validation.go` too): ```console $ wc -l pkg/apis/core/validation/validation_test.go 12227 pkg/apis/core/validation/validation_test.go $ wc -l pkg/apis/core/validation/validation.go 4929 pkg/apis/core/validation/validation.go ``` I'd suggest to split them to a set of files. 2) Many tests could continue to work even if the code is broken. In many places we have a check like this: `len(errs) == 0` that always succeed if at least one, not matter which one, error is present. We should test for the expected error message instead. Related to https://github.com/kubernetes/kubernetes/issues/56230 @thockin WDYT? If you agree, I'll probably could start to slowly refactor them. /kind cleanup /area test
non_defect
pkg apis core validation validation test go need a spliting and refactoring the file with tests is huge and the validation go too console wc l pkg apis core validation validation test go pkg apis core validation validation test go wc l pkg apis core validation validation go pkg apis core validation validation go i d suggest to split them to a set of files many tests could continue to work even if the code is broken in many places we have a check like this len errs that always succeed if at least one not matter which one error is present we should test for the expected error message instead related to thockin wdyt if you agree i ll probably could start to slowly refactor them kind cleanup area test
0
152,444
23,974,240,933
IssuesEvent
2022-09-13 10:12:37
Budibase/budibase
https://api.github.com/repos/Budibase/budibase
closed
Date Range is in the wrong component list - should be Data not Form.
bug design sev3 - substantial
**Hosting** - Self - Method: docker compose - Budibase Version: 1.0.206 - App Version: 1.0.206 **Describe the bug** Date Range is in the wrong component category/list. **Expected behavior** Date Range should not be in the Form component list/category because it can be used outside of a Form and has nothing to do with forms. It is related to Data - it's a Data Provider filter! **Screenshots** ![Screenshot 2022-06-28 at 18 08 31](https://user-images.githubusercontent.com/101575380/176242040-e9223318-d4c0-4dc5-bf34-a89f2b1e506b.png)
1.0
Date Range is in the wrong component list - should be Data not Form. - **Hosting** - Self - Method: docker compose - Budibase Version: 1.0.206 - App Version: 1.0.206 **Describe the bug** Date Range is in the wrong component category/list. **Expected behavior** Date Range should not be in the Form component list/category because it can be used outside of a Form and has nothing to do with forms. It is related to Data - it's a Data Provider filter! **Screenshots** ![Screenshot 2022-06-28 at 18 08 31](https://user-images.githubusercontent.com/101575380/176242040-e9223318-d4c0-4dc5-bf34-a89f2b1e506b.png)
non_defect
date range is in the wrong component list should be data not form hosting self method docker compose budibase version app version describe the bug date range is in the wrong component category list expected behavior date range should not be in the form component list category because it can be used outside of a form and has nothing to do with forms it is related to data it s a data provider filter screenshots
0
28,066
5,172,101,943
IssuesEvent
2017-01-18 12:28:59
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Button label updates icon instead of label.
defect
``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` When label update after view initialization, the setter updates icon insead of label. ``` set label(val: string) { this._label = val; if(this.initialized) { this.domHandler.findSingle(this.el.nativeElement, '.ui-c').textContent = this._label; } } ``` The css selector should be `.ui-button-text` and not `.ui-c`;
1.0
Button label updates icon instead of label. - ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` When label update after view initialization, the setter updates icon insead of label. ``` set label(val: string) { this._label = val; if(this.initialized) { this.domHandler.findSingle(this.el.nativeElement, '.ui-c').textContent = this._label; } } ``` The css selector should be `.ui-button-text` and not `.ui-c`;
defect
button label updates icon instead of label bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see when label update after view initialization the setter updates icon insead of label set label val string this label val if this initialized this domhandler findsingle this el nativeelement ui c textcontent this label the css selector should be ui button text and not ui c
1
342,164
30,610,634,979
IssuesEvent
2023-07-23 15:06:58
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix reduction_ops.test_torch_unique
PyTorch Frontend Sub Task Failing Test
| | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a>
1.0
Fix reduction_ops.test_torch_unique - | | | |---|---| |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5637066968"><img src=https://img.shields.io/badge/-failure-red></a>
non_defect
fix reduction ops test torch unique jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
0
563,173
16,677,014,728
IssuesEvent
2021-06-07 17:30:21
googleapis/java-aiplatform
https://api.github.com/repos/googleapis/java-aiplatform
closed
aiplatform.CreateTrainingPipelineVideoClassificationSampleTest: testCreateTrainingPipelineVideoClassificationSample failed
:rotating_light: api: aiplatform flakybot: flaky flakybot: issue priority: p1 type: bug
Note: #274 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: d27e6bf744470176c6c086c1b6b0fa5999c3b9f3 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f9bb3d8a-2879-4dae-85dc-e0b8ed0ddb7e), [Sponge](http://sponge2/f9bb3d8a-2879-4dae-85dc-e0b8ed0ddb7e) status: failed <details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/7949829708634390528" is in state PIPELINE_STATE_FAILED and cannot be canceled. at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:59) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041) at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215) at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983) at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771) at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563) at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533) at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463) at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427) at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460) at 
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553) at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57) at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112) at com.google.cloud.aiplatform.v1.PipelineServiceClient.cancelTrainingPipeline(PipelineServiceClient.java:765) at com.google.cloud.aiplatform.v1.PipelineServiceClient.cancelTrainingPipeline(PipelineServiceClient.java:696) at aiplatform.CancelTrainingPipelineSample.cancelTrainingPipelineSample(CancelTrainingPipelineSample.java:51) at aiplatform.CreateTrainingPipelineVideoClassificationSampleTest.tearDown(CreateTrainingPipelineVideoClassificationSampleTest.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/7949829708634390528" is in state PIPELINE_STATE_FAILED and cannot be canceled. at io.grpc.Status.asRuntimeException(Status.java:535) ... 17 more </pre></details>
1.0
aiplatform.CreateTrainingPipelineVideoClassificationSampleTest: testCreateTrainingPipelineVideoClassificationSample failed - Note: #274 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: d27e6bf744470176c6c086c1b6b0fa5999c3b9f3 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f9bb3d8a-2879-4dae-85dc-e0b8ed0ddb7e), [Sponge](http://sponge2/f9bb3d8a-2879-4dae-85dc-e0b8ed0ddb7e) status: failed <details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/7949829708634390528" is in state PIPELINE_STATE_FAILED and cannot be canceled. at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:59) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041) at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215) at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983) at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771) at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563) at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533) at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463) at 
io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427) at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460) at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553) at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57) at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112) at com.google.cloud.aiplatform.v1.PipelineServiceClient.cancelTrainingPipeline(PipelineServiceClient.java:765) at com.google.cloud.aiplatform.v1.PipelineServiceClient.cancelTrainingPipeline(PipelineServiceClient.java:696) at aiplatform.CancelTrainingPipelineSample.cancelTrainingPipelineSample(CancelTrainingPipelineSample.java:51) at aiplatform.CreateTrainingPipelineVideoClassificationSampleTest.tearDown(CreateTrainingPipelineVideoClassificationSampleTest.java:69) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158) at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/7949829708634390528" is in state PIPELINE_STATE_FAILED and cannot be canceled. at io.grpc.Status.asRuntimeException(Status.java:535) ... 17 more </pre></details>
non_defect
aiplatform createtrainingpipelinevideoclassificationsampletest testcreatetrainingpipelinevideoclassificationsample failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output com google api gax rpc failedpreconditionexception io grpc statusruntimeexception failed precondition the trainingpipeline projects ucaip sample tests locations us trainingpipelines is in state pipeline state failed and cannot be canceled at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl 
clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax rpc asynctaskexception asynchronous task failed at com google api gax rpc apiexceptions callandtranslateapiexception apiexceptions java at com google api gax rpc unarycallable call unarycallable java at com google cloud aiplatform pipelineserviceclient canceltrainingpipeline pipelineserviceclient java at com google cloud aiplatform pipelineserviceclient canceltrainingpipeline pipelineserviceclient java at aiplatform canceltrainingpipelinesample canceltrainingpipelinesample canceltrainingpipelinesample java at aiplatform createtrainingpipelinevideoclassificationsampletest teardown createtrainingpipelinevideoclassificationsampletest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runafters invokemethod runafters java at org 
junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by io grpc statusruntimeexception failed precondition the trainingpipeline projects ucaip sample tests locations us trainingpipelines is in state pipeline state failed and cannot be canceled at io grpc status asruntimeexception status java more
0
159,401
20,048,385,399
IssuesEvent
2022-02-03 01:11:40
kapseliboi/token-wizard
https://api.github.com/repos/kapseliboi/token-wizard
opened
WS-2019-0064 (High) detected in handlebars-4.0.11.tgz
security vulnerability
## WS-2019-0064 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - jest-20.0.4.tgz (Root Library) - jest-cli-20.0.4.tgz - istanbul-api-1.2.1.tgz - istanbul-reports-1.1.3.tgz - :x: **handlebars-4.0.11.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.0.14 are vulnerable to Prototype Pollution. Templates may alter an Objects' prototype, thus allowing an attacker to execute arbitrary code on the server. <p>Publish Date: 2019-01-30 <p>URL: <a href=https://github.com/wycats/handlebars.js/compare/v4.1.1...v4.1.2>WS-2019-0064</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/755/">https://www.npmjs.com/advisories/755/</a></p> <p>Release Date: 2019-01-30</p> <p>Fix Resolution (handlebars): 4.0.14</p> <p>Direct dependency fix Resolution (jest): 20.1.0-alpha.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0064 (High) detected in handlebars-4.0.11.tgz - ## WS-2019-0064 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - jest-20.0.4.tgz (Root Library) - jest-cli-20.0.4.tgz - istanbul-api-1.2.1.tgz - istanbul-reports-1.1.3.tgz - :x: **handlebars-4.0.11.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.0.14 are vulnerable to Prototype Pollution. Templates may alter an Objects' prototype, thus allowing an attacker to execute arbitrary code on the server. <p>Publish Date: 2019-01-30 <p>URL: <a href=https://github.com/wycats/handlebars.js/compare/v4.1.1...v4.1.2>WS-2019-0064</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/755/">https://www.npmjs.com/advisories/755/</a></p> <p>Release Date: 2019-01-30</p> <p>Fix Resolution (handlebars): 4.0.14</p> <p>Direct dependency fix Resolution (jest): 20.1.0-alpha.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file package json path to vulnerable library node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library found in base branch master vulnerability details versions of handlebars prior to are vulnerable to prototype pollution templates may alter an objects prototype thus allowing an attacker to execute arbitrary code on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars direct dependency fix resolution jest alpha step up your open source security game with whitesource
0
2,471
2,607,904,204
IssuesEvent
2015-02-26 00:14:55
chrsmithdemos/zen-coding
https://api.github.com/repos/chrsmithdemos/zen-coding
closed
Zen-Coding Hotkeys not work in Aptana (Ubuntu 10.04)
auto-migrated Priority-Medium Type-Defect
``` What is the expected output? What do you see instead? Hotkeys should be worked, but hotkeys are not worked. In menu not displayed hotkeys below scripts commands. What version of the product are you using? On what operating system? OS: Ubuntu 10.04 x86_64, Eclipse 3.5.2 + Aptana plugin 2.0.4.1268103253-7D- 777iQHT4-dI1Pln5ui Please provide any additional information below. Changing hotkeys - not help :( See attach file. Sorry for my English. ``` ----- Original issue reported on code.google.com by `leoli...@gmail.com` on 6 May 2010 at 12:34 Attachments: * [screenshot_005.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-165/comment-0/screenshot_005.png)
1.0
Zen-Coding Hotkeys not work in Aptana (Ubuntu 10.04) - ``` What is the expected output? What do you see instead? Hotkeys should be worked, but hotkeys are not worked. In menu not displayed hotkeys below scripts commands. What version of the product are you using? On what operating system? OS: Ubuntu 10.04 x86_64, Eclipse 3.5.2 + Aptana plugin 2.0.4.1268103253-7D- 777iQHT4-dI1Pln5ui Please provide any additional information below. Changing hotkeys - not help :( See attach file. Sorry for my English. ``` ----- Original issue reported on code.google.com by `leoli...@gmail.com` on 6 May 2010 at 12:34 Attachments: * [screenshot_005.png](https://storage.googleapis.com/google-code-attachments/zen-coding/issue-165/comment-0/screenshot_005.png)
defect
zen coding hotkeys not work in aptana ubuntu what is the expected output what do you see instead hotkeys should be worked but hotkeys are not worked in menu not displayed hotkeys below scripts commands what version of the product are you using on what operating system os ubuntu eclipse aptana plugin please provide any additional information below changing hotkeys not help see attach file sorry for my english original issue reported on code google com by leoli gmail com on may at attachments
1
761,538
26,684,971,302
IssuesEvent
2023-01-26 21:03:40
GoogleCloudPlatform/golang-samples
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
closed
bigquery/bigquery_migration_quickstart: TestApp failed
type: bug priority: p1 api: bigquery samples flakybot: issue
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 4226690ccc343022b414ac38faa39a0a3550929f buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b6d1113d-ee02-4f92-a72f-9a84ba441481), [Sponge](http://sponge2/b6d1113d-ee02-4f92-a72f-9a84ba441481) status: failed <details><summary>Test output</summary><br><pre> main_test.go:47: execution failed: signal: killed main_test.go:52: Did not find expected output. Stdout: </pre></details>
1.0
bigquery/bigquery_migration_quickstart: TestApp failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 4226690ccc343022b414ac38faa39a0a3550929f buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b6d1113d-ee02-4f92-a72f-9a84ba441481), [Sponge](http://sponge2/b6d1113d-ee02-4f92-a72f-9a84ba441481) status: failed <details><summary>Test output</summary><br><pre> main_test.go:47: execution failed: signal: killed main_test.go:52: Did not find expected output. Stdout: </pre></details>
non_defect
bigquery bigquery migration quickstart testapp failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output main test go execution failed signal killed main test go did not find expected output stdout
0
6,567
2,610,256,996
IssuesEvent
2015-02-26 19:22:02
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳激光祛痘印痘坑
auto-migrated Priority-Medium Type-Defect
``` 深圳激光祛痘印痘坑【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:38
1.0
深圳激光祛痘印痘坑 - ``` 深圳激光祛痘印痘坑【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:38
defect
深圳激光祛痘印痘坑 深圳激光祛痘印痘坑【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
1
39,816
9,665,609,267
IssuesEvent
2019-05-21 08:54:56
primefaces/primereact
https://api.github.com/repos/primefaces/primereact
closed
DataTable: alwaysShowPaginator prop not used
defect
**I'm submitting a ...** ``` [ ] bug report [x] feature request [ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57 ``` **Current behavior** <!-- Describe how the bug manifests. --> The `alwaysShowPaginator` prop is defined but is not considered when rendering the paginator, therefore not giving expected behavior when changed. **Expected behavior** When `alwaysShowPaginator` is set to `false` it should hide paginator if there is only one page to show.
1.0
DataTable: alwaysShowPaginator prop not used - **I'm submitting a ...** ``` [ ] bug report [x] feature request [ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57 ``` **Current behavior** <!-- Describe how the bug manifests. --> The `alwaysShowPaginator` prop is defined but is not considered when rendering the paginator, therefore not giving expected behavior when changed. **Expected behavior** When `alwaysShowPaginator` is set to `false` it should hide paginator if there is only one page to show.
defect
datatable alwaysshowpaginator prop not used i m submitting a bug report feature request support request please do not submit support request here instead see current behavior the alwaysshowpaginator prop is defined but is not considered when rendering the paginator therefore not giving expected behavior when changed expected behavior when alwaysshowpaginator is set to false it should hide paginator if there is only one page to show
1
80,889
30,585,061,300
IssuesEvent
2023-07-21 12:46:46
cf-convention/cf-convention.github.io
https://api.github.com/repos/cf-convention/cf-convention.github.io
closed
Updated a broken link to UDUNITS examples of how to format units strings
defect
@larsbarring has corrected a broken link in the FAQ to UDUNITS. His [pull request](https://github.com/cf-convention/cf-convention.github.io/pull/371) will be merged if no-one objects in three weeks. Thanks, Lars.
1.0
Updated a broken link to UDUNITS examples of how to format units strings - @larsbarring has corrected a broken link in the FAQ to UDUNITS. His [pull request](https://github.com/cf-convention/cf-convention.github.io/pull/371) will be merged if no-one objects in three weeks. Thanks, Lars.
defect
updated a broken link to udunits examples of how to format units strings larsbarring has corrected a broken link in the faq to udunits his will be merged if no one objects in three weeks thanks lars
1
11,186
2,641,730,629
IssuesEvent
2015-03-11 19:24:44
chrsmith/html5rocks
https://api.github.com/repos/chrsmith/html5rocks
closed
slides: Slide Position wrong after Tab Inputfield change
Milestone-3 Priority-Low Slides Type-Defect
Original [issue 88](https://code.google.com/p/html5rocks/issues/detail?id=88) created by chrsmith on 2010-07-28T21:33:52.000Z: Reported by sebastian.janzen, Apr 20, 2010 <b>What steps will reproduce the problem?</b> 1. Go to slide 21 2. Set focus (klick) on &quot;Search inside&quot;-input field 3. Press Tab <b>What is the expected output? What do you see instead?</b> Expecting transition to the text foil (22). But I'm between 21 and 22 now, until I slide to the last foil. There the diff is being corrected. What version of the product are you using? On what operating system? Chrome 5.0.342.9 beta on OS X 10.6.3 Addition: My Viewport after that: http://img.janzen.it/e33d70822f2475e22df99c07932055ae.png Comment 1 by dkpassion1, Jun 17, 2010 Yes, I am getting also same unwanted effects, hope attention required! Thanks Comment 2 by jon_bachelor@me.com, Jun 24, 2010 Same issue for me... using FireFox 3.6.4 on OS X 10.6.4.
1.0
slides: Slide Position wrong after Tab Inputfield change - Original [issue 88](https://code.google.com/p/html5rocks/issues/detail?id=88) created by chrsmith on 2010-07-28T21:33:52.000Z: Reported by sebastian.janzen, Apr 20, 2010 <b>What steps will reproduce the problem?</b> 1. Go to slide 21 2. Set focus (klick) on &quot;Search inside&quot;-input field 3. Press Tab <b>What is the expected output? What do you see instead?</b> Expecting transition to the text foil (22). But I'm between 21 and 22 now, until I slide to the last foil. There the diff is being corrected. What version of the product are you using? On what operating system? Chrome 5.0.342.9 beta on OS X 10.6.3 Addition: My Viewport after that: http://img.janzen.it/e33d70822f2475e22df99c07932055ae.png Comment 1 by dkpassion1, Jun 17, 2010 Yes, I am getting also same unwanted effects, hope attention required! Thanks Comment 2 by jon_bachelor@me.com, Jun 24, 2010 Same issue for me... using FireFox 3.6.4 on OS X 10.6.4.
defect
slides slide position wrong after tab inputfield change original created by chrsmith on reported by sebastian janzen apr what steps will reproduce the problem go to slide set focus klick on quot search inside quot input field press tab what is the expected output what do you see instead expecting transition to the text foil but i m between and now until i slide to the last foil there the diff is being corrected what version of the product are you using on what operating system chrome beta on os x addition my viewport after that comment by jun yes i am getting also same unwanted effects hope attention required thanks comment by jon bachelor me com jun same issue for me using firefox on os x
1
290,385
25,062,248,423
IssuesEvent
2022-11-07 03:38:46
milvus-io/milvus
https://api.github.com/repos/milvus-io/milvus
closed
[Bug]: [benchmark][standalone]Milvus load collection failed,raise an error"collection has not been loaded to memory or load failed"
kind/bug triage/accepted stale test/benchmark
### Is there an existing issue for this? - [X] I have searched the existing issues ### Environment ```markdown - Milvus version:2.1.0-20220823-0c695347 - Deployment mode(standalone or cluster):stanalone - SDK version(e.g. pymilvus v2.0.0rc2):2.1.2dev2 - OS(Ubuntu or CentOS): - CPU/Memory: - GPU: - Others: ``` ### Current Behavior server-instance fouram-znxhg-5 server-configmap server-single-16c64m client-configmap client-search-filter-sift50m-ivf-flat-2048 sever: ``` fouram-znxhg-5-etcd-0 1/1 Running 0 2m2s 10.104.1.161 4am-node10 <none> <none> fouram-znxhg-5-milvus-standalone-56b684b5dc-zhvcw 1/1 Running 0 2m2s 10.104.4.60 4am-node11 <none> <none> fouram-znxhg-5-minio-fb879c796-2wbzb 1/1 Running 0 2m2s 10.104.4.61 4am-node11 <none> <none> ``` log: ``` [2022-08-25 10:58:52,901] [ INFO] - Start load collection (milvus_benchmark.runners.search:293) [2022-08-25 11:04:48,342] [ ERROR] - RPC error: [wait_for_loading_collection], <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)>, <Time:{'RPC start': '2022-08-25 10:58:53.198939', 'RPC error': '2022-08-25 11:04:48.341908'}> (pymilvus.decorators:95) [2022-08-25 11:04:48,342] [ ERROR] - RPC error: [load_collection], <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)>, <Time:{'RPC start': '2022-08-25 10:58:52.901199', 'RPC error': '2022-08-25 11:04:48.342846'}> (pymilvus.decorators:95) [2022-08-25 11:04:48,343] [ ERROR] - <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)> (milvus_benchmark.main:118) [2022-08-25 11:04:48,345] [ ERROR] - Traceback (most recent call last): File "main.py", line 87, in run_suite runner.prepare(**cases[0]) File "/src/milvus_benchmark/runners/search.py", line 295, in prepare self.milvus.load_collection(replica_number=replica_number, timeout=1200) File "/src/milvus_benchmark/client.py", line 52, in wrapper result = func(*args, 
**kwargs) File "/src/milvus_benchmark/client.py", line 581, in load_collection return self._milvus.load_collection(collection_name, **params) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/stub.py", line 195, in load_collection return handler.load_collection(collection_name=collection_name, replica_number=replica_number, timeout=timeout, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 92, in handler return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 74, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler return func(self, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 673, in load_collection self.wait_for_loading_collection(collection_name, timeout) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 92, in handler return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 74, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler return func(self, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 682, in wait_for_loading_collection return self._wait_for_loading_collection(collection_name, timeout) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 711, in _wait_for_loading_collection progress = self.get_collection_loading_progress(collection_name, timeout) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 690, in get_collection_loading_progress raise MilvusException(response.status.error_code, response.status.reason) 
pymilvus.exceptions.MilvusException: <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)> (milvus_benchmark.main:119) ``` standalone: <img width="1384" alt="截屏2022-08-26 10 47 18" src="https://user-images.githubusercontent.com/34296482/186807060-030d0fed-654b-4284-9318-e11494cbc4a6.png"> ### Expected Behavior _No response_ ### Steps To Reproduce _No response_ ### Milvus Log _No response_ ### Anything else? client-search-filter-sift50m-ivf-flat-2048 ``` insert_search_performance: collections: - milvus: db_config.primary_path: /test/milvus/distribued/sift_50m_128_l2_ivf_flat wal_enable: true collection_name: sift_50m_128_l2 ni_per: 50000 other_fields: int1,int2,float1,double1 build_index: true index_type: ivf_flat index_param: nlist: 2048 run_count: 2 top_ks: [1, 10, 100, 1000] nqs: [1, 10, 100, 200, 500, 1000, 1200] filters: - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.1}}}" - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.5}}}" - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.9}}}" search_params: - nprobe: 8 - nprobe: 32 ```
1.0
[Bug]: [benchmark][standalone]Milvus load collection failed,raise an error"collection has not been loaded to memory or load failed" - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Environment ```markdown - Milvus version:2.1.0-20220823-0c695347 - Deployment mode(standalone or cluster):stanalone - SDK version(e.g. pymilvus v2.0.0rc2):2.1.2dev2 - OS(Ubuntu or CentOS): - CPU/Memory: - GPU: - Others: ``` ### Current Behavior server-instance fouram-znxhg-5 server-configmap server-single-16c64m client-configmap client-search-filter-sift50m-ivf-flat-2048 sever: ``` fouram-znxhg-5-etcd-0 1/1 Running 0 2m2s 10.104.1.161 4am-node10 <none> <none> fouram-znxhg-5-milvus-standalone-56b684b5dc-zhvcw 1/1 Running 0 2m2s 10.104.4.60 4am-node11 <none> <none> fouram-znxhg-5-minio-fb879c796-2wbzb 1/1 Running 0 2m2s 10.104.4.61 4am-node11 <none> <none> ``` log: ``` [2022-08-25 10:58:52,901] [ INFO] - Start load collection (milvus_benchmark.runners.search:293) [2022-08-25 11:04:48,342] [ ERROR] - RPC error: [wait_for_loading_collection], <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)>, <Time:{'RPC start': '2022-08-25 10:58:53.198939', 'RPC error': '2022-08-25 11:04:48.341908'}> (pymilvus.decorators:95) [2022-08-25 11:04:48,342] [ ERROR] - RPC error: [load_collection], <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)>, <Time:{'RPC start': '2022-08-25 10:58:52.901199', 'RPC error': '2022-08-25 11:04:48.342846'}> (pymilvus.decorators:95) [2022-08-25 11:04:48,343] [ ERROR] - <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)> (milvus_benchmark.main:118) [2022-08-25 11:04:48,345] [ ERROR] - Traceback (most recent call last): File "main.py", line 87, in run_suite runner.prepare(**cases[0]) File "/src/milvus_benchmark/runners/search.py", line 295, in prepare 
self.milvus.load_collection(replica_number=replica_number, timeout=1200) File "/src/milvus_benchmark/client.py", line 52, in wrapper result = func(*args, **kwargs) File "/src/milvus_benchmark/client.py", line 581, in load_collection return self._milvus.load_collection(collection_name, **params) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/stub.py", line 195, in load_collection return handler.load_collection(collection_name=collection_name, replica_number=replica_number, timeout=timeout, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 92, in handler return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 74, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler return func(self, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 673, in load_collection self.wait_for_loading_collection(collection_name, timeout) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 96, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 92, in handler return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 74, in handler raise e File "/usr/local/lib/python3.8/dist-packages/pymilvus/decorators.py", line 48, in handler return func(self, *args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 682, in wait_for_loading_collection return self._wait_for_loading_collection(collection_name, timeout) File "/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 711, in _wait_for_loading_collection progress = self.get_collection_loading_progress(collection_name, timeout) File 
"/usr/local/lib/python3.8/dist-packages/pymilvus/client/grpc_handler.py", line 690, in get_collection_loading_progress raise MilvusException(response.status.error_code, response.status.reason) pymilvus.exceptions.MilvusException: <MilvusException: (code=1, message=collection sift_50m_128_l2 has not been loaded to memory or load failed)> (milvus_benchmark.main:119) ``` standalone: <img width="1384" alt="截屏2022-08-26 10 47 18" src="https://user-images.githubusercontent.com/34296482/186807060-030d0fed-654b-4284-9318-e11494cbc4a6.png"> ### Expected Behavior _No response_ ### Steps To Reproduce _No response_ ### Milvus Log _No response_ ### Anything else? client-search-filter-sift50m-ivf-flat-2048 ``` insert_search_performance: collections: - milvus: db_config.primary_path: /test/milvus/distribued/sift_50m_128_l2_ivf_flat wal_enable: true collection_name: sift_50m_128_l2 ni_per: 50000 other_fields: int1,int2,float1,double1 build_index: true index_type: ivf_flat index_param: nlist: 2048 run_count: 2 top_ks: [1, 10, 100, 1000] nqs: [1, 10, 100, 200, 500, 1000, 1200] filters: - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.1}}}" - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.5}}}" - range: "{'range': {'float1': {'GT': -1.0, 'LT': collection_size * 0.9}}}" search_params: - nprobe: 8 - nprobe: 32 ```
non_defect
milvus load collection failed raise an error collection has not been loaded to memory or load failed is there an existing issue for this i have searched the existing issues environment markdown milvus version deployment mode standalone or cluster stanalone sdk version e g pymilvus os ubuntu or centos cpu memory gpu others current behavior server instance fouram znxhg server configmap server single client configmap client search filter ivf flat sever fouram znxhg etcd running fouram znxhg milvus standalone zhvcw running fouram znxhg minio running log start load collection milvus benchmark runners search rpc error pymilvus decorators rpc error pymilvus decorators milvus benchmark main traceback most recent call last file main py line in run suite runner prepare cases file src milvus benchmark runners search py line in prepare self milvus load collection replica number replica number timeout file src milvus benchmark client py line in wrapper result func args kwargs file src milvus benchmark client py line in load collection return self milvus load collection collection name params file usr local lib dist packages pymilvus client stub py line in load collection return handler load collection collection name collection name replica number replica number timeout timeout kwargs file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func args kwargs file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func self args kwargs file usr local lib dist packages pymilvus client grpc handler py line in load collection self wait for loading collection collection name timeout file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func args kwargs file usr local lib dist 
packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func self args kwargs file usr local lib dist packages pymilvus client grpc handler py line in wait for loading collection return self wait for loading collection collection name timeout file usr local lib dist packages pymilvus client grpc handler py line in wait for loading collection progress self get collection loading progress collection name timeout file usr local lib dist packages pymilvus client grpc handler py line in get collection loading progress raise milvusexception response status error code response status reason pymilvus exceptions milvusexception milvus benchmark main standalone img width alt src expected behavior no response steps to reproduce no response milvus log no response anything else client search filter ivf flat insert search performance collections milvus db config primary path test milvus distribued sift ivf flat wal enable true collection name sift ni per other fields build index true index type ivf flat index param nlist run count top ks nqs filters range range gt lt collection size range range gt lt collection size range range gt lt collection size search params nprobe nprobe
0
58,708
16,717,954,190
IssuesEvent
2021-06-10 01:12:22
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
508-defect-2 [SCREENREADER]: On this page navigation should have a unique accessible label
508-defect-2 508-issue-screenreader 508/Accessibility vsa-content-localization vsa-language assistance & resources
# [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2) <!-- Enter an issue title using the format [ERROR TYPE]: Brief description of the problem --- [SCREENREADER]: Edit buttons need aria-label for context [KEYBOARD]: Add another user link will not receive keyboard focus [AXE-CORE]: Heading levels should increase by one [COGNITION]: Error messages should be more specific [COLOR]: Blue button on blue background does not have sufficient contrast ratio --- --> <!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. --> ## Feedback framework - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Definition of done 1. Review and acknowledge feedback. 1. Fix and/or document decisions made. 1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix. ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** Josh ## User Story or Problem Statement As a screenreader user who uses landmarks to make sense of a page's structure, I want them to be uniquely labelled if there are any duplicate landmarks so I can tell the difference between them. For example, I'd like to read "On this page navigation" instead of "navigation" so I know it will jump me to the on this page navigation specifically and not the breadcrumb or footer. ## Details If a page includes more than one navigation landmark, [each should have a unique label](https://www.w3.org/TR/wai-aria-practices/examples/landmarks/navigation.html). This helps folks with screen readers tell them apart when using a landmarks list. 
## Acceptance Criteria - [ ] On this page `nav` has `aria-labelledby="on-this-page"` for all footer language pages ## Steps to Recreate Using VoiceOver on Safari, open the rotor using `opt` + `cmd` + `u` Navigate to landmarks Confirm that there are two generic navigation landmarks ## Proposed Solution Simply add `aria-labelledby="on-this-page"` to the "En esta pagina" `nav` ## WCAG or Vendor Guidance https://www.w3.org/TR/wai-aria-practices/examples/landmarks/navigation.html
1.0
508-defect-2 [SCREENREADER]: On this page navigation should have a unique accessible label - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2) <!-- Enter an issue title using the format [ERROR TYPE]: Brief description of the problem --- [SCREENREADER]: Edit buttons need aria-label for context [KEYBOARD]: Add another user link will not receive keyboard focus [AXE-CORE]: Heading levels should increase by one [COGNITION]: Error messages should be more specific [COLOR]: Blue button on blue background does not have sufficient contrast ratio --- --> <!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. --> ## Feedback framework - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Definition of done 1. Review and acknowledge feedback. 1. Fix and/or document decisions made. 1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix. ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** Josh ## User Story or Problem Statement As a screenreader user who uses landmarks to make sense of a page's structure, I want them to be uniquely labelled if there are any duplicate landmarks so I can tell the difference between them. For example, I'd like to read "On this page navigation" instead of "navigation" so I know it will jump me to the on this page navigation specifically and not the breadcrumb or footer. ## Details If a page includes more than one navigation landmark, [each should have a unique label](https://www.w3.org/TR/wai-aria-practices/examples/landmarks/navigation.html). 
This helps folks with screen readers tell them apart when using a landmarks list. ## Acceptance Criteria - [ ] On this page `nav` has `aria-labelledby="on-this-page"` for all footer language pages ## Steps to Recreate Using VoiceOver on Safari, open the rotor using `opt` + `cmd` + `u` Navigate to landmarks Confirm that there are two generic navigation landmarks ## Proposed Solution Simply add `aria-labelledby="on-this-page"` to the "En esta pagina" `nav` ## WCAG or Vendor Guidance https://www.w3.org/TR/wai-aria-practices/examples/landmarks/navigation.html
defect
defect on this page navigation should have a unique accessible label enter an issue title using the format brief description of the problem edit buttons need aria label for context add another user link will not receive keyboard focus heading levels should increase by one error messages should be more specific blue button on blue background does not have sufficient contrast ratio feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact josh user story or problem statement as a screenreader user who uses landmarks to make sense of a page s structure i want them to be uniquely labelled if there are any duplicate landmarks so i can tell the difference between them for example i d like to read on this page navigation instead of navigation so i know it will jump me to the on this page navigation specifically and not the breadcrumb or footer details if a page includes more than one navigation landmark this helps folks with screen readers tell them apart when using a landmarks list acceptance criteria on this page nav has aria labelledby on this page for all footer language pages steps to recreate using voiceover on safari open the rotor using opt cmd u navigate to landmarks confirm that there are two generic navigation landmarks proposed solution simply add aria labelledby on this page to the en esta pagina nav wcag or vendor guidance
1
755,562
26,432,749,659
IssuesEvent
2023-01-15 01:42:51
nikolaystrikhar/gutenberg-forms
https://api.github.com/repos/nikolaystrikhar/gutenberg-forms
closed
Add Preview Content to all Form Blocks
enhancement low-priority
We need to define preview content so that it is reviewable in the inserter and also makes the style picker nice. https://developer.wordpress.org/block-editor/developers/block-api/block-registration/#example-optional Nice article with examples. https://mediaron.com/how-to-enable-gutenberg-block-previews/ P.S. This will also fix #138
1.0
Add Preview Content to all Form Blocks - We need to define preview content so that it is reviewable in the inserter and also makes the style picker nice. https://developer.wordpress.org/block-editor/developers/block-api/block-registration/#example-optional Nice article with examples. https://mediaron.com/how-to-enable-gutenberg-block-previews/ P.S. This will also fix #138
non_defect
add preview content to all form blocks we need to define preview content so that it is reviewable in the inserter and also makes the style picker nice nice article with examples p s this will also fix
0
2,852
2,607,962,810
IssuesEvent
2015-02-26 00:40:59
chrsmithdemos/leveldb
https://api.github.com/repos/chrsmithdemos/leveldb
closed
File format docs don't specify endianness
auto-migrated Priority-Medium Type-Defect
``` doc/{table,log}_format.txt do not specify that the following are little-endian: doc/table_format.txt: magic fixed64 doc/log_format.txt: checksum uint32 length uint16 The table_format doc also mentions varint64, in addition to fixed64. It may be worth linking to http://code.google.com/apis/protocolbuffers/docs/encoding.html to describe the varint format. ``` ----- Original issue reported on code.google.com by `nigel...@google.com` on 28 Jul 2011 at 3:25
1.0
File format docs don't specify endianness - ``` doc/{table,log}_format.txt do not specify that the following are little-endian: doc/table_format.txt: magic fixed64 doc/log_format.txt: checksum uint32 length uint16 The table_format doc also mentions varint64, in addition to fixed64. It may be worth linking to http://code.google.com/apis/protocolbuffers/docs/encoding.html to describe the varint format. ``` ----- Original issue reported on code.google.com by `nigel...@google.com` on 28 Jul 2011 at 3:25
defect
file format docs don t specify endianness doc table log format txt do not specify that the following are little endian doc table format txt magic doc log format txt checksum length the table format doc also mentions in addition to it may be worth linking to to describe the varint format original issue reported on code google com by nigel google com on jul at
1
54,749
13,911,159,434
IssuesEvent
2020-10-20 17:00:31
Alfresco/alfresco-php-sdk
https://api.github.com/repos/Alfresco/alfresco-php-sdk
closed
PHP sends incorrect complexType
Priority-Medium Type-Defect auto-migrated
``` I'm not sure if schemas have changed since the release of the 2.1 php library or are different in enterprise vs community or what but the format the alfresco php library is sending a particular complexType is incorrect for our repository. The 'nodes' choice of the cms:Predicate type needs to be a single array and the 'store' and 'uuid' elements be in another array inside of the nodes array. I don't know if I got all of them but I found the problem in three places within the alfresco php library, attached are the sections of code in their fixed state. I'm not really sure how this would be working for anyone unless the version of php soap we have serializes into XML differently than other peoples, not sure why that would be though. ``` Original issue reported on code.google.com by `rwether...@gmail.com` on 25 Nov 2011 at 5:56 Attachments: - [alfresco-php-library-fixes.txt](https://storage.googleapis.com/google-code-attachments/alfresco-php-sdk/issue-5/comment-0/alfresco-php-library-fixes.txt) - [alfreso-php-library-fixes-2.txt](https://storage.googleapis.com/google-code-attachments/alfresco-php-sdk/issue-5/comment-0/alfreso-php-library-fixes-2.txt)
1.0
PHP sends incorrect complexType - ``` I'm not sure if schemas have changed since the release of the 2.1 php library or are different in enterprise vs community or what but the format the alfresco php library is sending a particular complexType is incorrect for our repository. The 'nodes' choice of the cms:Predicate type needs to be a single array and the 'store' and 'uuid' elements be in another array inside of the nodes array. I don't know if I got all of them but I found the problem in three places within the alfresco php library, attached are the sections of code in their fixed state. I'm not really sure how this would be working for anyone unless the version of php soap we have serializes into XML differently than other peoples, not sure why that would be though. ``` Original issue reported on code.google.com by `rwether...@gmail.com` on 25 Nov 2011 at 5:56 Attachments: - [alfresco-php-library-fixes.txt](https://storage.googleapis.com/google-code-attachments/alfresco-php-sdk/issue-5/comment-0/alfresco-php-library-fixes.txt) - [alfreso-php-library-fixes-2.txt](https://storage.googleapis.com/google-code-attachments/alfresco-php-sdk/issue-5/comment-0/alfreso-php-library-fixes-2.txt)
defect
php sends incorrect complextype i m not sure if schemas have changed since the release of the php library or are different in enterprise vs community or what but the format the alfresco php library is sending a particular complextype is incorrect for our repository the nodes choice of the cms predicate type needs to be a single array and the store and uuid elements be in another array inside of the nodes array i don t know if i got all of them but i found the problem in three places within the alfresco php library attached are the sections of code in their fixed state i m not really sure how this would be working for anyone unless the version of php soap we have serializes into xml differently than other peoples not sure why that would be though original issue reported on code google com by rwether gmail com on nov at attachments
1
217,917
7,328,991,275
IssuesEvent
2018-03-05 01:52:33
BuckleScript/bucklescript
https://api.github.com/repos/BuckleScript/bucklescript
closed
Defining or using a module named "Block" or "Curry" causes runtime errors when certain features are used
PRIORITY:HIGH bug
Possible solutions: - Mangle reserved module names - Emit compile time error when encountering reserved module names - Rename internal modules to something much less likely to conflict with user-defined module names Repros: https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAISiAxgazgXjgZwC4BOEKecMAdmAFB4CeADvKdgJLmkgBmcAlu3AB84AZUJ8A5nC64x5cVVikAHljhtSAFgBMVIA ```ml module Block = struct end type t = Int of int | String of string let x = Int 42 ``` https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAYQgJ2QTzgXjgZwC7IQDGecMAdmAFBWykCGADo1BgGZwAeWVccdcAPpY4AKRwA6KCADmACgBEOAJZh4MNmxgkFASjjLycDp15xaMUsOxMWGcVNlwATEA ```ml module Curry = struct end let apply f x = let _ = Js.log("side effect") in f x let _ = apply Js.log 2 ```
1.0
Defining or using a module named "Block" or "Curry" causes runtime errors when certain features are used - Possible solutions: - Mangle reserved module names - Emit compile time error when encountering reserved module names - Rename internal modules to something much less likely to conflict with user-defined module names Repros: https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAISiAxgazgXjgZwC4BOEKecMAdmAFB4CeADvKdgJLmkgBmcAlu3AB84AZUJ8A5nC64x5cVVikAHljhtSAFgBMVIA ```ml module Block = struct end type t = Int of int | String of string let x = Int 42 ``` https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAYQgJ2QTzgXjgZwC7IQDGecMAdmAFBWykCGADo1BgGZwAeWVccdcAPpY4AKRwA6KCADmACgBEOAJZh4MNmxgkFASjjLycDp15xaMUsOxMWGcVNlwATEA ```ml module Curry = struct end let apply f x = let _ = Js.log("side effect") in f x let _ = apply Js.log 2 ```
non_defect
defining or using a module named block or curry causes runtime errors when certain features are used possible solutions mangle reserved module names emit compile time error when encountering reserved module names rename internal modules to something much less likely to conflict with user defined module names repros ml module block struct end type t int of int string of string let x int ml module curry struct end let apply f x let js log side effect in f x let apply js log
0
81,586
31,070,797,238
IssuesEvent
2023-08-12 00:08:51
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
`checkbashisms` errors out on a compiled working dir
Type: Defect Status: Stale
### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Debian Distribution Version | 11 Kernel Version | 5.10.0-13-amd64 Architecture | x86_64 OpenZFS Version | ed715283 ### Describe the problem you're observing `checkstyle` errors 100% of the time on a compiled working tree, because: ``` $ make -s checkstyle all-debug.sh all-syslog.sh data-notify.sh generic-notify.sh resilver_finish-notify.sh scrub_finish-notify.sh statechange-led.sh statechange-notify.sh vdev_clear-led.sh vdev_attach-led.sh pool_import-led.sh resilver_finish-start-scrub.sh trim_finish-notify.sh history_event-zfs-list-cacher.sh zed-functions.sh zed.rc all-debug.sh all-syslog.sh data-notify.sh generic-notify.sh resilver_finish-notify.sh scrub_finish-notify.sh statechange-led.sh statechange-notify.sh vdev_clear-led.sh vdev_attach-led.sh pool_import-led.sh resilver_finish-start-scrub.sh trim_finish-notify.sh history_event-zfs-list-cacher.sh zed-functions.sh zed.rc zfs could not find any possible bashisms in bash script module-setup.sh make[3]: *** [Makefile:957: checkbashisms] Error 1 make[2]: *** [Makefile:987: checkbashisms] Error 2 make[1]: *** [Makefile:988: checkbashisms] Error 2 make: *** [Makefile:1379: checkbashisms] Error 2 ``` ### Describe how to reproduce the problem Above. ### Include any warning/errors/backtraces from the system logs
1.0
`checkbashisms` errors out on a compiled working dir - ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Debian Distribution Version | 11 Kernel Version | 5.10.0-13-amd64 Architecture | x86_64 OpenZFS Version | ed715283 ### Describe the problem you're observing `checkstyle` errors 100% of the time on a compiled working tree, because: ``` $ make -s checkstyle all-debug.sh all-syslog.sh data-notify.sh generic-notify.sh resilver_finish-notify.sh scrub_finish-notify.sh statechange-led.sh statechange-notify.sh vdev_clear-led.sh vdev_attach-led.sh pool_import-led.sh resilver_finish-start-scrub.sh trim_finish-notify.sh history_event-zfs-list-cacher.sh zed-functions.sh zed.rc all-debug.sh all-syslog.sh data-notify.sh generic-notify.sh resilver_finish-notify.sh scrub_finish-notify.sh statechange-led.sh statechange-notify.sh vdev_clear-led.sh vdev_attach-led.sh pool_import-led.sh resilver_finish-start-scrub.sh trim_finish-notify.sh history_event-zfs-list-cacher.sh zed-functions.sh zed.rc zfs could not find any possible bashisms in bash script module-setup.sh make[3]: *** [Makefile:957: checkbashisms] Error 1 make[2]: *** [Makefile:987: checkbashisms] Error 2 make[1]: *** [Makefile:988: checkbashisms] Error 2 make: *** [Makefile:1379: checkbashisms] Error 2 ``` ### Describe how to reproduce the problem Above. ### Include any warning/errors/backtraces from the system logs
defect
checkbashisms errors out on a compiled working dir system information type version name distribution name debian distribution version kernel version architecture openzfs version describe the problem you re observing checkstyle errors of the time on a compiled working tree because make s checkstyle all debug sh all syslog sh data notify sh generic notify sh resilver finish notify sh scrub finish notify sh statechange led sh statechange notify sh vdev clear led sh vdev attach led sh pool import led sh resilver finish start scrub sh trim finish notify sh history event zfs list cacher sh zed functions sh zed rc all debug sh all syslog sh data notify sh generic notify sh resilver finish notify sh scrub finish notify sh statechange led sh statechange notify sh vdev clear led sh vdev attach led sh pool import led sh resilver finish start scrub sh trim finish notify sh history event zfs list cacher sh zed functions sh zed rc zfs could not find any possible bashisms in bash script module setup sh make error make error make error make error describe how to reproduce the problem above include any warning errors backtraces from the system logs
1
44,037
11,924,342,105
IssuesEvent
2020-04-01 09:22:12
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Change Tooltip disabled doesn't close it
defect
**I'm submitting a ...** (check one with "x") ``` [X] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** https://github-bgwszf.stackblitz.io (https://stackblitz.com/edit/github-bgwszf) **Current behavior** Changing the Tooltip to disabled when it's opened doesn't close it. **Expected behavior** Tooltip should be reactive to the disabled property. If opened and changed to disabled should close it. If disabled and then changed to enabled, the Tooltip should open. **Minimal reproduction of the problem with instructions** - Open https://github-bgwszf.stackblitz.io - Focus on input - Start typing (When start typing de tooltip should close because the disabled is changed, but it keep openend). * **Angular version:** 8.X <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 8.X <!-- Check whether this is still an issue in the most recent Angular version -->
1.0
Change Tooltip disabled doesn't close it - **I'm submitting a ...** (check one with "x") ``` [X] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** https://github-bgwszf.stackblitz.io (https://stackblitz.com/edit/github-bgwszf) **Current behavior** Changing the Tooltip to disabled when it's opened doesn't close it. **Expected behavior** Tooltip should be reactive to the disabled property. If opened and changed to disabled should close it. If disabled and then changed to enabled, the Tooltip should open. **Minimal reproduction of the problem with instructions** - Open https://github-bgwszf.stackblitz.io - Focus on input - Start typing (When start typing de tooltip should close because the disabled is changed, but it keep openend). * **Angular version:** 8.X <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 8.X <!-- Check whether this is still an issue in the most recent Angular version -->
defect
change tooltip disabled doesn t close it i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior changing the tooltip to disabled when it s opened doesn t close it expected behavior tooltip should be reactive to the disabled property if opened and changed to disabled should close it if disabled and then changed to enabled the tooltip should open minimal reproduction of the problem with instructions open focus on input start typing when start typing de tooltip should close because the disabled is changed but it keep openend angular version x primeng version x
1
23,814
3,851,868,055
IssuesEvent
2016-04-06 05:29:04
GPF/imame4all
https://api.github.com/repos/GPF/imame4all
closed
Feature Request: Multiplayer
auto-migrated Priority-Medium Type-Defect
``` I would really like to see the ability to configure multiple players especially since a lot of the Android tablets support multiple USB ports or multiple controllers through Bluetooth. ``` Original issue reported on code.google.com by `j...@zerojay.com` on 21 Sep 2011 at 11:17
1.0
Feature Request: Multiplayer - ``` I would really like to see the ability to configure multiple players especially since a lot of the Android tablets support multiple USB ports or multiple controllers through Bluetooth. ``` Original issue reported on code.google.com by `j...@zerojay.com` on 21 Sep 2011 at 11:17
defect
feature request multiplayer i would really like to see the ability to configure multiple players especially since a lot of the android tablets support multiple usb ports or multiple controllers through bluetooth original issue reported on code google com by j zerojay com on sep at
1
45,152
12,603,178,762
IssuesEvent
2020-06-11 13:04:21
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
DynamicDialog and Dialog Closed by clear button on drop down or chips inside
defect
**I'm submitting a ...** (check one with "x") ``` [x ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please demonstrate your case at stackblitz by using the issue template below. Issues without a test case have much less possibility to be reviewd in detail and assisted. https://stackblitz.com/edit/primeng-dynamicdialog-dropdown-issue **Current behavior** If there is a drop down with showClear=true inside opened dynamic dialog. click of clear buton on the drop down will close the dynamic dialog as well. **Expected behavior** This is not happen in previous 8. I have to set closeable=false on dyanmic dialog open parameters to prevent this. looks like the dynamic dialog catch the click event from the drop down clear button and treat it as the close button of dynamic dialog. **Please tell us about your environment:** Stakblitz * **Angular version:** 9.0.4 <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 9.1.0 <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** [Chrome 83] <!-- All browsers where this could be reproduced --> * **Language:** [TypeScript 3.8.3] * **Node (for AoT issues):** `node --version` = v10.16.3
1.0
DynamicDialog and Dialog Closed by clear button on drop down or chips inside - **I'm submitting a ...** (check one with "x") ``` [x ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please demonstrate your case at stackblitz by using the issue template below. Issues without a test case have much less possibility to be reviewed in detail and assisted. https://stackblitz.com/edit/primeng-dynamicdialog-dropdown-issue **Current behavior** If there is a drop down with showClear=true inside an opened dynamic dialog, a click of the clear button on the drop down will close the dynamic dialog as well. **Expected behavior** This did not happen in previous version 8. I have to set closeable=false on dynamic dialog open parameters to prevent this. It looks like the dynamic dialog catches the click event from the drop down clear button and treats it as the close button of the dynamic dialog. **Please tell us about your environment:** Stackblitz * **Angular version:** 9.0.4 <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 9.1.0 <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** [Chrome 83] <!-- All browsers where this could be reproduced --> * **Language:** [TypeScript 3.8.3] * **Node (for AoT issues):** `node --version` = v10.16.3
defect
dynamicdialog and dialog closed by clear button on drop down or chips inside i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please demonstrate your case at stackblitz by using the issue template below issues without a test case have much less possibility to be reviewd in detail and assisted current behavior if there is a drop down with showclear true inside opened dynamic dialog click of clear buton on the drop down will close the dynamic dialog as well expected behavior this is not happen in previous i have to set closeable false on dyanmic dialog open parameters to prevent this looks like the dynamic dialog catch the click event from the drop down clear button and treat it as the close button of dynamic dialog please tell us about your environment stakblitz angular version primeng version browser language node for aot issues node version
1
36,802
8,139,261,310
IssuesEvent
2018-08-20 17:07:29
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
Wrongs transfer to additional section
auth defect response needed
- Program: Authoritative - Issue type: Bug report ### Short description PowerDNS sends incorrect answers in response to A requests on second NS record. ### Environment - Operating system: debian stretch - Software version: pdns_server 4.0.3-1 , pdns_pipe_backend 4.0.3-1 - Software source: debian stretch default apt sources ### Steps to reproduce 1. Configure pdns server to run with pipe backend on 2 machines with names DNS1.SOME.ORG and DNS2.SOME.ORG 1. Delegate SOME.ORG to that 2 machines via glue records. 1. Use settings ``` out-of-zone-additional-processing=no ``` 1. Use SOA answer to SOME.ORG zone like this: ``` DNS1.SOME.ORG. SUPPORT.SOME.ORG. 2017091101 14400 3600 604800 3600 ``` 1. Run that in production. See that in 1 case of 1000 something going wrong. Clients cannot resolve domain name DNS2.SOME.ORG, saying "invalid hostname". 1. Run `tcpdump udp port 53 -vv -n | grep DNS2.SOME.ORG` on any of dns server machines. 1. See occasional strange UDP packets that PowerDNS sends with "0/0/1" which mean that PowerDNS sends requested information in Additional Section instead of Answer section. ### Example of normal answer ``` 144.76.71.50.53 > 188.138.40.20.39327: [] 31207*- q: A? dns2.lact.ru. 1/0/0 dns2.lact.ru. A 144.76.71.50 (46) ``` /// 1/0/0 means 1 answer, 0 ns records, 0 additional ### Example of wrong answer ``` s439.pingdom.com.59231 > data3.domain: [] 34100+ A? dns2.lact.ru. (30) data3.domain > s439.pingdom.com.59231: [] 34100*- q: A? dns2.lact.ru. 0/0/1 ar: dns2.lact.ru. A 144.76.71.50 (46) ``` // 0/0/1 (0 answers, 0 ns, 1 ar=1 additional record) ### Expected behaviour PowerDNS should send answers in Answers section. ### Actual behaviour DNS clients/recursors see Answer section empty and responds with "invalid hostname", which means that they failed to get ip address. ### Other information * The problem arises only with DNS2.SOME.ORG. * If you disable query cache via `query-cache-ttl=0` option everything becomes OK. 
* Maybe the problem grows from this code: https://github.com/PowerDNS/pdns/blob/rec-4.0.3/pdns/packethandler.cc#L428 * The stated behavior is very ugly. DNS problems which occurs rarely brings puzzle.
1.0
Wrongs transfer to additional section - - Program: Authoritative - Issue type: Bug report ### Short description PowerDNS sends incorrect answers in response to A requests on second NS record. ### Environment - Operating system: debian stretch - Software version: pdns_server 4.0.3-1 , pdns_pipe_backend 4.0.3-1 - Software source: debian stretch default apt sources ### Steps to reproduce 1. Configure pdns server to run with pipe backend on 2 machines with names DNS1.SOME.ORG and DNS2.SOME.ORG 1. Delegate SOME.ORG to that 2 machines via glue records. 1. Use settings ``` out-of-zone-additional-processing=no ``` 1. Use SOA answer to SOME.ORG zone like this: ``` DNS1.SOME.ORG. SUPPORT.SOME.ORG. 2017091101 14400 3600 604800 3600 ``` 1. Run that in production. See that in 1 case of 1000 something going wrong. Clients cannot resolve domain name DNS2.SOME.ORG, saying "invalid hostname". 1. Run `tcpdump udp port 53 -vv -n | grep DNS2.SOME.ORG` on any of dns server machines. 1. See occasional strange UDP packets that PowerDNS sends with "0/0/1" which mean that PowerDNS sends requested information in Additional Section instead of Answer section. ### Example of normal answer ``` 144.76.71.50.53 > 188.138.40.20.39327: [] 31207*- q: A? dns2.lact.ru. 1/0/0 dns2.lact.ru. A 144.76.71.50 (46) ``` /// 1/0/0 means 1 answer, 0 ns records, 0 additional ### Example of wrong answer ``` s439.pingdom.com.59231 > data3.domain: [] 34100+ A? dns2.lact.ru. (30) data3.domain > s439.pingdom.com.59231: [] 34100*- q: A? dns2.lact.ru. 0/0/1 ar: dns2.lact.ru. A 144.76.71.50 (46) ``` // 0/0/1 (0 answers, 0 ns, 1 ar=1 additional record) ### Expected behaviour PowerDNS should send answers in Answers section. ### Actual behaviour DNS clients/recursors see Answer section empty and responds with "invalid hostname", which means that they failed to get ip address. ### Other information * The problem arises only with DNS2.SOME.ORG. 
* If you disable query cache via `query-cache-ttl=0` option everything becomes OK. * Maybe the problem grows from this code: https://github.com/PowerDNS/pdns/blob/rec-4.0.3/pdns/packethandler.cc#L428 * The stated behavior is very ugly. DNS problems which occurs rarely brings puzzle.
defect
wrongs transfer to additional section program authoritative issue type bug report short description powerdns sends incorrect answers in response to a requests on second ns record environment operating system debian stretch software version pdns server pdns pipe backend software source debian stretch default apt sources steps to reproduce configure pdns server to run with pipe backend on machines with names some org and some org delegate some org to that machines via glue records use settings out of zone additional processing no use soa answer to some org zone like this some org support some org run that in production see that in case of something going wrong clients cannot resolve domain name some org saying invalid hostname run tcpdump udp port vv n grep some org on any of dns server machines see occasional strange udp packets that powerdns sends with which mean that powerdns sends requested information in additional section instead of answer section example of normal answer q a lact ru lact ru a means answer ns records additional example of wrong answer pingdom com domain a lact ru domain pingdom com q a lact ru ar lact ru a answers ns ar additional record expected behaviour powerdns should send answers in answers section actual behaviour dns clients recursors see answer section empty and responds with invalid hostname which means that they failed to get ip address other information the problem arises only with some org if you disable query cache via query cache ttl option everything becomes ok maybe the problem grows from this code the stated behavior is very ugly dns problems which occurs rarely brings puzzle
1
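The symptom in the record above can be restated with the same "answers/ns/additional" counts that tcpdump prints (1/0/0 vs 0/0/1). A small, hypothetical helper — not PowerDNS code — makes the distinction explicit:

```typescript
// Classify a DNS reply by its section counts, mirroring the tcpdump
// notation in the report: 1/0/0 is a normal answer, 0/0/1 is the buggy
// reply with the record only in the Additional section.

interface ReplyCounts { ancount: number; nscount: number; arcount: number; }

function classifyAReply(r: ReplyCounts): "ok" | "misplaced" | "empty" {
  if (r.ancount > 0) return "ok";        // record in the Answer section
  if (r.arcount > 0) return "misplaced"; // record only in Additional:
                                         // clients see "no answer"
  return "empty";
}

console.log(classifyAReply({ ancount: 1, nscount: 0, arcount: 0 })); // "ok"
console.log(classifyAReply({ ancount: 0, nscount: 0, arcount: 1 })); // "misplaced"
```

A resolver only reads the queried record from the Answer section, which is why the 0/0/1 replies surface as "invalid hostname" even though the address data is physically present in the packet.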
39,658
9,604,317,829
IssuesEvent
2019-05-10 19:38:06
CenturyLinkCloud/mdw
https://api.github.com/repos/CenturyLinkCloud/mdw
closed
Default Package is not recreated upon cache refresh
defect
When running inflight process instances/activities, the engine uses the defaultPackage static object, but when cache is refreshed (i.e. asset import), we do not create a new defaultPackage, which leads to continuing to use the same class loaders, so classes previously loaded by those classloaders will not reload/recompile classes.
1.0
Default Package is not recreated upon cache refresh - When running inflight process instances/activities, the engine uses the defaultPackage static object, but when cache is refreshed (i.e. asset import), we do not create a new defaultPackage, which leads to continuing to use the same class loaders, so classes previously loaded by those classloaders will not reload/recompile classes.
defect
default package is not recreated upon cache refresh when running inflight process instances activities the engine uses the defaultpackage static object but when cache is refreshed i e asset import we do not create a new defaultpackage which leads to continuing to use the same class loaders so classes previously loaded by those classloaders will not reload recompile classes
1
46,916
13,056,002,035
IssuesEvent
2020-07-30 03:21:28
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
[IceHive] - HiveCleaning::BuildLookUpTables segfaults with IceACT in geometry (Trac #2134)
Incomplete Migration Migrated from Trac combo reconstruction defect
Migrated from https://code.icecube.wisc.edu/ticket/2134 ```json { "status": "closed", "changetime": "2019-02-13T14:15:13", "description": "IceHive segfaults when IceACT is included in the geometry. It assumes the strings are >0 when calculating the hash function: http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/IceHive/trunk/public/IceHive/OMKeyHash.h#L64\n\nsee:\nhttps://icecube-spno.slack.com/archives/C02KQL9KN/p1518116966000686\n\n", "reporter": "kjmeagher", "cc": "mzoll", "resolution": "fixed", "_ts": "1550067313248429", "component": "combo reconstruction", "summary": "[IceHive] - HiveCleaning::BuildLookUpTables segfaults with IceACT in geometry", "priority": "critical", "keywords": "segfault icehive hivesplitter", "time": "2018-02-08T19:42:57", "milestone": "", "owner": "kkrings", "type": "defect" } ```
1.0
[IceHive] - HiveCleaning::BuildLookUpTables segfaults with IceACT in geometry (Trac #2134) - Migrated from https://code.icecube.wisc.edu/ticket/2134 ```json { "status": "closed", "changetime": "2019-02-13T14:15:13", "description": "IceHive segfaults when IceACT is included in the geometry. It assumes the strings are >0 when calculating the hash function: http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/IceHive/trunk/public/IceHive/OMKeyHash.h#L64\n\nsee:\nhttps://icecube-spno.slack.com/archives/C02KQL9KN/p1518116966000686\n\n", "reporter": "kjmeagher", "cc": "mzoll", "resolution": "fixed", "_ts": "1550067313248429", "component": "combo reconstruction", "summary": "[IceHive] - HiveCleaning::BuildLookUpTables segfaults with IceACT in geometry", "priority": "critical", "keywords": "segfault icehive hivesplitter", "time": "2018-02-08T19:42:57", "milestone": "", "owner": "kkrings", "type": "defect" } ```
defect
hivecleaning buildlookuptables segfaults with iceact in geometry trac migrated from json status closed changetime description icehive segfaults when iceact is included in the geometry it assumes the strings are when calculating the hash function reporter kjmeagher cc mzoll resolution fixed ts component combo reconstruction summary hivecleaning buildlookuptables segfaults with iceact in geometry priority critical keywords segfault icehive hivesplitter time milestone owner kkrings type defect
1
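The crash described in the record above comes from a hash that assumes string numbers are positive. The general shape can be sketched as follows; the bound and names are assumptions for illustration, not the actual OMKeyHash code:

```typescript
// A packed (string, om) hash that assumes string numbers start at 1
// yields negative, out-of-range indices for non-positive string numbers
// (as with IceACT in the geometry). Sketch only.

const MAX_OM = 64; // assumed upper bound on OMs per string, for this sketch

function hashUnchecked(stringNum: number, om: number): number {
  return (stringNum - 1) * MAX_OM + (om - 1); // negative when stringNum <= 0
}

// Guarded variant: fail loudly instead of segfaulting on a bad index.
function hashChecked(stringNum: number, om: number): number {
  if (stringNum < 1 || om < 1 || om > MAX_OM) {
    throw new RangeError(`OMKey (${stringNum},${om}) outside the lookup table`);
  }
  return hashUnchecked(stringNum, om);
}

console.log(hashUnchecked(1, 1)); // 0: a valid table index
console.log(hashUnchecked(0, 1)); // -64: would index outside the table
```

In C++ the unchecked form indexes a lookup table out of bounds, which is exactly the segfault reported; range-checking (or skipping non-in-ice keys) converts it into a diagnosable error.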
73,072
24,444,777,452
IssuesEvent
2022-10-06 16:59:16
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
closed
Combobox glyph issue
bug good first issue Defect Hunting
#### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. Reported by Andreea and Ivijan in Slack #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) 0.35.4 (A patch would be required for this fix) #### If this is a bug, please provide steps for reproducing it. There is a bug in combobox component with glyph property value being passed to input group component. If we check component code and template: https://github.com/SAP/fundamental-ngx/blob/main/libs/core/src/lib/combobox/combobox.component.ts https://github.com/SAP/fundamental-ngx/blob/main/libs/core/src/lib/combobox/combobox.component.html there is input property “glyph”: /** Icon to display in the right-side button. */ @Input() glyph = ‘navigation-down-arrow’; but template uses “glyphValue”: [glyph]=“showDropdownButton ? glyphValue : null” which returns hardcoded navigation-down-arrow icon (if not search), probably should return “glyph” value: /** Get the glyph value based on whether the combobox is used as a search field or not. */ get glyphValue(): string { return this.isSearch ? ‘search’ : ‘navigation-down-arrow’; }
1.0
Combobox glyph issue - #### Is this a bug, enhancement, or feature request? Bug #### Briefly describe your proposal. Reported by Andreea and Ivijan in Slack #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) 0.35.4 (A patch would be required for this fix) #### If this is a bug, please provide steps for reproducing it. There is a bug in combobox component with glyph property value being passed to input group component. If we check component code and template: https://github.com/SAP/fundamental-ngx/blob/main/libs/core/src/lib/combobox/combobox.component.ts https://github.com/SAP/fundamental-ngx/blob/main/libs/core/src/lib/combobox/combobox.component.html there is input property “glyph”: /** Icon to display in the right-side button. */ @Input() glyph = ‘navigation-down-arrow’; but template uses “glyphValue”: [glyph]=“showDropdownButton ? glyphValue : null” which returns hardcoded navigation-down-arrow icon (if not search), probably should return “glyph” value: /** Get the glyph value based on whether the combobox is used as a search field or not. */ get glyphValue(): string { return this.isSearch ? ‘search’ : ‘navigation-down-arrow’; }
defect
combobox glyph issue is this a bug enhancement or feature request bug briefly describe your proposal reported by andreea and ivijan in slack which versions of angular and fundamental library for angular are affected if this is a feature request use current version a patch would be required for this fix if this is a bug please provide steps for reproducing it there is a bug in combobox component with glyph property value being passed to input group component if we check component code and template there is input property “glyph” icon to display in the right side button input glyph ‘navigation down arrow’ but template uses “glyphvalue” “showdropdownbutton glyphvalue null” which returns hardcoded navigation down arrow icon if not search probably should return “glyph” value get the glyph value based on whether the combobox is used as a search field or not get glyphvalue string return this issearch ‘search’ ‘navigation down arrow’
1
4,325
2,610,091,688
IssuesEvent
2015-02-26 18:27:42
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳粉刺痤疮的祛除方法
auto-migrated Priority-Medium Type-Defect
``` 深圳粉刺痤疮的祛除方法【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:51
1.0
深圳粉刺痤疮的祛除方法 - ``` 深圳粉刺痤疮的祛除方法【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:51
defect
深圳粉刺痤疮的祛除方法 深圳粉刺痤疮的祛除方法【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at
1
5,085
2,610,180,730
IssuesEvent
2015-02-26 18:57:45
chrsmith/quchuseban
https://api.github.com/repos/chrsmith/quchuseban
opened
纠结去色斑的那些产品好
auto-migrated Priority-Medium Type-Defect
``` 《摘要》 有时候,希望时间为自己停下,就这样和喜欢的人地老天荒�� �有时候,发现身边的人都不了解自己,面对着身边的人,突� ��觉得说不出话;有时候,在自己脆弱的时候,想一个人躲起 来,不愿别人看到自己的伤口;有时候,突然很想逃离现在�� �生活,想不顾一切收拾自己简单的行李去流浪。很多美女对� ��如何祛斑都比较关注,毕竟谁也不想脸上长很多难看的斑, 那么到底吃什么可以祛斑那去色斑的那些产品好, 《客户案例》   我今年43岁,黄褐斑长了有好几年了,整个脸感觉像是五 十多岁了的人一样,为了祛斑,我用了很多祛斑 方法,也试过很多的祛斑产品,现在和姐妹们分享一下什么�� �斑方法最适合我们,什么祛斑产品效果好。<br>   刚开始有黄褐斑的时候,我觉得用点祛斑的化妆品就差�� �多能去掉了,所以就在商场里买了祛斑霜,是些大牌的,感� ��用好的对皮肤也好,刚开始确实用的挺好的,斑也淡了,大 约用了有两个多月的时间吧,感 觉斑都看不见了,我当时特别高兴,可过了大约有一个月的�� �间,还没等我高兴过来,斑又反弹了,这次 比上次的更多了,颜色也更深了,当时我记得把那一堆的瓶�� �罐罐都给摔了,看来祛斑产品是不能用化妆 品了,接着我又瞄准了激光祛斑,花了好几千去做了,结果�� �反弹了,这回有多了个毛病,皮肤的防晒功 能下降了,稍不注意被晒到脸上的斑就更严重了,这次尝试�� �我真的是欲哭无泪了。<br>   后来还是一个朋友打电话说「黛芙薇尔精华液」治疗黄�� �斑效果特别好,她现在斑去掉三个月了都没反弹,皮肤也好� ��很多,让我也用用,这次我犹豫了,这个能有用吗?我就去�� �上查了查,看到很多人都 说这个不错,也不会反弹,就忍不住去他们官方网上订了两�� �周期,用完后斑确实没有了,为了巩固效果 ,我又在专家的建议下用了一个周期,现在都过去半年了,�� �都没反弹,这个祛斑产品效果确实挺好的。 阅读了去色斑的那些产品好,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 去色斑的那些产品好,同时为您分享祛斑小方法 1、防晒!此条非常重要!因为色斑最怕日晒。日光的暴晒或X 线、紫外线的照射过多皆可促发色斑,并使其加剧。 2、防止各种电离辐射! 3、慎用各种有创伤性的治疗!包括冷冻、激光、电离子、强酸 强碱等腐蚀性物质,否则容易造成毁容! ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:22
1.0
纠结去色斑的那些产品好 - ``` 《摘要》 有时候,希望时间为自己停下,就这样和喜欢的人地老天荒�� �有时候,发现身边的人都不了解自己,面对着身边的人,突� ��觉得说不出话;有时候,在自己脆弱的时候,想一个人躲起 来,不愿别人看到自己的伤口;有时候,突然很想逃离现在�� �生活,想不顾一切收拾自己简单的行李去流浪。很多美女对� ��如何祛斑都比较关注,毕竟谁也不想脸上长很多难看的斑, 那么到底吃什么可以祛斑那去色斑的那些产品好, 《客户案例》   我今年43岁,黄褐斑长了有好几年了,整个脸感觉像是五 十多岁了的人一样,为了祛斑,我用了很多祛斑 方法,也试过很多的祛斑产品,现在和姐妹们分享一下什么�� �斑方法最适合我们,什么祛斑产品效果好。<br>   刚开始有黄褐斑的时候,我觉得用点祛斑的化妆品就差�� �多能去掉了,所以就在商场里买了祛斑霜,是些大牌的,感� ��用好的对皮肤也好,刚开始确实用的挺好的,斑也淡了,大 约用了有两个多月的时间吧,感 觉斑都看不见了,我当时特别高兴,可过了大约有一个月的�� �间,还没等我高兴过来,斑又反弹了,这次 比上次的更多了,颜色也更深了,当时我记得把那一堆的瓶�� �罐罐都给摔了,看来祛斑产品是不能用化妆 品了,接着我又瞄准了激光祛斑,花了好几千去做了,结果�� �反弹了,这回有多了个毛病,皮肤的防晒功 能下降了,稍不注意被晒到脸上的斑就更严重了,这次尝试�� �我真的是欲哭无泪了。<br>   后来还是一个朋友打电话说「黛芙薇尔精华液」治疗黄�� �斑效果特别好,她现在斑去掉三个月了都没反弹,皮肤也好� ��很多,让我也用用,这次我犹豫了,这个能有用吗?我就去�� �上查了查,看到很多人都 说这个不错,也不会反弹,就忍不住去他们官方网上订了两�� �周期,用完后斑确实没有了,为了巩固效果 ,我又在专家的建议下用了一个周期,现在都过去半年了,�� �都没反弹,这个祛斑产品效果确实挺好的。 阅读了去色斑的那些产品好,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 去色斑的那些产品好,同时为您分享祛斑小方法 1、防晒!此条非常重要!因为色斑最怕日晒。日光的暴晒或X 线、紫外线的照射过多皆可促发色斑,并使其加剧。 2、防止各种电离辐射! 3、慎用各种有创伤性的治疗!包括冷冻、激光、电离子、强酸 强碱等腐蚀性物质,否则容易造成毁容! ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:22
defect
纠结去色斑的那些产品好 《摘要》 有时候,希望时间为自己停下,就这样和喜欢的人地老天荒�� �有时候,发现身边的人都不了解自己,面对着身边的人,突� ��觉得说不出话;有时候,在自己脆弱的时候,想一个人躲起 来,不愿别人看到自己的伤口;有时候,突然很想逃离现在�� �生活,想不顾一切收拾自己简单的行李去流浪。很多美女对� ��如何祛斑都比较关注,毕竟谁也不想脸上长很多难看的斑, 那么到底吃什么可以祛斑那去色斑的那些产品好, 《客户案例》    ,黄褐斑长了有好几年了,整个脸感觉像是五 十多岁了的人一样,为了祛斑,我用了很多祛斑 方法,也试过很多的祛斑产品,现在和姐妹们分享一下什么�� �斑方法最适合我们,什么祛斑产品效果好。   刚开始有黄褐斑的时候,我觉得用点祛斑的化妆品就差�� �多能去掉了,所以就在商场里买了祛斑霜,是些大牌的,感� ��用好的对皮肤也好,刚开始确实用的挺好的,斑也淡了,大 约用了有两个多月的时间吧,感 觉斑都看不见了,我当时特别高兴,可过了大约有一个月的�� �间,还没等我高兴过来,斑又反弹了,这次 比上次的更多了,颜色也更深了,当时我记得把那一堆的瓶�� �罐罐都给摔了,看来祛斑产品是不能用化妆 品了,接着我又瞄准了激光祛斑,花了好几千去做了,结果�� �反弹了,这回有多了个毛病,皮肤的防晒功 能下降了,稍不注意被晒到脸上的斑就更严重了,这次尝试�� �我真的是欲哭无泪了。   后来还是一个朋友打电话说「黛芙薇尔精华液」治疗黄�� �斑效果特别好,她现在斑去掉三个月了都没反弹,皮肤也好� ��很多,让我也用用,这次我犹豫了,这个能有用吗 我就去�� �上查了查,看到很多人都 说这个不错,也不会反弹,就忍不住去他们官方网上订了两�� �周期,用完后斑确实没有了,为了巩固效果 ,我又在专家的建议下用了一个周期,现在都过去半年了,�� �都没反弹,这个祛斑产品效果确实挺好的。 阅读了去色斑的那些产品好,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》    黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗   答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 
的新�� �客都是通过老顾客介绍而来,口碑由此而来    ,服用黛芙薇尔美白,会伤身体吗 有副作用吗   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖    ,去除黄褐斑之后,会反弹吗   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗    ,你们的价格有点贵,能不能便宜一点   答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗    ,我适合用黛芙薇尔精华液吗   答:黛芙薇尔适用人群:    、生理紊乱引起的黄褐斑人群    、生育引起的妊娠斑人群    、年纪增长引起的老年斑人群    、化妆品色素沉积、辐射斑人群    、长期日照引起的日晒斑人群    、肌肤暗淡急需美白的人群 《祛斑小方法》 去色斑的那些产品好,同时为您分享祛斑小方法 、防晒!此条非常重要!因为色斑最怕日晒。日光的暴晒或x 线、紫外线的照射过多皆可促发色斑,并使其加剧。 、防止各种电离辐射! 、慎用各种有创伤性的治疗 包括冷冻、激光、电离子、强酸 强碱等腐蚀性物质,否则容易造成毁容 original issue reported on code google com by additive gmail com on jul at
1
185,970
21,897,630,988
IssuesEvent
2022-05-20 10:10:18
elastic/integrations
https://api.github.com/repos/elastic/integrations
closed
[gcp] GKE audit log data differs between GCP integrations
Team:Security-External Integrations Integration:GCP v8.3.0 8.3 candidate
My colleague and I discovered, while building a lab for developing k8 detections, that the GCP generic pub-sub integration provides the full scope of k8 audit data, required to detect malicious or suspicious behavior, while the GCP audit integration only provides a small subset of that data. The GCP generic pub-sub integration supplies the needed data but it does not map the data to fields in ECS leaving it all condensed together in the message field essentially making it useless in regards to writing any detection logic or analysis. The GCP audit integration on the other hand, like I said, supplies only a small subset of the data provided by the GKE audit logs leaving out many critical data fields required for observability and security. Below I have included events from both integrations as an example. You can see that the GCP generic pub-sub k8 audit log event message field contains a great deal more data than the GCP audit log event fields provide. [GCP generic pub-sub](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-gcp.html) k8 audit log event: [gcp-gen-pub-sub-event.txt](https://github.com/elastic/integrations/files/8319125/gcp-gen-pub-sub-event.txt) vs. [GCP audit](https://docs.elastic.co/en/integrations/gcp#audit) log event: [gcp-audit-event.txt](https://github.com/elastic/integrations/files/8319130/gcp-audit-event.txt)
True
[gcp] GKE audit log data differs between GCP integrations - My colleague and I discovered, while building a lab for developing k8 detections, that the GCP generic pub-sub integration provides the full scope of k8 audit data, required to detect malicious or suspicious behavior, while the GCP audit integration only provides a small subset of that data. The GCP generic pub-sub integration supplies the needed data but it does not map the data to fields in ECS leaving it all condensed together in the message field essentially making it useless in regards to writing any detection logic or analysis. The GCP audit integration on the other hand, like I said, supplies only a small subset of the data provided by the GKE audit logs leaving out many critical data fields required for observability and security. Below I have included events from both integrations as an example. You can see that the GCP generic pub-sub k8 audit log event message field contains a great deal more data than the GCP audit log event fields provide. [GCP generic pub-sub](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-gcp.html) k8 audit log event: [gcp-gen-pub-sub-event.txt](https://github.com/elastic/integrations/files/8319125/gcp-gen-pub-sub-event.txt) vs. [GCP audit](https://docs.elastic.co/en/integrations/gcp#audit) log event: [gcp-audit-event.txt](https://github.com/elastic/integrations/files/8319130/gcp-audit-event.txt)
non_defect
gke audit log data differs between gcp integrations my colleague and i discovered while building a lab for developing detections that the gcp generic pub sub integration provides the full scope of audit data required to detect malicious or suspicious behavior while the gcp audit integration only provides a small subset of that data the gcp generic pub sub integration supplies the needed data but it does not map the data to fields in ecs leaving it all condensed together in the message field essentially making it useless in regards to writing any detection logic or analysis the gcp audit integration on the other hand like i said supplies only a small subset of the data provided by the gke audit logs leaving out many critical data fields required for observability and security below i have included events from both integrations as an example you can see that the gcp generic pub sub audit log event message field contains a great deal more data than the gcp audit log event fields provide audit log event vs log event
0
20,280
6,840,944,448
IssuesEvent
2017-11-11 06:34:11
trilinos/Trilinos
https://api.github.com/repos/trilinos/Trilinos
closed
Panzer: enforce Intrepid2_ENABLE_KokkosDynRankView requirement during CMake configure
build Panzer
<!--- Provide a general summary of the issue in the Title above. --> Panzer will not build properly if `Intrepid2_ENABLE_KokkosDynRankView` is set to `OFF` during CMake configure, but the configure succeeds if it is set that way. It would be better if the configure failed with an error indicating the requirement. @trilinos/@panzer
1.0
Panzer: enforce Intrepid2_ENABLE_KokkosDynRankView requirement during CMake configure - <!--- Provide a general summary of the issue in the Title above. --> Panzer will not build properly if `Intrepid2_ENABLE_KokkosDynRankView` is set to `OFF` during CMake configure, but the configure succeeds if it is set that way. It would be better if the configure failed with an error indicating the requirement. @trilinos/@panzer
non_defect
panzer enforce enable kokkosdynrankview requirement during cmake configure panzer will not build properly if enable kokkosdynrankview is set to off during cmake configure but the configure succeeds if it is set that way it would be better if the configure failed with an error indicating the requirement trilinos panzer
0
60,560
7,359,179,687
IssuesEvent
2018-03-10 02:59:56
classmere/ios
https://api.github.com/repos/classmere/ios
closed
Add filters for sections in CourseViewController
design feature
Some courses have a ton of sections (like CS 161 or PH 211). Allow the user to filter these based on term, location, availability, type, campus, etc. Maybe just filter on term as a proof-of concept and then add other filters later.
1.0
Add filters for sections in CourseViewController - Some courses have a ton of sections (like CS 161 or PH 211). Allow the user to filter these based on term, location, availability, type, campus, etc. Maybe just filter on term as a proof-of concept and then add other filters later.
non_defect
add filters for sections in courseviewcontroller some courses have a ton of sections like cs or ph allow the user to filter these based on term location availability type campus etc maybe just filter on term as a proof of concept and then add other filters later
0
11,807
2,666,210,872
IssuesEvent
2015-03-21 09:33:07
scanmem/scanmem
https://api.github.com/repos/scanmem/scanmem
closed
First refine search freezes GameConqueror/scanmem
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.Select process 2.First search 3.Refine search What is the expected output? What do you see instead? It would be expected to continue to refine the search and scoring variables in the panel below the application instead (GameConqueror) freezes What version of the product are you using? On what operating system? scanmem-git 8c5947bb72 and d1dd0ad on Archlinux x64 with kernel 3.17.1-1 and Gnome Shell. Please provide any additional information below. No tengo mas informacion adicional salvo que he provado con varios juegos de Steam y robot de gnome. ``` Original issue reported on code.google.com by `xosl...@gmail.com` on 31 Oct 2014 at 9:36
1.0
First refine search freezes GameConqueror/scanmem - ``` What steps will reproduce the problem? 1.Select process 2.First search 3.Refine search What is the expected output? What do you see instead? It would be expected to continue to refine the search and scoring variables in the panel below the application instead (GameConqueror) freezes What version of the product are you using? On what operating system? scanmem-git 8c5947bb72 and d1dd0ad on Archlinux x64 with kernel 3.17.1-1 and Gnome Shell. Please provide any additional information below. No tengo mas informacion adicional salvo que he provado con varios juegos de Steam y robot de gnome. ``` Original issue reported on code.google.com by `xosl...@gmail.com` on 31 Oct 2014 at 9:36
defect
first refine search freezes gameconqueror scanmem what steps will reproduce the problem select process first search refine search what is the expected output what do you see instead it would be expected to continue to refine the search and scoring variables in the panel below the application instead gameconqueror freezes what version of the product are you using on what operating system scanmem git and on archlinux with kernel and gnome shell please provide any additional information below no tengo mas informacion adicional salvo que he provado con varios juegos de steam y robot de gnome original issue reported on code google com by xosl gmail com on oct at
1
766
2,587,972,610
IssuesEvent
2015-02-17 21:49:34
chrsmith/codesearch
https://api.github.com/repos/chrsmith/codesearch
opened
Error in read.go comments
auto-migrated Priority-Medium Type-Defect
``` http://code.google.com/p/codesearch/source/browse/index/read.go#36 The comment reads : "For example, the delta list [2,5,1,1,0] encodes the file ID list 1, 6, 7, 8." Am I misunderstanding something, or should the delta list read : [1,5,1,1,0] to match up to the given file ID list? ``` ----- Original issue reported on code.google.com by `deepak.j...@gmail.com` on 26 Apr 2012 at 7:10
1.0
Error in read.go comments - ``` http://code.google.com/p/codesearch/source/browse/index/read.go#36 The comment reads : "For example, the delta list [2,5,1,1,0] encodes the file ID list 1, 6, 7, 8." Am I misunderstanding something, or should the delta list read : [1,5,1,1,0] to match up to the given file ID list? ``` ----- Original issue reported on code.google.com by `deepak.j...@gmail.com` on 26 Apr 2012 at 7:10
defect
error in read go comments the comment reads for example the delta list encodes the file id list am i misunderstanding something or should the delta list read to match up to the given file id list original issue reported on code google com by deepak j gmail com on apr at
1
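The delta-list record above debates whether `[2,5,1,1,0]` or `[1,5,1,1,0]` encodes the file ID list 1, 6, 7, 8. Under one common convention (an assumption here, not confirmed from read.go itself) decoding starts from a previous ID of -1, so every real delta is at least 1 and the value 0 can safely terminate the list — which makes the original `[2,5,1,1,0]` comment consistent. A minimal sketch of that decoding:

```python
def decode_deltas(deltas):
    """Decode a 0-terminated delta list into an ascending file ID list.

    Assumed convention: decoding starts from prev = -1, so the first
    delta for file ID 0 is 1, every real delta is >= 1, and 0 is an
    unambiguous terminator.
    """
    ids, prev = [], -1
    for d in deltas:
        if d == 0:  # terminator, not a real delta
            break
        prev += d   # each delta advances the running file ID
        ids.append(prev)
    return ids

print(decode_deltas([2, 5, 1, 1, 0]))  # → [1, 6, 7, 8]
```

With this start-from-minus-one convention, `-1 + 2 = 1`, then `1 + 5 = 6`, `6 + 1 = 7`, `7 + 1 = 8`, matching the comment as written; the reporter's `[1,5,1,1,0]` would instead assume decoding starts from 0.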
75,555
15,435,842,875
IssuesEvent
2021-03-07 10:39:05
dodekanisou/home-automation
https://api.github.com/repos/dodekanisou/home-automation
closed
CVE-2020-15366 (Medium) detected in ajv-6.10.2.tgz, ajv-5.5.2.tgz
security vulnerability
## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ajv-6.10.2.tgz</b>, <b>ajv-5.5.2.tgz</b></p></summary> <p> <details><summary><b>ajv-6.10.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz</a></p> <p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p> <p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - eslint-6.8.0.tgz (Root Library) - :x: **ajv-6.10.2.tgz** (Vulnerable Library) </details> <details><summary><b>ajv-5.5.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p> <p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p> <p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/extract-text-webpack-plugin/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - extract-text-webpack-plugin-3.0.2.tgz (Root Library) - schema-utils-0.3.0.tgz - :x: **ajv-5.5.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/dodekanisou/home-automation/commit/9b930d6dfb4815ac9f51831ae63b498835ce6700">9b930d6dfb4815ac9f51831ae63b498835ce6700</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. 
A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15366 (Medium) detected in ajv-6.10.2.tgz, ajv-5.5.2.tgz - ## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ajv-6.10.2.tgz</b>, <b>ajv-5.5.2.tgz</b></p></summary> <p> <details><summary><b>ajv-6.10.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz</a></p> <p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p> <p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - eslint-6.8.0.tgz (Root Library) - :x: **ajv-6.10.2.tgz** (Vulnerable Library) </details> <details><summary><b>ajv-5.5.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p> <p>Path to dependency file: home-automation/RpiHost/wwwroot/lib/admin-lte/package.json</p> <p>Path to vulnerable library: home-automation/RpiHost/wwwroot/lib/admin-lte/node_modules/extract-text-webpack-plugin/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - extract-text-webpack-plugin-3.0.2.tgz (Root Library) - schema-utils-0.3.0.tgz - :x: **ajv-5.5.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/dodekanisou/home-automation/commit/9b930d6dfb4815ac9f51831ae63b498835ce6700">9b930d6dfb4815ac9f51831ae63b498835ce6700</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. 
A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in ajv tgz ajv tgz cve medium severity vulnerability vulnerable libraries ajv tgz ajv tgz ajv tgz another json schema validator library home page a href path to dependency file home automation rpihost wwwroot lib admin lte package json path to vulnerable library home automation rpihost wwwroot lib admin lte node modules ajv package json dependency hierarchy eslint tgz root library x ajv tgz vulnerable library ajv tgz another json schema validator library home page a href path to dependency file home automation rpihost wwwroot lib admin lte package json path to vulnerable library home automation rpihost wwwroot lib admin lte node modules extract text webpack plugin node modules ajv package json dependency hierarchy extract text webpack plugin tgz root library schema utils tgz x ajv tgz vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv step up your open source security game with whitesource
0