Unnamed: 0     int64          0 … 832k
id             float64        2.49B … 32.1B
type           stringclasses  1 value
created_at     stringlengths  19 … 19
repo           stringlengths  7 … 112
repo_url       stringlengths  36 … 141
action         stringclasses  3 values
title          stringlengths  1 … 744
labels         stringlengths  4 … 574
body           stringlengths  9 … 211k
index          stringclasses  10 values
text_combine   stringlengths  96 … 211k
label          stringclasses  2 values
text           stringlengths  96 … 188k
binary_label   int64          0 … 1
29,966
13,189,828,860
IssuesEvent
2020-08-13 09:07:34
Azure/azure-rest-api-specs
https://api.github.com/repos/Azure/azure-rest-api-specs
closed
SQL threat detection policy doesn't show up in ATP on portal
SQL Service Attention
I have created a threat detection policy by api. But the storage account information doesn't show up in ATP as the image below. Get https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/yupsql4/providers/Microsoft.Sql/servers/yupsql4/databases/sqltestabcd123/securityAlertPolicies/default?api-version=2014-04-01 ``` <?xml version="1.0" encoding="utf-8"?> <entry xml:base="https://management.eastus.control.database.windows.net/v2/ManagementService.Trusted.svc/modules/ServerManagement/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <id>https://management.eastus.control.database.windows.net/v2/ManagementService.Trusted.svc/modules/ServerManagement/databaseSecurityAlertPolicies/default</id> <category term="Microsoft.SqlServer.Management.Service.Domain.ArmResourceProvider.DatabaseSecurityAlertPolicy" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <link rel="edit" title="DatabaseSecurityAlertPolicy" href="databaseSecurityAlertPolicies/default" /> <title /> <updated>2020-08-10T08:17:15Z</updated> <author> <name /> </author> <content type="application/xml"> <m:properties> <d:id>/subscriptions/.../resourceGroups/yupsql4/providers/Microsoft.Sql/servers/yupsql4/databases/sqltestabcd123/securityAlertPolicies/default</d:id> <d:name>default</d:name> <d:type>Microsoft.Sql/servers/databases/securityAlertPolicies</d:type> <d:location>East US</d:location> <d:kind m:null="true" /> <d:properties m:type="Microsoft.SqlServer.Management.Service.Domain.ArmResourceProvider.DatabaseSecurityAlertPolicy_DatabaseSecurityAlertPolicyProperties"> <d:useServerDefault>Disabled</d:useServerDefault> <d:state>Enabled</d:state> <d:disabledAlerts></d:disabledAlerts> <d:emailAddresses></d:emailAddresses> <d:emailAccountAdmins>Disabled</d:emailAccountAdmins> <d:storageEndpoint>https://yupstr4.blob.core.windows.net/</d:storageEndpoint> 
<d:storageAccountAccessKey></d:storageAccountAccessKey> <d:retentionDays m:type="Edm.Int32">6</d:retentionDays> </d:properties> </m:properties> </content> </entry> ``` ![image](https://user-images.githubusercontent.com/56525716/89763540-51bf8400-db25-11ea-8b21-85225854274a.png)
1.0
SQL threat detection policy doesn't show up in ATP on portal - I have created a threat detection policy by api. But the storage account information doesn't show up in ATP as the image below. Get https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/yupsql4/providers/Microsoft.Sql/servers/yupsql4/databases/sqltestabcd123/securityAlertPolicies/default?api-version=2014-04-01 ``` <?xml version="1.0" encoding="utf-8"?> <entry xml:base="https://management.eastus.control.database.windows.net/v2/ManagementService.Trusted.svc/modules/ServerManagement/" xmlns="http://www.w3.org/2005/Atom" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <id>https://management.eastus.control.database.windows.net/v2/ManagementService.Trusted.svc/modules/ServerManagement/databaseSecurityAlertPolicies/default</id> <category term="Microsoft.SqlServer.Management.Service.Domain.ArmResourceProvider.DatabaseSecurityAlertPolicy" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <link rel="edit" title="DatabaseSecurityAlertPolicy" href="databaseSecurityAlertPolicies/default" /> <title /> <updated>2020-08-10T08:17:15Z</updated> <author> <name /> </author> <content type="application/xml"> <m:properties> <d:id>/subscriptions/.../resourceGroups/yupsql4/providers/Microsoft.Sql/servers/yupsql4/databases/sqltestabcd123/securityAlertPolicies/default</d:id> <d:name>default</d:name> <d:type>Microsoft.Sql/servers/databases/securityAlertPolicies</d:type> <d:location>East US</d:location> <d:kind m:null="true" /> <d:properties m:type="Microsoft.SqlServer.Management.Service.Domain.ArmResourceProvider.DatabaseSecurityAlertPolicy_DatabaseSecurityAlertPolicyProperties"> <d:useServerDefault>Disabled</d:useServerDefault> <d:state>Enabled</d:state> <d:disabledAlerts></d:disabledAlerts> <d:emailAddresses></d:emailAddresses> <d:emailAccountAdmins>Disabled</d:emailAccountAdmins> 
<d:storageEndpoint>https://yupstr4.blob.core.windows.net/</d:storageEndpoint> <d:storageAccountAccessKey></d:storageAccountAccessKey> <d:retentionDays m:type="Edm.Int32">6</d:retentionDays> </d:properties> </m:properties> </content> </entry> ``` ![image](https://user-images.githubusercontent.com/56525716/89763540-51bf8400-db25-11ea-8b21-85225854274a.png)
non_process
sql threat detection policy doesn t show up in atp on portal i have created a threat detection policy by api but the storage account information doesn t show up in atp as the image below get entry xml base xmlns xmlns d xmlns m subscriptions resourcegroups providers microsoft sql servers databases securityalertpolicies default default microsoft sql servers databases securityalertpolicies east us disabled enabled disabled
0
11,381
14,222,763,652
IssuesEvent
2020-11-17 17:17:30
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
Hold NixOS release retrospective
0.kind: enhancement 6.topic: nixos 6.topic: release process
Some of the larger pain points already have some related threads around their discussion. https://discourse.nixos.org/t/what-should-stable-nixos-prioritize/9646 However, it would be nice for people to voice their opinion on pain-points (both large and small) felt across the community. The goal of the retrospective would be to find ways to improve the process: from people contributing to the process, schedules, tasks, human processes, etc. cc @worldofpeace
1.0
Hold NixOS release retrospective - Some of the larger pain points already have some related threads around their discussion. https://discourse.nixos.org/t/what-should-stable-nixos-prioritize/9646 However, it would be nice for people to voice their opinion on pain-points (both large and small) felt across the community. The goal of the retrospective would be to find ways to improve the process: from people contributing to the process, schedules, tasks, human processes, etc. cc @worldofpeace
process
hold nixos release retrospective some of the larger pain points already have some related threads around their discussion however it would be nice for people to voice their opinion on pain points both large and small felt across the community the goal of the retrospective would be to find ways to improve the process from people contributing to the process schedules tasks human processes etc cc worldofpeace
1
14,470
17,579,039,900
IssuesEvent
2021-08-16 03:18:13
googleapis/python-spanner
https://api.github.com/repos/googleapis/python-spanner
closed
tests.system.test_session_api: test_transaction_execute_update_then_insert_commit failed
api: spanner type: process flakybot: issue flakybot: flaky
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 5629bacf780cf79c3af4cda3de50986c82e18c0d buildURL: [Build Status](https://source.cloud.google.com/results/invocations/04a0bbd2-5177-4652-90eb-d548044bfed6), [Sponge](http://sponge2/04a0bbd2-5177-4652-90eb-d548044bfed6) status: failed <details><summary>Test output</summary><br><pre>args = (session: "projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sessions_1628676428092/sessi... string_value: "Phlyntstone" } values { string_value: "wylma@example.com" } } } } ,) kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1628673888891/databa...axZl0EgaLfm_VRoTVUUUzobMWu17ImOQFfnSBEQ'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.7.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f581f1ebe50> request = session: "projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sessions_1628676428092/sessio... 
string_value: "Phlyntstone" } values { string_value: "wylma@example.com" } } } } timeout = None metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sess...6axZl0EgaLfm_VRoTVUUUzobMWu17ImOQFfnSBEQ'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.7.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, wait_for_ready, compression) > return _end_unary_response_blocking(state, call, False, None) .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7f581f2b4220> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f581f287b40> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.ABORTED E details = "Transaction aborted. Database schema probably changed during transaction, retry may succeed." E debug_error_string = "{"created":"@1628676436.164783797","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Transaction aborted. 
Database schema probably changed during transaction, retry may succeed.","grpc_status":10}" E > .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: sessions_database = <google.cloud.spanner_v1.database.Database object at 0x7f581f1f95e0> sessions_to_delete = [<google.cloud.spanner_v1.session.Session object at 0x7f581f2b4640>] @_helpers.retry_mabye_conflict def test_transaction_execute_update_then_insert_commit( sessions_database, sessions_to_delete ): # [START spanner_test_dml_with_mutation] # [START spanner_test_dml_update] sd = _sample_data session = sessions_database.session() session.create() sessions_to_delete.append(session) with session.batch() as batch: batch.delete(sd.TABLE, sd.ALL) insert_statement = list(_generate_insert_statements())[0] with session.transaction() as transaction: rows = list(transaction.read(sd.TABLE, sd.COLUMNS, sd.ALL)) assert rows == [] row_count = transaction.execute_update(insert_statement) assert row_count == 1 > transaction.insert(sd.TABLE, sd.COLUMNS, sd.ROW_DATA[1:]) tests/system/test_session_api.py:654: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/transaction.py:375: in __exit__ self.commit() google/cloud/spanner_v1/transaction.py:162: in commit response = api.commit(request=request, metadata=metadata,) google/cloud/spanner_v1/services/spanner/client.py:1323: in commit response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:286: in retry_wrapped_func return retry_target( .nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:189: in retry_target return target() .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in 
error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None from_value = <_InactiveRpcError of RPC that terminated with: status = StatusCode.ABORTED details = "Transaction aborted. Database...ge":"Transaction aborted. Database schema probably changed during transaction, retry may succeed.","grpc_status":10}" > > ??? E google.api_core.exceptions.Aborted: 409 Transaction aborted. Database schema probably changed during transaction, retry may succeed. <string>:3: Aborted</pre></details>
1.0
tests.system.test_session_api: test_transaction_execute_update_then_insert_commit failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 5629bacf780cf79c3af4cda3de50986c82e18c0d buildURL: [Build Status](https://source.cloud.google.com/results/invocations/04a0bbd2-5177-4652-90eb-d548044bfed6), [Sponge](http://sponge2/04a0bbd2-5177-4652-90eb-d548044bfed6) status: failed <details><summary>Test output</summary><br><pre>args = (session: "projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sessions_1628676428092/sessi... string_value: "Phlyntstone" } values { string_value: "wylma@example.com" } } } } ,) kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1628673888891/databa...axZl0EgaLfm_VRoTVUUUzobMWu17ImOQFfnSBEQ'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.7.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f581f1ebe50> request = session: "projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sessions_1628676428092/sessio... 
string_value: "Phlyntstone" } values { string_value: "wylma@example.com" } } } } timeout = None metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1628673888891/databases/test_sess...6axZl0EgaLfm_VRoTVUUUzobMWu17ImOQFfnSBEQ'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.7.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, wait_for_ready, compression) > return _end_unary_response_blocking(state, call, False, None) .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7f581f2b4220> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f581f287b40> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.ABORTED E details = "Transaction aborted. Database schema probably changed during transaction, retry may succeed." E debug_error_string = "{"created":"@1628676436.164783797","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Transaction aborted. 
Database schema probably changed during transaction, retry may succeed.","grpc_status":10}" E > .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: sessions_database = <google.cloud.spanner_v1.database.Database object at 0x7f581f1f95e0> sessions_to_delete = [<google.cloud.spanner_v1.session.Session object at 0x7f581f2b4640>] @_helpers.retry_mabye_conflict def test_transaction_execute_update_then_insert_commit( sessions_database, sessions_to_delete ): # [START spanner_test_dml_with_mutation] # [START spanner_test_dml_update] sd = _sample_data session = sessions_database.session() session.create() sessions_to_delete.append(session) with session.batch() as batch: batch.delete(sd.TABLE, sd.ALL) insert_statement = list(_generate_insert_statements())[0] with session.transaction() as transaction: rows = list(transaction.read(sd.TABLE, sd.COLUMNS, sd.ALL)) assert rows == [] row_count = transaction.execute_update(insert_statement) assert row_count == 1 > transaction.insert(sd.TABLE, sd.COLUMNS, sd.ROW_DATA[1:]) tests/system/test_session_api.py:654: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/transaction.py:375: in __exit__ self.commit() google/cloud/spanner_v1/transaction.py:162: in commit response = api.commit(request=request, metadata=metadata,) google/cloud/spanner_v1/services/spanner/client.py:1323: in commit response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:286: in retry_wrapped_func return retry_target( .nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:189: in retry_target return target() .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in 
error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None from_value = <_InactiveRpcError of RPC that terminated with: status = StatusCode.ABORTED details = "Transaction aborted. Database...ge":"Transaction aborted. Database schema probably changed during transaction, retry may succeed.","grpc_status":10}" > > ??? E google.api_core.exceptions.Aborted: 409 Transaction aborted. Database schema probably changed during transaction, retry may succeed. <string>:3: Aborted</pre></details>
process
tests system test session api test transaction execute update then insert commit failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args session projects precise truck instances google cloud databases test sessions sessi string value phlyntstone values string value wylma example com kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs nox system lib site packages google api core grpc helpers py self request session projects precise truck instances google cloud databases test sessions sessio string value phlyntstone values string value wylma example com timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox system lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode aborted e details transaction aborted database schema probably changed during transaction retry may succeed e debug error string created description error received from peer file src core lib surface call cc file line grpc message transaction aborted database schema probably changed during transaction retry may succeed grpc status e nox system lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception 
sessions database sessions to delete helpers retry mabye conflict def test transaction execute update then insert commit sessions database sessions to delete sd sample data session sessions database session session create sessions to delete append session with session batch as batch batch delete sd table sd all insert statement list generate insert statements with session transaction as transaction rows list transaction read sd table sd columns sd all assert rows row count transaction execute update insert statement assert row count transaction insert sd table sd columns sd row data tests system test session api py google cloud spanner transaction py in exit self commit google cloud spanner transaction py in commit response api commit request request metadata metadata google cloud spanner services spanner client py in commit response rpc request retry retry timeout timeout metadata metadata nox system lib site packages google api core gapic method py in call return wrapped func args kwargs nox system lib site packages google api core retry py in retry wrapped func return retry target nox system lib site packages google api core retry py in retry target return target nox system lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value inactiverpcerror of rpc that terminated with status statuscode aborted details transaction aborted database ge transaction aborted database schema probably changed during transaction retry may succeed grpc status e google api core exceptions aborted transaction aborted database schema probably changed during transaction retry may succeed aborted
1
156,639
5,971,805,160
IssuesEvent
2017-05-31 04:22:07
renatobenks/treasy-challenge
https://api.github.com/repos/renatobenks/treasy-challenge
opened
TREASY-03 - Configure ES6+ transpiler
Priority: Highest Type: enhancement
Configure ES6+ transpiler --- Enable writing the application in ES6+ via the Babel transpiler, loaded through the `webpack` loader. - [Babel](http://babeljs.io/)
1.0
TREASY-03 - Configure ES6+ transpiler - Configure ES6+ transpiler --- Enable writing the application in ES6+ via the Babel transpiler, loaded through the `webpack` loader. - [Babel](http://babeljs.io/)
non_process
treasy configure transpiler configure transpiler enable writing the application in via the babel transpiler loaded through the webpack loader
0
46,989
24,816,001,085
IssuesEvent
2022-10-25 13:12:48
grafana/mimir
https://api.github.com/repos/grafana/mimir
opened
Store-Gateway: High memory allocation loading bucket block chunks
type/performance component/store-gateway
#### Describe the bug Store-gateway memory profiles show high memory usage when populating chunks from a bucket block. ![prof](https://user-images.githubusercontent.com/888899/197781687-a78329b1-483b-4fd8-aa45-bdfea8eceae0.png) The above screenshot shows that roughly `26%` of total allocated objects are generated while invoking `loadChunks`, and `22%` is distributed between the functions `populateChunk` and `loadChunk` itself. [store-gateway.cortex-prod-01.mem.profile.zip](https://github.com/grafana/mimir/files/9860518/store-gateway.cortex-prod-01.mem.profile.zip)
True
Store-Gateway: High memory allocation loading bucket block chunks - #### Describe the bug Store-gateway memory profiles show high memory usage when populating chunks from a bucket block. ![prof](https://user-images.githubusercontent.com/888899/197781687-a78329b1-483b-4fd8-aa45-bdfea8eceae0.png) The above screenshot shows that roughly `26%` of total allocated objects are generated while invoking `loadChunks`, and `22%` is distributed between the functions `populateChunk` and `loadChunk` itself. [store-gateway.cortex-prod-01.mem.profile.zip](https://github.com/grafana/mimir/files/9860518/store-gateway.cortex-prod-01.mem.profile.zip)
non_process
store gateway high memory allocation loading bucket block chunks describe the bug store gateway memory profiles shows a high memory usage when populating chunks from a bucket block the above screenshot shows that roughly of total allocated objects are generated while invoking loadchunks and is distributed between the functions populatechunk and loadchunk itself
0
6,743
9,872,957,162
IssuesEvent
2019-06-22 09:48:26
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
reclassify grid values - options
Feature Request Processing
Author Name: **stefano campus** (@skampus) Original Redmine Issue: [5740](https://issues.qgis.org/issues/5740) Redmine category:processing/saga Assignee: Victor Olaya --- The original SAGA module has the possibility of using an external lookup table for reclassification.
1.0
reclassify grid values - options - Author Name: **stefano campus** (@skampus) Original Redmine Issue: [5740](https://issues.qgis.org/issues/5740) Redmine category:processing/saga Assignee: Victor Olaya --- The original SAGA module has the possibility of using an external lookup table for reclassification.
process
reclassify grid values options author name stefano campus skampus original redmine issue redmine category processing saga assignee victor olaya original saga module has the possibility of using external lookup table for reclassification
1
8,315
11,485,619,087
IssuesEvent
2020-02-11 08:07:30
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
opened
Greenhouse locked too far away
bioprocessing bug
![image](https://user-images.githubusercontent.com/26593477/74219617-9caf2a80-4cad-11ea-8930-9259e23cd767.png) - [ ] Unlock seed extractor with greenhouse - [ ] Maybe tweak recipe a bit - [ ] Maaaybe allow tree cutting automation with steam/coal powered assembler from bob?
1.0
Greenhouse locked too far away - ![image](https://user-images.githubusercontent.com/26593477/74219617-9caf2a80-4cad-11ea-8930-9259e23cd767.png) - [ ] Unlock seed extractor with greenhouse - [ ] Maybe tweak recipe a bit - [ ] Maaaybe allow tree cutting automation with steam/coal powered assembler from bob?
process
greenhouse locked too far away unlock seed extractor with greenhouse maybe tweak recipe a bit maaaybe allow tree cutting automation with steam coal powered assembler from bob
1
466,131
13,397,400,072
IssuesEvent
2020-09-03 11:32:09
medialab/fonio
https://api.github.com/repos/medialab/fonio
closed
hyphen section title only if screen too small ?
enhancement priority:medium user interface
**Is your feature request related to a problem? Please describe.** ![image](https://user-images.githubusercontent.com/193478/84271236-253eba00-ab2c-11ea-8f98-51ae16c60086.png) My section title is not entirely readable even though my screen is large enough. Actually, if I make my screen very narrow, the hyphenation is not enough: ![image](https://user-images.githubusercontent.com/193478/84271568-8bc3d800-ab2c-11ea-93c6-feec9c9736dd.png) **Describe the solution you'd like** Be able to read the whole title if my screen is large enough. **Describe alternatives you've considered** Don't hyphenate but wrap? Don't hyphenate at all and let the editor's size show what it can?
1.0
hyphen section title only if screen too small ? - **Is your feature request related to a problem? Please describe.** ![image](https://user-images.githubusercontent.com/193478/84271236-253eba00-ab2c-11ea-8f98-51ae16c60086.png) My section title is not entirely readable even though my screen is large enough. Actually, if I make my screen very narrow, the hyphenation is not enough: ![image](https://user-images.githubusercontent.com/193478/84271568-8bc3d800-ab2c-11ea-93c6-feec9c9736dd.png) **Describe the solution you'd like** Be able to read the whole title if my screen is large enough. **Describe alternatives you've considered** Don't hyphenate but wrap? Don't hyphenate at all and let the editor's size show what it can?
non_process
hyphen section title only if screen too small is your feature request related to a problem please describe my section is not readable entirely whereas my screen is large enough actually if i make my screen very narrow the hyphenation is not enough describe the solution you d like be able to read all the title is my screen is large enough describe alternatives you ve considered don t hyphen but wrap don t hyphen at all and let the editor size show what it can
0
19,890
26,338,773,369
IssuesEvent
2023-01-10 16:08:27
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/filter] panic: interface conversion: interface {} is string, not bool
bug priority:p2 processor/filter
**Describe the bug** Using the filter processor, we have a collector configured to route metrics to a pipeline that have a specific label. We have noticed this error happens intermittently: ``` panic: interface conversion: interface {} is string, not bool [recovered] panic: interface conversion: interface {} is string, not bool goroutine 2858 [running]: go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End.func1() go.opentelemetry.io/otel/sdk@v1.6.3/trace/span.go:359 +0x2a go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End(0xc031aac900, {0x0, 0x0, 0x5?}) go.opentelemetry.io/otel/sdk@v1.6.3/trace/span.go:398 +0x8ee panic({0x3045540, 0xc036c22750}) runtime/panic.go:884 +0x212 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).match(0x99999999999999?, 0x38?) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:124 +0x77 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).matchEnv(0x4246c7?, {0xc031b68e10, 0x27}, {0x20?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:109 +0x88 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).matchSum(0x0?, {0xc031b68e10, 0x27}, {0x41c5e6?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:83 +0x67 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).MatchMetric(0x40dd3f?, {0x7fe8ef1e2a68?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:58 +0x157 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filtermetric.(*exprMatcher).MatchMetric(0x7fe8ef1e2a68?, {0x80?}) 
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filtermetric/expr_matcher.go:41 +0x5f github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).shouldKeepMetric(0xc001acdc00, {0xc0304e0fb8?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:161 +0x38 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1.1.1({0xa09c85?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:140 +0x3a go.opentelemetry.io/collector/pdata/internal.MetricSlice.RemoveIf({0x1?}, 0xc0304e10b0) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:536 +0x62 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1.1({0x32524a0?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:139 +0x47 go.opentelemetry.io/collector/pdata/internal.ScopeMetricsSlice.RemoveIf({0x7fe8ef1e2a68?}, 0xc0304e1128) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:342 +0x62 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1({0x1c0?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:138 +0x85 go.opentelemetry.io/collector/pdata/internal.ResourceMetricsSlice.RemoveIf({0x7fe8ef1e2a68?}, 0xc0304e1198) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:148 +0x62 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics(0x40e087?, {0x10?, 0x2d9c000?}, {0x1?}) 
github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:128 +0x3f go.opentelemetry.io/collector/processor/processorhelper.NewMetricsProcessor.func1({0x3d3e208, 0xc036c22660}, {0x0?}) go.opentelemetry.io/collector@v0.50.0/processor/processorhelper/metrics.go:62 +0xf8 go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...) go.opentelemetry.io/collector@v0.50.0/consumer/metrics.go:36 go.opentelemetry.io/collector/service/internal/fanoutconsumer.(*metricsConsumer).ConsumeMetrics(0xc000e12810, {0x3d3e208, 0xc036c22660}, {0x3682bde?}) go.opentelemetry.io/collector@v0.50.0/service/internal/fanoutconsumer/metrics.go:72 +0x16f go.opentelemetry.io/collector/receiver/otlpreceiver/internal/metrics.(*Receiver).Export(0xc00035b4b8, {0x3d3e208, 0xc036c225d0}, {0xc03235ee40?}) go.opentelemetry.io/collector@v0.50.0/receiver/otlpreceiver/internal/metrics/otlp.go:59 +0xd3 go.opentelemetry.io/collector/pdata/pmetric/pmetricotlp.rawMetricsServer.Export({{0x3d14880?, 0xc00035b4b8?}}, {0x3d3e208, 0xc036c225d0}, 0xc038514150) go.opentelemetry.io/collector/pdata@v0.50.0/pmetric/pmetricotlp/metrics.go:167 +0xff go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1._MetricsService_Export_Handler.func1({0x3d3e208, 0xc036c225d0}, {0x34c6080?, 0xc038514150}) go.opentelemetry.io/collector/pdata@v0.50.0/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:216 +0x78 go.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1({0x3d3e208?, 0xc036c22570?}, {0x34c6080, 0xc038514150}, 0x0?, 0xc0385141f8) go.opentelemetry.io/collector@v0.50.0/config/configgrpc/configgrpc.go:386 +0x4c google.golang.org/grpc.chainUnaryInterceptors.func1.1({0x3d3e208?, 0xc036c22570?}, {0x34c6080?, 0xc038514150?}) google.golang.org/grpc@v1.46.0/server.go:1117 +0x5b go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1({0x3d3e208, 
0xc036c22480}, {0x34c6080, 0xc038514150}, 0xc03480ca20, 0xc0306ae700) go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.31.0/interceptor.go:325 +0x676 ``` **Steps to reproduce** Simplified collector config with: 1. otlp receiver 2. filter processor 3. logging exporter ``` receivers: otlp: processors: filter/test: metrics: include: match_type: expr expressions: - HasLabel("some.attribute") exporters: logging: service: pipelines: metrics: receivers: [otlp] processors: [filter/test] exporters: [logging] ``` **What did you expect to see?** **What did you see instead?** For some telemetry, we are seeing some log errors: ``` { caller: filterprocessor@v0.50.0/filter_processor.go:142 error: runtime error: index out of range [-1] (1:1) | HasLabel("some.attribute") | ^ kind: processor level: error msg: shouldKeepMetric failed name: filter/test pipeline: metrics ts: 1661354049.4371357 } ``` **What version did you use?** Version: 0.50.0 We have not seen any updates to the filterprocessor since then that indicate it's fixed in future releases. **What config did you use?** Config: (e.g. the yaml config file) **Environment** OS: Linux **Additional context** We followed the stack trace down to this code which expects a `boolean` but gets a `string` https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.50.0/internal/coreinternal/processor/filterexpr/matcher.go#L124
1.0
[processor/filter] panic: interface conversion: interface {} is string, not bool - **Describe the bug** Using the filter processor, we have a collector configured to route metrics to a pipeline that have a specific label. We have noticed this error happens intermittently: ``` panic: interface conversion: interface {} is string, not bool [recovered] panic: interface conversion: interface {} is string, not bool goroutine 2858 [running]: go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End.func1() go.opentelemetry.io/otel/sdk@v1.6.3/trace/span.go:359 +0x2a go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End(0xc031aac900, {0x0, 0x0, 0x5?}) go.opentelemetry.io/otel/sdk@v1.6.3/trace/span.go:398 +0x8ee panic({0x3045540, 0xc036c22750}) runtime/panic.go:884 +0x212 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).match(0x99999999999999?, 0x38?) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:124 +0x77 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).matchEnv(0x4246c7?, {0xc031b68e10, 0x27}, {0x20?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:109 +0x88 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).matchSum(0x0?, {0xc031b68e10, 0x27}, {0x41c5e6?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:83 +0x67 github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filterexpr.(*Matcher).MatchMetric(0x40dd3f?, {0x7fe8ef1e2a68?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filterexpr/matcher.go:58 +0x157 
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/processor/filtermetric.(*exprMatcher).MatchMetric(0x7fe8ef1e2a68?, {0x80?}) github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal@v0.50.0/processor/filtermetric/expr_matcher.go:41 +0x5f github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).shouldKeepMetric(0xc001acdc00, {0xc0304e0fb8?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:161 +0x38 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1.1.1({0xa09c85?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:140 +0x3a go.opentelemetry.io/collector/pdata/internal.MetricSlice.RemoveIf({0x1?}, 0xc0304e10b0) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:536 +0x62 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1.1({0x32524a0?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:139 +0x47 go.opentelemetry.io/collector/pdata/internal.ScopeMetricsSlice.RemoveIf({0x7fe8ef1e2a68?}, 0xc0304e1128) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:342 +0x62 github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics.func1({0x1c0?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:138 +0x85 go.opentelemetry.io/collector/pdata/internal.ResourceMetricsSlice.RemoveIf({0x7fe8ef1e2a68?}, 0xc0304e1198) go.opentelemetry.io/collector/pdata@v0.50.0/internal/generated_pmetric.go:148 +0x62 
github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor.(*filterMetricProcessor).processMetrics(0x40e087?, {0x10?, 0x2d9c000?}, {0x1?}) github.com/open-telemetry/opentelemetry-collector-contrib/processor/filterprocessor@v0.50.0/filter_processor.go:128 +0x3f go.opentelemetry.io/collector/processor/processorhelper.NewMetricsProcessor.func1({0x3d3e208, 0xc036c22660}, {0x0?}) go.opentelemetry.io/collector@v0.50.0/processor/processorhelper/metrics.go:62 +0xf8 go.opentelemetry.io/collector/consumer.ConsumeMetricsFunc.ConsumeMetrics(...) go.opentelemetry.io/collector@v0.50.0/consumer/metrics.go:36 go.opentelemetry.io/collector/service/internal/fanoutconsumer.(*metricsConsumer).ConsumeMetrics(0xc000e12810, {0x3d3e208, 0xc036c22660}, {0x3682bde?}) go.opentelemetry.io/collector@v0.50.0/service/internal/fanoutconsumer/metrics.go:72 +0x16f go.opentelemetry.io/collector/receiver/otlpreceiver/internal/metrics.(*Receiver).Export(0xc00035b4b8, {0x3d3e208, 0xc036c225d0}, {0xc03235ee40?}) go.opentelemetry.io/collector@v0.50.0/receiver/otlpreceiver/internal/metrics/otlp.go:59 +0xd3 go.opentelemetry.io/collector/pdata/pmetric/pmetricotlp.rawMetricsServer.Export({{0x3d14880?, 0xc00035b4b8?}}, {0x3d3e208, 0xc036c225d0}, 0xc038514150) go.opentelemetry.io/collector/pdata@v0.50.0/pmetric/pmetricotlp/metrics.go:167 +0xff go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/metrics/v1._MetricsService_Export_Handler.func1({0x3d3e208, 0xc036c225d0}, {0x34c6080?, 0xc038514150}) go.opentelemetry.io/collector/pdata@v0.50.0/internal/data/protogen/collector/metrics/v1/metrics_service.pb.go:216 +0x78 go.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1({0x3d3e208?, 0xc036c22570?}, {0x34c6080, 0xc038514150}, 0x0?, 0xc0385141f8) go.opentelemetry.io/collector@v0.50.0/config/configgrpc/configgrpc.go:386 +0x4c google.golang.org/grpc.chainUnaryInterceptors.func1.1({0x3d3e208?, 0xc036c22570?}, {0x34c6080?, 0xc038514150?}) 
google.golang.org/grpc@v1.46.0/server.go:1117 +0x5b go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1({0x3d3e208, 0xc036c22480}, {0x34c6080, 0xc038514150}, 0xc03480ca20, 0xc0306ae700) go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.31.0/interceptor.go:325 +0x676 ``` **Steps to reproduce** Simplified collector config with: 1. otlp receiver 2. filter processor 3. logging exporter ``` receivers: otlp: processors: filter/test: metrics: include: match_type: expr expressions: - HasLabel("some.attribute") exporters: logging: service: pipelines: metrics: receivers: [otlp] processors: [filter/test] exporters: [logging] ``` **What did you expect to see?** **What did you see instead?** For some telemetry, we are seeing some log errors: ``` { caller: filterprocessor@v0.50.0/filter_processor.go:142 error: runtime error: index out of range [-1] (1:1) | HasLabel("some.attribute") | ^ kind: processor level: error msg: shouldKeepMetric failed name: filter/test pipeline: metrics ts: 1661354049.4371357 } ``` **What version did you use?** Version: 0.50.0 We have not seen any updates to the filterprocessor since then that indicate it's fixed in future releases. **What config did you use?** Config: (e.g. the yaml config file) **Environment** OS: Linux **Additional context** We followed the stack trace down to this code which expects a `boolean` but gets a `string` https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.50.0/internal/coreinternal/processor/filterexpr/matcher.go#L124
process
panic interface conversion interface is string not bool describe the bug using the filter processor we have a collector configured to route metrics to a pipeline that have a specific label we have noticed this error happens intermittently panic interface conversion interface is string not bool panic interface conversion interface is string not bool goroutine go opentelemetry io otel sdk trace recordingspan end go opentelemetry io otel sdk trace span go go opentelemetry io otel sdk trace recordingspan end go opentelemetry io otel sdk trace span go panic runtime panic go github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher match github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher go github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher matchenv github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher go github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher matchsum github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher go github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher matchmetric github com open telemetry opentelemetry collector contrib internal coreinternal processor filterexpr matcher go github com open telemetry opentelemetry collector contrib internal coreinternal processor filtermetric exprmatcher matchmetric github com open telemetry opentelemetry collector contrib internal coreinternal processor filtermetric expr matcher go github com open telemetry opentelemetry collector contrib processor filterprocessor filtermetricprocessor shouldkeepmetric github com open telemetry opentelemetry collector contrib processor filterprocessor filter processor go github com open telemetry 
opentelemetry collector contrib processor filterprocessor filtermetricprocessor processmetrics github com open telemetry opentelemetry collector contrib processor filterprocessor filter processor go go opentelemetry io collector pdata internal metricslice removeif go opentelemetry io collector pdata internal generated pmetric go github com open telemetry opentelemetry collector contrib processor filterprocessor filtermetricprocessor processmetrics github com open telemetry opentelemetry collector contrib processor filterprocessor filter processor go go opentelemetry io collector pdata internal scopemetricsslice removeif go opentelemetry io collector pdata internal generated pmetric go github com open telemetry opentelemetry collector contrib processor filterprocessor filtermetricprocessor processmetrics github com open telemetry opentelemetry collector contrib processor filterprocessor filter processor go go opentelemetry io collector pdata internal resourcemetricsslice removeif go opentelemetry io collector pdata internal generated pmetric go github com open telemetry opentelemetry collector contrib processor filterprocessor filtermetricprocessor processmetrics github com open telemetry opentelemetry collector contrib processor filterprocessor filter processor go go opentelemetry io collector processor processorhelper newmetricsprocessor go opentelemetry io collector processor processorhelper metrics go go opentelemetry io collector consumer consumemetricsfunc consumemetrics go opentelemetry io collector consumer metrics go go opentelemetry io collector service internal fanoutconsumer metricsconsumer consumemetrics go opentelemetry io collector service internal fanoutconsumer metrics go go opentelemetry io collector receiver otlpreceiver internal metrics receiver export go opentelemetry io collector receiver otlpreceiver internal metrics otlp go go opentelemetry io collector pdata pmetric pmetricotlp rawmetricsserver export go opentelemetry io collector pdata 
pmetric pmetricotlp metrics go go opentelemetry io collector pdata internal data protogen collector metrics metricsservice export handler go opentelemetry io collector pdata internal data protogen collector metrics metrics service pb go go opentelemetry io collector config configgrpc enhancewithclientinformation go opentelemetry io collector config configgrpc configgrpc go google golang org grpc chainunaryinterceptors google golang org grpc server go go opentelemetry io contrib instrumentation google golang org grpc otelgrpc unaryserverinterceptor go opentelemetry io contrib instrumentation google golang org grpc otelgrpc interceptor go steps to reproduce simplified collector config with otlp receiver filter processor logging exporter receivers otlp processors filter test metrics include match type expr expressions haslabel some attribute exporters logging service pipelines metrics receivers processors exporters what did you expect to see what did you see instead for some telemetry we are seeing some log errors caller filterprocessor filter processor go error runtime error index out of range haslabel some attribute kind processor level error msg shouldkeepmetric failed name filter test pipeline metrics ts what version did you use version we have not seen any updates to the filterprocessor since that indicates it s fixed in future releases what config did you use config e g the yaml config file environment os linux additional context we followed the stack trace down to this code which expects a boolean but gets a string
1
152,127
23,919,218,679
IssuesEvent
2022-09-09 15:15:15
pulibrary/dpul
https://api.github.com/repos/pulibrary/dpul
closed
Make vertical alignment even across the top of a search result listing
enhancement visual design learning-friendly
## DPUL screenshot: ![Screen Shot 2021-11-16 at 9 29 04 AM](https://user-images.githubusercontent.com/845363/142004222-5e435c69-72bd-4bbc-bd40-1ec1e42135c4.png) ## orangelight screenshot: ![Screen Shot 2021-11-16 at 9 29 35 AM](https://user-images.githubusercontent.com/845363/142004242-ad74f3c5-2026-4030-b207-dbea34797a3f.png)
1.0
Make vertical alignment even across the top of a search result listing - ## DPUL screenshot: ![Screen Shot 2021-11-16 at 9 29 04 AM](https://user-images.githubusercontent.com/845363/142004222-5e435c69-72bd-4bbc-bd40-1ec1e42135c4.png) ## orangelight screenshot: ![Screen Shot 2021-11-16 at 9 29 35 AM](https://user-images.githubusercontent.com/845363/142004242-ad74f3c5-2026-4030-b207-dbea34797a3f.png)
non_process
make vertical alignment even across the top of a search result listing dpul screenshot orangelight screenshot
0
229,551
17,571,908,373
IssuesEvent
2021-08-14 21:53:38
fga-eps-mds/2021.1-Pro-Especies-Docs
https://api.github.com/repos/fga-eps-mds/2021.1-Pro-Especies-Docs
closed
Creation of the Medium-Fidelity Prototype
documentation EPS MDS Product Owner
## Creation of the Medium-Fidelity Prototype create the issue with a simple and descriptive name: ## Issue Description Use the low-fidelity prototypes that were made to develop the medium-fidelity prototype. ### Tasks: - [x] Creation of a medium-fidelity prototype with the features identified in the lean inception. ### Acceptance criteria - [x] Prototype in accordance with the lean inception features. - [x] Prototype following the ideas raised in the low-fidelity prototypes. - [x] Prototype validated by the group.
1.0
Creation of the Medium-Fidelity Prototype - ## Creation of the Medium-Fidelity Prototype create the issue with a simple and descriptive name: ## Issue Description Use the low-fidelity prototypes that were made to develop the medium-fidelity prototype. ### Tasks: - [x] Creation of a medium-fidelity prototype with the features identified in the lean inception. ### Acceptance criteria - [x] Prototype in accordance with the lean inception features. - [x] Prototype following the ideas raised in the low-fidelity prototypes. - [x] Prototype validated by the group.
non_process
creation of the medium fidelity prototype creation of the medium fidelity prototype create the issue with a simple and descriptive name issue description use the low fidelity prototypes that were made to develop the medium fidelity prototype tasks creation of a medium fidelity prototype with the features identified in the lean inception acceptance criteria prototype in accordance with the lean inception features prototype following the ideas raised in the low fidelity prototypes prototype validated by the group
0
21,117
28,080,970,126
IssuesEvent
2023-03-30 06:17:53
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
firestore: remove apiv1beta1
api: firestore type: process
This client has not been generated in years but the code was never cleaned up. It is marked as unstable so it should be safe to remove.
1.0
firestore: remove apiv1beta1 - This client has not been generated in years but the code was never cleaned up. It is marked as unstable so it should be safe to remove.
process
firestore remove this client has not been generated in years but the code was never cleaned up it is marked as unstable so it should be safe to remove
1
15,862
20,035,824,128
IssuesEvent
2022-02-02 11:46:38
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] Text scale > Slider button not visible on clicking 'Save for later' and opening the activity
Bug P2 iOS Process: Fixed Process: Tested dev
**Steps:** 1. Configure a text scale horizontal/vertical for question/form step 2. Publish updates 3. Enroll into study from iOS 4. Open the activity 5. Choose some of the options and click on Cancel 6. Choose save for later 7. Again open the activity 8. Observe slider button not visible **Actual:** Slider button not visible on clicking 'Save for later' and opening the activity **Expected:** Slider button should be visible Issue observed for 1. Question step - Text scale horizontal 2. Question step - Text scale Vertical 3. Form step - Text scale horizontal 4. Form step - Text scale Vertical App ID - GCPMOB001 Study ID - TriteraNatural **Refer video** https://user-images.githubusercontent.com/60386291/143011833-4c57808f-d870-4da4-9c2b-b12b82c22c97.MOV
2.0
[iOS] Text scale > Slider button not visible on clicking 'Save for later' and opening the activity - **Steps:** 1. Configure a text scale horizontal/vertical for question/form step 2. Publish updates 3. Enroll into study from iOS 4. Open the activity 5. Choose some of the options and click on Cancel 6. Choose save for later 7. Again open the activity 8. Observe slider button not visible **Actual:** Slider button not visible on clicking 'Save for later' and opening the activity **Expected:** Slider button should be visible Issue observed for 1. Question step - Text scale horizontal 2. Question step - Text scale Vertical 3. Form step - Text scale horizontal 4. Form step - Text scale Vertical App ID - GCPMOB001 Study ID - TriteraNatural **Refer video** https://user-images.githubusercontent.com/60386291/143011833-4c57808f-d870-4da4-9c2b-b12b82c22c97.MOV
process
text scale slider button not visible on clicking save for later and opening the activity steps configure a text scale horizontal vertical for question form step publish updates enroll into study from ios open the activity choose some of the options and click on cancel choose save for later again open the activity observe slider button not visible actual slider button not visible on clicking save for later and opening the activity expected slider button should be visible issue observed for question step text scale horizontal question step text scale vertical form step text scale horizontal form step text scale vertical app id study id triteranatural refer video
1
728,324
25,075,188,636
IssuesEvent
2022-11-07 15:01:44
cloud-native-toolkit/software-everywhere
https://api.github.com/repos/cloud-native-toolkit/software-everywhere
closed
Update CP4I modules to use gitops terraform provider
enhancement priority
The instructions to upgrade a module are provided here - https://ibm.box.com/s/q5s5jwsd8b37k9tbzt73hkmo4tb2tbb8 During this process, two changes will be made: 1. The null_resource block that populates the gitops repo will be replaced with the terraform provider resource 2. The unit test will be updated to use gitea for the gitops repo ### Integration - [x] [CP Platform Navigator](https://github.com/cloud-native-toolkit/terraform-gitops-cp-platform-navigator) https://github.com/cloud-native-toolkit/terraform-gitops-cp-platform-navigator/pull/19 - [x] [CP AppConnect](https://github.com/cloud-native-toolkit/terraform-gitops-cp-app-connect) https://github.com/cloud-native-toolkit/terraform-gitops-cp-app-connect/pull/18 - [x] [CP ACE Dashboard](https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-dashboard) https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-dashboard/pull/14 - [x] [CP ACE Designer](https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-designer) https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-designer/pull/10 - [x] [CP Datapower Operator] (github.com/cloud-native-toolkit/terraform-gitops-cp-datapower-operator) - [x] [CP Datapower] ( https://github.com/cloud-native-toolkit/terraform-gitops-cp-datapower/issues/18) - [X] [CP Event Steams operator] (https://github.com/cloud-native-toolkit/terraform-gitops-cp-es-operator/issues/15) - [x] [CP Event Streams] https://github.com/cloud-native-toolkit/terraform-gitops-cp-event-streams/issues/22 - [x] [CP MQ] https://github.com/cloud-native-toolkit/terraform-gitops-cp-mq/issues/19 - [x] [CP ApiConnect operator] https://github.com/cloud-native-toolkit/terraform-gitops-cp-apic-operator/issues/21 - [x] [CP ApiConnect] https://github.com/cloud-native-toolkit/terraform-gitops-cp-apic/issues/15 - [x] [CP Queue Manager] https://github.com/cloud-native-toolkit/terraform-gitops-cp-queue-manager/issues/16 - [x] [CP MQ Uniform Cluster] 
https://github.com/cloud-native-toolkit/terraform-gitops-cp-mq-uniform-cluster/issues/11
1.0
Update CP4I modules to use gitops terraform provider - The instructions to upgrade a module are provided here - https://ibm.box.com/s/q5s5jwsd8b37k9tbzt73hkmo4tb2tbb8 During this process, two changes will be made: 1. The null_resource block that populates the gitops repo will be replaced with the terraform provider resource 2. The unit test will be updated to use gitea for the gitops repo ### Integration - [x] [CP Platform Navigator](https://github.com/cloud-native-toolkit/terraform-gitops-cp-platform-navigator) https://github.com/cloud-native-toolkit/terraform-gitops-cp-platform-navigator/pull/19 - [x] [CP AppConnect](https://github.com/cloud-native-toolkit/terraform-gitops-cp-app-connect) https://github.com/cloud-native-toolkit/terraform-gitops-cp-app-connect/pull/18 - [x] [CP ACE Dashboard](https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-dashboard) https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-dashboard/pull/14 - [x] [CP ACE Designer](https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-designer) https://github.com/cloud-native-toolkit/terraform-gitops-cp-ace-designer/pull/10 - [x] [CP Datapower Operator] (github.com/cloud-native-toolkit/terraform-gitops-cp-datapower-operator) - [x] [CP Datapower] ( https://github.com/cloud-native-toolkit/terraform-gitops-cp-datapower/issues/18) - [X] [CP Event Steams operator] (https://github.com/cloud-native-toolkit/terraform-gitops-cp-es-operator/issues/15) - [x] [CP Event Streams] https://github.com/cloud-native-toolkit/terraform-gitops-cp-event-streams/issues/22 - [x] [CP MQ] https://github.com/cloud-native-toolkit/terraform-gitops-cp-mq/issues/19 - [x] [CP ApiConnect operator] https://github.com/cloud-native-toolkit/terraform-gitops-cp-apic-operator/issues/21 - [x] [CP ApiConnect] https://github.com/cloud-native-toolkit/terraform-gitops-cp-apic/issues/15 - [x] [CP Queue Manager] https://github.com/cloud-native-toolkit/terraform-gitops-cp-queue-manager/issues/16 - [x] [CP MQ 
Uniform Cluster] https://github.com/cloud-native-toolkit/terraform-gitops-cp-mq-uniform-cluster/issues/11
non_process
update modules to use gitops terraform provider the instructions to upgrade a module are provided here during this process two changes will be made the null resource block that populates the gitops repo will be replaced with the terraform provider resource the unit test will be updated to use gitea for the gitops repo integration github com cloud native toolkit terraform gitops cp datapower operator
0
22,235
30,784,785,493
IssuesEvent
2023-07-31 12:34:08
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
6809 (6x09_push.sinc) : Inaccurate PSHU implementation
Feature: Processor/MC6800 Status: Internal
**Description:** While decompiling some old firmware I noticed that the definition of the 6809 PSHU command in Ghidra most likely is affected by a copy and paste error. **Solution:** In [6x09_push.sinc](https://github.com/NationalSecurityAgency/ghidra/blob/master/Ghidra/Processors/MC6800/data/languages/6x09_push.sinc), in the definitions affecting the registers CC, A and B (listing starting with line 29): ``` pshu0: CC is CC & imm80=1 { Push1(S, CC); } pshu0: is imm80=0 { } pshu1: pshu0" "A is A & imm81=1 & pshu0 { Push1(S, A); } pshu1: pshu0 is imm81=0 & pshu0 { } pshu2: pshu1" "B is B & imm82=1 & pshu1 { Push1(S, B); } pshu2: pshu1 is imm82=0 & pshu1 { } pshu3: pshu2" "DP is DP & imm83=1 & pshu2 { Push1(U, DP); } pshu3: pshu2 is imm83=0 & pshu2 { } pshu4: pshu3" "X is X & imm84=1 & pshu3 { Push2(U, X); } pshu4: pshu3 is imm84=0 & pshu3 { } pshu5: pshu4" "Y is Y & imm85=1 & pshu4 { Push2(U, Y); } pshu5: pshu4 is imm85=0 & pshu4 { } pshu6: pshu5" "S is S & imm86=1 & pshu5 { Push2(U, S); } pshu6: pshu5 is imm86=0 & pshu5 { } pshu7: pshu6" "PC is PC & imm87=1 & pshu6 { local t:2 = inst_next; Push2(U, t); } pshu7: pshu6 is imm87=0 & pshu6 { } ``` lines 29, 31, 33 appear to be inconsistent and should possibly be changed to: ``` pshu0: CC is CC & imm80=1 { Push1(U, CC); } pshu0: is imm80=0 { } pshu1: pshu0" "A is A & imm81=1 & pshu0 { Push1(U, A); } pshu1: pshu0 is imm81=0 & pshu0 { } pshu2: pshu1" "B is B & imm82=1 & pshu1 { Push1(U, B); } pshu2: pshu1 is imm82=0 & pshu1 { } pshu3: pshu2" "DP is DP & imm83=1 & pshu2 { Push1(U, DP); } pshu3: pshu2 is imm83=0 & pshu2 { } pshu4: pshu3" "X is X & imm84=1 & pshu3 { Push2(U, X); } pshu4: pshu3 is imm84=0 & pshu3 { } pshu5: pshu4" "Y is Y & imm85=1 & pshu4 { Push2(U, Y); } pshu5: pshu4 is imm85=0 & pshu4 { } pshu6: pshu5" "S is S & imm86=1 & pshu5 { Push2(U, S); } pshu6: pshu5 is imm86=0 & pshu5 { } pshu7: pshu6" "PC is PC & imm87=1 & pshu6 { local t:2 = inst_next; Push2(U, t); } pshu7: pshu6 is imm87=0 & pshu6 { } ``` 
replacing three occurrences of S with U. **Environment:** - Ghidra Version: 10.3
1.0
6809 (6x09_push.sinc) : Inaccurate PSHU implementation - **Description:** While decompiling some old firmware I noticed that the definition of the 6809 PSHU command in Ghidra most likely is affected by a copy and paste error. **Solution:** In [6x09_push.sinc](https://github.com/NationalSecurityAgency/ghidra/blob/master/Ghidra/Processors/MC6800/data/languages/6x09_push.sinc), in the definitions affecting the registers CC, A and B (listing starting with line 29): ``` pshu0: CC is CC & imm80=1 { Push1(S, CC); } pshu0: is imm80=0 { } pshu1: pshu0" "A is A & imm81=1 & pshu0 { Push1(S, A); } pshu1: pshu0 is imm81=0 & pshu0 { } pshu2: pshu1" "B is B & imm82=1 & pshu1 { Push1(S, B); } pshu2: pshu1 is imm82=0 & pshu1 { } pshu3: pshu2" "DP is DP & imm83=1 & pshu2 { Push1(U, DP); } pshu3: pshu2 is imm83=0 & pshu2 { } pshu4: pshu3" "X is X & imm84=1 & pshu3 { Push2(U, X); } pshu4: pshu3 is imm84=0 & pshu3 { } pshu5: pshu4" "Y is Y & imm85=1 & pshu4 { Push2(U, Y); } pshu5: pshu4 is imm85=0 & pshu4 { } pshu6: pshu5" "S is S & imm86=1 & pshu5 { Push2(U, S); } pshu6: pshu5 is imm86=0 & pshu5 { } pshu7: pshu6" "PC is PC & imm87=1 & pshu6 { local t:2 = inst_next; Push2(U, t); } pshu7: pshu6 is imm87=0 & pshu6 { } ``` lines 29, 31, 33 appear to be inconsistent and should possibly be changed to: ``` pshu0: CC is CC & imm80=1 { Push1(U, CC); } pshu0: is imm80=0 { } pshu1: pshu0" "A is A & imm81=1 & pshu0 { Push1(U, A); } pshu1: pshu0 is imm81=0 & pshu0 { } pshu2: pshu1" "B is B & imm82=1 & pshu1 { Push1(U, B); } pshu2: pshu1 is imm82=0 & pshu1 { } pshu3: pshu2" "DP is DP & imm83=1 & pshu2 { Push1(U, DP); } pshu3: pshu2 is imm83=0 & pshu2 { } pshu4: pshu3" "X is X & imm84=1 & pshu3 { Push2(U, X); } pshu4: pshu3 is imm84=0 & pshu3 { } pshu5: pshu4" "Y is Y & imm85=1 & pshu4 { Push2(U, Y); } pshu5: pshu4 is imm85=0 & pshu4 { } pshu6: pshu5" "S is S & imm86=1 & pshu5 { Push2(U, S); } pshu6: pshu5 is imm86=0 & pshu5 { } pshu7: pshu6" "PC is PC & imm87=1 & pshu6 { local t:2 = inst_next; 
Push2(U, t); } pshu7: pshu6 is imm87=0 & pshu6 { } ``` replacing three occurrences of S with U. **Environment:** - Ghidra Version: 10.3
process
push sinc inaccurate pshu implementation description while decompiling some old firmware i noticed that the definition of the pshu command in ghidra most likely is affected by a copy and paste error solution in in the definitions affecting the registers cc a and b listing starting with line cc is cc s cc is a is a s a is b is b s b is dp is dp u dp is x is x u x is y is y u y is s is s u s is pc is pc local t inst next u t is lines appear to be inconsistent and should possibly be changed to cc is cc u cc is a is a u a is b is b u b is dp is dp u dp is x is x u x is y is y u y is s is s u s is pc is pc local t inst next u t is replacing three occurrences of s with u environment ghidra version
1
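The Ghidra record above argues that three `Push1` calls in 6x09_push.sinc target the S stack where PSHU should always target U. A minimal Python model of the push semantics (illustration only; this is not SLEIGH code, and the register order is simplified) shows the intended behavior, with every selected register landing on U:

```python
# Toy model of 6809 PSHU: push each register selected by the postbyte onto
# the U stack. (Illustrates the fix described above; not Ghidra code.)

def push1(stacks, sp, value):
    """Push one byte onto the named stack ('S' or 'U')."""
    stacks[sp].append(value)

def pshu(stacks, postbyte, regs):
    # Simplified subset of the postbyte bits: CC, A, B, DP.
    for name, bit in [("CC", 0x01), ("A", 0x02), ("B", 0x04), ("DP", 0x08)]:
        if postbyte & bit:
            push1(stacks, "U", regs[name])  # always U, never S

stacks = {"S": [], "U": []}
pshu(stacks, 0x07, {"CC": 0x11, "A": 0x22, "B": 0x33, "DP": 0x44})
print(stacks)  # {'S': [], 'U': [17, 34, 51]}
```

With the copy-paste bug, the CC, A, and B bytes would instead land on the S list while DP and the wider registers land on U, which is exactly the inconsistency the report points out.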
9,714
12,708,904,371
IssuesEvent
2020-06-23 11:23:40
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Defer reading images list to end of preprocessing
feature preprocess priority/medium stale
In the current preprocessing design, the list of images is generated during the gen-list target. However, this relies on listing image formats in a configuration file rather than on `<image>` element references, since key-defined images are not processed at that point. An alternative design would be to create the list of images during the image-metadata target, where image dimensions are read. This would allow getting an accurate list of images that doesn't rely on `@format` or file extensions.
1.0
Defer reading images list to end of preprocessing - In the current preprocessing design, the list of images is generated during the gen-list target. However, this relies on listing image formats in a configuration file rather than on `<image>` element references, since key-defined images are not processed at that point. An alternative design would be to create the list of images during the image-metadata target, where image dimensions are read. This would allow getting an accurate list of images that doesn't rely on `@format` or file extensions.
process
defer reading images list to end of preprocessing in the current preprocessing design list of images is generated during gen list target however this relies on listing image formats in a configuration file and not relying on element references since key defined images are not processes at that point an alternative design would be to create a list of images during image metadata target where image dimensions are read this would allow getting an accurate list of images that doesn t rely on format or file extensions
1
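The element-based alternative described in the DITA-OT record above, collecting images from `<image>` element references rather than from configured formats or extensions, can be sketched with stdlib XML parsing. The helper name and sample topic below are illustrative assumptions, not DITA-OT code:

```python
import xml.etree.ElementTree as ET

def collect_image_hrefs(topic_xml: str):
    """Collect @href values from <image> elements, regardless of @format
    or file extension (a sketch of the element-based approach)."""
    root = ET.fromstring(topic_xml)
    return [img.attrib["href"] for img in root.iter("image") if "href" in img.attrib]

topic = """<topic id="t1">
  <body>
    <p><image href="pics/diagram.no-ext"/></p>
    <p><image href="pics/logo.svg" format="svg"/></p>
  </body>
</topic>"""
print(collect_image_hrefs(topic))  # ['pics/diagram.no-ext', 'pics/logo.svg']
```

Because the scan keys on the element name rather than the extension, `pics/diagram.no-ext` is found even though no format list would match it.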
8,750
11,873,048,119
IssuesEvent
2020-03-26 16:42:22
digitalmethodsinitiative/4cat
https://api.github.com/repos/digitalmethodsinitiative/4cat
opened
Allow multi-word queries for word trees
enhancement processors
Word trees can only be made for single words (tokens) now, which is of limited utility in a lot of cases.
1.0
Allow multi-word queries for word trees - Word trees can only be made for single words (tokens) now, which is of limited utility in a lot of cases.
process
allow multi word queries for word trees word trees can only be made for single words tokens now which is of limited utility in a lot of cases
1
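The 4CAT enhancement above asks for word trees rooted on multi-word queries rather than single tokens. A minimal sketch of matching a multi-word root and counting the tokens that follow it (an illustration, not 4CAT's actual processor):

```python
from collections import defaultdict

def word_tree(sentences, root):
    """Map each occurrence of the multi-word `root` to the tokens that follow it."""
    root_tokens = root.lower().split()
    n = len(root_tokens)
    branches = defaultdict(int)
    for sentence in sentences:
        tokens = sentence.lower().split()
        # Slide a window of len(root_tokens) over the sentence.
        for i in range(len(tokens) - n):
            if tokens[i:i + n] == root_tokens:
                branches[tokens[i + n]] += 1
    return dict(branches)

sentences = [
    "climate change is real",
    "climate change is a hoax",
    "climate change denial persists",
]
print(word_tree(sentences, "climate change"))  # {'is': 2, 'denial': 1}
```

Each branch count here would become a child node in the rendered tree; recursing on `root + " " + branch` extends the tree one level deeper.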
3,631
6,665,741,351
IssuesEvent
2017-10-03 03:35:34
aspnet/IISIntegration
https://api.github.com/repos/aspnet/IISIntegration
closed
Find any functional differences between in-process mode and Kestrel for main request/response logic.
in-process
Though this may involve a bit of code duplication for now, the main control flow of the in-process mode matches Kestrel's HttpProtocol. We should find any functional differences and fix them! 😄
1.0
Find any functional differences between in-process mode and Kestrel for main request/response logic. - Though this may involve a bit of code duplication for now, the main control flow of the in-process mode matches Kestrel's HttpProtocol. We should find any functional differences and fix them! 😄
process
find any functional differences between in process mode and kestrel for main request response logic though this may involve a bit of code duplication for now the main control flow of the in process mode matches kestrel s httpprotocol we should find any function differences and fix them 😄
1
8,263
11,426,444,963
IssuesEvent
2020-02-03 21:57:50
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Add applicant status pills to non student landing page
Apply Process Approved Landing page Requirements Ready
Who: Non student applicants What: Add application status pills Why: to provide a visual cue and make the page consistent Acceptance Criteria: - Add applicant status pills to the non student landing page ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/af68fb76-1687-4e91-9926-18cbb2732f52) Related tickets: 4411 - Create applicant status pills 4412 - Display applicant status pills on student landing page 4329 - Applicant dashboard enhancements 4414 - Add applicant status pills to non student landing page
1.0
Add applicant status pills to non student landing page - Who: Non student applicants What: Add application status pills Why: to provide a visual cue and make the page consistent Acceptance Criteria: - Add applicant status pills to the non student landing page ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/af68fb76-1687-4e91-9926-18cbb2732f52) Related tickets: 4411 - Create applicant status pills 4412 - Display applicant status pills on student landing page 4329 - Applicant dashboard enhancements 4414 - Add applicant status pills to non student landing page
process
add applicant status pills to non student landing page who non student applicants what add application status pillls why to provide visual and make the page consistent acceptance criteria add applicant status pills to the non student landing page related tickets create applicant status pills display applicant status pills on student landing page applicant dashboard enhancements add applicant status pills to non student landing page
1
31,679
5,969,245,398
IssuesEvent
2017-05-30 19:53:31
Azure/Azure-Functions
https://api.github.com/repos/Azure/Azure-Functions
closed
StorageQueue Topic only covers string example
documentation P0 ready triaged
We should ensure that we have an example which covers the CloudQueueMessage implementation as well, or at least provide a link to some relevant content.
1.0
StorageQueue Topic only covers string example - We should ensure that we have an example which covers the CloudQueueMessage implementation as well, or at least provide a link to some relevant content.
non_process
storagequeue topic only covers string example we should ensure that we have an example which covers the cloudqueuemessage implementation as well or at least provide a link to some relevant content
0
18,211
24,269,783,194
IssuesEvent
2022-09-28 09:21:51
redhat-developer/vscode-java
https://api.github.com/repos/redhat-developer/vscode-java
closed
Support @Nullable annotations?
enhancement annotation-processing
It'd be cool if the VS Code Java extensions could recognize that a method is annotated with javax.annotation.Nullable and then warn if it notices potential null-exception usages. I have no idea if this is possible in this plugin or where the appropriate place to request it as a feature would be. But your plugins are generally awesome so I wanted to suggest it as a further improvement. Thanks for the great work :)
1.0
Support @Nullable annotations? - It'd be cool if the VS Code Java extensions could recognize that a method is annotated with javax.annotation.Nullable and then warn if it notices potential null-exception usages. I have no idea if this is possible in this plugin or where the appropriate place to request it as a feature would be. But your plugins are generally awesome so I wanted to suggest it as a further improvement. Thanks for the great work :)
process
support nullable annotations it d be cool if the vs code java extensions could recognize that a method is annotated with javax annotation nullable and then warn if it notices potential null exception usages i have no idea if this is possible in this plugin or where the appropriate place to request it as a feature would be but your plugins are generally awesome so i wanted to suggest it as a further improvement thanks for the great work
1
15,810
20,012,210,496
IssuesEvent
2022-02-01 08:14:49
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Error: Algorithm not found using grassprovider in PyQGIS standalone script
Processing Regression Bug
### What is the bug or the crash? Running a GRASS algorithm in a standalone PyQGIS-Script with QGIS 3.22, e.g. `processing.run("grass7:v.dissolve", ...)` I get `_core.QgsProcessingException: Error: Algorithm grass7:v.dissolve not found` When I then list all the registered algs ``` for alg in QgsApplication.processingRegistry().algorithms(): print("{}:{} --> {}".format(alg.provider().name(), alg.name(), alg.displayName())) ``` I get (after the entries of GDAL algs) a list of blank entries: ``` : --> : --> : --> : --> ... ``` ### Steps to reproduce the issue Run test.bat/test.py with QGIS 3.22: [test.py.zip](https://github.com/qgis/QGIS/files/7496276/test.py.zip) ### Versions QGIS-Version 3.22.0-Białowieża QGIS-Codeversion d9022691f1 Qt-Version 5.15.2 Python-Version 3.9.5 Kompiliert mit GDAL/OGR 3.3.2 Läuft mit GDAL/OGR 3.3.3 PROJ-Version 8.1.1 EPSG-Registraturdatenbankversion v10.038 (2021-10-21) GEOS-Version 3.10.0-CAPI-1.16.0 SQLite-Version 3.35.2 PDAL-Versio 2.3.0 PostgreSQL-Client-Version 13.0 SpatiaLite-Version 5.0.1 QWT-Version 6.1.3 QScintilla2-Version 2.11.5 BS-Version Windows 10 Version 2004 Aktive Python-Erweiterungen db_manager 0.1.20 grassprovider 2.12.99 MetaSearch 0.3.5 processing 2.12.99 sagaprovider 2.12.99 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context see [gis se](https://gis.stackexchange.com/questions/415644/error-algorithm-not-found-using-grassprovider-in-pyqgis-standalone-script)
1.0
Error: Algorithm not found using grassprovider in PyQGIS standalone script - ### What is the bug or the crash? Running a GRASS algorithm in a standalone PyQGIS-Script with QGIS 3.22, e.g. `processing.run("grass7:v.dissolve", ...)` I get `_core.QgsProcessingException: Error: Algorithm grass7:v.dissolve not found` When I then list all the registered algs ``` for alg in QgsApplication.processingRegistry().algorithms(): print("{}:{} --> {}".format(alg.provider().name(), alg.name(), alg.displayName())) ``` I get (after the entries of GDAL algs) a list of blank entries: ``` : --> : --> : --> : --> ... ``` ### Steps to reproduce the issue Run test.bat/test.py with QGIS 3.22: [test.py.zip](https://github.com/qgis/QGIS/files/7496276/test.py.zip) ### Versions QGIS-Version 3.22.0-Białowieża QGIS-Codeversion d9022691f1 Qt-Version 5.15.2 Python-Version 3.9.5 Kompiliert mit GDAL/OGR 3.3.2 Läuft mit GDAL/OGR 3.3.3 PROJ-Version 8.1.1 EPSG-Registraturdatenbankversion v10.038 (2021-10-21) GEOS-Version 3.10.0-CAPI-1.16.0 SQLite-Version 3.35.2 PDAL-Versio 2.3.0 PostgreSQL-Client-Version 13.0 SpatiaLite-Version 5.0.1 QWT-Version 6.1.3 QScintilla2-Version 2.11.5 BS-Version Windows 10 Version 2004 Aktive Python-Erweiterungen db_manager 0.1.20 grassprovider 2.12.99 MetaSearch 0.3.5 processing 2.12.99 sagaprovider 2.12.99 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context see [gis se](https://gis.stackexchange.com/questions/415644/error-algorithm-not-found-using-grassprovider-in-pyqgis-standalone-script)
process
error algorithm not found using grassprovider in pyqgis standalone script what is the bug or the crash running a grass algorithm in a standalone pyqgis script with qgis e g processing run v dissolve i get core qgsprocessingexception error algorithm v dissolve not found when i then list all the registered algs for alg in qgsapplication processingregistry algorithms print format alg provider name alg name alg displayname i get after the entries of gdal algs a list of blank entries steps to reproduce the issue run test bat test py with qgis versions qgis version białowieża qgis codeversion qt version python version kompiliert mit gdal ogr läuft mit gdal ogr proj version epsg registraturdatenbankversion geos version capi sqlite version pdal versio postgresql client version spatialite version qwt version version bs version windows version aktive python erweiterungen db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context see
1
18,141
24,186,342,035
IssuesEvent
2022-09-23 13:37:16
ni/grpc-labview
https://api.github.com/repos/ni/grpc-labview
closed
GitHub Actions for automated build and packaging of distribution
type: enhancement type: process improvement
Creating a distribution is currently a manual process of building libraries for each supported OS, moving everything to the correct locations, and creating the install packages. We should document the process and automate as much as possible.
1.0
GitHub Actions for automated build and packaging of distribution - Creating a distribution is currently a manual process of building libraries for each supported OS, moving everything to the correct locations, and creating the install packages. We should document the process and automate as much as possible.
process
github actions for automated build and packaging of distribution creating is currently a manual process on building libraries for each supported os moving everything to the correct locations and creating the install packages we should document the process and automated as much as possible
1
17,012
22,386,217,368
IssuesEvent
2022-06-17 00:51:36
figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
https://api.github.com/repos/figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
closed
Review FrontEnd deactivation of charging points
task process
Effort in HS-P (person-hours): Estimated: 1 Actual: 1 (@matiasmolinolo), 1 (@figlesias221)
1.0
Review FrontEnd deactivation of charging points - Effort in HS-P (person-hours): Estimated: 1 Actual: 1 (@matiasmolinolo), 1 (@figlesias221)
process
review frontend baja de puntos de carga esfuerzo en hs p estimado real matiasmolinolo
1
21,884
30,328,771,301
IssuesEvent
2023-07-11 03:47:10
didi/mpx
https://api.github.com/repos/didi/mpx
closed
[Bug report] When converted to web, page events such as onReachBottom and onPageScroll throw errors
processing
**Problem description** In a new project generated by the scaffold, with no other modifications, after conversion to web, page events such as onReachBottom and onPageScroll report Cannot set properties of undefined (setting 'pageEvents') ![image](https://github.com/didi/mpx/assets/20768176/68b9582d-33bc-447b-9a06-dcef075382f7) ![image](https://github.com/didi/mpx/assets/20768176/5a7e1f81-cf46-4da9-9b2d-cfbfc811d248) **Environment** At least the following: 1. OS type: MacOS 2. Mpx 2.8 or later **Minimal reproduction demo** [mpx-2.8-page-event.zip](https://github.com/didi/mpx/files/11513319/mpx-2.8-page-event.zip)
1.0
[Bug report] When converted to web, page events such as onReachBottom and onPageScroll throw errors - **Problem description** In a new project generated by the scaffold, with no other modifications, after conversion to web, page events such as onReachBottom and onPageScroll report Cannot set properties of undefined (setting 'pageEvents') ![image](https://github.com/didi/mpx/assets/20768176/68b9582d-33bc-447b-9a06-dcef075382f7) ![image](https://github.com/didi/mpx/assets/20768176/5a7e1f81-cf46-4da9-9b2d-cfbfc811d248) **Environment** At least the following: 1. OS type: MacOS 2. Mpx 2.8 or later **Minimal reproduction demo** [mpx-2.8-page-event.zip](https://github.com/didi/mpx/files/11513319/mpx-2.8-page-event.zip)
process
转 web时 onreachbottom , onpagescroll 等页面事件报错 问题描述 脚手架生成的新项目没做其他修改,转 web时 onreachbottom , onpagescroll 等页面时间会报 cannot set properties of undefined setting pageevents 环境信息描述 至少包含以下部分: 系统类型 macos mpx 以上 最简复现demo
1
8,850
11,951,852,415
IssuesEvent
2020-04-03 17:39:24
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
closed
TestRetentionPolicy: bucket deletion conflict on teardown
api: storage testing type: process
`test_system.TestRetentionPolicy` teardown occasionally fails because of presence of a blob with not yet discarded retention policy. Detected in [PR](https://github.com/googleapis/python-storage/pull/96). kokoro logs: ``` ==================================== ERRORS ==================================== _ ERROR at teardown of TestRetentionPolicy.test_bucket_w_default_event_based_hold _ self = <test_system.TestRetentionPolicy testMethod=test_bucket_w_default_event_based_hold> def tearDown(self): for bucket_name in self.case_buckets_to_delete: bucket = Config.CLIENT.bucket(bucket_name) > retry_429_harder(bucket.delete)() tests/system/test_system.py:1709: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) google/cloud/storage/bucket.py:1131: in delete client._connection.api_request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f41c87a7970> method = 'DELETE', path = '/b/w-def-ebh-1585825330597', query_params = {} data = None, content_type = None, headers = None, api_base_url = None api_version = None, expect_json = True, _target_object = None, timeout = 60 def api_request( self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None, timeout=_DEFAULT_TIMEOUT, ): ... if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E google.api_core.exceptions.Conflict: 409 DELETE https://storage.googleapis.com/storage/v1/b/w-def-ebh-1585825330597: The bucket you tried to delete was not empty. 
.nox/system-3-8/lib/python3.8/site-packages/google/cloud/_http.py:423: Conflict =================================== FAILURES =================================== __________ TestRetentionPolicy.test_bucket_w_default_event_based_hold __________ self = <test_system.TestRetentionPolicy testMethod=test_bucket_w_default_event_based_hold> def test_bucket_w_default_event_based_hold(self): ... self.assertIsNone(other.retention_expiration_time) > blob.delete() tests/system/test_system.py:1810: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/storage/blob.py:630: in delete self.bucket.delete_blob( google/cloud/storage/bucket.py:1189: in delete_blob client._connection.api_request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f41c87a7970> method = 'DELETE', path = '/b/w-def-ebh-1585825330597/o/test-blob' query_params = {'generation': 1585825333262407}, data = None content_type = None, headers = None, api_base_url = None, api_version = None expect_json = True, _target_object = None, timeout = 60 def api_request( self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None, timeout=_DEFAULT_TIMEOUT, ): ... if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E google.api_core.exceptions.Forbidden: 403 DELETE https://storage.googleapis.com/storage/v1/b/w-def-ebh-1585825330597/o/test-blob?generation=1585825333262407: Object 'w-def-ebh-1585825330597/test-blob' is under active Event-Based hold and cannot be deleted, overwritten or archived until hold is removed. ```
1.0
TestRetentionPolicy: bucket deletion conflict on teardown - `test_system.TestRetentionPolicy` teardown occasionally fails because of presence of a blob with not yet discarded retention policy. Detected in [PR](https://github.com/googleapis/python-storage/pull/96). kokoro logs: ``` ==================================== ERRORS ==================================== _ ERROR at teardown of TestRetentionPolicy.test_bucket_w_default_event_based_hold _ self = <test_system.TestRetentionPolicy testMethod=test_bucket_w_default_event_based_hold> def tearDown(self): for bucket_name in self.case_buckets_to_delete: bucket = Config.CLIENT.bucket(bucket_name) > retry_429_harder(bucket.delete)() tests/system/test_system.py:1709: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) google/cloud/storage/bucket.py:1131: in delete client._connection.api_request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f41c87a7970> method = 'DELETE', path = '/b/w-def-ebh-1585825330597', query_params = {} data = None, content_type = None, headers = None, api_base_url = None api_version = None, expect_json = True, _target_object = None, timeout = 60 def api_request( self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None, timeout=_DEFAULT_TIMEOUT, ): ... if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E google.api_core.exceptions.Conflict: 409 DELETE https://storage.googleapis.com/storage/v1/b/w-def-ebh-1585825330597: The bucket you tried to delete was not empty. 
.nox/system-3-8/lib/python3.8/site-packages/google/cloud/_http.py:423: Conflict =================================== FAILURES =================================== __________ TestRetentionPolicy.test_bucket_w_default_event_based_hold __________ self = <test_system.TestRetentionPolicy testMethod=test_bucket_w_default_event_based_hold> def test_bucket_w_default_event_based_hold(self): ... self.assertIsNone(other.retention_expiration_time) > blob.delete() tests/system/test_system.py:1810: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/storage/blob.py:630: in delete self.bucket.delete_blob( google/cloud/storage/bucket.py:1189: in delete_blob client._connection.api_request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f41c87a7970> method = 'DELETE', path = '/b/w-def-ebh-1585825330597/o/test-blob' query_params = {'generation': 1585825333262407}, data = None content_type = None, headers = None, api_base_url = None, api_version = None expect_json = True, _target_object = None, timeout = 60 def api_request( self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None, timeout=_DEFAULT_TIMEOUT, ): ... if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E google.api_core.exceptions.Forbidden: 403 DELETE https://storage.googleapis.com/storage/v1/b/w-def-ebh-1585825330597/o/test-blob?generation=1585825333262407: Object 'w-def-ebh-1585825330597/test-blob' is under active Event-Based hold and cannot be deleted, overwritten or archived until hold is removed. ```
process
testretentionpolicy bucket deletion conflict on teardown test system testretentionpolicy teardown occasionally fails because of presence of a blob with not yet discarded retention policy detected in kokoro logs errors error at teardown of testretentionpolicy test bucket w default event based hold self def teardown self for bucket name in self case buckets to delete bucket config client bucket bucket name retry harder bucket delete tests system test system py test utils test utils retry py in wrapped function return to wrap args kwargs google cloud storage bucket py in delete client connection api request self method delete path b w def ebh query params data none content type none headers none api base url none api version none expect json true target object none timeout def api request self method path query params none data none content type none headers none api base url none api version none expect json true target object none timeout default timeout if not response status code raise exceptions from http response response e google api core exceptions conflict delete the bucket you tried to delete was not empty nox system lib site packages google cloud http py conflict failures testretentionpolicy test bucket w default event based hold self def test bucket w default event based hold self self assertisnone other retention expiration time blob delete tests system test system py google cloud storage blob py in delete self bucket delete blob google cloud storage bucket py in delete blob client connection api request self method delete path b w def ebh o test blob query params generation data none content type none headers none api base url none api version none expect json true target object none timeout def api request self method path query params none data none content type none headers none api base url none api version none expect json true target object none timeout default timeout if not response status code raise exceptions from http response response e 
google api core exceptions forbidden delete object w def ebh test blob is under active event based hold and cannot be deleted overwritten or archived until hold is removed
1
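The traceback above goes through `retry_429_harder(bucket.delete)()`. A generic sketch of that style of retry wrapper with exponential backoff follows; the decorator name and parameters here are assumptions for illustration, not the actual `test_utils.retry` implementation:

```python
import time
from functools import wraps

def retry_on(exc_types, tries=4, base_delay=0.1):
    """Retry a callable on the given exception types with exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapped(*args, **kwargs):
            delay = base_delay
            for attempt in range(tries):
                try:
                    return fn(*args, **kwargs)
                except exc_types:
                    if attempt == tries - 1:
                        raise  # out of attempts: surface the last error
                    time.sleep(delay)
                    delay *= 2
        return wrapped
    return decorator

calls = {"n": 0}

@retry_on((ValueError,), tries=3, base_delay=0.0)
def flaky():
    # Fails twice (standing in for transient 429s), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient 429")
    return "ok"

print(flaky())  # ok
```

Note that no amount of retrying fixes the failure in this record: the 403 persists until the event-based hold on the blob is released, so the teardown has to clear the hold before deleting.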
4,471
7,334,292,383
IssuesEvent
2018-03-05 22:15:30
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
WorkGroup
active-directory cxp in-process triaged
Hi, in the video you've explained about the domain-joined laptop which opens the portal page and sign-in is not required, excellent!! :) I had a question on this: how about the workgroup machines? Will they still be able to access the apps and O365? (Forget about single sign-on, I am concerned about services: if any user whose identity is in on-prem AD and Azure AD is trying to authenticate on portal.office.com with his on-prem AD credentials using a workgroup PC, will he be able to sign in??) --- #### Document details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e4e5a986-727e-a83d-568f-304ec86caef3 * Version Independent ID: 8d384088-0ae0-d111-a10b-173720a9106b * [Content](https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication#) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/connect/active-directory-aadconnect-pass-through-authentication.md) * Service: active-directory
1.0
WorkGroup - Hi, in the video you've explained about the domain-joined laptop which opens the portal page and sign-in is not required, excellent!! :) I had a question on this: how about the workgroup machines? Will they still be able to access the apps and O365? (Forget about single sign-on, I am concerned about services: if any user whose identity is in on-prem AD and Azure AD is trying to authenticate on portal.office.com with his on-prem AD credentials using a workgroup PC, will he be able to sign in??) --- #### Document details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e4e5a986-727e-a83d-568f-304ec86caef3 * Version Independent ID: 8d384088-0ae0-d111-a10b-173720a9106b * [Content](https://docs.microsoft.com/en-gb/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication#) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/connect/active-directory-aadconnect-pass-through-authentication.md) * Service: active-directory
process
workgroup hi in the video you ve explained about the domain joined laptop which open portal page and sign in is not required excellent i had a question on this how about the workgroup machines will they still be able to access the apps and forget about single sign on am considered about services if any of the user whose identity is in on prem ad and azure ad but he s trying to authenticate on portal office com with his on prem ad credentials using workgroup pc will he be able to sign in document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service active directory
1
2,807
5,738,516,967
IssuesEvent
2017-04-23 05:05:44
SIMEXP/niak
https://api.github.com/repos/SIMEXP/niak
closed
resample mean functional at T1 resolution
enhancement preprocessing
For QC purposes it would be prettier to have functional images resampled at T1 resolution.
1.0
resample mean functional at T1 resolution - For QC purposes it would be prettier to have functional images resampled at T1 resolution.
process
resample mean functional at resolution for qc purposes it would be prettier to have functional images resampled at resolution
1
308,222
26,589,706,877
IssuesEvent
2023-01-23 07:19:01
codestates-seb/seb41_main_017
https://api.github.com/repos/codestates-seb/seb41_main_017
closed
[BE] Write tests for the payment feature
☔️ Test
## What feature are we building? - Test code that covers the payment-related functionality ## What needs to be done to implement it? - [x] Implement controller tests - [x] Implement service tests
1.0
[BE] Write tests for the payment feature - ## What feature are we building? - Test code that covers the payment-related functionality ## What needs to be done to implement it? - [x] Implement controller tests - [x] Implement service tests
non_process
결제 기능 테스트 작성 만들고자 하는 기능이 무엇인가요 결제 기능과 관련된 기능을 테스트하는 테스트 코드 해당 기능을 구현하기 위해 할 일이 무엇인가요 컨트롤러 테스트 구현 서비스 테스트 구현
0
235,015
7,733,869,172
IssuesEvent
2018-05-26 16:59:00
bitshares/bitshares-ui
https://api.github.com/repos/bitshares/bitshares-ui
closed
[1] Generated password doesn't show completely
bug high priority
**Describe the bug** Account creation page, the generated password is too long, or font is too big. **To Reproduce** Steps to reproduce the behavior: 1. Go to https://wallet.bitshares.org/#/create-account/password with a new incognito window **Expected behavior** The whole password will show on page **Screenshots** ![image](https://user-images.githubusercontent.com/9946777/40267535-67b4787c-5b5e-11e8-89fd-f843914e1c04.png) **Desktop (please complete the following information):** - OS: Win7 - Browser Chrome - Version latest **Additional context** Add any other context about the problem here.
1.0
[1] Generated password doesn't show completely - **Describe the bug** Account creation page, the generated password is too long, or font is too big. **To Reproduce** Steps to reproduce the behavior: 1. Go to https://wallet.bitshares.org/#/create-account/password with a new incognito window **Expected behavior** The whole password will show on page **Screenshots** ![image](https://user-images.githubusercontent.com/9946777/40267535-67b4787c-5b5e-11e8-89fd-f843914e1c04.png) **Desktop (please complete the following information):** - OS: Win7 - Browser Chrome - Version latest **Additional context** Add any other context about the problem here.
non_process
generated password doesn t show completely describe the bug account creation page the generated password is too long or font is too big to reproduce steps to reproduce the behavior go to with a new incognito window expected behavior the whole password will show on page screenshots desktop please complete the following information os browser chrome version latest additional context add any other context about the problem here
0
6,634
9,745,076,388
IssuesEvent
2019-06-03 08:45:43
mysterin/question_and_answer
https://api.github.com/repos/mysterin/question_and_answer
closed
BeanPostProcessor and BeanFactoryPostProcessor
BeanPostProcessor spring
### BeanPostProcessor A post-processor for beans, invoked before and after bean initialization ### BeanFactoryPostProcessor A post-processor for the beanFactory, invoked after the beanFactory completes its standard initialization; at that point the container has loaded all bean definitions but has not yet initialized any beans
1.0
BeanPostProcessor and BeanFactoryPostProcessor - ### BeanPostProcessor A post-processor for beans, invoked before and after bean initialization ### BeanFactoryPostProcessor A post-processor for the beanFactory, invoked after the beanFactory completes its standard initialization; at that point the container has loaded all bean definitions but has not yet initialized any beans
process
beanpostprocessor 和 beanfactorypostprocessor beanpostprocessor bean 的后置处理器 在 bean 初始化前后调用 beanfactorypostprocessor beanfactory 后置处理器 在 beanfactory 完成标准初始化后调用 此时容器已经加载了所有 bean 的定义 但是还没有初始化 bean
1
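The record above distinguishes when Spring's two hooks run. A tiny container analogue in Python (an analogue for illustration, not Spring code) makes the ordering concrete: factory post-processors see only the definitions, before any bean exists, while bean post-processors run around each bean's initialization:

```python
class Container:
    """Toy model of the two Spring hook points described above."""

    def __init__(self, definitions):
        self.definitions = dict(definitions)   # bean name -> constructor
        self.factory_post_processors = []      # run once, on definitions only
        self.bean_post_processors = []         # run around each bean's init
        self.beans = {}
        self.log = []

    def refresh(self):
        # BeanFactoryPostProcessor stage: definitions loaded, no beans yet.
        for fpp in self.factory_post_processors:
            fpp(self.definitions)
            self.log.append("factory-pp")
        # Instantiate and initialize each bean, with BeanPostProcessor
        # callbacks before and after initialization.
        for name, ctor in self.definitions.items():
            bean = ctor()
            for _ in self.bean_post_processors:
                self.log.append(f"before-init:{name}")
            self.log.append(f"init:{name}")
            for _ in self.bean_post_processors:
                self.log.append(f"after-init:{name}")
            self.beans[name] = bean

c = Container({"svc": lambda: object()})
c.factory_post_processors.append(lambda defs: None)
c.bean_post_processors.append(object())
c.refresh()
print(c.log)  # ['factory-pp', 'before-init:svc', 'init:svc', 'after-init:svc']
```

The log shows the key property from the record: the factory hook fires exactly once, before any bean is constructed, whereas the bean hooks fire per bean, around initialization.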
261,701
27,813,407,118
IssuesEvent
2023-03-18 11:57:44
LibrIT/passhport
https://api.github.com/repos/LibrIT/passhport
closed
CVE-2022-40898 (High) detected in wheel-0.37.1-py2.py3-none-any.whl
Mend: dependency security vulnerability
## CVE-2022-40898 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>wheel-0.37.1-py2.py3-none-any.whl</b></p></summary> <p>A built-package format for Python</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/27/d6/003e593296a85fd6ed616ed962795b2f87709c3eee2bca4f6d0fe55c6d00/wheel-0.37.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/27/d6/003e593296a85fd6ed616ed962795b2f87709c3eee2bca4f6d0fe55c6d00/wheel-0.37.1-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **wheel-0.37.1-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/85f9855d31f91d439049909da1bfb4711986a3da">85f9855d31f91d439049909da1bfb4711986a3da</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue discovered in Python Packaging Authority (PyPA) Wheel 0.37.1 and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli. 
<p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40898>CVE-2022-40898</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-12-23</p> <p>Fix Resolution: 0.38.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-40898 (High) detected in wheel-0.37.1-py2.py3-none-any.whl - ## CVE-2022-40898 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>wheel-0.37.1-py2.py3-none-any.whl</b></p></summary> <p>A built-package format for Python</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/27/d6/003e593296a85fd6ed616ed962795b2f87709c3eee2bca4f6d0fe55c6d00/wheel-0.37.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/27/d6/003e593296a85fd6ed616ed962795b2f87709c3eee2bca4f6d0fe55c6d00/wheel-0.37.1-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **wheel-0.37.1-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/85f9855d31f91d439049909da1bfb4711986a3da">85f9855d31f91d439049909da1bfb4711986a3da</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue discovered in Python Packaging Authority (PyPA) Wheel 0.37.1 and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli. 
<p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40898>CVE-2022-40898</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-12-23</p> <p>Fix Resolution: 0.38.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in wheel none any whl cve high severity vulnerability vulnerable library wheel none any whl a built package format for python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x wheel none any whl vulnerable library found in head commit a href found in base branch master vulnerability details an issue discovered in python packaging authority pypa wheel and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend
0
14,958
18,439,438,459
IssuesEvent
2021-10-14 16:15:27
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Support for application_name argument in connection URL for postgres
kind/feature process/candidate tech/engines tech/typescript team/migrations team/client topic: postgresql topic: engine
## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> I would like to identify which connection prisma is making to a postgres database, but it seems as though application_name is ignored in the connection URL. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Please support the application_name argument in the connection URL for postgres. ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. --> ![image](https://user-images.githubusercontent.com/42816933/122920430-4082fe80-d316-11eb-81f2-a18701949027.png) I believe that prisma is the first connection in the list, but even though the connection URL used has an application_name argument, it doesn't show up.
1.0
Support for application_name argument in connection URL for postgres - ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> I would like to identify which connection prisma is making to a postgres database, but it seems as though application_name is ignored in the connection URL. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Please support the application_name argument in the connection URL for postgres. ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. --> ![image](https://user-images.githubusercontent.com/42816933/122920430-4082fe80-d316-11eb-81f2-a18701949027.png) I believe that prisma is the first connection in the list, but even though the connection URL used has an application_name argument, it doesn't show up.
process
support for application name argument in connection url for postgres problem i would like to identify which connection prisma is making to a postgres database but it seems as though application name is ignored in the connection url suggested solution please support the application name argument in the connection url for postgres alternatives additional context i believe that prisma is the first connection in the list but even though the connection url used has an application name argument it doesn t show up
1
129,609
27,522,755,676
IssuesEvent
2023-03-06 16:01:42
ita-social-projects/StreetCode
https://api.github.com/repos/ita-social-projects/StreetCode
opened
Admin/Chronology block
User Story (Epic#2) Admin/New StreetCode
### Acceptance Criteria 1. Acceptance Criteria Admin can add a new history event via "+" button 1.1. System opens a modal window where an Admin can add a date (period), title and text. All items are mandatory 1.2. Admin can add no more than 400 symbols to text fields. Counter for "symbols left" is displayed 2. Admin can edit existing events via "pencil" icon 3. Admin can delete existing events via "trash" icon 4. All events are displayed as small blocks with titles on it
1.0
Admin/Chronology block - ### Acceptance Criteria 1. Acceptance Criteria Admin can add a new history event via "+" button 1.1. System opens a modal window where an Admin can add a date (period), title and text. All items are mandatory 1.2. Admin can add no more than 400 symbols to text fields. Counter for "symbols left" is displayed 2. Admin can edit existing events via "pencil" icon 3. Admin can delete existing events via "trash" icon 4. All events are displayed as small blocks with titles on it
non_process
admin chronology block acceptance criteria acceptance criteria admin can add a new history event via button system opens a modal window where an admin can add a date period title and text all items are mandatory admin can add no more than symbols to text fields counter for symbols left is displayed admin can edit existing events via pencil icon admin can delete existing events via trash icon all events are displayed as small blocks with titles on it
0
10,135
13,044,162,409
IssuesEvent
2020-07-29 03:47:32
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `JsonContainsPathSig` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `JsonContainsPathSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `JsonContainsPathSig` from TiDB - ## Description Port the scalar function `JsonContainsPathSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function jsoncontainspathsig from tidb description port the scalar function jsoncontainspathsig from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
1
164,149
25,927,900,266
IssuesEvent
2022-12-16 07:06:13
p2hacks2022/post-team05
https://api.github.com/repos/p2hacks2022/post-team05
closed
アプリケーションのUXを検討
Priority: Mid Design
# 概要 <!-- ここに機能の概要を書いてね --> - [x] 時空を超えている感覚を体験できているか - [x] かくれんぼをITで拡張できており楽しさを提供できているか - [x] 結果画面において各種ランキングの作成 # 関連するissue <!-- 関連するissueがあればここに書いてね --> <!-- ex. #{issue番号} --> # 参考 <!-- 参考資料などはここに書いてね --> # チェックリスト - [x] 優先度tagを設定しましたか? - [x] ラベルを設定しましたか?
1.0
アプリケーションのUXを検討 - # 概要 <!-- ここに機能の概要を書いてね --> - [x] 時空を超えている感覚を体験できているか - [x] かくれんぼをITで拡張できており楽しさを提供できているか - [x] 結果画面において各種ランキングの作成 # 関連するissue <!-- 関連するissueがあればここに書いてね --> <!-- ex. #{issue番号} --> # 参考 <!-- 参考資料などはここに書いてね --> # チェックリスト - [x] 優先度tagを設定しましたか? - [x] ラベルを設定しましたか?
non_process
アプリケーションのuxを検討 概要 時空を超えている感覚を体験できているか かくれんぼをitで拡張できており楽しさを提供できているか 結果画面において各種ランキングの作成 関連するissue 参考 チェックリスト 優先度tagを設定しましたか? ラベルを設定しましたか?
0
103,431
11,356,235,223
IssuesEvent
2020-01-24 22:07:15
deptofdefense/solo
https://api.github.com/repos/deptofdefense/solo
closed
Create diagram outlining CI/CD pipeline
LOE 2 documentation
**Is your feature request related to a problem? Please describe.** N/A **Describe the solution you'd like** An easily digestible showing the CI/CD pipeline steps in order for inclusion in SSAG. **Describe alternatives you've considered** N/A **Additional context** N/A
1.0
Create diagram outlining CI/CD pipeline - **Is your feature request related to a problem? Please describe.** N/A **Describe the solution you'd like** An easily digestible showing the CI/CD pipeline steps in order for inclusion in SSAG. **Describe alternatives you've considered** N/A **Additional context** N/A
non_process
create diagram outlining ci cd pipeline is your feature request related to a problem please describe n a describe the solution you d like an easily digestible showing the ci cd pipeline steps in order for inclusion in ssag describe alternatives you ve considered n a additional context n a
0
36,146
8,055,917,469
IssuesEvent
2018-08-02 10:52:19
open-learning-exchange/planet
https://api.github.com/repos/open-learning-exchange/planet
closed
ConfigurationComponent has too many functions
Code Quality in progress
I think we can split this file into a ConfigurationService with the functions that handle data manipulation moved there to make this file more manageable.
1.0
ConfigurationComponent has too many functions - I think we can split this file into a ConfigurationService with the functions that handle data manipulation moved there to make this file more manageable.
non_process
configurationcomponent has too many functions i think we can split this file into a configurationservice with the functions that handle data manipulation moved there to make this file more manageable
0
7,275
10,429,340,907
IssuesEvent
2019-09-17 02:22:17
pawn-lang/compiler
https://api.github.com/repos/pawn-lang/compiler
closed
Ignoring unknown directives.
area: pre-processor state: stale type: enhancement
In short, this should be allowed: ```pawn #if 0 #unknown_directive #endif ``` Currently that gives an error, when it should be skipped. Implementing this is trivially easy - I already did it: https://github.com/Y-Less/compiler/commit/267c8f8ffdb250ba855aefda6b52f84d5d8fac0a But there's one HUGE issue - what happens if that unknown and ignored directive from a future compiler version actually ends or somehow changes the `#if`? E.g: ```pawn #if 0 printf("never"); #fi ``` An older compiler that sees that will just ignore `#fi`, not end the `#if`, and probably reach the end of the file without doing anything. There are two solutions: 1. Ignore this problem. If you want to make your code backwards-compatible just don't rely on older compilers correctly ignoring (or not) important directives. 2. Enumerate all possible future control-flow directives there ever may be, this could be, but is not limited to: ```pawn #if #else #elseif #elif #ifdef #elifdef #elseifdef #endif #fi #el #ifndef #elseifndef #elifndef ``` Will we ever want `#iff`? `#switch`, `#esac` etc? We can't possibly know. Thus I think option 1 is fine - just don't be an idiot!
1.0
Ignoring unknown directives. - In short, this should be allowed: ```pawn #if 0 #unknown_directive #endif ``` Currently that gives an error, when it should be skipped. Implementing this is trivially easy - I already did it: https://github.com/Y-Less/compiler/commit/267c8f8ffdb250ba855aefda6b52f84d5d8fac0a But there's one HUGE issue - what happens if that unknown and ignored directive from a future compiler version actually ends or somehow changes the `#if`? E.g: ```pawn #if 0 printf("never"); #fi ``` An older compiler that sees that will just ignore `#fi`, not end the `#if`, and probably reach the end of the file without doing anything. There are two solutions: 1. Ignore this problem. If you want to make your code backwards-compatible just don't rely on older compilers correctly ignoring (or not) important directives. 2. Enumerate all possible future control-flow directives there ever may be, this could be, but is not limited to: ```pawn #if #else #elseif #elif #ifdef #elifdef #elseifdef #endif #fi #el #ifndef #elseifndef #elifndef ``` Will we ever want `#iff`? `#switch`, `#esac` etc? We can't possibly know. Thus I think option 1 is fine - just don't be an idiot!
process
ignoring unknown directives in short this should be allowed pawn if unknown directive endif currently that gives an error when it should be skipped implementing this is trivially easy i already did it but there s one huge issue what happens if that unknown and ignored directive from a future compiler version actually ends or somehow changes the if e g pawn if printf never fi an older compiler that sees that will just ignore fi not end the if and probably reach the end of the file without doing anything there are two solutions ignore this problem if you want to make your code backwards compatible just don t rely on older compilers correctly ignoring or not important directives enumerate all possible future control flow directives there ever may be this could be but is not limited to pawn if else elseif elif ifdef elifdef elseifdef endif fi el ifndef elseifndef elifndef will we ever want iff switch esac etc we can t possibly know thus i think option is fine just don t be an idiot
1
17,062
22,499,832,941
IssuesEvent
2022-06-23 10:46:52
stacc/stacc-flow-community
https://api.github.com/repos/stacc/stacc-flow-community
closed
flow-cli/process-tester:requries additional documentation
type: Enhancement module: flow-process status: In development team: flyt
**Relevant Flow module(s)** flow-cli process-tester **What do you want to accomplish? Please describe.** Would be nice with some more detailed documentation. (current: https://flow-docs.stacc.dev/framework/flow-process/10.6/testing) **Describe the suggested solution** E.g: yaml described a series of _types_ These should be listed in detail: - what do they do - what args Configuration: All process tests need to be listed in a single dir (not allowed to nest files) - would be nice with if we could permit a nested structure as this dir can easily become cluttered
1.0
flow-cli/process-tester:requries additional documentation - **Relevant Flow module(s)** flow-cli process-tester **What do you want to accomplish? Please describe.** Would be nice with some more detailed documentation. (current: https://flow-docs.stacc.dev/framework/flow-process/10.6/testing) **Describe the suggested solution** E.g: yaml described a series of _types_ These should be listed in detail: - what do they do - what args Configuration: All process tests need to be listed in a single dir (not allowed to nest files) - would be nice with if we could permit a nested structure as this dir can easily become cluttered
process
flow cli process tester requries additional documentation relevant flow module s flow cli process tester what do you want to accomplish please describe would be nice with some more detailed documentation current describe the suggested solution e g yaml described a series of types these should be listed in detail what do they do what args configuration all process tests need to be listed in a single dir not allowed to nest files would be nice with if we could permit a nested structure as this dir can easily become cluttered
1
13,614
16,195,316,409
IssuesEvent
2021-05-04 13:56:36
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
*Which Tasks* are supported by server (aka agentless) jobs?
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
Please link to or document the tasks that are supported by agentless jobs [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#server-jobs) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
*Which Tasks* are supported by server (aka agentless) jobs? - Please link to or document the tasks that are supported by agentless jobs [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#server-jobs) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4 * Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce * Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
which tasks are supported by server aka agentless jobs please link to or document the tasks that are supported by agentless jobs document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
663
3,133,820,999
IssuesEvent
2015-09-10 05:44:50
nodejs/node
https://api.github.com/repos/nodejs/node
closed
process.send is not documented
doc process
While I believe this is a public API, and part of how one is supposed to communicate between workers and child processes, I was very surprised to see the `process` page not listing the `process.send` function.
1.0
process.send is not documented - While I believe this is a public API, and part of how one is supposed to communicate between workers and child processes, I was very surprised to see the `process` page not listing the `process.send` function.
process
process send is not documented while i believe this is a public api and part of how one is supposed to communicate between workers and child processes i was very surprised to see the process page not listing the process send function
1
19,724
26,073,836,217
IssuesEvent
2022-12-24 07:08:03
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Crash on game launch if Transport Drones is enabled
bug mod:pypostprocessing crash compatibility
### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [X] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? ![image](https://user-images.githubusercontent.com/7324655/195976090-0492c80c-3ed2-42ba-8471-b4694e7b5545.png) ### Steps to reproduce 1. Enable PyMods 2. Enable Transport Drones 3. Observe error when game tries to load mods ### Additional context _No response_ ### Log file _No response_
1.0
Crash on game launch if Transport Drones is enabled - ### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [X] Crash - [ ] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? ![image](https://user-images.githubusercontent.com/7324655/195976090-0492c80c-3ed2-42ba-8471-b4694e7b5545.png) ### Steps to reproduce 1. Enable PyMods 2. Enable Transport Drones 3. Observe error when game tries to load mods ### Additional context _No response_ ### Log file _No response_
process
crash on game launch if transport drones is enabled mod source factorio mod portal which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem steps to reproduce enable pymods enable transport drones observe error when game tries to load mods additional context no response log file no response
1
233,439
18,987,788,039
IssuesEvent
2021-11-22 00:32:24
ILIYANGERMANOV/ivy-wallet
https://api.github.com/repos/ILIYANGERMANOV/ivy-wallet
closed
[UI Tests] Onboarding flow UI tests
dev tests
**Is the project building?** Yes/No **What would you like to improve?** I want to: - cover the Onboarding flow basic UI tests Because: - it'll prevent us from breaking it by accident - it'll open the opportunity to cover with UI tests other parts of the app
1.0
[UI Tests] Onboarding flow UI tests - **Is the project building?** Yes/No **What would you like to improve?** I want to: - cover the Onboarding flow basic UI tests Because: - it'll prevent us from breaking it by accident - it'll open the opportunity to cover with UI tests other parts of the app
non_process
onboarding flow ui tests is the project building yes no what would you like to improve i want to cover the onboarding flow basic ui tests because it ll prevent us from breaking it by accident it ll open the opportunity to cover with ui tests other parts of the app
0
326,950
9,962,987,350
IssuesEvent
2019-07-07 19:17:40
eads/desapariciones
https://api.github.com/repos/eads/desapariciones
opened
sync components on mapbox viewport change
category: site priority: high
moving around the viewport should allow for (throttled) data updates
1.0
sync components on mapbox viewport change - moving around the viewport should allow for (throttled) data updates
non_process
sync components on mapbox viewport change moving around the viewport should allow for throttled data updates
0
16,602
21,657,839,921
IssuesEvent
2022-05-06 15:47:20
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
CI: Builds from external contributors duplicate tests run across mutliple machines
process: tests CI E2E-core stage: fire watch
### Current behavior When a build is created by an external contributor, the tests are not split up across the machines due to them not having access to the record keys. We still parallelize the job (up to 7 machines) which means we are using up CI time and increasing the likelihood that the job fails due to flake. ### Desired behavior We should not be parallelizing jobs if the tests cannot be split up, or we should use our own logic for splitting the tests across machines ### Test code to reproduce Check out a build from an external contributor e.g. https://app.circleci.com/pipelines/github/cypress-io/cypress/37271/workflows/1ad10bf4-86c3-4bff-bad8-044b3952ad4b/jobs/1499238/parallel-runs/5?filterBy=ALL ### Cypress Version 9.6.0 ### Other _No response_
1.0
CI: Builds from external contributors duplicate tests run across mutliple machines - ### Current behavior When a build is created by an external contributor, the tests are not split up across the machines due to them not having access to the record keys. We still parallelize the job (up to 7 machines) which means we are using up CI time and increasing the likelihood that the job fails due to flake. ### Desired behavior We should not be parallelizing jobs if the tests cannot be split up, or we should use our own logic for splitting the tests across machines ### Test code to reproduce Check out a build from an external contributor e.g. https://app.circleci.com/pipelines/github/cypress-io/cypress/37271/workflows/1ad10bf4-86c3-4bff-bad8-044b3952ad4b/jobs/1499238/parallel-runs/5?filterBy=ALL ### Cypress Version 9.6.0 ### Other _No response_
process
ci builds from external contributors duplicate tests run across mutliple machines current behavior when a build is created by an external contributor the tests are not split up across the machines due to them not having access to the record keys we still parallelize the job up to machines which means we are using up ci time and increasing the likelihood that the job fails due to flake desired behavior we should not be parallelizing jobs if the tests cannot be split up or we should use our own logic for splitting the tests across machines test code to reproduce check out a build from an external contributor e g cypress version other no response
1
17,031
22,407,038,357
IssuesEvent
2022-06-18 05:37:28
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
metricstransformprocessor: for filtering with label use OTLP metric type instead of OpenCensus type
proc: metricstransformprocessor
**Description** This issue is for tracking an upcoming refactoring. Metrics transform processor using OpenCensus metric type. The community is planning to rewrite the whole plugin to use OTLP metric type instead of OpenCensusu metric type. @hossain-rayhan added a new feature on that processor which enables to filter metrics matching metric name as well as label values. As Rayhan is approaching his deadline, community decided to include this change on top of our existing codebase. Once we plan to rewrite the processor after the deadline, Rayhan will contribute to re-write his code and possibly help with other parts of the metrictransform processor. Here is the PR link #3201 .
1.0
metricstransformprocessor: for filtering with label use OTLP metric type instead of OpenCensus type - **Description** This issue is for tracking an upcoming refactoring. Metrics transform processor using OpenCensus metric type. The community is planning to rewrite the whole plugin to use OTLP metric type instead of OpenCensusu metric type. @hossain-rayhan added a new feature on that processor which enables to filter metrics matching metric name as well as label values. As Rayhan is approaching his deadline, community decided to include this change on top of our existing codebase. Once we plan to rewrite the processor after the deadline, Rayhan will contribute to re-write his code and possibly help with other parts of the metrictransform processor. Here is the PR link #3201 .
process
metricstransformprocessor for filtering with label use otlp metric type instead of opencensus type description this issue is for tracking an upcoming refactoring metrics transform processor using opencensus metric type the community is planning to rewrite the whole plugin to use otlp metric type instead of opencensusu metric type hossain rayhan added a new feature on that processor which enables to filter metrics matching metric name as well as label values as rayhan is approaching his deadline community decided to include this change on top of our existing codebase once we plan to rewrite the processor after the deadline rayhan will contribute to re write his code and possibly help with other parts of the metrictransform processor here is the pr link
1
16,524
21,531,428,197
IssuesEvent
2022-04-29 01:29:49
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Better Error message when loading native addons (process.dlopen)
feature request process stale
* **Version**: v12.16.1 * **Platform**: Microsoft Windows NT 10.0.17763.0 x64 * **Subsystem**: ### What steps will reproduce the bug? This issue was discovered while using electron.js. It also behaves the same way on node. I am using a Node Native C++ addon (.node) that has additional dependency libraries (.dlls). If one or more of the dependency libraries are missing, an error message is thrown. But the message is very misleading. The error message states: "Uncaught exception: Error: The specified module can not be found. [Module path] [Stack]" To reproduce: 1. Create an addon named addon (for example) with additional required libraries (example foo.dll) 2. load the module with foo.dll in the addon directory. Run node with a test.js file as follows: test.js: `var addon= require("./addon");` cli: `node test.js` 3. Rename foo.dll to _foo.dll. 4. Rerun `node test.js` 5. The exception is thrown ![image](https://user-images.githubusercontent.com/6537543/108427859-65018980-7203-11eb-81ab-8400d363d1ec.png) 6. Rename _foo.dll back to foo.dll 7. Rerun `node test.js` ### How often does it reproduce? Is there a required condition? Every time. ### What is the expected behavior? A better message should be thrown. Either foo.dll is not found, or, at least, one of dependency libraries is missing or not found. ### What do you see instead? See image above
1.0
Better Error message when loading native addons (process.dlopen) - * **Version**: v12.16.1 * **Platform**: Microsoft Windows NT 10.0.17763.0 x64 * **Subsystem**: ### What steps will reproduce the bug? This issue was discovered while using electron.js. It also behaves the same way on node. I am using a Node Native C++ addon (.node) that has additional dependency libraries (.dlls). If one or more of the dependency libraries are missing, an error message is thrown. But the message is very misleading. The error message states: "Uncaught exception: Error: The specified module can not be found. [Module path] [Stack]" To reproduce: 1. Create an addon named addon (for example) with additional required libraries (example foo.dll) 2. load the module with foo.dll in the addon directory. Run node with a test.js file as follows: test.js: `var addon= require("./addon");` cli: `node test.js` 3. Rename foo.dll to _foo.dll. 4. Rerun `node test.js` 5. The exception is thrown ![image](https://user-images.githubusercontent.com/6537543/108427859-65018980-7203-11eb-81ab-8400d363d1ec.png) 6. Rename _foo.dll back to foo.dll 7. Rerun `node test.js` ### How often does it reproduce? Is there a required condition? Every time. ### What is the expected behavior? A better message should be thrown. Either foo.dll is not found, or, at least, one of dependency libraries is missing or not found. ### What do you see instead? See image above
process
better error message when loading native addons process dlopen version platform microsoft windows nt subsystem what steps will reproduce the bug this issue was discovered while using electron js it also behaves the same way on node i am using a node native c addon node that has additional dependency libraries dlls if one or more of the dependency libraries are missing an error message is thrown but the message is very misleading the error message states uncaught exception error the specified module can not be found to reproduce create an addon named addon for example with additional required libraries example foo dll load the module with foo dll in the addon directory run node with a test js file as follows test js var addon require addon cli node test js rename foo dll to foo dll rerun node test js the exception is thrown rename foo dll back to foo dll rerun node test js how often does it reproduce is there a required condition every time what is the expected behavior a better message should be thrown either foo dll is not found or at least one of dependency libraries is missing or not found what do you see instead see image above
1
12,483
14,951,257,179
IssuesEvent
2021-01-26 14:10:58
laurent-daniel-utt/MeshIneBits
https://api.github.com/repos/laurent-daniel-utt/MeshIneBits
closed
add center points for prehensor orientation
2D view enhancement preprocessor
coordinates of center of mark for bit orientation should be added when preprocessing bits. at best these points should be visible on 2D interface.
1.0
add center points for prehensor orientation - coordinates of center of mark for bit orientation should be added when preprocessing bits. at best these points should be visible on 2D interface.
process
add center points for prehensor orientation coordinates of center of mark for bit orientation should be added when preprocessing bits at best these points should be visible on interface
1
11,753
14,589,675,549
IssuesEvent
2020-12-19 03:13:37
kwinborne/asm6809
https://api.github.com/repos/kwinborne/asm6809
opened
Assembler Directives
Preprocessor
Create a system to manage assembler directives present in a source file.
1.0
Assembler Directives - Create a system to manage assembler directives present in a source file.
process
assembler directives create a system to manage assembler directives present in a source file
1
273,197
23,736,276,430
IssuesEvent
2022-08-31 08:24:23
valory-xyz/open-autonomy
https://api.github.com/repos/valory-xyz/open-autonomy
closed
interdependency of skills and their tests
enhancement test
There are tests for skill where we import classes from other skill while we should not. I have not systematically assessed the situation, but will list some of my findings here to illustrate. https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_price_estimation_abci/test_rounds.py#L30-L33 https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_safe_deployment_abci/test_rounds.py#L26-L29 https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_transaction_settlement_abci/test_rounds.py#L37 the same thing goes for reuse of parts from tests. If not from `test_abstract_round.py` These should reside in some `base.py` or `helpers.py` script, e.g. https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_safe_deployment_abci/test_rounds.py#L42-L48 Suggestion: - test that concrete skills do not have any imports from other concrete skills - a test for the same in the tests modules of concrete tests
1.0
interdependency of skills and their tests - There are tests for skill where we import classes from other skill while we should not. I have not systematically assessed the situation, but will list some of my findings here to illustrate. https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_price_estimation_abci/test_rounds.py#L30-L33 https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_safe_deployment_abci/test_rounds.py#L26-L29 https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_transaction_settlement_abci/test_rounds.py#L37 the same thing goes for reuse of parts from tests. If not from `test_abstract_round.py` These should reside in some `base.py` or `helpers.py` script, e.g. https://github.com/valory-xyz/open-autonomy/blob/3ea8ff010d2b6ed55059ff43ee916a1371aff143/tests/test_skills/test_safe_deployment_abci/test_rounds.py#L42-L48 Suggestion: - test that concrete skills do not have any imports from other concrete skills - a test for the same in the tests modules of concrete tests
non_process
interdependency of skills and their tests there are tests for skill where we import classes from other skill while we should not i have not systematically assessed the situation but will list some of my findings here to illustrate the same thing goes for reuse of parts from tests if not from test abstract round py these should reside in some base py or helpers py script e g suggestion test that concrete skills do not have any imports from other concrete skills a test for the same in the tests modules of concrete tests
0
3,114
6,144,154,758
IssuesEvent
2017-06-27 08:11:49
JustusAdam/marvin
https://api.github.com/repos/JustusAdam/marvin
closed
Versatile external-scripts.json
component:preprocessor enhancement needs testing topic:api
## Current Status external-scripts.json only contains a list of modules to be imported: ```json [ "Marvin.Script.SomeScript", "Marvin.Script.AnotherScript", "SomeUserScript", "Some.Library.Script" ] ``` Each module must export a value called `script` of type `ScriptInit` which marvin-pp imports and add to the list of scripts. This limits external scripts to only one script per module and enforces the naming convention. ## Proposition A more versatile approach would be to let the user optionally specify the module and the imported scripts. I'm thinking of three forms for the specification, and they should be usable in parallel ### Form 1 - String ```json "Marvin.Scripts.TheScript" ``` Same functionality as today, should be a module which exports a script. ### Form 2 - Array Needs at least 2 elements. First element is interpreted as the module name, all following are names of exported script values. ```json ["Marvin.Scripts.Module", "script1", "script2"] ``` ### Form 3 - Object Needs at least a key `module` and either `script` (single script value) or `scripts` (array of script values) ```json { "module": "Marvin.Scripts.Module", "script": "my_script" } ``` Maybe we could also support a key `scriptlist` which should then be an exported value of typ `[ScriptInit]`? ## Example ```json [ "Marvin.Script.SomeScript", ["Marvin.Script.AnotherScript", "scriptValue1", "scriptValue2"], { "module": "SomeUserScript", "script": "aScript" }, { "module": "Some.Library.Script", "scripts": ["firstScript", "secondScript"] } ] ```
1.0
Versatile external-scripts.json - ## Current Status external-scripts.json only contains a list of modules to be imported: ```json [ "Marvin.Script.SomeScript", "Marvin.Script.AnotherScript", "SomeUserScript", "Some.Library.Script" ] ``` Each module must export a value called `script` of type `ScriptInit` which marvin-pp imports and add to the list of scripts. This limits external scripts to only one script per module and enforces the naming convention. ## Proposition A more versatile approach would be to let the user optionally specify the module and the imported scripts. I'm thinking of three forms for the specification, and they should be usable in parallel ### Form 1 - String ```json "Marvin.Scripts.TheScript" ``` Same functionality as today, should be a module which exports a script. ### Form 2 - Array Needs at least 2 elements. First element is interpreted as the module name, all following are names of exported script values. ```json ["Marvin.Scripts.Module", "script1", "script2"] ``` ### Form 3 - Object Needs at least a key `module` and either `script` (single script value) or `scripts` (array of script values) ```json { "module": "Marvin.Scripts.Module", "script": "my_script" } ``` Maybe we could also support a key `scriptlist` which should then be an exported value of typ `[ScriptInit]`? ## Example ```json [ "Marvin.Script.SomeScript", ["Marvin.Script.AnotherScript", "scriptValue1", "scriptValue2"], { "module": "SomeUserScript", "script": "aScript" }, { "module": "Some.Library.Script", "scripts": ["firstScript", "secondScript"] } ] ```
process
versatile external scripts json current status external scripts json only contains a list of modules to be imported json marvin script somescript marvin script anotherscript someuserscript some library script each module must export a value called script of type scriptinit which marvin pp imports and add to the list of scripts this limits external scripts to only one script per module and enforces the naming convention proposition a more versatile approach would be to let the user optionally specify the module and the imported scripts i m thinking of three forms for the specification and they should be usable in parallel form string json marvin scripts thescript same functionality as today should be a module which exports a script form array needs at least elements first element is interpreted as the module name all following are names of exported script values json form object needs at least a key module and either script single script value or scripts array of script values json module marvin scripts module script my script maybe we could also support a key scriptlist which should then be an exported value of typ example json marvin script somescript module someuserscript script ascript module some library script scripts
1
93,773
19,322,808,593
IssuesEvent
2021-12-14 08:12:25
zyantific/zydis
https://api.github.com/repos/zyantific/zydis
closed
Consider reworking branch types in the encoder
C-enhancement P-medium A-encoder
This is a continuation of the discussion [here](https://github.com/zyantific/zydis/pull/254#discussion_r739544494). Thinking about this for a bit longer and having gained some usage experience with the encoder, I withdraw my initial suggestion. When encoding branches, a user must currently explicitly choose a very specific branch type like `ZYDIS_ENCODABLE_BRANCH_TYPE_NEAR64` in the request, which is quite inconvenient for code generation. I think that it would be preferable to instead keep the decoder type as-is, and instead split the `ZydisEncodableBranchType` type into two separate fields in the encoder request such as: ```c enum ZydisBranchWidth { ZYDIS_BRANCH_WIDTH_NONE, ZYDIS_BRANCH_WIDTH_16, ZYDIS_BRANCH_WIDTH_32, ZYDIS_BRANCH_WIDTH_64 }; struct ZydisEncoderRequest { ZydisBranchType branch_type; // reusing the decoder type ZydisBranchWidth branch_width; // remaining fields omitted }; ``` `ZYDIS_BRANCH_TYPE_NONE` and `ZYDIS_BRANCH_WIDTH_NONE` would mean "infer" to the encoder. `ZYDIS_BRANCH_TYPE_NONE` specifically would mean "infer whether near or short", but never consider "far", which must be requested explicitly. I'd argue that in 99% of cases, users will want some variation of a near/short branch. In a semantic encoder interface, I'd further argue that it should be the encoder that picks a good branch variant for the provided offset. The upside with the current approach is that it's impossible to specify invalid combinations, reducing the amount of validation required. I assume that's what you meant with "needing a tuple type" in your comment [here](https://github.com/zyantific/zydis/pull/254#discussion_r739657745). However, I personally don't think this warrants the downsides. If such a tuple type is truly required, we should keep it internal to the encoder, abstracting it away from the user.
1.0
Consider reworking branch types in the encoder - This is a continuation of the discussion [here](https://github.com/zyantific/zydis/pull/254#discussion_r739544494). Thinking about this for a bit longer and having gained some usage experience with the encoder, I withdraw my initial suggestion. When encoding branches, a user must currently explicitly choose a very specific branch type like `ZYDIS_ENCODABLE_BRANCH_TYPE_NEAR64` in the request, which is quite inconvenient for code generation. I think that it would be preferable to instead keep the decoder type as-is, and instead split the `ZydisEncodableBranchType` type into two separate fields in the encoder request such as: ```c enum ZydisBranchWidth { ZYDIS_BRANCH_WIDTH_NONE, ZYDIS_BRANCH_WIDTH_16, ZYDIS_BRANCH_WIDTH_32, ZYDIS_BRANCH_WIDTH_64 }; struct ZydisEncoderRequest { ZydisBranchType branch_type; // reusing the decoder type ZydisBranchWidth branch_width; // remaining fields omitted }; ``` `ZYDIS_BRANCH_TYPE_NONE` and `ZYDIS_BRANCH_WIDTH_NONE` would mean "infer" to the encoder. `ZYDIS_BRANCH_TYPE_NONE` specifically would mean "infer whether near or short", but never consider "far", which must be requested explicitly. I'd argue that in 99% of cases, users will want some variation of a near/short branch. In a semantic encoder interface, I'd further argue that it should be the encoder that picks a good branch variant for the provided offset. The upside with the current approach is that it's impossible to specify invalid combinations, reducing the amount of validation required. I assume that's what you meant with "needing a tuple type" in your comment [here](https://github.com/zyantific/zydis/pull/254#discussion_r739657745). However, I personally don't think this warrants the downsides. If such a tuple type is truly required, we should keep it internal to the encoder, abstracting it away from the user.
non_process
consider reworking branch types in the encoder this is a continuation of the discussion thinking about this for a bit longer and having gained some usage experience with the encoder i withdraw my initial suggestion when encoding branches a user must currently explicitly choose a very specific branch type like zydis encodable branch type in the request which is quite inconvenient for code generation i think that it would be preferable to instead keep the decoder type as is and instead split the zydisencodablebranchtype type into two separate fields in the encoder request such as c enum zydisbranchwidth zydis branch width none zydis branch width zydis branch width zydis branch width struct zydisencoderrequest zydisbranchtype branch type reusing the decoder type zydisbranchwidth branch width remaining fields omitted zydis branch type none and zydis branch width none would mean infer to the encoder zydis branch type none specifically would mean infer whether near or short but never consider far which must be requested explicitly i d argue that in of cases users will want some variation of a near short branch in a semantic encoder interface i d further argue that it should be the encoder that picks a good branch variant for the provided offset the upside with the current approach is that it s impossible to specify invalid combinations reducing the amount of validation required i assume that s what you meant with needing a tuple type in your comment however i personally don t think this warrants the downsides if such a tuple type is truly required we should keep it internal to the encoder abstracting it away from the user
0
10,169
13,044,162,728
IssuesEvent
2020-07-29 03:47:35
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `ValuesJSON` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `ValuesJSON` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `ValuesJSON` from TiDB - ## Description Port the scalar function `ValuesJSON` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function valuesjson from tidb description port the scalar function valuesjson from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
1
15,362
19,532,458,969
IssuesEvent
2021-12-30 19:52:19
googleapis/google-cloud-ruby
https://api.github.com/repos/googleapis/google-cloud-ruby
closed
chore: Add Rake task for running rubocop against inline doc samples
type: process samples
Inline doc samples should be tested in the manual libraries using the same Rubocop configuration that is used for source code. refs: #11117
1.0
chore: Add Rake task for running rubocop against inline doc samples - Inline doc samples should be tested in the manual libraries using the same Rubocop configuration that is used for source code. refs: #11117
process
chore add rake task for running rubocop against inline doc samples inline doc samples should be tested in the manual libraries using the same rubocop configuration that is used for source code refs
1
151,799
19,665,378,895
IssuesEvent
2022-01-10 21:49:09
billmcchesney1/flowgate
https://api.github.com/repos/billmcchesney1/flowgate
opened
WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz, node-forge-0.7.5.tgz
security vulnerability
## WS-2022-0008 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-forge-0.10.0.tgz</b>, <b>node-forge-0.7.5.tgz</b></p></summary> <p> <details><summary><b>node-forge-0.10.0.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p> <p>Path to dependency file: /ui/package.json</p> <p>Path to vulnerable library: /flowgate/ui/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - :x: **node-forge-0.10.0.tgz** (Vulnerable Library) </details> <details><summary><b>node-forge-0.7.5.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p> <p>Path to dependency file: /ui/package.json</p> <p>Path to vulnerable library: /ui/node_modules/selfsigned/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.1.14.tgz (Root Library) - selfsigned-1.10.4.tgz - :x: **node-forge-0.7.5.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way. <p>Publish Date: 2022-01-08 <p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p> <p>Release Date: 2022-01-08</p> <p>Fix Resolution: node-forge - 1.0.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.10.0","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":false,"dependencyTree":"node-forge:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false},{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.7.5","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.1.14;selfsigned:1.10.4;node-forge:0.7.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2022-0008","vulnerabilityDetails":"The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.","vulnerabilityUrl":"https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz, node-forge-0.7.5.tgz - ## WS-2022-0008 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-forge-0.10.0.tgz</b>, <b>node-forge-0.7.5.tgz</b></p></summary> <p> <details><summary><b>node-forge-0.10.0.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p> <p>Path to dependency file: /ui/package.json</p> <p>Path to vulnerable library: /flowgate/ui/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - :x: **node-forge-0.10.0.tgz** (Vulnerable Library) </details> <details><summary><b>node-forge-0.7.5.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p> <p>Path to dependency file: /ui/package.json</p> <p>Path to vulnerable library: /ui/node_modules/selfsigned/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.1.14.tgz (Root Library) - selfsigned-1.10.4.tgz - :x: **node-forge-0.7.5.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way. <p>Publish Date: 2022-01-08 <p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p> <p>Release Date: 2022-01-08</p> <p>Fix Resolution: node-forge - 1.0.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.10.0","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":false,"dependencyTree":"node-forge:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false},{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.7.5","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.1.14;selfsigned:1.10.4;node-forge:0.7.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2022-0008","vulnerabilityDetails":"The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.","vulnerabilityUrl":"https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
ws medium detected in node forge tgz node forge tgz ws medium severity vulnerability vulnerable libraries node forge tgz node forge tgz node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file ui package json path to vulnerable library flowgate ui node modules node forge package json dependency hierarchy x node forge tgz vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file ui package json path to vulnerable library ui node modules selfsigned node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch master vulnerability details the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree node forge isminimumfixversionavailable true minimumfixversion node forge isbinary false packagetype javascript node js packagename node forge packageversion packagefilepaths istransitivedependency true dependencytree webpack dev server selfsigned node forge isminimumfixversionavailable true minimumfixversion node forge isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way vulnerabilityurl
0
53,386
6,719,260,298
IssuesEvent
2017-10-15 22:07:16
bnzk/djangocms-misc
https://api.github.com/repos/bnzk/djangocms-misc
closed
editmode_fallback: ordering and copy pasting not working...
bug design decision needed musthave
ordering and copy pasting is not working when done while fallback plugins are shown, as the plugin's placeholder id is the one from the original language the plugins were added in. this is hard! monkey patch ahead :(
1.0
editmode_fallback: ordering and copy pasting not working... - ordering and copy pasting is not working when done while fallback plugins are shown, as the plugin's placeholder id is the one from the original language the plugins were added in. this is hard! monkey patch ahead :(
non_process
editmode fallback ordering and copy pasting not working ordering and copy pasting is not working when done when fallback plugins are shown as the plugin s placeholder id is the one from the original language the plugins where added this is hard monkey patch ahead
0
19,228
25,377,903,762
IssuesEvent
2022-11-21 15:23:02
deepset-ai/haystack
https://api.github.com/repos/deepset-ai/haystack
opened
Enhance YAML validation for incompatible argument subsets
topic:pipeline topic:preprocessing
**Problem** Right now YAML validation checks for the presence and type of every argument of nodes, but does not verify whether all arguments together make sense. For example PreProcessor has parameters that need to be present only if others are set to some specific values, and FAISSDocumentStore can only be given a path to a save file or the parameters to create a new instance, but not both. This makes YAML validation somewhat incomplete, as a file that passed validation can still easily break for issues related to the YAML definition itself. **Solution** Nodes should declare the subsets of valid/invalid parameter values as class attributes, so that they can be checked without instantiating the node. This might be done by adding an `additional_constraints` class attribute to the nodes that require it, quite like `outgoing_edges`, specifying additional rules to be injected in the YAML schema at validation time. The validate method will then check for the presence of such a field in the classes it's validating and inject those snippets into the schema right before validation. @ArzelaAscoIi
1.0
Enhance YAML validation for incompatible argument subsets - **Problem** Right now YAML validation checks for the presence and type of every argument of nodes, but does not verify whether all arguments together make sense. For example PreProcessor has parameters that need to be present only if others are set to some specific values, and FAISSDocumentStore can only be given a path to a save file or the parameters to create a new instance, but not both. This makes YAML validation somewhat incomplete, as a file that passed validation can still easily break for issues related to the YAML definition itself. **Solution** Nodes should declare the subsets of valid/invalid parameter values as class attributes, so that they can be checked without instantiating the node. This might be done by adding an `additional_constraints` class attribute to the nodes that require it, quite like `outgoing_edges`, specifying additional rules to be injected in the YAML schema at validation time. The validate method will then check for the presence of such a field in the classes it's validating and inject those snippets into the schema right before validation. @ArzelaAscoIi
process
enhance yaml validation for incompatible argument subsets problem right now yaml validation checks for the presence and type of every argument of nodes but does not verify whether all arguments together make sense for example preprpcessor has parameters that need to be present only of others are set to some specific values and faissdocumetnstore can only be given a path to a save file or the parameters to create a new instance but not both this makes yaml validation somewhat incomplete as a file that passed validation can still easily break for issues related to the yaml definition itself solution nodes should declare the subsets of valid invalid parameter values as class attributes so that they can be checked without instantiating the node this might be done by adding a additional constraints class attribute to the nodes that require it quite like outgoing edges specifying additional rules to be injected in the yaml schema at validation time the validate method will then check for the presence of such field in the classes it s validating and inject those snippets into the schema right before validation arzelaascoii
1
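The `additional_constraints` idea described in the Haystack issue above can be sketched in plain Python. This is an illustrative sketch only: the attribute shape, the `validate_params` helper, and the parameter names are assumptions based on the issue text, not Haystack's actual implementation.

```python
# Illustrative sketch: nodes declare extra validation rules as a class
# attribute, and a validator applies them to a YAML-derived parameter dict
# without instantiating the node. Names here are hypothetical, not Haystack's API.

class BaseNode:
    additional_constraints = []  # list of (description, predicate) pairs

class FAISSDocumentStore(BaseNode):
    # Hypothetical rule: load from a save file OR create a new index, never both.
    additional_constraints = [
        ("faiss_index_path is mutually exclusive with sql_url",
         lambda p: not ("faiss_index_path" in p and "sql_url" in p)),
    ]

def validate_params(node_cls, params):
    """Check per-class constraints without instantiating the node."""
    return [desc
            for desc, ok in getattr(node_cls, "additional_constraints", [])
            if not ok(params)]

# Passing both mutually exclusive params yields a validation error.
errors = validate_params(FAISSDocumentStore,
                         {"faiss_index_path": "idx.faiss", "sql_url": "sqlite://"})
print(errors)
```

Because the rules live on the class, a YAML validator could collect them and reject a bad file before any node is constructed, mirroring the injection-at-validation-time approach the issue proposes.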
10,440
3,113,725,711
IssuesEvent
2015-09-03 01:45:59
piwik/piwik
https://api.github.com/repos/piwik/piwik
closed
Failing System test on Mysqli CI
c: Tests & QA not-in-changelog
* Reproduce: https://travis-ci.org/piwik/piwik/jobs/77998360 * Fails only in the MYSQLI AllTests CI build. ``` 1) Warning The data provider specified for Piwik\Tests\System\AutoSuggestAPITest::testApi is invalid. Unknown database 'piwik_tests' ``` maybe someone has some idea?
1.0
Failing System test on Mysqli CI - * Reproduce: https://travis-ci.org/piwik/piwik/jobs/77998360 * Fails only in the MYSQLI AllTests CI build. ``` 1) Warning The data provider specified for Piwik\Tests\System\AutoSuggestAPITest::testApi is invalid. Unknown database 'piwik_tests' ``` maybe someone has some idea?
non_process
failing system test on mysqli ci reproduce fails only in the mysqli alltests ci build warning the data provider specified for piwik tests system autosuggestapitest testapi is invalid unknown database piwik tests maybe someone has some idea
0
75,630
25,961,424,176
IssuesEvent
2022-12-18 23:28:30
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: DevToolsException with multiple windows and/or tabs.
I-defect needs-triaging
### What happened? I was trying to test blocking URLs with multiple windows or tabs. I am able to block the urls on different tabs and windows, but when I try to call close() on the devtools object it throws "org.openqa.selenium.WebDriverException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}}". First I got the idea that each window/tab has its own devtool session, but then how do I manage it as createSession*() methods return void. ### How can we reproduce the issue? ```shell package com.seleniumtest.devtools; import java.util.Optional; import io.github.bonigarcia.wdm.WebDriverManager; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WindowType; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.chromium.ChromiumDriver; import org.openqa.selenium.devtools.DevTools; import org.openqa.selenium.devtools.v107.network.Network; import com.google.common.collect.ImmutableList; public class DevToolBug { static DevTools devTools; static WebDriver driver = null; public static void main(String[] args) { try { WebDriverManager.chromedriver().setup(); driver = new ChromeDriver(); driver.manage().window().maximize(); devTools = ((ChromiumDriver) driver).getDevTools(); devTools.createSession(); devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); driver.switchTo().newWindow(WindowType.TAB); devTools.createSession(driver.getWindowHandle()); devTools.send(org.openqa.selenium.devtools.v105.network.Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); driver.switchTo().newWindow(WindowType.WINDOW); devTools.createSession(driver.getWindowHandle()); devTools.send(org.openqa.selenium.devtools.v105.network.Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); 
devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); } catch (Exception ex) { ex.printStackTrace(); } finally { if (devTools != null) { devTools.close(); // <-- Issue. } if (driver != null) { driver.quit(); } } } } ``` ### Relevant log output ```shell Task :DevToolBug.main() SLF4J: No SLF4J providers were found. SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details. Starting ChromeDriver 108.0.5359.71 (1e0e3868ee06e91ad636a874420e3ca3ae3756ac-refs/branch-heads/5359@{#1016}) on port 48713 Only local connections are allowed. Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe. ChromeDriver was started successfully. Dec 16, 2022 1:58:25 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch WARNING: Unable to find an exact match for CDP version 108, so returning the closest version found: 107 Exception in thread "main" org.openqa.selenium.devtools.DevToolsException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}} Build info: version: '4.6.0', revision: '79f1c02ae20' System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '17.0.1' Driver info: driver.version: unknown at org.openqa.selenium.devtools.Connection.sendAndWait(Connection.java:159) at org.openqa.selenium.devtools.DevTools.disconnectSession(DevTools.java:67) at org.openqa.selenium.devtools.DevTools.close(DevTools.java:60) at com.seleniumtest.devtools.DevToolBug.main(DevToolBug.java:50) Caused by: org.openqa.selenium.WebDriverException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}} Build info: version: '4.6.0', revision: '79f1c02ae20' System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '17.0.1' Driver info: driver.version: unknown at 
org.openqa.selenium.devtools.Connection.handle(Connection.java:234) at org.openqa.selenium.devtools.Connection.access$200(Connection.java:58) at org.openqa.selenium.devtools.Connection$Listener.lambda$onText$0(Connection.java:199) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ``` ### Operating System Windows 10 ### Selenium version 4.6.0 ### What are the browser(s) and version(s) where you see this issue? 108.0.5359.125 ### What are the browser driver(s) and version(s) where you see this issue? Chrome Driver 107 ### Are you using Selenium Grid? NA
1.0
[🐛 Bug]: DevToolsExcetion with multiple windows and/or tabs. - ### What happened? I was trying to test the block url with multiple windows or tab. I am able to block the urls on different tabs and windows, but when I try to call close() on the devtools object it throws "org.openqa.selenium.WebDriverException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}}". First I got the idea that each window/tab has it's own devtool session, but then how do I manage it as createSession*() methods returns void. ### How can we reproduce the issue? ```shell package com.seleniumtest.devtools; import java.util.Optional; import io.github.bonigarcia.wdm.WebDriverManager; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WindowType; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.chromium.ChromiumDriver; import org.openqa.selenium.devtools.DevTools; import org.openqa.selenium.devtools.v107.network.Network; import com.google.common.collect.ImmutableList; public class DevToolBug { static DevTools devTools; static WebDriver driver = null; public static void main(String[] args) { try { WebDriverManager.chromedriver().setup(); driver = new ChromeDriver(); driver.manage().window().maximize(); devTools = ((ChromiumDriver) driver).getDevTools(); devTools.createSession(); devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); driver.switchTo().newWindow(WindowType.TAB); devTools.createSession(driver.getWindowHandle()); devTools.send(org.openqa.selenium.devtools.v105.network.Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); driver.switchTo().newWindow(WindowType.WINDOW); devTools.createSession(driver.getWindowHandle()); 
devTools.send(org.openqa.selenium.devtools.v105.network.Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Network.setBlockedURLs(ImmutableList.of("*.css"))); driver.get("https://amazon.com"); } catch (Exception ex) { ex.printStackTrace(); } finally { if (devTools != null) { devTools.close(); // <-- Issue. } if (driver != null) { driver.quit(); } } } } ``` ### Relevant log output ```shell Task :DevToolBug.main() SLF4J: No SLF4J providers were found. SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details. Starting ChromeDriver 108.0.5359.71 (1e0e3868ee06e91ad636a874420e3ca3ae3756ac-refs/branch-heads/5359@{#1016}) on port 48713 Only local connections are allowed. Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe. ChromeDriver was started successfully. Dec 16, 2022 1:58:25 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch WARNING: Unable to find an exact match for CDP version 108, so returning the closest version found: 107 Exception in thread "main" org.openqa.selenium.devtools.DevToolsException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}} Build info: version: '4.6.0', revision: '79f1c02ae20' System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '17.0.1' Driver info: driver.version: unknown at org.openqa.selenium.devtools.Connection.sendAndWait(Connection.java:159) at org.openqa.selenium.devtools.DevTools.disconnectSession(DevTools.java:67) at org.openqa.selenium.devtools.DevTools.close(DevTools.java:60) at com.seleniumtest.devtools.DevToolBug.main(DevToolBug.java:50) Caused by: org.openqa.selenium.WebDriverException: {"id":19,"error":{"code":-32602,"message":"No session with given id"}} Build info: version: '4.6.0', revision: '79f1c02ae20' System info: os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', 
java.version: '17.0.1' Driver info: driver.version: unknown at org.openqa.selenium.devtools.Connection.handle(Connection.java:234) at org.openqa.selenium.devtools.Connection.access$200(Connection.java:58) at org.openqa.selenium.devtools.Connection$Listener.lambda$onText$0(Connection.java:199) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ``` ### Operating System Windows 10 ### Selenium version 4.6.0 ### What are the browser(s) and version(s) where you see this issue? 108.0.5359.125 ### What are the browser driver(s) and version(s) where you see this issue? Chrome Driver 107 ### Are you using Selenium Grid? NA
non_process
devtoolsexcetion with multiple windows and or tabs what happened i was trying to test the block url with multiple windows or tab i am able to block the urls on different tabs and windows but when i try to call close on the devtools object it throws org openqa selenium webdriverexception id error code message no session with given id first i got the idea that each window tab has it s own devtool session but then how do i manage it as createsession methods returns void how can we reproduce the issue shell package com seleniumtest devtools import java util optional import io github bonigarcia wdm webdrivermanager import org openqa selenium webdriver import org openqa selenium windowtype import org openqa selenium chrome chromedriver import org openqa selenium chromium chromiumdriver import org openqa selenium devtools devtools import org openqa selenium devtools network network import com google common collect immutablelist public class devtoolbug static devtools devtools static webdriver driver null public static void main string args try webdrivermanager chromedriver setup driver new chromedriver driver manage window maximize devtools chromiumdriver driver getdevtools devtools createsession devtools send network enable optional empty optional empty optional empty devtools send network setblockedurls immutablelist of css driver get driver switchto newwindow windowtype tab devtools createsession driver getwindowhandle devtools send org openqa selenium devtools network network enable optional empty optional empty optional empty devtools send network setblockedurls immutablelist of css driver get driver switchto newwindow windowtype window devtools createsession driver getwindowhandle devtools send org openqa selenium devtools network network enable optional empty optional empty optional empty devtools send network setblockedurls immutablelist of css driver get catch exception ex ex printstacktrace finally if devtools null devtools close issue if driver null driver quit 
relevant log output shell task devtoolbug main no providers were found defaulting to no operation nop logger implementation see for further details starting chromedriver refs branch heads on port only local connections are allowed please see for suggestions on keeping chromedriver safe chromedriver was started successfully dec pm org openqa selenium devtools cdpversionfinder findnearestmatch warning unable to find an exact match for cdp version so returning the closest version found exception in thread main org openqa selenium devtools devtoolsexception id error code message no session with given id build info version revision system info os name windows os arch os version java version driver info driver version unknown at org openqa selenium devtools connection sendandwait connection java at org openqa selenium devtools devtools disconnectsession devtools java at org openqa selenium devtools devtools close devtools java at com seleniumtest devtools devtoolbug main devtoolbug java caused by org openqa selenium webdriverexception id error code message no session with given id build info version revision system info os name windows os arch os version java version driver info driver version unknown at org openqa selenium devtools connection handle connection java at org openqa selenium devtools connection access connection java at org openqa selenium devtools connection listener lambda ontext connection java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java operating system windows selenium version what are the browser s and version s where you see this issue what are the browser driver s and version s where you see this issue chrome driver are you using selenium grid na
0
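The failure mode in the Selenium issue above (closing a CDP session id the browser no longer knows about) can be modeled with a small bookkeeping sketch: one session per window handle, closed exactly once. This is a plain-Python illustration of the pattern, not Selenium's `DevTools` API; the class and method names are assumptions.

```python
# Illustrative model of CDP session bookkeeping. Tracking one session id
# per window handle means close() always targets a session that still
# exists, instead of a stale id -> "No session with given id".
import itertools

class SessionRegistry:
    def __init__(self):
        self._ids = itertools.count(1)
        self._sessions = {}  # window handle -> session id

    def create_session(self, handle):
        """Reuse the session for a handle, or attach a new one."""
        if handle not in self._sessions:
            self._sessions[handle] = next(self._ids)
        return self._sessions[handle]

    def close(self, handle):
        """Detach the session for one handle; unknown handles are an error."""
        if handle not in self._sessions:
            raise KeyError("No session with given id")
        del self._sessions[handle]

registry = SessionRegistry()
s1 = registry.create_session("tab-1")
s2 = registry.create_session("tab-2")
registry.close("tab-1")
registry.close("tab-2")  # each window's session is closed exactly once
```

The sketch only models the lifecycle the issue describes; in real code the session ids would come from the browser's CDP endpoint rather than a counter.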
216,155
16,638,009,838
IssuesEvent
2021-06-04 03:10:57
zulip/zulip
https://api.github.com/repos/zulip/zulip
closed
Add user documentation on stream/topic best practices
area: documentation (user) priority: high
Per #675: > having advice on when you should split a stream into multiple streams, when you should use a new topic vs. replying to an existing one when sending a message, etc., would probably be useful. This is also the sort of thing that would work well as a tip sent to new users about a month after they join Zulip.
1.0
Add user documentation on stream/topic best practices - Per #675: > having advice on when you should split a stream into multiple streams, when you should use a new topic vs. replying to an existing one when sending a message, etc., would probably be useful. This is also the sort of thing that would work well as a tip sent to new users about a month after they join Zulip.
non_process
add user documentation on stream topic best practices per having advice on when you should split a stream into multiple streams when you should use a new topic vs replying to an existing one when sending a message etc would probably be useful this is also the sort of thing that would work well as a tip sent to new users about a month after they join zulip
0
13,825
16,589,608,961
IssuesEvent
2021-06-01 05:42:32
topcoder-platform/community-app
https://api.github.com/repos/topcoder-platform/community-app
closed
Hide recommender toggle for private communities
P2 ShapeupProcess challenge- recommender-tool
Hide recommender toggle for private communities
1.0
Hide recommender toggle for private communities - Hide recommender toggle for private communities
process
hide recommender toggle for private communities hide recommender toggle for private communities
1
154,419
19,722,747,899
IssuesEvent
2022-01-13 16:50:53
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Document Field Level Security on Frozen Tier not working correctly
>bug :Security/Authorization Team:Security
**Elasticsearch version** (`bin/elasticsearch --version`): 7.16.2 (running inside Elastic Cloud) **Plugins installed**: [] **JVM version** (`java -version`): n/a (Elastic Cloud) **OS version** (`uname -a` if on a Unix-like system): n/a (Elastic Cloud) **Description of the problem including expected versus actual behavior**: I seem to be running into an issue where Field Level Security throws a null exception when operating on frozen indices. I have a simple ILM policy for my index that moves data from Hot to Frozen after 12 hours. Within that data set, I would like to grant access to all fields except for a few specific ones that I would like to remain internal only. If I create a new user and grant them a custom role with field level security (allowing and denying specific fields), that user cannot search for anything beyond my hot data tier without getting the following exception back ``` "reason": "unsupported_operation_exception: null" ``` Within the data access role, If I disable `Grant access to specific fields`, the user can see and return results from the frozen tier. I will note that in my current environment, this role also is using a `Grant read privileges to specific documents` templated query, however that does not seem to have an impact on this issue. I have tried to produce a working example below that does not involve that privilege. **Steps to reproduce**: 1. Create a simple ILM policy that rolls data out of a hot index and into a frozen index 2. Index data into your ILM managed index so that you have both hot data AND frozen data within your cluster. If my ILM index alias was called `pulse`, my underlying indices are `pulse-0001`, `pulse-0002`, etc and the frozen indices look like `partial-pulse-0001`, `partial-pulse-0002`... etc 3. 
Create a new role that grants read access to you your desired indices, like below (I am using Kibana): <img width="1399" alt="Screen Shot 2021-12-22 at 1 24 46 PM" src="https://user-images.githubusercontent.com/3504194/147138332-7718ada6-0d8d-4775-b3a3-0311738e9706.png"> 4. Create a new user, and assign them typical access to a kibana space and grant them the data role from step 3 5. In a new private browser, log in as your new user and validate they have access to your frozen tier data and hot tier data, by viewing the Discover panel and looking at a timerange that spans hot and frozen tiers. (24 hrs in my case, see below as an example) <img width="1480" alt="Screen Shot 2021-12-22 at 1 29 14 PM" src="https://user-images.githubusercontent.com/3504194/147138759-6dd4c7d2-09a6-4078-967d-0829b4978c16.png"> 6. Go back to the role you created as an admin, and check the box `Grant access to specific fields`. Deny a field in your data (see below as an example) <img width="1193" alt="Screen Shot 2021-12-22 at 1 30 29 PM" src="https://user-images.githubusercontent.com/3504194/147138977-da783c36-626e-49bb-9295-35872e000412.png"> 7. Back as your new user, refresh the page to see shard exceptions being thrown for all your frozen indices (even though my time range is still set to 24 hours, I get exceptions for my entire frozen tier) <img width="1485" alt="Screen Shot 2021-12-22 at 1 31 46 PM" src="https://user-images.githubusercontent.com/3504194/147139213-4c47a230-47f7-4346-9414-03b32d4f497c.png"> Note in the screenshot above that my data is cut off arbitrarily, right near my frozen tier rollover line from my ILM policy 8. Investigate the exception further and you get the following <img width="861" alt="image" src="https://user-images.githubusercontent.com/3504194/147139308-6ab996fc-bdb1-467a-9250-061463e55a64.png"> 9. 
Clicking the tab for "Request" shows very normal request, and the "Response" tab looks like below: <img width="758" alt="Screen Shot 2021-12-22 at 1 35 18 PM" src="https://user-images.githubusercontent.com/3504194/147139434-383d6cec-7bbc-4081-9e72-8c9a1315740e.png"> 10. From the command line, I can search the cluster easily if I use a simple count search on a hot tier index ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse-000252/_count # returns {"count":<real number here>,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}} ``` But if I try to do an operation on the whole alias that includes frozen shards, I get shard exceptions. ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse/_count # returns {"count":<partial number here>,"_shards":{"total":248,"successful":14,"skipped":0,"failed":234,"failures":[{"shard":0,"index":"partial-pulse-000015","node":"XCRMYhdLR3KHuHxm74vlCg","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},{"shard":0,"index":"partial-pulse-000016","node":"9SNaA5L9TCqZ8l0BA39c1Q","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},{"shard":0,"index":"partial-pulse-000017","node":"XCRMYhdLR3KHuHxm74vlCg","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},..... ``` 11. For sanity you can go back to your role configuration and uncheck "Grant access to specific fields" and run that _count command again: ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse/_count {"count":<real number here>,"_shards":{"total":248,"successful":248,"skipped":0,"failed":0}} ``` and it works. 
I have also tried combing through the built-in [roles](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/built-in-roles.html) for Elastic, as well as the built-in [index privileges](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-privileges.html#privileges-list-indices) to see if there was anything related to the frozen tier specifically that causes this behavior, without much luck. **Provide logs (if relevant)**: I have tried to comb the logs inside of Elastic Cloud but the UI does not seem to be surfacing this exception where I can find it.
True
Document Field Level Security on Frozen Tier not working correctly - **Elasticsearch version** (`bin/elasticsearch --version`): 7.16.2 (running inside Elastic Cloud) **Plugins installed**: [] **JVM version** (`java -version`): n/a (Elastic Cloud) **OS version** (`uname -a` if on a Unix-like system): n/a (Elastic Cloud) **Description of the problem including expected versus actual behavior**: I seem to be running into an issue where Field Level Security throws a null exception when operating on frozen indices. I have a simple ILM policy for my index that moves data from Hot to Frozen after 12 hours. Within that data set, I would like to grant access to all fields except for a few specific ones that I would like to remain internal only. If I create a new user and grant them a custom role with field level security (allowing and denying specific fields), that user cannot search for anything beyond my hot data tier without getting the following exception back ``` "reason": "unsupported_operation_exception: null" ``` Within the data access role, If I disable `Grant access to specific fields`, the user can see and return results from the frozen tier. I will note that in my current environment, this role also is using a `Grant read privileges to specific documents` templated query, however that does not seem to have an impact on this issue. I have tried to produce a working example below that does not involve that privilege. **Steps to reproduce**: 1. Create a simple ILM policy that rolls data out of a hot index and into a frozen index 2. Index data into your ILM managed index so that you have both hot data AND frozen data within your cluster. If my ILM index alias was called `pulse`, my underlying indices are `pulse-0001`, `pulse-0002`, etc and the frozen indices look like `partial-pulse-0001`, `partial-pulse-0002`... etc 3. 
Create a new role that grants read access to you your desired indices, like below (I am using Kibana): <img width="1399" alt="Screen Shot 2021-12-22 at 1 24 46 PM" src="https://user-images.githubusercontent.com/3504194/147138332-7718ada6-0d8d-4775-b3a3-0311738e9706.png"> 4. Create a new user, and assign them typical access to a kibana space and grant them the data role from step 3 5. In a new private browser, log in as your new user and validate they have access to your frozen tier data and hot tier data, by viewing the Discover panel and looking at a timerange that spans hot and frozen tiers. (24 hrs in my case, see below as an example) <img width="1480" alt="Screen Shot 2021-12-22 at 1 29 14 PM" src="https://user-images.githubusercontent.com/3504194/147138759-6dd4c7d2-09a6-4078-967d-0829b4978c16.png"> 6. Go back to the role you created as an admin, and check the box `Grant access to specific fields`. Deny a field in your data (see below as an example) <img width="1193" alt="Screen Shot 2021-12-22 at 1 30 29 PM" src="https://user-images.githubusercontent.com/3504194/147138977-da783c36-626e-49bb-9295-35872e000412.png"> 7. Back as your new user, refresh the page to see shard exceptions being thrown for all your frozen indices (even though my time range is still set to 24 hours, I get exceptions for my entire frozen tier) <img width="1485" alt="Screen Shot 2021-12-22 at 1 31 46 PM" src="https://user-images.githubusercontent.com/3504194/147139213-4c47a230-47f7-4346-9414-03b32d4f497c.png"> Note in the screenshot above that my data is cut off arbitrarily, right near my frozen tier rollover line from my ILM policy 8. Investigate the exception further and you get the following <img width="861" alt="image" src="https://user-images.githubusercontent.com/3504194/147139308-6ab996fc-bdb1-467a-9250-061463e55a64.png"> 9. 
Clicking the tab for "Request" shows very normal request, and the "Response" tab looks like below: <img width="758" alt="Screen Shot 2021-12-22 at 1 35 18 PM" src="https://user-images.githubusercontent.com/3504194/147139434-383d6cec-7bbc-4081-9e72-8c9a1315740e.png"> 10. From the command line, I can search the cluster easily if I use a simple count search on a hot tier index ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse-000252/_count # returns {"count":<real number here>,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}} ``` But if I try to do an operation on the whole alias that includes frozen shards, I get shard exceptions. ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse/_count # returns {"count":<partial number here>,"_shards":{"total":248,"successful":14,"skipped":0,"failed":234,"failures":[{"shard":0,"index":"partial-pulse-000015","node":"XCRMYhdLR3KHuHxm74vlCg","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},{"shard":0,"index":"partial-pulse-000016","node":"9SNaA5L9TCqZ8l0BA39c1Q","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},{"shard":0,"index":"partial-pulse-000017","node":"XCRMYhdLR3KHuHxm74vlCg","reason":{"type":"unsupported_operation_exception","reason":"unsupported_operation_exception: null"}},..... ``` 11. For sanity you can go back to your role configuration and uncheck "Grant access to specific fields" and run that _count command again: ``` curl https://user:pass@my-cluster.es.us-east-1.aws.found.io:9243/pulse/_count {"count":<real number here>,"_shards":{"total":248,"successful":248,"skipped":0,"failed":0}} ``` and it works. 
I have also tried combing through the built in [roles](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/built-in-roles.html) for Elastic, as well as the built in [index privileges](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-privileges.html#privileges-list-indices) to see if there was anything related to the frozen tier specifically that causes this behavior, without much luck. **Provide logs (if relevant)**: I have tried to comb the logs inside of Elastic Cloud but the UI does not seem to be surfacing this exception where I can find it.
non_process
document field level security on frozen tier not working correctly elasticsearch version bin elasticsearch version running inside elastic cloud plugins installed jvm version java version n a elastic cloud os version uname a if on a unix like system n a elastic cloud description of the problem including expected versus actual behavior i seem to be running into an issue where field level security throws a null exception when operating on frozen indices i have a simple ilm policy for my index that moves data from hot to frozen after hours within that data set i would like to grant access to all fields except for a few specific ones that i would like to remain internal only if i create a new user and grant them a custom role with field level security allowing and denying specific fields that user cannot search for anything beyond my hot data tier without getting the following exception back reason unsupported operation exception null within the data access role if i disable grant access to specific fields the user can see and return results from the frozen tier i will note that in my current environment this role also is using a grant read privileges to specific documents templated query however that does not seem to have an impact on this issue i have tried to produce a working example below that does not involve that privilege steps to reproduce create a simple ilm policy that rolls data out of a hot index and into a frozen index index data into your ilm managed index so that you have both hot data and frozen data within your cluster if my ilm index alias was called pulse my underlying indices are pulse pulse etc and the frozen indices look like partial pulse partial pulse etc create a new role that grants read access to you your desired indices like below i am using kibana img width alt screen shot at pm src create a new user and assign them typical access to a kibana space and grant them the data role from step in a new private browser log in as your new user and 
validate they have access to your frozen tier data and hot tier data by viewing the discover panel and looking at a timerange that spans hot and frozen tiers hrs in my case see below as an example img width alt screen shot at pm src go back to the role you created as an admin and check the box grant access to specific fields deny a field in your data see below as an example img width alt screen shot at pm src back as your new user refresh the page to see shard exceptions being thrown for all your frozen indices even though my time range is still set to hours i get exceptions for my entire frozen tier img width alt screen shot at pm src note in the screenshot above that my data is cut off arbitrarily right near my frozen tier rollover line from my ilm policy investigate the exception further and you get the following img width alt image src clicking the tab for request shows very normal request and the response tab looks like below img width alt screen shot at pm src from the command line i can search the cluster easily if i use a simple count search on a hot tier index curl returns count shards total successful skipped failed but if i try to do an operation on the whole alias that includes frozen shards i get shard exceptions curl returns count shards total successful skipped failed failures shard index partial pulse node reason type unsupported operation exception reason unsupported operation exception null shard index partial pulse node reason type unsupported operation exception reason unsupported operation exception null shard index partial pulse node reason type unsupported operation exception reason unsupported operation exception null for sanity you can go back to your role configuration and uncheck grant access to specific fields and run that count command again curl count shards total successful skipped failed and it works i have also tried combing through the built in for elastic as well as the built in to see if there was anything related to the frozen 
tier specifically that causes this behavior without much luck provide logs if relevant i have tried to comb the logs inside of elastic cloud but the ui does not seem to be surfacing this exception where i can find it
0
228,050
18,152,608,748
IssuesEvent
2021-09-26 14:27:32
uorocketry/rocket-code-2020
https://api.github.com/repos/uorocketry/rocket-code-2020
closed
Rework "Pause Filling" to be a new event instead of abort
hotfire-test
It should be possible to resume filling, or continue on to ignition - [x] Update state machine - [x] Update config file in ground station
1.0
Rework "Pause Filling" to be a new event instead of abort - It should be possible to resume filling, or continue on to ignition - [x] Update state machine - [x] Update config file in ground station
non_process
rework pause filling to be a new event instead of abort it should be possible to resume filling or continue on to ignition update state machine update config file in ground station
0
2,564
4,792,704,174
IssuesEvent
2016-10-31 16:11:44
AdguardTeam/AdguardForWindows
https://api.github.com/repos/AdguardTeam/AdguardForWindows
reopened
Allow user to select filters update check period
Service
Just like it's done in KIS for instance. Manual Every day/hour/week/month at XX:XX Automatic .... **Section:** Ad Blocker **Setting name:** Filters update period **Values:** Default 1 hour 3 hours 6 hours 12 hours 24 hours 48 hours **Default** means that filter "Expires" attribute defines update check period.
1.0
Allow user to select filters update check period - Just like it's done in KIS for instance. Manual Every day/hour/week/month at XX:XX Automatic .... **Section:** Ad Blocker **Setting name:** Filters update period **Values:** Default 1 hour 3 hours 6 hours 12 hours 24 hours 48 hours **Default** means that filter "Expires" attribute defines update check period.
non_process
allow user to select filters update check period just like it s done in kis for instance manual every day hour week month at xx xx automatic section ad blocker setting name filters update period values default hour hours hours hours hours hours default means that filter expires attribute defines update check period
0
248,553
26,790,047,056
IssuesEvent
2023-02-01 07:42:38
thinktecture-labs/cloud-native-sample
https://api.github.com/repos/thinktecture-labs/cloud-native-sample
closed
Dockle report for Gateway
containers security
The container image for Gateway (`gateway:20e7b23263cd9c9f0f76d275c56b3d5d87619e61`) was scanned during CI for 20e7b23263cd9c9f0f76d275c56b3d5d87619e61 using dockle. Please see the findings mentioned below: ## Dockle results ```json { "summary": { "fatal": 1, "warn": 0, "info": 3, "skip": 0, "pass": 12 }, "details": [ { "code": "CIS-DI-0009", "title": "Use COPY instead of ADD in Dockerfile", "level": "FATAL", "alerts": [ "Use COPY : /bin/sh -c #(nop) ADD file:5dfb5928594745d5de89f61e7104168b42696281bc24f1ce9047bbb688387771 in / " ] }, { "code": "CIS-DI-0005", "title": "Enable Content trust for Docker", "level": "INFO", "alerts": [ "export DOCKER_CONTENT_TRUST=1 before docker pull/build" ] }, { "code": "CIS-DI-0006", "title": "Add HEALTHCHECK instruction to the container image", "level": "INFO", "alerts": [ "not found HEALTHCHECK statement" ] }, { "code": "CIS-DI-0008", "title": "Confirm safety of setuid/setgid files", "level": "INFO", "alerts": [ "setuid file: urwxr-xr-x usr/bin/umount", "setuid file: urwxr-xr-x usr/bin/gpasswd", "setgid file: grwxr-xr-x usr/sbin/unix_chkpwd", "setuid file: urwxr-xr-x usr/bin/chsh", "setuid file: urwxr-xr-x usr/bin/su", "setuid file: urwxr-xr-x usr/bin/chfn", "setuid file: urwxr-xr-x usr/bin/mount", "setuid file: urwxr-xr-x usr/bin/passwd", "setuid file: urwxr-xr-x usr/bin/newgrp", "setgid file: grwxr-xr-x usr/bin/wall", "setgid file: grwxr-xr-x usr/bin/chage", "setgid file: grwxr-xr-x usr/sbin/pam_extrausers_chkpwd", "setgid file: grwxr-xr-x usr/bin/expiry" ] } ] } ``` Container image has **not been pushed** to Azure Container Registry.
True
Dockle report for Gateway - The container image for Gateway (`gateway:20e7b23263cd9c9f0f76d275c56b3d5d87619e61`) was scanned during CI for 20e7b23263cd9c9f0f76d275c56b3d5d87619e61 using dockle. Please see the findings mentioned below: ## Dockle results ```json { "summary": { "fatal": 1, "warn": 0, "info": 3, "skip": 0, "pass": 12 }, "details": [ { "code": "CIS-DI-0009", "title": "Use COPY instead of ADD in Dockerfile", "level": "FATAL", "alerts": [ "Use COPY : /bin/sh -c #(nop) ADD file:5dfb5928594745d5de89f61e7104168b42696281bc24f1ce9047bbb688387771 in / " ] }, { "code": "CIS-DI-0005", "title": "Enable Content trust for Docker", "level": "INFO", "alerts": [ "export DOCKER_CONTENT_TRUST=1 before docker pull/build" ] }, { "code": "CIS-DI-0006", "title": "Add HEALTHCHECK instruction to the container image", "level": "INFO", "alerts": [ "not found HEALTHCHECK statement" ] }, { "code": "CIS-DI-0008", "title": "Confirm safety of setuid/setgid files", "level": "INFO", "alerts": [ "setuid file: urwxr-xr-x usr/bin/umount", "setuid file: urwxr-xr-x usr/bin/gpasswd", "setgid file: grwxr-xr-x usr/sbin/unix_chkpwd", "setuid file: urwxr-xr-x usr/bin/chsh", "setuid file: urwxr-xr-x usr/bin/su", "setuid file: urwxr-xr-x usr/bin/chfn", "setuid file: urwxr-xr-x usr/bin/mount", "setuid file: urwxr-xr-x usr/bin/passwd", "setuid file: urwxr-xr-x usr/bin/newgrp", "setgid file: grwxr-xr-x usr/bin/wall", "setgid file: grwxr-xr-x usr/bin/chage", "setgid file: grwxr-xr-x usr/sbin/pam_extrausers_chkpwd", "setgid file: grwxr-xr-x usr/bin/expiry" ] } ] } ``` Container image has **not been pushed** to Azure Container Registry.
non_process
dockle report for gateway the container image for gateway gateway was scanned during ci for using dockle please see the findings mentioned below dockle results json summary fatal warn info skip pass details code cis di title use copy instead of add in dockerfile level fatal alerts use copy bin sh c nop add file in code cis di title enable content trust for docker level info alerts export docker content trust before docker pull build code cis di title add healthcheck instruction to the container image level info alerts not found healthcheck statement code cis di title confirm safety of setuid setgid files level info alerts setuid file urwxr xr x usr bin umount setuid file urwxr xr x usr bin gpasswd setgid file grwxr xr x usr sbin unix chkpwd setuid file urwxr xr x usr bin chsh setuid file urwxr xr x usr bin su setuid file urwxr xr x usr bin chfn setuid file urwxr xr x usr bin mount setuid file urwxr xr x usr bin passwd setuid file urwxr xr x usr bin newgrp setgid file grwxr xr x usr bin wall setgid file grwxr xr x usr bin chage setgid file grwxr xr x usr sbin pam extrausers chkpwd setgid file grwxr xr x usr bin expiry container image has not been pushed to azure container registry
0
421,944
12,264,004,743
IssuesEvent
2020-05-07 02:50:15
starcoinorg/starcoin
https://api.github.com/repos/starcoinorg/starcoin
closed
start chain fail memory problem
bug priority:high
When I start chain first time,I got this error: 2020-04-22T14:47:14.688720+08:00 ERROR move_vm_state::data_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/state/src/data_cache.rs::47 - [VM] Error getting data from storage for AccessPath { address: 00000000000000000000000000000000, path: 00c078a5b1bbee86b7d44048822b34e011a098a2e2ce8c95fa7242fc730bd226f1 } 2020-04-22T14:47:14.688831+08:00 ERROR move_vm_runtime::code_cache::module_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/runtime/src/code_cache/module_cache.rs::277 - [VM] Error fetching module with id ModuleId { address: 00000000000000000000000000000000, name: Identifier("GasSchedule") } 2020-04-22T14:47:14.689625+08:00 ERROR move_vm_state::data_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/state/src/data_cache.rs::47 - [VM] Error getting data from storage for AccessPath { address: 00000000000000000000000000000000, path: 0098e22046b4f74322fabf6b24c2c213ba337554b3c47818c691b5c3f2f57ccc39 } 2020-04-22T14:47:14.689730+08:00 ERROR move_vm_runtime::code_cache::module_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/runtime/src/code_cache/module_cache.rs::277 - [VM] Error fetching module with id ModuleId { address: 00000000000000000000000000000000, name: Identifier("LibraBlock") } starcoin(59662,0x700007c69000) malloc: Incorrect checksum for freed object 0x7f894c552820: probably modified after being freed. Corrupt value: 0x7f896c60100 starcoin(59662,0x700007c69000) malloc: *** set a breakpoint in malloc_error_break to debug
1.0
start chain fail memory problem - When I start chain first time,I got this error: 2020-04-22T14:47:14.688720+08:00 ERROR move_vm_state::data_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/state/src/data_cache.rs::47 - [VM] Error getting data from storage for AccessPath { address: 00000000000000000000000000000000, path: 00c078a5b1bbee86b7d44048822b34e011a098a2e2ce8c95fa7242fc730bd226f1 } 2020-04-22T14:47:14.688831+08:00 ERROR move_vm_runtime::code_cache::module_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/runtime/src/code_cache/module_cache.rs::277 - [VM] Error fetching module with id ModuleId { address: 00000000000000000000000000000000, name: Identifier("GasSchedule") } 2020-04-22T14:47:14.689625+08:00 ERROR move_vm_state::data_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/state/src/data_cache.rs::47 - [VM] Error getting data from storage for AccessPath { address: 00000000000000000000000000000000, path: 0098e22046b4f74322fabf6b24c2c213ba337554b3c47818c691b5c3f2f57ccc39 } 2020-04-22T14:47:14.689730+08:00 ERROR move_vm_runtime::code_cache::module_cache::/Users/fanngyuan/.cargo/git/checkouts/libra-05693b40248a74d2/a65fce0/language/move-vm/runtime/src/code_cache/module_cache.rs::277 - [VM] Error fetching module with id ModuleId { address: 00000000000000000000000000000000, name: Identifier("LibraBlock") } starcoin(59662,0x700007c69000) malloc: Incorrect checksum for freed object 0x7f894c552820: probably modified after being freed. Corrupt value: 0x7f896c60100 starcoin(59662,0x700007c69000) malloc: *** set a breakpoint in malloc_error_break to debug
non_process
start chain fail memory problem when i start chain first time i got this error error move vm state data cache users fanngyuan cargo git checkouts libra language move vm state src data cache rs error getting data from storage for accesspath address path error move vm runtime code cache module cache users fanngyuan cargo git checkouts libra language move vm runtime src code cache module cache rs error fetching module with id moduleid address name identifier gasschedule error move vm state data cache users fanngyuan cargo git checkouts libra language move vm state src data cache rs error getting data from storage for accesspath address path error move vm runtime code cache module cache users fanngyuan cargo git checkouts libra language move vm runtime src code cache module cache rs error fetching module with id moduleid address name identifier librablock starcoin malloc incorrect checksum for freed object probably modified after being freed corrupt value starcoin malloc set a breakpoint in malloc error break to debug
0
2,239
5,088,624,809
IssuesEvent
2016-12-31 23:29:37
sw4j-org/tool-jpa-processor
https://api.github.com/repos/sw4j-org/tool-jpa-processor
opened
Handle @JoinTable Annotation
annotation processor task
Handle the `@JoinTable` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.9 JoinTable Annotation
1.0
Handle @JoinTable Annotation - Handle the `@JoinTable` annotation for a property or field. See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf) - 11.1.9 JoinTable Annotation
process
handle jointable annotation handle the jointable annotation for a property or field see jointable annotation
1
182,972
14,926,046,770
IssuesEvent
2021-01-24 09:40:17
XENONnT/straxen
https://api.github.com/repos/XENONnT/straxen
opened
Missing documentation aqusisition monitor
documentation
We are missing the acquisition monitor in the documentation.
1.0
Missing documentation aqusisition monitor - We are missing the acquisition monitor in the documentation.
non_process
missing documentation aqusisition monitor we are missing the acquisition monitor in the documentation
0
7,306
10,443,180,619
IssuesEvent
2019-09-18 14:25:34
zero-os/home
https://api.github.com/repos/zero-os/home
closed
Improve general 0-os documentation
priority_normal process_wontfix type_task
- [x] cleanup old documentation: https://github.com/zero-os/home/commit/d60ae8ac8c7271d8318360b4f33761f953f3c0ab - [ ] write getting started - [ ] how to download kernel - [ ] boot on a bare metal server - [ ] how to talk to 0-core - [ ] how to talk to 0-robot - [ ] deploy an actual application in a container using the 0-robot - [ ] general documentation about the ecosystem and the goals - [ ] update diagrams
1.0
Improve general 0-os documentation - - [x] cleanup old documentation: https://github.com/zero-os/home/commit/d60ae8ac8c7271d8318360b4f33761f953f3c0ab - [ ] write getting started - [ ] how to download kernel - [ ] boot on a bare metal server - [ ] how to talk to 0-core - [ ] how to talk to 0-robot - [ ] deploy an actual application in a container using the 0-robot - [ ] general documentation about the ecosystem and the goals - [ ] update diagrams
process
improve general os documentation cleanup old documentation write getting started how to download kernel boot on a bare metal server how to talk to core how to talk to robot deploy an actual application in a container using the robot general documentation about the ecosystem and the goals update diagrams
1
21,719
11,660,475,710
IssuesEvent
2020-03-03 03:28:10
cityofaustin/atd-data-and-performance
https://api.github.com/repos/cityofaustin/atd-data-and-performance
closed
ETL: Signal Timing and Phasing Data
Service: Dev Type: Data Workgroup: AMD
In order to be able to visualize signal timing and phasing data we need to work on the data infrastructure required to make the data public. https://github.com/cityofaustin/transportation.austintexas.io/issues/268 This data current lives on the KITS app server database, which is an MSSQL server called KITSDB
1.0
ETL: Signal Timing and Phasing Data - In order to be able to visualize signal timing and phasing data we need to work on the data infrastructure required to make the data public. https://github.com/cityofaustin/transportation.austintexas.io/issues/268 This data current lives on the KITS app server database, which is an MSSQL server called KITSDB
non_process
etl signal timing and phasing data in order to be able to visualize signal timing and phasing data we need to work on the data infrastructure required to make the data public this data current lives on the kits app server database which is an mssql server called kitsdb
0
1,479
4,055,137,885
IssuesEvent
2016-05-24 14:38:31
mpiwg-ft/ft-issues
https://api.github.com/repos/mpiwg-ft/ft-issues
opened
Define how to disconnect a failed communicator
dynamic processing ulfm
In the current ULFM spec, there is no text to define how to use `MPI_COMM_DISCONNECT` when a process has failed.
1.0
Define how to disconnect a failed communicator - In the current ULFM spec, there is no text to define how to use `MPI_COMM_DISCONNECT` when a process has failed.
process
define how to disconnect a failed communicator in the current ulfm spec there is no text to define how to use mpi comm disconnect when a process has failed
1
5,914
8,736,190,620
IssuesEvent
2018-12-11 18:50:41
ipfs/go-ipfs
https://api.github.com/repos/ipfs/go-ipfs
closed
Improving how we work - choose two
meta process
## Context We took a couple of weeks to solicit input about *what* we'd like to improve, and some unavoidable conversation about how we'd do it in https://github.com/ipfs/go-ipfs/issues/5781. I've tried, with absolute impartiality, to group our comments and now: - Vote for two challenges/comments - just thumbs up 'em. Other emoji will not be tallied. - We'll take the top two, propose some solutions, and if there's obvious consensus we'll just try them, otherwise we may vote again. ## **Guidelines** I'd like to try to steer solutions towards those that are practical *now* and with only modest effort. Some of these are (or related to) complex challenges; but let's look for improvements that we can trial and discard quickly if we don't like them.
1.0
Improving how we work - choose two - ## Context We took a couple of weeks to solicit input about *what* we'd like to improve, and some unavoidable conversation about how we'd do it in https://github.com/ipfs/go-ipfs/issues/5781. I've tried, with absolute impartiality, to group our comments and now: - Vote for two challenges/comments - just thumbs up 'em. Other emoji will not be tallied. - We'll take the top two, propose some solutions, and if there's obvious consensus we'll just try them, otherwise we may vote again. ## **Guidelines** I'd like to try to steer solutions towards those that are practical *now* and with only modest effort. Some of these are (or related to) complex challenges; but let's look for improvements that we can trial and discard quickly if we don't like them.
process
improving how we work choose two context we took a couple of weeks to solicit input about what we d like to improve and some unavoidable conversation about how we d do it in i ve tried with absolute impartiality to group our comments and now vote for two challenges comments just thumbs up em other emoji will not be tallied we ll take the top two propose some solutions and if there s obvious consensus we ll just try them otherwise we may vote again guidelines i d like to try to steer solutions towards those that are practical now and with only modest effort some of these are or related to complex challenges but let s look for improvements that we can trial and discard quickly if we don t like them
1
18,506
24,551,333,708
IssuesEvent
2022-10-12 12:50:28
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] [Offline indicator] Review updated consent functionality is not working as expected when participant internet comes back
Bug P0 iOS Process: Fixed Process: Tested QA Process: Tested dev
Steps: 1. Sign up or sign in to the mobile app 2. Enroll to the study 3. In SB, update the consent 4. Turn off the data 5. Go to the mobile app 6. Click on the Review button of the updated consent 7. Turn On the data 8. Observe AR: Showing Participant is offline even though participant's internet comes back ER: Consent flow should work as expected when participant is Online https://user-images.githubusercontent.com/71445210/182863135-1e86b078-d21b-44e4-b0e9-b41657ac9285.MOV
3.0
[iOS] [Offline indicator] Review updated consent functionality is not working as expected when participant internet comes back - Steps: 1. Sign up or sign in to the mobile app 2. Enroll to the study 3. In SB, update the consent 4. Turn off the data 5. Go to the mobile app 6. Click on the Review button of the updated consent 7. Turn On the data 8. Observe AR: Showing Participant is offline even though participant's internet comes back ER: Consent flow should work as per expected when participant is Online https://user-images.githubusercontent.com/71445210/182863135-1e86b078-d21b-44e4-b0e9-b41657ac9285.MOV
process
review updated consent functionality is not working as expected when participant internet comes back steps sign up or sign in to the mobile app enroll to the study in sb update the consent turn off the data go to the mobile app click on the review button of the updated consent turn on the data observe ar showing participant is offline even though participant s internet comes back er consent flow should work as per expected when participant is online
1
17,731
23,639,574,544
IssuesEvent
2022-08-25 15:50:26
MicrosoftDocs/windows-uwp
https://api.github.com/repos/MicrosoftDocs/windows-uwp
closed
Typo in Step 6
uwp/prod processes-and-threading/tech Pri2
[Enter feedback here] There is a typo in Step 6. "Step 6: Build and run the qpp" --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e2c2ab3c-221d-41a1-c145-2614a44cd2c4 * Version Independent ID: 27647b64-5700-e15c-3e90-f6dd5afe73e8 * Content: [Auto-launching with AutoPlay - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/auto-launching-with-autoplay) * Content Source: [windows-apps-src/launch-resume/auto-launching-with-autoplay.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/auto-launching-with-autoplay.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @alvinashcraft * Microsoft Alias: **aashcraft**
1.0
Typo in Step 6 - [Enter feedback here] There is a typo in Step 6. "Step 6: Build and run the qpp" --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e2c2ab3c-221d-41a1-c145-2614a44cd2c4 * Version Independent ID: 27647b64-5700-e15c-3e90-f6dd5afe73e8 * Content: [Auto-launching with AutoPlay - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/auto-launching-with-autoplay) * Content Source: [windows-apps-src/launch-resume/auto-launching-with-autoplay.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/auto-launching-with-autoplay.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @alvinashcraft * Microsoft Alias: **aashcraft**
process
typo in step there is a typo in step step build and run the qpp document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft
1
43,516
7,048,709,659
IssuesEvent
2018-01-02 18:53:03
SwifterSwift/SwifterSwift
https://api.github.com/repos/SwifterSwift/SwifterSwift
opened
Update LICENSE copyright date to 2018
documentation good first issue help wanted
https://github.com/SwifterSwift/SwifterSwift/blob/master/LICENSE ```diff MIT License -- Copyright (c) 2015-2017 SwifterSwift (https://github.com/swifterswift) ++ Copyright (c) 2015-2018 SwifterSwift (https://github.com/swifterswift) ```
1.0
Update LICENSE copyright date to 2018 - https://github.com/SwifterSwift/SwifterSwift/blob/master/LICENSE ```diff MIT License -- Copyright (c) 2015-2017 SwifterSwift (https://github.com/swifterswift) ++ Copyright (c) 2015-2018 SwifterSwift (https://github.com/swifterswift) ```
non_process
update license copyright date to diff mit license copyright c swifterswift copyright c swifterswift
0
780,241
27,386,597,576
IssuesEvent
2023-02-28 13:44:35
roq-trading/roq-issues
https://api.github.com/repos/roq-trading/roq-issues
closed
[roq-binance-futures] Download trade history
enhancement medium priority support
Opt-in using the `--download_trades` flag. Current implementation does not receive the ClOrdID and can therefore not populate the `user_id` / `order_id` fields. Everything will look therefore like trades originating from external orders. > It is possible to download historical orders, but the request is very expensive in terms of the exchange's rate limiter.
1.0
[roq-binance-futures] Download trade history - Opt-in using the `--download_trades` flag. Current implementation does not receive the ClOrdID and can therefore not populate the `user_id` / `order_id` fields. Everything will look therefore like trades originating from external orders. > It is possible to download historical orders, but the request is very expensive in terms of the exchange's rate limiter.
non_process
download trade history opt in using the download trades flag current implementation does not receive the clordid and can therefore not populate the user id order id fields everything will look therefore like trades originating from external orders it is possible to download historical orders but the request is very expensive in terms of the exchange s rate limiter
0
16,055
20,198,325,232
IssuesEvent
2022-02-11 12:50:55
didi/mpx
https://api.github.com/repos/didi/mpx
closed
[Bug report] custom-tab-bar 无法输出
bug processing
**问题描述** 升级后最新的`"@mpxjs/webpack-plugin": "^2.7.5",`后,发现原来项目`custom-tab-bar`无法正确输出 ![image](https://user-images.githubusercontent.com/3941174/153556440-f2a6f564-d2f4-4adf-aede-8ef7a1949601.png)
1.0
[Bug report] custom-tab-bar 无法输出 - **问题描述** 升级后最新的`"@mpxjs/webpack-plugin": "^2.7.5",`后,发现原来项目`custom-tab-bar`无法正确输出 ![image](https://user-images.githubusercontent.com/3941174/153556440-f2a6f564-d2f4-4adf-aede-8ef7a1949601.png)
process
custom tab bar 无法输出 问题描述 升级后最新的 mpxjs webpack plugin 后,发现原来项目 custom tab bar 无法正确输出
1
486,695
14,012,752,376
IssuesEvent
2020-10-29 09:29:10
enso-org/enso
https://api.github.com/repos/enso-org/enso
opened
Lexer Testing DSL
Category: Syntax Change: Non-Breaking Difficulty: Core Contributor Priority: Low Type: Enhancement
### Summary Some of the team find it quite hard to keep track of the literal token streams used as expected values in the lexer tests. To that end we need to design a DSL that provides a short-hand for testing the lexer while still allowing for the tester to provide all of the necessary information. ### Value The lexer tests will be more concise. ### Specification - [ ] Design a DSL for testing the lexer. It must allow the user to explicitly specify all elements of the tokens so as to maintain the rigour of the tests. - [ ] Implement this DSL. - [ ] Move all of the lexer tests over to use this DSL. ### Acceptance Criteria & Test Cases - [ ] The lexer tests all pass using the new DSL.
1.0
Lexer Testing DSL - ### Summary Some of the team find it quite hard to keep track of the literal token streams used as expected values in the lexer tests. To that end we need to design a DSL that provides a short-hand for testing the lexer while still allowing for the tester to provide all of the necessary information. ### Value The lexer tests will be more concise. ### Specification - [ ] Design a DSL for testing the lexer. It must allow the user to explicitly specify all elements of the tokens so as to maintain the rigour of the tests. - [ ] Implement this DSL. - [ ] Move all of the lexer tests over to use this DSL. ### Acceptance Criteria & Test Cases - [ ] The lexer tests all pass using the new DSL.
non_process
lexer testing dsl summary some of the team find it quite hard to keep track of the literal token streams used as expected values in the lexer tests to that end we need to design a dsl that provides a short hand for testing the lexer while still allowing for the tester to provide all of the necessary information value the lexer tests will be more concise specification design a dsl for testing the lexer it must allow the user to explicitly specify all elements of the tokens so as to maintain the rigour of the tests implement this dsl move all of the lexer tests over to use this dsl acceptance criteria test cases the lexer tests all pass using the new dsl
0
266,151
23,226,054,421
IssuesEvent
2022-08-03 00:19:12
osmosis-labs/osmosis
https://api.github.com/repos/osmosis-labs/osmosis
closed
x/incentives: refactor create gauge and add to gauge fees to use txfees denom
T:bug 🐛 T:tests C:x/incentives
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺ v ✰ Thanks for creating an issue! ✰ ☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --> ## Background In #2227, we hard coded the fee denom for create gauge and add to gauge fees. This was done to minimize the changes for a smooth and expedited v11 release. However, the hard-coded denom causes issues with the simulator. Simulator assumes "stake" denom for majority of its tests by default. As a result, no account has the hard coded "uosmo" denom to pay the fee. We had to disable simulator in the [incentives tests](https://github.com/osmosis-labs/osmosis/pull/2227/commits/87dd1fe8423b4b680826375138ddfd4d2337489d) for now. ## Suggested Design We should revert the following 3 commits separately and avoid backporting them to `v11.x`: - https://github.com/osmosis-labs/osmosis/pull/2227/commits/100b693c51cb28e2da6e23954f7960b881485546 - https://github.com/osmosis-labs/osmosis/pull/2227/commits/915e9ff99a4a615ee4a5de2c74a06bf6368dc94a - https://github.com/osmosis-labs/osmosis/pull/2227/commits/87dd1fe8423b4b680826375138ddfd4d2337489d ## Acceptance Criteria - create gauge and add to gauge fee uses denom from the `txfees` keeper - simulator incentives tests function correctly
1.0
x/incentives: refactor create gauge and add to gauge fees to use txfees denom - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺ v ✰ Thanks for creating an issue! ✰ ☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --> ## Background In #2227, we hard coded the fee denom for create gauge and add to gauge fees. This was done to minimize the changes for a smooth and expedited v11 release. However, the hard-coded denom causes issues with the simulator. Simulator assumes "stake" denom for majority of its tests by default. As a result, no account has the hard coded "uosmo" denom to pay the fee. We had to disable simulator in the [incentives tests](https://github.com/osmosis-labs/osmosis/pull/2227/commits/87dd1fe8423b4b680826375138ddfd4d2337489d) for now. ## Suggested Design We should revert the following 3 commits separately and avoid backporting them to `v11.x`: - https://github.com/osmosis-labs/osmosis/pull/2227/commits/100b693c51cb28e2da6e23954f7960b881485546 - https://github.com/osmosis-labs/osmosis/pull/2227/commits/915e9ff99a4a615ee4a5de2c74a06bf6368dc94a - https://github.com/osmosis-labs/osmosis/pull/2227/commits/87dd1fe8423b4b680826375138ddfd4d2337489d ## Acceptance Criteria - create gauge and add to gauge fee uses denom from the `txfees` keeper - simulator incentives tests function correctly
non_process
x incentives refactor create gauge and add to gauge fees to use txfees denom ☺ v ✰ thanks for creating an issue ✰ ☺ background in we hard coded the fee denom for create gauge and add to gauge fees this was done to minimize the changes for a smooth and expedited release however the hard coded denom causes issues with the simulator simulator assumes stake denom for majority of its tests by default as a result no account has the hard coded uosmo denom to pay the fee we had to disable simulator in the for now suggested design we should revert the following commits separately and avoid backporting them to x acceptance criteria create gauge and add to gauge fee uses denom from the txfees keeper simulator incentives tests function correctly
0
778,123
27,304,569,553
IssuesEvent
2023-02-24 06:53:04
WavesHQ/bridge
https://api.github.com/repos/WavesHQ/bridge
closed
`dev` - allow local playground network option when running locally
needs/area needs/triage kind/feature needs/priority
<!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: As titled. Remove it on production environments.
1.0
`dev` - allow local playground network option when running locally - <!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: As titled. Remove it on production environments.
non_process
dev allow local playground network option when running locally what would you like to be added as titled remove it on production environments
0
9,779
12,797,853,608
IssuesEvent
2020-07-02 13:02:13
nodejs/node
https://api.github.com/repos/nodejs/node
closed
child_process.spawnSync - timeout option doesn't work
child_process
* **Version**: v10.18.0 * **Platform**: 64-bit (Windows) * **Subsystem**: FILE 1: > var exec = require('child_process'); > > const path = require('path'); > const IDVT_exec = 'myFile.exe'; > > /* Executes myFile.exe file and wait till finishes*/ > let IDVT = function (parameters, options) { > const testParameters = parameters > const testOptions = options > return exec.spawnSync(path.resolve(IDVT_exec), testParameters, testOptions) > } > > module.exports.IDVT = IDVT; FILE 2: > var assert = require('assert'); > var idvt = require('../idvt'); > > let testParameters = '-source samples_small -output reports_small -split' > testParameters = testParameters.split(' ') > > const testOptions = { > timeout: 1 > } > > describe('Execute myFile.exe', function () { > it('check program start and its output log', function () { > let result = idvt.IDVT(testParameters, testOptions) > }); > > }); RESULT: - when launching script, the timeout does not apply EXPECTED: - timeout applies
1.0
child_process.spawnSync - timeout option doesn't work - * **Version**: v10.18.0 * **Platform**: 64-bit (Windows) * **Subsystem**: FILE 1: > var exec = require('child_process'); > > const path = require('path'); > const IDVT_exec = 'myFile.exe'; > > /* Executes myFile.exe file and wait till finishes*/ > let IDVT = function (parameters, options) { > const testParameters = parameters > const testOptions = options > return exec.spawnSync(path.resolve(IDVT_exec), testParameters, testOptions) > } > > module.exports.IDVT = IDVT; FILE 2: > var assert = require('assert'); > var idvt = require('../idvt'); > > let testParameters = '-source samples_small -output reports_small -split' > testParameters = testParameters.split(' ') > > const testOptions = { > timeout: 1 > } > > describe('Execute myFile.exe', function () { > it('check program start and its output log', function () { > let result = idvt.IDVT(testParameters, testOptions) > }); > > }); RESULT: - when launching script, the timeout does not apply EXPECTED: - timeout applies
process
child process spawnsync timeout option doesn t work version platform bit windows subsystem file var exec require child process const path require path const idvt exec myfile exe executes myfile exe file and wait till finishes let idvt function parameters options const testparameters parameters const testoptions options return exec spawnsync path resolve idvt exec testparameters testoptions module exports idvt idvt file var assert require assert var idvt require idvt let testparameters source samples small output reports small split testparameters testparameters split const testoptions timeout describe execute myfile exe function it check program start and its output log function let result idvt idvt testparameters testoptions result when launching script the timeout does not apply expected timeout applies
1
240,280
7,800,801,352
IssuesEvent
2018-06-09 13:49:12
kcgrimes/grimes-simple-revive
https://api.github.com/repos/kcgrimes/grimes-simple-revive
opened
Since V0.92, G_fnc_initNewAI results in error
Priority: High Status: Completed Type: Bug
Per Rockapes this function doesn't seem to be working anymore "spawn G_fnc_initNewAI;" Any AI loaded at mission start the revive system is working but any after running the spawn function isn't. In previous version .91 it was working fine. If I just manually do "spawn G_fnc_EH" the AI work fine.
1.0
Since V0.92, G_fnc_initNewAI results in error - Per Rockapes this function doesn't seem to be working anymore "spawn G_fnc_initNewAI;" Any AI loaded at mission start the revive system is working but any after running the spawn function isn't. In previous version .91 it was working fine. If I just manually do "spawn G_fnc_EH" the AI work fine.
non_process
since g fnc initnewai results in error per rockapes this function doesn t seem to be working anymore spawn g fnc initnewai any ai loaded at mission start the revive system is working but any after running the spawn function isn t in previous version it was working fine if i just manually do spawn g fnc eh the ai work fine
0
12,084
14,740,048,580
IssuesEvent
2021-01-07 08:25:33
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Billings - SA Billing - Late Fee Account List
anc-process anp-important ant-bug has attachment
In GitLab by @kdjstudios on Oct 3, 2018, 11:01 [Billings.xlsx](/uploads/ea18b30dbfc26bfc1281f9bbfc263520/Billings.xlsx) HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-48124/conversation
1.0
Billings - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 3, 2018, 11:01 [Billings.xlsx](/uploads/ea18b30dbfc26bfc1281f9bbfc263520/Billings.xlsx) HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-48124/conversation
process
billings sa billing late fee account list in gitlab by kdjstudios on oct uploads billings xlsx hd
1
611,259
18,949,795,229
IssuesEvent
2021-11-18 14:07:51
pulibrary/dpul
https://api.github.com/repos/pulibrary/dpul
closed
"Link to search" is not functioning in Metadata Display facet in Dashboard
priority: high work-cycle
When I click on "link to search" in the metadata display options and click save, the selected fields are not linked to search. Current links active before upgrade are still functional. <img width="1440" alt="Screen Shot 2021-11-02 at 9 46 02 PM" src="https://user-images.githubusercontent.com/45043014/139999583-9f942e78-b285-4f9f-a305-d8e4f2750276.png">
1.0
"Link to search" is not functioning in Metadata Display facet in Dashboard - When I click on "link to search" in the metadata display options and click save, the selected fields are not linked to search. Current links active before upgrade are still functional. <img width="1440" alt="Screen Shot 2021-11-02 at 9 46 02 PM" src="https://user-images.githubusercontent.com/45043014/139999583-9f942e78-b285-4f9f-a305-d8e4f2750276.png">
non_process
link to search is not functioning in metadata display facet in dashboard when i click on link to search in the metadata display options and click save the selected fields are not linked to search current links active before upgrade are still functional img width alt screen shot at pm src
0
12,056
14,739,352,293
IssuesEvent
2021-01-07 07:01:51
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Need to expose a way to handle SIGTTIN/SIGTTOU
area-System.Diagnostics.Process needs author feedback no recent activity
[AB#1240696](https://devdiv.visualstudio.com/10e66e43-9645-4201-b128-0fdc3769cc17/_workitems/edit/1240696) For PowerShell, you can start a sub-process that owns the console. In `gdb`, I can see that `SIGTTIN` is being sent to `pwsh`, but there is currently no way for `pwsh` to handle this signal indicating it doesn't own the console and should stop reading from it. This eventually results in a crash in pwsh as it tries to use the Console APIs and gets an unhandled exception.
1.0
Need to expose a way to handle SIGTTIN/SIGTTOU - [AB#1240696](https://devdiv.visualstudio.com/10e66e43-9645-4201-b128-0fdc3769cc17/_workitems/edit/1240696) For PowerShell, you can start a sub-process that owns the console. In `gdb`, I can see that `SIGTTIN` is being sent to `pwsh`, but there is currently no way for `pwsh` to handle this signal indicating it doesn't own the console and should stop reading from it. This eventually results in a crash in pwsh as it tries to use the Console APIs and gets an unhandled exception.
process
need to expose a way to handle sigttin sigttou for powershell you can start a sub process that owns the console in gdb i can see that sigttin is being sent to pwsh but there is currently no way for pwsh to handle this signal indicating it doesn t own the console and should stop reading from it this eventually results in a crash in pwsh as it tries to use the console apis and gets an unhandled exception
1
17,414
23,230,305,144
IssuesEvent
2022-08-03 06:57:45
maticnetwork/miden
https://api.github.com/repos/maticnetwork/miden
opened
Potential stack overflow table refactoring
processor air v0.3
Currently, we limit the initial and the final stack depth to exactly 16. The latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing. This can lead to user confusion (as in #342) but also is really annoying for tests (where we need to add a bunch of extra operations at the end to ensure the stack is in the right state). We can relax these limitations by setting initial/final values of the running product column controlling stack overflow table to values which are not always $1$. This would mean two things: 1. We could initialize the stack with an arbitrary number of values. We'll need to get a little creative on how to build the state of the overflow table because we use `clk` values for backlinks - but it shouldn't be too difficult (we could just use "negative" values). 2. Stack depth at the end of execution could be arbitrary as well. This would get rid of the annoying issues with tests etc. The downside of this approach are: 1. It may encourage people to use a lot of public inputs, which are expensive for the verifier. 2. Not cleaning up the stack at the end may lead to some information leaking (relevant only for private transactions). Though, this could still be mitigated by calling `finalize_stack` at the end, if needed.
1.0
Potential stack overflow table refactoring - Currently, we limit the initial and the final stack depth to exactly 16. The latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing. This can lead to user confusion (as in #342) but also is really annoying for tests (where we need to add a bunch of extra operations at the end to ensure the stack is in the right state). We can relax these limitations by setting initial/final values of the running product column controlling stack overflow table to values which are not always $1$. This would mean two things: 1. We could initialize the stack with an arbitrary number of values. We'll need to get a little creative on how to build the state of the overflow table because we use `clk` values for backlinks - but it shouldn't be too difficult (we could just use "negative" values). 2. Stack depth at the end of execution could be arbitrary as well. This would get rid of the annoying issues with tests etc. The downside of this approach are: 1. It may encourage people to use a lot of public inputs, which are expensive for the verifier. 2. Not cleaning up the stack at the end may lead to some information leaking (relevant only for private transactions). Though, this could still be mitigated by calling `finalize_stack` at the end, if needed.
process
potential stack overflow table refactoring currently we limit the initial and the final stack depth to exactly the latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing this can lead to user confusion as in but also is really annoying for tests where we need to add a bunch of extra operations at the end to ensure the stack is in the right state we can relax these limitations by setting initial final values of the running product column controlling stack overflow table to values which are not always this would mean two things we could initialize the stack with an arbitrary number of values we ll need to get a little creative on how to build the state of the overflow table because we use clk values for backlinks but it shouldn t be too difficult we could just use negative values stack depth at the end of execution could be arbitrary as well this would get rid of the annoying issues with tests etc the downside of this approach are it may encourage people to use a lot of public inputs which are expensive for the verifier not cleaning up the stack at the end may lead to some information leaking relevant only for private transactions though this could still be mitigated by calling finalize stack at the end if needed
1
104,663
4,216,936,176
IssuesEvent
2016-06-30 11:07:58
pombase/canto
https://api.github.com/repos/pombase/canto
opened
streamlining genotype management
high priority
We want the "add a genotype page" and "genotype management page" combined, to reduce clicking between the two pages (to speed up copy+edit existing genotypes for example).
1.0
streamlining genotype management - We want the "add a genotype page" and "genotype management page" combined, to reduce clicking between the two pages (to speed up copy+edit existing genotypes for example).
non_process
streamlining genotype management we want the add a genotype page and genotype management page combined to reduce clicking between the two pages to speed up copy edit existing genotypes for example
0
98,123
8,674,843,605
IssuesEvent
2018-11-30 09:07:18
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
[CI] CcrMultiClusterLicenseIT.testAutoFollow failure
:Distributed/CCR >test-failure
Windows related problem: ``` All tests run in this JVM: [CcrMultiClusterLicenseIT] 20:16:01 > at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) 20:16:01 > at java.security.AccessController.checkPermission(AccessController.java:884) 20:16:01 20:16:01 > at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) 20:16:01 > at java.lang.SecurityManager.checkRead(SecurityManager.java:888) 20:16:01 > at sun.nio.fs.WindowsChannelFactory.open(WindowsChannelFactory.java:293) 20:16:01 > at sun.nio.fs.WindowsChannelFactory.newFileChannel(WindowsChannelFactory.java:162) 20:16:01 > at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:225) 20:16:01 > at java.nio.file.Files.newByteChannel(Files.java:361) 20:16:01 > at java.nio.file.Files.newByteChannel(Files.java:407) 20:16:01 > at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.HandleTrackingFS.newInputStream(HandleTrackingFS.java:92) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.HandleTrackingFS.newInputStream(HandleTrackingFS.java:92) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at java.nio.file.Files.newInputStream(Files.java:152) 20:16:01 > at java.nio.file.Files.newBufferedReader(Files.java:2784) 20:16:01 > at java.nio.file.Files.readAllLines(Files.java:3202) 20:16:01 > at java.nio.file.Files.readAllLines(Files.java:3242) 20:16:01 > at 
org.elasticsearch.xpack.ccr.CcrMultiClusterLicenseIT.lambda$testAutoFollow$0(CcrMultiClusterLicenseIT.java:57) 20:16:01 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:835) 20:16:01 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:821) 20:16:01 > at org.elasticsearch.xpack.ccr.CcrMultiClusterLicenseIT.testAutoFollow(CcrMultiClusterLicenseIT.java:56) 20:16:01 > at java.lang.Thread.run(Thread.java:748) ``` Two build failure thus far: * https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+multijob-windows-compatibility/1281/console * https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-windows-compatibility/1863/console
1.0
[CI] CcrMultiClusterLicenseIT.testAutoFollow failure - Windows related problem: ``` All tests run in this JVM: [CcrMultiClusterLicenseIT] 20:16:01 > at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) 20:16:01 > at java.security.AccessController.checkPermission(AccessController.java:884) 20:16:01 20:16:01 > at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) 20:16:01 > at java.lang.SecurityManager.checkRead(SecurityManager.java:888) 20:16:01 > at sun.nio.fs.WindowsChannelFactory.open(WindowsChannelFactory.java:293) 20:16:01 > at sun.nio.fs.WindowsChannelFactory.newFileChannel(WindowsChannelFactory.java:162) 20:16:01 > at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:225) 20:16:01 > at java.nio.file.Files.newByteChannel(Files.java:361) 20:16:01 > at java.nio.file.Files.newByteChannel(Files.java:407) 20:16:01 > at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.HandleTrackingFS.newInputStream(HandleTrackingFS.java:92) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at org.apache.lucene.mockfile.HandleTrackingFS.newInputStream(HandleTrackingFS.java:92) 20:16:01 > at org.apache.lucene.mockfile.FilterFileSystemProvider.newInputStream(FilterFileSystemProvider.java:192) 20:16:01 > at java.nio.file.Files.newInputStream(Files.java:152) 20:16:01 > at java.nio.file.Files.newBufferedReader(Files.java:2784) 20:16:01 > at java.nio.file.Files.readAllLines(Files.java:3202) 20:16:01 > at 
java.nio.file.Files.readAllLines(Files.java:3242) 20:16:01 > at org.elasticsearch.xpack.ccr.CcrMultiClusterLicenseIT.lambda$testAutoFollow$0(CcrMultiClusterLicenseIT.java:57) 20:16:01 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:835) 20:16:01 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:821) 20:16:01 > at org.elasticsearch.xpack.ccr.CcrMultiClusterLicenseIT.testAutoFollow(CcrMultiClusterLicenseIT.java:56) 20:16:01 > at java.lang.Thread.run(Thread.java:748) ``` Two build failure thus far: * https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+multijob-windows-compatibility/1281/console * https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-windows-compatibility/1863/console
non_process
ccrmulticlusterlicenseit testautofollow failure windows related problem all tests run in this jvm at java security accesscontrolcontext checkpermission accesscontrolcontext java at java security accesscontroller checkpermission accesscontroller java at java lang securitymanager checkpermission securitymanager java at java lang securitymanager checkread securitymanager java at sun nio fs windowschannelfactory open windowschannelfactory java at sun nio fs windowschannelfactory newfilechannel windowschannelfactory java at sun nio fs windowsfilesystemprovider newbytechannel windowsfilesystemprovider java at java nio file files newbytechannel files java at java nio file files newbytechannel files java at java nio file spi filesystemprovider newinputstream filesystemprovider java at org apache lucene mockfile filterfilesystemprovider newinputstream filterfilesystemprovider java at org apache lucene mockfile filterfilesystemprovider newinputstream filterfilesystemprovider java at org apache lucene mockfile filterfilesystemprovider newinputstream filterfilesystemprovider java at org apache lucene mockfile handletrackingfs newinputstream handletrackingfs java at org apache lucene mockfile filterfilesystemprovider newinputstream filterfilesystemprovider java at org apache lucene mockfile handletrackingfs newinputstream handletrackingfs java at org apache lucene mockfile filterfilesystemprovider newinputstream filterfilesystemprovider java at java nio file files newinputstream files java at java nio file files newbufferedreader files java at java nio file files readalllines files java at java nio file files readalllines files java at org elasticsearch xpack ccr ccrmulticlusterlicenseit lambda testautofollow ccrmulticlusterlicenseit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch xpack ccr ccrmulticlusterlicenseit testautofollow ccrmulticlusterlicenseit java at java lang 
thread run thread java two build failure thus far
0
22,571
31,791,465,556
IssuesEvent
2023-09-13 03:57:16
varabyte/kobweb
https://api.github.com/repos/varabyte/kobweb
opened
Move icons artifacts under kobwebx?
process
Do "font-icons-fa" and "font-icons-mdi" really belong under core Silk namespaces? Probably not! They should probably be at the same level as 'markdown' is, as optional but useful add-ons. Many users might want to omit them because they increase their site's download footprint. Note that recently Silk has been adding more and more SVG icons into its feature set. That seems more like something "core". If we go through with this change, I think it implies the artifact should become "com.varabyte.kobwebx:kobwebx-silk-icons-xxx" and then the namespace for all the icons move to "com.varabyte.kobwebx". Of course, we'll need to update all templates to use the new ones. Old codebases should still work because "com.varabyte.kobweb:kobweb-silk-icons-xxx" will still be hosted, although once they bump up their kobweb version, they may get compile errors.
1.0
Move icons artifacts under kobwebx? - Do "font-icons-fa" and "font-icons-mdi" really belong under core Silk namespaces? Probably not! They should probably be at the same level as 'markdown' is, as optional but useful add-ons. Many users might want to omit them because they increase their site's download footprint. Note that recently Silk has been adding more and more SVG icons into its feature set. That seems more like something "core". If we go through with this change, I think it implies the artifact should become "com.varabyte.kobwebx:kobwebx-silk-icons-xxx" and then the namespace for all the icons move to "com.varabyte.kobwebx". Of course, we'll need to update all templates to use the new ones. Old codebases should still work because "com.varabyte.kobweb:kobweb-silk-icons-xxx" will still be hosted, although once they bump up their kobweb version, they may get compile errors.
process
move icons artifacts under kobwebx do font icons fa and font icons mdi really belong under core silk namespaces probably not they should probably be at the same level as markdown is as optional but useful add ons many users might want to omit them because they increase their site s download footprint note that recently silk has been adding more and more svg icons into its feature set that seems more like something core if we go through with this change i think it implies the artifact should become com varabyte kobwebx kobwebx silk icons xxx and then the namespace for all the icons move to com varabyte kobwebx of course we ll need to update all templates to use the new ones old codebases should still work because com varabyte kobweb kobweb silk icons xxx will still be hosted although once they bump up their kobweb version they may get compile errors
1
2,220
3,068,570,187
IssuesEvent
2015-08-18 16:12:47
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
Pod log error message "must select container name" should print valid container names
area/usability
``` $ kubectl logs master-1-e6bup error: POD master-1-e6bup has more than one container; please specify the container to print logs for ``` Should try harder to be useful to the user: ``` error: Pod master-1-e6bup has the following containers: api, controllers; please specify the container to print logs for with -c` Also, `POD` capitalized is bad
True
Pod log error message "must select container name" should print valid container names - ``` $ kubectl logs master-1-e6bup error: POD master-1-e6bup has more than one container; please specify the container to print logs for ``` Should try harder to be useful to the user: ``` error: Pod master-1-e6bup has the following containers: api, controllers; please specify the container to print logs for with -c` Also, `POD` capitalized is bad
non_process
pod log error message must select container name should print valid container names kubectl logs master error pod master has more than one container please specify the container to print logs for should try harder to be useful to the user error pod master has the following containers api controllers please specify the container to print logs for with c also pod capitalized is bad
0
38,302
8,752,087,596
IssuesEvent
2018-12-14 01:12:09
CenturyLinkCloud/mdw
https://api.github.com/repos/CenturyLinkCloud/mdw
closed
Dashboard charts are broken
defect
In MDWHub the Dashboard tab is not working correctly. For example the process flow seems to only show the trend line for one process instead of all the selected processes. Here's a screenshot from mdw55 that depicts how process selection is supposed to work: https://centurylinkcloud.github.io/mdw/docs/guides/images/screenshots/dashboard/select.png
1.0
Dashboard charts are broken - In MDWHub the Dashboard tab is not working correctly. For example the process flow seems to only show the trend line for one process instead of all the selected processes. Here's a screenshot from mdw55 that depicts how process selection is supposed to work: https://centurylinkcloud.github.io/mdw/docs/guides/images/screenshots/dashboard/select.png
non_process
dashboard charts are broken in mdwhub the dashboard tab is not working correctly for example the process flow seems to only show the trend line for one process instead of all the selected processes here s a screenshot from that depicts how process selection is supposed to work
0
14,430
17,480,984,049
IssuesEvent
2021-08-09 02:11:59
Leviatan-Analytics/LA-data-processing
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
closed
Test or research optimizations EasyOCR text recognition [3]
Data Processing Week 1 Sprint 3
Find ways to optimize EasyOCR. Improve: - Accuracy - Processing time
1.0
Test or research optimizations EasyOCR text recognition [3] - Find ways to optimize EasyOCR. Improve: - Accuracy - Processing time
process
test or research optimizations easyocr text recognition find ways to optimize easyocr improve accuracy processing time
1
4,147
7,097,688,409
IssuesEvent
2018-01-14 21:57:45
ViktorKuryshev/CRM
https://api.github.com/repos/ViktorKuryshev/CRM
closed
ProcV - 059 Использовать асинхронное программирование для загрузки проектов
Normal Process enhancement
Применить асинхронное программирование для процесса загрузки проектов с СК с использованием ключевых слов async и await.
1.0
ProcV - 059 Использовать асинхронное программирование для загрузки проектов - Применить асинхронное программирование для процесса загрузки проектов с СК с использованием ключевых слов async и await.
process
procv использовать асинхронное программирование для загрузки проектов применить асинхронное программирование для процесса загрузки проектов с ск с использованием ключевых слов async и await
1
5,581
8,435,845,696
IssuesEvent
2018-10-17 14:06:38
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
What's the "\N" in ["protocol": "HTTP/1.1\N"]
log-processing
Hi, I found some json output files have `"protocol": "HTTP/1.1\N"` string and thus can't be loaded by **python json.loads()**, I don't know why there is a "\N", maybe it should be ""protocol": "HTTP/1.1" or "protocol": "HTTP/1.1\\N"?
1.0
What's the "\N" in ["protocol": "HTTP/1.1\N"] - Hi, I found some json output files have `"protocol": "HTTP/1.1\N"` string and thus can't be loaded by **python json.loads()**, I don't know why there is a "\N", maybe it should be ""protocol": "HTTP/1.1" or "protocol": "HTTP/1.1\\N"?
process
what s the n in hi i found some json output files have protocol http n string and thus can t be loaded by python json loads i don t know why there is a n maybe it should be protocol http or protocol http n
1
18,803
24,703,865,318
IssuesEvent
2022-10-19 17:20:54
benthosdev/benthos
https://api.github.com/repos/benthosdev/benthos
closed
Failed to decompress compressed json file with arrays
question processors
I don't know if this is a bug or just me creating the config incorrectly. I found that when I used the `decompress` processor like so, ``` input: file: paths: ["./tests/resources/testing.gz"] codec: lines pipeline: processors: - label: unzip_file decompress: algorithm: "gzip" ``` It failed to decompress gzip json files that have arrays in them. For example: ``` {"id": "12345", "item": ["1234"]} ``` It gives me the following error message ``` ERRO Failed to decompress message part: gzip: invalid header @service=benthos label=unzip_file path=root.pipeline.processors.0 ERRO Failed to decompress message part: unexpected EOF @service=benthos label=unzip_file path=root.pipeline.processors.0 path=root.pipeline.processors.0 ``` I have tried all of the decompression algorithm and none of them work. When I remove the array in the json (example below) and gzip it back up, I doesn't give that error anymore. ``` {"id": "12345", "item": "1234"} ``` I'm using Benthos version ``` Version: 4.9.1 Date: 2022-10-06T16:19:29Z ```
1.0
Failed to decompress compressed json file with arrays - I don't know if this is a bug or just me creating the config incorrectly. I found that when I used the `decompress` processor like so, ``` input: file: paths: ["./tests/resources/testing.gz"] codec: lines pipeline: processors: - label: unzip_file decompress: algorithm: "gzip" ``` It failed to decompress gzip json files that have arrays in them. For example: ``` {"id": "12345", "item": ["1234"]} ``` It gives me the following error message ``` ERRO Failed to decompress message part: gzip: invalid header @service=benthos label=unzip_file path=root.pipeline.processors.0 ERRO Failed to decompress message part: unexpected EOF @service=benthos label=unzip_file path=root.pipeline.processors.0 path=root.pipeline.processors.0 ``` I have tried all of the decompression algorithm and none of them work. When I remove the array in the json (example below) and gzip it back up, I doesn't give that error anymore. ``` {"id": "12345", "item": "1234"} ``` I'm using Benthos version ``` Version: 4.9.1 Date: 2022-10-06T16:19:29Z ```
process
failed to decompress compressed json file with arrays i don t know if this is a bug or just me creating the config incorrectly i found that when i used the decompress processor like so input file paths codec lines pipeline processors label unzip file decompress algorithm gzip it failed to decompress gzip json files that have arrays in them for example id item it gives me the following error message erro failed to decompress message part gzip invalid header service benthos label unzip file path root pipeline processors erro failed to decompress message part unexpected eof service benthos label unzip file path root pipeline processors path root pipeline processors i have tried all of the decompression algorithm and none of them work when i remove the array in the json example below and gzip it back up i doesn t give that error anymore id item i m using benthos version version date
1
10,030
13,044,161,486
IssuesEvent
2020-07-29 03:47:23
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `AddDateStringReal` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `AddDateStringReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `AddDateStringReal` from TiDB - ## Description Port the scalar function `AddDateStringReal` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function adddatestringreal from tidb description port the scalar function adddatestringreal from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
1
6,103
8,961,466,021
IssuesEvent
2019-01-28 09:44:20
linnovate/root
https://api.github.com/repos/linnovate/root
closed
duplicate task doesnt duplicate fully
2.0.6 Process bug
duplicate task doesnt duplicate the lables, sub tasks, files and related project& discussion that were in the original task. original task: ![image](https://user-images.githubusercontent.com/38312178/51541205-eb024900-1e60-11e9-977d-225a407ecff9.png) duplicated task: ![image](https://user-images.githubusercontent.com/38312178/51541264-12591600-1e61-11e9-8dd2-55e67c630bca.png)
1.0
duplicate task doesnt duplicate fully - duplicate task doesnt duplicate the lables, sub tasks, files and related project& discussion that were in the original task. original task: ![image](https://user-images.githubusercontent.com/38312178/51541205-eb024900-1e60-11e9-977d-225a407ecff9.png) duplicated task: ![image](https://user-images.githubusercontent.com/38312178/51541264-12591600-1e61-11e9-8dd2-55e67c630bca.png)
process
duplicate task doesnt duplicate fully duplicate task doesnt duplicate the lables sub tasks files and related project discussion that were in the original task original task duplicated task
1
248,878
7,937,255,934
IssuesEvent
2018-07-09 12:18:29
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
iefponline.iefp.pt - design is broken
browser-firefox-mobile priority-normal
<!-- @browser: Firefox Mobile 63.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://iefponline.iefp.pt/IEFP/pesquisas/pesqOfertas2.jsp?action= Pesquisar **Browser / Version**: Firefox Mobile 63.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: No **Problem type**: Design is broken **Description**: Broken layout **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
iefponline.iefp.pt - design is broken - <!-- @browser: Firefox Mobile 63.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://iefponline.iefp.pt/IEFP/pesquisas/pesqOfertas2.jsp?action= Pesquisar **Browser / Version**: Firefox Mobile 63.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: No **Problem type**: Design is broken **Description**: Broken layout **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
iefponline iefp pt design is broken url pesquisar browser version firefox mobile operating system android tested another browser no problem type design is broken description broken layout steps to reproduce from with ❤️
0
382,545
11,307,672,328
IssuesEvent
2020-01-18 22:38:22
codingkapoor/intimations
https://api.github.com/repos/codingkapoor/intimations
closed
Show a message when there are no active intimations (today or planned)
Priority: Low enhancement
User should be shown a message on UI if there are no active intimations from any employee. The blank screen doesn't convey anything.
1.0
Show a message when there are no active intimations (today or planned) - User should be shown a message on UI if there are no active intimations from any employee. The blank screen doesn't convey anything.
non_process
show a message when there are no active intimations today or planned user should be shown a message on ui if there are no active intimations from any employee the blank screen doesn t convey anything
0