Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
253,295 | 8,053,806,660 | IssuesEvent | 2018-08-02 01:17:34 | Prospress/action-scheduler | https://api.github.com/repos/Prospress/action-scheduler | opened | Restore batch processing loop to boost default batch processing speed | enhancement priority:high | At the moment, Action Scheduler will process 25 actions per run. That's it. No more until the next loop, despite [what the `README.md` file says](https://github.com/Prospress/action-scheduler/blob/2.0.0/README.md#batch-processing).
This is because the [loop for processing actions until running out of time or memory was removed](https://github.com/Prospress/action-scheduler/commit/5145a9d66e8a12639f3ebd870a0381ca977803db) to prevent the timeout errors that have since been addressed with #134.
**To (greatly) increase the default processing speed, we should reinstate this loop.**
While we have timeout prevention in place with #134, we still need memory monitoring to safely introduce a loop.
The `WP_Background_Process` provides a good approach for how to do this with the `memory_exceeded()` method.
I checked both the [WC implementation of `WP_Background_Process::memory_exceeded()`](https://github.com/woocommerce/woocommerce/blob/3.4.0/includes/libraries/wp-background-process.php#L343-L353) and the [SkyVerge implementation](https://github.com/woocommerce/woocommerce/blob/3.4.0/includes/libraries/wp-background-process.php#L335-L353), and neither of these has added any additional patches to it, so the approach likely works reliably at their scale and can be relied upon in Action Scheduler.
Once we have the memory limit in place, we should be able to implement a method like [`WC_Background_Process::batch_limit_exceeded()`](https://github.com/woocommerce/woocommerce/blob/master/includes/abstracts/class-wc-background-process.php#L80-L87) in `ActionScheduler_Abstract_QueueRunner` to control the loop. This means that method can be overridden in other runners, namely `ActionScheduler_WPCLI_QueueRunner`, that do not want to be constrained by timeouts (or likely memory usage, as it has the `stop_the_insanity()` method). | 1.0 | Restore batch processing loop to boost default batch processing speed - At the moment, Action Scheduler will process 25 actions per run. That's it. No more until the next loop, despite [what the `README.md` file says](https://github.com/Prospress/action-scheduler/blob/2.0.0/README.md#batch-processing).
This is because the [loop for processing actions until running out of time or memory was removed](https://github.com/Prospress/action-scheduler/commit/5145a9d66e8a12639f3ebd870a0381ca977803db) to prevent the timeout errors that have since been addressed with #134.
**To (greatly) increase the default processing speed, we should reinstate this loop.**
While we have timeout prevention in place with #134, we still need memory monitoring to safely introduce a loop.
The `WP_Background_Process` provides a good approach for how to do this with the `memory_exceeded()` method.
I checked both the [WC implementation of `WP_Background_Process::memory_exceeded()`](https://github.com/woocommerce/woocommerce/blob/3.4.0/includes/libraries/wp-background-process.php#L343-L353) and the [SkyVerge implementation](https://github.com/woocommerce/woocommerce/blob/3.4.0/includes/libraries/wp-background-process.php#L335-L353), and neither of these has added any additional patches to it, so the approach likely works reliably at their scale and can be relied upon in Action Scheduler.
Once we have the memory limit in place, we should be able to implement a method like [`WC_Background_Process::batch_limit_exceeded()`](https://github.com/woocommerce/woocommerce/blob/master/includes/abstracts/class-wc-background-process.php#L80-L87) in `ActionScheduler_Abstract_QueueRunner` to control the loop. This means that method can be overridden in other runners, namely `ActionScheduler_WPCLI_QueueRunner`, that do not want to be constrained by timeouts (or likely memory usage, as it has the `stop_the_insanity()` method). | priority | restore batch processing loop to boost default batch processing speed at the moment action scheduler will process actions per run that s it no more until the next loop despite this is because the to prevent the timeout errors that have since been addressed with to greatly increase the default processing speed we should reinstate this loop while we have timeout prevention in place with we still need memory monitoring to safely introduce a loop the wp background process provides a good approach for how to do this with the memory exceeded method i checked both the and the and neither of this have added any additional patches to that so the approach is likely working reliably at their scale and can be relied upon in action scheduler once we have the memory limit in place we should be able to implement a method like wc background process batch limit exceeded in actionscheduler abstract queuerunner to control the loop this means that method can be overridden in other runners namely actionscheduler wpcli queuerunner that do not want to be constrained by timeouts or likely memory usage as it has the stop the insanity method | 1 |
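The reinstated loop described in this issue can be sketched as follows. This is an illustrative Python model only — Action Scheduler itself is PHP, and `memory_exceeded()`, `time_exceeded()`, and `batch_limit_exceeded()` here merely mirror the method names discussed above, with memory usage simulated rather than measured:

```python
import time

class QueueRunner:
    """Sketch of a batch loop bounded by time and memory (not the real API)."""

    def __init__(self, memory_limit, time_limit, batch_size=25):
        self.memory_limit = memory_limit    # assumed byte budget
        self.time_limit = time_limit        # assumed seconds budget
        self.batch_size = batch_size
        self.start_time = time.monotonic()
        self.memory_used = 0                # simulated; PHP would call memory_get_usage()

    def memory_exceeded(self):
        # WP_Background_Process stops at 90% of the configured limit.
        return self.memory_used >= 0.9 * self.memory_limit

    def time_exceeded(self):
        return time.monotonic() - self.start_time >= self.time_limit

    def batch_limit_exceeded(self):
        # Overridable: a WP-CLI runner could always return False here.
        return self.memory_exceeded() or self.time_exceeded()

    def run(self, actions):
        """Keep claiming batches until limits are hit or the queue drains."""
        processed = 0
        while actions and not self.batch_limit_exceeded():
            batch, actions = actions[:self.batch_size], actions[self.batch_size:]
            for action in batch:
                action()                    # process one claimed action
                self.memory_used += 1       # simulated bookkeeping
            processed += len(batch)
        return processed
```

With generous limits the runner drains the whole queue instead of stopping after one 25-action batch; with a tight memory limit it stops after the first batch, which is the safety behaviour the issue asks the loop to preserve.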
587,647 | 17,627,640,117 | IssuesEvent | 2021-08-19 01:16:13 | parallel-finance/parallel | https://api.github.com/repos/parallel-finance/parallel | closed | Implement LiquidStaking2.0, stake-client interaction part | high priority | **Motivation**
The on-chain staking pallet needs to interact with the off-chain stake-client; the main methods are listed below.
Please adjust if necessary.
Let's develop from this PR: #362 @alannotnerd
**Suggested Solution**
- [x] trigger_new_era
- [x] record_reward
- [x] record_slash
- [x] record_bond_response/record_bond_extra_response/record_rebond_response/record_unbond_response
- [x] transfer_to_relaychain | 1.0 | Implement LiquidStaking2.0, stake-client interaction part - **Motivation**
The on-chain staking pallet needs to interact with the off-chain stake-client; the main methods are listed below.
Please adjust if necessary.
Let's develop from this PR: #362 @alannotnerd
**Suggested Solution**
- [x] trigger_new_era
- [x] record_reward
- [x] record_slash
- [x] record_bond_response/record_bond_extra_response/record_rebond_response/record_unbond_response
- [x] transfer_to_relaychain | priority | implement stake client interaction part motivation on chain staking pallet needs to interact with off chain stake client here are the main methods please adjust if necessary let s develop from this pr alannotnerd suggested solution trigger new era record reward record slash record bond response record bond extra response record rebond response record unbond response transfer to relaychain | 1 |
614,860 | 19,191,177,861 | IssuesEvent | 2021-12-06 00:45:16 | myConsciousness/duolingo4d | https://api.github.com/repos/myConsciousness/duolingo4d | opened | Explicitly specify the character encoding when converting the JSON returned by the Duolingo API | Priority: high Type: improvement | <!--
Please describe the feature you'd like to see us implement along with a use
case.
-->
Fix the code so that the character encoding is specified explicitly when converting the JSON string returned by the Duolingo API:
```dart
jsonDecode(utf8.decode(response.body.runes.toList()));
``` | 1.0 | Explicitly specify the character encoding when converting the JSON returned by the Duolingo API - <!--
Please describe the feature you'd like to see us implement along with a use
case.
-->
Fix the code so that the character encoding is specified explicitly when converting the JSON string returned by the Duolingo API:
```dart
jsonDecode(utf8.decode(response.body.runes.toList()));
``` | priority | duolingo apiから返却されたjsonの変換時に文字コードを明示的に指定 please describe the feature you d like to see us implement along with a use case duolingo apiから返却されたjson文字列を変換する際に文字コードを明示的に指定するように修正を行う。 dart jsondecode decode response body runes tolist | 1 |
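The same fix expressed in Python for comparison (an illustration of the principle, not part of duolingo4d): decode the raw response bytes explicitly as UTF-8 before parsing, rather than relying on whatever charset the HTTP client guesses:

```python
import json

# Simulated response body bytes; a real client would read these off the wire.
raw = '{"greeting": "こんにちは"}'.encode("utf-8")

# Explicit utf-8 decoding avoids mojibake when the server's Content-Type
# header omits the charset and the client falls back to Latin-1.
data = json.loads(raw.decode("utf-8"))
print(data["greeting"])  # こんにちは
```

A Latin-1 fallback would still parse, but the non-ASCII text would come out garbled — which is exactly what the explicit decode prevents.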
439,165 | 12,678,491,696 | IssuesEvent | 2020-06-19 09:51:57 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Langlib tests are failing on Windows due to an exception in positionRangeCheck | Component/LangLib Component/Parser Priority/High Type/Bug | **Description:**
Disabled in commit
https://github.com/ballerina-platform/ballerina-lang/pull/24258/commits/2a79b95fe5b555ea10d2982e1f6dc5eaeaaa01f9 | 1.0 | Langlib tests are failing on Windows due to an exception in positionRangeCheck - **Description:**
Disabled in commit
https://github.com/ballerina-platform/ballerina-lang/pull/24258/commits/2a79b95fe5b555ea10d2982e1f6dc5eaeaaa01f9 | priority | langlib tests are failing in windows due to exception in positionrangecheck description disabled in commit | 1 |
140,411 | 5,408,640,339 | IssuesEvent | 2017-03-01 00:48:47 | ubc/compair | https://api.github.com/repos/ubc/compair | closed | Create supporting website for UBC and external users | developer suggestion enhancement front end high priority instructor request | Use Github pages to add a basic static site that supports (and promotes) the application. | 1.0 | Create supporting website for UBC and external users - Use Github pages to add a basic static site that supports (and promotes) the application. | priority | create supporting website for ubc and external users use github pages to add a basic static site that supports and promotes the application | 1 |
767,382 | 26,921,798,864 | IssuesEvent | 2023-02-07 10:59:22 | Public-Health-Scotland/source-linkage-files | https://api.github.com/repos/Public-Health-Scotland/source-linkage-files | closed | Compute age in Home care extract | bug Priority: High | When comparing SPSS vs R, I have noticed that the variable age is missing from the home care R extract. This will need to be added back into the code in either the ALL home care script or the year-specific one. | 1.0 | Compute age in Home care extract - When comparing SPSS vs R, I have noticed that the variable age is missing from the home care R extract. This will need to be added back into the code in either the ALL home care script or the year-specific one. | priority | compute age in home care extract when comparing spss vs r i have noticed that the variable age is missing from the home care r extract this will need added back into the code in either the all home care script or year specific | 1 |
390,874 | 11,565,103,510 | IssuesEvent | 2020-02-20 09:54:19 | localstack/localstack | https://api.github.com/repos/localstack/localstack | closed | ExtendedS3DestinationConfiguration not supported to set S3 destination on Firehose stream | feature-missing needs-triaging priority-high | When creating a new firehose stream it's not possible to set an S3 destination using the `.withExtendedS3DestinationConfiguration()` method. Currently, you have to use `.withS3DestinationConfiguration()`, but this method has been deprecated. Could you please update the code to support the new ExtendedS3DestinationConfiguration? | 1.0 | ExtendedS3DestinationConfiguration not supported to set S3 destination on Firehose stream - When creating a new firehose stream it's not possible to set an S3 destination using the `.withExtendedS3DestinationConfiguration()` method. Currently, you have to use `.withS3DestinationConfiguration()`, but this method has been deprecated. Could you please update the code to support the new ExtendedS3DestinationConfiguration? | priority | not supported to set destination on firehose stream when creating a new firehose stream it s not possible to set an destination using the method currently you have to use but this method has been deprecated could you please update the code to support the new | 1 |
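For comparison, the non-deprecated request shape in boto3 (Python, rather than the Java SDK methods the issue names) looks like the following. The ARNs and names are placeholders, and the actual call is shown as a comment since it needs a running Firehose endpoint such as LocalStack:

```python
# Placeholder ARNs and names — substitute real resources (or LocalStack ones).
request = {
    "DeliveryStreamName": "example-stream",
    "DeliveryStreamType": "DirectPut",
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::000000000000:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::example-bucket",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
}

# Against LocalStack this would be sent as:
# import boto3
# client = boto3.client("firehose", endpoint_url="http://localhost:4566")
# client.create_delivery_stream(**request)
print(sorted(request))
```

Note the request uses `ExtendedS3DestinationConfiguration` instead of the deprecated `S3DestinationConfiguration`, which is the support gap the issue describes.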
137,202 | 5,299,693,370 | IssuesEvent | 2017-02-10 01:09:23 | atilatosta/dotnet-standard-sdk | https://api.github.com/repos/atilatosta/dotnet-standard-sdk | closed | [documentation] Write code samples | high-priority | Create a code samples project for examples of using services and document code samples in ReadMe for each service endpoint implemented so far.
- [x] Speech to Text
- [x] Text to Speech
- [x] Tone Analyzer
- [x] Personality Insights
- [x] Language Translation
- [ ] Discovery
- [x] Conversation
- [ ] Visual Recognition | 1.0 | [documentation] Write code samples - Create a code samples project for examples of using services and document code samples in ReadMe for each service endpoint implemented so far.
- [x] Speech to Text
- [x] Text to Speech
- [x] Tone Analyzer
- [x] Personality Insights
- [x] Language Translation
- [ ] Discovery
- [x] Conversation
- [ ] Visual Recognition | priority | write code samples create a code samples project for examples of using services and document code samples in readme for each service endpoint implemented so far speech to text text to speech tone analyzer personality insights language translation discovery conversation visual recognition | 1 |
38,040 | 2,838,412,396 | IssuesEvent | 2015-05-27 07:29:04 | Stratio/sparkta | https://api.github.com/repos/Stratio/sparkta | opened | Policies Persistence. Step 3) Composition of saved fragments | component - driver enhancement priority - high | If the user sends a POST message to start a new policy and it contains fragments, these fragments should be interpreted and replaced with the real JSON | 1.0 | Policies Persistence. Step 3) Composition of saved fragments - If the user sends a POST message to start a new policy and it contains fragments, these fragments should be interpreted and replaced with the real JSON | priority | policies persistence step composition of saved fragments if the user sends a post message to start a new policy and it contains fragments these fragments should be interpreted and changed with the real json | 1 |
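The substitution this issue describes amounts to the following sketch. The fragment store and the `"fragment"`/`"inputs"` field names are invented for illustration — Sparkta's real policy JSON schema differs:

```python
# Hypothetical store of saved fragments, keyed by fragment name.
saved_fragments = {
    "kafka-input": {"type": "kafka", "topic": "events", "brokers": "localhost:9092"},
}

def expand_policy(policy):
    """Replace each {"fragment": <name>} reference with the stored JSON body."""
    expanded = dict(policy)  # shallow copy; the submitted policy is left untouched
    expanded["inputs"] = [
        saved_fragments[item["fragment"]] if "fragment" in item else item
        for item in policy.get("inputs", [])
    ]
    return expanded

policy = {"name": "demo-policy", "inputs": [{"fragment": "kafka-input"}]}
print(expand_policy(policy)["inputs"][0]["type"])  # kafka
```

The driver would run this expansion on the posted policy before starting it, so downstream code only ever sees fully resolved JSON.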
229,657 | 7,582,425,030 | IssuesEvent | 2018-04-25 04:08:16 | HXLStandard/hxl-proxy | https://api.github.com/repos/HXLStandard/hxl-proxy | reopened | pcodes service generates empty file for itos | high-priority | If adm4 doesn't exist for ITOS for Guinea, the pcodes service of hxl proxy will generate an empty CSV file, which crashes the validation
https://beta.proxy.hxlstandard.org/data/validate?url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F19oLyom0jNYSZJKXgbPKEVYbdlv8zIAuGGRYJAYttOb0%2Fedit%23gid%3D0&schema_url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1EyUzGO7M2juKyVuY2p1nPcxM29SauV1kT2vjRBnS9wU%2Fedit%23gid%3D1513801580&url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F19oLyom0jNYSZJKXgbPKEVYbdlv8zIAuGGRYJAYttOb0%2Fedit%23gid%3D0
<img width="866" alt="screen shot 2018-04-24 at 12 23 13" src="https://user-images.githubusercontent.com/3865844/39181556-52d7667e-47ba-11e8-842d-2dccdbefdbbf.png">
| 1.0 | pcodes service generate empty file for itos - If adm4 doesn't exist for ITOS for Guinea, the pcodes service of hxl proxy will generate an empty csv file which crashes the validation
https://beta.proxy.hxlstandard.org/data/validate?url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F19oLyom0jNYSZJKXgbPKEVYbdlv8zIAuGGRYJAYttOb0%2Fedit%23gid%3D0&schema_url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1EyUzGO7M2juKyVuY2p1nPcxM29SauV1kT2vjRBnS9wU%2Fedit%23gid%3D1513801580&url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F19oLyom0jNYSZJKXgbPKEVYbdlv8zIAuGGRYJAYttOb0%2Fedit%23gid%3D0
<img width="866" alt="screen shot 2018-04-24 at 12 23 13" src="https://user-images.githubusercontent.com/3865844/39181556-52d7667e-47ba-11e8-842d-2dccdbefdbbf.png">
| priority | pcodes service generate empty file for itos if doesn t exist for itos for guinea the pcodes service of hxl proxy will generate an empty csv file which crashes the validation img width alt screen shot at src | 1 |
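A defensive check of the kind this bug implies (a hypothetical sketch, not the actual hxl-proxy code): detect when a generated CSV contains no data rows before handing it to validation, so a missing admin level produces a clean error instead of a crash:

```python
import csv
import io

def has_data_rows(csv_text):
    """Return True only if the CSV has a header plus at least one non-empty data row."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return len(rows) > 1 and any(cell.strip() for cell in rows[1])

# An empty file or a header-only file (like the missing-adm4 case) is rejected.
print(has_data_rows(""))                        # False
print(has_data_rows("code,name\n"))             # False
print(has_data_rows("code,name\nGN01,Boke\n"))  # True
```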
248,637 | 7,934,499,415 | IssuesEvent | 2018-07-08 19:57:49 | HealthRex/CDSS | https://api.github.com/repos/HealthRex/CDSS | closed | Batch unit test failures - Some library dependencies and floating point checks | Priority - 1 High help wanted | ======================================================================
ERROR: medinfo.dataconversion.test.TestEventDigraph (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: medinfo.dataconversion.test.TestEventDigraph
Traceback (most recent call last):
File "C:\Dev\Python27\lib\unittest\loader.py", line 254, in _find_tests
module = self._get_module_from_name(name)
File "C:\Dev\Python27\lib\unittest\loader.py", line 232, in _get_module_from_name
__import__(name)
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestEventDigraph.py", line 6, in <module>
import networkx as nx
ImportError: No module named networkx
======================================================================
ERROR: test_dataConversion (medinfo.dataconversion.test.TestSTRIDEDxListConversion.TestSTRIDEDxListConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEDxListConversion.py", line 60, in setUp
(dataItemId, isNew) = DBUtil.findOrInsertItem("stride_dx_list", dataModel, retrieveCol="pat_id" );
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 682, in findOrInsertItem
cur.execute( searchQuery, searchParams );
ProgrammingError: column "dx_icd10_code_list" does not exist
LINE 8: AND dx_icd10_code_list = ''
^
======================================================================
ERROR: test_dataConversion_maxMixtureCount (medinfo.dataconversion.test.TestSTRIDEPreAdmitMedConversion.TestSTRIDEPreAdmitMedConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEPreAdmitMedConversion.py", line 249, in test_dataConversion_maxMixtureCount
self.converter.convertSourceItems(convOptions);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 48, in convertSourceItems
rxcuiDataByMedId = self.loadRXCUIData(conn=conn);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 81, in loadRXCUIData
DBUtil.execute(query)
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 261, in execute
cur.execute( query, parameters )
ProgrammingError: syntax error at or near "NOT"
LINE 4: ADD COLUMN IF NOT EXISTS
^
======================================================================
ERROR: test_dataConversion_normalized (medinfo.dataconversion.test.TestSTRIDEPreAdmitMedConversion.TestSTRIDEPreAdmitMedConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEPreAdmitMedConversion.py", line 124, in test_dataConversion_normalized
self.converter.convertSourceItems(convOptions);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 48, in convertSourceItems
rxcuiDataByMedId = self.loadRXCUIData(conn=conn);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 81, in loadRXCUIData
DBUtil.execute(query)
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 261, in execute
cur.execute( query, parameters )
ProgrammingError: syntax error at or near "NOT"
LINE 4: ADD COLUMN IF NOT EXISTS
^
======================================================================
FAIL: test_addTimeCycleFeatures (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 447, in test_addTimeCycleFeatures
self.assertEqualList(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 142, in assertEqualList
self.assertEqual(verifyItem, sampleItem)
AssertionError: Lists differ: ['-789', '-900', 'LABMETB', '2... != ['-789', '-900', 'LABMETB', '2...
First differing element 6:
'0.866025403784'
'0.8660254037844388'
['-789',
'-900',
'LABMETB',
'2009-05-06 15:00:00',
'0',
'5',
- '0.866025403784',
+ '0.8660254037844388',
? ++++
- '-0.5',
+ '-0.4999999999999998',
'15',
- '-0.707106781187',
+ '-0.7071067811865471',
? +++ +
- '-0.707106781187']
+ '-0.7071067811865479']
? +++ +
======================================================================
FAIL: test_buildFeatureMatrix_multiFlowsheet (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 405, in test_buildFeatureMatrix_multiFlowsheet
self.assertEqualList(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 142, in assertEqualList
self.assertEqual(verifyItem, sampleItem)
AssertionError: Lists differ: ['-789', '-900', 'LABMETB', '2... != ['-789', '-900', 'LABMETB', '2...
First differing element 40:
'0.666666666667'
'0.6666666666666666'
Diff is 702 characters long. Set self.maxDiff to None to see it.
======================================================================
FAIL: test_build_FeatureMatrix_multiLabTest (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 345, in test_build_FeatureMatrix_multiLabTest
self.assertEqualTable(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 172, in assertEqualTable
self.assertEqual(verifyItem, sampleItem)
AssertionError: '0.666666666667' != '0.6666666666666666'
----------------------------------------------------------------------
Ran 122 tests in 97.337s
FAILED (failures=3, errors=4) | 1.0 | Batch unit test failures - Some library dependencies and floating point checks - ======================================================================
ERROR: medinfo.dataconversion.test.TestEventDigraph (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: medinfo.dataconversion.test.TestEventDigraph
Traceback (most recent call last):
File "C:\Dev\Python27\lib\unittest\loader.py", line 254, in _find_tests
module = self._get_module_from_name(name)
File "C:\Dev\Python27\lib\unittest\loader.py", line 232, in _get_module_from_name
__import__(name)
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestEventDigraph.py", line 6, in <module>
import networkx as nx
ImportError: No module named networkx
======================================================================
ERROR: test_dataConversion (medinfo.dataconversion.test.TestSTRIDEDxListConversion.TestSTRIDEDxListConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEDxListConversion.py", line 60, in setUp
(dataItemId, isNew) = DBUtil.findOrInsertItem("stride_dx_list", dataModel, retrieveCol="pat_id" );
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 682, in findOrInsertItem
cur.execute( searchQuery, searchParams );
ProgrammingError: column "dx_icd10_code_list" does not exist
LINE 8: AND dx_icd10_code_list = ''
^
======================================================================
ERROR: test_dataConversion_maxMixtureCount (medinfo.dataconversion.test.TestSTRIDEPreAdmitMedConversion.TestSTRIDEPreAdmitMedConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEPreAdmitMedConversion.py", line 249, in test_dataConversion_maxMixtureCount
self.converter.convertSourceItems(convOptions);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 48, in convertSourceItems
rxcuiDataByMedId = self.loadRXCUIData(conn=conn);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 81, in loadRXCUIData
DBUtil.execute(query)
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 261, in execute
cur.execute( query, parameters )
ProgrammingError: syntax error at or near "NOT"
LINE 4: ADD COLUMN IF NOT EXISTS
^
======================================================================
ERROR: test_dataConversion_normalized (medinfo.dataconversion.test.TestSTRIDEPreAdmitMedConversion.TestSTRIDEPreAdmitMedConversion)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestSTRIDEPreAdmitMedConversion.py", line 124, in test_dataConversion_normalized
self.converter.convertSourceItems(convOptions);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 48, in convertSourceItems
rxcuiDataByMedId = self.loadRXCUIData(conn=conn);
File "C:\HealthRex\CDSS\medinfo\dataconversion\STRIDEPreAdmitMedConversion.py", line 81, in loadRXCUIData
DBUtil.execute(query)
File "C:\HealthRex\CDSS\medinfo\db\DBUtil.py", line 261, in execute
cur.execute( query, parameters )
ProgrammingError: syntax error at or near "NOT"
LINE 4: ADD COLUMN IF NOT EXISTS
^
======================================================================
FAIL: test_addTimeCycleFeatures (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 447, in test_addTimeCycleFeatures
self.assertEqualList(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 142, in assertEqualList
self.assertEqual(verifyItem, sampleItem)
AssertionError: Lists differ: ['-789', '-900', 'LABMETB', '2... != ['-789', '-900', 'LABMETB', '2...
First differing element 6:
'0.866025403784'
'0.8660254037844388'
['-789',
'-900',
'LABMETB',
'2009-05-06 15:00:00',
'0',
'5',
- '0.866025403784',
+ '0.8660254037844388',
? ++++
- '-0.5',
+ '-0.4999999999999998',
'15',
- '-0.707106781187',
+ '-0.7071067811865471',
? +++ +
- '-0.707106781187']
+ '-0.7071067811865479']
? +++ +
======================================================================
FAIL: test_buildFeatureMatrix_multiFlowsheet (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 405, in test_buildFeatureMatrix_multiFlowsheet
self.assertEqualList(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 142, in assertEqualList
self.assertEqual(verifyItem, sampleItem)
AssertionError: Lists differ: ['-789', '-900', 'LABMETB', '2... != ['-789', '-900', 'LABMETB', '2...
First differing element 40:
'0.666666666667'
'0.6666666666666666'
Diff is 702 characters long. Set self.maxDiff to None to see it.
======================================================================
FAIL: test_build_FeatureMatrix_multiLabTest (medinfo.dataconversion.test.TestFeatureMatrixFactory.TestFeatureMatrixFactory)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\HealthRex\CDSS\medinfo\dataconversion\test\TestFeatureMatrixFactory.py", line 345, in test_build_FeatureMatrix_multiLabTest
self.assertEqualTable(resultMatrix[2:], expectedMatrix)
File "C:\HealthRex\CDSS\medinfo\common\test\Util.py", line 172, in assertEqualTable
self.assertEqual(verifyItem, sampleItem)
AssertionError: '0.666666666667' != '0.6666666666666666'
----------------------------------------------------------------------
Ran 122 tests in 97.337s
FAILED (failures=3, errors=4) | priority | batch unit test failures some library dependencies and floating point checks error medinfo dataconversion test testeventdigraph unittest loader moduleimportfailure importerror failed to import test module medinfo dataconversion test testeventdigraph traceback most recent call last file c dev lib unittest loader py line in find tests module self get module from name name file c dev lib unittest loader py line in get module from name import name file c healthrex cdss medinfo dataconversion test testeventdigraph py line in import networkx as nx importerror no module named networkx error test dataconversion medinfo dataconversion test teststridedxlistconversion teststridedxlistconversion traceback most recent call last file c healthrex cdss medinfo dataconversion test teststridedxlistconversion py line in setup dataitemid isnew dbutil findorinsertitem stride dx list datamodel retrievecol pat id file c healthrex cdss medinfo db dbutil py line in findorinsertitem cur execute searchquery searchparams programmingerror column dx code list does not exist line and dx code list error test dataconversion maxmixturecount medinfo dataconversion test teststridepreadmitmedconversion teststridepreadmitmedconversion traceback most recent call last file c healthrex cdss medinfo dataconversion test teststridepreadmitmedconversion py line in test dataconversion maxmixturecount self converter convertsourceitems convoptions file c healthrex cdss medinfo dataconversion stridepreadmitmedconversion py line in convertsourceitems rxcuidatabymedid self loadrxcuidata conn conn file c healthrex cdss medinfo dataconversion stridepreadmitmedconversion py line in loadrxcuidata dbutil execute query file c healthrex cdss medinfo db dbutil py line in execute cur execute query parameters programmingerror syntax error at or near not line add column if not exists error test dataconversion normalized medinfo dataconversion test teststridepreadmitmedconversion 
teststridepreadmitmedconversion traceback most recent call last file c healthrex cdss medinfo dataconversion test teststridepreadmitmedconversion py line in test dataconversion normalized self converter convertsourceitems convoptions file c healthrex cdss medinfo dataconversion stridepreadmitmedconversion py line in convertsourceitems rxcuidatabymedid self loadrxcuidata conn conn file c healthrex cdss medinfo dataconversion stridepreadmitmedconversion py line in loadrxcuidata dbutil execute query file c healthrex cdss medinfo db dbutil py line in execute cur execute query parameters programmingerror syntax error at or near not line add column if not exists fail test addtimecyclefeatures medinfo dataconversion test testfeaturematrixfactory testfeaturematrixfactory traceback most recent call last file c healthrex cdss medinfo dataconversion test testfeaturematrixfactory py line in test addtimecyclefeatures self assertequallist resultmatrix expectedmatrix file c healthrex cdss medinfo common test util py line in assertequallist self assertequal verifyitem sampleitem assertionerror lists differ labmetb labmetb first differing element labmetb fail test buildfeaturematrix multiflowsheet medinfo dataconversion test testfeaturematrixfactory testfeaturematrixfactory traceback most recent call last file c healthrex cdss medinfo dataconversion test testfeaturematrixfactory py line in test buildfeaturematrix multiflowsheet self assertequallist resultmatrix expectedmatrix file c healthrex cdss medinfo common test util py line in assertequallist self assertequal verifyitem sampleitem assertionerror lists differ labmetb labmetb first differing element diff is characters long set self maxdiff to none to see it fail test build featurematrix multilabtest medinfo dataconversion test testfeaturematrixfactory testfeaturematrixfactory traceback most recent call last file c healthrex cdss medinfo dataconversion test testfeaturematrixfactory py line in test build featurematrix 
multilabtest self assertequaltable resultmatrix expectedmatrix file c healthrex cdss medinfo common test util py line in assertequaltable self assertequal verifyitem sampleitem assertionerror ran tests in failed failures errors | 1 |
157,712 | 6,011,378,610 | IssuesEvent | 2017-06-06 15:05:45 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | RNN CUDNN backend OOM issue | bug high priority | Hi,
I think I have stumbled upon something weird with the CUDNN backend for RNN. I am using CUDNN v5 on CentOS 7.3.1.
```
torch.version.__version__ = e1d257bc6d472ee297df1719bf344bae359dbeaa
```
I have discussed this with @soumith as well.
The code snippet for reproducing is below. Enabling the cudnn backend increases the memory used linearly (goes OOM eventually). Disabling the backend results in expected behavior.
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
torch.backends.cudnn.enabled = False
import torch.cuda
import torch.nn as nn
from torch.autograd import Variable
import gc

print(torch.version.__version__)

def get_num_tensors():
    ctr = 0
    for obj in gc.get_objects():
        if torch.is_tensor(obj):
            ctr += 1
    return ctr

wordvec_dim = 300
hidden_dim = 256
rnn_num_layers = 1
batch_size = 10
vocab_size = 100
rnn_dropout = 0.5

model = nn.LSTM(wordvec_dim, hidden_dim, rnn_num_layers,
                dropout=rnn_dropout, batch_first=True)
# set training mode
model.cuda()
model.train()

encoded = Variable(torch.FloatTensor(batch_size, 1, wordvec_dim))
encoded = encoded.cuda()
h0 = Variable(torch.zeros(rnn_num_layers, batch_size, hidden_dim))
c0 = Variable(torch.zeros(rnn_num_layers, batch_size, hidden_dim))
h = h0.cuda()
c = c0.cuda()

print('Start:', get_num_tensors())
num_forward_passes = 10
for _i in range(num_forward_passes):
    output, (h, c) = model(encoded, (h, c))
    print(_i, get_num_tensors())
print('End:', get_num_tensors())
```
Output *with* cudnn enabled
```
e1d257bc6d472ee297df1719bf344bae359dbeaa
Start: 9
0 16
1 22
2 28
3 34
4 40
5 46
6 52
7 58
8 64
9 70
End: 70
```
Output without cudnn
```
e1d257bc6d472ee297df1719bf344bae359dbeaa
Start: 9
0 10
1 10
2 10
3 10
4 10
5 10
6 10
7 10
8 10
9 10
End: 10
``` | 1.0 | RNN CUDNN backend OOM issue - Hi,
I think I have stumbled upon something weird with the CUDNN backend for RNN. I am using CUDNN v5 on CentOS 7.3.1.
```
torch.version.__version__ = e1d257bc6d472ee297df1719bf344bae359dbeaa
```
I have discussed this with @soumith as well.
The code snippet for reproducing is below. Enabling the cudnn backend increases the memory used linearly (goes OOM eventually). Disabling the backend results in expected behavior.
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
torch.backends.cudnn.enabled = False
import torch.cuda
import torch.nn as nn
from torch.autograd import Variable
import gc

print(torch.version.__version__)

def get_num_tensors():
    ctr = 0
    for obj in gc.get_objects():
        if torch.is_tensor(obj):
            ctr += 1
    return ctr

wordvec_dim = 300
hidden_dim = 256
rnn_num_layers = 1
batch_size = 10
vocab_size = 100
rnn_dropout = 0.5

model = nn.LSTM(wordvec_dim, hidden_dim, rnn_num_layers,
                dropout=rnn_dropout, batch_first=True)
# set training mode
model.cuda()
model.train()

encoded = Variable(torch.FloatTensor(batch_size, 1, wordvec_dim))
encoded = encoded.cuda()
h0 = Variable(torch.zeros(rnn_num_layers, batch_size, hidden_dim))
c0 = Variable(torch.zeros(rnn_num_layers, batch_size, hidden_dim))
h = h0.cuda()
c = c0.cuda()

print('Start:', get_num_tensors())
num_forward_passes = 10
for _i in range(num_forward_passes):
    output, (h, c) = model(encoded, (h, c))
    print(_i, get_num_tensors())
print('End:', get_num_tensors())
```
Output *with* cudnn enabled
```
e1d257bc6d472ee297df1719bf344bae359dbeaa
Start: 9
0 16
1 22
2 28
3 34
4 40
5 46
6 52
7 58
8 64
9 70
End: 70
```
Output without cudnn
```
e1d257bc6d472ee297df1719bf344bae359dbeaa
Start: 9
0 10
1 10
2 10
3 10
4 10
5 10
6 10
7 10
8 10
9 10
End: 10
``` | priority | rnn cudnn backend oom issue hi i think i have stumbled upon something weird with the cudnn backend for rnn i am using cudnn on cent os torch version version i have discussed this with soumith as well the code snippet for reproducing is below enabling the cudnn backend increases the memory used linearly goes oom eventually disabling the backend results in expected behavior python from future import absolute import from future import division from future import print function from future import unicode literals import torch torch backends cudnn enabled false import torch cuda import torch nn as nn from torch autograd import variable import gc print torch version version def get num tensors ctr for obj in gc get objects if torch is tensor obj ctr return ctr wordvec dim hidden dim rnn num layers batch size vocab size rnn dropout model nn lstm wordvec dim hidden dim rnn num layers dropout rnn dropout batch first true set training mode model cuda model train encoded variable torch floattensor batch size wordvec dim encoded encoded cuda variable torch zeros rnn num layers batch size hidden dim variable torch zeros rnn num layers batch size hidden dim h cuda c cuda print start get num tensors num forward passes for i in range num forward passes output h c model encoded h c print i get num tensors print end get num tensors output with cudnn enabled start end output without cudnn start end | 1 |
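The reproduction in the record above detects the leak by counting live tensors with `gc.get_objects()`; the same pattern works for any object type. A minimal framework-free sketch (using a plain class as a stand-in for tensors, so it runs without PyTorch installed):

```python
import gc

class FakeTensor:
    """Stand-in for torch tensors so the sketch runs without PyTorch."""
    pass

def count_live(cls):
    # Scan every object tracked by the garbage collector and count instances,
    # mirroring the get_num_tensors() helper from the report above.
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))

held = [FakeTensor() for _ in range(5)]  # referenced objects stay alive
FakeTensor()                             # unreferenced object becomes collectable
gc.collect()                             # reclaim anything unreachable
print(count_live(FakeTensor))            # prints 5
```

If the count keeps growing across otherwise identical iterations, as in the cudnn-enabled output above, something is retaining references between passes.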
120,225 | 4,786,956,151 | IssuesEvent | 2016-10-29 18:32:13 | devists/projectile | https://api.github.com/repos/devists/projectile | closed | Mock User Detail for login | backend hacktoberfest Priority - High | Please provide a sample username and password for testing.
This will be the username and password everyone will use to log in.
Currently we are not able to log in. | 1.0 | Mock User Detail for login - Please provide a sample username and password for testing.
This will be the username and password everyone will use to log in.
Currently we are not able to log in. | priority | mock user detail for login please provide a sample username and password for testing this will be the username and password everyone will use to log in currently we are not able to log in | 1
718,057 | 24,702,610,226 | IssuesEvent | 2022-10-19 16:21:54 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | opened | Initialization of the backend project | priority-high status-new back-end | ### Issue Description
As we started the implementation of the main app, we need to initialize the node project for the main backend application.
We decided to use our folder structure and packages from the practice app from the last year.
Under this issue I will kickstart the project with the required packages and complete the issue ASAP so that we can begin implementation.
### Step Details
Steps that will be performed:
- [ ] Initialize the npm project.
- [ ] Load the required packages.
### Final Actions
Upon completion of this issue, the backend team will start the implementation of the authorization endpoints as discussed in meeting 2
### Deadline of the Issue
20.10.2022 10:00 AM
### Reviewer
Hasan Can Erol
### Deadline for the Review
20.10.2022 10:00 PM | 1.0 | Initialization of the backend project - ### Issue Description
As we started the implementation of the main app, we need to initialize the node project for the main backend application.
We decided to use our folder structure and packages from the practice app from the last year.
Under this issue I will kickstart the project with the required packages and complete the issue ASAP so that we can begin implementation.
### Step Details
Steps that will be performed:
- [ ] Initialize the npm project.
- [ ] Load the required packages.
### Final Actions
Upon completion of this issue, the backend team will start the implementation of the authorization endpoints as discussed in meeting 2
### Deadline of the Issue
20.10.2022 10:00 AM
### Reviewer
Hasan Can Erol
### Deadline for the Review
20.10.2022 10:00 PM | priority | initialization of the backend project issue description as we started the implementation of the main app we need to initialize the node project for main backend application we decided to use our folder structure and packages from the practice app from the last year under this issue i will kickstart the project with required packages and complete the issue in asap so that we can begin implementation step details steps that will be performed initialize the npm project load the required packages final actions upon completion of this issue backend team will start the implementation of the authorization endpoints as discussed in meeting deadline of the issue am reviewer hasan can erol deadline for the review pm | 1 |
831,657 | 32,057,306,852 | IssuesEvent | 2023-09-24 08:30:49 | varundeepsaini/discordbot | https://api.github.com/repos/varundeepsaini/discordbot | closed | bug: the bot shows `the application didnt respond` even after sending the message | bug good first issue priority: high | Bug:

| 1.0 | bug: the bot shows `the application didnt respond` even after sending the message - Bug:

| priority | bug the bot shows the application didnt respond even after sending the message bug | 1 |
426,437 | 12,372,435,735 | IssuesEvent | 2020-05-18 20:24:23 | technologiestiftung/tsb-trees-frontend | https://api.github.com/repos/technologiestiftung/tsb-trees-frontend | closed | Cookie content | Priority HIGH | There is a local storage object being generated with the following content: mapbox.eventData:ZmRua2xn:{"lastSuccess":1588594083071,"tokenU":"fdnklg"}
?? | 1.0 | Cookie content - There is a local storage object being generated with the following content: mapbox.eventData:ZmRua2xn:{"lastSuccess":1588594083071,"tokenU":"fdnklg"}
?? | priority | cookie content there is a local storage object being generated with the following content mapbox eventdata lastsuccess tokenu fdnklg | 1
705,435 | 24,234,729,763 | IssuesEvent | 2022-09-26 21:45:51 | nasa/fprime | https://api.github.com/repos/nasa/fprime | closed | Cross-compile toolchain for raspberry pi is deprecated | High Priority Easy First Issue | [Cross-compile toolchain](https://github.com/raspberrypi/tools) in [package installation](https://github.com/nasa/fprime/blob/devel/RPI/README.md#package-installation) for raspberry pi demo is listed as a deprecated project. | 1.0 | Cross-compile toolchain for raspberry pi is deprecated - [Cross-compile toolchain](https://github.com/raspberrypi/tools) in [package installation](https://github.com/nasa/fprime/blob/devel/RPI/README.md#package-installation) for raspberry pi demo is listed as a deprecated project. | priority | cross compile toolchain for raspberry pi is deprecated in for raspberry pi demo is listed as a deprecated project | 1 |
644,453 | 20,978,108,906 | IssuesEvent | 2022-03-28 17:03:37 | bcgov/foi-flow | https://api.github.com/repos/bcgov/foi-flow | closed | Closed request is showing in IAO user queue | bug high priority | **Describe the bug in current situation**
Closed request is showing in IAO user queue
**Link bug to the User Story**
**Impact of this bug**
medium - goes against ac, does not prevent users from using the system
**Chance of Occurring (high/medium/low/very low)**
high
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
- create a request
- login as any IAO user (processing, intake, flex)
- move to open stage
- close, note the request id
- return to queue
- search for request in queue (not advanced search)
**Actual/ observed behaviour/ results**
closed request shows up
**Expected behaviour**
closed request should not show up
**Screenshots/ Visual Reference/ Source**

| 1.0 | Closed request is showing in IAO user queue - **Describe the bug in current situation**
Closed request is showing in IAO user queue
**Link bug to the User Story**
**Impact of this bug**
medium - goes against ac, does not prevent users from using the system
**Chance of Occurring (high/medium/low/very low)**
high
**Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?**
**Steps to Reproduce**
Steps to reproduce the behavior:
- create a request
- login as any IAO user (processing, intake, flex)
- move to open stage
- close, note the request id
- return to queue
- search for request in queue (not advanced search)
**Actual/ observed behaviour/ results**
closed request shows up
**Expected behaviour**
closed request should not show up
**Screenshots/ Visual Reference/ Source**

| priority | closed request is showing in iao user queue describe the bug in current situation closed request is showing in iao user queue link bug to the user story impact of this bug medium goes against ac does not prevent users from using the system chance of occurring high medium low very low high pre conditions which env any pre requesites or assumptions to execute steps steps to reproduce steps to reproduce the behavior create a request login as any iao user processing intake flex move to open stage close note the request id return to queue search for request in queue not advanced search actual observed behaviour results closed request shows up expected behaviour closed request should not show up screenshots visual reference source | 1 |
430,283 | 12,450,710,964 | IssuesEvent | 2020-05-27 09:14:43 | bounswe/bounswe2020group8 | https://api.github.com/repos/bounswe/bounswe2020group8 | closed | Missing urls2.py [Shipment_Calculator branch] | Priority: High help wanted | urls2.py is missing in the shipment_calculator branch. To test my code, I commented out line 22 in practice_app/urls.py.
Could you please upload urls2.py and uncomment line 22 in practice_app/urls.py? | 1.0 | Missing urls2.py [Shipment_Calculator branch] - urls2.py is missing in the shipment_calculator branch. To test my code, I commented out line 22 in practice_app/urls.py.
Could you please upload urls2.py and uncomment line 22 in practice_app/urls.py? | priority | missing py py is missing in the shipment calculator to test my code i commented the line in practice app urls py could you please upload the py and uncomment the line in practice app urls py | 1
781,897 | 27,453,891,075 | IssuesEvent | 2023-03-02 19:36:43 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [all] Upgrade to Java 17 | new feature priority: high triage | ### Duplicates
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Java 11 LTS is approaching EOL
### Describe the solution you'd like
Upgrade to Java 17 LTS | 1.0 | [all] Upgrade to Java 17 - ### Duplicates
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Java 11 LTS is approaching EOL
### Describe the solution you'd like
Upgrade to Java 17 LTS | priority | upgrade to java duplicates i have searched the existing issues is your feature request related to a problem please describe java lts is approaching eol describe the solution you d like upgrade to java lts | 1 |
219,335 | 7,335,038,779 | IssuesEvent | 2018-03-06 01:45:34 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | error message improvement when adding users in hierarchical configuration | priority:high sprint1 status:pending type:bug | Problem statement: the error message shown when adding xdsh capabilities does not reveal that the user is missing from the service nodes in this hierarchical configuration.
Use case: A user was added to the primary xCAT server, and granted initial ability to open a node console.
> run '/opt/xcat/share/xcat/scripts/setup-local-client.sh ccuser' as 'root' to generate the client certificates
Then the policy table was updated for this local user to do xCAT commands, which included xdsh.
> chdef -t policy -o 5.7 name=ccuser commands=nodels,rpower,lsdef,xdsh rule=allow
A postscript is run to add the user to the compute nodes, and ssh, nodels, rpower, all are tested and work. Finally, I attempted to add xdsh capability.
> xdsh cn01 -K
...
> Error: xdsh plugin bug, pid 68135, process description: ‘xcatd SSL: xdsh: xdsh instance’ with error ‘cannot open file /.ssh/copy.sh
> ’ while trying to fulfill request for the following nodes: cn01
The root cause of the error was that xdsh, unlike ssh, was attempting to route through the service node (by design). I had neglected to add the user and sync the /etc/passwd files to the service nodes.
This issue requests removing the plugin bug message and providing a clearer message that indicates where the error is coming from, whether it's the MGMT node or the SN. Otherwise it's quite difficult to pinpoint the error.
documentation reference for this user scenario:
http://xcat-docs.readthedocs.io/en/stable/advanced/security/security.html | 1.0 | error message improvement when adding users in hierarchical configuration - Problem statement: the error message shown when adding xdsh capabilities does not reveal that the user is missing from the service nodes in this hierarchical configuration.
Use case: A user was added to the primary xCAT server, and granted initial ability to open a node console.
> run '/opt/xcat/share/xcat/scripts/setup-local-client.sh ccuser' as 'root' to generate the client certificates
Then the policy table was updated for this local user to do xCAT commands, which included xdsh.
> chdef -t policy -o 5.7 name=ccuser commands=nodels,rpower,lsdef,xdsh rule=allow
A postscript is run to add the user to the compute nodes, and ssh, nodels, rpower, all are tested and work. Finally, I attempted to add xdsh capability.
> xdsh cn01 -K
...
> Error: xdsh plugin bug, pid 68135, process description: ‘xcatd SSL: xdsh: xdsh instance’ with error ‘cannot open file /.ssh/copy.sh
> ’ while trying to fulfill request for the following nodes: cn01
The root cause of the error was that xdsh, unlike ssh, was attempting to route through the service node (by design). I had neglected to add the user and sync the /etc/passwd files to the service nodes.
This issue requests removing the plugin bug message and providing a clearer message that indicates where the error is coming from, whether it's the MGMT node or the SN. Otherwise it's quite difficult to pinpoint the error.
documentation reference for this user scenario:
http://xcat-docs.readthedocs.io/en/stable/advanced/security/security.html | priority | error message improvement when adding users in hierarchical configuration problem statement error message when adding xdsh capabilities does not reveal that the user is missing from the service nodes in this hierarchical configuration use case a user was added to the primary xcat server and granted initial ability to open a node console run opt xcat share xcat scripts setup local client sh ccuser as root to generate the client certificates then the policy table was updated for this local user to do xcat commands which included xdsh chdef t policy o name ccuser commands nodels rpower lsdef xdsh rule allow a postscript is run to add the user to the compute nodes and ssh nodels rpower all are tested and work finally i attempted to add xdsh capability xdsh k error xdsh plugin bug pid process description ‘xcatd ssl xdsh xdsh instance’ with error ‘cannot open file ssh copy sh ’ while trying to fulfill request for the following nodes the root cause of the error was that xdsh unlike ssh was attempting to route through the service node by design i had neglected to add the user and sync the etc passwd files to the service nodes this issue request is to remove the plugin bug message and have some better message to figure out where the error is coming from if it s mgmt node or sn otherwise it s quite difficult to pinpoint the error documentation reference for this user scenario | 1 |
290,590 | 8,901,159,144 | IssuesEvent | 2019-01-17 01:02:06 | VIDY/embed.js | https://api.github.com/repos/VIDY/embed.js | closed | Hover: Occasionally prevents freeform movement | High Priority bug | This isn't consistent, but _does_ happen semi-regularly as of late.
On mobile (Android & iOS), when holding a vlink, the player will immediately shutdown/reset once your finger leaves the vlink's boundaries. It behaves as if it's in desktop mode, but abruptly & without the exit animation. | 1.0 | Hover: Occasionally prevents freeform movement - This isn't consistent, but _does_ happen semi-regularly as of late.
On mobile (Android & iOS), when holding a vlink, the player will immediately shutdown/reset once your finger leaves the vlink's boundaries. It behaves as if it's in desktop mode, but abruptly & without the exit animation. | priority | hover occasionally prevents freeform movement this isn t consistent but does happen semi regularly as of late on mobile android ios when holding a vlink the player will immediately shutdown reset once your finger leaves the vlink s boundaries it behaves as if it s in desktop mode but abruptly without the exit animation | 1 |
250,147 | 7,969,228,846 | IssuesEvent | 2018-07-16 08:18:00 | ISISScientificComputing/autoreduce | https://api.github.com/repos/ISISScientificComputing/autoreduce | closed | Dealing with negative RB numbers and when RB number is 0 | Bug High Priority | Currently the webapp fails when the RB number for the run is negative. This is the case for calibration runs. We should find a more elegant way to deal with this case when it occurs.
We should also ensure RB=0 is working correctly.
*This functionality was requested by Pascal*
```
Hello,
We cannot autoreduce data that is in calibration or commissioning RB as they do not appear listed on SECI when we go to change experiment.
Is it also possible to have a RB number =0 (quite a few people use that) so we can put stuff there and it can be autoprocessed.
Thanks,
P
``` | 1.0 | Dealing with negative RB numbers and when RB number is 0 - Currently the webapp fails when the RB number for the run is negative. This is the case for calibration runs. We should find a more elegant way to deal with this case when it occurs.
We should also ensure RB=0 is working correctly.
*This functionality was requested by Pascal*
```
Hello,
We cannot autoreduce data that is in calibration or commissioning RB as they do not appear listed on SECI when we go to change experiment.
Is it also possible to have a RB number =0 (quite a few people use that) so we can put stuff there and it can be autoprocessed.
Thanks,
P
``` | priority | dealing with negative rb numbers and when rb number is currently the webapp fails when the rb number for the run is negative this is the case for calibration runs we should find a more elegant way to deal with this case when it occurs we should also ensure rb is working correctly this functionality was requested by pascal hello we cannot autoreduce data that is in calibration or commissioning rb as they do not appear listed on seci when we go to change experiment is it also possible to have a rb number quite a few people use that so we can put stuff there and it can be autoprocessed thanks p | 1 |
142,133 | 5,459,712,332 | IssuesEvent | 2017-03-09 01:42:33 | CS2103JAN2017-T09-B4/main | https://api.github.com/repos/CS2103JAN2017-T09-B4/main | opened | Indicate a starting and ending time for my tasks | priority.high status.ongoing type.story | So that I can keep track of events I need to attend | 1.0 | Indicate a starting and ending time for my tasks - So that I can keep track of events I need to attend | priority | indicate a starting and ending time for my tasks so that i can keep track of events i need to attend | 1 |
681,342 | 23,306,692,576 | IssuesEvent | 2022-08-08 02:20:22 | jhugon/semantic-data-taking-webapp | https://api.github.com/repos/jhugon/semantic-data-taking-webapp | closed | Cloud userfile environment variable isn’t working as it should | bug High priority | The userfile location set by the env var shows up correctly at the beginning of the log, but then appears as the default userfile.txt later in the log and when the auth functions are called | 1.0 | Cloud userfile environment variable isn’t working as it should - The userfile location set by the env var shows up correctly at the beginning of the log, but then appears as the default userfile.txt later in the log and when the auth functions are called | priority | cloud userfile environment variable isn’t working as it should the userfile location set by the env var shows up correctly at the beginning of the log but then appears as the default userfile txt later in the log and when the auth functions are called | 1 |
706,632 | 24,279,974,938 | IssuesEvent | 2022-09-28 16:31:45 | oceanprotocol/df-web | https://api.github.com/repos/oceanprotocol/df-web | opened | Lock Ocean UX - Add multi-step component to flesh out Approve+Lock (and all possible veOCEAN flows). | Priority: High | There are 3 different locking flows the user can experience
#### Creating Lock
1. Approve Ocean
2. Create Lock
#### Update Lock (if user adds an amount)
1. Approve Ocean
2. Update Lock
#### Update Lock (if user does not add an amount, only a date)
1. Update Lock
### DoD:
- [ ] We have 1 (or 3, 1 for each) multi-step component that reflects all of these flows and are contextual to the user actions
- [ ] Substitute the single button for this multi-step flow | 1.0 | Lock Ocean UX - Add multi-step component to flesh out Approve+Lock (and all possible veOCEAN flows). - There are 3 different locking flows the user can experience
#### Creating Lock
1. Approve Ocean
2. Create Lock
#### Update Lock (if user adds an amount)
1. Approve Ocean
2. Update Lock
#### Update Lock (if user does not add an amount, only a date)
1. Update Lock
### DoD:
- [ ] We have 1 (or 3, 1 for each) multi-step component that reflects all of these flows and are contextual to the user actions
- [ ] Substitute the single button for this multi-step flow | priority | lock ocean ux add multi step component to flesh out approve lock and all possible veocean flows there are different locking flows the user can experience creating lock approve ocean create lock update lock if user adds an amount approve ocean update lock update lock if user does not add an amount only a date update lock dod we have or for each multi step component that reflects all of these flows and are contextual to the user actions substitute the button for this multi step flow | 1
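The three flows in the record above differ only in whether an approval step is needed and whether a lock already exists. A small sketch of deriving the step list (hypothetical helper, not the actual df-web component):

```python
def lock_steps(has_lock, adds_amount):
    """Return the multi-step veOCEAN lock flow for the given user action."""
    steps = []
    if adds_amount:
        # Moving OCEAN into the lock requires a token approval first.
        steps.append("Approve Ocean")
    steps.append("Update Lock" if has_lock else "Create Lock")
    return steps

print(lock_steps(has_lock=False, adds_amount=True))   # creating a lock
print(lock_steps(has_lock=True, adds_amount=True))    # updating with an amount
print(lock_steps(has_lock=True, adds_amount=False))   # updating only the date
```

A multi-step UI component can then render whichever step list this returns, instead of hard-coding three separate flows.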
566,827 | 16,831,517,849 | IssuesEvent | 2021-06-18 05:59:41 | tooploox/autonomous_car_model | https://api.github.com/repos/tooploox/autonomous_car_model | closed | Physical properties of the simulated model require checking | bug good first issue high priority | As the model behaves strangely in simulation, I guess there's something wrong with its description and/or physical properties defined in URDF (`collision`, `inertia`, and `joints` tags in general should be checked).
Potentially helpful resources: http://gazebosim.org/tutorials/?tut=ros_urdf
It probably makes more sense to resolve #1 first. | 1.0 | Physical properties of the simulated model require checking - As the model behaves strangely in simulation, I guess there's something wrong with its description and/or physical properties defined in URDF (`collision`, `inertia`, and `joints` tags in general should be checked).
Potentially helpful resources: http://gazebosim.org/tutorials/?tut=ros_urdf
It probably makes more sense to resolve #1 first. | priority | physical properties of the simulated model requires checking as the model behaves strangely in simulation i guess there s something wrong with its description and or physical properties defined in urdf collision inertia and joints tags in general should be checked potentially helpful resources it should make more sense to resolve first | 1
717,263 | 24,668,373,097 | IssuesEvent | 2022-10-18 12:05:52 | proyectos-tsdwad/integrador-modulo-fullstack | https://api.github.com/repos/proyectos-tsdwad/integrador-modulo-fullstack | closed | #US02 As a user I want to be able to see the home header | high priority 3 Story Point | - [x] #TK05 Design the home header.
- [x] #TK06 Build the markup for it.
- [x] #TK07 Test it.
 | 1.0 | #US02 As a user I want to be able to see the home header - - [x] #TK05 Design the home header.
- [x] #TK06 Build the markup for it.
- [x] #TK07 Test it.
 | priority | as a user i want to be able to see the home header design the home header build the markup for it test it | 1
58,993 | 3,098,436,931 | IssuesEvent | 2015-08-28 11:00:42 | artofkot/evarist | https://api.github.com/repos/artofkot/evarist | opened | How to display solutions. | high priority todo | Right now the site works like this: a student posts a solution, reviewers read it, vote correct or incorrect, and comment. Then the student clicks "fix solution" and writes a new solution, and the solution is replaced (the old solution is deleted entirely). Then, if needed, the reviewer cancels their vote and votes again.
We need to add two things:
1. When a solution is fixed, the old versions of the solution should be kept, and the reviewer should then vote on the new (fixed) solution instead of canceling their vote. This is needed so that progress on a solution can be tracked.
2. A student should be able to submit a completely new solution to a problem (even though they may already have solutions to that problem).
Roughly speaking, a student can always submit a solution, and either it is a fix of some existing solution, in which case it goes into the corresponding chain of solution versions, or the solution is completely new, in which case it starts a new solution chain.
There is one substantial change to the architecture: a solution-chain object is added, which groups the different versions of one solution. That is, a content_block with type='problem' will now store solution chains rather than solutions, and the solution chains store the solutions.
On the site a chain looks roughly like this:
> Solution chain.
> "fix solution" button
> Solution v1.3 (latest)
> comment 1
> comment 2
> comment 3
> "comment" button
> Solution v1.2
> comment 1
> comment 2
> comment 3
> Solution v1.1
> Solution v1.0
On the /problem, /check, and /my_solutions pages, chains will appear instead of solutions and the comments on them. Within a chain it makes sense to show only the latest version of the solution; the others go either under collapse.js or are fetched via ajax requests. | 1.0 | How to display solutions. - Right now the site works like this: a student posts a solution, reviewers read it, vote correct or incorrect, and comment. Then the student clicks "fix solution" and writes a new solution, and the solution is replaced (the old solution is deleted entirely). Then, if needed, the reviewer cancels their vote and votes again.
We need to add two things:
1. When a solution is fixed, the old versions of the solution should be kept, and the reviewer should then vote on the new (fixed) solution instead of canceling their vote. This is needed so that progress on a solution can be tracked.
2. A student should be able to submit a completely new solution to a problem (even though they may already have solutions to that problem).
Roughly speaking, a student can always submit a solution, and either it is a fix of some existing solution, in which case it goes into the corresponding chain of solution versions, or the solution is completely new, in which case it starts a new solution chain.
There is one substantial change to the architecture: a solution-chain object is added, which groups the different versions of one solution. That is, a content_block with type='problem' will now store solution chains rather than solutions, and the solution chains store the solutions.
On the site a chain looks roughly like this:
> Solution chain.
> "fix solution" button
> Solution v1.3 (latest)
> comment 1
> comment 2
> comment 3
> "comment" button
> Solution v1.2
> comment 1
> comment 2
> comment 3
> Solution v1.1
> Solution v1.0
On the /problem, /check, and /my_solutions pages, chains will appear instead of solutions and the comments on them. Within a chain it makes sense to show only the latest version of the solution; the others go either under collapse.js or are fetched via ajax requests. | priority | how to display solutions right now the site works like this a student posts a solution reviewers read it vote correct or incorrect and comment then the student clicks fix solution and writes a new solution and the solution is replaced the old solution is deleted entirely then if needed the reviewer cancels their vote and votes again we need to add two things when a solution is fixed the old versions of the solution should be kept and the reviewer should then vote on the new fixed solution instead of canceling their vote this is needed so that progress on a solution can be tracked a student should be able to submit a completely new solution to a problem even though they may already have solutions to that problem roughly speaking a student can always submit a solution and either it is a fix of some existing solution in which case it goes into the corresponding chain of solution versions or the solution is completely new in which case it starts a new solution chain there is one substantial change to the architecture a solution chain object is added which groups the different versions of one solution that is a content block with type problem will now store solution chains rather than solutions and the solution chains store the solutions on the site a chain looks roughly like this solution chain fix solution button solution latest comment comment comment comment button solution comment comment comment solution solution on the problem check my solutions pages chains will appear instead of solutions and the comments on them within a chain it makes sense to show only the latest version of the solution the others go either under collapse js or are fetched via ajax requests | 1
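The chain-of-versions design described in the record above (keep every old version on a fix, show only the latest) can be sketched in a few lines; the class and field names here are illustrative, not taken from the actual evarist codebase:

```python
class SolutionChain:
    """Holds successive versions of one solution to a problem."""

    def __init__(self, first_version):
        self.versions = [first_version]  # oldest first; nothing is deleted

    def fix(self, new_version):
        # A fix appends a version instead of replacing the old one,
        # so reviewers can track progress and re-vote on the latest.
        self.versions.append(new_version)

    @property
    def latest(self):
        return self.versions[-1]

# a content_block with type='problem' would store chains, not bare solutions
chains = [SolutionChain("solution v1.0")]  # brand-new solution -> new chain
chains[0].fix("solution v1.1")             # fix -> same chain, new version
print(chains[0].latest)                    # prints solution v1.1
print(len(chains[0].versions))             # prints 2
```

On a page, only `latest` would be rendered by default, with the older entries collapsed or fetched on demand.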
551,856 | 16,190,172,143 | IssuesEvent | 2021-05-04 07:16:08 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | tinder.com - see bug description | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal | <!-- @browser: Firefox 89.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/72394 -->
**URL**: https://tinder.com/app/recs
**Browser / Version**: Firefox 89.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: just a white page appear
**Steps to Reproduce**:
With both browsers, just after the website started to load, the design switched to a white page and is not usable. The failure has been appearing for the last three days.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/8aaf98b7-291b-4ade-a0fb-52140b922056.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210429190114</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/4/801262df-b99c-420a-a04c-a5aca8d3d9a3)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | tinder.com - see bug description - <!-- @browser: Firefox 89.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/72394 -->
**URL**: https://tinder.com/app/recs
**Browser / Version**: Firefox 89.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: just a white page appear
**Steps to Reproduce**:
With both browsers, just after the website started to load, the design switched to a white page and is not usable. The failure has been appearing for the last three days.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/4/8aaf98b7-291b-4ade-a0fb-52140b922056.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210429190114</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/4/801262df-b99c-420a-a04c-a5aca8d3d9a3)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | tinder com see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description just a white page appear steps to reproduce with both browsers just after the website started to load the design switched to a white page and is not usable the failure has been appearing for the last three days view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1
678,794 | 23,210,867,276 | IssuesEvent | 2022-08-02 10:00:05 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | SAML related cookie not clearing when a User's session is timedout | Priority/Highest Component/SAML bug 6.0.0-bug-fixing | When using a saml app after a user's session timed out the user will have to re-login at that time the old samlssoTokenId is assigned instead of a new one. | 1.0 | SAML related cookie not clearing when a User's session is timedout - When using a saml app after a user's session timed out the user will have to re-login at that time the old samlssoTokenId is assigned instead of a new one. | priority | saml related cookie not clearing when a user s session is timedout when using a saml app after a user s session timed out the user will have to re login at that time the old samlssotokenid is assigned instead of a new one | 1 |
299,841 | 9,205,904,333 | IssuesEvent | 2019-03-08 12:03:26 | geosolutions-it/tdipisa | https://api.github.com/repos/geosolutions-it/tdipisa | closed | PHOTOMAP - Legend print checks | Priority: High enhancement in progress | I attached print screen from print output (pdf), where the text for layer (legend) is not show in full. would it be possible to wrap it into more lines.
 | 1.0 | PHOTOMAP - Legend print checks - I attached print screen from print output (pdf), where the text for layer (legend) is not show in full. would it be possible to wrap it into more lines.
 | priority | photomap legend print checks i attached print screen from print output pdf where the text for layer legend is not show in full would it be possible to wrap it into more lines | 1 |
605,264 | 18,727,864,312 | IssuesEvent | 2021-11-03 18:08:35 | AY2122S1-CS2103T-W13-2/tp | https://api.github.com/repos/AY2122S1-CS2103T-W13-2/tp | closed | [PE-D] Unclear name constraints. | type.Bug priority.High severity.Medium | 

The constraints say that names should only contain `alphanumeric characters and spaces`. However, based on this definition of alphanumeric characters, symbols should be allowed for names but it is not allowed, even though numbers are.

<!--session: 1635494539368-6e4ca0fb-c787-43c4-a16e-48a16480861f--><!--Version: Web v3.4.1-->
-------------
Labels: `type.FunctionalityBug` `severity.Medium`
original: s7u4rt99/ped#3 | 1.0 | [PE-D] Unclear name constraints. - 

The constraints say that names should only contain `alphanumeric characters and spaces`. However, based on this definition of alphanumeric characters, symbols should be allowed for names but it is not allowed, even though numbers are.

<!--session: 1635494539368-6e4ca0fb-c787-43c4-a16e-48a16480861f--><!--Version: Web v3.4.1-->
-------------
Labels: `type.FunctionalityBug` `severity.Medium`
original: s7u4rt99/ped#3 | priority | unclear name constraints the constraints say that names should only contain alphanumeric characters and spaces however based on this definition of alphanumeric characters symbols should be allowed for names but it is not allowed even though numbers are labels type functionalitybug severity medium original ped | 1 |
780,115 | 27,379,739,871 | IssuesEvent | 2023-02-28 09:13:33 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | When selecting a post with the structure /%postname%/amp/, it does not work. (User end ) | bug [Priority: HIGH] Ready for Review | When the user sets structure /%postname%/amp/, this is not working in AMP, and the "post_link" action is being modified depending on the selected category.
Ref ticket: https://magazine3.in/conversation/83169?folder_id=29 | 1.0 | When selecting a post with the structure /%postname%/amp/, it does not work. (User end ) - When the user sets structure /%postname%/amp/, this is not working in AMP, and the "post_link" action is being modified depending on the selected category.
Ref ticket: https://magazine3.in/conversation/83169?folder_id=29 | priority | when selecting a post with the structure postname amp it does not work user end when the user sets structure postname amp this is not working in amp and the post link action is being modified depending on the selected category ref ticket | 1 |
652,742 | 21,560,351,430 | IssuesEvent | 2022-05-01 04:03:13 | bitfoundation/bitframework | https://api.github.com/repos/bitfoundation/bitframework | closed | Correct the old parameters of the Bit components used in the `TodoTemplate` project based on the new updates | area / project template high priority enhancement | Some of the Bit components have been updated and some parameters have been renamed that have not been updated in the `TodoTemplate` project.
For example, the `BitRadioButtonGroup` component parameter `Key` should be changed to the `Value` | 1.0 | Correct the old parameters of the Bit components used in the `TodoTemplate` project based on the new updates - Some of the Bit components have been updated and some parameters have been renamed that have not been updated in the `TodoTemplate` project.
For example, the `BitRadioButtonGroup` component parameter `Key` should be changed to the `Value` | priority | correct the old parameters of the bit components used in the todotemplate project based on the new updates some of the bit components have been updated and some parameters have been renamed that have not been updated in the todotemplate project for example the bitradiobuttongroup component parameter key should be changed to the value | 1 |
498,094 | 14,400,594,607 | IssuesEvent | 2020-12-03 12:36:12 | ls1intum/Artemis | https://api.github.com/repos/ls1intum/Artemis | opened | createManualProgrammingExerciseResult_resultExists fails on develop | bug priority:high | There is a failing test on develop branch.
ProgrammingAssessmentIntegrationTest > createManualProgrammingExerciseResult_resultExists() FAILED
org.opentest4j.AssertionFailedError:
Expecting value to be true but was false
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
at de.tum.in.www1.artemis.programmingexercise.ProgrammingAssessmentIntegrationTest.createManualProgrammingExerciseResult_resultExists(ProgrammingAssessmentIntegrationTest.java:370) | 1.0 | createManualProgrammingExerciseResult_resultExists fails on develop - There is a failing test on develop branch.
ProgrammingAssessmentIntegrationTest > createManualProgrammingExerciseResult_resultExists() FAILED
org.opentest4j.AssertionFailedError:
Expecting value to be true but was false
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
at de.tum.in.www1.artemis.programmingexercise.ProgrammingAssessmentIntegrationTest.createManualProgrammingExerciseResult_resultExists(ProgrammingAssessmentIntegrationTest.java:370) | priority | createmanualprogrammingexerciseresult resultexists fails on develop there is a failing test on develop branch programmingassessmentintegrationtest createmanualprogrammingexerciseresult resultexists failed org assertionfailederror expecting value to be true but was false at java base jdk internal reflect nativeconstructoraccessorimpl native method at java base jdk internal reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at java base jdk internal reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java base java lang reflect constructor newinstancewithcaller constructor java at de tum in artemis programmingexercise programmingassessmentintegrationtest createmanualprogrammingexerciseresult resultexists programmingassessmentintegrationtest java | 1 |
484,720 | 13,956,389,179 | IssuesEvent | 2020-10-24 00:55:35 | SforAiDl/genrl | https://api.github.com/repos/SforAiDl/genrl | closed | Integrate Reverb for use as buffer | Priority:High no-issue-activity | Update Algorithms to use reverb tables
https://github.com/deepmind/reverb
A stretch goal could be to build pytorch integration for reverb | 1.0 | Integrate Reverb for use as buffer - Update Algorithms to use reverb tables
https://github.com/deepmind/reverb
A stretch goal could be to build pytorch integration for reverb | priority | integrate reverb for use as buffer update algorithms to use reverb tables a stretch goal could be to build pytorch integration for reverb | 1 |
714,234 | 24,555,170,493 | IssuesEvent | 2022-10-12 15:20:19 | AY2223S1-CS2103T-T15-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T15-2/tp | closed | [User Story] As a new user, I can view a list of students with their corresponding student ID | priority.HIGH type.Story type.Epic | so that I can have an uncluttered overview of all the students in my class.
### Breakdown
- [x] Implement `viewAll` feature (adapted from `list` command)
- [x] Add test cases
- [x] Update UG
- [x] Update DG | 1.0 | [User Story] As a new user, I can view a list of students with their corresponding student ID - so that I can have an uncluttered overview of all the students in my class.
### Breakdown
- [x] Implement `viewAll` feature (adapted from `list` command)
- [x] Add test cases
- [x] Update UG
- [x] Update DG | priority | as a new user i can view a list of students with their corresponding student id so that i can have an uncluttered overview of all the students in my class breakdown implement viewall feature adapted from list command add test cases update ug update dg | 1 |
115,046 | 4,651,138,619 | IssuesEvent | 2016-10-03 08:55:12 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Layers are not removed from map when removed from TOC | bug Priority: High | The removal is done by name and not by id | 1.0 | Layers are not removed from map when removed from TOC - The removal is done by name and not by id | priority | layers are not removed from map when removed from toc the removal is done by name and not by id | 1 |
219,710 | 7,345,266,634 | IssuesEvent | 2018-03-07 16:54:44 | synthetichealth/module-builder | https://api.github.com/repos/synthetichealth/module-builder | opened | Fix PriorState conditional | bug high priority | The PriorState conditional does not render properly in the editor. Additionally, the module builder crashes when a conditional is changed to this type. | 1.0 | Fix PriorState conditional - The PriorState conditional does not render properly in the editor. Additionally, the module builder crashes when a conditional is changed to this type. | priority | fix priorstate conditional the priorstate conditional does not render properly in the editor additionally the module builder crashes when a conditional is changed to this type | 1 |
387,523 | 11,463,065,560 | IssuesEvent | 2020-02-07 15:19:02 | UC-Davis-molecular-computing/scadnano-python-package | https://api.github.com/repos/UC-Davis-molecular-computing/scadnano-python-package | opened | alter helices_view_order to be permutation of helix indices, not of 0,1,...,h | high priority invalid | Currently if the number of helices is *h*, then `DNADesign.helices_view_order` is a permutation of the list [0,1,...,*h*-1].
It would be more natural if it is a permutation of the set of helix indices. | 1.0 | alter helices_view_order to be permutation of helix indices, not of 0,1,...,h - Currently if the number of helices is *h*, then `DNADesign.helices_view_order` is a permutation of the list [0,1,...,*h*-1].
It would be more natural if it is a permutation of the set of helix indices. | priority | alter helices view order to be permutation of helix indices not of h currently if the number of helices is h then dnadesign helices view order is a permutation of the list it would be more natural if it is a permutation of the set of helix indices | 1 |
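The fix requested in the row above amounts to a different validity check for `helices_view_order`: it should be a permutation of the design's actual helix indices, not of `0..h-1`. A minimal sketch (a hypothetical helper, not scadnano's actual API) of that check:

```python
def is_valid_view_order(helix_indices, view_order):
    """True if view_order is a permutation of the design's helix indices
    (not merely a permutation of 0..h-1)."""
    return sorted(view_order) == sorted(helix_indices)


# Helix indices need not be contiguous, e.g. after a helix is deleted.
helices = [2, 5, 7]
print(is_valid_view_order(helices, [7, 2, 5]))  # True: permutation of the indices
print(is_valid_view_order(helices, [0, 1, 2]))  # False: permutation of 0..h-1 only
```

The second call is exactly the case the issue flags: under the old rule `[0, 1, 2]` would be accepted even though no helix has those indices.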
211,351 | 7,200,525,090 | IssuesEvent | 2018-02-05 19:21:21 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | F4 CCAFS Impact Pathways corrections | Priority - High Type -Task | - [x] Key Output 1: Drop Outcome 5
- [x] Key Output 2: Add Outcome 3.
- [x] Key Output 4: Add Outcome 3.
- [x] Key Output 6: Add Outcome 2 and Outcome 5. Drop Outcome 4.
- [x] Key Output 7: Add Outcome 2. Drop Outcome 4.
- [x] Key Output 11: Drop Outcome 4.
- [x] Key Output 13: Belongs to CoA 4.3, not 4.4.
- [x] Key Output 17: Drop Outcome 1. Add Outcome 3 and Outcome 5.
| 1.0 | F4 CCAFS Impact Pathways corrections - - [x] Key Output 1: Drop Outcome 5
- [x] Key Output 2: Add Outcome 3.
- [x] Key Output 4: Add Outcome 3.
- [x] Key Output 6: Add Outcome 2 and Outcome 5. Drop Outcome 4.
- [x] Key Output 7: Add Outcome 2. Drop Outcome 4.
- [x] Key Output 11: Drop Outcome 4.
- [x] Key Output 13: Belongs to CoA 4.3, not 4.4.
- [x] Key Output 17: Drop Outcome 1. Add Outcome 3 and Outcome 5.
| priority | ccafs impact pathways corrections key output drop outcome key output add outcome key output add outcome key output add outcome and outcome drop outcome key output add outcome drop outcome key output drop outcome key output belongs to coa not key output drop outcome add outcome and outcome | 1 |
337,351 | 10,216,312,094 | IssuesEvent | 2019-08-15 10:12:10 | epam/cloud-pipeline | https://api.github.com/repos/epam/cloud-pipeline | opened | `pipe`: `run` command doesn't launch a pipeline without forcible specifying a price type | kind/bug priority/high sys/cli | Currently, if you try to launch an existing pipeline via `pipe run` with only pipeline name/id - it will cause an error, e.g.:
``` bash
$ pipe run -n simplepipeline
Fetching pipeline info...done.
Evaluating estimated price...Error: Failed to fetch data from server. Server responded with message: Your operation has been aborted as we encountered a server problem.
$
```
If you additionally set price type, the launch will be started normally. | 1.0 | `pipe`: `run` command doesn't launch a pipeline without forcible specifying a price type - Currently, if you try to launch an existing pipeline via `pipe run` with only pipeline name/id - it will cause an error, e.g.:
``` bash
$ pipe run -n simplepipeline
Fetching pipeline info...done.
Evaluating estimated price...Error: Failed to fetch data from server. Server responded with message: Your operation has been aborted as we encountered a server problem.
$
```
If you additionally set price type, the launch will be started normally. | priority | pipe run command doesn t launch a pipeline without forcible specifying a price type currently if you try to launch an existing pipeline via pipe run with only pipeline name id it will cause an error e g bash pipe run n simplepipeline fetching pipeline info done evaluating estimated price error failed to fetch data from server server responded with message your operation has been aborted as we encountered a server problem if you additionally set price type the launch will be started normally | 1 |
726,940 | 25,017,001,890 | IssuesEvent | 2022-11-03 19:46:49 | crytic/slither | https://api.github.com/repos/crytic/slither | closed | [Bug]: Slither fails with library events | bug High Priority | ### Describe the issue:
Slither fails when given a contract which uses a library event. See minimal example below.
### Code example to reproduce the issue:
```
// SPDX-License-Identifier: MIT
pragma solidity 0.8.16;
library Lib {
event Event();
}
contract Test {
function foo() external {
emit Lib.Event(); // This line specifically
}
}
```
### Version:
0.9.0
### Relevant log output:
```shell
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 826, in main_impl
) = process_all(filename, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 97, in process_all
) = process_single(compilation, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 75, in process_single
slither = Slither(target, ast_format=ast, **vars(args))
File "/usr/local/lib/python3.10/site-packages/slither/slither.py", line 130, in __init__
parser.analyze_contracts()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 503, in analyze_contracts
self._convert_to_slithir()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 688, in _convert_to_slithir
func.generate_slithir_and_analyze()
File "/usr/local/lib/python3.10/site-packages/slither/core/declarations/function.py", line 1705, in generate_slithir_and_analyze
node.slithir_generation()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 721, in slithir_generation
self._find_read_write_call()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 919, in _find_read_write_call
assert isinstance(ir.function, Function)
AssertionError
Error in contracts/Test.sol
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 826, in main_impl
) = process_all(filename, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 97, in process_all
) = process_single(compilation, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 75, in process_single
slither = Slither(target, ast_format=ast, **vars(args))
File "/usr/local/lib/python3.10/site-packages/slither/slither.py", line 130, in __init__
parser.analyze_contracts()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 503, in analyze_contracts
self._convert_to_slithir()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 688, in _convert_to_slithir
func.generate_slithir_and_analyze()
File "/usr/local/lib/python3.10/site-packages/slither/core/declarations/function.py", line 1705, in generate_slithir_and_analyze
node.slithir_generation()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 721, in slithir_generation
self._find_read_write_call()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 919, in _find_read_write_call
assert isinstance(ir.function, Function)
AssertionError
```
| 1.0 | [Bug]: Slither fails with library events - ### Describe the issue:
Slither fails when given a contract which uses a library event. See minimal example below.
### Code example to reproduce the issue:
```
// SPDX-License-Identifier: MIT
pragma solidity 0.8.16;
library Lib {
event Event();
}
contract Test {
function foo() external {
emit Lib.Event(); // This line specifically
}
}
```
### Version:
0.9.0
### Relevant log output:
```shell
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 826, in main_impl
) = process_all(filename, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 97, in process_all
) = process_single(compilation, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 75, in process_single
slither = Slither(target, ast_format=ast, **vars(args))
File "/usr/local/lib/python3.10/site-packages/slither/slither.py", line 130, in __init__
parser.analyze_contracts()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 503, in analyze_contracts
self._convert_to_slithir()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 688, in _convert_to_slithir
func.generate_slithir_and_analyze()
File "/usr/local/lib/python3.10/site-packages/slither/core/declarations/function.py", line 1705, in generate_slithir_and_analyze
node.slithir_generation()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 721, in slithir_generation
self._find_read_write_call()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 919, in _find_read_write_call
assert isinstance(ir.function, Function)
AssertionError
Error in contracts/Test.sol
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 826, in main_impl
) = process_all(filename, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 97, in process_all
) = process_single(compilation, args, detector_classes, printer_classes)
File "/usr/local/lib/python3.10/site-packages/slither/__main__.py", line 75, in process_single
slither = Slither(target, ast_format=ast, **vars(args))
File "/usr/local/lib/python3.10/site-packages/slither/slither.py", line 130, in __init__
parser.analyze_contracts()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 503, in analyze_contracts
self._convert_to_slithir()
File "/usr/local/lib/python3.10/site-packages/slither/solc_parsing/slither_compilation_unit_solc.py", line 688, in _convert_to_slithir
func.generate_slithir_and_analyze()
File "/usr/local/lib/python3.10/site-packages/slither/core/declarations/function.py", line 1705, in generate_slithir_and_analyze
node.slithir_generation()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 721, in slithir_generation
self._find_read_write_call()
File "/usr/local/lib/python3.10/site-packages/slither/core/cfg/node.py", line 919, in _find_read_write_call
assert isinstance(ir.function, Function)
AssertionError
```
| priority | slither fails with library events describe the issue slither fails when given a contract which uses a library event see minimal example below code example to reproduce the issue spdx license identifier mit pragma solidity library lib event event contract test function foo external emit lib event this line specifically version relevant log output shell traceback most recent call last file usr local lib site packages slither main py line in main impl process all filename args detector classes printer classes file usr local lib site packages slither main py line in process all process single compilation args detector classes printer classes file usr local lib site packages slither main py line in process single slither slither target ast format ast vars args file usr local lib site packages slither slither py line in init parser analyze contracts file usr local lib site packages slither solc parsing slither compilation unit solc py line in analyze contracts self convert to slithir file usr local lib site packages slither solc parsing slither compilation unit solc py line in convert to slithir func generate slithir and analyze file usr local lib site packages slither core declarations function py line in generate slithir and analyze node slithir generation file usr local lib site packages slither core cfg node py line in slithir generation self find read write call file usr local lib site packages slither core cfg node py line in find read write call assert isinstance ir function function assertionerror error in contracts test sol traceback most recent call last file usr local lib site packages slither main py line in main impl process all filename args detector classes printer classes file usr local lib site packages slither main py line in process all process single compilation args detector classes printer classes file usr local lib site packages slither main py line in process single slither slither target ast format ast vars args file usr local lib 
site packages slither slither py line in init parser analyze contracts file usr local lib site packages slither solc parsing slither compilation unit solc py line in analyze contracts self convert to slithir file usr local lib site packages slither solc parsing slither compilation unit solc py line in convert to slithir func generate slithir and analyze file usr local lib site packages slither core declarations function py line in generate slithir and analyze node slithir generation file usr local lib site packages slither core cfg node py line in slithir generation self find read write call file usr local lib site packages slither core cfg node py line in find read write call assert isinstance ir function function assertionerror | 1 |
505,640 | 14,643,337,936 | IssuesEvent | 2020-12-25 15:55:05 | bounswe/bounswe2020group3 | https://api.github.com/repos/bounswe/bounswe2020group3 | closed | [Android] Implement public/private profile pages | Android Priority: High Type: Enhancement | * **Project: ANDROID**
* **This is a: FEATURE REQUEST**
* **Description of the issue**
Currently we have only one profile page where users can see their own profile.
* **For feature requests: Expected functionality of the requested feature**
We need to come up with these pages:
* Profile page for public pages, seen by others (or followers)
* Profile page for private pages, seen by others
* **Deadline for resolution:**
08.12.2020
| 1.0 | [Android] Implement public/private profile pages - * **Project: ANDROID**
* **This is a: FEATURE REQUEST**
* **Description of the issue**
Currently we have only one profile page where users can see their own profile.
* **For feature requests: Expected functionality of the requested feature**
We need to come up with these pages:
* Profile page for public pages, seen by others (or followers)
* Profile page for private pages, seen by others
* **Deadline for resolution:**
08.12.2020
| priority | implement public private profile pages project android this is a feature request description of the issue currently we have only one profile page where users can see their own profile for feature requests expected functionality of the requested feature we need to come up with these pages profile page for public pages seen by others or followers profile page for private pages seen by others deadline for resolution | 1 |
604,335 | 18,681,839,123 | IssuesEvent | 2021-11-01 07:09:08 | SEbbaDK/sonica | https://api.github.com/repos/SEbbaDK/sonica | closed | Modular architecture is needed | high priority | If we are gonna have different interfaces to control Sonica, and many different loops listening for input, i think we should do an architecture overhaul.
I've been looking at [gRPC](https://grpc.io/) and it seems perfect for this usecase, not only will each part of Sonica be a seperate process that can be restarted as necessary, it can also handle implementations in multiple languages, so the webserver could be crystal, while the discord and core is kept python, etc. | 1.0 | Modular architecture is needed - If we are gonna have different interfaces to control Sonica, and many different loops listening for input, i think we should do an architecture overhaul.
I've been looking at [gRPC](https://grpc.io/) and it seems perfect for this usecase, not only will each part of Sonica be a seperate process that can be restarted as necessary, it can also handle implementations in multiple languages, so the webserver could be crystal, while the discord and core is kept python, etc. | priority | modular architecture is needed if we are gonna have different interfaces to control sonica and many different loops listening for input i think we should do an architecture overhaul i ve been looking at and it seems perfect for this usecase not only will each part of sonica be a seperate process that can be restarted as necessary it can also handle implementations in multiple languages so the webserver could be crystal while the discord and core is kept python etc | 1 |
87,304 | 3,744,763,658 | IssuesEvent | 2016-03-10 03:58:09 | cs2103jan2016-t09-2j/main | https://api.github.com/repos/cs2103jan2016-t09-2j/main | opened | Must not have conflicting tasks | priority.high type.enhancement | -Task 1 can max overlap with task 2 by half an hr, where task 2 is 1hr or longer
-Task 1 can max overlap by 15 mins with task 2, where task 2 is btw 30mins to 1 hr
-Task 1 cannot overlap with task 2 if task 2's duration is below 30 mins | 1.0 | Must not have conflicting tasks - -Task 1 can max overlap with task 2 by half an hr, where task 2 is 1hr or longer
-Task 1 can max overlap by 15 mins with task 2, where task 2 is btw 30mins to 1 hr
-Task 1 cannot overlap with task 2 if task 2's duration is below 30 mins | priority | must not have conflicting tasks task can max overlap with task by half an hr where task is or longer task can max overlap by mins with task where task is btw to hr task cannot overlap with task if task s duration is below mins | 1 |
130,735 | 5,126,072,788 | IssuesEvent | 2017-01-10 00:13:01 | Spearhead-Gaming-Milsim/Queens-Stallions | https://api.github.com/repos/Spearhead-Gaming-Milsim/Queens-Stallions | closed | Persistence Broken | Bug High Priority | So we know its not R3F. Theres probably a conflict between ALiVE and some mod somewhere. I'll work best I can to narrow down what it is but it will take time. Probably won't have it fixed before launch. | 1.0 | Persistence Broken - So we know its not R3F. Theres probably a conflict between ALiVE and some mod somewhere. I'll work best I can to narrow down what it is but it will take time. Probably won't have it fixed before launch. | priority | persistence broken so we know its not theres probably a conflict between alive and some mod somewhere i ll work best i can to narrow down what it is but it will take time probably won t have it fixed before launch | 1 |
159,192 | 6,041,926,841 | IssuesEvent | 2017-06-11 07:29:34 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | 50-80ms delay when closing a tab | bug perf priority/high | Adding to the appearance of sluggishness is the delay seen when closing a tab.
Click on the close button.
Notice how long it takes before it closes after mouse-up event.
Brave: 0.13.5
rev: 7373a0197df9fbacc0fb83c5f7494be1fe0ae879
Muon: 2.56.7
libchromiumcontent: 56.0.2924.87
V8: 5.6.326.50
Node.js: 7.4.0
Update Channel: dev
os.platform: darwin
os.release: 16.3.0
os.arch: x64 | 1.0 | 50-80ms delay when closing a tab - Adding to the appearance of sluggishness is the delay seen when closing a tab.
Click on the close button.
Notice how long it takes before it closes after mouse-up event.
Brave: 0.13.5
rev: 7373a0197df9fbacc0fb83c5f7494be1fe0ae879
Muon: 2.56.7
libchromiumcontent: 56.0.2924.87
V8: 5.6.326.50
Node.js: 7.4.0
Update Channel: dev
os.platform: darwin
os.release: 16.3.0
os.arch: x64 | priority | delay when closing a tab adding to the appearance of sluggishness is the delay seen when closing a tab click on the close button notice how long it takes before it closes after mouse up event brave rev muon libchromiumcontent node js update channel dev os platform darwin os release os arch | 1 |
669,261 | 22,617,992,005 | IssuesEvent | 2022-06-30 01:30:38 | Railcraft/Railcraft | https://api.github.com/repos/Railcraft/Railcraft | closed | Server startup crash | bug high priority | railcraft-12.1.0-beta-7
Seems to be referring to some client classes on server launch, doesn't happen with beta 4
Crashlog: https://gist.github.com/wormzjl/93b6b18066d48c870165c7accda04c9b
| 1.0 | Server startup crash - railcraft-12.1.0-beta-7
Seems to be referring to some client classes on server launch, doesn't happen with beta 4
Crashlog: https://gist.github.com/wormzjl/93b6b18066d48c870165c7accda04c9b
| priority | server startup crash railcraft beta seems to be referring to some client classes on server launch doesn t happen with beta crashlog | 1 |
585,535 | 17,499,901,887 | IssuesEvent | 2021-08-10 08:08:40 | stackabletech/issues | https://api.github.com/repos/stackabletech/issues | opened | integration test strategy | type/enhancement priority/high | As of now, we have already have a wide range of options to run integration tests:
- locally using a setup specific to the dev who is running them
- locally using the test-dev-cluster
- on a cluster provisioned at a cloud provider by T2 (agent-integration-tests, zookeeper-integration-tests running nightly as spikes)
- with GitHub Actions using the test-dev-cluster (not tried so far, but should work ;-)
What's missing is a general idea/strategy on what we want to test where, how often, ...
In the daily on 2021-08-10 we agreed that @backstreetkiwi will make a draft of an integration test strategy to serve as a basis for a discussion.
- [ ] prepare a draft integration test strategy (@backstreetkiwi)
- [ ] organise first meeting
- [ ] ... | 1.0 | integration test strategy - As of now, we have already have a wide range of options to run integration tests:
- locally using a setup specific to the dev who is running them
- locally using the test-dev-cluster
- on a cluster provisioned at a cloud provider by T2 (agent-integration-tests, zookeeper-integration-tests running nightly as spikes)
- with GitHub Actions using the test-dev-cluster (not tried so far, but should work ;-)
What's missing is a general idea/strategy on what we want to test where, how often, ...
In the daily on 2021-08-10 we agreed that @backstreetkiwi will make a draft of an integration test strategy to serve as a basis for a discussion.
- [ ] prepare a draft integration test strategy (@backstreetkiwi)
- [ ] organise first meeting
- [ ] ... | priority | integration test strategy as of now we have already have a wide range of options to run integration tests locally using a setup specific to the dev who is running them locally using the test dev cluster on a cluster provisioned at a cloud provider by agent integration tests zookeeper integration tests running nightly as spikes with github actions using the test dev cluster not tried so far but should work what s missing is a general idea strategy on what we want to test where how often in the daily on we agreed that backstreetkiwi will make a draft of an integration test strategy to serve as a basis for a discussion prepare a draft integration test strategy backstreetkiwi organise first meeting | 1 |
689,631 | 23,628,129,612 | IssuesEvent | 2022-08-25 06:54:27 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | '/api/files' endpoint fails validation on files with __v property | Bug :lady_beetle: Sprint Priority: High Backend 💾 | According to the git history, we have turned off the version key feature of mongoose on the file model. This feature appends the __v property to the files. New files now created do not have this property, but there was no migration, so files already in the database since before the change still do have it.
Now, we have set the ajv validation strict, and the `/api/files` endpoints fail on old files having this property, since it is not part of the file schema. This has not been detected, since the test have been kept up to date both in the unit tests and to nighmare e2e. Strictly speaking, the problem was the forgotten migration.
While discussing this, it was questioned whether removing the feature was a good idea in the first place. The task in this issue is:
1. Research about this mongoose feature.
2. Decide whether we want to:
- Keep it as it is, and write a migration to remove the __v property from the file objects.
- Re-enable it, and make sure that this property does not get to the front end to later fail on validation through save.
- Any better solution that comes up during the research.
3. Implement the change. | 1.0 | '/api/files' endpoint fails validation on files with __v property - According to the git history, we have turned off the version key feature of mongoose on the file model. This feature appends the __v property to the files. New files now created do not have this property, but there was no migration, so files already in the database since before the change still do have it.
Now, we have set the ajv validation strict, and the `/api/files` endpoints fail on old files having this property, since it is not part of the file schema. This has not been detected, since the test have been kept up to date both in the unit tests and to nighmare e2e. Strictly speaking, the problem was the forgotten migration.
While discussing this, it was questioned whether removing the feature was a good idea in the first place. The task in this issue is:
1. Research about this mongoose feature.
2. Decide whether we want to:
- Keep it as it is, and write a migration to remove the __v property from the file objects.
- Re-enable it, and make sure that this property does not get to the front end to later fail on validation through save.
- Any better solution that comes up during the research.
3. Implement the change. | priority | api files endpoint fails validation on files with v property according to the git history we have turned off the version key feature of mongoose on the file model this feature appends the v property to the files new files now created do not have this property but there was no migration so files already in the database since before the change still do have it now we have set the ajv validation strict and the api files endpoints fail on old files having this property since it is not part of the file schema this has not been detected since the test have been kept up to date both in the unit tests and to nighmare strictly speaking the problem was the forgotten migration while discussing this it was questioned whether removing the feature was a good idea in the first place the task in this issue is research about this mongoose feature decide whether we want to keep it as it is and write a migration to remove the v property from the file objects re enable it and make sure that this property does not get to the front end to later fail on validation through save any better solution that comes up during the research implement the change | 1 |
728,136 | 25,067,421,156 | IssuesEvent | 2022-11-07 09:27:32 | CLOSER-Cohorts/archivist | https://api.github.com/repos/CLOSER-Cohorts/archivist | closed | REACT: Intrument Export > View is not working for ALSPAC version | bug High priority | https://closer-archivist-alspac.herokuapp.com/admin/instruments/exports
The Action View is not loading for any of the instruments. It appears okay for all other instances. | 1.0 | REACT: Intrument Export > View is not working for ALSPAC version - https://closer-archivist-alspac.herokuapp.com/admin/instruments/exports
The Action View is not loading for any of the instruments. It appears okay for all other instances. | priority | react intrument export view is not working for alspac version the action view is not loading for any of the instruments it appears okay for all other instances | 1 |
338,200 | 10,225,739,763 | IssuesEvent | 2019-08-16 15:52:43 | r-lib/styler | https://api.github.com/repos/r-lib/styler | closed | No blank added at EOF when file to style already corresponds to style guide | Complexity: Low Priority: High Status: Unassigned Type: Bug | This is because equality check if performed without taking into account if the file has a blank line at the end. The only way to figure this out is to use a lower-level implementation of `xfun::read_utf8()`, i.e. `readLines()`, catch the warning (if any) about missing blank line and propagate the information upwards in the call stack. Interacts with #320. | 1.0 | No blank added at EOF when file to style already corresponds to style guide - This is because equality check if performed without taking into account if the file has a blank line at the end. The only way to figure this out is to use a lower-level implementation of `xfun::read_utf8()`, i.e. `readLines()`, catch the warning (if any) about missing blank line and propagate the information upwards in the call stack. Interacts with #320. | priority | no blank added at eof when file to style already corresponds to style guide this is because equality check if performed without taking into account if the file has a blank line at the end the only way to figure this out is to use a lower level implementation of xfun read i e readlines catch the warning if any about missing blank line and propagate the information upwards in the call stack interacts with | 1 |
660,767 | 21,997,208,304 | IssuesEvent | 2022-05-26 07:49:05 | TYPO3-Documentation/T3DocTeam | https://api.github.com/repos/TYPO3-Documentation/T3DocTeam | closed | Update the docs before we release the new version. | enhancement decision required high priority | Copied from an issue with several suggestions: https://github.com/TYPO3-Documentation/DocsTypo3Org-Homepage/issues/64
> Create a full clone of this document for every new framework version, update the docs before you release the new version.
...
See the Python docs as a reference.
I agree:
* we should not have docs online that are unfinished, outdated.
* It would be great to release a new version of the docs with the next TYPO3 release (not add chapters for the new stuff **after** it is released or way after it is released)
How can we do this?
Related:
* [How can we get changes added to documentation early? And what is master: master or latest release?](#133) (issue 133)
* [Add information when manual (or page) was last reviewed and what state it is in (e.g. up-to-date for version ...)](#73) (issue 73)
* ["These docs are inconsistent"](https://github.com/TYPO3-Documentation/TYPO3CMS-Reference-CoreApi/issues/430)
* ["Question: How to deal with Changelog becoming main documentation for some new features?"](https://github.com/TYPO3-Documentation/TYPO3CMS-Reference-CoreApi/issues/418)
| 1.0 | Update the docs before we release the new version. - Copied from an issue with several suggestions: https://github.com/TYPO3-Documentation/DocsTypo3Org-Homepage/issues/64
> Create a full clone of this document for every new framework version, update the docs before you release the new version.
...
See the Python docs as a reference.
I agree:
* we should not have docs online that are unfinished, outdated.
* It would be great to release a new version of the docs with the next TYPO3 release (not add chapters for the new stuff **after** it is released or way after it is released)
How can we do this?
Related:
* [How can we get changes added to documentation early? And what is master: master or latest release?](#133) (issue 133)
* [Add information when manual (or page) was last reviewed and what state it is in (e.g. up-to-date for version ...)](#73) (issue 73)
* ["These docs are inconsistent"](https://github.com/TYPO3-Documentation/TYPO3CMS-Reference-CoreApi/issues/430)
* ["Question: How to deal with Changelog becoming main documentation for some new features?"](https://github.com/TYPO3-Documentation/TYPO3CMS-Reference-CoreApi/issues/418)
| priority | update the docs before we release the new version copied from an issue with several suggestions create a full clone of this document for every new framework version update the docs before you release the new version see the python docs as a reference i agree we should not have docs online that are unfinished outdated it would be great to release a new version of the docs with the next release not add chapters for the new stuff after it is released or way after it is released how can we do this related issue issue | 1 |
600,774 | 18,356,285,077 | IssuesEvent | 2021-10-08 18:41:18 | vtdangg/fa21-cse110-lab3 | https://api.github.com/repos/vtdangg/fa21-cse110-lab3 | opened | Use CSS Selectors | enhancement high priority collaborate | ## What is the purpose fo the new feature or addition?
To style the HTML elements from the meeting minutes.
## A clear and concise description of what the addition is and what it does.
Each selector to be used will target a different identifier on the HTML element.
| 1.0 | Use CSS Selectors - ## What is the purpose fo the new feature or addition?
To style the HTML elements from the meeting minutes.
## A clear and concise description of what the addition is and what it does.
Each selector to be used will target a different identifier on the HTML element.
| priority | use css selectors what is the purpose fo the new feature or addition to style the html elements from the meeting minutes a clear and concise description of what the addition is and what it does each selector to be used will target a different identifier on the html element | 1 |
372,225 | 11,011,227,915 | IssuesEvent | 2019-12-04 15:56:21 | NCAR/METplus | https://api.github.com/repos/NCAR/METplus | closed | Remove MPI errors in METplus output | priority: high | When METplus is run, many errors show up from MPI and clutter the output. | 1.0 | Remove MPI errors in METplus output - When METplus is run, many errors show up from MPI and clutter the output. | priority | remove mpi errors in metplus output when metplus is run many errors show up from mpi and clutter the output | 1 |
692,861 | 23,751,836,742 | IssuesEvent | 2022-08-31 21:28:26 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Upgrade from 4.8 to 4.9 fails | question area: production installer priority: high | Hey guys, I wanted to update my Zulip instance from 4.8 to 4.9 but if fails with this error:
```
Zulip stopped successfully!
2022-03-02 17:18:13,968 upgrade-zulip-stage-2: Applying Puppet changes...
Notice: /Stage[main]/Zulip::Apt_repository/Exec[setup_apt_repo]/returns: executed successfully
Notice: Applied catalog in 8.91 seconds
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following packages have been kept back:
linux-generic linux-headers-generic linux-image-generic linux-image-virtual
postgresql-client snapd
The following packages will be DOWNGRADED:
postgresql-10 postgresql-client-10
0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 6 not upgraded.
Need to get 4714 kB of archives.
After this operation, 11.7 MB disk space will be freed.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 postgresql-client-10 amd64 10.19-0ubuntu0.18.04.1 [942 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 postgresql-10 amd64 10.19-0ubuntu0.18.04.1 [3772 kB]
Fetched 4714 kB in 0s (20.5 MB/s)
dpkg: warning: downgrading postgresql-client-10 from 10.20-1.pgdg18.04+1 to 10.19-0ubuntu0.18.04.1
(Reading database ... 121148 files and directories currently installed.)
Preparing to unpack .../postgresql-client-10_10.19-0ubuntu0.18.04.1_amd64.deb ...
Unpacking postgresql-client-10 (10.19-0ubuntu0.18.04.1) over (10.20-1.pgdg18.04+1) ...
dpkg: warning: downgrading postgresql-10 from 10.20-1.pgdg18.04+1 to 10.19-0ubuntu0.18.04.1
Preparing to unpack .../postgresql-10_10.19-0ubuntu0.18.04.1_amd64.deb ...
Unpacking postgresql-10 (10.19-0ubuntu0.18.04.1) over (10.20-1.pgdg18.04+1) ...
Setting up postgresql-client-10 (10.19-0ubuntu0.18.04.1) ...
Setting up postgresql-10 (10.19-0ubuntu0.18.04.1) ...
Processing triggers for postgresql-common (238.pgdg18.04+1) ...
Building PostgreSQL dictionaries from installed myspell/hunspell packages...
en_us
Removing obsolete dictionary files:
Traceback (most recent call last):
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/db.py", line 47, in __init__
super().__init__(*args, **kwargs)
psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./manage.py", line 52, in <module>
execute_from_command_line(sys.argv)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/management/commands/create_realm_internal_bots.py", line 17, in handle
create_if_missing_realm_internal_bots()
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/onboarding.py", line 57, in create_if_missing_realm_internal_bots
if missing_any_realm_internal_bots():
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/onboarding.py", line 27, in missing_any_realm_internal_bots
realm_count = Realm.objects.count()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/query.py", line 412, in count
return self.query.get_count(using=self.db)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/query.py", line 519, in get_count
number = obj.get_aggregation(using, ['__count'])['__count']
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/query.py", line 504, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1173, in execute_sql
cursor = self.connection.cursor()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/db.py", line 47, in __init__
super().__init__(*args, **kwargs)
django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
Traceback (most recent call last):
File "/home/zulip/deployments/2022-03-02-18-17-44/scripts/lib/upgrade-zulip-stage-2", line 359, in <module>
subprocess.check_call(["./manage.py", "create_realm_internal_bots"], preexec_fn=su_to_zulip)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['./manage.py', 'create_realm_internal_bots']' returned non-zero exit status 1.
```
Any ideas on how to resolve this?
OS: Ubuntu 18.04.6 LTS
PSQL-Version: 13.6 (Ubuntu 13.6-1.pgdg18.04+1) | 1.0 | Upgrade from 4.8 to 4.9 fails - Hey guys, I wanted to update my Zulip instance from 4.8 to 4.9 but if fails with this error:
```
Zulip stopped successfully!
2022-03-02 17:18:13,968 upgrade-zulip-stage-2: Applying Puppet changes...
Notice: /Stage[main]/Zulip::Apt_repository/Exec[setup_apt_repo]/returns: executed successfully
Notice: Applied catalog in 8.91 seconds
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following packages have been kept back:
linux-generic linux-headers-generic linux-image-generic linux-image-virtual
postgresql-client snapd
The following packages will be DOWNGRADED:
postgresql-10 postgresql-client-10
0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 6 not upgraded.
Need to get 4714 kB of archives.
After this operation, 11.7 MB disk space will be freed.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 postgresql-client-10 amd64 10.19-0ubuntu0.18.04.1 [942 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 postgresql-10 amd64 10.19-0ubuntu0.18.04.1 [3772 kB]
Fetched 4714 kB in 0s (20.5 MB/s)
dpkg: warning: downgrading postgresql-client-10 from 10.20-1.pgdg18.04+1 to 10.19-0ubuntu0.18.04.1
(Reading database ... 121148 files and directories currently installed.)
Preparing to unpack .../postgresql-client-10_10.19-0ubuntu0.18.04.1_amd64.deb ...
Unpacking postgresql-client-10 (10.19-0ubuntu0.18.04.1) over (10.20-1.pgdg18.04+1) ...
dpkg: warning: downgrading postgresql-10 from 10.20-1.pgdg18.04+1 to 10.19-0ubuntu0.18.04.1
Preparing to unpack .../postgresql-10_10.19-0ubuntu0.18.04.1_amd64.deb ...
Unpacking postgresql-10 (10.19-0ubuntu0.18.04.1) over (10.20-1.pgdg18.04+1) ...
Setting up postgresql-client-10 (10.19-0ubuntu0.18.04.1) ...
Setting up postgresql-10 (10.19-0ubuntu0.18.04.1) ...
Processing triggers for postgresql-common (238.pgdg18.04+1) ...
Building PostgreSQL dictionaries from installed myspell/hunspell packages...
en_us
Removing obsolete dictionary files:
Traceback (most recent call last):
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/db.py", line 47, in __init__
super().__init__(*args, **kwargs)
psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./manage.py", line 52, in <module>
execute_from_command_line(sys.argv)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/management/commands/create_realm_internal_bots.py", line 17, in handle
create_if_missing_realm_internal_bots()
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/onboarding.py", line 57, in create_if_missing_realm_internal_bots
if missing_any_realm_internal_bots():
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/onboarding.py", line 27, in missing_any_realm_internal_bots
realm_count = Realm.objects.count()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/query.py", line 412, in count
return self.query.get_count(using=self.db)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/query.py", line 519, in get_count
number = obj.get_aggregation(using, ['__count'])['__count']
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/query.py", line 504, in get_aggregation
result = compiler.execute_sql(SINGLE)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1173, in execute_sql
cursor = self.connection.cursor()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/zulip/deployments/2022-03-02-18-17-44/zulip-py3-venv/lib/python3.6/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
File "/home/zulip/deployments/2022-03-02-18-17-44/zerver/lib/db.py", line 47, in __init__
super().__init__(*args, **kwargs)
django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
Traceback (most recent call last):
File "/home/zulip/deployments/2022-03-02-18-17-44/scripts/lib/upgrade-zulip-stage-2", line 359, in <module>
subprocess.check_call(["./manage.py", "create_realm_internal_bots"], preexec_fn=su_to_zulip)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['./manage.py', 'create_realm_internal_bots']' returned non-zero exit status 1.
```
Any ideas on how to resolve this?
OS: Ubuntu 18.04.6 LTS
PSQL-Version: 13.6 (Ubuntu 13.6-1.pgdg18.04+1) | priority | upgrade from to fails hey guys i wanted to update my zulip instance from to but if fails with this error zulip stopped successfully upgrade zulip stage applying puppet changes notice stage zulip apt repository exec returns executed successfully notice applied catalog in seconds reading package lists building dependency tree reading state information calculating upgrade the following packages have been kept back linux generic linux headers generic linux image generic linux image virtual postgresql client snapd the following packages will be downgraded postgresql postgresql client upgraded newly installed downgraded to remove and not upgraded need to get kb of archives after this operation mb disk space will be freed get bionic updates main postgresql client get bionic updates main postgresql fetched kb in mb s dpkg warning downgrading postgresql client from to reading database files and directories currently installed preparing to unpack postgresql client deb unpacking postgresql client over dpkg warning downgrading postgresql from to preparing to unpack postgresql deb unpacking postgresql over setting up postgresql client setting up postgresql processing triggers for postgresql common building postgresql dictionaries from installed myspell hunspell packages en us removing obsolete dictionary files traceback most recent call last file home zulip deployments zulip venv lib site packages django db backends base base py line in ensure connection self connect file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django db backends base base py line in connect self connection self get new connection conn params file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django 
db backends postgresql base py line in get new connection connection database connect conn params file home zulip deployments zulip venv lib site packages init py line in connect conn connect dsn connection factory connection factory kwasync file home zulip deployments zerver lib db py line in init super init args kwargs operationalerror connection to server on socket var run postgresql s pgsql failed no such file or directory is the server running locally and accepting connections on that socket the above exception was the direct cause of the following exception traceback most recent call last file manage py line in execute from command line sys argv file home zulip deployments zulip venv lib site packages django core management init py line in execute from command line utility execute file home zulip deployments zulip venv lib site packages django core management init py line in execute self fetch command subcommand run from argv self argv file home zulip deployments zulip venv lib site packages django core management base py line in run from argv self execute args cmd options file home zulip deployments zulip venv lib site packages django core management base py line in execute output self handle args options file home zulip deployments zerver management commands create realm internal bots py line in handle create if missing realm internal bots file home zulip deployments zerver lib onboarding py line in create if missing realm internal bots if missing any realm internal bots file home zulip deployments zerver lib onboarding py line in missing any realm internal bots realm count realm objects count file home zulip deployments zulip venv lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file home zulip deployments zulip venv lib site packages django db models query py line in count return self query get count using self db file home zulip deployments zulip venv lib site packages django db models 
sql query py line in get count number obj get aggregation using file home zulip deployments zulip venv lib site packages django db models sql query py line in get aggregation result compiler execute sql single file home zulip deployments zulip venv lib site packages django db models sql compiler py line in execute sql cursor self connection cursor file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django db backends base base py line in cursor return self cursor file home zulip deployments zulip venv lib site packages django db backends base base py line in cursor self ensure connection file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django db backends base base py line in ensure connection self connect file home zulip deployments zulip venv lib site packages django db utils py line in exit raise dj exc value with traceback traceback from exc value file home zulip deployments zulip venv lib site packages django db backends base base py line in ensure connection self connect file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django db backends base base py line in connect self connection self get new connection conn params file home zulip deployments zulip venv lib site packages django utils asyncio py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django db backends postgresql base py line in get new connection connection database connect conn params file home zulip deployments zulip venv lib site packages init py line in connect conn connect dsn connection factory connection factory kwasync file home zulip deployments zerver lib db py line in 
init super init args kwargs django db utils operationalerror connection to server on socket var run postgresql s pgsql failed no such file or directory is the server running locally and accepting connections on that socket traceback most recent call last file home zulip deployments scripts lib upgrade zulip stage line in subprocess check call preexec fn su to zulip file usr lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command returned non zero exit status any ideas on how to resolve this os ubuntu lts psql version ubuntu | 1 |
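The `OperationalError` in the traceback above means psycopg2 could not find the server's Unix-domain socket: libpq names the socket file `.s.PGSQL.<port>` inside the socket directory, and the file is absent whenever the server is stopped (as it is mid-upgrade here). As a hedged sketch — the helper names are illustrative and not part of Zulip or Django — a pre-flight check like this distinguishes "server not running" from other connection failures:

```python
import os

def postgres_socket_path(socket_dir="/var/run/postgresql", port=5432):
    # libpq/psycopg2 derive the socket filename from the port number.
    return os.path.join(socket_dir, ".s.PGSQL.%d" % port)

def postgres_socket_available(socket_dir="/var/run/postgresql", port=5432):
    # A missing file means the server is stopped, or is listening on a
    # different directory/port -- exactly the failure in the traceback.
    return os.path.exists(postgres_socket_path(socket_dir, port))
```

If this check fails during an upgrade, the likely remedy is to get PostgreSQL running again (e.g. finish or revert the cluster version change, then start the service) before re-running the upgrade script.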
532,386 | 15,555,255,049 | IssuesEvent | 2021-03-16 05:45:42 | AY2021S2-CS2113-F10-2/tp | https://api.github.com/repos/AY2021S2-CS2113-F10-2/tp | closed | H-13: Add classes for basic framework | priority.High type.Task | Add classes including User, FitCenter, RecordList, Record, and enum RecordType | 1.0 | H-13: Add classes for basic framework - Add classes including User, FitCenter, RecordList, Record, and enum RecordType | priority | h add classes for basic framework add classes including user fitcenter recordlist record and enum recordtype | 1 |
783,690 | 27,542,245,354 | IssuesEvent | 2023-03-07 09:21:41 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Strand name becomes `null` when we use multiple `wait {}` | Type/Bug Priority/High Team/CompilerFE Lang/Actions/Wait Reason/EngineeringMistake Deferred | **Description:**
Strand name becomes `null` when we use multiple `wait {}`. As a result of this, the strand name is represented as `anonymous` in the debugger view.
Related to https://github.com/ballerina-platform/ballerina-lang/issues/30636
Potential fix https://github.com/ballerina-platform/ballerina-lang/pull/31055
**Steps to reproduce:**
- Consider the below code segment:
```
public function main() {
future<int> f1 = start addition(1, 2);
map<int|error> results = wait {f1};
}
function addition(int x, int y) returns int {
return x + y;
}
```
- Add breakpoint inside the `addition()` method and start the program in debug mode.
- We can observe that the strand name becomes `anonymous` in the debug view.
<img width="1005" alt="Screen Shot 2021-06-10 at 11 57 03 AM" src="https://user-images.githubusercontent.com/11292766/121475600-40643400-c9e3-11eb-8587-b558adb1b2d6.png">
- The above behaviour is not observed when we use normal `wait`
`int|error results = wait f1;`
<img width="975" alt="Screen Shot 2021-06-10 at 12 01 44 PM" src="https://user-images.githubusercontent.com/11292766/121476095-e1eb8580-c9e3-11eb-8ad6-dd885bdafec3.png">
**Affected Versions:**
| 1.0 | Strand name becomes `null` when we use multiple `wait {}` - **Description:**
Strand name becomes `null` when we use multiple `wait {}`. As a result of this, the strand name is represented as `anonymous` in the debugger view.
Related to https://github.com/ballerina-platform/ballerina-lang/issues/30636
Potential fix https://github.com/ballerina-platform/ballerina-lang/pull/31055
**Steps to reproduce:**
- Consider the below code segment:
```
public function main() {
future<int> f1 = start addition(1, 2);
map<int|error> results = wait {f1};
}
function addition(int x, int y) returns int {
return x + y;
}
```
- Add breakpoint inside the `addition()` method and start the program in debug mode.
- We can observe that the strand name becomes `anonymous` in the debug view.
<img width="1005" alt="Screen Shot 2021-06-10 at 11 57 03 AM" src="https://user-images.githubusercontent.com/11292766/121475600-40643400-c9e3-11eb-8587-b558adb1b2d6.png">
- The above behaviour is not observed when we use normal `wait`
`int|error results = wait f1;`
<img width="975" alt="Screen Shot 2021-06-10 at 12 01 44 PM" src="https://user-images.githubusercontent.com/11292766/121476095-e1eb8580-c9e3-11eb-8ad6-dd885bdafec3.png">
**Affected Versions:**
| priority | strand name becomes null when we use multiple wait description strand name becomes null when we use multiple wait as a result of this the strand name is represented as anonymous in the debugger view related to potential fix steps to reproduce consider the below code segment public function main future start addition map results wait function addition int x int y returns int return x y add breakpoint inside the addition method and start the program in debug mode we can observe that the stand name becomes anonymous in the debug view img width alt screen shot at am src the above behaviour is not observed when we use normal wait int error results wait img width alt screen shot at pm src affected versions | 1 |
278,395 | 8,640,856,480 | IssuesEvent | 2018-11-24 11:30:12 | cosmos/voyager | https://api.github.com/repos/cosmos/voyager | closed | Add Staking parameter tab to display component | high priority staking-1 | Description:
<!-- Steps to reproduce, logs, and screenshots are helpful for us to resolve the bug -->
Was previously hidden, but now we can include it | 1.0 | Add Staking parameter tab to display component - Description:
<!-- Steps to reproduce, logs, and screenshots are helpful for us to resolve the bug -->
Was previously hidden, but now we can include it | priority | add staking parameter tab to display component description was previously hidden but now we can include it | 1 |
353,594 | 10,554,710,608 | IssuesEvent | 2019-10-03 20:06:12 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | Relationships filters side panel can't be closed | Bug Priority: High Status: Sprint | When viewing an entity there is no way to close/reopen the relationships filter side panel; this is a huge problem on mobile because it just covers all the entity info.
This problem only happens in the entity viewer, but not in the document viewer. | 1.0 | Relationships filters side panel can't be closed - When viewing an entity there is no way to close/reopen the relationships filter side panel; this is a huge problem on mobile because it just covers all the entity info.
This problem only happens in the entity viewer, but not in the document viewer. | priority | relationships filters side panel cant be closed when viewing an entity there is no way to close reopen the relationships filter side panel this is a huge problem in mobile because it just covers all the entity info this problem only happens in the entity viewer but not in the document viewer | 1 |
371,745 | 10,980,858,174 | IssuesEvent | 2019-11-30 17:26:14 | okTurtles/group-income-simple | https://api.github.com/repos/okTurtles/group-income-simple | closed | Create self-removing component for scoped banner messages | App:Frontend Note:Research Note:UI/UX Note:Up-for-grabs Priority:High | ### Problem
The **Type 2** "Context-Sensitive Scoped Success and Error Banners" (or just, "scoped banners") component described in #688 doesn't exist and needs to be designed and implemented.
Moreover, per the discussion in that issue, it needs to be implemented in a very specific way in order to satisfy our DRY **requirement** (which really needs to be part of our JavaScript Style Guidelines...).
### Solution
As shown in the examples in [this comment](https://github.com/okTurtles/group-income-simple/issues/688#issuecomment-536091742), currently the way these sorts of messages are implemented in our UI is through the use of `v-if` directives and corresponding variables.
This is a very ugly violation of DRY. So the goal for this issue is to see if it's possible (and I believe it should be), to create a component that does not require the use of `v-if` directives and variables.
Quoting from [here](https://github.com/okTurtles/group-income-simple/issues/688#issuecomment-536098776):
> I.e. something like this:
>
> ```
> insertError(<target ref>, {message: 'error message', icon: 'cross'})
> ```
> And for that to insert a component into `<target ref>`, that itself contains all the logic necessary for automatically removing itself from the parent component
In other words, what is needed to close this issue is:
1. Implementing a component for Type 2 errors (i.e. the markup and CSS)
2. For this component to take props specifying what type of message it's displaying (i.e. an error or a success message, the message itself, and how the message should remove itself from the parent component<sup>[1]</sup>)
3. For a function similar to the `insertError` function above to be implemented (perhaps `insertScopedBanner()`), making it possible to use **one line** to insert this banner inside of the node specified by `<target ref>`. Again: to respect DRY, the parent component must not use `v-if` logic to show/hide this banner
<sup>[1]</sup> Possible values include: automatically removing itself after a given amount of time, or when the X is clicked, or because another call to the same `<target ref>` has been made with `insertScopedBanner()`, causing the previous banner to be replaced. Barring any of those outcomes, it should always remove itself when the parent component is unloaded (i.e. if we navigate to a different page, or close the containing modal, etc., so that when we return to the parent view, the errors/success messages from before are all cleared)
If any of this is confusing or unclear, please ask questions! | 1.0 | Create self-removing component for scoped banner messages - ### Problem
The **Type 2** "Context-Sensitive Scoped Success and Error Banners" (or just, "scoped banners") component described in #688 doesn't exist and needs to be designed and implemented.
Moreover, per the discussion in that issue, it needs to be implemented in a very specific way in order to satisfy our DRY **requirement** (which really needs to be part of our JavaScript Style Guidelines...).
### Solution
As shown in the examples in [this comment](https://github.com/okTurtles/group-income-simple/issues/688#issuecomment-536091742), currently the way these sorts of messages are implemented in our UI is through the use of `v-if` directives and corresponding variables.
This is a very ugly violation of DRY. So the goal for this issue is to see if it's possible (and I believe it should be), to create a component that does not require the use of `v-if` directives and variables.
Quoting from [here](https://github.com/okTurtles/group-income-simple/issues/688#issuecomment-536098776):
> I.e. something like this:
>
> ```
> insertError(<target ref>, {message: 'error message', icon: 'cross'})
> ```
> And for that to insert a component into `<target ref>`, that itself contains all the logic necessary for automatically removing itself from the parent component
In other words, what is needed to close this issue is:
1. Implementing a component for Type 2 errors (i.e. the markup and CSS)
2. For this component to take props specifying what type of message it's displaying (i.e. an error or a success message, the message itself, and how the message should remove itself from the parent component<sup>[1]</sup>)
3. For a function similar to the `insertError` function above to be implemented (perhaps `insertScopedBanner()`), making it possible to use **one line** to insert this banner inside of the node specified by `<target ref>`. Again: to respect DRY, the parent component must not use `v-if` logic to show/hide this banner
<sup>[1]</sup> Possible values include: automatically removing itself after a given amount of time, or when the X is clicked, or because another call to the same `<target ref>` has been made with `insertScopedBanner()`, causing the previous banner to be replaced. Barring any of those outcomes, it should always remove itself when the parent component is unloaded (i.e. if we navigate to a different page, or close the containing modal, etc., so that when we return to the parent view, the errors/success messages from before are all cleared)
If any of this is confusing or unclear, please ask questions! | priority | create self removing component for scoped banner messages problem the type context sensitive scoped success and error banners or just scoped banners component described in doesn t exist and needs to be designed and implemented moreover per the discussion in that issue it needs to be implemented in a very specific way in order to satisfy our dry requirement which really needs to be part of our javascript style guidelines solution as shown in the examples in currently the way these sorts of messages are implemented in our ui is through the use of v if directives and corresponding variables this is a very ugly violation of dry so the goal for this issue is to see if it s possible and i believe it should be to create a component that does not require the use of v if directives and variables quoting from i e something like this inserterror message error message icon cross and for that to insert a component into that itself contains all the logic necessary for automatically removing itself from the parent component in other words what is needed to close this issue is implementing a component for type errors i e the markup and css for this component to take props specifying what type of message it s displaying i e an error or a success message the message itself and how the message should remove itself from the parent component for a function similar to the inserterror function above to be implemented perhaps insertscopedbanner making it possible to use one line to insert this banner inside of the node specified by again to respect dry the parent component must not use v if logic to show hide this banner possible values include automatically removing itself after a given amount of time or when the x to be clicked or because another call to the same has been made with insertscopedbanner causing the previous banner to be replaced barring either of those outcomes it should always remove itself when 
the parent component is unloaded i e if we navigate to a different page or close the containing modal etc so that when we return to the parent view the errors success messages from before are all cleared if any of this is confusing or unclear please ask questions | 1 |
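The self-removal contract described in the issue above can be sketched language-agnostically. The Python sketch below is purely illustrative — the class and method names are invented, and the real deliverable is a Vue component — but it captures the key behaviors: one banner per target ref, replacement on re-insert, optional time-based expiry, and clearing on parent unload, all without any `v-if` logic in the parent.

```python
import time

class ScopedBannerRegistry:
    """Illustrative model of scoped banners: one banner per target ref."""

    def __init__(self):
        self._banners = {}  # target ref -> banner dict

    def insert_scoped_banner(self, target, message, kind="error", ttl=None):
        # A new call for the same target replaces the previous banner.
        self._banners[target] = {
            "message": message,
            "kind": kind,
            "expires_at": (time.monotonic() + ttl) if ttl is not None else None,
        }

    def active_banner(self, target, now=None):
        banner = self._banners.get(target)
        if banner is None:
            return None
        if now is None:
            now = time.monotonic()
        if banner["expires_at"] is not None and now >= banner["expires_at"]:
            del self._banners[target]  # banner removes itself after its TTL
            return None
        return banner

    def unload(self, target):
        # Called on parent unload, so stale messages never reappear.
        self._banners.pop(target, None)
```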
353,714 | 10,556,590,077 | IssuesEvent | 2019-10-04 02:32:38 | AY1920S1-CS2103T-T12-2/main | https://api.github.com/repos/AY1920S1-CS2103T-T12-2/main | opened | As an administrator I can add and remove food items from the restaurants' menu | priority.High type.Epic | ... so that restaurants can adjust their menus whenever without any delay | 1.0 | As an administrator I can add and remove food items from the restaurants' menu - ... so that restaurants can adjust their menus whenever without any delay | priority | as an administrator i can add and remove food items from the restaurants menu so that restaurants can adjust their menus whenever without any delay | 1 |
122,553 | 4,837,021,132 | IssuesEvent | 2016-11-08 21:20:34 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | Every single animal in the game teleports themselves into deep space on roundstart | feature ⚠ priority: high ⚠ | #### Description of issue
Every simple_animal materialises themselves into deep space when the game begins and float off in their separate directions, perishing rapidly. Unsure if it happens when the round officially starts or on world initialisation.
Edit: Just reproduced locally - appears to happen on round start. Each animal is gradually plucked away, one by one, and shot into zlevel 6.
Edit 2: Jesus christ it's all of them
Edit 3: JESUS CHRIST IT'S NOT JUST THE MONKEYS
#### Difference between expected and actual behavior
u wot m8
u
fukin
_wot_
m8
#### Steps to reproduce
Start a round, observe as the animals begin suffocating to death in the infinite inky black.
#### Length of time in which bug has been known to occur
Recently - last day or two, probably. It isn't really that noticeable, so maybe a little longer.
#### Client version, Server revision & Game ID
Client Version: 510
Server Revision: 49612a6880661d795cb782e70c5be5455eb61305 - dev -
Game ID: bLs-aWDB
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [x] Issue could be reproduced by different players
- [x] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
| 1.0 | Every single animal in the game teleports themselves into deep space on roundstart - #### Description of issue
Every simple_animal materialises themselves into deep space when the game begins and float off in their separate directions, perishing rapidly. Unsure if it happens when the round officially starts or on world initialisation.
Edit: Just reproduced locally - appears to happen on round start. Each animal is gradually plucked away, one by one, and shot into zlevel 6.
Edit 2: Jesus christ it's all of them
Edit 3: JESUS CHRIST IT'S NOT JUST THE MONKEYS
#### Difference between expected and actual behavior
u wot m8
u
fukin
_wot_
m8
#### Steps to reproduce
Start a round, observe as the animals begin suffocating to death in the infinite inky black.
#### Length of time in which bug has been known to occur
Recently - last day or two, probably. It isn't really that noticeable, so maybe a little longer.
#### Client version, Server revision & Game ID
Client Version: 510
Server Revision: 49612a6880661d795cb782e70c5be5455eb61305 - dev -
Game ID: bLs-aWDB
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [x] Issue could be reproduced by different players
- [x] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
| priority | every single animal in the game teleports themselves into deep space on roundstart description of issue every simple animal materialises themselves into deep space when the game begins and float off in their separate directions perishing rapidly unsure if it happens when the round officially starts or on world initialisation edit just reproduced locally appears to happen on round start each animal is gradually plucked away one by one and shot into zlevel edit jesus christ it s all of them edit jesus christ it s not just the monkeys difference between expected and actual behavior u wot u fukin wot steps to reproduce start a round observe as the animals begin suffocating to death in the infinite inky black length of time in which bug has been known to occur recently last day or two probably it isn t really that noticeable so maybe a little longer client version server revision game id client version server revision dev game id bls awdb issue bingo please check whatever applies more checkboxes checked increase your chances of the issue being looked at sooner issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round | 1 |
315,381 | 9,612,743,131 | IssuesEvent | 2019-05-13 09:37:10 | fossasia/pslab-android | https://api.github.com/repos/fossasia/pslab-android | closed | New Instrument : GyroScope | Priority: High | **Actual Behaviour**
Currently the app has an accelerometer and a compass. Though both may seem similar to a gyroscope, the gyroscope's functionality is quite different: a gyroscope provides the absolute rotation of the device, whereas an accelerometer gives changes in orientation and a compass uses magnetic sensors. Since almost every smartphone has a gyroscope built in, this would be a good feature to include
<!-- Please state here what is currently happening. -->
**Would you like to work on the issue?**
Yes
<!-- Let us know if this issue should be assigned to you or tell us who you think could help to solve this issue. -->
| 1.0 | New Instrument : GyroScope - **Actual Behaviour**
Currently the app has an accelerometer and a compass. Though both may seem similar to a gyroscope, the gyroscope's functionality is quite different: a gyroscope provides the absolute rotation of the device, whereas an accelerometer gives changes in orientation and a compass uses magnetic sensors. Since almost every smartphone has a gyroscope built in, this would be a good feature to include
<!-- Please state here what is currently happening. -->
**Would you like to work on the issue?**
Yes
<!-- Let us know if this issue should be assigned to you or tell us who you think could help to solve this issue. -->
| priority | new instrument gyroscope actual behaviour currently the app has accelerometer and compass though both may seem similar to gyroscope functionality of gyroscope is quite different gyroscope provides absolute rotation of device where accelerpmeter give changes in orientation whereas compass uses magnetic sensors since almost every smart phone has gyroscope built in this would be a good feature to include would you like to work on the issue yes | 1 |
467,602 | 13,451,259,280 | IssuesEvent | 2020-09-08 19:56:35 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | closed | state: managing multiple variants | Priority: 2 - High Type: Enhancement | <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- see spec 3.1.2.4. Handling Runtimes update to the State and 3.3.3. Managing Multiple Variants of State
- we need to keep multiple versions of the state trie for each block that is currently unfinalized
- also, need to keep track of transient state when doing something like execute_block (revert changes if not successful)
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- one state trie
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [x] I am planning to submit a pull request to fix this issue myself
| 1.0 | state: managing multiple variants - <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- see spec 3.1.2.4. Handling Runtime updates to the State and 3.3.3. Managing Multiple Variants of State
- we need to keep multiple versions of the state trie for each block that is currently unfinalized
- also, need to keep track of transient state when doing something like execute_block (revert changes if not successful)
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- one state trie
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [x] I am planning to submit a pull request to fix this issue myself
| priority | state managing multiple variants please read carefully expected behavior if you re describing a bug tell us what should happen if you re suggesting a change improvement tell us how it should work see spec handling runtimes update to the state and managaing multiple variants of state we need to keep multiple versions of the state trie for each block that is currently unfinalized also need to keep track of transient state when doing something like execute block revert changes if not successful current behavior if describing a bug tell us what happens instead of the expected behavior if suggesting a change or an improvement explain the difference between your suggestion and current behavior one state trie checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read and i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself | 1 |
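The per-block state handling requested in the issue above can be sketched language-agnostically. The following is an illustrative JavaScript model with invented names (Gossamer itself is written in Go, and its real trie is far more involved): one snapshot per unfinalized block, block execution against a working copy so a failed `execute_block` reverts transient changes, and pruning of stale variants on finalization.

```javascript
// Illustrative sketch only: one state snapshot per unfinalized block.
class StateManager {
  constructor(genesisState) {
    // hypothetical layout: block hash -> key/value snapshot (Map)
    this.snapshots = new Map([["genesis", genesisState]]);
  }
  executeBlock(parentHash, blockHash, applyFn) {
    // work on a copy so a failed execution cannot corrupt the parent state
    const working = new Map(this.snapshots.get(parentHash));
    try {
      applyFn(working); // may throw -> transient changes are discarded
    } catch (e) {
      return false;     // parent snapshot left untouched
    }
    this.snapshots.set(blockHash, working);
    return true;
  }
  finalize(blockHash) {
    // drop every variant except the finalized one
    for (const hash of [...this.snapshots.keys()]) {
      if (hash !== blockHash) this.snapshots.delete(hash);
    }
  }
}
```

The key design point, matching the issue, is that the working copy is only promoted to a stored variant after the block executes successfully.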
753,774 | 26,361,152,773 | IssuesEvent | 2023-01-11 13:29:24 | fh-fvtt/zweihander | https://api.github.com/repos/fh-fvtt/zweihander | opened | Spanish Language Support | Type: Enhancement Focus: Application Priority: High | - [x] Release beta version
- [ ] Collect missing keys from test users
- [ ] Clean up keys | 1.0 | Spanish Language Support - - [x] Release beta version
- [ ] Collect missing keys from test users
- [ ] Clean up keys | priority | spanish language support release beta version collect missing keys from test users clean up keys | 1 |
4,212 | 2,546,036,602 | IssuesEvent | 2015-01-29 21:04:22 | Metaswitch/sprout | https://api.github.com/repos/Metaswitch/sprout | closed | Call traffic impacted when a Sprout is isolated from the network | high-priority | While running 2 CPS traffic on a duplex Clearwater configuration, Sprout-1 was isolated from the network (port disconnected from vSwitch). Calls start failing mostly with 503 Service Unavailable errors. This happens when Sprout-2 tries to use Sprout-1 as the terminating S-CSCF after an SRV lookup. The attempted TCP connection times out after 32S with a transport failure. Sprout-2 attempts to lookup another S-CSCF from Homestead, but only the local clustered record is provisioned (which was just attempted and failed) so a 503 is issued. These errors persist as long as Sprout-1 remains isolated from the network.
The same behaviour is also observed if Sprout-1 is simply powered off as well.
| 1.0 | Call traffic impacted when a Sprout is isolated from the network - While running 2 CPS traffic on a duplex Clearwater configuration, Sprout-1 was isolated from the network (port disconnected from vSwitch). Calls start failing mostly with 503 Service Unavailable errors. This happens when Sprout-2 tries to use Sprout-1 as the terminating S-CSCF after an SRV lookup. The attempted TCP connection times out after 32S with a transport failure. Sprout-2 attempts to lookup another S-CSCF from Homestead, but only the local clustered record is provisioned (which was just attempted and failed) so a 503 is issued. These errors persist as long as Sprout-1 remains isolated from the network.
The same behaviour is also observed if Sprout-1 is simply powered off as well.
| priority | call traffic impacted when a sprout is isolated from the network while running cps traffic on a duplex clearwater configuration sprout was isolated from the network port disconnected from vswitch calls start failing mostly with service unavailable errors this happens when sprout tries to use sprout as the terminating s cscf after an srv lookup the attempted tcp connection times out after with a transport failure sprout attempts to lookup another s cscf from homestead but only the local clustered record is provisioned which was just attempted and failed so a is issued these errors persist as long as sprout remains isolated from the network the same behaviour is also observed if sprout is simply powered off as well | 1 |
625,841 | 19,767,956,063 | IssuesEvent | 2022-01-17 06:26:02 | GeekyAnts/NativeBase | https://api.github.com/repos/GeekyAnts/NativeBase | closed | InputGroup renders single child two times | high priority v3 | **Describe the bug**
`InputGroup` renders single child two times.
**To Reproduce**
```js
<InputGroup>
<Input /> {/* renders two times */}
</InputGroup>
```
**Expected behaviour**
Render only one time.
**CodeSandBox/Snack link**
https://codesandbox.io/s/hardcore-mclaren-4f5ww?file=/src/App.js
**Platform**
- NativeBase version: 3.2.2, 3.3.0-rc | 1.0 | InputGroup renders single child two times - **Describe the bug**
`InputGroup` renders single child two times.
**To Reproduce**
```js
<InputGroup>
<Input /> {/* renders two times */}
</InputGroup>
```
**Expected behaviour**
Render only one time.
**CodeSandBox/Snack link**
https://codesandbox.io/s/hardcore-mclaren-4f5ww?file=/src/App.js
**Platform**
- NativeBase version: 3.2.2, 3.3.0-rc | priority | inputgroup renders single child two times describe the bug inputgroup renders single child two times to reproduce js renders two times expected behaviour render only one time codesandbox snack link platform nativebase version rc | 1 |
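One plausible shape of the double-render bug above, reduced to plain functions over arrays (this is a hypothetical reconstruction, not NativeBase's actual source): a group that both forwards the raw children and emits an annotated copy outputs every child twice, while a single annotation pass outputs each child exactly once.

```javascript
// Hypothetical reduction of the bug: concatenating the raw children with
// an annotated copy makes each child appear twice in the output.
function renderGroupBuggy(children) {
  const annotated = children.map((c, i) => ({ ...c, first: i === 0 }));
  return [...children, ...annotated]; // every child rendered twice
}

// Fix sketch: annotate in a single pass and return only that result.
function renderGroupFixed(children) {
  return children.map((c, i) => ({
    ...c,
    first: i === 0,
    last: i === children.length - 1,
  }));
}
```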
309,783 | 9,480,240,510 | IssuesEvent | 2019-04-20 16:07:09 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | closed | Lazy-load more carousel items on subjects page | Priority-2-High browsing | Similar to #602 -- support browsing experience on openlibrary
https://openlibrary.org/subjects/fantasy only shows 2 pages of results with no way to browse
related to https://stackoverflow.com/questions/36908678/slick-js-carousel-detect-last-thumb-to-load-more
```
$('.carousel').on('afterChange', function(e, slick, cur) {
if (cur === slick.$slides.length - 1) {
//Replace with your "Load More" logic
console.log('Load More');
}
});
``` | 1.0 | Lazy-load more carousel items on subjects page - Similar to #602 -- support browsing experience on openlibrary
https://openlibrary.org/subjects/fantasy only shows 2 pages of results with no way to browse
related to https://stackoverflow.com/questions/36908678/slick-js-carousel-detect-last-thumb-to-load-more
```
$('.carousel').on('afterChange', function(e, slick, cur) {
if (cur === slick.$slides.length - 1) {
//Replace with your "Load More" logic
console.log('Load More');
}
});
``` | priority | lazy load more carousel items on subjects page similar to support browsing experience on openlibrary only shows pages of results with no way to browse related to carousel on afterchange function e slick cur if cur slick slides length replace with your load more logic console log load more | 1 |
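The snippet quoted in the issue can be factored so the load-more trigger is testable on its own. Everything below is illustrative: `subjectWorkCount` and `fetchNextPage` are invented placeholders, not Open Library functions.

```javascript
// Fetch the next page only when the user reaches the final slide
// and the carousel holds fewer items than are available in total.
function shouldLoadMore(currentSlide, slideCount, totalAvailable) {
  const onLastSlide = currentSlide === slideCount - 1;
  return onLastSlide && slideCount < totalAvailable;
}

// Wiring it into slick's afterChange event might look like:
// $('.carousel').on('afterChange', (e, slick, cur) => {
//   if (shouldLoadMore(cur, slick.$slides.length, subjectWorkCount)) {
//     fetchNextPage().then(books => books.forEach(b => appendSlide(b)));
//   }
// });
```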
528,311 | 15,364,217,057 | IssuesEvent | 2021-03-01 21:41:15 | unitystation/unitystation | https://api.github.com/repos/unitystation/unitystation | opened | Changelings Stage 1 Implementation Bounty B:TBD | Bounty Priority: High System: Health Type: Feature |
## Description:
This bounty calls for the first implementation of changelings to the game. Before it can be started, it requires #6152 to be merged.
Changelings are alien predators that take on the appearance of those that they slay and drain the DNA of. They're pretty rad. As one of the most developed antagonists in /tg/, they have a lot of content. This bounty is for only the core gameplay systems and a couple evolution abilities for a first release.
The core gameplay feature of changelings is draining DNA from other players to assume their identities. It is done either by a DNA extraction sting, which leaves the victim alive, or by absorbing the victim, which leaves the victim a dead husk. Once a Changeling has acquired the DNA of a victim, they can at will transform into any of the last 7 identities they have collected the DNA for.
A secondary game mechanic for changelings is the chemicals management system. Changelings have a resource generically referred to as chemicals, which lets them use a variety of special abilities. Chemicals regenerate at a fairly quick rate after use.
A stock changeling comes with a few standard abilities:
### Reviving Stasis
the changeling enters a dormant state that looks like death. After a period of time in stasis, the changeling can choose to reactivate, getting a massive on activation healing effect. Changelings do not produce chemicals while in stasis. _Changelings can use this ability even while **dead** if they have enough chemicals left over before they died to revive themselves._ If a changeling dies due to gibbing, husking, or other similar deaths, they cannot revive themselves.
### Regenerate
the changeling instantly regenerates lost limbs and organs.
### Transform
The changeling transforms into any of the DNA identities they have stored.
### Extract DNA
The changeling stings a target to take their DNA. This is a stealthy method of assuming the identities of still-living crew members
### Absorb DNA
Absorbing a victim does a few things. First, it gives you the target's DNA. It also refills your chemical storage. It also gives you an opportunity to refund and respec any purchased changeling abilities. Also, when done to another changeling, you get all of their stored DNA. If the target was a syndie or traitor with a syndicate uplink, the changeling also receives their uplink code via absorbing their memories.
A person who has been absorbed by a changeling is turned into a husk that cannot be cloned or revived with any rudimentary methods.
The Hive channel feature of changelings is not required in this bounty, as it has not been decided whether we want to include it or not for game balance reasons. If need be, it will be added at a later point.
### Evolved abilities
For a first implementation, only a handful of the changeling's evolved abilities are required; enough to have a total cost of 12 evolution points.
## What is required to fulfill the bounty:
- [ ] Add the changeling and changeling gamemode.
- [ ] Add player DNA genomes.
- [ ] Add all stock changeling abilities listed, including Reviving Stasis, Regenerate, Transform, Extract DNA, and Absorb DNA.
- [ ] Add 12 evolution points worth of evolved abilities.
## Additional Notes:
Some evolved ability recommendations below based on their feasibility of incorporation:
Augmented Eyesight
Biodegrade
Chameleon skin
Adrenaline Sacs
Fleshmend
Lesser Form
Mimic Voice
Arm Blade
Tentacle
chitinous armor
transformation sting
mute sting
blind sting
cryogenic sting
| 1.0 | Changelings Stage 1 Implementation Bounty B:TBD -
## Description:
This bounty calls for the first implementation of changelings to the game. Before it can be started, it requires #6152 to be merged.
Changelings are alien predators that take on the appearance of those that they slay and drain the DNA of. They're pretty rad. As one of the most developed antagonists in /tg/, they have a lot of content. This bounty is for only the core gameplay systems and a couple evolution abilities for a first release.
The core gameplay feature of changelings is draining DNA from other players to assume their identities. It is done either by a DNA extraction sting, which leaves the victim alive, or by absorbing the victim, which leaves the victim a dead husk. Once a Changeling has acquired the DNA of a victim, they can at will transform into any of the last 7 identities they have collected the DNA for.
A secondary game mechanic for changelings is the chemicals management system. Changelings have a resource generically referred to as chemicals, which lets them use a variety of special abilities. Chemicals regenerate at a fairly quick rate after use.
A stock changeling comes with a few standard abilities:
### Reviving Stasis
the changeling enters a dormant state that looks like death. After a period of time in stasis, the changeling can choose to reactivate, getting a massive on activation healing effect. Changelings do not produce chemicals while in stasis. _Changelings can use this ability even while **dead** if they have enough chemicals left over before they died to revive themselves._ If a changeling dies due to gibbing, husking, or other similar deaths, they cannot revive themselves.
### Regenerate
the changeling instantly regenerates lost limbs and organs.
### Transform
The changeling transforms into any of the DNA identities they have stored.
### Extract DNA
The changeling stings a target to take their DNA. This is a stealthy method of assuming the identities of still-living crew members
### Absorb DNA
Absorbing a victim does a few things. First, it gives you the target's DNA. It also refills your chemical storage. It also gives you an opportunity to refund and respec any purchased changeling abilities. Also, when done to another changeling, you get all of their stored DNA. If the target was a syndie or traitor with a syndicate uplink, the changeling also receives their uplink code via absorbing their memories.
A person who has been absorbed by a changeling is turned into a husk that cannot be cloned or revived with any rudimentary methods.
The Hive channel feature of changelings is not required in this bounty, as it has not been decided whether we want to include it or not for game balance reasons. If need be, it will be added at a later point.
### Evolved abilities
For a first implementation, only a handful of the changeling's evolved abilities are required; enough to have a total cost of 12 evolution points.
## What is required to fulfill the bounty:
- [ ] Add the changeling and changeling gamemode.
- [ ] Add player DNA genomes.
- [ ] Add all stock changeling abilities listed, including Reviving Stasis, Regenerate, Transform, Extract DNA, and Absorb DNA.
- [ ] Add 12 evolution points worth of evolved abilities.
## Additional Notes:
Some evolved ability recommendations below based on their feasibility of incorporation:
Augmented Eyesight
Biodegrade
Chameleon skin
Adrenaline Sacs
Fleshmend
Lesser Form
Mimic Voice
Arm Blade
Tentacle
chitinous armor
transformation sting
mute sting
blind sting
cryogenic sting
| priority | changelings stage implementation bounty b tbd description this bounty calls for the first implementation of changelings to the game before it can be started it requires to be merged changelings are alien predators that take on the appearance of those that they slay and drain the dna of they re pretty rad as one of the most developed antagonists in tg they have a lot of content this bounty is for only the core gameplay systems and a couple evolution abilities for a first release the core gameplay features of changelings in draining dna from other players to assum their identities it is done either by a dna extraction sting which leaves the victim alive or by absorbing the victim which leaves the victim a dead husk once a changeling has acquired the dna of a victim they can at will transform to any of the last identities they have collected the dna for a secondary game mechanic for changelings is the chemicals management system changelings have a resource generically referred to as chemicals which lets them use a variety of special abilities chemicals regenerate at a fairly quick rate after use a stock changeling comes with a few standard abilities reviving stasis the changeling enters a dormant state that looks like death after a period of time in stasis the changeling can choose to reactivate getting a massive on activation healing effect changelings do not produce chemicals while in stasis changelings can use this ability even while dead if they have enough chemicals left over before they died to revive themselves if a changeling dies due to gibbing husking or other similar deaths they cannot revive themselves regenerate the changeling instantly regenerates lost limbs and organs transform the changeling transforms into any of the dna identities they have stored extract dna the changeling stings are target to take their dna this is a stealthy method of assuming the identities of still living crew members absorb dna absorbing a victim does a few things 
first it gives you the target s dna it also refills your chemical storage it also gives you an opportunity to refund and respec any purchased changeling abilities also when done to another changeling you get all of their stored dna if the target was a syndie or traitor with a syndicate uplink the changeling also receives their uplink code via absorbing their memories a person that has been absorbed by a changeling is turned into husk that cannot be cloned or revived with any rudimentary methods the hive channel feature of changelings is not required in this bounty as it has not been decided whether we want to include it or not for game balance reasons if need be it will be added at a later point evolved abilities for a first implementation only a handful of the changeling s evolved abilities are required enough to have a total cost of evolution points what is required to fulfill the bounty add the changeling and changeling gamemode add player dna genomes add all stock changeling abilities listed including reviving stasis regenerate transform extract dna and absorb dna add evolution points worth of evolved abilities additional notes some evolved ability recommendations below based on their feasibility of incorporation augmented eyesight biodagrade chameleon skin adrenaline sacs fleshmend lesser form mimic voice arm blade tentacle chitinous armor transformation sting mute sting blind sting cryogenic sting | 1 |
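The chemicals resource described in the bounty could be modeled roughly as follows. This is only a sketch with placeholder numbers (the cap and regeneration rate are not balance values from the bounty): a pool that regenerates toward a cap each tick, stops regenerating during stasis, and is spent to trigger abilities.

```javascript
// Illustrative chemicals pool: regenerates toward a cap, spent on abilities.
function makeChemicalPool(max, regenPerTick) {
  let current = max;
  return {
    tick(inStasis = false) {
      // the bounty specifies that changelings do not produce chemicals in stasis
      if (!inStasis) current = Math.min(max, current + regenPerTick);
      return current;
    },
    spend(cost) {
      if (cost > current) return false; // not enough chemicals for this ability
      current -= cost;
      return true;
    },
    get current() { return current; },
  };
}
```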
796,300 | 28,105,865,115 | IssuesEvent | 2023-03-31 00:32:52 | AY2223S2-CS2103-W17-2/tp | https://api.github.com/repos/AY2223S2-CS2103-W17-2/tp | closed | As a user, I can view the number of cases in a neighbourhood | priority.High type.Story | so that I know how urgent or severe the situation is in a particular area. | 1.0 | As a user, I can view the number of cases in a neighbourhood - so that I know how urgent or severe the situation is in a particular area. | priority | as a user i can view the number of cases in a neighbourhood so that i know how urgent or severe the situation is in a particular area | 1 |
753,137 | 26,340,235,711 | IssuesEvent | 2023-01-10 17:06:41 | vaticle/typedb | https://api.github.com/repos/vaticle/typedb | closed | Non-deterministically receiving too many answers from rules with value predicates | type: bug priority: high domain: reasoner | ## Description
When given a schema with rules about two related attributes and some test data, only Workbase infers the attributes correctly. Console / client-java read transaction returns unexpected values (and once it does, Workbase shows them, too).
## Environment
1. OS: Docker on Win10
2. TypeDB version: TypeDB 2.1.1
3. TypeDB client: console (or client-java)
4. Workbase version: 2.1.0
## Reproducible Steps
Steps to create the smallest reproducible scenario:
1. Create schema and data gql files:
s.gql -
```
define
name sub attribute, value string;
volume sub attribute, value long;
timestamp sub attribute, value datetime;
available sub attribute, value boolean;
deliverable sub attribute, value boolean;
Customer sub entity,
owns name @key,
plays Order:customer;
Resource sub entity,
owns name,
owns volume;
Ingredient sub entity,
owns name,
owns volume,
owns available,
plays Product:ingredient;
Order sub relation,
owns name @key,
owns timestamp,
owns deliverable,
relates customer,
relates product;
Product sub relation,
owns name @key,
plays Order:product,
relates ingredient;
rule ingredient-available-rule:
when {
$i isa Ingredient, has name $n, has volume $iv;
$r isa Resource, has name $n, has volume $rv;
$rv > $iv;
} then {
$i has available true;
};
rule ingredient-unavailable-rule:
when {
$i isa Ingredient, has name $n, has volume $iv;
not {
$r isa Resource, has name $n, has volume $rv;
$rv > $iv;
};
} then {
$i has available false;
};
rule undeliverable-rule:
when {
$o (product: $p) isa Order;
$p (ingredient: $i) isa Product;
$i isa Ingredient, has available false;
} then {
$o has deliverable false;
};
rule deliverable-rule:
when {
$o (product: $p) isa Order;
$p isa Product;
not {
$i isa Ingredient;
$p (ingredient: $i);
$i has available false;
};
} then {
$o has deliverable true;
};
```
d.gql -
```
insert
$i1 isa Ingredient,
has name "Liquid 1",
has volume 120;
$i2 isa Ingredient,
has name "Liquid 2",
has volume 30;
$i3 isa Ingredient,
has name "Liquid 3",
has volume 250;
$r11 isa Resource,
has name "Liquid 1",
has volume 100;
$r12 isa Resource,
has name "Liquid 1",
has volume 500;
$r21 isa Resource,
has name "Liquid 2",
has volume 200;
$r22 isa Resource,
has name "Liquid 2",
has volume 250;
$r31 isa Resource,
has name "Liquid 3",
has volume 10;
$r32 isa Resource,
has name "Liquid 3",
has volume 220;
$p1 (ingredient: $i1, ingredient: $i2) isa Product,
has name "Product 1";
$p2 (ingredient: $i1, ingredient: $i3) isa Product,
has name "Product 2";
$john isa Customer,
has name "John Doe";
$o1 (product: $p1, customer: $john) isa Order,
has name "Order 1";
$o2 (product: $p2, customer: $john) isa Order,
has name "Order 2";
$o3 (product: $p1, customer: $john) isa Order,
has name "Order 3";
```
2. Move these files to your Docker container: opt/typedb-all-linux/
3. Run these commands in console (either as a script or one by one):
```
database create test
transaction test schema write
source s.gql
commit
transaction test data write
source d.gql
commit
```
4. Open TypeDB Workbase (v2.1.0) connect to your server, select database "test" and run:
`match $x isa Order, has deliverable $d;`
(notice correct inference)
5. Go back to console, open a read transaction with inference on and check the same thing (optionally add the name attribute for readability):
```
transaction test data read --infer true
test::data::read> match $x isa Order, has deliverable $d, has name $n;
```
(notice incorrect inference, Order 2 is both deliverable and undeliverable)
6. Go back to TypeDB Workbase, run the same query as in Step 4 and notice the same incorrect inference as in Step 5.
## Expected Output
The output given by Workbase in Step 4 matches expectations
## Actual Output
Console and client-java infer that Order 2 is both deliverable and undeliverable.
## Additional information



| 1.0 | Non-deterministically receiving too many answers from rules with value predicates - ## Description
When given a schema with rules about two related attributes and some test data, only Workbase infers the attributes correctly. Console / client-java read transaction returns unexpected values (and once it does, Workbase shows them, too).
## Environment
1. OS: Docker on Win10
2. TypeDB version: TypeDB 2.1.1
3. TypeDB client: console (or client-java)
4. Workbase version: 2.1.0
## Reproducible Steps
Steps to create the smallest reproducible scenario:
1. Create schema and data gql files:
s.gql -
```
define
name sub attribute, value string;
volume sub attribute, value long;
timestamp sub attribute, value datetime;
available sub attribute, value boolean;
deliverable sub attribute, value boolean;
Customer sub entity,
owns name @key,
plays Order:customer;
Resource sub entity,
owns name,
owns volume;
Ingredient sub entity,
owns name,
owns volume,
owns available,
plays Product:ingredient;
Order sub relation,
owns name @key,
owns timestamp,
owns deliverable,
relates customer,
relates product;
Product sub relation,
owns name @key,
plays Order:product,
relates ingredient;
rule ingredient-available-rule:
when {
$i isa Ingredient, has name $n, has volume $iv;
$r isa Resource, has name $n, has volume $rv;
$rv > $iv;
} then {
$i has available true;
};
rule ingredient-unavailable-rule:
when {
$i isa Ingredient, has name $n, has volume $iv;
not {
$r isa Resource, has name $n, has volume $rv;
$rv > $iv;
};
} then {
$i has available false;
};
rule undeliverable-rule:
when {
$o (product: $p) isa Order;
$p (ingredient: $i) isa Product;
$i isa Ingredient, has available false;
} then {
$o has deliverable false;
};
rule deliverable-rule:
when {
$o (product: $p) isa Order;
$p isa Product;
not {
$i isa Ingredient;
$p (ingredient: $i);
$i has available false;
};
} then {
$o has deliverable true;
};
```
d.gql -
```
insert
$i1 isa Ingredient,
has name "Liquid 1",
has volume 120;
$i2 isa Ingredient,
has name "Liquid 2",
has volume 30;
$i3 isa Ingredient,
has name "Liquid 3",
has volume 250;
$r11 isa Resource,
has name "Liquid 1",
has volume 100;
$r12 isa Resource,
has name "Liquid 1",
has volume 500;
$r21 isa Resource,
has name "Liquid 2",
has volume 200;
$r22 isa Resource,
has name "Liquid 2",
has volume 250;
$r31 isa Resource,
has name "Liquid 3",
has volume 10;
$r32 isa Resource,
has name "Liquid 3",
has volume 220;
$p1 (ingredient: $i1, ingredient: $i2) isa Product,
has name "Product 1";
$p2 (ingredient: $i1, ingredient: $i3) isa Product,
has name "Product 2";
$john isa Customer,
has name "John Doe";
$o1 (product: $p1, customer: $john) isa Order,
has name "Order 1";
$o2 (product: $p2, customer: $john) isa Order,
has name "Order 2";
$o3 (product: $p1, customer: $john) isa Order,
has name "Order 3";
```
2. Move these files to your Docker container: opt/typedb-all-linux/
3. Run these commands in console (either as a script or one by one):
```
database create test
transaction test schema write
source s.gql
commit
transaction test data write
source d.gql
commit
```
4. Open TypeDB Workbase (v2.1.0) connect to your server, select database "test" and run:
`match $x isa Order, has deliverable $d;`
(notice correct inference)
5. Go back to console, open a read transaction with inference on and check the same thing (optionally add the name attribute for readability):
```
transaction test data read --infer true
test::data::read> match $x isa Order, has deliverable $d, has name $n;
```
(notice incorrect inference, Order 2 is both deliverable and undeliverable)
6. Go back to TypeDB Workbase, run the same query as in Step 4 and notice the same incorrect inference as in Step 5.
## Expected Output
The output given by Workbase in Step 4 matches expectations
## Actual Output
Console and client-java infer that Order 2 is both deliverable and undeliverable.
## Additional information



| priority | non deterministically receiving too many answers from rules with value predicates description when given a schema with rules about two related attributes and some test data only workbase infers the attributes correctly console client java read transaction returns unexpected values and once it does workbase shows them too environment os docker on typedb version typedb typedb client console or client java workbase version reproducible steps steps to create the smallest reproducible scenario create schema and data gql files s gql define name sub attribute value string volume sub attribute value long timestamp sub attribute value datetime available sub attribute value boolean deliverable sub attribute value boolean customer sub entity owns name key plays order customer resource sub entity owns name owns volume ingredient sub entity owns name owns volume owns available plays product ingredient order sub relation owns name key owns timestamp owns deliverable relates customer relates product product sub relation owns name key plays order product relates ingredient rule ingredient available rule when i isa ingredient has name n has volume iv r isa resource has name n has volume rv rv iv then i has available true rule ingredient unavailable rule when i isa ingredient has name n has volume iv not r isa resource has name n has volume rv rv iv then i has available false rule undeliverable rule when o product p isa order p ingredient i isa product i isa ingredient has available false then o has deliverable false rule deliverable rule when o product p isa order p isa product not i isa ingredient p ingredient i i has available false then o has deliverable true d gql insert isa ingredient has name liquid has volume isa ingredient has name liquid has volume isa ingredient has name liquid has volume isa resource has name liquid has volume isa resource has name liquid has volume isa resource has name liquid has volume isa resource has name liquid has volume isa resource 
has name liquid has volume isa resource has name liquid has volume ingredient ingredient isa product has name product ingredient ingredient isa product has name product john isa customer has name john doe product customer john isa order has name order product customer john isa order has name order product customer john isa order has name order move these files to your docker container opt typedb all linux run these commands in console either as a script or one by one database create test transaction test schema write source s gql commit transaction test data write source d gql commit open typedb workbase connect to your server select database test and run match x isa order has deliverable d notice correct inference go back to console open a read transaction with inference on and check the same thing optionally add the name attribute for readability transaction test data read infer true test data read match x isa order has deliverable d has name n notice incorrect inference order is both deliverable and undeliverable go back to typedb workbase run the same query as in step and notice the same incorrect inference as in step expected output the output given by workbase in step matches expectations actual output console and client java infer that order is both deliverable and undeliverable additional information | 1 |
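The contradiction reported above (Order 2 answered as both deliverable and undeliverable) can be detected mechanically once the answers are fetched. A plain JavaScript sketch over already-collected answer rows follows; this is not the TypeDB client API, and the row shape is an assumption for illustration.

```javascript
// Group answers by entity key and flag any entity whose inferred
// boolean attribute takes both values.
function findContradictions(answers) {
  const seen = new Map(); // name -> Set of observed boolean values
  for (const { name, deliverable } of answers) {
    if (!seen.has(name)) seen.set(name, new Set());
    seen.get(name).add(deliverable);
  }
  return [...seen.entries()]
    .filter(([, values]) => values.size > 1)
    .map(([name]) => name);
}
```

A check like this is useful in reasoner regression tests, since the two rules above are written to be mutually exclusive via negation.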
43,211 | 2,886,417,466 | IssuesEvent | 2015-06-12 08:09:05 | xkill/remmina-unity-launcher | https://api.github.com/repos/xkill/remmina-unity-launcher | closed | Auto update | auto-migrated Priority-High Type-Enhancement | ```
What steps will reproduce the problem?
1. Normal Usage
What is the expected output? What do you see instead?
An automatically updated list. Currently operation is to manually re-run the
script.
Not implemented yet, suggested solution is to make a "wrapper" script that
updates the list and then runs the selected item.
```
Original issue reported on code.google.com by `sdeonline` on 26 May 2011 at 7:26 | 1.0 | Auto update - ```
What steps will reproduce the problem?
1. Normal Usage
What is the expected output? What do you see instead?
An automatically updated list. Currently operation is to manually re-run the
script.
Not implemented yet, suggested solution is to make a "wrapper" script that
updates the list and then runs the selected item.
```
Original issue reported on code.google.com by `sdeonline` on 26 May 2011 at 7:26 | priority | auto update what steps will reproduce the problem normal usage what is the expected output what do you see instead an automatically updated list currently operation is to manually re run the script not implemented yet suggested solution is to make a wrapper script that updates the list and then runs the selected item original issue reported on code google com by sdeonline on may at | 1 |
772,982 | 27,141,884,989 | IssuesEvent | 2023-02-16 16:54:32 | mikeblazanin/gcplyr | https://api.github.com/repos/mikeblazanin/gcplyr | opened | Enhancements to auc: blanks and trans_y | type: enhancement priority: high | Blanks would allow subtraction of the blank area
trans_y would enable auc on log axes, presumably using the minimum value as the baseline (how this would work across wells is tbd, maybe you would just require the blank when trans_y and use that value as the baseline) | 1.0 | Enhancements to auc: blanks and trans_y - Blanks would allow subtraction of the blank area
trans_y would enable auc on log axes, presumably using the minimum value as the baseline (how this would work across wells is tbd, maybe you would just require the blank when trans_y and use that value as the baseline) | priority | enhancements to auc blanks and trans y blanks would allow subtraction of the blank area trans y would enable auc on log axes presumably using the minimum value as the baseline how this would work across wells is tbd maybe you would just require the blank when trans y and use that value as the baseline | 1 |
761,235 | 26,672,698,304 | IssuesEvent | 2023-01-26 11:44:48 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [Priest] Mindcontrol causing errors | Class: Priest Spell Priority: High Status: Confirmed | If a player has auto attack enabled and is then mindcontrolled, brought over to the priest and allowed to auto attack, it causes a latency spike of about 3000, meaning abilities cannot be cast during the channel (they should be castable but interrupt the channel itself).
How to replicate - Mindcontrol another player with auto attack enabled on their side, walk towards the priest, allow the auto attack to hit, try using a spell during the channel from priest end
 | 1.0 | [Priest] Mindcontrol causing errors - If a player has auto attack enabled and is then mindcontrolled, brought over to the priest and allowed to auto attack, it causes a latency spike of about 3000, meaning abilities cannot be cast during the channel (they should be castable but interrupt the channel itself).
How to replicate - Mindcontrol another player with auto attack enabled on their side, walk towards the priest, allow the auto attack to hit, try using a spell during the channel from priest end
| priority | mindcontrol causing errors if a player has auto attack enabled and is then mindcontrolled brought over to the priest and allowed to auto attack it causes a latency spike of about meaning abilties cannot be cased during the channel they should be castable but interrupt the channel itself how to replicate mindcontrol another player with auto attack enabled on their side walk towards the priest allow the auto attack to hit try using a spell during the channel from priest end | 1 |
597,386 | 18,162,600,021 | IssuesEvent | 2021-09-27 11:22:15 | capgemini-stavanger/gi-en-jul | https://api.github.com/repos/capgemini-stavanger/gi-en-jul | opened | Include registration summary in first email | Priority High | **Is your feature request related to a problem? Please describe.**
When givers have been registered, they might forget what information they registered with, or how large the family they want to give to is.
**Describe the solution you'd like**
The first mail should include a summary of the information the giver registered with. | 1.0 | Include registration summary in first email - **Is your feature request related to a problem? Please describe.**
When givers have been registered, they might forget what information they registered with, or how large the family they want to give to is.
**Describe the solution you'd like**
The first mail should include a summary of the information the giver registered with. | priority | include registration summary in first email is your feature request related to a problem please describe when givers have been registered they might forget what information they registered with or how large the family they want to give to is describe the solution you d like the first mail should include a summary of the information the giver registered with | 1 |
544,413 | 15,893,388,970 | IssuesEvent | 2021-04-11 05:34:32 | wso2/streaming-integrator-tooling | https://api.github.com/repos/wso2/streaming-integrator-tooling | closed | Cannot Add New Server to deploy to server intermittently | Priority/Highest T7 bug | **Description:**
When trying to add new servers to **deploy to server** for the second time, they won't get added and no errors will be thrown.
| 1.0 | Cannot Add New Server to deploy to server intermittently - **Description:**
When trying to add new servers to **deploy to server** for the second time, they won't get added and no errors will be thrown.
| priority | cannot add new server to deploy to server intermittently description when trying to add new servers to deploy to server for the second time it won t get added and no errors will be thrown | 1 |
177,178 | 6,575,176,010 | IssuesEvent | 2017-09-11 15:15:53 | minio/minio | https://api.github.com/repos/minio/minio | closed | Cannot get ContentLength from headObject request using aws javascript sdk. | priority: high | The response in the callback to headObject should contain a ContentLength, however the field is non-existent. However, I can see the Content-Length in the response from the dev console.
[http://jsfiddle.net/sensbbnp/](url)
```
var s3 = new AWS.S3({
endpoint: 'https://play.minio.io:9000',
s3ForcePathStyle: 'true', // needed with minio?
signatureVersion: 'v4',
accessKeyId: 'Q3AM3UQ867SPQQA43P2F' ,
secretAccessKey: 'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG' ,
}),
params = {
Bucket: 'kris',
Key: 'agent',
};
s3.makeRequest('headObject', params, logResponse);
// requires credentials but does same thing
//s3.headObject(params, logResponse);
function logResponse(err, data) {
console.log(err)
console.log(data)
}
``` | 1.0 | Cannot get ContentLength from headObject request using aws javascript sdk. - The response in the callback to headObject should contain a ContentLength, however the field is non-existent. However, I can see the Content-Length in the response from the dev console.
[http://jsfiddle.net/sensbbnp/](url)
```
var s3 = new AWS.S3({
endpoint: 'https://play.minio.io:9000',
s3ForcePathStyle: 'true', // needed with minio?
signatureVersion: 'v4',
accessKeyId: 'Q3AM3UQ867SPQQA43P2F' ,
secretAccessKey: 'zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG' ,
}),
params = {
Bucket: 'kris',
Key: 'agent',
};
s3.makeRequest('headObject', params, logResponse);
// requires credentials but does same thing
//s3.headObject(params, logResponse);
function logResponse(err, data) {
console.log(err)
console.log(data)
}
``` | priority | cannot get contentlength from headobject request using aws javascript sdk the response in the callback to headobject should contain a contentlength however the field is non existent however i can see the content length in the response from the dev console url var new aws endpoint true needed with minio signatureversion accesskeyid secretaccesskey zuf params bucket kris key agent makerequest headobject params logresponse requires credentials but does same thing headobject params logresponse function logresponse err data console log err console log data | 1 |
408,119 | 11,941,804,070 | IssuesEvent | 2020-04-02 19:06:24 | plinck/TeamATC | https://api.github.com/repos/plinck/TeamATC | closed | Jamal's strava file didn't upload - but my tests show that all works | bug priorityHigh | **Describe the bug**
It's some oddity about his file, so maybe he used a different method to create it. I will have to dig into the file to find out
**To Reproduce**
Steps to reproduce the behavior:
Try to upload Jamal's fit file, and it won't fill in fields. My guess is that it is only affecting him
**Expected behavior**
**Screenshots**
**Additional context**
 | 1.0 | Jamal's strava file didn't upload - but my tests show that all works - **Describe the bug**
It's some oddity about his file, so maybe he used a different method to create it. I will have to dig into the file to find out
**To Reproduce**
Steps to reproduce the behavior:
Try to upload Jamal's fit file, and it won't fill in fields. My guess is that it is only affecting him
**Expected behavior**
**Screenshots**
**Additional context**
| priority | jamal s strava file didnt upload but my tests show that all works describe the bug its some oddity about his file so maybe he used a different method to create i will have to dig into thee file to find out to reproduce steps to reproduce the behavior try to upload jamal s fit file and it wont fill in fields my guess is that it is only affecting him expected behavior screenshots additional context | 1 |
338,669 | 10,235,433,285 | IssuesEvent | 2019-08-19 09:09:35 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [master-preview] Interaction is broken after placing the block | High Priority QA Regression | Steps:
1. Take hammer
2. Take Hewn log
3. Place some block on the ground
4. Try to interact with some object (stockpile or tent)
Expected:
Open object UI
Actual:
can't open UI. Need to relog.
1. Take hammer
2. Take Hewn log
3. Place some block on the ground
4. Try to interact with some object (stockpile or tent)
Expected:
Open object UI
Actual:
can't open UI. Need to relog.
581,494 | 17,294,792,538 | IssuesEvent | 2021-07-25 14:00:39 | VolmitSoftware/Iris | https://api.github.com/repos/VolmitSoftware/Iris | closed | Objects not found during mcaPregeneration. Started with recent updates. | Bug High Priority | https://bin.bloom.host/kupewilomi.md
Unknown issue with objects that is potentially fatal to generation progress.
We tried placing the objects with /ir o paste, which worked.
We then pasted all of them, saved them again (with the new system? V2 objects?), but no dice.
Unsure if this is due to them being old, some of them being new, or if it has something to do with the new pregeneration methods. | 1.0 | Objects not found during mcaPregeneration. Started with recent updates. - https://bin.bloom.host/kupewilomi.md
Unknown issue with objects that is potentially fatal to generation progress.
We tried placing the objects with /ir o paste, which worked.
We then pasted all of them, saved them again (with the new system? V2 objects?), but no dice.
Unsure if this is due to them being old, some of them being new, or if it has something to do with the new pregeneration methods. | priority | objects not found during mcapregeneration started with recent updates unknown issue with objects that is potentially fatal to generation progress we tried placing the objects with ir o paste which worked we then pasted all of them saved them again with the new system objects but no dice unsure if this is due to them being old some of them being new or if it has something to do with the new pregeneration methods | 1 |
347,228 | 10,426,790,651 | IssuesEvent | 2019-09-16 18:24:50 | BoiseState-AdaptLab/adapt-lidar-tools | https://api.github.com/repos/BoiseState-AdaptLab/adapt-lidar-tools | opened | Unit Test Bugs | High Priority bug | bin/PulseData_unittests
Running main() from deps/googletest/googletest/src/gtest_main.cc
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from PulseDataTest
[ RUN ] PulseDataTest.testSetterGetter
src/PulseData_unittests.cpp:63: Failure
Expected equality of these values:
streamData
Which is: "\nOutgoing:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t34 45 67 \n\nReturning Wave:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t76 65 43 "
realStream
Which is: "\nOutgoing:\n\n\tindices:\n\t0 1 2 \n\tamplitudes: \n\t34 45 67 \n\nReturning Wave:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t76 65 43 "
With diff:
@@ -4,5 +4,5 @@
\tindices:
\t0 1 2
-\tamplitudes:
+\tamplitudes:
\t34 45 67
[ FAILED ] PulseDataTest.testSetterGetter (0 ms)
[----------] 1 test from PulseDataTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (0 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] PulseDataTest.testSetterGetter
1 FAILED TEST
make: [test] Error 1 (ignored) | 1.0 | Unit Test Bugs - bin/PulseData_unittests
Running main() from deps/googletest/googletest/src/gtest_main.cc
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from PulseDataTest
[ RUN ] PulseDataTest.testSetterGetter
src/PulseData_unittests.cpp:63: Failure
Expected equality of these values:
streamData
Which is: "\nOutgoing:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t34 45 67 \n\nReturning Wave:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t76 65 43 "
realStream
Which is: "\nOutgoing:\n\n\tindices:\n\t0 1 2 \n\tamplitudes: \n\t34 45 67 \n\nReturning Wave:\n\n\tindices:\n\t0 1 2 \n\tamplitudes:\n\t76 65 43 "
With diff:
@@ -4,5 +4,5 @@
\tindices:
\t0 1 2
-\tamplitudes:
+\tamplitudes:
\t34 45 67
[ FAILED ] PulseDataTest.testSetterGetter (0 ms)
[----------] 1 test from PulseDataTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (0 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] PulseDataTest.testSetterGetter
1 FAILED TEST
make: [test] Error 1 (ignored) | priority | unit test bugs bin pulsedata unittests running main from deps googletest googletest src gtest main cc running test from test suite global test environment set up test from pulsedatatest pulsedatatest testsettergetter src pulsedata unittests cpp failure expected equality of these values streamdata which is noutgoing n n tindices n n tamplitudes n n nreturning wave n n tindices n n tamplitudes n realstream which is noutgoing n n tindices n n tamplitudes n n nreturning wave n n tindices n n tamplitudes n with diff tindices tamplitudes tamplitudes pulsedatatest testsettergetter ms test from pulsedatatest ms total global test environment tear down test from test suite ran ms total tests test listed below pulsedatatest testsettergetter failed test make error ignored | 1 |
652,127 | 21,522,956,059 | IssuesEvent | 2022-04-28 15:39:52 | Azordev/various-tasks | https://api.github.com/repos/Azordev/various-tasks | closed | Membership Cards Have A Lot Of Separation Between The Title And The Subtitle. | EE-1 css fix priority high skill-normal Under Review | ### Explanation:
In [Partner View](https://wwwsimplepracticegatsbystaging.gatsbyjs.io/partners/), in the section: "Help your network to be the best version of themselves (business)", there are some cards that have a title and a description. I have noticed that the space between the title and the description is too large and needs to be corrected. I have also noticed that the image does not take up 100% of the width; there is a right border which makes the image smaller.
### Screenshot


### Checklist
- [ ] The space between title and subtitle is the same.
- [ ] The image is occupying 100% of the width.
### References
Gatsby: https://wwwsimplepracticegatsbystaging.gatsbyjs.io/partners/
Original: http://spgatsbystg.wpengine.com/partners/ | 1.0 | Membership Cards Have A Lot Of Separation Between The Title And The Subtitle. - ### Explanation:
In [Partner View](https://wwwsimplepracticegatsbystaging.gatsbyjs.io/partners/), in the section: "Help your network to be the best version of themselves (business)", there are some cards that have a title and a description. I have noticed that the space between the title and the description is too large and needs to be corrected. I have also noticed that the image does not take up 100% of the width; there is a right border which makes the image smaller.
### Screenshot


### Checklist
- [ ] The space between title and subtitle is the same.
- [ ] The image is occupying 100% of the width.
### References
Gatsby: https://wwwsimplepracticegatsbystaging.gatsbyjs.io/partners/
Original: http://spgatsbystg.wpengine.com/partners/ | priority | membership cards have a lot of separation between the title and the subtitle explanation in in the section help your network to be the best version of themselves business there are some cards that have a title and a description i have noticed that the space between the title and the description is too large and needs to be corrected i have also noticed that the image does not take up of the width there is a right border which makes the image smaller screenshot checklist the space between title and subtitle is the same the image is occupying of the width references gatsby original | 1 |
813,203 | 30,448,602,318 | IssuesEvent | 2023-07-16 01:52:44 | bono-94/Full-Stack-Portfolio | https://api.github.com/repos/bono-94/Full-Stack-Portfolio | closed | USER STORY: Registration | COMPLETION: Must Have EPIC: Registration Form THEME: Account Management PRIORITY: High | As a **Site User/Admin** I can **register for an account** so that **I can access and participate on the website**
## Acceptance Criteria:
- Criteria 1: The user should be able to choose a username
- Criteria 2: The user should be able to choose a password
- Criteria 3: The user should be able to create user account by clicking the "submit button"
- Criteria 4: The user should also receive automatically created user profile with detailed information linked to user account
## Tasks
- [x] Update models
- [x] Update views
- [x] Update templates
- [x] Update urls
- [x] Update styling
- [x] Update javascript
- [x] Test the functionality
- [x] Update admin
- [x] Update status/completion | 1.0 | USER STORY: Registration - As a **Site User/Admin** I can **register for an account** so that **I can access and participate on the website**
## Acceptance Criteria:
- Criteria 1: The user should be able to choose a username
- Criteria 2: The user should be able to choose a password
- Criteria 3: The user should be able to create user account by clicking the "submit button"
- Criteria 4: The user should also receive automatically created user profile with detailed information linked to user account
## Tasks
- [x] Update models
- [x] Update views
- [x] Update templates
- [x] Update urls
- [x] Update styling
- [x] Update javascript
- [x] Test the functionality
- [x] Update admin
- [x] Update status/completion | priority | user story registration as a site user admin i can register for an account so that i can access and participate on the website acceptance criteria criteria the user should be able to choose a username criteria the user should be able to choose a password criteria the user should be able to create user account by clicking the submit button criteria the user should also receive automatically created user profile with detailed information linked to user account tasks update models update views update templates update urls update styling update javascript test the functionality update admin update status completion | 1 |
785,997 | 27,630,645,508 | IssuesEvent | 2023-03-10 10:33:57 | pycaret/pycaret | https://api.github.com/repos/pycaret/pycaret | closed | [BUG]: OHE not working correctly | bug priority_high | ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
Why is the age column not getting one-hot encoded in `X_train_transformed`?
```
# load dataset
from pycaret.datasets import get_data
hepatitis = get_data('hepatitis')
# init setup
from pycaret.classification import *
clf1 = setup(data = hepatitis, target = 'Class', categorical_features = ['AGE'], max_encoding_ohe=5000)
```


### Reproducible Example
```python
see above
```
### Expected Behavior
see above
### Actual Results
```python-traceback
see above
```
### Installed Versions
rc9 | 1.0 | [BUG]: OHE not working correctly - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
Why is the age column not getting one-hot encoded in `X_train_transformed`?
```
# load dataset
from pycaret.datasets import get_data
hepatitis = get_data('hepatitis')
# init setup
from pycaret.classification import *
clf1 = setup(data = hepatitis, target = 'Class', categorical_features = ['AGE'], max_encoding_ohe=5000)
```


### Reproducible Example
```python
see above
```
### Expected Behavior
see above
### Actual Results
```python-traceback
see above
```
### Installed Versions
rc9 | priority | ohe not working correctly pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the master branch of pycaret pip install u git issue description why is age column not getting one hot encoded in x train transformed load dataset from pycaret datasets import get data hepatitis get data hepatitis init setup from pycaret classification import setup data hepatitis target class categorical features max encoding ohe reproducible example python see above expected behavior see above actual results python traceback see above installed versions | 1 |
263,635 | 8,299,533,165 | IssuesEvent | 2018-09-21 03:29:16 | webhintio/hint | https://api.github.com/repos/webhintio/hint | closed | Response body could be incomplete when using debugging protocol | area:connector connector:CDP priority:high type:bug | I first encountered this when using `chrome` and the `javascript` parser. The parser would fail on large scripts. After some debugging I found out the code we are passing is sometimes truncated.
The reason seems similar to https://github.com/ChromeDevTools/devtools-protocol/issues/44
Looks like we should be waiting for `Network.loadingFinished` before fetching the content.
I'll work on this. | 1.0 | Response body could be incomplete when using debugging protocol - I first encountered this when using `chrome` and the `javascript` parser. The parser would fail on large scripts. After some debugging I found out the code we are passing is sometimes truncated.
The reason seems similar to https://github.com/ChromeDevTools/devtools-protocol/issues/44
Looks like we should be waiting for `Network.loadingFinished` before fetching the content.
I'll work on this. | priority | response body could be incomplete when using debugging protocol i first encountered this when using chrome and javascript parser the parser would fail on large scripts after some debugging i find out the code we are passing is sometimes truncated the reasons seems similar to looks like we should be waiting for network loadingfinished before fetching the content i ll work on this | 1 |
27,082 | 2,690,246,810 | IssuesEvent | 2015-03-31 15:11:26 | cs2103jan2015-t17-1c/main | https://api.github.com/repos/cs2103jan2015-t17-1c/main | closed | The user can perform simple text search for finding an item if the user remembers some keywords from the item description | priority.high | so that the user can get the result of the vague search and perform the further operation on the task record | 1.0 | The user can perform simple text search for finding an item if the user remembers some keywords from the item description - so that the user can get the result of the vague search and perform the further operation on the task record | priority | the user can perform simple text search for finding an item if the user remembers some keywords from the item description so that the user can get the result of the vague search and perform the further operation on the task record | 1 |
709,374 | 24,375,593,453 | IssuesEvent | 2022-10-04 00:23:49 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | Improvements to reports table | Priority-High (Needed for work) function-Reports | We need to add some columns to the list of reports for sorting and filtering.
Report type - loan, loan header, label
Created by - arctos username of person who added the report
Collection - collection the form was customized for
What else?
| 1.0 | Improvements to reports table - We need to add some columns to the list of reports for sorting and filtering.
Report type - loan, loan header, label
Created by - arctos username of person who added the report
Collection - collection the form was customized for
What else?
| priority | improvements to reports table we need to add some columns to the list of reports for sorting and filtering report type loan loan header label created by arctos username of person who added the report collection collection the form was customized for what else | 1 |
449,468 | 12,968,919,212 | IssuesEvent | 2020-07-21 06:49:17 | wowdev/noggit3 | https://api.github.com/repos/wowdev/noggit3 | opened | Allow to adjust fishable, update deep in MH2O attributes | bug priority: high | Currently, we load from the input and copy that 1:1. This leads to TC at least not removing fatigue if users edit water to be shallow. It also means users can’t make a piece of water non-/fishable. | 1.0 | Allow to adjust fishable, update deep in MH2O attributes - Currently, we load from the input and copy that 1:1. This leads to TC at least not removing fatigue if users edit water to be shallow. It also means users can’t make a piece of water non-/fishable. | priority | allow to adjust fishable update deep in attributes currently we load from the input and copy that this leads to tc at least not removing fatigue if users edit water to be shallow it also means users can’t make a piece of water non fishable | 1 |
292,971 | 8,971,394,003 | IssuesEvent | 2019-01-29 15:49:36 | nasa-jpl/LiveViewOpenSource | https://api.github.com/repos/nasa-jpl/LiveViewOpenSource | closed | Provide compatibility for Yocto embedded Linux platforms with display capability | enhancement high priority | Provide compatibility for Yocto embedded Linux platforms with display capability (CARBO project flight Focal Plane Interface Electronics Digital platform: Xilinx UltraScale+ MPSoC Zynq) | 1.0 | Provide compatibility for Yocto embedded Linux platforms with display capability - Provide compatibility for Yocto embedded Linux platforms with display capability (CARBO project flight Focal Plane Interface Electronics Digital platform: Xilinx UltraScale+ MPSoC Zynq) | priority | provide compatibility for yocto embedded linux platforms with display capability provide compatibility for yocto embedded linux platforms with display capability carbo project flight focal plane interface electronics digital platform xilinx ultrascale mpsoc zynq | 1 |
518,304 | 15,026,712,656 | IssuesEvent | 2021-02-01 23:12:00 | Rocket-Eagle/Rocket-Eagle | https://api.github.com/repos/Rocket-Eagle/Rocket-Eagle | opened | Play Online With Others | Basic Game Mechanics Feature High Priority Menu System Multiplayer | Ability to get into a match with another person
**SubFeatures**
-Get into a 4 person lobby
-Client Side Server
**Risk**
- High
**Story Points**
- Story Points: 60 | 1.0 | Play Online With Others - Ability to get into a match with another person
**SubFeatures**
-Get into a 4 person lobby
-Client Side Server
**Risk**
- High
**Story Points**
- Story Points: 60 | priority | play online with others ability to get into a match with another person subfeatures get into a person lobby client side server risk high story points story points | 1 |
812,400 | 30,332,042,357 | IssuesEvent | 2023-07-11 07:09:20 | aoberoi/dotfiles | https://api.github.com/repos/aoberoi/dotfiles | closed | Set up fuzzy finder | priority:high | https://github.com/junegunn/fzf
Aside from normal setup, I'd like to make sure the reverse history search in zsh utilizes fzf. | 1.0 | Set up fuzzy finder - https://github.com/junegunn/fzf
Aside from normal setup, I'd like to make sure the reverse history search in zsh utilizes fzf. | priority | set up fuzzy finder aside from normal setup i d like to make sure the reverse history search in zsh utilizes fzf | 1 |
453,821 | 13,090,802,211 | IssuesEvent | 2020-08-03 04:42:00 | flimlib/flimj-ui | https://api.github.com/repos/flimlib/flimj-ui | closed | Unexpected Intensity Thresh. Behaviour | bug priority:high | Open test2.sdt
Analyze>Lifetime>FLIMJ
Enter Time Bin (ns) = 0.039, click OK
Set Intensity Thresh. = 100
Click Fit Dataset and wait
Set Show to tau as Color
Result: Appears that no pixels have been processed on image and scale bar.
In Export, set Images to tau
Click Export
Result: See that pixels have been processed and have a non-zero value in the image exported into Fiji.
Compare with: Intensity Thresh. = 50
tested on flimj-ui-1.0.7 | 1.0 | Unexpected Intensity Thresh. Behaviour - Open test2.sdt
Analyze>Lifetime>FLIMJ
Enter Time Bin (ns) = 0.039, click OK
Set Intensity Thresh. = 100
Click Fit Dataset and wait
Set Show to tau as Color
Result: Appears that no pixels have been processed on image and scale bar.
In Export, set Images to tau
Click Export
Result: See that pixels have been processed and have a non-zero value in the image exported into Fiji.
Compare with: Intensity Thresh. = 50
tested on flimj-ui-1.0.7 | priority | unexpected intensity thresh behaviour open sdt analyze lifetime flimj enter time bin ns click ok set intensity thresh click fit dataset and wait set show to tau as color result appears that no pixels have been processed on image and scale bar in export set images to tau click export result see that pixels have been processed and have a non zero value in the image exported into fiji compare with intensity thresh tested on flimj ui | 1 |
381,561 | 11,276,714,236 | IssuesEvent | 2020-01-15 00:08:32 | Paulinakhew/web_trader | https://api.github.com/repos/Paulinakhew/web_trader | closed | Set API key as an env variable | bug :bug: easy 😌 enhancement :sparkles: high priority 🚨 | The current API key has been revoked. I'll need to set up the new one as an env var | 1.0 | Set API key as an env variable - The current API key has been revoked. I'll need to set up the new one as an env var | priority | set api key as an env variable the current api key has been revoked i ll need to set up the new one as an env var | 1 |
769,425 | 27,004,866,072 | IssuesEvent | 2023-02-10 10:48:36 | codesydney/migram-frontend | https://api.github.com/repos/codesydney/migram-frontend | opened | feat: add a Checkout button to TaskCard | enhancement high priority | Show a Checkout button when a `Task`/`Offer` is completed.
This should navigate to the route at `/checkout/:taskId`. | 1.0 | feat: add a Checkout button to TaskCard - Show a Checkout button when a `Task`/`Offer` is completed.
This should navigate to the route at `/checkout/:taskId`. | priority | feat add a checkout button to taskcard show a checkout button when a task offer is completed this should navigate to the route at checkout taskid | 1 |
443,273 | 12,780,387,178 | IssuesEvent | 2020-07-01 00:34:23 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | Accessibility: [Two UI]: [Choice group]: In High contrast black disabled option radio button text looks like regular text. | Component: ChoiceGroup High Contrast Priority 2: Normal Status: In PR Type: Bug :bug: |
===== Comments Moved by Partner Bug Service 06/16/2020 09:15 PM UTC =====
Platform:
OS - Windows 10-Version 1909- (Build 18363.778)
Microsoft Edge Dev Build
Version 84.0.502.0 (Official build) dev (64-bit)
Pre-requisites:
High contrast black -Turn on
Repro steps:
1. Load https://account.microsoft.com/internal/twoui/iframe.html?id=common-choicegroup--when-an-option-is-disabled&viewas=engineer in edge browser with valid credentials.
2. Select option B.
3. Observe Disable option in High contrast black.
Expected Result:
Disable option should be in Green color.
Actual Result:
Disable option is visible as regular option. User is unable to find disable option in choice group.
Note:
This behavior was observed in Edge Chromium browser.


| 1.0 | Accessibility: [Two UI]: [Choice group]: In High contrast black disabled option radio button text is looks like regular text. -
===== Comments Moved by Partner Bug Service 06/16/2020 09:15 PM UTC =====
Platform:
OS - Windows 10-Version 1909- (Build 18363.778)
Microsoft Edge Dev Build
Version 84.0.502.0 (Official build) dev (64-bit)
Pre-requisites:
High contrast black -Turn on
Repro steps:
1.Load https://account.microsoft.com/internal/twoui/iframe.html?id=common-choicegroup--when-an-option-is-disabled&viewas=engineer in edge browser with valid credentials.
2. Select option B.
3.Observe Disable option in High contrast black.
Expected Result:
Disable option should be in Green color.
Actual Result:
Disable option is visible as regular option. User is unable to find disable option in choice group.
Note:
This behavior was observed in Edge Chromium browser.


| priority | accessibility in high contrast black disabled option radio button text is looks like regular text comments moved by partner bug service pm utc platform os windows version build microsoft edge dev build version official build dev bit pre requisites high contrast black turn on repro steps load in edge browser with valid credentials select option b observe disable option in high contrast black expected result disable option should be in green color actual result disable option is visible as regular option user is unable to find disable option in choice group note this behavior was observed in edge chromium browser | 1 |
387,419 | 11,461,218,091 | IssuesEvent | 2020-02-07 11:24:29 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to move "Breadcrumbs From Yoast" to SEO option Panel section. | NEXT UPDATE [Priority: HIGH] enhancement | Screenshot: https://monosnap.com/file/nmpkslbZwTP8S18FYFWdSANEFPVKCj
We need to move the "Breadcrumbs From Yoast" from Single option panel to SEO option Panel section. | 1.0 | Need to move "Breadcrumbs From Yoast" to SEO option Panel section. - Screenshot: https://monosnap.com/file/nmpkslbZwTP8S18FYFWdSANEFPVKCj
We need to move the "Breadcrumbs From Yoast" from Single option panel to SEO option Panel section. | priority | need to move breadcrumbs from yoast to seo option panel section screenshot we need to move the breadcrumbs from yoast from single option panel to seo option panel section | 1 |
595,583 | 18,069,429,531 | IssuesEvent | 2021-09-20 23:53:44 | emillly-b/Saturn-Bot | https://api.github.com/repos/emillly-b/Saturn-Bot | opened | Airlock Module | enhancement High Priority | Intro voting using reaction roles
-mods are instant vet
-auto welcome
mirror channel into intros channel
vetting tools? | 1.0 | Airlock Module - Intro voting using reaction roles
-mods are instant vet
-auto welcome
mirror channel into intros channel
vetting tools? | priority | airlock module intro voting using reaction roles mods are instant vet auto welcome mirror channel into intros channel vetting tools | 1 |