Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
725,614 | 24,968,216,343 | IssuesEvent | 2022-11-01 21:27:22 | jesseClegg/Calorie-Tracker-Client-Software-Engineering | https://api.github.com/repos/jesseClegg/Calorie-Tracker-Client-Software-Engineering | closed | Auth | high priority | Need a login page and full auth integration into the app.
This would also be a good time to add Joey's landing page.
We also should add joey as collab | 1.0 | Auth - Need a login page and full auth integration into the app.
This would also be a good time to add Joey's landing page.
We also should add joey as collab | priority | auth need a login page and full auth integration into the app this would also be a good time to add joey s landing page we also should add joey as collab | 1 |
186,021 | 6,732,801,509 | IssuesEvent | 2017-10-18 12:54:23 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | The text boxes are not well formatted in try-it | 0.94-pre-release Priority/High Severity/Minor Type/Bug | The text boxes are not well formatted in try-it


| 1.0 | The text boxes are not well formatted in try-it - The text boxes are not well formatted in try-it


| priority | the text boxes are not well formatted in try it the text boxes are not well formatted in try it | 1 |
268,994 | 8,418,923,178 | IssuesEvent | 2018-10-15 03:45:02 | CS2103-AY1819S1-W17-4/main | https://api.github.com/repos/CS2103-AY1819S1-W17-4/main | closed | Filter/search by 'and'/'or' composite predicates | priority.High type.Enhancement | - Support 'and' and 'or' operations, e.g. "tag>CS2101 or tag>CS2103T" | 1.0 | Filter/search by 'and'/'or' composite predicates - - Support 'and' and 'or' operations, e.g. "tag>CS2101 or tag>CS2103T" | priority | filter search by and or composite predicates support and and or operations e g tag or tag | 1 |
560,479 | 16,597,592,343 | IssuesEvent | 2021-06-01 15:07:23 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | make normalization's hash inclusion in unnested table names optional | SM priority/high type/enhancement | The release of this change depends on https://github.com/airbytehq/airbyte/issues/3522
Consumers of tables produced by normalization cannot easily use nested tables due to the presence of hashes (used for collision prevention). These hashes are generally unnecessary outside of Postgres/similar dbs with low char limits for table names.
We should make this optional and set this to off by default except for Postgres/similar.
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/1200367912513076/1200368129700092) by [Unito](https://www.unito.io)
| 1.0 | make normalization's hash inclusion in unnested table names optional - The release of this change depends on https://github.com/airbytehq/airbyte/issues/3522
Consumers of tables produced by normalization cannot easily use nested tables due to the presence of hashes (used for collision prevention). These hashes are generally unnecessary outside of Postgres/similar dbs with low char limits for table names.
We should make this optional and set this to off by default except for Postgres/similar.
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/1200367912513076/1200368129700092) by [Unito](https://www.unito.io)
| priority | make normalization s hash inclusion in unnested table names optional the release of this change depends on consumers of tables produced by normalization cannot easily use nested tables due to the presence of hashes used for collision prevention these hashes are generally unnecessary outside of postgres similar dbs with low char limits for table names we should make this optional and set this to off by default except for postgres similar issue is synchronized with this by | 1 |
355,014 | 10,575,628,246 | IssuesEvent | 2019-10-07 16:05:41 | arjo129/darpasubt | https://api.github.com/repos/arjo129/darpasubt | closed | [UGV] Chassis protection w/ cable management | priority.high ugv1 | A proper, polished chassis protection is required for qualification video.
This also includes having adequate cable management for safety and robustness against external elements. | 1.0 | [UGV] Chassis protection w/ cable management - A proper, polished chassis protection is required for qualification video.
This also includes having adequate cable management for safety and robustness against external elements. | priority | chassis protection w cable management a proper polished chassis protection is required for qualification video this also includes having adequate cable management for safety and robustness against external elements | 1 |
344,787 | 10,349,640,108 | IssuesEvent | 2019-09-04 23:18:11 | oslc-op/jira-migration-landfill | https://api.github.com/repos/oslc-op/jira-migration-landfill | closed | literal_value of the oslc_where syntax is not well-defined | Core: Query Priority: High Xtra: Jira | The spec is not clear on how to interpret the literals w/o the xsd data type.
E.g.
The terms boolean and decimal are short forms for typed literals. For example, true is a short form for "true"^^xsd:boolean, 42 is a short form for "42"^^xsd:integer and 3.14159 is a short form for "3.14159"^^xsd:decimal.
does not specify how I am supposed to know whether 42 is an integer but 3.14 is a decimal (or a single-precision float?), let alone how I am supposed to ensure that "true" is a boolean True, not a "true" string literal.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-134 (opened by @berezovskyi; previously assigned to @oslc-bot)_
| 1.0 | literal_value of the oslc_where syntax is not well-defined - The spec is not clear on how to interpret the literals w/o the xsd data type.
E.g.
The terms boolean and decimal are short forms for typed literals. For example, true is a short form for "true"^^xsd:boolean, 42 is a short form for "42"^^xsd:integer and 3.14159 is a short form for "3.14159"^^xsd:decimal.
does not specify how I am supposed to know whether 42 is an integer but 3.14 is a decimal (or a single-precision float?), let alone how I am supposed to ensure that "true" is a boolean True, not a "true" string literal.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-134 (opened by @berezovskyi; previously assigned to @oslc-bot)_
| priority | literal value of the oslc where syntax is not well defined the spec is not clear on how to interpret the literals w o the xsd data type e g the terms boolean and decimal are short forms for typed literals for example true is a short form for true xsd boolean is a short form for xsd integer and is a short form for xsd decimal does not specify how i am supposed to know whether is an integer but is a decimal or a single precision float let alone how i am supposed to ensure that true is a boolean true not a true string literal migrated from opened by berezovskyi previously assigned to oslc bot | 1 |
559,550 | 16,565,564,489 | IssuesEvent | 2021-05-29 10:29:28 | olive-editor/olive | https://api.github.com/repos/olive-editor/olive | closed | [NODES] Video and/or audio breaks when reconnecting nodes | High Priority Nodes/Compositing | <!-- ⚠ Do not delete this issue template! ⚠ -->
**Commit Hash** <!-- 8 character string of letters/numbers in title bar or Help > About dialog (e.g. 3ea173c9) -->
a0200f22
**Platform** <!-- e.g. Windows 10, Ubuntu 20.04 or macOS 10.15 -->
Kubuntu 21.04
**Summary**
When reconnecting nodes by CTRL + Dragging to override a connection with another identical connection, or just deleting the node connect and making a new one, audio and/or video breaks (depending upon which connection you change). I'm talking specifically about the connection from the media node to the transform and or volume node.

Reconnecting from the volume or transform nodes to their respective clip nodes causes no issues.
Please let me know if you need more information, but I think this should be pretty reproducible for testing/debugging purposes.
| 1.0 | [NODES] Video and/or audio breaks when reconnecting nodes - <!-- ⚠ Do not delete this issue template! ⚠ -->
**Commit Hash** <!-- 8 character string of letters/numbers in title bar or Help > About dialog (e.g. 3ea173c9) -->
a0200f22
**Platform** <!-- e.g. Windows 10, Ubuntu 20.04 or macOS 10.15 -->
Kubuntu 21.04
**Summary**
When reconnecting nodes by CTRL + Dragging to override a connection with another identical connection, or just deleting the node connect and making a new one, audio and/or video breaks (depending upon which connection you change). I'm talking specifically about the connection from the media node to the transform and or volume node.

Reconnecting from the volume or transform nodes to their respective clip nodes causes no issues.
Please let me know if you need more information, but I think this should be pretty reproducible for testing/debugging purposes.
| priority | video and or audio breaks when reconnecting nodes commit hash about dialog e g platform kubuntu summary when reconnecting nodes by ctrl dragging to override a connection with another identical connection or just deleting the node connect and making a new one audio and or video breaks depending upon which connection you change i m talking specifically about the connection from the media node to the transform and or volume node reconnecting from the volume or transform nodes to their respective clip nodes causes no issues please let me know if you need more information but i think this should be pretty reproducible for testing debugging purposes | 1 |
328,387 | 9,994,130,713 | IssuesEvent | 2019-07-11 16:53:01 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | KeyError: 'PublicIPAddress' during prepare command | HighPriority bug | I experienced the following KeyError exception during a "prepare" run:
```
$ python cloudmapper.py prepare --config [CONFIG] --account [ACCOUNT]
[...]
Traceback (most recent call last):
  File "cloudmapper.py", line 73, in <module>
    main()
  File "cloudmapper.py", line 67, in main
    commands[command].run(arguments)
  File "[PATH]/cloudmapper/commands/prepare.py", line 649, in run
    prepare(account, config, outputfilter)
  File "[PATH]/cloudmapper/commands/prepare.py", line 569, in prepare
    cytoscape_json = build_data_structure(account, config, outputfilter)
  File "[PATH]/cloudmapper/commands/prepare.py", line 479, in build_data_structure
    for c, reasons in get_connections(cidrs, vpc, outputfilter).items():
  File "[PATH]/cloudmapper/commands/prepare.py", line 211, in get_connections
    for ip in sourceInstance.ips:
  File "[PATH]/cloudmapper/shared/nodes.py", line 685, in ips
    ips.append(cluster_node['PublicIPAddress'])
KeyError: 'PublicIPAddress'
```
I just added a try-except in the _shared/nodes.py_ file as workaround, which solved it for the moment:
```
@property
def ips(self):
    ips = []
    for cluster_node in self._json_blob['ClusterNodes']:
        try:
            ips.append(cluster_node['PrivateIPAddress'])
            ips.append(cluster_node['PublicIPAddress'])
        except:
            continue
    return ips
```
EDIT:
It indeed seems to expect both private and public IPs to be assigned to Redshift cluster nodes:
```
$ cat account-data/[ACCOUNT]/eu-central-1/redshift-describe-clusters.json
{
  "Clusters": [
    {
      "AllowVersionUpgrade": true,
      "AutomatedSnapshotRetentionPeriod": 5,
      "AvailabilityZone": "eu-central-1b",
      "ClusterCreateTime": "2019-01-27T00:47:29.584000+00:00",
      "ClusterIdentifier": "CLUSTERNAME",
      "ClusterNodes": [
        {
          "NodeRole": "LEADER",
          "PrivateIPAddress": "10.0.0.1"
        },
        {
          "NodeRole": "COMPUTE-0",
          "PrivateIPAddress": "10.0.0.3"
        },
        {
          "NodeRole": "COMPUTE-1",
          "PrivateIPAddress": "10.0.0.2"
        },
        {
          "NodeRole": "COMPUTE-2",
          "PrivateIPAddress": "10.0.0.4"
        },
        {
          "NodeRole": "COMPUTE-3",
          "PrivateIPAddress": "10.0.0.5"
        }
      ],
      [...]
``` | 1.0 | KeyError: 'PublicIPAddress' during prepare command - I experienced the following KeyError exception during a "prepare" run:
```
$ python cloudmapper.py prepare --config [CONFIG] --account [ACCOUNT]
[...]
Traceback (most recent call last):
  File "cloudmapper.py", line 73, in <module>
    main()
  File "cloudmapper.py", line 67, in main
    commands[command].run(arguments)
  File "[PATH]/cloudmapper/commands/prepare.py", line 649, in run
    prepare(account, config, outputfilter)
  File "[PATH]/cloudmapper/commands/prepare.py", line 569, in prepare
    cytoscape_json = build_data_structure(account, config, outputfilter)
  File "[PATH]/cloudmapper/commands/prepare.py", line 479, in build_data_structure
    for c, reasons in get_connections(cidrs, vpc, outputfilter).items():
  File "[PATH]/cloudmapper/commands/prepare.py", line 211, in get_connections
    for ip in sourceInstance.ips:
  File "[PATH]/cloudmapper/shared/nodes.py", line 685, in ips
    ips.append(cluster_node['PublicIPAddress'])
KeyError: 'PublicIPAddress'
```
I just added a try-except in the _shared/nodes.py_ file as workaround, which solved it for the moment:
```
@property
def ips(self):
    ips = []
    for cluster_node in self._json_blob['ClusterNodes']:
        try:
            ips.append(cluster_node['PrivateIPAddress'])
            ips.append(cluster_node['PublicIPAddress'])
        except:
            continue
    return ips
```
EDIT:
It indeed seems to expect both private and public IPs to be assigned to Redshift cluster nodes:
```
$ cat account-data/[ACCOUNT]/eu-central-1/redshift-describe-clusters.json
{
  "Clusters": [
    {
      "AllowVersionUpgrade": true,
      "AutomatedSnapshotRetentionPeriod": 5,
      "AvailabilityZone": "eu-central-1b",
      "ClusterCreateTime": "2019-01-27T00:47:29.584000+00:00",
      "ClusterIdentifier": "CLUSTERNAME",
      "ClusterNodes": [
        {
          "NodeRole": "LEADER",
          "PrivateIPAddress": "10.0.0.1"
        },
        {
          "NodeRole": "COMPUTE-0",
          "PrivateIPAddress": "10.0.0.3"
        },
        {
          "NodeRole": "COMPUTE-1",
          "PrivateIPAddress": "10.0.0.2"
        },
        {
          "NodeRole": "COMPUTE-2",
          "PrivateIPAddress": "10.0.0.4"
        },
        {
          "NodeRole": "COMPUTE-3",
          "PrivateIPAddress": "10.0.0.5"
        }
      ],
      [...]
``` | priority | keyerror publicipaddress during prepare command i experienced the following keyerror exception during a prepare run python cloudmapper py prepare config account traceback most recent call last file cloudmapper py line in main file cloudmapper py line in main commands run arguments file cloudmapper commands prepare py line in run prepare account config outputfilter file cloudmapper commands prepare py line in prepare cytoscape json build data structure account config outputfilter file cloudmapper commands prepare py line in build data structure for c reasons in get connections cidrs vpc outputfilter items file cloudmapper commands prepare py line in get connections for ip in sourceinstance ips file cloudmapper shared nodes py line in ips ips append cluster node keyerror publicipaddress i just added a try except in the shared nodes py file as workaround which solved it for the moment property def ips self ips for cluster node in self json blob try ips append cluster node ips append cluster node except continue return ips edit it indeed seems to expect both private and public ips to be assigned to redshift cluster nodes cat account data eu central redshift describe clusters json clusters allowversionupgrade true automatedsnapshotretentionperiod availabilityzone eu central clustercreatetime clusteridentifier clustername clusternodes noderole leader privateipaddress noderole compute privateipaddress noderole compute privateipaddress noderole compute privateipaddress noderole compute privateipaddress | 1 |
261,155 | 8,224,963,104 | IssuesEvent | 2018-09-06 14:59:57 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | Update Struts Version (2.5.17) | Priority - High Type - Enhancement Type -Task | Apache announced a possible vulnerability in the Struts versions (2.5.1 - 2.5.16)
https://nvd.nist.gov/vuln/detail/CVE-2018-11776
For security, we need to change the version to 2.5.17. | 1.0 | Update Struts Version (2.5.17) - Apache announced a possible vulnerability in the Struts versions (2.5.1 - 2.5.16)
https://nvd.nist.gov/vuln/detail/CVE-2018-11776
For security, we need to change the version to 2.5.17. | priority | update struts version apache announced a possible vulnerability in the struts versions for security we need to change the version to | 1 |
543,994 | 15,888,408,256 | IssuesEvent | 2021-04-10 07:11:30 | AY2021S2-CS2113-W10-3/tp | https://api.github.com/repos/AY2021S2-CS2113-W10-3/tp | closed | [PE-D] List Function Does Not Follow The Conventional Format | priority.High severity.VeryLow type.Bug | "list" function does not follow the format used in other functions, i.e. "list p/PROJECT_NAME".

<!--session: 1617437683676-c79db70e-3b76-47c5-b72f-e5698e9ed087-->
-------------
Labels: `severity.VeryLow` `type.FeatureFlaw`
original: baggiiiie/ped#6 | 1.0 | [PE-D] List Function Does Not Follow The Conventional Format - "list" function does not follow the format used in other functions, i.e. "list p/PROJECT_NAME".

<!--session: 1617437683676-c79db70e-3b76-47c5-b72f-e5698e9ed087-->
-------------
Labels: `severity.VeryLow` `type.FeatureFlaw`
original: baggiiiie/ped#6 | priority | list function does not follow the conventional format list function does not follow the format used in other functions i e list p project name labels severity verylow type featureflaw original baggiiiie ped | 1 |
506,083 | 14,658,050,865 | IssuesEvent | 2020-12-28 16:58:13 | bounswe/bounswe2020group8 | https://api.github.com/repos/bounswe/bounswe2020group8 | closed | Add Main Product not adding parameter values correctly | Priority: High bug web | **Describe the bug**
When vendor adds a main product, the parameter values are added as one string instead of array.
<!---
If not found, remove this part
--->
**Possible code location**
createMainProduct method in screens/VendorAccount/AddProduct.js
**To Reproduce**
Steps to reproduce the behavior:
1. Sign in with a vendor
2. Go to Add Product in the side menu in My Products
3. Scroll down to Create Main Product
4. Fill in form with a parameter name and value
5. Check Main Products
**Expected behavior**
The parameter values should be listed as an array.
**Additional context**
This should be solved if the parameter values are stored as an array. | 1.0 | Add Main Product not adding parameter values correctly - **Describe the bug**
When vendor adds a main product, the parameter values are added as one string instead of array.
<!---
If not found, remove this part
--->
**Possible code location**
createMainProduct method in screens/VendorAccount/AddProduct.js
**To Reproduce**
Steps to reproduce the behavior:
1. Sign in with a vendor
2. Go to Add Product in the side menu in My Products
3. Scroll down to Create Main Product
4. Fill in form with a parameter name and value
5. Check Main Products
**Expected behavior**
The parameter values should be listed as an array.
**Additional context**
This should be solved if the parameter values are stored as an array. | priority | add main product not adding parameter values correctly describe the bug when vendor adds a main product the parameter values are added as one string instead of array if not found remove this part possible code location createmainproduct method in screens vendoraccount addproduct js to reproduce steps to reproduce the behavior sign in with a vendor go to add product in the side menu in my products scroll down to create main product fill in form with a parameter name and value check main products expected behavior the parameter values should be listed as an array additional context this should be solved if the parameter values are stored as an array | 1 |
798,099 | 28,236,141,172 | IssuesEvent | 2023-04-06 00:47:44 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | closed | [Bug]: Related item (type: lookup, reference_to: !<tag:yaml.org,2002:js/function>) field on the task detail page is not displayed | bug done priority: High | ### Description

### Steps To Reproduce
1. Create a new task in the office module
### Version
2.4.8 | 1.0 | [Bug]: Related item (type: lookup, reference_to: !<tag:yaml.org,2002:js/function>) field on the task detail page is not displayed - ### Description

### Steps To Reproduce
1. Create a new task in the office module
### Version
2.4.8 | priority | related item type lookup reference to field on the task detail page is not displayed description steps to reproduce create a new task in the office module version | 1 |
40,940 | 2,868,956,043 | IssuesEvent | 2015-06-05 22:11:16 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Convert "pub build" to use barback | enhancement Fixed Priority-High | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#13880_
----
Right now, it has some hardcoded stuff. It should use barback and transformers for its back-end. | 1.0 | Convert "pub build" to use barback - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#13880_
----
Right now, it has some hardcoded stuff. It should use barback and transformers for its back-end. | priority | convert pub build to use barback issue by originally opened as dart lang sdk right now it has some hardcoded stuff it should use barback and transformers for its back end | 1 |
254,654 | 8,081,373,798 | IssuesEvent | 2018-08-08 03:06:24 | vmware/harbor | https://api.github.com/repos/vmware/harbor | closed | Helm Chart Card View Refactor | priority/high target/1.6.0 | 1. card view of chart

2. card view of chart versions

| 1.0 | Helm Chart Card View Refactor - 1. card view of chart

2. card view of chart versions

| priority | helm chart card view refactor card view of chart card view of chart versions | 1 |
170,953 | 6,475,268,698 | IssuesEvent | 2017-08-17 20:00:12 | semperfiwebdesign/all-in-one-seo-pack | https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack | closed | Sorry, you are not allowed to access this page when activating Social Meta module | Bug Priority | High | When activating the Social Meta module, the first time you click on the Social Meta menu item you get a Sorry, you are not allowed to access this page error.
This is because the menu link is http://testsite.dev/wp-admin/admin.php?page=all-in-one-seo-pack/modules/aioseop_opengraph.php when it should be http://testsite.dev/wp-admin/admin.php?page=aiosp_opengraph
Need to find when this broke. | 1.0 | Sorry, you are not allowed to access this page when activating Social Meta module - When activating the Social Meta module, the first time you click on the Social Meta menu item you get a Sorry, you are not allowed to access this page error.
This is because the menu link is http://testsite.dev/wp-admin/admin.php?page=all-in-one-seo-pack/modules/aioseop_opengraph.php when it should be http://testsite.dev/wp-admin/admin.php?page=aiosp_opengraph
Need to find when this broke. | priority | sorry you are not allowed to access this page when activating social meta module when activating the social meta module the first time you click on the social meta menu item you get a sorry you are not allowed to access this page error this is because the menu link is when it should be need to find when this broke | 1 |
664,856 | 22,290,958,622 | IssuesEvent | 2022-06-12 11:01:14 | h-dt/hola-clone | https://api.github.com/repos/h-dt/hola-clone | opened | [dev, update] Add Skill logic to Board Read and Comment | enhancement status: in progress priority:high | ## Body
### Items to add/improve
Add Skill logic to Board Read and Comment
### Detailed requirements
- Add the Comment and Skill logic to the existing logic that only built Board
## issue-number
> #6
| 1.0 | [dev, update] Add Skill logic to Board Read and Comment - ## Body
### Items to add/improve
Add Skill logic to Board Read and Comment
### Detailed requirements
- Add the Comment and Skill logic to the existing logic that only built Board
## issue-number
> #6
| priority | add skill logic to board read and comment body items to add improve add skill logic to board read and comment detailed requirements add the comment and skill logic to the existing logic that only built board issue number | 1 |
699,999 | 24,041,139,083 | IssuesEvent | 2022-09-16 02:00:44 | wintercms/winter | https://api.github.com/repos/wintercms/winter | closed | [v1.2] CMS editor doesn't save all components added to page/layout/etc | Type: Bug Priority: High | ### Winter CMS Build
dev-develop
### PHP Version
8.0
### Database engine
MySQL/MariaDB
### Plugins installed
Winter.Pages
### Issue description
Adding multiple components to a layout/page from the CMS editor does not work, and only the last added component is saved to the actual file itself. All other components are lost.
### Steps to replicate
1. Install fresh winter cms project
2. Install Winter.Pages plugin
3. Add `static page` and `static menu` components to the default layout.
4. Save
5. View the `layouts/default.htm` file from your IDE/editor and see only one component is actually saved to the file.
### Workaround
Add components manually from your editor/IDE without using the backend. | 1.0 | [v1.2] CMS editor doesn't save all components added to page/layout/etc - ### Winter CMS Build
dev-develop
### PHP Version
8.0
### Database engine
MySQL/MariaDB
### Plugins installed
Winter.Pages
### Issue description
Adding multiple components to a layout/page from the CMS editor does not work, and only the last added component is saved to the actual file itself. All other components are lost.
### Steps to replicate
1. Install fresh winter cms project
2. Install Winter.Pages plugin
3. Add `static page` and `static menu` components to the default layout.
4. Save
5. View the `layouts/default.htm` file from your IDE/editor and see only one component is actually saved to the file.
### Workaround
Add components manually from your editor/IDE without using the backend. | priority | cms editor doesn t save all components added to page layout etc winter cms build dev develop php version database engine mysql mariadb plugins installed winter pages issue description adding multiple components to a layout page from the cms editor does not work and only the last added component is saved to the actual file itself all other components are lost steps to replicate install fresh winter cms project install winter pages plugin add static page and static menu components to the default layout save view the layouts default htm file from your ide editor and see only one component is actually saved to the file workaround add components manually from your editor ide without using the backend | 1 |
695,170 | 23,847,510,958 | IssuesEvent | 2022-09-06 15:03:20 | Qiskit/qiskit-ibm-runtime | https://api.github.com/repos/Qiskit/qiskit-ibm-runtime | closed | Sampler returns SamplerResult with dict | bug priority: high | **Describe the bug**
Sampler returns SamplerResult with a list of dict. However, it should be SamplerResult with a list of QuasiDistribution.
See https://github.com/Qiskit/qiskit-terra/blob/df54ae26d6125bee671ad27f4d412a92912aad1a/qiskit/primitives/sampler_result.py#L42 for detail.
====
This may cause bugs in the future. In particular, applications will be written assuming the type.
| 1.0 | Sampler returns SamplerResult with dict - **Describe the bug**
Sampler returns SamplerResult with a list of dict. However, it should be SamplerResult with a list of QuasiDistribution.
See https://github.com/Qiskit/qiskit-terra/blob/df54ae26d6125bee671ad27f4d412a92912aad1a/qiskit/primitives/sampler_result.py#L42 for detail.
====
This may cause bugs in the future. In particular, applications will be written assuming the type.
| priority | sampler returns samplerresult with dict describe the bug sampler returns samplerresult with a list of dict however it should be samplerresult with a list of quasidistribution see for detail this may cause bugs in the future in particular applications will be written assuming the type | 1 |
832,387 | 32,078,257,919 | IssuesEvent | 2023-09-25 12:27:16 | risingwavelabs/risingwave | https://api.github.com/repos/risingwavelabs/risingwave | opened | timestamp panic when selecting from kafka source with _rw_kafka_timestamp | type/bug priority/high | ```
create source test2 (v int) with (
    connector = 'kafka',
    topic = 'test',
    properties.bootstrap.server = '127.0.0.1:29092',
    scan.start_up.mode = 'earliest',
) FORMAT PLAIN ENCODE JSON
dev=> select * from test2 where _rw_kafka_timestamp >= '2023-09-25 10:34:33.000000+00:00';
v
---
1
2
(2 rows)
```
This is OK
But:
```
dev=> select * from test2 where _rw_kafka_timestamp >= '2023-09-25 10:34:33.000000';
ERROR: Panicked when processing: called `Result::unwrap()` on an `Err` value: Expr error: Unsupported function: cast(varchar) -> timestamptz
   0: std::backtrace_rs::backtrace::libunwind::trace
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
   1: std::backtrace_rs::backtrace::trace_unsynchronized
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2: std::backtrace::Backtrace::create
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/backtrace.rs:331:13
   3: <risingwave_common::error::RwError as core::convert::From<risingwave_common::error::ErrorCode>>::from
             at ./src/common/src/error.rs:174:33
   4: <T as core::convert::Into<U>>::into
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/convert/mod.rs:716:9
   5: risingwave_expr::error::<impl core::convert::From<risingwave_expr::error::ExprError> for risingwave_common::error::RwError>::from
             at ./src/expr/src/error.rs:91:9
   6: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/result.rs:1962:27
   7: risingwave_frontend::expr::ExprImpl::eval_row::{{closure}}
             at ./src/frontend/src/expr/mod.rs:312:28
   8: futures_util::future::future::FutureExt::now_or_never
             at /Users/martin/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/future/future/mod.rs:605:15
   9: risingwave_frontend::expr::ExprImpl::try_fold_const
             at ./src/frontend/src/expr/mod.rs:324:13
  10: risingwave_frontend::optimizer::plan_node::logical_source::expr_to_kafka_timestamp_range::{{closure}}
             at ./src/frontend/src/optimizer/plan_node/logical_source.rs:342:46
  11: risingwave_frontend::optimizer::plan_node::logical_source::expr_to_kafka_timestamp_range
             at ./src/frontend/src/optimizer/plan_node/logical_source.rs:372:58
```
or
```
dev=> select * from test2 where _rw_kafka_timestamp >= TO_TIMESTAMP('2023-09-25 10:34:33.000000', 'YYYY-MM-DD HH24:MI:SS.US');
ERROR: Panicked when processing: called `Result::unwrap()` on an `Err` value: Expr error: Unsupported function: to_timestamp should have been rewritten to include timezone
   0: std::backtrace_rs::backtrace::libunwind::trace
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
   1: std::backtrace_rs::backtrace::trace_unsynchronized
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2: std::backtrace::Backtrace::create
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/backtrace.rs:331:13
   3: <risingwave_common::error::RwError as core::convert::From<risingwave_common::error::ErrorCode>>::from
             at ./src/common/src/error.rs:174:33
   4: <T as core::convert::Into<U>>::into
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/convert/mod.rs:716:9
   5: risingwave_expr::error::<impl core::convert::From<risingwave_expr::error::ExprError> for risingwave_common::error::RwError>::from
             at ./src/expr/src/error.rs:91:9
   6: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
             at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/result.rs:1962:27
   7: risingwave_frontend::expr::ExprImpl::eval_row::{{closure}}
             at ./src/frontend/src/expr/mod.rs:312:28
   8: futures_util::future::future::FutureExt::now_or_never
```
We remark that the last query is generated by the users' Superset automatically, so the first query, as the workaround, cannot work. | 1.0 | timestamp panic when selecting from kafka source with _rw_kafka_timestamp - ```
create source test2 (v int) with (
connector = 'kafka',
topic = 'test',
properties.bootstrap.server = '127.0.0.1:29092',
scan.start_up.mode = 'earliest',
) FORMAT PLAIN ENCODE JSON
dev=> select * from test2 where _rw_kafka_timestamp >= '2023-09-25 10:34:33.000000+00:00';
v
---
1
2
(2 rows)
```
This is OK
But:
```
dev=> select * from test2 where _rw_kafka_timestamp >= '2023-09-25 10:34:33.000000';
ERROR: Panicked when processing: called `Result::unwrap()` on an `Err` value: Expr error: Unsupported function: cast(varchar) -> timestamptz
0: std::backtrace_rs::backtrace::libunwind::trace
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
1: std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: std::backtrace::Backtrace::create
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/backtrace.rs:331:13
3: <risingwave_common::error::RwError as core::convert::From<risingwave_common::error::ErrorCode>>::from
at ./src/common/src/error.rs:174:33
4: <T as core::convert::Into<U>>::into
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/convert/mod.rs:716:9
5: risingwave_expr::error::<impl core::convert::From<risingwave_expr::error::ExprError> for risingwave_common::error::RwError>::from
at ./src/expr/src/error.rs:91:9
6: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/result.rs:1962:27
7: risingwave_frontend::expr::ExprImpl::eval_row::{{closure}}
at ./src/frontend/src/expr/mod.rs:312:28
8: futures_util::future::future::FutureExt::now_or_never
at /Users/martin/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/future/future/mod.rs:605:15
9: risingwave_frontend::expr::ExprImpl::try_fold_const
at ./src/frontend/src/expr/mod.rs:324:13
10: risingwave_frontend::optimizer::plan_node::logical_source::expr_to_kafka_timestamp_range::{{closure}}
at ./src/frontend/src/optimizer/plan_node/logical_source.rs:342:46
11: risingwave_frontend::optimizer::plan_node::logical_source::expr_to_kafka_timestamp_range
at ./src/frontend/src/optimizer/plan_node/logical_source.rs:372:58
```
or
```
dev=> select * from test2 where _rw_kafka_timestamp >= TO_TIMESTAMP('2023-09-25 10:34:33.000000', 'YYYY-MM-DD HH24:MI:SS.US');
ERROR: Panicked when processing: called `Result::unwrap()` on an `Err` value: Expr error: Unsupported function: to_timestamp should have been rewritten to include timezone
0: std::backtrace_rs::backtrace::libunwind::trace
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
1: std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: std::backtrace::Backtrace::create
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/std/src/backtrace.rs:331:13
3: <risingwave_common::error::RwError as core::convert::From<risingwave_common::error::ErrorCode>>::from
at ./src/common/src/error.rs:174:33
4: <T as core::convert::Into<U>>::into
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/convert/mod.rs:716:9
5: risingwave_expr::error::<impl core::convert::From<risingwave_expr::error::ExprError> for risingwave_common::error::RwError>::from
at ./src/expr/src/error.rs:91:9
6: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
at /rustc/62ebe3a2b177d50ec664798d731b8a8d1a9120d1/library/core/src/result.rs:1962:27
7: risingwave_frontend::expr::ExprImpl::eval_row::{{closure}}
at ./src/frontend/src/expr/mod.rs:312:28
8: futures_util::future::future::FutureExt::now_or_never
```
We remark that the last query is generated by the users' Superset automatically, so the first query, as the workaround, cannot work. | priority | timestamp panic when selecting from kafka source with rw kafka timestamp create source v int with connector kafka topic test properties bootstrap server scan start up mode earliest format plain encode json dev select from where rw kafka timestamp v rows this is ok but dev select from where rw kafka timestamp error panicked when processing called result unwrap on an err value expr error unsupported function cast varchar timestamptz std backtrace rs backtrace libunwind trace at rustc library std src backtrace src backtrace libunwind rs std backtrace rs backtrace trace unsynchronized at rustc library std src backtrace src backtrace mod rs std backtrace backtrace create at rustc library std src backtrace rs from at src common src error rs into at rustc library core src convert mod rs risingwave expr error for risingwave common error rwerror from at src expr src error rs as core ops try trait fromresidual from residual at rustc library core src result rs risingwave frontend expr exprimpl eval row closure at src frontend src expr mod rs futures util future future futureext now or never at users martin cargo registry src index crates io futures util src future future mod rs risingwave frontend expr exprimpl try fold const at src frontend src expr mod rs risingwave frontend optimizer plan node logical source expr to kafka timestamp range closure at src frontend src optimizer plan node logical source rs risingwave frontend optimizer plan node logical source expr to kafka timestamp range at src frontend src optimizer plan node logical source rs or dev select from where rw kafka timestamp to timestamp yyyy mm dd mi ss us error panicked when processing called result unwrap on an err value expr error unsupported function to timestamp should have been rewritten to include timezone std backtrace rs backtrace libunwind trace at rustc 
library std src backtrace src backtrace libunwind rs std backtrace rs backtrace trace unsynchronized at rustc library std src backtrace src backtrace mod rs std backtrace backtrace create at rustc library std src backtrace rs from at src common src error rs into at rustc library core src convert mod rs risingwave expr error for risingwave common error rwerror from at src expr src error rs as core ops try trait fromresidual from residual at rustc library core src result rs risingwave frontend expr exprimpl eval row closure at src frontend src expr mod rs futures util future future futureext now or never we remark that the last query is generated by the users superset automatically so the first query as the workaround cannot work | 1 |
756,580 | 26,477,466,520 | IssuesEvent | 2023-01-17 12:17:34 | codersforcauses/poops | https://api.github.com/repos/codersforcauses/poops | closed | Add vet concerns to notes | backend enhancement priority::high point::2 | **Is your feature request related to a problem? Please describe.**
Vet concerns are currently not added to the notes of the specific visit.
**Describe the solution you'd like**
Vet concerns should be added to the notes of the specific visit.
And also add the vet concerns to a collection called `vet_concerns`.
The schema for the document should be:
```yaml
/vet_concerns
/{vet_concern}
- user_uid: string
- user_name: string
- user_email: string
- user_phone?: integer
- client_name: string
- pet_name: string
- visit_time: timestamp
- visit_id: string
- detail: string
- created_at: timestamp
```
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Add vet concerns to notes - **Is your feature request related to a problem? Please describe.**
Vet concerns are currently not added to the notes of the specific visit.
**Describe the solution you'd like**
Vet concerns should be added to the notes of the specific visit.
And also add the vet concerns to a collection called `vet_concerns`.
The schema for the document should be:
```yaml
/vet_concerns
/{vet_concern}
- user_uid: string
- user_name: string
- user_email: string
- user_phone?: integer
- client_name: string
- pet_name: string
- visit_time: timestamp
- visit_id: string
- detail: string
- created_at: timestamp
```
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | add vet concerns to notes is your feature request related to a problem please describe vet concerns are currently not added to the notes of the specific visit describe the solution you d like vet concerns should be added to the notes of the specific visit and also add the vet concerns to a collection called vet concerns the schema for the document should be yaml vet concerns vet concern user uid string user name string user email string user phone integer client name string pet name string visit time timestamp visit id string detail string created at timestamp describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
121,832 | 4,821,931,266 | IssuesEvent | 2016-11-05 16:00:51 | California-Planet-Search/radvel | https://api.github.com/repos/California-Planet-Search/radvel | closed | "per tc e w k" fitting basis not working | bug priority:high | Fitted values are crazy when working in this basis. Need to check the basis conversions.
| 1.0 | "per tc e w k" fitting basis not working - Fitted values are crazy when working in this basis. Need to check the basis conversions.
| priority | per tc e w k fitting basis not working fitted values are crazy when working in this basis need to check the basis conversions | 1 |
512,811 | 14,910,081,835 | IssuesEvent | 2021-01-22 09:03:39 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Airbyte should fail when instance version doesn't match data version | priority/high type/enhancement | ## Tell us about the problem you're trying to solve
* If a user upgrades the version of Airbyte but doesn't run the appropriate migration of the underlying data, then the behavior is undefined and potentially will corrupt data.
## Describe the solution youโd like
* Airbyte should not allow starting up when data version and airbyte version are incompatible. | 1.0 | Airbyte should fail when instance version doesn't match data version - ## Tell us about the problem you're trying to solve
* If a user upgrades the version of Airbyte but doesn't run the appropriate migration of the underlying data, then the behavior is undefined and potentially will corrupt data.
## Describe the solution you'd like
* Airbyte should not allow starting up when data version and airbyte version are incompatible. | priority | airbyte should fail when instance version doesn t match data version tell us about the problem you re trying to solve if a user upgrades the version of airbyte but doesn t run the appropriate migration of the underlying data then the behavior is undefined and potentially will corrupt data describe the solution you d like airbyte should not allow starting up when data version and airbyte version are incompatible | 1 |
344,420 | 10,344,410,025 | IssuesEvent | 2019-09-04 11:09:26 | openshift/odo | https://api.github.com/repos/openshift/odo | closed | "odo service create" doesn't set parameters correctly unless used in interactive mode | priority/High | [kind/bug]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
- Operating System: Fedora
- Output of `odo version`: master
## How did you run odo exactly?
- Interactively create a service using just "odo service create" command and set the parameters as requested interactively. For example:
```sh
$ odo service create
? Which kind of service do you wish to create database
? Which database service class should we use dh-postgresql-apb
? Which service plan should we use dev
? Enter a value for string property postgresql_database (PostgreSQL Database Name): mydata
? Enter a value for string property postgresql_password (PostgreSQL Password): secret
? Enter a value for string property postgresql_user (PostgreSQL User): luke
? Enter a value for string property postgresql_version (PostgreSQL Version): 10
? How should we name your service dh-postgresql-apb
? Output the non-interactive version of the selected options Yes
? Wait for the service to be ready No
✓ Creating service [80ms]
✓ Service 'dh-postgresql-apb' was created
Progress of the provisioning will not be reported and might take a long time.
You can see the current status by executing 'odo service list'
Equivalent command:
odo service create dh-postgresql-apb dh-postgresql-apb --plan dev -p postgresql_database=mydata -p postgresql_password=secret -p postgresql_user=luke -p postgresql_version=10
$ oc describe po/postgresql-289609a5-c4b1-11e9-85e6-0242ac11000d-1-x2vhq | grep -A3 Env
Environment:
POSTGRESQL_PASSWORD: secret
POSTGRESQL_USER: luke
POSTGRESQL_DATABASE: mydata
```
- Use the "Equivalent command" printed in above output to create the service:
```sh
$ odo service create dh-postgresql-apb dh-postgresql-apb --plan dev -p postgresql_database=mydata -p postgresql_password=secret -p postgresql_user=luke -p postgresql_version=10
$ oc describe po/postgresql-a210cbb2-c4bd-11e9-85e6-0242ac11000d-1-n895l | grep -A3 Env
Environment:
POSTGRESQL_PASSWORD: changeme
POSTGRESQL_USER: admin
POSTGRESQL_DATABASE: admin
```
## Actual behavior
Environment variables are not set correctly when spinning up the service using full `odo service create` command in non-interactive mode.
## Expected behavior
Environment variables should be set to same values when creating service using either interactive mode or full command.
## Any logs, error output, etc?
| 1.0 | "odo service create" doesn't set parameters correctly unless used in interactive mode - [kind/bug]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
- Operating System: Fedora
- Output of `odo version`: master
## How did you run odo exactly?
- Interactively create a service using just "odo service create" command and set the parameters as requested interactively. For example:
```sh
$ odo service create
? Which kind of service do you wish to create database
? Which database service class should we use dh-postgresql-apb
? Which service plan should we use dev
? Enter a value for string property postgresql_database (PostgreSQL Database Name): mydata
? Enter a value for string property postgresql_password (PostgreSQL Password): secret
? Enter a value for string property postgresql_user (PostgreSQL User): luke
? Enter a value for string property postgresql_version (PostgreSQL Version): 10
? How should we name your service dh-postgresql-apb
? Output the non-interactive version of the selected options Yes
? Wait for the service to be ready No
✓ Creating service [80ms]
✓ Service 'dh-postgresql-apb' was created
Progress of the provisioning will not be reported and might take a long time.
You can see the current status by executing 'odo service list'
Equivalent command:
odo service create dh-postgresql-apb dh-postgresql-apb --plan dev -p postgresql_database=mydata -p postgresql_password=secret -p postgresql_user=luke -p postgresql_version=10
$ oc describe po/postgresql-289609a5-c4b1-11e9-85e6-0242ac11000d-1-x2vhq | grep -A3 Env
Environment:
POSTGRESQL_PASSWORD: secret
POSTGRESQL_USER: luke
POSTGRESQL_DATABASE: mydata
```
- Use the "Equivalent command" printed in above output to create the service:
```sh
$ odo service create dh-postgresql-apb dh-postgresql-apb --plan dev -p postgresql_database=mydata -p postgresql_password=secret -p postgresql_user=luke -p postgresql_version=10
$ oc describe po/postgresql-a210cbb2-c4bd-11e9-85e6-0242ac11000d-1-n895l | grep -A3 Env
Environment:
POSTGRESQL_PASSWORD: changeme
POSTGRESQL_USER: admin
POSTGRESQL_DATABASE: admin
```
## Actual behavior
Environment variables are not set correctly when spinning up the service using full `odo service create` command in non-interactive mode.
## Expected behavior
Environment variables should be set to same values when creating service using either interactive mode or full command.
## Any logs, error output, etc?
| priority | odo service create doesn t set parameters correctly unless used in interactive mode welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system fedora output of odo version master how did you run odo exactly interactively create a service using just odo service create command and set the parameters as requested interactively for example sh odo service create which kind of service do you wish to create database which database service class should we use dh postgresql apb which service plan should we use dev enter a value for string property postgresql database postgresql database name mydata enter a value for string property postgresql password postgresql password secret enter a value for string property postgresql user postgresql user luke enter a value for string property postgresql version postgresql version how should we name your service dh postgresql apb output the non interactive version of the selected options yes wait for the service to be ready no โ creating service โ service dh postgresql apb was created progress of the provisioning will not be reported and might take a long time you can see the current status by executing odo service list equivalent command odo service create dh postgresql apb dh postgresql apb plan dev p postgresql database mydata p postgresql password secret p postgresql user luke p postgresql version oc describe po postgresql grep env environment postgresql password secret postgresql user luke postgresql database mydata use the equivalent command printed in above output to create the service sh odo service create dh postgresql apb dh postgresql apb plan dev p postgresql database mydata p postgresql password secret p postgresql user luke p postgresql version oc describe po postgresql grep env environment 
postgresql password changeme postgresql user admin postgresql database admin actual behavior environment variables are not set correctly when spinning up the service using full odo service create command in non interactive mode expected behavior environment variables should be set to same values when creating service using either interactive mode or full command any logs error output etc | 1 |
168,324 | 6,369,390,637 | IssuesEvent | 2017-08-01 11:45:46 | e-e-e/dat-library | https://api.github.com/repos/e-e-e/dat-library | closed | It's not clear that the zoom icon on a library is actually 'show in finder' | enhancement high priority | I had no idea what that button would do.

May I suggest that the text "show in finder" would be more useful than the icon here (for MacOS at least, I'm sure other OS's have an equivalent concept). It would help people discover that feature without having to click the button.
Also the common meaning of that icon might not easily sit conceptually with what the button does, but this is my first time using the app (great work by the way ❤️ ). | 1.0 | It's not clear that the zoom icon on a library is actually 'show in finder' - I had no idea what that button would do.

May I suggest that the text "show in finder" would be more useful than the icon here (for MacOS at least, I'm sure other OS's have an equivalent concept). It would help people discover that feature without having to click the button.
Also the common meaning of that icon might not easily sit conceptually with what the button does, but this is my first time using the app (great work by the way ❤️ ). | priority | it s not clear that the zoom icon on a library is actually show in finder i had no idea what that button would do may i suggest that the text show in finder would be more useful than the icon here for macos at least i m sure other os s have an equivalent concept it would help people discover that feature without having to click the button also the common meaning of that icon might not easily sit conceptually with what the button does but this is my first time using the app great work by the way ❤️ | 1 |
298,439 | 9,200,274,609 | IssuesEvent | 2019-03-07 16:41:53 | rstudio/shinycannon | https://api.github.com/repos/rstudio/shinycannon | closed | When run with no args should show usage | Difficulty: Intermediate Effort: Low Priority: High Type: Enhancement | Currently, `shinycannon` displays detailed help without any usage instructions when run without args. Instead, it should display minimal help and demonstrate usage. | 1.0 | When run with no args should show usage - Currently, `shinycannon` displays detailed help without any usage instructions when run without args. Instead, it should display minimal help and demonstrate usage. | priority | when run with no args should show usage currently shinycannon displays detailed help without any usage instructions when run without args instead it should display minimal help and demonstrate usage | 1 |
196,139 | 6,924,907,169 | IssuesEvent | 2017-11-30 14:24:38 | fusetools/fuselibs-public | https://api.github.com/repos/fusetools/fuselibs-public | opened | Updates to property object are not being updated | Priority: High Severity: Bug | In the below project you can edit the right-hand values of the items. This should update the values in the view part as well, but it does not.
It's uncertain if this is a problem with using ux:Property, or so other defect.
Project: https://github.com/mortoray/fork-sandbox/tree/master/Nov2017/ItemListing | 1.0 | Updates to property object are not being updated - In the below project you can edit the right-hand values of the items. This should update the values in the view part as well, but it does not.
It's uncertain if this is a problem with using ux:Property, or so other defect.
Project: https://github.com/mortoray/fork-sandbox/tree/master/Nov2017/ItemListing | priority | updates to property object are not being updated in the below project you can edit the right hand values of the items this should update the values in the view part as well but it does not it s uncertain if this is a problem with using ux property or so other defect project | 1 |
76,552 | 3,489,213,944 | IssuesEvent | 2016-01-03 17:53:08 | benbaptist/benbot | https://api.github.com/repos/benbaptist/benbot | closed | No Access to BenBot Dashboard | bug dashboard high priority | Whatup man! So BenBot (aka CowCowBot) is in my chat (beam.pro/misterjoker) but won't do any of the commands. So far deactivate and activate work, but none of the custom commands. Also, going to http://dashboard.benbot.rocks/ automatically puts me at http://dashboard.benbot.rocks/switch
And when I click on my channel, it doesn't do anything. I've restarted the browser, I've logged out, and logged in, I've restarted. And, like I said prior, I !deactivate_bot and !activate the bot with no luck.
Also tested this during a stream as well with no luck whatsoever.
Thanks!
-mj | 1.0 | No Access to BenBot Dashboard - Whatup man! So BenBot (aka CowCowBot) is in my chat (beam.pro/misterjoker) but won't do any of the commands. So far deactivate and activate work, but none of the custom commands. Also, going to http://dashboard.benbot.rocks/ automatically puts me at http://dashboard.benbot.rocks/switch
And when I click on my channel, it doesn't do anything. I've restarted the browser, I've logged out, and logged in, I've restarted. And, like I said prior, I !deactivate_bot and !activate the bot with no luck.
Also tested this during a stream as well with no luck whatsoever.
Thanks!
-mj | priority | no access to benbot dashboard whatup man so benbot aka cowcowbot is in my chat beam pro misterjoker but won t do any of the commands so far deactivate and activate work but none of the custom commands also going to automatically puts me at and when i click on my channel it doesn t do anything i ve restarted the browser i ve logged out and logged in i ve restarted and like i said prior i deactivate bot and activate the bot with no luck also tested this during a stream as well with no luck whatsoever thanks mj | 1 |
127,539 | 5,032,032,778 | IssuesEvent | 2016-12-16 09:47:22 | fossasia/gci16.fossasia.org | https://api.github.com/repos/fossasia/gci16.fossasia.org | closed | Travis build fails | help wanted Priority: HIGH size/S | Why does [the current travis build](https://travis-ci.org/fossasia/gci16.fossasia.org/builds/184473563) fail?
Error:
```
/home/travis/.rvm/gems/ruby-2.3.3/gems/safe_yaml-1.0.4/lib/safe_yaml/load.rb:143:in `parse':
(/home/travis/build/fossasia/gci16.fossasia.org/_data/logosv2.yml):
did not find expected '-' indicator while parsing a block collection at line 2 column 1 (Psych::SyntaxError)
```
But if I look at the [file](https://github.com/fossasia/gci16.fossasia.org/blob/gh-pages/_data/logosv2.yml), I see this:
```
- author: Nguyen Chanh Dai
img: NguyenChanhDai.svg
```
Where is the problem? How can we fix travis? | 1.0 | Travis build fails - Why does [the current travis build](https://travis-ci.org/fossasia/gci16.fossasia.org/builds/184473563) fail?
Error:
```
/home/travis/.rvm/gems/ruby-2.3.3/gems/safe_yaml-1.0.4/lib/safe_yaml/load.rb:143:in `parse':
(/home/travis/build/fossasia/gci16.fossasia.org/_data/logosv2.yml):
did not find expected '-' indicator while parsing a block collection at line 2 column 1 (Psych::SyntaxError)
```
But if I look at the [file](https://github.com/fossasia/gci16.fossasia.org/blob/gh-pages/_data/logosv2.yml), I see this:
```
- author: Nguyen Chanh Dai
img: NguyenChanhDai.svg
```
Where is the problem? How can we fix travis? | priority | travis build fails why does fail error home travis rvm gems ruby gems safe yaml lib safe yaml load rb in parse home travis build fossasia fossasia org data yml did not find expected indicator while parsing a block collection at line column psych syntaxerror but if i look at the i see this author nguyen chanh dai img nguyenchanhdai svg where is the problem how can we fix travis | 1 |
434,530 | 12,519,686,692 | IssuesEvent | 2020-06-03 14:47:32 | DeadlyBossMods/DBM-Classic | https://api.github.com/repos/DeadlyBossMods/DBM-Classic | opened | GUI Issues | โ ๏ธ High Priority ๐ Bug | Main tracking on retail DBM project here:
https://github.com/DeadlyBossMods/DeadlyBossMods/issues/204
Tracked here because the same issues would still impact DBM-Classic | 1.0 | GUI Issues - Main tracking on retail DBM project here:
https://github.com/DeadlyBossMods/DeadlyBossMods/issues/204
Tracked here because the same issues would still impact DBM-Classic | priority | gui issues main tracking on retail dbm project here tracked here because the same issues would still impact dbm classic | 1 |
287,320 | 8,809,161,808 | IssuesEvent | 2018-12-27 18:09:17 | comfortleaf/Legacy-WP | https://api.github.com/repos/comfortleaf/Legacy-WP | closed | Email Follow Up Campaigns | High Priority | Email Campaign / Drips are here:
https://comfortleaf.activehosted.com/series/9
-------
- [x] A message from our CEO
- [x] Comfort Leaf's Product Ingredients
- [x] Is CBD Legal? Legal Status of CBD in 50 States
- [x] Top 20 CBD Myths Debunked
- [x] This is how CBD is touching lives
- [x] Your Endocannabinoid System (Did you know?)
- [x] Comfort Leaf's Product Line
- [x] CBD Compliance, Lab Testing & Safety
- [x] What Doctors & Physicians are saying about CBD
- [x] 10 Little Known Uses For CBD
- [x] How to use CBD? Picking the best Product for me?
- [x] Have questions about CBD? Need assistance?
- [x] Have you had a chance to try CBD? (Free Sample) | 1.0 | Email Follow Up Campaigns - Email Campaign / Drips are here:
https://comfortleaf.activehosted.com/series/9
-------
- [x] A message from our CEO
- [x] Comfort Leaf's Product Ingredients
- [x] Is CBD Legal? Legal Status of CBD in 50 States
- [x] Top 20 CBD Myths Debunked
- [x] This is how CBD is touching lives
- [x] Your Endocannabinoid System (Did you know?)
- [x] Comfort Leaf's Product Line
- [x] CBD Compliance, Lab Testing & Safety
- [x] What Doctors & Physicians are saying about CBD
- [x] 10 Little Known Uses For CBD
- [x] How to use CBD? Picking the best Product for me?
- [x] Have questions about CBD? Need assistance?
- [x] Have you had a chance to try CBD? (Free Sample) | priority | email follow up campaigns email campaign drips are here a message from our ceo comfort leaf s product ingredients is cbd legal legal status of cbd in states top cbd myths debunked this is how cbd is touching lives your endocannabinoid system did you know comfort leaf s product line cbd compliance lab testing safety what doctors physicians are saying about cbd little known uses for cbd how to use cbd picking the best product for me have questions about cbd need assistance have you had a chance to try cbd free sample | 1 |
655,767 | 21,708,272,436 | IssuesEvent | 2022-05-10 11:42:22 | workcraft/workcraft | https://api.github.com/repos/workcraft/workcraft | closed | Incorrect extraction of set/reset functions from GenLib latch function | bug priority:high tag:model:circuit status:confirmed | Please answer these questions before submitting your issue. Thanks!
1. What version of Workcraft are you using?
Workcraft 3.3.7
2. What operating system are you using?
CentOS release 7.9.2009
3. What did you do? If possible, provide a list of steps to reproduce
the error.
I defined a gate library in the SIS Genlib format which includes a SR latch, as shown in the image below. I'm using it for technology mapping with MPSat.
4. What did you expect to see?
The SR latch is defined as follows:

When both inputs are high the output is supposed to be high as well.
5. What did you see instead?
When using the Initialisation Analyser with a circuit that uses these latches and both inputs are initialised to high, the output is not shown as propagated high as I was expecting.

| 1.0 | Incorrect extraction of set/reset functions from GenLib latch function - Please answer these questions before submitting your issue. Thanks!
1. What version of Workcraft are you using?
Workcraft 3.3.7
2. What operating system are you using?
CentOS release 7.9.2009
3. What did you do? If possible, provide a list of steps to reproduce
the error.
I defined a gate library in the SIS Genlib format which includes a SR latch, as shown in the image below. I'm using it for technology mapping with MPSat.
4. What did you expect to see?
The SR latch is defined as follows:

When both inputs are high the output is supposed to be high as well.
5. What did you see instead?
When using the Initialisation Analyser with a circuit that uses these latches and both inputs are initialised to high, the output is not shown as propagated high as I was expecting.

| priority | incorrect extraction of set reset functions from genlib latch function please answer these questions before submitting your issue thanks what version of workcraft are you using workcraft what operating system are you using centos release what did you do if possible provide a list of steps to reproduce the error i defined a gate library in the sis genlib format which includes a sr latch as shown in the image below i m using it for technology mapping with mpsat what did you expect to see the sr latch is defined as follows when both inputs are high the output is supposed to be high as well what did you see instead when using the initialisation analyser with a circuit that uses these latches and both inputs are initialised to high the output is not shown as propagated high as i was expecting | 1 |
128,066 | 5,048,135,928 | IssuesEvent | 2016-12-20 11:48:48 | Financial-Times/n-storylines | https://api.github.com/repos/Financial-Times/n-storylines | closed | Scroll on Mobile | High Priority | On Default and Small devices, the timeline should travel off the page to indicate that it can be scrolled:
<img width="506" alt="screen shot 2016-12-13 at 17 07 00" src="https://cloud.githubusercontent.com/assets/8199751/21152703/5e259ea2-c15f-11e6-9e15-cd183e745ce3.png"> | 1.0 | Scroll on Mobile - On Default and Small devices, the timeline should travel off the page to indicate that it can be scrolled:
<img width="506" alt="screen shot 2016-12-13 at 17 07 00" src="https://cloud.githubusercontent.com/assets/8199751/21152703/5e259ea2-c15f-11e6-9e15-cd183e745ce3.png"> | priority | scroll on mobile on default and small devices the timeline should travel off the page to indicate that it can be scrolled img width alt screen shot at src | 1 |
141,802 | 5,444,621,619 | IssuesEvent | 2017-03-07 03:37:00 | david-gay/2340 | https://api.github.com/repos/david-gay/2340 | closed | As a non-banned user, I want to create new water source reports, so that I can contribute to the community of people in search of water | Feature HighPriority | Must have some kind of input screen for the report where all the information is captured. The report should be stored somewhere in the model.
**Requirements**
- [x] After login, application should display the main screen of the application
- [x] You should have a way to navigate to the submit report screen
- [x] The submit report should prompt for all required information
- [x] Canceling the report does not save any information
- [x] Submitting the report should store it in the model
- [x] Need a way to view a list of all reports in the system | 1.0 | As a non-banned user, I want to create new water source reports, so that I can contribute to the community of people in search of water - Must have some kind of input screen for the report where all the information is captured. The report should be stored somewhere in the model.
**Requirements**
- [x] After login, application should display the main screen of the application
- [x] You should have a way to navigate to the submit report screen
- [x] The submit report should prompt for all required information
- [x] Canceling the report does not save any information
- [x] Submitting the report should store it in the model
- [x] Need a way to view a list of all reports in the system | priority | as a non banned user i want to create new water source reports so that i can contribute to the community of people in search of water must have some kind of input screen for the report where all the information is captured the report should be stored somewhere in the model requirements after login application should display the main screen of the application you should have a way to navigate to the submit report screen the submit report should prompt for all required information canceling the report does not save any information submitting the report should store it in the model need a way to view a list of all reports in the system | 1 |
521,278 | 15,106,928,538 | IssuesEvent | 2021-02-08 14:51:57 | cabouman/svmbir | https://api.github.com/repos/cabouman/svmbir | opened | package versioning | Priority High | As we transition from the "wild west" approach, as Charlie puts it, we need to establish a protocol for versioning the package. We should especially do this before putting this up on pypi.
This is a useful reference contributors should read:
https://nvie.com/posts/a-successful-git-branching-model/ | 1.0 | package versioning - As we transition from the "wild west" approach, as Charlie puts it, we need to establish a protocol for versioning the package. We should especially do this before putting this up on pypi.
This is a useful reference contributors should read:
https://nvie.com/posts/a-successful-git-branching-model/ | priority | package versioning as we transition from the wild west approach as charlie puts it we need to establish a protocol for versioning the package we should especially do this before putting this up on pypi this is a useful reference contributors should read | 1 |
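As a sketch of the kind of versioning convention the linked branching model implies, here is a minimal MAJOR.MINOR.PATCH bump helper (hypothetical code, not part of svmbir):

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string (illustrative helper)."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

assert bump("1.2.3", "minor") == "1.3.0"
```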
70,172 | 3,320,823,572 | IssuesEvent | 2015-11-09 02:57:50 | web-cat/code-workout | https://api.github.com/repos/web-cat/code-workout | closed | Figure out VT login instructions | priority:high | How should VT users login? Perhaps we should just use VT G-mail credentials for logging in students? We need to figure out the procedure and instructions so students can begin using CodeWorkout in summer. | 1.0 | Figure out VT login instructions - How should VT users login? Perhaps we should just use VT G-mail credentials for logging in students? We need to figure out the procedure and instructions so students can begin using CodeWorkout in summer. | priority | figure out vt login instructions how should vt users login perhaps we should just use vt g mail credentials for logging in students we need to figure out the procedure and instructions so students can begin using codeworkout in summer | 1 |
352,077 | 10,531,547,272 | IssuesEvent | 2019-10-01 08:46:42 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | Mobile connectivity takeover + landing page misuse "in favour of" | Priority: High | 1\. Go to [ubuntu.com](https://ubuntu.com/).
2\. If necessary, reload the page until you see "The future of mobile connectivity".
3\. Follow the link to [the whitepaper page](https://ubuntu.com/engage/ubuntu-lime-telco?utm_source=Takeover&utm_medium=Takeover&utm_campaign=FY19_IOT_UbuntuCore_Whitepaper_LimeSDR).
What happens:
2, 3. Both pages have a subhed: "Are mobile operators ready to adopt open source in favour of proprietary technology?"
What's wrong with this: It says the opposite of what's intended.
The whitepaper page says: "Rather than rely on costly proprietary hardware and operating models, the use of open source technologies offers the ability to commoditise and democratise the wireless network infrastructure." So, more open-source stuff, _less_ proprietary stuff.
But "in favour of" means "approving", "to the benefit of", or "to show preference for". (References: [Collins](https://www.collinsdictionary.com/dictionary/english/in-favour-of), [Merriam-Webster](https://www.merriam-webster.com/dictionary/favor).) So "in favour of proprietary technology" would mean adopting _more_ proprietary technology.
What should happen: A simple fix would be to change "in favour of" to "rather than" or "instead of".
---
*Reported from: https://ubuntu.com/* | 1.0 | Mobile connectivity takeover + landing page misuse โin favour ofโ - 1\. Go to [ubuntu.com](https://ubuntu.com/).
2\. If necessary, reload the page until you see "The future of mobile connectivity".
3\. Follow the link to [the whitepaper page](https://ubuntu.com/engage/ubuntu-lime-telco?utm_source=Takeover&utm_medium=Takeover&utm_campaign=FY19_IOT_UbuntuCore_Whitepaper_LimeSDR).
What happens:
2, 3. Both pages have a subhed: "Are mobile operators ready to adopt open source in favour of proprietary technology?"
What's wrong with this: It says the opposite of what's intended.
The whitepaper page says: "Rather than rely on costly proprietary hardware and operating models, the use of open source technologies offers the ability to commoditise and democratise the wireless network infrastructure." So, more open-source stuff, _less_ proprietary stuff.
But "in favour of" means "approving", "to the benefit of", or "to show preference for". (References: [Collins](https://www.collinsdictionary.com/dictionary/english/in-favour-of), [Merriam-Webster](https://www.merriam-webster.com/dictionary/favor).) So "in favour of proprietary technology" would mean adopting _more_ proprietary technology.
What should happen: A simple fix would be to change "in favour of" to "rather than" or "instead of".
---
*Reported from: https://ubuntu.com/* | priority | mobile connectivity takeover landing page misuse in favour of go to if necessary reload the page until you see the future of mobile connectivity follow the link to what happens both pages have a subhed are mobile operators ready to adopt open source in favour of proprietary technology what s wrong with this it says the opposite of what s intended the whitepaper page says rather than rely on costly proprietary hardware and operating models the use of open source technologies offers the ability to commoditise and democratise the wireless network infrastructure so more open source stuff less proprietary stuff but in favour of means approving to the benefit of or to show preference for references so in favour of proprietary technology would mean adopting more proprietary technology what should happen a simple fix would be to change in favour of to rather than or instead of reported from | 1 |
135,931 | 5,266,951,892 | IssuesEvent | 2017-02-04 17:51:41 | senderle/topic-modeling-tool | https://api.github.com/repos/senderle/topic-modeling-tool | closed | Fix MalformedInputException for metadata caused by Excel output | priority-high | CSV output from Excel on (at least) Macs uses a file encoding that breaks Java. This will probably require using the stream API.
| 1.0 | Fix MalformedInputException for metadata caused by Excel output - CSV output from Excel on (at least) Macs uses a file encoding that breaks Java. This will probably require using the stream API.
| priority | fix malformedinputexception for metadata caused by excel output csv output from excel on at least macs uses a file encoding that breaks java this will probably require using the stream api | 1 |
149,728 | 5,724,747,510 | IssuesEvent | 2017-04-20 15:08:33 | metasfresh/metasfresh-webui-frontend | https://api.github.com/repos/metasfresh/metasfresh-webui-frontend | closed | frontend: refactor /process/start response | enhancement high priority integrated release-candidate | We refactored the /process/start endpoint's response as described here: https://github.com/metasfresh/metasfresh-webui-api/issues/294 .
Pls adapt the frontend to it, because more "action"s will come.
frontend: refactor /process/start response - We refactored the /process/start endpoint's response as described here: https://github.com/metasfresh/metasfresh-webui-api/issues/294 .
Pls adapt the frontend to it, because more "action"s will come.
| priority | frontend refactor process start response we refactored the process start endpoint s response as described here pls adapt the frontend to it because more action s will come | 1 |
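The linked API issue is not quoted here, but the note that more "action"s will come suggests the frontend should dispatch over a list of actions rather than special-case one. A hypothetical sketch of that shape (all field names invented for illustration):

```python
# Hypothetical shape of a /process/start response carrying a list of actions.
response = {"actions": [{"type": "openDocument", "documentId": "123"},
                        {"type": "showMessage", "text": "Done"}]}

def handle(action, handlers):
    """Dispatch one action to its handler; unknown types are ignored."""
    fn = handlers.get(action["type"])
    return fn(action) if fn else None

handled = [handle(a, {"openDocument": lambda a: f"open {a['documentId']}",
                      "showMessage": lambda a: f"msg {a['text']}"})
           for a in response["actions"]]
assert handled == ["open 123", "msg Done"]
```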
64,170 | 3,205,939,625 | IssuesEvent | 2015-10-04 15:41:52 | cs2103aug2015-f09-4c/main | https://api.github.com/repos/cs2103aug2015-f09-4c/main | closed | A user can save the data | comp.LOGIC priority.high (must have) type.story | so that the user can close the program and open the program with the saved data later. | 1.0 | A user can save the data - so that the user can close the program and open the program with the saved data later. | priority | a user can save the data so that the user can close the program and open the program with the saved data later | 1 |
108,557 | 4,347,322,908 | IssuesEvent | 2016-07-29 19:05:36 | GoogleCloudPlatform/gcloud-eclipse-tools | https://api.github.com/repos/GoogleCloudPlatform/gcloud-eclipse-tools | opened | Cannot deploy | bug high priority | I'm on the master branch. Not sure if this happens on my machine only.
Steps to reproduce:
1. Create a new project (either Maven-based or plain)
1. Do deploy
* Nothing happens
I did some investigation: `null` is returned [here](https://github.com/GoogleCloudPlatform/gcloud-eclipse-tools/blob/master/plugins/com.google.cloud.tools.eclipse.appengine.deploy/src/com/google/cloud/tools/eclipse/appengine/deploy/standard/StandardDeployCommandHandler.java#L51). Inside `getProject()`, it seems like `structuredSelection.getFirstElement()` doesn't return an instance of `IProject`. | 1.0 | Cannot deploy - I'm on the master branch. Not sure if this is happens on my machine only.
Steps to reproduce:
1. Create a new project (either Maven-based or plain)
1. Do deploy
* Nothing happens
I did some investigation: `null` is returned [here](https://github.com/GoogleCloudPlatform/gcloud-eclipse-tools/blob/master/plugins/com.google.cloud.tools.eclipse.appengine.deploy/src/com/google/cloud/tools/eclipse/appengine/deploy/standard/StandardDeployCommandHandler.java#L51). Inside `getProject()`, it seems like `structuredSelection.getFirstElement()` doesn't return an instance of `IProject`. | priority | cannot deploy i m on the master branch not sure if this happens on my machine only steps to reproduce create a new project either maven based or plain do deploy nothing happens i did some investigation null is returned inside getproject it seems like structuredselection getfirstelement doesn t return an instance of iproject | 1 |
243,284 | 7,855,238,849 | IssuesEvent | 2018-06-21 00:28:49 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | 7.5.0 Textures move when vehicles move | High Priority | Placed sand blocks in the trunk of the truck or steam truck. When the car moves, the texture also moves when the sand texture should stay static. Similar issue came up with the crane moving dirt in the show 'n tell last week. | 1.0 | 7.5.0 Textures move when vehicles move - Placed sand blocks in the trunk of the truck or steam truck. When the car moves, the texture also moves when the sand texture should stay static. Similar issue came up with the crane moving dirt in the show 'n tell last week. | priority | textures move when vehicles move placed sand blocks in the trunk of the truck or steam truck when the car moves the texture also moves when the sand texture should stay static similar issue came up with the crane moving dirt in the show n tell last week | 1 |
637,592 | 20,672,678,865 | IssuesEvent | 2022-03-10 05:11:34 | ballerina-platform/ballerina-dev-website | https://api.github.com/repos/ballerina-platform/ballerina-dev-website | closed | Change All Main and Sub Headings to Sentence Case | Priority/Highest Area/Docs Type/Task Points/0.25 | **Description:**
Change all main and subheadings to sentence case. Also, check the reference links and change them too.
**Describe your task(s)**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can't assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can't assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | Change All Main and Sub Headings to Sentence Case - **Description:**
Change all main and subheadings to sentence case. Also, check the reference links and change them too.
**Describe your task(s)**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers canโt assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can't assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | change all main and sub headings to sentence case description change all main and subheadings to sentence case also check the reference links and change them too describe your task s related issues optional suggested labels optional suggested assignees optional | 1 |
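A minimal sketch of the heading conversion being requested above, assuming plain ASCII headings (a real pass would also need a whitelist to preserve proper nouns and acronyms):

```python
def to_sentence_case(heading: str) -> str:
    """Lowercase every word except the first (illustrative; does not
    protect proper nouns or acronyms, which need a whitelist)."""
    words = heading.split()
    if not words:
        return heading
    return " ".join([words[0].capitalize()] + [w.lower() for w in words[1:]])

assert to_sentence_case("Change All Main and Sub Headings to Sentence Case") == \
    "Change all main and sub headings to sentence case"
```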
291,269 | 8,922,444,508 | IssuesEvent | 2019-01-21 13:02:21 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Spreadsheet Validation dialog appears behind the Window in which the Spreadsheet is placed | Bug C: Spreadsheet C: Window Kendo2 Priority 2 SEV: High Triaged | ### Bug report
If the Spreadsheet widget is placed within a Kendo Window, when a validation error occurs, the validation window appears behind the Window.

### Reproduction of the problem
* Go to the following [Dojo](http://dojo.telerik.com/aYogaW/3);
* Type invalid input in any of the cells with validation;
* **Click in another cell to remove the focus from the edited cell and trigger validation;**
### Current behavior
The validation dialog will appear behind the Kendo Window - move the window to see it.
### Expected/desired behavior
The validation dialog will appear in front of the Kendo Window
### Environment
* **Kendo UI version:** 2018.2.620
* **Browser:** [all] | 1.0 | Spreadsheet Validation dialog appears behind the Window in which the Spreadsheet is placed - ### Bug report
If the Spreadsheet widget is placed within a Kendo Window, when a validation error occurs, the validation window appears behind the Window.

### Reproduction of the problem
* Go to the following [Dojo](http://dojo.telerik.com/aYogaW/3);
* Type invalid input in any of the cells with validation;
* **Click in another cell to remove the focus from the edited cell and trigger validation;**
### Current behavior
The validation dialog will appear behind the Kendo Window - move the window to see it.
### Expected/desired behavior
The validation dialog will appear in front of the Kendo Window
### Environment
* **Kendo UI version:** 2018.2.620
* **Browser:** [all] | priority | spreadsheet validation dialog appears behind the window in which the spreadsheet is placed bug report if the spreadsheet widget is placed within a kendo window when validation error occurs the validation window appears behind the window reproduction of the problem go to the following type invalid input in any of the cells with validation click in another cell to remove the focus from the edited cell and trigger validation current behavior the validation dialog will appear behind the kendo window move the window to see it expected desired behavior the validation dialog will appear in front of the kendo window environment kendo ui version browser | 1 |
426,967 | 12,391,046,800 | IssuesEvent | 2020-05-20 11:48:24 | RonAsis/Wsep202 | https://api.github.com/repos/RonAsis/Wsep202 | opened | change design style of policies | High priority | In case bar is the manager, someone else will be assigned to the task. | 1.0 | change design style of policies - In case bar is the manager, someone else will be assigned to the task. | priority | change design style of policies in case bar is the manager someone else will be assigned to the task | 1 |
351,011 | 10,511,842,588 | IssuesEvent | 2019-09-27 16:22:02 | REGnosys/rosetta-dsl | https://api.github.com/repos/REGnosys/rosetta-dsl | closed | Allow only a single expression in 'condition' and remove semi-colon delimiter | high-priority review | All conditions that have multiple Expressions should have expressions `and`ed together. | 1.0 | Allow only a single expression in 'condition' and remove semi-colon delimiter - All conditions that have multiple Expressions should have expressions `and`ed together. | priority | allow only a single expression in condition and remove semi colon delimiter all conditions that have multiple expressions should have expressions and ed together | 1 |
270,098 | 8,452,265,001 | IssuesEvent | 2018-10-20 01:36:44 | ilmtest/search-engine | https://api.github.com/repos/ilmtest/search-engine | closed | Implement search invocation on Maktabah | feature fixed priority/high usability | So a user can highlight text from /entries (ie: a hadith) and easily check its checking. | 1.0 | Implement search invocation on Maktabah - So a user can highlight text from /entries (ie: a hadith) and easily check its checking. | priority | implement search invocation on maktabah so a user can highlight text from entries ie a hadith and easily check its checking | 1 |
382,814 | 11,320,196,042 | IssuesEvent | 2020-01-21 03:01:36 | Novusphere/discussions-app | https://api.github.com/repos/Novusphere/discussions-app | closed | Amount/Fee for tips is incorrect | bug high priority | Currently, the amount the tipper writes, i.e. `#tip 1 ATMOS` is being used as the **amount**. But this is wrong, it should be used as the **total Amount**, so when a person does `#tip 1 ATMOS` the person being tipped receives 0.999 and the fee is 0.001 since ATMOS fee is currently set to 0.1% | 1.0 | Amount/Fee for tips is incorrect - Currently, the amount the tipper writes, i.e. `#tip 1 ATMOS` is being used as the **amount**. But this is wrong, it should be used as the **total Amount**, so when a person does `#tip 1 ATMOS` the person being tipped receives 0.999 and the fee is 0.001 since ATMOS fee is currently set to 0.1% | priority | amount fee for tips is incorrect currently the amount the tipper writes i e tip atmos is being used as the amount but this is wrong it should be used as the total amount so when a person does tip atmos the person being tipped receives and the fee is since atmos fee is currently set to | 1 |
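The arithmetic the report above asks for: treat the written amount as the total, deduct the 0.1% fee from it, and send the remainder. A sketch using Decimal to avoid float rounding (helper names are illustrative):

```python
from decimal import Decimal

FEE_RATE = Decimal("0.001")  # 0.1% ATMOS fee

def split_tip(total: Decimal) -> tuple[Decimal, Decimal]:
    """Return (net_to_recipient, fee) so that net + fee == total."""
    fee = total * FEE_RATE
    return total - fee, fee

# "#tip 1 ATMOS": recipient gets 0.999, the fee is 0.001.
net, fee = split_tip(Decimal("1"))
assert net == Decimal("0.999") and fee == Decimal("0.001")
```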
695,563 | 23,864,112,393 | IssuesEvent | 2022-09-07 09:32:35 | git4school/git4school-automation | https://api.github.com/repos/git4school/git4school-automation | closed | Generate an executable file on several platforms | technical story high-priority | **Description**
We want to have a tool to generate a one-file executable, working at least on **Windows** and **Linux**.
**Hints**
A bit of work has been done around _pyinstaller_ that lets us generate a .exe file. It could be integrated into the Github Actions CD pipeline easily.
_Pyinstaller_ can't compile and generate an executable for an OS from another, as the following citation says:
> PyInstaller supports making executables for Windows, Linux, and macOS, but it cannot [cross compile](https://en.wikipedia.org/wiki/Cross_compiler). Therefore, you cannot make an executable targeting one Operating System from another Operating System. So, to distribute executables for multiple types of OS, you'll need a build machine for each supported OS.
- One way to generate the executable for different OS could be using Docker | 1.0 | Generate an executable file on several platforms - **Description**
We want to have a tool to generate a one-file executable, working at least on **Windows** and **Linux**.
**Hints**
A bit of work has been done around _pyinstaller_ that lets us generate a .exe file. It could be integrated into the Github Actions CD pipeline easily.
_Pyinstaller_ can't compile and generate an executable for an OS from another, as the following citation says:
> PyInstaller supports making executables for Windows, Linux, and macOS, but it cannot [cross compile](https://en.wikipedia.org/wiki/Cross_compiler). Therefore, you cannot make an executable targeting one Operating System from another Operating System. So, to distribute executables for multiple types of OS, you'll need a build machine for each supported OS.
- One way to generate the executable for different OS could be using Docker | priority | generate an executable file on several platforms description we want to have a tool to generate a one file executable working at least on windows and linux hints a bit of work has been done around pyinstaller that lets us generate a exe file it could be integrated into github actions cd pipeline easily pyinstaller can t compile and generate an executable for an os from another as the following citation says pyinstaller supports making executables for windows linux and macos but it cannot therefore you cannot make an executable targeting one operating system from another operating system so to distribute executables for multiple types of os you ll need a build machine for each supported os one way to generate the executable for different os could be using docker | 1 |
417,552 | 12,167,277,644 | IssuesEvent | 2020-04-27 10:37:51 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Deadlock in view system | Priority: High Status: Fixed Week Task | Client update thread is locking on client.PendingView:

Then later up the callstack it's locking on the controller, which happens to be a work order:

MEANWHILE in a worker thread that's ticking the work order, the work order is locked:

Then later up the callstack, it locks client.PendingView:

Dump is here: https://drive.google.com/open?id=19hxZtyFr56z1kU5hWvUoIOE4bVx6YbwP
Dump + heap available locally if you need it.
Could this be a server-wide system problem resulting from RPCs and Client Updates being on different threads? | 1.0 | Deadlock in view system - Client update thread is locking on client.PendingView:

Then later up the callstack it's locking on the controller, which happens to be a work order:

MEANWHILE in a worker thread that's ticking the work order, the work order is locked:

Then later up the callstack, it locks client.PendingView:

Dump is here: https://drive.google.com/open?id=19hxZtyFr56z1kU5hWvUoIOE4bVx6YbwP
Dump + heap available locally if you need it.
Could this be a server-wide system problem resulting from RPCs and Client Updates being on different threads? | priority | deadlock in view system client update thread is locking on client pendingview then later up the callstack it s locking on the controller which happens to be a work order meanwhile in a worker thread that s ticking the work order the work order is locked then later up the callstack it locks client pendingview dump is here dump heap available locally if you need it could this be a server wide system problem resulting from rpcs and client updates being on different threads | 1 |
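One standard remedy for the lock inversion described above (the update thread takes PendingView then the controller, while the worker takes them in the opposite order) is to always acquire both locks in a single global order. A minimal sketch with stand-in locks; none of these names come from the Eco server code:

```python
import threading
from contextlib import contextmanager

view_lock = threading.Lock()   # stand-in for the client.PendingView lock
order_lock = threading.Lock()  # stand-in for the work-order controller lock

@contextmanager
def both(a, b):
    # Always acquire the two locks in one global order (here: by id()),
    # regardless of the order the caller names them in.
    first, second = sorted((a, b), key=id)
    with first, second:
        yield

results = []

def update_thread():           # wants the view first, then the work order
    with both(view_lock, order_lock):
        results.append("update")

def worker_thread():           # wants the work order first, then the view
    with both(order_lock, view_lock):
        results.append("worker")

threads = [threading.Thread(target=update_thread),
           threading.Thread(target=worker_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=5)
assert sorted(results) == ["update", "worker"]
```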
364,516 | 10,765,222,102 | IssuesEvent | 2019-11-01 10:25:36 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | opened | Muon ALC interface crashes | Added during Sprint High Priority ISIS Team: Spectroscopy | ### Expected behavior
If you enter the first set of data, tick `Auto` and press `Load` it should load the data
### Actual behavior
Hard crash
### Steps to reproduce the behavior
Set the default instrument to EMU, and turn on the data archive
`Interfaces`->`Muon`->`ALC`
Enter some valid data for the first run and tick the `Auto` box
Press `Load`
It crashes
### Platforms affected
Tested on windows but probably all | 1.0 | Muon AlC interface crashes - ### Expected behavior
If you enter the first set of data, tick `Auto` and press `Load` it should load the data
### Actual behavior
Hard crash
### Steps to reproduce the behavior
Set the default instrument to EMU, and turn on the data archive
`Interfaces`->`Muon`->`ALC`
Enter some valid data for the first run and tick the `Auto` box
Press `Load`
It crashes
### Platforms affected
Tested on windows but probably all | priority | muon alc interface crashes expected behavior if you enter the first set of data tick auto and press load it should load the data actual behavior hard crash steps to reproduce the behavior set the default instrument to emu and turn on the data archive interfaces muon alc enter some valid data for the first run and tick the auto box press load it crashes platforms affected tested on windows but probably all | 1 |
283,542 | 8,719,838,772 | IssuesEvent | 2018-12-08 05:09:06 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Problem replacing a database with multiple plots of history variables from Ale3d. | bug crash likelihood medium priority reviewed severity high wrong results | The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Steps to demonstrate bug:
1) Turn off "Apply subset selections to all plots"
2) Open if7f_001.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Bring up the Subset window and select material 2 for hist/incmat2_2/dmfdt/temp
6) Press Draw
7) Highlight if7f_004.00020
8) Press Replace
It will display both plots on all the materials, which is wrong.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 979
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem replacing a database with multiple plots of history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/23/2012
Due date:
% Done: 100
Estimated time: 3.0
Created: 02/23/2012 07:09 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Steps to demonstrate bug:
1) Turn off "Apply subset selections to all plots"
2) Open if7f_001.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Bring up the Subset window and select material 2 for hist/incmat2_2/dmfdt/temp
6) Press Draw
7) Highlight if7f_004.00020
8) Press Replace
It will display both plots on all the materials, which is wrong.
Comments:
I committed revisions 17424 and 17426 to the 2.4 RC and trunk with the following change: 1) I modified VisIt so that when you replace a database it only sets the SIL from a compatible plot if the variable is not material restricted. This resolves #979. M help/en_US/relnotes2.4.2.html M viewer/main/ViewerPlotList.C
| 1.0 | Problem replacing a database with multiple plots of history variables from Ale3d. - The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Steps to demonstrate bug:
1) Turn off "Apply subset selections to all plots"
2) Open if7f_001.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Bring up the Subset window and select material 2 for hist/incmat2_2/dmfdt/temp
6) Press Draw
7) Highlight if7f_004.00020
8) Press Replace
It will display both plots on all the materials, which is wrong.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 979
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Problem replacing a database with multiple plots of history variables from Ale3d.
Assigned to: Eric Brugger
Category:
Target version: 2.4.2
Author: Eric Brugger
Start: 02/23/2012
Due date:
% Done: 100
Estimated time: 3.0
Created: 02/23/2012 07:09 pm
Updated: 02/23/2012 08:29 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The data files to reproduce the bug are:
if7f_001.00020
if7f_004.00020
Steps to demonstrate bug:
1) Turn off "Apply subset selections to all plots"
2) Open if7f_001.00020
3) Add a Pseudocolor of hist/incmat1_1/dmfdt/temp
4) Add a Pseudocolor of hist/incmat2_2/dmfdt/temp
5) Bring up the Subset window and select material 2 for hist/incmat2_2/dmfdt/temp
6) Press Draw
7) Highlight if7f_004.00020
8) Press Replace
It will display both plots on all the materials, which is wrong.
Comments:
I committed revisions 17424 and 17426 to the 2.4 RC and trunk with the following change: 1) I modified VisIt so that when you replace a database it only sets the SIL from a compatible plot if the variable is not material restricted. This resolves #979. M help/en_US/relnotes2.4.2.html M viewer/main/ViewerPlotList.C
| priority | problem replacing a database with multiple plots of history variables from the data files to reproduce the bug are steps to demonstrate bug turn off apply subset selections to all plots open add a pseudocolor of hist dmfdt temp add a pseudocolor of hist dmfdt temp bring up the subset window and select material for hist dmfdt temp press draw highlight press replace it will display both plots on all the materials which is wrong redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject problem replacing a database with multiple plots of history variables from assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description the data files to reproduce the bug are steps to demonstrate bug turn off apply subset selections to all plots open add a pseudocolor of hist dmfdt temp add a pseudocolor of hist dmfdt temp bring up the subset window and select material for hist dmfdt temp press draw highlight press replace it will display both plots on all the materials which is wrong comments i committed revisions and to the rc and trunk with thefollowing change i modified visit so that when you replace a database it only sets the sil from a compatible plot if the variable is not material restricted this resolves m help en us htmlm viewer main viewerplotlist c | 1 |
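The committed fix, reusing a plot's SIL (subset) selection after a Replace only when the plot's variable is not material restricted, can be sketched in plain Python. Everything below (function and field names) is hypothetical; the real change is C++ code in `viewer/main/ViewerPlotList.C`.

```python
# Hypothetical sketch of the committed fix. On Replace, an existing
# plot's subset (SIL) selection is reused only when the plot's variable
# is not restricted to specific materials; otherwise the selection is
# reset to the new database's default, which avoids the wrong-results
# case reproduced in the steps above.

def sil_after_replace(plot, default_sil):
    """Return the subset selection to use for `plot` after a Replace."""
    if plot["material_restricted"]:
        # Material-restricted selections are variable-specific, so
        # carrying them over is what produced the bug; reset instead.
        return default_sil
    return plot["sil"]

plots = [
    {"var": "hist/incmat1_1/dmfdt/temp", "sil": {"domains": [0]},
     "material_restricted": False},
    {"var": "hist/incmat2_2/dmfdt/temp", "sil": {"materials": [2]},
     "material_restricted": True},
]
default = {"materials": [1, 2]}
resolved = [sil_after_replace(p, default) for p in plots]
```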
328,331 | 9,993,283,456 | IssuesEvent | 2019-07-11 15:03:15 | OpenSIPS/opensips | https://api.github.com/repos/OpenSIPS/opensips | closed | segfaults when using async rest_get at | bug high-priority | I am doing some testing where async functions with rest_get are involved.
Using this I get reproducible segfaults from the opensips after around 80.000 or 300.000 calls.
Currently this happens within an isolated sipp testing environment. I modified several parts of the opensips.cfg but the issue is reproducible whenever the rest_get is used.
Tests done so far:
Scenario without rest_get: works fine; 27M calls at 500 cps without issues
Scenario with rest_get: depending on the call-rate, segfaults after 80.000 or 300.000 calls
```
Nov 26 09:39:52 dialplan02 /usr/sbin/opensips[11452]: NOTICE:signaling:mod_init: initializing module ...
Nov 26 09:42:26 dialplan02 /usr/sbin/opensips[11463]: CRITICAL:core:process_lumps: #012>>> ADD|SUBST|OPT#012It seems you have hit a programming bug.#012Please help us make OpenSIPS better by reporting it at https://github.com/OpenSIPS/opensips/issues
Nov 26 09:42:26 dialplan02 /usr/sbin/opensips[11463]: CRITICAL:core:process_lumps: #012>>> ADD|SUBST|OPT#012It seems you have hit a programming bug.#012Please help us make OpenSIPS better by reporting it at https://github.com/OpenSIPS/opensips/issues
Nov 26 09:42:29 dialplan02 /usr/sbin/opensips[11465]: CRITICAL:core:sig_usr: segfault in process pid: 11465, id: 13
Nov 26 09:42:31 dialplan02 /usr/sbin/opensips[11475]: CRITICAL:core:handle_worker: dead child 13 (EOF received), pid 11465
```
Stacktrace: [core.11465.txt](https://github.com/OpenSIPS/opensips/files/2616191/core.11465.txt)
Another incarnation of this:
```
Nov 22 13:43:26 dialplan02 /usr/sbin/opensips[37137]: NOTICE:core:main: version: opensips 2.4.3 (x86_64/linux)
Nov 22 13:43:26 dialplan02 /usr/sbin/opensips[37137]: NOTICE:signaling:mod_init: initializing module ...
Nov 22 13:56:29 dialplan02 /usr/sbin/opensips[37151]: CRITICAL:core:sig_usr: segfault in process pid: 37151, id: 13
Nov 22 13:56:30 dialplan02 /usr/sbin/opensips[37161]: CRITICAL:core:handle_worker: dead child 13 (EOF received), pid 3715
```
Stacktrace:
[core.37151.txt](https://github.com/OpenSIPS/opensips/files/2616192/core.37151.txt)
I am using version: opensips 2.4.3 (x86_64/linux)
flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535
poll method support: poll, epoll, sigio_rt, select.
main.c compiled on with gcc 4.9.2
Memory parameters: -m 1024 -M 128
I can reproduce this | 1.0 | segfaults when using async rest_get at - I am doing some testing where async functions with rest_get are involved.
Using this I get reproducible segfaults from the opensips after around 80.000 or 300.000 calls.
Currently this happens within an isolated sipp testing environment. I modified several parts of the opensips.cfg but the issue is reproducible whenever the rest_get is used.
Tests done so far:
Scenario without rest_get: works fine; 27M calls at 500 cps without issues
Scenario with rest_get: depending on the call-rate, segfaults after 80.000 or 300.000 calls
```
Nov 26 09:39:52 dialplan02 /usr/sbin/opensips[11452]: NOTICE:signaling:mod_init: initializing module ...
Nov 26 09:42:26 dialplan02 /usr/sbin/opensips[11463]: CRITICAL:core:process_lumps: #012>>> ADD|SUBST|OPT#012It seems you have hit a programming bug.#012Please help us make OpenSIPS better by reporting it at https://github.com/OpenSIPS/opensips/issues
Nov 26 09:42:26 dialplan02 /usr/sbin/opensips[11463]: CRITICAL:core:process_lumps: #012>>> ADD|SUBST|OPT#012It seems you have hit a programming bug.#012Please help us make OpenSIPS better by reporting it at https://github.com/OpenSIPS/opensips/issues
Nov 26 09:42:29 dialplan02 /usr/sbin/opensips[11465]: CRITICAL:core:sig_usr: segfault in process pid: 11465, id: 13
Nov 26 09:42:31 dialplan02 /usr/sbin/opensips[11475]: CRITICAL:core:handle_worker: dead child 13 (EOF received), pid 11465
```
Stacktrace: [core.11465.txt](https://github.com/OpenSIPS/opensips/files/2616191/core.11465.txt)
Another incarnation of this:
```
Nov 22 13:43:26 dialplan02 /usr/sbin/opensips[37137]: NOTICE:core:main: version: opensips 2.4.3 (x86_64/linux)
Nov 22 13:43:26 dialplan02 /usr/sbin/opensips[37137]: NOTICE:signaling:mod_init: initializing module ...
Nov 22 13:56:29 dialplan02 /usr/sbin/opensips[37151]: CRITICAL:core:sig_usr: segfault in process pid: 37151, id: 13
Nov 22 13:56:30 dialplan02 /usr/sbin/opensips[37161]: CRITICAL:core:handle_worker: dead child 13 (EOF received), pid 3715
```
Stacktrace:
[core.37151.txt](https://github.com/OpenSIPS/opensips/files/2616192/core.37151.txt)
I am using version: opensips 2.4.3 (x86_64/linux)
flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC, FAST_LOCK-ADAPTIVE_WAIT
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16, MAX_URI_SIZE 1024, BUF_SIZE 65535
poll method support: poll, epoll, sigio_rt, select.
main.c compiled on with gcc 4.9.2
Memory parameters: -m 1024 -M 128
I can reproduce this | priority | segfaults when using async rest get at i am doing some testing where async functions with rest get are involved using this i get reproducible segfaults from the opensips after around or calls currently this happens within an isolated sipp testing environment i modified several parts of the opensips cfg but the issue is reproducible whenever the rest get is used tests done so far scenario without rest get works without issues calls at cps without issues scenario with rest get depending on the call rate segfaults after or calls nov usr sbin opensips notice signaling mod init initializing module nov usr sbin opensips critical core process lumps add subst opt seems you have hit a programming bug help us make opensips better by reporting it at nov usr sbin opensips critical core process lumps add subst opt seems you have hit a programming bug help us make opensips better by reporting it at nov usr sbin opensips critical core sig usr segfault in process pid id nov usr sbin opensips critical core handle worker dead child eof received pid stacktrace another incarnation of this nov usr sbin opensips notice core main version opensips linux nov usr sbin opensips notice signaling mod init initializing module nov usr sbin opensips critical core sig usr segfault in process pid id nov usr sbin opensips critical core handle worker dead child eof received pid stacktrace i using version opensips linux flags stats on disable nagle use mcast shm mmap pkg malloc f malloc fast lock adaptive wait adaptive wait loops max recv buffer size max listen max uri size buf size poll method support poll epoll sigio rt select main c compiled on with gcc memory parameters m m i can reproduce this | 1 |
761,478 | 26,682,708,448 | IssuesEvent | 2023-01-26 19:04:19 | fyusuf-a/ft_transcendence | https://api.github.com/repos/fyusuf-a/ft_transcendence | closed | fix(docker): remove profiles | wontfix HIGH PRIORITY | We need to remove the docker profiles before submitting the project. Otherwise the command `docker-compose up --build` will fail to launch our project properly | 1.0 | fix(docker): remove profiles - We need to remove the docker profiles before submitting the project. Otherwise the command `docker-compose up --build` will fail to launch our project properly | priority | fix docker remove profiles we need to remove the docker profiles before submitting the project otherwise the command docker compose up build will fail to launch our project properly | 1 |
616,843 | 19,322,442,456 | IssuesEvent | 2021-12-14 07:45:49 | google/android-fhir | https://api.github.com/repos/google/android-fhir | closed | Design authentication & authorization for GCP | high priority research Q4 2021 Blocked | **Describe the issue to be researched**
Include any background information and available resources.
**Describe the goal of the research**
What's the desired outcome of this task? What artifacts should be produced?
**Describe the methodology**
Where can more information be found? Who should the assignee approach to ask questions? How can a decision be made?
**Would you like to work on the issue?**
| 1.0 | Design authentication & authorization for GCP - **Describe the issue to be researched**
Include any background information and available resources.
**Describe the goal of the research**
What's the desired outcome of this task? What artifacts should be produced?
**Describe the methodology**
Where can more information be found? Who should the assignee approach to ask questions? How can a decision be made?
**Would you like to work on the issue?**
| priority | design authentication authorization for gcp describe the issue to be researched include any background information and available resources describe the goal of the research what s the desired outcome of this task what artifacts should be produced describe the methodology where can more information be found who should the assignee approach to ask questions how can a decision be made would you like to work on the issue | 1 |
303,543 | 9,308,171,861 | IssuesEvent | 2019-03-25 14:01:59 | FundacionParaguaya/MentorApp | https://api.github.com/repos/FundacionParaguaya/MentorApp | closed | Linkable Drafts and Life Maps | high priority | On the dashboard currently the only types of lifemaps that are "clickable" are drafts.
This is incorrect - the system should be:
Draft - Goes to Draft
Sync pending - Goes to draft
Sync error - Goes to draft
Complete - Goes to completed lifemap within the family profile page
 | 1.0 | Linkable Drafts and Life Maps - On the dashboard currently the only types of lifemaps that are "clickable" are drafts.
This is incorrect - the system should be:
Draft - Goes to Draft
Sync pending - Goes to draft
Sync error - Goes to draft
Complete - Goes to completed lifemap within the family profile page
 | priority | linkable drafts and life maps on the dashboard currently the only types of lifemaps that are clickable are drafts this is incorrect the system should be draft goes to draft sync pending goes to draft sync error goes to draft complete goes to completed lifemap within the family profile page | 1 |
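The status-to-destination rules listed in the issue body amount to a small lookup table; a minimal Python sketch (screen names are invented for illustration, not the MentorApp implementation):

```python
# Illustrative routing table for life-map statuses, per the rules above:
# every non-complete status opens the draft, and only a completed life
# map opens the finished view inside the family profile page.
ROUTES = {
    "draft": "draft",
    "sync_pending": "draft",
    "sync_error": "draft",
    "complete": "family_profile_lifemap",
}

def destination(status):
    """Map a life-map status to the screen it should open on tap."""
    return ROUTES[status]
```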
282,756 | 8,710,125,736 | IssuesEvent | 2018-12-06 15:40:56 | brian-team/brian2 | https://api.github.com/repos/brian-team/brian2 | opened | Synapse generator syntax sometimes incorrect with rand() in condition | bug component: synapses high priority | The following code tries to connect to a neuron with target index 10 (which does not exist), triggered by a `rand()` condition in the `if` part. It does not matter whether this is actually doing anything, e.g. `rand() <= 1.0` triggers the same problem.
```Python
source = NeuronGroup(10, '')
target = NeuronGroup(10, '')
syn = Synapses(source, target)
syn.connect(j='k for k in range(i, i+5) if i < 5 and rand() < 0.5')
```
Of course users should rather use `sample(i, i+5, p=0.5)` here, but 1) it should still work and 2) it might be a symptom of a larger issue. | 1.0 | Synapse generator syntax sometimes incorrect with rand() in condition - The following code tries to connect to a neuron with target index 10 (which does not exist), triggered by a `rand()` condition in the `if` part. It does not matter whether this is actually doing anything, e.g. `rand() <= 1.0` triggers the same problem.
```Python
source = NeuronGroup(10, '')
target = NeuronGroup(10, '')
syn = Synapses(source, target)
syn.connect(j='k for k in range(i, i+5) if i < 5 and rand() < 0.5')
```
Of course users should rather use `sample(i, i+5, p=0.5)` here, but 1) it should still work and 2) it might be a symptom of a larger issue. | priority | synapse generator syntax sometimes incorrect with rand in condition the following code tries to connect to a neuron with target index which does not exist triggered by a rand condition in the if part it does not matter whether this is actually doing anything e g rand triggers the same problem python source neurongroup target neurongroup syn synapses source target syn connect j k for k in range i i if i and rand of course users should rather use sample i i p here but it should still work and it might be a symptom of a larger issue | 1 |
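The semantics the generator string asks for can be mimicked in plain Python, with `random` standing in for Brian's `rand()`. Note the extra `k < n_target` clip, which is exactly the guarantee the buggy path lost when it tried to connect to target index 10. This is an illustrative model, not Brian2 code:

```python
import random

def connect_pairs(n_source, n_target, p=0.5, span=5, seed=0):
    """Mimic j='k for k in range(i, i+span) if i < 5 and rand() < p'.

    Returns (i, j) pairs. Candidate targets k that fall outside the
    target group must be skipped, which is the clipping step a correct
    implementation has to guarantee.
    """
    rng = random.Random(seed)
    pairs = []
    for i in range(n_source):
        for k in range(i, i + span):
            # Short-circuit order matters: rand() is only drawn when
            # the i-condition holds, matching the generator semantics.
            if i < 5 and rng.random() < p and k < n_target:
                pairs.append((i, k))
    return pairs

pairs = connect_pairs(10, 10)
```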
126,572 | 4,997,612,560 | IssuesEvent | 2016-12-09 17:14:25 | tsgrp/HPI | https://api.github.com/repos/tsgrp/HPI | closed | Filtering Search Results - "Snapshot" isn't cleared | High Priority issue New Hire | In this example, a search is done on two different types. When attempting to filter results on the second type, we get an error because the previous "snapshot":
```
//before we filter - create a snapshot of the current state to go back to
snapshotState: function() {
this.snapshot = new OCQuery._Snapshot(_.extend({}, this.records.attributes), this.fullCollection);
},
```
is not reset. This causes the [findWhere](http://backbonejs.org/#Collection-findWhere) to fail because the `currentSearchConfig()` is updated, but the snapshot is not.
See [here](http://g.recordit.co/bZnDVypNAO.gif) (sorry it's a separate link).
Related to #1249 | 1.0 | Filtering Search Results - "Snapshot" isn't cleared - In this example, a search is done on two different types. When attempting to filter results on the second type, we get an error because the previous "snapshot":
```
//before we filter - create a snapshot of the current state to go back to
snapshotState: function() {
this.snapshot = new OCQuery._Snapshot(_.extend({}, this.records.attributes), this.fullCollection);
},
```
is not reset. This causes the [findWhere](http://backbonejs.org/#Collection-findWhere) to fail because the `currentSearchConfig()` is updated, but the snapshot is not.
See [here](http://g.recordit.co/bZnDVypNAO.gif) (sorry it's a separate link).
Related to #1249 | priority | filtering search results snapshot isn t cleared in this example a search is done on two different types when attempting to filter results on the second type we get an error because the previous snapshot before we filter create a snapshot of the current state to go back to snapshotstate function this snapshot new ocquery snapshot extend this records attributes this fullcollection is not reset this causes the to fail because the currentsearchconfig is updates but the snapshot is not see sorry it s a separate link related to | 1 |
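The underlying pattern, snapshot-before-filter plus snapshot invalidation when the search changes, can be shown in a tiny Python analogue of the Backbone collection above (class and method names are invented for illustration; the original fix is simply clearing the stale snapshot when a new search runs):

```python
class SearchResults:
    """Toy analogue of the collection above: capture a snapshot before
    filtering, and drop it whenever a new search replaces the state."""

    def __init__(self, search_config, records):
        self.search_config = search_config
        self.records = list(records)
        self.snapshot = None

    def snapshot_state(self):
        # Before we filter, capture the current state to go back to.
        self.snapshot = (self.search_config, list(self.records))

    def restore(self):
        # Undo the last filter by rolling back to the snapshot.
        self.search_config, self.records = self.snapshot

    def new_search(self, search_config, records):
        self.search_config = search_config
        self.records = list(records)
        # The bug: omitting this reset leaves a snapshot from the
        # previous search type behind, so later lookups run against
        # stale state and fail.
        self.snapshot = None

s = SearchResults("typeA", [1, 2, 3])
s.snapshot_state()
s.new_search("typeB", ["x", "y"])
```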
540,847 | 15,818,122,219 | IssuesEvent | 2021-04-05 15:34:33 | idaholab/Deep-Lynx | https://api.github.com/repos/idaholab/Deep-Lynx | closed | Data Import: Database transaction not releasing correctly | High Priority bug | ## Bug Description
The `process` method inside `src/data_processing/processing.ts` is attempting to roll back and/or complete the database transaction passed to it. In reality, the caller of `process` should be in charge of rolling back/completing database transactions - or the rollback transaction method should stop releasing a client.
## Steps to Reproduce
Attempt to process data; you'll find yourself getting either locking errors or double release errors.
## Impact
High impact, anyone attempting to process data will run into this issue.
| 1.0 | Data Import: Database transaction not releasing correctly - ## Bug Description
The `process` method inside `src/data_processing/processing.ts` is attempting to roll back and/or complete the database transaction passed to it. In reality, the caller of `process` should be in charge of rolling back/completing database transactions - or the rollback transaction method should stop releasing a client.
## Steps to Reproduce
Attempt to process data; you'll find yourself getting either locking errors or double release errors.
## Impact
High impact, anyone attempting to process data will run into this issue.
| priority | data import database transaction not releasing correctly bug description the process method inside src data processing processing ts is attempting to rollback and or complete the database transaction passed to it in reality the caller of process should be in charge of rollbacking completing database transactions or the rollback transaction method should stop releasing a client steps to reproduce attempt to process data you ll find yourself getting either locking errors or double release errors impact high impact anyone attempting to process data will run into this issue | 1 |
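The ownership rule the report asks for, with the caller rather than the callee committing or rolling back, is a standard transaction pattern. A minimal sketch using stdlib `sqlite3` (not the Deep-Lynx TypeScript API):

```python
import sqlite3

def process(conn, value):
    """Do work inside a transaction the CALLER owns: no commit,
    no rollback, and no client/connection release in here."""
    conn.execute("INSERT INTO items (v) VALUES (?)", (value,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (v INTEGER)")

try:
    process(conn, 1)
    conn.commit()      # the caller completes the transaction...
except Exception:
    conn.rollback()    # ...and rolls back exactly once on failure
    raise

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
```

Because only one layer ever finishes or releases the transaction, the double-release and locking errors described above cannot occur.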
489,775 | 14,112,108,113 | IssuesEvent | 2020-11-07 03:19:59 | AY2021S1-CS2103T-T12-1/tp | https://api.github.com/repos/AY2021S1-CS2103T-T12-1/tp | opened | PPP submission | priority.High type.Task | To convert the UG/DG/PPP into PDF format, go to the generated page in your project's github.io site and use this technique to save as a pdf file. Using other techniques can result in poor quality resolution (will be considered a bug) and unnecessarily large files.
Ensure hyperlinks in the pdf files work. Your UG/DG/PPP will be evaluated using PDF files during the PE. Broken/non-working hyperlinks in the PDF files will be considered as bugs and will count against your project score. Again, use the conversion technique given above to ensure links in the PDF files work. | 1.0 | PPP submission - To convert the UG/DG/PPP into PDF format, go to the generated page in your project's github.io site and use this technique to save as a pdf file. Using other techniques can result in poor quality resolution (will be considered a bug) and unnecessarily large files.
Ensure hyperlinks in the pdf files work. Your UG/DG/PPP will be evaluated using PDF files during the PE. Broken/non-working hyperlinks in the PDF files will be considered as bugs and will count against your project score. Again, use the conversion technique given above to ensure links in the PDF files work. | priority | ppp submission to convert the ug dg ppp into pdf format go to the generated page in your project s github io site and use this technique to save as a pdf file using other techniques can result in poor quality resolution will be considered a bug and unnecessarily large files ensure hyperlinks in the pdf files work your ug dg ppp will be evaluated using pdf files during the pe broken non working hyperlinks in the pdf files will be considered as bugs and will count against your project score again use the conversion technique given above to ensure links in the pdf files work | 1 |
514,593 | 14,941,297,794 | IssuesEvent | 2021-01-25 19:32:46 | indianapublicmedia/indianapublicmedia-web | https://api.github.com/repos/indianapublicmedia/indianapublicmedia-web | closed | API: First deployment | enhancement high priority | After a good bit of local development the first pieces of API/DB functionality is finally ready to go on the live server. | 1.0 | API: First deployment - After a good bit of local development the first pieces of API/DB functionality is finally ready to go on the live server. | priority | api first deployment after a good bit of local development the first pieces of api db functionality is finally ready to go on the live server | 1 |
288,614 | 8,849,432,236 | IssuesEvent | 2019-01-08 10:14:32 | eaudeweb/ozone | https://api.github.com/repos/eaudeweb/ozone | closed | Source for importing legacy lab uses data? | Priority: Highest Type: Analysis | quantity_laboratory_analytical_uses in Import, Export, Production etc.
Probably from table LabUses, but it only contains Production and Consumption columns | 1.0 | Source for importing legacy lab uses data? - quantity_laboratory_analytical_uses in Import, Export, Production etc.
Probably from table LabUses, but it only contains Production and Consumption columns | priority | source for importing legacy lab uses data quantity laboratory analytical uses in import export production etc probably from table labuses but it only contains production and consumption columns | 1 |
620,689 | 19,567,614,199 | IssuesEvent | 2022-01-04 04:22:38 | bounswe/2021SpringGroup2 | https://api.github.com/repos/bounswe/2021SpringGroup2 | closed | [Android] Discussion section for Detailed Event Page | type: enhancement priority: high Android | Discussion section for Detailed Event Page needs to be added. | 1.0 | [Android] Discussion section for Detailed Event Page - Discussion section for Detailed Event Page needs to be added. | priority | discussion section for detailed event page discussion section for detailed event page needs to be added | 1 |
280,697 | 8,685,470,488 | IssuesEvent | 2018-12-03 07:52:03 | Spudnik-Group/Spudnik | https://api.github.com/repos/Spudnik-Group/Spudnik | closed | Add support for environment variables for config settings | enhancement priority:high ready-for-review | The bot should look for environment variables only, we should no longer include any files like that with the bot. | 1.0 | Add support for environment variables for config settings - The bot should look for environment variables only, we should no longer include any files like that with the bot. | priority | add support for environment variables for config settings the bot should look for environment variables only we should no longer include any files like that with the bot | 1 |
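A config loader that reads environment variables only might look like the following Python sketch; the variable names `BOT_TOKEN` and `BOT_PREFIX` are assumptions for illustration, not Spudnik's actual settings:

```python
import os

def load_config(env=os.environ):
    """Build bot settings from environment variables only, so no
    config files need to ship with the bot. Required keys raise."""
    try:
        token = env["BOT_TOKEN"]  # hypothetical required variable
    except KeyError:
        raise RuntimeError("BOT_TOKEN environment variable is required")
    return {
        "token": token,
        "prefix": env.get("BOT_PREFIX", "!"),  # optional, with default
    }

cfg = load_config({"BOT_TOKEN": "abc123"})
```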
462,414 | 13,246,763,514 | IssuesEvent | 2020-08-19 16:10:16 | oncokb/oncokb | https://api.github.com/repos/oncokb/oncokb | closed | This summary does not make sense | high priority | The summary seems to come from the Other Tumor Types. But we do have the TT curated under chronic myelogenous leukemia.
https://cbioportal.mskcc.org/patient?studyId=mskimpact&caseId=P-0029673#navCaseIds=mskimpact:P-0029673,mskimpact:P-0030403,mskimpact:P-0032770,mskimpact:P-0036081,mskimpact:P-0037851

| 1.0 | This summary does not make sense - The summary seems to come from the Other Tumor Types. But we do have the TT curated under chronic myelogenous leukemia.
https://cbioportal.mskcc.org/patient?studyId=mskimpact&caseId=P-0029673#navCaseIds=mskimpact:P-0029673,mskimpact:P-0030403,mskimpact:P-0032770,mskimpact:P-0036081,mskimpact:P-0037851

| priority | this summary does not make sense the summary seems come from the other tumor types but we do have the tt curated under chronic myelogenous leukemia | 1 |
634,986 | 20,376,240,831 | IssuesEvent | 2022-02-21 15:53:40 | Eventhood/Eventhood-app | https://api.github.com/repos/Eventhood/Eventhood-app | closed | [Feature ✨] - User login | Priority: High Status: Needs Review | ## User Story
As a user, I want to authenticate to the application
## Description
The system allows for input of user information as a user in the system:
- Username/email
- Password
The system should allow for users to input information that will be verified to determine if the user exists and that the inputted information is valid
## Acceptance Criteria
The user registration should have:
- Username/email
- Password
User has entered their account email and password into the login form.
System has validated that both fields have been entered, that the values match user credentials stored in Firebase, and provided the user with an auth token.
## Testing
- Correct credentials
- Right user name and wrong password
- Wrong username
- Make sure the app cannot be accessed before authenticating
## Backend issue
- [ ] https://github.com/Eventhood/Eventhood-app/issues/42 | 1.0 | [Feature ✨] - User login - ## User Story
As a user, I want to authenticate to the application
## Description
The system allows for input of user information as a user in the system:
- Username/email
- Password
The system should allow for users to input information that will be verified to determine if the user exists and that the inputted information is valid
## Acceptance Criteria
The user registration should have:
- Username/email
- Password
User has entered their account email and password into the login form.
System has validated that both fields have been entered, that the values match user credentials stored in Firebase, and provided the user with an auth token.
## Testing
- Correct credentials
- Right user name and wrong password
- Wrong username
- Make sure the app cannot be accessed before authenticating
## Backend issue
- [ ] https://github.com/Eventhood/Eventhood-app/issues/42 | priority | user login user story as a user i want to authenticate to the application description the system allows for input of user information as a user in the system username email password the system should allow for users to input information that will be verified to determine if the user exists and information inputted is valid acceptance criteria the user registration should have username email password user has entered their account email and password into the login form system has validated that both fields have been entered that the values match user credentials stored in firebase and provided the user with an auth token testing correct credentials right user name and wrong password wrong username make sure not able to access the app before authenticate backend issue | 1 |
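The "both fields have been entered" part of the acceptance criteria is a plain validation step that runs before anything is sent to Firebase. A hedged Python sketch (the real app is mobile code talking to Firebase Auth; function names and messages here are invented):

```python
def validate_login(email, password):
    """Return a list of form errors; an empty list means both fields
    are present and the credentials may be sent to the auth backend,
    which then checks them and issues an auth token."""
    errors = []
    if not email or not email.strip():
        errors.append("email is required")
    elif "@" not in email:
        errors.append("email looks invalid")
    if not password:
        errors.append("password is required")
    return errors

ok = validate_login("user@example.com", "hunter2")
bad = validate_login("", "")
```

The test cases listed in the issue (correct credentials, wrong password, wrong username) exercise the backend check; this client-side step only covers the "fields entered" requirement.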
542,121 | 15,839,342,044 | IssuesEvent | 2021-04-07 00:37:25 | PlaceOS/PlaceOS | https://api.github.com/repos/PlaceOS/PlaceOS | closed | Builds failing due to cache issues | meta priority: high status: in progress type: bug | Nightly builds have recently started failing due to a failure in the cache cleanup step. This will likely impact release builds too.
Example failed run: https://github.com/PlaceOS/PlaceOS/actions/runs/720052976. | 1.0 | Builds failing due to cache issues - Nightly builds have recently started failing due to a failure in the cache cleanup step. This will likely impact release builds too.
Example failed run: https://github.com/PlaceOS/PlaceOS/actions/runs/720052976. | priority | builds failing due to cache issues nightly builds have recently started failing due to a failure in the cache cleanup step this will likely impact release builds too example failed run | 1 |
409,160 | 11,957,862,574 | IssuesEvent | 2020-04-04 15:49:41 | Arquisoft/viade_en1a | https://api.github.com/repos/Arquisoft/viade_en1a | opened | Learn how to parse ttl (notification) files | enhancement high priority | We have to learn how to parse these files in order to:
- Nut+Daniel: show in the map the new routes
- Sofia+Lucia: list the notifications and list more clearly the new routes | 1.0 | Learn how to parse ttl (notification) files - We have to learn how to parse these files in order to:
- Nut+Daniel: show in the map the new routes
- Sofia+Lucia: list the notifications and list more clearly the new routes | priority | learn how to parse ttl notification files we have to learn how to parse these files in order to nut daniel show in the map the new routes sofia lucia list the notifications and list more clearly the new routes | 1
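A notification `.ttl` file is a set of RDF triples. The sketch below is a deliberately naive, stdlib-only extractor for the simplest `<s> <p> <o> .` statements; a real implementation should use a proper Turtle parser such as rdflib, since prefixed names, literals and blank nodes are not handled here:

```python
import re

# Matches only the simplest Turtle statements: three angle-bracketed
# IRIs terminated by a dot. Anything fancier is out of scope for
# this sketch.
TRIPLE = re.compile(r"<([^>]+)>\s+<([^>]+)>\s+<([^>]+)>\s*\.")

def parse_simple_ttl(text):
    """Return (subject, predicate, object) tuples found in `text`."""
    return [m.groups() for m in TRIPLE.finditer(text)]

sample = """
<https://ex.org/notif/1> <https://ex.org/vocab#route> <https://ex.org/routes/42> .
"""
triples = parse_simple_ttl(sample)
```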
779,505 | 27,355,239,332 | IssuesEvent | 2023-02-27 12:24:23 | jellywallet/extension | https://api.github.com/repos/jellywallet/extension | closed | Selling 0 DFI | Bug Priority: High | When you accidentally try to sell 0 DFI or whatever, the wallet gets stuck in the pending screen.
<img width="179" alt="image" src="https://user-images.githubusercontent.com/90688059/220535434-2cec3da2-8645-452d-9a6b-5b6e56709a75.png">
| 1.0 | Selling 0 DFI - When you accidentally try to sell 0 DFI or whatever, the wallet gets stuck in the pending screen.
<img width="179" alt="image" src="https://user-images.githubusercontent.com/90688059/220535434-2cec3da2-8645-452d-9a6b-5b6e56709a75.png">
| priority | selling dfi when you accidentally try to sell dfi or whatever the wallet gets stuck in the pending screen img width alt image src | 1 |
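Rejecting a zero (or otherwise invalid) amount before the transaction is built avoids the stuck pending screen. A hedged sketch; the real wallet's checks are more involved:

```python
def validate_sell_amount(amount, balance):
    """Reject amounts that cannot produce a valid transaction:
    zero, negative, non-finite, or more than the available balance."""
    if not (amount > 0):
        # `not (amount > 0)` also catches NaN, since NaN > 0 is False.
        return "amount must be greater than zero"
    if amount > balance:
        return "insufficient balance"
    return None  # OK to build and broadcast the transaction

err_zero = validate_sell_amount(0, 10)
err_ok = validate_sell_amount(1.5, 10)
```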
240,096 | 7,800,390,007 | IssuesEvent | 2018-06-09 08:50:14 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0009086:
allow to send mail to all attending group members | Calendar Feature Request Mantis high priority | **Reported by pschuele on 21 Oct 2013 11:27**
**Version:** Collin (2013.10.1~rc2)
allow to send mail to all attending group members
| 1.0 | 0009086:
allow to send mail to all attending group members - **Reported by pschuele on 21 Oct 2013 11:27**
**Version:** Collin (2013.10.1~rc2)
allow to send mail to all attending group members
| priority | allow to send mail to all attending group members reported by pschuele on oct version collin allow to send mail to all attending group members | 1 |
722,215 | 24,854,905,631 | IssuesEvent | 2022-10-27 00:41:27 | E3SM-Project/zppy | https://api.github.com/repos/E3SM-Project/zppy | closed | Include hemispheric averaging | New feature High priority | Include new hemispheric averaging from NCO. Add figures for hemispheric time-series to global-time-series. To keep backwards-compatibility, we'll have to check the dimensions: if regional dimension is not there, do what we were doing before. Keep `glb` directory structure? | 1.0 | Include hemispheric averaging - Include new hemispheric averaging from NCO. Add figures for hemispheric time-series to global-time-series. To keep backwards-compatibility, we'll have to check the dimensions: if regional dimension is not there, do what we were doing before. Keep `glb` directory structure? | priority | include hemispheric averaging include new hemispheric averaging from nco add figures for hemispheric time series to global time series to keep backwards compatibility we ll have to check the dimensions if regional dimension is not there do what we were doing before keep glb directory structure | 1
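The backwards-compatibility check described, falling back to global-only behaviour when the regional dimension is absent, might look like this; the dimension and region names are guesses for illustration, not zppy's actual schema:

```python
def regions_in(dims):
    """Choose which averages a time-series file provides.

    `dims` is the set of dimension names found in the file. "rgn" is a
    hypothetical name for the regional dimension; files written before
    hemispheric averaging existed lack it and only carry the global
    average, so we do what we were doing before.
    """
    if "rgn" in dims:
        return ["glb", "n", "s"]  # global, northern, southern hemisphere
    return ["glb"]                # pre-regional files: old behaviour

new_style = regions_in({"time", "rgn"})
old_style = regions_in({"time"})
```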
451,594 | 13,039,003,320 | IssuesEvent | 2020-07-28 16:02:32 | isi-vista/adam | https://api.github.com/repos/isi-vista/adam | closed | Chinese learner demonstration | priority-0-high size-medium | We want to create a short write-up about what we've been able to implement in Chinese so far. This includes:
- [x] Curriculum generation
We have generated an M8 curriculum fully in Chinese so this is a starting place.
- [x] Learning basic stuff (nouns and attributes)
- [x] Learning prepositions
- [x] Learning basic verbs
- [x] Learning more complex stuff if we have time (e.g. verbs with dynamic prepositions (contingent on #769), subtle distinctions, etc). | 1.0 | Chinese learner demonstration - We want to create a short write-up about what we've been able to implement in Chinese so far. This includes:
- [x] Curriculum generation
We have generated an M8 curriculum fully in Chinese so this is a starting place.
- [x] Learning basic stuff (nouns and attributes)
- [x] Learning prepositions
- [x] Learning basic verbs
- [x] Learning more complex stuff if we have time (e.g. verbs with dynamic prepositions (contingent on #769), subtle distinctions, etc). | priority | chinese learner demonstration we want to create a short write up about what we ve been able to implement in chinese so far this includes curriculum generation we have generated an curriculum fully in chinese so this is a starting place learning basic stuff nouns and attributes learning prepositions learning basic verbs learning more complex stuff if we have time e g verbs with dynamic prepositions contingent on subtle distinctions etc | 1 |
382,648 | 11,309,854,696 | IssuesEvent | 2020-01-19 15:49:27 | EthereumCommonwealth/Auditing | https://api.github.com/repos/EthereumCommonwealth/Auditing | closed | Jointer token v2. | approved priority: 1 (high) solidity | # Audit request
Jointer token contract.
[Jointer Whitepaper-compressed.pdf](https://github.com/EthereumCommonwealth/Auditing/files/4003275/Jointer.Whitepaper-compressed.pdf)
# Source code
https://github.com/mak2296/JntrToken
# Disclosure policy
[Standard disclosure policy](https://github.com/EthereumCommonwealth/Auditing/blob/master/Standard_disclosure_policy.md).
# Contact information (optional)
Kyle@jointer.io
# Platform
ETH
# Budget
[2.645 Ether](https://etherscan.io/tx/0x9b6672e3810e99cd540968c459721b1967e24f3322fc6f7dba2e58278995d31e)
Reaudit of https://github.com/EthereumCommonwealth/Auditing/issues/420 | 1.0 | Jointer token v2. - # Audit request
Jointer token contract.
[Jointer Whitepaper-compressed.pdf](https://github.com/EthereumCommonwealth/Auditing/files/4003275/Jointer.Whitepaper-compressed.pdf)
# Source code
https://github.com/mak2296/JntrToken
# Disclosure policy
[Standard disclosure policy](https://github.com/EthereumCommonwealth/Auditing/blob/master/Standard_disclosure_policy.md).
# Contact information (optional)
Kyle@jointer.io
# Platform
ETH
# Budget
[2.645 Ether](https://etherscan.io/tx/0x9b6672e3810e99cd540968c459721b1967e24f3322fc6f7dba2e58278995d31e)
Reaudit of https://github.com/EthereumCommonwealth/Auditing/issues/420 | priority | jointer token audit request jointer token contract source code disclosure policy contact information optional kyle jointer io platform eth budget reaudit of | 1 |
704,668 | 24,205,445,638 | IssuesEvent | 2022-09-25 06:27:44 | AY2223S1-CS2103T-W15-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-W15-4/tp | opened | As a user, I can delete certain comments that are no longer relevant | type.Story priority.High | So that old comments do not clutter up the space. | 1.0 | As a user, I can delete certain comments that are no longer relevant - So that old comments do not clutter up the space. | priority | as a user i can delete certain comments that are no longer relevant so that old comments do not clutter up the space | 1 |
597,097 | 18,154,517,762 | IssuesEvent | 2021-09-26 20:57:04 | TechnicPack/TechnicSolder | https://api.github.com/repos/TechnicPack/TechnicSolder | opened | Renaming latest or recommended build doesn't carry over status | Type: Bug Priority: High | If you rename the latest or the recommended build, then there will seemingly be no latest or recommended build (depending on the build you renamed). | 1.0 | Renaming latest or recommended build doesn't carry over status - If you rename the latest or the recommended build, then there will seemingly be no latest or recommended build (depending on the build you renamed). | priority | renaming latest or recommended build doesn t carry over status if you rename the latest or the recommended build then there will seemingly be no latest or recommended build depending on the build you renamed | 1 |
489,478 | 14,107,168,318 | IssuesEvent | 2020-11-06 15:55:15 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | Update migration docs from other platforms to 2.0 | area:rasa-oss :ferris_wheel: priority:high type:docs :book: type:enhancement :sparkles: | Part of this umbrella ticket: https://github.com/RasaHQ/rasa/issues/6474
This will be updated by https://github.com/RasaHQ/rasa/issues/6991 with what we need to update docs for
Dialogflow
LUIS
wit.ai | 1.0 | Update migration docs from other platforms to 2.0 - Part of this umbrella ticket: https://github.com/RasaHQ/rasa/issues/6474
This will be updated by https://github.com/RasaHQ/rasa/issues/6991 with what we need to update docs for
Dialogflow
LUIS
wit.ai | priority | update migration docs from other platforms to part of this umbrella ticket this will be updated by with what we need to update docs for dialogflow luis wit ai | 1 |
315,365 | 9,612,521,647 | IssuesEvent | 2019-05-13 09:06:59 | tsea83-g34/TSEA83_Hardware | https://api.github.com/repos/tsea83-g34/TSEA83_Hardware | closed | Add correct hardware for rjumpreg | high priority | Add correct hardware for rjumpreg, first add to blockdiagram, then implement.
The current implementation seems incorrect and doesn't have any corresponding hardware in the blockdiagram.
High priority. | 1.0 | Add correct hardware for rjumpreg - Add correct hardware for rjumpreg, first add to blockdiagram, then implement.
The current implementation seems incorrect and doesn't have any corresponding hardware in the blockdiagram.
High priority. | priority | add correct hardware for rjumpreg add correct hardware for rjumpreg first add to blockdiagram then implement the current implementation seems incorrect and doesn t have any corresponding hardware in the blockdiagram high priority | 1 |
390,389 | 11,543,100,415 | IssuesEvent | 2020-02-18 08:58:26 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | SupplierCredit created from an item shouldn't display the item code | Bug: development Docs: not needed Effort: small Module: dispensary Priority: high | ## Describe the bug
In a `SupplierCredit` created for an `Item` rather than an `Invoice`, the `ItemCode` column is redundant
### To reproduce
N/A
### Expected behaviour
N/A
### Proposed Solution
N/A
### Version and device info
N/A
### Additional context
N/A
| 1.0 | SupplierCredit created from an item shouldn't display the item code - ## Describe the bug
In a `SupplierCredit` created for an `Item` rather than an `Invoice`, the `ItemCode` column is redundant
### To reproduce
N/A
### Expected behaviour
N/A
### Proposed Solution
N/A
### Version and device info
N/A
### Additional context
N/A
| priority | suppliercredit created from an item shouldn t display the item code describe the bug in a suppliercredit created for an item rather than an invoice the itemcode column is redundant to reproduce n a expected behaviour n a proposed solution n a version and device info n a additional context n a | 1 |
114,241 | 4,622,719,890 | IssuesEvent | 2016-09-27 08:36:43 | commercialhaskell/intero | https://api.github.com/repos/commercialhaskell/intero | closed | Completion does not work for qualified modules | component: intero priority: high type: enhancement | With this code company shows `forM` `forM_` and `forever` as posible completions
```haskell
module Main where
import Control.Monad
main :: IO ()
main = do
for|--pointer is here
putStrLn "hello"
```
But if code is written like this, nothing is shown as completion
```haskell
module Main where
import qualified Control.Monad as CM
main :: IO ()
main = do
CM.for|--pointer is here
putStrLn "hello"
```
Expected: `CM.forM` `CM.forM_` and `CM.forever` are shown as possible completions
Thanks in advance! | 1.0 | Completion does not work for qualified modules - With this code company shows `forM` `forM_` and `forever` as possible completions
```haskell
module Main where
import Control.Monad
main :: IO ()
main = do
for|--pointer is here
putStrLn "hello"
```
But if code is written like this, nothing is shown as completion
```haskell
module Main where
import qualified Control.Monad as CM
main :: IO ()
main = do
CM.for|--pointer is here
putStrLn "hello"
```
Expected: `CM.forM` `CM.forM_` and `CM.forever` are shown as possible completions
Thanks in advance! | priority | completion does not work for qualified modules with this code company shows form form and forever as possible completions haskell module main where import control monad main io main do for pointer is here putstrln hello but if code is written like this nothing is shown as completion haskell module main where import qualified control monad as cm main io main do cm for pointer is here putstrln hello expected cm form cm form and cm forever are shown as possible completions thanks in advance | 1 |
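The behaviour the intero report asks for — completing `CM.for` to `CM.forM`, `CM.forM_`, `CM.forever` — amounts to stripping the qualifier, matching against that module's exports, and re-attaching the qualifier. A minimal language-agnostic sketch of that matching logic in Python (an illustration only, not intero's implementation):

```python
def complete(prefix, aliases):
    """Complete `prefix` against per-alias export lists.

    aliases: dict mapping a module alias (e.g. "CM") to the names
             that module exports. An unqualified prefix is matched
             against every export list.
    """
    if "." in prefix:
        alias, _, stem = prefix.partition(".")
        exports = aliases.get(alias, [])
        # Re-attach the qualifier so the completion is insertable as-is.
        return [f"{alias}.{name}" for name in exports if name.startswith(stem)]
    return [name for names in aliases.values()
            for name in names if name.startswith(prefix)]

exports = {"CM": ["forM", "forM_", "forever", "when"]}
print(complete("CM.for", exports))  # ['CM.forM', 'CM.forM_', 'CM.forever']
print(complete("for", exports))     # ['forM', 'forM_', 'forever']
```

The key point is the re-prefixing step: without it, a completion engine that only knows unqualified names returns nothing for a qualified prefix, which matches the reported symptom.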
691,657 | 23,705,692,031 | IssuesEvent | 2022-08-30 00:41:49 | artesaos/seotools | https://api.github.com/repos/artesaos/seotools | closed | Bump composer.json dependencies to allow for php 8.0+ | feature high priority | For php only 8.0 compatibility:
```json
"require": {
"php": ">=7.1", // Needs Bump to ">=7.1|~8.0.*",
"ext-json": "*",
"illuminate/config": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0",
"illuminate/support": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0"
},
```
For php 8.1 compatibility:
```json
"require": {
"php": ">=7.1|^8.0", // Needs Bump ">=7.1|^8.0",
"ext-json": "*",
"illuminate/config": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0",
"illuminate/support": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0"
},
"require-dev": {
"orchestra/testbench": "~3.8.4 || ^4.0 || ^5.0",
"phpspec/phpspec": "~5.1.1 || ^6.0" // Needs bump to "~5.1.1 || ^6.0 || ^7.0"
},
``` | 1.0 | Bump composer.json dependencies to allow for php 8.0+ - For php only 8.0 compatibility:
```json
"require": {
"php": ">=7.1", // Needs Bump to ">=7.1|~8.0.*",
"ext-json": "*",
"illuminate/config": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0",
"illuminate/support": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0"
},
```
For php 8.1 compatibility:
```json
"require": {
"php": ">=7.1|^8.0", // Needs Bump ">=7.1|^8.0",
"ext-json": "*",
"illuminate/config": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0",
"illuminate/support": "5.8.* || ^6.0 || ^7.0 || ^8.0 || ^9.0"
},
"require-dev": {
"orchestra/testbench": "~3.8.4 || ^4.0 || ^5.0",
"phpspec/phpspec": "~5.1.1 || ^6.0" // Needs bump to "~5.1.1 || ^6.0 || ^7.0"
},
``` | priority | bump composer json dependencies to allow for php for php only compatibility json require php needs bump to ext json illuminate config illuminate support for php compatibility json require php needs bump ext json illuminate config illuminate support require dev orchestra testbench phpspec phpspec needs bump to | 1 |
627,603 | 19,909,797,763 | IssuesEvent | 2022-01-25 16:06:24 | ewels/MultiQC | https://api.github.com/repos/ewels/MultiQC | closed | Could not find RSeQC Section 'gene_body_coverage' | priority: high module: bug | ### Description of bug
Today I updated to the latest development version of MultiQC (`v1.12.dev`), and I noticed an error: `"Could not find RSeQC Section 'gene_body_coverage'"`.
Since I had not noticed this error before, I also parsed the exact same RSeQC output files with the release version (`v1.11`). The error did not occur then...
Any idea what may be causing this? Thanks in advance for having a look!
G
### File that triggers the error
_No response_
### MultiQC Error log
```console
# install latest version (i.e. dev) of MultiQC, and run it.
[guidoh@localhost projectViVa]$ pip install --user --upgrade --force-reinstall git+https://github.com/ewels/MultiQC.git
[guidoh@localhost projectViVa]$ multiqc ./data_out --filename QC_report_ViVa.html --title "QC report ViVa study" --export --force
/// MultiQC 🔍 | v1.12.dev0
| multiqc | Report title:QC report ViVa study
| multiqc | Search path : /mnt/files/guido/projectViVa/data_out
| searching | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 1152/1152
| rseqc | Found 12 read_distribution reports
| rseqc | Could not find RSeQC Section 'gene_body_coverage'
| rseqc | Found 12 inner_distance reports
| rseqc | Found 12 read_duplication reports
| rseqc | Found 12 junction_annotation reports
| rseqc | Found 12 junction_saturation reports
| rseqc | Found 12 infer_experiment reports
| rseqc | Found 12 bam_stat reports
| rseqc | Found 12 tin reports
| picard | Found 12 InsertSizeMetrics reports
| picard | Found 12 RnaSeqMetrics reports
| salmon | Found 12 meta reports
| salmon | Found 12 fragment length distributions
| star | Found 12 reports and 12 gene count files
| fastp | Found 12 reports
| fastqc | Found 24 reports
| multiqc | Compressing plot data
| multiqc | Report : QC_report_ViVa.html
| multiqc | Data : QC_report_ViVa_data
| multiqc | Plots : QC_report_ViVa_plots
| multiqc | MultiQC complete
[guidoh@localhost projectViVa]$
# downgrade (revert back) to release version of MultiQC, and run it again (thus using same input).
[guidoh@localhost projectViVa]$ pip install --user git+https://github.com/ewels/MultiQC.git@v1.11
[guidoh@localhost projectViVa]$ multiqc ./data_out --filename QC_report_ViVa.html --title "QC report ViVa study" --export --force
/// MultiQC 🔍 | v1.11
| multiqc | Report title: QC report ViVa study
| multiqc | Search path : /mnt/files/guido/projectViVa/data_out
| searching | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 1152/1152
| rseqc | Found 12 read_distribution reports
| rseqc | Found 12 gene_body_coverage reports
| rseqc | Found 12 inner_distance reports
| rseqc | Found 12 read_duplication reports
| rseqc | Found 12 junction_annotation reports
| rseqc | Found 12 junction_saturation reports
| rseqc | Found 12 infer_experiment reports
| rseqc | Found 12 bam_stat reports
| rseqc | Found 12 tin reports
| picard | Found 12 InsertSizeMetrics reports
| picard | Found 12 RnaSeqMetrics reports
| salmon | Found 12 meta reports
| salmon | Found 12 fragment length distributions
| star | Found 12 reports and 12 gene count files
| fastp | Found 12 reports
| fastqc | Found 24 reports
| multiqc | Compressing plot data
| multiqc | Deleting : QC_report_ViVa.html (-f was specified)
| multiqc | Deleting : QC_report_ViVa_data (-f was specified)
| multiqc | Deleting : QC_report_ViVa_plots (-f was specified)
| multiqc | Report : QC_report_ViVa.html
| multiqc | Data : QC_report_ViVa_data
| multiqc | Plots : QC_report_ViVa_plots
| multiqc | MultiQC complete
[guidoh@localhost projectViVa]$
```
| 1.0 | Could not find RSeQC Section 'gene_body_coverage' - ### Description of bug
Today I updated to the latest development version of MultiQC (`v1.12.dev`), and I noticed an error: `"Could not find RSeQC Section 'gene_body_coverage'"`.
Since I had not noticed this error before, I also parsed the exact same RSeQC output files with the release version (`v1.11`). The error did not occur then...
Any idea what may be causing this? Thanks in advance for having a look!
G
### File that triggers the error
_No response_
### MultiQC Error log
```console
# install latest version (i.e. dev) of MultiQC, and run it.
[guidoh@localhost projectViVa]$ pip install --user --upgrade --force-reinstall git+https://github.com/ewels/MultiQC.git
[guidoh@localhost projectViVa]$ multiqc ./data_out --filename QC_report_ViVa.html --title "QC report ViVa study" --export --force
/// MultiQC 🔍 | v1.12.dev0
| multiqc | Report title:QC report ViVa study
| multiqc | Search path : /mnt/files/guido/projectViVa/data_out
| searching | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 1152/1152
| rseqc | Found 12 read_distribution reports
| rseqc | Could not find RSeQC Section 'gene_body_coverage'
| rseqc | Found 12 inner_distance reports
| rseqc | Found 12 read_duplication reports
| rseqc | Found 12 junction_annotation reports
| rseqc | Found 12 junction_saturation reports
| rseqc | Found 12 infer_experiment reports
| rseqc | Found 12 bam_stat reports
| rseqc | Found 12 tin reports
| picard | Found 12 InsertSizeMetrics reports
| picard | Found 12 RnaSeqMetrics reports
| salmon | Found 12 meta reports
| salmon | Found 12 fragment length distributions
| star | Found 12 reports and 12 gene count files
| fastp | Found 12 reports
| fastqc | Found 24 reports
| multiqc | Compressing plot data
| multiqc | Report : QC_report_ViVa.html
| multiqc | Data : QC_report_ViVa_data
| multiqc | Plots : QC_report_ViVa_plots
| multiqc | MultiQC complete
[guidoh@localhost projectViVa]$
# downgrade (revert back) to release version of MultiQC, and run it again (thus using same input).
[guidoh@localhost projectViVa]$ pip install --user git+https://github.com/ewels/MultiQC.git@v1.11
[guidoh@localhost projectViVa]$ multiqc ./data_out --filename QC_report_ViVa.html --title "QC report ViVa study" --export --force
/// MultiQC 🔍 | v1.11
| multiqc | Report title: QC report ViVa study
| multiqc | Search path : /mnt/files/guido/projectViVa/data_out
| searching | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 1152/1152
| rseqc | Found 12 read_distribution reports
| rseqc | Found 12 gene_body_coverage reports
| rseqc | Found 12 inner_distance reports
| rseqc | Found 12 read_duplication reports
| rseqc | Found 12 junction_annotation reports
| rseqc | Found 12 junction_saturation reports
| rseqc | Found 12 infer_experiment reports
| rseqc | Found 12 bam_stat reports
| rseqc | Found 12 tin reports
| picard | Found 12 InsertSizeMetrics reports
| picard | Found 12 RnaSeqMetrics reports
| salmon | Found 12 meta reports
| salmon | Found 12 fragment length distributions
| star | Found 12 reports and 12 gene count files
| fastp | Found 12 reports
| fastqc | Found 24 reports
| multiqc | Compressing plot data
| multiqc | Deleting : QC_report_ViVa.html (-f was specified)
| multiqc | Deleting : QC_report_ViVa_data (-f was specified)
| multiqc | Deleting : QC_report_ViVa_plots (-f was specified)
| multiqc | Report : QC_report_ViVa.html
| multiqc | Data : QC_report_ViVa_data
| multiqc | Plots : QC_report_ViVa_plots
| multiqc | MultiQC complete
[guidoh@localhost projectViVa]$
```
| priority | could not find rseqc section gene body coverage description of bug today i updated to the latest development version of multiqc dev and i noticed an error could not find rseqc section gene body coverage since i had not noticed this error before i also parsed the exact same rseqc output files with the release version the error did not occur then any idea what may be causing this thanks in advance for having a look g file that triggers the error no response multiqc error log console install latest version i e dev of multiqc and run it pip install user upgrade force reinstall git multiqc data out filename qc report viva html title qc report viva study export force multiqc 🔍 multiqc report title qc report viva study multiqc search path mnt files guido projectviva data out searching ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ rseqc found read distribution reports rseqc could not find rseqc section gene body coverage rseqc found inner distance reports rseqc found read duplication reports rseqc found junction annotation reports rseqc found junction saturation reports rseqc found infer experiment reports rseqc found bam stat reports rseqc found tin reports picard found insertsizemetrics reports picard found rnaseqmetrics reports salmon found meta reports salmon found fragment length distributions star found reports and gene count files fastp found reports fastqc found reports multiqc compressing plot data multiqc report qc report viva html multiqc data qc report viva data multiqc plots qc report viva plots multiqc multiqc complete downgrade revert back to release version of multiqc and run it again thus using same input pip install user git multiqc data out filename qc report viva html title qc report viva study export force multiqc 🔍 multiqc report title qc report viva study multiqc search path mnt files guido projectviva data out searching ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ rseqc found read distribution reports rseqc found gene body coverage reports rseqc found inner distance reports rseqc found read duplication reports rseqc found junction annotation reports rseqc found junction saturation reports rseqc found infer experiment reports rseqc found bam stat reports rseqc found tin reports picard found insertsizemetrics reports picard found rnaseqmetrics reports salmon found meta reports salmon found fragment length distributions star found reports and gene count files fastp found reports fastqc found reports multiqc compressing plot data multiqc deleting qc report viva html f was specified multiqc deleting qc report viva data f was specified multiqc deleting qc report viva plots f was specified multiqc report qc report viva html multiqc data qc report viva data multiqc plots qc report viva plots multiqc multiqc complete | 1 |
592,216 | 17,872,638,301 | IssuesEvent | 2021-09-06 18:31:00 | mZubeldia/easy-delivery | https://api.github.com/repos/mZubeldia/easy-delivery | closed | home: tasks | high priority | Pickup date.
Delivery date
Pickup address.
Delivery address.
 | 1.0 | home: tasks - Pickup date.
Delivery date
Pickup address.
Delivery address.
 | priority | home tasks pickup date delivery date pickup address delivery address | 1 |
240,149 | 7,800,491,149 | IssuesEvent | 2018-06-09 10:04:25 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0009630:
sanitize attender quantity | Bug Calendar Mantis high priority | **Reported by pschuele on 3 Feb 2014 19:47**
**Version:** Collin (2013.10.4)
sanitize attender quantity as client might try to update with empty quantity
**Additional information:** 4bc79 cmohr - 2014-01-23T08:44:11+00:00 INFO (6): Tinebase_Controller_Record_Abstract::_handleRecordCreateOrUpdateException::563 SQLSTATE[HY000]: General error: 1366 Incorrect integer value: '' for column 'quantity' at
row 1
4bc79 cmohr - 2014-01-23T08:44:11+00:00 DEBUG (7): Calendar_Controller_Event::update::500 Rolling back because: exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[HY000]: General error: 1366 Incorrect integer value: '' for column 'quantity' at row 1' in /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Statement/Pdo.php:238
Stack trace:
#0 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Statement.php(284): Zend_Db_Statement_Pdo->_execute(Array)
#1 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Abstract.php(468): Zend_Db_Statement->execute(Array)
#2 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('UPDATE `tine20_...', Array)
#3 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Abstract.php(604): Zend_Db_Adapter_Pdo_Abstract->query('UPDATE `tine20_...', Array)
#4 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/Tinebase/Backend/Sql/Abstract.php(1090): Zend_Db_Adapter_Abstract->update('tine20_cal_atte...', Array, Array)
#5 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/Calendar/Backend/Sql.php(667): Tinebase_Backend_Sql_Abstract->update(Object(Calendar_Model_Attender))
| 1.0 | 0009630:
sanitize attender quantity - **Reported by pschuele on 3 Feb 2014 19:47**
**Version:** Collin (2013.10.4)
sanitize attender quantity as client might try to update with empty quantity
**Additional information:** 4bc79 cmohr - 2014-01-23T08:44:11+00:00 INFO (6): Tinebase_Controller_Record_Abstract::_handleRecordCreateOrUpdateException::563 SQLSTATE[HY000]: General error: 1366 Incorrect integer value: '' for column 'quantity' at
row 1
4bc79 cmohr - 2014-01-23T08:44:11+00:00 DEBUG (7): Calendar_Controller_Event::update::500 Rolling back because: exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[HY000]: General error: 1366 Incorrect integer value: '' for column 'quantity' at row 1' in /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Statement/Pdo.php:238
Stack trace:
#0 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Statement.php(284): Zend_Db_Statement_Pdo->_execute(Array)
#1 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Abstract.php(468): Zend_Db_Statement->execute(Array)
#2 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('UPDATE `tine20_...', Array)
#3 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/library/Zend/Db/Adapter/Abstract.php(604): Zend_Db_Adapter_Pdo_Abstract->query('UPDATE `tine20_...', Array)
#4 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/Tinebase/Backend/Sql/Abstract.php(1090): Zend_Db_Adapter_Abstract->update('tine20_cal_atte...', Array, Array)
#5 /opt/local/tine/2013.10.4metaways2-rechnungen/htdocs/Calendar/Backend/Sql.php(667): Tinebase_Backend_Sql_Abstract->update(Object(Calendar_Model_Attender))
| priority | sanitize attender quantity reported by pschuele on feb version collin sanitize attender quantity as client might try to update with empty quantity additional information cmohr info tinebase controller record abstract handlerecordcreateorupdateexception sqlstate general error incorrect integer value for column quantity at row cmohr debug calendar controller event update rolling back because exception zend db statement exception with message sqlstate general error incorrect intege r value for column quantity at row in opt local tine rechnungen htdocs library zend db statement pdo php stack trace opt local tine rechnungen htdocs library zend db statement php zend db statement pdo gt execute array opt local tine rechnungen htdocs library zend db adapter abstract php zend db statement gt execute array opt local tine rechnungen htdocs library zend db adapter pdo abstract php zend db adapter abstract gt query update array opt local tine rechnungen htdocs library zend db adapter abstract php zend db adapter pdo abstract gt query update array opt local tine rechnungen htdocs tinebase backend sql abstract php zend db adapter abstract gt update cal atte array array opt local tine rechnungen htdocs calendar backend sql php tinebase backend sql abstract gt update object calendar model attender | 1 |
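The fix the Tine 2.0 record above asks for — coercing an empty attender quantity before it reaches the integer column, instead of letting MySQL raise "Incorrect integer value: ''" — can be sketched as follows. The function name and the default of 1 are assumptions for illustration, not Tine 2.0's actual code:

```python
def sanitize_quantity(value, default=1):
    """Coerce a client-supplied quantity to an int.

    Clients may send '' or None; MySQL in strict mode rejects ''
    for an integer column (error 1366), so we substitute a sane
    default rather than passing the empty value through to SQL.
    """
    if value is None or value == "":
        return default
    return int(value)

# Empty or missing quantities fall back to the default...
assert sanitize_quantity("") == 1
assert sanitize_quantity(None) == 1
# ...while real values are passed through as integers.
assert sanitize_quantity("3") == 3
```

Doing this once at the model boundary keeps the invalid value from ever reaching the `UPDATE` statement shown in the stack trace.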
461,437 | 13,230,156,349 | IssuesEvent | 2020-08-18 09:21:37 | wso2/kubernetes-mi | https://api.github.com/repos/wso2/kubernetes-mi | closed | Add path for Health endpoint | Priority/High Type/Task | **Description:**
It is required to expose health endpoints to external traffic.
**Affected Product Version:**
MI 1.2.0
| 1.0 | Add path for Health endpoint - **Description:**
It is required to expose health endpoints to external traffic.
**Affected Product Version:**
MI 1.2.0
| priority | add path for health endpoint description it is required to expose health endpoints to external traffic affected product version mi | 1 |
521,992 | 15,146,640,545 | IssuesEvent | 2021-02-11 07:42:10 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.netflix.com - see bug description | browser-chrome-mobile ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Chrome Mobile 88.0.4324 -->
<!-- @ua_header: Mozilla/5.0 (Linux; Android 8.1.0; vivo 1811) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.93 Mobile Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/66933 -->
**URL**: https://www.netflix.com
**Browser / Version**: Chrome Mobile 88.0.4324
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: b11-pdc.enstage-sas.com error occurring
**Steps to Reproduce**:
I tried to update my payment but this error kept popping up, i have received the otp but the screen shows this error over and over again.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.netflix.com - see bug description - <!-- @browser: Chrome Mobile 88.0.4324 -->
<!-- @ua_header: Mozilla/5.0 (Linux; Android 8.1.0; vivo 1811) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.93 Mobile Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/66933 -->
**URL**: https://www.netflix.com
**Browser / Version**: Chrome Mobile 88.0.4324
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: b11-pdc.enstage-sas.com error occurring
**Steps to Reproduce**:
I tried to update my payment but this error kept popping up, i have received the otp but the screen shows this error over and over again.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | see bug description url browser version chrome mobile operating system android tested another browser yes chrome problem type something else description pdc enstage sas com error occurring steps to reproduce i tried to update my payment but this error kept popping up i have received the otp but the screen shows this error over and over again browser configuration none from with ❤️ | 1 |
829,156 | 31,856,906,295 | IssuesEvent | 2023-09-15 08:08:52 | agency-of-learning/PairApp | https://api.github.com/repos/agency-of-learning/PairApp | closed | [UserMenteeApplication] Build real state flow into mentee application statuses | high priority epic | The current state flow for applications is a placeholder. We need to update this to real states the users want and the business rules for how to move between them. The full flow can be seen [here on notion](https://www.notion.so/toyhammered/Application-Submission-Design-Doc-7a45fd8e25ce4f448e9b47bc00a142b5?pvs=4#aea7590985294f589a22e6a7b70f800c) | 1.0 | [UserMenteeApplication] Build real state flow into mentee application statuses - The current state flow for applications is a placeholder. We need to update this to real states the users want and the business rules for how to move between them. The full flow can be seen [here on notion](https://www.notion.so/toyhammered/Application-Submission-Design-Doc-7a45fd8e25ce4f448e9b47bc00a142b5?pvs=4#aea7590985294f589a22e6a7b70f800c) | priority | build real state flow into mentee application statuses the current state flow for applications is a placeholder we need to update this to real states the users want and the business rules for how to move between them the full flow can be seen | 1 |
506,819 | 14,673,431,354 | IssuesEvent | 2020-12-30 13:05:44 | tarantool/doc | https://api.github.com/repos/tarantool/doc | closed | [36pt] Provide an example to use yielding routines in finalizer | 1.10 high_priority server user_guide | Fiber switch is forbidden in `__gc` metamethod since [this change](https://github.com/tarantool/tarantool/issues/4518#issuecomment-704259323). However, one may need to use a yielding function to finalize the resources (e.g. to close the socket). We should provide an example of the proper way to implement such methods.
Since one can't use an explicitly yielding function in the `__gc` metamethod, this action should be scheduled so the platform is guaranteed to execute it. @Gerold103 already filed [the ticket](https://github.com/tarantool/tarantool/issues/5544) regarding the Tarantool-wide scheduler for such lightweight tasks, __but__
1. This feature will be prioritized, designed, implemented and only then released, but one already can't yield in the scope of the finalizer __now__
2. Even after being implemented, this feature won't be backported to 1.10, but fiber switch in `__gc` is forbidden in LTS versions starting from [1.10.7-47-g8099cb053](https://github.com/tarantool/tarantool/commit/8099cb053)
---
Here is a short example implementing a valid finalizer for a particular FFI `<custom_t>` type.
* `simple.lua`
```lua
local ffi = require('ffi')
local fiber = require('fiber')
ffi.cdef('struct custom { int a; };')
local function __custom_gc(self)
print(("Entered custom GC finalizer for %s... (before yield)"):format(self.a))
fiber.yield()
print(("Leaving custom GC finalizer for %s... (after yield)"):format(self.a))
end
local custom_t = ffi.metatype('struct custom', {
__gc = function(self)
-- XXX: Do not invoke yielding functions in __gc metamethod.
-- Create a new fiber to be run after the execution leaves
-- this routine.
fiber.new(__custom_gc, self)
print(("Finalization is scheduled for %s..."):format(self.a))
end
})
-- Create a cdata object of <custom_t> type.
local c = custom_t(42)
-- Remove a single reference to that object.
c = nil
-- Run full GC cycle to purge the unreferenced object.
collectgarbage('collect')
-- > Finalization is scheduled for 42...
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
fiber.yield()
-- > Entered custom GC finalizer for 42... (before yield)
-- > Leaving custom GC finalizer for 42... (after yield)
```
---
Here is another simple example implementing a valid finalizer for a particular `<struct custom>` user type.
* `custom.c`
```c
#include <lauxlib.h>
#include <lua.h>
#include <module.h>
#include <stdio.h>
struct custom {
int a;
};
const char *CUSTOM_MTNAME = "CUSTOM_MTNAME";
/*
* XXX: Do not invoke yielding functions in __gc metamethod.
* Create a new fiber to be run after the execution leaves
* this routine. Unfortunately we can't pass the parameters to the
* routine to be executed by the created fiber via <fiber_new_ex>.
* So there is a workaround to load the Lua code below to create
* __gc metamethod passing the object for finalization via Lua
* stack to the spawned fiber.
*/
const char *gc_wrapper_constructor = " local fiber = require('fiber') "
" print('constructor is initialized') "
" return function(__custom_gc) "
" print('constructor is called') "
" return function(self) "
" print('__gc is called') "
" fiber.new(__custom_gc, self) "
" print('Finalization is scheduled') "
" end "
" end "
;
int custom_gc(lua_State *L) {
struct custom *self = luaL_checkudata(L, 1, CUSTOM_MTNAME);
printf("Entered custom_gc for %d... (before yield)\n", self->a);
fiber_sleep(0);
printf("Leaving custom_gc for %d... (after yield)\n", self->a);
return 0;
}
int custom_new(lua_State *L) {
struct custom *self = lua_newuserdata(L, sizeof(struct custom));
luaL_getmetatable(L, CUSTOM_MTNAME);
lua_setmetatable(L, -2);
self->a = lua_tonumber(L, 1);
return 1;
}
static const struct luaL_Reg libcustom_methods [] = {
{ "new", custom_new },
{ NULL, NULL }
};
int luaopen_custom(lua_State *L) {
int rc;
/* Create metatable for struct custom type */
luaL_newmetatable(L, CUSTOM_MTNAME);
/*
* Run the constructor initializer for GC finalizer:
* - load fiber module as an upvalue for GC finalizer
* constructor
* - return GC finalizer constructor on the top of the
* Lua stack
*/
rc = luaL_dostring(L, gc_wrapper_constructor);
/*
* Check whether constructor is initialized (i.e. neither
* syntax nor runtime error is raised).
*/
if (rc != LUA_OK)
luaL_error(L, "test module loading failed: constructor init");
/*
* Create GC object for <custom_gc> function to be called
* in scope of the GC finalizer and push it on top of the
* constructor returned before.
*/
lua_pushcfunction(L, custom_gc);
/*
* Run the constructor with <custom_gc> GCfunc object as
* a single argument. As a result GC finalizer is returned
* on the top of the Lua stack.
*/
rc = lua_pcall(L, 1, 1, 0);
/*
* Check whether GC finalizer is created (i.e. neither
* syntax nor runtime error is raised).
*/
if (rc != LUA_OK)
luaL_error(L, "test module loading failed: __gc init");
/*
* Assign the returned function as a __gc metamethod to
* custom type metatable.
*/
lua_setfield(L, -2, "__gc");
/*
* Initialize Lua table for custom module and fill it
* with the custom methods.
*/
lua_newtable(L);
luaL_register(L, NULL, libcustom_methods);
return 1;
}
```
* `simple-c.lua`
```lua
-- Load custom Lua C extension.
local custom = require('custom')
-- > constructor is initialized
-- > constructor is called
-- Create a userdata object of <struct custom> type.
local c = custom.new(9)
-- Remove a single reference to that object.
c = nil
-- Run full GC cycle to purge the unreferenced object.
collectgarbage('collect')
-- > __gc is called
-- > Finalization is scheduled
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
require('fiber').yield()
-- > Entered custom_gc for 9... (before yield)
-- XXX: Finalizer yields the execution, so now we are here.
print('We are here')
-- > We are here
-- XXX: This fiber finishes its execution, so yield to the
-- remaining fiber to finish the postponed finalization.
-- > Leaving custom_gc for 9... (after yield)
```
---
It's worth mentioning that such an implementation increases pressure on the platform by creating a new fiber on each `__gc` call. To prevent spawning excess fibers, it's better to start a single "scheduler" fiber and provide an interface to postpone the required async action. The module `sched.lua` itself and its usage in `init.lua` are shown below.
* `sched.lua`
```lua
local fiber = require('fiber')
local worker_next_task = nil
local worker_last_task
local worker_fiber
local worker_cv = fiber.cond()
-- XXX: the module is not ready for reloading, so worker_fiber is
-- respawned when sched.lua is purged from package.loaded.
--
-- Worker is a singleton fiber for not urgent delayed execution of
-- functions. Main purpose - schedule execution of a function,
-- which is going to yield, from a context, where a yield is not
-- allowed. Such as an FFI object's GC callback.
--
local function worker_f()
while true do
local task
while true do
task = worker_next_task
if task then break end
-- XXX: Make the fiber wait until the task is added.
worker_cv:wait()
end
worker_next_task = task.next
task.f(task.arg)
fiber.yield()
end
end
local function worker_safe_f()
pcall(worker_f)
-- The function <worker_f> never returns. If the execution is
-- here, this fiber is probably canceled and now is not able to
-- sleep. Create a new one.
worker_fiber = fiber.new(worker_safe_f)
end
worker_fiber = fiber.new(worker_safe_f)
local function worker_schedule_task(f, arg)
local task = { f = f, arg = arg }
if not worker_next_task then
worker_next_task = task
else
worker_last_task.next = task
end
worker_last_task = task
worker_cv:signal()
end
return {
postpone = worker_schedule_task
}
```
* `init.lua`
```lua
local ffi = require('ffi')
local fiber = require('fiber')
local sched = require('sched')
local function __custom_gc(self)
print(("Entered custom GC finalizer for %s... (before yield)"):format(self.a))
fiber.yield()
print(("Leaving custom GC finalizer for %s... (after yield)"):format(self.a))
end
ffi.cdef('struct custom { int a; };')
local custom_t = ffi.metatype('struct custom', {
__gc = function(self)
-- XXX: Do not invoke yielding functions in __gc metamethod.
-- Schedule __custom_gc call via sched.postpone to be run
-- after the execution leaves this routine.
sched.postpone(__custom_gc, self)
print(("Finalization is scheduled for %s..."):format(self.a))
end
})
-- Create several <custom_t> objects to be finalized later.
local t = { }
for i = 1, 10 do t[i] = custom_t(i) end
-- Run full GC cycle to collect the existing garbage. Nothing is
-- going to be printed, since the table <t> is still "alive".
collectgarbage('collect')
-- Remove the reference to the table and ergo all references to
-- the objects.
t = nil
-- Run full GC cycle to collect the table and objects inside it.
-- As a result all <custom_t> objects are scheduled for further
-- finalization, but the finalizer itself (i.e. __custom_gc
-- functions) is not called.
collectgarbage('collect')
-- > Finalization is scheduled for 10...
-- > Finalization is scheduled for 9...
-- > ...
-- > Finalization is scheduled for 2...
-- > Finalization is scheduled for 1...
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
fiber.yield()
-- > Entered custom GC finalizer for 10... (before yield)
-- XXX: Oops, we are here now, since the scheduler fiber yielded
-- the execution to this one. Check this out.
print("We're here now. Let's continue the scheduled finalization.")
-- > We're here now. Let's continue the scheduled finalization.
-- OK, wait a second to allow the scheduler to cleanup the
-- remaining garbage.
fiber.sleep(1)
-- > Leaving custom GC finalizer for 10... (after yield)
-- > Entered custom GC finalizer for 9... (before yield)
-- > Leaving custom GC finalizer for 9... (after yield)
-- > ...
-- > Entered custom GC finalizer for 1... (before yield)
-- > Leaving custom GC finalizer for 1... (after yield)
print("Did we finish? I guess so.")
-- > Did we finish? I guess so.
-- Stop the instance.
os.exit(0)
```
| 1.0 | [36pt] Provide an example to use yielding routines in finalizer - Fiber switch is forbidden in the `__gc` metamethod since [this change](https://github.com/tarantool/tarantool/issues/4518#issuecomment-704259323). However, one may need to use a yielding function to finalize resources (e.g. to close a socket). We should provide an example showing the proper way to implement such methods.
Since one can't use an explicitly yielding function in the `__gc` metamethod, this action should be scheduled so that the platform guarantees its execution. @Gerold103 already filed [the ticket](https://github.com/tarantool/tarantool/issues/5544) regarding a Tarantool-wide scheduler for such lightweight tasks, __but__
1. This feature will be prioritized, designed, implemented, and only then released, yet one can't yield in the scope of a finalizer __now__
2. Even after being implemented, this feature won't be backported to 1.10, yet fiber switch in `__gc` is forbidden in LTS versions starting from [1.10.7-47-g8099cb053](https://github.com/tarantool/tarantool/commit/8099cb053)
---
Here is a short example implementing a valid finalizer for a particular FFI `<custom_t>` type.
* `simple.lua`
```lua
local ffi = require('ffi')
local fiber = require('fiber')
ffi.cdef('struct custom { int a; };')
local function __custom_gc(self)
print(("Entered custom GC finalizer for %s... (before yield)"):format(self.a))
fiber.yield()
print(("Leaving custom GC finalizer for %s... (after yield)"):format(self.a))
end
local custom_t = ffi.metatype('struct custom', {
__gc = function(self)
-- XXX: Do not invoke yielding functions in __gc metamethod.
-- Create a new fiber to be run after the execution leaves
-- this routine.
fiber.new(__custom_gc, self)
print(("Finalization is scheduled for %s..."):format(self.a))
end
})
-- Create a cdata object of <custom_t> type.
local c = custom_t(42)
-- Remove a single reference to that object.
c = nil
-- Run full GC cycle to purge the unreferenced object.
collectgarbage('collect')
-- > Finalization is scheduled for 42...
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
fiber.yield()
-- > Entered custom GC finalizer for 42... (before yield)
-- > Leaving custom GC finalizer for 42... (after yield)
```
---
Here is another simple example implementing a valid finalizer for a particular `<struct custom>` user type.
* `custom.c`
```c
#include <lauxlib.h>
#include <lua.h>
#include <module.h>
#include <stdio.h>
struct custom {
int a;
};
const char *CUSTOM_MTNAME = "CUSTOM_MTNAME";
/*
* XXX: Do not invoke yielding functions in __gc metamethod.
* Create a new fiber to be run after the execution leaves
* this routine. Unfortunately we can't pass the parameters to the
* routine to be executed by the created fiber via <fiber_new_ex>.
* So there is a workaround to load the Lua code below to create
* __gc metamethod passing the object for finalization via Lua
* stack to the spawned fiber.
*/
const char *gc_wrapper_constructor = " local fiber = require('fiber') "
" print('constructor is initialized') "
" return function(__custom_gc) "
" print('constructor is called') "
" return function(self) "
" print('__gc is called') "
" fiber.new(__custom_gc, self) "
" print('Finalization is scheduled') "
" end "
" end "
;
int custom_gc(lua_State *L) {
struct custom *self = luaL_checkudata(L, 1, CUSTOM_MTNAME);
printf("Entered custom_gc for %d... (before yield)\n", self->a);
fiber_sleep(0);
printf("Leaving custom_gc for %d... (after yield)\n", self->a);
return 0;
}
int custom_new(lua_State *L) {
struct custom *self = lua_newuserdata(L, sizeof(struct custom));
luaL_getmetatable(L, CUSTOM_MTNAME);
lua_setmetatable(L, -2);
self->a = lua_tonumber(L, 1);
return 1;
}
static const struct luaL_Reg libcustom_methods [] = {
{ "new", custom_new },
{ NULL, NULL }
};
int luaopen_custom(lua_State *L) {
int rc;
/* Create metatable for struct custom type */
luaL_newmetatable(L, CUSTOM_MTNAME);
/*
* Run the constructor initializer for GC finalizer:
* - load fiber module as an upvalue for GC finalizer
* constructor
* - return GC finalizer constructor on the top of the
* Lua stack
*/
rc = luaL_dostring(L, gc_wrapper_constructor);
/*
* Check whether constructor is initialized (i.e. neither
* syntax nor runtime error is raised).
*/
if (rc != LUA_OK)
luaL_error(L, "test module loading failed: constructor init");
/*
* Create GC object for <custom_gc> function to be called
* in scope of the GC finalizer and push it on top of the
* constructor returned before.
*/
lua_pushcfunction(L, custom_gc);
/*
* Run the constructor with <custom_gc> GCfunc object as
* a single argument. As a result GC finalizer is returned
* on the top of the Lua stack.
*/
rc = lua_pcall(L, 1, 1, 0);
/*
* Check whether GC finalizer is created (i.e. neither
* syntax nor runtime error is raised).
*/
if (rc != LUA_OK)
luaL_error(L, "test module loading failed: __gc init");
/*
* Assign the returned function as a __gc metamethod to
* custom type metatable.
*/
lua_setfield(L, -2, "__gc");
/*
* Initialize Lua table for custom module and fill it
* with the custom methods.
*/
lua_newtable(L);
luaL_register(L, NULL, libcustom_methods);
return 1;
}
```
* `simple-c.lua`
```lua
-- Load custom Lua C extension.
local custom = require('custom')
-- > constructor is initialized
-- > constructor is called
-- Create a userdata object of <struct custom> type.
local c = custom.new(9)
-- Remove a single reference to that object.
c = nil
-- Run full GC cycle to purge the unreferenced object.
collectgarbage('collect')
-- > __gc is called
-- > Finalization is scheduled
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
require('fiber').yield()
-- > Entered custom_gc for 9... (before yield)
-- XXX: Finalizer yields the execution, so now we are here.
print('We are here')
-- > We are here
-- XXX: This fiber finishes its execution, so yield to the
-- remaining fiber to finish the postponed finalization.
-- > Leaving custom_gc for 9... (after yield)
```
---
It's worth mentioning that such an implementation increases pressure on the platform by creating a new fiber on each `__gc` call. To prevent spawning excess fibers, it's better to start a single "scheduler" fiber and provide an interface to postpone the required async action. The module `sched.lua` itself and its usage in `init.lua` are shown below.
* `sched.lua`
```lua
local fiber = require('fiber')
local worker_next_task = nil
local worker_last_task
local worker_fiber
local worker_cv = fiber.cond()
-- XXX: the module is not ready for reloading, so worker_fiber is
-- respawned when sched.lua is purged from package.loaded.
--
-- Worker is a singleton fiber for not urgent delayed execution of
-- functions. Main purpose - schedule execution of a function,
-- which is going to yield, from a context, where a yield is not
-- allowed. Such as an FFI object's GC callback.
--
local function worker_f()
while true do
local task
while true do
task = worker_next_task
if task then break end
-- XXX: Make the fiber wait until the task is added.
worker_cv:wait()
end
worker_next_task = task.next
task.f(task.arg)
fiber.yield()
end
end
local function worker_safe_f()
pcall(worker_f)
-- The function <worker_f> never returns. If the execution is
-- here, this fiber is probably canceled and now is not able to
-- sleep. Create a new one.
worker_fiber = fiber.new(worker_safe_f)
end
worker_fiber = fiber.new(worker_safe_f)
local function worker_schedule_task(f, arg)
local task = { f = f, arg = arg }
if not worker_next_task then
worker_next_task = task
else
worker_last_task.next = task
end
worker_last_task = task
worker_cv:signal()
end
return {
postpone = worker_schedule_task
}
```
* `init.lua`
```lua
local ffi = require('ffi')
local fiber = require('fiber')
local sched = require('sched')
local function __custom_gc(self)
print(("Entered custom GC finalizer for %s... (before yield)"):format(self.a))
fiber.yield()
print(("Leaving custom GC finalizer for %s... (after yield)"):format(self.a))
end
ffi.cdef('struct custom { int a; };')
local custom_t = ffi.metatype('struct custom', {
__gc = function(self)
-- XXX: Do not invoke yielding functions in __gc metamethod.
-- Schedule __custom_gc call via sched.postpone to be run
-- after the execution leaves this routine.
sched.postpone(__custom_gc, self)
print(("Finalization is scheduled for %s..."):format(self.a))
end
})
-- Create several <custom_t> objects to be finalized later.
local t = { }
for i = 1, 10 do t[i] = custom_t(i) end
-- Run full GC cycle to collect the existing garbage. Nothing is
-- going to be printed, since the table <t> is still "alive".
collectgarbage('collect')
-- Remove the reference to the table and ergo all references to
-- the objects.
t = nil
-- Run full GC cycle to collect the table and objects inside it.
-- As a result all <custom_t> objects are scheduled for further
-- finalization, but the finalizer itself (i.e. __custom_gc
-- functions) is not called.
collectgarbage('collect')
-- > Finalization is scheduled for 10...
-- > Finalization is scheduled for 9...
-- > ...
-- > Finalization is scheduled for 2...
-- > Finalization is scheduled for 1...
-- XXX: There is no finalization made until the running fiber
-- yields its execution. Let's do it now.
fiber.yield()
-- > Entered custom GC finalizer for 10... (before yield)
-- XXX: Oops, we are here now, since the scheduler fiber yielded
-- the execution to this one. Check this out.
print("We're here now. Let's continue the scheduled finalization.")
-- > We're here now. Let's continue the scheduled finalization.
-- OK, wait a second to allow the scheduler to cleanup the
-- remaining garbage.
fiber.sleep(1)
-- > Leaving custom GC finalizer for 10... (after yield)
-- > Entered custom GC finalizer for 9... (before yield)
-- > Leaving custom GC finalizer for 9... (after yield)
-- > ...
-- > Entered custom GC finalizer for 1... (before yield)
-- > Leaving custom GC finalizer for 1... (after yield)
print("Did we finish? I guess so.")
-- > Did we finish? I guess so.
-- Stop the instance.
os.exit(0)
```
| priority | provide an example to use yielding routines in finalizer fiber switch is forbidden in gc metamethod since however one may need to use a yielding function to finalize the resources e g to close the socket we should provide the example with the proper way implementing such methods since one can t use explicitly yielding function in gc metamethod this action should be scheduled to be guaranteed executed by the platform already filed regarding the tarantool wide scheduler for such lightweight tasks but this feature will be prioritized designed implemented and only then released but one can t yield in scope of the finalizer already now even after being implemented this feature won t be backported to but fiber switch in gc is forbidden in lts versions starting from here is the short example to implement the valid finalizer for the particular ffi type simple lua lua local ffi require ffi local fiber require fiber ffi cdef struct custom int a local function custom gc self print entered custom gc finalizer for s before yield format self a fiber yield print leaving custom gc finalizer for s after yield format self a end local custom t ffi metatype struct custom gc function self xxx do not invoke yielding functions in gc metamethod create a new fiber to be run after the execution leaves this routine fiber new custom gc self print finalization is scheduled for s format self a end create a cdata object of type local c custom t remove a single reference to that object c nil run full gc cycle to purge the unreferenced object collectgarbage collect finalization is scheduled for xxx there is no finalization made until the running fiber yields its execution let s do it now fiber yield entered custom gc finalizer for before yield leaving custom gc finalizer for after yield here is another simple example to implement the valid finalizer for the particular user type custom c c include include include include struct custom int a const char custom mtname custom mtname xxx do 
not invoke yielding functions in gc metamethod create a new fiber to be run after the execution leaves this routine unfortunately we can t pass the parameters to the routine to be executed by the created fiber via so there is a workaround to load the lua code below to create gc metamethod passing the object for finalization via lua stack to the spawned fiber const char gc wrapper constructor local fiber require fiber print constructor is initialized return function custom gc print constructor is called return function self print gc is called fiber new custom gc self print finalization is scheduled end end int custom gc lua state l struct custom self lual checkudata l custom mtname printf entered custom gc for d before yield n self a fiber sleep printf leaving custom gc for d after yield n self a return int custom new lua state l struct custom self lua newuserdata l sizeof struct custom lual getmetatable l custom mtname lua setmetatable l self a lua tonumber l return static const struct lual reg libcustom methods new custom new null null int luaopen custom lua state l int rc create metatable for struct custom type lual newmetatable l custom mtname run the constructor initializer for gc finalizer load fiber module as an upvalue for gc finalizer constructor return gc finalizer constructor on the top of the lua stack rc lual dostring l gc wrapper constructor check whether constructor is initialized i e neither syntax nor runtime error is raised if rc lua ok lual error l test module loading failed constructor init create gc object for function to be called in scope of the gc finalizer and push it on top of the constructor returned before lua pushcfunction l custom gc run the constructor with gcfunc object as a single argument as a result gc finalizer is returned on the top of the lua stack rc lua pcall l check whether gc finalizer is created i e neither syntax nor runtime error is raised if rc lua ok lual error l test module loading failed gc init assign the returned 
function as a gc metamethod to custom type metatable lua setfield l gc initialize lua table for custom module and fill it with the custom methods lua newtable l lual register l null libcustom methods return simple c lua lua load custom lua c extension local custom require custom constructor is initialized constructor is called create a userdata object of type local c custom new remove a single reference to that object c nil run full gc cycle to purge the unreferenced object collectgarbage collect gc is called finalization is scheduled xxx there is no finalization made until the running fiber yields its execution let s do it now require fiber yield entered custom gc for before yield xxx finalizer yields the execution so now we are here print we are here we are here xxx this fiber finishes its execution so yield to the remaining fiber to finish the postponed finalization leaving custom gc for after yield it s worth to mention that such implementation increases the pressure to the platform creating a new fiber on each gc call to prevent excess fibers spawning it s better to start a single scheduler fiber and provide the interface to potpone the required async action there are the module sched lua itself and its usage in init lua below sched lua lua local fiber require fiber local worker next task nil local worker last task local worker fiber local worker cv fiber cond xxx the module is not ready for reloading so worker fiber is respawned when sched lua is purged from package loaded worker is a singleton fiber for not urgent delayed execution of functions main purpose schedule execution of a function which is going to yield from a context where a yield is not allowed such as an ffi object s gc callback local function worker f while true do local task while true do task worker next task if task then break end xxx make the fiber wait until the task is added worker cv wait end worker next task task next task f task arg fiber yield end end local function worker safe f 
pcall worker f the function never returns if the execution is here this fiber is probably canceled and now is not able to sleep create a new one worker fiber fiber new worker safe f end worker fiber fiber new worker safe f local function worker schedule task f arg local task f f arg arg if not worker next task then worker next task task else worker last task next task end worker last task task worker cv signal end return postpone worker schedule task init lua lua local ffi require ffi local fiber require fiber local sched require sched local function custom gc self print entered custom gc finalizer for s before yield format self a fiber yield print leaving custom gc finalizer for s after yield format self a end ffi cdef struct custom int a local custom t ffi metatype struct custom gc function self xxx do not invoke yielding functions in gc metamethod schedule custom gc call via sched postpone to be run after the execution leaves this routine sched postpone custom gc self print finalization is scheduled for s format self a end create several objects to be finalized later local t for i do t custom t i end run full gc cycle to collect the existing garbage nothing is going to be printed since the table is still alive collectgarbage collect remove the reference to the table and ergo all references to the objects t nil run full gc cycle to collect the table and objects inside it as a result all objects are scheduled for further finalization but the finalizer itself i e custom gc functions is not called collectgarbage collect finalization is scheduled for finalization is scheduled for finalization is scheduled for finalization is scheduled for xxx there is no finalization made until the running fiber yields its execution let s do it now fiber yield entered custom gc finalizer for before yield xxx oops we are here now since the scheduler fiber yielded the execution to this one check this out print we re here now let s continue the scheduled finalization we re here now let 
s continue the finalization ok wait a second to allow the scheduler to cleanup the remaining garbage fiber sleep leaving custom gc finalizer for after yield entered custom gc finalizer for before yield leaving custom gc finalizer for after yield entered custom gc finalizer for before yield leaving custom gc finalizer for after yield print did we finish i guess so did we finish i guess so stop the intstance os exit | 1 |
535,030 | 15,680,871,593 | IssuesEvent | 2021-03-25 03:57:13 | Psychoanalytic-Electronic-Publishing/PEP-Web-User-Interface | https://api.github.com/repos/Psychoanalytic-Electronic-Publishing/PEP-Web-User-Interface | opened | Expert Pick Graphics, Queue not working | Bug High Priority | I tried to push your latest merge/build to Stage, but now the Expert Pick graphic is not working, nor is the Expert Pick queue.
I tried backing off versions of the server, but that didn't help.
I'm here early in the morning if you want to work it out before I have to leave around 10am... I won't be back until Monday night.
See my note about the upgrade to Documents/Image, which can return the article ID now. Also, we have a small, controlled 51-image set that the random image function is picking from.
You can see it work here:
https://stage-api.pep-web.rocks/docs#/Documents/documents_image_fetch_v2_Documents_Image__imageID___get
Put a * in the ImageID
Set Download to 2 to get the article ID of the random pick
Set Reselect to True to force a new image
Set Reselect to False to always return the same image on a given day.
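The steps above can be sketched as a small URL builder. This is a hypothetical helper: the base path and the parameter names (`download`, `reselect`) are inferred from the Swagger link and the comments above, not from verified API documentation, so treat every name here as an assumption.

```python
from urllib.parse import urlencode

# Assumed endpoint shape, taken from the Swagger UI link above;
# the real API may differ.
BASE = "https://stage-api.pep-web.rocks/v2/Documents/Image"

def image_url(image_id="*", download=2, reselect=True):
    """Build the URL for fetching a (possibly random) expert-pick image.

    Per the notes above (all assumptions): image_id="*" asks for a
    random pick; download=2 returns the article ID of the pick;
    reselect=True forces a new image, while reselect=False returns
    the same image for a given day.
    """
    params = urlencode({"download": download, "reselect": reselect})
    return f"{BASE}/{image_id}/?{params}"
```

For example, `image_url()` yields `https://stage-api.pep-web.rocks/v2/Documents/Image/*/?download=2&reselect=True`, matching the manual Swagger walkthrough described above.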
| 1.0 | Expert Pick Graphics, Queue not working - I tried to push your latest merge/build to Stage, but now the Expert Pick graphic is not working, nor is the Expert Pick queue.
I tried backing off versions of the server, but that didn't help.
I'm here early in the morning if you want to work it out before I have to leave around 10am... I won't be back until Monday night.
See my note about the upgrade to Documents/Image, which can return the article ID now. Also, we have a small, controlled 51-image set that the random image function is picking from.
You can see it work here:
https://stage-api.pep-web.rocks/docs#/Documents/documents_image_fetch_v2_Documents_Image__imageID___get
Put a * in the ImageID
Set Download to 2 to get the article ID of the random pick
Set Reselect to True to force a new image
Set Reselect to False to always return the same image on a given day.
| priority | expert pick graphics queue not working i tried to push your latest merge build to stage but now expert pick graphic is not working nor is the expert pick queue i tried backing off versions of the server but that didn t help i m here early in the morning if you want to work it out before i have to leave around wont be back until monday night see my note about the upgrade to documents image which can return the article id now also we have a small and controlled image set the random image function is picking from you can see it work here put a in the imageid set download to to get the article id of the random pick set reselect to true to force a new image set reselect to false to always return the same image on a given day | 1 |
520,924 | 15,097,440,261 | IssuesEvent | 2021-02-07 18:43:27 | jaredlinklater/KoiosKlicker | https://api.github.com/repos/jaredlinklater/KoiosKlicker | closed | Complete results/leaderboard page | high priority | Show graphs + table that compare current user to other users
Graphs include:
- Bell curve placement
- Average CPS column chart
- Maybe line chart of time between clicks | 1.0 | Complete results/leaderboard page - Show graphs + table that compare current user to other users
Graphs include:
- Bell curve placement
- Average CPS column chart
- Maybe line chart of time between clicks | priority | complete results leaderboard page show graphs table that compare current user to other users graphs include bell curve placement average cps column chart maybe line chart of time between clicks | 1 |
806,912 | 29,926,356,439 | IssuesEvent | 2023-06-22 06:00:55 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Getting Server failed to respond after submitting the self registration request. | Priority/Highest Severity/Blocker bug | **Describe the issue:**
In self registration flow, after filling and submitting the self registration form, user is redirected to an error page saying **Server failed to respond.**
**How to reproduce:**
1. Enable self registration by following https://is.docs.wso2.com/en/6.0.0/guides/identity-lifecycles/self-registration-workflow/
2. Navigate to myaccount login portal.
3. Proceed with Create Account, fill and submit the form.
4. Observe redirection to an error page with **Server failed to respond.**
https://github.com/wso2/product-is/assets/25563417/97d8ade7-f3a9-44d8-901c-2449700a59f6
**Expected behavior:**
User should be able to self register successfully.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: [IS 6.2.0 Alpha 2](https://github.com/wso2/product-is/releases/download/v6.2.0-alpha2/wso2is-6.2.0-alpha2.zip)
- OS: [e.g., Windows, Linux, Mac]
- Database: [e.g., MySQL, H2]
- Userstore: [e.g., LDAP, JDBC]
---
| 1.0 | Getting Server failed to respond after submitting the self registration request. - **Describe the issue:**
In self registration flow, after filling and submitting the self registration form, user is redirected to an error page saying **Server failed to respond.**
**How to reproduce:**
1. Enable self registration by following https://is.docs.wso2.com/en/6.0.0/guides/identity-lifecycles/self-registration-workflow/
2. Navigate to myaccount login portal.
3. Proceed with Create Account, fill and submit the form.
4. Observe redirection to an error page with **Server failed to respond.**
https://github.com/wso2/product-is/assets/25563417/97d8ade7-f3a9-44d8-901c-2449700a59f6
**Expected behavior:**
User should be able to self register successfully.
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: [IS 6.2.0 Alpha 2](https://github.com/wso2/product-is/releases/download/v6.2.0-alpha2/wso2is-6.2.0-alpha2.zip)
- OS: [e.g., Windows, Linux, Mac]
- Database: [e.g., MySQL, H2]
- Userstore: [e.g., LDAP, JDBC]
---
| priority | getting server failed to respond after submitting the self registration request describe the issue in self registration flow after filling and submitting the self registration form user is redirected to an error page saying server failed to respond how to reproduce enable self registration by following navigate to myaccount login portal proceed with create account fill and submit the form observe redirection to an error page with server failed to respond expected behavior user should be able to self register successfully environment information please complete the following information remove any unnecessary fields product version os database userstore | 1 |
427,883 | 12,400,388,507 | IssuesEvent | 2020-05-21 07:44:41 | YJSoft/xe-module-exam | https://api.github.com/repos/YJSoft/xe-module-exam | closed | Deduct the deleted question's points from the total points | priority/high type/bug | Currently, when an exam question is deleted, the exam's total points (total_point) are not reduced by that question's points (point).
The procExamQuestionDelete() function needs to be improved.
In exam.controller.php, lines #328-329:
$output = $this->deleteQuestion($args);
if(!$output->toBool()) return $output;
If this part is changed as follows, total_point is corrected, and thanks to the cache deletion the changed value is applied immediately.
$output = $this->deleteQuestion($args);
if(!$output->toBool()) return $this->makeObject(-1, "msg_invalid_request");
// Recalculate the score by the deleted question's points
$total_point = $examitem->get('total_point') - $questionitem->get('point');
$output = $this->updateTotalPoint($document_srl, $total_point);
if(!$output->toBool()) return $this->makeObject(-1, "Failed to update total_point");
//remove from cache
$oCacheHandler = CacheHandler::getInstance('object');
if($oCacheHandler->isSupport())
{
//remove document item from cache
$cache_key = 'exam_item:'. getNumberingPath($document_srl) . $document_srl;
$oCacheHandler->delete($cache_key);
} | 1.0 | Deduct the deleted question's points from the total points - Currently, when an exam question is deleted, the exam's total points (total_point) are not reduced by that question's points (point).
The procExamQuestionDelete() function needs to be improved.
In exam.controller.php, lines #328-329:
$output = $this->deleteQuestion($args);
if(!$output->toBool()) return $output;
If this part is changed as follows, total_point is corrected, and thanks to the cache deletion the changed value is applied immediately.
$output = $this->deleteQuestion($args);
if(!$output->toBool()) return $this->makeObject(-1, "msg_invalid_request");
// Recalculate the score by the deleted question's points
$total_point = $examitem->get('total_point') - $questionitem->get('point');
$output = $this->updateTotalPoint($document_srl, $total_point);
if(!$output->toBool()) return $this->makeObject(-1, "Failed to update total_point");
//remove from cache
$oCacheHandler = CacheHandler::getInstance('object');
if($oCacheHandler->isSupport())
{
//remove document item from cache
$cache_key = 'exam_item:'. getNumberingPath($document_srl) . $document_srl;
$oCacheHandler->delete($cache_key);
} | priority | deduct the deleted question s points from the total points currently when an exam question is deleted the exam s total points total point are not reduced by that question s points point the procexamquestiondelete function needs to be improved in exam controller php lines output this deletequestion args if output tobool return output if this part is changed as follows total point is corrected and thanks to the cache deletion the changed value is applied immediately output this deletequestion args if output tobool return this makeobject msg invalid request recalculate the score by the deleted question s points total point examitem get total point questionitem get point output this updatetotalpoint document srl total point if output tobool return this makeobject failed to update total point remove from cache ocachehandler cachehandler getinstance object if ocachehandler issupport remove document item from cache cache key exam item getnumberingpath document srl document srl ocachehandler delete cache key | 1
140,640 | 5,413,729,848 | IssuesEvent | 2017-03-01 17:22:23 | mercadopago/px-ios | https://api.github.com/repos/mercadopago/px-ios | closed | When returning control to the integrator's App, the NavigationBar completely loses its style. | bug external priority_high refactor UX wallet | ### Expected Behavior
It is expected that when control is returned to the integrator's App, the NavBar has the same style it had just before handing control over to us.
### Current Behavior
Currently it completely loses its style, disappearing. It is invisible and translucent.
| 1.0 | When returning control to the integrator's App, the NavigationBar completely loses its style. - ### Expected Behavior
It is expected that when control is returned to the integrator's App, the NavBar has the same style it had just before handing control over to us.
### Current Behavior
Currently it completely loses its style, disappearing. It is invisible and translucent.
| priority | when returning control to the integrator s app the navigationbar completely loses its style expected behavior it is expected that when control is returned to the integrator s app the navbar has the same style it had just before handing control over to us current behavior currently it completely loses its style disappearing it is invisible and translucent | 1
200,161 | 7,000,747,942 | IssuesEvent | 2017-12-18 07:10:45 | cdnjs/cdnjs | https://api.github.com/repos/cdnjs/cdnjs | opened | npm auto-update didn't fetch files of library "file-uploader" properly. | Bug High Priority | I've noticed this issue for a while.
The first time npm auto-updater fetched "file-uploader" new version, it always extracted only directory `azure.fine-uploader` as below, so I need to delete it and manually fetch its new version by `./auto-update.js run azure.fine-uploader` (not to run auto-update on all the libraries but only single `file-uploader`), then everything goes well.
```
5.16.0-RC1/
โโโ azure.fine-uploader
├── azure.fine-uploader
├── azure.fine-uploader.core.js
├── azure.fine-uploader.core.min.js
└── azure.fine-uploader.js
Haven't taken a deep look at the root cause yet. | 1.0 | npm auto-update didn't fetch files of library "file-uploader" properly. - I've noticed this issue for a while.
The first time npm auto-updater fetched "file-uploader" new version, it always extracted only directory `azure.fine-uploader` as below, so I need to delete it and manually fetch its new version by `./auto-update.js run azure.fine-uploader` (not to run auto-update on all the libraries but only single `file-uploader`), then everything goes well.
```
5.16.0-RC1/
├── azure.fine-uploader
├── azure.fine-uploader.core.js
├── azure.fine-uploader.core.min.js
└── azure.fine-uploader.js
```
Haven't taken a deep look at the root cause yet. | priority | npm auto update didn t fetch files of library file uploader properly i ve noticed this issue for a while the first time npm auto updater fetched file uploader new version it always extracted only directory azure fine uploader as below so i need to delete it and manually fetch its new version by auto update js run azure fine uploader not to run auto update on all the libraries but only single file uploader then everything goes well ├── azure fine uploader ├── azure fine uploader core js ├── azure fine uploader core min js └── azure fine uploader js haven t taken a deep look at the root cause yet | 1
32,222 | 2,750,936,457 | IssuesEvent | 2015-04-24 04:26:34 | chrislo27/ProjectMP | https://api.github.com/repos/chrislo27/ProjectMP | closed | Lighting graphical error on edge of world - right side | bug high priority | Same as #5 except this only applies to the right edge of the world.
 | 1.0 | Lighting graphical error on edge of world - right side - Same as #5 except this only applies to the right edge of the world.
 | priority | lighting graphical error on edge of world right side same as except this only applies to the right edge of the world | 1 |
743,442 | 25,899,274,324 | IssuesEvent | 2022-12-15 03:06:21 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [Mage] Flame Orb | Class: Mage Spell Priority: High Status: Needs Confirmation | [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:** It spawns underground
**How to reproduce:**
**How it should work:**
**Database links:** I think its correct id https://cata-twinhead.twinstar.cz/?spell=82731
| 1.0 | [Mage] Flame Orb - [//]: # (REMBEMBER! Add links to things related to the bug using for example:)
[//]: # (http://wowhead.com/)
[//]: # (cata-twinhead.twinstar.cz)
**Description:** It spawns underground
**How to reproduce:**
**How it should work:**
**Database links:** I think its correct id https://cata-twinhead.twinstar.cz/?spell=82731
| priority | flame orb rembember add links to things related to the bug using for example cata twinhead twinstar cz description it spawns underground how to reproduce how it should work database links i think its correct id | 1 |
599,219 | 18,268,010,977 | IssuesEvent | 2021-10-04 10:43:50 | MineInAbyss/Graves | https://api.github.com/repos/MineInAbyss/Graves | opened | Graves overwrite BlockLocker containers | priority:high type:bug | If your grave spawns in front of a privated container, it temporarily replaces the sign, and when it normally returns the block or sign, it doesn't store the NBT.
Meaning people can steal from all privated chests | 1.0 | Graves overwrite BlockLocker containers - If your grave spawns in front of a privated container, it temporarily replaces the sign, and when it normally returns the block or sign, it doesn't store the NBT.
Meaning people can steal from all privated chests | priority | graves overwrite blocklocker containers if your grave spawns in front of a privated container it temporarily replaces the sign and when it normally returns the block or sign it doesnt store the nbt meaning people can steal from all privated chests | 1 |
550,693 | 16,130,344,355 | IssuesEvent | 2021-04-29 03:00:44 | woocommerce/woocommerce-gateway-stripe | https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe | closed | Invalid shipping address for UK and Apple Pay | component: payment request buttons priority: high type: bug | Added by @v18:
There's a fix for this in WCPay - it needs to be ported over.
---
**Describe the bug**
Apple Pay anonymises the part of the shipping address (postalCode and address lines) until payment is processed. This leads to `invalid shipping address` error in the payment request when shipping zones configured in a specific way.
I.e. when shipping zones are configured based on zip code regex (e.g. BN11*) it's not detected correctly by payment request due to zip code is truncated by Apple Pay.
More details and screenshots:
Support ticket: 2911575-zen.
Slack chat: p1589288438496600-slack-CNXMMQBDW
**To Reproduce**
1. Create shipping zone with regex-based zip code filter and flat rate (like in the example above).
2. Serve the site from a public domain, e.g. with ngrok.
3. Add some cards to Apple Pay.
4. Add product to the cart and go to the cart page.
5. Try to pay with Apple Pay and shipping address from UK matching the zone regex.
6. `Invalid shipping address` error.
**Expected behaviour**
Correct shipping zone and rate applied based on the selected shipping address.
| 1.0 | Invalid shipping address for UK and Apple Pay - Added by @v18:
There's a fix for this in WCPay - it needs to be ported over.
---
**Describe the bug**
Apple Pay anonymises the part of the shipping address (postalCode and address lines) until payment is processed. This leads to `invalid shipping address` error in the payment request when shipping zones configured in a specific way.
I.e. when shipping zones are configured based on zip code regex (e.g. BN11*) it's not detected correctly by payment request due to zip code is truncated by Apple Pay.
More details and screenshots:
Support ticket: 2911575-zen.
Slack chat: p1589288438496600-slack-CNXMMQBDW
**To Reproduce**
1. Create shipping zone with regex-based zip code filter and flat rate (like in the example above).
2. Serve the site from a public domain, e.g. with ngrok.
3. Add some cards to Apple Pay.
4. Add product to the cart and go to the cart page.
5. Try to pay with Apple Pay and shipping address from UK matching the zone regex.
6. `Invalid shipping address` error.
**Expected behaviour**
Correct shipping zone and rate applied based on the selected shipping address.
| priority | invalid shipping address for uk and apple pay added by there s a fix for this in wcpay it needs to be ported over describe the bug apple pay anonymises the part of the shipping address postalcode and address lines until payment is processed this leads to invalid shipping address error in the payment request when shipping zones configured in a specific way i e when shipping zones are configured based on zip code regex e g it s not detected correctly by payment request due to zip code is truncated by apple pay more details and screenshots support ticket zen slack chat slack cnxmmqbdw to reproduce create shipping zone with regex based zip code filter and flat rate like in the example above serve the site from a public domain e g with ngrok add some cards to apple pay add product to the cart and go to the cart page try to pay with apple pay and shipping address from uk matching the zone regex invalid shipping address error expected behaviour correct shipping zone and rate applied based on the selected shipping address | 1 |
521,494 | 15,110,099,144 | IssuesEvent | 2021-02-08 18:44:41 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [core] Switch to Guava | priority: high task | Maintain the same APIs and switch the cache engine to Guava. Ping with any concerns/issues. | 1.0 | [core] Switch to Guava - Maintain the same APIs and switch the cache engine to Guava. Ping with any concerns/issues. | priority | switch to guava maintain the same apis and switch the cache engine to guava ping with any concerns issues | 1 |
364,172 | 10,760,077,420 | IssuesEvent | 2019-10-31 17:51:18 | code4romania/monitorizare-vot-ios | https://api.github.com/repos/code4romania/monitorizare-vot-ios | closed | Update section list screen to new UI | enhancement good first issue high priority ios | - add icon for questions that have notes saved
- update icon for questions that have answered saved locally but not synced yet
Figma available here: https://www.figma.com/file/lww9lcabUpamTZg8zCExI8/MV-2.0-Prototype?node-id=26%3A161
 | 1.0 | Update section list screen to new UI - - add icon for questions that have notes saved
- update icon for questions that have answered saved locally but not synced yet
Figma available here: https://www.figma.com/file/lww9lcabUpamTZg8zCExI8/MV-2.0-Prototype?node-id=26%3A161
 | priority | update section list screen to new ui add icon for questions that have notes saved update icon for questions that have answered saved locally but not synced yet figma available here | 1 |
561,516 | 16,618,560,415 | IssuesEvent | 2021-06-02 20:15:12 | OpticFusion1/AntiMalwareToolKit | https://api.github.com/repos/OpticFusion1/AntiMalwareToolKit | closed | [FEATURE REQUEST] UpdateDatabase needs updated to use the new .db file | future help wanted priority: high type: feature | **Is your feature request related to a problem? Please describe.**
TITLE
**Describe the solution you'd like**
TITLE
**Describe alternatives you've considered**
TITLE
**Additional context**
TITLE | 1.0 | [FEATURE REQUEST] UpdateDatabase needs updated to use the new .db file - **Is your feature request related to a problem? Please describe.**
TITLE
**Describe the solution you'd like**
TITLE
**Describe alternatives you've considered**
TITLE
**Additional context**
TITLE | priority | updatedatabase needs updated to use the new db file is your feature request related to a problem please describe title describe the solution you d like title describe alternatives you ve considered title additional context title | 1 |
78,391 | 3,509,722,168 | IssuesEvent | 2016-01-09 00:40:19 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | opened | Enslave in Gruul's Lair. (BB #1159) | Category: Crash migrated Priority: High Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** Selphius
**Original Date:** 29.11.2015 13:18:54 GMT+0000
**Original Priority:** critical
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/1159
<hr>
I remember that I reported it long ago but the crash still happens.
Steps to reproduce it:
1. Go into Gruul's Lair instance.
2. Start High King Maulgar Encounter.
3. Cast Enslave at summoned felhunter.
4. Make the Felhunter attack Olm. (it can take several times to make server crash, in my case it's from 1 to 5 attack commands). It's random, but the crash happens. | 1.0 | Enslave in Gruul's Lair. (BB #1159) - This issue was migrated from bitbucket.
**Original Reporter:** Selphius
**Original Date:** 29.11.2015 13:18:54 GMT+0000
**Original Priority:** critical
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/1159
<hr>
I remember that I reported it long ago but the crash still happens.
Steps to reproduce it:
1. Go into Gruul's Lair instance.
2. Start High King Maulgar Encounter.
3. Cast Enslave at summoned felhunter.
4. Make the Felhunter attack Olm. (it can take several times to make server crash, in my case it's from 1 to 5 attack commands). It's random, but the crash happens. | priority | enslave in gruul s lair bb this issue was migrated from bitbucket original reporter selphius original date gmt original priority critical original type bug original state new direct link i remember that i reported it long ago but the crash still happens steps to reproduce it go into gruul s lair instance start high king maulgar encounter cast enslave at summoned felhunter make the felhunter attack olm it can take several times to make server crash in my case it s from to attack commands it s random but the crash happens | 1 |