| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses: 1) | created_at (stringlengths: 19-19) | repo (stringlengths: 5-112) | repo_url (stringlengths: 34-141) | action (stringclasses: 3) | title (stringlengths: 1-957) | labels (stringlengths: 4-795) | body (stringlengths: 1-259k) | index (stringclasses: 12) | text_combine (stringlengths: 96-259k) | label (stringclasses: 2) | text (stringlengths: 96-252k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
416,314 | 12,142,536,077 | IssuesEvent | 2020-04-24 01:59:41 | confidantstation/Confidant-Station | https://api.github.com/repos/confidantstation/Confidant-Station | closed | File transfer optimization 2 Distributed file storage and transfer between nodes | Priority: Medium Status: Accepted Status: In Progress Status: Pending enhancement | Is your feature request related to a problem? Please describe.
The current cross-node tox scheme's file transfer performance is too poor, and its file storage is single-point and therefore unreliable
Describe the solution you'd like
Refer to the distributed file storage scheme to organize a supplementary scheme suitable for the storage and transmission of our chain file data
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here. | 1.0 | File transfer optimization 2 Distributed file storage and transfer between nodes - Is your feature request related to a problem? Please describe.
The current cross-node tox scheme's file transfer performance is too poor, and its file storage is single-point and therefore unreliable
Describe the solution you'd like
Refer to the distributed file storage scheme to organize a supplementary scheme suitable for the storage and transmission of our chain file data
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here. | priority | file transfer optimization distributed file storage and transfer between nodes is your feature request related to a problem please describe the current cross node tox scheme file transfer performance is too poor the file storage is single point storage unreliable describe the solution you d like refer to the distributed file storage scheme to organize a supplementary scheme suitable for the storage and transmission of our chain file data describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
392,367 | 11,590,556,070 | IssuesEvent | 2020-02-24 07:08:11 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Hammer Feedback | Priority: Medium Status: Fixed | - [x] Wasn't able to drop this Copper Ore block while holding hammer, probably because it has no forms

- [x] Holding shift displayed the hammer UI off screen. Need to fix anchors for various resolutions, and clamp the ui

- [ ] Selection has a number of problems still. It needs to work off a projection, so it can place second selection even in midair. IE, once you start drawing a line it should always display that line no matter where you're looking.

- [x] Popup text should say how many you need. 'Not enough materials (5 needed)'

- [x] After placing a block, there's a delay before you see it. Notice in the gif how it blinks away for a second. Prediction should be handling this.

- [x] Text for placing first point in a shape should be more descriptive. Ie: 'Place first floor corner' or 'Place line starting position'
- [x] 'Material' section needs to be hidden or set up properly

- [x] Text should still have controls for rotation, Q/R

- [x] Clicking these brought up the hammer selection dialog, which is good, but they also stayed highlighted

Perhaps shift should be on the 'point' and 'ladder' icons instead.
- [x] Got this client exception at one point not sure when:

| 1.0 | Hammer Feedback - - [x] Wasn't able to drop this Copper Ore block while holding hammer, probably because it has no forms

- [x] Holding shift displayed the hammer UI off screen. Need to fix anchors for various resolutions, and clamp the ui

- [ ] Selection has a number of problems still. It needs to work off a projection, so it can place second selection even in midair. IE, once you start drawing a line it should always display that line no matter where you're looking.

- [x] Popup text should say how many you need. 'Not enough materials (5 needed)'

- [x] After placing a block, there's a delay before you see it. Notice in the gif how it blinks away for a second. Prediction should be handling this.

- [x] Text for placing first point in a shape should be more descriptive. Ie: 'Place first floor corner' or 'Place line starting position'
- [x] 'Material' section needs to be hidden or set up properly

- [x] Text should still have controls for rotation, Q/R

- [x] Clicking these brought up the hammer selection dialog, which is good, but they also stayed highlighted

Perhaps shift should be on the 'point' and 'ladder' icons instead.
- [x] Got this client exception at one point not sure when:

| priority | hammer feedback wasnt able to drop this copper ore block while holding hammer probably because it has no forms holding shift displayed the hammer ui off screen need to fix anchors for various resolutions and clamp the ui selection has a number of problems still it needs to work off a projection so it can place second selection even in midair ie once you start drawing a line it should always display that line no matter where you re looking popup text should say how many you need not enough materials needed after placing a block there s a delay before you see it notice in the gif how it blinks away for a second prediction should be handling this text for placing first point in a shape should be more descriptive ie place first floor corner or place line starting position material section needs to be hidden of setup properly text should still have controls for rotation q r clicking these brought up the hammer selection dialog which is good but they also stayed highlighted perhaps shift should be on the point and ladder icons instead got this client exception at one point not sure when | 1 |
614,486 | 19,184,130,548 | IssuesEvent | 2021-12-04 22:56:26 | MarketSquare/robotframework-browser | https://api.github.com/repos/MarketSquare/robotframework-browser | closed | Document that Get Text keyword also works with <input> and <textarea> elements. | bug priority: medium | Currently `Get Text` documentation mentions that it works with elements containing text, but actually it also works with input and textarea elements. With input and textarea elements the keyword returns the value property. We should document this also in the keyword. | 1.0 | Document that Get Text keyword also works with <input> and <textarea> elements. - Currently `Get Text` documentation mentions that it works with elements containing text, but actually it also works with input and textarea elements. With input and textarea elements the keyword returns the value property. We should document this also in the keyword. | priority | document that get text keyword also work with and elements currently get text documentation mentions that it works with elements containing text but actually it also works with input and textarea elements with input and textarea elements keywords returns the value property we should document this also in the keyword | 1 |
757,712 | 26,525,958,468 | IssuesEvent | 2023-01-19 08:45:13 | mi6/ic-design-system | https://api.github.com/repos/mi6/ic-design-system | opened | Anchor links on high level tabs within components not working | type: bug 🐛 priority: medium | ## Summary of the bug
You are unable to use a copied link from an anchor nav within the 'Code' or 'Accessibility' tabs to then take you back to that place. It loads the 'Guidance' tab.
## 🪜 How to reproduce
Tell us the steps to reproduce the problem:
1. Go to page: Footer component, 'Accessibility' tab.
2. Go to the heading 'For Assistive Technology'
3. Use the anchor to copy a link to that section (https://design.sis.gov.uk/components/footer#for-assistive-technology)
4. Paste it back into browser, it loads the 'Guidance' tab.
## 🧐 Expected behaviour
The link should take you to https://design.sis.gov.uk/components/footer#for-assistive-technology
| 1.0 | Anchor links on high level tabs within components not working - ## Summary of the bug
You are unable to use a copied link from an anchor nav within the 'Code' or 'Accessibility' tabs to then take you back to that place. It loads the 'Guidance' tab.
## 🪜 How to reproduce
Tell us the steps to reproduce the problem:
1. Go to page: Footer component, 'Accessibility' tab.
2. Go to the heading 'For Assistive Technology'
3. Use the anchor to copy a link to that section (https://design.sis.gov.uk/components/footer#for-assistive-technology)
4. Paste it back into browser, it loads the 'Guidance' tab.
## 🧐 Expected behaviour
The link should take you to https://design.sis.gov.uk/components/footer#for-assistive-technology
| priority | anchor links on high level tabs within components not working summary of the bug you are unable to use a copied link from an anchor nav within the code or accessibility tabs to then take you back to that place it loads the guidance tab 🪜 how to reproduce tell us the steps to reproduce the problem go to page footer component accessibility tab go to the heading for assistive technology use the anchor to copy a link to that section paste it back into browser it loads the guidance tab 🧐 expected behaviour the link should take you to | 1 |
789,077 | 27,777,672,912 | IssuesEvent | 2023-03-16 18:24:48 | impactMarket/app | https://api.github.com/repos/impactMarket/app | closed | [community search] searching when not on the first page, breaks the search | priority-2: medium type: bug | "you need to be in page 1. For example, if you search for communities in Venezuela and you go to page 3. Then you decide to clear search and search for Indonesia. But you were not on page 3 (when searching for Venezuela) so it won't show you the results for Indonesia. You need to go back to Venezuela, puts it on page 1, and then it will show the Indonesia communities" - Catarina | 1.0 | [community search] searching when not on the first page, breaks the search - "you need to be in page 1. For example, if you search for communities in Venezuela and you go to page 3. Then you decide to clear search and search for Indonesia. But you were not on page 3 (when searching for Venezuela) so it won't show you the results for Indonesia. You need to go back to Venezuela, puts it on page 1, and then it will show the Indonesia communities" - Catarina | priority | searching when not on the first page breaks the search you need to be in page for example if you search for communities in venezuela and you go to page then you decide to clear search and search for indonesia but you were not on page when searching for venezuela so it won't show you the results for indonesia you need to go back to venezuela puts it on page and then it will show the indonesia communities catarina | 1 |
457,160 | 13,152,658,146 | IssuesEvent | 2020-08-09 23:36:39 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Make items in long-press menu of DeckPicker more intuitively accessible | Accepted Enhancement Priority-Medium Stale | Originally reported on Google Code with ID 977
```
A lot of users can not find the "delete deck" and "rename deck" actions, because they
don't have the idea to long-press in the deck picker (and I agree it is not very intuitive).
How about putting these actions in the "More" menu of the study options too?
```
Reported by `nicolas.raoul` on 2012-01-30 08:59:49
| 1.0 | Make items in long-press menu of DeckPicker more intuitively accessible - Originally reported on Google Code with ID 977
```
A lot of users can not find the "delete deck" and "rename deck" actions, because they
don't have the idea to long-press in the deck picker (and I agree it is not very intuitive).
How about putting these actions in the "More" menu of the study options too?
```
Reported by `nicolas.raoul` on 2012-01-30 08:59:49
| priority | make items in long press menu of deckpicker more intuitively accessible originally reported on google code with id a lot of users can not find the delete deck and rename deck actions because they don t have the idea to long press in the deck picker and i agree it is not very intuitive how about putting these actions in the more menu of the study options too reported by nicolas raoul on | 1 |
66,573 | 3,255,912,568 | IssuesEvent | 2015-10-20 11:10:29 | awesome-raccoons/gqt | https://api.github.com/repos/awesome-raccoons/gqt | opened | Keyboard shortcuts require shift key to be pressed | bug medium priority | Keyboard shortcuts were added in #46, but they require the shift key to be pressed, e.g. Ctrl+Shift+N instead of Ctrl-N. | 1.0 | Keyboard shortcuts require shift key to be pressed - Keyboard shortcuts were added in #46, but they require the shift key to be pressed, e.g. Ctrl+Shift+N instead of Ctrl-N. | priority | keyboard shortcuts require shift key to be pressed keyboard shortcuts were added in but they require the shift key to be pressed e g ctrl shift n instead of ctrl n | 1 |
443,025 | 12,758,167,397 | IssuesEvent | 2020-06-29 01:10:57 | minio/minio | https://api.github.com/repos/minio/minio | closed | minio crash when accessed from All-in-One WP Migration tool | community priority: medium | ## Expected Behavior
I am trying to use the All-in-One WP Migration tool S3 client plugin to back up wordpress web sites to minio.
## Current Behavior
When I attempt to backup to minio, the plugin reports curl error 52 "nothing" returned, and the minio server logs:
```
http: panic serving 192.168.1.1:60476: runtime error: invalid memory address or nil pointer dereference","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
goroutine 262 [running]:","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
"net/http.(*conn).serve.func1(0xc000936000)","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
...
etc.
```
Taking a packet trace shows that plugin issues three requests in quick succession, which are as follows. Only the third gets a response.
```
HEAD / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
```
```
PUT / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
Content-Type: application/xml
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
Content-Length: 102
<CreateBucketConfiguration><LocationConstraint>oracle</LocationConstraint></CreateBucketConfiguration>
```
```
GET / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 461
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Server: MinIO/RELEASE.2020-06-22T03-12-50Z
Vary: Origin
X-Amz-Bucket-Region: oracle
X-Amz-Request-Id: 161B35815F87D1A1
X-Xss-Protection: 1; mode=block
Date: Tue, 23 Jun 2020 15:22:02 GMT
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>0...elided...</ID><DisplayName></DisplayName></Owner><Buckets><Bucket><Name>leek12-com</Name><CreationDate>2020-06-23T14:19:31.497Z</CreationDate></Bucket><Bucket><Name>migrate</Name><CreationDate>2020-06-23T13:39:06.594Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>
```
## Steps to Reproduce (for bugs)
I can reproduce using the mentioned plugin, clicking on update in the control panel causes the error. I have not yet been able to replicate using something like `curl` unfortunately.
## Your Environment
* Version used: RELEASE.2020-06-22T03-12-50Z
* Environment name and version: standalone
* Server type and version: Oracle Enterprise Linux (Oracle cloud server)
* Operating System and version: Linux ns2 4.14.35-1902.301.1.el7uek.x86_64 #2 SMP Tue Mar 31 16:50:32 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
| 1.0 | minio crash when accessed from All-in-One WP Migration tool - ## Expected Behavior
I am trying to use the All-in-One WP Migration tool S3 client plugin to back up wordpress web sites to minio.
## Current Behavior
When I attempt to backup to minio, the plugin reports curl error 52 "nothing" returned, and the minio server logs:
```
http: panic serving 192.168.1.1:60476: runtime error: invalid memory address or nil pointer dereference","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
goroutine 262 [running]:","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
"net/http.(*conn).serve.func1(0xc000936000)","source":["github.com/minio/minio@/cmd/server-main.go:453:cmd.serverMain.func2()
...
etc.
```
Taking a packet trace shows that plugin issues three requests in quick succession, which are as follows. Only the third gets a response.
```
HEAD / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
```
```
PUT / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
Content-Type: application/xml
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
Content-Length: 102
<CreateBucketConfiguration><LocationConstraint>oracle</LocationConstraint></CreateBucketConfiguration>
```
```
GET / HTTP/1.1
Host: s3.elided.uk
Accept: */*
User-Agent: All-in-One WP Migration
x-amz-date: 20200623T152202Z
x-amz-content-sha256: ...elided...
Authorization: AWS4-HMAC-SHA256 Credential=unbind-trigger-bail/20200623/oracle/s3/aws4_request,SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date,Signature=...elided...
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 461
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Server: MinIO/RELEASE.2020-06-22T03-12-50Z
Vary: Origin
X-Amz-Bucket-Region: oracle
X-Amz-Request-Id: 161B35815F87D1A1
X-Xss-Protection: 1; mode=block
Date: Tue, 23 Jun 2020 15:22:02 GMT
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>0...elided...</ID><DisplayName></DisplayName></Owner><Buckets><Bucket><Name>leek12-com</Name><CreationDate>2020-06-23T14:19:31.497Z</CreationDate></Bucket><Bucket><Name>migrate</Name><CreationDate>2020-06-23T13:39:06.594Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>
```
## Steps to Reproduce (for bugs)
I can reproduce using the mentioned plugin, clicking on update in the control panel causes the error. I have not yet been able to replicate using something like `curl` unfortunately.
## Your Environment
* Version used: RELEASE.2020-06-22T03-12-50Z
* Environment name and version: standalone
* Server type and version: Oracle Enterprise Linux (Oracle cloud server)
* Operating System and version: Linux ns2 4.14.35-1902.301.1.el7uek.x86_64 #2 SMP Tue Mar 31 16:50:32 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
| priority | minio crash when accessed from all in one wp migration tool expected behavior i am trying to use the all in one wp migration tool client plugin to back up wordpress web sites to minio current behavior when i attempt to backup to minio the plugin reports curl error nothing returned and the minio server logs http panic serving runtime error invalid memory address or nil pointer dereference source github com minio minio cmd server main go cmd servermain goroutine source github com minio minio cmd server main go cmd servermain net http conn serve source github com minio minio cmd server main go cmd servermain etc taking a packet trace shows that plugin issues three requests in quick succession which are as follows only the third gets a response head http host elided uk accept user agent all in one wp migration x amz date x amz content elided authorization hmac credential unbind trigger bail oracle request signedheaders host user agent x amz content x amz date signature elided put http host elided uk accept user agent all in one wp migration content type application xml x amz date x amz content elided authorization hmac credential unbind trigger bail oracle request signedheaders content type host user agent x amz content x amz date signature elided content length oracle get http host elided uk accept user agent all in one wp migration x amz date x amz content elided authorization hmac credential unbind trigger bail oracle request signedheaders host user agent x amz content x amz date signature elided http ok accept ranges bytes content length content security policy block all mixed content content type application xml server minio release vary origin x amz bucket region oracle x amz request id x xss protection mode block date tue jun gmt listallmybucketsresult xmlns steps to reproduce for bugs i can reproduce using the mentioned plugin clicking on update in the control panel causes the error i have not yet been able to replicate using something like curl unfortunately your environment version used release environment name and version standalone server type and version oracle enterprise linux oracle cloud server operating system and version linux smp tue mar pdt gnu linux | 1 |
386,154 | 11,432,807,212 | IssuesEvent | 2020-02-04 14:41:39 | ooni/backend | https://api.github.com/repos/ooni/backend | closed | Introduce measurement count tables | enhancement ooni/pipeline priority/medium | Analysis scripts and the private API often use count() on measurements.
Investigate introducing tables that contains msm counts or extend the ones named ooexpl* to improve speed. | 1.0 | Introduce measurement count tables - Analysis scripts and the private API often use count() on measurements.
Investigate introducing tables that contains msm counts or extend the ones named ooexpl* to improve speed. | priority | introduce measurement count tables analysis scripts and the private api often use count on measurements investigate introducing tables that contains msm counts or extend the ones named ooexpl to improve speed | 1 |
166,438 | 6,304,691,065 | IssuesEvent | 2017-07-21 16:29:45 | vmware/vic | https://api.github.com/repos/vmware/vic | opened | Incorrect IP address reported by ovftool | kind/bug priority/medium product/ova | Potentially related to:
https://github.com/vmware/vic/issues/4995
Powering on VM: VIC-mike
Task Completed
Received IP address: 172.17.0.1
Completed successfully
172.17.0.1 is definitely not the right address... | 1.0 | Incorrect IP address reported by ovftool - Potentially related to:
https://github.com/vmware/vic/issues/4995
Powering on VM: VIC-mike
Task Completed
Received IP address: 172.17.0.1
Completed successfully
172.17.0.1 is definitely not the right address... | priority | incorrect ip address reported by ovftool potentially related to powering on vm vic mike task completed received ip address completed successfully is definitely not the right address | 1 |
760,403 | 26,638,431,785 | IssuesEvent | 2023-01-25 00:49:23 | gabrielagqueiroz/portifolio | https://api.github.com/repos/gabrielagqueiroz/portifolio | closed | Add my contact details | Priority: Medium Weight:3 Type: Feature | ## Information
- [ ] Linkedin
- [ ] Email
- [ ] Phone (Whatsapp/Telegram)
- [ ] Github | 1.0 | Add my contact details - ## Information
- [ ] Linkedin
- [ ] Email
- [ ] Phone (Whatsapp/Telegram)
- [ ] Github | priority | adicionar meus dados de contato informações linkedin email telefone whatsapp telegram github | 1 |
248,653 | 7,934,720,918 | IssuesEvent | 2018-07-08 22:38:35 | commercialhaskell/hindent | https://api.github.com/repos/commercialhaskell/hindent | closed | Misaligned comment at top of do-block | component: hindent priority: medium type: bug | ```
long_function x = do
-- bla
let y = z
return z
```
becomes
```
long_function x
-- bla
= do
let y = z
return z
```
Version `hindent 5.2.1`, built from current master.
---
**Update** at v5.2.2, c2ac3e3ce57c834525dc8ec5ca87d5e8d728b69b:
The output is now
```
long_function x
-- bla
= do
let y = z
return z
``` | 1.0 | Misaligned comment at top of do-block - ```
long_function x = do
-- bla
let y = z
return z
```
becomes
```
long_function x
-- bla
= do
let y = z
return z
```
Version `hindent 5.2.1`, built from current master.
---
**Update** at v5.2.2, c2ac3e3ce57c834525dc8ec5ca87d5e8d728b69b:
The output is now
```
long_function x
-- bla
= do
let y = z
return z
``` | priority | misaligned comment at top of do block long function x do bla let y z return z becomes long function x bla do let y z return z version hindent built from current master update at the output is now long function x bla do let y z return z | 1 |
426,037 | 12,366,254,406 | IssuesEvent | 2020-05-18 10:07:58 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | map.draw_rectangle API is inconsistent with submap | Effort Medium Feature Request Hacktoberfest Package Novice Priority High map | submap uses top right and bottom left and draw rectangle uses bottom right width and height. This is stupid.
This is especially stupid as it prevents you from plotting a rectangle in the coordinates of one image over another. | 1.0 | map.draw_rectangle API is inconsistent with submap - submap uses top right and bottom left and draw rectangle uses bottom right width and height. This is stupid.
This is especially stupid as it prevents you from plotting a rectangle in the coordinates of one image over another. | priority | map draw rectangle api is inconsistent with submap submap uses top right and bottom left and draw rectangle uses bottom right width and height this is stupid this is especially stupid as it prevents you from plotting a rectangle in the coordinates of one image over another | 1 |
593,920 | 18,020,204,865 | IssuesEvent | 2021-09-16 18:23:29 | hashicorp/flight | https://api.github.com/repos/hashicorp/flight | closed | Set the value of the data-test-icon attribute to the name of the icon | priority: medium 1.0 | Small thing, but `<svg data-test-icon={{@name}} ...>` might be useful for testing? | 1.0 | Set the value of the data-test-icon attribute to the name of the icon - Small thing, but `<svg data-test-icon={{@name}} ...>` might be useful for testing? | priority | set the value of the data test icon attribute to the name of the icon small thing but might be useful for testing | 1 |
33,237 | 2,763,187,724 | IssuesEvent | 2015-04-29 07:20:34 | less/less.js | https://api.github.com/repos/less/less.js | closed | Latest less.js (2.1.1) behaves async in IE | Browser Bug Medium Priority | Latest less.js (2.1.1) behaves async in IE (ver 11) even when async:false is set explicitly.
The file processed is complex less with multiple @import commands that take about 1 second to load and process on dev machine, during which time IE shows style-free html. Firefox and Chrome behave correctly. Had to revert to 1.7.5 which behaves correctly.
Perhaps related to the recent promises changes? Perhaps setTimeout(function(){}, 0); allows IE to proceed in parallel and partially render the page? | 1.0 | Latest less.js (2.1.1) behaves async in IE - Latest less.js (2.1.1) behaves async in IE (ver 11) even when async:false is set explicitly.
The file processed is complex less with multiple @import commands that take about 1 second to load and process on dev machine, during which time IE shows style-free html. Firefox and Chrome behave correctly. Had to revert to 1.7.5 which behaves correctly.
Perhaps related to the recent promises changes? Perhaps setTimeout(function(){}, 0); allows IE to proceed in parallel and partially render the page? | priority | latest less js behaves async in ie latest less js behaves async in ie ver even when async false is set explicitly the file processed is complex less with multiple import commands that take about second to load and process on dev machine during which time ie shows style free html firefox and chrome behave correctly had to revert to which behaves correctly perhaps related to the recent promises changes perhaps settimeout function allows ie to proceed in parallel and partially render the page | 1 |
514,861 | 14,945,727,029 | IssuesEvent | 2021-01-26 04:56:28 | Plaxy-Technologies-Inc/YouPlanets-Bug-Report | https://api.github.com/repos/Plaxy-Technologies-Inc/YouPlanets-Bug-Report | closed | SIGN UP button too small - not visible on HomePage | Priority: Medium | First, it's not available on the homepage. So you already have to find it, which hinders new user enrollment.
Then, once you click on "Sign In", the sign up button is all the way down below, in small font. I missed it a couple times.
Sign-up option should be front and center, and very easy to see, because that's how we obtain new clients and new users.


| 1.0 | SIGN UP button too small - not visible on HomePage - First, it's not available on the homepage. So you already have to find it, which hinders new user enrollment.
Then, once you click on "Sign In", the sign up button is all the way down below, in small font. I missed it a couple times.
Sign-up option should be front and center, and very easy to see, because that's how we obtain new clients and new users.


| priority | sign up button too small not visible on homepage first it s not available on the homepage so you already have to find it which hinders new user enrollment then once you click on sign in the sign up button is all the way down below in small font i missed it a couple times sign up option should be front and center and very easy to see because that s how we obtain new clients and new users | 1 |
38,217 | 2,842,252,199 | IssuesEvent | 2015-05-28 08:11:15 | soi-toolkit/soi-toolkit-mule | https://api.github.com/repos/soi-toolkit/soi-toolkit-mule | closed | Introduce Catch/Rollback exception strategies for improved logging and fault handling | AffectsVersion-v0.6.0 BackwardCompatibility-MinorChange Component-tools-templates Milestone-Release0.7.0 Priority-Medium Type-Review | Original [issue 359](https://code.google.com/p/soi-toolkit/issues/detail?id=359) created by soi-toolkit on 2013-11-09T09:17:16.000Z:
The current ServiceExceptionStrategy-implementation (generated at the bottom of flows):
<custom-exception-strategy class="org.soitoolkit.commons.mule.error.ServiceExceptionStrategy"/>
lacks features:
1. Control over retry-handling: for which kind of exceptions should processing be retried/aborted?
Note: retry-handling currently falls back on individual transports, typically using JMS-inbound with retry-parameters.
2. Access to MuleMessage when exceptions occur: needed for logging error-context like message-headers, specifically correlationId for a flow.
Note: in current ServiceExceptionStrategy the MuleMessage is not available in all cases, like when a TransformerException occurs. | 1.0 | Introduce Catch/Rollback exception strategies for improved logging and fault handling - Original [issue 359](https://code.google.com/p/soi-toolkit/issues/detail?id=359) created by soi-toolkit on 2013-11-09T09:17:16.000Z:
The current ServiceExceptionStrategy-implementation (generated at the bottom of flows):
<custom-exception-strategy class="org.soitoolkit.commons.mule.error.ServiceExceptionStrategy"/>
lacks features:
1. Control over retry-handling: for which kind of exceptions should processing be retried/aborted?
Note: retry-handling currently falls back on individual transports, typically using JMS-inbound with retry-parameters.
2. Access to MuleMessage when exceptions occur: needed for logging error-context like message-headers, specifically correlationId for a flow.
Note: in current ServiceExceptionStrategy the MuleMessage is not available in all cases, like when a TransformerException occurs. | priority | introduce catch rollback exception strategies for improved logging and fault handling original created by soi toolkit on the current serviceexceptionstrategy implementation generated at the bottom of flows lt custom exception strategy class quot org soitoolkit commons mule error serviceexceptionstrategy quot gt lacks features control over retry handling for which kind of exceptions should processing be retried aborted note retry handling currently falls back on individual transports typically using jms inbound with retry parameters access to mulemessage when exceptions occur needed for logging error context like message headers specifically correlationid for a flow note in current serviceexceptionstrategy the mulemessage is not available in all cases like when a transformerexception occurs | 1 |
556,546 | 16,485,586,780 | IssuesEvent | 2021-05-24 17:27:07 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.3.4 beta release-226]Pollution spread only likes vertical | Category: Gameplay Priority: Medium Regression Type: Bug | Pollution expand seems to only really spread N/S
Brick line is pollution from 1 stockpile after 24 hours.
Reinforced concrete line is pollution from 4 stockpiles after another 24 hours. No change in East/West direction

| 1.0 | [0.9.3.4 beta release-226]Pollution spread only likes vertical - Pollution expand seems to only really spread N/S
Brick line is pollution from 1 stockpile after 24 hours.
Reinforced concrete line is pollution from 4 stockpiles after another 24 hours. No change in East/West direction

| priority | pollution spread only likes vertical pollution expand seems to only really spread n s brick line is pollution from stockpile after hours reinforced concrete line is pollution from stockpiles after another hours no change in east west direction | 1 |
84,126 | 3,654,276,145 | IssuesEvent | 2016-02-17 11:44:38 | brunoais/javadude | https://api.github.com/repos/brunoais/javadude | closed | Annotations - add binding code (for bean-bean binding, swing, swt, etc) | auto-migrated Priority-Medium Project-Annotations Type-Enhancement | ```
add binding code (for bean-bean binding, swing, swt, etc)
```
Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 24 Dec 2008 at 10:12 | 1.0 | Annotations - add binding code (for bean-bean binding, swing, swt, etc) - ```
add binding code (for bean-bean binding, swing, swt, etc)
```
Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 24 Dec 2008 at 10:12 | priority | annotations add binding code for bean bean binding swing swt etc add binding code for bean bean binding swing swt etc original issue reported on code google com by scott ja gtempaccount com on dec at | 1 |
585,826 | 17,535,692,291 | IssuesEvent | 2021-08-12 06:04:07 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | web admin: SNMP version is prefixed with a useless "v" | Type: Bug Priority: Medium | **Describe the bug**
When you specify a SNMP version for a switch using GUI, you have to choose between 3 values in a dropdown:
- v1
- v2c
- v3
but the API call sent doesn't contain the **v** letter, and neither does the value saved in `switches.conf`.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new switch with SNMP Version equals to `v2c`
2. Check value sent by API call: `SNMPVersion: 2c`
3. Check value saved in `switches.conf`: `SNMPVersion=2c`
**Expected behavior**
We should remove the `v` letter from the dropdown list in the GUI.
**Additional context**
Issue seems on API side:
```json
# pfperl-api get -M OPTIONS /api/v1/config/switches | jq .meta.SNMPVersion
{
"allow_custom": false,
"allowed": [
{
"text": "",
"value": ""
},
{
"text": "v1",
"value": "1"
},
{
"text": "v2c",
"value": "2c"
},
{
"text": "v3",
"value": "3"
}
],
"default": null,
"placeholder": "1",
"required": false,
"type": "string"
}
```
| 1.0 | web admin: SNMP version is prefixed with a useless "v" - **Describe the bug**
When you specify a SNMP version for a switch using GUI, you have to choose between 3 values in a dropdown:
- v1
- v2c
- v3
but the API call sent doesn't contain the **v** letter, and neither does the value saved in `switches.conf`.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new switch with SNMP Version equals to `v2c`
2. Check value sent by API call: `SNMPVersion: 2c`
3. Check value saved in `switches.conf`: `SNMPVersion=2c`
**Expected behavior**
We should remove the `v` letter from the dropdown list in the GUI.
**Additional context**
Issue seems on API side:
```json
# pfperl-api get -M OPTIONS /api/v1/config/switches | jq .meta.SNMPVersion
{
"allow_custom": false,
"allowed": [
{
"text": "",
"value": ""
},
{
"text": "v1",
"value": "1"
},
{
"text": "v2c",
"value": "2c"
},
{
"text": "v3",
"value": "3"
}
],
"default": null,
"placeholder": "1",
"required": false,
"type": "string"
}
```
| priority | web admin snmp version is prefixed with a useless v describe the bug when you specify a snmp version for a switch using gui you have to choose between values in a dropdown but api call sent doesn t contain the v letter and value saved in switches conf too to reproduce steps to reproduce the behavior create a new switch with snmp version equals to check value sent by api call snmpversion check value saved in switches conf snmpversion expected behavior we should remove v letter in dropdown list of gui additional context issue seems on api side json pfperl api get m options api config switches jq meta snmpversion allow custom false allowed text value text value text value text value default null placeholder required false type string | 1 |
251,701 | 8,025,931,700 | IssuesEvent | 2018-07-27 00:40:35 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Campfire Speed | Medium Priority | talked to Pam on discord about this a little.
Here is a GIF on Charred Tomato
[](https://gyazo.com/d2d07c978be78877770001a47e871927)
The TimeMult is set to 1.0 and everything else is running fine in seconds but anything in CampFire was going pretty fast, half speed basically. | 1.0 | Campfire Speed - talked to Pam on discord about this a little.
Here is a GIF on Charred Tomato
[](https://gyazo.com/d2d07c978be78877770001a47e871927)
The TimeMult is set to 1.0 and everything else is running fine in seconds but anything in CampFire was going pretty fast, half speed basically. | priority | campfire speed talked to pam on discord about this a little here is a gif on charred tomato the timemult is set to and everything else is running fine in seconds but anything in campfire was going pretty fast half speed basically | 1 |
81,253 | 3,588,165,252 | IssuesEvent | 2016-01-30 20:48:27 | marvinlabs/customer-area | https://api.github.com/repos/marvinlabs/customer-area | closed | Duplicate Notifications | enhancement Premium add-ons Priority - medium | 1) If you are the author of a conversation, and a member of the project to which the conversation is owned by, you will get two emails.
Adding $recipient_ids = array_unique($recipient_ids); to line 456 of notifications-addon.class.php resolves this
2) Functions "on_private_post_published" and "on_conversation_started" need a catch to stop an email being sent to author, similar to "on_new_reply_to_conversation" below;
if ( $user_id==$reply_author_id ) continue;
3) New conversation reply fires "on_private_post_published" and "on_new_reply_to_conversation" functions. Without the catch in point 2), the sender will get two emails as well.
| 1.0 | Duplicate Notifications - 1) If you are the author of a conversation, and a member of the project to which the conversation is owned by, you will get two emails.
Adding $recipient_ids = array_unique($recipient_ids); to line 456 of notifications-addon.class.php resolves this
2) Functions "on_private_post_published" and "on_conversation_started" need a catch to stop an email being sent to the author, similar to "on_new_reply_to_conversation" below;
if ( $user_id==$reply_author_id ) continue;
3) New conversation reply fires "on_private_post_published" and "on_new_reply_to_conversation" functions. Without the catch in point 2), the sender will get two emails as well.
| priority | duplicate notifications if you are the author of a conversation and a member of the project to which the conversation is owned by you will get two emails adding recipient ids array unique recipient ids to line of notifications addon class php resolves this functions on private post published and on conversation started need a catch to stop an email being sent to author similar to on new reply to conversation below if user id reply author id continue new conversation reply fires on private post published and on new reply to conversation functions without the catch in point the sender will get two emails as well | 1 |
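The two fixes quoted in the issue above (deduplicate recipient ids with `array_unique`, and skip the author with a `continue` guard) are PHP; the combined logic is simple enough to render as an illustrative Python one-liner, which is not the plugin's actual code:

```python
def notification_recipients(recipient_ids, author_id):
    """Deduplicate recipients (cf. PHP array_unique) and drop the author,
    mirroring the 'if user_id == author_id: continue' guard from the issue."""
    # dict.fromkeys() deduplicates while preserving the original order.
    return [uid for uid in dict.fromkeys(recipient_ids) if uid != author_id]
```

Applying both guards in one place resolves all three duplicate-email cases listed in the report.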
635,178 | 20,381,168,615 | IssuesEvent | 2022-02-21 22:04:31 | GDSCUTM-CommunityProjects/UTimeManager | https://api.github.com/repos/GDSCUTM-CommunityProjects/UTimeManager | closed | Login & Register (Backend) | Backend: Enhancement Priority: Medium | As a user, I want to log in so that I can access all my tasks for the week
Sub Tasks
- [x] Upon registration, data should be saved on the database (email, password, etc.)
- [ ] Upon registration, user information should be sanitized
Acceptance Criteria
- [x] Upon a successful login a token should be returned back to the user (including correct status codes, payload returns, etc.)
- [x] Upon a failed login the correct status code along with any payloads should be returned
Optional (Only done with the frontend)
- [ ] Emailing a user their token to activate their account | 1.0 | Login & Register (Backend) - As a user, I want to log in so that I can access all my tasks for the week
Sub Tasks
- [x] Upon registration, data should be saved on the database (email, password, etc.)
- [ ] Upon registration, user information should be sanitized
Acceptance Criteria
- [x] Upon a successful login a token should be returned back to the user (including correct status codes, payload returns, etc.)
- [x] Upon a failed login the correct status code along with any payloads should be returned
Optional (Only done with the frontend)
- [ ] Emailing a user their token to activate their account | priority | login register backend as a user i want to log in so that i can access all my tasks for the week sub tasks upon registration data should be saved on the database email password etc upon registration user information should be sanitized acceptance criteria upon a successful login a token should be returned back to the user including correct status codes payload returns etc upon a failed login the correct status code along with any payloads should be returned optional only done with the frontend emailing a user their token to activate their account | 1 |
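The acceptance criteria above (a token with the correct status code on success, an error status on failure) can be sketched as a minimal handler. All names, codes, and the in-memory user store are illustrative assumptions, not the project's real backend; the unsalted hash is for brevity only:

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory user store; a real backend would query the database.
USERS = {"student@utm.ca": hashlib.sha256(b"hunter2").hexdigest()}

def login(email: str, password: str):
    """Return (status_code, payload): 200 with a token on success, 401 on failure."""
    stored = USERS.get(email)
    candidate = hashlib.sha256(password.encode()).hexdigest()
    if stored is not None and hmac.compare_digest(stored, candidate):
        return 200, {"token": secrets.token_hex(16)}
    return 401, {"error": "invalid credentials"}
```

`hmac.compare_digest` keeps the comparison constant-time, so a failed login leaks nothing about how close the guess was.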
146,649 | 5,625,618,242 | IssuesEvent | 2017-04-04 19:50:32 | phetsims/unit-rates | https://api.github.com/repos/phetsims/unit-rates | opened | Create sim primer | priority:3-medium | Tracking sim primer creation in this issue so that #1 can be closed.
Target Deadlines
- [ ] Script drafted by 4/14/17
- [ ] Script reviewed by 4/28/17
- [ ] First recording by 5/12/17
- [ ] Primer live by 6/1/17 | 1.0 | Create sim primer - Tracking sim primer creation in this issue so that #1 can be closed.
Target Deadlines
- [ ] Script drafted by 4/14/17
- [ ] Script reviewed by 4/28/17
- [ ] First recording by 5/12/17
- [ ] Primer live by 6/1/17 | priority | create sim primer tracking sim primer creation in this issue so that can be closed target deadlines script drafted by script reviewed by first recording by primer live by | 1 |
5,380 | 2,575,046,691 | IssuesEvent | 2015-02-11 20:33:24 | javalite/activejdbc | https://api.github.com/repos/javalite/activejdbc | closed | Add Expectation.shouldContain(String) | enhancement imported Priority-Medium | _Original author: ipolevoy@gmail.com (July 28, 2011 23:25:21)_
so as not to write things like
a(myString.contains("hello")).shouldBeTrue();
better syntax:
a(myString).shouldContain("hello");
_Original issue: http://code.google.com/p/activejdbc/issues/detail?id=100_ | 1.0 | Add Expectation.shouldContain(String) - _Original author: ipolevoy@gmail.com (July 28, 2011 23:25:21)_
so as not to write things like
a(myString.contains("hello")).shouldBeTrue();
better syntax:
a(myString).shouldContain("hello");
_Original issue: http://code.google.com/p/activejdbc/issues/detail?id=100_ | priority | add expectation shouldcontain string original author ipolevoy gmail com july so as not to write things like a mystring contains quot hello quot shouldbetrue better syntax a mystring shouldcontain quot hello quot original issue | 1 |
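The syntax proposed in the issue above is Java (JavaLite's fluent expectations); a tiny Python sketch of the same fluent-assertion idea, purely to illustrate why `a(myString).shouldContain("hello")` reads better than wrapping a boolean:

```python
class Expectation:
    """Tiny fluent wrapper so failures read well, instead of
    a(my_string.contains("hello")).should_be_true()."""

    def __init__(self, value):
        self.value = value

    def should_contain(self, substring):
        if substring not in self.value:
            raise AssertionError(f"expected {self.value!r} to contain {substring!r}")
        return self  # returning self allows chaining further expectations

def a(value):
    return Expectation(value)
```

The payoff is in the failure message: the wrapper can report both the value and the missing substring, which a bare boolean check cannot.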
591,650 | 17,857,594,420 | IssuesEvent | 2021-09-05 10:41:09 | GIST-Petition-Site-Project/GIST-petition-web | https://api.github.com/repos/GIST-Petition-Site-Project/GIST-petition-web | opened | Adjust scroll position on page navigation | Type: Feature/UI Type: Feature/Function Status: To Do Priority: Medium | ## Feature description
<li> When navigating to another page, the scroll position stayed the same, so the scroll is now reset to the top on every page navigation.
<li> On the first page, clicking the g-talk logo in the main header now triggers a reload.
<li> The main menu symbol with the media query applied did not react on mouse hover, so cursor: pointer was applied.
### Use cases
## Benefits
For whom and why.
## Requirements
## Links / references
 | 1.0 | Adjust scroll position on page navigation - ## Feature description
<li> When navigating to another page, the scroll position stayed the same, so the scroll is now reset to the top on every page navigation.
<li> On the first page, clicking the g-talk logo in the main header now triggers a reload.
<li> The main menu symbol with the media query applied did not react on mouse hover, so cursor: pointer was applied.
### Use cases
## Benefits
For whom and why.
## Requirements
## Links / references
 | priority | adjust scroll position on page navigation feature description when navigating to another page the scroll position stayed the same so the scroll is reset to the top on every page navigation on the first page clicking the g talk logo in the main header triggers a reload the main menu symbol with the media query applied did not react on mouse hover so cursor pointer was applied use cases benefits for whom and why requirements links references | 1 |
594,022 | 18,022,109,541 | IssuesEvent | 2021-09-16 20:54:18 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | [Windows] Right click in task bar is gone | bug windows general priority 2: medium | 1. Install windows build and run it
2. Go to task bar -> show hidden icons
3. Right click the Status icon
**Actual result:** context menu is gone
**Expected result:** right click on Status icon opens context menu with applicable actions (right click -> quit app for example)
<img width="1552" alt="Screenshot 2021-09-10 at 10 51 05" src="https://user-images.githubusercontent.com/82375995/132820722-2bd30d9d-000a-4436-9c37-42ea0bc75964.png">
https://user-images.githubusercontent.com/82375995/132820690-1acdd498-64e3-41f0-ae65-c5d9ede4591a.mov
| 1.0 | [Windows] Right click in task bar is gone - 1. Install windows build and run it
2. Go to task bar -> show hidden icons
3. Right click the Status icon
**Actual result:** context menu is gone
**Expected result:** right click on Status icon opens context menu with applicable actions (right click -> quit app for example)
<img width="1552" alt="Screenshot 2021-09-10 at 10 51 05" src="https://user-images.githubusercontent.com/82375995/132820722-2bd30d9d-000a-4436-9c37-42ea0bc75964.png">
https://user-images.githubusercontent.com/82375995/132820690-1acdd498-64e3-41f0-ae65-c5d9ede4591a.mov
| priority | right click in task bar is gone install windows build and run it go to task bar show hidden icons right click the status icon actual result context menu is gone expected result right click on status icon opens context menu with applicable actions right click quit app for example img width alt screenshot at src | 1 |
309,535 | 9,476,618,977 | IssuesEvent | 2019-04-19 15:39:53 | CosminNechifor/IKHNAIE | https://api.github.com/repos/CosminNechifor/IKHNAIE | closed | Manager should be the only contract that can modify the state of other contracts. | Medium Priority | ## Manager contract logic and implementation
The ``Manager`` contract should be the only contract that can modify the state of the deployed ``Component``s and ``Registry``.
## Needs to be taken care of:
- When a **ChildComponent** is removed from a **ParentComponent** → manager should do the following changes:
- ~~The **ParentComponent** should change its state into ``Broken``.~~ **REMOVED BECAUSE OF COUNTER EXAMPLE:** If we take the windows out of a car, it would not be considered broken, but it should show that it is no longer in its **original state**.
- **ChildComponent** should have the ``address(0)`` as parent.
- **ChildComponent** is flagged as broken:
- ~~**ParentComponent** should become ``Broken`` as well till we replace the missing component. (The propagation will be done till we reach the top level ``Component``)~~ Not sure if the propagation should go to the top. The Root component could still work normally. Example: if you take the radio out of the car, that doesn't make the car broken. Instead the logic should change a little bit.
- When a **ParentComponent** is flagged as broken, only the parent component should become broken.
| 1.0 | Manager should be the only contract that can modify the state of other contracts. - ## Manager contract logic and implementation
The ``Manager`` contract should be the only contract that can modify the state of the deployed ``Component``s and ``Registry``.
## Needs to be taken care of:
- When a **ChildComponent** is removed from a **ParentComponent** → manager should do the following changes:
- ~~The **ParentComponent** should change its state into ``Broken``.~~ **REMOVED BECAUSE OF COUNTER EXAMPLE:** If we take the windows out of a car, it would not be considered broken, but it should show that it is no longer in its **original state**.
- **ChildComponent** should have the ``address(0)`` as parent.
- **ChildComponent** is flagged as broken:
- ~~**ParentComponent** should become ``Broken`` as well till we replace the missing component. (The propagation will be done till we reach the top level ``Component``)~~ Not sure if the propagation should go to the top. The Root component could still work normally. Example: if you take the radio out of the car, that doesn't make the car broken. Instead the logic should change a little bit.
- When a **ParentComponent** is flagged as broken, only the parent component should become broken.
| priority | manager should be the only contract that can modify the state of other contracts manager contract logic and implementation the manager contract should be the only contract that can modify the state of the deployed component s and registry needs to be taken care of when a childcomponent is removed from a parentcomponent rarr manager should do the following changes the parentcomponent should change it s state into broken removed because of counter example if we take the windows out of a car it would not be considered broken but it should show that it s not as in the original state childcomponent should have the address as parent childcomponent is flaged as broken parentcomponent should become broken as well till we replace the missing component the propagation will be done till we reach the top level component not sure if the propagation should go to the top the root component could still work normally example if you take the radio out of the car that doesn t make the car broken instead the logic should change a little bit when a parentcomponent is flaged as broken only the parent component should become broken | 1 |
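The rules discussed in the issue above (a detached child gets a null parent, like `address(0)`; flagging broken affects only the immediate parent, not the whole chain up to the root) can be modeled in a few lines. This is a toy Python model for clarity only; the real logic lives in the Solidity `Manager`/`Component` contracts:

```python
class Component:
    """Toy model of the component rules; not the actual contract code."""

    def __init__(self):
        self.parent = None        # None plays the role of address(0)
        self.children = []
        self.broken = False

    def remove_child(self, child):
        self.children.remove(child)
        child.parent = None       # detached child ends up with a null parent

    def flag_broken(self):
        self.broken = True
        if self.parent is not None:
            self.parent.broken = True   # affects the immediate parent only
```

Stopping the propagation one level up matches the radio-out-of-the-car counterexample: the car is degraded, but the fleet it belongs to is not.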
416,019 | 12,138,458,525 | IssuesEvent | 2020-04-23 17:16:30 | AbsaOSS/enceladus | https://api.github.com/repos/AbsaOSS/enceladus | closed | Make sure 'Source' and 'Raw' checkpoints are present when Standardization starts | Standardization feature priority: medium | ## Background
We need to start more strict validations of incoming _INFO files.
## Feature
Make sure 'Source' and 'Raw' checkpoints are present when Standardization starts.
## Additional context
It might require changes to Atum to allow clients access to checkpoints (instance of `ControlMeasure`)
This is related to #1186 | 1.0 | Make sure 'Source' and 'Raw' checkpoints are present when Standardization starts - ## Background
We need to start more strict validations of incoming _INFO files.
## Feature
Make sure 'Source' and 'Raw' checkpoints are present when Standardization starts.
## Additional context
It might require changes to Atum to allow clients access to checkpoints (instance of `ControlMeasure`)
This is related to #1186 | priority | make sure source and raw checkpoints are present when standardization starts background we need to start more strict validations of incoming info files feature make sure source and raw checkpoints are present when standardization starts additional context it might require changes to atum to allow clients access to checkpoints instance of controlmeasure this is related to | 1 |
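The validation requested above boils down to checking checkpoint names before Standardization runs. The real check would be Scala code against Atum's `ControlMeasure`; this is only a shape sketch over a parsed `_INFO`-like dict, with hypothetical field names:

```python
REQUIRED_CHECKPOINTS = ("Source", "Raw")

def missing_checkpoints(control_measure: dict) -> list:
    """Names from REQUIRED_CHECKPOINTS absent from a parsed _INFO structure."""
    present = {cp.get("name") for cp in control_measure.get("checkpoints", [])}
    return [name for name in REQUIRED_CHECKPOINTS if name not in present]
```

Standardization could then fail fast with the list of missing checkpoint names instead of discovering the gap mid-run.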
743,058 | 25,885,538,588 | IssuesEvent | 2022-12-14 14:19:10 | ncssar/radiolog | https://api.github.com/repos/ncssar/radiolog | closed | pywintypes error 31 (ShellExecute error) when printing | bug Priority:Medium | From the transcript:
```
183754:PRINT radio log
183754:teamFilterList=['']
183754:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_OP1.pdf
183754:length:8
183754:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183754:Page number:1
183754:Height:43.199999999999996
183754:Pagesize:(792.0, 612.0)
183755:done drawing printLogHeaderFooter canvas
183755:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5084, in accept
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183819:PRINT radio log
183819:teamFilterList=['']
183819:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_OP1.pdf
183819:length:8
183819:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183819:Page number:1
183819:Height:43.199999999999996
183819:Pagesize:(792.0, 612.0)
183819:done drawing printLogHeaderFooter canvas
183819:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5084, in accept
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183822:PRINT team radio logs
183822:teamFilterList=['TeamAlpha', 'TeamBravo']
183822:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_teams_OP1.pdf
183822:length:6
183822:length:3
183822:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183822:Page number:1
183822:Height:43.199999999999996
183822:Pagesize:(792.0, 612.0)
183822:done drawing printLogHeaderFooter canvas
183822:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5087, in accept
File "radiolog.py", line 2619, in printTeamLogs
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183824:PRINT clue log
183824:appending: ['', 'Radio Log Begins: Sun Dec 11, 2022', '', '1833', '', '', '', '', '']
183824:Nothing to print for specified operational period 1
```
This was reported by @RadiosPRN.
A quick google of that error shows that a few folks determined that the problem was that no pdf reader application was installed, and/or it was not set as the default application for opening pdf files: https://stackoverflow.com/questions/36022695
Sure enough, if I uninstall Acrobat Reader, then printing from radiolog shows this:
```
175633:PRINT radio log
175633:teamFilterList=['']
175633:generating radio log pdf: C:\Users\caver\RadioLog\New_Incident_2022_12_11_161621\New_Incident_2022_12_11_161621_OP1.pdf
175633:length:8
175633:valid logo file C:\Users\caver\RadioLog\.config\radiolog_logo.jpg
175633:Page number:1
175633:Height:43.199999999999996
175633:Pagesize:(792.0, 612.0)
175633:done drawing printLogHeaderFooter canvas
175633:end of printLogHeaderFooter
Traceback (most recent call last):
File "C:\Users\caver\Documents\GitHub\radiolog\radiolog.py", line 5086, in accept
self.parent.printLog(opPeriod)
File "C:\Users\caver\Documents\GitHub\radiolog\radiolog.py", line 2613, in printLog
win32api.ShellExecute(0,"print",pdfName,'/d:"%s"' % win32print.GetDefaultPrinter(),".",0)
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
```
The error syntax is a bit different - not sure why - but the underlying error 31 seems to be the same.
This does match the behavior of what @RadiosPRN reported - you can only save one pdf file at a time, because the print failure for any given doc causes the subsequent pdf saves to be skipped.
So this issue could have two fixes:
1) find a way to print without needing a pdf reader to be installed (and set as the default application for pdfs)
2) if a print fails, continue with the save of the other requested pdfs | 1.0 | pywintypes error 31 (ShellExecute error) when printing - From the transcript:
```
183754:PRINT radio log
183754:teamFilterList=['']
183754:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_OP1.pdf
183754:length:8
183754:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183754:Page number:1
183754:Height:43.199999999999996
183754:Pagesize:(792.0, 612.0)
183755:done drawing printLogHeaderFooter canvas
183755:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5084, in accept
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183819:PRINT radio log
183819:teamFilterList=['']
183819:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_OP1.pdf
183819:length:8
183819:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183819:Page number:1
183819:Height:43.199999999999996
183819:Pagesize:(792.0, 612.0)
183819:done drawing printLogHeaderFooter canvas
183819:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5084, in accept
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183822:PRINT team radio logs
183822:teamFilterList=['TeamAlpha', 'TeamBravo']
183822:generating radio log pdf: C:\Users\SAR 425\Documents\RadioLog Backups\Testing_2022_12_11_183332\Testing_2022_12_11_183332_teams_OP1.pdf
183822:length:6
183822:length:3
183822:valid logo file C:\Users\SAR 425\RadioLog\.config\radiolog_logo.jpg
183822:Page number:1
183822:Height:43.199999999999996
183822:Pagesize:(792.0, 612.0)
183822:done drawing printLogHeaderFooter canvas
183822:end of printLogHeaderFooter
Uncaught exception
Traceback (most recent call last):
File "radiolog.py", line 5087, in accept
File "radiolog.py", line 2619, in printTeamLogs
File "radiolog.py", line 2611, in printLog
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
183824:PRINT clue log
183824:appending: ['', 'Radio Log Begins: Sun Dec 11, 2022', '', '1833', '', '', '', '', '']
183824:Nothing to print for specified operational period 1
```
This was reported by @RadiosPRN.
A quick google of that error shows that a few folks determined that the problem was that no pdf reader application was installed, and/or it was not set as the default application for opening pdf files: https://stackoverflow.com/questions/36022695
Sure enough, if I uninstall Acrobat Reader, then printing from radiolog shows this:
```
175633:PRINT radio log
175633:teamFilterList=['']
175633:generating radio log pdf: C:\Users\caver\RadioLog\New_Incident_2022_12_11_161621\New_Incident_2022_12_11_161621_OP1.pdf
175633:length:8
175633:valid logo file C:\Users\caver\RadioLog\.config\radiolog_logo.jpg
175633:Page number:1
175633:Height:43.199999999999996
175633:Pagesize:(792.0, 612.0)
175633:done drawing printLogHeaderFooter canvas
175633:end of printLogHeaderFooter
Traceback (most recent call last):
File "C:\Users\caver\Documents\GitHub\radiolog\radiolog.py", line 5086, in accept
self.parent.printLog(opPeriod)
File "C:\Users\caver\Documents\GitHub\radiolog\radiolog.py", line 2613, in printLog
win32api.ShellExecute(0,"print",pdfName,'/d:"%s"' % win32print.GetDefaultPrinter(),".",0)
pywintypes.error: (31, 'ShellExecute', 'A device attached to the system is not functioning.')
```
The error syntax is a bit different - not sure why - but the underlying error 31 seems to be the same.
This does match the behavior of what @RadiosPRN reported - you can only save one pdf file at a time, because the print failure for any given doc causes the subsequent pdf saves to be skipped.
So this issue could have two fixes:
1) find a way to print without needing a pdf reader to be installed (and set as the default application for pdfs)
2) if a print fails, continue with the save of the other requested pdfs | priority | pywintypes error shellexecute error when printing from the transcript print radio log teamfilterlist generating radio log pdf c users sar documents radiolog backups testing testing pdf length valid logo file c users sar radiolog config radiolog logo jpg page number height pagesize done drawing printlogheaderfooter canvas end of printlogheaderfooter uncaught exception traceback most recent call last file radiolog py line in accept file radiolog py line in printlog pywintypes error shellexecute a device attached to the system is not functioning print radio log teamfilterlist generating radio log pdf c users sar documents radiolog backups testing testing pdf length valid logo file c users sar radiolog config radiolog logo jpg page number height pagesize done drawing printlogheaderfooter canvas end of printlogheaderfooter uncaught exception traceback most recent call last file radiolog py line in accept file radiolog py line in printlog pywintypes error shellexecute a device attached to the system is not functioning print team radio logs teamfilterlist generating radio log pdf c users sar documents radiolog backups testing testing teams pdf length length valid logo file c users sar radiolog config radiolog logo jpg page number height pagesize done drawing printlogheaderfooter canvas end of printlogheaderfooter uncaught exception traceback most recent call last file radiolog py line in accept file radiolog py line in printteamlogs file radiolog py line in printlog pywintypes error shellexecute a device attached to the system is not functioning print clue log appending nothing to print for specified operational period this was reported by radiosprn a quick google of that error shows that a few folks determined that the problem was that no pdf reader application was installed and or it was not set as the default application for opening pdf files sure enough if i uninstall acrobat reader 
then printing from radiolog shows this print radio log teamfilterlist generating radio log pdf c users caver radiolog new incident new incident pdf length valid logo file c users caver radiolog config radiolog logo jpg page number height pagesize done drawing printlogheaderfooter canvas end of printlogheaderfooter traceback most recent call last file c users caver documents github radiolog radiolog py line in accept self parent printlog opperiod file c users caver documents github radiolog radiolog py line in printlog shellexecute print pdfname d s getdefaultprinter pywintypes error shellexecute a device attached to the system is not functioning the error syntax is a bit different not sure why but the underlying error seems to be the same this does match the behavior of what radiosprn reported you can only save one pdf file at a time because the print failure for any given doc causes the subsequent pdf saves to be skipped so this issue could have two fixes find a way to print without needing a pdf reader to be installed and set as the default application for pdfs if a print fails continue with the save of the other requested pdfs | 1 |
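The record above proposes two fixes for the radiolog printing failure; fix (2) — "if a print fails, continue with the save of the other requested pdfs" — can be sketched as a loop that records failures instead of aborting. This is a hedged sketch, not radiolog's actual code: `print_pdf` is a hypothetical stand-in for whatever call performs the printing (e.g. `win32api.ShellExecute` on Windows).

```python
# Sketch of fix (2): a failed print for one PDF should not skip the rest.
# `print_pdf` is a hypothetical callable standing in for the real print call.
def print_all(pdf_names, print_pdf):
    """Try to print every PDF; collect failures instead of aborting."""
    failures = []
    for name in pdf_names:
        try:
            print_pdf(name)  # may raise, e.g. when no PDF reader is installed
        except Exception as exc:
            failures.append((name, exc))  # record the error and continue
    return failures
```

The caller can then report `failures` to the user while all remaining PDFs were still handled.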
252,975 | 8,049,608,058 | IssuesEvent | 2018-08-01 10:39:32 | IBM/watson-assistant-workbench | https://api.github.com/repos/IBM/watson-assistant-workbench | opened | Use python library for parsing TOML config files | Priority: medium discussion | We use TOML config format (https://github.com/toml-lang/toml).
Think about using toml python library for parsing (https://pypi.python.org/pypi/toml). | 1.0 | Use python library for parsing TOML config files - We use TOML config format (https://github.com/toml-lang/toml).
Think about using toml python library for parsing (https://pypi.python.org/pypi/toml). | priority | use python library for parsing toml config files we use toml config format think about using toml python library for parsing | 1 |
52,295 | 3,022,484,097 | IssuesEvent | 2015-07-31 20:38:27 | information-artifact-ontology/IAO | https://api.github.com/repos/information-artifact-ontology/IAO | opened | Bibliographic metadata in IAO | imported Priority-Medium | _From [z_califo...@shiftingbalance.org](https://code.google.com/u/113769195097935438546/) on February 15, 2010 15:27:35_
I need to organize a collection of bibliographic references and I would like to consider whether IAO
ought to be expanded to allow the possibility of using it for that purpose. By "bibliographic," I mean
any kind of human expression that can be cataloged, not just books.
To attempt to invent a new standard for bibliographic metadata is not a good idea, and moreover it
would be a waste of time, since many such standards are in use today. The strategy should be to
represent these metadata standards in IAO. I see three central issues.
(1) Provide a mechanism in IAO which allows users to apply any metadata standard they want. I have a
few ideas about how to work this out which I can share with people in an IAO call.
(2) Realism about ontology. It's not clear whether the commonly used metadata standards describe
works in a way which reflects their most salient aspect from an ontological point of view. I think there
is going to have to be some compromise here, but it won't be too dear for realists.
(3) Making this work by surveying the available digital representations of the various metadata
schemes and modeling them in OWL, so that someone could enter records by hand, in Protege; and
creating parsers for importing records into an ontology en masse from the various catalogs and
indexes.
_Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=77_ | 1.0 | Bibliographic metadata in IAO - _From [z_califo...@shiftingbalance.org](https://code.google.com/u/113769195097935438546/) on February 15, 2010 15:27:35_
I need to organize a collection of bibliographic references and I would like to consider whether IAO
ought to be expanded to allow the possibility of using it for that purpose. By "bibliographic," I mean
any kind of human expression that can be cataloged, not just books.
To attempt to invent a new standard for bibliographic metadata is not a good idea, and moreover it
would be a waste of time, since many such standards are in use today. The strategy should be to
represent these metadata standards in IAO. I see three central issues.
(1) Provide a mechanism in IAO which allows users to apply any metadata standard they want. I have a
few ideas about how to work this out which I can share with people in an IAO call.
(2) Realism about ontology. It's not clear whether the commonly used metadata standards describe
works in a way which reflects their most salient aspect from an ontological point of view. I think there
is going to have to be some compromise here, but it won't be too dear for realists.
(3) Making this work by surveying the available digital representations of the various metadata
schemes and modeling them in OWL, so that someone could enter records by hand, in Protege; and
creating parsers for importing records into an ontology en masse from the various catalogs and
indexes.
_Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=77_ | priority | bibliographic metadata in iao from on february i need to organize a collection of bibliographic references and i would like to consider whether iao ought to be expanded to allow the possibility of using it for that purpose by bibliographic i mean any kind of human expression that can be cataloged not just books to attempt to invent a new standard for bibliographic metadata is not a good idea and moreover it would be a waste of time since many such standards are in use today the strategy should be to represent these metadata standards in iao i see three central issues provide a mechanism in iao which allows users to apply any metadata standard they want i have a few ideas about how to work this out which i can share with people in an iao call realism about ontology it s not clear whether the commonly used metadata standards describe works in a way which reflects their most salient aspect from an ontological point of view i think there is going to have to be some compromise here but it won t be too dear for realists making this work by surveying the available digital representations of the various metadata schemes and modeling them in owl so that someone could enter records by hand in protege and creating parsers for importing records into an ontology en masse from the various catalogs and indexes original issue | 1 |
477,564 | 13,764,587,094 | IssuesEvent | 2020-10-07 12:17:48 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Group Admin setting - can't remove the group parent | bug priority: medium | **Describe the bug**
If you have assigned the Group Parent and then try to remove the group parent from the backend, you can't remove it; the group parent is not removed.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Support ticket links**
If applicable, add HelpScout link or ticket number where the issue was originally reported.
| 1.0 | Group Admin setting - can't remove the group parent - **Describe the bug**
If you have assigned the Group Parent and then try to remove the group parent from the backend, you can't remove it; the group parent is not removed.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Support ticket links**
If applicable, add HelpScout link or ticket number where the issue was originally reported.
| priority | group admin setting can t remove the group parent describe the bug if you have assigned the group parent and then if you have to remove the group parent from the backend then you can t remove the group parent it s not removing the group parent to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem support ticket links if applicable add helpscout link or ticket number where the issue was originally reported | 1 |
675,232 | 23,085,442,649 | IssuesEvent | 2022-07-26 10:57:57 | COS301-SE-2022/Office-Booker | https://api.github.com/repos/COS301-SE-2022/Office-Booker | closed | Add email invite system | Priority: Medium | Once a guest has been added to the guestlist, the system should automatically send them an email informing them. | 1.0 | Add email invite system - Once a guest has been added to the guestlist, the system should automatically send them an email informing them. | priority | add email invite system once a guest has been added to the guestlist the system should automatically send them an email informing them | 1 |
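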
600,501 | 18,298,544,681 | IssuesEvent | 2021-10-05 23:16:32 | cagov/design-system | https://api.github.com/repos/cagov/design-system | closed | Content principle writing: Make your tone conversational, empathetic, and official | Medium Priority - Must | Write full documentation for the content principle _Make your tone conversational, empathetic, and official_.
Work will take place in this [Google Doc](https://docs.google.com/document/d/1XflZYzozsFHuyyDX5G0HbygexBwQsxyLiK4wPkH9NLQ/edit?usp=sharing). | 1.0 | Content principle writing: Make your tone conversational, empathetic, and official - Write full documentation for the content principle _Make your tone conversational, empathetic, and official_.
Work will take place in this [Google Doc](https://docs.google.com/document/d/1XflZYzozsFHuyyDX5G0HbygexBwQsxyLiK4wPkH9NLQ/edit?usp=sharing). | priority | content principle writing make your tone conversational empathetic and official write full documentation for the content principle make your tone conversational empathetic and official work will take place in this | 1 |
587,514 | 17,618,077,513 | IssuesEvent | 2021-08-18 12:20:10 | knative/docs | https://api.github.com/repos/knative/docs | closed | Add documentation for BYO certificate for custom domains | triage/needs-eng-input priority/medium kind/serving | **Describe the change you'd like to see**
- https://github.com/knative/serving/issues/10530 starts BYO certs but no documentation is available yet.
- We need a new doc page or add the procedure in https://knative.dev/docs/developer/serving/services/custom-domains/ | 1.0 | Add documentation for BYO certificate for custom domains - **Describe the change you'd like to see**
- https://github.com/knative/serving/issues/10530 starts BYO certs but no documentation is available yet.
- We need a new doc page or add the procedure in https://knative.dev/docs/developer/serving/services/custom-domains/ | priority | add documentation for byo certificate for custom domains describe the change you d like to see starts byo certs but no documentation is available yet we need a new doc page or add the procedure in | 1 |
173,498 | 6,525,640,896 | IssuesEvent | 2017-08-29 16:35:33 | Polymer/polymer-cli | https://api.github.com/repos/Polymer/polymer-cli | opened | Project style detection | Priority: Medium Status: Available Type: Enhancement | This is an umbrella issue for project style detection: determining whether a project is a reusable element style project or an application style project.
With this information we can vary the behavior and UX of a few sub-commands:
1. `serve` should show only the appropriate URL, and `serve -o` should open the right URL
2. `lint` should warn on component directories in imports in element projects, and on their absence in application projects
3. `build` should warn on building elements, or we can figure out a sensible subset of build steps to run. | 1.0 | Project style detection - This is an umbrella issue for project style detection: determining whether a project is a reusable element style project or an application style project.
With this information we can vary the behavior and UX of a few sub-commands:
1. `serve` should show only the appropriate URL, and `serve -o` should open the right URL
2. `lint` should warn on component directories in imports in element projects, and on their absence in application projects
3. `build` should warn on building elements, or we can figure out a sensible subset of build steps to run. | priority | project style detection this is an umbrella issue for project style detection determining whether a project is a reusable element styl project or application style project with this information we can differ the behavior and ux of a few sub commands serve should should only the appropriate url and server o should open the right url lint should warn on component directories in imports in element projects and the lack of in application projects build should warn on building elements or we can figure out a sensible subset of build steps to run | 1 |
315,347 | 9,612,293,517 | IssuesEvent | 2019-05-13 08:34:43 | ReliefApplications/bms_front | https://api.github.com/repos/ReliefApplications/bms_front | closed | Remove ngModel from forms | In progress Medium Priority Refactoring | Forms use both **ngModel** and **formControl**.
These two entities are part of two very different form models, respectively [template-driven form](https://angular.io/guide/forms) and [reactive forms](https://angular.io/guide/reactive-forms).
Using ngModel (part of the template-driven model) causes slowdowns and should therefore be used as little as possible.
Using **formControl** to their full potential would not only speed up the app but make both *typescript* and *html* files cleaner and shorter. | 1.0 | Remove ngModel from forms - Forms use both **ngModel** and **formControl**.
These two entities are part of two very different form models, respectively [template-driven form](https://angular.io/guide/forms) and [reactive forms](https://angular.io/guide/reactive-forms).
Using ngModel (part of the template-driven model) causes slowdowns and should therefore be used as little as possible.
Using **formControl** to their full potential would not only speed up the app but make both *typescript* and *html* files cleaner and shorter. | priority | remove ngmodel from forms forms use both ngmodel and formcontrol these two entities are part of two very different form models respectively and using ngmodel part of the template driven model causes slowdown and should therefore be used as little as possible using formcontrol to their full potential would not only speed up the app but make both typescript and html files cleaner and shorter | 1 |
137,703 | 5,314,902,603 | IssuesEvent | 2017-02-13 16:06:07 | knipferrc/plate | https://api.github.com/repos/knipferrc/plate | closed | Create HOC for provider component | Priority: Medium Type: Feature | Create an HOC similar to the nextjs examples to wrap our pages in. | 1.0 | Create HOC for provider component - Create an HOC similar to the nextjs examples to wrap our pages in. | priority | create hoc for provider component create an hoc similar to the nextjs examples to wrap our pages in | 1 |
612,834 | 19,043,989,032 | IssuesEvent | 2021-11-25 04:03:32 | frappe/erpnext | https://api.github.com/repos/frappe/erpnext | closed | BOM: inconsistent behaviour of checkboxes | bug manufacturing validated Medium Priority | Source FR-ISS-322190
Inconsistent behavior of checkboxes
On unchecking and then rechecking ‘with operations’ the content in the table is removed.
On the other hand, on unchecking and rechecking ‘Quality InspectionRequire’ the template remains as is.
| 1.0 | BOM: inconsistent behaviour of checkboxes - Source FR-ISS-322190
Inconsistent behavior of checkboxes
On unchecking and then rechecking ‘with operations’ the content in the table is removed.
On the other hand, on unchecking and rechecking ‘Quality Inspection Required’ the template remains as is.
| priority | bom inconsistent behaviour of checkboxes source fr iss inconsistent behavior of checkboxes on unchecking and then rechecking ‘with operations’ the content in the table is removed on the other hand on unchecking and rechecking ‘quality inspectionrequire’ the template remains as is | 1 |
511,876 | 14,883,916,899 | IssuesEvent | 2021-01-20 13:57:48 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Increase coverage of IRS Lite components | Priority: Medium | This PR drops the coverage by 0.6% https://github.com/onaio/reveal-frontend/pull/1379
We need to improve the coverage of these components to counter this drop. | 1.0 | Increase coverage of IRS Lite components - This PR drops the coverage by 0.6% https://github.com/onaio/reveal-frontend/pull/1379
We need to improve the coverage of these components to counter this drop. | priority | increase coverage of irs lite components this pr drops the coverage by we need to improve the coverage of these components to counter this drop | 1 |
46,017 | 2,944,678,390 | IssuesEvent | 2015-07-03 07:10:59 | servermon/servermon | https://api.github.com/repos/servermon/servermon | closed | Provide dummy development fixtures | feature Priority: Medium | Dummy fixtures for easier development would help a lot in figuring out problems with the web interface and would allow for easier testing as well | 1.0 | Provide dummy development fixtures - Dummy fixtures for easier development would help a lot in figuring out problems with the web interface and would allow for easier testing as well | priority | provide dummy development fixtures dummy fixtures for easier development would help a lot in figuring out problems with the web interface and would allow for easier testing as well | 1 |
658,241 | 21,882,556,970 | IssuesEvent | 2022-05-19 15:28:23 | AFM-SPM/TopoStats | https://api.github.com/repos/AFM-SPM/TopoStats | closed | Parity between gwyddion TopoStats grain statistics and `dev` | Medium Priority | **Is your feature request related to a problem? Please describe.**
`dev` does not produce grain statistics that are in parity with the old gwyddion TopoStats grain statistics.
**Describe the solution you'd like**
Parity needs to be established between them before continuing with development.
A summary of the statistic discrepancies:

| 1.0 | Parity between gwyddion TopoStats grain statistics and `dev` - **Is your feature request related to a problem? Please describe.**
`dev` does not produce grain statistics that are in parity with the old gwyddion TopoStats grain statistics.
**Describe the solution you'd like**
Parity needs to be established between them before continuing with development.
A summary of the statistic discrepancies:

| priority | parity between gwyddion topostats grain statistics and dev is your feature request related to a problem please describe dev does not produce grain statistics that are in parity with the old gwyddion topostats grain statistics describe the solution you d like parity needs to be made between them before continuing with development a summary of the statistic discrepancies | 1 |
805,600 | 29,577,943,258 | IssuesEvent | 2023-06-07 01:36:43 | codidact/qpixel | https://api.github.com/repos/codidact/qpixel | closed | Mathjax not rendered correctly in question previews on post lists | type: bug priority: medium area: markdown complexity: unassessed | https://math.codidact.com/posts/287481
On Math, formulas are not being shown correctly in question previews, though they are fine in the question pages. A comment there says:
> View Source shows that the raw HTML being sent to the browser is `A parabola is given by $y^2=2px$ with $p>0$. The point $D$ is on the parabola in the first quadrant at a distance of $8$ from the $x$-a...` so the problem is a double-escaping. | 1.0 | Mathjax not rendered correctly in question previews on post lists - https://math.codidact.com/posts/287481
On Math, formulas are not being shown correctly in question previews, though they are fine in the question pages. A comment there says:
> View Source shows that the raw HTML being sent to the browser is `A parabola is given by $y^2=2px$ with $p>0$. The point $D$ is on the parabola in the first quadrant at a distance of $8$ from the $x$-a...` so the problem is a double-escaping. | priority | mathjax not rendered correctly in question previews on post lists on math formulas are not being shown correctly in question previews though they are fine in the question pages a comment there says view source shows that the raw html being sent to the browser is a parabola is given by y with p gt the point d is on the parabola in the first quadrant at a distance of from the x a so the problem is a double escaping | 1 |
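The diagnosis quoted in the record above is double-escaping: HTML-escaping text that has already been escaped turns `>` into `&gt;` and then into `&amp;gt;`, which the browser renders literally and MathJax can no longer read as a formula. A minimal, library-agnostic reproduction of that failure mode with Python's stdlib `html` module (the real site is Rails, so this only illustrates the mechanism):

```python
# Demonstrate the double-escaping bug: escaping already-escaped HTML.
import html

raw = "with $p>0$"
once = html.escape(raw)    # correct output:  'with $p&gt;0$'
twice = html.escape(once)  # the bug:         'with $p&amp;gt;0$'

assert once == "with $p&gt;0$"
assert twice == "with $p&amp;gt;0$"
# The fix is to escape exactly once on output; a corrupted value can be
# repaired by unescaping one layer:
assert html.unescape(twice) == once
```

The general rule illustrated here is to keep strings unescaped internally and escape exactly once, at render time.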
40,568 | 2,868,928,621 | IssuesEvent | 2015-06-05 22:01:02 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Test Pub against the real revision file | enhancement Fixed Priority-Medium | <a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)**
_Originally opened as dart-lang/sdk#5906_
----
Currently Pub's SDK tests construct their own fake, sandboxed SDK directory to test the SDK source against. This works well for most purposes, but it doesn't provide any protection against the format of the actual revision file in the SDK changing.
We should have at least one test that runs against the real SDK. The test should only run locally if the SDK has been built, but it should always run on the build bots. | 1.0 | Test Pub against the real revision file - <a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)**
_Originally opened as dart-lang/sdk#5906_
----
Currently Pub's SDK tests construct their own fake, sandboxed SDK directory to test the SDK source against. This works well for most purposes, but it doesn't provide any protection against the format of the actual revision file in the SDK changing.
We should have at least one test that runs against the real SDK. The test should only run locally if the SDK has been built, but it should always run on the build bots. | priority | test pub against the real revision file issue by originally opened as dart lang sdk currently pub s sdk tests construct their own fake sandboxed sdk directory to test the sdk source against this works well for most purposes but it doesn t provide any protection against the format of the actual revision file in the sdk changing we should have at least one test that runs against the real sdk the test should only run locally if the sdk has been built but it should always run on the build bots | 1 |
665,502 | 22,320,200,244 | IssuesEvent | 2022-06-14 05:16:48 | matrixorigin/matrixone | https://api.github.com/repos/matrixorigin/matrixone | closed | Refactor frontend server and configuration center | kind/enhancement priority/medium | There are five kinds of possible server combinations:
1. SQL frontend server + computation server + storage server, which serves as the aggregated computation and storage architecture. The latter two servers share the same physical server instance.
2. SQL frontend server + computation server, which serves as the computation node of the disaggregated computation and storage architecture.
3. Computation server, similar to combination 2.
4. Storage server, pure storage server.
5. Storage server + computation server.
Corresponding adjustments:
- Refactor configuration center to support above combinations.
- Pass those relevant to initialization of server handlers.
Additional adjustments:
- Refactor frontend server using goetty
| 1.0 | Refactor frontend server and configuration center - There are five kinds of possible server combinations:
1. SQL frontend server + computation server + storage server, which serves as the aggregated computation and storage architecture. The latter two servers share the same physical server instance.
2. SQL frontend server + computation server, which serves as the computation node of the disaggregated computation and storage architecture.
3. Computation server, similar to combination 2.
4. Storage server, pure storage server.
5. Storage server + computation server.
Corresponding adjustments:
- Refactor configuration center to support above combinations.
- Pass those relevant to initialization of server handlers.
Additional adjustments:
- Refactor frontend server using goetty
| priority | refactor frontend server and configuration center there are five kinds of possible server combinations sql frontend server computation server storage server which is used to serve as aggregated computation and storage architecture the latter two server share the same physical server instance sql front server computation server which is used to serve as the computation node of disaggregated computation and storage architecture computation server similar with combination storage server pure storage server storage server computation server corresponding adjustments refactor configuration center to support above combinations pass those relevant to initialization of server handlers additional adjustments refactor frontend server using goetty | 1 |
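The five server combinations listed in the record above amount to a whitelist of role sets that a configuration center could validate against. The sketch below is hypothetical (MatrixOne is written in Go and this is not its code); the role names `frontend`/`compute`/`storage` are illustrative labels for the SQL frontend, computation, and storage servers.

```python
# Model the five allowed server combinations as role sets, so a configuration
# center can reject an unsupported mix of roles on one node.
FRONTEND, COMPUTE, STORAGE = "frontend", "compute", "storage"

ALLOWED_COMBINATIONS = [
    {FRONTEND, COMPUTE, STORAGE},  # 1. aggregated computation + storage
    {FRONTEND, COMPUTE},           # 2. compute node of disaggregated setup
    {COMPUTE},                     # 3. pure computation server
    {STORAGE},                     # 4. pure storage server
    {STORAGE, COMPUTE},            # 5. storage + computation
]

def is_valid(roles):
    """Return True if the requested role set is one of the five combinations."""
    return set(roles) in ALLOWED_COMBINATIONS

assert is_valid(["frontend", "compute"])
assert not is_valid(["frontend", "storage"])  # not in the list above
```

Validating at configuration-load time keeps the per-server initialization code free of combination checks.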
306,928 | 9,413,110,093 | IssuesEvent | 2019-04-10 06:53:33 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Add the option for Solr disabled/hidden when creating a site based on a bp for 3.1 | enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
For the latest 3.1 in develop, when creating a site with Solr as the search engine, bps that have ES queries will give a bunch of errors in the logs and an error when you preview the page in Studio.
**Describe the solution you'd like**
It would be nice to have the option for Solr as the search engine disabled/hidden when creating a site based on a bp with ES queries, by adding a flag in the yaml file for blueprints (I think it's the `craftercms-plugin.yaml` file?), so new users playing around with creating sites will not get the error.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | [studio-ui] Add the option for Solr disabled/hidden when creating a site based on a bp for 3.1 - **Is your feature request related to a problem? Please describe.**
For the latest 3.1 in develop, when creating a site with Solr as the search engine, bps that have ES queries will give a bunch of errors in the logs and an error when you preview the page in Studio.
**Describe the solution you'd like**
It would be nice to have the option for Solr as the search engine disabled/hidden when creating a site based on a bp with ES queries, by adding a flag in the yaml file for blueprints (I think it's the `craftercms-plugin.yaml` file?), so new users playing around with creating sites will not get the error.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | add the option for solr disabled hidden when creating a site based on a bp for is your feature request related to a problem please describe for the latest in develop when creating a site using solr for search engine bps that has es queries will give a bunch of errors in the logs and an error when you preview the page in studio describe the solution you d like it would be nice to have the option for solr as search engine disabled hidden when creating a site based on a bp for bp s with es queries by adding a flag in the yaml file for blueprints i think it s the craftercms plugin yaml file so new users playing around with creating sites will not get the error describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
431,135 | 12,475,586,867 | IssuesEvent | 2020-05-29 11:54:58 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | closed | Return to a consolidated comment box | Component: Frontend Difficulty: Medium Priority: Medium Status: Needs implementation Type: Enhancement | Neil previously consolidated the multiple comment boxes in the front end in #1201, but some of those were regressed in #1220. It would be good to re-implement this. | 1.0 | Return to a consolidated comment box - Neil previously consolidated the multiple comment boxes in the front end in #1201, but some of those were regressed in #1220. It would be good to re-implement this. | priority | return to a consolidated comment box neil previously consolidated the multiple comment boxes in the front end in but some of those were regressed in it would be good to re implement this | 1 |
634,152 | 20,328,007,049 | IssuesEvent | 2022-02-18 07:59:58 | Square789/PydayNightFunkin | https://api.github.com/repos/Square789/PydayNightFunkin | opened | Conductor resyncing sometimes goes crazy | bug priority: medium | Add some sort of safeguard that will regulate it down to 2 times/second or dynamically modify the (now) hardcoded bound of 20ms
(once you are out of the hole, of course) | 1.0 | Conductor resyncing sometimes goes crazy - Add some sort of safeguard that will regulate it down to 2 times/second or dynamically modify the (now) hardcoded bound of 20ms
(once you are out of the hole, of course) | priority | conductor resyncing sometimes goes crazy add some sort of safeguard that will regulate it down to times second or dynamically modify the now hardcoded bound of once you are out of the hole of course | 1 |
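The safeguard proposed in the record above — regulate resyncs down to 2 per second while keeping the drift bound — can be sketched as a small rate limiter. This is a hedged sketch, not PydayNightFunkin's actual conductor code; the clock is injected via `now` so the logic is testable, and the 20 ms bound is taken from the record.

```python
# Rate-limit conductor resyncs: only resync when the drift exceeds a bound
# AND the previous resync was at least 1/max_per_second seconds ago.
class ResyncLimiter:
    def __init__(self, max_per_second=2.0):
        self.min_interval = 1.0 / max_per_second  # 0.5 s between resyncs
        self._last = float("-inf")                # time of the last resync

    def should_resync(self, drift_ms, now, threshold_ms=20.0):
        """Return True if a resync is warranted at time `now` (seconds)."""
        if abs(drift_ms) <= threshold_ms:
            return False  # within the tolerated drift bound
        if now - self._last < self.min_interval:
            return False  # too soon after the previous resync
        self._last = now
        return True
```

The threshold could also be made dynamic (e.g. scaled by recent drift) instead of the hardcoded 20 ms the record mentions.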
261,183 | 8,227,633,760 | IssuesEvent | 2018-09-07 00:09:03 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Problem with players who did not complete tutorial in existing world when updated to 7.7 | Medium-High Priority | Getting complaints from people that some are being started over on worlds they were already playing when an existing world (not a new world) updates; although most players were not sent back to the start and kept all their items, inventory, houses, land, etc., some started over. It seems like it's related to tutorial completion.
I started a world in 7.6.3 and created a player and did part of the tutorial, about 6 steps into it. Then I updated to 7.7 and I was able to load in fine - however I did end up starting over at the player creation screen. I can see that would definitely be an issue for someone that might have played for some time but not completed all parts of the tutorial, and when the world updates they have to start over.
A couple of server owners have reported having to remove users that still had a few steps of the tutorial to go, but the world would not load at all, with no dump file, and the console gave no error (after they heard about the tutorial issue being talked about in support, several players on their server said they had not yet finished the tutorial). They said they deleted the profiles of anyone they knew of who was still in the tutorial, and said most were close to the end of the tutorial. Since these crashes were not making a crash report, not sure if it's the same issue or not.
I am not able to repro that a player with an incomplete tutorial will block a game from loading. I tried several times, even taking a character right to the last two steps in the tutorial, but the only thing that happened was that I was set back to the character creation screen in the updated world. On all the worlds I tried this on, the player just started over. It's possible that this was coincidental with some other problem with the player, not the tutorial; have to wait and see. I did not have time to test all the other possibilities that were brought up, as they did take tools out of the tent, I had not, some had built houses, I had not, and so on. | 1.0 | Problem with players who did not complete tutorial in existing world when updated to 7.7 - Getting complaints from people that some are being started over on worlds they were already playing when an existing world (not a new world) updates; although most players were not sent back to the start and kept all their items, inventory, houses, land, etc., some started over. It seems like it's related to tutorial completion.
I started a world in 7.6.3 and created a player and did part of the tutorial, about 6 steps into it. Then I updated to 7.7 and I was able to load in fine - however I did end up starting over at the player creation screen. I can see that would definitely be an issue for someone that might have played for some time but not completed all parts of the tutorial, and when the world updates they have to start over
A couple of server owners have reported having to remove users that still had a few steps in the tutorial to go but the world would not load at all, no dump file and the console gave no error (after they heard about the tutorial issue talked about in support, several players on their server said they had not yet finished the tutorial). They said they deleted the profiles of anyone still in the tutorial that they knew of and said most were close to the end of the tutorial. Since these crashes were not producing a crash report, not sure if it's the same issue or not.
I am not able to repro that a player with an incomplete tutorial will block a game from loading. I tried several times, even taking a character right to the last two steps in the tutorial, but the only thing that happened was that I was set back to the character creation screen in the updated world. On all the worlds I tried this on, the player just started over. It's possible that this was coincidental with some other problem with the player, not the tutorial; have to wait and see. I did not have time to test all the other possibilities that were brought up, as they did take tools out of the tent, I had not, some had built houses, I had not, and so on. | priority | problem with players who did not complete tutorial in existing world when updated to getting complaints from people that some are getting started over on worlds they were on after world updates to an existing world not a new world although most players were not back at start and had all their items inventory houses land etc started over it seems like its related to tutorial completion i started a world in and created a player and did part of the tutorial about steps into it then i updated to and i was able to load in fine however i did end up starting over at the player creation screen i can see that would definitely be an issue for someone that might have played for some time but not completed all parts the tutorial and when the world updates they have to start over a couple of server owners have reported having to remove users that still had a few steps in tutorial to go but the world would not load at all no dump file and the console gave no error after they heard about the tutorial it talked about in support and several players on their severer said they had not yet finished the tutorial they said they deleted the profiles of anyone still in the tutorial that the knew of and said most were close to the end of the tutorial since these crash were not making a crash report not sure if its the same
issue or not i am not able to repro that a player with a player with an incomplete tutorial will block a game from loading i tried several times even taking a character right to the last two steps in the tutorial but the only thing that happened was that i was set back to the character creation screen in the updated world all the worlds i tried this on the player just started over its possible that this was coincidental with some other problem with the player not the tutorial have to wait and see i did not have time to test all the other possibilities that brought up as they did take tools out of the tent i had not some had build houses i had not and so on | 1 |
267,967 | 8,395,218,928 | IssuesEvent | 2018-10-10 05:18:47 | CS2103-AY1819S1-T12-1/main | https://api.github.com/repos/CS2103-AY1819S1-T12-1/main | opened | As a forgetful user I would like to search with partial keywords | priority.Medium type.Story | so that i can search without remembering the exact keywords | 1.0 | As a forgetful user I would like to search with partial keywords - so that i can search without remembering the exact keywords | priority | as a forgetful user i would like to search with partial keywords so that i can search without remembering the exact keywords | 1 |
77,064 | 3,506,257,513 | IssuesEvent | 2016-01-08 05:01:42 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | what?Previously, this was not (BB #125) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 23.04.2010 13:27:55 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/125
<hr>
This did not happen before. Somewhere after 3-10 minutes this error knocks the server out
Oregon>Oregon>Max allowed socket connections 1024
Update time diff: 50. Players online: 1.
Update time diff: 51. Players online: 1.
*** glibc detected *** /opt/bin/oregon-core: munmap_chunk(): invalid pointer: 0x0000000001832158 ***
======= Backtrace: =========
/lib/libc.so.6[0x7fdef1966928]
/opt/bin/oregon-core(_ZN5World13SendBroadcastEv+0x1bf)[0x9bc413]
/opt/bin/oregon-core(_ZN5World6UpdateEl+0x797)[0x9c57d5]
/opt/bin/oregon-core(_ZN13WorldRunnable3runEv+0x74)[0x7477b8]
/opt/bin/oregon-core(_ZN9ACE_Based6Thread10ThreadTaskEPv+0x28)[0xad52b6]
/lib/libpthread.so.0[0x7fdef1c4cfc7]
/lib/libc.so.6(clone+0x6d)[0x7fdef19c259d]
======= Memory map: ========
00400000-00d39000 r-xp 00000000 08:09 48867 /opt/bin/oregon-core
00f38000-00f45000 rw-p 00938000 08:09 48867 /opt/bin/oregon-core
00f45000-02a9c000 rw-p 00f45000 00:00 0 [heap]
40000000-40001000 ---p 40000000 00:00 0
40001000-40801000 rwxp 40001000 00:00 0
40801000-40802000 ---p 40801000 00:00 0
40802000-41002000 rwxp 40802000 00:00 0
41002000-41003000 ---p 41002000 00:00 0
41003000-41803000 rwxp 41003000 00:00 0
41803000-41804000 ---p 41803000 00:00 0
41804000-42004000 rwxp 41804000 00:00 0
42004000-42005000 ---p 42004000 00:00 0
42005000-42805000 rwxp 42005000 00:00 0
42805000-42806000 ---p 42805000 00:00 0
42806000-43006000 rwxp 42806000 00:00 0
43006000-43007000 ---p 43006000 00:00 0
43007000-43807000 rwxp 43007000 00:00 0
43807000-43808000 ---p 43807000 00:00 0
43808000-44008000 rwxp 43808000 00:00 0
7fdedc000000-7fdeddac0000 rw-p 7fdedc000000 00:00 0
7fdeddac0000-7fdee0000000 ---p 7fdeddac0000 00:00 0
7fdee4000000-7fdee7fef000 rw-p 7fdee4000000 00:00 0
7fdee7fef000-7fdee8000000 ---p 7fdee7fef000 00:00 0
7fdee919b000-7fdeea789000 rw-p 7fdee919b000 00:00 0
7fdeebec6000-7fdeec000000 r--p 00000000 08:07 138808 /usr/lib/locale/locale-archive
7fdeec000000-7fdeeffcd000 rw-p 7fdeec000000 00:00 0
7fdeeffcd000-7fdef0000000 ---p 7fdeeffcd000 00:00 0
7fdef00ec000-7fdef08bd000 rw-p 7fdef00ec000 00:00 0
7fdef08bd000-7fdef08c7000 r-xp 00000000 08:07 32600 /lib/libnss_files-2.7.so
7fdef08c7000-7fdef0ac7000 ---p 0000a000 08:07 32600 /lib/libnss_files-2.7.so
7fdef0ac7000-7fdef0ac9000 rw-p 0000a000 08:07 32600 /lib/libnss_files-2.7.so
7fdef0ac9000-7fdef0ade000 r-xp 00000000 08:07 33857 /lib/libnsl-2.7.so
7fdef0ade000-7fdef0cdd000 ---p 00015000 08:07 33857 /lib/libnsl-2.7.so
7fdef0cdd000-7fdef0cdf000 rw-p 00014000 08:07 33857 /lib/libnsl-2.7.so
7fdef0cdf000-7fdef0ce1000 rw-p 7fdef0cdf000 00:00 0
7fdef0ce1000-7fdef0ce9000 r-xp 00000000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0ce9000-7fdef0ee9000 ---p 00008000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0ee9000-7fdef0eeb000 rw-p 00008000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0eeb000-7fdef0f19000 rw-p 7fdef0eeb000 00:00 0
7fdef0f19000-7fdef0f21000 r-xp 00000000 08:07 32777 /lib/librt-2.7.so
7fdef0f21000-7fdef1120000 ---p 00008000 08:07 32777 /lib/librt-2.7.so
7fdef1120000-7fdef1122000 rw-p 00007000 08:07 32777 /lib/librt-2.7.so
7fdef1122000-7fdef115d000 r-xp 00000000 08:07 32735 /lib/libncurses.so.5.7
7fdef115d000-7fdef135c000 ---p 0003b000 08:07 32735 /lib/libncurses.so.5.7
7fdef135c000-7fdef1361000 rw-p 0003a000 08:07 32735 /lib/libncurses.so.5.7
7fdef1361000-7fdef1363000 r-xp 00000000 08:07 32780 /lib/libdl-2.7.so
7fdef1363000-7fdef1563000 ---p 00002000 08:07 32780 /lib/libdl-2.7.so
7fdef1563000-7fdef1565000 rw-p 00002000 08:07 32780 /lib/libdl-2.7.so
7fdef1565000-7fdef16cb000 r-xp 00000000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef16cb000-7fdef18cb000 ---p 00166000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef18cb000-7fdef18f0000 rw-p 00166000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef18f0000-7fdef18f3000 rw-p 7fdef18f0000 00:00 Aborted
| 1.0 | what?Previously, this was not (BB #125) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 23.04.2010 13:27:55 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/125
<hr>
This did not happen before. Somewhere after 3-10 minutes this error knocks the server out
Oregon>Oregon>Max allowed socket connections 1024
Update time diff: 50. Players online: 1.
Update time diff: 51. Players online: 1.
*** glibc detected *** /opt/bin/oregon-core: munmap_chunk(): invalid pointer: 0x0000000001832158 ***
======= Backtrace: =========
/lib/libc.so.6[0x7fdef1966928]
/opt/bin/oregon-core(_ZN5World13SendBroadcastEv+0x1bf)[0x9bc413]
/opt/bin/oregon-core(_ZN5World6UpdateEl+0x797)[0x9c57d5]
/opt/bin/oregon-core(_ZN13WorldRunnable3runEv+0x74)[0x7477b8]
/opt/bin/oregon-core(_ZN9ACE_Based6Thread10ThreadTaskEPv+0x28)[0xad52b6]
/lib/libpthread.so.0[0x7fdef1c4cfc7]
/lib/libc.so.6(clone+0x6d)[0x7fdef19c259d]
======= Memory map: ========
00400000-00d39000 r-xp 00000000 08:09 48867 /opt/bin/oregon-core
00f38000-00f45000 rw-p 00938000 08:09 48867 /opt/bin/oregon-core
00f45000-02a9c000 rw-p 00f45000 00:00 0 [heap]
40000000-40001000 ---p 40000000 00:00 0
40001000-40801000 rwxp 40001000 00:00 0
40801000-40802000 ---p 40801000 00:00 0
40802000-41002000 rwxp 40802000 00:00 0
41002000-41003000 ---p 41002000 00:00 0
41003000-41803000 rwxp 41003000 00:00 0
41803000-41804000 ---p 41803000 00:00 0
41804000-42004000 rwxp 41804000 00:00 0
42004000-42005000 ---p 42004000 00:00 0
42005000-42805000 rwxp 42005000 00:00 0
42805000-42806000 ---p 42805000 00:00 0
42806000-43006000 rwxp 42806000 00:00 0
43006000-43007000 ---p 43006000 00:00 0
43007000-43807000 rwxp 43007000 00:00 0
43807000-43808000 ---p 43807000 00:00 0
43808000-44008000 rwxp 43808000 00:00 0
7fdedc000000-7fdeddac0000 rw-p 7fdedc000000 00:00 0
7fdeddac0000-7fdee0000000 ---p 7fdeddac0000 00:00 0
7fdee4000000-7fdee7fef000 rw-p 7fdee4000000 00:00 0
7fdee7fef000-7fdee8000000 ---p 7fdee7fef000 00:00 0
7fdee919b000-7fdeea789000 rw-p 7fdee919b000 00:00 0
7fdeebec6000-7fdeec000000 r--p 00000000 08:07 138808 /usr/lib/locale/locale-archive
7fdeec000000-7fdeeffcd000 rw-p 7fdeec000000 00:00 0
7fdeeffcd000-7fdef0000000 ---p 7fdeeffcd000 00:00 0
7fdef00ec000-7fdef08bd000 rw-p 7fdef00ec000 00:00 0
7fdef08bd000-7fdef08c7000 r-xp 00000000 08:07 32600 /lib/libnss_files-2.7.so
7fdef08c7000-7fdef0ac7000 ---p 0000a000 08:07 32600 /lib/libnss_files-2.7.so
7fdef0ac7000-7fdef0ac9000 rw-p 0000a000 08:07 32600 /lib/libnss_files-2.7.so
7fdef0ac9000-7fdef0ade000 r-xp 00000000 08:07 33857 /lib/libnsl-2.7.so
7fdef0ade000-7fdef0cdd000 ---p 00015000 08:07 33857 /lib/libnsl-2.7.so
7fdef0cdd000-7fdef0cdf000 rw-p 00014000 08:07 33857 /lib/libnsl-2.7.so
7fdef0cdf000-7fdef0ce1000 rw-p 7fdef0cdf000 00:00 0
7fdef0ce1000-7fdef0ce9000 r-xp 00000000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0ce9000-7fdef0ee9000 ---p 00008000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0ee9000-7fdef0eeb000 rw-p 00008000 08:07 33856 /lib/libcrypt-2.7.so
7fdef0eeb000-7fdef0f19000 rw-p 7fdef0eeb000 00:00 0
7fdef0f19000-7fdef0f21000 r-xp 00000000 08:07 32777 /lib/librt-2.7.so
7fdef0f21000-7fdef1120000 ---p 00008000 08:07 32777 /lib/librt-2.7.so
7fdef1120000-7fdef1122000 rw-p 00007000 08:07 32777 /lib/librt-2.7.so
7fdef1122000-7fdef115d000 r-xp 00000000 08:07 32735 /lib/libncurses.so.5.7
7fdef115d000-7fdef135c000 ---p 0003b000 08:07 32735 /lib/libncurses.so.5.7
7fdef135c000-7fdef1361000 rw-p 0003a000 08:07 32735 /lib/libncurses.so.5.7
7fdef1361000-7fdef1363000 r-xp 00000000 08:07 32780 /lib/libdl-2.7.so
7fdef1363000-7fdef1563000 ---p 00002000 08:07 32780 /lib/libdl-2.7.so
7fdef1563000-7fdef1565000 rw-p 00002000 08:07 32780 /lib/libdl-2.7.so
7fdef1565000-7fdef16cb000 r-xp 00000000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef16cb000-7fdef18cb000 ---p 00166000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef18cb000-7fdef18f0000 rw-p 00166000 08:09 181410 /opt/org/lib/libcrypto.so.0.9.8
7fdef18f0000-7fdef18f3000 rw-p 7fdef18f0000 00:00 Aborted
| priority | what previously this was not bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state resolved direct link раньше такого не было где то после минут такая ошибка выбивает сервер oregon oregon max allowed socket connections update time diff players online update time diff players online glibc detected opt bin oregon core munmap chunk invalid pointer backtrace lib libc so opt bin oregon core opt bin oregon core opt bin oregon core opt bin oregon core lib libpthread so lib libc so clone memory map r xp opt bin oregon core rw p opt bin oregon core rw p p rwxp p rwxp p rwxp p rwxp p rwxp p rwxp p rwxp p rwxp rw p p rw p p rw p r p usr lib locale locale archive rw p p rw p r xp lib libnss files so p lib libnss files so rw p lib libnss files so r xp lib libnsl so p lib libnsl so rw p lib libnsl so rw p r xp lib libcrypt so p lib libcrypt so rw p lib libcrypt so rw p r xp lib librt so p lib librt so rw p lib librt so r xp lib libncurses so p lib libncurses so rw p lib libncurses so r xp lib libdl so p lib libdl so rw p lib libdl so r xp opt org lib libcrypto so p opt org lib libcrypto so rw p opt org lib libcrypto so rw p аварийный останов | 1 |
16,702 | 2,615,122,067 | IssuesEvent | 2015-03-01 05:49:13 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | opened | Document how to use JSON partial response and update | auto-migrated Priority-Medium Type-Wiki | ```
External references, such as a standards document, or specification?
http://googlecode.blogspot.com/2011/07/lightning-fast-performance-tips-for.html
http://code.google.com/p/google-api-java-client/wiki/Json
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Document how to use partial response and update for JSON.
```
Original issue reported on code.google.com by `yan...@google.com` on 16 Aug 2011 at 2:35 | 1.0 | Document how to use JSON partial response and update - ```
External references, such as a standards document, or specification?
http://googlecode.blogspot.com/2011/07/lightning-fast-performance-tips-for.html
http://code.google.com/p/google-api-java-client/wiki/Json
Java environments (e.g. Java 6, Android 2.3, App Engine 1.4.2, or All)?
All
Please describe the feature requested.
Document how to use partial response and update for JSON.
```
Original issue reported on code.google.com by `yan...@google.com` on 16 Aug 2011 at 2:35 | priority | document how to use json partial response and update external references such as a standards document or specification java environments e g java android app engine or all all please describe the feature requested document how to use partial response and update for json original issue reported on code google com by yan google com on aug at | 1 |
787,551 | 27,722,168,303 | IssuesEvent | 2023-03-14 21:38:27 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Possible inconsistent catalog version mechanisms used during YSQL upgrade | kind/bug area/ysql priority/medium | Jira Link: [DB-5621](https://yugabyte.atlassian.net/browse/DB-5621)
### Description
During a YSQL upgrade we can start from a very old release where the table
`pg_yb_catalog_version` either
(1) does not exist and `yb_catalog_version_type` is set to
`CATALOG_VERSION_PROTOBUF_ENTRY`
(2) exists but has not been populated yet, the PG connection that runs the
upgrade script will fall back to use the old protobuf based mechanism to manage
catalog version and `yb_catalog_version_type` is also set to
`CATALOG_VERSION_PROTOBUF_ENTRY`.
Once `yb_catalog_version_type` is set to `CATALOG_VERSION_PROTOBUF_ENTRY` it is
sticky according to the following code:
```
YbCatalogVersionType YbGetCatalogVersionType()
{
if (IsBootstrapProcessingMode())
{
/*
* We don't have the catalog version table at the start of initdb,
* and there's no point in switching later on.
*/
yb_catalog_version_type = CATALOG_VERSION_PROTOBUF_ENTRY;
}
else if (yb_catalog_version_type == CATALOG_VERSION_UNSET)
{
bool catalog_version_table_exists = false;
HandleYBStatus(YBCPgTableExists(
TemplateDbOid, YBCatalogVersionRelationId,
&catalog_version_table_exists));
yb_catalog_version_type = catalog_version_table_exists
? CATALOG_VERSION_CATALOG_TABLE
: CATALOG_VERSION_PROTOBUF_ENTRY;
}
return yb_catalog_version_type;
}
```
Note that in the above if `yb_catalog_version_type` is already set to
`CATALOG_VERSION_PROTOBUF_ENTRY`, it will remain
`CATALOG_VERSION_PROTOBUF_ENTRY`. The only other place to break the stickiness
is that `YBRefreshCache()` unsets it back to `CATALOG_VERSION_UNSET` in order to
recalculate `yb_catalog_version_type`:
```
/*
* Get the latest syscatalog version from the master.
* Reset the cached version type if needed to force reading catalog version
* from the catalog table first.
*/
if (yb_catalog_version_type != CATALOG_VERSION_CATALOG_TABLE)
yb_catalog_version_type = CATALOG_VERSION_UNSET;
```
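The stickiness described above can be modeled as a tiny state machine. The following Python sketch is illustrative only (abbreviated names, not the actual YugabyteDB code); it shows why a connection that once chose `CATALOG_VERSION_PROTOBUF_ENTRY` keeps it until a cache refresh resets the type:

```python
# Illustrative model of the sticky yb_catalog_version_type logic.
UNSET = "UNSET"
PROTOBUF_ENTRY = "PROTOBUF_ENTRY"
CATALOG_TABLE = "CATALOG_TABLE"

def get_catalog_version_type(state, table_exists):
    # Mirrors YbGetCatalogVersionType: the type is only recomputed when UNSET.
    if state["type"] == UNSET:
        state["type"] = CATALOG_TABLE if table_exists else PROTOBUF_ENTRY
    return state["type"]

def refresh_cache(state):
    # Mirrors the reset in YBRefreshCache: anything but CATALOG_TABLE
    # goes back to UNSET so the next check re-reads the table's existence.
    if state["type"] != CATALOG_TABLE:
        state["type"] = UNSET

state = {"type": UNSET}
# First check happens before pg_yb_catalog_version exists: falls back to protobuf.
assert get_catalog_version_type(state, table_exists=False) == PROTOBUF_ENTRY
# The table is created later, but the earlier choice is sticky without a refresh.
assert get_catalog_version_type(state, table_exists=True) == PROTOBUF_ENTRY
# Only after a YBRefreshCache-style reset does the connection switch over.
refresh_cache(state)
assert get_catalog_version_type(state, table_exists=True) == CATALOG_TABLE
```

The last three assertions capture the failure mode discussed in this issue: without the refresh, the connection never re-examines whether `pg_yb_catalog_version` has appeared.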
YSQL upgrades can work in two modes:
(1) single connection mode where at any given time there is only one connection
that can run any upgrade SQL migration file. In order to execute a migration
file in a different connection, the old connection will shut down and a new
connection is created.
(2) multi-connection mode where at the beginning of YSQL upgrade, we pre-create all the
connections one for each database. For every migration file, we loop through each
connection and executes its SQL statements.
By default we use multi-connection mode, as shown in the following code
```
Register(
"upgrade_ysql", " [use_single_connection] (default false)",
```
Consider the first connection (template1) in multi-connection mode.
The problem is that the table `pg_yb_catalog_version` is a shared relation. Once
we run `V1__3979__pg_yb_catalog_version.sql` in the first connection (template1),
the table `pg_yb_catalog_version` will be created and properly initialized. The
next connection template0 may start using `pg_yb_catalog_version` rather than
the old protobuf mechanism, depending on whether it gets a chance to execute
`YBRefreshCache()` to reset its `yb_catalog_version_type` to
`CATALOG_VERSION_UNSET`. If a heartbeat arrives in time, it will start using
`pg_yb_catalog_version` because it is now ready. Once it starts using
`pg_yb_catalog_version` it will stick to that.
Secondly during tserver/master heartbeat response master will also pass back the
catalog version stored in `pg_yb_catalog_version` instead of that in protobuf
as long as it can read a non-zero catalog version from it, as shown below
```
if (table_info != nullptr) {
RETURN_NOT_OK(sys_catalog_->ReadYsqlDBCatalogVersion(kPgYbCatalogVersionTableId,
db_oid,
catalog_version,
last_breaking_version));
// If the version is properly initialized, we're done.
if ((!catalog_version || *catalog_version > 0) &&
(!last_breaking_version || *last_breaking_version > 0)) {
return Status::OK();
}
// However, it's possible for a table to have no entries mid-migration or if migration fails.
// In this case we'd like to fall back to the legacy approach.
}
```
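The fallback rule in that snippet can be summarized as: trust the value read from `pg_yb_catalog_version` only when it is properly initialized (non-zero), otherwise use the legacy protobuf-stored version. A minimal Python sketch of just that decision (a hypothetical helper for illustration, not YugabyteDB code):

```python
def effective_catalog_version(table_version, legacy_version):
    # table_version: value read from pg_yb_catalog_version, or None if the
    # table (or the row for this database) does not exist yet.
    # legacy_version: version kept in the old protobuf-based sys catalog entry.
    if table_version is not None and table_version > 0:
        return table_version
    # Mid-migration the table may exist with no rows (read as 0) -> fall back.
    return legacy_version

assert effective_catalog_version(5, 3) == 5     # table populated: use it
assert effective_catalog_version(0, 3) == 3     # no entries yet: legacy wins
assert effective_catalog_version(None, 3) == 3  # table missing: legacy wins
```

This is why the heartbeat can switch to the new source as soon as the table is populated, even while some PG connections are still stuck on the protobuf mechanism.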
Now come back to the template1 PG connection. It has yet to run all the rest
migration scripts and is still using the old protobuf mechanism because
`CATALOG_VERSION_PROTOBUF_ENTRY` is sticky as described above unless
`YBRefreshCache()` is executed. But `YBRefreshCache()` may not get executed at
all because for the connection where the DDL statement is executed it will not
need a call to `YBRefreshCache()` due to an optimization. In order for template1
connection to call `YBRefreshCache()` and have a chance to reset to
`CATALOG_VERSION_UNSET`, we will need
(1) another connection to create a new catalog version by incrementing the
current catalog version
(2) propagate that new catalog version to template1 connection via
heartbeat/shared memory mechanism
(3) the new catalog version must be greater than that of template1's own catalog version
If any of these 3 conditions is not met, template1 will not call
`YBRefreshCache()` and will stick with `CATALOG_VERSION_PROTOBUF_ENTRY`. So we can
end up in a state where the template1 connection continues to use the old protobuf mechanism
while other connections and tserver/master heartbeat use new
`pg_yb_catalog_version`. This inconsistency can cause upgrade test to fail
intermittently in the case where negative cache is involved.
[DB-5621]: https://yugabyte.atlassian.net/browse/DB-5621?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] Possible inconsistent catalog version mechanisms used during YSQL upgrade - Jira Link: [DB-5621](https://yugabyte.atlassian.net/browse/DB-5621)
### Description
During a YSQL upgrade we can start from a very old release where the table
`pg_yb_catalog_version` either
(1) does not exist and `yb_catalog_version_type` is set to
`CATALOG_VERSION_PROTOBUF_ENTRY`
(2) exists but has not been populated yet, the PG connection that runs the
upgrade script will fall back to use the old protobuf based mechanism to manage
catalog version and `yb_catalog_version_type` is also set to
`CATALOG_VERSION_PROTOBUF_ENTRY`.
Once `yb_catalog_version_type` is set to `CATALOG_VERSION_PROTOBUF_ENTRY` it is
sticky according to the following code:
```
YbCatalogVersionType YbGetCatalogVersionType()
{
if (IsBootstrapProcessingMode())
{
/*
* We don't have the catalog version table at the start of initdb,
* and there's no point in switching later on.
*/
yb_catalog_version_type = CATALOG_VERSION_PROTOBUF_ENTRY;
}
else if (yb_catalog_version_type == CATALOG_VERSION_UNSET)
{
bool catalog_version_table_exists = false;
HandleYBStatus(YBCPgTableExists(
TemplateDbOid, YBCatalogVersionRelationId,
&catalog_version_table_exists));
yb_catalog_version_type = catalog_version_table_exists
? CATALOG_VERSION_CATALOG_TABLE
: CATALOG_VERSION_PROTOBUF_ENTRY;
}
return yb_catalog_version_type;
}
```
Note that in the above if `yb_catalog_version_type` is already set to
`CATALOG_VERSION_PROTOBUF_ENTRY`, it will remain
`CATALOG_VERSION_PROTOBUF_ENTRY`. The only other place to break the stickiness
is that `YBRefreshCache()` unsets it back to `CATALOG_VERSION_UNSET` in order to
recalculate `yb_catalog_version_type`:
```
/*
* Get the latest syscatalog version from the master.
* Reset the cached version type if needed to force reading catalog version
* from the catalog table first.
*/
if (yb_catalog_version_type != CATALOG_VERSION_CATALOG_TABLE)
yb_catalog_version_type = CATALOG_VERSION_UNSET;
```
YSQL upgrades can work in two modes:
(1) single connection mode where at any given time there is only one connection
that can run any upgrade SQL migration file. In order to execute a migration
file in a different connection, the old connection will shut down and a new
connection is created.
(2) multi-connection mode where at the beginning of YSQL upgrade, we pre-create all the
connections one for each database. For every migration file, we loop through each
connection and executes its SQL statements.
By default we use multi-connection mode, as shown in the following code
```
Register(
"upgrade_ysql", " [use_single_connection] (default false)",
```
Consider the first connection (template1) in multi-connection mode.
The problem is that the table `pg_yb_catalog_version` is a shared relation. Once
we run `V1__3979__pg_yb_catalog_version.sql` in the first connection (template1),
the table `pg_yb_catalog_version` will be created and properly initialized. The
next connection template0 may start using `pg_yb_catalog_version` rather than
the old protobuf mechanism, depending on whether it gets a chance to execute
`YBRefreshCache()` to reset its `yb_catalog_version_type` to
`CATALOG_VERSION_UNSET`. If a heartbeat arrives in time, it will start using
`pg_yb_catalog_version` because it is now ready. Once it starts using
`pg_yb_catalog_version` it will stick to that.
Secondly during tserver/master heartbeat response master will also pass back the
catalog version stored in `pg_yb_catalog_version` instead of that in protobuf
as long as it can read a non-zero catalog version from it, as shown below
```
if (table_info != nullptr) {
RETURN_NOT_OK(sys_catalog_->ReadYsqlDBCatalogVersion(kPgYbCatalogVersionTableId,
db_oid,
catalog_version,
last_breaking_version));
// If the version is properly initialized, we're done.
if ((!catalog_version || *catalog_version > 0) &&
(!last_breaking_version || *last_breaking_version > 0)) {
return Status::OK();
}
// However, it's possible for a table to have no entries mid-migration or if migration fails.
// In this case we'd like to fall back to the legacy approach.
}
```
Now come back to the template1 PG connection. It has yet to run all the rest
migration scripts and is still using the old protobuf mechanism because
`CATALOG_VERSION_PROTOBUF_ENTRY` is sticky as described above unless
`YBRefreshCache()` is executed. But `YBRefreshCache()` may not get executed at
all because for the connection where the DDL statement is executed it will not
need a call to `YBRefreshCache()` due to an optimization. In order for template1
connection to call `YBRefreshCache()` and have a chance to reset to
`CATALOG_VERSION_UNSET`, we will need
(1) another connection to create a new catalog version by incrementing the
current catalog version
(2) propagate that new catalog version to template1 connection via
heartbeat/shared memory mechanism
(3) the new catalog version must be greater than that of template1's own catalog version
If any of these 3 conditions is not met, template1 will not call
`YBRefreshCache()` and will stick with `CATALOG_VERSION_PROTOBUF_ENTRY`. So we can
end up in a state where the template1 connection continues to use the old protobuf mechanism
while other connections and tserver/master heartbeat use new
`pg_yb_catalog_version`. This inconsistency can cause upgrade test to fail
intermittently in the case where negative cache is involved.
[DB-5621]: https://yugabyte.atlassian.net/browse/DB-5621?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | possible inconsistent catalog version mechanisms used during ysql upgrade jira link description ysql upgrade we can start from a very old release where the table pg yb catalog version either does not exist and yb catalog version type is set to catalog version protobuf entry exists but has not been populated yet the pg connection that runs the upgrade script will fall back to use the old protobuf based mechanism to manage catalog version and yb catalog version type is also set to catalog version protobuf entry once yb catalog version type is set to catalog version protobuf entry it is sticky according to the following code ybcatalogversiontype ybgetcatalogversiontype if isbootstrapprocessingmode we don t have the catalog version table at the start of initdb and there s no point in switching later on yb catalog version type catalog version protobuf entry else if yb catalog version type catalog version unset bool catalog version table exists false handleybstatus ybcpgtableexists templatedboid ybcatalogversionrelationid catalog version table exists yb catalog version type catalog version table exists catalog version catalog table catalog version protobuf entry return yb catalog version type note that in the above if yb catalog version type is already set to catalog version protobuf entry it will remain catalog version protobuf entry the only other place to break the stickness is that ybrefreshcache unsets it back to catalog version unset in order to recalculate yb catalog version type get the latest syscatalog version from the master reset the cached version type if needed to force reading catalog version from the catalog table first if yb catalog version type catalog version catalog table yb catalog version type catalog version unset ysql upgrades can work in two modes single connection mode where at any 
given time there is only one connection that can run any upgrade sql migration file in order to execute a migration file in a different connection the old connection will shutdown and a new connection is created multi connection mode where at the beginning of ysql upgrade we pre create all the connections one for each database for every migration file we loop through each connection and executes its sql statements by default we use multi connection mode as shown in the following code register upgrade ysql default false consider the first connection in multi connection mode the problem is that the table pg yb catalog version is a shared relation once we run pg yb catalog version sql in the first connection the table pg yb catalog version will be created and properly initialized the next connection may start using pg yb catalog version rather than the old protobuf mechanism depending on whether it gets a chance to execute ybrefreshcache to reset its yb catalog version type to catalog version unset if a heartbeat arrives in time it will start using pg yb catalog version because it is now ready once it starts using pg yb catalog version it will stick to that secondly during tserver master heartbeat response master will also pass back the catalog version stored in pg yb catalog version instead of that in protobuf as long as it can read a non zero catalog version from it as shown below if table info nullptr return not ok sys catalog readysqldbcatalogversion kpgybcatalogversiontableid db oid catalog version last breaking version if the version is properly initialized we re done if catalog version catalog version last breaking version last breaking version return status ok however it s possible for a table to have no entries mid migration or if migration fails in this case we d like to fall back to the legacy approach now come back to the pg connection it has yet to run all the rest migration scripts and is still using the old protobuf mechanism because catalog version 
protobuf entry is sticky as described above unless ybrefreshcache is executed but ybrefreshcache may not get executed at all because for the connection where the ddl statement is executed it will not need a call to ybrefreshcache due to an optimization in order for connection to call ybrefreshcache and have a chance to reset to catalog version unset we will need another connection to create a new catalog version by incrementing the current catalog version propagate that new catalog version to connection via heartbeat shared memory mechanism the new catalog version must be greater than that of s own catalog version if any of these conditions is not met will not call ybrefreshcache and will stick with catalog version protobuf entry so we can end up in a state where connection continues to use old protobuf mechansim while other connections and tserver master heartbeat use new pg yb catalog version this inconsistency can cause upgrade test to fail intermittently in the case where negative cache is involved | 1 |
350,015 | 10,477,243,679 | IssuesEvent | 2019-09-23 20:27:28 | uwigem/wiki2019 | https://api.github.com/repos/uwigem/wiki2019 | opened | In TabView, typing in the Tab Input box exits out of typing after 1 character | Priority: Medium Type: Bug | This is because the way it is set up, it refreshes the state after a single character is input (so it re-renders the component) and kicks the user out of focus for the input box.
The easiest way to fix this would be to re-focus the user, but the correct way would be to re-work how the state is being saved for these input boxes.
Not a super high priority because I don't envision people using this component a lot. | 1.0 | In TabView, typing in the Tab Input box exits out of typing after 1 character - This is because the way it is set up, it refreshes the state after a single character is input (so it re-renders the component) and kicks the user out of focus for the input box.
The easiest way to fix this would be to re-focus the user, but the correct way would be to re-work how the state is being saved for these input boxes.
Not a super high priority because I don't envision people using this component a lot. | priority | in tabview typing in the tab input box exits out of typing after character this is because the way it is set up it refreshes the state after a single character is input so it re renders the component and kicks the user out of focus for the input box the easiest way to fix this would be to re focus the user but the correct way would be to re work how the state is being saved for these input boxes not a super high priority because i don t envision people using this component a lot | 1 |
2,978 | 2,535,095,775 | IssuesEvent | 2015-01-25 17:52:59 | readium/SDKLauncher-Android | https://api.github.com/repos/readium/SDKLauncher-Android | closed | error compiling sources | Android bug priority medium | when compiling I get the following output:
In file included from ./../../ePub3/xml/utilities/io.cpp:22:0:
./../../ePub3/xml/utilities/io.h:30:1: error: 'EPUB3_XML_BEGIN_NAMESPACE' does not name a type
./../../ePub3/xml/utilities/io.h:65:1: error: expected class-name before '{' token
./../../ePub3/xml/utilities/io.h:93:1: error: expected class-name before '{' token
./../../ePub3/xml/utilities/io.h: In constructor 'StreamInputBuffer::StreamInputBuffer(std::istream&)':
./../../ePub3/xml/utilities/io.h:95:47: error: class 'StreamInputBuffer' does not have any field named 'InputBuffer'
./../../ePub3/xml/utilities/io.h: In constructor 'StreamInputBuffer::StreamInputBuffer(StreamInputBuffer&&)':
./../../ePub3/xml/utilities/io.h:96:49: error: class 'StreamInputBuffer' does not have any field named 'InputBuffer'
In file included from ./../../ePub3/xml/utilities/io.cpp:22:0:
./../../ePub3/xml/utilities/io.h: At global scope:
./../../ePub3/xml/utilities/io.h:125:1: error: 'EPUB3_XML_END_NAMESPACE' does not name a type
./../../ePub3/xml/utilities/io.cpp:32:1: error: 'InputBuffer' does not name a type
./../../ePub3/xml/utilities/io.cpp:37:5: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'int read_cb(void*, char*, int)':
./../../ePub3/xml/utilities/io.cpp:39:5: error: 'InputBuffer' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:39:19: error: 'p' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected type-specifier before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected '>' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected '(' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:52: error: expected primary-expression before '>' token
./../../ePub3/xml/utilities/io.cpp:39:62: error: expected ')' before ';' token
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:42:5: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'int close_cb(void*)':
./../../ePub3/xml/utilities/io.cpp:44:5: error: 'InputBuffer' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:44:19: error: 'p' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected type-specifier before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected '>' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected '(' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:47: error: expected primary-expression before '>' token
./../../ePub3/xml/utilities/io.cpp:44:57: error: expected ')' before ';' token
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:47:11: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'xmlDoc* xmlReadDocument(const char*, const char*, int)':
./../../ePub3/xml/utilities/io.cpp:49:22: error: '_buf' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:51:11: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'xmlDoc* htmlReadDocument(const char*, const char*, int)':
./../../ePub3/xml/utilities/io.cpp:53:23: error: '_buf' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp: In constructor 'OutputBuffer::OutputBuffer(const string&)':
./../../ePub3/xml/utilities/io.cpp:66:79: error: 'InternalError' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:72:65: error: 'InternalError' was not declared in this scope
make: Leaving directory `D:/Development/Workspaces/stuff/readium-sdk/Platform/Android'
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:119:1: error: 'EPUB3_XML_END_NAMESPACE' does not name a type
Anyone knows what could be the problem here? | 1.0 | error compiling sources - when compiling I get the following output:
In file included from ./../../ePub3/xml/utilities/io.cpp:22:0:
./../../ePub3/xml/utilities/io.h:30:1: error: 'EPUB3_XML_BEGIN_NAMESPACE' does not name a type
./../../ePub3/xml/utilities/io.h:65:1: error: expected class-name before '{' token
./../../ePub3/xml/utilities/io.h:93:1: error: expected class-name before '{' token
./../../ePub3/xml/utilities/io.h: In constructor 'StreamInputBuffer::StreamInputBuffer(std::istream&)':
./../../ePub3/xml/utilities/io.h:95:47: error: class 'StreamInputBuffer' does not have any field named 'InputBuffer'
./../../ePub3/xml/utilities/io.h: In constructor 'StreamInputBuffer::StreamInputBuffer(StreamInputBuffer&&)':
./../../ePub3/xml/utilities/io.h:96:49: error: class 'StreamInputBuffer' does not have any field named 'InputBuffer'
In file included from ./../../ePub3/xml/utilities/io.cpp:22:0:
./../../ePub3/xml/utilities/io.h: At global scope:
./../../ePub3/xml/utilities/io.h:125:1: error: 'EPUB3_XML_END_NAMESPACE' does not name a type
./../../ePub3/xml/utilities/io.cpp:32:1: error: 'InputBuffer' does not name a type
./../../ePub3/xml/utilities/io.cpp:37:5: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'int read_cb(void*, char*, int)':
./../../ePub3/xml/utilities/io.cpp:39:5: error: 'InputBuffer' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:39:19: error: 'p' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected type-specifier before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected '>' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:40: error: expected '(' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:39:52: error: expected primary-expression before '>' token
./../../ePub3/xml/utilities/io.cpp:39:62: error: expected ')' before ';' token
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:42:5: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'int close_cb(void*)':
./../../ePub3/xml/utilities/io.cpp:44:5: error: 'InputBuffer' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:44:19: error: 'p' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected type-specifier before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected '>' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:35: error: expected '(' before 'InputBuffer'
./../../ePub3/xml/utilities/io.cpp:44:47: error: expected primary-expression before '>' token
./../../ePub3/xml/utilities/io.cpp:44:57: error: expected ')' before ';' token
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:47:11: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'xmlDoc* xmlReadDocument(const char*, const char*, int)':
./../../ePub3/xml/utilities/io.cpp:49:22: error: '_buf' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:51:11: error: 'InputBuffer' has not been declared
./../../ePub3/xml/utilities/io.cpp: In function 'xmlDoc* htmlReadDocument(const char*, const char*, int)':
./../../ePub3/xml/utilities/io.cpp:53:23: error: '_buf' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp: In constructor 'OutputBuffer::OutputBuffer(const string&)':
./../../ePub3/xml/utilities/io.cpp:66:79: error: 'InternalError' was not declared in this scope
./../../ePub3/xml/utilities/io.cpp:72:65: error: 'InternalError' was not declared in this scope
make: Leaving directory `D:/Development/Workspaces/stuff/readium-sdk/Platform/Android'
./../../ePub3/xml/utilities/io.cpp: At global scope:
./../../ePub3/xml/utilities/io.cpp:119:1: error: 'EPUB3_XML_END_NAMESPACE' does not name a type
Anyone knows what could be the problem here? | priority | error compiling sources when compiling i get the following output in file included from xml utilities io cpp xml utilities io h error xml begin namespace does not name a type xml utilities io h error expected class name before token xml utilities io h error expected class name before token xml utilities io h in constructor streaminputbuffer streaminputbuffer std istream xml utilities io h error class streaminputbuffer does not have any field named inputbuffer xml utilities io h in constructor streaminputbuffer streaminputbuffer streaminputbuffer xml utilities io h error class streaminputbuffer does not have any field named inputbuffer in file included from xml utilities io cpp xml utilities io h at global scope xml utilities io h error xml end namespace does not name a type xml utilities io cpp error inputbuffer does not name a type xml utilities io cpp error inputbuffer has not been declared xml utilities io cpp in function int read cb void char int xml utilities io cpp error inputbuffer was not declared in this scope xml utilities io cpp error p was not declared in this scope xml utilities io cpp error expected type specifier before inputbuffer xml utilities io cpp error expected before inputbuffer xml utilities io cpp error expected before inputbuffer xml utilities io cpp error expected primary expression before token xml utilities io cpp error expected before token xml utilities io cpp at global scope xml utilities io cpp error inputbuffer has not been declared xml utilities io cpp in function int close cb void xml utilities io cpp error inputbuffer was not declared in this scope xml utilities io cpp error p was not declared in this scope xml utilities io cpp error expected type specifier before inputbuffer xml utilities io cpp error expected before inputbuffer xml utilities io cpp error expected before inputbuffer xml utilities io cpp error expected primary expression before token xml utilities io cpp 
error expected before token xml utilities io cpp at global scope xml utilities io cpp error inputbuffer has not been declared xml utilities io cpp in function xmldoc xmlreaddocument const char const char int xml utilities io cpp error buf was not declared in this scope xml utilities io cpp at global scope xml utilities io cpp error inputbuffer has not been declared xml utilities io cpp in function xmldoc htmlreaddocument const char const char int xml utilities io cpp error buf was not declared in this scope xml utilities io cpp in constructor outputbuffer outputbuffer const string xml utilities io cpp error internalerror was not declared in this scope xml utilities io cpp error internalerror was not declared in this scope make leaving directory d development workspaces stuff readium sdk platform android xml utilities io cpp at global scope xml utilities io cpp error xml end namespace does not name a type anyone knows what could be the problem here | 1 |
707,527 | 24,309,123,519 | IssuesEvent | 2022-09-29 20:19:45 | georchestra/georchestra | https://api.github.com/repos/georchestra/georchestra | closed | mapfishapp - take into account quotes in csw queries | enhancement 0 - Backlog priority-medium | <!---
@huboard:{"milestone_order":36.375,"order":0.002075195379683148,"custom_state":""}
-->
| 1.0 | mapfishapp - take into account quotes in csw queries - <!---
@huboard:{"milestone_order":36.375,"order":0.002075195379683148,"custom_state":""}
-->
| priority | mapfishapp take into account quotes in csw queries huboard milestone order order custom state | 1 |
3,127 | 2,537,161,292 | IssuesEvent | 2015-01-26 18:43:05 | jazzsequence/book-review-library | https://api.github.com/repos/jazzsequence/book-review-library | closed | Combine Author Image and Book Author meta boxes | CMB enhancement priority-medium taxonomy | Author Image and Book Author can be combined into a single Author Information box | 1.0 | Combine Author Image and Book Author meta boxes - Author Image and Book Author can be combined into a single Author Information box | priority | combine author image and book author meta boxes author image and book author can be combined into a single author information box | 1 |
652,716 | 21,559,537,359 | IssuesEvent | 2022-05-01 00:56:23 | synfinatic/aws-sso-cli | https://api.github.com/repos/synfinatic/aws-sso-cli | closed | Honor `$AWS_ROLE_SESSION_NAME` | enhancement good first issue wontfix priority:medium | https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Basically, for assumed roles, we should honor the AWS_ROLE_SESSION_NAME variable. | 1.0 | Honor `$AWS_ROLE_SESSION_NAME` - https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Basically, for assumed roles, we should honor the AWS_ROLE_SESSION_NAME variable. | priority | honor aws role session name basically for assumed roles we should honor the aws role session name variable | 1 |
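The row above asks that assumed-role sessions honor `AWS_ROLE_SESSION_NAME`, mirroring the AWS CLI behavior it links to. A minimal Python sketch of that lookup — the fallback default name is an illustrative assumption, not aws-sso-cli's actual default:

```python
import os

def role_session_name(default="aws-sso-cli"):
    """Session name to use when assuming a role.

    Honors the AWS_ROLE_SESSION_NAME environment variable, as the AWS
    CLI does, and falls back to a tool default otherwise. The default
    used here is an illustrative assumption.
    """
    return os.environ.get("AWS_ROLE_SESSION_NAME") or default
```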
232,316 | 7,657,609,137 | IssuesEvent | 2018-05-10 20:15:15 | ODM2/ODM2DataSharingPortal | https://api.github.com/repos/ODM2/ODM2DataSharingPortal | closed | Organizations are listed both as codes and full names in filters | close me fixed medium priority | When browsing sites, the organizations seem to appear randomly as either a full name or a code. So, in the snap below, "WSU" is a code; the full organization name is Winona State University, but "The Nature Conservancy" is the full organization name; it has a code of "TNC."

| 1.0 | Organizations are listed both as codes and full names in filters - When browsing sites, the organizations seem to appear randomly as either a full name or a code. So, in the snap below, "WSU" is a code; the full organization name is Winona State University, but "The Nature Conservancy" is the full organization name; it has a code of "TNC."

| priority | organizations are listed both as codes and full names in filters when browsing sites the organizations seem to appear randomly as either a full name or a code so in the snap below wsu is a code the full organization name is winona state university but the nature conservancy is the full organization name it has a code of tnc | 1 |
49,339 | 3,002,120,204 | IssuesEvent | 2015-07-24 15:26:20 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Groups UI: Add users allows selecting the same user multiple times though it is only saved once. | Priority: Medium Status: QA Type: Bug |
You can select the same user to add to a group multiple times though that user is saved only once when viewing group membership. | 1.0 | Groups UI: Add users allows selecting the same user multiple times though it is only saved once. -
You can select the same user to add to a group multiple times though that user is saved only once when viewing group membership. | priority | groups ui add users allows selecting the same user multiple times though it is only saved once you can select the same user to add to a group multiple times though that user is saved only once when viewing group membership | 1 |
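The duplicate-selection bug above is an add-without-membership-check: the save path already deduplicates, but the selection UI does not. A sketch of the order-preserving check that could run at selection time — the list-of-ids data model is an assumption:

```python
def add_members(group, selected_ids):
    """Append selected user ids to a group, skipping ids already
    present, so the selection UI matches what is actually saved."""
    seen = set(group)
    for uid in selected_ids:
        if uid not in seen:
            group.append(uid)
            seen.add(uid)
    return group
```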
119,199 | 4,762,887,710 | IssuesEvent | 2016-10-25 12:59:26 | zom/Zom-iOS | https://api.github.com/repos/zom/Zom-iOS | opened | Account password not stored (iOS keychain issue) | bug medium-priority | I think I have solved the disappearing account password issue. Both R and I were using devices that may have not had the keychain enabled (Is that possible?). She had a new device, and I had been messing around with settings, and was getting prompted to re-enable the iCloud keychain, or something along those lines.
Is it possible, that the app could work without the keychain (since you are starting with a default key/password for the database), but that the account password would be not stored if the keychain wasn't properly setup?
Can we either A) alert the user that they need to setup their keychain or B) not depend on the keychain for the account passwords?
Does this make sense? | 1.0 | Account password not stored (iOS keychain issue) - I think I have solved the disappearing account password issue. Both R and I were using devices that may have not had the keychain enabled (Is that possible?). She had a new device, and I had been messing around with settings, and was getting prompted to re-enable the iCloud keychain, or something along those lines.
Is it possible, that the app could work without the keychain (since you are starting with a default key/password for the database), but that the account password would be not stored if the keychain wasn't properly setup?
Can we either A) alert the user that they need to setup their keychain or B) not depend on the keychain for the account passwords?
Does this make sense? | priority | account password not stored ios keychain issue i think i have solved the disappearing account password issue both r and i were using devices that may have not had the keychain enabled is that possible she had a new device and i had been messing around with settings and was getting prompted to re enable the icloud keychain or something along those lines is it possible that the app could work without the keychain since you are starting with a default key password for the database but that the account password would be not stored if the keychain wasn t properly setup can we either a alert the user that they need to setup their keychain or b not depend on the keychain for the account passwords does this make sense | 1 |
266,992 | 8,378,095,466 | IssuesEvent | 2018-10-06 10:13:12 | CS2103-AY1819S1-F11-4/main | https://api.github.com/repos/CS2103-AY1819S1-F11-4/main | closed | timetable | feature.Timetable priority.High severity.Medium | to check if timetable input is valid
to do timetable ui for viewing timetable directly with person
| 1.0 | timetable - to check if timetable input is valid
to do timetable ui for viewing timetable directly with person
| priority | timetable to check if timetable input is valid to do timetable ui for viewing timetable directly with person | 1 |
26,007 | 2,684,096,786 | IssuesEvent | 2015-03-28 17:07:29 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Pasting into iPython does not work as expected | 1 star bug imported Priority-Medium | _From [ben.al...@gmail.com](https://code.google.com/u/108458162051700564602/) on June 08, 2012 09:09:02_
Required information! OS version: Win7 x64 ConEmu version: 120604 32bit (32-bit because I'm using 32-bit Python)
Far version: N/A *Bug description* Pasting long (> ~250 chars) blocks of text into iPython doesn't work as expected *Steps to reproduction* 1. Start new iPython console or attach to running iPython console
2. Copy example text from web browser, text editor etc. (DOS/ANSI, UNIX/UTF8... makes no difference how the text is encoded)
3. Paste into iPython window using '%cpaste' magic command
Observed behavior: Several lines of text will paste with appropriate CR and indentation. However, at some random point, the block will either stop, or will jump ahead a random number of lines (characters, maybe?). It is so flaky that I have actually seen it work properly >_< (once)
Expected behavior: Pasting "properly" formatted python code (where "proper" is defined as: line ending markers of some kind separate lines, and indentation using tabs or spaces is as expected for python code. The ipython "magic" command '%cpaste' will strip out extra characters from line beginnings (e.g. if pasting from an email, removes '>>') and "do the right thing" with line ending markers.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=570_ | 1.0 | Pasting into iPython does not work as expected - _From [ben.al...@gmail.com](https://code.google.com/u/108458162051700564602/) on June 08, 2012 09:09:02_
Required information! OS version: Win7 x64 ConEmu version: 120604 32bit (32-bit because I'm using 32-bit Python)
Far version: N/A *Bug description* Pasting long (> ~250 chars) blocks of text into iPython doesn't work as expected *Steps to reproduction* 1. Start new iPython console or attach to running iPython console
2. Copy example text from web browser, text editor etc. (DOS/ANSI, UNIX/UTF8... makes no difference how the text is encoded)
3. Paste into iPython window using '%cpaste' magic command
Observed behavior: Several lines of text will paste with appropriate CR and indentation. However, at some random point, the block will either stop, or will jump ahead a random number of lines (characters, maybe?). It is so flaky that I have actually seen it work properly >_< (once)
Expected behavior: Pasting "properly" formatted python code (where "proper" is defined as: line ending markers of some kind separate lines, and indentation using tabs or spaces is as expected for python code. The ipython "magic" command '%cpaste' will strip out extra characters from line beginnings (e.g. if pasting from an email, removes '>>') and "do the right thing" with line ending markers.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=570_ | priority | pasting into ipython does not work as expected from on june required information os version conemu version bit because i m using bit python far version n a bug description pasting long chars blocks of text into ipython doesn t work as expected steps to reproduction start new ipython console or attach to running ipython console copy example text from web browser text editor etc dos ansi unix makes no difference how the text is encoded paste into ipython window using cpaste magic command observed behavior several lines of text will paste with appropriate cr and indentation however at some random point the block will either stop or will jump ahead a random number of lines characters maybe it is so flaky that i have actually seen it work properly once expected behavior pasting properly formatted python code where proper is defined as line ending markers of some kind separate lines and indentation using tabs or spaces is as expected for python code the ipython magic command cpaste will strip out extra characters from line beginnings e g if pasting from an email removes and do the right thing with line ending markers original issue | 1 |
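The `%cpaste` behavior the report relies on — stripping quote or prompt markers such as `>>` from each pasted line before executing the block — can be sketched as follows; the prompt pattern here is an illustrative approximation, not IPython's exact regex:

```python
import re

# Rough approximation of the prefixes %cpaste strips; IPython's actual
# pattern differs in detail.
_PROMPT = re.compile(r"^\s*(?:>+|\$|In \[\d+\]:)\s?")

def strip_paste_prompts(block):
    """Remove leading quote/prompt markers from each line of a pasted
    block, roughly what IPython's %cpaste magic does before running it."""
    return "\n".join(_PROMPT.sub("", line) for line in block.splitlines())
```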
52,258 | 3,022,458,956 | IssuesEvent | 2015-07-31 20:29:07 | information-artifact-ontology/IAO | https://api.github.com/repos/information-artifact-ontology/IAO | closed | label and symbol as subclasses of data item | bug imported Priority-Medium | _From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 23, 2009 07:24:39_
By our definition of data item (a data item is an information content
entity that is intended to be a truthful statement about something (modulo,
e.g., measurement precision or other systematic errors) and is
constructed/acquired by a method which reliably tends to produce
(approximately) truthful statements.), label and symbol shouldn't be
subclasses. They have been moved under ICE for now. Potentially true for
data about an ontology part as well.
_Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=29_ | 1.0 | label and symbol as subclasses of data item - _From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 23, 2009 07:24:39_
By our definition of data item (a data item is an information content
entity that is intended to be a truthful statement about something (modulo,
e.g., measurement precision or other systematic errors) and is
constructed/acquired by a method which reliably tends to produce
(approximately) truthful statements.), label and symbol shouldn't be
subclasses. They have been moved under ICE for now. Potentially true for
data about an ontology part as well.
_Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=29_ | priority | label and symbol as subclasses of data item from on july by our definition of data item a data item is an information content entity that is intended to be a truthful statement about something modulo e g measurement precision or other systematic errors and is constructed acquired by a method which reliably tends to produce approximately truthful statements label and symbol shouldn t be subclasses they have been moved under ice for now potentially true for data about an ontology part as well original issue | 1 |
18,941 | 2,616,015,058 | IssuesEvent | 2015-03-02 00:57:41 | jasonhall/bwapi | https://api.github.com/repos/jasonhall/bwapi | closed | Restrict Unit::placeCOP to one placement | auto-migrated Component-Logic Priority-Medium Type-Enhancement Usability | ```
Implement the placeCOP command for Flag Beacons (UMS/Capture The Flag).
Probably as Unit::placeCOP
```
Original issue reported on code.google.com by `AHeinerm` on 27 Nov 2010 at 5:35 | 1.0 | Restrict Unit::placeCOP to one placement - ```
Implement the placeCOP command for Flag Beacons (UMS/Capture The Flag).
Probably as Unit::placeCOP
```
Original issue reported on code.google.com by `AHeinerm` on 27 Nov 2010 at 5:35 | priority | restrict unit placecop to one placement implement the placecop command for flag beacons ums capture the flag probably as unit placecop original issue reported on code google com by aheinerm on nov at | 1 |
175,448 | 6,551,185,788 | IssuesEvent | 2017-09-05 13:58:32 | arthurbeggs/riscv-simple | https://api.github.com/repos/arthurbeggs/riscv-simple | opened | Testbenches must be automated | feature medium priority unimplemented | A tcl script is necessary to change testbench files automatically when top-level entity is changed. | 1.0 | Testbenches must be automated - A tcl script is necessary to change testbench files automatically when top-level entity is changed. | priority | testbenches must be automated a tcl script is necessary to change testbench files automatically when top level entity is changed | 1 |
89,576 | 3,797,033,695 | IssuesEvent | 2016-03-23 04:46:43 | cs2103jan2016-f13-3j/main | https://api.github.com/repos/cs2103jan2016-f13-3j/main | closed | As a user, I want to be able to sort tasks alphabetically. | priority.medium type.epic type.story | Sort the tasks according the deadlines, alphabetical order, creation date. | 1.0 | As a user, I want to be able to sort tasks alphabetically. - Sort the tasks according the deadlines, alphabetical order, creation date. | priority | as a user i want to be able to sort tasks alphabetically sort the tasks according the deadlines alphabetical order creation date | 1 |
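The sorting story above (deadlines, alphabetical order, creation date) maps to a single keyed sort. A sketch with an assumed task shape — the dict keys are illustrative, not the project's actual model:

```python
def sort_tasks(tasks, by="alpha"):
    """Sort tasks by one of the criteria in the story: alphabetical
    title (case-insensitive), deadline, or creation date."""
    keys = {
        "alpha": lambda t: t["title"].lower(),
        "deadline": lambda t: t["deadline"],
        "created": lambda t: t["created"],
    }
    return sorted(tasks, key=keys[by])
```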
99,349 | 4,053,687,709 | IssuesEvent | 2016-05-24 09:31:33 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Contribute: Autocomplete not working for resource type after a space | bug Priority-Medium | Typing "zipped" returns "zipped shapefile as a choice. This is correct. However, typing any of the characters after "zipped" (i.e. " shapefile"), the autocomplete does not return the result and user cannot add a file type when no results are found.

| 1.0 | Contribute: Autocomplete not working for resource type after a space - Typing "zipped" returns "zipped shapefile as a choice. This is correct. However, typing any of the characters after "zipped" (i.e. " shapefile"), the autocomplete does not return the result and user cannot add a file type when no results are found.

| priority | contribute autocomplete not working for resource type after a space typing zipped returns zipped shapefile as a choice this is correct however typing any of the characters after zipped i e shapefile the autocomplete does not return the result and user cannot add a file type when no results are found | 1 |
237,222 | 7,757,598,770 | IssuesEvent | 2018-05-31 16:49:00 | JiscRDSS/rdss-canonical-data-model | https://api.github.com/repos/JiscRDSS/rdss-canonical-data-model | closed | add Checksum entity to /properties folder | alpha priority:Medium recommendation | This entity is specified on the logical model diagram but it is not listed amongst the entities in the JiscRDSS/rddss-canonical-data-model/properties directory.
During RDSS-Archivematica Alpha MVP Sprint 2, we encountered the standard digital preservation requirement to verify a chain of custody for Files once they enter the Preservation System domain. In order to support this requirement RDSS-Archivematica needs to read and verify checksum values for all the Files in a Dataset when they are moved to and from their JiscRDSS cloud storage location (S3).
These Files are listed in the Metadata Read message payload that RDSS-Archivematica receives to trigger a register-preservation-message event for that Dataset. However, the current (textual) version of the RDSS Canonical Data Model (CDM) does not have the necessary properties specified to use for checksum values (nor are these present for the Metadata Create payload specifications in the current rdss-messaging-api-docs). | 1.0 | add Checksum entity to /properties folder - This entity is specified on the logical model diagram but it is not listed amongst the entities in the JiscRDSS/rddss-canonical-data-model/properties directory.
During RDSS-Archivematica Alpha MVP Sprint 2, we encountered the standard digital preservation requirement to verify a chain of custody for Files once they enter the Preservation System domain. In order to support this requirement RDSS-Archivematica needs to read and verify checksum values for all the Files in a Dataset when they are moved to and from their JiscRDSS cloud storage location (S3).
These Files are listed in the Metadata Read message payload that RDSS-Archivematica receives to trigger a register-preservation-message event for that Dataset. However, the current (textual) version of the RDSS Canonical Data Model (CDM) does not have the necessary properties specified to use for checksum values (nor are these present for the Metadata Create payload specifications in the current rdss-messaging-api-docs). | priority | add checksum entity to properties folder this entity is specified on the logical model diagram but it is not listed amongst the entities in the jiscrdss rddss canonical data model properties directory during rdss archivematica alpha mvp sprint we encountered the standard digital preservation requirement to verify a chain of custody for files once they enter the preservation system domain in order to support this requirement rdss archivematica needs to read and verify checksum values for all the files in a dataset when they are moved to and from their jiscrdss cloud storage location these files are listed in the metadata read message payload that rdss archivematica receives to trigger a register preservation message event for that dataset however the current textual version of the rdss canonical data model cdm does not have the necessary properties specified to use for checksum values nor are these present for the metadata create payload specifications in the current rdss messaging api docs | 1 |
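The chain-of-custody requirement above reduces to computing and comparing a fixity value per File as it moves to and from S3. A streaming sketch of that computation — the entity/property names it would populate are exactly what the issue asks the CDM to specify, so they are left out here, and the algorithm choice is illustrative:

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=8192):
    """Compute a file's checksum in streaming fashion — the value a
    Checksum entity attached to a File would carry so transfers can be
    verified on both sides. Algorithm and chunk size are assumptions."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```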
87,148 | 3,737,438,727 | IssuesEvent | 2016-03-08 19:17:03 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | Virus apparantly not contagious | needs review priority: medium | I was treating a patient with a virus, I didn't catch it from him (even forgetting a mask), nor was I able to infect a monkey with it (neither by blood nor by sharing a room with the patient). Partial virus details:
> Transmitted By: Airborne
> Rate of Progression: 100
> Species Affected: Monkey, Human, Vox Pariah, Diona, Shadow, Stok, Unathi, Slime, Golem, space-adapted Human, Xenomorph Drone, Vox, Skrell, Neaera, Xenomorph Queen
| 1.0 | priority | 1 |
432,522 | 12,494,461,196 | IssuesEvent | 2020-06-01 11:15:00 | hochschule-darmstadt/openartbrowser | https://api.github.com/repos/hochschule-darmstadt/openartbrowser | closed | Give impression of size of artwork | User Interface feature medium priority | **Reason**
Artworks can vary considerably in size and it is difficult to get an adequate impression of the size.
**Solution**
TODO: make a suggestion and discuss.
Good solution that could be adopted: Digital collection Städel, e.g., "Maße" of
https://sammlung.staedelmuseum.de/de/werk/phone-call-iii
**Alternatives**
**Effects**
More realistic impression of artwork
**Acceptance criteria**
**Additional context**
Precondition: #298
Related: #297
| 1.0 | priority | 1 |
197,965 | 6,967,247,302 | IssuesEvent | 2017-12-10 05:55:38 | theQRL/qrl-wallet | https://api.github.com/repos/theQRL/qrl-wallet | closed | Transactions API implementation on view wallet page | Priority: Medium Status: Blocked Type: Bug | Currently transactions do not appear on the view wallet page. Updates are required on the qrl node to enable this functionality. | 1.0 | priority | 1 |
409,707 | 11,967,097,040 | IssuesEvent | 2020-04-06 05:46:52 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Right click to enable mouse on map finish & lobby panel executes the command bound to mouse2 | Priority: Medium Size: Small Type: Bug | Reported by 879m
**Describe the bug**
When in spectate, the right click to enable mouse feature on the map finished and lobby members panel doesn't eat the spectate input (`+attack`, `+attack2`, `+jump` to spectate next player, spectate previous player, and change spectator mode respectively). That is, the spectate command bound to `mouse2` is executed as well.
**To Reproduce**
Map finished panel:
1. Play a replay of a run.
2. Skip to end,
3. When map finished panel pops up, press right click to enable the mouse.
4. The spectate command bound to `mouse2` executes (eg. if `mouse2` bound to `+jump`, view will jump to thirdperson).
Lobby panel:
1. Spectate a player.
2. Press whatever is bound to `+duck` to disable mouse input.
3. Open the lobby members panel.
4. Press right click to enable mouse input.
5. The spectate command bound to `mouse2` executes.
**Expected behavior**
These panels should eat the spectate input.
**Desktop (please complete the following information):**
- OS: Windows | 1.0 | priority | 1 |
811,659 | 30,295,511,755 | IssuesEvent | 2023-07-09 20:08:13 | LucasAnselmoSilva12345/Social-Pets | https://api.github.com/repos/LucasAnselmoSilva12345/Social-Pets | closed | Create a Feed | Medium priority User experience | ## About the task
In this task, we'll show the Feed on our website. Important: the Feed is shown in two parts of the project:
- [Home page](https://social-pets.pages.dev/) = List all photos posted by user.
- [User account feed page](https://social-pets.pages.dev/account) = List only photos posted of the user. | 1.0 | priority | 1 |
664,360 | 22,267,400,543 | IssuesEvent | 2022-06-10 08:50:15 | SimplyVC/panic | https://api.github.com/repos/SimplyVC/panic | closed | Installation wizard - Alerts Setup 4 (Navigation) | 2 SP UI iteration 2 Priority: Medium | ### Story
As a node operator, when I'm done with my alerts setup, I want the ability to close off and finalise my set configs.
### Description
The scope of this ticket is limited to the navigation section of the Alerts Setup page. Here the node operator can either go back to previous steps of the configuration process, or validate his config.
Image 1

Image 2

### Requirements
- The node operator requires a **Back** button that when clicked takes him back to the repositories step. (See Image 1)
- The node operator requires a **Finalise** button that when clicked displays the configuration confirmation modal. (See Image 2)
- The modal title reads, "Your setup is complete!" and the body reads, "By clicking on the Finish button, you will be saving your configuration." (See Image 2)
- Clicking on the Back button kills the modal and the node operator is again able to review and update alert types and their values. (See Image 2)
- Clicking on the Finish button takes the node operator to the 'Chain setup completed' page and saves the configuration to MongoDB. (See Image 2)
### Blocked by
### Acceptance Criteria
**Scenario**: Alerts setup, Node operator is done with the review and update of alerts
**When**: The node operator is done from the Alerts setup phase
**And**: Clicks on the Finalise button
**Then**: He is presented with the config confirmation modal
**And**: If the node operator clicks on the Finish button
**Then**: The configs for blockchain setup, channels, nodes, repos, and alerts is pushed to MongoDB and he is taken to the 'Chain setup completed' page
**And**: If the node operator clicks the Back button instead of the Finish button, then he is back to the Alerts Setup page from where he can review and update alert types and values
| 1.0 | priority | 1 |
720,693 | 24,801,967,953 | IssuesEvent | 2022-10-24 22:46:54 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB][LST] Packed Columns: ERROR: Query error: Write response count mismatch | kind/bug area/docdb priority/medium 2.14 Backport Required | Jira Link: [DB-2632](https://yugabyte.atlassian.net/browse/DB-2632)
### Description
On a universe created with ` bin/yb-ctl --replication_factor 3 create --tserver_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700 --master_flags=ysql_enable_packed_row=true,ysql_packed_row_size_limit=1700
`
Might be related to #12879
@spolitov Using [LST](https://github.com/yugabyte/yb-long-system-test/) against current master state (50cbd58f6bd317120fc2b7ebb4f0d83a4bceea27) fails with internal errors:
```
2022-06-14 08:41:07,736 MainThread INFO
2022-06-14 08:41:07,737 MainThread INFO --------------------------------------------------------------------------------
2022-06-14 08:41:07,737 MainThread INFO Running Long System Test 0.1
2022-06-14 08:41:07,737 MainThread INFO --------------------------------------------------------------------------------
2022-06-14 08:41:07,737 MainThread INFO
2022-06-14 08:41:07,745 MainThread INFO Reproduce with: git checkout e42027ff && ./long_system_test.py --nodes=127.0.0.1:5433,127.0.0.2:5433,127.0.0.3:5433 --threads=8 --runtime=0 --complexity=full --seed=159817
2022-06-14 08:41:08,007 MainThread INFO Database version: PostgreSQL 11.2-YB-2.15.1.0-b0 on x86_64-pc-linux-gnu, compiled by clang version 12.0.1 (https://github.com/yugabyte/llvm-project.git bdb147e675d8c87cee72cc1f87c4b82855977d94), 64-bit
2022-06-14 08:41:08,009 MainThread INFO Creating tables for database db_lst_159817
2022-06-14 08:41:24,505 MainThread INFO Starting worker_0: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction
2022-06-14 08:41:24,506 MainThread INFO Starting worker_1: SingleInsertAction, SingleUpdateAction, SingleDeleteAction, BulkInsertAction, BulkUpdateAction, SetConfigAction
2022-06-14 08:41:24,507 MainThread INFO Starting worker_2: RandomSelectAction, SetConfigAction
2022-06-14 08:41:24,509 MainThread INFO Starting worker_3: RandomSelectAction, SetConfigAction
2022-06-14 08:41:24,510 MainThread INFO Starting worker_4: RandomSelectAction, SetConfigAction
2022-06-14 08:41:24,511 MainThread INFO Starting worker_5: CreateIndexAction, DropIndexAction, SetConfigAction, AddColumnAction
2022-06-14 08:41:24,512 MainThread INFO Starting worker_6: RandomSelectAction, SetConfigAction
2022-06-14 08:41:24,513 MainThread INFO Starting worker_7: RandomSelectAction, SetConfigAction
2022-06-14 08:41:34,106 worker_1 ERROR Unexpected query failure: InternalError_
Query: INSERT INTO tg1_0 (c0_text, c1_jsonb) VALUES ('58', '{"a": 9, "b": ["0"], "c": false}'::jsonb);
values: None
runtime: 2022-06-14 08:41:34.101 - 2022-06-14 08:41:34.106
supports explain: True
supports rollback: True
affected rows: None
Action: SingleInsertAction
Error class: InternalError_
Error code: XX000
Error message: ERROR: Query error: Write response count mismatch
Transaction isolation level: read uncommitted
DB Node: host: 127.0.0.2, port: 5433
DB Backend PID: 186056
```
Logging looks something like this, happened many times:
```
E0614 08:56:56.080338 192119 batcher.cc:609] Batcher (0x000000001388ec18), session (0x00000000113afb38): Received wrong number of responses compared to request(s) sent.
E0614 08:58:08.070195 179400 write_query.cc:666] Wrong number or mismatches: 1 vs 2
E0614 08:58:08.070262 179400 async_rpc.cc:618] Write response count mismatch: 0 Redis requests sent, 0 responses received. 0 Apache CQL requests sent, 0 responses received. 2 PostgreSQL requests sent, 1 responses received.
E0614 08:58:08.070281 179400 async_rpc.cc:626] Illegal state (yb/client/async_rpc.cc:625): Write response count mismatch, request: tablet_id: "ddbc9811e06248acb6563538e617df47" propagated_hybrid_time: 6779687272735277056 include_trace: false write_batch { transaction { transaction_id: "\305\260\331-$bBL\206\202\024D\266x\206J" isolation: SNAPSHOT_ISOLATION status_tablet: "520abb9ec87c4b59bee73c57493f5c14" priority: 11136240255342564422 start_hybrid_time: 6779687272735170560 locality: GLOBAL } DEPRECATED_may_have_metadata: true } read_time { read_ht: 6779687272735264768 DEPRECATED_max_of_read_time_and_local_limit_ht: 6779687274783264768 global_limit_ht: 6779687274783264768 in_txn_limit_ht: 18446744073709551615 local_limit_ht: 6779687274783264768 } pgsql_write_batch { client: YQL_CLIENT_PGSQL stmt_id: 94232672 stmt_type: PGSQL_INSERT table_id: "0000450a00003000800000000000453d" schema_version: 14 ybctid_column_value { value { binary_value: "S\216\363\177\024K\345E\242\205\010\323HX\226\340\327\000\000!" 
} } column_values { column_id: 1 expr { value { double_value: -63.219470959047122 } } } column_values { column_id: 2 expr { value { float_value: 31.7553101 } } } column_values { column_id: 3 expr { value { double_value: 27.851813552206124 } } } column_values { column_id: 4 expr { value { binary_value: "H\017\000\000I\256\377\377W\n\000\000\002" } } } column_values { column_id: 5 expr { value { binary_value: "H\017\000\0006\367\377\377\330\t\000\000\002" } } } column_values { column_id: 6 expr { value { binary_value: "H\017\000\000\016\227\377\377\352\337\377\377\002" } } } column_values { column_id: 7 expr { value { binary_value: "B\017\000\000\033\200\2079\000\005\036U\031\001\005`\023\033\000\207D\000d\004\202\026\234\024 \003\002" } } } column_values { column_id: 8 expr { value { float_value: 46.837944 } } } column_refs { } ysql_catalog_version: 994 partition_key: "" } pgsql_write_batch { client: YQL_CLIENT_PGSQL stmt_id: 51843168 stmt_type: PGSQL_UPSERT table_id: "0000450a0000300080000000000045b5" schema_version: 0 range_column_values { value { float_value: 46.837944 } } range_column_values { value { float_value: 31.7553101 } } range_column_values { value { double_value: 27.851813552206124 } } range_column_values { value { double_value: -63.219470959047122 } } range_column_values { value { binary_value: "S\216\363\177\024K\345E\242\205\010\323HX\226\340\327\000\000!" } } column_refs { } ysql_catalog_version: 994 partition_key: "" } client_id1: 2399774823823658397 client_id2: 17313700842916508570 request_id: 1170 min_running_request_id: 1170 rejection_score: 0 batch_idx: 0, response: propagated_hybrid_time: 6779687272735772672 pgsql_response_batch { }
```
[lst.zip](https://github.com/yugabyte/yugabyte-db/files/8898021/lst.zip)
| 1.0 | priority | 1 |
126,999 | 5,009,050,525 | IssuesEvent | 2016-12-12 21:16:16 | slackapi/node-slack-sdk | https://api.github.com/repos/slackapi/node-slack-sdk | closed | Slack Web API 'users.profile.get' method question | bug Priority—Medium | * [x] I've read and understood the [Contributing guidelines](./CONTRIBUTING.md) and have done my best effort to follow them.
* [x] I've read and agree to the [Code of Conduct](./CODE_OF_CONDUCT.md).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
#### Description
I'm trying to figure out how to call the `users.profile.get` Slack Web API method from this library. Is it possible? The only thing I can find remotely close is the RTM dataStore method `getUserById`. | 1.0 | priority | 1 |
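The `users.profile.get` method asked about in the record above is an ordinary Slack Web API endpoint, so even without SDK support it can be reached over plain HTTPS. A minimal sketch, assuming the standard `slack.com/api` URL scheme — `buildProfileUrl` is a hypothetical helper for illustration, and the commented `WebClient` call assumes the current `@slack/web-api` package rather than the 2016-era SDK:

```javascript
// Sketch only: build the request URL for the users.profile.get Web API
// method. buildProfileUrl is a hypothetical helper, not part of the SDK.
function buildProfileUrl(userId) {
  const params = new URLSearchParams({ user: userId });
  return `https://slack.com/api/users.profile.get?${params.toString()}`;
}

// With the current SDK the same call would look roughly like this
// (assumed API shape — check the package docs before relying on it):
//
//   const { WebClient } = require('@slack/web-api');
//   const web = new WebClient(process.env.SLACK_TOKEN);
//   const { profile } = await web.users.profile.get({ user: 'U023BECGF' });

console.log(buildProfileUrl('U023BECGF'));
// → https://slack.com/api/users.profile.get?user=U023BECGF
```

In practice the token goes in an `Authorization: Bearer` header rather than the query string; the URL builder above only illustrates the endpoint and parameter shape.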
811,949 | 30,306,993,290 | IssuesEvent | 2023-07-10 10:08:55 | code4romania/rvm | https://api.github.com/repos/code4romania/rvm | opened | [Resources] change page title 'Vizualizare Resursă' | medium-priority | Change the page title from 'Vizualizare Resursă' to '_value from 'Denumire resursă' field_

| 1.0 | [Resources] change page title 'Vizualizare Resursă' - Change the page title from 'Vizualizare Resursă' to '_value from 'Denumire resursă' field_

| priority | change page title vizualizare resursă change the page title from vizualizare resursă to valure from denumire resursă field | 1 |
304,740 | 9,335,108,276 | IssuesEvent | 2019-03-28 17:46:35 | octobercms/october | https://api.github.com/repos/octobercms/october | closed | Backend pages won't open for new administrators | Priority: Medium Status: In Progress Type: Bug | - OctoberCMS Build: 446
- PHP Version: 7.2.10
- Database Engine: Mysql
- Plugins Installed: October Demo, RedirectToHTTPS
### Description:
Newly created administrators have trouble accessing pages in the backend.
### Steps To Reproduce:
- Created new administrator from CMS as a Developer with Super User.
- Login with new administrator account.
- Browse to CMS -> Pages
- Clicking on a page name does absolutely nothing.
- Checking any checkbox next to page names (ones for multiple deleting) fixes the problem.
- Inspecting the page shows following errors:
`Uncaught TypeError: Cannot read property 'setActiveItem' of undefined
jquery.min.js?v446:2 jQuery.Deferred exception: Cannot read property 'setActiveItem' of undefined TypeError: Cannot read property 'setActiveItem' of undefined
Uncaught TypeError: Cannot read property 'goTo' of undefined`

| 1.0 | Backend pages won't open for new administrators - - OctoberCMS Build: 446
- PHP Version: 7.2.10
- Database Engine: Mysql
- Plugins Installed: October Demo, RedirectToHTTPS
### Description:
Newly created administrators have trouble accessing pages in the backend.
### Steps To Reproduce:
- Created new administrator from CMS as a Developer with Super User.
- Login with new administrator account.
- Browse to CMS -> Pages
- Clicking on a page name does absolutely nothing.
- Checking any checkbox next to page names (ones for multiple deleting) fixes the problem.
- Inspecting the page shows following errors:
`Uncaught TypeError: Cannot read property 'setActiveItem' of undefined
jquery.min.js?v446:2 jQuery.Deferred exception: Cannot read property 'setActiveItem' of undefined TypeError: Cannot read property 'setActiveItem' of undefined
Uncaught TypeError: Cannot read property 'goTo' of undefined`

| priority | backend pages wont open for new administrators octobercms build php version database engine mysql plugins installed october demo redirecttohttps description newly created administrators have trouble accessing pages in the backend steps to reproduce created new administrator from cms as a developer with super user login with new administrator account browse to cms pages clicking on a page name does absolutely nothing checking any checkbox next to page names ones for multiple deleting fixes the problem inspecting the page shows following errors uncaught typeerror cannot read property setactiveitem of undefined jquery min js jquery deferred exception cannot read property setactiveitem of undefined typeerror cannot read property setactiveitem of undefined uncaught typeerror cannot read property goto of undefined | 1 |
22,121 | 2,645,678,998 | IssuesEvent | 2015-03-13 01:01:34 | prikhi/evoluspencil | https://api.github.com/repos/prikhi/evoluspencil | opened | Option Show 100% for WinXP Progress bar | 2–5 stars imported Priority-Medium Type-Shapes-Enhancement | _From [Chan.Foo...@gmail.com](https://code.google.com/u/110653365426315065746/) on July 09, 2008 11:09:20_
What steps will reproduce the problem? 1. Select WinXP Progress Bar widgets 2. 3. What is the expected output? What do you see instead? There shall be an option to "Focus 100%" as provided in GTK widgets - Progress Bar What version of the product are you using? On what operating system? Pencil Version 1.0 on Windows XP Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=9_ | 1.0 | Option Show 100% for WinXP Progress bar - _From [Chan.Foo...@gmail.com](https://code.google.com/u/110653365426315065746/) on July 09, 2008 11:09:20_
What steps will reproduce the problem? 1. Select WinXP Progress Bar widgets 2. 3. What is the expected output? What do you see instead? There shall be an option to "Focus 100%" as provided in GTK widgets - Progress Bar What version of the product are you using? On what operating system? Pencil Version 1.0 on Windows XP Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=9_ | priority | option show for winxp progress bar from on july what steps will reproduce the problem select winxp progress bar widgets what is the expected output what do you see instead shall has an option to focus as provided in gtk widgets progress bar what version of the product are you using on what operating system pencil version on windows xp please provide any additional information below original issue | 1 |
26,010 | 2,684,097,896 | IssuesEvent | 2015-03-28 17:08:49 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Build output and temp paths are hardcoded in project files. | 1 star bug imported Priority-Medium | _From [j...@jakeonthenet.com](https://code.google.com/u/102440816720852204789/) on June 11, 2012 20:27:40_
Required information! OS version: n/a ConEmu version: source
Far version: source *Bug description* Build paths are hardcoded in the project files, both for temp and output.
This makes building very difficult as every project file needs to be changed to reflect the local environment.
Please follow standard convention and make all output relative to the $(SolutionDir)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=573_ | 1.0 | Build output and temp paths are hardcoded in project files. - _From [j...@jakeonthenet.com](https://code.google.com/u/102440816720852204789/) on June 11, 2012 20:27:40_
Required information! OS version: n/a ConEmu version: source
Far version: source *Bug description* Build paths are hardcoded in the project files, both for temp and output.
This makes building very difficult as every project file needs to be changed to reflect the local environment.
Please follow standard convention and make all output relative to the $(SolutionDir)
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=573_ | priority | build output and temp paths are hardcoded in project files from on june required information os version n a conemu version source far version source bug description build paths are hardcode in the project files both for temp and output this makes building very difficult as every project file needs to be changed to reflect the local environment please follow standard convention and make all output relative to the solutiondir original issue | 1 |
762,635 | 26,725,953,579 | IssuesEvent | 2023-01-29 18:10:28 | scprogramming/Olive | https://api.github.com/repos/scprogramming/Olive | closed | [Configuration]Validate configuration on launch | Medium Priority Backlog Configuration | It would be great to validate configuration changes when the program launches to proactively catch issues | 1.0 | [Configuration]Validate configuration on launch - It would be great to validate configuration changes when the program launches to proactively catch issues | priority | validate configuration on launch it would be great to validate configuration changes when the program launches to proactively catch issues | 1 |
140,169 | 5,398,178,121 | IssuesEvent | 2017-02-27 16:21:38 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Log bundle needs version file | component/vicadmin priority/medium | As a support engineer on VIC, I would like to see a version file in the log bundle so that I can know what version of VIC a customer is using when trying to triage an issue. | 1.0 | Log bundle needs version file - As a support engineer on VIC, I would like to see a version file in the log bundle so that I can know what version of VIC a customer is using when trying to triage an issue. | priority | log bundle needs version file as a support engineer on vic i would like to see a version file in the log bundle so that i can know what version of vic a customer is using when trying to triage an issue | 1 |
57,276 | 3,081,253,533 | IssuesEvent | 2015-08-22 14:45:14 | bitfighter/bitfighter | https://api.github.com/repos/bitfighter/bitfighter | closed | Updater does not always close Bitfighter, causing failure | 019x 020 bug duplicate imported Priority-Medium | _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on January 02, 2015 17:29:50_
Updater does not always close Bitfighter, which will cause the updater to fail. I would suggest we look for a way to add a notice to manually close the game if it doesn't close automatically.
This is more an annoyance rather than a critical error, but I have reproduced this on two different machines.
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=492_ | 1.0 | Updater does not always close Bitfighter, causing failure - _From [watusim...@bitfighter.org](https://code.google.com/u/105427273526970468779/) on January 02, 2015 17:29:50_
Updater does not always close Bitfighter, which will cause the updater to fail. I would suggest we look for a way to add a notice to manually close the game if it doesn't close automatically.
This is more an annoyance rather than a critical error, but I have reproduced this on two different machines.
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=492_ | priority | updater does not always close bitfighter causing failure from on january updater does not always close bitfighter which will cause the updater to fail i would suggest we look for a way to add a notice to manually close the game if it doesn t close automatically this is more an annoyance rather than a critical error but i have reproduced this on two different machines original issue | 1 |
766,167 | 26,873,604,772 | IssuesEvent | 2023-02-04 19:25:17 | belav/csharpier | https://api.github.com/repos/belav/csharpier | closed | More rider files to ignore | priority:medium area:rider | ```
C:\Users\bela\AppData\Local\Temp\SourceGeneratedDocuments\433174804EA40F44BA8B1359\CSharpier.Tests.Generators\CSharpier.Tests.Generators.FormattingTestsGenerator
``` | 1.0 | More rider files to ignore - ```
C:\Users\bela\AppData\Local\Temp\SourceGeneratedDocuments\433174804EA40F44BA8B1359\CSharpier.Tests.Generators\CSharpier.Tests.Generators.FormattingTestsGenerator
``` | priority | more rider files to ignore c users bela appdata local temp sourcegenerateddocuments csharpier tests generators csharpier tests generators formattingtestsgenerator | 1 |
237,400 | 7,759,506,804 | IssuesEvent | 2018-05-31 23:56:15 | minio/minio | https://api.github.com/repos/minio/minio | closed | Ability to create, upload, share and delete folder in ui | priority: medium triage won't fix |
Currently in the minio UI it is not possible to create a folder or upload a folder from the client side. Nor, if you already have a folder in the underlying storage, is there any ability to share or delete it. The UI allows sharing only a file via a link.
## Expected Behavior
It should provide an option in addition to "Upload file, Create bucket" as "Create folder" and a user should be able to upload folder using drag and drop to create a folder in the current bucket. Also folder should have a the same shared link and delete option similar to files.
## Current Behavior
It is not possible to perform the above-mentioned tasks from the UI. I can upload a file using drag and drop, but a folder cannot be uploaded using drag and drop.
## Context
I am trying to use minio to share my data with colleagues and having ability to share subfolders will be really useful. Currently I have to create the tar file of folder to share the content.
## Your Environment
* Version used (`minio version`): minio/minio:RELEASE.2018-05-16T23-35-33Z
* Environment name and version (e.g. nginx 1.9.1): Docker
* Operating System and version (`uname -a`): Linux test 4.4.0-119-generic #143-Ubuntu SMP
| 1.0 | Ability to create, upload, share and delete folder in ui - <!--- Provide a general summary of the issue in the Title above -->
Currently in the minio UI it is not possible to create a folder or upload folder from client side using UI. Neither if you already have folder in the underlying storage ability to share or delete it. UI allows sharing only a file via link.
## Expected Behavior
<!--- If you're suggesting a change/improvement, tell us how it should work -->
It should provide an option in addition to "Upload file, Create bucket" as "Create folder" and a user should be able to upload folder using drag and drop to create a folder in the current bucket. Also folder should have a the same shared link and delete option similar to files.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
It is not possible to perform above mentioned tasks from UI. I can upload file using drag and drop but folder is not possible to upload using drag and drop.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I am trying to use minio to share my data with colleagues and having ability to share subfolders will be really useful. Currently I have to create the tar file of folder to share the content.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`): minio/minio:RELEASE.2018-05-16T23-35-33Z
* Environment name and version (e.g. nginx 1.9.1): Docker
* Operating System and version (`uname -a`): Linux test 4.4.0-119-generic #143-Ubuntu SMP
| priority | ability to create upload share and delete folder in ui currently in the minio ui it is not possible to create a folder or upload folder from client side using ui neither if you already have folder in the underlying storage ability to share or delete it ui allows sharing only a file via link expected behavior it should provide an option in addition to upload file create bucket as create folder and a user should be able to upload folder using drag and drop to create a folder in the current bucket also folder should have a the same shared link and delete option similar to files current behavior it is not possible to perform above mentioned tasks from ui i can upload file using drag and drop but folder is not possible to upload using drag and drop context i am trying to use minio to share my data with colleagues and having ability to share subfolders will be really useful currently i have to create the tar file of folder to share the content your environment version used minio version minio minio release environment name and version e g nginx docker operating system and version uname a linux test generic ubuntu smp | 1 |
227,806 | 7,543,110,715 | IssuesEvent | 2018-04-17 14:42:57 | losol/EventManagement | https://api.github.com/repos/losol/EventManagement | closed | Previously ordered products should be shown on update order modal | complexity:low has pr priority:medium | As a course admin I can see previously ordered products and add or remove them on the orders page so that I do not double order a product.
Right now the update order modal only shows empty checkboxes. | 1.0 | Previously ordered products should be shown on update order modal - As a course admin I can see previously ordered products and add or remove them on the orders page so that I do not double order a product.
Right now the update order modal only shows empty checkboxes. | priority | previously ordered products should be shown on update order modal as a course admin i can see previously ordered products and add remove them on the orders page so that i do not double order a product pr now the update order modal only shows empty checkboxes | 1 |
287,412 | 8,813,237,989 | IssuesEvent | 2018-12-28 19:06:05 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | AD Source: ServicePrincipalName missing LDAP attribute from the default list | Priority: Medium Type: Bug | <img width="739" alt="screen shot 2018-12-28 at 2 05 06 pm" src="https://user-images.githubusercontent.com/5261214/50525485-ad044600-0aa9-11e9-9e71-30090a8c0063.png">
| 1.0 | AD Source: ServicePrincipalName missing LDAP attribute from the default list - <img width="739" alt="screen shot 2018-12-28 at 2 05 06 pm" src="https://user-images.githubusercontent.com/5261214/50525485-ad044600-0aa9-11e9-9e71-30090a8c0063.png">
| priority | ad source serviceprincipalname missing ldap attribute from the default list img width alt screen shot at pm src | 1 |
802,922 | 29,058,516,764 | IssuesEvent | 2023-05-15 01:54:20 | masastack/MASA.Stack.Components | https://api.github.com/repos/masastack/MASA.Stack.Components | closed | Toggle the team drop-down box. The selected item shows an error | type/bug severity/medium status/resolved site/staging priority/p2 |
When switching the team drop-down box, the selected item is displayed incorrectly.
https://user-images.githubusercontent.com/95004531/231428559-8775fffb-1d29-4963-8964-4d51cb1ac4ec.mp4
| 1.0 | Toggle the team drop-down box. The selected item shows an error -
When switching the team drop-down box, the selected item is displayed incorrectly.
https://user-images.githubusercontent.com/95004531/231428559-8775fffb-1d29-4963-8964-4d51cb1ac4ec.mp4
| priority | toggle the team drop down box the selected item shows an error when switching the team drop down box the selected item is displayed incorrectly | 1 |
307,042 | 9,414,139,438 | IssuesEvent | 2019-04-10 09:27:45 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | "Public Crafting Stations" layout broken | Medium Priority | (at Economy Viewer)
Just try to resize window

| 1.0 | "Public Crafting Stations" layout broken - (at Economy Viewer)
Just try to resize window

| priority | public crafting stations layout broken at economy viewer just try to resize window | 1 |
831,140 | 32,039,120,223 | IssuesEvent | 2023-09-22 17:44:29 | rstudio/gt | https://api.github.com/repos/rstudio/gt | reopened | render title/footnote/source_note as multiple lines in Word | Difficulty: [3] Advanced Effort: [2] Medium Priority: [3] High Type: ★ Enhancement Focus: Pharma | ## Proposal
Please allow this: render title/footnote/source_note as multiple lines in Word. This is a critical issue since most regulatory documents are in Word.
While we can render title/footnote/source_note as multiple lines in HTML, we can't do it in Word. The following will fail in Word.
tab_source_note(source_note = "source note test ^lline2 \\nline3 <br>Line 4
Line5
Line6")

| 1.0 | render title/footnote/source_note as multiple lines in Word - ## Proposal
Please allow this: render title/footnote/source_note as multiple lines in Word. This is a critical issue since most regulatory documents are in Word.
While we can render title/footnote/source_note as multiple lines in HTML, we can't do it in Word. The following will fail in Word.
tab_source_note(source_note = "source note test ^lline2 \\nline3 <br>Line 4
Line5
Line6")

| priority | render title footnote source note as multiple lines in word proposal please allow this render title footnote source note as multiple lines in word this is a critical issue since most regulatory documents are in word while we can render title footnote source note as multiple lines in html we can t do it in word the following will fail in word tab source note source note source note test line | 1 |
248,132 | 7,927,752,418 | IssuesEvent | 2018-07-06 09:09:19 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | Get contract requests - improve response | BE epic/contracts kind/bug priority/medium project/phase3 status/todo | Improve response according to spec
Service - Get contract request
1. object urgent
**Actual**
```
{
"urgent": [
{
"url": "https://storage.go....",
"type": "additional_document"
},
{
"url": "https://storage.go....",
"type": "contract_request_statute"
}
]
}
```
**Expected**
```
{
"urgent": {
"documents": [
{
"type": "signed_content",
"url": "https://storage.ehealth.world"
}
]
}
}
```
2. external contractors
**Actual**
```
{
"external_contractors": [
{
"legal_entity_id": "942db94b-32e3-496e-9610-1e3b106a74f4",
"divisions": [
{
"medical_service": "Послуга ПМД",
"id": "b302a95c-6c3b-43d1-a106-c8de4b19f443"
},
{
"medical_service": "Послуга ПМД",
"id": "e40bea41-31ca-410e-8319-838787f55d12"
}
],
"contract": {
"number": "1512351235",
"issued_at": "2017-01-01",
"expires_at": "2020-01-01"
}
}
]
}
```
**Expected**
```
{
"external_contractors": [
{
"legal_entity": {
"id": "b075f148-7f93-4fc2-b2ec-2d81b19a9b7b",
"name": "Клініка Ноунейм"
},
"contract": {
"number": "1234567",
"issued_at": "2018-01-01",
"expires_at": "2019-01-01"
},
"divisions": [
{
"id": "2922a240-63db-404e-b730-09222bfeb2dd",
"name": "Бориспільське відділення Клініки Ноунейм",
"medical_service": "Послуга ПМД"
}
]
}
]
}
```
child of #2367 | 1.0 | Get contract requests - improve response - Improve response according to spec
Service - Get contract request
1. object urgent
**Actual**
```
{
"urgent": [
{
"url": "https://storage.go....",
"type": "additional_document"
},
{
"url": "https://storage.go....",
"type": "contract_request_statute"
}
]
}
```
**Expected**
```
{
"urgent": {
"documents": [
{
"type": "signed_content",
"url": "https://storage.ehealth.world"
}
]
}
}
```
2. external contractors
**Actual**
```
{
"external_contractors": [
{
"legal_entity_id": "942db94b-32e3-496e-9610-1e3b106a74f4",
"divisions": [
{
"medical_service": "Послуга ПМД",
"id": "b302a95c-6c3b-43d1-a106-c8de4b19f443"
},
{
"medical_service": "Послуга ПМД",
"id": "e40bea41-31ca-410e-8319-838787f55d12"
}
],
"contract": {
"number": "1512351235",
"issued_at": "2017-01-01",
"expires_at": "2020-01-01"
}
}
]
}
```
**Expected**
```
{
"external_contractors": [
{
"legal_entity": {
"id": "b075f148-7f93-4fc2-b2ec-2d81b19a9b7b",
"name": "Клініка Ноунейм"
},
"contract": {
"number": "1234567",
"issued_at": "2018-01-01",
"expires_at": "2019-01-01"
},
"divisions": [
{
"id": "2922a240-63db-404e-b730-09222bfeb2dd",
"name": "Бориспільське відділення Клініки Ноунейм",
"medical_service": "Послуга ПМД"
}
]
}
]
}
```
child of #2367 | priority | get contract requests improve response improve response according to spec service get contract request object urgent actual urgent url type additional document url type contract request statute expected urgent documents type signed content url external contractors actual external contractors legal entity id divisions medical service послуга пмд id medical service послуга пмд id contract number issued at expires at expected external contractors legal entity id name клініка ноунейм contract number issued at expires at divisions id name бориспільське відділення клініки ноунейм medical service послуга пмд child of | 1 |
404,957 | 11,865,119,300 | IssuesEvent | 2020-03-25 23:23:49 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | Intake: Attorney/Agent Fee Tech Spec | Epic Priority: Medium Product: caseflow-intake Stakeholder: BVA Stakeholder: VBA Team: Foxtrot 🦊 Type: Tech-Spec | BVA and VBA require Caseflow to support the ability for Attorneys and Agents to appeal the fee associated with granted benefits to those they represent. All detail for these appeals must be hidden from veterans, other claimants and appellants. This requires a review of intake workflows to determine and document all touch points for where information related to these appeals or the appeals themselves are surfaced in the the UI today.
### AC
1. Review Epic issue for additional context.
2. Write tech spec detailing solution scope. | 1.0 | Intake: Attorney/Agent Fee Tech Spec - BVA and VBA require Caseflow to support the ability for Attorneys and Agents to appeal the fee associated with granted benefits to those they represent. All detail for these appeals must be hidden from veterans, other claimants and appellants. This requires a review of intake workflows to determine and document all touch points for where information related to these appeals or the appeals themselves are surfaced in the the UI today.
### AC
1. Review Epic issue for additional context.
2. Write tech spec detailing solution scope. | priority | intake attorney agent fee tech spec bva and vba require caseflow to support the ability for attorneys and agents to appeal the fee associated with granted benefits to those they represent all detail for these appeals must be hidden from veterans other claimants and appellants this requires a review of intake workflows to determine and document all touch points for where information related to these appeals or the appeals themselves are surfaced in the the ui today ac review epic issue for additional context write tech spec detailing solution scope | 1 |
57,629 | 3,083,237,015 | IssuesEvent | 2015-08-24 07:30:13 | magro/memcached-session-manager | https://api.github.com/repos/magro/memcached-session-manager | closed | CouchbaseClient Dependency not listed in documentation? | imported Priority-Medium Type-Other | _From [mark.con...@gmail.com](https://code.google.com/u/114941079499385564480/) on March 20, 2013 20:47:32_
<b>What steps will reproduce the problem?</b>
1. followed installation documentation and installed the following jars:
memcached-session-manager-tc6-1.6.4.jar
memcached-session-manager-1.6.4.jar
spymemcached-2.7.3.jar
as well as added Manager tag to context.xml
What is the expected output?
expected tomcat6 to start
What do you see instead?
tomcat6 fails on startup due to
java.lang.NoClassDefFoundError: com/couchbase/client/CouchbaseClient
<b>What version of the product are you using? On what operating system?</b>
memcached-session-manager 1.6.4
ubuntu 10.04
tomcat6
java version 1.6.0_41
<b>Please provide any additional information below.</b>
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=158_ | 1.0 | CouchbaseClient Dependency not listed in documentation? - _From [mark.con...@gmail.com](https://code.google.com/u/114941079499385564480/) on March 20, 2013 20:47:32_
<b>What steps will reproduce the problem?</b>
1. followed installation documentation and installed the following jars:
memcached-session-manager-tc6-1.6.4.jar
memcached-session-manager-1.6.4.jar
spymemcached-2.7.3.jar
as well as added Manager tag to context.xml
What is the expected output?
expected tomcat6 to start
What do you see instead?
tomcat6 fails on startup due to
java.lang.NoClassDefFoundError: com/couchbase/client/CouchbaseClient
<b>What version of the product are you using? On what operating system?</b>
memcached-session-manager 1.6.4
ubuntu 10.04
tomcat6
java version 1.6.0_41
<b>Please provide any additional information below.</b>
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=158_ | priority | couchbaseclient dependency not listed in documentation from on march what steps will reproduce the problem followed installation documentation and installed the following jars memcached session manager jar memcached session manager jar spymemcached jar as well as added manager tag to context xml what is the expected output expected to start what do you see instead fails on startup due to java lang noclassdeffounderror com couchbase client couchbaseclient what version of the product are you using on what operating system memcached session manager ubuntu java version please provide any additional information below original issue | 1 |
528,994 | 15,378,627,722 | IssuesEvent | 2021-03-02 18:33:55 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Search using multiple keywords in Activity Posts | feature: enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
When searching in the WordPress Posts, you can use multiple keywords in any order, and only posts that contain all of those keywords will be included in the search result. But when using multiple keywords when searching for BuddyBoss Activity, it is treated as a single word or phrase causing some activities not to be included in the result.
**To reproduce**
1. Create a WordPress post. For this example, we will use the keywords "sample" and "BuddyBoss". https://i.ibb.co/Mcx7bY7/1.jpg
2. Try to search "sample BuddyBoss" in the Posts and notice that it will return the post we just created. https://i.ibb.co/ZzjjCQC/2.jpg
3. Go to BuddyBoss > Activity. You should be able to see the activity for the new blog post that contains the same keywords. https://i.ibb.co/j8MGPq3/3.jpg
4. Try to search "sample BuddyBoss" in the Activity and notice that it will not return any activity posts. https://i.ibb.co/jW3t9Lv/4.jpg
**Support ticket links**
https://secure.helpscout.net/conversation/1433107177/126540
| 1.0 | Search using multiple keywords in Activity Posts - **Is your feature request related to a problem? Please describe.**
When searching in the WordPress Posts, you can use multiple keywords in any order, and only posts that contain all of those keywords will be included in the search result. But when using multiple keywords when searching for BuddyBoss Activity, it is treated as a single word or phrase causing some activities not to be included in the result.
**To reproduce**
1. Create a WordPress post. For this example, we will use the keywords "sample" and "BuddyBoss". https://i.ibb.co/Mcx7bY7/1.jpg
2. Try to search "sample BuddyBoss" in the Posts and notice that it will return the post we just created. https://i.ibb.co/ZzjjCQC/2.jpg
3. Go to BuddyBoss > Activity. You should be able to see the activity for the new blog post that contains the same keywords. https://i.ibb.co/j8MGPq3/3.jpg
4. Try to search "sample BuddyBoss" in the Activity and notice that it will not return any activity posts. https://i.ibb.co/jW3t9Lv/4.jpg
**Support ticket links**
https://secure.helpscout.net/conversation/1433107177/126540
| priority | search using multiple keywords in activity posts is your feature request related to a problem please describe when searching in the wordpress posts you can use multiple keywords in any order and only posts that contain all of those keywords will be included in the search result but when using multiple keywords when searching for buddyboss activity it is treated as a single word or phrase causing some activities not to be included in the result to reproduce create a wordpress post for this example we will use the keywords sample and buddyboss try to search sample buddyboss in the posts and notice that it will return the post we just created go to buddyboss activity you should be able to see the activity for the new blog post that contains the same keywords try to search sample buddyboss in the activity and notice that it will not return any activity posts support ticket links | 1 |
661,355 | 22,050,501,679 | IssuesEvent | 2022-05-30 08:14:02 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | webconnectivity: introduce DNS over UDP resolver | enhancement priority/medium platform/android methodology data quality ooni/probe-engine | In https://github.com/ooni/probe/issues/2029#issuecomment-1140258729, I explained why the result returned by Android's `getaddrinfo` cannot be trusted unless the call is successful. To summarize the matter, Android's `getaddrinfo` is a proxy that calls a DNS lookup service (in most if not all cases). Unfortunately, as part of the call, the actual return code is lost and only three error codes survive: `0`, which means success; `EAI_NODATA`, which means any failure inside the DNS lookup service including `NXDOMAIN` but also `Refused`; `EAI_SYSTEM`, which means that the call to the DNS lookup service failed. In the latter case, the Bionic libc code runs the original DNS lookup code from NetBSD. However, I would argue that the DNS lookup service being up and running is the common configuration. As a result, on Android, the result of `getaddrinfo` is not very informative. To complement calling `getaddrinfo`, we should then also use a DNS over UDP resolver. Because using such an extra resolver would be useful in general, and because otherwise it would be more complex to compare measurements, I think we should introduce this extra call for all webconnectivity users (as opposed to doing this only for Android). | 1.0 | webconnectivity: introduce DNS over UDP resolver - In https://github.com/ooni/probe/issues/2029#issuecomment-1140258729, I explained why the result returned by Android's `getaddrinfo` cannot be trusted unless the call is successful. To summarize the matter, Android's `getaddrinfo` is a proxy that calls a DNS lookup service (in most if not all cases). 
Unfortunately, as part of the call, the actual return code is lost and only three error codes survive: `0`, which means success; `EAI_NODATA`, which means any failure inside the DNS lookup service including `NXDOMAIN` but also `Refused`; `EAI_SYSTEM`, which means that the call to the DNS lookup service failed. In the latter case, the Bionic libc code runs the original DNS lookup code from NetBSD. However, I would argue that the DNS lookup service being up and running is the common configuration. As a result, on Android, the result of `getaddrinfo` is not very informative. To complement calling `getaddrinfo`, we should then also use a DNS over UDP resolver. Because using such an extra resolver would be useful in general, and because otherwise it would be more complex to compare measurements, I think we should introduce this extra call for all webconnectivity users (as opposed to doing this only for Android). | priority | webconnectivity introduce dns over udp resolver in i explained why the result returned by android s getaddrinfo cannot be trusted unless the call is successful to summarize the matter android s getaddrinfo is a proxy that calls a dns lookup service in most if not all cases unfortunately as part of the call the actual return code is lost and only three error codes survive which means success eai nodata which means any failure inside the dns lookup service including nxdomain but also refused eai system which means that the call to the dns lookup service failed in the latter case the bionic libc code runs the original dns lookup code from netbsd however i would argue that the dns lookup service being up and running is the common configuration as a result on android the result of getaddrinfo is not very informative to complement calling getaddrinfo we should then also use a dns over udp resolver because using such an extra resolver would be useful in general and because otherwise it would be more complex to compare measurements i think we should 
introduce this extra call for all webconnectivity users as opposed to doing this only for android | 1 |
611,954 | 18,985,572,264 | IssuesEvent | 2021-11-21 17:01:28 | dehy/foodcoop-mobile-app | https://api.github.com/repos/dehy/foodcoop-mobile-app | closed | Clean up the Scanner classes | Priority: Medium Status: In Progress Type: Refactoring | Initially, the Scanner.ts class handled both the scanner-device logic and the inventory logic.
With the refactoring, a new class was created (Scanner2) that handles only the scanning part. It is in fact a "Component" class that can be embedded in other screens.
Separate classes handle the scanner-device and inventory logic. | 1.0 | Clean up the Scanner classes - Initially, the Scanner.ts class handled both the scanner-device logic and the inventory logic.
With the refactoring, a new class was created (Scanner2) that handles only the scanning part. It is in fact a "Component" class that can be embedded in other screens.
Separate classes handle the scanner-device and inventory logic. | priority | clean up the scanner classes initially the scanner ts class handled both the scanner device logic and the inventory logic with the refactoring a new class was created that handles only the scanning part it is in fact a component class that can be embedded in other screens separate classes handle the scanner device and inventory logic | 1 |
489,705 | 14,111,565,381 | IssuesEvent | 2020-11-07 00:43:48 | vanjarosoftware/Vanjaro.Platform | https://api.github.com/repos/vanjarosoftware/Vanjaro.Platform | closed | Remove Filter from Extensions App | Area: Backend Enhancement Priority: Medium Release: Patch | Extensions App to show all extensions. Update icons for module, provider, auth, and generic extension. | 1.0 | Remove Filter from Extensions App - Extensions App to show all extensions. Update icons for module, provider, auth, and generic extension. | priority | remove filter from extensions app extensions app to show all extensions update icons for module provider auth and generic extension | 1 |
404,014 | 11,850,929,471 | IssuesEvent | 2020-03-24 17:21:03 | boston-library/curator | https://api.github.com/repos/boston-library/curator | opened | seed and validate ControlledTerms::Language values using BPLDC Authority API | authorities priority: medium | Instances of `ControlledTerms::Language` have a limited set of values.
This data is available in DC3/Curator-friendly format via the [BPLDC Authority API](https://github.com/boston-library/bpldc_authority_api) app, see [documentation](https://github.com/boston-library/bpldc_authority_api/wiki/Nomenclature-controlled-values).
We should seed the db with these values using the API, and validate submitted objects against the list. | 1.0 | seed and validate ControlledTerms::Language values using BPLDC Authority API - Instances of `ControlledTerms::Language` have a limited set of values.
This data is available in DC3/Curator-friendly format via the [BPLDC Authority API](https://github.com/boston-library/bpldc_authority_api) app, see [documentation](https://github.com/boston-library/bpldc_authority_api/wiki/Nomenclature-controlled-values).
We should seed the db with these values using the API, and validate submitted objects against the list. | priority | seed and validate controlledterms language values using bpldc authority api instances of controlledterms language have a limited set of values this data is available in curator friendly format via the app see we should seed the db with these values using the api and validate submitted objects against the list | 1 |