| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
140,696 | 5,414,595,379 | IssuesEvent | 2017-03-01 19:28:24 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | VIC container shim supervisor for containerd. | area/containerd kind/investigation priority/medium | As an engineer, I need to design and implement a POC shim layer for containerd in order to enable containerd to work VIC containers.
This is purely a research work that at best would have a prototype that can create and run VIC container from CLI. I would not expect containerd to be able to use it, however, these are just first steps.
Acceptance criteria
1. Prototype of runc analogue for VIC
2. List of missing things and problems which may arise on VIC side to accommodate containerd shim interface requirements. | 1.0 | VIC container shim supervisor for containerd. - As an engineer, I need to design and implement a POC shim layer for containerd in order to enable containerd to work VIC containers.
This is purely a research work that at best would have a prototype that can create and run VIC container from CLI. I would not expect containerd to be able to use it, however, these are just first steps.
Acceptance criteria
1. Prototype of runc analogue for VIC
2. List of missing things and problems which may arise on VIC side to accommodate containerd shim interface requirements. | priority | vic container shim supervisor for containerd as an engineer i need to design and implement a poc shim layer for containerd in order to enable containerd to work vic containers this is purely a research work that at best would have a prototype that can create and run vic container from cli i would not expect containerd to be able to use it however these are just first steps acceptance criteria prototype of runc analogue for vic list of missing things and problems which may arise on vic side to accommodate containerd shim interface requirements | 1 |
470,478 | 13,538,543,710 | IssuesEvent | 2020-09-16 12:15:47 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | packetfence-pki: add support for SCEP | Priority: Medium Type: Feature / Enhancement | **Is your feature request related to a problem? Please describe.**
When you use `packetfence-pki` to generate RADIUS certificates for EAP-TLS, you may want to configure a [MDM](https://en.wikipedia.org/wiki/Mobile_device_management) software on nodes to automatically get a certificate from `packetfence-pki`.
**Describe the solution you'd like**
Be able to automatically get a client certificate from PacketFence PKI using SCEP protocol. Perhaps [ACME](https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment) can be used too.
| 1.0 | packetfence-pki: add support for SCEP - **Is your feature request related to a problem? Please describe.**
When you use `packetfence-pki` to generate RADIUS certificates for EAP-TLS, you may want to configure a [MDM](https://en.wikipedia.org/wiki/Mobile_device_management) software on nodes to automatically get a certificate from `packetfence-pki`.
**Describe the solution you'd like**
Be able to automatically get a client certificate from PacketFence PKI using SCEP protocol. Perhaps [ACME](https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment) can be used too.
| priority | packetfence pki add support for scep is your feature request related to a problem please describe when you use packetfence pki to generate radius certificates for eap tls you may want to configure a software on nodes to automatically get a certificate from packetfence pki describe the solution you d like be able to automatically get a client certificate from packetfence pki using scep protocol perhaps can be used too | 1 |
523,102 | 15,172,897,289 | IssuesEvent | 2021-02-13 11:34:23 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Please restore "Replace Page From a Template" functionality | Area: AE > PersonaBar Ext > Pages.Web Effort: High Priority: Medium Status: Ready for Development Type: Enhancement stale |
## Description of problem
In DNN 8 this was an option under the Pages admin menu. It allowed you to replace an existing page using a template. I think that this has disappeared in DNN 9. I wasn't able to find it in DNN 9.2.1.
## Description of solution
I'd like the old functionality to return
## Description of alternatives considered
Can always delete a page and re-create it from a template, but that doesn't include all of the possibilities available before. And, the old way was very easy to do.
## Additional context
Old functionality
## Screenshots
## Additional context
## Affected version
<!-- Check all that apply and add more if necessary -->
* [x] 9.2.2
* [x] 9.2.1
* [x] 9.2
* [x] 9.1.1
* [x] 9.1
* [x] 9.0
## Affected browser
It's not a browser issue.
* [ ] Chrome
* [ ] Firefox
* [ ] Safari
* [ ] Internet Explorer
* [ ] Edge
| 1.0 | Please restore "Replace Page From a Template" functionality -
## Description of problem
In DNN 8 this was an option under the Pages admin menu. It allowed you to replace an existing page using a template. I think that this has disappeared in DNN 9. I wasn't able to find it in DNN 9.2.1.
## Description of solution
I'd like the old functionality to return
## Description of alternatives considered
Can always delete a page and re-create it from a template, but that doesn't include all of the possibilities available before. And, the old way was very easy to do.
## Additional context
Old functionality
## Screenshots
## Additional context
## Affected version
<!-- Check all that apply and add more if necessary -->
* [x] 9.2.2
* [x] 9.2.1
* [x] 9.2
* [x] 9.1.1
* [x] 9.1
* [x] 9.0
## Affected browser
It's not a browser issue.
* [ ] Chrome
* [ ] Firefox
* [ ] Safari
* [ ] Internet Explorer
* [ ] Edge
| priority | please restore replace page from a template functionality description of problem in dnn this was an option under the pages admin menu it allowed you to replace an existing page using a template i think that this has disappeared in dnn i wasn t able to find it in dnn description of solution i d like the old functionality to return description of alternatives considered can always delete a page and re create it from a template but that doesn t include all of the possibilities available before and the old way was very easy to do additional context old functionality screenshots additional context affected version affected browser it s not a browser issue chrome firefox safari internet explorer edge | 1 |
389,041 | 11,496,582,174 | IssuesEvent | 2020-02-12 08:18:54 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :207985] Argument cannot be negative in subsys/net/lib/websocket/websocket.c | Coverity area: Networking bug priority: medium |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/a3e89e84a801d9bc048b0ee2177f0fb11d1a925a/subsys/net/lib/websocket/websocket.c#L789
Category: Memory - corruptions
Function: `websocket_recv_msg`
Component: Networking
CID: [207985](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=207985)
Details:
```
697 #else
698 ret = recv(ctx->real_sock, &ctx->tmp_buf[ctx->tmp_buf_pos],
699 ctx->tmp_buf_len - ctx->tmp_buf_pos,
700 timeout == K_NO_WAIT ? MSG_DONTWAIT : 0);
701 #endif /* CONFIG_NET_TEST */
702
>>> CID 207985: (REVERSE_NEGATIVE)
>>> You might be using variable "ret" before verifying that it is >= 0.
703 if (ret < 0) {
704 return -errno;
705 }
706
707 if (ret == 0) {
708 /* Socket closed */
783 ret = input_len;
784 #else
785 ret = recv(ctx->real_sock, ctx->tmp_buf, ctx->tmp_buf_len,
786 timeout == K_NO_WAIT ? MSG_DONTWAIT : 0);
787 #endif /* CONFIG_NET_TEST */
788
>>> CID 207985: (REVERSE_NEGATIVE)
>>> You might be using variable "ret" before verifying that it is >= 0.
789 if (ret < 0) {
790 return -errno;
791 }
792
793 if (ret == 0) {
794 return 0;
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID :207985] Argument cannot be negative in subsys/net/lib/websocket/websocket.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/a3e89e84a801d9bc048b0ee2177f0fb11d1a925a/subsys/net/lib/websocket/websocket.c#L789
Category: Memory - corruptions
Function: `websocket_recv_msg`
Component: Networking
CID: [207985](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=207985)
Details:
```
697 #else
698 ret = recv(ctx->real_sock, &ctx->tmp_buf[ctx->tmp_buf_pos],
699 ctx->tmp_buf_len - ctx->tmp_buf_pos,
700 timeout == K_NO_WAIT ? MSG_DONTWAIT : 0);
701 #endif /* CONFIG_NET_TEST */
702
>>> CID 207985: (REVERSE_NEGATIVE)
>>> You might be using variable "ret" before verifying that it is >= 0.
703 if (ret < 0) {
704 return -errno;
705 }
706
707 if (ret == 0) {
708 /* Socket closed */
783 ret = input_len;
784 #else
785 ret = recv(ctx->real_sock, ctx->tmp_buf, ctx->tmp_buf_len,
786 timeout == K_NO_WAIT ? MSG_DONTWAIT : 0);
787 #endif /* CONFIG_NET_TEST */
788
>>> CID 207985: (REVERSE_NEGATIVE)
>>> You might be using variable "ret" before verifying that it is >= 0.
789 if (ret < 0) {
790 return -errno;
791 }
792
793 if (ret == 0) {
794 return 0;
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | argument cannot be negative in subsys net lib websocket websocket c static code scan issues found in file category memory corruptions function websocket recv msg component networking cid details else ret recv ctx real sock ctx tmp buf ctx tmp buf len ctx tmp buf pos timeout k no wait msg dontwait endif config net test cid reverse negative you might be using variable ret before verifying that it is if ret return errno if ret socket closed ret input len else ret recv ctx real sock ctx tmp buf ctx tmp buf len timeout k no wait msg dontwait endif config net test cid reverse negative you might be using variable ret before verifying that it is if ret return errno if ret return please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
88,467 | 3,778,043,988 | IssuesEvent | 2016-03-17 22:22:30 | Fermat-ORG/fermat-org | https://api.github.com/repos/Fermat-ORG/fermat-org | closed | Save the correct value for super layer | Priority: MEDIUM server | We thought that this was not serious, but it was, and a lot. If the value is "false" instead of `false`, then the client code thought it was an actual super layer, thus bubbling up many bugs, I temporarily fixed it by converting to `false` if the name was actually "false". | 1.0 | Save the correct value for super layer - We thought that this was not serious, but it was, and a lot. If the value is "false" instead of `false`, then the client code thought it was an actual super layer, thus bubbling up many bugs, I temporarily fixed it by converting to `false` if the name was actually "false". | priority | save the correct value for super layer we thought that this was not serious but it was and a lot if the value is false instead of false then the client code thought it was an actual super layer thus bubbling up many bugs i temporarily fixed it by converting to false if the name was actually false | 1 |
622,680 | 19,653,844,145 | IssuesEvent | 2022-01-10 10:22:47 | debops/debops | https://api.github.com/repos/debops/debops | closed | debops script doesn't seem to use proper exit statuses | bug priority: medium tag: DebOps script | ```
$ debops run bootstrap -l vmtest1 -e ansible_user=root
Executing Ansible playbooks:
bootstrap
ERROR! the playbook: bootstrap could not be found
$ echo $?
0
```
I'd have expected any exit value except 0 here... | 1.0 | debops script doesn't seem to use proper exit statuses - ```
$ debops run bootstrap -l vmtest1 -e ansible_user=root
Executing Ansible playbooks:
bootstrap
ERROR! the playbook: bootstrap could not be found
$ echo $?
0
```
I'd have expected any exit value except 0 here... | priority | debops script doesn t seem to use proper exit statuses debops run bootstrap l e ansible user root executing ansible playbooks bootstrap error the playbook bootstrap could not be found echo i d have expected any exit value except here | 1 |
3,501 | 2,538,569,580 | IssuesEvent | 2015-01-27 08:20:19 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | structs should be trees | 1 star enhancement imported Priority-Medium | _From [fra...@gmail.com](https://code.google.com/u/108596877348066494139/) on February 01, 2011 11:03:51_
Since Structs are Trees, they should inherit the from them. This will allow prooftool (which displays Trees) to display them.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=106_ | 1.0 | structs should be trees - _From [fra...@gmail.com](https://code.google.com/u/108596877348066494139/) on February 01, 2011 11:03:51_
Since Structs are Trees, they should inherit the from them. This will allow prooftool (which displays Trees) to display them.
_Original issue: http://code.google.com/p/gapt/issues/detail?id=106_ | priority | structs should be trees from on february since structs are trees they should inherit the from them this will allow prooftool which displays trees to display them original issue | 1 |
368,780 | 10,884,451,551 | IssuesEvent | 2019-11-18 08:18:54 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1250] Civics: wages without source account | Fixed Medium Priority | Currently we can create Elected Title with Wages that doesn't have pointed source Account.

Wage payment process will fail | 1.0 | [0.9.0 staging-1250] Civics: wages without source account - Currently we can create Elected Title with Wages that doesn't have pointed source Account.

Wage payment process will fail | priority | civics wages without source account currently we can create elected title with wages that doesn t have pointed source account wage payment process will fail | 1 |
584,201 | 17,408,458,609 | IssuesEvent | 2021-08-03 09:11:41 | dataware-tools/dataware-tools | https://api.github.com/repos/dataware-tools/dataware-tools | closed | [Data-browser] Show formatted record info | kind/feature priority/medium wg/web-app | ## Purpose
- feature request
## Description
- Add function to format record information depend on pydtk.config.dtype
| 1.0 | [Data-browser] Show formatted record info - ## Purpose
- feature request
## Description
- Add function to format record information depend on pydtk.config.dtype
| priority | show formatted record info purpose feature request description add function to format record information depend on pydtk config dtype | 1 |
26,430 | 2,684,493,654 | IssuesEvent | 2015-03-29 01:36:45 | gtcasl/gpuocelot | https://api.github.com/repos/gtcasl/gpuocelot | closed | 2-Element Vectors of floats are broken in the llvm backend on 32-bit platforms | bug imported Priority-Medium | _From [SolusStu...@gmail.com](https://code.google.com/u/100974457117804684489/) on February 20, 2010 15:15:08_
What steps will reproduce the problem? See this bug report from llvm: http://hlvm.llvm.org/bugs/show_bug.cgi?id=3287 What is the expected output? What do you see instead? Loads to 2-element vectors of floats randomly produce nan values. What version of the product are you using? On what operating system? 32-bit platforms using LLVM.
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=39_ | 1.0 | 2-Element Vectors of floats are broken in the llvm backend on 32-bit platforms - _From [SolusStu...@gmail.com](https://code.google.com/u/100974457117804684489/) on February 20, 2010 15:15:08_
What steps will reproduce the problem? See this bug report from llvm: http://hlvm.llvm.org/bugs/show_bug.cgi?id=3287 What is the expected output? What do you see instead? Loads to 2-element vectors of floats randomly produce nan values. What version of the product are you using? On what operating system? 32-bit platforms using LLVM.
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=39_ | priority | element vectors of floats are broken in the llvm backend on bit platforms from on february what steps will reproduce the problem see this bug report from llvm what is the expected output what do you see instead loads to element vectors of floats randomly produce nan values what version of the product are you using on what operating system bit platforms using llvm original issue | 1 |
67,215 | 3,267,221,263 | IssuesEvent | 2015-10-23 01:27:10 | TheLens/elections | https://api.github.com/repos/TheLens/elections | closed | Change footer language on table | Bug Medium priority | Now says: View all candidate results
Change to: View all candidates
Similar on the hide text. Say: Show top candidates | 1.0 | Change footer language on table - Now says: View all candidate results
Change to: View all candidates
Similar on the hide text. Say: Show top candidates | priority | change footer language on table now says view all candidate results change to view all candidates similar on the hide text say show top candidates | 1 |
77,479 | 3,506,395,641 | IssuesEvent | 2016-01-08 06:26:49 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | MessageChat logs spam (BB #524) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 04.03.2014 05:40:23 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/524
<hr>
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 100)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90)
04-03-14 06:32
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 92)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 84)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 98)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 92)
04-03-14 06:32
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 84)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 98)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90) | 1.0 | MessageChat logs spam (BB #524) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 04.03.2014 05:40:23 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/524
<hr>
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 100)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90)
04-03-14 06:32
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 92)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 84)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 98)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 92)
04-03-14 06:32
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 84)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 98)
SESSION: opcode CMSG_MESSAGECHAT (0x0095) has unprocessed tail data (read stop at 8 from 90) | priority | messagechat logs spam bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state resolved direct link session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from session opcode cmsg messagechat has unprocessed tail data read stop at from | 1 |
108,861 | 4,351,831,245 | IssuesEvent | 2016-08-01 02:09:28 | ssadedin/bpipe | https://api.github.com/repos/ssadedin/bpipe | closed | out of memory processing when operating on 500+ files | bug imported Priority-Medium | _From [henning....@gmail.com](https://code.google.com/u/107754961382921025555/) on 2014-05-27T18:02:08Z_
Running a simple bpipe script like
get_quality = {
exec """
qc.pl -q -s 0 -i ${input} &&
touch $output
"""
forward input
}
summarise = {
produce("summary.txt"){
exec """
summarize.py .
"""
}
}
Bpipe.run {
"%" * [ get_quality ] + summarise
}
operating on 500+ files I get OutOfMemory exception even though I set
: ${MAX_JAVA_MEM:="1024m"}
in bin/bpipe.
I know my example is poorly described, I just wanted to hear what you typically set MAX_JAVA_MEM too, if it is normal that bpipe requires a lot of memory like this?
...and btw, thanks for an awesome project we are getting more and more into bpipe here in my office!
_Original issue: http://code.google.com/p/bpipe/issues/detail?id=97_ | 1.0 | out of memory processing when operating on 500+ files - _From [henning....@gmail.com](https://code.google.com/u/107754961382921025555/) on 2014-05-27T18:02:08Z_
Running a simple bpipe script like
get_quality = {
exec """
qc.pl -q -s 0 -i ${input} &&
touch $output
"""
forward input
}
summarise = {
produce("summary.txt"){
exec """
summarize.py .
"""
}
}
Bpipe.run {
"%" * [ get_quality ] + summarise
}
operating on 500+ files I get OutOfMemory exception even though I set
: ${MAX_JAVA_MEM:="1024m"}
in bin/bpipe.
I know my example is poorly described, I just wanted to hear what you typically set MAX_JAVA_MEM too, if it is normal that bpipe requires a lot of memory like this?
...and btw, thanks for an awesome project we are getting more and more into bpipe here in my office!
_Original issue: http://code.google.com/p/bpipe/issues/detail?id=97_ | priority | out of memory processing when operating on files from on running a simple bpipe script like get quality exec qc pl q s i input touch output forward input summarise produce summary txt exec summarize py bpipe run summarise operating on files i get outofmemory exception even though i set max java mem in bin bpipe i know my example is poorly described i just wanted to hear what you typically set max java mem too if it is normal that bpipe requires a lot of memory like this and btw thanks for an awesome project we are getting more and more into bpipe here in my office original issue | 1 |
351,158 | 10,513,400,345 | IssuesEvent | 2019-09-27 20:29:10 | robotframework/SeleniumLibrary | https://api.github.com/repos/robotframework/SeleniumLibrary | closed | Use pabot to run acceptance test locally. | help wanted priority: medium task | The acceptance testing starts to take quite long time, in Travis and when running locally. Experiment how to run acceptance test parallel with a local machine. Changing Travis is not mandatory in this issue.
- [ ] Enhance the [atest/run.py](https://github.com/robotframework/SeleniumLibrary/blob/master/atest/run.py) to use `pabot` instead of `robot` as command line switch.
- [ ] It might be useful to expose `--processes` as argument too
- [x] Testing: Is simple http server enough for the load, can tests run in parallel, are test stable | 1.0 | Use pabot to run acceptance test locally. - The acceptance testing starts to take quite long time, in Travis and when running locally. Experiment how to run acceptance test parallel with a local machine. Changing Travis is not mandatory in this issue.
- [ ] Enhance the [atest/run.py](https://github.com/robotframework/SeleniumLibrary/blob/master/atest/run.py) to use `pabot` instead of `robot` as command line switch.
- [ ] It might be useful to expose `--processes` as argument too
- [x] Testing: Is simple http server enough for the load, can tests run in parallel, are test stable | priority | use pabot to run acceptance test locally the acceptance testing starts to take quite long time in travis and when running locally experiment how to run acceptance test parallel with a local machine changing travis is not mandatory in this issue enhance the to use pabot instead of robot as command line switch it might be useful to expose processes as argument too testing is simple http server enough for the load can tests run in parallel are test stable | 1 |
475,100 | 13,686,914,529 | IssuesEvent | 2020-09-30 09:21:49 | shahednasser/sbuttons | https://api.github.com/repos/shahednasser/sbuttons | closed | The squared social Buttons and Rounded social buttons should be separated for clear UI | Priority: Medium buttons enhancement | **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
**Additional notes**

| 1.0 | The squared social Buttons and Rounded social buttons should be separated for clear UI - **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
**Additional notes**

| priority | the squared social buttons and rounded social buttons should be separated for clear ui is your feature request related to a problem please describe describe the solution you d like additional notes | 1 |
189,120 | 6,794,363,960 | IssuesEvent | 2017-11-01 11:50:07 | dotkom/super-duper-fiesta | https://api.github.com/repos/dotkom/super-duper-fiesta | opened | Store passwordHash in localstorage and only send it when needed to backend | Package: Client Priority: Medium Status: Available Type: Enhancement | This way we don't have to force a reload after registration.
It's only used in two places:
- On initial connection
- When voting on anonymous issues
Fixing the initial connection case is a bit trickier. | 1.0 | Store passwordHash in localstorage and only send it when needed to backend - This way we don't have to force a reload after registration.
It's only used in two places:
- On initial connection
- When voting on anonymous issues
Fixing the initial connection case is a bit trickier. | priority | store passwordhash in localstorage and only send it when needed to backend this way we don t have to force a reload after registration it s only used in two places on initial connection when voting on anonymous issues fixing the initial connection case is a bit trickier | 1 |
52,543 | 3,023,833,576 | IssuesEvent | 2015-08-01 23:05:37 | WarGamesLabs/Jack | https://api.github.com/repos/WarGamesLabs/Jack | closed | Protocol: DMX512-A | auto-migrated Priority-Medium Type-Enhancement | ```
Add the DMX industrial lighting protocol.
Transceiver/physical layer? RS484?
```
Original issue reported on code.google.com by `ianles...@gmail.com` on 31 Mar 2009 at 2:38 | 1.0 | Protocol: DMX512-A - ```
Add the DMX industrial lighting protocol.
Transceiver/physical layer? RS484?
```
Original issue reported on code.google.com by `ianles...@gmail.com` on 31 Mar 2009 at 2:38 | priority | protocol a add the dmx industrial lighting protocol transceiver physical layer original issue reported on code google com by ianles gmail com on mar at | 1 |
614,105 | 19,142,383,989 | IssuesEvent | 2021-12-02 01:19:14 | FrequencyX4/Fate | https://api.github.com/repos/FrequencyX4/Fate | opened | Fix-perms command | Priority: Medium Category: Module | A command to re-setup server permissions so that role permissions are primarily in control rather than channel overwrites | 1.0 | Fix-perms command - A command to re-setup server permissions so that role permissions are primarily in control rather than channel overwrites | priority | fix perms command a command to re setup server permissions so that role permissions are primarily in control rather than channel overwrites | 1 |
314,670 | 9,601,303,077 | IssuesEvent | 2019-05-10 11:47:13 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Test OIDC Hybrid Flow | Complexity/Medium Component/OIDC Priority/High Severity/Major Type/Task | Aspects to check are,
1. Whether the code, token returned has correct expiry times (consider SP wise expiry times as well)
2. id_token generated honours claims configs | 1.0 | Test OIDC Hybrid Flow - Aspects to check are,
1. Whether the code, token returned has correct expiry times (consider SP wise expiry times as well)
2. id_token generated honours claims configs | priority | test oidc hybrid flow aspects to check are whether the code token returned has correct expiry times consider sp wise expiry times as well id token generated honours claims configs | 1 |
760,397 | 26,638,368,929 | IssuesEvent | 2023-01-25 00:44:22 | ualis0n/portifolio | https://api.github.com/repos/ualis0n/portifolio | closed | Adicionar os dados de contato | Priority: Medium Weight: 3 Type: Feature | ## Informações
- [ ] Linkedin
- [ ] Email
- [ ] Telefone (Whatsapp/Telegram)
- [ ] Github
| 1.0 | Adicionar os dados de contato - ## Informações
- [ ] Linkedin
- [ ] Email
- [ ] Telefone (Whatsapp/Telegram)
- [ ] Github
| priority | adicionar os dados de contato informações linkedin email telefone whatsapp telegram github | 1 |
435,441 | 12,535,594,011 | IssuesEvent | 2020-06-04 21:42:21 | onicagroup/runway | https://api.github.com/repos/onicagroup/runway | closed | [REQUEST] deployment/module.environments should be strict | breaking feature priority:medium status:in_progress | As show, this will break legacy uses of `environments`. This would need to be released in a major release or, made toggleable with a default of `false` until the next major release. It should be a top-level config option.
# Example
runway.yml
```yaml
deployments:
- modules:
- path: sampleapp.cfn
parameters:
key: val
environments:
example: 0000/us-east-1
regions:
- us-east-1
- us-west-2
```
## Expectation
- `DEPLOY_ENVIRONMENT=example, accountId=0000, region=us-east-1` deploy the module
- `DEPLOY_ENVIRONMENT=example, accountId=0000, region=us-west-2` always skip the module because the region does not match
- `DEPLOY_ENVIRONMENT=example, accountId=1111, region=us-east-1` always skip the module because the accountId does not match
- `DEPLOY_ENVIRONMENT=prod, accountId=0000, region=us-east-1` always skip the module because the environment is not defined
- `DEPLOY_ENVIRONMENT=prod, accountId=0000, region=us-west-2` always skip the module because the environment is not defined
If `environments` is not defined for a deployment/module, don't skip; let the module class determine if it should skip.
Each environment should still accept an explicit true or false that would enable/disable it for all regions in all accounts.
Each environment should be able to accept an accountId only or a region only. | 1.0 | [REQUEST] deployment/module.environments should be strict - As shown, this will break legacy uses of `environments`. This would need to be released in a major release or made toggleable with a default of `false` until the next major release. It should be a top-level config option.
# Example
runway.yml
```yaml
deployments:
- modules:
- path: sampleapp.cfn
parameters:
key: val
environments:
example: 0000/us-east-1
regions:
- us-east-1
- us-west-2
```
## Expectation
- `DEPLOY_ENVIRONMENT=example, accountId=0000, region=us-east-1` deploy the module
- `DEPLOY_ENVIRONMENT=example, accountId=0000, region=us-west-2` always skip the module because the region does not match
- `DEPLOY_ENVIRONMENT=example, accountId=1111, region=us-east-1` always skip the module because the accountId does not match
- `DEPLOY_ENVIRONMENT=prod, accountId=0000, region=us-east-1` always skip the module because the environment is not defined
- `DEPLOY_ENVIRONMENT=prod, accountId=0000, region=us-west-2` always skip the module because the environment is not defined
If `environments` is not defined for a deployment/module, don't skip; let the module class determine if it should skip.
Each environment should still accept an explicit true or false that would enable/disable it for all regions in all accounts.
Each environment should be able to accept an accountId only or a region only. | priority | deployment module environments should be strict as show this will break legacy uses of environments this would need to be released in a major release or made toggleable with a default of false until the next major release it should be a top level config option example runway yml yaml deployments modules path sampleapp cfn parameters key val environments example us east regions us east us west expectation deploy environment example accountid region us east deploy the module deploy environment example accountid region us west always skip the module because the region does not match deploy environment example accountid region us east always skip the module because the accountid does not match deploy environment prod accountid region us east always skip the module because the environment is not defined deploy environment prod accountid region us west always skip the module because the environment is not defined if environments is not defined for an deployment module don t skip let the module class determine if it should skip each environment should still accept an explicit true or false that would enable disable it for all regions in all accounts each environment should be able to accept an accountid only or a region only | 1 |
72,095 | 3,371,898,768 | IssuesEvent | 2015-11-23 21:07:33 | gsstudios/Dorimanx-SG2-I9100-Kernel | https://api.github.com/repos/gsstudios/Dorimanx-SG2-I9100-Kernel | opened | Root install doesn't work in stweaks | bug Lollipop Medium priority | The root installer in stweaks needs to be updated for lollipop just in case people want to re-root their device. | 1.0 | Root install doesn't work in stweaks - The root installer in stweaks needs to be updated for lollipop just in case people want to re-root their device. | priority | root install doesn t work in stweaks the root installer in stweaks needs to be updated for lollipop just in case people want to re root their device | 1 |
562,801 | 16,670,095,897 | IssuesEvent | 2021-06-07 09:45:54 | belivipro9x99/ctms-plus | https://api.github.com/repos/belivipro9x99/ctms-plus | closed | 🍰 Scrollbar thumb does not display properly when clamping | bug help wanted priority:medium | ## 🐞 Bug report
---
### 📃 Description
Scrollbar thumb changes height massively when clamping on top and overflows when clamping on the bottom
### 🔬 Steps to Reproduce
1. Try scrolling on a non-overflowed scroll container
2. See the scrollbar glitching
### 🎯 Expected Behavior
Scrollbar should behave correctly when clamping
### 📷 Screenshots

| 1.0 | 🍰 Scrollbar thumb does not display properly when clamping - ## 🐞 Bug report
---
### 📃 Description
Scrollbar thumb changes height massively when clamping on top and overflows when clamping on the bottom
### 🔬 Steps to Reproduce
1. Try scrolling on a non-overflowed scroll container
2. See the scrollbar glitching
### 🎯 Expected Behavior
Scrollbar should behave correctly when clamping
### 📷 Screenshots

| priority | 🍰 scrollbar thumb does not display properly when clamping 🐞 bug report 📃 description scrollbar thumb changes height massively when clamping on top and overflows when clamping on the bottom 🔬 steps to reproduce try scrolling on a non overflowed scroll container see the scrollbar glitching 🎯 expected behavior scrollbar should behave correctly when clamping 📷 screenshots | 1 |
458,870 | 13,183,337,414 | IssuesEvent | 2020-08-12 17:19:58 | indianapublicmedia/indianapublicmedia-web | https://api.github.com/repos/indianapublicmedia/indianapublicmedia-web | closed | /journeyindiana/ map embed | enhancement medium priority | JI is to have an embedded map (Google or otherwise) of various stories, akin to the one from Weekly Special. Some technical research needed here. | 1.0 | /journeyindiana/ map embed - JI is to have an embedded map (Google or otherwise) of various stories, akin to the one from Weekly Special. Some technical research needed here. | priority | journeyindiana map embed ji is to have an embedded map google or otherwise of various stories akin to the one from weekly special some technical research needed here | 1 |
800,015 | 28,322,933,617 | IssuesEvent | 2023-04-11 03:49:47 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [CDCSDK] Multi Schema + DDL + Pause/Resume Connector + Nemesis + Colocation fails | kind/bug priority/medium area/cdcsdk | Jira Link: [DB-5979](https://yugabyte.atlassian.net/browse/DB-5979)
### Description
http://stress.dev.yugabyte.com/stress_test/88b85897-ee85-41bc-88e8-edff8ede1550
After a few iterations, data is not seen in the target
### Source connector version
latest
1.9.5.y.17
### Connector configuration
NA
### YugabyteDB version
2.17.4.0-b9
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5979]: https://yugabyte.atlassian.net/browse/DB-5979?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [CDCSDK] Multi Schema + DDL + Pause/Resume Connector + Nemesis + Colocation fails - Jira Link: [DB-5979](https://yugabyte.atlassian.net/browse/DB-5979)
### Description
http://stress.dev.yugabyte.com/stress_test/88b85897-ee85-41bc-88e8-edff8ede1550
After a few iterations, data is not seen in the target
### Source connector version
latest
1.9.5.y.17
### Connector configuration
NA
### YugabyteDB version
2.17.4.0-b9
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5979]: https://yugabyte.atlassian.net/browse/DB-5979?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | multi schema ddl pause resume connector nemesis colocation fails jira link description after few iterations data is not seen in target source connector version latest y connector configuration na yugabytedb version warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
675,952 | 23,112,652,317 | IssuesEvent | 2022-07-27 14:12:32 | codbex/codbex-kronos | https://api.github.com/repos/codbex/codbex-kronos | opened | [Core] Configure destination caching timeout | effort-medium core supportability priority-low | From xsk created by [dpanayotov](https://github.com/dpanayotov): SAP/xsk#1413
### Details
Currently if a destination is not found initially it will not be retried for 5 minutes. For development purposes this cache may be disabled
### Target
Allow destination cache to be configurable via environment variable
See https://github.com/SAP/cloud-sdk/issues/599 how to work around the inability to configure the library itself | 1.0 | [Core] Configure destination caching timeout - From xsk created by [dpanayotov](https://github.com/dpanayotov): SAP/xsk#1413
### Details
Currently if a destination is not found initially it will not be retried for 5 minutes. For development purposes this cache may be disabled
### Target
Allow destination cache to be configurable via environment variable
See https://github.com/SAP/cloud-sdk/issues/599 how to work around the inability to configure the library itself | priority | configure destination caching timeout from xsk created by sap xsk details currently if a destination is not found initially it will not be retried for minutes for development purposes this cache may be disabled target allow destination cache to be configurable via environment variable see how to work around the inability to configure the library itself | 1 |
647,446 | 21,103,881,861 | IssuesEvent | 2022-04-04 16:47:14 | pystardust/ani-cli | https://api.github.com/repos/pystardust/ani-cli | closed | Termux errors when launching ep | type: bug priority 2: medium | **Metadata (please complete the following information)**
OS: Termux on Android
Application version:
0.118.0
Packages CPU architecture:
aarch64
Subscribed repositories:
# sources.list
deb https://grimler.se/termux-packages-24/ stable main
# x11-repo (sources.list.d/x11.list)
deb https://dl.kcubeterm.com/termux-x11 x11 main
Updatable packages:
All packages up to date
Android version:
9
Kernel build information:
Linux localhost 4.4.111-21737876 #1 SMP PREEMPT Thu Jul 15 19:28:19 KST 2021 aarch64 Android
Device manufacturer:
samsung
Device model:
SM-N950F
Shell: zsh
ani-cli: 2.0.2
Anime: Berserk
**Describe the bug**
A few errors are popping up with the latest update. Playing brings 1 error, and "q" pops up 2
When playing an ep
`ani-cli: 334: [: Illegal number:`
Note: The episode still played
When quitting
`ani-cli: 592: te_ep_list: not found`
`ani-cli: 595: Syntax error: "else" unexpected`
**Steps To Reproduce**
1. Run `ani-cli -q 1080`
2. Type in `Berserk`
3. Select `(1) Berserk (episode 25)` from the list
4. Choose episode 1
5. Go back to Termux, select `q` to quit
**Expected behavior**
Playing the episode and quitting should not bring up any errors
**Screenshots (if applicable; you can just drag the image onto github)**

**Additional context**
I cannot reproduce the 2nd issue when I quit Termux, it seemed random. I tried replaying my file like the steps in pic but it did not reoccur. | 1.0 | Termux errors when launching ep - **Metadata (please complete the following information)**
OS: Termux on Android
Application version:
0.118.0
Packages CPU architecture:
aarch64
Subscribed repositories:
# sources.list
deb https://grimler.se/termux-packages-24/ stable main
# x11-repo (sources.list.d/x11.list)
deb https://dl.kcubeterm.com/termux-x11 x11 main
Updatable packages:
All packages up to date
Android version:
9
Kernel build information:
Linux localhost 4.4.111-21737876 #1 SMP PREEMPT Thu Jul 15 19:28:19 KST 2021 aarch64 Android
Device manufacturer:
samsung
Device model:
SM-N950F
Shell: zsh
ani-cli: 2.0.2
Anime: Berserk
**Describe the bug**
A few errors are popping up with the latest update. Playing brings 1 error, and "q" pops up 2
When playing an ep
`ani-cli: 334: [: Illegal number:`
Note: The episode still played
When quitting
`ani-cli: 592: te_ep_list: not found`
`ani-cli: 595: Syntax error: "else" unexpected`
**Steps To Reproduce**
1. Run `ani-cli -q 1080`
2. Type in `Berserk`
3. Select `(1) Berserk (episode 25)` from the list
4. Choose episode 1
5. Go back to Termux, select `q` to quit
**Expected behavior**
Playing the episode and quitting should not bring up any errors
**Screenshots (if applicable; you can just drag the image onto github)**

**Additional context**
I cannot reproduce the 2nd issue when I quit Termux, it seemed random. I tried replaying my file like the steps in pic but it did not reoccur. | priority | termux errors when launching ep metadata please complete the following information os termux on android application version packages cpu architecture subscribed repositories sources list deb stable main repo sources list d list deb main updatable packages all packages up to date android version kernel build information linux localhost smp preempt thu jul kst android device manufacturer samsung device model sm shell zsh ani cli anime berserk describe the bug a few errors are popping up with the latest update playing brings error and q pops up when playing an ep ani cli illegal number note the episode still played when quitting ani cli te ep list not found ani cli syntax error else unexpected steps to reproduce run ani cli q type in berserk select berserk episode from the list choose episode go back to termux select q to quit expected behavior playing the episode and quitting should not bring up any errors screenshots if applicable you can just drag the image onto github additional context i cannot reproduce the issue when i quit termux it seemed random i tried replaying my file like the steps in pic but it did not reoccur | 1 |
278,167 | 8,637,584,534 | IssuesEvent | 2018-11-23 11:47:30 | MarcusWolschon/osmeditor4android | https://api.github.com/repos/MarcusWolschon/osmeditor4android | closed | Presets: Display in alphabetical order | Enhancement Medium Priority Usability | # Current state
- Presets (`Vorlagen`) are shown in **random** (?) order on the screen, see screenshot.

# Target state
- Presets (`Vorlagen`) are shown in **alphabetical** order on the screen.
- I think this will make it easier to find a preset.
- The alphabetical order should not be hardcoded so it can be dynamically applied when the system language is changed such as from English to German. | 1.0 | Presets: Display in alphabetical order - # Current state
- Presets (`Vorlagen`) are shown in **random** (?) order on the screen, see screenshot.

# Target state
- Presets (`Vorlagen`) are shown in **alphabetical** order on the screen.
- I think this will make it easier to find a preset.
- The alphabetical order should not be hardcoded so it can be dynamically applied when the system language is changed such as from English to German. | priority | presets display in alphabetical order current state presets vorlagen are show in random order on the screen see screenshot target state presets vorlagen are show in alphabetical order on the screen i think this will make it easier to find a preset the alphabetical order should not be hardcoded so it can be dynamically applied when the system language is changed such as from english to german | 1 |
338,455 | 10,229,557,169 | IssuesEvent | 2019-08-17 13:47:24 | wevote/WebApp | https://api.github.com/repos/wevote/WebApp | closed | Settings-Sharing (Mobile): Configure social media sharing | Difficulty: Medium Priority: 1 | Upgrade the current http://localhost:3000/settings/sharing page to use the following layout in mobile mode:

| 1.0 | Settings-Sharing (Mobile): Configure social media sharing - Upgrade the current http://localhost:3000/settings/sharing page to use the following layout in mobile mode:

| priority | settings sharing mobile configure social media sharing upgrade the current page to use the following layout in mobile mode | 1 |
830,915 | 32,030,203,737 | IssuesEvent | 2023-09-22 11:47:34 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | closed | Data values not properly saved in database | bug priority: medium variables | There are problems when I try to use variables to store an item with a data value
Like storing a clown fish
`!set {test} to clownfish` 349:1
Close the server and restart after storage
`!give {test} to player`
What I got was a raw fish 349:0
Worse, it loses some NBT data
`set {test} to clownfish with nbt "{xbxy:""i35"",display:{Name:""fish""}}"`
When I restart the server, it will become
` {test} = raw fishwith with nbt "{display:{Name:""fish""}}"`
I also tested the enchanted golden apple, which will turn into a normal golden apple when rebooted
| 1.0 | Data values not properly saved in database - There are problems when I try to use variables to store an item with a data value
Like storing a clown fish
`!set {test} to clownfish` 349:1
Close the server and restart after storage
`!give {test} to player`
What I got was a raw fish 349:0
Worse, it loses some NBT data
`set {test} to clownfish with nbt "{xbxy:""i35"",display:{Name:""fish""}}"`
When I restart the server, it will become
` {test} = raw fish with nbt "{display:{Name:""fish""}}"`
I also tested the enchanted golden apple, which will turn into a normal golden apple when rebooted
| priority | data values not properly saved in database there are problems when i try to use variables to store an item with a data value like storing a clown fish set test to clownfish close the server and restart after storage give test to player what i got was a raw fish worse it loses some nbt data set test to clownfish with nbt xbxy display name fish when i restart the server it will become test raw fishwith with nbt display name fish i also tested the enchanted golden apple which will turn into a normal golden apple when rebooted | 1 |
698,088 | 23,965,060,976 | IssuesEvent | 2022-09-12 23:30:02 | returntocorp/semgrep | https://api.github.com/repos/returntocorp/semgrep | reopened | [RFC] Add minimum semgrep version needed to run rule | priority:medium rfc | With the speed at which we are adding rules and functionality into semgrep, we have hit situations where we
publish rules that depend on features in just-released semgrep, so running them on an older version of semgrep
causes a crash.
Proposed solution:
We can add an optional field to the rule_schema: `requires` that takes a semver string mentioning the minimum version
of semgrep needed to run or even parse the rule successfully.
semgrep-cli can as a first step filter out rules that do not work with the currently running version of semgrep (and print out info to the user on what rules failed to run and why)
Alternatives:
- Instead of a semver string we can even just have a minimum semgrep version needed to run a rule?
- We could also have rules that fail to parse/run correctly in semgrep-core be reported to semgrep-cli in the response json (semgrep-core will just skip running that rule) and semgrep-cli will let user know of bad rules
- This removes the need for rule writers to be aware of minimum semgrep version, removes need to update schema, but adds complexity to interface | 1.0 | [RFC] Add minimum semgrep version needed to run rule - With the speed at which we are adding rules and functionality into semgrep, we have hit situations where we
publish rules that depend on features in just-released semgrep, so running them on an older version of semgrep
causes a crash.
Proposed solution:
We can add an optional field to the rule_schema: `requires` that takes a semver string mentioning the minimum version
of semgrep needed to run or even parse the rule successfully.
semgrep-cli can as a first step filter out rules that do not work with the currently running version of semgrep (and print out info to the user on what rules failed to run and why)
Alternatives:
- Instead of a semver string we can even just have a minimum semgrep version needed to run a rule?
- We could also have rules that fail to parse/run correctly in semgrep-core be reported to semgrep-cli in the response json (semgrep-core will just skip running that rule) and semgrep-cli will let user know of bad rules
- This removes the need for rule writers to be aware of minimum semgrep version, removes need to update schema, but adds complexity to interface | priority | add minimum semgrep version needed to run rule with speed we are adding rules and functionality into semgrep we have hit situations when we publish rules that depend on features in just released semgrep so when run on a version of semgrep that is older causes a crash proposed solution we can add an optional field to the rule schema requires that takes a semver string mentioning the minimum version of semgrep needed to run or even parse the rule successfully semgrep cli can as a first step filter out rules that do not worth with the currently running version of semgrep and print out info to user on what rules failed to run and why alternatives instead of a semver string we can even just have a minimum semgrep version needed to run a rule we could also have rules that fail to parse run correctly in semgrep core be reported to semgrep cli in the response json semgrep core will just skip running that rule and semgrep cli will let user know of bad rules this removes the need for rule writers to be aware of minimum semgrep version removes need to update schema but adds complexity to interface | 1 |
127,657 | 5,037,954,940 | IssuesEvent | 2016-12-17 23:56:44 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Investigate bazel support for Xcode | configuration: bazel priority: medium type: installation and distribution | posted initially in #4485, but it deserves its own issue.
fwiw, just lost a few hours trying Tulsi with bazel to work with Xcode. Long story short, I think it's not going to support our workflow without some work. I made some local edits to the tulsi code to get fairly far along (https://github.com/RussTedrake/tulsi), but it is not able to understand the "@glib://" deps, which apparently comes in through pkg-config (which comes in through the thirdparty bazel support).
The local edits that I did make were needed for it to even consider cc_* as valid targets. It was only looking for e.g. ios_application. So it's clearly not being used for our workflow yet. | 1.0 | Investigate bazel support for Xcode - posted initially in #4485, but it deserves its own issue.
fwiw, just lost a few hours trying Tulsi with bazel to work with Xcode. Long story short, I think it's not going to support our workflow without some work. I made some local edits to the tulsi code to get fairly far along (https://github.com/RussTedrake/tulsi), but it is not able to understand the "@glib://" deps, which apparently comes in through pkg-config (which comes in through the thirdparty bazel support).
The local edits that I did make were needed for it to even consider cc_* as valid targets. It was only looking for e.g. ios_application. So it's clearly not being used for our workflow yet. | priority | investigate bazel support for xcode posted initially in but it deserves it s own issue fwiw just lost a few hours trying tulsi with bazel to work with xcode long story short i think it s not going to support our workflow without some work i made some local edits to the tulsi code to get fairly far along but it is not able to understand the glib deps which apparently comes in through pkg config which comes in through the thirdparty bazel support the local edits that i did make were needed for it to even consider cc as valid targets it was only looking for e g ios application so it s clearly not being used for our workflow yet | 1 |
196,183 | 6,925,491,402 | IssuesEvent | 2017-11-30 16:05:23 | googlei18n/noto-fonts | https://api.github.com/repos/googlei18n/noto-fonts | closed | Glyph correction needed for Sundanese Letter JA (1B8F) | Android Priority-Medium Script-Sundanese | http://unicode.org/cldr/trac/ticket/9344
[quote]
we've spotted mistake in the Glyph shown for Sundanese Letter JA (1B8F) in this document:
http://unicode.org/charts/PDF/U1B80.pdf
The mistake is at the top part of the glyph.
Currently it is displayed as Z shaped
In fact, the top part should be similar to the Sundanese Letter DA (1B93).
The font which implements this correction can be found in:
http://www.kairaga.com/2015/05/05/font-aksara-sunda-unicode-versi-2013-revisi.html
[/quote]
| 1.0 | Glyph correction needed for Sundanese Letter JA (1B8F) - http://unicode.org/cldr/trac/ticket/9344
[quote]
we've spotted mistake in the Glyph shown for Sundanese Letter JA (1B8F) in this document:
http://unicode.org/charts/PDF/U1B80.pdf
The mistake is at the top part of the glyph.
Currently it is displayed as Z shaped
In fact, the top part should be similar to the Sundanese Letter DA (1B93).
The font which implements this correction can be found in:
http://www.kairaga.com/2015/05/05/font-aksara-sunda-unicode-versi-2013-revisi.html
[/quote]
| priority | glyph correction needed for sundanese letter ja we ve spotted mistake in the glyph shown for sundanese letter ja in this document the mistake is at the top part of the glyph currently it is displayed as z shaped in fact the top part should be similar to the sundanese letter da the font which implements this correction can be found in | 1 |
498,156 | 14,401,945,423 | IssuesEvent | 2020-12-03 14:20:53 | tellor-io/telliot | https://api.github.com/repos/tellor-io/telliot | closed | Revisit the `indexes.json` file format to see if the format can be simplified or if we can use an existing golang parser. | help wanted priority: medium type: research | @themandalore mentioned that this is the format that other projects use (Chainlink...) so on the plus side this should mean that people are familiar with this format. Let's see if we can use some existing golang module to remove the need for a custom parser and sync with some user of the miner to agree on the format.
@mikeghen maybe you can also comment?
```
"json(https://api.binance.com/api/v1/klines?symbol=BTCUSDT&interval=1d&limit=1).0.4",
```
could maybe be something like:
```
"URL": "https://api.binance.com/api/v1/klines?symbol=BTCUSDT&interval=1d&limit=1"
"type": "json",
"jsonPath":".0.4"
```
The format seems to be similar to https://goessner.net/articles/JsonPath/
| 1.0 | Revisit the `indexes.json` file format to see if the format can be simplified or if we can use an existing golang parser. - @themandalore mentioned that this is the format that other projects use (Chainlink...) so on the plus side this should mean that people are familiar with this format. Let's see if we can use some existing golang module to remove the need for a custom parser and sync with some user of the miner to agree on the format.
@mikeghen maybe you can also comment?
```
"json(https://api.binance.com/api/v1/klines?symbol=BTCUSDT&interval=1d&limit=1).0.4",
```
could maybe be something like:
```
"URL": "https://api.binance.com/api/v1/klines?symbol=BTCUSDT&interval=1d&limit=1"
"type": "json",
"jsonPath":".0.4"
```
The format seems to be similar to https://goessner.net/articles/JsonPath/
| priority | revisit the indexes json file format to see if the format can be simplified or if can use an existing golang parser themandalore mentioned that this is the format that other projects use this format chainlink so on the plus side this should mean that people are familiar with this format lets see if can use some existing golang module to remove the need for a custom parser and sync with some user of the miner to agree on the format mikeghen maybe you can also comment json could maybe be something like url type json jsonpath the format seems to be similar to | 1 |
127,144 | 5,019,434,392 | IssuesEvent | 2016-12-14 11:44:13 | swash99/ims | https://api.github.com/repos/swash99/ims | closed | Add search bar on inventory entry page | feature Medium Priority | Use case: During the day, an employee might run into situations where they need to add a small note/reminder for a specific item, and that note will be taken into account when the inventory entry process is being performed or perhaps when the print preview results are passed on.
The solution I suggest is to add a search bar above the inventory entry table (similar to the one on Admin Tasks -> Items page). User will search for item name and add whatever information is needed in the 'Notes' column.
This solution is just a suggestion and if the requirement can be met in a better way, then you are free to pursue experimenting and implementing. | 1.0 | Add search bar on inventory entry page - Use case: During the day, an employee might run into situations where they need to add a small note/reminder for a specific item, and that note will be taken into account when the inventory entry process is being performed or perhaps when the print preview results are passed on.
The solution I suggest is to add a search bar above the inventory entry table (similar to the one on Admin Tasks -> Items page). User will search for item name and add whatever information is needed in the 'Notes' column.
This solution is just a suggestion and if the requirement can be met in a better way, then you are free to pursue experimenting and implementing. | priority | add search bar on inventory entry page usecase during the day employee might run into situations where they need to add a small note reminder for a specific item and that note will be taken into account when inventory entry process is being performed or perhaps when the print preview results are passed on the solution i suggest is to add a search bar above the inventory entry table similar to the one on admin tasks items page user will search for item name and add whatever information is needed in the notes column this solution is just a suggestion and if the requirement can be met in a better way then you are free to pursue experimenting and implementing | 1 |
445,213 | 12,827,641,571 | IssuesEvent | 2020-07-06 18:54:04 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Website Release | Category: Web Priority: Medium Type: Feature | - [ ] production .env
- [ ] qa account and hosted worlds - cross browser
- [ ] set hw throttle
- [ ] 301 redirects
- [ ] cdn | 1.0 | Website Release - - [ ] production .env
- [ ] qa account and hosted worlds - cross browser
- [ ] set hw throttle
- [ ] 301 redirects
- [ ] cdn | priority | website release production env qa account and hosted worlds cross browser set hw throttle redirects cdn | 1 |
58,110 | 3,087,555,120 | IssuesEvent | 2015-08-25 12:41:47 | juju/docs | https://api.github.com/repos/juju/docs | closed | LXC caching needs docs | 1.22 in progress Medium Priority PR review | From Juju 1.22 onwards, LXC images are cached in the Juju environment
when they are retrieved to instantiate a new LXC container. This applies
to the local provider and all other cloud providers. This caching is
done independently of whether image cloning is enabled.
Note: Due to current upgrade limitations, image caching is currently not
available for machines upgraded to 1.22. Only machines deployed with
1.22 will cache the images.
In Juju 1.22, lxc-create is configured to fetch images from the Juju
state server. If no image is available, the state server will fetch the
image from http://cloud-images.ubuntu.com and then cache it. This means
that the retrieval of images from the external site is only done once
per *environment*, not once per new machine which is the default
behaviour of lxc. The next time lxc-create needs to fetch an image, it
comes directly from the Juju environment cache.
The 'cached-images' command can list and delete cached LXC images stored
in the Juju environment. The 'list' and 'delete' subcommands support
'--arch' and '--series' options to filter the result.
To see all cached images, run:
juju cached-images list
Or to see just the amd64 trusty images run:
juju cached-images list --series trusty --arch amd64
To delete the amd64 trusty cached images run:
juju cache-images delete --series trusty --arch amd64
Future development work will allow Juju to automatically download new
LXC images when they become available, but for now, the only way to update
a cached image is to remove the old one from the Juju environment. Juju
will also support KVM image caching in the future.
See 'juju cached-images list --help' and 'juju cached-images delete
--help' for more details.
| 1.0 | LXC caching needs docs - From Juju 1.22 onwards, LXC images are cached in the Juju environment
when they are retrieved to instantiate a new LXC container. This applies
to the local provider and all other cloud providers. This caching is
done independently of whether image cloning is enabled.
Note: Due to current upgrade limitations, image caching is currently not
available for machines upgraded to 1.22. Only machines deployed with
1.22 will cache the images.
In Juju 1.22, lxc-create is configured to fetch images from the Juju
state server. If no image is available, the state server will fetch the
image from http://cloud-images.ubuntu.com and then cache it. This means
that the retrieval of images from the external site is only done once
per *environment*, not once per new machine which is the default
behaviour of lxc. The next time lxc-create needs to fetch an image, it
comes directly from the Juju environment cache.
The 'cached-images' command can list and delete cached LXC images stored
in the Juju environment. The 'list' and 'delete' subcommands support
'--arch' and '--series' options to filter the result.
To see all cached images, run:
juju cached-images list
Or to see just the amd64 trusty images run:
juju cached-images list --series trusty --arch amd64
To delete the amd64 trusty cached images run:
juju cached-images delete --series trusty --arch amd64
Future development work will allow Juju to automatically download new
LXC images when they become available, but for now, the only way to update
a cached image is to remove the old one from the Juju environment. Juju
will also support KVM image caching in the future.
See 'juju cached-images list --help' and 'juju cached-images delete
--help' for more details.
| priority | lxc caching needs docs from juju onwards lxc images are cached in the juju environment when they are retrieved to instantiate a new lxc container this applies to the local provider and all other cloud providers this caching is done independently of whether image cloning is enabled note due to current upgrade limitations image caching is currently not available for machines upgraded to only machines deployed with will cache the images in juju lxc create is configured to fetch images from the juju state server if no image is available the state server will fetch the image from and then cache it this means that the retrieval of images from the external site is only done once per environment not once per new machine which is the default behaviour of lxc the next time lxc create needs to fetch an image it comes directly from the juju environment cache the cached images command can list and delete cached lxc images stored in the juju environment the list and delete subcommands support arch and series options to filter the result to see all cached images run juju cached images list or to see just the trusty images run juju cached images list series trusty arch to delete the trusty cached images run juju cache images delete series trusty arch future development work will allow juju to automatically download new lxc images when they becomes available but for now the only way update a cached image is to remove the old one from the juju environment juju will also support kvm image caching in the future see juju cached images list help and juju cached images delete help for more details | 1 |
25,976 | 2,684,074,897 | IssuesEvent | 2015-03-28 16:42:50 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | opened | In thumbnail mode - incorrect Shift+arrow behavior | 1 star bug imported Priority-Medium | _From [jbak1...@gmail.com](https://code.google.com/u/111605209573957257873/) on May 14, 2012 01:44:18_
OS version: Win7 x86 ConEmu version: ConEmu .120513
Far version: Far Manager, version 2.1 (build 1807 bis27) x86 *Bug description* In thumbnail mode the arrow keys work according to the table layout - the right arrow moves to the next file, the left arrow to the previous one, and so on. And that is good.
But cursor-arrow presses with Shift are apparently passed through to Far, which is very confusing.
It would be nice if Shift+Right selected the file under the cursor and placed the cursor on the next file, Shift+Down moved to the next row while selecting, and so on.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=542_ | 1.0 | In thumbnail mode - incorrect Shift+arrow behavior - _From [jbak1...@gmail.com](https://code.google.com/u/111605209573957257873/) on May 14, 2012 01:44:18_
OS version: Win7 x86 ConEmu version: ConEmu .120513
Far version: Far Manager, version 2.1 (build 1807 bis27) x86 *Bug description* In thumbnail mode the arrow keys work according to the table layout - the right arrow moves to the next file, the left arrow to the previous one, and so on. And that is good.
But cursor-arrow presses with Shift are apparently passed through to Far, which is very confusing.
It would be nice if Shift+Right selected the file under the cursor and placed the cursor on the next file, Shift+Down moved to the next row while selecting, and so on.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=542_ | priority | in thumbnail mode incorrect shift arrow behavior from on may os version conemu version conemu far version far manager version build bug description in thumbnail mode the arrow keys work according to the table layout the right arrow moves to the next file the left arrow to the previous one and so on and that is good but cursor arrow presses with shift are apparently passed through to far which is very confusing it would be nice if shift right selected the file under the cursor and placed the cursor on the next file shift down moved to the next row while selecting and so on original issue | 1 |
681,454 | 23,311,725,909 | IssuesEvent | 2022-08-08 08:52:18 | wasmerio/wasmer | https://api.github.com/repos/wasmerio/wasmer | closed | The execution result of the program differs from that of the native program. | 🐞 bug priority-medium | <!-- Thanks for the bug report! -->
### Describe the bug
```
#include <dirent.h>
#include <stdio.h>
#include <errno.h>
int main(int argc, char **argv) {
DIR *d;
char * target = ".";
if (argc == 2) {
target = argv[1];
}
struct dirent *dir;
d = opendir(target);
if (d) {
while ((dir = readdir(d)) != NULL) {
printf("%s\n", dir->d_name);
}
printf("errno: %d\n", errno);
closedir(d);
}
return(0);
}
```
This is the source code first mentioned in wasmtime issue no. 2493.
My project files are as follows:
[wasmtime-2493.zip](https://github.com/wasmerio/wasmer/files/9159635/wasmtime-2493.zip)
```sh
echo "`wasmer -V` | `rustc -V` | `uname -m`"
wasmer 2.3.0 | rustc 1.62.0 (a8314ef7d 2022-06-27) | x86_64
```
### Steps to reproduce
```
wasmer run ls.wasm --dir=./testfolder -- testfolder | wc -l
```
### Expected behavior
When I compile using g++, the program results in the following
```
203
```
### Actual behavior
```
201
```
| 1.0 | The execution result of the program differs from that of the native program. - <!-- Thanks for the bug report! -->
### Describe the bug
```
#include <dirent.h>
#include <stdio.h>
#include <errno.h>
int main(int argc, char **argv) {
DIR *d;
char * target = ".";
if (argc == 2) {
target = argv[1];
}
struct dirent *dir;
d = opendir(target);
if (d) {
while ((dir = readdir(d)) != NULL) {
printf("%s\n", dir->d_name);
}
printf("errno: %d\n", errno);
closedir(d);
}
return(0);
}
```
This is the source code first mentioned in wasmtime issue no. 2493.
My project files are as follows:
[wasmtime-2493.zip](https://github.com/wasmerio/wasmer/files/9159635/wasmtime-2493.zip)
```sh
echo "`wasmer -V` | `rustc -V` | `uname -m`"
wasmer 2.3.0 | rustc 1.62.0 (a8314ef7d 2022-06-27) | x86_64
```
### Steps to reproduce
```
wasmer run ls.wasm --dir=./testfolder -- testfolder | wc -l
```
### Expected behavior
When I compile using g++, the program results in the following
```
203
```
### Actual behavior
```
201
```
| priority | the execution result of the program differs from that of the native program describe the bug include include include int main int argc char argv dir d char target if argc target argv struct dirent dir d opendir target if d while dir readdir d null printf s n dir d name printf errno d n errno closedir d return this is a source code first mentioned in wasmtime issues no my project files are as follows sh echo wasmer v rustc v uname m wasmer rustc steps to reproduce wasmer run ls wasm dir testfolder testfolder wc l expected behavior when i compile using g the program results in the following actual behavior | 1 |
184,440 | 6,713,274,248 | IssuesEvent | 2017-10-13 12:52:50 | nim-lang/Nim | https://api.github.com/repos/nim-lang/Nim | closed | [times.nim] Timezone offset gives no indication of +/- | Medium Priority Stdlib Times | ## Summary
The timezone offset provided by `getLocalTime()` and `getGMTime()` provide only a 'plain' hour instead of indicating whether the timezone is ahead or behind of UTC/GMT. This is counter to the [documentation](http://nim-lang.org/docs/times.html#format,TimeInfo,string) which suggests +/- should be present for `z`, `zz`, and `zzz`.
See also #3199.
## nim test code
``` nim
import times
echo getTzname()
echo getTime().getGMTime.format("zzz")
echo getTime().getLocalTime.format("zzz")
```
## Output, BST
http://www.timeanddate.com/time/zones/bst
| Code | Actual | Expected |
| --- | --- | --- |
| `getTzname()` | (nonDST: GMT, DST: BST) | (nonDST: GMT, DST: BST) |
| `getTime().getGMTime.format("zzz")` | 00:00 | +00:00 |
| `getTime().getLocalTime.format("zzz")` | 00:00 | +01:00 |
## Output, EDT
http://www.timeanddate.com/time/zones/edt
| Code | Actual | Expected |
| --- | --- | --- |
| `getTzname()` | (nonDST: EST, DST: EDT) | (nonDST: EST, DST: EDT) |
| `getTime().getGMTime.format("zzz")` | 00:00 | +00:00 |
| `getTime().getLocalTime.format("zzz")` | 05:00 | -04:00 |
| 1.0 | [times.nim] Timezone offset gives no indication of +/- - ## Summary
The timezone offset provided by `getLocalTime()` and `getGMTime()` provide only a 'plain' hour instead of indicating whether the timezone is ahead or behind of UTC/GMT. This is counter to the [documentation](http://nim-lang.org/docs/times.html#format,TimeInfo,string) which suggests +/- should be present for `z`, `zz`, and `zzz`.
See also #3199.
## nim test code
``` nim
import times
echo getTzname()
echo getTime().getGMTime.format("zzz")
echo getTime().getLocalTime.format("zzz")
```
## Output, BST
http://www.timeanddate.com/time/zones/bst
| Code | Actual | Expected |
| --- | --- | --- |
| `getTzname()` | (nonDST: GMT, DST: BST) | (nonDST: GMT, DST: BST) |
| `getTime().getGMTime.format("zzz")` | 00:00 | +00:00 |
| `getTime().getLocalTime.format("zzz")` | 00:00 | +01:00 |
## Output, EDT
http://www.timeanddate.com/time/zones/edt
| Code | Actual | Expected |
| --- | --- | --- |
| `getTzname()` | (nonDST: EST, DST: EDT) | (nonDST: EST, DST: EDT) |
| `getTime().getGMTime.format("zzz")` | 00:00 | +00:00 |
| `getTime().getLocalTime.format("zzz")` | 05:00 | -04:00 |
| priority | timezone offset gives no indication of summary the timezone offset provided by getlocaltime and getgmtime provide only a plain hour instead of indicating whether the timezone is ahead or behind of utc gmt this is counter to the which suggests should be present for z zz and zzz see also nim test code nim import times echo gettzname echo gettime getgmtime format zzz echo gettime getlocaltime format zzz output bst code actual expected gettzname nondst gmt dst bst nondst gmt dst bst gettime getgmtime format zzz gettime getlocaltime format zzz output edt code actual expected gettzname nondst est dst edt nondst est dst edt gettime getgmtime format zzz gettime getlocaltime format zzz | 1 |
788,995 | 27,775,490,069 | IssuesEvent | 2023-03-16 16:57:05 | DataScienceScotland/intro_to_r | https://api.github.com/repos/DataScienceScotland/intro_to_r | opened | session1: section 4.1 - redraft joins section | medium priority | could make some of the merge/join commands incomplete to increase interaction with attendees rather than just running through them all
https://github.com/DataScienceScotland/intro_to_r/blob/17b78f13c38c16da9f22902a802a29f426d58f68/session1/intro_to_r_training_incomplete.Rmd#L909 | 1.0 | session1: section 4.1 - redraft joins section - could make some of the merge/join commands incomplete to increase interaction with attendees rather than just running through them all
https://github.com/DataScienceScotland/intro_to_r/blob/17b78f13c38c16da9f22902a802a29f426d58f68/session1/intro_to_r_training_incomplete.Rmd#L909 | priority | section redraft joins section could make some of the merge join commands incomplete to increase interaction with attendees rather than just running through them all | 1 |
646,269 | 21,042,689,005 | IssuesEvent | 2022-03-31 13:38:24 | AY2122S2-CS2103T-W12-3/tp | https://api.github.com/repos/AY2122S2-CS2103T-W12-3/tp | closed | As a busy user with many meetings, I can search for meetings by name or tags | type.Story priority.Medium | so that I can find specific meetings or groups of meetings easily. | 1.0 | As a busy user with many meetings, I can search for meetings by name or tags - so that I can find specific meetings or groups of meetings easily. | priority | as a busy user with many meetings i can search for meetings by name or tags so that i can find specific meetings or groups of meetings easily | 1 |
178,029 | 6,593,414,737 | IssuesEvent | 2017-09-15 00:59:36 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Duplicate docker names when running concurrent create | area/docker priority/medium team/foundation | When running `docker create --name..` or `docker run -d --name..` it is relatively easy to create duplicated container names. This will cause issues for any function that addresses the container by name.
To duplicate the container name, the following command was run twice in quick succession on Linux:
`docker run -d --name jojo busybox top &`
Once both tasks have completed `docker ps` returns the following:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f06b332691c busybox "top" About an hour ago Up About an hour jojo
de9fa278729e busybox "top" About an hour ago Up About an hour jojo
```
The user is only able to remove one container by name and the remaining container will need to be removed by ID.
To minimize this possibility we should reserve the name in the persona cache as vSphere is creating the container. This should follow the same container name reservation utilized in `docker rename`
Environment details:
vSphere 6.0 u3 (nimbus)
ESXi 6.0 u3
vic 1.2.1 deployed to resource pool via `--use-rp`
Reproducibility: easy | 1.0 | Duplicate docker names when running concurrent create - When running `docker create --name..` or `docker run -d --name..` it is relatively easy to create duplicated container names. This will cause issues for any function that addresses the container by name.
To duplicate the container name, the following command was run twice in quick succession on Linux:
`docker run -d --name jojo busybox top &`
Once both tasks have completed `docker ps` returns the following:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f06b332691c busybox "top" About an hour ago Up About an hour jojo
de9fa278729e busybox "top" About an hour ago Up About an hour jojo
```
The user is only able to remove one container by name and the remaining container will need to be removed by ID.
To minimize this possibility we should reserve the name in the persona cache as vSphere is creating the container. This should follow the same container name reservation utilized in `docker rename`
Environment details:
vSphere 6.0 u3 (nimbus)
ESXi 6.0 u3
vic 1.2.1 deployed to resource pool via `--use-rp`
Reproducibility: easy | priority | duplicate docker names when running concurrent create when running docker create name or docker run d name it is relatively easy to create duplicated container names this will cause issues for any function that addresses the container by name to duplicate the container name the following command was run in quick succession on linux docker run d name jojo busybox top once both tasks have completed docker ps returns the following container id image command created status ports names busybox top about an hour ago up about an hour jojo busybox top about an hour ago up about an hour jojo the user is only able to remove one container by name and the remaining container will need to be removed by id to minimize this possibility we should reserve the name in the persona cache as vsphere is creating the container this should follow the same container name reservation utilized in docker rename environment details vsphere nimbus esxi vic deployed to resource pool via use rp reproducibility easy | 1 |
504,815 | 14,621,367,736 | IssuesEvent | 2020-12-22 21:30:43 | GSA/piv-guides | https://api.github.com/repos/GSA/piv-guides | closed | Importing intermediate CA certs into NSS | CP Playbook Team Engineer FAQ FAQ Priority - Medium applications help wanted | PIV Auth via browser - specific for managed enterprise devices
- Firefox browsers in use
- NSS needs to be updated using non-manual (no user intervention) methods by enterprise engineers
- Some of the intermediate CAs in the FPKI stop the CA name at OU rather than using a CN
- Do full chains for client (user) provided certificates need to be configured in the client for two-way TLS to succeed?
certutil or other methods to manage _enterprise_ configurations for NSS
| 1.0 | Importing intermediate CA certs into NSS - PIV Auth via browser - specific for managed enterprise devices
- Firefox browsers in use
- NSS needs to be updated using non-manual (no user intervention) methods by enterprise engineers
- Some of the intermediate CAs in the FPKI stop the CA name at OU rather than using a CN
- Do full chains for client (user) provided certificates need to be configured in the client for two-way TLS to succeed?
certutil or other methods to manage _enterprise_ configurations for NSS
| priority | importing intermediate ca certs into nss piv auth via browser specific for managed enterprise devices firefox browsers in use nss needs to be updated using non manual no user intervention methods by enterprise engineers some of the intermediate cas in the fpki stop the ca name at ou rather than using a cn do full chains for client user provided certificates need to be configured in the client for two way tls to succeed certutil or other methods to manage enterprise configurations for nss | 1 |
759,276 | 26,586,163,766 | IssuesEvent | 2023-01-23 01:29:21 | vignetteapp/sekai | https://api.github.com/repos/vignetteapp/sekai | opened | Implement Animation | enhancement help wanted priority:medium | Currently we would have to implement animations by ourselves in any game we would have to do, causing non-standard behavior to happen such as having different skeleton standards to work with per-game/application. As part of our post-Encore efforts, we should be able to implement an animation system for bipeds and for common transforms. | 1.0 | Implement Animation - Currently we would have to implement animations by ourselves in any game we would have to do, causing non-standard behavior to happen such as having different skeleton standards to work with per-game/application. As part of our post-Encore efforts, we should be able to implement an animation system for bipeds and for common transforms. | priority | implement animation currently we would have to implement animations by ourselves in any game we would have to do causing non standard behavior to happen such as having different skeleton standards to work with per game application as part of our post encore efforts we should be able to implement an animation system for bipeds and for common transforms | 1 |
622,813 | 19,657,616,962 | IssuesEvent | 2022-01-10 14:06:33 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | PHPBB import stuck on certain time | bug feature-discussion-forums priority-medium Stale | **Describe the bug**
When importing the forum via phpBB, it gets stuck at a certain point of progress.
Concern: the phpBB forum import is not working correctly; it hangs on the records, possibly because rows_in_step is not calculated correctly.
https://docs.google.com/spreadsheets/d/1gp89Lf7lhSjhruzYKNlmxwIdXqg7wKauxw5SCb02CNQ/edit#gid=1422384409&range=8:8
**Support ticket links**
https://secure.helpscout.net/conversation/1578838016/154845
**Jira issue** : [PROD-654]
[PROD-654]: https://buddyboss.atlassian.net/browse/PROD-654?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | PHPBB import stuck on certain time - **Describe the bug**
When importing the forum via phpBB, it gets stuck at a certain point of progress.
Concern: the phpBB forum import is not working correctly; it hangs on the records, possibly because rows_in_step is not calculated correctly.
https://docs.google.com/spreadsheets/d/1gp89Lf7lhSjhruzYKNlmxwIdXqg7wKauxw5SCb02CNQ/edit#gid=1422384409&range=8:8
**Support ticket links**
https://secure.helpscout.net/conversation/1578838016/154845
**Jira issue** : [PROD-654]
[PROD-654]: https://buddyboss.atlassian.net/browse/PROD-654?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | phpbb import stuck on certain time describe the bug when importing the forum via phpbb then it is stuck on certain progress concern the phpbb forum import not working correctly its hangs on the records may be rows in step is not calculated correctly support ticket links jira issue | 1 |
192,785 | 6,876,640,563 | IssuesEvent | 2017-11-20 02:22:21 | Marri/glowfic | https://api.github.com/repos/Marri/glowfic | opened | Change messages to use their own CSS classes instead of piggybacking on replies | 3. medium priority 7. easy dev | The `From:` and `To:` bars in messages currently use the `.post-character` and `.post-screenname` selectors, respectively. Give them a new class name and have the classes share some style information in SCSS.
Fix up styling around `.post-screenname a` (especially in layouts) also, when this is done. | 1.0 | Change messages to use their own CSS classes instead of piggybacking on replies - The `From:` and `To:` bars in messages currently use the `.post-character` and `.post-screenname` selectors, respectively. Give them a new class name and have the classes share some style information in SCSS.
Fix up styling around `.post-screenname a` (especially in layouts) also, when this is done. | priority | change messages to use their own css classes instead of piggybacking on replies the from and to bars in messages currently use the post character and post screenname selectors respectively give them a new class name and have the classes share some style information in scss fix up styling around post screenname a especially in layouts also when this is done | 1 |
817,810 | 30,656,991,742 | IssuesEvent | 2023-07-25 12:48:31 | meanbee/docker-magento2 | https://api.github.com/repos/meanbee/docker-magento2 | closed | Add defaults for all environment variables used in configuration files | type-bug priority-medium | The environment variables are `sed`ded into place in various configuration files. If an environment variable is not defined then the variable substitution string remains and results in a broken configuration file.
| 1.0 | Add defaults for all environment variables used in configuration files - The environment variables are `sed`ded into place in various configuration files. If an environment variable is not defined then the variable substitution string remains and results in a broken configuration file.
| priority | add defaults for all environment variables used in configuration files the environment variables are sed ded into place in various configuration files if an environment variable is not defined then the variable substitution string remains and results in a broken configuration file | 1 |
794,904 | 28,054,220,497 | IssuesEvent | 2023-03-29 08:18:44 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] [PITR] [Xcluster] Cross Replication stops working if the source universe alone is restored to an earlier point in time | kind/bug area/docdb priority/medium | Jira Link: [DB-4248](https://yugabyte.atlassian.net/browse/DB-4248)
### Description
- Create a source and target universe on 2.16.0 (tablet splitting is automatically enabled) with YSQL packed columns enabled. (I don't expect packed columns to be influencing this issue; adding it for documentation purposes.)
- Create identical schemas on both sides.
- Setup unidirectional Xcluster replication from source to target.
- Create a snapshot_schedule at the source side alone and collect timestamps after various DML operations
- Load data and validate the replication
- Restore source to one of the collected timestamps.
**Issue:** While the PITR restore succeeds, replication stops working after this restore. Any inserts, deletes and updates in the source universe stop reflecting in the target universe.
Logs to be uploaded shortly. | 1.0 | [DocDB] [PITR] [Xcluster] Cross Replication stops working if the source universe alone is restored to an earlier point in time - Jira Link: [DB-4248](https://yugabyte.atlassian.net/browse/DB-4248)
### Description
- Create a source and target universe on 2.16.0 (tablet splitting is automatically enabled) with YSQL packed columns enabled. (I don't expect packed columns to be influencing this issue; adding it for documentation purposes.)
- Create identical schemas on both sides.
- Setup unidirectional Xcluster replication from source to target.
- Create a snapshot_schedule at the source side alone and collect timestamps after various DML operations
- Load data and validate the replication
- Restore source to one of the collected timestamps.
**Issue:** While the PITR restore succeeds, replication stops working after this restore. Any inserts, deletes and updates in the source universe stop reflecting in the target universe.
Logs to be uploaded shortly. | priority | cross replication stops working if the source universe alone is restored to an earlier point in time jira link description create a source and target universe on tablet splitting is automatically enabled with ysql packed columns enabled i dont expect packed columns to be influencing this issue adding it for documentational purposes create identical schemas on both sides setup unidirectional xcluster replication from source to target create a snapshot schedule at the source side alone and collect timestamps after various dml operations load data and validate the replication restore source to one of the collected timestamps issue while the pitr restore succeeds replication stops working after this restore any inserts deletes and updates in the source universe stop reflecting in the target universe logs to be uploaded shortly | 1 |
217,951 | 7,329,378,997 | IssuesEvent | 2018-03-05 04:34:36 | karlogonzales/ministocks-390 | https://api.github.com/repos/karlogonzales/ministocks-390 | closed | AAD, I want to investigate how to add stock data without limits. | Investigation Priority: Medium Risk: Low Story Points: 8 | # Description
Currently on the app, the implemented widget views allow a limited number of stocks that the user can add. This number depends on the size of the selected widget view.
The example below shows the number of stocks allowed for a 2x4 widget on the Nexus 5X.

# Investigate
Below are some aspects that can be investigated in order to solve this issue.
- [ ] Look into whether it is necessary to implement a different architecture of data display than the one currently implemented.
- [ ] Look into how to reuse the available code.
- [ ] Look into how widget layout and widget provider interacts.
- [ ] Look into internal storage for app development.
| 1.0 | AAD, I want to investigate how to add stock data without limits. - # Description
Currently on the app, the implemented widget views allow a limited number of stocks that the user can add. This number depends on the size of the selected widget view.
The example below shows the number of stocks allowed for a 2x4 widget on the Nexus 5X.

# Investigate
Below are some aspects that can be investigated in order to solve this issue.
- [ ] Look into whether it is necessary to implement a different architecture of data display than the one currently implemented.
- [ ] Look into how to reuse the available code.
- [ ] Look into how widget layout and widget provider interacts.
- [ ] Look into internal storage for app development.
| priority | aad i want to investigate how to add stock datas without limits description currently on the app the implemented widget views allow a limited number of stocks that the user can add this number depends on the size of the selected widget view the example below shows the number of stocked allowed for a widget on the nexus investigate below are some aspects that can be investigated in order to solve this issue look into whether it is necessary to implement a different architecture of data display then the one currently implemented look into how to reuse the available code look into how widget layout and widget provider interacts look into internal storage for app development | 1 |
484,772 | 13,957,348,115 | IssuesEvent | 2020-10-24 06:17:44 | AY2021S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-W16-3/tp | opened | Restrict users from using certain commands for Frequent Transactions | priority.medium :2nd_place_medal: type.enhancement :+1: | Currently, users can edit, delete and convert frequent incomes and expenses from any tab. This might not be user-friendly as the user will not be able to see which frequent income and expense they want to execute the respective command on.
Restrict the edit, delete and convert commands for both `Frequent Incomes` and `Frequent Expenses` by creating a generic edit and delete command which will be dependent on the `UiState`.
| 1.0 | Restrict users from using certain commands for Frequent Transactions - Currently, users can edit, delete and convert frequent incomes and expenses from any tab. This might not be user-friendly as the user will not be able to see which frequent income and expense they want to execute the respective command on.
Restrict the edit, delete and convert commands for both `Frequent Incomes` and `Frequent Expenses` by creating a generic edit and delete command which will be dependent on the `UiState`.
| priority | restrict users from using certain commands for frequent transactions currently user can utilise edit delete and convert frequent incomes and expenses from any tab this might not be user friendly as the user will not be able to see which frequent income and expense they want to execute the respective command on to restrict the edit delete and convert commands for both frequent incomes and frequent expenses by creating a generic edit and delete command which will be dependent on the uistate | 1 |
259,286 | 8,196,586,236 | IssuesEvent | 2018-08-31 10:19:31 | jibrelnetwork/jwallet-web | https://api.github.com/repos/jibrelnetwork/jwallet-web | closed | Create consistent representation for the fractions of ETH/Tokens | priority: medium | ETH and most of the tokens have 18 numbers after the dot.
We need to show it somehow
We can use these names:
- kwei/ada
- mwei/babbage
- gwei/shannon
- szabo
- finney
- ether
- kether/grand/einstein
- mether
- gether
- tether
And show the full number in the tooltip | 1.0 | Create consistent representation for the fractions of ETH/Tokens - ETH and most of the tokens have 18 numbers after the dot.
We need to show it somehow
We can use these names:
- kwei/ada
- mwei/babbage
- gwei/shannon
- szabo
- finney
- ether
- kether/grand/einstein
- mether
- gether
- tether
And show the full number in the tooltip | priority | create consistent representation for the fractions of eth tokens eth and most of the tokens have numbers after the dot we need to show it somehow we can use these names kwei ada mwei babbage gwei shannon szabo finney ether kether grand einstein mether gether tether and show the full number in the tooltip | 1 |
33,021 | 2,761,380,832 | IssuesEvent | 2015-04-28 16:56:27 | dobidoberman1/Mystic-5.4.8-Bug-Tracker | https://api.github.com/repos/dobidoberman1/Mystic-5.4.8-Bug-Tracker | closed | Inviting to party needs to be done 2 times to work. | Medium Priority | Bug Priority: Low Prority
Bug Type: interface issues.
Bug description: Bug is annoying, makes you to invite a certain person 2 times in order to get in your party.
How is it supposed to work?: Invite some 1 time instead of 2 times.
| 1.0 | Inviting to party needs to be done 2 times to work. - Bug Priority: Low Prority
Bug Type: interface issues.
Bug description: Bug is annoying, makes you to invite a certain person 2 times in order to get in your party.
How is it supposed to work?: Invite some 1 time instead of 2 times.
| priority | inviting to party needs to be done times to work bug priority low prority bug type interface issues bug description bug is annoying makes you to invite a certain person times in order to get in your party how is it supposed to work invite some time instead of times | 1 |
668,797 | 22,598,218,467 | IssuesEvent | 2022-06-29 06:36:48 | AustralianCancerDataNetwork/pydicer | https://api.github.com/repos/AustralianCancerDataNetwork/pydicer | opened | PET conversion error | bug Convert medium priority | `List index out of range` error when converting some PET images from a particular dataset. Need to investigate PET conversion code... | 1.0 | PET conversion error - `List index out of range` error when converting some PET images from a particular dataset. Need to investigate PET conversion code... | priority | pet conversion error list index out of range error when converting some pet images from a particular dataset need to investigate pet conversion code | 1 |
26,367 | 2,684,340,614 | IssuesEvent | 2015-03-28 21:54:10 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | not working after update | 1 star bug duplicate imported Priority-Medium | _From [jaroslav...@gmail.com](https://code.google.com/u/109216991801443760409/) on April 22, 2013 04:43:42_
Required information! OS version: Win2k/WinXP/Vista/Win7/Win8 SP? x86/x64 ConEmu version: ? Far version (if you are using Far Manager): ? After update 22.4.2013, con emu refused to start.
Assertion lbFound at ConEmu .cpp:13657
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1036_ | 1.0 | not working after update - _From [jaroslav...@gmail.com](https://code.google.com/u/109216991801443760409/) on April 22, 2013 04:43:42_
Required information! OS version: Win2k/WinXP/Vista/Win7/Win8 SP? x86/x64 ConEmu version: ? Far version (if you are using Far Manager): ? After update 22.4.2013, con emu refused to start.
Assertion lbFound at ConEmu .cpp:13657
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=1036_ | priority | not working after update from on april required information os version winxp vista sp conemu version far version if you are using far manager after update con emu refused to start assertion lbfound at conemu cpp original issue | 1 |
645,383 | 21,003,509,423 | IssuesEvent | 2022-03-29 19:54:24 | LBNL-ETA/BEDES-Manager | https://api.github.com/repos/LBNL-ETA/BEDES-Manager | closed | review workflow for mapping terms, creating composite terms, etc. | enhancement medium priority | Also, incorporate different search logic? | 1.0 | review workflow for mapping terms, creating composite terms, etc. - Also, incorporate different search logic? | priority | review workflow for mapping terms creating composite terms etc also incorporate different search logic | 1 |
604,714 | 18,717,401,627 | IssuesEvent | 2021-11-03 07:39:06 | cyntaria/UniPal-Backend | https://api.github.com/repos/cyntaria/UniPal-Backend | opened | [GET] A Single Teacher | Status: Pending Priority: Medium user story Type: Feature | ### Summary
As an `admin`, I should be able to **get details of a teacher**, so that I can **understand what information it represents**.
### Acceptance Criteria
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, the unique id of the entity for which the details are needed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- headers
- teacher details
Sample Request/Sample Response
```
headers: {
error: 0,
message: "..."
}
body: {
teacher_id: 1,
full_name: "Waseem Arain",
average_rating: 4.0,
total_reviews: 40
}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
This endpoint is accessible by and serves the admin in the same way.
### Testing Notes
##### Scenario 1: GET request is successful
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`
**THEN** the app should receive a status `200`
**AND** the `{id}` in the body should be same as the `:id` in the path parameter
##### Scenario 2: GET request is unsuccessful
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, **a non-existent id**
**THEN** the app should receive a status `404`
**AND** the response headers' `id` parameter should contain "**_NotFoundException_**"
#### Scenario 3: GET request is forbidden
**GIVEN** a `student` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request
**THEN** the app should receive a status `403`
**AND** the response headers' `id` parameter should contain "**_TokenMissingException_**"
#### Scenario 4: GET request is unauthorized
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request
**AND** the request contains no **authorization token**
**THEN** the app should receive a status `401`
**AND** the response headers' `id` parameter should contain "**_TokenMissingException_**" | 1.0 | [GET] A Single Teacher - ### Summary
As an `admin`, I should be able to **get details of a teacher**, so that I can **understand what information it represents**.
### Acceptance Criteria
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, the unique id of the entity for which the details are needed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- headers
- teacher details
Sample Request/Sample Response
```
headers: {
error: 0,
message: "..."
}
body: {
teacher_id: 1,
full_name: "Waseem Arain",
average_rating: 4.0,
total_reviews: 40
}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
This endpoint is accessible by and serves the admin in the same way.
### Testing Notes
##### Scenario 1: GET request is successful
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`
**THEN** the app should receive a status `200`
**AND** the `{id}` in the body should be same as the `:id` in the path parameter
##### Scenario 2: GET request is unsuccessful
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request, containing the path parameter:
- `:id`, **a non-existent id**
**THEN** the app should receive a status `404`
**AND** the response headers' `id` parameter should contain "**_NotFoundException_**"
#### Scenario 3: GET request is forbidden
**GIVEN** a `student` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request
**THEN** the app should receive a status `403`
**AND** the response headers' `id` parameter should contain "**_TokenMissingException_**"
#### Scenario 4: GET request is unauthorized
**GIVEN** an `admin` is *requesting details of a teacher* in the app
**WHEN** the app hits the `/teachers/:id` endpoint with a valid GET request
**AND** the request contains no **authorization token**
**THEN** the app should receive a status `401`
**AND** the response headers' `id` parameter should contain "**_TokenMissingException_**" | priority | a single teacher summary as an admin i should be able to get details of a teacher so that i can understand what information it represents acceptance criteria given an admin is requesting details of a teacher in the app when the app hits the teachers id endpoint with a valid get request containing the path parameter id the unique id of the entity for which the details are needed then the app should receive a status and in the response the following information should be returned headers teacher details sample request sample response headers error message body teacher id full name waseem arain average rating total reviews resources development url here goes a url to the feature on development api production url here goes a url to the feature on production api dev notes this endpoint is accessible by and serves the admin in the same way testing notes scenario get request is successful given an admin is requesting details of a teacher in the app when the app hits the teachers id endpoint with a valid get request containing the path parameter id then the app should receive a status and the id in the body should be same as the id in the path parameter scenario get request is unsuccessful given an admin is requesting details of a teacher in the app when the app hits the teachers id endpoint with a valid get request containing the path parameter id a non existent id then the app should receive a status and the response headers id parameter should contain notfoundexception scenario get request is forbidden given a student is requesting details of a teacher in the app when the app hits the teachers id endpoint with a valid get request then the app should receive a status and the response headers id parameter should contain tokenmissingexception scenario get request is unauthorized given an admin is requesting details of a teacher in the app when the app hits the teachers id 
endpoint with a valid get request and the request contains no authorization token then the app should receive a status and the response headers id parameter should contain tokenmissingexception | 1 |
669,308 | 22,619,096,074 | IssuesEvent | 2022-06-30 03:25:36 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB][Perf][Sysbench][Index-update] Observed tserver killed with SIGTERM "boost::asio::detail::epoll_reactor::run()" during load phase. | kind/bug area/docdb priority/medium | Jira Link: [DB-707](https://yugabyte.atlassian.net/browse/DB-707)
### Description:
Observed "Tserver" crash with SIGTERM, during LOAD phase, while running "Index Update" workload with below sysbench command:
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=10 load`
### Setup:
- YB version: "**YB-2.13.2.0-b0**"
- YB cluster: CentoOS 4 node cluster running with c5.xlarge instance type.
- Client: Ubuntu
### Steps:
- Create CentOS 4 node yb cluster
- After installing sysbench from yb repository on any client machine run sysbench create to create required schema after creating "sbtest" db
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=1 create`
- After CREATE phase try to run LOAD phase, on client
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=10 load`
- On Client LOAD phase gives below error after some time ( approx 5-10 min )
`FATAL: PQexec() failed: 7 Timed out: Perform RPC (request call id 52815) to 172.151.28.247:9100 timed out after 120.000s`
- Further debugging on failed YB node reviled "tserver" killed with SIGTERM.
```In YugabyteDB, setting LC_COLLATE to C and all other locale settings to en_US.UTF-8 by default. Locale support will be enhanced as part of addressing https://github.com/yugabyte/yugabyte-db/issues/1557
2022-05-10 12:25:21.961 UTC [19757] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-05-10 12:25:21.969 UTC [19757] LOG: pgaudit extension initialized
2022-05-10 12:25:21.969 UTC [19757] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-05-10 12:25:21.973 UTC [19757] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-05-10 12:25:21.986 UTC [19757] LOG: redirecting log output to logging collector process
2022-05-10 12:25:21.986 UTC [19757] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".
*** Aborted at 1652195831 (unix time) try "date -d @1652195831" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGTERM (@0x3e500004c0b) received by PID 19717 (TID 0x7f8c311c51c0) from PID 19467; stack trace: ***
@ 0x7f8c302f4120 (unknown)
@ 0x7f8c303a79f3 __GI_epoll_wait
@ 0x2f9cb3d boost::asio::detail::epoll_reactor::run()
@ 0x2f984a9 boost::asio::detail::scheduler::run()
@ 0x2f952c9 yb::tserver::(anonymous namespace)::TabletServerMain()
@ 0x2f8f529 main
@ 0x7f8c302e1825 __libc_start_main
@ 0x26f902e _start
2022-05-11 03:53:32.112 UTC [10103] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-05-11 03:53:32.131 UTC [10103] LOG: pgaudit extension initialized
2022-05-11 03:53:32.133 UTC [10103] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-05-11 03:53:32.139 UTC [10103] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-05-11 03:53:32.174 UTC [10103] LOG: redirecting log output to logging collector process
2022-05-11 03:53:32.174 UTC [10103] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".
*** Aborted at 1652441839 (unix time) try "date -d @1652441839" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGTERM (@0x3e5000071f6) received by PID 10064 (TID 0x7f4514a661c0) from PID 29174; stack trace: ***
@ 0x7f4513b95120 (unknown)
@ 0x7f4513c489f3 __GI_epoll_wait
@ 0x2f9cb3d boost::asio::detail::epoll_reactor::run()
@ 0x2f984a9 boost::asio::detail::scheduler::run()
@ 0x2f952c9 yb::tserver::(anonymous namespace)::TabletServerMain()
@ 0x2f8f529 main
@ 0x7f4513b82825 __libc_start_main
@ 0x26f902e _start
2022-06-08 03:59:22.190 UTC [1394] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-06-08 03:59:22.225 UTC [1394] LOG: pgaudit extension initialized
2022-06-08 03:59:22.227 UTC [1394] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-06-08 03:59:22.236 UTC [1394] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-06-08 03:59:22.271 UTC [1394] LOG: redirecting log output to logging collector process
2022-06-08 03:59:22.271 UTC [1394] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".``` | 1.0 | [DocDB][Perf][Sysbench][Index-update] Observed tserver killed with SIGTERM "boost::asio::detail::epoll_reactor::run()" during load phase. - Jira Link: [DB-707](https://yugabyte.atlassian.net/browse/DB-707)
### Description:
Observed "Tserver" crash with SIGTERM, during LOAD phase, while running "Index Update" workload with below sysbench command:
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=10 load`
### Setup:
- YB version: "**YB-2.13.2.0-b0**"
- YB cluster: CentoOS 4 node cluster running with c5.xlarge instance type.
- Client: Ubuntu
### Steps:
- Create CentOS 4 node yb cluster
- After installing sysbench from yb repository on any client machine run sysbench create to create required schema after creating "sbtest" db
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=1 create`
- After CREATE phase try to run LOAD phase, on client
`sysbench /usr/local/share/sysbench/oltp_update_index.lua --db-driver=pgsql --pgsql-db=sbtest --pgsql-host=172.151.17.85,172.151.23.254,172.151.25.126,172.151.28.247 --pgsql-port=5433 --pgsql-user=yugabyte --tables=100 --table-size=4000000 --serial_cache_size=0 --index_updates=10 --range_selects=false --time=600 --warmup-time=300 --num_rows_in_insert=10 --threads=10 load`
- On Client LOAD phase gives below error after some time ( approx 5-10 min )
`FATAL: PQexec() failed: 7 Timed out: Perform RPC (request call id 52815) to 172.151.28.247:9100 timed out after 120.000s`
- Further debugging on failed YB node reviled "tserver" killed with SIGTERM.
```In YugabyteDB, setting LC_COLLATE to C and all other locale settings to en_US.UTF-8 by default. Locale support will be enhanced as part of addressing https://github.com/yugabyte/yugabyte-db/issues/1557
2022-05-10 12:25:21.961 UTC [19757] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-05-10 12:25:21.969 UTC [19757] LOG: pgaudit extension initialized
2022-05-10 12:25:21.969 UTC [19757] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-05-10 12:25:21.973 UTC [19757] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-05-10 12:25:21.986 UTC [19757] LOG: redirecting log output to logging collector process
2022-05-10 12:25:21.986 UTC [19757] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".
*** Aborted at 1652195831 (unix time) try "date -d @1652195831" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGTERM (@0x3e500004c0b) received by PID 19717 (TID 0x7f8c311c51c0) from PID 19467; stack trace: ***
@ 0x7f8c302f4120 (unknown)
@ 0x7f8c303a79f3 __GI_epoll_wait
@ 0x2f9cb3d boost::asio::detail::epoll_reactor::run()
@ 0x2f984a9 boost::asio::detail::scheduler::run()
@ 0x2f952c9 yb::tserver::(anonymous namespace)::TabletServerMain()
@ 0x2f8f529 main
@ 0x7f8c302e1825 __libc_start_main
@ 0x26f902e _start
2022-05-11 03:53:32.112 UTC [10103] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-05-11 03:53:32.131 UTC [10103] LOG: pgaudit extension initialized
2022-05-11 03:53:32.133 UTC [10103] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-05-11 03:53:32.139 UTC [10103] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-05-11 03:53:32.174 UTC [10103] LOG: redirecting log output to logging collector process
2022-05-11 03:53:32.174 UTC [10103] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".
*** Aborted at 1652441839 (unix time) try "date -d @1652441839" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGTERM (@0x3e5000071f6) received by PID 10064 (TID 0x7f4514a661c0) from PID 29174; stack trace: ***
@ 0x7f4513b95120 (unknown)
@ 0x7f4513c489f3 __GI_epoll_wait
@ 0x2f9cb3d boost::asio::detail::epoll_reactor::run()
@ 0x2f984a9 boost::asio::detail::scheduler::run()
@ 0x2f952c9 yb::tserver::(anonymous namespace)::TabletServerMain()
@ 0x2f8f529 main
@ 0x7f4513b82825 __libc_start_main
@ 0x26f902e _start
2022-06-08 03:59:22.190 UTC [1394] LOG: YugaByte is ENABLED in PostgreSQL. Transactions are enabled.
2022-06-08 03:59:22.225 UTC [1394] LOG: pgaudit extension initialized
2022-06-08 03:59:22.227 UTC [1394] LOG: listening on IPv4 address "172.151.28.247", port 5433
2022-06-08 03:59:22.236 UTC [1394] LOG: listening on Unix socket "/tmp/.yb.17662450902068151697/.s.PGSQL.5433"
2022-06-08 03:59:22.271 UTC [1394] LOG: redirecting log output to logging collector process
2022-06-08 03:59:22.271 UTC [1394] HINT: Future log output will appear in directory "/mnt/d0/yb-data/tserver/logs".``` | priority | observed tserver killed with sigterm boost asio detail epoll reactor run during load phase jira link description observed tserver crash with sigterm during load phase while running index update workload with below sysbench command sysbench usr local share sysbench oltp update index lua db driver pgsql pgsql db sbtest pgsql host pgsql port pgsql user yugabyte tables table size serial cache size index updates range selects false time warmup time num rows in insert threads load setup yb version yb yb cluster centoos node cluster running with xlarge instance type client ubuntu steps create centos node yb cluster after installing sysbench from yb repository on any client machine run sysbench create to create required schema after creating sbtest db sysbench usr local share sysbench oltp update index lua db driver pgsql pgsql db sbtest pgsql host pgsql port pgsql user yugabyte tables table size serial cache size index updates range selects false time warmup time num rows in insert threads create after create phase try to run load phase on client sysbench usr local share sysbench oltp update index lua db driver pgsql pgsql db sbtest pgsql host pgsql port pgsql user yugabyte tables table size serial cache size index updates range selects false time warmup time num rows in insert threads load on client load phase gives below error after some time approx min fatal pqexec failed timed out perform rpc request call id to timed out after further debugging on failed yb node reviled tserver killed with sigterm in yugabytedb setting lc collate to c and all other locale settings to en us utf by default locale support will be enhanced as part of addressing utc log yugabyte is enabled in postgresql transactions are enabled utc log pgaudit extension initialized utc log listening on address port utc log listening on unix socket tmp yb s pgsql utc log 
redirecting log output to logging collector process utc hint future log output will appear in directory mnt yb data tserver logs aborted at unix time try date d if you are using gnu date pc unknown sigterm received by pid tid from pid stack trace unknown gi epoll wait boost asio detail epoll reactor run boost asio detail scheduler run yb tserver anonymous namespace tabletservermain main libc start main start utc log yugabyte is enabled in postgresql transactions are enabled utc log pgaudit extension initialized utc log listening on address port utc log listening on unix socket tmp yb s pgsql utc log redirecting log output to logging collector process utc hint future log output will appear in directory mnt yb data tserver logs aborted at unix time try date d if you are using gnu date pc unknown sigterm received by pid tid from pid stack trace unknown gi epoll wait boost asio detail epoll reactor run boost asio detail scheduler run yb tserver anonymous namespace tabletservermain main libc start main start utc log yugabyte is enabled in postgresql transactions are enabled utc log pgaudit extension initialized utc log listening on address port utc log listening on unix socket tmp yb s pgsql utc log redirecting log output to logging collector process utc hint future log output will appear in directory mnt yb data tserver logs | 1 |
608,023 | 18,796,295,977 | IssuesEvent | 2021-11-08 22:51:42 | nickelswitte/imageblog | https://api.github.com/repos/nickelswitte/imageblog | closed | Switching indicators | medium priority | Showing arrow indicators for next/previous picture when pictures are maximized. | 1.0 | Switching indicators - Showing arrow indicators for next/previous picture when pictures are maximized. | priority | switching indicators showing arrow indicators for next previous picture when pictures are maximized | 1 |
575,297 | 17,026,766,224 | IssuesEvent | 2021-07-03 17:38:56 | hochschule-darmstadt/openartbrowser | https://api.github.com/repos/hochschule-darmstadt/openartbrowser | closed | Carousel buttons not aligned properly | User Interface bug medium priority | **Describe the bug**
In the new version, the buttons of the carousel are no longer aligned properly.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any artwork page (desktop version)
3. Scroll down to 'Related Artworks'
4. See error
**Expected behavior**
The buttons should be centered vertically.
**Screenshots**

**Solution**
Due to different slide sizes on mobile next and prev button were made to stay on the same height, so "align-items: center" was accidentally overwritten with
```
.carousel-control-prev, .carousel-control-next {
....
align-items: initial !important;
margin-top: 10em;
}
```
in carousel.component.scss. These css lines should have been added to a media query. To solve this bug just add the css lines above to the media query "@media (max-width: 575px)" instead so that these css rules are only applied on mobile.
| 1.0 | Carousel buttons not aligned properly - **Describe the bug**
In the new version, the buttons of the carousel are no longer aligned properly.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any artwork page (desktop version)
3. Scroll down to 'Related Artworks'
4. See error
**Expected behavior**
The buttons should be centered vertically.
**Screenshots**

**Solution**
Due to different slide sizes on mobile next and prev button were made to stay on the same height, so "align-items: center" was accidentally overwritten with
```
.carousel-control-prev, .carousel-control-next {
....
align-items: initial !important;
margin-top: 10em;
}
```
in carousel.component.scss. These css lines should have been added to a media query. To solve this bug just add the css lines above to the media query "@media (max-width: 575px)" instead so that these css rules are only applied on mobile.
| priority | carousel buttons not aligned properly describe the bug in the new version the buttons of the carousel are no longer aligned properly to reproduce steps to reproduce the behavior go to any artwork page desktop version scroll down to related artworks see error expected behavior the buttons should be centered vertically screenshots solution due to different slide sizes on mobile next and prev button were made to stay on the same height so align items center was accidentally overwritten with carousel control prev carousel control next align items initial important margin top in carousel component scss these css lines should have been added to a media query to solve this bug just add the css lines above to the media query media max width instead so that these css rules are only applied on mobile | 1 |
747,307 | 26,081,159,199 | IssuesEvent | 2022-12-25 11:24:50 | bounswe/bounswe2022group8 | https://api.github.com/repos/bounswe/bounswe2022group8 | closed | BE-36: User Level | Effort: Medium Priority: Medium Status: In Progress Coding Team: Backend | ### What's up?
**Level:** Each user belongs to a specific level group, levels are based on the user's _interaction_ with the platform.
We decided to implement functionality for deciding on a user's level.
### To Do
* [x] Decide on the specifics of parameters a user is required to meet, to reach level_2.
* [x] Create a function for calculating user's level.
* [x] Create an API for returning user's level.
* [x] Update User model and(or) profile API for including user level.
### Deadline
25.12.2022 @23.59
### Additional Information
_No response_
### Reviewers
@KarahanS @mumcusena | 1.0 | BE-36: User Level - ### What's up?
**Level:** Each user belongs to a specific level group, levels are based on the user's _interaction_ with the platform.
We decided to implement functionality for deciding on a user's level.
### To Do
* [x] Decide on the specifics of parameters a user is required to meet, to reach level_2.
* [x] Create a function for calculating user's level.
* [x] Create an API for returning user's level.
* [x] Update User model and(or) profile API for including user level.
### Deadline
25.12.2022 @23.59
### Additional Information
_No response_
### Reviewers
@KarahanS @mumcusena | priority | be user level what s up level each user belongs to a specific level group levels are based on the user s interaction with the platform we decided to implement functionality for deciding on a user s level to do decide on the specifics of parameters a user is required to meet to reach level create a function for calculating user s level create an api for returning user s level update user model and or profile api for including user level deadline additional information no response reviewers karahans mumcusena | 1 |
78,133 | 3,509,478,319 | IssuesEvent | 2016-01-08 22:58:04 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | opened | [crash] WorldSession::SendPacket (BB #902) | Category: Instances migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** bbazarragchaa
**Original Date:** 24.05.2015 05:53:01 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/902
<hr>
Revision: d75a34d5fa8f1084d230e295ee88225ebf7cbcd8
Crash log:
```
#!Revision: OregonCore Rev: 0 Hash: Archive (Win64,little-endian)
Date 24:5:2015. Time 13:27
//=====================================================
*** Hardware ***
Processor: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
Number Of Processors: 32
Physical Memory: 33507880 KB (Available: 15282564 KB)
Commit Charge Limit: 39242280 KB
*** Operation System ***
Windows Server 2012 Server 4.0 (Version 6.2, Build 9200)
//=====================================================
Exception code: C0000005 ACCESS_VIOLATION
Fault address: 000007F6F5BACB59 01:00000000000ABB59 C:\running_cores\oregoncore243\oregon-core.exe
Registers:
RAX:000000D1823978D4
RBX:0000000000000000
RCX:0000000000000000
RDX:000000D174F1F318
RSI:00000000000009A8
RDI:0000000000000000
R8: 000007FAFB18CB50
R9: 000000D1823978D4
R10:000000D1823978D0
R11:000000D17DCBE8D8
R12:000007F6F5B00000
R13:000007F6F62D9340
R14:000007F6F62D9358
R15:0000000000000001
CS:RIP:0033:000007F6F5BACB59
SS:RSP:002B:0000000074F1F2B0 RBP:74F1F350
DS:002B ES:002B FS:0053 GS:002B
Flags:00010202
Call stack:
Address Frame Function SourceFile
000007F6F5BACB59 000000D174F1F2D0 WorldSession::SendPacket+9 c:\source\oregoncore\src\game\worldsession.cpp line 99
000007F6F5C18F5A 000000D174F1F360 Player::SendUpdateWorldState+16A c:\source\oregoncore\src\game\player.cpp line 8098
000007F6F5E8F22E 000000D174F1F390 OutdoorPvP::SendUpdateWorldState+4E c:\source\oregoncore\src\game\outdoorpvp.cpp line 387
000007F6F5E91E17 000000D174F1F400 OPvPCapturePointHP::ChangeState+117 c:\source\oregoncore\src\game\outdoorpvphp.cpp line 198
000007F6F5E8FAB2 000000D174F1F4D0 OPvPCapturePoint::Update+482 c:\source\oregoncore\src\game\outdoorpvp.cpp line 378
000007F6F5E8FB7C 000000D174F1F500 OutdoorPvP::Update+4C c:\source\oregoncore\src\game\outdoorpvp.cpp line 259
000007F6F5E927F2 000000D174F1F530 OutdoorPvPHP::Update+12 c:\source\oregoncore\src\game\outdoorpvphp.cpp line 112
000007F6F5D17E0F 000000D174F1F560 OutdoorPvPMgr::Update+3F c:\source\oregoncore\src\game\outdoorpvpmgr.cpp line 174
000007F6F5B31428 000000D174F1F5B0 World::Update+378 c:\source\oregoncore\src\game\world.cpp line 1817
000007F6F5B15F99 000000D174F1F5E0 WorldRunnable::run+69 c:\source\oregoncore\src\oregoncore\worldrunnable.cpp line 59
000007F6F5B109A2 000000D174F1F710 Master::Run+542 c:\source\oregoncore\src\oregoncore\master.cpp line 248
000007F6F5B0F7F2 000000D174F1F830 ace_main_i+352 c:\source\oregoncore\src\oregoncore\main.cpp line 164
000007F6F5B0FE80 000000D174F1F870 main+40 c:\source\oregoncore\src\oregoncore\main.cpp line 75
000007F6F610931B 000000D174F1F8A0 __tmainCRTStartup+10F f:\dd\vctools\crt\crtw32\dllstuff\crtexe.c line 626
000007FB09BE167E 000000D174F1F8D0 BaseThreadInitThunk+1A
000007FB0A25C3F1 000000D174F1F920 RtlUserThreadStart+21
``` | 1.0 | [crash] WorldSession::SendPacket (BB #902) - This issue was migrated from bitbucket.
**Original Reporter:** bbazarragchaa
**Original Date:** 24.05.2015 05:53:01 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/902
<hr>
Revision: d75a34d5fa8f1084d230e295ee88225ebf7cbcd8
Crash log:
```
#!Revision: OregonCore Rev: 0 Hash: Archive (Win64,little-endian)
Date 24:5:2015. Time 13:27
//=====================================================
*** Hardware ***
Processor: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
Number Of Processors: 32
Physical Memory: 33507880 KB (Available: 15282564 KB)
Commit Charge Limit: 39242280 KB
*** Operation System ***
Windows Server 2012 Server 4.0 (Version 6.2, Build 9200)
//=====================================================
Exception code: C0000005 ACCESS_VIOLATION
Fault address: 000007F6F5BACB59 01:00000000000ABB59 C:\running_cores\oregoncore243\oregon-core.exe
Registers:
RAX:000000D1823978D4
RBX:0000000000000000
RCX:0000000000000000
RDX:000000D174F1F318
RSI:00000000000009A8
RDI:0000000000000000
R8: 000007FAFB18CB50
R9: 000000D1823978D4
R10:000000D1823978D0
R11:000000D17DCBE8D8
R12:000007F6F5B00000
R13:000007F6F62D9340
R14:000007F6F62D9358
R15:0000000000000001
CS:RIP:0033:000007F6F5BACB59
SS:RSP:002B:0000000074F1F2B0 RBP:74F1F350
DS:002B ES:002B FS:0053 GS:002B
Flags:00010202
Call stack:
Address Frame Function SourceFile
000007F6F5BACB59 000000D174F1F2D0 WorldSession::SendPacket+9 c:\source\oregoncore\src\game\worldsession.cpp line 99
000007F6F5C18F5A 000000D174F1F360 Player::SendUpdateWorldState+16A c:\source\oregoncore\src\game\player.cpp line 8098
000007F6F5E8F22E 000000D174F1F390 OutdoorPvP::SendUpdateWorldState+4E c:\source\oregoncore\src\game\outdoorpvp.cpp line 387
000007F6F5E91E17 000000D174F1F400 OPvPCapturePointHP::ChangeState+117 c:\source\oregoncore\src\game\outdoorpvphp.cpp line 198
000007F6F5E8FAB2 000000D174F1F4D0 OPvPCapturePoint::Update+482 c:\source\oregoncore\src\game\outdoorpvp.cpp line 378
000007F6F5E8FB7C 000000D174F1F500 OutdoorPvP::Update+4C c:\source\oregoncore\src\game\outdoorpvp.cpp line 259
000007F6F5E927F2 000000D174F1F530 OutdoorPvPHP::Update+12 c:\source\oregoncore\src\game\outdoorpvphp.cpp line 112
000007F6F5D17E0F 000000D174F1F560 OutdoorPvPMgr::Update+3F c:\source\oregoncore\src\game\outdoorpvpmgr.cpp line 174
000007F6F5B31428 000000D174F1F5B0 World::Update+378 c:\source\oregoncore\src\game\world.cpp line 1817
000007F6F5B15F99 000000D174F1F5E0 WorldRunnable::run+69 c:\source\oregoncore\src\oregoncore\worldrunnable.cpp line 59
000007F6F5B109A2 000000D174F1F710 Master::Run+542 c:\source\oregoncore\src\oregoncore\master.cpp line 248
000007F6F5B0F7F2 000000D174F1F830 ace_main_i+352 c:\source\oregoncore\src\oregoncore\main.cpp line 164
000007F6F5B0FE80 000000D174F1F870 main+40 c:\source\oregoncore\src\oregoncore\main.cpp line 75
000007F6F610931B 000000D174F1F8A0 __tmainCRTStartup+10F f:\dd\vctools\crt\crtw32\dllstuff\crtexe.c line 626
000007FB09BE167E 000000D174F1F8D0 BaseThreadInitThunk+1A
000007FB0A25C3F1 000000D174F1F920 RtlUserThreadStart+21
``` | priority | worldsession sendpacket bb this issue was migrated from bitbucket original reporter bbazarragchaa original date gmt original priority major original type bug original state new direct link revision crash log revision oregoncore rev hash archive little endian date time hardware processor intel r xeon r cpu number of processors physical memory kb available kb commit charge limit kb operation system windows server server version build exception code access violation fault address c running cores oregon core exe registers rax rbx rcx rdx rsi rdi cs rip ss rsp rbp ds es fs gs flags call stack address frame function sourcefile worldsession sendpacket c source oregoncore src game worldsession cpp line player sendupdateworldstate c source oregoncore src game player cpp line outdoorpvp sendupdateworldstate c source oregoncore src game outdoorpvp cpp line opvpcapturepointhp changestate c source oregoncore src game outdoorpvphp cpp line opvpcapturepoint update c source oregoncore src game outdoorpvp cpp line outdoorpvp update c source oregoncore src game outdoorpvp cpp line outdoorpvphp update c source oregoncore src game outdoorpvphp cpp line outdoorpvpmgr update c source oregoncore src game outdoorpvpmgr cpp line world update c source oregoncore src game world cpp line worldrunnable run c source oregoncore src oregoncore worldrunnable cpp line master run c source oregoncore src oregoncore master cpp line ace main i c source oregoncore src oregoncore main cpp line main c source oregoncore src oregoncore main cpp line tmaincrtstartup f dd vctools crt dllstuff crtexe c line basethreadinitthunk rtluserthreadstart | 1 |
510,114 | 14,785,192,386 | IssuesEvent | 2021-01-12 02:07:17 | zorkind/Hellion-Rescue-Project | https://api.github.com/repos/zorkind/Hellion-Rescue-Project | closed | Helmet overlay overlaps with pause menu | HRP bug medium priority | **Vanilla / HRP**
HRP
**Client Version**
0.1.0.5
**Describe the bug**
When the visor of either the AC Mk9 or the AC Proteus helmet is active, and then open the ESC-menu, it will overlap with its components
**To Reproduce**
Steps to reproduce the behavior:
1. Equip either an AC Mk9 or the AC Proteus Helmet.
2. Hit ESC to open up the menu.
**Expected behavior**
Either:
- The helmet visor shouldn't appear while the ESC menu is active
- Helmet visor elements shouldn't interfere with the elements of the ESC Menu.
-
**Screenshots**


**Additional context**
With the proteus helmet, the issue is more subtle, since the proteus has a different visor, however you can still see it blocking both Wikipedia and Discord Links, so the issue applies to both helmets.
| 1.0 | Helmet overlay overlaps with pause menu - **Vanilla / HRP**
HRP
**Client Version**
0.1.0.5
**Describe the bug**
When the visor of either the AC Mk9 or the AC Proteus helmet is active, and then open the ESC-menu, it will overlap with its components
**To Reproduce**
Steps to reproduce the behavior:
1. Equip either an AC Mk9 or the AC Proteus Helmet.
2. Hit ESC to open up the menu.
**Expected behavior**
Either:
- The helmet visor shouldn't appear while the ESC menu is active
- Helmet visor elements shouldn't interfere with the elements of the ESC Menu.
-
**Screenshots**


**Additional context**
With the proteus helmet, the issue is more subtle, since the proteus has a different visor, however you can still see it blocking both Wikipedia and Discord Links, so the issue applies to both helmets.
| priority | helmet overlay overlaps with pause menu vanilla hrp hrp client version describe the bug when the visor of either the ac or the ac proteus helmet is active and then open the esc menu it will overlap with its components to reproduce steps to reproduce the behavior equip either an ac or the ac proteus helmet hit esc to open up the menu expected behavior either the helmet visor shouldn t appear while the esc menu is active helmet visor elements shouldn t interfere with the elements of the esc menu screenshots additional context with the proteus helmet the issue is more subtle since the proteus has a different visor however you can still see it blocking both wikipedia and discord links so the issue applies to both helmets | 1 |
401,261 | 11,787,980,411 | IssuesEvent | 2020-03-17 14:52:32 | loadimpact/har-to-k6 | https://api.github.com/repos/loadimpact/har-to-k6 | closed | Add Dockerfile | Priority: Medium Status: Available Type: Improvement | This tool is missing a `Dockerfile`, and a CI process that pushes new releases to the Docker Hub, which would be very useful for non-JS developers. Now, I either have to run it with `npx` (super slow and annoying) or I have to install it with `npm install` | 1.0 | Add Dockerfile - This tool is missing a `Dockerfile`, and a CI process that pushes new releases to the Docker Hub, which would be very useful for non-JS developers. Now, I either have to run it with `npx` (super slow and annoying) or I have to install it with `npm install` | priority | add dockerfile this tool is missing a dockerfile and a ci process that pushes new releases to the docker hub which would be very useful for non js developers now i either have to run it with npx super slow and annoying or i have to install it with npm install | 1 |
7,341 | 2,601,757,135 | IssuesEvent | 2015-02-24 00:33:30 | chrsmith/bwapi | https://api.github.com/repos/chrsmith/bwapi | closed | Load an older revision for non-compatible AI modules | auto-migrated Priority-Medium Type-Enhancement Usability | ```
Because that would be cool too.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 6 Nov 2010 at 2:33 | 1.0 | Load an older revision for non-compatible AI modules - ```
Because that would be cool too.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 6 Nov 2010 at 2:33 | priority | load an older revision for non compatible ai modules because that would be cool too original issue reported on code google com by aheinerm on nov at | 1 |
53,521 | 3,040,719,177 | IssuesEvent | 2015-08-07 16:57:14 | scamille/simc_issue_test3 | https://api.github.com/repos/scamille/simc_issue_test3 | closed | Bug with extending dot durations/multiple of those in same actions list. | bug imported Priority-Medium | _From [Twigele](https://code.google.com/u/Twigele/) on March 13, 2009 02:05:43_
What steps will reproduce the problem? http://elitistjerks.com/f73/t48226-simulationcraft_fur_feather_wearers/p2/#post1146178 Post has profiles etc.
The second rip in the action list is not aware of the ticks that got added
to the first one via extend_duration()
_Original issue: http://code.google.com/p/simulationcraft/issues/detail?id=38_ | 1.0 | Bug with extending dot durations/multiple of those in same actions list. - _From [Twigele](https://code.google.com/u/Twigele/) on March 13, 2009 02:05:43_
What steps will reproduce the problem? http://elitistjerks.com/f73/t48226-simulationcraft_fur_feather_wearers/p2/#post1146178 Post has profiles etc.
The second rip in the action list is not aware of the ticks that got added
to the first one via extend_duration()
_Original issue: http://code.google.com/p/simulationcraft/issues/detail?id=38_ | priority | bug with extending dot durations multiple of those in same actions list from on march what steps will reproduce the problem post has profiles etc the second rip in the action list is not aware of the ticks that got added to the first one via extend duration original issue | 1 |
420,952 | 12,246,398,030 | IssuesEvent | 2020-05-05 14:24:17 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | py systems: Should make make debug messages with NiceTypeName prettier for Python? | component: pydrake priority: medium team: manipulation type: user assistance | Issue-ifying #13194. [See discussion](https://github.com/RobotLocomotion/drake/pull/13194#pullrequestreview-403981032)
Motivated by the error encountered here:
https://stackoverflow.com/q/61506822/7829525
> The goal is to turn this message:
> ```
> Traceback (most recent call last):
> File "debugging.py", line 24, in <module>
> DirectTranscription(sys, sys.CreateDefaultContext(), 10)
> RuntimeError: The object named [] of type drake::pydrake::(anonymous)::Impl<double>::PyVectorSystem does not support ToAutoDiffXd.
> ```
> into this message:
> ```
> Traceback (most recent call last):
> File "debugging.py", line 24, in <module>
> DirectTranscription(sys, sys.CreateDefaultContext(), 10)
> RuntimeError: The object named [] of type my_module.CustomVectorSystem does not support ToAutoDiffXd.
> ```
>
> The actual type name `drake::pydrake::(anonymous)::Impl<double>::PyVectorSystem` is rather uninformative (aside from being ugly) if you have a diagram with multiple Python-authored systems that you're trying to convert. (They'll all look the same.)
\cc @sherm1 @jwnimmer-tri | 1.0 | py systems: Should make make debug messages with NiceTypeName prettier for Python? - Issue-ifying #13194. [See discussion](https://github.com/RobotLocomotion/drake/pull/13194#pullrequestreview-403981032)
Motivated by the error encountered here:
https://stackoverflow.com/q/61506822/7829525
> The goal is to turn this message:
> ```
> Traceback (most recent call last):
> File "debugging.py", line 24, in <module>
> DirectTranscription(sys, sys.CreateDefaultContext(), 10)
> RuntimeError: The object named [] of type drake::pydrake::(anonymous)::Impl<double>::PyVectorSystem does not support ToAutoDiffXd.
> ```
> into this message:
> ```
> Traceback (most recent call last):
> File "debugging.py", line 24, in <module>
> DirectTranscription(sys, sys.CreateDefaultContext(), 10)
> RuntimeError: The object named [] of type my_module.CustomVectorSystem does not support ToAutoDiffXd.
> ```
>
> The actual type name `drake::pydrake::(anonymous)::Impl<double>::PyVectorSystem` is rather uninformative (aside from being ugly) if you have a diagram with multiple Python-authored systems that you're trying to convert. (They'll all look the same.)
\cc @sherm1 @jwnimmer-tri | priority | py systems should make make debug messages with nicetypename prettier for python issue ifying motivated by the error encountered here the goal is to turn this message traceback most recent call last file debugging py line in directtranscription sys sys createdefaultcontext runtimeerror the object named of type drake pydrake anonymous impl pyvectorsystem does not support toautodiffxd into this message traceback most recent call last file debugging py line in directtranscription sys sys createdefaultcontext runtimeerror the object named of type my module customvectorsystem does not support toautodiffxd the actual type name drake pydrake anonymous impl pyvectorsystem is rather uninformative aside from being ugly if you have a diagram with multiple python authored systems that you re trying to convert they ll all look the same cc jwnimmer tri | 1 |
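An editor's aside on the Drake record above: the fix it requests — reporting a Python-authored system as `my_module.CustomVectorSystem` rather than the C++ trampoline name `drake::pydrake::(anonymous)::Impl<double>::PyVectorSystem` — amounts to reading the Python class's module and qualified name. A minimal sketch of that idea; it assumes nothing about Drake's actual `NiceTypeName` internals, and the class and helper below are hypothetical stand-ins:

```python
# Hypothetical illustration of the prettier type name requested in the
# issue above; this is not Drake's actual NiceTypeName implementation.

def pretty_type_name(obj: object) -> str:
    """Return 'module.QualifiedName' for a Python object's class."""
    cls = type(obj)
    return f"{cls.__module__}.{cls.__qualname__}"

class CustomVectorSystem:  # stand-in for a Python-authored Drake system
    pass

# Yields e.g. "my_module.CustomVectorSystem" when the class lives in
# my_module ("__main__.CustomVectorSystem" in a top-level script).
print(pretty_type_name(CustomVectorSystem()))
```

In a pybind11-based binding layer, the same information is reachable from the C++ side via the object's `py::type`, which is presumably how a fix along these lines would hook into the existing error message.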
795,094 | 28,060,943,347 | IssuesEvent | 2023-03-29 12:36:17 | zowe/zowe-cli | https://api.github.com/repos/zowe/zowe-cli | closed | Special characters in JCL when submitting a job not correctly encoded | bug priority-medium severity-medium | **Bug Description**
-
I am a member of the CLI team and we've noted [several issues on our repo](https://github.com/zowe/zowe-cli/issues/1633) that actually have to do with the way the CLI and API ML interact.
When our users access z/OSMF through API ML with special characters in their jobnames, their requests fail. We've seen that you've since added [support for character encoding](https://github.com/zowe/api-layer/pull/804) but now we see issues with handling unencoded characters.
I am reaching out to determine a best path forward, perhaps you'd like our team to handing some of this work load.
**Steps to reproduce**
-
The following documentation was written by @juleskreutzer in [this](https://github.com/zowe/zowe-cli/issues/1596) issue
```
I ran the following command: zowe jobs submit ds --wait-for-output 'r276404.jcllib(zoweclit)'
This results in the following output:
PS C:\Users\Jules.Kreutzer> zowe jobs submit ds --wait-for-output 'r276404.jcllib(zoweclit)'
Command Error:
Error obtaining status for jobname "KT#TEST" jobid "JOB97232".
z/OSMF REST API Error:
Rest API failure with HTTP(S) status 400
<!doctype html><html lang="en"><head><title>HTTP Status 400 – Bad Request</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 – Bad Request</h1></body></html>
Error Details:
HTTP(S) error status "400" received.
Review request details (resource, base path, credentials, payload) and ensure correctness.
Protocol: https
Host: xxxx
Port: 7554
Base Path: ibmzosmf/api/v1
Resource: /zosmf/restjobs/jobs/KT#TEST/JOB97232
Request: GET
Headers: [{"X-CSRF-ZOSMF-HEADER":true}]
Payload: undefined
Contents of zoweclit JCL:
//KT#TEST JOB NLD700253170,'KREUTZER,A G1501110'
//BR14 EXEC PGM=IEFBR14
//*
```
**Expected behavior**
-
We expect users to be able to include special characters within their job names and to have that work throughout the zowe ecosystem.
**Screenshots**
-
The below screenshot proves that accessing zOSMF through API ML with a special character job name will error. The same request direct to zOSMF succeeds.

**Additional context**
-
We've compiled an [epic](https://github.com/zowe/zowe-cli/issues/1633) containing the related issues but each issue is also listed below for convenience:
- https://github.com/zowe/zowe-cli/issues/1596
- https://github.com/zowe/zowe-cli/issues/1073
- https://github.com/zowe/vscode-extension-for-zowe/issues/1215
**Environment Details**
- Version and build number: Version 1.28.13 build # n/a
- Test environment: Broadcom internal system
**API Catalog Web UI (in case of API Catalog issue):**
- NA
**REST API client (in case of REST API issue):**
- NA
**Willingness to help**
Our team is definitely willing to help - perhaps we can schedule a meeting to talk this through. | 1.0 | Special characters in JCL when submitting a job not correctly encoded - **Bug Description**
-
I am a member of the CLI team and we've noted [several issues on our repo](https://github.com/zowe/zowe-cli/issues/1633) that actually have to do with the way the CLI and API ML interact.
When our users access z/OSMF through API ML with special characters in their jobnames, their requests fail. We've seen that you've since added [support for character encoding](https://github.com/zowe/api-layer/pull/804) but now we see issues with handling unencoded characters.
I am reaching out to determine a best path forward, perhaps you'd like our team to handing some of this work load.
**Steps to reproduce**
-
The following documentation was written by @juleskreutzer in [this](https://github.com/zowe/zowe-cli/issues/1596) issue
```
I ran the following command: zowe jobs submit ds --wait-for-output 'r276404.jcllib(zoweclit)'
This results in the following output:
PS C:\Users\Jules.Kreutzer> zowe jobs submit ds --wait-for-output 'r276404.jcllib(zoweclit)'
Command Error:
Error obtaining status for jobname "KT#TEST" jobid "JOB97232".
z/OSMF REST API Error:
Rest API failure with HTTP(S) status 400
<!doctype html><html lang="en"><head><title>HTTP Status 400 – Bad Request</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 – Bad Request</h1></body></html>
Error Details:
HTTP(S) error status "400" received.
Review request details (resource, base path, credentials, payload) and ensure correctness.
Protocol: https
Host: xxxx
Port: 7554
Base Path: ibmzosmf/api/v1
Resource: /zosmf/restjobs/jobs/KT#TEST/JOB97232
Request: GET
Headers: [{"X-CSRF-ZOSMF-HEADER":true}]
Payload: undefined
Contents of zoweclit JCL:
//KT#TEST JOB NLD700253170,'KREUTZER,A G1501110'
//BR14 EXEC PGM=IEFBR14
//*
```
**Expected behavior**
-
We expect users to be able to include special characters within their job names and to have that work throughout the zowe ecosystem.
**Screenshots**
-
The below screenshot proves that accessing zOSMF through API ML with a special character job name will error. The same request direct to zOSMF succeeds.

**Additional context**
-
We've compiled an [epic](https://github.com/zowe/zowe-cli/issues/1633) containing the related issues but each issue is also listed below for convenience:
- https://github.com/zowe/zowe-cli/issues/1596
- https://github.com/zowe/zowe-cli/issues/1073
- https://github.com/zowe/vscode-extension-for-zowe/issues/1215
**Environment Details**
- Version and build number: Version 1.28.13 build # n/a
- Test environment: Broadcom internal system
**API Catalog Web UI (in case of API Catalog issue):**
- NA
**REST API client (in case of REST API issue):**
- NA
**Willingness to help**
Our team is definitely willing to help - perhaps we can schedule a meeting to talk this through. | priority | special characters in jcl when submitting a job not correctly encoded bug description i am a member of the cli team and we ve noted that actually have to do with the way the cli and api ml interact when our users access z osmf through api ml with special characters in their jobnames their requests fail we ve seen that you ve since added but now we see issues with handling unencoded characters i am reaching out to determine a best path forward perhaps you d like our team to handing some of this work load steps to reproduce the following documentation was written by juleskreutzer in issue i ran the following command zowe jobs submit ds wait for output jcllib zoweclit this results in the following output ps c users jules kreutzer zowe jobs submit ds wait for output jcllib zoweclit command error error obtaining status for jobname kt test jobid z osmf rest api error rest api failure with http s status http status – bad request body font family tahoma arial sans serif b color white background color font size font size font size p font size a color black line height background color border none http status – bad request error details http s error status received review request details resource base path credentials payload and ensure correctness protocol https host xxxx port base path ibmzosmf api resource zosmf restjobs jobs kt test request get headers payload undefined contents of zoweclit jcl kt test job kreutzer a exec pgm expected behavior we expect users to be able to include special characters within their job names and to have that work throughout the zowe ecosystem screenshots the below screenshot proves that accessing zosmf through api ml with a special character job name will error the same request direct to zosmf succeeds additional context we ve compiled an containing the related issues but each issue is also listed below for convenience environment 
details version and build number version build n a test environment broadcom internal system api catalog web ui in case of api catalog issue na rest api client in case of rest api issue na willingness to help our team is definitely willing to help perhaps we can schedule a meeting to talk this through | 1 |
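An editor's aside on the Zowe record above: the failing request places the raw job name `KT#TEST` into the URL path, where `#` begins a URI fragment, so the gateway sees a malformed resource and returns 400. Percent-encoding each path segment before building the resource URL avoids this. A minimal sketch using Python's standard library — the path layout mirrors the resource shown in the log, but nothing here is Zowe's or API ML's actual code:

```python
from urllib.parse import quote

def job_status_path(jobname: str, jobid: str) -> str:
    """Build the z/OSMF job-status resource path with each segment
    percent-encoded, so characters like '#' survive as %23."""
    # safe="" encodes every reserved character, including '/' and '#'.
    return f"/zosmf/restjobs/jobs/{quote(jobname, safe='')}/{quote(jobid, safe='')}"

print(job_status_path("KT#TEST", "JOB97232"))
# /zosmf/restjobs/jobs/KT%23TEST/JOB97232
```

The same principle applies whichever layer builds the URL: the client should emit `%23`, and a gateway that forwards the path should not decode it back to a bare `#` before the final hop.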
54,607 | 3,070,204,241 | IssuesEvent | 2015-08-19 01:45:04 | AtlasOfLivingAustralia/biocache-store | https://api.github.com/repos/AtlasOfLivingAustralia/biocache-store | opened | Reproductive Condition facet | enhancement priority-medium status-new type-enhancement | _From @mbohun on August 19, 2014 13:8_
*migrated from:* https://code.google.com/p/ala/issues/detail?id=653
*date:* Sun Apr 27 20:05:09 2014
*author:* milo_nic...@hotmail.com
---
From Climatewatch staff- ... checking some of the plant data from ClimateWatch using the ALA mapping tools. One of the big things to look for is when plants are flowering. This information is stored as ‘Reproductive Condition’ in the ALA record. We were wondering if it’s possible to add a filter the data (like those in the attached screen shot) to only look at different reproductive phases?
_Copied from original issue: AtlasOfLivingAustralia/biocache-hubs#76_ | 1.0 | Reproductive Condition facet - _From @mbohun on August 19, 2014 13:8_
*migrated from:* https://code.google.com/p/ala/issues/detail?id=653
*date:* Sun Apr 27 20:05:09 2014
*author:* milo_nic...@hotmail.com
---
From Climatewatch staff- ... checking some of the plant data from ClimateWatch using the ALA mapping tools. One of the big things to look for is when plants are flowering. This information is stored as ‘Reproductive Condition’ in the ALA record. We were wondering if it’s possible to add a filter the data (like those in the attached screen shot) to only look at different reproductive phases?
_Copied from original issue: AtlasOfLivingAustralia/biocache-hubs#76_ | priority | reproductive condition facet from mbohun on august migrated from date sun apr author milo nic hotmail com from climatewatch staff checking some of the plant data from climatewatch using the ala mapping tools one of the big things to look for is when plants are flowering this information is stored as ‘reproductive condition’ in the ala record we were wondering if it’s possible to add a filter the data like those in the attached screen shot to only look at different reproductive phases copied from original issue atlasoflivingaustralia biocache hubs | 1 |
516,555 | 14,983,780,349 | IssuesEvent | 2021-01-28 17:40:10 | reymon359/gatsby-personal-site | https://api.github.com/repos/reymon359/gatsby-personal-site | closed | Feature: Update about me | Estimation 1 ☕ Priority Medium ☄☄ Type Feature ✨ | Examples of things I want to add in the new one
I am enjoying life going on adventures, doing Software development, Meeting new people, Learning new things, Improving myself, Trying new foods, Helping others, Visit new places, Opening my mind, Achieving my goals, Hiking and Boardsports. | 1.0 | Feature: Update about me - Examples of things I want to add in the new one
I am enjoying life going on adventures, doing Software development, Meeting new people, Learning new things, Improving myself, Trying new foods, Helping others, Visit new places, Opening my mind, Achieving my goals, Hiking and Boardsports. | priority | feature update about me examples of things i want to add in the new one i am enjoying life going on adventures doing software development meeting new people learning new things improving myself trying new foods helping others visit new places opening my mind achieving my goals hiking and boardsports | 1
78,677 | 3,512,854,437 | IssuesEvent | 2016-01-11 06:01:19 | Solinea/goldstone-server | https://api.github.com/repos/Solinea/goldstone-server | opened | UI nav icon highlighting issue | component: ui priority 3: medium type: bug | when you click on a lefthand nav icon, then click on the goldstone log, you navigate to the dashboard, but the originally clicked icon remains highlighted.

| 1.0 | UI nav icon highlighting issue - when you click on a lefthand nav icon, then click on the goldstone log, you navigate to the dashboard, but the originally clicked icon remains highlighted.

| priority | ui nav icon highlighting issue when you click on a lefthand nav icon then click on the goldstone log you navigate to the dashboard but the originally clicked icon remains highlighted | 1 |
603,902 | 18,673,770,454 | IssuesEvent | 2021-10-31 07:24:06 | AY2122S1-CS2103-W14-1/tp | https://api.github.com/repos/AY2122S1-CS2103-W14-1/tp | reopened | Appointment accepts 2400 as time | priority.Medium bug | 
## Steps to reproduce
Add an appointment at time 2400
## Expected behaviour
Throw error when 2400 is inputted as time.
| 1.0 | Appointment accepts 2400 as time - 
## Steps to reproduce
Add an appointment at time 2400
## Expected behaviour
Throw error when 2400 is inputted as time.
| priority | appointment accepts as time steps to reproduce add an appointment at time expected behaviour throw error when is inputted as time | 1 |
44,265 | 2,902,694,745 | IssuesEvent | 2015-06-18 08:51:00 | thesgc/chembiohub_helpdesk | https://api.github.com/repos/thesgc/chembiohub_helpdesk | opened | Compound added on 16/6 has no value in 'added by' field - not sure how this is possible.
It does ha | app: ChemReg name: Karen priority: Medium status: New | Compound added on 16/6 has no value in 'added by' field - not sure how this is possible.
It does have custom field values though, where other rows in the search results have 'added by' value filled but have no custom field values - could be coincidence.
Used this search: http://staging.chembiohub.ox.ac.uk/sandbox/#/search?project__project_key__in=adam-hendry,amy-varney-test-project,anthony-test,damerell-test-project,james-wickens-test-project,karen-malaria,marsden-test-project,martin-peeks-test-project,test-project&related_molregno__chembl__chembl_id__in=UOXCA60AQH,UOXKJ20WUU,UOXZC17EHC | 1.0 | Compound added on 16/6 has no value in 'added by' field - not sure how this is possible.
It does ha - Compound added on 16/6 has no value in 'added by' field - not sure how this is possible.
It does have custom field values though, where other rows in the search results have 'added by' value filled but have no custom field values - could be coincidence.
Used this search: http://staging.chembiohub.ox.ac.uk/sandbox/#/search?project__project_key__in=adam-hendry,amy-varney-test-project,anthony-test,damerell-test-project,james-wickens-test-project,karen-malaria,marsden-test-project,martin-peeks-test-project,test-project&related_molregno__chembl__chembl_id__in=UOXCA60AQH,UOXKJ20WUU,UOXZC17EHC | priority | compound added on has no value in added by field not sure how this is possible it does ha compound added on has no value in added by field not sure how this is possible it does have custom field values though where other rows in the search results have added by value filled but have no custom field values could be coincidence used this search | 1 |
625,449 | 19,729,966,769 | IssuesEvent | 2022-01-14 00:43:21 | Thorium-Sim/thorium | https://api.github.com/repos/Thorium-Sim/thorium | opened | Reactor & Engine Heat Frozen | type/bug priority/medium | ### Requested By: Alex DeBirk
### Priority: Medium
### Version: 3.5.1
The heat on the engines and the core doesn't rise when the system is on.
### Steps to Reproduce
I noticed that the reactor heated up too soon, so I changed the rate at which it heated up in the simulator config system configuration to 0.25. When we deleted our flight and created a new flight to test the heat rate, the heat never rose again. We restored it to 1, but it still doesn't. The heat rate in core is 1, and I can manually increase the heat, but it doesn't rise. | 1.0 | Reactor & Engine Heat Frozen - ### Requested By: Alex DeBirk
### Priority: Medium
### Version: 3.5.1
The heat on the engines and the core doesn't rise when the system is on.
### Steps to Reproduce
I noticed that the reactor heated up too soon, so I changed the rate at which it heated up in the simulator config system configuration to 0.25. When we deleted our flight and created a new flight to test the heat rate, the heat never rose again. We restored it to 1, but it still doesn't. The heat rate in core is 1, and I can manually increase the heat, but it doesn't rise. | priority | reactor engine heat frozen requested by alex debirk priority medium version the heat on the engines and the core doesn t rise when the system is on steps to reproduce i noticed that the reactor heated up too soon so i changed the rate at which it heated up in the simulator config system configuration to when we deleted our flight and created a new flight to test the heat rate the heat never rose again we restored it to but it still doesn t the heat rate in core is and i can manually increase the heat but it doesn t rise | 1 |
722,284 | 24,857,256,358 | IssuesEvent | 2022-10-27 04:14:51 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | docs: branchNameStrict description incorrect | priority-3-medium type:docs status:ready regression | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
N/A
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
N/A
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
The change to enable `branchNameStrict` by default was reverted, but the documentation wasn't. This means the description implies it is enabled by default when it isn't, and makes suggestions which are no longer valid.
https://github.com/renovatebot/renovate/pull/18536
```
By default, Renovate removes special characters when slugifying the branch name:
all special characters are removed
only alphabetic characters are allowed
hyphens - are used to separate sections
To revert this behavior to that used in v32 and before, set this value to false. This will mean that special characters like . may end up in the branch name.
```
https://docs.renovatebot.com/configuration-options/#branchnamestrict
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
No reproduction repository | 1.0 | docs: branchNameStrict description incorrect - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
N/A
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
N/A
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
The change to enable `branchNameStrict` by default was reverted, but the documentation wasn't. This means the description implies it is enabled by default when it isn't, and makes suggestions which are no longer valid.
https://github.com/renovatebot/renovate/pull/18536
```
By default, Renovate removes special characters when slugifying the branch name:
all special characters are removed
only alphabetic characters are allowed
hyphens - are used to separate sections
To revert this behavior to that used in v32 and before, set this value to false. This will mean that special characters like . may end up in the branch name.
```
https://docs.renovatebot.com/configuration-options/#branchnamestrict
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
No reproduction repository | priority | docs branchnamestrict description incorrect how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run n a if you re self hosting renovate select which platform you are using no response if you re self hosting renovate tell us what version of the platform you run n a was this something which used to work for you and then stopped it used to work and then stopped describe the bug the change to enable branchnamestrict by default was reverted but the documentation wasn t this means the description implies it is enabled by default when it isn t and makes suggestions which are no longer valid by default renovate removes special characters when slugifying the branch name all special characters are removed only alphabetic characters are allowed hyphens are used to separate sections to revert this behavior to that used in and before set this value to false this will mean that special characters like may end up in the branch name relevant debug logs logs copy paste the relevant log s here between the starting and ending backticks have you created a minimal reproduction repository no reproduction repository | 1 |
733,996 | 25,334,105,992 | IssuesEvent | 2022-11-18 15:27:30 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | PF 12.1: Local user creation summary windows can't be closed | Type: Bug Priority: Medium | 
Refreshing the page or back works. | 1.0 | PF 12.1: Local user creation summary windows can't be closed - 
Refreshing the page or back works. | priority | pf local user creation summary windows can t be closed refreshing the page or back works | 1 |
133,504 | 5,204,638,923 | IssuesEvent | 2017-01-24 16:01:41 | newamericafoundation/newamerica-cms | https://api.github.com/repos/newamericafoundation/newamerica-cms | opened | EmbdedType Error | Medium Priority | ```sh
Template error:
In template /app/blog/templates/blog/blog_post.html, error at line 0
embedtype
1 : {% extends "post_page.html" %}
2 :
3 : {% load wagtailcore_tags %}
4 : {% load wagtailcore_tags wagtailimages_tags %}
5 : {% load utilities %}
6 :
7 : {% block body_class %}template-blogpost{% endblock %}
8 :
9 : {% block post-header %}
10 : {% image page.story_image min-1100x650 as story_image %}
Traceback:
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
217. response = self.process_exception_by_middleware(e, request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
215. response = response.render()
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py" in render
109. self.content = self.rendered_content
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py" in rendered_content
86. content = template.render(context, self._request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py" in render
66. return self.template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
208. return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
70. result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
70. result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/defaulttags.py" in render
209. nodelist.append(node.render_annotated(context))
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
210. return template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
210. return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/templatetags/wagtailcore_tags.py" in render
75. return value.render_as_block(context=new_context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render_as_block
428. return self.block.render(self.value, context=context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render
232. return self.render_basic(value, context=context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render_basic
247. return force_text(value)
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/encoding.py" in force_text
78. s = six.text_type(s)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in __str__
200. return mark_safe(self.__html__())
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in __html__
197. return '<div class="rich-text">' + expand_db_html(self.source) + '</div>'
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in expand_db_html
181. html = FIND_EMBED_TAG.sub(replace_embed_tag, html)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in replace_embed_tag
177. handler = get_embed_handler(attrs['embedtype'])
Exception Type: KeyError at /asset-building/the-ladder/updating-poverty/
Exception Value: u'embedtype'
Request information:
USER: AnonymousUser
``` | 1.0 | EmbdedType Error - ```sh
Template error:
In template /app/blog/templates/blog/blog_post.html, error at line 0
embedtype
1 : {% extends "post_page.html" %}
2 :
3 : {% load wagtailcore_tags %}
4 : {% load wagtailcore_tags wagtailimages_tags %}
5 : {% load utilities %}
6 :
7 : {% block body_class %}template-blogpost{% endblock %}
8 :
9 : {% block post-header %}
10 : {% image page.story_image min-1100x650 as story_image %}
Traceback:
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
217. response = self.process_exception_by_middleware(e, request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
215. response = response.render()
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py" in render
109. self.content = self.rendered_content
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/response.py" in rendered_content
86. content = template.render(context, self._request)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py" in render
66. return self.template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
208. return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
174. return compiled_parent._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
70. result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
70. result = block.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/defaulttags.py" in render
209. nodelist.append(node.render_annotated(context))
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py" in render
210. return template.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
210. return self._render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in _render
199. return self.nodelist.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render
994. bit = node.render_annotated(context)
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py" in render_annotated
961. return self.render(context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/templatetags/wagtailcore_tags.py" in render
75. return value.render_as_block(context=new_context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render_as_block
428. return self.block.render(self.value, context=context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render
232. return self.render_basic(value, context=context)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/blocks/base.py" in render_basic
247. return force_text(value)
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/encoding.py" in force_text
78. s = six.text_type(s)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in __str__
200. return mark_safe(self.__html__())
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in __html__
197. return '<div class="rich-text">' + expand_db_html(self.source) + '</div>'
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in expand_db_html
181. html = FIND_EMBED_TAG.sub(replace_embed_tag, html)
File "/app/.heroku/python/lib/python2.7/site-packages/wagtail/wagtailcore/rich_text.py" in replace_embed_tag
177. handler = get_embed_handler(attrs['embedtype'])
Exception Type: KeyError at /asset-building/the-ladder/updating-poverty/
Exception Value: u'embedtype'
Request information:
USER: AnonymousUser
``` | priority | embdedtype error sh template error in template app blog templates blog blog post html error at line embedtype extends quot post page html quot load wagtailcore tags load wagtailcore tags wagtailimages tags load utilities block body class template blogpost endblock block post header image page story image min as story image traceback file app heroku python lib site packages django core handlers exception py in inner response get response request file app heroku python lib site packages django core handlers base py in get response response self process exception by middleware e request file app heroku python lib site packages django core handlers base py in get response response response render file app heroku python lib site packages django template response py in render self content self rendered content file app heroku python lib site packages django template response py in rendered content content template render context self request file app heroku python lib site packages django template backends django py in render return self template render context file app heroku python lib site packages django template base py in render return self render context file app heroku python lib site packages django template base py in render return self nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template loader tags py in render return compiled parent render context file app heroku python lib site packages django template base py in render return self nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python 
lib site packages django template loader tags py in render return compiled parent render context file app heroku python lib site packages django template base py in render return self nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template loader tags py in render return compiled parent render context file app heroku python lib site packages django template base py in render return self nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template loader tags py in render result block nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template loader tags py in render result block nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template defaulttags py in render nodelist append node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages django template loader tags py in render return template render context file app heroku python lib site packages django template base py in render return 
self render context file app heroku python lib site packages django template base py in render return self nodelist render context file app heroku python lib site packages django template base py in render bit node render annotated context file app heroku python lib site packages django template base py in render annotated return self render context file app heroku python lib site packages wagtail wagtailcore templatetags wagtailcore tags py in render return value render as block context new context file app heroku python lib site packages wagtail wagtailcore blocks base py in render as block return self block render self value context context file app heroku python lib site packages wagtail wagtailcore blocks base py in render return self render basic value context context file app heroku python lib site packages wagtail wagtailcore blocks base py in render basic return force text value file app heroku python lib site packages django utils encoding py in force text s six text type s file app heroku python lib site packages wagtail wagtailcore rich text py in str return mark safe self html file app heroku python lib site packages wagtail wagtailcore rich text py in html return expand db html self source file app heroku python lib site packages wagtail wagtailcore rich text py in expand db html html find embed tag sub replace embed tag html file app heroku python lib site packages wagtail wagtailcore rich text py in replace embed tag handler get embed handler attrs exception type keyerror at asset building the ladder updating poverty exception value u embedtype request information user anonymoususer | 1 |
793,084 | 27,982,621,553 | IssuesEvent | 2023-03-26 10:37:05 | KDT3-Final-6/final-project-BE | https://api.github.com/repos/KDT3-Final-6/final-project-BE | closed | feat: add search feature logic | Type: Feature Status: In Progress Priority: Medium For: API For: Backend | ## Description
Add logic so that only products currently on sale are returned
## Tasks(Process)
- [x] Add logic to return only on-sale products when searching by title
- [x] Add logic to return only on-sale products when searching by category
- [x] Add logic to return only on-sale products when searching by date
## References
 | 1.0 | feat: add search feature logic - ## Description
Add logic so that only products currently on sale are returned
## Tasks(Process)
- [x] Add logic to return only on-sale products when searching by title
- [x] Add logic to return only on-sale products when searching by category
- [x] Add logic to return only on-sale products when searching by date
## References
 | priority | feat add search feature logic description add logic so that only products currently on sale are returned tasks process add logic to return only on sale products when searching by title add logic to return only on sale products when searching by category add logic to return only on sale products when searching by date references | 1 |
216,984 | 7,313,602,356 | IssuesEvent | 2018-03-01 02:01:18 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: World view waypoint bug | Medium Priority |

**Version:** 0.7.0.0 beta
**Steps to Reproduce:**
begin the new tutorial, on the world view tutorial follow steps until the last objective (you can see the marker in the world. click the arrow to bring up options or remove it)
**Expected behavior:**
clicking the downward facing arrow should bring up a context menu with options (including removal)
**Actual behavior:**
clicking the arrow does not bring up a menu of any kind, also waypoints placed previously show on the screen but no longer show on the map
attached is a screenshot via the new steam client :)
| 1.0 | USER ISSUE: World view waypoint bug -

**Version:** 0.7.0.0 beta
**Steps to Reproduce:**
begin the new tutorial, on the world view tutorial follow steps until the last objective (you can see the marker in the world. click the arrow to bring up options or remove it)
**Expected behavior:**
clicking the downward facing arrow should bring up a context menu with options (including removal)
**Actual behavior:**
clicking the arrow does not bring up a menu of any kind, also waypoints placed previously show on the screen but no longer show on the map
attached is a screenshot via the new steam client :)
| priority | user issue world view waypoint bug version beta steps to reproduce begin the new tutorial on the world view tutorial follow steps until the last objective you can see the marker in the world click the arrow to bring up options or remove it expected behavior clicking the downward facing arrow should bring up a context menu with options including removal actual behavior clicking the arrow does not bring up a menu of any kind also waypoints placed previously show on the screen but no longer show on the map attached is a screenshot via the new steam client | 1 |
589,155 | 17,691,099,366 | IssuesEvent | 2021-08-24 10:02:14 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | closed | As a user, I can signup from the left menu | type : feature category : ui priority : medium | ## Why
When users don't signup yet, they will be able to see option to do so in the left menu from `Home` Screen.
## Acceptance Criteria
- [ ] Reuse the option UI layout, font style and highlight style from #10.
- [ ] Update the option title with text: `Signup` and a start signup icon.
- [ ] Use the start signup icon from below resources.
- [ ] This option must be right under the login option.
## Resources
- The start signup icon:
https://www.iconpacks.net/free-icon/user-signup-3058.html
- Sample menu option - default state:
<img width="243" alt="Screen Shot 2021-08-06 at 11 31 33" src="https://user-images.githubusercontent.com/70877098/128456327-e03d7f84-f2c3-4463-83c0-53e7549b8117.png">
- Sample menu option - highlighted state:
<img width="240" alt="Screen Shot 2021-08-06 at 11 33 26" src="https://user-images.githubusercontent.com/70877098/128456410-8dc3c254-b112-458c-ace1-1693e2308b46.png">
| 1.0 | As a user, I can signup from the left menu - ## Why
When users don't signup yet, they will be able to see option to do so in the left menu from `Home` Screen.
## Acceptance Criteria
- [ ] Reuse the option UI layout, font style and highlight style from #10.
- [ ] Update the option title with text: `Signup` and a start signup icon.
- [ ] Use the start signup icon from below resources.
- [ ] This option must be right under the login option.
## Resources
- The start signup icon:
https://www.iconpacks.net/free-icon/user-signup-3058.html
- Sample menu option - default state:
<img width="243" alt="Screen Shot 2021-08-06 at 11 31 33" src="https://user-images.githubusercontent.com/70877098/128456327-e03d7f84-f2c3-4463-83c0-53e7549b8117.png">
- Sample menu option - highlighted state:
<img width="240" alt="Screen Shot 2021-08-06 at 11 33 26" src="https://user-images.githubusercontent.com/70877098/128456410-8dc3c254-b112-458c-ace1-1693e2308b46.png">
| priority | as a user i can signup from the left menu why when users don t signup yet they will be able to see option to do so in the left menu from home screen acceptance criteria reuse the option ui layout font style and highlight style from update the option title with text signup and a start signup icon use the start signup icon from below resources this option must be right under the login option resources the start signup icon sample menu option default state img width alt screen shot at src sample menu option highlighted state img width alt screen shot at src | 1 |
647,894 | 21,158,756,740 | IssuesEvent | 2022-04-07 07:24:44 | KminekMatej/tymy | https://api.github.com/repos/KminekMatej/tymy | closed | Displaying an event crashes if it does not exist | bug frontend Priority:Medium | When trying to jump to a non-existent event (who knows how it could have come about), the server crashes with error 500.
Source: dudaci.tymy.cz | 1.0 | Displaying an event crashes if it does not exist - When trying to jump to a non-existent event (who knows how it could have come about), the server crashes with error 500.
Source: dudaci.tymy.cz | priority | displaying an event crashes if it does not exist when trying to jump to a non existent event who knows how it could have come about the server crashes with error source dudaci tymy cz | 1 |
82,681 | 3,618,104,621 | IssuesEvent | 2016-02-08 09:51:57 | PolarisSS13/Polaris | https://api.github.com/repos/PolarisSS13/Polaris | closed | The trash bag moves intercoms around! | Bug Priority: Medium | When the trash bag is used on an intercom present on a wall, the intercom moves onto the player.
With some fiddling you can put it back on the wall, but it is obvious that no ordinary trash bag can not move fastened station intercoms around onto the floor or through windows! | 1.0 | The trash bag moves intercoms around! - When the trash bag is used on an intercom present on a wall, the intercom moves onto the player.
With some fiddling you can put it back on the wall, but it is obvious that no ordinary trash bag can not move fastened station intercoms around onto the floor or through windows! | priority | the trash bag moves intercoms around when the trash bag is used on an intercom present on a wall the intercom moves onto the player with some fiddling you can put it back on the wall but it is obvious that no ordinary trash bag can not move fastened station intercoms around onto the floor or through windows | 1 |
597,075 | 18,154,151,721 | IssuesEvent | 2021-09-26 19:38:03 | airqo-platform/AirQo-api | https://api.github.com/repos/airqo-platform/AirQo-api | opened | nextMaintenance field Data Type | invalid device-registry priority-medium | **What were you trying to achieve?**
I was trying to seralize data from the AirQo API
**What are the expected results?**
I expected all fields to have a constant data type or at most 2 data types
**What are the received results?**
The `nextMaintenance` field had about 3 data types i.e `null`, `date` and `json object`
<img width="963" alt="Screenshot 2021-09-26 at 22 18 25" src="https://user-images.githubusercontent.com/37845280/134821640-b09292e3-dcaf-4fee-8756-1afe1f16daaa.png">
**What are the steps to reproduce the issue?**
Query for devices on the production environment and compare the different values for `nextMaintenance`
**In what environment did you encounter the issue?**
- Chrome Browser
**Additional context**
Any other information you would like to share?
| 1.0 | nextMaintenance field Data Type - **What were you trying to achieve?**
I was trying to seralize data from the AirQo API
**What are the expected results?**
I expected all fields to have a constant data type or at most 2 data types
**What are the received results?**
The `nextMaintenance` field had about 3 data types i.e `null`, `date` and `json object`
<img width="963" alt="Screenshot 2021-09-26 at 22 18 25" src="https://user-images.githubusercontent.com/37845280/134821640-b09292e3-dcaf-4fee-8756-1afe1f16daaa.png">
**What are the steps to reproduce the issue?**
Query for devices on the production environment and compare the different values for `nextMaintenance`
**In what environment did you encounter the issue?**
- Chrome Browser
**Additional context**
Any other information you would like to share?
| priority | nextmaintenance field data type what were you trying to achieve i was trying to seralize data from the airqo api what are the expected results i expected all fields to have a constant data type or at most data types what are the received results the nextmaintenance field had about data types i e null date and json object img width alt screenshot at src what are the steps to reproduce the issue query for devices on the production environment and compare the different values for nextmaintenance in what environment did you encounter the issue chrome browser additional context any other information you would like to share | 1 |
58,390 | 3,088,994,434 | IssuesEvent | 2015-08-25 19:20:58 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | (Долой фичизм и их говнокод) нафиг spyware/"антивирусы" из ...DC-клиента ! | bug imported invalid Priority-Medium | _From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on October 26, 2014 17:00:30_
StartUp: ( r503 -beta96 и ранее?):
+ (минимум)удлиняющий процесс загрузки!
(тем более он всё равно никогда не заменит полноценный - даже если бы и не был spyware)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1512_ | 1.0 | (Долой фичизм и их говнокод) нафиг spyware/"антивирусы" из ...DC-клиента ! - _From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on October 26, 2014 17:00:30_
StartUp: ( r503 -beta96 и ранее?):
+ (минимум)удлиняющий процесс загрузки!
(тем более он всё равно никогда не заменит полноценный - даже если бы и не был spyware)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1512_ | priority | долой фичизм и их говнокод нафиг spyware антивирусы из dc клиента from on october startup и ранее минимум удлиняющий процесс загрузки тем более он всё равно никогда не заменит полноценный даже если бы и не был spyware original issue | 1 |
502,155 | 14,541,353,202 | IssuesEvent | 2020-12-15 14:28:26 | ZeusWPI/hydra | https://api.github.com/repos/ZeusWPI/hydra | closed | Add closing days | api medium priority | From the mail:
> Voorlopig zijn de enige sluitingsdagen (buiten de reguliere UGent sluitingsdagen) die van de eerste week van de kerstvakantie.
We should add these to the scraper.
| 1.0 | Add closing days - From the mail:
> Voorlopig zijn de enige sluitingsdagen (buiten de reguliere UGent sluitingsdagen) die van de eerste week van de kerstvakantie.
We should add these to the scraper.
| priority | add closing days from the mail voorlopig zijn de enige sluitingsdagen buiten de reguliere ugent sluitingsdagen die van de eerste week van de kerstvakantie we should add these to the scraper | 1 |
606,569 | 18,764,920,292 | IssuesEvent | 2021-11-05 21:48:22 | cyntaria/UniPal-Backend | https://api.github.com/repos/cyntaria/UniPal-Backend | opened | [DELETE] A Teacher Review | Status: Pending Priority: Medium user story Type: Feature | ### Summary
As a `student`, I should be able to **delete teacher reviews**, so that I can **remove old or inconsistent entries**.
### Acceptance Criteria
**GIVEN** a `student` is *deleting a teacher* in the app
**WHEN** the app hits the `/teacher-reviews/:id` endpoint with a valid DELETE request, containing the path parameter:
- `:id`, the unique id of the entity being removed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- header message indicating delete operation success
Sample Request/Sample Response
```
headers: {
error: 0,
message: "The specified item was deleted successfully"
}
body: {}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
{Some complementary notes if necessary}
### Testing Notes
#### Scenario 1: DELETE request is successful:
1. Create a new teacher review with a **POST** request to `/teacher-reviews` endpoint and ensure status id `200` is returned.
2. Make a **DELETE** request to `/teacher-reviews/:id` endpoint and ensure status id `200` is returned.
3. A subsequent **GET** request to `/teacher-reviews/:id` endpoint should return a status id `404`.
4. And the response headers' `id` parameter should contain "**_NotFoundException_**".
#### Scenario 2: DELETE request is unsuccessful due to unknown review_id
1. Make a **DELETE** request to `/teacher-reviews/:id` endpoint containing a non-existent `review_id`.
2. Ensure a `404` status id is returned.
3. And the response headers' `id` parameter should contain "**_NotFoundException_**".
#### Scenario 3: DELETE request is forbidden
1. Make a **DELETE** request to `/teacher-reviews/:id` endpoint with `reviewed_by_erp` **!==** erp `student` account token.
2. Ensure the endpoint returns a `403` forbidden status id.
3. And the response headers' `id` parameter should contain "**_ForbiddenException_**"
#### Scenario 4: DELETE request is unauthorized
1. Send a **DELETE** request to `/teacher-reviews/:id` endpoint without an **authorization token**
2. Ensure a `401` unauthorized status id is returned.
3. And the response headers' `id` parameter should contain "**_TokenMissingException_**" | 1.0 | [DELETE] A Teacher Review - ### Summary
As a `student`, I should be able to **delete teacher reviews**, so that I can **remove old or inconsistent entries**.
### Acceptance Criteria
**GIVEN** a `student` is *deleting a teacher* in the app
**WHEN** the app hits the `/teacher-reviews/:id` endpoint with a valid DELETE request, containing the path parameter:
- `:id`, the unique id of the entity being removed.
**THEN** the app should receive a status `200`
**AND** in the response, the following information should be returned:
- header message indicating delete operation success
Sample Request/Sample Response
```
headers: {
error: 0,
message: "The specified item was deleted successfully"
}
body: {}
```
### Resources
- Development URL: {Here goes a URL to the feature on development API}
- Production URL: {Here goes a URL to the feature on production API}
### Dev Notes
{Some complementary notes if necessary}
### Testing Notes
#### Scenario 1: DELETE request is successful:
1. Create a new teacher review with a **POST** request to `/teacher-reviews` endpoint and ensure status id `200` is returned.
2. Make a **DELETE** request to `/teacher-reviews/:id` endpoint and ensure status id `200` is returned.
3. A subsequent **GET** request to `/teacher-reviews/:id` endpoint should return a status id `404`.
4. And the response headers' `id` parameter should contain "**_NotFoundException_**".
#### Scenario 2: DELETE request is unsuccessful due to unknown review_id
1. Make a **DELETE** request to `/teacher-reviews/:id` endpoint containing a non-existent `review_id`.
2. Ensure a `404` status id is returned.
3. And the response headers' `id` parameter should contain "**_NotFoundException_**".
#### Scenario 3: DELETE request is forbidden
1. Make a **DELETE** request to `/teacher-reviews/:id` endpoint with `reviewed_by_erp` **!==** erp `student` account token.
2. Ensure the endpoint returns a `403` forbidden status id.
3. And the response headers' `id` parameter should contain "**_ForbiddenException_**"
#### Scenario 4: DELETE request is unauthorized
1. Send a **DELETE** request to `/teacher-reviews/:id` endpoint without an **authorization token**
2. Ensure a `401` unauthorized status id is returned.
3. And the response headers' `id` parameter should contain "**_TokenMissingException_**" | priority | a teacher review summary as a student i should be able to delete teacher reviews so that i can remove old or inconsistent entries acceptance criteria given a student is deleting a teacher in the app when the app hits the teacher reviews id endpoint with a valid delete request containing the path parameter id the unique id of the entity being removed then the app should receive a status and in the response the following information should be returned header message indicating delete operation success sample request sample response headers error message the specified item was deleted successfully body resources development url here goes a url to the feature on development api production url here goes a url to the feature on production api dev notes some complementary notes if necessary testing notes scenario delete request is successful create a new teacher review with a post request to teacher reviews endpoint and ensure status id is returned make a delete request to teacher reviews id endpoint and ensure status id is returned a subsequent get request to teacher reviews id endpoint should return a status id and the response headers id parameter should contain notfoundexception scenario delete request is unsuccessful due to unknown review id make a delete request to teacher reviews id endpoint containing a non existent review id ensure a status id is returned and the response headers id parameter should contain notfoundexception scenario delete request is forbidden make a delete request to teacher reviews id endpoint with reviewed by erp erp student account token ensure the endpoint returns a forbidden status id and the response headers id parameter should contain forbiddenexception scenario delete request is unauthorized send a delete request to teacher reviews id endpoint without an authorization token ensure a unauthorized status id is returned and the response 
headers id parameter should contain tokenmissingexception | 1 |
650,715 | 21,414,618,468 | IssuesEvent | 2022-04-22 09:39:29 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Printing plugin preview not working with leaflet | bug investigation Priority: Medium Accepted Good first issue | ## Description
Printing plugin preview not working with leaflet
## How to reproduce
- Create a new map
- Open print tool
*Expected Result*
I can see the preview
*Current Result*
The preview map is blank
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
| 1.0 | Printing plugin preview not working with leaflet - ## Description
Printing plugin preview not working with leaflet
## How to reproduce
- Create a new map
- Open print tool
*Expected Result*
I can see the preview
*Current Result*
The preview map is blank
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
| priority | printing plugin preview not working with leaflet description printing plugin preview not working with leaflet how to reproduce create a new map open print tool expected result i can see the preview current result the preview map is blank not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information | 1 |
156,016 | 5,963,138,311 | IssuesEvent | 2017-05-30 03:06:17 | k0shk0sh/FastHub | https://api.github.com/repos/k0shk0sh/FastHub | closed | Lack of notification options | Priority: Medium Status: Completed Type: Feature Request | Nice to see notifications on FastHub but lacks some options:
- Choose a custom notification sound
- Notification vibration options (default, long, sort, disabled)
- Priority
Had to disable the notifications here because of the annoyances like not being able to disable vibration | 1.0 | Lack of notification options - Nice to see notifications on FastHub but lacks some options:
- Choose a custom notification sound
- Notification vibration options (default, long, sort, disabled)
- Priority
Had to disable the notifications here because of the annoyances like not being able to disable vibration | priority | lack of notification options nice to see notifications on fasthub but lacks some options choose a custom notification sound notification vibration options default long sort disabled priority had to disable the notifications here because of the annoyances like not being able to disable vibration | 1 |
677,884 | 23,178,758,303 | IssuesEvent | 2022-07-31 20:20:22 | shal/mono | https://api.github.com/repos/shal/mono | closed | Public client gives access to a pile of internals | Priority: Medium Type: Bug | The public client gives access to methods of `core` and `http.Client`. Is this by design? And if so, what benefits are there in this approach?
The same relates to the personal client.
<img width="625" alt="Screenshot 2019-12-14 at 17 28 50" src="https://user-images.githubusercontent.com/12697803/70850835-57ca6b80-1e97-11ea-96e1-e77bfef031ac.png">
| 1.0 | Public client gives access to a pile of internals - The public client gives access to methods of `core` and `http.Client`. Is this by design? And if so, what benefits are there in this approach?
The same relates to the personal client.
<img width="625" alt="Screenshot 2019-12-14 at 17 28 50" src="https://user-images.githubusercontent.com/12697803/70850835-57ca6b80-1e97-11ea-96e1-e77bfef031ac.png">
| priority | public client gives access to a pile of internals the public client gives access to methods of core and http client is this by design and if so what benefits are there in this approach the same relates to the personal client img width alt screenshot at src | 1 |
780,256 | 27,387,186,116 | IssuesEvent | 2023-02-28 14:05:44 | dsa-ou/allowed | https://api.github.com/repos/dsa-ou/allowed | closed | turn on method call checks | type: enhancement good first issue priority: medium | Currently, `allowed` checks method calls (which takes a long time) whenever possible (`pytype` is installed and the user is running Python 3.7–3.10). This doesn't allow to first do the quick checks, which cover most constructs, and only once those are fixed do method call checks.
To support this, turn off method checks by default and add an option (e.g. `-m` and `--methods`) that turns it on. If turned on but the checks aren't possible, print the error messages as currently.
If method checks are off but could be done, remind the user of this new option after the program has run. | 1.0 | turn on method call checks - Currently, `allowed` checks method calls (which takes a long time) whenever possible (`pytype` is installed and the user is running Python 3.7–3.10). This doesn't allow to first do the quick checks, which cover most constructs, and only once those are fixed do method call checks.
To support this, turn off method checks by default and add an option (e.g. `-m` and `--methods`) that turns it on. If turned on but the checks aren't possible, print the error messages as currently.
If method checks are off but could be done, remind the user of this new option after the program has run. | priority | turn on method call checks currently allowed checks method calls which takes a long time whenever possible pytype is installed and the user is running python – this doesn t allow to first do the quick checks which cover most constructs and only once those are fixed do method call checks to support this turn off method checks by default and add an option e g m and methods that turns it on if turned on but the checks aren t possible print the error messages as currently if method checks are off but could be done remind the user of this new option after the program has run | 1 |
775,246 | 27,224,938,420 | IssuesEvent | 2023-02-21 09:03:19 | tallyhowallet/extension | https://api.github.com/repos/tallyhowallet/extension | closed | Exclude testnets funds from Portfolio balance | Type: Bug Status: Pending Priority: Medium | Testnets funds shouldn't be included in calculating account balances across networks.
This is possible regression because I can't see this effect in the current version but I have this problem in my extension that was installed for some time. | 1.0 | Exclude testnets funds from Portfolio balance - Testnets funds shouldn't be included in calculating account balances across networks.
This is possible regression because I can't see this effect in the current version but I have this problem in my extension that was installed for some time. | priority | exclude testnets funds from portfolio balance testnets funds shouldn t be included in calculating account balances across networks this is possible regression because i can t see this effect in the current version but i have this problem in my extension that was installed for some time | 1 |
381,065 | 11,272,925,823 | IssuesEvent | 2020-01-14 15:41:32 | isi-vista/adam | https://api.github.com/repos/isi-vista/adam | closed | Add a test for debug_matching for PatternGraphMatcher | priority-2-low size-medium | This doesn't need to test the output, just that it doesn't crash. Also we can exercise some coverage of the dot rendering code this way. | 1.0 | Add a test for debug_matching for PatternGraphMatcher - This doesn't need to test the output, just that it doesn't crash. Also we can exercise some coverage of the dot rendering code this way. | priority | add a test for debug matching for patterngraphmatcher this doesn t need to test the output just that it doesn t crash also we can exercise some coverage of the dot rendering code this way | 1 |
22,186 | 2,645,770,101 | IssuesEvent | 2015-03-13 02:04:44 | prikhi/evoluspencil | https://api.github.com/repos/prikhi/evoluspencil | closed | [Enhancement] Pure svg document output | 2–5 stars enhancement imported Priority-Medium | _From [flooberd...@gmail.com](https://code.google.com/u/115974732518422240613/) on November 15, 2008 13:36:49_
What steps will reproduce the problem? --I can manually edit the ".ep" document to make an svg conformant file,
but it would be great if this were a native output format of Pencil. What version of the product are you using? On what operating system? Pencil 1.0 Build 4 on Windows XP using Firefox 3.03 Please provide any additional information below. The svg file Pencil outputs should be able to be displayed directly by Firefox.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=71_ | 1.0 | [Enhancement] Pure svg document output - _From [flooberd...@gmail.com](https://code.google.com/u/115974732518422240613/) on November 15, 2008 13:36:49_
What steps will reproduce the problem? --I can manually edit the ".ep" document to make an svg conformant file,
but it would be great if this were a native output format of Pencil. What version of the product are you using? On what operating system? Pencil 1.0 Build 4 on Windows XP using Firefox 3.03 Please provide any additional information below. The svg file Pencil outputs should be able to be displayed directly by Firefox.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=71_ | priority | pure svg document output from on november what steps will reproduce the problem i can manually edit the ep document to make an svg conformant file but it would be great if this were a native output format of pencil what version of the product are you using on what operating system pencil build on windows xp using firefox please provide any additional information below the svg file pencil outputs should be able to be displayed directly by firefox original issue | 1 |
38,871 | 2,850,513,366 | IssuesEvent | 2015-05-31 16:55:52 | damonkohler/android-scripting | https://api.github.com/repos/damonkohler/android-scripting | closed | Embedding Interpreters and Scripts into an APK (completed for Python) | auto-migrated Priority-Medium Type-Enhancement | ```
The Python interpreter, Python scripts, and SL4A can now be embedded into an
APK thanks to Anthony Prieur's project:
https://code.google.com/p/android-python27/
This means that a single APK containing the interpreter can be uploaded to
GooglePlay, and the interpreter won't need to be downloaded during the app
installation.
A customized version of the interpreter can also be used, for example a
different version build, set of modules, etc...
This should also satisfy open source license terms for some interpreters, which
require that commercial distributions of the language be "compiled" so that the
language isn't directly exposed to the user (as is the case with the Artistic
License).
It might be a good idea to add this information to SL4A's Wiki for "Sharing
Scripts", and possibly as an alternate template project, so that developers
know about this and can provide feedback on it.
It would also be great to extend this to other interpreters. Anthony provided
some hints on how to possibly do this for the Perl interpreter. If anyone is
interested in assisting with that, just let me know and I can share those
details.
```
Original issue reported on code.google.com by `danielop...@gmail.com` on 23 May 2012 at 4:53 | 1.0 | Embedding Interpreters and Scripts into an APK (completed for Python) - ```
The Python interpreter, Python scripts, and SL4A can now be embedded into an
APK thanks to Anthony Prieur's project:
https://code.google.com/p/android-python27/
This means that a single APK containing the interpreter can be uploaded to
GooglePlay, and the interpreter won't need to be downloaded during the app
installation.
A customized version of the interpreter can also be used, for example a
different version build, set of modules, etc...
This should also satisfy open source license terms for some interpreters, which
require that commercial distributions of the language be "compiled" so that the
language isn't directly exposed to the user (as is the case with the Artistic
License).
It might be a good idea to add this information to SL4A's Wiki for "Sharing
Scripts", and possibly as an alternate template project, so that developers
know about this and can provide feedback on it.
It would also be great to extend this to other interpreters. Anthony provided
some hints on how to possibly do this for the Perl interpreter. If anyone is
interested in assisting with that, just let me know and I can share those
details.
```
Original issue reported on code.google.com by `danielop...@gmail.com` on 23 May 2012 at 4:53 | priority | embedding interpreters and scripts into an apk completed for python the python interpreter python scripts and can now be embedded into an apk thanks to anthony prieur s project this means that a single apk containing the interpreter can be uploaded to googleplay and the interpreter won t need to be downloaded during the app installation a customized version of the interpreter can also be used for example a different version build set of modules etc this should also satisfy open source license terms for some interpreters which require that commercial distributions of the language be compiled so that the language isn t directly exposed to the user as is the case with the artistic license it might be a good idea to add this information to s wiki for sharing scripts and possibly as an alternate template project so that developers know about this and can provide feedback on it it would also be great to extend this to other interpreters anthony provided some hints on how to possibly do this for the perl interpreter if anyone is interested in assisting with that just let me know and i can share those details original issue reported on code google com by danielop gmail com on may at | 1 |
157,716 | 6,011,422,296 | IssuesEvent | 2017-06-06 15:10:40 | tferreira/piggydime | https://api.github.com/repos/tferreira/piggydime | closed | Selected account is not visible enough | enhancement priority - medium ui | Find a way to show to the user which account is selected.
The solution could be to change the color of the tile, or add an highlight effect on it...
Also add the hand mouse pointer on hover on tiles. | 1.0 | Selected account is not visible enough - Find a way to show to the user which account is selected.
The solution could be to change the color of the tile, or add an highlight effect on it...
Also add the hand mouse pointer on hover on tiles. | priority | selected account is not visible enough find a way to show to the user which account is selected the solution could be to change the color of the tile or add an highlight effect on it also add the hand mouse pointer on hover on tiles | 1 |
9,426 | 2,607,945,284 | IssuesEvent | 2015-02-26 00:32:59 | chrsmithdemos/switchlist | https://api.github.com/repos/chrsmithdemos/switchlist | opened | Add extra description field to each freight car | auto-migrated Priority-Medium Type-Enhancement | ```
Some modelers use car descriptions rather than reporting marks to make it
easier to find cars in yards:
"SP 20394 RED"
"BOX SP YELLOW"
We ought to figure out a way to include car descriptions in switchlists,
perhaps by having the car description on a separate line of each switchlist.
If it's not easy, we ought to include helpful workarounds in the documentation.
Complications:
* Need to redo all switchlist styles to fit the car descriptions
* Makes switchlists longer
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 21 Aug 2011 at 6:06 | 1.0 | Add extra description field to each freight car - ```
Some modelers use car descriptions rather than reporting marks to make it
easier to find cars in yards:
"SP 20394 RED"
"BOX SP YELLOW"
We ought to figure out a way to include car descriptions in switchlists,
perhaps by having the car description on a separate line of each switchlist.
If it's not easy, we ought to include helpful workarounds in the documentation.
Complications:
* Need to redo all switchlist styles to fit the car descriptions
* Makes switchlists longer
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 21 Aug 2011 at 6:06 | priority | add extra description field to each freight car some modelers use car descriptions rather than reporting marks to make it easier to find cars in yards sp red box sp yellow we ought to figure out a way to include car descriptions in switchlists perhaps by having the car description on a separate line of each switchlist if it s not easy we ought to include helpful workarounds in the documentation complications need to redo all switchlist styles to fit the car descriptions makes switchlists longer original issue reported on code google com by rwbowdi gmail com on aug at | 1 |
642,348 | 20,885,705,712 | IssuesEvent | 2022-03-23 04:43:10 | AY2122S2-CS2113T-T09-2/tp | https://api.github.com/repos/AY2122S2-CS2113T-T09-2/tp | closed | Create Resource File for Schedule | type.Task priority.Medium | Implement file and data format for schedule.
Part of a bigger task as mentioned in #92. | 1.0 | Create Resource File for Schedule - Implement file and data format for schedule.
Part of a bigger task as mentioned in #92. | priority | create resource file for schedule implement file and data format for schedule part of a bigger task as mentioned in | 1 |
487,492 | 14,047,488,387 | IssuesEvent | 2020-11-02 07:15:24 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.1 staging-1823] Mortared Sandstone/Limestone/Granite Doors don't work | Category: Gameplay Priority: Medium Status: Fixed Type: Regression | - [x] I can place a several these doors in one place:

- [x] Double doors don't work.
Seems It was fixed but we have it again. | 1.0 | [0.9.1 staging-1823] Mortared Sandstone/Limestone/Granite Doors don't work - - [x] I can place a several these doors in one place:

- [x] Double doors don't work.
Seems It was fixed but we have it again. | priority | mortared sandstone limestone granite doors don t work i can place a several these doors in one place double doors don t work seems it was fixed but we have it again | 1 |
232,035 | 7,653,472,862 | IssuesEvent | 2018-05-10 04:14:43 | neutronpy/neutronpy | https://api.github.com/repos/neutronpy/neutronpy | closed | Add ability to compare if two objects are equivalent | component: core enhancement priority: medium | Need the ability to compare two Data, Instrument, Energy and Material objects to see if they are equivalent.
| 1.0 | Add ability to compare if two objects are equivalent - Need the ability to compare two Data, Instrument, Energy and Material objects to see if they are equivalent.
| priority | add ability to compare if two objects are equivalent need the ability to compare two data instrument energy and material objects to see if they are equivalent | 1 |
680,720 | 23,283,468,247 | IssuesEvent | 2022-08-05 14:15:39 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Contact request message and very next message are separated with [Today] timestamp | bug Chat priority 2: medium E:Bugfixes S:2 | ### Description
1. create user one
2. create user two (separate instance)
3. as user one, send a message in public chat
4. as user two, click the message and send contact request
5. as user one, accept the contact request
6. as user one, send a message to 1x1 chat with user two
7. as user two, open 1x1 chat with user one
As result, the contact request message and newly received message are divided with [Today] timestamp
<img width="927" alt="Screenshot 2022-07-20 at 10 41 04" src="https://user-images.githubusercontent.com/82375995/179925782-b17a6d2f-65d2-45a1-a279-f9fbb40f9ae3.png">
| 1.0 | Contact request message and very next message are separated with [Today] timestamp - ### Description
1. create user one
2. create user two (separate instance)
3. as user one, send a message in public chat
4. as user two, click the message and send contact request
5. as user one, accept the contact request
6. as user one, send a message to 1x1 chat with user two
7. as user two, open 1x1 chat with user one
As result, the contact request message and newly received message are divided with [Today] timestamp
<img width="927" alt="Screenshot 2022-07-20 at 10 41 04" src="https://user-images.githubusercontent.com/82375995/179925782-b17a6d2f-65d2-45a1-a279-f9fbb40f9ae3.png">
| priority | contact request message and very next message are separated with timestamp description create user one create user two separate instance as user one send a message in public chat as user two click the message and send contact request as user one accept the contact request as user one send a message to chat with user two as user two open chat with user one as result the contact request message and newly received message are divided with timestamp img width alt screenshot at src | 1 |