| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 5–112) | repo_url (string, length 34–141) | action (string, 3 classes) | title (string, length 1–855) | labels (string, length 4–721) | body (string, length 1–261k) | index (string, 13 classes) | text_combine (string, length 96–261k) | label (string, 2 classes) | text (string, length 96–240k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
782,631
| 27,501,581,730
|
IssuesEvent
|
2023-03-05 18:58:27
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
closed
|
docker digestPin is no longer working for multiple images in same file
|
type:bug priority-2-high manager:dockerfile status:in-progress reproduction:provided
|
### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
None
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
I noticed today that when using docker:digestPin, renovate is no longer updating all image/from statements within the dockerfile. It would appear that it is only updating the last dependency, although the PR body table contains all of the dependencies and updates.
Minimal reproduction
- repository - https://github.com/setchy/renovate-docker-pinning-issue
- pin PR with the two dependencies correctly identified in issue body - https://github.com/setchy/renovate-docker-pinning-issue/pull/5
- only the last dependency was pinned - https://github.com/setchy/renovate-docker-pinning-issue/pull/5/files
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
1.0
|
docker digestPin is no longer working for multiple images in same file - ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
None
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
I noticed today that when using docker:digestPin, renovate is no longer updating all image/from statements within the dockerfile. It would appear that it is only updating the last dependency, although the PR body table contains all of the dependencies and updates.
Minimal reproduction
- repository - https://github.com/setchy/renovate-docker-pinning-issue
- pin PR with the two dependencies correctly identified in issue body - https://github.com/setchy/renovate-docker-pinning-issue/pull/5
- only the last dependency was pinned - https://github.com/setchy/renovate-docker-pinning-issue/pull/5/files
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
priority
|
docker digestpin is no longer working for multiple images in same file how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response if you re self hosting renovate select which platform you are using none if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped it used to work and then stopped describe the bug i noticed today that when using docker digestpin renovate is no longer updating all image from statements within the dockerfile it would appear that it is only updating the last dependency although the pr body table contains all of the dependencies and updates minimal reproduction repository pin pr with the two dependencies correctly identified in issue body only the last dependency was pinned relevant debug logs logs copy paste the relevant log s here between the starting and ending backticks have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description
| 1
|
71,133
| 3,352,374,944
|
IssuesEvent
|
2015-11-17 22:27:56
|
minetest-LOTT/Lord-of-the-Test
|
https://api.github.com/repos/minetest-LOTT/Lord-of-the-Test
|
closed
|
Clothes aren't updated on the player model
|
bug high priority
|
I don't know what is happening internally, but new clothes chosen after commit 233c3ee3500a82b190caa4b6c11ebb854263630d don't appear on the player model.
|
1.0
|
Clothes aren't updated on the player model - I don't know what is happening internally, but new clothes chosen after commit 233c3ee3500a82b190caa4b6c11ebb854263630d don't appear on the player model.
|
priority
|
clothes aren t updated on the player model i don t know what is happening internally but new clothes chosen after commit don t appear on the player model
| 1
|
204,338
| 7,087,021,698
|
IssuesEvent
|
2018-01-11 16:29:55
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
closed
|
DAL: Blocked attempts to insert duplicate nodes by pfqueue when processing DHCP packets
|
Priority: High Status: For review Type: Bug
|
Seems related to #2823
```
Dec 12 07:53:19 pf-julien pfqueue: pfqueue(4507) ERROR: [mac:94:db:c9:38:85:5b] Database query failed with non retryable error: Duplicate entry '94:db:c9:38:85:5b' for key 'PRIMARY' (errno: 1062) [INSERT INTO `node` ( `autoreg`, `bandwidth_balance`, `bypass_role_id`, `bypass_vlan`, `category_id`, `computername`, `detect_date`, `device_class`, `device_score`, `device_type`, `device_version`, `dhcp6_enterprise`, `dhcp6_fingerprint`, `dhcp_fingerprint`, `dhcp_vendor`, `last_arp`, `last_dhcp`, `last_seen`, `lastskip`, `mac`, `machine_account`, `notes`, `pid`, `regdate`, `sessionid`, `status`, `time_balance`, `unregdate`, `user_agent`, `voip`) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )]{no, NULL, NULL, , 1, not-zammits-pc, 2017-12-11 15:14:53, Linux, 50, Ubuntu/Debian 5/Knoppix 6, NULL, , , 1,28,2,3,15,6,119,12,44,47,26,121,42, , 0000-00-00 00:00:00, 2017-12-12 07:53:18, 2017-12-12 07:53:18, 0000-00-00 00:00:00, 94:db:c9:38:85:5b, NULL, , jsemaan, 0000-00-00 00:00:00, , unreg, NULL, 0000-00-00 00:00:00, , no} (pf::dal::db_execute)
```
|
1.0
|
DAL: Blocked attempts to insert duplicate nodes by pfqueue when processing DHCP packets - Seems related to #2823
```
Dec 12 07:53:19 pf-julien pfqueue: pfqueue(4507) ERROR: [mac:94:db:c9:38:85:5b] Database query failed with non retryable error: Duplicate entry '94:db:c9:38:85:5b' for key 'PRIMARY' (errno: 1062) [INSERT INTO `node` ( `autoreg`, `bandwidth_balance`, `bypass_role_id`, `bypass_vlan`, `category_id`, `computername`, `detect_date`, `device_class`, `device_score`, `device_type`, `device_version`, `dhcp6_enterprise`, `dhcp6_fingerprint`, `dhcp_fingerprint`, `dhcp_vendor`, `last_arp`, `last_dhcp`, `last_seen`, `lastskip`, `mac`, `machine_account`, `notes`, `pid`, `regdate`, `sessionid`, `status`, `time_balance`, `unregdate`, `user_agent`, `voip`) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )]{no, NULL, NULL, , 1, not-zammits-pc, 2017-12-11 15:14:53, Linux, 50, Ubuntu/Debian 5/Knoppix 6, NULL, , , 1,28,2,3,15,6,119,12,44,47,26,121,42, , 0000-00-00 00:00:00, 2017-12-12 07:53:18, 2017-12-12 07:53:18, 0000-00-00 00:00:00, 94:db:c9:38:85:5b, NULL, , jsemaan, 0000-00-00 00:00:00, , unreg, NULL, 0000-00-00 00:00:00, , no} (pf::dal::db_execute)
```
|
priority
|
dal blocked attempts to insert duplicate nodes by pfqueue when processing dhcp packets seems related to dec pf julien pfqueue pfqueue error database query failed with non retryable error duplicate entry db for key primary errno no null null not zammits pc linux ubuntu debian knoppix null db null jsemaan unreg null no pf dal db execute
| 1
|
204,564
| 7,088,765,036
|
IssuesEvent
|
2018-01-11 22:48:51
|
terascope/teraslice
|
https://api.github.com/repos/terascope/teraslice
|
reopened
|
Roles of Job vs Execution Context
|
bug priority:high
|
This has been bothering me for a while, and I know I've brought it up before, but now I think I realize why the way we currently store and validate jobs is not quite right, given that we also have this notion of the execution context. The thing that bothers me is that a user's job gets validated and expanded and stored as a "Job" and then copied over as an execution context. I suggest that we change how this works and only ever store the user's "raw" (unvalidated) job on the job object, and then do the validation and expansion on the execution context. Conceptually speaking, the job represents the user's unmodified request, and the execution context represents the expansion of the job and the thing that actually gets executed.
The realization I had was that the validation/expansion step then more tightly couples the job to implementation details of a specific version of the code. You could be left with jobs that don't run after upgrading teraslice or with defaults from earlier versions. Whereas if the job had not been validated/expanded it would be more likely to run after code changes. Now, I don't consider this a real driving reason to make this change, it is more of a hint or clue that the current behavior is not quite right.
I realize this complicates the validation at submission time but that is probably manageable.
|
1.0
|
Roles of Job vs Execution Context - This has been bothering me for a while, and I know I've brought it up before, but now I think I realize why the way we currently store and validate jobs is not quite right, given that we also have this notion of the execution context. The thing that bothers me is that a user's job gets validated and expanded and stored as a "Job" and then copied over as an execution context. I suggest that we change how this works and only ever store the user's "raw" (unvalidated) job on the job object, and then do the validation and expansion on the execution context. Conceptually speaking, the job represents the user's unmodified request, and the execution context represents the expansion of the job and the thing that actually gets executed.
The realization I had was that the validation/expansion step then more tightly couples the job to implementation details of a specific version of the code. You could be left with jobs that don't run after upgrading teraslice or with defaults from earlier versions. Whereas if the job had not been validated/expanded it would be more likely to run after code changes. Now, I don't consider this a real driving reason to make this change, it is more of a hint or clue that the current behavior is not quite right.
I realize this complicates the validation at submission time but that is probably manageable.
|
priority
|
roles of job vs execution context this has been bothering me for a while and i know i ve brought it up before but now i think i realize the reason the way we currently store and validate jobs is not quite right given that we also have this notion of the execution context the thing that bothers me is that a users job gets validated and expanded and stored as a job and then copied over as an execution context i suggest that we change how this works and only ever store the users raw unvalidated job on the job object and then do the validation and expansion on the execution context conceptually speaking the job represents the users unmodified request and the execution context represents the the expansion of the job and thing that actually gets executed the realization i had was that the validation expansion step then more tightly couples the job to implementation details of a specific version of the code you could be left with jobs that don t run after upgrading teraslice or with defaults from earlier versions whereas if the job had not been validated expanded it would be more likely to run after code changes now i don t consider this a real driving reason to make this change it is more of a hint or clue that the current behavior is not quite right i realize this complicates the validation at submission time but that is probably manageable
| 1
|
127,826
| 5,039,710,834
|
IssuesEvent
|
2016-12-18 23:17:45
|
nohharri/GroupGenius
|
https://api.github.com/repos/nohharri/GroupGenius
|
opened
|
firebase/angular slow down
|
high priority
|
something is slow and not replacing as fast as it used to. Like on the public page, the group placeholder image sticks for about .2s after selecting a group. Same between the switch of "join a group" and "request pending"
|
1.0
|
firebase/angular slow down - something is slow and not replacing as fast as it used to. Like on the public page, the group placeholder image sticks for about .2s after selecting a group. Same between the switch of "join a group" and "request pending"
|
priority
|
firebase angular slow down something is slow and not replacing as fast as it used to like on the public page the group placeholder image sticks for about after selecting a group same between the switch of join a group and request pending
| 1
|
298,298
| 9,198,455,118
|
IssuesEvent
|
2019-03-07 12:41:01
|
CMDT/TimeSeriesDataCapture
|
https://api.github.com/repos/CMDT/TimeSeriesDataCapture
|
closed
|
Hardcoded values for credentials in BrowseData
|
Browse API High Priority bug
|
This will alleviate problems requiring fixes in [AliceLiveProjects](https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commits/master):
https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commit/50ed1ebce2d0d2d19c2c2117698d18a29e3cc606
https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commit/0943822bf3643e410da2897e1653d9437aee8e14
|
1.0
|
Hardcoded values for credentials in BrowseData - This will alleviate problems requiring fixes in [AliceLiveProjects](https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commits/master):
https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commit/50ed1ebce2d0d2d19c2c2117698d18a29e3cc606
https://github.com/aliceliveprojects/TimeSeriesDataCapture_BrowseData/commit/0943822bf3643e410da2897e1653d9437aee8e14
|
priority
|
hardcoded values for credentials in browsedata this will alleviate problems requiring fixes in
| 1
|
354,487
| 10,568,144,470
|
IssuesEvent
|
2019-10-06 10:50:04
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
closed
|
Slaves not connecting to master
|
bug priority/high
|
Hi!
Here is a new one:
Master stopped receiving stream data from half of slaves. And of course "I didn't touch anything!" (apart from memory mode = dbengine on Master) :)
Here are last logs from aforementioned v17 Slave
```
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:19: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:24: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:29: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:34: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
```
And same from one of the missing v13 Slaves:
```
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:22: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:27: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:32: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:37: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
```
Then I restarted netdata v17 Master and list of available slaves changed. Some became available again, others became unavailable.
Should I start a new issue?
Thanks!
_Originally posted by @noobiek in https://github.com/netdata/netdata/issues/6852#issuecomment-534935227_
|
1.0
|
Slaves not connecting to master - Hi!
Here is a new one:
Master stopped receiving stream data from half of slaves. And of course "I didn't touch anything!" (apart from memory mode = dbengine on Master) :)
Here are last logs from aforementioned v17 Slave
```
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:19: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:19: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:24: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:24: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:29: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:29: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:34: netdata INFO : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:34: netdata ERROR : STREAM_SENDER[wtdb1] : STREAM wtdb1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
```
And same from one of the missing v13 Slaves:
```
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:22: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:22: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:27: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:27: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:32: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:32: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: connecting...
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: initializing communication...
2019-09-25 12:09:37: netdata INFO : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: waiting response from remote netdata...
2019-09-25 12:09:37: netdata ERROR : STREAM_SENDER[wtapp1] : STREAM wtapp1 [send to _MASTER_IP_:19999]: server is not replying properly (is it a netdata?).
```
Then I restarted netdata v17 Master and list of available slaves changed. Some became available again, others became unavailable.
Should I start a new issue?
Thanks!
_Originally posted by @noobiek in https://github.com/netdata/netdata/issues/6852#issuecomment-534935227_
|
priority
|
slaves not connecting to master hi here is a new one master stopped receiving stream data from half of slaves and of course i didn t touch anything apart from memory mode dbengine on master here are last logs from aforementioned slave netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata and same from one of the missing slaves netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from 
remote netdata netdata error stream sender stream server is not replying properly is it a netdata netdata info stream sender stream connecting netdata info stream sender stream initializing communication netdata info stream sender stream waiting response from remote netdata netdata error stream sender stream server is not replying properly is it a netdata then i restarted netdata master and list of available slaves changed some became available again others became unavailable should i start a new issue thanks originally posted by noobiek in
| 1
|
213,989
| 7,262,411,470
|
IssuesEvent
|
2018-02-19 05:47:43
|
wso2/testgrid
|
https://api.github.com/repos/wso2/testgrid
|
opened
|
AWS region can be configured via infrastructureConfig now.
|
Priority/Highest Severity/Blocker Type/Bug
|
**Description:**
This can be done by setting the inputParameter 'region' under the `infrastructureConfig -> scripts`.
However, this does not work ATM. We need to look at why this does not happen.
**Affected Product Version:**
0.9.0-m14
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
Run `cloudformation-is` with the following testgrid.yaml.
```yaml
version: '0.9'
infrastructureConfig:
iacProvider: CLOUDFORMATION
infrastructureProvider: AWS
containerOrchestrationEngine: None
parameters:
- JDK : ORACLE_JDK8
provisioners:
- name: 01-two-node-deployment
description: Provision Infra for a two node IS cluster
dir: cloudformation-templates/pattern-1
scripts:
- name: infra-for-two-node-deployment
description: Creates infrastructure for an IS two node deployment.
type: CLOUDFORMATION
file: pattern-1-with-puppet-cloudformation.template.yml
inputParameters:
parseInfrastructureScript: false
region: us-east-2
DBPassword: "DB_Password"
EC2KeyPair: "testgrid-key"
ALBCertificateARN: "arn:aws:acm:us-east-1:809489900555:certificate/2ab5aded-5df1-4549-9f7e-91639ff6634e"
scenarioConfig:
scenarios:
- "scenario02"
- "scenario12"
- "scenario21"
```
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
|
1.0
|
AWS region can be configured via infrastructureConfig now. - **Description:**
This can be done by setting the inputParameter 'region' under the `infrastructureConfig -> scripts`.
However, this does not work ATM. We need to look at why this does not happen.
**Affected Product Version:**
0.9.0-m14
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
Run `cloudformation-is` with the following testgrid.yaml.
```yaml
version: '0.9'
infrastructureConfig:
iacProvider: CLOUDFORMATION
infrastructureProvider: AWS
containerOrchestrationEngine: None
parameters:
- JDK : ORACLE_JDK8
provisioners:
- name: 01-two-node-deployment
description: Provision Infra for a two node IS cluster
dir: cloudformation-templates/pattern-1
scripts:
- name: infra-for-two-node-deployment
description: Creates infrastructure for an IS two node deployment.
type: CLOUDFORMATION
file: pattern-1-with-puppet-cloudformation.template.yml
inputParameters:
parseInfrastructureScript: false
region: us-east-2
DBPassword: "DB_Password"
EC2KeyPair: "testgrid-key"
ALBCertificateARN: "arn:aws:acm:us-east-1:809489900555:certificate/2ab5aded-5df1-4549-9f7e-91639ff6634e"
scenarioConfig:
scenarios:
- "scenario02"
- "scenario12"
- "scenario21"
```
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
|
priority
|
aws region can be configured via infrastructureconfig now description this can be done by setting the inputparameter region under the infrastructureconfig scripts however this does not work atm we need to look at why this does not happen affected product version os db other environment details and versions steps to reproduce run cloudformation is with the following testgrid yaml yaml version infrastructureconfig iacprovider cloudformation infrastructureprovider aws containerorchestrationengine none parameters jdk oracle provisioners name two node deployment description provision infra for a two node is cluster dir cloudformation templates pattern scripts name infra for two node deployment description creates infrastructure for a is two node deployment type cloudformation file pattern with puppet cloudformation template yml inputparameters parseinfrastructurescript false region us east dbpassword db password testgrid key albcertificatearn arn aws acm us east certificate scenarioconfig scenarios related issues
| 1
|
595,991
| 18,093,441,481
|
IssuesEvent
|
2021-09-22 06:08:35
|
zulip/zulip-mobile
|
https://api.github.com/repos/zulip/zulip-mobile
|
opened
|
Make typeahead boxes more visible
|
help wanted a-compose/send P1 high-priority
|
At present, typeahead boxes are not visually distinct from the background in night mode. This is especially problematic when the box opens automatically when the user does not expect it, as it now does for topic typeahead when sending a message from an interleaved view.
Possible solutions are to give the box more of a border or a different background color. The light mode typeahead box has a more clear boundary, but perhaps would benefit from more distinct styling as well.
Screenshot:

[CZO thread](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/compose.20topic.20and.20navigation)
|
1.0
|
Make typeahead boxes more visible - At present, typeahead boxes are not visually distinct from the background in night mode. This is especially problematic when the box opens automatically when the user does not expect it, as it now does for topic typeahead when sending a message from an interleaved view.
Possible solutions are to give the box more of a border or a different background color. The light mode typeahead box has a more clear boundary, but perhaps would benefit from more distinct styling as well.
Screenshot:

[CZO thread](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/compose.20topic.20and.20navigation)
|
priority
|
make typeahead boxes more visible at present typeahead boxes are not visually distinct from the background in night mode this is especially problematic when the box opens automatically when the user does not expect it as it now does for topic typeahead when sending a message from an interleaved view possible solutions are to give the box more of a border or a different background color the light mode typeahead box has a more clear boundary but perhaps would benefit from more distinct styling as well screenshot
| 1
|
492,737
| 14,218,872,755
|
IssuesEvent
|
2020-11-17 12:28:21
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Correct word highlighting showing as incorrect in book, categorise showing all green when some incorrect
|
High Level Priority
|
<img width="1506" alt="Screenshot 2020-11-17 at 11 51 42" src="https://user-images.githubusercontent.com/59654112/99381401-68ffd180-28cb-11eb-88db-272b61f20565.png">
- [ ] highlighting validation/display
<img width="1180" alt="Screenshot 2020-11-17 at 11 51 54" src="https://user-images.githubusercontent.com/59654112/99381412-6d2bef00-28cb-11eb-8348-e763792eb720.png">ng
- [x] categorise validation/display
https://brillder.scholar6.org/post-play/brick/308/13
|
1.0
|
Correct word highlighting showing as incorrect in book, categorise showing all green when some incorrect - <img width="1506" alt="Screenshot 2020-11-17 at 11 51 42" src="https://user-images.githubusercontent.com/59654112/99381401-68ffd180-28cb-11eb-88db-272b61f20565.png">
- [ ] highlighting validation/display
<img width="1180" alt="Screenshot 2020-11-17 at 11 51 54" src="https://user-images.githubusercontent.com/59654112/99381412-6d2bef00-28cb-11eb-8348-e763792eb720.png">ng
- [x] categorise validation/display
https://brillder.scholar6.org/post-play/brick/308/13
|
priority
|
correct word highlighting showing as incorrect in book categorise showing all green when some incorrect img width alt screenshot at src highlighting validation display img width alt screenshot at src categorise validation display
| 1
|
688,108
| 23,548,667,965
|
IssuesEvent
|
2022-08-21 13:58:30
|
proveuswrong/webapp-news
|
https://api.github.com/repos/proveuswrong/webapp-news
|
opened
|
Implement Dispute Flow
|
priority: high type: enhancement
|
- Period tracking
- Evidence browsing and submission
- Appeal status and funding
- Pass arbitrator period
- Execute arbitrator ruling
- Draw jury
|
1.0
|
Implement Dispute Flow - - Period tracking
- Evidence browsing and submission
- Appeal status and funding
- Pass arbitrator period
- Execute arbitrator ruling
- Draw jury
|
priority
|
implement dispute flow period tracking evidence browsing and submission appeal status and funding pass arbitrator period execute arbitrator ruling draw jury
| 1
|
475,337
| 13,691,297,893
|
IssuesEvent
|
2020-09-30 15:22:38
|
CCAFS/MARLO
|
https://api.github.com/repos/CCAFS/MARLO
|
closed
|
[MR] (KDS-MARLO) Satisfaction Survey summary
|
Priority - High Type -Task
|
Analysis of the results of the MARLO satisfaction survey
- [x] Export the results
- [x] Made the analysis
- [x] Share the results
**Move to Review when:** share the results with Hector and team
**Move to Closed when:** send the email to MARLO Family
https://docs.google.com/document/d/1-8ZOLustcz3juFSr8KcCsMi2sXvEqpcL4bJiWY-JVuY/edit?usp=sharing
|
1.0
|
[MR] (KDS-MARLO) Satisfaction Survey summary - Analysis of the results of the MARLO satisfaction survey
- [x] Export the results
- [x] Made the analysis
- [x] Share the results
**Move to Review when:** share the results with Hector and team
**Move to Closed when:** send the email to MARLO Family
https://docs.google.com/document/d/1-8ZOLustcz3juFSr8KcCsMi2sXvEqpcL4bJiWY-JVuY/edit?usp=sharing
|
priority
|
kds marlo satisfaction survey summary analysis of the results of the marlo satisfaction survey export the results made the analysis share the results move to review when share the results with hector and team move to closed when send the email to marlo family
| 1
|
140,154
| 5,398,003,557
|
IssuesEvent
|
2017-02-27 15:58:27
|
jazzsequence/museum-core
|
https://api.github.com/repos/jazzsequence/museum-core
|
closed
|
fix call to 'register_sidebar'
|
deprecated priority-high
|
> The first call to 'register_sidebar' does not define an 'id' which is required as of WP 4.2.2 (or somewhere thereabout).
|
1.0
|
fix call to 'register_sidebar' - > The first call to 'register_sidebar' does not define an 'id' which is required as of WP 4.2.2 (or somewhere thereabout).
|
priority
|
fix call to register sidebar the first call to register sidebar does not define an id which is required as of wp or somewhere thereabout
| 1
|
19,572
| 2,622,153,867
|
IssuesEvent
|
2015-03-04 00:07:17
|
byzhang/terrastore
|
https://api.github.com/repos/byzhang/terrastore
|
closed
|
Upgrade to Terracotta 3.2.0.
|
auto-migrated Milestone-0.4 Priority-High Type-Enhancement
|
```
Upgrade to the latest Terracotta 3.2.0.
```
Original issue reported on code.google.com by `sergio.b...@gmail.com` on 14 Jan 2010 at 9:08
|
1.0
|
Upgrade to Terracotta 3.2.0. - ```
Upgrade to the latest Terracotta 3.2.0.
```
Original issue reported on code.google.com by `sergio.b...@gmail.com` on 14 Jan 2010 at 9:08
|
priority
|
upgrade to terracotta upgrade to the latest terracotta original issue reported on code google com by sergio b gmail com on jan at
| 1
|
352,630
| 10,544,327,717
|
IssuesEvent
|
2019-10-02 16:41:19
|
fac-17/My-Body-Back
|
https://api.github.com/repos/fac-17/My-Body-Back
|
closed
|
Create File Structure
|
Feature High Priority
|
- [x] Clone this repo
- [x] Create React App
- [x] Create folders & gitkeep files for initial push
Should be done after researching React Router #48
|
1.0
|
Create File Structure - - [x] Clone this repo
- [x] Create React App
- [x] Create folders & gitkeep files for initial push
Should be done after researching React Router #48
|
priority
|
create file structure clone this repo create react app create folders gitkeep files for initial push should be done after researching react router
| 1
|
788,508
| 27,755,304,120
|
IssuesEvent
|
2023-03-16 01:38:47
|
quickwit-oss/tantivy
|
https://api.github.com/repos/quickwit-oss/tantivy
|
closed
|
More Documents like this
|
good first issue high priority
|
It would be great to have a "MoreLikeThis" feature in Tantivy.
An efficient, effective "more-like-this" query generator would be a great contribution.
Elasticsearch and Lucene both support it:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-mlt-query.html
https://lucene.apache.org/core/7_2_0/queries/org/apache/lucene/queries/mlt/MoreLikeThis.html
> Note: this came up when trying to add support for Tantivy in [Django-Haystack](https://django-haystack.readthedocs.io/en/master/backend_support.html#backend-support-matrix)
If it's helpful, here's how Whoosh (pure search engine implemented in Python) is doing it:
https://github.com/mchaput/whoosh/blob/main/src/whoosh/searching.py#L543-L585
|
1.0
|
More Documents like this - It would be great to have a "MoreLikeThis" feature in Tantivy.
An efficient, effective "more-like-this" query generator would be a great contribution.
Elasticsearch and Lucene both support it:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-mlt-query.html
https://lucene.apache.org/core/7_2_0/queries/org/apache/lucene/queries/mlt/MoreLikeThis.html
> Note: this came up when trying to add support for Tantivy in [Django-Haystack](https://django-haystack.readthedocs.io/en/master/backend_support.html#backend-support-matrix)
If it's helpful, here's how Whoosh (pure search engine implemented in Python) is doing it:
https://github.com/mchaput/whoosh/blob/main/src/whoosh/searching.py#L543-L585
|
priority
|
more documents like this it would be great to have a morelikethis feature in tantivy an efficient effective more like this query generator would be a great contribution elasticsearch and lucene both support it note this came up when trying to add support for tantivy in if it s helpful here s how whoosh pure search engine implemented in python is doing it
| 1
|
101,005
| 4,105,744,350
|
IssuesEvent
|
2016-06-06 04:04:53
|
idevelopment/RingMe
|
https://api.github.com/repos/idevelopment/RingMe
|
opened
|
Get all agents on the index page
|
enhancement High Priority
|
Get all agents on the index page
When the customer clicks on the `callback button` we need to verify if the customer is registered.
If the customer is registered send the form data to the db and notify the agent to call the customer.
|
1.0
|
Get all agents on the index page - Get all agents on the index page
When the customer clicks on the `callback button` we need to verify if the customer is registered.
If the customer is registered send the form data to the db and notify the agent to call the customer.
|
priority
|
get all agents on the index page get all agents on the index page when the customer clicks on the callback button we need to verify if the customer is registered if the customer is registered send the form data to the db and notify the agent to call the customer
| 1
|
461,487
| 13,231,066,982
|
IssuesEvent
|
2020-08-18 10:57:59
|
CHOMPStation2/CHOMPStation2
|
https://api.github.com/repos/CHOMPStation2/CHOMPStation2
|
closed
|
Plethora of issues related to using_map.get_map_levels (maybe) making programs function on only one z-level
|
High Priority
|
Basically a lot of "check x over multiple z levels" programs now only function on the z-level said program (i.e. the computer using said program) is located in. From my testing so far I believe this is related to using_map.get_map_levels(). From my testing so far this includes:
-Crew monitor
-Atmosphere alarms
-Power monitor
-Network cards (modular computer wifi)
-Alarm handler
-Space z-level transferring at the edge
I have not tested everything which calls using_map.get_map_levels but suspect the list of broken programs is significantly longer.
This causes #470
|
1.0
|
Plethora of issues related to using_map.get_map_levels (maybe) making programs function on only one z-level - Basically a lot of "check x over multiple z levels" programs now only function on the z-level said program (i.e. the computer using said program) is located in. From my testing so far I believe this is related to using_map.get_map_levels(). From my testing so far this includes:
-Crew monitor
-Atmosphere alarms
-Power monitor
-Network cards (modular computer wifi)
-Alarm handler
-Space z-level transferring at the edge
I have not tested everything which calls using_map.get_map_levels but suspect the list of broken programs is significantly longer.
This causes #470
|
priority
|
plethora of issues related to using map get map levels maybe making programs function on only one z level basically a lot of check x over multiple z levels programs now only function on the z level said program i e the computer using said program is located in from my testing so far i believe this is related to using map get map levels from my testing so far this includes crew monitor atmosphere alarms power monitor network cards modular computer wifi alarm handler space z level transferring at the edge i have not tested everything which calls using map get map levels but suspect the list of broken programs is significantly longer this causes
| 1
|
141,951
| 5,447,436,314
|
IssuesEvent
|
2017-03-07 13:35:22
|
duckduckgo/zeroclickinfo-fathead
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead
|
closed
|
C++: CPP Reference Fathead
|
Difficulty: High Improvement Mission: Programming Priority: High Status: Needs a Developer Topic: C++
|
# Recreate the CPP Reference Fathead Instant Answer
Help us make DuckDuckGo the best search engine for programmers!
### What do I need to know?
You'll need to know how to code in **Perl**, **Python**, **Ruby**, or **JavaScript**.

### What am I doing?
You will write a script that scrapes or downloads the data source below, and generates an **output.txt** file containing the parsed documentation. You can learn more about Fatheads and the `output.txt` syntax [**here**](https://docs.duckduckhack.com/resources/fathead-overview.html).
**Bonus Info** 🚀 : This Fathead already exists, and it's awesome! We have decided to make it a candidate for deprecation so we can align the code to mirror other Fatheads. Further, we don't wish to rely on external parties to provide the data.
**Data source**: The same, but open to discussion.
<!-- ^^^ FILL THIS IN ^^^ -->
**Instant Answer Page**: https://duck.co/ia/view/cppreference_doc
<!-- ^^^ FILL THIS IN, AFTER ISSUE IS CLAIMED ^^^ -->
### What is the Goal?
As part of our [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53), we're aiming to reach 100% Instant Answer (IA) coverage for searches related to programming languages by creating new Instant Answers, and improving existing ones.
Here are some Fathead examples:
- Ruby Docs
- [Code](https://github.com/duckduckgo/zeroclickinfo-fathead/tree/master/lib/fathead/ruby) | [Example Query](https://duckduckgo.com/?q=array+bsearch&ia=about)
- MDN CSS
- [Code](https://github.com/duckduckgo/zeroclickinfo-fathead/tree/master/lib/fathead/mdn_css) | [Example Query](https://duckduckgo.com/?q=css+background-position&ia=about)

[See more related Instant Answers](https://duck.co/ia?repo=fathead)
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a new Instant Answer Page: https://duck.co/ia/new_ia (then let us know, here!)
- [ ] 5) Create the Fathead
- [ ] 6) Create a Pull Request
- [ ] 7) Ping @sahildua2305 for a review
<!-- ^^^ FILL THIS IN ^^^ -->
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
|
1.0
|
C++: CPP Reference Fathead - # Recreate the CPP Reference Fathead Instant Answer
Help us make DuckDuckGo the best search engine for programmers!
### What do I need to know?
You'll need to know how to code in **Perl**, **Python**, **Ruby**, or **JavaScript**.

### What am I doing?
You will write a script that scrapes or downloads the data source below, and generates an **output.txt** file containing the parsed documentation. You can learn more about Fatheads and the `output.txt` syntax [**here**](https://docs.duckduckhack.com/resources/fathead-overview.html).
**Bonus Info** 🚀 : This Fathead already exists, and it's awesome! We have decided to make it a candidate for deprecation so we can align the code to mirror other Fatheads. Further, we don't wish to rely on external parties to provide the data.
**Data source**: The same, but open to discussion.
<!-- ^^^ FILL THIS IN ^^^ -->
**Instant Answer Page**: https://duck.co/ia/view/cppreference_doc
<!-- ^^^ FILL THIS IN, AFTER ISSUE IS CLAIMED ^^^ -->
### What is the Goal?
As part of our [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53), we're aiming to reach 100% Instant Answer (IA) coverage for searches related to programming languages by creating new Instant Answers, and improving existing ones.
Here are some Fathead examples:
- Ruby Docs
- [Code](https://github.com/duckduckgo/zeroclickinfo-fathead/tree/master/lib/fathead/ruby) | [Example Query](https://duckduckgo.com/?q=array+bsearch&ia=about)
- MDN CSS
- [Code](https://github.com/duckduckgo/zeroclickinfo-fathead/tree/master/lib/fathead/mdn_css) | [Example Query](https://duckduckgo.com/?q=css+background-position&ia=about)

[See more related Instant Answers](https://duck.co/ia?repo=fathead)
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a new Instant Answer Page: https://duck.co/ia/new_ia (then let us know, here!)
- [ ] 5) Create the Fathead
- [ ] 6) Create a Pull Request
- [ ] 7) Ping @sahildua2305 for a review
<!-- ^^^ FILL THIS IN ^^^ -->
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
|
priority
|
c cpp reference fathead recreate the cpp reference fathead instant answer help us make duckduckgo the best search engine for programmers what do i need to know you ll need to know how to code in perl python ruby or javascript what am i doing you will write a script that scrapes or downloads the data source below and generates an output txt file containing the parsed documentation you can learn more about fatheads and the output txt syntax bonus info 🚀 this fathead already exists and it s awesome we have decided to make it a candidate for deprecation so we can align the code to mirror other fatheads further we don t wish to rely on external parties to provide the data data source the same but open to discussion instant answer page what is the goal as part of our we re aiming to reach instant answer ia coverage for searches related to programming languages by creating new instant answers and improving existing ones here are some fathead examples ruby docs mdn css get started claim this issue by commenting below review our and fork this repository create a new instant answer page then let us know here create the fathead create a pull request ping for a review resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help
| 1
|
817,171
| 30,629,046,098
|
IssuesEvent
|
2023-07-24 13:27:46
|
bigbluebutton/bigbluebutton
|
https://api.github.com/repos/bigbluebutton/bigbluebutton
|
closed
|
Whiteboard tools are working only on a small area of the presentation
|
priority: high module: client
|
**Describe the bug**
All whiteboard tools work only on a small area of the entire whiteboard in the Google Chrome browser. The problem persists with the default presentation and with an uploaded one. It started with the latest Chrome update and it appears on every PC of my team with an updated Chrome browser. The issue is not found on Edge, Opera, and Mozilla.
**Actual behavior**
moving the mouse outside the marked area of the screenshot, it looks like the whiteboard tools are deactivated and, for example, the hand changes to the usual mouse cursor.
**Screenshots**

**BBB version:**
BigBlueButton Server 2.3.10 (2419)
**Desktop :**
- OS: Windows 10
- Browser Chrome
- Version 114.0.5735.90 (Official Build) (64-bit)
|
1.0
|
Whiteboard tools are working only on a small area of the presentation - **Describe the bug**
All whiteboard tools work only on a small area of the entire whiteboard in the Google Chrome browser. The problem persists with the default presentation and with an uploaded one. It started with the latest Chrome update and it appears on every PC of my team with an updated Chrome browser. The issue is not found on Edge, Opera, and Mozilla.
**Actual behavior**
moving the mouse outside the marked area of the screenshot, it looks like the whiteboard tools are deactivated and, for example, the hand changes to the usual mouse cursor.
**Screenshots**

**BBB version:**
BigBlueButton Server 2.3.10 (2419)
**Desktop :**
- OS: Windows 10
- Browser Chrome
- Version 114.0.5735.90 (Official Build) (64-bit)
|
priority
|
whiteboard tools are working only on a small area of the presentation describe the bug all whiteboard tools work only on a small area of the entire whiteboard in google chrome browser the problem persists with the default presentation and with an uploaded one it started with the latest chrome update and it appears on every pc of my team with an updated chrome browser the issue is not found on edge opera and mozilla actual behavior moving the mouse outside the marked area of the screenshot it looks like the whiteboard tools are deactivated and for example the hand changes to the usual mouse cursor screenshots bbb version bigbluebutton server desktop os windows browser chrome version official build bit
| 1
|
381,638
| 11,277,535,180
|
IssuesEvent
|
2020-01-15 03:11:49
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
closed
|
Supervisors download all task documents from CHWs they supervise
|
Priority: 1 - High Type: Bug
|
**Describe the bug**
As part of Rules-Engine v2, tasks are now stored to disk and replicated up and down.
However, a stored task has this structure:
```
{
"type": "task",
"authoredOn": "<timestamp>",
"state": "<some_task_state>"
"stateHistory": [{ "state": "<some_task_state>", "timestamp": "<timestamp>" }, ...],
"user": "<users's contact document id>",
"requester": "<taskEmission.doc.contact._id>",
"owner": "<taskEmission.contact._id>",
"emission": { ... emission data }
}
```
relevant code here: https://github.com/medic/cht-core/blob/master/shared-libs/rules-engine/src/transform-task-emission-to-doc.js#L26
The `user` field is used for determining replication permissions for the doc and is the value emitted in `medic/docs_by_replication_key`: https://github.com/medic/cht-core/blob/master/ddocs/medic/views/docs_by_replication_key/map.js#L71 .
Because the field contains the uuid of the user's contact doc, supervisors are most likely allowed to view the people they supervise, hence they will download all the tasks of these users.
**To Reproduce**
Using "legacy" hierarchy:
1. Create a CHW under a `health_center`, add some families and people to the families and add at least one report that would generate a task. Sync!
2. Create a Supervisor above the CHW, in `district_hospital` level and give it `replication_depth` = 2. This means that he will download the `health_center`, the CHW Contact document and the families, but none of the people in the families or their reports.
3. Log in with the Supervisor account and check the tasks tab. Notice that you see the same task as the "CHW".
**Expected behavior**
I believe it is intended for supervisors not to download supervisee tasks, as that could prove severely detrimental for their devices performance.
**Environment**
- App: webapp
- Version: 3.8.x
**Additional context**
It appears that simply switching the `task.user` field to contain the actual user-settings document id (`org:couchdb:user:<username>`) would solve the replication issue, as only the users themselves have access to the user-settings docs.
If, in the future, we want to have the ability to generate tasks for other users, not based on their username, we could additionally have `medic/docs_by_replication_key` emit an additional value that would send the task to the correct people - for example emit the `owner` and have the task downloaded to everyone who sees the person.
|
1.0
|
Supervisors download all task documents from CHWs they supervise -
**Describe the bug**
As part of Rules-Engine v2, tasks are now stored to disk and replicated up and down.
However, a stored task has this structure:
```
{
"type": "task",
"authoredOn": "<timestamp>",
"state": "<some_task_state>"
"stateHistory": [{ "state": "<some_task_state>", "timestamp": "<timestamp>" }, ...],
"user": "<users's contact document id>",
"requester": "<taskEmission.doc.contact._id>",
"owner": "<taskEmission.contact._id>",
"emission": { ... emission data }
}
```
relevant code here: https://github.com/medic/cht-core/blob/master/shared-libs/rules-engine/src/transform-task-emission-to-doc.js#L26
The `user` field is used for determining replication permissions for the doc and is the value emitted in `medic/docs_by_replication_key`: https://github.com/medic/cht-core/blob/master/ddocs/medic/views/docs_by_replication_key/map.js#L71 .
Because the field contains the uuid of the user's contact doc, supervisors are most likely allowed to view the people they supervise, hence they will download all the tasks of these users.
**To Reproduce**
Using "legacy" hierarchy:
1. Create a CHW under a `health_center`, add some families and people to the families and add at least one report that would generate a task. Sync!
2. Create a Supervisor above the CHW, in `district_hospital` level and give it `replication_depth` = 2. This means that he will download the `health_center`, the CHW Contact document and the families, but none of the people in the families or their reports.
3. Log in with the Supervisor account and check the tasks tab. Notice that you see the same task as the "CHW".
**Expected behavior**
I believe it is intended for supervisors not to download supervisee tasks, as that could prove severely detrimental for their devices performance.
**Environment**
- App: webapp
- Version: 3.8.x
**Additional context**
It appears that simply switching the `task.user` field to contain the actual user-settings document id (`org:couchdb:user:<username>`) would solve the replication issue, as only the users themselves have access to the user-settings docs.
If, in the future, we want to have the ability to generate tasks for other users, not based on their username, we could additionally have `medic/docs_by_replication_key` emit an additional value that would send the task to the correct people - for example emit the `owner` and have the task downloaded to everyone who sees the person.
|
priority
|
supervisors download all task documents from chws they supervise describe the bug as part of rules engine tasks are now stored to disk and replicated up and down however a stored task has this structure type task authoredon state statehistory user requester owner emission emission data relevant code here the user field is used for determining replication permissions for the doc and is the value emitted in medic docs by replication key because the field contains the uuid of the user s contact doc supervisors are most likely allowed to view the people they supervise hence they will download all the tasks of these users to reproduce using legacy hierarchy create a chw under a health center add some families and people to the families and add at least one report that would generate a task sync create a supervisor above the chw in district hospital level and give it replication depth this means that he will download the health center the chw contact document and the families but none of the people in the families or their reports log in with the supervisor account and check the tasks tab notice that you see the same task as the chw expected behavior i believe it is intended for supervisors not to download supervisee tasks as that could prove severely detrimental for their devices performance environment app webapp version x additional context it appears that simply switching the task user field to contain the actual user settings document id org couchdb user would solve the replication issue as only the users themselves have access to the user settings docs if in the future we want to have the ability to generate tasks for other users not based on their username we could additionally have medic docs by replication key emit an additional value that would send the task to the correct people for example emit the owner and have the task downloaded to everyone who sees the person
| 1
|
192,973
| 6,877,599,646
|
IssuesEvent
|
2017-11-20 08:44:39
|
OpenNebula/one
|
https://api.github.com/repos/OpenNebula/one
|
opened
|
Add Sunstone option to start websocketproxy.py with -v
|
Category: Sunstone Priority: High Status: Pending Tracker: Backlog
|
---
Author Name: **Arnold Bechtoldt** (Arnold Bechtoldt)
Original Redmine Issue: 3613, https://dev.opennebula.org/issues/3613
Original Date: 2015-02-18
---
None
|
1.0
|
Add Sunstone option to start websocketproxy.py with -v - ---
Author Name: **Arnold Bechtoldt** (Arnold Bechtoldt)
Original Redmine Issue: 3613, https://dev.opennebula.org/issues/3613
Original Date: 2015-02-18
---
None
|
priority
|
add sunstone option to start websocketproxy py with v author name arnold bechtoldt arnold bechtoldt original redmine issue original date none
| 1
|
98,826
| 4,031,734,195
|
IssuesEvent
|
2016-05-18 18:10:50
|
SuLab/mark2cure
|
https://api.github.com/repos/SuLab/mark2cure
|
closed
|
Fix concept definition links for relation training
|
high priority ux
|
@x0xMaximus can you please add these links to this page https://mark2cure.org/training/relation/3/step/1/? They are modals and there is some sort of inheritance thing that is causing problems to add these.
This is what the page should look like:

links are the relation modals for each concept like this: https://github.com/SuLab/mark2cure/blob/master/mark2cure/instructions/templates/instructions/drug-definitions-relation-modal.jade
|
1.0
|
Fix concept definition links for relation training - @x0xMaximus can you please add these links to this page https://mark2cure.org/training/relation/3/step/1/? They are modals and there is some sort of inheritance thing that is causing problems to add these.
This is what the page should look like:

links are the relation modals for each concept like this: https://github.com/SuLab/mark2cure/blob/master/mark2cure/instructions/templates/instructions/drug-definitions-relation-modal.jade
|
priority
|
fix concept definition links for relation training can you please add these links to this page they are modals and there is some sort of inheritance thing that is causing problems to add these this is what the page should look like links are the relation modals for each concept like this
| 1
|
577,024
| 17,102,009,990
|
IssuesEvent
|
2021-07-09 12:39:38
|
Codethulhu03/UAV
|
https://api.github.com/repos/Codethulhu03/UAV
|
closed
|
Reset Button in connect drone ui: What is its function? It crashes the program
|
bug gui high priority question
|
line 297, in reset
if self.simulationCheck.isChecked():
AttributeError: 'Ui_ConnectDrone' object has no attribute 'simulationCheck'
|
1.0
|
Reset Button in connect drone ui: What is its function? It crashes the program - line 297, in reset
if self.simulationCheck.isChecked():
AttributeError: 'Ui_ConnectDrone' object has no attribute 'simulationCheck'
|
priority
|
reset button in connect drone ui what is its function it crashes the program line in reset if self simulationcheck ischecked attributeerror ui connectdrone object has no attribute simulationcheck
| 1
|
149,753
| 5,725,152,301
|
IssuesEvent
|
2017-04-20 15:57:03
|
fedora-infra/bodhi
|
https://api.github.com/repos/fedora-infra/bodhi
|
closed
|
[RFE] Expire overrides using CLI.
|
Client High priority RFE
|
It seems it is not possible to expire override using CLI. The web UI seems to be the only option ATM. Please consider adding this feature.
```
$ rpm -q bodhi-client
bodhi-client-2.2.4-1.fc26.noarch
```
I noted this originally here:
https://bugzilla.redhat.com/show_bug.cgi?id=1366114#c6
|
1.0
|
[RFE] Expire overrides using CLI. - It seems it is not possible to expire override using CLI. The web UI seems to be the only option ATM. Please consider adding this feature.
```
$ rpm -q bodhi-client
bodhi-client-2.2.4-1.fc26.noarch
```
I noted this originally here:
https://bugzilla.redhat.com/show_bug.cgi?id=1366114#c6
|
priority
|
expire overrides using cli it seems it is not possible to expire override using cli the web ui seems to be the only option atm please consider adding this feature rpm q bodhi client bodhi client noarch i noted this originally here
| 1
|
487,728
| 14,058,813,836
|
IssuesEvent
|
2020-11-03 01:14:12
|
metwork-framework/mfdata
|
https://api.github.com/repos/metwork-framework/mfdata
|
closed
|
[switch_rules:alwaystrue] with multiple steps - Questions
|
Priority: High Status: In Progress Type: Bug backport-to-1.0
|
I'm trying to use [switch_rules:alwaystrue] in my foo plugin with multiple steps : main and other
My config.ini (according to the documentation with multiple steps)
[switch_rules:alwaystrue]
*=main, other
With this configuration, switch fails : Bad syntax
```
2020-10-29T10:26:51.214605Z [ERROR] (switch/rules#13532) bad action [other] for section [switch_rules_foo:alwaystrue] and pattern: * {path=/home/dearith10/metwork/mfdata/tmp/config_auto/plugin_switch_rules.ini}
Traceback (most recent call last):
File "/opt/metwork-mfext-1.0/opt/python3/bin/switch_step", line 11, in <module>
sys.exit(main())
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_step.py", line 110, in main
x.run()
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/step.py", line 526, in run
self._init()
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_step.py", line 35, in _init
r, self.args.switch_section_prefix)
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_rules.py", line 305, in read
raise BadSyntax()
acquisition.switch_rules.BadSyntax
```
The plugin_switch_rules.ini :
```
# GENERATED FILE
# <CONTRIBUTION OF foo PLUGIN>
[switch_rules_foo:alwaystrue]
* = foo/main, other
# </CONTRIBUTION OF foo PLUGIN>
# <CONTRIBUTION OF ungzip PLUGIN>
[switch_rules_ungzip:fnmatch:latest.guess_file_type.main.system_magic]
gzip compressed data* = ungzip/main
# </CONTRIBUTION OF ungzip PLUGIN>
```
Now, if I set my switch rules like this:
```
[switch_rules:alwaystrue]
*=foo/main, foo/other
```
It works,
Is this an issue or am I wrong? What is the correct way to configure rules with multiple steps?
|
1.0
|
[switch_rules:alwaystrue] with multiple steps - Questions - I'm trying to use [switch_rules:alwaystrue] in my foo plugin with multiple steps : main and other
My config.ini (according to the documentation with multiple steps)
[switch_rules:alwaystrue]
*=main, other
With this configuration, switch fails : Bad syntax
```
2020-10-29T10:26:51.214605Z [ERROR] (switch/rules#13532) bad action [other] for section [switch_rules_foo:alwaystrue] and pattern: * {path=/home/dearith10/metwork/mfdata/tmp/config_auto/plugin_switch_rules.ini}
Traceback (most recent call last):
File "/opt/metwork-mfext-1.0/opt/python3/bin/switch_step", line 11, in <module>
sys.exit(main())
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_step.py", line 110, in main
x.run()
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/step.py", line 526, in run
self._init()
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_step.py", line 35, in _init
r, self.args.switch_section_prefix)
File "/opt/metwork-mfext-1.0/opt/python3/lib/python3.7/site-packages/acquisition/switch_rules.py", line 305, in read
raise BadSyntax()
acquisition.switch_rules.BadSyntax
```
The plugin_switch_rules.ini :
```
# GENERATED FILE
# <CONTRIBUTION OF foo PLUGIN>
[switch_rules_foo:alwaystrue]
* = foo/main, other
# </CONTRIBUTION OF foo PLUGIN>
# <CONTRIBUTION OF ungzip PLUGIN>
[switch_rules_ungzip:fnmatch:latest.guess_file_type.main.system_magic]
gzip compressed data* = ungzip/main
# </CONTRIBUTION OF ungzip PLUGIN>
```
Now, if I set my switch rules like this:
```
[switch_rules:alwaystrue]
*=foo/main, foo/other
```
It works,
Is this an issue or am I wrong? What is the correct way to configure rules with multiple steps?
|
priority
|
with multiple steps questions i m trying to use in my foo plugin with multiple steps main and other my config ini according to the documentation with multiple steps main other with this configuration switch fails bad syntax switch rules bad action for section and pattern path home metwork mfdata tmp config auto plugin switch rules ini traceback most recent call last file opt metwork mfext opt bin switch step line in sys exit main file opt metwork mfext opt lib site packages acquisition switch step py line in main x run file opt metwork mfext opt lib site packages acquisition step py line in run self init file opt metwork mfext opt lib site packages acquisition switch step py line in init r self args switch section prefix file opt metwork mfext opt lib site packages acquisition switch rules py line in read raise badsyntax acquisition switch rules badsyntax the plugin switch rules ini generated file foo main other gzip compressed data ungzip main now if i set my switch rules like this foo main foo other it works is it an issue or am i wrong what is the correct way to configure rules with multiple steps
| 1
|
443,848
| 12,800,328,759
|
IssuesEvent
|
2020-07-02 16:53:38
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
When gdpr option is enabled the site is becoming unclickable in browser Safari on IOS and MacOS
|
Urgent [Priority: HIGH] bug
|
When testing with Google Chrome and Microsoft Edge everything works fine with the GDPR popup
It's only when using the Safari browser on iOS and macOS that the problem exists.
https://secure.helpscout.net/conversation/1191220977/135300?folderId=2632030
The issue is occurring in our staging as well. When GDPR is enabled the site becomes unclickable.
https://monosnap.com/file/6vm5nUGhKdm4lZ3zq2zL2qafkE0rtl
https://wordpress-123147-847862.cloudwaysapps.com/amp/
|
1.0
|
When gdpr option is enabled the site is becoming unclickable in browser Safari on IOS and MacOS - When testing with Google Chrome and Microsoft Edge everything works fine with the GDPR popup
It's only when using the Safari browser on iOS and macOS that the problem exists.
https://secure.helpscout.net/conversation/1191220977/135300?folderId=2632030
The issue is occurring in our staging as well. When GDPR is enabled the site becomes unclickable.
https://monosnap.com/file/6vm5nUGhKdm4lZ3zq2zL2qafkE0rtl
https://wordpress-123147-847862.cloudwaysapps.com/amp/
|
priority
|
when gdpr option is enabled the site is becoming unclickable in browser safari on ios and macos when testing with google chrome and microsoft edge everything works fine with the gdpr popup it s only when using the browsers safari on ios and macos the problem exists the issue is occurring in our staging as well when gdpr is enabled the site is becoming unclickable
| 1
|
6,544
| 2,589,165,544
|
IssuesEvent
|
2015-02-18 10:18:58
|
olga-jane/prizm
|
https://api.github.com/repos/olga-jane/prizm
|
closed
|
Unhandled NullReferenceException in Settings->Pipe form
|
bug bug - crash/performance/leak bug - validation Coding HIGH priority Settings
|
Steps to reproduce:
1) Go to the Settings->Pipe form
2) Create a new pipe size parameter
3) Fill in the pipe's diameter, wall thickness, pipe length and seam type boxes.
4) If you press save button now it will show you a warning which says that you haven't fill in all the boxes.
5) Cross mark will be shown near the last record in the inspection operations grid. So let us fill it. Select last record and click the Edit button.
6) In the edition form fill in the Code and Name boxes and press Save.
7) Now press Save button on the Settings->Pipe form.
Expected result:
A warning which says to fill in all the fields in records.
Actual result:
An actual attempt to save records in database and unhandled NullReferenceException.
Additional info:
The long story of the exception, shown in error message:
System.NullReferenceException: Ссылка на объект не указывает на экземпляр объекта.
в Prizm.Main.Forms.Settings.SaveSettingsCommand.SaveMillSizeTypes()
в Prizm.Main.Forms.Settings.SaveSettingsCommand.Execute()
в Prizm.Main.Commands.CommandInfo.SimpleButtonAttacher.btn_Click(Object sender, EventArgs e)
в System.Windows.Forms.Control.OnClick(EventArgs e)
в DevExpress.XtraEditors.BaseButton.OnClick(EventArgs e)
в DevExpress.XtraEditors.BaseButton.OnMouseUp(MouseEventArgs e)
в System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
в System.Windows.Forms.Control.WndProc(Message& m)
в DevExpress.Utils.Controls.ControlBase.WndProc(Message& m)
в DevExpress.XtraEditors.BaseControl.WndProc(Message& msg)
в System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
в System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
в System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Screenshot:

|
1.0
|
Unhandled NullReferenceException in Settings->Pipe form - Steps to reproduce:
1) Go to the Settings->Pipe form
2) Create a new pipe size parameter
3) Fill in the pipe's diameter, wall thickness, pipe length and seam type boxes.
4) If you press save button now it will show you a warning which says that you haven't fill in all the boxes.
5) Cross mark will be shown near the last record in the inspection operations grid. So let us fill it. Select last record and click the Edit button.
6) In the edition form fill in the Code and Name boxes and press Save.
7) Now press Save button on the Settings->Pipe form.
Expected result:
A warning which says to fill in all the fields in records.
Actual result:
An actual attempt to save records in database and unhandled NullReferenceException.
Additional info:
The long story of the exception, shown in error message:
System.NullReferenceException: Ссылка на объект не указывает на экземпляр объекта.
в Prizm.Main.Forms.Settings.SaveSettingsCommand.SaveMillSizeTypes()
в Prizm.Main.Forms.Settings.SaveSettingsCommand.Execute()
в Prizm.Main.Commands.CommandInfo.SimpleButtonAttacher.btn_Click(Object sender, EventArgs e)
в System.Windows.Forms.Control.OnClick(EventArgs e)
в DevExpress.XtraEditors.BaseButton.OnClick(EventArgs e)
в DevExpress.XtraEditors.BaseButton.OnMouseUp(MouseEventArgs e)
в System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
в System.Windows.Forms.Control.WndProc(Message& m)
в DevExpress.Utils.Controls.ControlBase.WndProc(Message& m)
в DevExpress.XtraEditors.BaseControl.WndProc(Message& msg)
в System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
в System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
в System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Screenshot:

|
priority
|
unhandled nullreferenceexception in settings pipe form steps to reproduce go to the settings pipe form create a new pipe size parameter fill in the pipe s diameter wall thickness pipe length and seam type boxes if you press save button now it will show you a warning which says that you haven t fill in all the boxes cross mark will be shown near the last record in the inspection operations grid so let us fill it select last record and click the edit button in the edition form fill in the code and name boxes and press save now press save button on the settings pipe form expected result a warning which says to fill in all the fields in records actual result an actual attempt to save records in database and unhandled nullreferenceexception additional info the long story of the exception shown in error message system nullreferenceexception ссылка на объект не указывает на экземпляр объекта в prizm main forms settings savesettingscommand savemillsizetypes в prizm main forms settings savesettingscommand execute в prizm main commands commandinfo simplebuttonattacher btn click object sender eventargs e в system windows forms control onclick eventargs e в devexpress xtraeditors basebutton onclick eventargs e в devexpress xtraeditors basebutton onmouseup mouseeventargs e в system windows forms control wmmouseup message m mousebuttons button clicks в system windows forms control wndproc message m в devexpress utils controls controlbase wndproc message m в devexpress xtraeditors basecontrol wndproc message msg в system windows forms control controlnativewindow onmessage message m в system windows forms control controlnativewindow wndproc message m в system windows forms nativewindow callback intptr hwnd msg intptr wparam intptr lparam screenshot
| 1
|
496,104
| 14,332,356,210
|
IssuesEvent
|
2020-11-27 02:12:18
|
rich-iannone/pointblank
|
https://api.github.com/repos/rich-iannone/pointblank
|
opened
|
Give the `snip_list()` function more options
|
Difficulty: [3] Advanced Effort: [3] High Priority: [3] High Type: ★ Enhancement
|
Right now, the `snip_list()` function (usable within `info_snippet()`) has a dearth of options for generating a list (with only `limit` available). This should be expanded so that lists generated from column data in information reports:
1. have a better default appearance
2. can be easily customized
3. can be localized to different languages
As a starting point, the following options might be included:
- `sep`: the separator to use between items (default: `","`)
- `and_or`: the conjunction to use in a list with items > 2; could be `NULL` (sets to `"and"`), `"and"`, `"or"`, or `""` (default: `NULL`)
- `oxford`: whether a list with items >2 should use the mandatory serial comma, only when the language is English (default: `TRUE`)
- `as_code`: whether to set the items in a code font (default: `TRUE`)
- `quot_str`: whether to use quotation marks around each list item; could be `TRUE`/`FALSE` but `NULL` decides based on the vector type (default: `NULL`)
|
1.0
|
Give the `snip_list()` function more options - Right now, the `snip_list()` function (usable within `info_snippet()`) has a dearth of options for generating a list (with only `limit` available). This should be expanded so that lists generated from column data in information reports:
1. have a better default appearance
2. can be easily customized
3. can be localized to different languages
As a starting point, the following options might be included:
- `sep`: the separator to use between items (default: `","`)
- `and_or`: the conjunction to use in a list with items > 2; could be `NULL` (sets to `"and"`), `"and"`, `"or"`, or `""` (default: `NULL`)
- `oxford`: whether a list with items >2 should use the mandatory serial comma, only when the language is English (default: `TRUE`)
- `as_code`: whether to set the items in a code font (default: `TRUE`)
- `quot_str`: whether to use quotation marks around each list item; could be `TRUE`/`FALSE` but `NULL` decides based on the vector type (default: `NULL`)
|
priority
|
give the snip list function more options right now the snip list function usable within info snippet has a dearth of options for generating a list with only limit available this should be expanded so that lists generated from column data in information reports have a better default appearance can be easily customized can be localized to different languages as a starting point the following options might be included sep the separator to use between items default and or the conjunction to use in a list with items could be null sets to and and or or default null oxford whether a list with items should use the mandatory serial comma only when the language is english default true as code whether to set the items in a code font default true quot str whether to use quotation marks around each list item could be true false but null decides based on the vector type default null
| 1
|
649,100
| 21,218,072,805
|
IssuesEvent
|
2022-04-11 09:16:43
|
geosolutions-it/MapStore2-C027
|
https://api.github.com/repos/geosolutions-it/MapStore2-C027
|
closed
|
c027 GN is always returning metadata in ISO format instead of DC
|
Priority: High C027-COMUNE_FI-2021-SUPPORT investigation
|
By default [GN](http://sr-vm378-sitgfn.comune.intranet:9080/geonetwork) returns metadata in ISO format instead of DC as expected by MS: the result is the same even if we specify the outputSchema in the call as follows:
**URL:**
`http://sr-vm378-sitgfn.comune.intranet:9080/geonetwork/srv/ita/csw?service=CSW&version=2.0.2`
**Body:**
```
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:gmd="http://www.isotc211.org/2005/gmd"
xmlns:gco="http://www.isotc211.org/2005/gco"
xmlns:gmi="http://www.isotc211.org/2005/gmi"
xmlns:ows="http://www.opengis.net/ows"
outputSchema="http://www.opengis.net/cat/csw/2.0.2" service="CSW" version="2.0.2" resultType="results" startPosition="1" maxRecords="4">
<csw:Query typeNames="csw:Record">
<csw:ElementSetName>full</csw:ElementSetName>
<csw:Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>csw:AnyText</ogc:PropertyName>
<ogc:Literal>%verdi%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</csw:Constraint>
</csw:Query>
</csw:GetRecords>
```
Below is the preview of the response:

|
1.0
|
c027 GN is always returning metadata in ISO format instead of DC - By default [GN](http://sr-vm378-sitgfn.comune.intranet:9080/geonetwork) returns metadata in ISO format instead of DC as expected by MS: the result is the same even if we specify the outputSchema in the call as follows:
**URL:**
`http://sr-vm378-sitgfn.comune.intranet:9080/geonetwork/srv/ita/csw?service=CSW&version=2.0.2`
**Body:**
```
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:gmd="http://www.isotc211.org/2005/gmd"
xmlns:gco="http://www.isotc211.org/2005/gco"
xmlns:gmi="http://www.isotc211.org/2005/gmi"
xmlns:ows="http://www.opengis.net/ows"
outputSchema="http://www.opengis.net/cat/csw/2.0.2" service="CSW" version="2.0.2" resultType="results" startPosition="1" maxRecords="4">
<csw:Query typeNames="csw:Record">
<csw:ElementSetName>full</csw:ElementSetName>
<csw:Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>csw:AnyText</ogc:PropertyName>
<ogc:Literal>%verdi%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</csw:Constraint>
</csw:Query>
</csw:GetRecords>
```
Below is the preview of the response:

|
priority
|
gn is always returning metadata in iso format instead of dc by default returns metadata in iso format instead of dc as expected by ms the result is the same even if we specify the outputschema in the call as follows url body csw getrecords xmlns csw xmlns ogc xmlns gml xmlns dc xmlns dct xmlns gmd xmlns gco xmlns gmi xmlns ows outputschema service csw version resulttype results startposition maxrecords full csw anytext verdi below is the preview of the response
| 1
|
23,414
| 2,659,228,566
|
IssuesEvent
|
2015-03-18 19:48:29
|
IQSS/dataverse
|
https://api.github.com/repos/IQSS/dataverse
|
closed
|
Email validation failed on Dataverse creation (worth flagging this again)
|
Component: Metadata Component: UX & Upgrade Priority: High Status: QA Type: Bug
|
Accidentally typed in this email address and it went through: ```pete@malinator.com.blah.ha^```
A "^" is allowed in the first part of an address, before the "@", but it shouldn't be in the domain name.
Here is the regular expression code for a heavily used* email validation routine that may be adapted for the system:
[email validation code in github](https://github.com/django/django/blob/master/django/core/validators.py#L119)
(*Used by edX, HarvardX, pinterest, etc)
Related ticket: #364
|
1.0
|
Email validation failed on Dataverse creation (worth flagging this again) - Accidentally typed in this email address and it went through: ```pete@malinator.com.blah.ha^```
A "^" is allowed in the first part of an address, before the "@", but it shouldn't be in the domain name.
Here is the regular expression code for a heavily used* email validation routine that may be adapted for the system:
[email validation code in github](https://github.com/django/django/blob/master/django/core/validators.py#L119)
(*Used by edX, HarvardX, pinterest, etc)
Related ticket: #364
|
priority
|
email validation failed on dataverse creation worth flagging this again accidentally typed in this email address and it went through pete malinator com blah ha a is allowed in the first part of an address before the but it shouldn t be in the domain name here is the regular expression code for a heavily used email validation routine that may be adapted for the system used by edx harvardx pinterest etc related ticket
| 1
|
343,299
| 10,327,765,761
|
IssuesEvent
|
2019-09-02 07:53:33
|
rsx-labs/aide-frontend
|
https://api.github.com/repos/rsx-labs/aide-frontend
|
closed
|
[Attendance] Once an employee is shown as present, filed leaves for the same day does not reflect in attendance.
|
High Priority bug
|
**Describe the bug**
Employee is shown as present in attendance for some reason.
SL/VL is filed for the same day.
Attendance still shows employee as present.
**Expected behavior**
Attendance should also consider filed leaves.
**Screenshots**
If applicable, add screenshots to help explain your problem.


**Version (please complete the following information):**
- Version 2.6
**Additional context**
Add any other context about the problem here.
|
1.0
|
[Attendance] Once an employee is shown as present, filed leaves for the same day does not reflect in attendance. - **Describe the bug**
Employee is shown as present in attendance for some reason.
SL/VL is filed for the same day.
Attendance still shows employee as present.
**Expected behavior**
Attendance should also consider filed leaves.
**Screenshots**
If applicable, add screenshots to help explain your problem.


**Version (please complete the following information):**
- Version 2.6
**Additional context**
Add any other context about the problem here.
|
priority
|
once an employee is shown as present filed leaves for the same day does not reflect in attendance describe the bug employee is shown as present in attendance for some reason sl vl is filed for the same day attendance still shows employee as present expected behavior attendance should also consider filed leaves screenshots if applicable add screenshots to help explain your problem version please complete the following information version additional context add any other context about the problem here
| 1
|
589,863
| 17,762,078,288
|
IssuesEvent
|
2021-08-29 22:01:54
|
OpenTabletDriver/OpenTabletDriver
|
https://api.github.com/repos/OpenTabletDriver/OpenTabletDriver
|
closed
|
0.5.3.3 deb package does not contain a udev rules file
|
bug priority:high
|
## Description
The [0.5.3.3 release](https://github.com/OpenTabletDriver/OpenTabletDriver/releases/tag/v0.5.3.3) for Debian (`OpenTabletDriver.deb`) does not contain a udev file and therefore causes permission issues for Ubuntu users.
The 0.5.3.2 package does not have this issue.
## System Information:
<!-- Please fill out this information -->
Not my personal info, but it affects:
| Name | Value |
| ---------------- | ----- |
| Operating System | Debian/Ubuntu |
| Software Version | 0.5.3.3 |
| Tablet | All |
|
1.0
|
0.5.3.3 deb package does not contain a udev rules file - ## Description
The [0.5.3.3 release](https://github.com/OpenTabletDriver/OpenTabletDriver/releases/tag/v0.5.3.3) for Debian (`OpenTabletDriver.deb`) does not contain a udev file and therefore causes permission issues for Ubuntu users.
The 0.5.3.2 package does not have this issue.
## System Information:
<!-- Please fill out this information -->
Not my personal info, but it affects:
| Name | Value |
| ---------------- | ----- |
| Operating System | Debian/Ubuntu |
| Software Version | 0.5.3.3 |
| Tablet | All |
|
priority
|
deb package does not contain a udev rules file description the for debian opentabletdriver deb does not contain a udev file and therefore causes permission issues for ubuntu users the package does not contain this issue system information not my personal info but it affects name value operating system debian ubuntu software version tablet all
| 1
|
562,722
| 16,668,350,724
|
IssuesEvent
|
2021-06-07 07:52:10
|
Proof-Of-Humanity/proof-of-humanity-web
|
https://api.github.com/repos/Proof-Of-Humanity/proof-of-humanity-web
|
closed
|
Wrong UBI balance
|
priority: high status: available type: bug :bug:
|
**Describe the Bug**
Wrong UBI balance being displayed in the user's profile;
**To Reproduce**
Compare the balance displayed by visiting:
https://app.proofofhumanity.id/profile/0x245Bd6B5D8f494df8256Ae44737A1e5D59769aB4?network=mainnet
with the one returned by calling `balanceOf` in the UBI contract:
https://etherscan.io/token/0xdd1ad9a21ce722c151a836373babe42c868ce9a4?a=0x245Bd6B5D8f494df8256Ae44737A1e5D59769aB4#readProxyContract
|
1.0
|
Wrong UBI balance - **Describe the Bug**
Wrong UBI balance being displayed in the user's profile;
**To Reproduce**
Compare the balance displayed by visiting:
https://app.proofofhumanity.id/profile/0x245Bd6B5D8f494df8256Ae44737A1e5D59769aB4?network=mainnet
with the one returned by calling `balanceOf` in the UBI contract:
https://etherscan.io/token/0xdd1ad9a21ce722c151a836373babe42c868ce9a4?a=0x245Bd6B5D8f494df8256Ae44737A1e5D59769aB4#readProxyContract
|
priority
|
wrong ubi balance describe the bug wrong ubi balance being displayed in the user s profile to reproduce compare the balance displayed by visiting with the one returned by calling balanceof in the ubi contract
| 1
|
433,268
| 12,505,145,845
|
IssuesEvent
|
2020-06-02 10:11:39
|
talamortis/OregonCore
|
https://api.github.com/repos/talamortis/OregonCore
|
closed
|
Core crash
|
Linux Priority: High crash
|
OS:Ubuntu16.04
version:ElunaCFBG
As long as the command is used in the console, it will crash. I can't get the log, only the screenshot










<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/91609997-core-crash?utm_campaign=plugin&utm_content=tracker%2F91676571&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F91676571&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Core crash - OS:Ubuntu16.04
version:ElunaCFBG
As long as the command is used in the console, it will crash. I can't get the log, only the screenshot










<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/91609997-core-crash?utm_campaign=plugin&utm_content=tracker%2F91676571&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F91676571&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
priority
|
core crash os: version:elunacfbg as long as the command is used in the console it will crash i can t get the log only the screenshot want to back this issue we accept bounties via
| 1
|
784,333
| 27,567,064,079
|
IssuesEvent
|
2023-03-08 05:20:17
|
phyloref/klados
|
https://api.github.com/repos/phyloref/klados
|
closed
|
Add additional example files
|
priority: high
|
In PR #230, I removed the following example files:
* fisher_et_al_2007.json (from https://doi.org/10.1639/0007-2745%282007%29110%5B46%3APOTCWA%5D2.0.CO%3B2)
- Includes an apomorphy-based definition
* hillis_and_wilcox_2005.json (from https://doi.org/10.1016/j.ympev.2004.10.007)
- Includes specimen-based definitions
* Most of the phyloreferences from Brochu 2003, replacing it with the minimal version used in the phyx.js tests.
This leaves us with a single small example file, Brochu 2003. This issue tracks us adding additional example files (probably from the Clade Ontology) to Klados. Ideally, they should demonstrate specimen identifiers (like Fisher et al did) or apomorphy-based phyloreferences. Using files from the Clade Ontology will almost certainly be easier than attempting to convert these v0.2.0 files to v1.0.0 Phyloref files.
- [ ] Also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as CSV.
Could be part of the tutorial (#227).
|
1.0
|
Add additional example files - In PR #230, I removed the following example files:
* fisher_et_al_2007.json (from https://doi.org/10.1639/0007-2745%282007%29110%5B46%3APOTCWA%5D2.0.CO%3B2)
- Includes an apomorphy-based definition
* hillis_and_wilcox_2005.json (from https://doi.org/10.1016/j.ympev.2004.10.007)
- Includes specimen-based definitions
* Most of the phyloreferences from Brochu 2003, replacing it with the minimal version used in the phyx.js tests.
This leaves us with a single small example file, Brochu 2003. This issue tracks us adding additional example files (probably from the Clade Ontology) to Klados. Ideally, they should demonstrate specimen identifiers (like Fisher et al did) or apomorphy-based phyloreferences. Using files from the Clade Ontology will almost certainly be easier than attempting to convert these v0.2.0 files to v1.0.0 Phyloref files.
- [ ] Also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as CSV.
Could be part of the tutorial (#227).
|
priority
|
add additional example files in pr i removed the following example files fisher et al json from includes an apomorphy based definition hillis and wilcox json from includes specimen based definitions most of the phyloreferences from brochu replacing it with the minimal version used in the phyx js tests this leaves us with a single small example file brochu this issue tracks us adding additional example files probably from the clade ontology to klados ideally they should demonstrate specimen identifiers like fisher et al did or apomorphy based phyloreferences using files from the clade ontology will almost certainly be easier than attempting to convert these files to phyloref files also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as csv could be part of the tutorial
| 1
|
331,318
| 10,064,288,923
|
IssuesEvent
|
2019-07-23 08:19:55
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
JDK 11 support for the product is
|
Complexity/High Priority/High Type/Improvement
|
Currently, the Identity Server will not start or run on java 11.
|
1.0
|
JDK 11 support for the product is - Currently, the Identity Server will not start or run on java 11.
|
priority
|
jdk support for the product is currently the identity server will not start or run on java
| 1
|
793,387
| 27,993,811,164
|
IssuesEvent
|
2023-03-27 06:58:47
|
TencentBlueKing/bk-cmdb
|
https://api.github.com/repos/TencentBlueKing/bk-cmdb
|
closed
|
[3.10.23-alpha5] In field preview, the tooltip does not show the values inside +n
|
priority: High
|
1. Preconditions
When previewing fields, the tooltip should show the values inside +n

2. Steps to reproduce
Model - Model Management - select the Host model - field preview - view the +n values via the tooltip
3. Expected result
The +n values can be viewed via the tooltip
4. Actual result
The +n values cannot be viewed via the tooltip; you have to click +3 to see the specific values

|
1.0
|
[3.10.23-alpha5] In field preview, the tooltip does not show the values inside +n - 1. Preconditions
When previewing fields, the tooltip should show the values inside +n

2. Steps to reproduce
Model - Model Management - select the Host model - field preview - view the +n values via the tooltip
3. Expected result
The +n values can be viewed via the tooltip
4. Actual result
The +n values cannot be viewed via the tooltip; you have to click +3 to see the specific values

|
priority
|
field preview tooltip does not display the values inside n preconditions in field preview the tooltip displays the values inside n steps to reproduce model model management select host model field preview view the n values via tooltip expected result the n values can be viewed in the tooltip actual result the n values cannot be viewed in the tooltip need to click
| 1
|
537,870
| 15,755,822,574
|
IssuesEvent
|
2021-03-31 02:27:03
|
SCIInstitute/ShapeWorks
|
https://api.github.com/repos/SCIInstitute/ShapeWorks
|
closed
|
ShapeWorks 6.0 testing
|
High Priority
|
Please edit and add a ✅ indicating success and ❌ indicating failure or 🕒 for a test in progress with your username when you complete a task for a given platform. When a test fails, please add a github issue and link it (* the issue when it's fixed and ready to test again). Also, go ahead and add new tasks that might not already be on here.
Please use the most recent release candidate for all testing (be careful which `shapeworks` is in your `$PATH`). The most recent is found here:
https://github.com/SCIInstitute/ShapeWorks/releases/tag/v6.0.0-rc10
Example:
| | Windows | Mac | Linux |
|------------------|-----------------------------|--------------|----------------|
| Notebooks | | 🕒 (@archanasri) | |
| Usecase: Ellipsoid | ✅ (@akenmorris ) | | |
| Usecase: All tiny-test | ❌ (#1073) | ✅ (@cchriste) | |
Ok, now the real thing!
| | Windows | Mac | Linux |
|------------------|-----------------------------|--------------|----------------|
| Clean installation | ✅ (@cchriste) (#1097, #1098) |✅ (@akenmorris RC10) | |
| Notebooks: getting-started-with-jupyter-notebooks | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1, @riddhishb) |
| Notebooks: setting-up-shapeworks-environment | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1, @riddhishb)|
| Notebooks: getting-started-with-segmentations | ✅ (@cchriste) (#1113) | ✅ (@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-exploring-segmentations | ✅ (@cchriste) (#1113)| ✅ (@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-meshes | ✅ (@cchriste) (#1142) | ✅(@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-data-augmentation | ✅ (@cchriste) | ✅(@akenmorris) | ✅ (@jadie1) |
| Notebooks: getting-started-with-shape-cohort-generation | ✅ (@cchriste) (#1113) | ✅ (@akenmorris) | ✅ (@jadie1 RC10) |
| Usecase: ellipsoid | ✅ (@akenmorris RC10) | ✅ (@akenmorris) |✅ (@jadie1 RC10) |
| Usecase: ellipsoid --tiny_test | ✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1, @riddhishb)|
| Usecase: ellipsoid_cut | ✅ (@akenmorris RC10) | ✅ (@akenmorris) | (✅ @jadie1 RC10) |
| Usecase: ellipsoid_cut --tiny_test |✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: ellipsoid_evaluate | ✅ (@akenmorris) (RC6) | ✅ (@akenmorris) (RC6) | ✅ (@iyerkrithika21)|
| Usecase: ellipsoid_fd | ✅ (@cchriste) | ✅ (@akenmorris) | ✅ (@jadie1 RC10, @riddhishb)|
| Usecase: ellipsoid_mesh | ✅ (@iyerkrithika21, @cchriste) | ✅ (@archanasri) | ✅ (@medakk) (@jadie1 RC10) |
| Usecase: ellipsoid_mesh --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@medakk) |
| Usecase: femur | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Usecase: femur --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur --groom_images | ✅ (@cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: femur --groom_images --tiny-test |✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_mesh | ✅ (@cchriste) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_mesh --tiny-test | ✅ (@iyerkrithika21)| ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_cut | ✅ (@cchriste) | ✅ (akenmorris) | ✅ (@jadie1) |
| Usecase: femur_cut --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_cut --tiny-test (anisotropic)| ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: left_atrium |✅ (@akenmorris, @cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: left_atrium --tiny-test |✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: lumps | ✅ (@iyerkrithika21, @cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: lumps --tiny-test |✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: deep_ssm --tiny-test | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Usecase: deep_ssm | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Studio: Ellipsoid Example | ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@medakk) |
| Studio: Feature Map Example | ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@medakk) |
| ... | | | |
Please keep comments on this issue to a minimum. Let's try to keep the status in the table and not in the comments.
|
1.0
|
ShapeWorks 6.0 testing - Please edit and add a ✅ indicating success and ❌ indicating failure or 🕒 for a test in progress with your username when you complete a task for a given platform. When a test fails, please add a github issue and link it (* the issue when it's fixed and ready to test again). Also, go ahead and add new tasks that might not already be on here.
Please use the most recent release candidate for all testing (be careful which `shapeworks` is in your `$PATH`). The most recent is found here:
https://github.com/SCIInstitute/ShapeWorks/releases/tag/v6.0.0-rc10
Example:
| | Windows | Mac | Linux |
|------------------|-----------------------------|--------------|----------------|
| Notebooks | | 🕒 (@archanasri) | |
| Usecase: Ellipsoid | ✅ (@akenmorris ) | | |
| Usecase: All tiny-test | ❌ (#1073) | ✅ (@cchriste) | |
Ok, now the real thing!
| | Windows | Mac | Linux |
|------------------|-----------------------------|--------------|----------------|
| Clean installation | ✅ (@cchriste) (#1097, #1098) |✅ (@akenmorris RC10) | |
| Notebooks: getting-started-with-jupyter-notebooks | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1, @riddhishb) |
| Notebooks: setting-up-shapeworks-environment | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1, @riddhishb)|
| Notebooks: getting-started-with-segmentations | ✅ (@cchriste) (#1113) | ✅ (@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-exploring-segmentations | ✅ (@cchriste) (#1113)| ✅ (@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-meshes | ✅ (@cchriste) (#1142) | ✅(@akenmorris RC10) | ✅ (@jadie1 RC10) |
| Notebooks: getting-started-with-data-augmentation | ✅ (@cchriste) | ✅(@akenmorris) | ✅ (@jadie1) |
| Notebooks: getting-started-with-shape-cohort-generation | ✅ (@cchriste) (#1113) | ✅ (@akenmorris) | ✅ (@jadie1 RC10) |
| Usecase: ellipsoid | ✅ (@akenmorris RC10) | ✅ (@akenmorris) |✅ (@jadie1 RC10) |
| Usecase: ellipsoid --tiny_test | ✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1, @riddhishb)|
| Usecase: ellipsoid_cut | ✅ (@akenmorris RC10) | ✅ (@akenmorris) | (✅ @jadie1 RC10) |
| Usecase: ellipsoid_cut --tiny_test |✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: ellipsoid_evaluate | ✅ (@akenmorris) (RC6) | ✅ (@akenmorris) (RC6) | ✅ (@iyerkrithika21)|
| Usecase: ellipsoid_fd | ✅ (@cchriste) | ✅ (@akenmorris) | ✅ (@jadie1 RC10, @riddhishb)|
| Usecase: ellipsoid_mesh | ✅ (@iyerkrithika21, @cchriste) | ✅ (@archanasri) | ✅ (@medakk) (@jadie1 RC10) |
| Usecase: ellipsoid_mesh --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@medakk) |
| Usecase: femur | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Usecase: femur --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur --groom_images | ✅ (@cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: femur --groom_images --tiny-test |✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_mesh | ✅ (@cchriste) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_mesh --tiny-test | ✅ (@iyerkrithika21)| ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_cut | ✅ (@cchriste) | ✅ (akenmorris) | ✅ (@jadie1) |
| Usecase: femur_cut --tiny-test | ✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: femur_cut --tiny-test (anisotropic)| ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: left_atrium |✅ (@akenmorris, @cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: left_atrium --tiny-test |✅ (@akenmorris) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: lumps | ✅ (@iyerkrithika21, @cchriste) | ✅ (@akenmorris) | ✅ (@jadie1) |
| Usecase: lumps --tiny-test |✅ (@iyerkrithika21) | ✅ (@archanasri) | ✅ (@jadie1) |
| Usecase: deep_ssm --tiny-test | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Usecase: deep_ssm | ✅ (@cchriste) | ✅ (@akenmorris RC10) | ✅ (@jadie1) |
| Studio: Ellipsoid Example | ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@medakk) |
| Studio: Feature Map Example | ✅ (@akenmorris) | ✅ (@akenmorris) | ✅ (@medakk) |
| ... | | | |
Please keep comments on this issue to a minimum. Let's try to keep the status in the table and not in the comments.
|
priority
|
shapeworks testing please edit and add a ✅ indicating success and ❌ indicating failure or 🕒 for a test in progress with your username when you complete a task for a given platform when a test fails please add a github issue and link it the issue when it s fixed and ready to test again also go ahead and add new tasks that might not already be on here please use the most recent release candidate for all testing be careful which shapeworks is in your path the most recent is found here example windows mac linux notebooks 🕒 archanasri usecase ellipsoid ✅ akenmorris usecase all tiny test ❌ ✅ cchriste ok now the real thing windows mac linux clean installation ✅ cchriste ✅ akenmorris notebooks getting started with jupyter notebooks ✅ cchriste ✅ akenmorris ✅ riddhishb notebooks setting up shapeworks environment ✅ cchriste ✅ akenmorris ✅ riddhishb notebooks getting started with segmentations ✅ cchriste ✅ akenmorris ✅ notebooks getting started with exploring segmentations ✅ cchriste ✅ akenmorris ✅ notebooks getting started with meshes ✅ cchriste ✅ akenmorris ✅ notebooks getting started with data augmentation ✅ cchriste ✅ akenmorris ✅ notebooks getting started with shape cohort generation ✅ cchriste ✅ akenmorris ✅ usecase ellipsoid ✅ akenmorris ✅ akenmorris ✅ usecase ellipsoid tiny test ✅ akenmorris ✅ archanasri ✅ riddhishb usecase ellipsoid cut ✅ akenmorris ✅ akenmorris ✅ usecase ellipsoid cut tiny test ✅ akenmorris ✅ archanasri ✅ usecase ellipsoid evaluate ✅ akenmorris ✅ akenmorris ✅ usecase ellipsoid fd ✅ cchriste ✅ akenmorris ✅ riddhishb usecase ellipsoid mesh ✅ cchriste ✅ archanasri ✅ medakk usecase ellipsoid mesh tiny test ✅ ✅ archanasri ✅ medakk usecase femur ✅ cchriste ✅ akenmorris ✅ usecase femur tiny test ✅ ✅ archanasri ✅ usecase femur groom images ✅ cchriste ✅ akenmorris ✅ usecase femur groom images tiny test ✅ ✅ archanasri ✅ usecase femur mesh ✅ cchriste ✅ archanasri ✅ usecase femur mesh tiny test ✅ ✅ archanasri ✅ usecase femur cut ✅ cchriste ✅ akenmorris ✅ usecase 
femur cut tiny test ✅ ✅ archanasri ✅ usecase femur cut tiny test anisotropic ✅ akenmorris ✅ akenmorris ✅ usecase left atrium ✅ akenmorris cchriste ✅ akenmorris ✅ usecase left atrium tiny test ✅ akenmorris ✅ archanasri ✅ usecase lumps ✅ cchriste ✅ akenmorris ✅ usecase lumps tiny test ✅ ✅ archanasri ✅ usecase deep ssm tiny test ✅ cchriste ✅ akenmorris ✅ usecase deep ssm ✅ cchriste ✅ akenmorris ✅ studio ellipsoid example ✅ akenmorris ✅ akenmorris ✅ medakk studio feature map example ✅ akenmorris ✅ akenmorris ✅ medakk please keep comments on this issue to a minimum let s try to keep the status in the table and not in the comments
| 1
|
80,807
| 3,574,750,244
|
IssuesEvent
|
2016-01-27 13:22:57
|
SockDrawer/SockRPG
|
https://api.github.com/repos/SockDrawer/SockRPG
|
opened
|
Feature: Preservation of posts
|
Feature High Priority
|
Given I am entering a post
When I am logged out due to inactivity
Then I should be offered the chance to log back in
And my post should still be saved in its entirety
This is the most annoying thing about traditional forums for RP: your session expires for security reasons and you lose a 2,000 word post you've been working on the whole time.
|
1.0
|
Feature: Preservation of posts - Given I am entering a post
When I am logged out due to inactivity
Then I should be offered the chance to log back in
And my post should still be saved in its entirety
This is the most annoying thing about traditional forums for RP: your session expires for security reasons and you lose a 2,000 word post you've been working on the whole time.
|
priority
|
feature preservation of posts given i am entering a post when i am logged out due to inactivity then i should be offered the chance to log back in and my post should still be saved in its entirety this is the most annoying thing about traditional forums for rp your session expires for security reasons and you lose a word post you ve been working on the whole time
| 1
|
227,193
| 7,527,697,708
|
IssuesEvent
|
2018-04-13 18:00:00
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
opened
|
[deployer] Make the deployer extensible by dropping a JAR with new processors
|
new feature priority: highest!
|
Deployer should be able to be easily extensible, by just dropping a jar under a directory like in `bin/crafter-deployer/lib`. Please also document how to do this.
|
1.0
|
[deployer] Make the deployer extensible by dropping a JAR with new processors - Deployer should be able to be easily extensible, by just dropping a jar under a directory like in `bin/crafter-deployer/lib`. Please also document how to do this.
|
priority
|
make the deployer extensible by dropping a jar with new processors deployer should be able to be easily extensible by just dropping a jar under a directory like in bin crafter deployer lib please also document how to do this
| 1
|
162,419
| 6,152,885,607
|
IssuesEvent
|
2017-06-28 08:34:23
|
resir014/Stonehenge
|
https://api.github.com/repos/resir014/Stonehenge
|
opened
|
Re-integrate codebase with kernel
|
high-priority
|
After we've integrated the kernel in #7, we need to refactor our existing codebase (located in `src-old/`) back into our current OS.
|
1.0
|
Re-integrate codebase with kernel - After we've integrated the kernel in #7, we need to refactor our existing codebase (located in `src-old/`) back into our current OS.
|
priority
|
re integrate codebase with kernel after we ve integrated the kernel in we need to refactor our existing codebase located in src old back into our current os
| 1
|
634,321
| 20,358,842,774
|
IssuesEvent
|
2022-02-20 11:30:32
|
dnd-side-project/dnd-6th-4-ping-pong
|
https://api.github.com/repos/dnd-side-project/dnd-6th-4-ping-pong
|
closed
|
ClassHomeFragment scrolling bug
|
Priority: High Type: Bug
|
- A bug occurs where the RecyclerView of another Fragment inside the ClassHomeFragment ViewPager scrolls separately
|
1.0
|
ClassHomeFragment scrolling bug - - A bug occurs where the RecyclerView of another Fragment inside the ClassHomeFragment ViewPager scrolls separately
|
priority
|
classhomefragment scrolling bug classhomefragment a bug occurs where the recyclerview of another fragment inside the viewpager scrolls separately
| 1
|
718,875
| 24,734,730,562
|
IssuesEvent
|
2022-10-20 20:46:47
|
layer5io/layer5
|
https://api.github.com/repos/layer5io/layer5
|
closed
|
Platforms integration filter missing
|
kind/bug priority/high
|
#### Description
Unable to see the "Platforms" integration filter
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/54144759/196818840-51d5163b-e726-4b7c-ae26-b4946bbc628b.png">
<br>
#### Expected Behavior
Platforms integration should be present among other integrations
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/54144759/196819054-3999475b-d846-43c7-9907-84995d4933c8.png">
<br>
**Not sure if it is a bug or a deliberate action.**
#### Environment:
- Host OS: MacOS
- Browser: Chrome
---
<img src="https://raw.githubusercontent.com/layer5io/layer5/master/.github/assets/images/layer5/5-light-small.svg" width="16px" align="left" /><h3> Contributor Resources and <a href="https://layer5.io/community/handbook">Handbook</a></h3>
The layer5.io website uses Gatsby, React, and GitHub Pages. Site content is found under the [`master` branch](https://github.com/layer5io/layer5/tree/master).
- 📚 See [contributing instructions](https://github.com/layer5io/layer5/blob/master/CONTRIBUTING.md)
- 🎨 Wireframes and designs for Layer5 site in [Figma](https://www.figma.com/file/5ZwEkSJwUPitURD59YHMEN/Layer5-Designs).
- 🙋🏾🙋🏼 Questions: [Discussion Forum](https://discuss.layer5.io) and [Community Slack](http://slack.layer5.io)
|
1.0
|
Platforms integration filter missing - #### Description
Unable to see the "Platforms" integration filter
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/54144759/196818840-51d5163b-e726-4b7c-ae26-b4946bbc628b.png">
<br>
#### Expected Behavior
Platforms integration should be present among other integrations
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/54144759/196819054-3999475b-d846-43c7-9907-84995d4933c8.png">
<br>
**Not sure if it is a bug or a deliberate action.**
#### Environment:
- Host OS: MacOS
- Browser: Chrome
---
<img src="https://raw.githubusercontent.com/layer5io/layer5/master/.github/assets/images/layer5/5-light-small.svg" width="16px" align="left" /><h3> Contributor Resources and <a href="https://layer5.io/community/handbook">Handbook</a></h3>
The layer5.io website uses Gatsby, React, and GitHub Pages. Site content is found under the [`master` branch](https://github.com/layer5io/layer5/tree/master).
- 📚 See [contributing instructions](https://github.com/layer5io/layer5/blob/master/CONTRIBUTING.md)
- 🎨 Wireframes and designs for Layer5 site in [Figma](https://www.figma.com/file/5ZwEkSJwUPitURD59YHMEN/Layer5-Designs).
- 🙋🏾🙋🏼 Questions: [Discussion Forum](https://discuss.layer5.io) and [Community Slack](http://slack.layer5.io)
|
priority
|
platforms integration filter missing description unable to see the platforms integration filter img width alt image src expected behavior platforms integration should be present among other integrations img width alt image src not sure if it is a bug or a deliberate action environment host os macos browser chrome contributor resources and a href the io website uses gatsby react and github pages site content is found under the 📚 see 🎨 wireframes and designs for site in 🙋🏾🙋🏼 questions and
| 1
|
762,056
| 26,706,893,403
|
IssuesEvent
|
2023-01-27 19:03:05
|
pytorch/functorch
|
https://api.github.com/repos/pytorch/functorch
|
closed
|
.item() error when computing Jacobian with vmap and `torch.autograd.set_detect_anomaly(True)`
|
actionable high priority
|
Running the example in the official example [here](https://pytorch.org/functorch/nightly/generated/functorch.vmap.html) with `torch.autograd.set_detect_anomaly(True)` causes an error:
``` python
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
``` python
# Setup
torch.autograd.set_detect_anomaly(True)
N = 5
f = lambda x: x ** 2
x = torch.randn(N, requires_grad=True)
y = f(x)
I_N = torch.eye(N)
# Sequential approach
jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
for v in I_N.unbind()]
jacobian = torch.stack(jacobian_rows)
# vectorized gradient computation
def get_vjp(v):
return torch.autograd.grad(y, x, v)
jacobian = functorch.vmap(get_vjp)(I_N)
````
|
1.0
|
.item() error when computing Jacobian with vmap and `torch.autograd.set_detect_anomaly(True)` - Running the example in the official example [here](https://pytorch.org/functorch/nightly/generated/functorch.vmap.html) with `torch.autograd.set_detect_anomaly(True)` causes an error:
``` python
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: vmap: It looks like you're calling .item() on a Tensor. We don't support vmap over calling .item() on a Tensor, please try to rewrite what you're doing with other operations. If error is occurring somewhere inside PyTorch internals, please file a bug report.
```
``` python
# Setup
torch.autograd.set_detect_anomaly(True)
N = 5
f = lambda x: x ** 2
x = torch.randn(N, requires_grad=True)
y = f(x)
I_N = torch.eye(N)
# Sequential approach
jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
for v in I_N.unbind()]
jacobian = torch.stack(jacobian_rows)
# vectorized gradient computation
def get_vjp(v):
return torch.autograd.grad(y, x, v)
jacobian = functorch.vmap(get_vjp)(I_N)
````
|
priority
|
item error when computing jacobian with vmap and torch autograd set detect anomaly true running the example in the official example with torch autograd set detect anomaly true causes an error python return variable execution engine run backward calls into the c engine to run the backward pass runtimeerror vmap it looks like you re calling item on a tensor we don t support vmap over calling item on a tensor please try to rewrite what you re doing with other operations if error is occurring somewhere inside pytorch internals please file a bug report python setup torch autograd set detect anomaly true n f lambda x x x torch randn n requires grad true y f x i n torch eye n sequential approach jacobian rows for v in i n unbind jacobian torch stack jacobian rows vectorized gradient computation def get vjp v return torch autograd grad y x v jacobian functorch vmap get vjp i n
| 1
|
718,040
| 24,701,818,196
|
IssuesEvent
|
2022-10-19 15:48:23
|
Google-Developer-Student-Club-CCOEW/AI-ML
|
https://api.github.com/repos/Google-Developer-Student-Club-CCOEW/AI-ML
|
opened
|
Perform Correlation Check, Variance Check and Visual Exploration
|
hacktoberfest eda high-priority
|
Use different EDA tools and techniques to find relationships and redundancies within the data.
**_Dataset is inside AI-ML/data directory.
Project code is inside AI-ML/code directory.
Add your code inside 'Correlation check', 'Variance Check' and 'Visual Exploration' sections.
You can add sub sections as per needed. _**
|
1.0
|
Perform Correlation Check, Variance Check and Visual Exploration - Use different EDA tools and techniques to find relationships and redundancies within the data.
**_Dataset is inside AI-ML/data directory.
Project code is inside AI-ML/code directory.
Add your code inside 'Correlation check', 'Variance Check' and 'Visual Exploration' sections.
You can add sub sections as per needed. _**
|
priority
|
perform correlation check variance check and visual exploration use different eda tools and techniques to find relationships and redundancies within the data dataset is inside ai ml data directory project code is inside ai ml code directory add your code inside correlation check variance check and visual exploration sections you can add sub sections as per needed
| 1
|
126,949
| 5,008,118,893
|
IssuesEvent
|
2016-12-12 18:36:58
|
brycethorup/cash-class-tracker
|
https://api.github.com/repos/brycethorup/cash-class-tracker
|
closed
|
Art/Screens for Take Stock game
|
High Priority
|
Consisting of 1 background screen with 3 overlaid portions:
GameStart (touch to begin)
InstructionsList (featuring 3 item randomly generated list)
Congratulations (game conclusion)
|
1.0
|
Art/Screens for Take Stock game - Consisting of 1 background screen with 3 overlaid portions:
GameStart (touch to begin)
InstructionsList (featuring 3 item randomly generated list)
Congratulations (game conclusion)
|
priority
|
art screens for take stock game consisting of background screen with overlaid portions gamestart touch to begin instructionslist featuring item randomly generated list congratulations game conclusion
| 1
|
519,019
| 15,038,934,788
|
IssuesEvent
|
2021-02-02 18:01:47
|
tysonkaufmann/su-go
|
https://api.github.com/repos/tysonkaufmann/su-go
|
opened
|
[DEV] Backend Middleware and Controllers
|
High Priority task
|
**Related To**
- [Setup backend](#1 )
**Description**
Add a middleware to restrict access to Su-Go APIs for everyone else and allow only users with a token.
Move logic from routes to Controllers
|
1.0
|
[DEV] Backend Middleware and Controllers - **Related To**
- [Setup backend](#1 )
**Description**
Add a middleware to restrict access to Su-Go APIs for everyone else and allow only users with a token.
Move logic from routes to Controllers
|
priority
|
backend middleware and controllers related to description add a middleware to restrict access to su go apis for everyone else and allow only users with a token move logic from routes to controllers
| 1
|
270,089
| 8,452,192,403
|
IssuesEvent
|
2018-10-20 00:49:07
|
tootsuite/mastodon
|
https://api.github.com/repos/tootsuite/mastodon
|
closed
|
Suspended remote users can receive messages directly from my instance
|
priority - high question
|
If I block a domain, users can still send messages to users on that domain, but we can't get responses back.
This is not in line with how the domain block functionality is explained in the documentation, which says my users cannot interact with their users at all.
I believe the domain block functionality should be bidirectional as a result, otherwise an admin who suspends a remote user believing that it is stopping any messages to that user from their instance, will be in for a rude awakening.
* * * *
- [x] I searched or browsed the repo’s other issues to ensure this is not a duplicate.
- [x] This bug happens on a [tagged release](https://github.com/tootsuite/mastodon/releases) and not on `master` (If you're a user, don't worry about this).
|
1.0
|
Suspended remote users can receive messages directly from my instance - If I block a domain, users can still send messages to users on that domain, but we can't get responses back.
This is not in line with how the domain block functionality is explained in the documentation, which says my users cannot interact with their users at all.
I believe the domain block functionality should be bidirectional as a result, otherwise an admin who suspends a remote user believing that it is stopping any messages to that user from their instance, will be in for a rude awakening.
* * * *
- [x] I searched or browsed the repo’s other issues to ensure this is not a duplicate.
- [x] This bug happens on a [tagged release](https://github.com/tootsuite/mastodon/releases) and not on `master` (If you're a user, don't worry about this).
|
priority
|
suspended remote users can receive messages directly from my instance if i block a domain users can still send messages to users on that domain but we can t get responses back this is not in line with how the domain block functionality is explained in the documentation which says my users cannot interact with their users at all i believe the domain block functionality should be bidirectional as a result otherwise an admin who suspends a remote user believing that it is stopping any messages to that user from their instance will be in for a rude awakening i searched or browsed the repo’s other issues to ensure this is not a duplicate this bug happens on a and not on master if you re a user don t worry about this
| 1
|
177,642
| 6,586,042,767
|
IssuesEvent
|
2017-09-13 15:51:21
|
geosolutions-it/evo-odas
|
https://api.github.com/repos/geosolutions-it/evo-odas
|
closed
|
Conventions for DAG names and Configuration Keys
|
enhancement ingestion Priority: High
|
As per point 2 and 3 in the email on EVO-ODAS mailing list ("Airflow Ingestion Review" summed up below ) we should adopt conventions for DAG names and configuration keys:
**DAG Naming Convention**
{COLLECTION_ID}_{NAME}, e.g. S2_MSI_L1C_Download, so that all DAGs belonging to a collection show up grouped together in the airflow GUI
**Configuration Key Naming Convention**
configuration keys needs to be prefixed and abstract in some case.
- a) Prefixed because something like "startdate" is probably not precise enough (DAG startdate or search startdate?). Maybe "dhus_search_startdate" or "search_startdate"?
- b) Abstracted because e.g. "download_dir" should be reused in between DAGs and plugin-specific configurations. Also, it usually shares a common base directory. Therefore I believe there are at least two levels of configurations:
- 1. A general config, like "evoodas_config.py" or "common.py"
- 2. Collection specific config, like "sentinal1_config.py"
Eventually we should create a directory "config" that contains all configuration files and secrets? Like:
```
airflow/config/common.py
airflow/config/secrets.py
airflow/config/sentinel1.py
airflow/config/sentinel1.py
```
|
1.0
|
Conventions for DAG names and Configuration Keys - As per point 2 and 3 in the email on EVO-ODAS mailing list ("Airflow Ingestion Review" summed up below ) we should adopt conventions for DAG names and configuration keys:
**DAG Naming Convention**
{COLLECTION_ID}_{NAME}, e.g. S2_MSI_L1C_Download, so that all DAGs belonging to a collection show up grouped together in the airflow GUI
**Configuration Key Naming Convention**
configuration keys needs to be prefixed and abstract in some case.
- a) Prefixed because something like "startdate" is probably not precise enough (DAG startdate or search startdate?). Maybe "dhus_search_startdate" or "search_startdate"?
- b) Abstracted because e.g. "download_dir" should be reused in between DAGs and plugin-specific configurations. Also, it usually shares a common base directory. Therefore I believe there are at least two levels of configurations:
- 1. A general config, like "evoodas_config.py" or "common.py"
- 2. Collection specific config, like "sentinal1_config.py"
Eventually we should create a directory "config" that contains all configuration files and secrets? Like:
```
airflow/config/common.py
airflow/config/secrets.py
airflow/config/sentinel1.py
airflow/config/sentinel1.py
```
|
priority
|
conventions for dag names and configuration keys as per point and in the email on evo odas mailing list airflow ingestion review summed up below we should adopt conventions for dag names and configuration keys dag naming convention collection id name e g msi download so that all dags belonging to a collection show up grouped together in the airflow gui configuration key naming convention configuration keys needs to be prefixed and abstract in some case a prefixed because something like startdate is probably not precise enough dag startdate or search startdate maybe dhus search startdate or search startdate b abstracted because e g download dir should be reused in between dags and plugin specific configurations also it usually shares a common base directory therefore i believe there are at least two levels of configurations a general config like evoodas config py or common py collection specific config like config py eventually we should create a directory config that contains all configuration files and secrets like airflow config common py airflow config secrets py airflow config py airflow config py
| 1
|
191,914
| 6,845,016,593
|
IssuesEvent
|
2017-11-13 05:49:46
|
wso2/cdmf-agent-android
|
https://api.github.com/repos/wso2/cdmf-agent-android
|
closed
|
Null Pointer exception occurs when installing application through the server
|
Priority/High Severity/Major Type/Bug
|
**Description:**
Null Pointer exception occurs when installing application.
```
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Authorization: Bearer <ACCESS-TOKEN>' -d '{
"deviceIDs": [
"1567393930303"
],
"operation": {
"appIdentifier": "<APP-IDENTIFIER>",
"type": "enterprise",
"url": "<HOST>/helloworld.apk"
}
}' 'https://<HOST>:<PORT>/api/device-mgt/android/v1.0/admin/devices/install-application'
```
```
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.equals(java.lang.Object)' on a null object reference
at org.wso2.iot.agent.events.listeners.ApplicationStateListener.applyEnforcement(ApplicationStateListener.java:152)
at org.wso2.iot.agent.events.listeners.ApplicationStateListener.onReceive(ApplicationStateListener.java:94)
at android.app.ActivityThread.handleReceiver(ActivityThread.java:2610)
at android.app.ActivityThread.access$1700(ActivityThread.java:152)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:138
```
**Affected Product Version:**
3.1.0-Update5
**OS, DB, other environment details and versions:**
Ubuntu 16.04, H2, Single IOT node
**Steps to reproduce:**
Enroll an android device.
Install an application
|
1.0
|
Null Pointer exception occurs when installing application through the server - **Description:**
Null Pointer exception occurs when installing application.
```
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Authorization: Bearer <ACCESS-TOKEN>' -d '{
"deviceIDs": [
"1567393930303"
],
"operation": {
"appIdentifier": "<APP-IDENTIFIER>",
"type": "enterprise",
"url": "<HOST>/helloworld.apk"
}
}' 'https://<HOST>:<PORT>/api/device-mgt/android/v1.0/admin/devices/install-application'
```
```
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.equals(java.lang.Object)' on a null object reference
at org.wso2.iot.agent.events.listeners.ApplicationStateListener.applyEnforcement(ApplicationStateListener.java:152)
at org.wso2.iot.agent.events.listeners.ApplicationStateListener.onReceive(ApplicationStateListener.java:94)
at android.app.ActivityThread.handleReceiver(ActivityThread.java:2610)
at android.app.ActivityThread.access$1700(ActivityThread.java:152)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:138
```
**Affected Product Version:**
3.1.0-Update5
**OS, DB, other environment details and versions:**
Ubuntu 16.04, H2, Single IOT node
**Steps to reproduce:**
Enroll an android device.
Install an application
|
priority
|
null pointer exception occurs when installing application through the server description null pointer exception occurs when installing application curl x post header content type application json header accept application json header authorization bearer d deviceids operation appidentifier type enterprise url helloworld apk caused by java lang nullpointerexception attempt to invoke virtual method boolean java lang string equals java lang object on a null object reference at org iot agent events listeners applicationstatelistener applyenforcement applicationstatelistener java at org iot agent events listeners applicationstatelistener onreceive applicationstatelistener java at android app activitythread handlereceiver activitythread java at android app activitythread access activitythread java at android app activitythread h handlemessage activitythread java affected product version os db other environment details and versions ubuntu single iot node steps to reproduce enroll an android device install an application
| 1
|
104,802
| 4,221,372,894
|
IssuesEvent
|
2016-07-01 05:03:57
|
hook/champions
|
https://api.github.com/repos/hook/champions
|
closed
|
Stun Synergy She-Hulk -> Daredevil
|
Priority: High Status: Completed Type: Bug Type: Maintenance
|
There's a mistake on this synergy. It's not the same as Black Widow -> Hulk. It is "+15% Stun Activation Chance" (as a 4 stars), and works on any Stun, not only Special Attacks. Currently the tooltip incorrectly states "Chance to Stun on special attacks".
http://community.kabam.com/forums/showthread.php?645759-She-Hulk-Champion-Spotlight

|
1.0
|
Stun Synergy She-Hulk -> Daredevil - There's a mistake on this synergy. It's not the same as Black Widow -> Hulk. It is "+15% Stun Activation Chance" (as a 4 stars), and works on any Stun, not only Special Attacks. Currently the tooltip incorrectly states "Chance to Stun on special attacks".
http://community.kabam.com/forums/showthread.php?645759-She-Hulk-Champion-Spotlight

|
priority
|
stun synergy she hulk daredevil there s a mistake on this synergy it s not the same as black widow hulk it is stun activation chance as a stars and works on any stun not only special attacks currently the tooltip incorrectly states chance to stun on special attacks
| 1
|
34,791
| 2,787,886,974
|
IssuesEvent
|
2015-05-08 09:42:00
|
ceylon/ceylon-js
|
https://api.github.com/repos/ceylon/ceylon-js
|
opened
|
Store `module` annotations in meta model
|
bug high priority
|
Right now no annotations placed on the module descriptor are stored in the meta model, eg:
```ceylon
native("js")
module test "1.0.0" {
}
```
This is needed for https://github.com/ceylon/ceylon-spec/issues/946 and https://github.com/ceylon/ceylon-spec/issues/499
|
1.0
|
Store `module` annotations in meta model - Right now no annotations placed on the module descriptor are stored in the meta model, eg:
```ceylon
native("js")
module test "1.0.0" {
}
```
This is needed for https://github.com/ceylon/ceylon-spec/issues/946 and https://github.com/ceylon/ceylon-spec/issues/499
|
priority
|
store module annotations in meta model right now no annotations placed on the module descriptor are stored in the meta model eg ceylon native js module test this is needed for and
| 1
|
139,161
| 5,357,602,618
|
IssuesEvent
|
2017-02-20 19:04:58
|
freedomvote/freedomvote
|
https://api.github.com/repos/freedomvote/freedomvote
|
opened
|
Response notes are single-language
|
Priority: High Type: Bug
|
The notes provided in the responses are currently only provided in a single language. For the Dutch elections this results in most parties having Dutch notes, and one party having English notes. They are not rendered by language.
If possible I like to fix this before we go live.
|
1.0
|
Response notes are single-language - The notes provided in the responses are currently only provided in a single language. For the Dutch elections this results in most parties having Dutch notes, and one party having English notes. They are not rendered by language.
If possible I like to fix this before we go live.
|
priority
|
response notes are single language the notes provided in the responses are currently only provided in a single language for the dutch elections this results in most parties having dutch notes and one party having english notes they are not rendered by language if possible i like to fix this before we go live
| 1
|
291,470
| 8,925,604,767
|
IssuesEvent
|
2019-01-21 23:37:09
|
AugurProject/augur
|
https://api.github.com/repos/AugurProject/augur
|
closed
|
Use Ethereum Alarm Clock to automate time-dependent actions
|
Feature Priority: High
|
Original discussion here: https://github.com/AugurProject/augur-app/issues/402.
Such actions to consider would be:
- Paying for transaction fees in Dai instead of ETH (requires account abstraction. Can be done using a proxy/identity wallet, so interactions with Augur and EAC are through that wallet and thus the wallet address is constant.)
- Automating placing/canceling orders (also requires proximity/identity wallet)
- Submitting Designated Reports (also requires proximity/identity wallet)
- Submitting First Public Reports (requires support for EVM expressions as conditions. This has not been implemented yet, but more details can be found at: https://blog.chronologic.network/time-is-only-the-beginning-introducing-conditional-scheduling-with-chronos-protocol-76036e4daaad)
Need to discuss this more (particularly proximity/identity wallet).
|
1.0
|
Use Ethereum Alarm Clock to automate time-dependent actions - Original discussion here: https://github.com/AugurProject/augur-app/issues/402.
Such actions to consider would be:
- Paying for transaction fees in Dai instead of ETH (requires account abstraction. Can be done using a proxy/identity wallet, so interactions with Augur and EAC are through that wallet and thus the wallet address is constant.)
- Automating placing/canceling orders (also requires proximity/identity wallet)
- Submitting Designated Reports (also requires proximity/identity wallet)
- Submitting First Public Reports (requires support for EVM expressions as conditions. This has not been implemented yet, but more details can be found at: https://blog.chronologic.network/time-is-only-the-beginning-introducing-conditional-scheduling-with-chronos-protocol-76036e4daaad)
Need to discuss this more (particularly proximity/identity wallet).
|
priority
|
use ethereum alarm clock to automate time dependent actions original discussion here such actions to consider would be paying for transaction fees in dai instead of eth requires account abstraction can be done using a proxy identity wallet so interactions with augur and eac are through that wallet and thus the wallet address is constant automating placing canceling orders also requires proximity identity wallet submitting designated reports also requires proximity identity wallet submitting first public reports requires support for evm expressions as conditions this has not been implemented yet but more details can be found at need to discuss this more particularly proximity identity wallet
| 1
|
774,166
| 27,185,343,724
|
IssuesEvent
|
2023-02-19 05:45:35
|
Reyder95/Project-Vultura-3D-Unity
|
https://api.github.com/repos/Reyder95/Project-Vultura-3D-Unity
|
closed
|
Allow players to sell from inventory and storage
|
high priority ready for development user interface
|
Currently you can only sell what the station requests. Players should also be allowed to sell their own items to stations.
|
1.0
|
Allow players to sell from inventory and storage - Currently you can only sell what the station requests. Players should also be allowed to sell their own items to stations.
|
priority
|
allow players to sell from inventory and storage currently you can only sell what the station requests players should also be allowed to sell their own items to stations
| 1
|
225,239
| 7,479,940,905
|
IssuesEvent
|
2018-04-04 15:53:43
|
EvictionLab/eviction-maps
|
https://api.github.com/repos/EvictionLab/eviction-maps
|
closed
|
Footer nav points to /contact-us instead of /contact
|
bug high priority
|
It looks like the website is using /contact as the route, but opening an issue on this just to verify that something didn't get lost in translation
|
1.0
|
Footer nav points to /contact-us instead of /contact - It looks like the website is using /contact as the route, but opening an issue on this just to verify that something didn't get lost in translation
|
priority
|
footer nav points to contact us instead of contact it looks like the website is using contact as the route but opening an issue on this just to verify that something didn t get lost in translation
| 1
|
143,273
| 5,513,101,718
|
IssuesEvent
|
2017-03-17 11:27:03
|
smashingmagazine/redesign
|
https://api.github.com/repos/smashingmagazine/redesign
|
closed
|
404 on Emerson Loustau's article "How To Make WordPress Hard For Clients To Mess Up"
|
bug high priority
|
Meow! What happens?
When you visit https://next.smashingmagazine.com/author/emersonloustau/ and try to access 1st article from a list, it results in 404.

|
1.0
|
404 on Emerson Loustau's article "How To Make WordPress Hard For Clients To Mess Up" - Meow! What happens?
When you visit https://next.smashingmagazine.com/author/emersonloustau/ and try to access 1st article from a list, it results in 404.

|
priority
|
on emerson loustau s article how to make wordpress hard for clients to mess up meow what happens when you visit and try to access article from a list it results in
| 1
|
750,594
| 26,207,170,871
|
IssuesEvent
|
2023-01-04 00:25:13
|
AlphaWallet/alpha-wallet-ios
|
https://api.github.com/repos/AlphaWallet/alpha-wallet-ios
|
closed
|
Sending transaction with WalletConnect v1 does nothing
|
Bug High Priority
|
@oa-s will send you a URL offline
1. Visit URL in desktop browser
2. Make sure Goerli is enabled
3. Click "Connect wallet"
4. Approve WalletConnect session
5. Click "Mint Test Devcon Souvenir Token"
6. WalletConnect actionsheet to sign transaction appears
7. Tap Confirm
8. Actionsheet closes
Expected
---
9. Transaction sent — a confirmation screen appears about transaction sent and show up as done or pending in Activity
Observed
---
9. Nothing seemed to have happened
Works if accessed in dapp browser
|
1.0
|
Sending transaction with WalletConnect v1 does nothing - @oa-s will send you a URL offline
1. Visit URL in desktop browser
2. Make sure Goerli is enabled
3. Click "Connect wallet"
4. Approve WalletConnect session
5. Click "Mint Test Devcon Souvenir Token"
6. WalletConnect actionsheet to sign transaction appears
7. Tap Confirm
8. Actionsheet closes
Expected
---
9. Transaction sent — a confirmation screen appears about transaction sent and show up as done or pending in Activity
Observed
---
9. Nothing seemed to have happened
Works if accessed in dapp browser
|
priority
|
sending transaction with walletconnect does nothing oa s will send you a url offline visit url in desktop browser make sure goerli is enabled click connect wallet approve walletconnect session click mint test devcon souvenir token walletconnect actionsheet to sign transaction appears tap confirm actionsheet closes expected transaction sent — a confirmation screen appears about transaction sent and show up as done or pending in activity observed nothing seemed to have happened works if accessed in dapp browser
| 1
|
80,089
| 3,550,668,723
|
IssuesEvent
|
2016-01-20 22:54:53
|
WPIRoboticsProjects/GRIP
|
https://api.github.com/repos/WPIRoboticsProjects/GRIP
|
closed
|
Fix deploying
|
HIGH PRIORITY type: bug
|
There's definitely something up with the deploy feature. A lot of these reports have little useful information, but hopefully between them there should be enough to figure out what's going on. We can use this issue as a single place to collect any information about problems with deploying.
- http://www.chiefdelphi.com/forums/showpost.php?p=1523851&postcount=49
- http://www.chiefdelphi.com/forums/showpost.php?p=1524008&postcount=54
- http://www.chiefdelphi.com/forums/showpost.php?p=1524508&postcount=62
- #352
- #367
- #375
- #376
|
1.0
|
Fix deploying - There's definitely something up with the deploy feature. A lot of these reports have little useful information, but hopefully between them there should be enough to figure out what's going on. We can use this issue as a single place to collect any information about problems with deploying.
- http://www.chiefdelphi.com/forums/showpost.php?p=1523851&postcount=49
- http://www.chiefdelphi.com/forums/showpost.php?p=1524008&postcount=54
- http://www.chiefdelphi.com/forums/showpost.php?p=1524508&postcount=62
- #352
- #367
- #375
- #376
|
priority
|
fix deploying there s definitely something up with the deploy feature a lot of these reports have little useful information but hopefully between them there should be enough to figure out what s going on we can use this issue as a single place to collect any information about problems with deploying
| 1
|
655,242
| 21,682,124,615
|
IssuesEvent
|
2022-05-09 07:46:58
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Sidebar for MS Options
|
Priority: High New Feature Layout C169-Rennes-Métropole-2021-GeOrchestra3
|
## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
There is the need to access the option menu to open other tools contextually (e.g. when the side panel is opened). At the moment the Option button is covered when a side panel is opened so that certain operations are not possible. We need a way to place the menu option somewhere else and properly manage possible conflicts when certain tools are opened together (eg. tools that interacts with the map)
## Acceptance criteria
<!-- Describe here the list of acceptance criteria -->
The general aims are:
- [ ] Remove the burger menu, with other buttons like Home (and the Login one in MS) to move all of them, along with all menu options, in a vertical side toolbar

- [ ] Reduce the size of side panels that opens on the right to make them smaller (around width: 550px) and with a smaller header
- [ ] The Search bar will move to the left when a right-side panel open, to be collapsed in a spyglass button and be opened when that button is clicked. The Search bar is opened by default when there's no panels opened on the right

- [ ] As far as the annotation panel is concerned, it will be moved on the left side to be more "driven" by the TOC: a new button ("Create annotations") in the TOC toolbar will open the annotation panel on the left. When annotations have been created (the related layer is present in TOC with the others) an annotation pencil icon in the TOC toolbar ("Edit annotations") will allow to open the annotation panel again for editing purposes as soon as the annotation layer is selected in TOC


- [ ] The DrawSupport will be also reviewed a bit to ensure there will be no conflicts between tools interacting with the map using a proper policy
- [ ] The printing tool will provide an option to include additional layers (eg. selection of parcels) in the final print
- [ ] The new sidebar menu should be obviously available also in the app context wizard to be included in a context in the same way of the current BurgerMenu
- [ ] We have to manage someway the migration from BurgerMenu to this new menu (e.g. existing contexts). What we have to do for existing contexts where the burger menu is used?
## Other useful information
|
1.0
|
Sidebar for MS Options - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
There is the need to access the option menu to open other tools contextually (e.g. when the side panel is opened). At the moment the Option button is covered when a side panel is opened so that certain operations are not possible. We need a way to place the menu option somewhere else and properly manage possible conflicts when certain tools are opened together (eg. tools that interacts with the map)
## Acceptance criteria
<!-- Describe here the list of acceptance criteria -->
The general aims are:
- [ ] Remove the burger menu, with other buttons like Home (and the Login one in MS) to move all of them, along with all menu options, in a vertical side toolbar

- [ ] Reduce the size of side panels that opens on the right to make them smaller (around width: 550px) and with a smaller header
- [ ] The Search bar will move to the left when a right-side panel open, to be collapsed in a spyglass button and be opened when that button is clicked. The Search bar is opened by default when there's no panels opened on the right

- [ ] As far as the annotation panel is concerned, it will be moved on the left side to be more "driven" by the TOC: a new button ("Create annotations") in the TOC toolbar will open the annotation panel on the left. When annotations have been created (the related layer is present in TOC with the others) an annotation pencil icon in the TOC toolbar ("Edit annotations") will allow to open the annotation panel again for editing purposes as soon as the annotation layer is selected in TOC


- [ ] The DrawSupport will be also reviewed a bit to ensure there will be no conflicts between tools interacting with the map using a proper policy
- [ ] The printing tool will provide an option to include additional layers (eg. selection of parcels) in the final print
- [ ] The new sidebar menu should be obviously available also in the app context wizard to be included in a context in the same way of the current BurgerMenu
- [ ] We have to manage someway the migration from BurgerMenu to this new menu (e.g. existing contexts). What we have to do for existing contexts where the burger menu is used?
## Other useful information
|
priority
|
sidebar for ms options description there is the need to access the option menu to open other tools contextually e g when the side panel is opened at the moment the option button is covered when a side panel is opened so that certain operations are not possible we need a way to place the menu option somewhere else and properly manage possible conflicts when certain tools are opened together eg tools that interacts with the map acceptance criteria the general aims are remove the burger menu with other buttons like home and the login one in ms to move all of them along with all menu options in a vertical side toolbar reduce the size of side panels that opens on the right to make them smaller around width and with a smaller header the search bar will move to the left when a right side panel open to be collapsed in a spyglass button and be opened when that button is clicked the search bar is opened by default when there s no panels opened on the right as far as the annotation panel is concerned it will be moved on the left side to be more driven by the toc a new button create annotations in the toc toolbar will open the annotation panel on the left when annotations have been created the related layer is present in toc with the others an annotation pencil icon in the toc toolbar edit annotations will allow to open the annotation panel again for editing purposes as soon as the annotation layer is selected in toc the drawsupport will be also reviewed a bit to ensure there will be no conflicts between tools interacting with the map using a proper policy the printing tool will provide an option to include additional layers eg selection of parcels in the final print the new sidebar menu should be obviously available also in the app context wizard to be included in a context in the same way of the current burgermenu we have to manage someway the migration from burgermenu to this new menu e g existing contexts what we have to do for existing contexts where the burger menu is 
used other useful information
| 1
|
37,036
| 2,814,439,217
|
IssuesEvent
|
2015-05-18 20:03:06
|
jimrybarski/raspberrypid
|
https://api.github.com/repos/jimrybarski/raspberrypid
|
closed
|
Autostart servers
|
feature high priority
|
The three servers (website, API, backend) need to start automatically on boot.
|
1.0
|
Autostart servers - The three servers (website, API, backend) need to start automatically on boot.
|
priority
|
autostart servers the three servers website api backend need to start automatically on boot
| 1
|
121,795
| 4,821,459,173
|
IssuesEvent
|
2016-11-05 10:35:15
|
CS2103AUG2016-W11-C2/main
|
https://api.github.com/repos/CS2103AUG2016-W11-C2/main
|
opened
|
Improve parsing for add and schedule
|
priority.high type.bug
|
Initially, I wanted to reject other formats other than
1) add a task name
2) add do something by (time)
3) add attend something from (time) to (time)
so things like
4) add read something to (time) will fail but the problem is natty reads number as time (so it does not help when we try to add back to the title string)
This fails: "add i have alot of things to do for 2103"
What we can do
1) acknowledge that it will be wrong. provide ways to overcome it e.g. don't parse what is enclosed in inverted commas etc
2) check for a) by or b) from+to otherwise c) add everything to the task string. No throwing incorrect command format.
3) Make it compulsory to use /by or -by instead
comment here or update user guide on implementation
|
1.0
|
Improve parsing for add and schedule - Initially, I wanted to reject other formats other than
1) add a task name
2) add do something by (time)
3) add attend something from (time) to (time)
so things like
4) add read something to (time) will fail but the problem is natty reads number as time (so it does not help when we try to add back to the title string)
This fails: "add i have alot of things to do for 2103"
What we can do
1) acknowledge that it will be wrong. provide ways to overcome it e.g. don't parse what is enclosed in inverted commas etc
2) check for a) by or b) from+to otherwise c) add everything to the task string. No throwing incorrect command format.
3) Make it compulsory to use /by or -by instead
comment here or update user guide on implementation
|
priority
|
improve parsing for add and schedule initially i wanted to reject other formats other than add a task name add do something by time add attend something from time to time so things like add read something to time will fail but the problem is natty reads number as time so it does not help when we try to add back to the title string this fails add i have alot of things to do for what we can do acknowledge that it will be wrong provide ways to overcome it e g don t parse what is enclosed in inverted commas etc check for a by or b from to otherwise c add everything to the task string no throwing incorrect command format make it compulsory to use by or by instead comment here or update user guide on implementation
| 1
|
370,246
| 10,927,303,104
|
IssuesEvent
|
2019-11-22 16:24:27
|
ooni/pipeline
|
https://api.github.com/repos/ooni/pipeline
|
closed
|
Ingest new metadata produced by new set of tests
|
priority/high
|
This is a master ticket for all feature extractors related to current and future tests:
* [x] Add support for extracting features related to IM tests #200
* [x] Add support for extracting middlebox test features #201
* [x] Document writing feature extractors for new tests #133
|
1.0
|
Ingest new metadata produced by new set of tests - This is a master ticket for all feature extractors related to current and future tests:
* [x] Add support for extracting features related to IM tests #200
* [x] Add support for extracting middlebox test features #201
* [x] Document writing feature extractors for new tests #133
|
priority
|
ingest new metadata produced by new set of tests this is a master ticket for all feature extractors related to current and future tests add support for extracting features related to im tests add support for extracting middlebox test features document writing feature extractors for new tests
| 1
|
726,950
| 25,017,411,801
|
IssuesEvent
|
2022-11-03 20:09:15
|
l7mp/stunner
|
https://api.github.com/repos/l7mp/stunner
|
closed
|
Let turncat to handle FQDNs in TURN URIs
|
good first issue priority: high status: confirmed type: bug
|
Currently turncat cannot connect to TURN servers using a TURN URI that contains the FQDN of the server, e.g.: turn://example.com.
The reason is that during startup we try to create a fake STUNner config from the given URI and when we try to validate it: https://github.com/l7mp/stunner/blob/a845b070ee263b14453743f120bcf8d39d42c6d6/config.go#L58 we run into an error because we assume that the address is a valid IP address: https://github.com/l7mp/stunner/blob/4e8c046beac072f6a140f60c212fe4ad736a24ff/pkg/apis/v1alpha1/listener.go#L56.
|
1.0
|
Let turncat to handle FQDNs in TURN URIs - Currently turncat cannot connect to TURN servers using a TURN URI that contains the FQDN of the server, e.g.: turn://example.com.
The reason is that during startup we try to create a fake STUNner config from the given URI and when we try to validate it: https://github.com/l7mp/stunner/blob/a845b070ee263b14453743f120bcf8d39d42c6d6/config.go#L58 we run into an error because we assume that the address is a valid IP address: https://github.com/l7mp/stunner/blob/4e8c046beac072f6a140f60c212fe4ad736a24ff/pkg/apis/v1alpha1/listener.go#L56.
|
priority
|
let turncat to handle fqdns in turn uris currently turncat cannot connect to turn servers using a turn uri that contains the fqdn of the server e g turn example com the reason is that during startup we try to create a fake stunner config from the given uri and when we try to validate it we run into an error because we assume that the address is a valid ip address
| 1
|
805,547
| 29,524,524,101
|
IssuesEvent
|
2023-06-05 06:34:26
|
GF-Corporate-Archives/gf-johann-conrad-fischer
|
https://api.github.com/repos/GF-Corporate-Archives/gf-johann-conrad-fischer
|
reopened
|
Places TEI file: various English PlaceNames in the wrong place
|
bug anton highest priority
|
@ottosmops there are now various English PlaceNames that appear multiple times in the wrong place. In the places overview it looks like this:
<img width="1424" alt="Bildschirmfoto 2023-06-04 um 17 59 51" src="https://github.com/GF-Corporate-Archives/gf-johann-conrad-fischer/assets/92036071/cdb4ea60-93ca-442b-bcf3-5692d31587d4">
<img width="1424" alt="Bildschirmfoto 2023-06-04 um 18 00 00" src="https://github.com/GF-Corporate-Archives/gf-johann-conrad-fischer/assets/92036071/a522be6e-3adb-4ff4-8586-92df74b6e19a">
When I search the TEI file for "Anglesey" I get 19 hits, and 62 hits for "Khafre", although both are unique keywords or are only linked from one other keyword.
|
1.0
|
Places TEI file: various English PlaceNames in the wrong place - @ottosmops there are now various English PlaceNames that appear multiple times in the wrong place. In the places overview it looks like this:
<img width="1424" alt="Bildschirmfoto 2023-06-04 um 17 59 51" src="https://github.com/GF-Corporate-Archives/gf-johann-conrad-fischer/assets/92036071/cdb4ea60-93ca-442b-bcf3-5692d31587d4">
<img width="1424" alt="Bildschirmfoto 2023-06-04 um 18 00 00" src="https://github.com/GF-Corporate-Archives/gf-johann-conrad-fischer/assets/92036071/a522be6e-3adb-4ff4-8586-92df74b6e19a">
When I search the TEI file for "Anglesey" I get 19 hits, and 62 hits for "Khafre", although both are unique keywords or are only linked from one other keyword.
|
priority
|
places tei file various english placenames in the wrong place ottosmops there are now various english placenames that appear multiple times in the wrong place in the places overview it looks like this img width alt bildschirmfoto um src img width alt bildschirmfoto um src when i search the tei file for anglesey i get hits and hits for khafre although both are unique keywords or are only linked from one other keyword
| 1
|
416,243
| 12,141,778,569
|
IssuesEvent
|
2020-04-23 23:30:04
|
camsaul/methodical
|
https://api.github.com/repos/camsaul/methodical
|
reopened
|
add a is-default-method? util fn
|
enhancement high-priority!
|
We need a util fn to figure out if the effective method for a dispatch value is the default method, especially since we can't compare them using `=` since they aren't the same object (as they are in vanilla multimethods)
|
1.0
|
add a is-default-method? util fn - We need a util fn to figure out if the effective method for a dispatch value is the default method, especially since we can't compare them using `=` since they aren't the same object (as they are in vanilla multimethods)
|
priority
|
add a is default method util fn we need a util fn to figure out if the effective method for a dispatch value is the default method especially since we can t compare them using since they aren t the same object as they are in vanilla multimethods
| 1
|
788,828
| 27,769,043,586
|
IssuesEvent
|
2023-03-16 13:23:43
|
pendulum-chain/portal
|
https://api.github.com/repos/pendulum-chain/portal
|
closed
|
Route by default to Pendulum
|
priority:high type:enhancement
|
As a user accessing the Pendulum Portal via the default URL, I should be directed to the "production" network: Pendulum.
# TC
* When accessing `https://portal.pendulumchain.org`, the route should default to `https://portal.pendulumchain.org/pendulum` (not `/amplitude`)
|
1.0
|
Route by default to Pendulum - As a user accessing the Pendulum Portal via the default URL, I should be directed to the "production" network: Pendulum.
# TC
* When accessing `https://portal.pendulumchain.org`, the route should default to `https://portal.pendulumchain.org/pendulum` (not `/amplitude`)
|
priority
|
route by default to pendulum as a user accessing the pendulum portal via the default url i should be directed to the production network pendulum tc when accessing the route should default to not amplitude
| 1
|
412,449
| 12,042,657,542
|
IssuesEvent
|
2020-04-14 10:58:56
|
TownyAdvanced/Towny
|
https://api.github.com/repos/TownyAdvanced/Towny
|
closed
|
Suggestion: Confirmation message with price warning on /plot set outpost
|
Label-Outposts Priority-High enhancement
|
**Please explain your feature request to the best of your abilities:**
Players get confused with the various outpost commands. As /plot set outpost can be very expensive, it might be good to warn players of the cost before the command goes through. ie:
/plot set outpost
*This will create a new outpost costing %outpostprice%. Do you really want to do this?*
Thanks.
|
1.0
|
Suggestion: Confirmation message with price warning on /plot set outpost - **Please explain your feature request to the best of your abilities:**
Players get confused with the various outpost commands. As /plot set outpost can be very expensive, it might be good to warn players of the cost before the command goes through. ie:
/plot set outpost
*This will create a new outpost costing %outpostprice%. Do you really want to do this?*
Thanks.
|
priority
|
suggestion confirmation message with price warning on plot set outpost please explain your feature request to the best of your abilities players get confused with the various outpost commands as plot set outpost can be very expensive it might be good to warn players of the cost before the command goes through ie plot set outpost this will create a new outpost costing outpostprice do you really want to do this thanks
| 1
|
534,575
| 15,629,819,693
|
IssuesEvent
|
2021-03-22 00:25:25
|
pietervdvn/MapComplete
|
https://api.github.com/repos/pietervdvn/MapComplete
|
closed
|
background map alidade smooth dark is broken
|
high-priority
|
As used in https://pietervdvn.github.io/MapComplete/surveillance / https://mapcomplete.osm.be/surveillance
Note: at zoom level 0 it does work, all other zoom levels seem to do nothing
|
1.0
|
background map alidade smooth dark is broken - As used in https://pietervdvn.github.io/MapComplete/surveillance / https://mapcomplete.osm.be/surveillance
Note: at zoom level 0 it does work, all other zoom levels seem to do nothing
|
priority
|
background map alidade smooth dark is broken as used in note at zoom level it does work all other zoom levels seem to do nothing
| 1
|
342,178
| 10,313,075,594
|
IssuesEvent
|
2019-08-29 21:32:56
|
BCcampus/edehr
|
https://api.github.com/repos/BCcampus/edehr
|
closed
|
Change orientation of 4 tables add stacked fields
|
Effort - Low Epic - Layout Priority - High ~Bug
|
The medications screen table should be organized with each record being displayed in a row.
|
1.0
|
Change orientation of 4 tables add stacked fields - The medications screen table should be organized with each record being displayed in a row.
|
priority
|
change orientation of tables add stacked fields the medications screen table should be organized with each record being displayed in a row
| 1
|
33,375
| 2,764,433,904
|
IssuesEvent
|
2015-04-29 15:25:41
|
Mirdarthos/Zyla
|
https://api.github.com/repos/Mirdarthos/Zyla
|
opened
|
Combine files
|
High Priority ToDo
|
combine includes so that the functionality only needs 1 include file. Build that file to include the other files and start the classes as necessary
|
1.0
|
Combine files - combine includes so that the functionality only needs 1 include file. Build that file to include the other files and start the classes as necessary
|
priority
|
combine files combine includes so that the functionality only needs include file build that file to include the other files and start the classes as necessary
| 1
|
67,558
| 3,275,259,637
|
IssuesEvent
|
2015-10-26 14:55:15
|
theodi/member-directory
|
https://api.github.com/repos/theodi/member-directory
|
closed
|
Amend student T&Cs
|
4 - Done priority: high
|
* [ ] Remove point 4.2
* [ ] Remove definitions table and reformat as follows (with the term emboldened each time):
ODI means the Open Data Institute, a company registered in England and Wales with company number 08030289 and whose registered office is 3rd Floor, 65 Clifton Street, London, EC2A 4JE
<!---
@huboard:{"order":1.3299286365509033e-06,"milestone_order":0.46484375,"custom_state":""}
-->
|
1.0
|
Amend student T&Cs - * [ ] Remove point 4.2
* [ ] Remove definitions table and reformat as follows (with the term emboldened each time):
ODI means the Open Data Institute, a company registered in England and Wales with company number 08030289 and whose registered office is 3rd Floor, 65 Clifton Street, London, EC2A 4JE
<!---
@huboard:{"order":1.3299286365509033e-06,"milestone_order":0.46484375,"custom_state":""}
-->
|
priority
|
amend student t cs remove point remove definitions table and reformat as follows with the term emboldened each time odi means the open data institute a company registered in england and wales with company number and whose registered office is floor clifton street london huboard order milestone order custom state
| 1
|
117,025
| 4,710,128,873
|
IssuesEvent
|
2016-10-14 09:03:52
|
GluuFederation/community-edition-setup
|
https://api.github.com/repos/GluuFederation/community-edition-setup
|
closed
|
Tomcat/OpenDJ services run under root user inside containers of CentOS7/RHEL7-based instances.
|
bug High Priority
|
Results of a quick survey I did show that in containers of the latest packages (2.4.3-2.4.4) based on CentOS7/RHEL7 distros services of Tomcat and OpenDJ run under root user, instead of "tomcat"/"ldap" users, respectively. I just ran `ps -aux | grep -i java` on those vms within container to check for this. This behavior can't be seen in containers based on CentOS6.x or Ubuntu 14.02 distros.
|
1.0
|
Tomcat/OpenDJ services run under root user inside containers of CentOS7/RHEL7-based instances. - Results of a quick survey I did show that in containers of the latest packages (2.4.3-2.4.4) based on CentOS7/RHEL7 distros services of Tomcat and OpenDJ run under root user, instead of "tomcat"/"ldap" users, respectively. I just ran `ps -aux | grep -i java` on those vms within container to check for this. This behavior can't be seen in containers based on CentOS6.x or Ubuntu 14.02 distros.
|
priority
|
tomcat opendj services run under root user inside containers of based instances results of quick survey i did show that in containers of the latest packages based on distros services of tomcat and opendj run under root user instead of tomcat ldap users respectively i just run ps aux grep i java on those vms within container to check for this this behavior can t be seen in containers based on x or ubuntu distros
| 1
|
501,577
| 14,529,555,010
|
IssuesEvent
|
2020-12-14 17:58:44
|
SparkDevNetwork/Rock
|
https://api.github.com/repos/SparkDevNetwork/Rock
|
closed
|
Workflow Entry HTML Block Broken when Block name changed.
|
Priority: High Status: Attention Core Team Status: Confirmed
|
### Description
When a workflow entry show HTML action is used on the external site and the Block name is anything other than "Workflow Entry" the HTML from the show HTML action does not display on the front end.
This works.


This does not


### Steps to Reproduce
1. Create a workflow that uses a workflow entry form and then shows a success message via the Workflow Entry show HTML action
1. add the workflow entry block to any page on the Front end. Change the Name of the block to anything other than "Workflow Entry"
1. visit the external page and fill out the form. The HTML from the show HTML action will not be shown.
1. Change the name of the workflow entry block to "Workflow Entry"
1. Visit the external page and fill out the form the HTML will now be shown as expected.
**Expected behavior:**
the entry show HTML content would be displayed regardless of the block name.
**Actual behavior:**
The block has to be very specifically named "Workflow Entry" for the HTML to display.
### Versions
* **Rock Version:** 11.2, 11.3, 12, 13
|
1.0
|
Workflow Entry HTML Block Broken when Block name changed. - ### Description
When a workflow entry show HTML action is used on the external site and the Block name is anything other than "Workflow Entry" the HTML from the show HTML action does not display on the front end.
This works.


This does not


### Steps to Reproduce
1. Create a workflow that uses a workflow entry form and then shows a success message via the Workflow Entry show HTML action
1. add the workflow entry block to any page on the Front end. Change the Name of the block to anything other than "Workflow Entry"
1. visit the external page and fill out the form. The HTML from the show HTML action will not be shown.
1. Change the name of the workflow entry block to "Workflow Entry"
1. Visit the external page and fill out the form the HTML will now be shown as expected.
**Expected behavior:**
the entry show HTML content would be displayed regardless of the block name.
**Actual behavior:**
The block has to be very specifically named "Workflow Entry" for the HTML to display.
### Versions
* **Rock Version:** 11.2, 11.3, 12, 13
|
priority
|
workflow entry html block broken when block name changed description when a workflow entry show html action is used on the external site and the block name is anything other than workflow entry the html from the show html action does not display on the front end this works this does not steps to reproduce create a workflow that uses a workflow entry form and then shows a success message via the workflow entry show html action add the workflow entry block to any page on the front end change the name of the block to anything other than workflow entry visit the external page and fill out the form the html from the show html action will not be shown change the name of the workflow entry block to workflow entry visit the external page and fill out the form the html will now be shown as expected expected behavior the entry show html content would be displayed regardless of the block name actual behavior the block has to be very specifically named workflow entry for the html to display versions rock version
| 1
|
482,266
| 13,903,773,178
|
IssuesEvent
|
2020-10-20 07:43:36
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Issue with Annotations in stories
|
Accepted Priority: High bug
|
## Description
<!-- Add here a few sentences describing the bug. -->
In map advanced editor of a story, it is not possible to open the colorpicker to change the style. If you try to do this, it is not possible to close the annotation panel
## How to reproduce
<!-- A list of steps to reproduce the bug -->
It is possible to do a try by using [this story](https://dev.mapstore2.geo-solutions.it/mapstore/#/geostory/11555)

*Expected Result*
<!-- Describe here the expected result -->
It is possible to change the style and close the annotation panel
*Current Result*
<!-- Describe here the current behavior -->
You cannot open the color picker and if you try to open it to change the color, then you cannot close the annotation panel
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
|
1.0
|
Issue with Annotations in stories - ## Description
<!-- Add here a few sentences describing the bug. -->
In map advanced editor of a story, it is not possible to open the colorpicker to change the style. If you try to do this, it is not possible to close the annotation panel
## How to reproduce
<!-- A list of steps to reproduce the bug -->
It is possible to do a try by using [this story](https://dev.mapstore2.geo-solutions.it/mapstore/#/geostory/11555)

*Expected Result*
<!-- Describe here the expected result -->
It is possible to change the style and close the annotation panel
*Current Result*
<!-- Describe here the current behavior -->
You cannot open the color picker and if you try to open it to change the color, then you cannot close the annotation panel
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
|
priority
|
issue with annotations in stories description in map advanced editor of a story it is not possible to open the colorpicker to change the style if you try to do this it is not possible to close the annotation panel how to reproduce it is possible to do a try by using expected result it is possible to change the style and close the annotation panel current result you cannot open the color picker and if you try to open it to change the color then you cannot close the annotation panel not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information
| 1
|
488,755
| 14,085,963,976
|
IssuesEvent
|
2020-11-05 02:24:30
|
boostcamp-2020/IssueTracker-28
|
https://api.github.com/repos/boostcamp-2020/IssueTracker-28
|
closed
|
[Issue list screen] Apply search filter
|
🌟 high-priority 👾 frontend
|
### Expected screen


### Features
The filter application state is displayed as text in the search box next to the [Filters] button.
### Requirements
- The initial state is is:open is:issue. (The list of open issues is shown)
- It does not have to be displayed exactly in the is:open… format. Display it in any way that is easy to recognize.
- If all the text in the search box is deleted, “Search all issues” appears faintly in the search box, and pressing Enter with all text deleted shows the full issue list including closed issues. Behavior ➋ also applies in the “Search all issues” state.
|
1.0
|
[Issue list screen] Apply search filter - ### Expected screen


### Features
The filter application state is displayed as text in the search box next to the [Filters] button.
### Requirements
- The initial state is is:open is:issue. (The list of open issues is shown)
- It does not have to be displayed exactly in the is:open… format. Display it in any way that is easy to recognize.
- If all the text in the search box is deleted, “Search all issues” appears faintly in the search box, and pressing Enter with all text deleted shows the full issue list including closed issues. Behavior ➋ also applies in the “Search all issues” state.
|
priority
|
apply search filter expected screen features the filter application state is displayed as text in the search box next to the filters button requirements the initial state is is open is issue the list of open issues is shown it does not have to be displayed exactly in the is open… format display it in any way that is easy to recognize if all the text in the search box is deleted “search all issues” appears faintly in the search box and pressing enter with all text deleted shows the full issue list including closed issues behavior ➋ also applies in the “search all issues” state
| 1
|
713,618
| 24,533,349,954
|
IssuesEvent
|
2022-10-11 18:24:10
|
huridocs/uwazi
|
https://api.github.com/repos/huridocs/uwazi
|
opened
|
SEO/Accessibility tweaks - 2
|
Priority: High Frontend :sunglasses:
|
Navigating Uwazi with a text only browser (such as the CLI links2 browser) reveals a series of accessibility problems that also have a great impact on SEO, particularly when it comes to content discoverability.
The server side rendering of some of our components needs to be tweaked so it tosses valid, semantic html.
3. Filters

problem: filters are rendered with the number of hits glued to the item.
solution: the number of hits needs to be rendered with spacing from the item title. Desirable to have them enclosed in parentheses. It should be enclosed in a <nav> tag.
problem: the options are not a clickable link, hindering content discoverability
solution: regardless of the full fledged web app rendering and interaction, the options should be rendered as a link
problem: "X more" link is not unfoldable. Many options are not reachable
solution: a good enough solution, at least for discoverability, would be not collapsing the options and always printing the whole list
|
1.0
|
SEO/Accessibility tweaks - 2 - Navigating Uwazi with a text only browser (such as the CLI links2 browser) reveals a series of accessibility problems that also have a great impact on SEO, particularly when it comes to content discoverability.
The server side rendering of some of our components needs to be tweaked so it tosses valid, semantic html.
3. Filters

problem: filters are rendered with the number of hits glued to the item.
solution: the number of hits needs to be rendered with spacing from the item title. Desirable to have them enclosed in parentheses. It should be enclosed in a <nav> tag.
problem: the options are not a clickable link, hindering content discoverability
solution: regardless of the full fledged web app rendering and interaction, the options should be rendered as a link
problem: "X more" link is not unfoldable. Many options are not reachable
solution: a good enough solution, at least for discoverability, would be not collapsing the options and always printing the whole list
|
priority
|
seo accessibility tweaks navigating uwazi with a text only browser such as cli browser reveals a series of accessibility problems that also have a great impact on seo particularly when it comes to content discoverability the server side rendering of some of our components needs to be tweaked so it tosses valid semantic html filters problem filters are rendered with the number of hits glued to the item solution the number of hits needs to be rendered with spacing from the item title desirable to have them enclosed in parentheses it should be enclosed in a tag problem the options are not a clickable link hindering content discoverability solution regardless of the full fledged web app rendering and interaction the options should be rendered as a link problem x more link is not unfoldable many options are not reachable solution a good enough solution at least for discoverability would be not collapsing the options and always printing the whole list
| 1
|
766,502
| 26,885,979,005
|
IssuesEvent
|
2023-02-06 03:15:20
|
CCICB/CRUX
|
https://api.github.com/repos/CCICB/CRUX
|
closed
|
THCA dataset throws error for TMB module and drug interaction module
|
High Priority
|
An error has occurred. Check your logs or contact the app author for clarification.
|
1.0
|
THCA dataset throws error for TMB module and drug interaction module - An error has occurred. Check your logs or contact the app author for clarification.
|
priority
|
thca dataset throws error for tmb module and drug interaction module an error has occurred check your logs or contact the app author for clarification
| 1
|
391,609
| 11,576,194,511
|
IssuesEvent
|
2020-02-21 11:22:31
|
ooni/probe
|
https://api.github.com/repos/ooni/probe
|
closed
|
Update the GeoIP database of the mobile apps?
|
bug discuss effort/XS ooni/probe-mobile priority/high
|
Currently the WhatsApp test is reporting false positives because of bad GeoIP data.
|
1.0
|
Update the GeoIP database of the mobile apps? - Currently the WhatsApp test is reporting false positives because of bad GeoIP data.
|
priority
|
update the geoip database of the mobile apps currently the whatsapp test is reporting false positives because of bad geoip data
| 1
|
790,312
| 27,822,685,831
|
IssuesEvent
|
2023-03-19 12:23:38
|
AY2223S2-CS2113-W15-3/tp
|
https://api.github.com/repos/AY2223S2-CS2113-W15-3/tp
|
opened
|
Information is not displayed by month
|
type.Enhancement priority.High
|
When listing budget, deposit, expense or stats, information is displayed as an overall total rather than by month.
Was thinking of using an optional parameter to filter the information by month.
|
1.0
|
Information is not displayed by month - When listing budget, deposit, expense or stats, information is displayed as an overall total rather than by month.
Was thinking of using an optional parameter to filter the information by month.
|
priority
|
information is not displayed by month when listing budget deposit expense or stats information is displayed as an overall total rather than by month was thinking of using an optional parameter to filter the information by month
| 1
|
477,592
| 13,764,977,839
|
IssuesEvent
|
2020-10-07 12:50:09
|
AY2021S1-CS2113T-T09-1/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-T09-1/tp
|
closed
|
Add a pointer to track the current app view
|
priority.High type.Task
|
Pointer keeps track whether user is in overall project list view or within a specific project. This allows for task management features within a project.
|
1.0
|
Add a pointer to track the current app view - Pointer keeps track whether user is in overall project list view or within a specific project. This allows for task management features within a project.
|
priority
|
add a pointer to track the current app view pointer keeps track whether user is in overall project list view or within a specific project this allows for task management features within a project
| 1
|
689,915
| 23,640,228,599
|
IssuesEvent
|
2022-08-25 16:22:06
|
DSpace/dspace-angular
|
https://api.github.com/repos/DSpace/dspace-angular
|
closed
|
Browse menus on top must reset pagenumber to 1 when a link clicked.
|
bug help wanted component: Discovery high priority e/6
|
**Describe the bug**
On browse search results, if you click a link with a big number of results no item is shown in the opening page. This is because the page parameter in the url (bbm.page) is not reset. The new page tries to show the Page number of the previous page.
**To Reproduce**
Steps to reproduce the behavior:
1. Open Ds7 demo site https://demo7.dspace.org/home
2. Click Browse by Author link at the top menu https://demo7.dspace.org/browse/author
3. Navigate to page 110 https://demo7.dspace.org/browse/author?bbm.page=110
4. Click on Simmons, Cameron which have 189 items https://demo7.dspace.org/browse/author?bbm.page=110&value=Simmons,%20Cameron
5. You see that a blank page loads. https://demo7.dspace.org/browse/author?bbm.page=96&value=Simmons,%20Cameron
6. Check the page parameter in the url: it still tries to show page 110, which is in fact the number of the previous page
7. Change the url parameter bbm.page to 1. Observe that correct result is shown.
**Expected behavior**
When a link is clicked at step 4 above, a second parameter for the page number must be used. And that parameter must start at 1.
The bbm.page parameter must be kept in memory. If the user clicks the "All browse results" button anytime during his navigation of the author's related works, then the system must go back to the previous bbm.page (110) at step 3.
This way the user can continue browsing by authors, from the point he/she forked to the author.
**Related work**
Link to any related tickets or PRs here.
|
1.0
|
Browse menus on top must reset pagenumber to 1 when a link clicked. - **Describe the bug**
On browse search results, if you click a link with a big number of results no item is shown in the opening page. This is because the page parameter in the url (bbm.page) is not reset. The new page tries to show the Page number of the previous page.
**To Reproduce**
Steps to reproduce the behavior:
1. Open Ds7 demo site https://demo7.dspace.org/home
2. Click Browse by Author link at the top menu https://demo7.dspace.org/browse/author
3. Navigate to page 110 https://demo7.dspace.org/browse/author?bbm.page=110
4. Click on Simmons, Cameron which have 189 items https://demo7.dspace.org/browse/author?bbm.page=110&value=Simmons,%20Cameron
5. You see that a blank page loads. https://demo7.dspace.org/browse/author?bbm.page=96&value=Simmons,%20Cameron
6. Check the page parameter in the url: it still tries to show page 110, which is in fact the number of the previous page
7. Change the url parameter bbm.page to 1. Observe that correct result is shown.
**Expected behavior**
When a link is clicked at step 4 above, a second parameter for the page number must be used. And that parameter must start at 1.
The bbm.page parameter must be kept in memory. If the user clicks the "All browse results" button anytime during his navigation of the author's related works, then the system must go back to the previous bbm.page (110) at step 3.
This way the user can continue browsing by authors, from the point he/she forked to the author.
**Related work**
Link to any related tickets or PRs here.
|
priority
|
browse menus on top must reset pagenumber to when a link clicked describe the bug on browse search results if you click a link with a big number of results no item is shown in the opening page this is because the page parameter in the url bbm page is not reset the new page tries to show the page number of the previous page to reproduce steps to reproduce the behavior open demo site click browse by author link at the top menu navigate to page click on simmons cameron which have items you see that a blank page loads check that the page parameter in the url that it still tries to show page which is in fact the number of previous page change the url parameter bbm page to observe that correct result is shown expected behavior when a link clicked at step above a second parameter for page number must be used and that parameter must start by the bbm page parameter must be kept in the memory if the user clicks all browse results button anytime during his navigation of the author s related works than the system must go back to the previous bbm page at step this way the user can continue browsing by authors from the point he she forked to the author related work link to any related tickets or prs here
| 1
|
415,409
| 12,129,029,730
|
IssuesEvent
|
2020-04-22 21:39:10
|
PyTorchLightning/pytorch-lightning
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
|
closed
|
Add support for Horovod as a distributed backend
|
High Priority enhancement help wanted
|
## 🚀 Feature
[Horovod](http://horovod.ai/) is a framework for performing data-parallel distributed training for PyTorch (in addition to other frameworks like TensorFlow and MXNet). It uses the allreduce technique to synchronously aggregate gradients across workers, similar to PyTorch's DDP API.
The goal of this feature is to implement support for Horovod as another `distributed_backend` option for PyTorch Lightning, providing an abstraction layer over the Horovod API so users don't need to modify their training code when scaling from one GPU to many.
### Motivation
At Uber, many of our researchers are interested in adopting PyTorch Lightning as a standard platform-level API. Because our infrastructure is highly integrated with Horovod, one of the prerequisites for adoption is to be able to run PyTorch Lightning using Horovod for distributed training.
We considered making this an internal layer built on top of PyTorch Lightning, but because Horovod is a popular API used by other companies in industry, we thought this would make the most sense as a contribution to PyTorch Lightning.
### Pitch
With this change, all users would need to do to add Horovod support would be to make the following change to their Trainer to run on GPU (single or multiple):
```
trainer = Trainer(distributed_backend='horovod', gpus=1)
```
Or to run on CPU:
```
trainer = Trainer(distributed_backend='horovod')
```
Then the training script can be launched via the `horovodrun` command-line tool, where the host/GPU allocation is specified:
```
horovodrun -np 8 -H host1:4,host2:4 python train.py
```
### Alternatives
1. Build Horovod support outside of PyTorch Lightning. This has been done by some users in the past, but requires building a separate abstraction of Lightning. It'll be difficult to keep such solutions in sync as Lightning continues to add new features, or to make it fully compatible with user LightningModules (if we need to use the same methods/hooks to implement the required functionality).
2. Launch Horovod in-process as opposed to from a driver application. Horovod supports launching programmatically via the `horovod.run` API. However, this requires pickling code, which is prone to serialization errors for some models. Most Horovod users are accustomed to using horovodrun / mpirun to launch their jobs. Also, using `horovodrun` allows us to decouple the training code from the resource requirements (num_gpus, etc.) which is useful for our users.
### Additional context
A proof of concept has been implemented here: https://github.com/PyTorchLightning/pytorch-lightning/compare/master...tgaddair:horovod
Docs and unit tests are forthcoming. But before creating a full PR, I wanted to get the thoughts of the PyTorch Lightning devs to see if this implementation aligns with your goals for the project.
cc @alsrgv
|
1.0
|
Add support for Horovod as a distributed backend - ## 🚀 Feature
[Horovod](http://horovod.ai/) is a framework for performing data-parallel distributed training for PyTorch (in addition to other frameworks like TensorFlow and MXNet). It uses the allreduce technique to synchronously aggregate gradients across workers, similar to PyTorch's DDP API.
The goal of this feature is to implement support for Horovod as another `distributed_backend` option for PyTorch Lightning, providing an abstraction layer over the Horovod API so users don't need to modify their training code when scaling from one GPU to many.
### Motivation
At Uber, many of our researchers are interested in adopting PyTorch Lightning as a standard platform-level API. Because our infrastructure is highly integrated with Horovod, one of the prerequisites for adoption is to be able to run PyTorch Lightning using Horovod for distributed training.
We considered making this an internal layer built on top of PyTorch Lightning, but because Horovod is a popular API used by other companies in industry, we thought this would make the most sense as a contribution to PyTorch Lightning.
### Pitch
With this change, all users would need to do to add Horovod support would be to make the following change to their Trainer to run on GPU (single or multiple):
```
trainer = Trainer(distributed_backend='horovod', gpus=1)
```
Or to run on CPU:
```
trainer = Trainer(distributed_backend='horovod')
```
Then the training script can be launched via the `horovodrun` command-line tool, where the host/GPU allocation is specified:
```
horovodrun -np 8 -H host1:4,host2:4 python train.py
```
### Alternatives
1. Build Horovod support outside of PyTorch Lightning. This has been done by some users in the past, but requires building a separate abstraction of Lightning. It'll be difficult to keep such solutions in sync as Lightning continues to add new features, or to make it fully compatible with user LightningModules (if we need to use the same methods/hooks to implement the required functionality).
2. Launch Horovod in-process as opposed to from a driver application. Horovod supports launching programmatically via the `horovod.run` API. However, this requires pickling code, which is prone to serialization errors for some models. Most Horovod users are accustomed to using horovodrun / mpirun to launch their jobs. Also, using `horovodrun` allows us to decouple the training code from the resource requirements (num_gpus, etc.) which is useful for our users.
### Additional context
A proof of concept has been implemented here: https://github.com/PyTorchLightning/pytorch-lightning/compare/master...tgaddair:horovod
Docs and unit tests are forthcoming. But before creating a full PR, I wanted to get the thoughts of the PyTorch Lightning devs to see if this implementation aligns with your goals for the project.
cc @alsrgv
|
priority
|
add support for horovod as a distributed backend 🚀 feature is a framework for performing data parallel distributed training for pytorch in addition to other frameworks like tensorflow and mxnet it uses the allreduce technique to synchronously aggregate gradients across workers similar to pytorch s ddp api the goal of this feature is to implemented support for horovod as another distributed backend option for pytorch lightning providing an abstraction layer over the horovod api so users don t need to modify their training code when scaling from one gpu to many motivation at uber many of our researchers are interested in adopting pytorch lightning as a standard platform level api because our infrastructure is highly integrated with horovod one of the prerequisites for adoption is to be able to run pytorch lightning using horovod for distributed training we considered making this an internal layer built on top of pytorch lightning but because horovod is a popular api used by other companies in industry we thought this would make the most sense as a contribution to pytorch lightning pitch with this change all users would need to do to add horovod support would be to make the following change to their trainer to run on gpu single or multiple trainer trainer distributed backend horovod gpus or to run on cpu trainer trainer distributed backend horovod then the training script can be launched via the horovodrun command line tool where the host gpu allocation is specified horovodrun np h python train py alternatives build horovod support outside of pytorch lightning this has been some by some users in the past but requires building a separate abstraction of lightning it ll be difficult to keep such solutions in sync as lightning continues to add new features or to make it fully compatible with user lightningmodules if we need to use the same methods hooks to implement the required functionality launch horovod in process as opposed to from a driver application horovod 
supports launching programmatically via the horovod run api however this requires pickling code which is prone to serialization errors for some models most horovod users are accustomed to using horovodrun mpirun to launch their jobs also using horovodrun allows us to decouple the training code from the resource requirements num gpus etc which is useful for our users additional context a proof of concept has been implemented here docs and unit tests are forthcoming but before creating a full pr i wanted to get the thoughts of the pytorch lightning devs to see if this implementation aligns with your goals for the project cc alsrgv
| 1
|
4,701
| 2,563,005,937
|
IssuesEvent
|
2015-02-06 09:06:15
|
ukwa/w3act
|
https://api.github.com/repos/ukwa/w3act
|
opened
|
Please add a full JSON Collections endpoint
|
High Priority
|
To access all the collections metadata, I need an export of them as JSON. I think this is just a matter of hooking a suitable route to CollectionsController.getCollectionsData().
GET /collections/filterbyjson controllers.Collections.getCollectionsData()
and/or
GET /api/collections controllers.Collections.getCollectionsData()
|
1.0
|
Please add a full JSON Collections endpoint - To access all the collections metadata, I need an export of them as JSON. I think this is just a matter of hooking a suitable route to CollectionsController.getCollectionsData().
GET /collections/filterbyjson controllers.Collections.getCollectionsData()
and/or
GET /api/collections controllers.Collections.getCollectionsData()
|
priority
|
please add a full json collections endpoint to access all the collections metadata i need an export of them as json i think this is just a matter of hooking a suitable route to collectionscontroller getcollectionsdata get collections filterbyjson controllers collections getcollectionsdata and or get api collections controllers collections getcollectionsdata
| 1
|
308,867
| 9,458,571,886
|
IssuesEvent
|
2019-04-17 05:53:33
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
opened
|
Add event triggering for missing identity operations
|
Complexity/Medium Priority/High Severity/Major Type/Improvement WUM
|
Currently we only fire "account lock" event only. Which is a generic event where it can be fired for multiple operations. We need to trigger a specific event here. Furthermore, there are multiple operations that we do not trigger events. We need to identify them and trigger events accordingly.
|
1.0
|
Add event triggering for missing identity operations - Currently we only fire "account lock" event only. Which is a generic event where it can be fired for multiple operations. We need to trigger a specific event here. Furthermore, there are multiple operations that we do not trigger events. We need to identify them and trigger events accordingly.
|
priority
|
add event triggering for missing identity operations currently we only fire account lock event only which is a generic event where it can be fired for multiple operations we need to trigger a specific event here furthermore there are multiple operations that we do not trigger events we need to identify them and trigger events accordingly
| 1
|
813,511
| 30,459,488,374
|
IssuesEvent
|
2023-07-17 05:13:59
|
TimDettmers/bitsandbytes
|
https://api.github.com/repos/TimDettmers/bitsandbytes
|
closed
|
non-existent path throwing Error invalid device ordinal
|
enhancement high priority
|
Hello, I'm trying to run the following [HuggingFace notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=ym9bmcpKP9XT).
The code runs fine on Colab, but when trying it (with no changes) on my system (Windows 11, WSL2, NVidia Quadro RTX 5000 16GB) I get the following error upon starting the training loop (trainer.train()):
```
You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
/home/uccollab/support-text-gen/lib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py:230: UserWarning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead. (Triggered internally at ../aten/src/ATen/native/TensorCompare.cpp:493.)
attn_scores = torch.where(causal_mask, attn_scores, mask_value)
Error invalid device ordinal at line 359 in file /home/tim/git/bitsandbytes/csrc/pythonInterface.c
```
I've seen that this is potentially fixable by editing pythonInterface.c, according to [this other thread](https://github.com/stoperro/bitsandbytes_windows/commit/e02f078000ed19fe57321c464dd16d60f18d6803)
The problem is the specified path "/home/tim/git/bitsandbytes/csrc/pythonInterface.c", there's no such path on my WSL installation (also "tim" is not my username). I have no clue where bitsandbytes is getting that file from and a search doesn't return anything as well. I also tried compiling from source but the issue persists.
|
1.0
|
non-existent path throwing Error invalid device ordinal - Hello, I'm trying to run the following [HuggingFace notebook](https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=ym9bmcpKP9XT).
The code runs fine on Colab, but when trying it (with no changes) on my system (Windows 11, WSL2, NVidia Quadro RTX 5000 16GB) I get the following error upon starting the training loop (trainer.train()):
```
You're using a GPTNeoXTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
/home/uccollab/support-text-gen/lib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py:230: UserWarning: where received a uint8 condition tensor. This behavior is deprecated and will be removed in a future version of PyTorch. Use a boolean condition instead. (Triggered internally at ../aten/src/ATen/native/TensorCompare.cpp:493.)
attn_scores = torch.where(causal_mask, attn_scores, mask_value)
Error invalid device ordinal at line 359 in file /home/tim/git/bitsandbytes/csrc/pythonInterface.c
```
I've seen that this is potentially fixable by editing pythonInterface.c, according to [this other thread](https://github.com/stoperro/bitsandbytes_windows/commit/e02f078000ed19fe57321c464dd16d60f18d6803)
The problem is the specified path "/home/tim/git/bitsandbytes/csrc/pythonInterface.c", there's no such path on my WSL installation (also "tim" is not my username). I have no clue where bitsandbytes is getting that file from and a search doesn't return anything as well. I also tried compiling from source but the issue persists.
|
priority
|
non existent path throwing error invalid device ordinal hello i m trying to run the following the code runs fine on colab but when trying it with no changes on my system windows nvidia quadro rtx i get the following error upon starting the training loop trainer train you re using a gptneoxtokenizerfast tokenizer please note that with a fast tokenizer using the call method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding home uccollab support text gen lib site packages transformers models gpt neox modeling gpt neox py userwarning where received a condition tensor this behavior is deprecated and will be removed in a future version of pytorch use a boolean condition instead triggered internally at aten src aten native tensorcompare cpp attn scores torch where causal mask attn scores mask value error invalid device ordinal at line in file home tim git bitsandbytes csrc pythoninterface c i ve seen that this is potentially fixable by editing pythoninterface c according to the problem is the specified path home tim git bitsandbytes csrc pythoninterface c there s no such path on my wsl installation also tim is not my username i have no clue where bitesandbytes is getting that file from and a search doesn t return anything as well i also tried compiling from source but the issue persists
| 1
|
217,730
| 7,327,794,632
|
IssuesEvent
|
2018-03-04 14:25:09
|
goby-lang/goby
|
https://api.github.com/repos/goby-lang/goby
|
closed
|
Blocks don't return nil when empty
|
Priority High VM bug in progress
|
There is no current specification for this - I think it's a bug, but it may not necessarily be.
When an empty block is passed to a function, `nil` value is not returned; instead, I think that the caller itself is returned. For example
```ruby
[1, 2].map do end # => [[1, 2], [1, 2]]
[1, 2].select do end # => [1, 2]
```
I would expect `[nil, nil]` in the first case, and `[]` in the second, as I think `nil` is an appropriate value to return when a block is empty.
|
1.0
|
Blocks don't return nil when empty - There is no current specification for this - I think it's a bug, but it may not necessarily be.
When an empty block is passed to a function, `nil` value is not returned; instead, I think that the caller itself is returned. For example
```ruby
[1, 2].map do end # => [[1, 2], [1, 2]]
[1, 2].select do end # => [1, 2]
```
I would expect `[nil, nil]` in the first case, and `[]` in the second, as I think `nil` is an appropriate value to return when a block is empty.
|
priority
|
blocks don t return nil when empty there is no current specification for this i think it s a bug but it may not necessarily be when an empty block is passed to a function nil value is not returned instead i think that the caller itself is returned for example ruby map do end select do end i would expect in the first case and in the second as i think nil is an appropriate value to return when a block is empty
| 1
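The semantics the Goby record above expects from an empty block can be sketched in Python, where a function body with no expressions evaluates to `None` (the analogue of `nil`). This is an illustrative analogue only, not Goby or Ruby code:

```python
def empty_block(x):
    """Analogue of an empty Ruby/Goby block: a body with no
    expressions implicitly returns None (the analogue of nil)."""
    pass

# map with an empty block should yield nil for each element...
mapped = [empty_block(x) for x in [1, 2]]
# ...and select with an empty block should keep nothing, since nil is falsy.
selected = [x for x in [1, 2] if empty_block(x)]

print(mapped)    # [None, None]
print(selected)  # []
```

This matches the reporter's expectation of `[nil, nil]` for `map` and `[]` for `select`, rather than the caller being returned.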
|
243,209
| 7,854,286,868
|
IssuesEvent
|
2018-06-20 20:15:58
|
ByteClubGames/YumiAndTheYokai
|
https://api.github.com/repos/ByteClubGames/YumiAndTheYokai
|
closed
|
Checkpoint / Respawn System & Health
|
HIGH PRIORITY Programming Ready for Review
|
Write a script so that when a checkpoint is reached, the player will respawn at this location should they die or fall
|
1.0
|
Checkpoint / Respawn System & Health - Write a script so that when a checkpoint is reached, the player will respawn at this location should they die or fall
|
priority
|
checkpoint respawn system health write a script so that when a checkpoint is reached the player will respawn at this location should they die or fall
| 1
|
141,702
| 5,440,097,054
|
IssuesEvent
|
2017-03-06 15:04:29
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
opened
|
Public Schedule: Offer search box in top menu
|
enhancement Priority: High
|
The organizer front end offers a search bar for sessions and speakers. The public schedule is missing that. Please offer a search bar in front of the rooms view button (on the left of the button).

|
1.0
|
Public Schedule: Offer search box in top menu - The organizer front end offers a search bar for sessions and speakers. The public schedule is missing that. Please offer a search bar in front of the rooms view button (on the left of the button).

|
priority
|
public schedule offer search box in top menu the organizer front end offers a search bar for sessions and speakers the public schedule is missing that please offer a search bar in front of the rooms view button on the left of the button
| 1
|
607,179
| 18,774,034,507
|
IssuesEvent
|
2021-11-07 10:58:03
|
AY2122S1-CS2103T-F11-2/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-F11-2/tp
|
closed
|
Fix code quality in `EditCommand` and `EditCommandParser`
|
priority.High coding quality
|
- Methods in `EditCommandParser` are too long.
- Methods in `EditCommand` have deep nesting.
|
1.0
|
Fix code quality in `EditCommand` and `EditCommandParser` - - Methods in `EditCommandParser` are too long.
- Methods in `EditCommand` have deep nesting.
|
priority
|
fix code quality in editcommand and editcommandparser methods in editcommandparser are too long methods in editcommand have deep nesting
| 1
|
327,780
| 9,980,890,853
|
IssuesEvent
|
2019-07-10 05:41:02
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
Flake in generic_test.go file
|
kind/bug priority/High
|
[kind/bug]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
- Operating System: Supported platform
- Output of `odo version`: master
## How did you run odo exactly?
```
odo generic when .odoignore file exists
should create and push the contents of a named component excluding the contents in .odoignore file
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:104
Creating a new project: gityabmabc
Running odo with args: [project create gityabmabc -w -v4]
[odo] I0708 14:23:28.680050 11670 preference.go:116] The configFile is /tmp/artifacts/.odo/preference.yaml
[odo] I0708 14:23:28.680137 11670 occlient.go:455] Trying to connect to server api.ci-op-rwd6qbnx-f09f4.origin-ci-int-aws.dev.rhcloud.com:6443
[odo] I0708 14:23:28.698277 11670 occlient.go:462] Server https://api.ci-op-rwd6qbnx-f09f4.origin-ci-int-aws.dev.rhcloud.com:6443 is up
[odo] I0708 14:23:28.783641 11670 occlient.go:385] isLoggedIn err: <nil>
[odo] output: "developer"
[odo] I0708 14:23:28.783680 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: context
[odo] I0708 14:23:28.783691 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: context
[odo] I0708 14:23:28.783748 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: app
[odo] I0708 14:23:28.783756 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: project
[odo] I0708 14:23:28.804113 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: app
[odo] I0708 14:23:28.804167 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: output
[odo] I0708 14:23:28.804174 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: component
[odo] • Waiting for project to come up ...
[odo] ✓ Waiting for project to come up [401ms]
[odo] ✓ Project 'gityabmabc' is ready for use
[odo] I0708 14:23:29.230621 11670 odo.go:70] Could not get the latest release information in time. Never mind, exiting gracefully :)
[odo] ✓ New project created and now using project : gityabmabc
Created dir: /tmp/682270109
Running git with args: [clone https://github.com/openshift/nodejs-ex /tmp/682270109/nodejs-ex]
[git] Cloning into '/tmp/682270109/nodejs-ex'...
Running odo with args: [create nodejs nodejs --project gityabmabc --context /tmp/682270109/nodejs-ex]
[odo] • Validating component ...
[odo] ✓ Validating component [385ms]
[odo] Please use `odo push` command to create the component with source deployed
Running odo with args: [push --context /tmp/682270109/nodejs-ex]
[odo] Validation
[odo] • Validating component ...
[odo] ✓ Validating component [419ms]
[odo] • Checking component ...
[odo] ✓ Checking component [23ms]
[odo]
[odo] Configuration changes
[odo] • Creating component ...
[odo] ✓ Initializing component
[odo] ✓ Creating component [254ms]
[odo] • Applying configuration ...
[odo] ✓ Applying configuration [15237ns]
[odo]
[odo] Pushing to component nodejs of type local
[odo] • Waiting for component to start ...
[odo] ✓ Waiting for component to start [2m]
[odo] • Copying files to component ...
[odo] ✓ Copying files to component [6s]
[odo] • Building component ...
[odo] ✓ Building component [10s]
[odo] ✓ Changes successfully pushed to component
Running oc with args: [get pods --namespace gityabmabc]
[oc] NAME READY STATUS RESTARTS AGE
[oc] nodejs-app-1-deploy 1/1 Running 0 2m3s
[oc] nodejs-app-1-m4hc6 1/1 Running 0 113s
Running oc with args: [exec nodejs-app-1-deploy --namespace gityabmabc -- ls -lai /opt/app-root/src]
[oc] error: unable to upgrade connection: container not found ("deployment")
Deleting project: gityabmabc
Running odo with args: [project delete gityabmabc -f]
[odo] This project contains the following applications, which will be deleted
[odo] Application app
[odo] This application has following components that will be deleted
[odo] component named nodejs
[odo] No services / could not get services
[odo] • Deleting project gityabmabc ...
[odo] ✓ Deleting project gityabmabc [6s]
[odo] Deleted project : gityabmabc
• Failure [134.577 seconds]
odo generic
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:16
when .odoignore file exists
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:92
should create and push the contents of a named component excluding the contents in .odoignore file [It]
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:104
No future change is possible. Bailing out early after 0.405s.
Expected
<int>: 1
to match exit code:
<int>: 0
/go/src/github.com/openshift/odo/tests/helper/helper_run.go:29
```
## Actual behavior
Should look into the right pod
## Expected behavior
Fails
## Any logs, error output, etc?
|
1.0
|
Flake in generic_test.go file - [kind/bug]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
- Operating System: Supported platform
- Output of `odo version`: master
## How did you run odo exactly?
```
odo generic when .odoignore file exists
should create and push the contents of a named component excluding the contents in .odoignore file
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:104
Creating a new project: gityabmabc
Running odo with args: [project create gityabmabc -w -v4]
[odo] I0708 14:23:28.680050 11670 preference.go:116] The configFile is /tmp/artifacts/.odo/preference.yaml
[odo] I0708 14:23:28.680137 11670 occlient.go:455] Trying to connect to server api.ci-op-rwd6qbnx-f09f4.origin-ci-int-aws.dev.rhcloud.com:6443
[odo] I0708 14:23:28.698277 11670 occlient.go:462] Server https://api.ci-op-rwd6qbnx-f09f4.origin-ci-int-aws.dev.rhcloud.com:6443 is up
[odo] I0708 14:23:28.783641 11670 occlient.go:385] isLoggedIn err: <nil>
[odo] output: "developer"
[odo] I0708 14:23:28.783680 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: context
[odo] I0708 14:23:28.783691 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: context
[odo] I0708 14:23:28.783748 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: app
[odo] I0708 14:23:28.783756 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: project
[odo] I0708 14:23:28.804113 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: app
[odo] I0708 14:23:28.804167 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: output
[odo] I0708 14:23:28.804174 11670 context.go:355] Ignoring error as it usually means flag wasn't set: flag accessed but not defined: component
[odo] • Waiting for project to come up ...
[odo] ✓ Waiting for project to come up [401ms]
[odo] ✓ Project 'gityabmabc' is ready for use
[odo] I0708 14:23:29.230621 11670 odo.go:70] Could not get the latest release information in time. Never mind, exiting gracefully :)
[odo] ✓ New project created and now using project : gityabmabc
Created dir: /tmp/682270109
Running git with args: [clone https://github.com/openshift/nodejs-ex /tmp/682270109/nodejs-ex]
[git] Cloning into '/tmp/682270109/nodejs-ex'...
Running odo with args: [create nodejs nodejs --project gityabmabc --context /tmp/682270109/nodejs-ex]
[odo] • Validating component ...
[odo] ✓ Validating component [385ms]
[odo] Please use `odo push` command to create the component with source deployed
Running odo with args: [push --context /tmp/682270109/nodejs-ex]
[odo] Validation
[odo] • Validating component ...
[odo] ✓ Validating component [419ms]
[odo] • Checking component ...
[odo] ✓ Checking component [23ms]
[odo]
[odo] Configuration changes
[odo] • Creating component ...
[odo] ✓ Initializing component
[odo] ✓ Creating component [254ms]
[odo] • Applying configuration ...
[odo] ✓ Applying configuration [15237ns]
[odo]
[odo] Pushing to component nodejs of type local
[odo] • Waiting for component to start ...
[odo] ✓ Waiting for component to start [2m]
[odo] • Copying files to component ...
[odo] ✓ Copying files to component [6s]
[odo] • Building component ...
[odo] ✓ Building component [10s]
[odo] ✓ Changes successfully pushed to component
Running oc with args: [get pods --namespace gityabmabc]
[oc] NAME READY STATUS RESTARTS AGE
[oc] nodejs-app-1-deploy 1/1 Running 0 2m3s
[oc] nodejs-app-1-m4hc6 1/1 Running 0 113s
Running oc with args: [exec nodejs-app-1-deploy --namespace gityabmabc -- ls -lai /opt/app-root/src]
[oc] error: unable to upgrade connection: container not found ("deployment")
Deleting project: gityabmabc
Running odo with args: [project delete gityabmabc -f]
[odo] This project contains the following applications, which will be deleted
[odo] Application app
[odo] This application has following components that will be deleted
[odo] component named nodejs
[odo] No services / could not get services
[odo] • Deleting project gityabmabc ...
[odo] ✓ Deleting project gityabmabc [6s]
[odo] Deleted project : gityabmabc
• Failure [134.577 seconds]
odo generic
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:16
when .odoignore file exists
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:92
should create and push the contents of a named component excluding the contents in .odoignore file [It]
/go/src/github.com/openshift/odo/tests/integration/generic_test.go:104
No future change is possible. Bailing out early after 0.405s.
Expected
<int>: 1
to match exit code:
<int>: 0
/go/src/github.com/openshift/odo/tests/helper/helper_run.go:29
```
## Actual behavior
Should look into the right pod
## Expected behavior
Fails
## Any logs, error output, etc?
|
priority
|
flake in generic test go file welcome we kindly ask you to fill out the issue template below use the chat and talk to us if you have a question rather than a bug or feature request the chat room is at thanks for understanding and for contributing to the project what versions of software are you using operating system supported platform output of odo version master how did you run odo exactly odo generic when odoignore file exists should create and push the contents of a named component excluding the contents in odoignore file go src github com openshift odo tests integration generic test go creating a new project gityabmabc running odo with args preference go the configfile is tmp artifacts odo preference yaml occlient go trying to connect to server api ci op origin ci int aws dev rhcloud com occlient go server is up occlient go isloggedin err output developer context go ignoring error as it usually means flag wasn t set flag accessed but not defined context context go ignoring error as it usually means flag wasn t set flag accessed but not defined context context go ignoring error as it usually means flag wasn t set flag accessed but not defined app context go ignoring error as it usually means flag wasn t set flag accessed but not defined project context go ignoring error as it usually means flag wasn t set flag accessed but not defined app context go ignoring error as it usually means flag wasn t set flag accessed but not defined output context go ignoring error as it usually means flag wasn t set flag accessed but not defined component • waiting for project to come up ✓ waiting for project to come up ✓ project gityabmabc is ready for use odo go could not get the latest release information in time never mind exiting gracefully ✓ new project created and now using project gityabmabc created dir tmp running git with args cloning into tmp nodejs ex running odo with args • validating component ✓ validating component please use odo push command to create the component 
with source deployed running odo with args validation • validating component ✓ validating component • checking component ✓ checking component configuration changes • creating component ✓ initializing component ✓ creating component • applying configuration ✓ applying configuration pushing to component nodejs of type local • waiting for component to start ✓ waiting for component to start • copying files to component ✓ copying files to component • building component ✓ building component ✓ changes successfully pushed to component running oc with args name ready status restarts age nodejs app deploy running nodejs app running running oc with args error unable to upgrade connection container not found deployment deleting project gityabmabc running odo with args this project contains the following applications which will be deleted application app this application has following components that will be deleted component named nodejs no services could not get services • deleting project gityabmabc ✓ deleting project gityabmabc deleted project gityabmabc • failure odo generic go src github com openshift odo tests integration generic test go when odoignore file exists go src github com openshift odo tests integration generic test go should create and push the contents of a named component excluding the contents in odoignore file go src github com openshift odo tests integration generic test go no future change is possible bailing out early after expected to match exit code go src github com openshift odo tests helper helper run go actual behavior should look into the right pod expected behavior fails any logs error output etc
| 1
|
534,241
| 15,612,906,853
|
IssuesEvent
|
2021-03-19 15:50:07
|
lilygdu/TH-starter
|
https://api.github.com/repos/lilygdu/TH-starter
|
closed
|
Create view events database table
|
high priority
|
VIEW EVENT TABLE | | | | |
-- | -- | -- | -- | -- | --
id | page_visit_id | tracking_id | viewed_at | view_x | view_y
primary key | foreign key | semantic unique id given to specific items | timestamp of view | % of x | % of y
- [ ] **id**: PRIMARY KEY, NOT NULL
- [ ] **page_visit_id**: FOREIGN KEY, NOT NULL
- [ ] **tracking_id**: UUID, NOT NULL (should be semantic as possible)
- [ ] **viewed_at**: timestamp, default local timezone, NOT NULL
- [ ] **view_x**: NOT NULL (% of screen visible)
- [ ] **view_y**: NOT NULL (% of screen visible)
GIVEN a user is on the page
WHEN an element is visible on the screen
THEN track / log the time and location of the elements by their element_ids
Note:
Data is not updated when the user scrolls UNLESS the user scrolls and shows a new element that was not previously recorded in the initial page load
Only add new row for viewing of elements that possess data attribute tracking_id (refer to #149 )
|
1.0
|
Create view events database table -
VIEW EVENT TABLE | | | | |
-- | -- | -- | -- | -- | --
id | page_visit_id | tracking_id | viewed_at | view_x | view_y
primary key | foreign key | semantic unique id given to specific items | timestamp of view | % of x | % of y
- [ ] **id**: PRIMARY KEY, NOT NULL
- [ ] **page_visit_id**: FOREIGN KEY, NOT NULL
- [ ] **tracking_id**: UUID, NOT NULL (should be semantic as possible)
- [ ] **viewed_at**: timestamp, default local timezone, NOT NULL
- [ ] **view_x**: NOT NULL (% of screen visible)
- [ ] **view_y**: NOT NULL (% of screen visible)
GIVEN a user is on the page
WHEN an element is visible on the screen
THEN track / log the time and location of the elements by their element_ids
Note:
Data is not updated when the user scrolls UNLESS the user scrolls and shows a new element that was not previously recorded in the initial page load
Only add new row for viewing of elements that possess data attribute tracking_id (refer to #149 )
|
priority
|
create view events database table view event table id page visit id tracking id viewed at view x view y primary key foreign key semantic unique id given to specific items timestamp of view of x of y id primary key not null page visit id foreign key not null tracking id uuid not null should be semantic as possible viewed at timestamp default local timezone not null view x not null of screen visible view y not null of screen visible given a user is on the page when an element is visible on the screen then track log the time and location of the elements by their element ids note data is not updated when the user scrolls unless the user scrolls and shows a new element that was not previously recorded in the initial page load only add new row for viewing of elements that possess data attribute tracking id refer to
| 1
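The view-events schema in the record above can be sketched with an in-memory SQLite table; the concrete column types, the `page_visits` table name in the foreign key, and SQLite itself are assumptions for illustration (the original issue does not specify a database engine):

```python
import sqlite3

# Hypothetical sketch of the view_events table described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE view_events (
        id            INTEGER PRIMARY KEY NOT NULL,
        page_visit_id INTEGER NOT NULL REFERENCES page_visits(id),
        tracking_id   TEXT    NOT NULL,  -- semantic id on the tracked element
        viewed_at     TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        view_x        REAL    NOT NULL,  -- % of screen visible on x
        view_y        REAL    NOT NULL   -- % of screen visible on y
    )
""")
conn.execute(
    "INSERT INTO view_events (page_visit_id, tracking_id, view_x, view_y) "
    "VALUES (?, ?, ?, ?)",
    (1, "hero-banner", 100.0, 42.5),
)
row = conn.execute("SELECT tracking_id, view_x FROM view_events").fetchone()
print(row)  # ('hero-banner', 100.0)
```

Per the issue's note, the application layer would insert a new row only for elements carrying a `tracking_id` data attribute, and only when a not-yet-recorded element scrolls into view.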
|
32,666
| 2,757,469,282
|
IssuesEvent
|
2015-04-27 15:03:48
|
ExactTarget/fuelux-mctheme
|
https://api.github.com/repos/ExactTarget/fuelux-mctheme
|
closed
|
Include all spritesheet.png icons in SVG format
|
High Priority icons
|
Error:

This was previously in the theme and has been removed with the creation of the SVG icon set.
The last two on the right need to be added to the SVG icon set.

|
1.0
|
Include all spritesheet.png icons in SVG format - Error:

This was previously in the theme and has been removed with the creation of the SVG icon set.
The last two on the right need to be added to the SVG icon set.

|
priority
|
include all spritesheet png icons in svg format error this was previously in the theme and has been removed with the creation of the svg icon set the last two on the right need to be added to the svg icon set
| 1
|
133,735
| 5,207,305,524
|
IssuesEvent
|
2017-01-24 23:05:16
|
ecohealthalliance/eidith
|
https://api.github.com/repos/ecohealthalliance/eidith
|
closed
|
Error in ed_tests_report()
|
bug high-priority
|
```
> ed_tests_report()
Error in parse(text = stri_extract_last_regex(deparse(new_expr), "(?<=%in%\\s).*$")) :
<text>:3:1: unexpected numeric constant
2: NA
3: NA
```
This is confirmed on @noamross's and Tracie's computers. Not sure why it's not being picked up by the nightly tests. Fix ASAP.
|
1.0
|
Error in ed_tests_report() - ```
> ed_tests_report()
Error in parse(text = stri_extract_last_regex(deparse(new_expr), "(?<=%in%\\s).*$")) :
<text>:3:1: unexpected numeric constant
2: NA
3: NA
```
This is confirmed on @noamross's and Tracie's computers. Not sure why it's not being picked up by the nightly tests. Fix ASAP.
|
priority
|
error in ed tests report ed tests report error in parse text stri extract last regex deparse new expr in s unexpected numeric constant na na this is confirmed on noamross s and tracie s computers not sure why it s not being picked up by the nightly tests fix asap
| 1
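The traceback in the record above comes from a lookbehind pattern, `(?<=%in%\s).*$`, applied to a deparsed expression that turns out to be `NA`. A Python analogue (using `re` in place of R's stringi) shows why the extraction step needs a guard before the result is parsed:

```python
import re

# Python analogue of the stringi pattern from the error above:
# capture everything after "%in% " in a deparsed expression.
pattern = re.compile(r"(?<=%in%\s).*$")

expr = 'species %in% c("dog", "cat")'
match = pattern.search(expr)
print(match.group(0))  # c("dog", "cat")

# When the deparsed expression has no "%in%" (e.g. it is just "NA"),
# the search yields no match -- the analogue of the unparseable
# "NA" lines that crash parse() in the report.
print(pattern.search("NA"))  # None
```

In the R code the equivalent fix would be checking the extraction result before handing it to `parse()`.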
|
Unnamed: 0: 459,453
id: 13,193,373,885
type: IssuesEvent
created_at: 2020-08-13 15:08:02
repo: ChainSafe/gossamer
repo_url: https://api.github.com/repos/ChainSafe/gossamer
action: closed
title: profile node to determine areas of optimization
labels: Priority: 2 - High Type: Maintenance
body:
<!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- the node needs to be optimized
- especially the networking layer since the node can't deal with more than ~5 peers right now
- profile the node (focus on network) and determine areas of high memory usage
- needs profiling as both standalone and when peer count increases
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [x] I am planning to submit a pull request to fix this issue myself
index: 1.0
text_combine: profile node to determine areas of optimization - <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- the node needs to be optimized
- especially the networking layer since the node can't deal with more than ~5 peers right now
- profile the node (focus on network) and determine areas of high memory usage
- needs profiling as both standalone and when peer count increases
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [x] I am planning to submit a pull request to fix this issue myself
label: priority
text: profile node to determine areas of optimization please read carefully expected behavior if you re describing a bug tell us what should happen if you re suggesting a change improvement tell us how it should work the node needs to be optimized especially the networking layer since the node can t deal with more than peers right now profile the node focus on network and determine areas of high memory usage needs profiling as both standalone and when peer count increases checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read and i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself
binary_label: 1

Unnamed: 0: 236,389
id: 7,749,000,775
type: IssuesEvent
created_at: 2018-05-30 10:00:24
repo: Gloirin/m2gTest
repo_url: https://api.github.com/repos/Gloirin/m2gTest
action: closed
title: 0003138: Right-click menu on account offers unusual actions first
labels: Felamimail bug high priority
body:
**Reported by robert.lischke on 19 Oct 2010 15:23**
**Version:** git master
The context menu, when right-clicking an account offers potentially harmful actions first:
Edit Account
Delete Account
Add Folder
Update Folder List
Set Vacation Message
Set Filter Rules
Changing this order to
Add Folder
Update Folder List
Set Vacation Message
Set Filter Rules
Edit Account
Delete Account
would prevent users from accidentally removing or editing their account details.
**Steps to reproduce:** open Mail, right-click on any account, see the context menu
index: 1.0
text_combine: 0003138: Right-click menu on account offers unusual actions first - **Reported by robert.lischke on 19 Oct 2010 15:23**
**Version:** git master
The context menu, when right-clicking an account offers potentially harmful actions first:
Edit Account
Delete Account
Add Folder
Update Folder List
Set Vacation Message
Set Filter Rules
Changing this order to
Add Folder
Update Folder List
Set Vacation Message
Set Filter Rules
Edit Account
Delete Account
would prevent users from accidentally removing or editing their account details.
**Steps to reproduce:** open Mail, right-click on any account, see the context menu
label: priority
text: right click menu on account offers unusual actions first reported by robert lischke on oct version git master the context menu when right clicking an account offers potentially harmful actions first edit account delete account add folder update folder list set vacation message set filter rules changing this order to add folder update folder list set vacation message set filter rules edit account delete account would prevent users from accidentally removing or editing their account details steps to reproduce open mail right click on any account see the context menu
binary_label: 1

Unnamed: 0: 72,834
id: 3,391,981,856
type: IssuesEvent
created_at: 2015-11-30 17:37:04
repo: creativedisturbance/podcasts
repo_url: https://api.github.com/repos/creativedisturbance/podcasts
action: closed
title: urgent re art and climate change podcast
labels: High Priority
body:
corey
PLEASE reply to this email and copy me
roger
Hello Roger and Corey,
I just got a link to an exhibition I am in as part of COP21.
Here it is:
http://www.artcop21.com/events/tipping-points-artists-address-the-climate-crises/
This made me wonder if the podcast that Miriam Seidel and I did has been posted yet??
Thank you.
Best,
@porteriffic
Diane Burko
310 South Juniper Street
Philadelphia, PA 1017
cell: 215-880-8466 home: 215-546-8181
www.dianeburko.com
www.dianeburkophotography.com
index: 1.0
text_combine: urgent re art and climate change podcast - corey
PLEASE reply to this email and copy me
roger
Hello Roger and Corey,
I just got a link to an exhibition I am in as part of COP21.
Here it is:
http://www.artcop21.com/events/tipping-points-artists-address-the-climate-crises/
This made me wonder if the podcast that Miriam Seidel and I did has been posted yet??
Thank you.
Best,
@porteriffic
Diane Burko
310 South Juniper Street
Philadelphia, PA 1017
cell: 215-880-8466 home: 215-546-8181
www.dianeburko.com
www.dianeburkophotography.com
label: priority
text: urgent re art and climate change podcast corey please reply to this email and copy me roger hello roger and corey i just got a link to an exhibition i am in as part of here it is this made me wonder if the podcast that miriam seidel and i did has been posted yet thank you best porteriffic diane burko south juniper street philadelphia pa cell
binary_label: 1

Unnamed: 0: 575,701
id: 17,047,375,488
type: IssuesEvent
created_at: 2021-07-06 02:25:52
repo: TeamB-um/B-umiOS
repo_url: https://api.github.com/repos/TeamB-um/B-umiOS
action: opened
title: [feat] Waste-sorting layout + add elements
labels: 0️⃣ priority: high 진석 🌿 feature 👀 view
body:
## 💡 issue
We plan to set up the layout for the waste-sorting feature.
We plan to add the related elements.
## 📝 todo
- [ ] Set up the layout!
- [ ] Add the elements!
index: 1.0
text_combine: [feat] Waste-sorting layout + add elements - ## 💡 issue
We plan to set up the layout for the waste-sorting feature.
We plan to add the related elements.
## 📝 todo
- [ ] Set up the layout!
- [ ] Add the elements!
label: priority
text: waste sorting layout add elements 💡 issue we plan to set up the layout for the waste sorting feature we plan to add the related elements 📝 todo set up the layout add the elements
binary_label: 1

Unnamed: 0: 355,781
id: 10,584,710,108
type: IssuesEvent
created_at: 2019-10-08 15:57:12
repo: AY1920S1-CS2113T-W12-4/main
repo_url: https://api.github.com/repos/AY1920S1-CS2113T-W12-4/main
action: opened
title: As a user, I can see live updates when i add a new task
labels: priority.High type.Story
body: so that I can see the tasks I have and compare on easily.
index: 1.0
text_combine: As a user, I can see live updates when i add a new task - so that I can see the tasks I have and compare on easily.
label: priority
text: as a user i can see live updates when i add a new task so that i can see the tasks i have and compare on easily
binary_label: 1