Dataset column schema:

| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 distinct values |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 distinct values |
| text_combine | string | length 96 to 261k |
| label | string | 2 distinct values |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 or 1 |
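Rows with this schema can be explored with a few lines of pandas. A minimal, hedged sketch (the two sample rows are copied from records in this dump; no file path is assumed, so the frame is built inline):

```python
import pandas as pd

# Two rows copied from the records below; only a subset of columns is shown.
rows = [
    {"id": 3278176728, "type": "IssuesEvent", "created_at": "2015-10-27 07:38:40",
     "repo": "nim-lang/Nim", "action": "closed", "label": "priority", "binary_label": 1},
    {"id": 6157798270, "type": "IssuesEvent", "created_at": "2017-06-28 19:46:38",
     "repo": "igvteam/juicebox.js", "action": "opened", "label": "priority", "binary_label": 1},
]
df = pd.DataFrame(rows)

# created_at is stored as a fixed-width string; parse it to timestamps.
df["created_at"] = pd.to_datetime(df["created_at"])

# Typical first checks: action distribution and the binary label balance.
print(df["action"].value_counts().to_dict())
print(df["binary_label"].mean())
```

All records shown in this excerpt carry `binary_label = 1`, so the mean here is simply 1.0.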

**Row 67,744 · id 3,278,176,728 · IssuesEvent · 2015-10-27 07:38:40**
repo: nim-lang/Nim (https://api.github.com/repos/nim-lang/Nim)
action: closed · labels: High Priority Regression
title: Bug with generic proc returning a {.global.} `var seq[T]`
body:
This bug was introduced in a commit sometime over the last 1-2 weeks:
```nim
proc problem[T]: var seq[T] =
## Problem! Bug with generics makes every call to this proc generate
## a new seq[T] instead of retrieving the `items {.global.}` variable.
var items {.global.}: seq[T]
return items
proc workaround[T]: ptr seq[T] =
## Workaround! By returning by `ptr` instead of `var` we can get access to
## the `items` variable, but that means we have to explicitly deref at callsite.
var items {.global.}: seq[T]
return addr items
proc proof[T]: var seq[int] =
## Proof. This proc correctly retrieves the `items` variable. Notice the only thing
## that's changed from `foo` is that it returns `seq[int]` instead of `seq[T]`.
var items {.global.}: seq[int]
return items
problem[int]() = @[1, 2, 3]
problem[float]() = @[4.0, 5.0, 6.0]
workaround[int]()[] = @[1, 2, 3]
workaround[float]()[] = @[4.0, 5.0, 6.0]
proof[int]() = @[1, 2, 3]
proof[float]() = @[4, 5, 6]
echo problem[int]() # prints 'nil' - BUG!
echo problem[float]() # prints 'nil' - BUG!
echo workaround[int]()[] # prints '@[1, 2, 3]'
echo workaround[float]()[] # prints '@[4.0, 5.0, 6.0]'
echo proof[int]() # prints '@[1, 2, 3]'
echo proof[float]() # prints '@[4, 5, 6]'
```
A core module of mine relies heavily on these features, and I was in the middle of changing something around when this bug was committed and I updated nim (a poor decision on my part), so it took a while to understand that it was from a bug introduced by the git-pull instead of by my changes. Otherwise I'd have a better idea about what exact commit introduced this bug, sorry.
index: 1.0 · label: priority · binary_label: 1

**Row 162,676 · id 6,157,798,270 · IssuesEvent · 2017-06-28 19:46:38**
repo: igvteam/juicebox.js (https://api.github.com/repos/igvteam/juicebox.js)
action: opened · labels: bug high priority
title: Shift - crosshairs not working
body:
Shift - crosshairs are not working in the latest code. Pressing the shift key does not cause them to appear. By doing some clicking pressing and moving the mouse in and out of the element I was able to get them to appear once, but they were frozen (not tracking the mouse).
index: 1.0 · label: priority · binary_label: 1

**Row 95,078 · id 3,933,813,707 · IssuesEvent · 2016-04-25 20:26:00**
repo: washingtonstateuniversity/WSUWP-Content-Syndicate (https://api.github.com/repos/washingtonstateuniversity/WSUWP-Content-Syndicate)
action: opened · labels: enhancement priority:high
title: Add actions throughout to better support logging
body:
We have custom logging in several places, but that shouldn't be part of the core plugin once we make it more publicly available. Instead, we can create some kind of "failure" method and use an action in there to help logging.
index: 1.0 · label: priority · binary_label: 1

**Row 643,315 · id 20,948,287,462 · IssuesEvent · 2022-03-26 07:23:42**
repo: AY2122S2-CS2113T-T09-1/tp (https://api.github.com/repos/AY2122S2-CS2113T-T09-1/tp)
action: closed · labels: priority.High type.Chore
title: Show daily or weekly schedule (Parsing tasks for printing schedule)
body:
Implement logic handling for filtering tasks by date
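The issue above asks only for date-based filtering logic. As a hedged illustration (the `Task` shape and its `due` field are hypothetical, not taken from the tp codebase), such filtering might look like:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Task:
    description: str
    due: date  # hypothetical field name

def daily_schedule(tasks, day):
    """Tasks due exactly on `day`."""
    return [t for t in tasks if t.due == day]

def weekly_schedule(tasks, week_start):
    """Tasks due in the 7-day window starting at `week_start`."""
    return [t for t in tasks if week_start <= t.due < week_start + timedelta(days=7)]

tasks = [
    Task("submit report", date(2022, 3, 21)),
    Task("team meeting", date(2022, 3, 23)),
    Task("next milestone", date(2022, 3, 30)),
]
print([t.description for t in daily_schedule(tasks, date(2022, 3, 23))])
print([t.description for t in weekly_schedule(tasks, date(2022, 3, 21))])
```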
index: 1.0 · label: priority · binary_label: 1

**Row 119,971 · id 4,778,436,640 · IssuesEvent · 2016-10-27 19:17:34**
repo: RepreZen/SwagEdit (https://api.github.com/repos/RepreZen/SwagEdit)
action: closed · labels: High Priority
title: Deprecated simple reference should display with specialized warning message
body:
Reported by @tedepstein in [ZEN-3016](https://modelsolv.atlassian.net/browse/ZEN-3016):
> Discovered in [support ticket #176](https://support.reprezen.com/helpdesk/tickets/176):
We used to recognize these deprecated simple references, and provide a friendlier warning message. Now it's just giving an "invalid reference" warning, which doesn't tell the whole story.
This is really a SwagEdit regression, apparently....
index: 1.0 · label: priority · binary_label: 1

**Row 663,798 · id 22,207,419,242 · IssuesEvent · 2022-06-07 15:57:37**
repo: voxel51/fiftyone (https://api.github.com/repos/voxel51/fiftyone)
action: opened · labels: bug app high priority
title: [BUG] App cannot handle schema changes to an in-use dataset
body:
On `fiftyone>=0.16.0`, the App cannot handle schema changes to an in-use dataset:
```py
import fiftyone as fo
sample = fo.Sample(filepath="/Users/Brian/Desktop/test.png")
dataset = fo.Dataset()
dataset.add_sample(sample)
session = fo.launch_app(dataset)
sample["value"] = 1
sample.save()
# Raises error shown below
session.refresh()
# Does not escape the error page
session.dataset = None
# Doesn't help
session.refresh()
# Now press `x` in error modal to revert to empty dataset state in App
# Error below is still raised
session.dataset = dataset
```
```
Error: invalid path value
at http://localhost:5151/assets/index.1baa3d4c.js:191:11855
at a (http://localhost:5151/assets/vendor.2936b611.js:52:494206)
at _ (http://localhost:5151/assets/vendor.2936b611.js:52:483152)
at b (http://localhost:5151/assets/vendor.2936b611.js:52:484363)
at http://localhost:5151/assets/vendor.2936b611.js:52:485657
at http://localhost:5151/assets/vendor.2936b611.js:52:485629
at Object.Q [as get] (http://localhost:5151/assets/vendor.2936b611.js:52:485651)
at getNodeLoadable (http://localhost:5151/assets/vendor.2936b611.js:52:442147)
at _ (http://localhost:5151/assets/vendor.2936b611.js:52:482529)
at http://localhost:5151/assets/index.1baa3d4c.js:355:3674
```
index: 1.0 · label: priority · binary_label: 1

**Row 172,910 · id 6,517,770,124 · IssuesEvent · 2017-08-28 03:08:08**
repo: localstack/localstack (https://api.github.com/repos/localstack/localstack)
action: closed · labels: enhancement priority-high
title: Localstack Lambda service does not accept python3.6 f-string syntax
body:
I found Localstack Lambda service does not accept python3.6 f-string syntax.
I have created a simple Lambda python below.
```
import json
print('Loading function')
def lambda_handler(event, context):
#print("Received event: " + json.dumps(event, indent=2))
event2 = {"Records": [3,4,5]}
print(f'Successfully processed {len(event2["Records"])} records.')
return event['key1']
```
An error occurred (Exception) when calling the CreateFunction operation: Unknown error: ('Unable to get handler function from lambda code.', SyntaxError('invalid syntax', ('/tmp/localstack/lambda_script_l_3c0ba2ea.py', 9, 69, ' print(f\'Successfully processed {len(event2["Records"])} records.\')\n')))
index: 1.0 · label: priority · binary_label: 1

**Row 380,023 · id 11,253,345,107 · IssuesEvent · 2020-01-11 15:40:33**
repo: William-Lake/NLP-API (https://api.github.com/repos/William-Lake/NLP-API)
action: closed · labels: High Priority Low Urgency
title: Provide a history of the path actions in the response.
body:
E.g. if the entire requests fails or only one endpoint, return the appropriate feedback to the user.
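One way to return that per-endpoint feedback is a response payload that carries a history of action outcomes. This is a hypothetical sketch; the field names and endpoints are illustrative, not from the NLP-API project:

```python
import json

def build_response(action_results):
    """action_results: list of (endpoint, ok, detail) tuples.

    Distinguishes a total failure from a single failed endpoint by
    reporting both an overall flag and a per-action history.
    """
    history = [
        {"endpoint": ep, "status": "ok" if ok else "failed", "detail": detail}
        for ep, ok, detail in action_results
    ]
    return {
        "success": all(h["status"] == "ok" for h in history),
        "history": history,
    }

resp = build_response([
    ("/tokenize", True, "120 tokens"),      # hypothetical endpoint
    ("/ner", False, "model not loaded"),    # hypothetical endpoint
])
print(json.dumps(resp, indent=2))
```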
index: 1.0 · label: priority · binary_label: 1

**Row 636,135 · id 20,592,831,106 · IssuesEvent · 2022-03-05 03:20:21**
repo: datajoint/element-calcium-imaging (https://api.github.com/repos/datajoint/element-calcium-imaging)
action: closed · labels: high priority
title: extra use of output_dir in the Processing
body:
`output_dir` should not be redefined within the trigger condition
index: 1.0 · label: priority · binary_label: 1

**Row 113,616 · id 4,565,737,632 · IssuesEvent · 2016-09-15 02:15:48**
repo: Aqueti/atl (https://api.github.com/repos/Aqueti/atl)
action: opened · labels: High Priority
title: Memory leak in BaseKey
body:
BaseKey returns a uint8_t* from its hash function. If this memory is not freed by the calling function, it will be leaked.
I see two ways to solve this: either be very explicit that that memory needs to be freed in documentation, or pass in a buffer for BaseKey to write to.
I'm not sure which I prefer - we should talk about this tomorrow.
index: 1.0 · label: priority · binary_label: 1

**Row 131,533 · id 5,154,640,095 · IssuesEvent · 2017-01-15 01:29:00**
repo: rm-code/On-The-Roadside (https://api.github.com/repos/rm-code/On-The-Roadside)
action: closed · labels: Priority: High Status: Accepted Type: Feature
title: Health system
body:
Some more brain storming stuff ...

The bodies of all creatures in the game will be modelled as graph like the on above. These graphs will feature entry nodes, bones and organs.
## Nodes
- Entry nodes (pink) are used to represent the _outside_ of the body. These are points which can be targeted and hit directly. E.g.: The player attacks an arm or the head of a creature.
- The damage of the attack would then be conferred to the inside of the body where it can hit bones (grey) and / or organs (green).
## Damage propagation between nodes
Damage to a creature's body will be affected by the [Damage Type](https://github.com/rm-code/On-The-Roadside/issues/74) of the used weapon / ammunition.
In reality a swing with a baseball bat could hit both the arm and the chest of the defender. Therefore it would make sense to allow damage propagation between entry nodes for certain types of damage:
- Bludgeoning damage would have a high chance to damage adjacent nodes and bones, but not internal organs.
- Piercing damage would directly travel from the entry node to an end node and have a high chance to hit the inner nodes.
- Slashing damage would be able to propagate its damage to an adjacent node, but have a lower chance to hit vital (inner) organs.
## Effects based on nodes
Nodes could be marked with certain effects which would then be removed upon their destruction.
index: 1.0 · label: priority · binary_label: 1

**Row 379,565 · id 11,223,426,156 · IssuesEvent · 2020-01-07 22:39:58**
repo: Sp2000/colplus-repo (https://api.github.com/repos/Sp2000/colplus-repo)
action: closed · labels: data orphan high priority
title: References issues with several GSDs
body:
Some of the older GSDs were still using the legacy reference_id system and/or were missing database_ids in the reference table of the **2019 annual edition** of Assembly_Global. The GSDs include:
id | database | missing_database_id | missing_reference_code | fixed
-- | -- | -- | -- | --
9 | ETI WBD (Euphausiacea) | TRUE | TRUE |
19 | MOST | TRUE | TRUE | FIXED
22 | Parhost | TRUE | TRUE |
26 | ScaleNet | TRUE | TRUE | FIXED
30 | TicksBase | TRUE | TRUE |
34 | UCD | | TRUE |
42 | ChiloBase | | TRUE |
66 | Droseraceae Database | | TRUE |
I'm planning on manually fixing the NameReferences and References files for these GSDs. [Databases 19 and 26 were previously fixed](https://github.com/Sp2000/colplus-repo/issues/73).
- [x] I need to look at whether database_id is NULL in scientific_name_references as well.
Ref: https://github.com/Sp2000/colplus-repo/issues/89
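The NULL check the author plans ("whether database_id is NULL") can be sketched with pandas. This is a hedged illustration; the frame and column names are loosely modeled on the issue's table, not the actual CoL+ schema:

```python
import pandas as pd

# Hypothetical slice of a reference table; None stands in for NULL.
references = pd.DataFrame({
    "database_id": [9, None, 22, None, 26],
    "reference_code": ["a", "b", None, "d", "e"],
})

# Rows whose database_id is missing and thus need manual fixing.
missing_db_id = references["database_id"].isna()
print(f"{missing_db_id.sum()} rows missing database_id")
```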
index: 1.0 · label: priority · binary_label: 1

**Row 89,099 · id 3,789,737,198 · IssuesEvent · 2016-03-21 18:57:16**
repo: CoderDojo/community-platform (https://api.github.com/repos/CoderDojo/community-platform)
action: closed · labels: backlog events high priority top priority
title: Invite all feature for events
body:
We want to add a feature where, in the events section, you can invite all, and it will send an email to all members of a Dojo notifying them that there is a new event live on the Dojo.
Wireframe to follow.
index: 2.0 · label: priority · binary_label: 1

**Row 242,684 · id 7,845,411,733 · IssuesEvent · 2018-06-19 12:53:11**
repo: cms-gem-daq-project/gem-plotting-tools (https://api.github.com/repos/cms-gem-daq-project/gem-plotting-tools)
action: closed · labels: Priority: High Status: Help Wanted Type: Enhancement
title: Feature Request: Identifying Original maskReason & Date Range of Burned Input
body:
<!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
We have 30720 channels in CMS presently. And we need some automated way for tracking when changes occur. Ideally this should be via the DB but this is not up and running (yet...). But we have an automated procedure to make time series plots from the entire calibration dataset, see #87.
So my proposal is to use these time series plots to identify:
- The original `maskReason` that is applied to a channel,
- The smallest date range for when a channel made the switch from "healthy" to burned, e.g. for [this](https://mattermost.web.cern.ch/cms-gem-daq/pl/ks7fa3yf67nnjejw3uhtk6eije) example that would be `2017.05.10.20.41` to `2017.05.31.09.21`.
### Issue with Channel Mask
Right now when a channel is masked its `maskReason` will be recorded in the output `TTree` of the analysis scan. However once a channel is masked the history will be somewhat "lost." This is because the channel will be masked at the time of acquisition so the s-curve analysis will see that a fit to it fails (because there is no data). The `maskReason` will then be assigned [FitFailed](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/0ce86672ddac3affee27b80517eb34e8cd50b029/anaInfo.py#L51). This is because the scan software does not know the history.
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
To accomplish this a secondary analysis tool should be made, `macros/timeHistoryAnalyzer.py`, which should have a function like:
```
def analyzeTimeSeriesPlot(timeSeriesHisto, obsType, deadChanCutLow=4.14E-02, deadChanCutHigh=1.09E-01):
"""
timeSeriesHisto - TH2 object created by gemPlotter.py when making time series plots
obsType - The observable to be investigate from the set {noise, ..., maskReason, etc...} see for example the observables produced here https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/0ce86672ddac3affee27b80517eb34e8cd50b029/macros/plotTimeSeries.py#L10-L26
deadChanCutLow - lower bound on s-curve width for identifying a dead channel, value in fC
deadChanCutHigh - higher bound on s-curve width for identifying a dead channel, value in fC
"""
# import numpy
# import nesteddict from gem sw
chanArray = nesteddict()
# Load info from the histo
for binY in timeSeriesHisto:
lastGoodScanDate = 2017.01.01
firstDateAfterBurn = None
for bin X in timSeriesHisto:
# get the point value and store it
scanDate = timSeriesHisto.GetBinLabel(binX)
chanStripOrPanPin = binY-1 #bins go from 1 to nbins but channel/strips/panpin go 0 to 127
scanVal = timSeriesHisto.GetBinContent(binX,binY)
# See if channel went from bad to good
if obsType == "noise":
if scanVal > deadChanCutHigh:
lastGoodScanDate = scanDate
if deadChanCutLow <= scanVal and scanVal <= deadChanCutHigh:
if firstDateAfterBurn is not None:
firstDateAfterBurn = scanDate
chanArray[chanStripOrPanPin] = (lastGoodScanDate, firstDateAfterBurn)
# Try to determine original mask reason
if obsType == "maskReason":
# Similar to above example
# Display to user
print "| chan | lastGoodScanDate | firstDateAfterBurn |" # in markdown format
print "| :---: | :------------------: | :-----------------: |"
for chan in range(0,len(chanArray)):
print "| %i | %s | %s |"%(chan, chanArray[chan][0], chanArray[chan][1])
# Similarly display a table which shows the number of channels in the VFAT that have the following maskReasons based on the "original" maskReason:
#NotMasked = 0x0
#HotChannel = 0x01
#FitFailed = 0x02
#DeadChannel = 0x04
#HighNoise = 0x08
#HighEffPed = 0x10
```
The caveat is that this needs to be modified to *not* be sensitive to transient effects; e.g. in [this example](https://mattermost.web.cern.ch/cms-gem-daq/pl/ks7fa3yf67nnjejw3uhtk6eije) two scans failed to complete successfully, so the data is missing for those channels, and this may throw off the algorithm in the "maskReason" case. Testing would be required.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Presently the last good scandate and first date after burned channel need to be determined by hand. Also the `FitFailed` `maskReason` slowly dominates the dataset.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
We need to be able to accurately report our channel status and history to ourselves and CMS
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
|
1.0
|
Feature Request: Identifying Original maskReason & Date Range of Burned Input - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
We have 30720 channels in CMS presently. And we need some automated way for tracking when changes occur. Ideally this should be via the DB but this is not up and running (yet...). But we have an automated procedure to make time series plots from the entire calibration dataset, see #87.
So my proposal is to use these time series plots to identify:
- The original `maskReason` that is applied to a channel,
- The smallest date range for when a channel made the switch from "healthy" to burned, e.g. for [this](https://mattermost.web.cern.ch/cms-gem-daq/pl/ks7fa3yf67nnjejw3uhtk6eije) example that would be `2017.05.10.20.41` to `2017.05.31.09.21`.
### Issue with Channel Mask
Right now when a channel is masked its `maskReason` will be recorded in the output `TTree` of the analysis scan. However, once a channel is masked the history will be somewhat "lost." This is because the channel will be masked at the time of acquisition, so the s-curve analysis will see that a fit to it fails (because there is no data). The `maskReason` will then be assigned [FitFailed](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/0ce86672ddac3affee27b80517eb34e8cd50b029/anaInfo.py#L51). This is because the scan software does not know the history.
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
To accomplish this a secondary analysis tool should be made, `macros/timeHistoryAnalyzer.py`, which should have a function like:
```
def analyzeTimeSeriesPlot(timeSeriesHisto, obsType, deadChanCutLow=4.14E-02, deadChanCutHigh=1.09E-01):
    """
    timeSeriesHisto - TH2 object created by gemPlotter.py when making time series plots
    obsType - the observable to investigate from the set {noise, ..., maskReason, etc.}, see for example the observables produced here https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/0ce86672ddac3affee27b80517eb34e8cd50b029/macros/plotTimeSeries.py#L10-L26
    deadChanCutLow - lower bound on s-curve width for identifying a dead channel, value in fC
    deadChanCutHigh - upper bound on s-curve width for identifying a dead channel, value in fC
    """
    # import nesteddict from the gem software, e.g.:
    # from gempython.utils.nesteddict import nesteddict
    chanArray = nesteddict()
    # Load info from the histo; ROOT bins run from 1 to nbins
    for binY in range(1, timeSeriesHisto.GetNbinsY() + 1):
        lastGoodScanDate = "2017.01.01"
        firstDateAfterBurn = None
        for binX in range(1, timeSeriesHisto.GetNbinsX() + 1):
            # Get the point value and store it
            scanDate = timeSeriesHisto.GetXaxis().GetBinLabel(binX)
            chanStripOrPanPin = binY - 1  # bins go from 1 to nbins but channel/strips/panpin go 0 to 127
            scanVal = timeSeriesHisto.GetBinContent(binX, binY)
            # See if channel went from good to bad
            if obsType == "noise":
                if scanVal > deadChanCutHigh:
                    lastGoodScanDate = scanDate
                if deadChanCutLow <= scanVal <= deadChanCutHigh:
                    if firstDateAfterBurn is None:
                        firstDateAfterBurn = scanDate
                chanArray[chanStripOrPanPin] = (lastGoodScanDate, firstDateAfterBurn)
            # Try to determine original mask reason
            if obsType == "maskReason":
                pass  # similar to the noise case above
    # Display to user, in markdown format
    print "| chan | lastGoodScanDate | firstDateAfterBurn |"
    print "| :---: | :--------------: | :----------------: |"
    for chan in range(0, len(chanArray)):
        print "| %i | %s | %s |" % (chan, chanArray[chan][0], chanArray[chan][1])
    # Similarly display a table which shows the number of channels in the VFAT that have
    # the following maskReasons based on the "original" maskReason:
    #   NotMasked   = 0x00
    #   HotChannel  = 0x01
    #   FitFailed   = 0x02
    #   DeadChannel = 0x04
    #   HighNoise   = 0x08
    #   HighEffPed  = 0x10
```
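One way to make this insensitive to transient effects would be to require several consecutive bad scans before latching the burn date. A minimal, self-contained sketch of that idea (the helper name, streak length, and sample data are illustrative, not from the issue):

```
def firstPersistentBurn(scanVals, scanDates, low, high, nConsec=2):
    """Return the date of the first scan opening a run of at least
    nConsec consecutive scans inside the dead-channel window [low, high]."""
    streak = 0
    for i, scanVal in enumerate(scanVals):
        if low <= scanVal <= high:
            streak += 1
            if streak == nConsec:
                # latch the date of the first scan in the streak
                return scanDates[i - nConsec + 1]
        else:
            streak = 0  # a single good (or missing) scan resets the streak
    return None

dates = ["2017.05.10", "2017.05.20", "2017.05.31", "2017.06.05"]
vals = [0.20, 0.05, 0.06, 0.07]  # first scan healthy, then persistently burned
print(firstPersistentBurn(vals, dates, 4.14E-02, 1.09E-01))  # 2017.05.20
```

A single transient dip into the dead-channel window would not latch a date, while a genuine burn (two or more consecutive bad scans) still would.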
One caveat is that this needs to be modified to *not* be sensitive to transient effects; e.g. in [this example](https://mattermost.web.cern.ch/cms-gem-daq/pl/ks7fa3yf67nnjejw3uhtk6eije) two scans failed to complete successfully, so the data for those channels is missing, and this may throw off the algorithm in the "maskReason" case. Testing would be required
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Presently the last good scandate and first date after burned channel need to be determined by hand. Also the `FitFailed` `maskReason` slowly dominates the dataset.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
We need to be able to accurately report our channel status and history to ourselves and CMS
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
|
priority
|
feature request identifying original maskreason date range of burned input brief summary of issue we have channels in cms presently and we need some automated way for tracking when changes occur ideally this should be via the db but this is not up and running yet but we have an automated procedure to make time series plots from the entire calibration dataset see so my proposal is to use these time series plots to identify the original maskreason that is applied to a channel the smallest date range for when a channel made the switch from healthy to burned e g for example that would be to issue with channel mask right now when a channel is masked it s maskreason will be recorded in the output ttree of the analysis scan however once a channel is masked the history will be somewhat lost this is because the channel will be masked at the time of acquisition so the s curve analysis will see that a fit to it fails because there is no data the maskreason will then be assigned this is because the scan software does not know the history types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior to accomplish this a secondary analysis tool should be made macros timehistoryanalyzer py which should should have a function like def analyzetimeseriesplot timeserieshisto obstype deadchancutlow deadchancuthigh timeserieshisto object created by gemplotter py when making time series plots obstype the observable to be investigate from the set noise maskreason etc see for example the observables produced here deadchancutlow lower bound on s curve width for identifying a dead channel value in fc deadchancuthigh higher bound on s curve width for identifying a dead channel value in fc import numpy import nesteddict from gem sw chanarray nesteddict load info from the histo for biny in timeserieshisto lastgoodscandate firstdateafterburn none for bin x in timserieshisto get the point value and store it scandate 
timserieshisto getbinlabel binx chanstriporpanpin biny bins go from to nbins but channel strips panpin go to scanval timserieshisto getbincontent binx biny see if channel went from bad to good if obstype noise if scanval deadchancuthigh lastgoodscandate scandate if deadchancutlow scanval and scanval deadchancuthigh if firstdateafterburn is not none firstdateafterburn scandate chanarray lastgoodscandate firstdateafterburn try to determine original mask reason if obstype maskreason similar to above example display to user print chan lastgoodscandate firstdateafterburn in markdown format print for chan in range len chanarray print i s s chan chanarray chanarray similarly display a table which shows the number of channels in the vfat that have the following maskreasons based on the original maskreason notmasked hotchannel fitfailed deadchannel highnoise higheffped caveats is that this needs to be modified to not be sensitive to transient effects e g in two scans failed to complete successfully so the data is missing and the channels and this may throw off the algorithm if this were maskreason case testing would be required current behavior presently the last good scandate and first date after burned channel need to be determined by hand also the fitfailed maskreason slowly dominates the dataset context for feature requests we need to be able to accurately report our channel status and history to ourselves and cms
| 1
|
537,062
| 15,722,388,259
|
IssuesEvent
|
2021-03-29 05:38:46
|
wso2/integration-studio
|
https://api.github.com/repos/wso2/integration-studio
|
closed
|
[Tooling] Make the ESB Editor respect workspace / project / file encoding
|
8.0.0 Priority/High
|
**Description:**
Make the ESB Editor component of the WSO2 EI Tooling honor the file encoding specified in the following hierarchy : OS / eclipse / workspace / project / file
<!-- Give a brief description of the issue -->
The ESB Editor doesn't currently respect the workspace / project or file encoding of WSO2 artifacts that can be edited with the component (proxy service, sequences, inbound endpoints, etc.).
The ESB Editor always uses the default OS encoding / locale, which causes problems on platforms that don't use UTF-8 as the default encoding (i.e. Windows). In that case, the content of the file will be saved with the current encoding, which is CP-1252 for Windows with the French language.
When the file is saved and deployed, there will be an error since the content of the file will not be compatible with the UTF-8 encoding.
**Affected Product Version:**
WSO2 EI Tooling 6.1.1+
**OS, DB, other environment details and versions:**
Windows only
**Steps to reproduce:**
- Create a service proxy with the WSO2 EI Tooling on Windows
- Open the proxy definition with the ESB Editor
- Copy and paste the following code in the *source* tab :
```
<?xml version="1.0" encoding="UTF-8"?>
<proxy name="MyService" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
<target>
<inSequence>
<property name="MyProperty" scope="default" type="STRING" value="Ma propriétée"/>
</inSequence>
<outSequence/>
<faultSequence/>
</target>
</proxy>
```
- Deploy the code in WSO2 EI runtime
**Workaround**
As a workaround, it is possible to use the default Eclipse *XML Editor* component or an external tool to paste the XML code in the file.
Another way is to add the -Dfile.encoding=UTF-8 parameter to the *eclipse.ini* startup parameters.
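For illustration, the failure mode can be reproduced outside the Tooling with a few lines of Python (a sketch, not part of the Tooling itself): "é" is the single byte 0xE9 in CP-1252, which is an invalid start of a multi-byte sequence when decoded as UTF-8.

```
text = "Ma propriété"
saved = text.encode("cp1252")  # what the editor writes on a French Windows
try:
    saved.decode("utf-8")      # what the EI runtime expects at deployment
    ok = True
except UnicodeDecodeError as exc:
    ok = False
    print("deployment fails:", exc.reason)
```

This is why the deployed artifact is rejected even though the file looked fine in the editor.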
|
1.0
|
[Tooling] Make the ESB Editor respect workspace / project / file encoding - **Description:**
Make the ESB Editor component of the WSO2 EI Tooling honor the file encoding specified in the following hierarchy : OS / eclipse / workspace / project / file
<!-- Give a brief description of the issue -->
The ESB Editor doesn't currently respect the workspace / project or file encoding of WSO2 artifacts that can be edited with the component (proxy service, sequences, inbound endpoints, etc.).
The ESB Editor always uses the default OS encoding / locale, which causes problems on platforms that don't use UTF-8 as the default encoding (i.e. Windows). In that case, the content of the file will be saved with the current encoding, which is CP-1252 for Windows with the French language.
When the file is saved and deployed, there will be an error since the content of the file will not be compatible with the UTF-8 encoding.
**Affected Product Version:**
WSO2 EI Tooling 6.1.1+
**OS, DB, other environment details and versions:**
Windows only
**Steps to reproduce:**
- Create a service proxy with the WSO2 EI Tooling on Windows
- Open the proxy definition with the ESB Editor
- Copy and paste the following code in the *source* tab :
```
<?xml version="1.0" encoding="UTF-8"?>
<proxy name="MyService" startOnLoad="true" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
<target>
<inSequence>
<property name="MyProperty" scope="default" type="STRING" value="Ma propriétée"/>
</inSequence>
<outSequence/>
<faultSequence/>
</target>
</proxy>
```
- Deploy the code in WSO2 EI runtime
**Workaround**
As a workaround, it is possible to use the default Eclipse *XML Editor* component or an external tool to paste the XML code in the file.
Another way is to add the -Dfile.encoding=UTF-8 parameter to the *eclipse.ini* startup parameters.
|
priority
|
make the esb editor respect workspace project file encoding description make the esb editor component of the ei tooling honor the file encoding specified in the following hierarchy os eclipse workspace project file the esb editor doesn t currently respect the workspace project or file encoding of artifacts that can be edited with the component proxy service sequences inbound endpoints etc the esb editor always use the default os encoding locale which cause problems on platform that doesn t use utf as the default encoding i e windows in that case the content of the file will be saved with the current encoding which is cp for a windows with the french language when the file is saved and deployed there will be an error since the content of the file will not be compatible with the utf encoding affected product version ei tooling os db other environment details and versions windows only steps to reproduce create a service proxy with the ei tooling on windows open the proxy definition with the esb editor copy and paste the following code in the source tab proxy name myservice startonload true transports http https xmlns deploy the code in ei runtime workaround as a workaround it is possible to use the default eclipse xml editor component or an external tool to paste the xml code in the file another way is to add the dfile encoding utf parameter to the eclipse ini startup parameters
| 1
|
64,408
| 3,211,400,273
|
IssuesEvent
|
2015-10-06 10:30:34
|
cs2103aug2015-w11-1j/main
|
https://api.github.com/repos/cs2103aug2015-w11-1j/main
|
closed
|
Parser fine tune
|
priority.high
|
To remove magic numbers, strings, etc. and remove the main method to integrate into logic
|
1.0
|
Parser fine tune - To remove magic numbers, strings, etc. and remove the main method to integrate into logic
|
priority
|
parser fine tune to remove magic number string etc and remove main method to integrate into logic
| 1
|
292,148
| 8,953,678,894
|
IssuesEvent
|
2019-01-25 20:12:37
|
CredentialEngine/CompetencyFrameworks
|
https://api.github.com/repos/CredentialEngine/CompetencyFrameworks
|
opened
|
complexityLevel missing from framework export
|
High Priority bug
|
The beta connecting credentials framework properly shows in the CASS editor:
https://credentialengine.org/publisher/Competencies
It correctly indicates/links to the relevant concepts from the Beta Connecting Credentials levels concept scheme. It does so via the `ceasn:complexityLevel` property - however, this property is missing from the export:
https://cass.credentialengine.org/api/ceasn/ba13a98298ab5fea4188f4bd3dacdc66
In addition, `ceasn:competencyCategory` is empty.
This fix is **urgently** needed for next week's presentation.
|
1.0
|
complexityLevel missing from framework export - The beta connecting credentials framework properly shows in the CASS editor:
https://credentialengine.org/publisher/Competencies
It correctly indicates/links to the relevant concepts from the Beta Connecting Credentials levels concept scheme. It does so via the `ceasn:complexityLevel` property - however, this property is missing from the export:
https://cass.credentialengine.org/api/ceasn/ba13a98298ab5fea4188f4bd3dacdc66
In addition, `ceasn:competencyCategory` is empty.
This fix is **urgently** needed for next week's presentation.
|
priority
|
complexitylevel missing from framework export the beta connecting credentials framework properly shows in the cass editor it correctly indicates links to the relevant concepts from the beta connecting credentials levels concept scheme it does so via the ceasn complexitylevel property however this property is missing from the export in addition ceasn competencycategory is empty this fix is urgently needed for next week s presentation
| 1
|
582,460
| 17,361,850,914
|
IssuesEvent
|
2021-07-29 22:00:34
|
jessebw/activeVote
|
https://api.github.com/repos/jessebw/activeVote
|
closed
|
current poll view - No poll
|
High Priority enhancement
|
- [x] If there is no current poll, display a landing view which lets the user know there are no active polls.
|
1.0
|
current poll view - No poll - - [x] If there is no current poll, display a landing view which lets the user know there are no active polls.
|
priority
|
current poll view no poll if there is no current poll display a landing view which lets the user know there are no active polls
| 1
|
59,628
| 3,115,156,898
|
IssuesEvent
|
2015-09-03 13:11:15
|
USGCRP/gcis-ontology
|
https://api.github.com/repos/USGCRP/gcis-ontology
|
closed
|
gcis:Project, Model as property
|
high-priority
|
Like https://github.com/USGCRP/gcis-ontology/issues/95 this is a spinoff of #12. Here, the turtle for models, in addition to projects, incorrectly uses classes as properties:
https://github.com/USGCRP/gcis/blob/master/lib/Tuba/files/templates/model/object.ttl.tut
Example with instance data:
http://data.globalchange.gov/model/ccsm3.thtml
I was unable to locate a candidate "property" relating a model to a Project like CMIP5 within dbpedia.
Similarly with relating a model run to a model:
https://github.com/USGCRP/gcis/blob/master/lib/Tuba/files/templates/model_run/object.ttl.tut
|
1.0
|
gcis:Project, Model as property - Like https://github.com/USGCRP/gcis-ontology/issues/95 this is a spinoff of #12. Here, the turtle for models, in addition to projects, incorrectly uses classes as properties:
https://github.com/USGCRP/gcis/blob/master/lib/Tuba/files/templates/model/object.ttl.tut
Example with instance data:
http://data.globalchange.gov/model/ccsm3.thtml
I was unable to locate a candidate "property" relating a model to a Project like CMIP5 within dbpedia.
Similarly with relating a model run to a model:
https://github.com/USGCRP/gcis/blob/master/lib/Tuba/files/templates/model_run/object.ttl.tut
|
priority
|
gcis project model as property like this is a spinoff of here the turtle for models in addition to projects incorrectly uses classes as properties example with instance data i was unable to locate a candidate property relating a model to a project like within dbpedia similarly with relating a model run to a model
| 1
|
440,068
| 12,692,564,002
|
IssuesEvent
|
2020-06-21 23:21:03
|
ctm/mb2-doc
|
https://api.github.com/repos/ctm/mb2-doc
|
closed
|
login typo protection
|
easy enhancement high priority
|
Currently if someone typos their nickname when logging in, they'll create a new one, which probably isn't what's wanted. If we add a dialog box that says "the account isn't known, would you like to create it?", that would solve the problem.
|
1.0
|
login typo protection - Currently if someone typos their nickname when logging in, they'll create a new one, which probably isn't what's wanted. If we add a dialog box that says "the account isn't known, would you like to create it?", that would solve the problem.
|
priority
|
login typo protection currently if someone typos their nickname when logging in they ll create a new one which probably isn t what s wanted if we add a dialog box that says the account isn t known would you like to create it that would solve the problem
| 1
|
552,075
| 16,194,264,574
|
IssuesEvent
|
2021-05-04 12:49:35
|
EvanQuan/Chubberino
|
https://api.github.com/repos/EvanQuan/Chubberino
|
opened
|
Refund heisters on quitting mid-heists
|
enhancement high priority medium effort
|
Currently if the program ends in the middle of any heists, all heisters lose all cheese they wagered.
Should return all cheese on quitting, preferably with a message.
|
1.0
|
Refund heisters on quitting mid-heists - Currently if the program ends in the middle of any heists, all heisters lose all cheese they wagered.
Should return all cheese on quitting, preferably with a message.
|
priority
|
refund heisters on quitting mid heists currently if the program ends in the middle of any heists all heisters lose all cheese they wagered should return all cheese on quitting preferrable with a message
| 1
|
558,703
| 16,540,982,064
|
IssuesEvent
|
2021-05-27 16:45:25
|
wazuh/wazuh-documentation
|
https://api.github.com/repos/wazuh/wazuh-documentation
|
opened
|
Current installation guide - Index
|
priority: highest type: refactor
|
Hello team!
The aim of this issue is to adapt the current installation guide index to the new structure proposed.
We must clarify the differences between the different deployment types.
Regards,
David
|
1.0
|
Current installation guide - Index - Hello team!
The aim of this issue is to adapt the current installation guide index to the new structure proposed.
We must clarify the differences between the different deployment types.
Regards,
David
|
priority
|
current installation guide index hello team the aim of this issue is to adapt the current installation guide index to the new structure proposed we must clarify the differences between the kinds of deployments type regards david
| 1
|
767,523
| 26,929,648,649
|
IssuesEvent
|
2023-02-07 15:59:18
|
GoogleCloudPlatform/dataproc-templates
|
https://api.github.com/repos/GoogleCloudPlatform/dataproc-templates
|
closed
|
[Publishing] Create Gcloud commands and rest API for below templates. Target JDBC
|
publishing python java high-priority
|
1. Java - GCSToJDBC
2. Python - GCSToJDBC
3. Python - JDBCToJDBC
Add the commands in: go/go/dataproc-gcp-documentation-internal; the document also contains commands for reference from other templates
|
1.0
|
[Publishing] Create Gcloud commands and rest API for below templates. Target JDBC - 1. Java - GCSToJDBC
2. Python - GCSToJDBC
3. Python - JDBCToJDBC
Add the commands in: go/go/dataproc-gcp-documentation-internal; the document also contains commands for reference from other templates
|
priority
|
create gcloud commands and rest api for below templates target jdbc java gcstojdbc python gcstojdbc python jdbctojdbc add the commands in go go dataproc gcp documentation internal document also contains commands for reference from other templates
| 1
|
130,436
| 5,116,029,280
|
IssuesEvent
|
2017-01-07 00:10:59
|
meumobi/sitebuilder
|
https://api.github.com/repos/meumobi/sitebuilder
|
opened
|
UpdateFeedsWorker low not work
|
bug feeds high priority
|
Its status is NOK on the /status page, and since the last deploy we've observed an error `could not find cursor over collection meumobi_partners.extensions` in the logs
```
[2017-01-07 01:11:02] sitebuilder.ERROR: Uncaught Exception MongoCursorException: "localhost:27017: could not find cursor over collection meumobi_partners.extensions" at /home/meumobi/PROJECTS/meumobi.com/releases/20170106204149/sitebuilder/lib/lithium/data/source/mongo_db/Result.php line 63 {"exception":"[object] (MongoCursorException(code: 16336): localhost:27017: could not find cursor over collection meumobi_partners.extensions at /home/meumobi/PROJECTS/meumobi.com/releases/20170106204149/sitebuilder/lib/lithium/data/source/mongo_db/Result.php:63)"}
```
This error appears since last deploy, exactly at `[2017-01-06 22:11:38]` on server time (UTC), one hour after the deploy.
```
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-07" | grep "could not find cursor over collection" | wc -l
5
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-06" | grep "could not find cursor over collection" | wc -l
9
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-05" | grep "could not find cursor over collection" | wc -l
0
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-04" | grep "could not find cursor over collection" | wc -l
0
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-03" | grep "could not find cursor over collection" | wc -l
0
```
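The per-day counts above could also be tallied in one pass; a hypothetical Python equivalent of the grep | wc -l pipeline (log lines abbreviated for the sketch):

```
from collections import Counter

logLines = [
    '[2017-01-07 01:11:02] sitebuilder.ERROR: ... could not find cursor over collection ...',
    '[2017-01-06 22:11:38] sitebuilder.ERROR: ... could not find cursor over collection ...',
    '[2017-01-06 23:00:00] sitebuilder.INFO: worker heartbeat',
]
counts = Counter(
    line[1:11]  # the YYYY-MM-DD date inside the leading bracket
    for line in logLines
    if "could not find cursor over collection" in line
)
print(counts["2017-01-07"], counts["2017-01-06"], counts["2017-01-05"])  # 1 1 0
```

A `Counter` returns 0 for absent dates, matching the `wc -l` output of 0 for days before the deploy.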
|
1.0
|
UpdateFeedsWorker low not work - Its status is NOK on the /status page, and since the last deploy we've observed an error `could not find cursor over collection meumobi_partners.extensions` in the logs
```
[2017-01-07 01:11:02] sitebuilder.ERROR: Uncaught Exception MongoCursorException: "localhost:27017: could not find cursor over collection meumobi_partners.extensions" at /home/meumobi/PROJECTS/meumobi.com/releases/20170106204149/sitebuilder/lib/lithium/data/source/mongo_db/Result.php line 63 {"exception":"[object] (MongoCursorException(code: 16336): localhost:27017: could not find cursor over collection meumobi_partners.extensions at /home/meumobi/PROJECTS/meumobi.com/releases/20170106204149/sitebuilder/lib/lithium/data/source/mongo_db/Result.php:63)"}
```
This error appears since last deploy, exactly at `[2017-01-06 22:11:38]` on server time (UTC), one hour after the deploy.
```
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-07" | grep "could not find cursor over collection" | wc -l
5
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-06" | grep "could not find cursor over collection" | wc -l
9
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-05" | grep "could not find cursor over collection" | wc -l
0
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-04" | grep "could not find cursor over collection" | wc -l
0
root@ks387594[ELEFANTE]:/home/meumobi/PROJECTS/meumobi.com/current# cat log/sitebuilder.log | grep "2017-01-03" | grep "could not find cursor over collection" | wc -l
0
```
|
priority
|
updatefeedsworker low not work it s status is nok on status page and since last deploy we ve observed a error could not find cursor over collection meumobi partners extensions on logs sitebuilder error uncaught exception mongocursorexception localhost could not find cursor over collection meumobi partners extensions at home meumobi projects meumobi com releases sitebuilder lib lithium data source mongo db result php line exception mongocursorexception code localhost could not find cursor over collection meumobi partners extensions at home meumobi projects meumobi com releases sitebuilder lib lithium data source mongo db result php this error appears since last deploy exactly at on server time utc one hour after the deploy root home meumobi projects meumobi com current cat log sitebuilder log grep grep could not find cursor over collection wc l root home meumobi projects meumobi com current cat log sitebuilder log grep grep could not find cursor over collection wc l root home meumobi projects meumobi com current cat log sitebuilder log grep grep could not find cursor over collection wc l root home meumobi projects meumobi com current cat log sitebuilder log grep grep could not find cursor over collection wc l root home meumobi projects meumobi com current cat log sitebuilder log grep grep could not find cursor over collection wc l
| 1
|
395,137
| 11,672,137,572
|
IssuesEvent
|
2020-03-04 05:41:22
|
gambitph/Stackable
|
https://api.github.com/repos/gambitph/Stackable
|
opened
|
Can't change the Text Color of the blocks inside the Accordion
|
[block] accordion bug high priority
|
Can't change the Text Color of the blocks inside the Accordion. This happens on Frontend and Backend.




|
1.0
|
Can't change the Text Color of the blocks inside the Accordion - Can't change the Text Color of the blocks inside the Accordion. This happens on Frontend and Backend.




|
priority
|
can t change the text color of the blocks inside the accordion can t change the text color of the blocks inside the accordion this happens on frontend and backend
| 1
|
492,410
| 14,212,659,851
|
IssuesEvent
|
2020-11-17 00:36:32
|
openml/OpenML
|
https://api.github.com/repos/openml/OpenML
|
closed
|
Test server no longer returns dataset ID within task XML
|
bug priority: highest
|
[https://test.openml.org/api/v1/task/2](https://test.openml.org/api/v1/task/2):
```
<oml:task xmlns:oml="http://openml.org/openml">
<oml:task_id>2</oml:task_id>
<oml:task_name>Task 2 (Supervised Classification)</oml:task_name>
<oml:task_type_id>1</oml:task_type_id>
<oml:task_type>Supervised Classification</oml:task_type>
<oml:input name="source_data">
<oml:data_set>
<oml:data_set_id></oml:data_set_id>
<oml:target_feature></oml:target_feature>
</oml:data_set> </oml:input>
<oml:input name="estimation_procedure">
<oml:estimation_procedure>
<oml:id></oml:id>
<oml:type></oml:type>
<oml:data_splits_url>https://test.openml.org/api_splits/get/2/Task_2_splits.arff</oml:data_splits_url>
<oml:parameter name="number_repeats"></oml:parameter>
<oml:parameter name="number_folds"></oml:parameter>
<oml:parameter name="percentage"></oml:parameter>
<oml:parameter name="stratified_sampling"></oml:parameter>
</oml:estimation_procedure> </oml:input>
<oml:input name="cost_matrix">
<oml:cost_matrix></oml:cost_matrix> </oml:input>
<oml:input name="evaluation_measures">
<oml:evaluation_measures>
<oml:evaluation_measure></oml:evaluation_measure>
</oml:evaluation_measures> </oml:input>
<oml:output name="predictions">
<oml:predictions>
<oml:format>ARFF</oml:format>
<oml:feature name="repeat" type="integer"/>
<oml:feature name="fold" type="integer"/>
<oml:feature name="row_id" type="integer"/>
<oml:feature name="confidence.classname" type="numeric"/>
<oml:feature name="prediction" type="string"/>
</oml:predictions> </oml:output>
</oml:task>
```
This makes the Python unit tests fail. It would be great if someone could have a look into this quickly.
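A client-side guard for this regression could look like the following sketch using the standard library (the XML is trimmed from the response above; the check itself is hypothetical):

```
import xml.etree.ElementTree as ET

NS = {"oml": "http://openml.org/openml"}
snippet = (
    '<oml:task xmlns:oml="http://openml.org/openml">'
    '<oml:input name="source_data"><oml:data_set>'
    '<oml:data_set_id></oml:data_set_id>'
    '</oml:data_set></oml:input></oml:task>'
)
node = ET.fromstring(snippet).find(".//oml:data_set_id", NS)
# An empty element parses with text == None; that is the symptom here.
missing = node is not None and not (node.text or "").strip()
print(missing)  # True
```

Failing fast on an empty `<oml:data_set_id>` gives a clearer error than letting the downstream dataset lookup fail.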
|
1.0
|
Test server no longer returns dataset ID within task XML - [https://test.openml.org/api/v1/task/2](https://test.openml.org/api/v1/task/2):
```
<oml:task xmlns:oml="http://openml.org/openml">
<oml:task_id>2</oml:task_id>
<oml:task_name>Task 2 (Supervised Classification)</oml:task_name>
<oml:task_type_id>1</oml:task_type_id>
<oml:task_type>Supervised Classification</oml:task_type>
<oml:input name="source_data">
<oml:data_set>
<oml:data_set_id></oml:data_set_id>
<oml:target_feature></oml:target_feature>
</oml:data_set> </oml:input>
<oml:input name="estimation_procedure">
<oml:estimation_procedure>
<oml:id></oml:id>
<oml:type></oml:type>
<oml:data_splits_url>https://test.openml.org/api_splits/get/2/Task_2_splits.arff</oml:data_splits_url>
<oml:parameter name="number_repeats"></oml:parameter>
<oml:parameter name="number_folds"></oml:parameter>
<oml:parameter name="percentage"></oml:parameter>
<oml:parameter name="stratified_sampling"></oml:parameter>
</oml:estimation_procedure> </oml:input>
<oml:input name="cost_matrix">
<oml:cost_matrix></oml:cost_matrix> </oml:input>
<oml:input name="evaluation_measures">
<oml:evaluation_measures>
<oml:evaluation_measure></oml:evaluation_measure>
</oml:evaluation_measures> </oml:input>
<oml:output name="predictions">
<oml:predictions>
<oml:format>ARFF</oml:format>
<oml:feature name="repeat" type="integer"/>
<oml:feature name="fold" type="integer"/>
<oml:feature name="row_id" type="integer"/>
<oml:feature name="confidence.classname" type="numeric"/>
<oml:feature name="prediction" type="string"/>
</oml:predictions> </oml:output>
</oml:task>
```
This makes the Python unit tests fail. It would be great if someone could have a look into this quickly.
|
priority
|
test server does no longer return dataset id within task xml oml task xmlns oml task supervised classification supervised classification arff this makes the python unit tests fail it would be great if someone could have a look into this quickly
| 1
|
309,030
| 9,460,448,810
|
IssuesEvent
|
2019-04-17 11:00:16
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
closed
|
Full integration with netdata of the new database implementation
|
area/database feature request priority/high
|
<!---
When creating a feature request please:
- Verify first that your issue is not already reported on GitHub
- Explain new feature briefly in "Feature idea summary" section
- Provide a clear and concise description of what you expect to happen.
--->
Complete integration of the new database with netdata, file management, configuration, old database code and new database code coexisting, porting of rrd automation and unit tests.
Related to #5282.
|
1.0
|
Full integration with netdata of the new database implementation - <!---
When creating a feature request please:
- Verify first that your issue is not already reported on GitHub
- Explain new feature briefly in "Feature idea summary" section
- Provide a clear and concise description of what you expect to happen.
--->
Complete integration of the new database with netdata, file management, configuration, old database code and new database code coexisting, porting of rrd automation and unit tests.
Related to #5282.
|
priority
|
full integration with netdata of the new database implementation when creating a feature request please verify first that your issue is not already reported on github explain new feature briefly in feature idea summary section provide a clear and concise description of what you expect to happen complete integration of the new database with netdata file management configuration old database code and new database code coexisting porting of rrd automation and unit tests related to
| 1
|
724,392
| 24,928,519,834
|
IssuesEvent
|
2022-10-31 09:37:49
|
alphagov/govuk-prototype-kit
|
https://api.github.com/repos/alphagov/govuk-prototype-kit
|
closed
|
Run v13 private beta
|
⚠️ high priority user research
|
## What
Running private beta for v13 5-23 September
## Why
To obtain feedback from our users on v13 and iterate before full release
## Who needs to work on this
The whole team?
## Done when
- [x] Users invited to beta @ruthhammond
- [x] EOI form re-shared to recruit more users to invite
- [x] Feedback collected via survey form, interviews and support
- [x] Feedback analysed
- [x] Create script / discussion guide for in-depth interviews
- [x] In-depth interviews
|
1.0
|
Run v13 private beta - ## What
Running private beta for v13 5-23 September
## Why
To obtain feedback from our users on v13 and iterate before full release
## Who needs to work on this
The whole team?
## Done when
- [x] Users invited to beta @ruthhammond
- [x] EOI form re-shared to recruit more users to invite
- [x] Feedback collected via survey form, interviews and support
- [x] Feedback analysed
- [x] Create script / discussion guide for in-depth interviews
- [x] In-depth interviews
|
priority
|
run private beta what running private beta for september why to obtain feedback from our users on and iterate before full release who needs to work on this the whole team done when users invited to beta ruthhammond eoi form re shared to recruit more users to invite feedback collected via survey form interviews and support feedback analysed create script discussion guide for in depth interviews in depth interviews
| 1
|
7,457
| 2,602,353,151
|
IssuesEvent
|
2015-02-24 07:57:53
|
NebulousLabs/Sia
|
https://api.github.com/repos/NebulousLabs/Sia
|
closed
|
Add website to readme
|
bug High Priority
|
right now, people who find us through the github page (I mentioned Sia in a reddit comment earlier today) don't have any way to find the website and download the binaries.
|
1.0
|
Add website to readme - right now, people who find us through the github page (I mentioned Sia in a reddit comment earlier today) don't have any way to find the website and download the binaries.
|
priority
|
add website to readme right now people who find us through the github page i mentioned sia in a reddit comment earlier today don t have any way to find the website and download the binaries
| 1
|
386,183
| 11,433,279,174
|
IssuesEvent
|
2020-02-04 15:26:40
|
rich-iannone/pointblank
|
https://api.github.com/repos/rich-iannone/pointblank
|
closed
|
Validations that work for numeric columns should also work for Date and datetime-type columns
|
Difficulty: ③ Advanced Effort: ③ High Priority: ③ High Type: ★ Enhancement
|
Right now date and datetime column values cannot be validated, and that’s a shame.
|
1.0
|
Validations that work for numeric columns should also work for Date and datetime-type columns - Right now date and datetime column values cannot be validated, and that’s a shame.
|
priority
|
validations that work for numeric columns should also work for date and datetime type columns right now date and datetime column values cannot be validated and that’s a shame
| 1
|
292,473
| 8,958,543,877
|
IssuesEvent
|
2019-01-27 15:11:36
|
ngageoint/hootenanny
|
https://api.github.com/repos/ngageoint/hootenanny
|
closed
|
Unifying conflation changing direction of one way streets - Maldives
|
Category: Algorithms Priority: High Status: Defined Type: Bug
|
Found this while working on #2867 and looking at Maldives results in JOSM. The issue pre-existed those changes, so I'm logging it separately. The road in question is Buruzu Magu when using NOME as the secondary, where the reference had no one-way tag. Not sure yet if this affects Network roads at all.
Another affected road is Fareedhee Magu.
|
1.0
|
Unifying conflation changing direction of one way streets - Maldives - Found this while working on #2867 and looking at Maldives results in JOSM. The issue pre-existed those changes, so I'm logging it separately. The road in question is Buruzu Magu when using NOME as the secondary, where the reference had no one-way tag. Not sure yet if this affects Network roads at all.
Another affected road is Fareedhee Magu.
|
priority
|
unifying conflation changing direction of one way streets maldives found this while working on and looking at maldives results in josm the issue was pre existing to those changes so logging it separately the road in question is buruzu magu when using nome as the secondary where the reference had no one way tag not sure yet if this all affects network roads another affected road is fareedhee magu
| 1
|
752,946
| 26,333,600,271
|
IssuesEvent
|
2023-01-10 12:45:28
|
GSM-MSG/GUI-iOS
|
https://api.github.com/repos/GSM-MSG/GUI-iOS
|
opened
|
BaseViewController
|
1️⃣ Priority: High ⚙ Setting
|
### Describe
A BaseViewController that the ViewControllers will use
### Additional
_No response_
|
1.0
|
BaseViewController - ### Describe
A BaseViewController that the ViewControllers will use
### Additional
_No response_
|
priority
|
baseviewcontroller describe a baseviewcontroller that the viewcontrollers will use additional no response
| 1
|
788,791
| 27,766,788,340
|
IssuesEvent
|
2023-03-16 12:01:00
|
AY2223S2-CS2113-T15-1/tp
|
https://api.github.com/repos/AY2223S2-CS2113-T15-1/tp
|
opened
|
Add FileManager support for saving modified Note objects
|
type.Story priority.High
|
After the implementation of sort by importance, there will be an additional attribute in the Note class.
|
1.0
|
Add FileManager support for saving modified Note objects - After the implementation of sort by importance, there will be an additional attribute in the Note class.
|
priority
|
add filemanager support for saving modified note objects after the implementation of sort by importance there will be an additional attribute in the note class
| 1
|
113,196
| 4,544,367,006
|
IssuesEvent
|
2016-09-10 17:06:06
|
Starblaster64/Vs-Saxton-Hale-2
|
https://api.github.com/repos/Starblaster64/Vs-Saxton-Hale-2
|
closed
|
Compile issue
|
bug high priority
|
Putting this here for recording purposes, since I've already mentioned it twice before.
The plugin currently will not compile unless you add "modules/" to the path of every #include that references a file within that folder.
|
1.0
|
Compile issue - Putting this here for recording purposes, since I've already mentioned it twice before.
The plugin currently will not compile unless you add "modules/" to the path of every #include that references a file within that folder.
|
priority
|
compile issue putting this here for recording purposes since i ve already mentioned it twice before the plugin currently will not compile unless you add modules to the path of every include that references a file within that folder
| 1
|
171,918
| 6,496,778,320
|
IssuesEvent
|
2017-08-22 11:31:02
|
wende/elchemy
|
https://api.github.com/repos/wende/elchemy
|
closed
|
Make wildcard work when importing all types of a union
|
bug Complexity:Advanced Language:Elm Priority:High Project:Compiler
|
Add:
## Example:
```elm
import Module exposing (Union(..))
```
|
1.0
|
Make wildcard work when importing all types of a union - Add:
## Example:
```elm
import Module exposing (Union(..))
```
|
priority
|
make wildcard work when importing all types of a union add example elm import module exposing union
| 1
|
292,807
| 8,968,747,649
|
IssuesEvent
|
2019-01-29 09:01:10
|
evangelos-ch/MangAdventure
|
https://api.github.com/repos/evangelos-ch/MangAdventure
|
opened
|
[TODO] Reader improvements
|
Priority: High Status: On Hold Type: Enhancement
|
- [ ] Use avelino/django-turbolinks to load pages faster
- [ ] More reading modes:
- [ ] Long strip mode
- [ ] Fit page to screen
- [ ] Double page
- [ ] Right to left direction
- [ ] Bidirectional page click events
- [ ] RSS feeds
- [ ] AniList ~~& MAL~~ integration
|
1.0
|
[TODO] Reader improvements - - [ ] Use avelino/django-turbolinks to load pages faster
- [ ] More reading modes:
- [ ] Long strip mode
- [ ] Fit page to screen
- [ ] Double page
- [ ] Right to left direction
- [ ] Bidirectional page click events
- [ ] RSS feeds
- [ ] AniList ~~& MAL~~ integration
|
priority
|
reader improvements use avelino django turbolinks to load pages faster more reading modes long strip mode fit page to screen double page right to left direction bidirectional page click events rss feeds anilist mal integration
| 1
|
104,639
| 4,216,362,843
|
IssuesEvent
|
2016-06-30 09:00:44
|
ari/jobsworth
|
https://api.github.com/repos/ari/jobsworth
|
closed
|
Test failure after email template upgrade
|
high priority
|
@k41n Sorry to do this to you. I just committed some improvements to the outbound email templates, but this caused test failures. Could you please take a look and see whether the test just needs adjusting or if I actually broke the emails.
https://travis-ci.org/ari/jobsworth/jobs/140697171#L1874
|
1.0
|
Test failure after email template upgrade - @k41n Sorry to do this to you. I just committed some improvements to the outbound email templates, but this caused test failures. Could you please take a look and see whether the test just needs adjusting or if I actually broke the emails.
https://travis-ci.org/ari/jobsworth/jobs/140697171#L1874
|
priority
|
test failure after email template upgrade sorry to do this to you i just committed some improvements to the outbound email templates but this caused test failures could you please take a look and see whether the test just needs adjusting or if i actually broke the emails
| 1
|
395,798
| 11,696,739,092
|
IssuesEvent
|
2020-03-06 10:21:41
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Analytics code syntax is getting disturbed, creating syntactic errors in the advanced analytics section editor
|
NEXT UPDATE [Priority: HIGH] bug
|
The syntax of the analytics code is getting disturbed, creating syntactic errors in the advanced analytics section editor.
https://secure.helpscout.net/conversation/1082087063/110966?folderId=2632030
|
1.0
|
Analytics code syntax is getting disturbed, creating syntactic errors in the advanced analytics section editor - The syntax of the analytics code is getting disturbed, creating syntactic errors in the advanced analytics section editor.
https://secure.helpscout.net/conversation/1082087063/110966?folderId=2632030
|
priority
|
syntax is getting disturb of analytics code and creating syntactic errors in advance analytics section editor the syntax is getting disturbed of analytics code and creating syntactic errors in the advance analytics section editor
| 1
|
146,667
| 5,625,815,629
|
IssuesEvent
|
2017-04-04 20:19:41
|
SCIInstitute/ALMA-TDA
|
https://api.github.com/repos/SCIInstitute/ALMA-TDA
|
closed
|
command line interface
|
enhancement high priority
|
need to be able to process cubes from the command line to fit into existing workflows.
|
1.0
|
command line interface - need to be able to process cubes from the command line to fit into existing workflows.
|
priority
|
command line interface need to be able to process cubes from the command line to fit into existing workflows
| 1
|
207,155
| 7,125,122,618
|
IssuesEvent
|
2018-01-19 21:36:39
|
tavorperry/ZooP
|
https://api.github.com/repos/tavorperry/ZooP
|
opened
|
Delete all scripts in the end of pages ?
|
High Priority ! question
|
In every page, we have links to Jquery scripts like this:
<script src="https://code.jquery.com/jquery-1.12.4.min.js" integrity="sha256-ZosEbRLbNQzLpnKIkEdrPv7lOy9C27hHQ+Xp8a4MxAQ=" crossorigin="anonymous"></script>
1. What does it do?
2. Do we need it?
3. I tried to remove it from Volunteer and nothing changed...
Thanks
|
1.0
|
Delete all scripts in the end of pages ? - In every page, we have links to Jquery scripts like this:
<script src="https://code.jquery.com/jquery-1.12.4.min.js" integrity="sha256-ZosEbRLbNQzLpnKIkEdrPv7lOy9C27hHQ+Xp8a4MxAQ=" crossorigin="anonymous"></script>
1. What does it do?
2. Do we need it?
3. I tried to remove it from Volunteer and nothing changed...
Thanks
|
priority
|
delete all scripts in the end of pages in every page we have links to jquery scripts like this what does it do do we need it i tried to remove it from volunteer and nothing changed thanks
| 1
|
342,835
| 10,322,361,254
|
IssuesEvent
|
2019-08-31 11:34:37
|
wso2/docs-is
|
https://api.github.com/repos/wso2/docs-is
|
opened
|
Scim2 documents contain .xml configs
|
Priority/High
|
https://is.docs.wso2.com/en/5.9.0/connectors/configuring-SCIM-2.0-Provisioning-Connector/#ConfiguringSCIM2.0ProvisioningConnector-/UsersEndpoint contains XML configurations. They should be changed to TOML.
|
1.0
|
Scim2 documents contain .xml configs - https://is.docs.wso2.com/en/5.9.0/connectors/configuring-SCIM-2.0-Provisioning-Connector/#ConfiguringSCIM2.0ProvisioningConnector-/UsersEndpoint contains XML configurations. They should be changed to TOML.
|
priority
|
documents contains xml configs contain xml configurations they should be change to toml
| 1
|
473,309
| 13,640,155,171
|
IssuesEvent
|
2020-09-25 12:17:48
|
ExoCTK/exoctk
|
https://api.github.com/repos/ExoCTK/exoctk
|
closed
|
Update production environment
|
High Priority Web App
|
Upgrade production environment to `exoctk-3.6` including Python 3.6 since it is only running 2.7.5 right now!
|
1.0
|
Update production environment - Upgrade production environment to `exoctk-3.6` including Python 3.6 since it is only running 2.7.5 right now!
|
priority
|
update production environment upgrade production environment to exoctk including python since it is only running right now
| 1
|
390,966
| 11,566,600,838
|
IssuesEvent
|
2020-02-20 12:49:50
|
robotology/human-dynamics-estimation
|
https://api.github.com/repos/robotology/human-dynamics-estimation
|
closed
|
Add option to express net external wrench estimates of dummy source (hands) with orientation of world frame
|
complexity:medium component:HumanDynamicsEstimation component:HumanWrenchProvider priority:high type:enhancement type:task
|
Currently, the force-torque measurements from the ftShoes are expressed (both origin and orientation) with respect the human foot frames (`LeftFoot` and `RightFoot`). So, on calling [extractLinkNetExternalWrenchesFromDynamicVariables(const VectorDynSize& d, LinkNetExternalWrenches& netExtWrenches, const bool task1)](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117), the net external wrench estimates on `LeftFoot` and `RightFoot` links are correctly obtained in the body frame, and if the covariances are correctly set for the MAP estimator, the measurements and the estimates on `LeftFoot` and `RightFoot` links match closely. So, there is no need to modify the function [extractLinkNetExternalWrenchesFromDynamicVariables()](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117) for `LeftFoot` and `RightFoot` links.
Now, coming to the case of the links `LeftHand` and `RightHand`, they are considered to be dummy sources of force-torques measurements (set to **0**). The net external wrench estimates for `LeftHand` and `RightHand` obtained on calling [extractLinkNetExternalWrenchesFromDynamicVariables(const VectorDynSize& d, LinkNetExternalWrenches& netExtWrenches, const bool task1)](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117) are expressed (both origin and orientation) in their body frames. To highlight the fact that these estimates at the hands are a reflection of the estimates of the object weight at hands, it is useful to express them at the origin of the links `LeftHand` and `RightHand` but with the orientation of the world frame.
As pointed out by @traversaro this code is best suited on the front end of HDE rather than in the back end of Berdy in iDynTree.
One of the problems in achieving this is knowing, inside the `HumanDynamicsEstimator` device, which link has a dummy wrench source attached. Currently, this information is present in the `HumanWrenchProvider` device https://github.com/robotology/human-dynamics-estimation/blob/feature/visualize-berdy-estimated-wrench/conf/xml/Human.xml#L261 but it is not propagated to the `HumanDynamicsEstimator` device.
This issue will track the details related to updating HDE code for expressing the estimated net external wrench of dummy sources with the orientation of world frame.
@lrapetti @claudia-lat @traversaro
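A minimal sketch of the re-expression described above: keep the wrench at the link origin, but rotate its components into the world orientation. Python/NumPy illustration only; the function name and the [force; torque] 6-vector layout are assumptions, not the HDE or iDynTree API:

```python
import numpy as np

def wrench_with_world_orientation(R_world_body, wrench_body):
    # Same application point, world-frame orientation: rotate the
    # force and torque components independently.
    f_b, tau_b = wrench_body[:3], wrench_body[3:]
    return np.concatenate([R_world_body @ f_b, R_world_body @ tau_b])

# 90-degree rotation about z: a body-frame x force becomes a world-frame y force.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
w = wrench_with_world_orientation(R, np.array([1.0, 0.0, 0.0, 0.0, 0.0, 2.0]))
```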
|
1.0
|
Add option to express net external wrench estimates of dummy source (hands) with orientation of world frame - Currently, the force-torque measurements from the ftShoes are expressed (both origin and orientation) with respect the human foot frames (`LeftFoot` and `RightFoot`). So, on calling [extractLinkNetExternalWrenchesFromDynamicVariables(const VectorDynSize& d, LinkNetExternalWrenches& netExtWrenches, const bool task1)](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117), the net external wrench estimates on `LeftFoot` and `RightFoot` links are correctly obtained in the body frame, and if the covariances are correctly set for the MAP estimator, the measurements and the estimates on `LeftFoot` and `RightFoot` links match closely. So, there is no need to modify the function [extractLinkNetExternalWrenchesFromDynamicVariables()](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117) for `LeftFoot` and `RightFoot` links.
Now, coming to the case of the links `LeftHand` and `RightHand`, they are considered to be dummy sources of force-torques measurements (set to **0**). The net external wrench estimates for `LeftHand` and `RightHand` obtained on calling [extractLinkNetExternalWrenchesFromDynamicVariables(const VectorDynSize& d, LinkNetExternalWrenches& netExtWrenches, const bool task1)](https://github.com/robotology/idyntree/blob/master/src/estimation/src/BerdyHelper.cpp#L2117) are expressed (both origin and orientation) in their body frames. To highlight the fact that these estimates at the hands are a reflection of the estimates of the object weight at hands, it is useful to express them at the origin of the links `LeftHand` and `RightHand` but with the orientation of the world frame.
As pointed out by @traversaro this code is best suited on the front end of HDE rather than in the back end of Berdy in iDynTree.
One of the problems in achieving this is knowing, inside the `HumanDynamicsEstimator` device, which link has a dummy wrench source attached. Currently, this information is present in the `HumanWrenchProvider` device https://github.com/robotology/human-dynamics-estimation/blob/feature/visualize-berdy-estimated-wrench/conf/xml/Human.xml#L261 but it is not propagated to the `HumanDynamicsEstimator` device.
This issue will track the details related to updating HDE code for expressing the estimated net external wrench of dummy sources with the orientation of world frame.
@lrapetti @claudia-lat @traversaro
|
priority
|
add option to express net external wrench estimates of dummy source hands with orientation of world frame currently the force torque measurements from the ftshoes are expressed both origin and orientation with respect the human foot frames leftfoot and rightfoot so on calling the net external wrench estimates on leftfoot and rightfoot links are correctly obtained in the body frame and if the covariances are correctly set for the map estimator the measurements and the estimates on leftfoot and rightfoot links match closely so there is no need to modify the function for leftfoot and rightfoot links now coming to the case of the links lefthand and righthand they are considered to be dummy sources of force torques measurements set to the net external wrench estimates for lefthand and righthand obtained on calling are expressed both origin and orientation in their body frames to highlight the fact that these estimates at the hands are a reflection of the estimates of the object weight at hands it is useful to express them at the origin of the links lefthand and righthand but with the orientation of the world frame as pointed out by traversaro this code is best suited on the front end of hde rather than in the back end of berdy in idyntree one of the problems in achieving this is to know inside humandynamicsestimator device which link has a dummy wrench source attached currently this information is present in humanwrenchprovider device but it is not propagated to humnadynamicsestimator device this issue will track the details related to updating hde code for expressing the estimated net external wrench of dummy sources with the orientation of world frame lrapetti claudia lat traversaro
| 1
|
236,887
| 7,753,360,142
|
IssuesEvent
|
2018-05-31 00:10:20
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
7.5.0 Spawning the economy or placing a store crashes Eco
|
High Priority
|
Gabeux and I both replicated this by calling /spawneconomy and placing store objects, probably an issue with some deprecated currency features.
[store_crashIssue.txt](https://github.com/StrangeLoopGames/EcoIssues/files/2041162/store_crashIssue.txt)
|
1.0
|
7.5.0 Spawning the economy or placing a store crashes Eco - Gabeux and I both replicated this by calling /spawneconomy and placing store objects, probably an issue with some deprecated currency features.
[store_crashIssue.txt](https://github.com/StrangeLoopGames/EcoIssues/files/2041162/store_crashIssue.txt)
|
priority
|
spawning the economy or placing a store crashes eco gabeux and i both replicated this by calling spawneconomy and placing store objects probably an issue with some deprecated currency features
| 1
|
579,941
| 17,201,722,088
|
IssuesEvent
|
2021-07-17 11:26:54
|
myxo/ssyp_db
|
https://api.github.com/repos/myxo/ssyp_db
|
closed
|
Datamodel: slow runtime
|
bug high priority
|
I decided to run generate_table and saw a monstrous slowdown. Profiler result:

|
1.0
|
Datamodel: slow runtime - I decided to run generate_table and saw a monstrous slowdown. Profiler result:

|
priority
|
datamodel slow runtime i decided to run generate table and saw a monstrous slowdown profiler result
| 1
|
331,905
| 10,081,428,467
|
IssuesEvent
|
2019-07-25 08:47:51
|
ipfs/ipfs-cluster
|
https://api.github.com/repos/ipfs/ipfs-cluster
|
closed
|
[Daemon] Init should take a list of peers
|
difficulty:easy enhancement/feature help wanted priority:high ready
|
**Describe the feature you are proposing**
`ipfs-cluster-service init --peers <multiaddress,multiaddress>`
It should:
1) Add and write the given peers to the peerstore file
2) For raft config section, add the peer IDs to the init_raft_peerset
3) For crdt config section, add the peer IDs to the trusted_peers slice
**Additional context**
Mostly useful for CRDTs:
`ipfs-cluster-service init --peers /dnsaddr/mycluster.myclyster.io` should allow to run `ipfs-cluster-service daemon --consensus crdt` without needing the bootstrap flag.
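The three numbered steps could be sketched as a small config transform. Hedged illustration in Python, not the project's Go code; the key names mirror the sections quoted in the issue, and the peer-ID extraction (last multiaddress segment) is deliberately naive:

```python
def apply_peers(config, peers):
    # 1) record the multiaddresses in the peerstore
    config.setdefault("peerstore", []).extend(peers)
    # naive peer-ID extraction: last path segment of each multiaddress
    peer_ids = [addr.rsplit("/", 1)[-1] for addr in peers]
    # 2) raft: seed the initial peerset
    config["consensus"]["raft"]["init_raft_peerset"] = peer_ids
    # 3) crdt: trust the same peers
    config["consensus"]["crdt"]["trusted_peers"] = peer_ids
    return config

cfg = apply_peers({"consensus": {"raft": {}, "crdt": {}}},
                  ["/ip4/1.2.3.4/tcp/9096/p2p/QmPeerA"])
```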
|
1.0
|
[Daemon] Init should take a list of peers - **Describe the feature you are proposing**
`ipfs-cluster-service init --peers <multiaddress,multiaddress>`
It should:
1) Add and write the given peers to the peerstore file
2) For raft config section, add the peer IDs to the init_raft_peerset
3) For crdt config section, add the peer IDs to the trusted_peers slice
**Additional context**
Mostly useful for CRDTs:
`ipfs-cluster-service init --peers /dnsaddr/mycluster.myclyster.io` should allow to run `ipfs-cluster-service daemon --consensus crdt` without needing the bootstrap flag.
|
priority
|
init should take a list of peers describe the feature you are proposing ipfs cluster service init peers it should add and write the given peers to the peerstore file for raft config section add the peer ids to the init raft peerset for crdt config section add the peer ids to the trusted peers slice additional context mostly useful for crdts ipfs cluster service init peers dnsaddr mycluster myclyster io should allow to run ipfs cluster service daemon consensus crdt without needing the bootstrap flag
| 1
|
707,581
| 24,310,285,696
|
IssuesEvent
|
2022-09-29 21:30:40
|
COS301-SE-2022/Office-Booker
|
https://api.github.com/repos/COS301-SE-2022/Office-Booker
|
closed
|
Prevent duplicate booking votings
|
Type: Enhance Priority: High Status: Busy Status: Needs-info
|
As it stands, a user is able to spam votes on a booking.
The currently suggested approach is to simply store an array of users who have already voted on a booking and disallow subsequent votes if they are already in that array.
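That approach fits in a few lines. Hypothetical Python sketch (a set instead of an array, for cheap membership checks); the class and method names are made up, not the project's code:

```python
class Booking:
    def __init__(self):
        self.votes = 0
        self.voters = set()  # users who already voted on this booking

    def vote(self, user_id):
        if user_id in self.voters:
            return False  # duplicate vote: rejected
        self.voters.add(user_id)
        self.votes += 1
        return True

b = Booking()
first = b.vote("alice")
second = b.vote("alice")  # spam attempt is ignored
```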
|
1.0
|
Prevent duplicate booking votings - As it stands, a user is able to spam votes on a booking.
The currently suggested approach is to simply store an array of users who have already voted on a booking and disallow subsequent votes if they are already in that array.
|
priority
|
prevent duplicate booking votings as it stands a user is able to spam votes on a booking the currently suggested approach is to simply store an array of users who have already voted on a booking and disallow subsequent votes if they are already in that array
| 1
|
777,620
| 27,288,462,530
|
IssuesEvent
|
2023-02-23 15:03:44
|
Satellite-im/Uplink
|
https://api.github.com/repos/Satellite-im/Uplink
|
closed
|
Global - Changes requested from Matts PR #274
|
High Priority Global
|
i recommend moving this and the use_future to the end of this function. the function was laid out so the `use_future`s were declared after `main_element`, which allowed a dev to see the end result of this function without having to scroll past all the `use_future`s
_Originally posted by @sdwoodbury in https://github.com/Satellite-im/Uplink/pull/274#discussion_r1113615666_
|
1.0
|
Global - Changes requested from Matts PR #274 - i recommend moving this and the use_future to the end of this function. the function was laid out so the `use_future`s were declared after `main_element`, which allowed a dev to see the end result of this function without having to scroll past all the `use_future`s
_Originally posted by @sdwoodbury in https://github.com/Satellite-im/Uplink/pull/274#discussion_r1113615666_
|
priority
|
global changes requested from matts pr i recommend moving this and the use future to the end of this function the function was laid out so the use future s were declared after main element which allowed a dev to see the end result of this function without having to scroll past all the use future s originally posted by sdwoodbury in
| 1
|
426,281
| 12,370,378,655
|
IssuesEvent
|
2020-05-18 16:42:07
|
Azure/ARO-RP
|
https://api.github.com/repos/Azure/ARO-RP
|
closed
|
CI GOROOT missing
|
priority-high
|
All CI is failing with:
```
go: cannot find GOROOT directory: /usr/local/go1.13
```
This might be a change in the VM pool. This needs fixing ASAP.
https://msazure.visualstudio.com/AzureRedHatOpenShift/_build/results?buildId=31206518&view=logs&j=406f28f7-259e-5bfd-e153-dd013342e83f&t=50b0c634-7175-589f-aa2c-13d292c44c63
|
1.0
|
CI GOROOT missing - All CI is failing with:
```
go: cannot find GOROOT directory: /usr/local/go1.13
```
This might be a change in the VM pool. This needs fixing ASAP.
https://msazure.visualstudio.com/AzureRedHatOpenShift/_build/results?buildId=31206518&view=logs&j=406f28f7-259e-5bfd-e153-dd013342e83f&t=50b0c634-7175-589f-aa2c-13d292c44c63
|
priority
|
ci goroot missing all ci is failing with go cannot find goroot directory usr local this might be change in the vm pool this needs fixing asap
| 1
|
56,247
| 3,078,627,002
|
IssuesEvent
|
2015-08-21 11:38:18
|
nfprojects/nfengine
|
https://api.github.com/repos/nfprojects/nfengine
|
closed
|
Remove all useless #if ... #else ... #endif sections
|
bug high priority medium
|
Some parts of the engine have code hidden by #if ... #else ... #endif sequences. Search for all of them and either remove them or provide a different way to determine which section to use (avoid preprocessor macros; we want the engine to be entirely compiled).
|
1.0
|
Remove all useless #if ... #else ... #endif sections - Some parts of the engine have code hidden by #if ... #else ... #endif sequences. Search for all of them and either remove them or provide a different way to determine which section to use (avoid preprocessor macros; we want the engine to be entirely compiled).
|
priority
|
remove all useless if else endif sections some parts of engine have code hidden by if else endif sequence search for all of them and either remove them or provide different way to determine which section to use avoid preprocessor macros we want the engine to be entirely compiled
| 1
|
580,234
| 17,213,594,245
|
IssuesEvent
|
2021-07-19 08:39:53
|
GeneralMine/S2QUAT
|
https://api.github.com/repos/GeneralMine/S2QUAT
|
opened
|
Link question to factors or criterias
|
Enhancement Help wanted Priority: High
|
We need to establish an optional relationship from a question to a direct factor or criterion of the quality model. That's necessary for #52 to connect the survey to the quality model in evaluation.
I guess it's not possible to combine these two relationships into one, due to the fact that the relationship will always be an exclusive or.
Also, having nearly always null in a column is bad practice, right?
@ciearius what do you think?
|
1.0
|
Link question to factors or criterias - We need to establish an optional relationship from a question to a direct factor or criterion of the quality model. That's necessary for #52 to connect the survey to the quality model in evaluation.
I guess it's not possible to combine these two relationships into one, due to the fact that the relationship will always be an exclusive or.
Also, having nearly always null in a column is bad practice, right?
@ciearius what do you think?
|
priority
|
link question to factors or criterias we need to establish a optional relationship from a question to a direct factor or criteria of the quality model thats necessary for to connect the survey to the quality model in evaluation i guess its not possible to combine these two relationships to one due to the fact that the relationship will always be an exclusive or also having nearly always null in a column is bad practise right ciearius what do you think
| 1
|
92,569
| 3,872,420,348
|
IssuesEvent
|
2016-04-11 13:50:34
|
DoSomething/gladiator
|
https://api.github.com/repos/DoSomething/gladiator
|
closed
|
Leaderboard
|
#leaderboard large priority-high
|
Columns for leaderboard:
Rank | Name | Number of x's y'd | Email | Flagged Status
| --- | --- | --- | --- | --- |
1 | Jerome | 50 | jeromoe@example.com | approved
2 | Hils | 35 | hils@clinton.com | approved
3 | Bernie | 20 | bernie@sanders.com | approved
Sort by number of x's y'd.
If there is a pending file highlight the row in yellow, only show approved if there are NO PENDING files
Drop phone number
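The sorting and highlighting rules above can be sketched as follows; illustrative Python with made-up rows, not Gladiator's code:

```python
rows = [
    {"name": "Hils",   "count": 35, "pending": False},
    {"name": "Jerome", "count": 50, "pending": True},
    {"name": "Bernie", "count": 20, "pending": False},
]

# sort by number of x's y'd, descending
ranked = sorted(rows, key=lambda r: r["count"], reverse=True)
for rank, row in enumerate(ranked, start=1):
    row["rank"] = rank
    # only show "approved" when there are NO pending files
    row["status"] = "pending" if row["pending"] else "approved"
```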
|
1.0
|
Leaderboard - Columns for leaderboard:
Rank | Name | Number of x's y'd | Email | Flagged Status
| --- | --- | --- | --- | --- |
1 | Jerome | 50 | jeromoe@example.com | approved
2 | Hils | 35 | hils@clinton.com | approved
3 | Bernie | 20 | bernie@sanders.com | approved
Sort by number of x's y'd.
If there is a pending file highlight the row in yellow, only show approved if there are NO PENDING files
Drop phone number
|
priority
|
leaderboard columns for leaderboard rank name number of x s y d email flagged status jerome jeromoe example com approved hils hils clinton com approved bernie bernie sanders com approved sort by number of x s y d if there is a pending file highlight the row in yellow only show approved if there are no pending files drop phone number
| 1
|
612,596
| 19,026,692,605
|
IssuesEvent
|
2021-11-24 05:06:57
|
boostcampwm-2021/iOS05-Escaper
|
https://api.github.com/repos/boostcampwm-2021/iOS05-Escaper
|
closed
|
[E1 S1 T6] 로그인 화면의 repository를 구성한다.
|
feature High Priority
|
### Epic - Story - Task
Epic : 로그인 화면
Story : Escaper 서비스를 이용하기 위해 로그인을 할 수 있다.
Task : 로그인 화면의 repository를 구성한다.
|
1.0
|
[E1 S1 T6] 로그인 화면의 repository를 구성한다. - ### Epic - Story - Task
Epic : 로그인 화면
Story : Escaper 서비스를 이용하기 위해 로그인을 할 수 있다.
Task : 로그인 화면의 repository를 구성한다.
|
priority
|
로그인 화면의 repository를 구성한다 epic story task epic 로그인 화면 story escaper 서비스를 이용하기 위해 로그인을 할 수 있다 task 로그인 화면의 repository를 구성한다
| 1
|
305,752
| 9,376,569,904
|
IssuesEvent
|
2019-04-04 08:17:27
|
qlcchain/go-qlc
|
https://api.github.com/repos/qlcchain/go-qlc
|
closed
|
verify performance of sqlite
|
Priority: High Type: Enhancement
|
### Description of the issue
verify performance of sqlite
### Issue-Type
- [ ] bug report
- [x] feature request
- [ ] Documentation improvement
|
1.0
|
verify performance of sqlite - ### Description of the issue
verify performance of sqlite
### Issue-Type
- [ ] bug report
- [x] feature request
- [ ] Documentation improvement
|
priority
|
verify performance of sqlite description of the issue verify performance of sqlite issue type bug report feature request documentation improvement
| 1
|
601,749
| 18,429,868,892
|
IssuesEvent
|
2021-10-14 06:06:49
|
ballerina-platform/ballerina-standard-library
|
https://api.github.com/repos/ballerina-platform/ballerina-standard-library
|
closed
|
Compiler plugin fails for wrong programs
|
Points/0.5 Priority/High Type/Improvement Team/PCP
|
Consider the below program where the io:println is not in the right place. Ideally it should be inside the init() function.
```ballerina
import ballerinax/kafka;
import ballerina/io;
kafka:ProducerConfiguration prod_config = {
clientId: "trainer-id",
acks: "all",
retryCount: 3
};
kafka:Producer trainer_prod = check new(kafka:DEFAULT_URL, prod_config);
kafka:ConsumerConfiguration consumer_configs = {
groupId: "training-id",
topics: ["training-req"],
pollingInterval: 1
};
listener kafka:Listener kafka_listener = new(kafka:DEFAULT_URL, consumer_configs);
service kafka:Service on kafka_listener {
// function init() {
io:println("Inside trainer service....");
// }
remote function onConsumerRecord(kafka:Caller caller, kafka:ConsumerRecord[] records) returns error? {
foreach var kafka_record in records {
check process_kafka_record(kafka_record);
}
}
}
function process_kafka_record(kafka:ConsumerRecord k_record) returns error? {
byte[] value = k_record.value;
io:println("processing record...");
}
```
Error log
```
Compiling source
slack.bal
error: compilation failed: The compiler extension in package 'ballerinax:kafka:2.1.0-beta.2.2' failed to complete. class io.ballerina.compiler.syntax.tree.ObjectFieldNode cannot be cast to class io.ballerina.compiler.syntax.tree.FunctionDefinitionNode (io.ballerina.compiler.syntax.tree.ObjectFieldNode and io.ballerina.compiler.syntax.tree.FunctionDefinitionNode are in unnamed module of loader 'app')
```
|
1.0
|
Compiler plugin fails for wrong programs - Consider the below program where the io:println is not in the right place. Ideally it should be inside the init() function.
```ballerina
import ballerinax/kafka;
import ballerina/io;
kafka:ProducerConfiguration prod_config = {
clientId: "trainer-id",
acks: "all",
retryCount: 3
};
kafka:Producer trainer_prod = check new(kafka:DEFAULT_URL, prod_config);
kafka:ConsumerConfiguration consumer_configs = {
groupId: "training-id",
topics: ["training-req"],
pollingInterval: 1
};
listener kafka:Listener kafka_listener = new(kafka:DEFAULT_URL, consumer_configs);
service kafka:Service on kafka_listener {
// function init() {
io:println("Inside trainer service....");
// }
remote function onConsumerRecord(kafka:Caller caller, kafka:ConsumerRecord[] records) returns error? {
foreach var kafka_record in records {
check process_kafka_record(kafka_record);
}
}
}
function process_kafka_record(kafka:ConsumerRecord k_record) returns error? {
byte[] value = k_record.value;
io:println("processing record...");
}
```
Error log
```
Compiling source
slack.bal
error: compilation failed: The compiler extension in package 'ballerinax:kafka:2.1.0-beta.2.2' failed to complete. class io.ballerina.compiler.syntax.tree.ObjectFieldNode cannot be cast to class io.ballerina.compiler.syntax.tree.FunctionDefinitionNode (io.ballerina.compiler.syntax.tree.ObjectFieldNode and io.ballerina.compiler.syntax.tree.FunctionDefinitionNode are in unnamed module of loader 'app')
```
|
priority
|
compiler plugin fails for wrong programs consider the below program where the io println is not in the right place ideally it should be inside the init function ballerina import ballerinax kafka import ballerina io kafka producerconfiguration prod config clientid trainer id acks all retrycount kafka producer trainer prod check new kafka default url prod config kafka consumerconfiguration consumer configs groupid training id topics pollinginterval listener kafka listener kafka listener new kafka default url consumer configs service kafka service on kafka listener function init io println inside trainer service remote function onconsumerrecord kafka caller caller kafka consumerrecord records returns error foreach var kafka record in records check process kafka record kafka record function process kafka record kafka consumerrecord k record returns error byte value k record value io println processing record error log compiling source slack bal error compilation failed the compiler extension in package ballerinax kafka beta failed to complete class io ballerina compiler syntax tree objectfieldnode cannot be cast to class io ballerina compiler syntax tree functiondefinitionnode io ballerina compiler syntax tree objectfieldnode and io ballerina compiler syntax tree functiondefinitionnode are in unnamed module of loader app
| 1
|
482,029
| 13,895,964,809
|
IssuesEvent
|
2020-10-19 16:31:16
|
AY2021S1-CS2113T-F14-3/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-F14-3/tp
|
closed
|
Removal of nested user inputs in RouteMapCommand class
|
priority.High type.Enhancement
|
Removal of nested user inputs in RouteMapCommand class to facillitate efficient use of the application
|
1.0
|
Removal of nested user inputs in RouteMapCommand class - Removal of nested user inputs in RouteMapCommand class to facillitate efficient use of the application
|
priority
|
removal of nested user inputs in routemapcommand class removal of nested user inputs in routemapcommand class to facillitate efficient use of the application
| 1
|
741,782
| 25,818,555,010
|
IssuesEvent
|
2022-12-12 07:44:52
|
xournalpp/xournalpp
|
https://api.github.com/repos/xournalpp/xournalpp
|
closed
|
Crash of unknown cause when writing
|
bug priority::high Crash
|
**Affects versions :**
- OS: Arch Linux
- (Linux only) Desktop environment: Gnome Wayland
- Which version of libgtk do you use 3.24.34
- Version of Xournal++: 1.12
- Installation method: flatpak
**Describe the bug**
Crash of unknown cause.
**To Reproduce**
Unknown. I was simply writing in a document in xournalpp on my tablet.
**Expected behavior**
N/A
**Screenshots of Problem**
N/A
**Additional context**
Crash log
```
Date: Tue Oct 25 21:22:50 2022
Error: signal 11
[bt]: (0) xournalpp(+0x201a75) [0x558545e38a75]
[bt]: (1) /usr/lib/x86_64-linux-gnu/libc.so.6(+0x3f0c0) [0x7fb9cee3f0c0]
[bt]: (2) xournalpp(_ZNK7Element7getTypeEv+0x4) [0x558545e09db4]
[bt]: (3) xournalpp(_ZNK12DocumentView11drawElementEP6_cairoP7Element+0x22) [0x558545e2f382]
[bt]: (4) xournalpp(_ZN12DocumentView9drawLayerEP6_cairoP5Layer+0x9b) [0x558545e2f53b]
[bt]: (5) xournalpp(_ZN12DocumentView8drawPageESt10shared_ptrI7XojPageEP6_cairobbbb+0xfe) [0x558545e2ff8e]
[bt]: (6) xournalpp(_ZN10PreviewJob8drawPageEv+0x38a) [0x558545d2c3ea]
[bt]: (7) xournalpp(_ZN10PreviewJob3runEv+0x34) [0x558545d2c764]
[bt]: (8) xournalpp(_ZN9Scheduler17jobThreadCallbackEPS_+0xf3) [0x558545d2f333]
[bt]: (9) /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x87889) [0x7fb9d0681889]
[bt]: (10) /usr/lib/x86_64-linux-gnu/libc.so.6(+0x8f1da) [0x7fb9cee8f1da]
[bt]: (11) /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7fb9cef17d84]
Try to get a better stracktrace...
[bt] #1 xournalpp(+0x202096) [0x558545e39096]
[bt] #2 /usr/lib/x86_64-linux-gnu/libc.so.6(+0x3f0c0) [0x7fb9cee3f0c0]
[bt] #3 xournalpp(_ZNK7Element7getTypeEv+0x4) [0x558545e09db4]
[bt] #4 xournalpp(_ZNK12DocumentView11drawElementEP6_cairoP7Element+0x22) [0x558545e2f382]
[bt] #5 xournalpp(_ZN12DocumentView9drawLayerEP6_cairoP5Layer+0x9b) [0x558545e2f53b]
[bt] #6 xournalpp(_ZN12DocumentView8drawPageESt10shared_ptrI7XojPageEP6_cairobbbb+0xfe) [0x558545e2ff8e]
[bt] #7 xournalpp(_ZN10PreviewJob8drawPageEv+0x38a) [0x558545d2c3ea]
[bt] #8 xournalpp(_ZN10PreviewJob3runEv+0x34) [0x558545d2c764]
[bt] #9 xournalpp(_ZN9Scheduler17jobThreadCallbackEPS_+0xf3) [0x558545d2f333]
[bt] #10 /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x87889) [0x7fb9d0681889]
[bt] #11 /usr/lib/x86_64-linux-gnu/libc.so.6(+0x8f1da) [0x7fb9cee8f1da]
[bt] #12 /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7fb9cef17d84]
```
|
1.0
|
Crash of unknown cause when writing - **Affects versions :**
- OS: Arch Linux
- (Linux only) Desktop environment: Gnome Wayland
- Which version of libgtk do you use 3.24.34
- Version of Xournal++: 1.12
- Installation method: flatpak
**Describe the bug**
Crash of unknown cause.
**To Reproduce**
Unknown. I was simply writing in a document in xournalpp on my tablet.
**Expected behavior**
N/A
**Screenshots of Problem**
N/A
**Additional context**
Crash log
```
Date: Tue Oct 25 21:22:50 2022
Error: signal 11
[bt]: (0) xournalpp(+0x201a75) [0x558545e38a75]
[bt]: (1) /usr/lib/x86_64-linux-gnu/libc.so.6(+0x3f0c0) [0x7fb9cee3f0c0]
[bt]: (2) xournalpp(_ZNK7Element7getTypeEv+0x4) [0x558545e09db4]
[bt]: (3) xournalpp(_ZNK12DocumentView11drawElementEP6_cairoP7Element+0x22) [0x558545e2f382]
[bt]: (4) xournalpp(_ZN12DocumentView9drawLayerEP6_cairoP5Layer+0x9b) [0x558545e2f53b]
[bt]: (5) xournalpp(_ZN12DocumentView8drawPageESt10shared_ptrI7XojPageEP6_cairobbbb+0xfe) [0x558545e2ff8e]
[bt]: (6) xournalpp(_ZN10PreviewJob8drawPageEv+0x38a) [0x558545d2c3ea]
[bt]: (7) xournalpp(_ZN10PreviewJob3runEv+0x34) [0x558545d2c764]
[bt]: (8) xournalpp(_ZN9Scheduler17jobThreadCallbackEPS_+0xf3) [0x558545d2f333]
[bt]: (9) /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x87889) [0x7fb9d0681889]
[bt]: (10) /usr/lib/x86_64-linux-gnu/libc.so.6(+0x8f1da) [0x7fb9cee8f1da]
[bt]: (11) /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7fb9cef17d84]
Try to get a better stracktrace...
[bt] #1 xournalpp(+0x202096) [0x558545e39096]
[bt] #2 /usr/lib/x86_64-linux-gnu/libc.so.6(+0x3f0c0) [0x7fb9cee3f0c0]
[bt] #3 xournalpp(_ZNK7Element7getTypeEv+0x4) [0x558545e09db4]
[bt] #4 xournalpp(_ZNK12DocumentView11drawElementEP6_cairoP7Element+0x22) [0x558545e2f382]
[bt] #5 xournalpp(_ZN12DocumentView9drawLayerEP6_cairoP5Layer+0x9b) [0x558545e2f53b]
[bt] #6 xournalpp(_ZN12DocumentView8drawPageESt10shared_ptrI7XojPageEP6_cairobbbb+0xfe) [0x558545e2ff8e]
[bt] #7 xournalpp(_ZN10PreviewJob8drawPageEv+0x38a) [0x558545d2c3ea]
[bt] #8 xournalpp(_ZN10PreviewJob3runEv+0x34) [0x558545d2c764]
[bt] #9 xournalpp(_ZN9Scheduler17jobThreadCallbackEPS_+0xf3) [0x558545d2f333]
[bt] #10 /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x87889) [0x7fb9d0681889]
[bt] #11 /usr/lib/x86_64-linux-gnu/libc.so.6(+0x8f1da) [0x7fb9cee8f1da]
[bt] #12 /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7fb9cef17d84]
```
|
priority
|
crash of unknown cause when writing affects versions os arch linux linux only desktop environment gnome wayland which version of libgtk do you use version of xournal installation method flatpak describe the bug crash of unknown cause to reproduce unknown i was simply writing in a document in xournalpp on my tablet expected behavior n a screenshots of problem n a additional context crash log date tue oct error signal xournalpp usr lib linux gnu libc so xournalpp xournalpp xournalpp xournalpp cairobbbb xournalpp xournalpp xournalpp usr lib linux gnu libglib so usr lib linux gnu libc so usr lib linux gnu libc so clone try to get a better stracktrace xournalpp usr lib linux gnu libc so xournalpp xournalpp xournalpp xournalpp cairobbbb xournalpp xournalpp xournalpp usr lib linux gnu libglib so usr lib linux gnu libc so usr lib linux gnu libc so clone
| 1
|
166,135
| 6,291,506,591
|
IssuesEvent
|
2017-07-20 01:04:56
|
GoogleCloudPlatform/google-cloud-eclipse
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
|
reopened
|
Cannot run StarterPipeline generated by Dataflow wizard (Dataflow 2.0.0)
|
bug high priority
|
```
Exception in thread "main" java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:233)
at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:162)
at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:52)
at org.apache.beam.sdk.Pipeline.create(Pipeline.java:141)
at f.StarterPipeline.main(StarterPipeline.java:50)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:222)
... 4 more
Caused by: java.lang.IllegalArgumentException: Missing object or bucket in path: 'gs://my-awesome-bucket/', did you mean: 'gs://some-bucket/my-awesome-bucket'?
at org.apache.beam.sdks.java.extensions.google.cloud.platform.core.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:383)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPath(GcsPathValidator.java:79)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.validateOutputFilePrefixSupported(GcsPathValidator.java:62)
at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:211)
... 9 more
```
Arguments given to `main()` are as follows:
```
arg: --runner=DataflowRunner
arg: --project=my-gcp-project-id
arg: --gcpTempLocation=gs://my-awesome-bucket
arg: --stagingLocation=gs://my-awesome-bucket
```
|
1.0
|
Cannot run StarterPipeline generated by Dataflow wizard (Dataflow 2.0.0) - ```
Exception in thread "main" java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:233)
at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:162)
at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:52)
at org.apache.beam.sdk.Pipeline.create(Pipeline.java:141)
at f.StarterPipeline.main(StarterPipeline.java:50)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:222)
... 4 more
Caused by: java.lang.IllegalArgumentException: Missing object or bucket in path: 'gs://my-awesome-bucket/', did you mean: 'gs://some-bucket/my-awesome-bucket'?
at org.apache.beam.sdks.java.extensions.google.cloud.platform.core.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:383)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPath(GcsPathValidator.java:79)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.validateOutputFilePrefixSupported(GcsPathValidator.java:62)
at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:211)
... 9 more
```
Arguments given to `main()` are as follows:
```
arg: --runner=DataflowRunner
arg: --project=my-gcp-project-id
arg: --gcpTempLocation=gs://my-awesome-bucket
arg: --stagingLocation=gs://my-awesome-bucket
```
|
priority
|
cannot run starterpipeline generated by dataflow wizard dataflow exception in thread main java lang runtimeexception failed to construct instance from factory method dataflowrunner fromoptions interface org apache beam sdk options pipelineoptions at org apache beam sdk util instancebuilder buildfrommethod instancebuilder java at org apache beam sdk util instancebuilder build instancebuilder java at org apache beam sdk pipelinerunner fromoptions pipelinerunner java at org apache beam sdk pipeline create pipeline java at f starterpipeline main starterpipeline java caused by java lang reflect invocationtargetexception at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache beam sdk util instancebuilder buildfrommethod instancebuilder java more caused by java lang illegalargumentexception missing object or bucket in path gs my awesome bucket did you mean gs some bucket my awesome bucket at org apache beam sdks java extensions google cloud platform core repackaged com google common base preconditions checkargument preconditions java at org apache beam sdk extensions gcp storage gcspathvalidator verifypath gcspathvalidator java at org apache beam sdk extensions gcp storage gcspathvalidator validateoutputfileprefixsupported gcspathvalidator java at org apache beam runners dataflow dataflowrunner fromoptions dataflowrunner java more arguments given to main are as follows arg runner dataflowrunner arg project my gcp project id arg gcptemplocation gs my awesome bucket arg staginglocation gs my awesome bucket
| 1
|
722,322
| 24,858,631,949
|
IssuesEvent
|
2022-10-27 06:09:52
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
closed
|
Platform permission 'cluster-management' can not authorized/unauthorized workspace
|
kind/bug priority/high
|
**Describe the Bug**
There is a user test, his platform permission is `cluster-management`, and did not invited to cluster host
**Versions Used**
KubeSphere: `v3.3.1-rc.5`
**How To Reproduce**
Steps to reproduce the behavior:
1. Login ks with user test
2. Go to cluster visibility of cluster host
3. authorized/unauthorized workspace

**Expected behavior**
can authorized/unauthorized workspace
|
1.0
|
Platform permission 'cluster-management' can not authorized/unauthorized workspace - **Describe the Bug**
There is a user test, his platform permission is `cluster-management`, and did not invited to cluster host
**Versions Used**
KubeSphere: `v3.3.1-rc.5`
**How To Reproduce**
Steps to reproduce the behavior:
1. Login ks with user test
2. Go to cluster visibility of cluster host
3. authorized/unauthorized workspace

**Expected behavior**
can authorized/unauthorized workspace
|
priority
|
platform permission cluster management can not authorized unauthorized workspace describe the bug there is a user test his platform permission is cluster management and did not invited to cluster host versions used kubesphere rc how to reproduce steps to reproduce the behavior login ks with user test go to cluster visibility of cluster host authorized unauthorized workspace expected behavior can authorized unauthorized workspace
| 1
|
158,497
| 6,028,805,632
|
IssuesEvent
|
2017-06-08 16:30:50
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
test request - https
|
Enhancement Priority-High
|
Please visit https://arctos.database.museum - everything should be the same as the http site - and let me know if something doesn't work or if you somehow end up back on http://arctos.database.museum or experience any other problems.
If there are no problems found in the next few days we can probably safely update incoming links (and possibly force all traffic to https).
You will get warnings (which should not stop loading) regarding media - I will update the things I can to a secure channel, and we should develop documentation guidelines regarding http/https (eg, for creating Media).
|
1.0
|
test request - https - Please visit https://arctos.database.museum - everything should be the same as the http site - and let me know if something doesn't work or if you somehow end up back on http://arctos.database.museum or experience any other problems.
If there are no problems found in the next few days we can probably safely update incoming links (and possibly force all traffic to https).
You will get warnings (which should not stop loading) regarding media - I will update the things I can to a secure channel, and we should develop documentation guidelines regarding http/https (eg, for creating Media).
|
priority
|
test request https please visit everything should be the same as the http site and let me know if something doesn t work or if you somehow end up back on or experience any other problems if there are no problems found in the next few days we can probably safely update incoming links and possibly force all traffic to https you will get warnings which should not stop loading regarding media i will update the things i can to a secure channel and we should develop documentation guidelines regarding http https eg for creating media
| 1
|
732,422
| 25,258,797,829
|
IssuesEvent
|
2022-11-15 20:38:48
|
openaq/openaq-fetch
|
https://api.github.com/repos/openaq/openaq-fetch
|
reopened
|
Iran (Tehran) - Data Sources
|
help wanted new data high priority needs investigation
|
This is listed as high priority b/c Tehran, Iran is experiencing very bad AQ. Media outlets report schools shut/will be shutting down to AQ. Plus it is in a region where we have minimal to no coverage currently in our system.
Most useful info:
List of stations and coordinates: http://31.24.238.89/home/station.aspx
Hourly physical concentration data: http://31.24.238.89/home/DataArchive.aspx
(also downloadable via csv)
Other info:
General map (in AQI format): http://air.tehran.ir/Default.aspx?tabid=193
General map with hourly data (in AQI format): http://31.24.238.89/home/OnlineAQI.aspx
It appears they are using the US EPA scale. I assume this means they are doing a similar calculation for US EPA
|
1.0
|
Iran (Tehran) - Data Sources - This is listed as high priority b/c Tehran, Iran is experiencing very bad AQ. Media outlets report schools shut/will be shutting down to AQ. Plus it is in a region where we have minimal to no coverage currently in our system.
Most useful info:
List of stations and coordinates: http://31.24.238.89/home/station.aspx
Hourly physical concentration data: http://31.24.238.89/home/DataArchive.aspx
(also downloadable via csv)
Other info:
General map (in AQI format): http://air.tehran.ir/Default.aspx?tabid=193
General map with hourly data (in AQI format): http://31.24.238.89/home/OnlineAQI.aspx
It appears they are using the US EPA scale. I assume this means they are doing a similar calculation for US EPA
|
priority
|
iran tehran data sources this is listed as high priority b c tehran iran is experiencing very bad aq media outlets report schools shut will be shutting down to aq plus it is in a region where we have minimal to no coverage currently in our system most useful info list of stations and coordinates hourly physical concentration data also downloadable via csv other info general map in aqi format general map with hourly data in aqi format it appears they are using the us epa scale i assume this means they are doing a similar calculation for us epa
| 1
|
762,350
| 26,716,174,927
|
IssuesEvent
|
2023-01-28 14:45:42
|
robertgouveia/JuiceJam
|
https://api.github.com/repos/robertgouveia/JuiceJam
|
opened
|
The user can see more of the level while in fullscreen.
|
bug priority: high
|
The game doesn't seem to be scaling properly. When I go into fullscreen I have an advantage because I can see much more of the level than I can in the small view.
Environment:
itch.io version 1 build
Reproducibility rate:
100%
Steps to reproduce:
1. Press the fullscreen button.
Expected result:
The game goes into fullscreen and scales up so it looks the same, just bigger.
Actual result:
The game goes into fullscreen and I can see more than I can in the smaller view.


|
1.0
|
The user can see more of the level while in fullscreen. - The game doesn't seem to be scaling properly. When I go into fullscreen I have an advantage because I can see much more of the level than I can in the small view.
Environment:
itch.io version 1 build
Reproducibility rate:
100%
Steps to reproduce:
1. Press the fullscreen button.
Expected result:
The game goes into fullscreen and scales up so it looks the same, just bigger.
Actual result:
The game goes into fullscreen and I can see more than I can in the smaller view.


|
priority
|
the user can see more of the level while in fullscreen the game doesn t seem to be scaling properly when i go into fullscreen i have an advantage because i can see much more of the level than i can in the small view environment itch io version build reproducibility rate steps to reproduce press the fullscreen button expected result the game goes into fullscreen and scales up so it looks the same just bigger actual result the game goes into fullscreen and i can see more than i can in the smaller view
| 1
|
758,690
| 26,565,172,186
|
IssuesEvent
|
2023-01-20 19:27:26
|
meanstream-io/meanstream
|
https://api.github.com/repos/meanstream-io/meanstream
|
closed
|
USK Type Change Closes Panel
|
type: bug priority: high complexity: low impact: ux
|
When a user changes the type of upstream key e.g. from Luma to Chroma, the expansion panel closes and needs to be reopened. This is irritating for the user at best.
|
1.0
|
USK Type Change Closes Panel - When a user changes the type of upstream key e.g. from Luma to Chroma, the expansion panel closes and needs to be reopened. This is irritating for the user at best.
|
priority
|
usk type change closes panel when a user changes the type of upstream key e g from luma to chroma the expansion panel closes and needs to be reopened this is irritating for the user at best
| 1
|
433,214
| 12,503,692,239
|
IssuesEvent
|
2020-06-02 07:43:59
|
CatalogueOfLife/clearinghouse-ui
|
https://api.github.com/repos/CatalogueOfLife/clearinghouse-ui
|
closed
|
Removed decision shown in assembly source tree
|
bug high priority tiny
|
When I delete a decision from the source tree in the assembly it is gone.
But when I close the source tree on some higher node and reopen it the decision show up again.
Deleting it once more causes a 404 - so it in already gone in the db and just a rendering problem
https://data.dev.catalogue.life/catalogue/3/assembly?assemblyTaxonKey=9d98b9e8-10a5-4bc7-80cb-5ca3b1e883de&datasetKey=1011&sourceTaxonKey=xA
|
1.0
|
Removed decision shown in assembly source tree - When I delete a decision from the source tree in the assembly it is gone.
But when I close the source tree on some higher node and reopen it the decision show up again.
Deleting it once more causes a 404 - so it in already gone in the db and just a rendering problem
https://data.dev.catalogue.life/catalogue/3/assembly?assemblyTaxonKey=9d98b9e8-10a5-4bc7-80cb-5ca3b1e883de&datasetKey=1011&sourceTaxonKey=xA
|
priority
|
removed decision shown in assembly source tree when i delete a decision from the source tree in the assembly it is gone but when i close the source tree on some higher node and reopen it the decision show up again deleting it once more causes a so it in already gone in the db and just a rendering problem
| 1
|
261,687
| 8,245,101,015
|
IssuesEvent
|
2018-09-11 08:42:06
|
bitshares/bitshares-ui
|
https://api.github.com/repos/bitshares/bitshares-ui
|
reopened
|
[1][kapeer] Unable to update a smart coin's backing asset
|
bug high priority
|
**Describe the bug**
Should be able to update backing asset when supply is zero.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://wallet.bitshares.org/
2. Click on hamburg button, click assets
3. Click "update asset" of an smart coin which has zero supply
4. Click "smartcoin options" tab, scroll down
5. After changed asset name in "short backing asset" box, the "update asset" button is still disabled
6. Change something else in the page
7. click "reset" button, backing asset doesn't change
8. change something else in the page
9. Click "update asset" button
10. login,
11. check info in transaction confirmation page, the backing asset is not changed
**Expected behavior**
Able to change backing asset when supply is zero.
**Screenshots**




|
1.0
|
[1][kapeer] Unable to update a smart coin's backing asset - **Describe the bug**
Should be able to update backing asset when supply is zero.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://wallet.bitshares.org/
2. Click on hamburg button, click assets
3. Click "update asset" of an smart coin which has zero supply
4. Click "smartcoin options" tab, scroll down
5. After changed asset name in "short backing asset" box, the "update asset" button is still disabled
6. Change something else in the page
7. click "reset" button, backing asset doesn't change
8. change something else in the page
9. Click "update asset" button
10. login,
11. check info in transaction confirmation page, the backing asset is not changed
**Expected behavior**
Able to change backing asset when supply is zero.
**Screenshots**




|
priority
|
unable to update a smart coin s backing asset describe the bug should be able to update backing asset when supply is zero to reproduce steps to reproduce the behavior go to click on hamburg button click assets click update asset of an smart coin which has zero supply click smartcoin options tab scroll down after changed asset name in short backing asset box the update asset button is still disabled change something else in the page click reset button backing asset doesn t change change something else in the page click update asset button login check info in transaction confirmation page the backing asset is not changed expected behavior able to change backing asset when supply is zero screenshots
| 1
|
439,909
| 12,690,330,025
|
IssuesEvent
|
2020-06-21 11:32:55
|
aysegulsari/swe-573
|
https://api.github.com/repos/aysegulsari/swe-573
|
closed
|
Advanced search feature implementation
|
Priority: High Status: Pending Type: Development
|
Implement a search feature that will enabled user to search for other user profiles and recipes.
|
1.0
|
Advanced search feature implementation - Implement a search feature that will enabled user to search for other user profiles and recipes.
|
priority
|
advanced search feature implementation implement a search feature that will enabled user to search for other user profiles and recipes
| 1
|
388,229
| 11,484,868,434
|
IssuesEvent
|
2020-02-11 05:34:37
|
openmsupply/mobile
|
https://api.github.com/repos/openmsupply/mobile
|
closed
|
Can't select an ItemDirection
|
Bug: development Docs: not needed Effort: small Module: dispensary Priority: high
|
## Describe the bug
`ItemDirection`s in the drop down aren't selectable
### To reproduce
Dispensing development bug
### Expected behaviour
Dispensing development bug
### Proposed Solution
Dispensing development bug
### Version and device info
Dispensing development bug
### Additional context
Dispensing development bug
|
1.0
|
Can't select an ItemDirection - ## Describe the bug
`ItemDirection`s in the drop down aren't selectable
### To reproduce
Dispensing development bug
### Expected behaviour
Dispensing development bug
### Proposed Solution
Dispensing development bug
### Version and device info
Dispensing development bug
### Additional context
Dispensing development bug
|
priority
|
can t select an itemdirection describe the bug itemdirection s in the drop down aren t selectable to reproduce dispensing development bug expected behaviour dispensing development bug proposed solution dispensing development bug version and device info dispensing development bug additional context dispensing development bug
| 1
|
304,121
| 9,321,477,133
|
IssuesEvent
|
2019-03-27 04:06:08
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
closed
|
Type ignore comment has no effect after argument type comment
|
bug false-positive priority-0-high
|
This program generates an error on the first line even though there is a `# type: ignore` comment:
```py
def f(x, # type: x # type: ignore
):
# type: (...) -> None
pass
```
Similarly, this generates an error even though it shouldn't:
```py
def f(x=y, # type: int # type: ignore # Name 'y' not defined
):
# type: (...) -> None
pass
```
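One possible workaround (my suggestion, not something the issue or the mypy documentation prescribes) is to drop the per-argument type comments entirely and fold the argument types into the signature-level type comment, so there is no per-argument `# type:` comment for `# type: ignore` to collide with:

```python
# Hypothetical workaround: a single signature-level type comment
# carries the argument types, so no per-argument "# type:" comment
# is needed and the ignore-placement collision never arises.
def f(x):
    # type: (int) -> None
    pass

f(3)
```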
|
1.0
|
Type ignore comment has no effect after argument type comment - This program generates an error on the first line even though there is a `# type: ignore` comment:
```py
def f(x, # type: x # type: ignore
):
# type: (...) -> None
pass
```
Similarly, this generates an error even though it shouldn't:
```py
def f(x=y, # type: int # type: ignore # Name 'y' not defined
):
# type: (...) -> None
pass
```
|
priority
|
type ignore comment has no effect after argument type comment this program generates an error on the first line even though there is a type ignore comment py def f x type x type ignore type none pass similarly this generates an error even though it shouldn t py def f x y type int type ignore name y not defined type none pass
| 1
|
198,958
| 6,979,474,713
|
IssuesEvent
|
2017-12-12 21:10:40
|
redhat-nfvpe/kube-centos-ansible
|
https://api.github.com/repos/redhat-nfvpe/kube-centos-ansible
|
closed
|
Ability to install optional packages on hosts
|
priority:high state:needs_review type:enhancement
|
Used to have a play where I installed packages that not everyone needs, but that I need every single time I spin up a cluster. Especially an editor and network tracing tools (e.g. tcpdump).
I'm going to create a playbook that lets you set those packages if you need.
|
1.0
|
Ability to install optional packages on hosts - Used to have a play where I installed packages that not everyone needs, but that I need every single time I spin up a cluster. Especially an editor and network tracing tools (e.g. tcpdump).
I'm going to create a playbook that lets you set those packages if you need.
|
priority
|
ability to install optional packages on hosts used to have some play where i installed packages that not everyone needs but that i need every single time i spin up a cluster especially an editor and network tracing tools e g tcpdump i m going to create a playbook that lets you set those packages if you need
| 1
|
166,587
| 6,307,208,868
|
IssuesEvent
|
2017-07-21 23:49:00
|
haskell/cabal
|
https://api.github.com/repos/haskell/cabal
|
closed
|
Version number of Cabal on 2.0 branch is incorrect
|
priority: high
|
The version number listed in `Cabal.cabal` is 2.0.0.0 whereas the tag claims the release should be 2.0.0.1.
Thanks to @hvr for noticing this.
|
1.0
|
Version number of Cabal on 2.0 branch is incorrect - The version number listed in `Cabal.cabal` is 2.0.0.0 whereas the tag claims the release should be 2.0.0.1.
Thanks to @hvr for noticing this.
|
priority
|
version number of cabal on branch is incorrect the version number listed in cabal cabal is whereas the tag claims the release should be thanks to hvr for noticing this
| 1
|
690,651
| 23,668,053,898
|
IssuesEvent
|
2022-08-27 00:45:34
|
earth-chris/earthlib
|
https://api.github.com/repos/earth-chris/earthlib
|
opened
|
Create service account for testing `ee` functions in CI
|
high effort low priority
|
You need to run `ee.Initialize()` prior to running any earth engine calls, which requires setting up a service account. This should be done when the package is stable enough to handle this (lotsa refactoring goin' on right now).
Here's a useful [SO post](https://gis.stackexchange.com/questions/377222/creating-automated-tests-using-google-earth-engine-python-api) to guide this.
|
1.0
|
Create service account for testing `ee` functions in CI - You need to run `ee.Initialize()` prior to running any earth engine calls, which requires setting up a service account. This should be done when the package is stable enough to handle this (lotsa refactoring goin' on right now).
Here's a useful [SO post](https://gis.stackexchange.com/questions/377222/creating-automated-tests-using-google-earth-engine-python-api) to guide this.
|
priority
|
create service account for testing ee functions in ci you need to run ee initialize prior to running any earth engine calls which requires setting up a service account this should be done when the package is stable enough to handle this lotsa refactoring goin on right now here s a useful to guide this
| 1
|
376,340
| 11,142,350,106
|
IssuesEvent
|
2019-12-22 09:01:35
|
bounswe/bounswe2019group3
|
https://api.github.com/repos/bounswe/bounswe2019group3
|
closed
|
send exercise answers
|
Front-end Priority: High Status: In Progress
|
I added the exercises functionality but sending the answers part is not implemented yet.
|
1.0
|
send exercise answers - I added the exercises functionality but sending the answers part is not implemented yet.
|
priority
|
send exercise answers i added the exercises functionality but sending the answers part is not implemented yet
| 1
|
495,697
| 14,286,474,068
|
IssuesEvent
|
2020-11-23 15:11:47
|
PMEAL/OpenPNM
|
https://api.github.com/repos/PMEAL/OpenPNM
|
closed
|
GenericTransport._is_converged shoudn't raise Exception
|
bug easy high priority
|
`_is_converged` is only supposed to return `True/False`.
|
1.0
|
GenericTransport._is_converged shoudn't raise Exception - `_is_converged` is only supposed to return `True/False`.
|
priority
|
generictransport is converged shoudn t raise exception is converged is only supposed to return true false
| 1
|
117,821
| 4,728,086,227
|
IssuesEvent
|
2016-10-18 15:07:07
|
INN/largo-related-posts
|
https://api.github.com/repos/INN/largo-related-posts
|
closed
|
Incorrect display of related posts in widget
|
priority: high type: bug
|
From Mike:
When I publish the page, the related posts are not what I selected. Instead they seem to be completely different posts than what I have selected.
Example post: http://training-stage.publicbroadcasting.net/blog/test-post/
|
1.0
|
Incorrect display of related posts in widget - From Mike:
When I publish the page, the related posts are not what I selected. Instead they seem to be completely different posts than what I have selected.
Example post: http://training-stage.publicbroadcasting.net/blog/test-post/
|
priority
|
incorrect display of related posts in widget from mike when i publish the page the related posts are not what i selected instead they seem to be completely different posts than what i have selected example post
| 1
|
492,463
| 14,213,719,794
|
IssuesEvent
|
2020-11-17 03:16:21
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
closed
|
Manuscript Revision Checklist - 5) Enhance Discussion
|
high priority :warning:
|
### Enhance Discussion
- Clarify what high-level functions are still missing and the direction in which nk is going to implement these
- [x] Explain state of unittest, documentation, development, short-term targets
- Zen: since pytest is already mentioned in the body of the paragraph, we can talk about following coding best practices to maintain readability of code, use of black/pylint/docformatter
- [ ] State whether there are plans of advertising package (tutorial or spring at conferences), hire dedicated developers through grants - why if not
- [x] Give stronger arguments about reproducibility and how it can impact research in long run
### What has been done
- *Zen: added paragraph in discussion regarding code quality - NeuroKit2 also prioritizes a high standard of quality control during code development. This is done through automated testing of code using unittest-based tests that follow coding best practices, reducing code complexity and maintaining code readability. Additionally, we provide guidelines for new contributors to follow when writing code, encouraging them to follow PEP 8 coding conventions and fix code errors, as documented in our `readthedocs` page (*https://neurokit2.readthedocs.io/en/latest/contributing/contributing.html*).*
- *TamZen*: added paragraph in discussion about reproducibility arguments which are two-fold: 1) nk as compared to other black-box softwares allows for better tracing of the discrepancy of results in analysis pipelines, 2) nk offers several methods, allows for comparison and benchmarking of derived results
See #353
|
1.0
|
Manuscript Revision Checklist - 5) Enhance Discussion - ### Enhance Discussion
- Clarify what high-level functions are still missing and the direction in which nk is going to implement these
- [x] Explain state of unittest, documentation, development, short-term targets
- Zen: since pytest is already mentioned in the body of the paragraph, we can talk about following coding best practices to maintain readability of code, use of black/pylint/docformatter
- [ ] State whether there are plans of advertising package (tutorial or spring at conferences), hire dedicated developers through grants - why if not
- [x] Give stronger arguments about reproducibility and how it can impact research in long run
### What has been done
- *Zen: added paragraph in discussion regarding code quality - NeuroKit2 also prioritizes a high standard of quality control during code development. This is done through automated testing of code using unittest-based tests that follow coding best practices, reducing code complexity and maintaining code readability. Additionally, we provide guidelines for new contributors to follow when writing code, encouraging them to follow PEP 8 coding conventions and fix code errors, as documented in our `readthedocs` page (*https://neurokit2.readthedocs.io/en/latest/contributing/contributing.html*).*
- *TamZen*: added paragraph in discussion about reproducibility arguments which are two-fold: 1) nk as compared to other black-box softwares allows for better tracing of the discrepancy of results in analysis pipelines, 2) nk offers several methods, allows for comparison and benchmarking of derived results
See #353
|
priority
|
manuscript revision checklist enhance discussion enhance discussion clarify what high level functions are still missing and the direction in which nk is going to implement these explain state of unittest documentation development short term targets zen since pytest is already mentioned in the body of the paragraph we can talk about following coding best practices to maintain readability of code use of black pylint docformatter state whether there are plans of advertising package tutorial or spring at conferences hire dedicated developers through grants why if not give stronger arguments about reproducibility and how it can impact research in long run what has been done zen added paragraph in discussion regarding code quality also prioritizes a high standard of quality control during code development this is done through automated testing of code using unittest based tests that follow coding best practices reducing code complexity and maintaining code readability additionally we provide guidelines for new contributors to follow when writing code encouraging them to follow pep coding conventions and fix code errors as documented in our readthedocs page tamzen added paragraph in discussion about reproducibility arguments which are two fold nk as compared to other black box softwares allows for better tracing of the discrepancy of results in analysis pipelines nk offers several methods allows for comparison and benchmarking of derived results see
| 1
|
87,502
| 3,755,545,648
|
IssuesEvent
|
2016-03-12 18:46:50
|
cs2103jan2016-t11-3j/main
|
https://api.github.com/repos/cs2103jan2016-t11-3j/main
|
closed
|
Bug in edit
|
priority.high type.bug
|
Edits the wrong item sometimes, notably when the task list is first loaded.
Works fine if it's executed after "search"/"display"
|
1.0
|
Bug in edit - Edits the wrong item sometimes, notably when the task list is first loaded.
Works fine if it's executed after "search"/"display"
|
priority
|
bug in edit edits the wrong item sometimes notably when the task list is first loaded works fine if it s executed after search display
| 1
|
477,004
| 13,753,856,924
|
IssuesEvent
|
2020-10-06 16:08:56
|
rstudio/gt
|
https://api.github.com/repos/rstudio/gt
|
closed
|
save as RTF file with rowname_col specified
|
Difficulty: [3] Advanced Effort: [3] High Priority: ♨︎ Critical Type: ☹︎ Bug
|
When trying to save a gt object as an RTF file, I'm running into an issue if rowname_col is specified. I'm getting the error message "Error in row_splits[[i]] : subscript out of bounds".
I don't understand why the below code would lead to this error, but it's very possible this is something I don't understand rather than an issue with gt.
```r
df <- tibble(x = c("A", "B", "B", "C"),
y = c("1", "2", "2", "8"),
z = c("Low", "Low", "High", "Low"))
# as_rtf() throws error "Error in row_splits[[i]] : subscript out of bounds"
df %>%
gt(rowname_col = "x") %>%
as_rtf()
#same error when using gtsave()
df %>%
gt(rowname_col = "x") %>%
gtsave("test.rtf")
```
|
1.0
|
save as RTF file with rowname_col specified - When trying to save a gt object as an RTF file, I'm running into an issue if rowname_col is specified. I'm getting the error message "Error in row_splits[[i]] : subscript out of bounds".
I don't understand why the below code would lead to this error, but it's very possible this is something I don't understand rather than an issue with gt.
```r
df <- tibble(x = c("A", "B", "B", "C"),
y = c("1", "2", "2", "8"),
z = c("Low", "Low", "High", "Low"))
# as_rtf() throws error "Error in row_splits[[i]] : subscript out of bounds"
df %>%
gt(rowname_col = "x") %>%
as_rtf()
#same error when using gtsave()
df %>%
gt(rowname_col = "x") %>%
gtsave("test.rtf")
```
|
priority
|
save as rtf file with rowname col specified when trying to save a gt object as an rtf file i m running into an issue if rowname col is specified i m getting the error message error in row splits subscript out of bounds i don t understand why the below code would lead to this error but its very possible this is something i don t understand rather than an issue with gt r df tibble x c a b b c y c z c low low high low as rtf throws error error in row splits subscript out of bounds df gt rowname col x as rtf same error when using gtsave df gt rowname col x gtsave test rtf
| 1
|
741,313
| 25,788,164,883
|
IssuesEvent
|
2022-12-09 23:09:28
|
zulip/zulip-mobile
|
https://api.github.com/repos/zulip/zulip-mobile
|
closed
|
"Mark all as read" appears to mark all as _unread_
|
P1 high-priority
|
(Marking P1 because this came up during work to have Flow check flagsReducer-test.js, toward https://github.com/zulip/zulip-mobile/issues/5102, disallow ancient servers.)
To reproduce:
- Arrange to have just a few unreads in the "All messages" view, and go to that view
- Tap "Mark all as read"
- See that the unread marker doesn't disappear from the unread messages in the list
- See also that an unread marker *appears* on all the other loaded messages in the list
As long as the API request succeeds, the server does actually mark the messages as read. So what's going on?
Well, we have this `Object.keys` in flagsReducer.js:
```js
const eventUpdateMessageFlags = (state, action) => {
if (action.all) {
if (action.op === 'add') {
return addFlagsForMessages(initialState, Object.keys(action.allMessages).map(Number), [
action.flag,
]);
}
```
But note that:
- `allMessages` is a `MessagesState` value, so it's an `Immutable.Map<number, Message>`.
- If you `Object.keys` one of those, then you get… `['size', '_root', '__ownerID', '__hash', '__altered']`. _That's_ not [how you're supposed to get keys from an Immutable.Map](https://immutable-js.com/docs/v4.1.0/Map/#keys()). 😛 (By the way, did you know that https://immutable-js.com/ sets up the dev-tools console as an `Immutable` playground?)
- We map those string keys through `Number`, giving `[NaN, NaN, NaN, NaN, NaN]`.
- We wipe any data in the flags state (see the `initialState` in the quoted code above, and see #5596)… but then, when we _mean_ to fill in the IDs of the messages you've just marked as read, we instead put an object with `NaN: true` as `state.read`.
|
1.0
|
"Mark all as read" appears to mark all as _unread_ - (Marking P1 because this came up during work to have Flow check flagsReducer-test.js, toward https://github.com/zulip/zulip-mobile/issues/5102, disallow ancient servers.)
To reproduce:
- Arrange to have just a few unreads in the "All messages" view, and go to that view
- Tap "Mark all as read"
- See that the unread marker doesn't disappear from the unread messages in the list
- See also that an unread marker *appears* on all the other loaded messages in the list
As long as the API request succeeds, the server does actually mark the messages as read. So what's going on?
Well, we have this `Object.keys` in flagsReducer.js:
```js
const eventUpdateMessageFlags = (state, action) => {
if (action.all) {
if (action.op === 'add') {
return addFlagsForMessages(initialState, Object.keys(action.allMessages).map(Number), [
action.flag,
]);
}
```
But note that:
- `allMessages` is a `MessagesState` value, so it's an `Immutable.Map<number, Message>`.
- If you `Object.keys` one of those, then you get… `['size', '_root', '__ownerID', '__hash', '__altered']`. _That's_ not [how you're supposed to get keys from an Immutable.Map](https://immutable-js.com/docs/v4.1.0/Map/#keys()). 😛 (By the way, did you know that https://immutable-js.com/ sets up the dev-tools console as an `Immutable` playground?)
- We map those string keys through `Number`, giving `[NaN, NaN, NaN, NaN, NaN]`.
- We wipe any data in the flags state (see the `initialState` in the quoted code above, and see #5596)… but then, when we _mean_ to fill in the IDs of the messages you've just marked as read, we instead put an object with `NaN: true` as `state.read`.
|
priority
|
mark all as read appears to mark all as unread marking because this came up during work to have flow check flagsreducer test js toward disallow ancient servers to reproduce arrange to have just a few unreads in the all messages view and go to that view tap mark all as read see that the unread marker doesn t disappear from the unread messages in the list see also that an unread marker appears on all the other loaded messages in the list as long as the api request succeeds the server does actually mark the messages as read so what s going on well we have this object keys in flagsreducer js js const eventupdatemessageflags state action if action all if action op add return addflagsformessages initialstate object keys action allmessages map number action flag but note that allmessages is a messagesstate value so it s an immutable map if you object keys one of those then you get… that s not 😛 by the way did you know that sets up the dev tools console as an immutable playground we map those string keys through number giving we wipe any data in the flags state see the initialstate in the quoted code above and see … but then when we mean to fill in the ids of the messages you ve just marked as read we instead put an object with nan true as state read
| 1
|
824,750
| 31,169,251,991
|
IssuesEvent
|
2023-08-16 22:52:51
|
filamentphp/filament
|
https://api.github.com/repos/filamentphp/filament
|
closed
|
Builder broken - Undefined array key "type"
|
bug confirmed bug in dependency high priority
|
### Package
filament/filament
### Package Version
v3.0.7
### Laravel Version
v10.17.1
### Livewire Version
_No response_
### PHP Version
PHP 8.2
### Problem description
Simple Builder setup causes error `Undefined array key "type"` when saving/deleting.
Error: https://flareapp.io/share/87ne66yP
### Expected behavior
Adding and removing blocks without error.
### Steps to reproduce
1. Add to resource form:
```php
Forms\Components\Builder::make('builder')
->blocks([
Forms\Components\Builder\Block::make('block')
->schema([
Forms\Components\Toggle::make('enabled'),
Forms\Components\TextInput::make('title'),
]),
])
```
2. Add Block
3. Remove Block
4. Add Block
5. Remove Block --> Error `Undefined array key "type"`
### Reproduction repository
https://github.com/martin-ro/f3
### Relevant log output
```shell
https://flareapp.io/share/87ne66yP
```
|
1.0
|
Builder broken - Undefined array key "type" - ### Package
filament/filament
### Package Version
v3.0.7
### Laravel Version
v10.17.1
### Livewire Version
_No response_
### PHP Version
PHP 8.2
### Problem description
Simple Builder setup causes error `Undefined array key "type"` when saving/deleting.
Error: https://flareapp.io/share/87ne66yP
### Expected behavior
Adding and removing blocks without error.
### Steps to reproduce
1. Add to resource form:
```php
Forms\Components\Builder::make('builder')
->blocks([
Forms\Components\Builder\Block::make('block')
->schema([
Forms\Components\Toggle::make('enabled'),
Forms\Components\TextInput::make('title'),
]),
])
```
2. Add Block
3. Remove Block
4. Add Block
5. Remove Block --> Error `Undefined array key "type"`
### Reproduction repository
https://github.com/martin-ro/f3
### Relevant log output
```shell
https://flareapp.io/share/87ne66yP
```
|
priority
|
builder broken undefined array key type package filament filament package version laravel version livewire version no response php version php problem description simple builder setup causes error undefined array key type when saving deleting error expected behavior adding and removing blocks without error steps to reproduce add to resource form php forms components builder make builder blocks forms components builder block make block schema forms components toggle make enabled forms components textinput make title add block remove block add block remove block error undefined array key type reproduction repository relevant log output shell
| 1
|
554,340
| 16,418,268,475
|
IssuesEvent
|
2021-05-19 09:25:21
|
UAlbertaALTLab/cree-intelligent-dictionary
|
https://api.github.com/repos/UAlbertaALTLab/cree-intelligent-dictionary
|
closed
|
Provide results for multi-word/key (English) searches
|
Improvement end-user/community feedback high-priority
|
Feedback via itwêwina feedback form from `███████████████
@gmail.com`:
> translate compound words such as _thank you_ without having to search just the first word.
> Love the sight and find it very useful
This would appear to be directly relevant in the current work on improving search and relevance.
|
1.0
|
Provide results for multi-word/key (English) searches - Feedback via itwêwina feedback form from `███████████████
@gmail.com`:
> translate compound words such as _thank you_ without having to search just the first word.
> Love the sight and find it very useful
This would appear to be directly relevant in the current work on improving search and relevance.
|
priority
|
provide results for multi word key english searches feedback via itwêwina feedback form from ███████████████ gmail com translate compound words such as thank you without having to search just the first word love the sight and find it very useful this would appear to be directly relevant in the current work on improving search and relevance
| 1
|
21,886
| 2,642,492,871
|
IssuesEvent
|
2015-03-12 00:28:46
|
sul-dlss/triannon
|
https://api.github.com/repos/sul-dlss/triannon
|
closed
|
use https for Fedora, not http
|
priority: high
|
On triannon box, rails logs read thus (with https in triannon.yml):
Faraday::SSLError (SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed):
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `block in connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:852:in `start'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:1369:in `request'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:1128:in `get'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:80:in `perform_request'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:40:in `block in call'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:87:in `with_net_http_connection'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:32:in `call'
faraday (0.9.1) lib/faraday/request/url_encoded.rb:15:in `call'
faraday (0.9.1) lib/faraday/rack_builder.rb:139:in `build_response'
faraday (0.9.1) lib/faraday/connection.rb:377:in `run_request'
faraday (0.9.1) lib/faraday/connection.rb:140:in `get'
triannon (0.5.4) app/services/triannon/ldp_loader.rb:82:in `get_ttl'
need to set ssl version stuff on faraday connection?
|
1.0
|
use https for Fedora, not http - On triannon box, rails logs read thus (with https in triannon.yml):
Faraday::SSLError (SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed):
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `block in connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:920:in `connect'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:852:in `start'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:1369:in `request'
/usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/net/http.rb:1128:in `get'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:80:in `perform_request'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:40:in `block in call'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:87:in `with_net_http_connection'
faraday (0.9.1) lib/faraday/adapter/net_http.rb:32:in `call'
faraday (0.9.1) lib/faraday/request/url_encoded.rb:15:in `call'
faraday (0.9.1) lib/faraday/rack_builder.rb:139:in `build_response'
faraday (0.9.1) lib/faraday/connection.rb:377:in `run_request'
faraday (0.9.1) lib/faraday/connection.rb:140:in `get'
triannon (0.5.4) app/services/triannon/ldp_loader.rb:82:in `get_ttl'
need to set ssl version stuff on faraday connection?
|
priority
|
use https for fedora not http on triannon box rails logs read thus with https in triannon yml faraday sslerror ssl connect returned errno state read server certificate b certificate verify failed usr local rvm rubies ruby lib ruby net http rb in connect usr local rvm rubies ruby lib ruby net http rb in block in connect usr local rvm rubies ruby lib ruby timeout rb in timeout usr local rvm rubies ruby lib ruby net http rb in connect usr local rvm rubies ruby lib ruby net http rb in do start usr local rvm rubies ruby lib ruby net http rb in start usr local rvm rubies ruby lib ruby net http rb in request usr local rvm rubies ruby lib ruby net http rb in get faraday lib faraday adapter net http rb in perform request faraday lib faraday adapter net http rb in block in call faraday lib faraday adapter net http rb in with net http connection faraday lib faraday adapter net http rb in call faraday lib faraday request url encoded rb in call faraday lib faraday rack builder rb in build response faraday lib faraday connection rb in run request faraday lib faraday connection rb in get triannon app services triannon ldp loader rb in get ttl need to set ssl version stuff on faraday connection
| 1
|
424,791
| 12,323,366,558
|
IssuesEvent
|
2020-05-13 12:03:14
|
hetic-newsroom/heticiens.news
|
https://api.github.com/repos/hetic-newsroom/heticiens.news
|
closed
|
UI back-office 1.0
|
UI feature high-priority
|
## Tasklist :rocket:
- [x] Re-enable the login page
- [x] Basic publishing page with CKEditor, which sends the data to the publishing endpoint
- [x] Page where users can edit their profile and password
- Moderation:
- [x] Change the edit page to save as a draft rather than publishing directly,
- [x] Give moderators access to a dedicated page that lists draft articles and lets them approve them
## Documentation for the APIs to use
See #20
|
1.0
|
UI back-office 1.0 - ## Tasklist :rocket:
- [x] Re-enable the login page
- [x] Basic publishing page with CKEditor, which sends the data to the publishing endpoint
- [x] Page where users can edit their profile and password
- Moderation:
- [x] Change the edit page to save as a draft rather than publishing directly,
- [x] Give moderators access to a dedicated page that lists draft articles and lets them approve them
## Documentation for the APIs to use
See #20
|
priority
|
ui back office tasklist rocket réactiver la page de login page de publication basique avec ckeditor qui envoie les données à l endpoint de publication page où l on peut éditer son profil son mot de passe modération changer la page d édition pour enregistrer en draft plutôt que publier directement donner aux modérateurs l accès à une page spéciale qui list les articles en draft et permet de les valider documentation apis à utiliser voir
| 1
|
745,850
| 26,003,944,211
|
IssuesEvent
|
2022-12-20 17:30:57
|
nabla-studio/nablajs
|
https://api.github.com/repos/nabla-studio/nablajs
|
closed
|
Integrate MobX inside keyring package
|
enhancement high priority review
|
Should integrate MobX inside the keyring in order to make some elements reactive.
|
1.0
|
Integrate MobX inside keyring package - Should integrate MobX inside the keyring in order to make some elements reactive.
|
priority
|
integrate mobx inside keyring package should integrate mobx inside the keyring in order to make some elements responsive
| 1
|
157,356
| 5,997,416,306
|
IssuesEvent
|
2017-06-04 00:01:05
|
ncssar/sartopo-feature-requests
|
https://api.github.com/repos/ncssar/sartopo-feature-requests
|
opened
|
'convert to track' in line popup
|
Priority:High SAR-specific
|
This would be the quickest, easiest solution for converting lines to tracks without having to precisely right-click on the line.
|
1.0
|
'convert to track' in line popup - This would be the quickest, easiest solution for converting lines to tracks without having to precisely right-click on the line.
|
priority
|
convert to track in line popup this would be the quickest easiest solution for converting lines to tracks without having to precisely right click on the line
| 1
|
275,847
| 8,581,382,565
|
IssuesEvent
|
2018-11-13 14:34:24
|
creativecommons/commoners
|
https://api.github.com/repos/creativecommons/commoners
|
closed
|
Provide Membership Council a way to ask for clarification for incomplete vouching statements
|
Priority: High 🔥
|
- Maybe including a checkbox? We need a way to bounce back incomplete applications when the Membership Council don't have enough information about a certain applicant.
- Maybe/also giving the admin powers to manually edit some fields on the application?
|
1.0
|
Provide Membership Council a way to ask for clarification for incomplete vouching statements - - Maybe including a checkbox? We need a way to bounce back incomplete applications when the Membership Council don't have enough information about a certain applicant.
- Maybe/also giving the admin powers to manually edit some fields on the application?
|
priority
|
provide membership council a way to ask for clarification for incomplete vouching statements maybe including a checkbox we need a way to bounce back incomplete applications when the membership council don t have enough information about a certain applicant maybe also giving the admin powers to manually edit some fields on the application
| 1
|
782,446
| 27,496,901,071
|
IssuesEvent
|
2023-03-05 08:36:03
|
commons-app/apps-android-commons
|
https://api.github.com/repos/commons-app/apps-android-commons
|
closed
|
Removing failed uploads from ContributionsList
|
question high priority
|
From andrea as:
> Hi, usually I, from commons app, tap gallery and search in Google photos and dont crash immidiatly but when is uploading.
> Often The uploading dont finish and i have a lot of upload not finish in the top of my list "non riuscito" how can i erase it?
Do we have a way for the user to remove the failed uploads from their Contributions list? If not, should we?
|
1.0
|
Removing failed uploads from ContributionsList - From andrea as:
> Hi, usually I, from commons app, tap gallery and search in Google photos and dont crash immidiatly but when is uploading.
> Often The uploading dont finish and i have a lot of upload not finish in the top of my list "non riuscito" how can i erase it?
Do we have a way for the user to remove the failed uploads from their Contributions list? If not, should we?
|
priority
|
removing failed uploads from contributionslist from andrea as hi usually i from commons app tap gallery and search in google photos and dont crash immidiatly but when is uploading often the uploading dont finish and i have a lot of upload not finish in the top of my list non riuscito how can i erase it do we have a way for the user to remove the failed uploads from their contributions list if not should we
| 1
|
781,225
| 27,428,385,630
|
IssuesEvent
|
2023-03-01 22:21:50
|
canonical/jaas-dashboard
|
https://api.github.com/repos/canonical/jaas-dashboard
|
closed
|
When a user does not have access to any models, they see a forever spinner
|
Priority: High Bug 🐛
|
**Describe the bug**
When a user does not have access to any models, they see a forever spinner
**To Reproduce**
Spin up the dashboard while logged in as a user without access to any models.
**Expected behaviour**
You should see a short spinner followed by a message informing you that you don't currently have access to any models but here is a link to the documentation to add a model.
|
1.0
|
When a user does not have access to any models, they see a forever spinner - **Describe the bug**
When a user does not have access to any models, they see a forever spinner
**To Reproduce**
Spin up the dashboard while logged in as a user without access to any models.
**Expected behaviour**
You should see a short spinner followed by a message informing you that you don't currently have access to any models but here is a link to the documentation to add a model.
|
priority
|
when a user does not have access to any models they see a forever spinner describe the bug when a user does not have access to any models they see a forever spinner to reproduce spin up the dashboard while logged in as a user without access to any models expected behaviour you should see a short spinner followed by a message informing you that you don t currently have access to any models but here is a link to the documentation to add a model
| 1
|
554,515
| 16,431,449,848
|
IssuesEvent
|
2021-05-20 02:33:32
|
CanberraOceanRacingClub/namadgi3
|
https://api.github.com/repos/CanberraOceanRacingClub/namadgi3
|
opened
|
Main sail replacement
|
Sails committee decision required priority 1: High shopping list
|
During a recent routine service of the main (#304), Hood sailmakers assessed its condition as "nearing end of life". Ben reported that under normal usage condition the main probably has 12 months of useful life left".
But, he noted, with CORC high usage pattern he recommends replacement by the end of 2021. Use beyond this date would risk mainsail blowout.
A replacement main needs to be considered by the committee.
@delcosta, @PSARN @peterottesen @mrmrmartin -- additional thoughts and comments please. We will put this to the committee next week. Before then can we develop a plan for approval by the committee?
|
1.0
|
Main sail replacement - During a recent routine service of the main (#304), Hood sailmakers assessed its condition as "nearing end of life". Ben reported that under normal usage condition the main probably has 12 months of useful life left".
But, he noted, with CORC high usage pattern he recommends replacement by the end of 2021. Use beyond this date would risk mainsail blowout.
A replacement main needs to be considered by the committee.
@delcosta, @PSARN @peterottesen @mrmrmartin -- additional thoughts and comments please. We will put this to the committee next week. Before then can we develop a plan for approval by the committee?
|
priority
|
main sail replacement during a recent routine service of the main hood sailmakers assessed its condition as nearing end of life ben reported that under normal usage condition the main probably has months of useful life left but he noted with corc high usage pattern he recommends replacement by the end of use beyond this date would risk mainsail blowout a replacement main needs to be considered by the committee delcosta psarn peterottesen mrmrmartin additional thoughts and comments please we will put this to the committee next week before then can we develop a plan for approval by the committee
| 1
|
511,128
| 14,854,522,924
|
IssuesEvent
|
2021-01-18 11:26:03
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Denoise profiled: central image goes black
|
bug: pending priority: high reproduce: confirmed
|
DT current git master, nVidia 1060 GPU
This seems to be a corner case in denoise profile, the central image goes black in specific conditions.
To reproduce:
- open an image in darkroom
- apply a preset including wavelet, for example the built-in preset "wavelets: chroma only"
- change mode to "non-local means" or "non-local means auto"
- see the central image going black
The issue is observed in both Linux and Windows, but only with OpenCL active.
It is not related to the recent fixes in denoise profiled, as it happens also in 3.4
|
1.0
|
Denoise profiled: central image goes black - DT current git master, nVidia 1060 GPU
This seems to be a corner case in denoise profile, the central image goes black in specific conditions.
To reproduce:
- open an image in darkroom
- apply a preset including wavelet, for example the built-in preset "wavelets: chroma only"
- change mode to "non-local means" or "non-local means auto"
- see the central image going black
The issue is observed in both Linux and Windows, but only with OpenCL active.
It is not related to the recent fixes in denoise profiled, as it happens also in 3.4
|
priority
|
denoise profiled central image goes black dt current git master nvidia gpu this seems to be a corner case in denoise profile the central image goes black in specific conditions to reproduce open an image in darkroom apply a preset including wavelet for example the built in preset wavelets chroma only change mode to non local means or non local means auto see the central image going black the issue is observed in both linux and windows but only with opencl active it is not related to the recent fixes in denoise profiled as it happens also in
| 1
|
65,156
| 3,226,879,138
|
IssuesEvent
|
2015-10-10 17:46:20
|
mariaantonova/QAExam-10.10
|
https://api.github.com/repos/mariaantonova/QAExam-10.10
|
opened
|
There is no email sending and verifying the registered user
|
highest priority
|
Environment Windows 7 Mozilla Firefox 40.0.3 with installed Firebug(http://getfirebug.com)
Steps to reproduce:
1. Open the website
2. Navigate to Register button
3. Click it
4. Make a registration with email: mm@mm.mm
user: mm
password: mmm
Expected result:
There should be send an email that verify my registration on the site
Actual result:
There is no email sending and verifying the registered user
|
1.0
|
There is no email sending and verifying the registered user - Environment Windows 7 Mozilla Firefox 40.0.3 with installed Firebug(http://getfirebug.com)
Steps to reproduce:
1. Open the website
2. Navigate to Register button
3. Click it
4. Make a registration with email: mm@mm.mm
user: mm
password: mmm
Expected result:
There should be send an email that verify my registration on the site
Actual result:
There is no email sending and verifying the registered user
|
priority
|
there is no email sending and verifying the registered user environment windows mozilla firefox with installed firebug steps to reproduce open the website navigate to register button click it make a registration with email mm mm mm user mm password mmm expected result there should be send an email that verify my registration on the site actual result there is no email sending and verifying the registered user
| 1
|
488,507
| 14,078,823,378
|
IssuesEvent
|
2020-11-04 14:05:58
|
trimstray/htrace.sh
|
https://api.github.com/repos/trimstray/htrace.sh
|
closed
|
Unable to build Docker image: cannot find package "github.com/projectdiscovery/subfinder/cmd/subfinder"
|
Priority: High Status: Review Needed Type: Bug
|
MacOS 10.15.6
Docker Desktop v 2.4.0.0
```
➜ htrace.sh git:(master) ✗ build/build.sh
+++ dirname build/build.sh
++ cd build/..
++ pwd
+ ROOT_DIR=/Volumes/projects/github/htrace.sh
+ docker build -t htrace.sh -f build/Dockerfile /Volumes/projects/github/htrace.sh
Sending build context to Docker daemon 8.211MB
Step 1/35 : FROM golang:alpine AS golang
---> b3bc898ad092
Step 2/35 : RUN apk add --no-cache git
---> Using cache
---> 62748406ef26
Step 3/35 : RUN go get github.com/ssllabs/ssllabs-scan
---> Using cache
---> 6fc956831704
Step 4/35 : RUN go get github.com/maxmind/geoipupdate/cmd/geoipupdate
---> Using cache
---> de3c6a51a849
Step 5/35 : RUN go get github.com/projectdiscovery/subfinder/cmd/subfinder
---> Running in af580b820fd2
cannot find package "github.com/projectdiscovery/subfinder/cmd/subfinder" in any of:
/usr/local/go/src/github.com/projectdiscovery/subfinder/cmd/subfinder (from $GOROOT)
/go/src/github.com/projectdiscovery/subfinder/cmd/subfinder (from $GOPATH)
The command '/bin/sh -c go get github.com/projectdiscovery/subfinder/cmd/subfinder' returned a non-zero code: 1
```
|
1.0
|
Unable to build Docker image: cannot find package "github.com/projectdiscovery/subfinder/cmd/subfinder" - MacOS 10.15.6
Docker Desktop v 2.4.0.0
```
➜ htrace.sh git:(master) ✗ build/build.sh
+++ dirname build/build.sh
++ cd build/..
++ pwd
+ ROOT_DIR=/Volumes/projects/github/htrace.sh
+ docker build -t htrace.sh -f build/Dockerfile /Volumes/projects/github/htrace.sh
Sending build context to Docker daemon 8.211MB
Step 1/35 : FROM golang:alpine AS golang
---> b3bc898ad092
Step 2/35 : RUN apk add --no-cache git
---> Using cache
---> 62748406ef26
Step 3/35 : RUN go get github.com/ssllabs/ssllabs-scan
---> Using cache
---> 6fc956831704
Step 4/35 : RUN go get github.com/maxmind/geoipupdate/cmd/geoipupdate
---> Using cache
---> de3c6a51a849
Step 5/35 : RUN go get github.com/projectdiscovery/subfinder/cmd/subfinder
---> Running in af580b820fd2
cannot find package "github.com/projectdiscovery/subfinder/cmd/subfinder" in any of:
/usr/local/go/src/github.com/projectdiscovery/subfinder/cmd/subfinder (from $GOROOT)
/go/src/github.com/projectdiscovery/subfinder/cmd/subfinder (from $GOPATH)
The command '/bin/sh -c go get github.com/projectdiscovery/subfinder/cmd/subfinder' returned a non-zero code: 1
```
|
priority
|
unable to build docker image cannot find package github com projectdiscovery subfinder cmd subfinder macos docker desktop v ➜ htrace sh git master ✗ build build sh dirname build build sh cd build pwd root dir volumes projects github htrace sh docker build t htrace sh f build dockerfile volumes projects github htrace sh sending build context to docker daemon step from golang alpine as golang step run apk add no cache git using cache step run go get github com ssllabs ssllabs scan using cache step run go get github com maxmind geoipupdate cmd geoipupdate using cache step run go get github com projectdiscovery subfinder cmd subfinder running in cannot find package github com projectdiscovery subfinder cmd subfinder in any of usr local go src github com projectdiscovery subfinder cmd subfinder from goroot go src github com projectdiscovery subfinder cmd subfinder from gopath the command bin sh c go get github com projectdiscovery subfinder cmd subfinder returned a non zero code
| 1
|
680,067
| 23,256,276,030
|
IssuesEvent
|
2022-08-04 09:32:23
|
bitsongofficial/wallet-mobile
|
https://api.github.com/repos/bitsongofficial/wallet-mobile
|
opened
|
Improve Responsive UI
|
enhancement help wanted high priority
|
Actually we have some size issues on some devices, we should add a scaling system like this: https://github.com/nirsky/react-native-size-matters .
Ofc we can wrote down our utils, it's something very simple, maybe an npm package is not required, what do you think @zheleznov163 do you have other ideas?
|
1.0
|
Improve Responsive UI - Actually we have some size issues on some devices, we should add a scaling system like this: https://github.com/nirsky/react-native-size-matters .
Ofc we can wrote down our utils, it's something very simple, maybe an npm package is not required, what do you think @zheleznov163 do you have other ideas?
|
priority
|
improve responsive ui actually we have some size issues on some devices we should add a scaling system like this ofc we can wrote down our utils it s something very simple maybe an npm package is not required what do you think do you have other ideas
| 1
|
523,240
| 15,176,158,318
|
IssuesEvent
|
2021-02-14 03:31:37
|
skill-collectors/weather-app
|
https://api.github.com/repos/skill-collectors/weather-app
|
closed
|
Main temperature display
|
priority:high
|
This component displays the large current temperature and "feels like" temperature in the header.
|
1.0
|
Main temperature display - This component displays the large current temperature and "feels like" temperature in the header.
|
priority
|
main temperature display this component displays the large current temperature and feels like temperature in the header
| 1
|
344,639
| 10,347,656,146
|
IssuesEvent
|
2019-09-04 17:55:14
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
closed
|
Display of security events list is incorrect
|
Priority: High Type: Bug
|
**Describe the bug**
The display of the security events list is not valid and shows empty data for everything except the ID
**To Reproduce**
1. Go in security events
**Screenshots**

**Expected behavior**
The security events are shown with their details
**Additional context**
Response payload of the security events listing:
```
{"items":[{"desc":null,"enabled":null,"id":"defaults","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100006","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100007","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100008","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100009","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100010","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100011","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100012","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100013","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100014","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300000","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300006","priority":null,"template":null},{"desc":null,
"enabled":null,"id":"2000000","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2000032","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2002030","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2002201","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001904","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001972","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001569","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100020","priority":null,"template":null}],"nextCursor":40,"prevCursor":0,"status":200}
```
|
1.0
|
Display of security events list is incorrect - **Describe the bug**
The display of the security events list is not valid and shows empty data for everything except the ID
**To Reproduce**
1. Go in security events
**Screenshots**

**Expected behavior**
The security events are shown with their details
**Additional context**
Response payload of the security events listing:
```
{"items":[{"desc":null,"enabled":null,"id":"defaults","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100006","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100007","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100008","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100009","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100010","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100011","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100012","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100013","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100014","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1200005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300000","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1300006","priority":null,"template":null},{"desc":null,
"enabled":null,"id":"2000000","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2000032","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2002030","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2002201","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001904","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001972","priority":null,"template":null},{"desc":null,"enabled":null,"id":"2001569","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000001","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000002","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000003","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000004","priority":null,"template":null},{"desc":null,"enabled":null,"id":"3000005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100005","priority":null,"template":null},{"desc":null,"enabled":null,"id":"1100020","priority":null,"template":null}],"nextCursor":40,"prevCursor":0,"status":200}
```
|
priority
|
display of security events list is incorrect describe the bug the display of the security events list is not valid and shows empty data for everything except the id to reproduce go in security events screenshots expected behavior the security events are shown with their details additional context response payload of the security events listing items nextcursor prevcursor status
| 1
|
455,289
| 13,123,962,690
|
IssuesEvent
|
2020-08-06 02:15:40
|
space-wizards/space-station-14
|
https://api.github.com/repos/space-wizards/space-station-14
|
opened
|
Gas overlay urgently needs optimization
|
Feature: Atmospherics Priority: 1-high
|
Due to the lack of overlay bubbling and poorly performant gas netcode, the current gas overlay absolutely kills the client when there's a big fire or a lot of gas atmos changes.
|
1.0
|
Gas overlay urgently needs optimization - Due to the lack of overlay bubbling and poorly performant gas netcode, the current gas overlay absolutely kills the client when there's a big fire or a lot of gas atmos changes.
|
priority
|
gas overlay urgently needs optimization due to the lack of overlay bubbling and poorly performant gas netcode the current gas overlay absolutely kills the client when there s a big fire or a lot of gas atmos changes
| 1
|
666,337
| 22,351,145,447
|
IssuesEvent
|
2022-06-15 12:11:19
|
fadeinside/s3air-achievements-plus
|
https://api.github.com/repos/fadeinside/s3air-achievements-plus
|
closed
|
Improve the rating (stars) of some achievements
|
Type: Improvements Priority: Highest
|
**Description**
There are those achievements that have not been evaluated by their difficulty, and there are those that were simply overpriced. All the stars that are now were installed during the development of a working prototype, and have not changed since.
**Screenshots/References**
\-
**Additional context**
\-
|
1.0
|
Improve the rating (stars) of some achievements - **Description**
There are those achievements that have not been evaluated by their difficulty, and there are those that were simply overpriced. All the stars that are now were installed during the development of a working prototype, and have not changed since.
**Screenshots/References**
\-
**Additional context**
\-
|
priority
|
improve the rating stars of some achievements description there are those achievements that have not been evaluated by their difficulty and there are those that were simply overpriced all the stars that are now were installed during the development of a working prototype and have not changed since screenshots references additional context
| 1
|
306,218
| 9,382,603,046
|
IssuesEvent
|
2019-04-04 23:05:55
|
gii-is-psg2/PSG2-1819-G6-60
|
https://api.github.com/repos/gii-is-psg2/PSG2-1819-G6-60
|
closed
|
Create a technical report entitled “Análisis del Código Fuente y Métricas Asociadas”
|
priority_high release
|
Analyzing your project with SonarCloud (as per the releasees generated in L2 and L3, analyzing the code in the master branches for the corresponding commits). Such report should describe the values of the source code metrics computed, the types of issues found in the analysis and its causes. This document must contain at least the following items:
A screenshot of the Sonar Cloud dashboard for the analysis of your project and a description of the the
metrics provided in the dashboard and their values.
Description and analyses of the potential bugs found in the repository (in the reliability section, on
measures).
Description and analysis of the different types code smells found in the analyses (in the
maintainability section, on measures). For each type of code smell the report should describe.
The name and description of the code smell
The different causes of the smell in your codebase
A justified evaluation of the severity of the code smell
A brief description of how to solve it depending on the causes
Conclusions about the results of the analyses
|
1.0
|
Create a technical report entitled “Análisis del Código Fuente y Métricas Asociadas” - Analyzing your project with SonarCloud (as per the releasees generated in L2 and L3, analyzing the code in the master branches for the corresponding commits). Such report should describe the values of the source code metrics computed, the types of issues found in the analysis and its causes. This document must contain at least the following items:
A screenshot of the Sonar Cloud dashboard for the analysis of your project and a description of the the
metrics provided in the dashboard and their values.
Description and analyses of the potential bugs found in the repository (in the reliability section, on
measures).
Description and analysis of the different types code smells found in the analyses (in the
maintainability section, on measures). For each type of code smell the report should describe.
The name and description of the code smell
The different causes of the smell in your codebase
A justified evaluation of the severity of the code smell
A brief description of how to solve it depending on the causes
Conclusions about the results of the analyses
|
priority
|
create a technical report entitled “análisis del código fuente y métricas asociadas” analyzing your project with sonarcloud as per the releasees generated in and analyzing the code in the master branches for the corresponding commits such report should describe the values of the source code metrics computed the types of issues found in the analysis and its causes this document must contain at least the following items a screenshot of the sonar cloud dashboard for the analysis of your project and a description of the the metrics provided in the dashboard and their values description and analyses of the potential bugs found in the repository in the reliability section on measures description and analysis of the different types code smells found in the analyses in the maintainability section on measures for each type of code smell the report should describe the name and description of the code smell the different causes of the smell in your codebase a justified evaluation of the severity of the code smell a brief description of how to solve it depending on the causes conclusions about the results of the analyses
| 1
|
642,296
| 20,883,669,939
|
IssuesEvent
|
2022-03-23 00:59:52
|
SoftwareEngineeringGroup3-3/recipe-app-backend
|
https://api.github.com/repos/SoftwareEngineeringGroup3-3/recipe-app-backend
|
closed
|
[BACKEND] Create '/api/ingredients' endpoint - 'GET' option.
|
enhancement question priority:highest
|
Create '/api/ingredients' endpoint (if not created) with 'GET' option.
Functionality:
`Returns all ingredients to admin panel`
|
1.0
|
[BACKEND] Create '/api/ingredients' endpoint - 'GET' option. - Create '/api/ingredients' endpoint (if not created) with 'GET' option.
Functionality:
`Returns all ingredients to admin panel`
|
priority
|
create api ingredients endpoint get option create api ingredients endpoint if not created with get option functionality returns all ingredients to admin panel
| 1
|
196,626
| 6,935,826,824
|
IssuesEvent
|
2017-12-03 13:58:17
|
dalaranwow/dalaran-wow
|
https://api.github.com/repos/dalaranwow/dalaran-wow
|
closed
|
Warrior Charge over charging every time
|
Class - Warrior General - Mechanics Listed - Changelog Priority - High
|
**Description**:
**Current behaviour**: When i charge as a warrior, I have to step back in order to hit the mob. Every time i charge, every single mob.
**Expected behaviour**: You should charge infront of the mob and be facing it, Not on top of npc.
**Steps to reproduce the problem**:
1. Charge a mob
2. Charge a mob
3. Charge a mob
**Include proofs for this behaviour**
You can research this on several pages like http://wowhead.com or http://wowwiki.wikia.com or http://www.youtube.com/
Issues without proofs will be closed and tag On Hold will be added until the proof is provided.
**Include the ID for the game objects, npcs (creatures,pets,minions), spells, items, quests, instances, zones, achievements, skills**
You can research this on several pages like http://wotlk.openwow.com or http://wowhead.com/
**Include Screenshots from the issue if necessary**
|
1.0
|
Warrior Charge over charging every time - **Description**:
**Current behaviour**: When i charge as a warrior, I have to step back in order to hit the mob. Every time i charge, every single mob.
**Expected behaviour**: You should charge infront of the mob and be facing it, Not on top of npc.
**Steps to reproduce the problem**:
1. Charge a mob
2. Charge a mob
3. Charge a mob
**Include proofs for this behaviour**
You can research this on several pages like http://wowhead.com or http://wowwiki.wikia.com or http://www.youtube.com/
Issues without proofs will be closed and tag On Hold will be added until the proof is provided.
**Include the ID for the game objects, npcs (creatures,pets,minions), spells, items, quests, instances, zones, achievements, skills**
You can research this on several pages like http://wotlk.openwow.com or http://wowhead.com/
**Include Screenshots from the issue if necessary**
|
priority
|
warrior charge over charging every time description current behaviour when i charge as a warrior i have to step back in order to hit the mob every time i charge every single mob expected behaviour you should charge infront of the mob and be facing it not on top of npc steps to reproduce the problem charge a mob charge a mob charge a mob include proofs for this behaviour you can research this on several pages like or or issues without proofs will be closed and tag on hold will be added until the proof is provided include the id for the game objects npcs creatures pets minions spells items quests instances zones achievements skills you can research this on several pages like or include screenshots from the issue if necessary
| 1
|
109,460
| 4,387,791,017
|
IssuesEvent
|
2016-08-08 16:50:21
|
smartchicago/kimball
|
https://api.github.com/repos/smartchicago/kimball
|
opened
|
Add ability to search by phone number
|
High Priority
|
Now that we have SMS preferred CUTGroup testers who text back when they are available for tests, we use their phone numbers to connect to the corresponding tester. It would be a huge enhancement to be able to search by phone number in the nav bar search or on the /search page.
My current process is using search to find SMS Preferred testers and saving them as a CSV. Then I open up the CSV and search for the phone number there.
|
1.0
|
Add ability to search by phone number - Now that we have SMS preferred CUTGroup testers who text back when they are available for tests, we use their phone numbers to connect to the corresponding tester. It would be a huge enhancement to be able to search by phone number in the nav bar search or on the /search page.
My current process is using search to find SMS Preferred testers and saving them as a CSV. Then I open up the CSV and search for the phone number there.
|
priority
|
add ability to search by phone number now that we have sms preferred cutgroup testers who text back when they are available for tests we use their phone numbers to connect to the corresponding tester it would be a huge enhancement to be able to search by phone number in the nav bar search or on the search page my current process is using search to find sms preferred testers and saving them as a csv then i open up the csv and search for the phone number there
| 1
|