Dataset schema (one GitHub `IssuesEvent` record per row):

| Column | dtype | Range / length / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 or 1 |
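Loading rows with this schema might look like the following. This is a hypothetical sketch: the DataFrame is a two-row miniature with values copied from the sample rows below, not the real file, and only a subset of the columns is shown.

```python
import pandas as pd

# Miniature of the dataset described by the schema table above
# (illustrative values; the real data would come from the source file).
df = pd.DataFrame(
    {
        "id": [22357628794.0, 10742547478.0],
        "type": ["IssuesEvent", "IssuesEvent"],
        "created_at": ["2022-06-15 17:06:13", "2019-10-29 22:54:57"],
        "repo": ["TheGameCommunity/ExciteBot", "sarpik/git-backup"],
        "action": ["closed", "closed"],
        "label": ["priority", "priority"],
        "binary_label": [1, 1],
    }
)

# created_at is a fixed 19-character timestamp string; parse it explicitly.
df["created_at"] = pd.to_datetime(df["created_at"], format="%Y-%m-%d %H:%M:%S")

# Select the positive class as flagged by binary_label.
positives = df[df["binary_label"] == 1]
print(len(positives))
```

In the rows shown below, `label == "priority"` coincides with `binary_label == 1`; whether that holds for the whole dataset is an assumption.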
### Row 666,498

- id: 22,357,628,794
- type: IssuesEvent
- created_at: 2022-06-15 17:06:13
- repo: TheGameCommunity/ExciteBot
- repo_url: https://api.github.com/repos/TheGameCommunity/ExciteBot
- action: closed
- title: bot doesn't recognized nicknamed discord users
- labels: Category - Bug Priority - High ⇑ Status - Resolved

**body:**
!whois doesn't work if you use a discord user's nickname if they have been nicknamed.
**index:** 1.0

**text_combine:**
bot doesn't recognized nicknamed discord users - !whois doesn't work if you use a discord user's nickname if they have been nicknamed.
**label:** priority

**text:**
bot doesn t recognized nicknamed discord users whois doesn t work if you use a discord user s nickname if they have been nicknamed
**binary_label:** 1

---
### Row 363,559

- id: 10,742,547,478
- type: IssuesEvent
- created_at: 2019-10-29 22:54:57
- repo: sarpik/git-backup
- repo_url: https://api.github.com/repos/sarpik/git-backup
- action: closed
- title: Update README.md
- labels: docs help wanted priority:high

**body:**
### Task
Update README.md with
* [x] description
* [x] installation info (preferably, using `Makefile` with `make` - see #1)
* [x] installation info for arch users (`yay -S git-backup` once #2 is done)
**index:** 1.0

**text_combine:**
Update README.md - ### Task
Update README.md with
* [x] description
* [x] installation info (preferably, using `Makefile` with `make` - see #1)
* [x] installation info for arch users (`yay -S git-backup` once #2 is done)
**label:** priority

**text:**
update readme md task update readme md with description installation info preferably using makefile with make see installation info for arch users yay s git backup once is done
**binary_label:** 1

---
### Row 628,692

- id: 20,010,938,797
- type: IssuesEvent
- created_at: 2022-02-01 06:17:33
- repo: john-rey-edralin/stsweng-1
- repo_url: https://api.github.com/repos/john-rey-edralin/stsweng-1
- action: closed
- title: View Event modal in calendar page does not appear
- labels: bug issue: back-end issue: front-end priority: high severity: medium

**body:**
### Description
Viewing a specific event in the calendar does not display its respective modal
### Steps to Reproduce with Visual Proof
1. In the **Home** page, click the **Calendar** tab from the navigation bar

2. In the **Calendar** page, select any day that has events.

3. In the list of events listed for that day, select any event to view its details. The **View Event** modal for the event does not show up and the whole events list modal disappears with a greyed out background.

### Expected Results
When a specific event is clicked in the **Calendar** page, it should display its respective **View Event** modal.
### Actual Results
When a specific event is clicked in the **Calendar** page, there is no **View Event** modal for it and the whole events list modal disappears.
### Additional Information
| Software | |
| --- | --- |
| **Browser** | Microsoft Edge (V97) |
| **Operating System** | Windows 10 |
**index:** 1.0

**text_combine:**
View Event modal in calendar page does not appear - ### Description
Viewing a specific event in the calendar does not display its respective modal
### Steps to Reproduce with Visual Proof
1. In the **Home** page, click the **Calendar** tab from the navigation bar

2. In the **Calendar** page, select any day that has events.

3. In the list of events listed for that day, select any event to view its details. The **View Event** modal for the event does not show up and the whole events list modal disappears with a greyed out background.

### Expected Results
When a specific event is clicked in the **Calendar** page, it should display its respective **View Event** modal.
### Actual Results
When a specific event is clicked in the **Calendar** page, there is no **View Event** modal for it and the whole events list modal disappears.
### Additional Information
| Software | |
| --- | --- |
| **Browser** | Microsoft Edge (V97) |
| **Operating System** | Windows 10 |
**label:** priority

**text:**
view event modal in calendar page does not appear description viewing a specific event in the calendar does not display its respective modal steps to reproduce with visual proof in the home page click the calendar tab from the navigation bar in the calendar page select any day that has events in the list of events listed for that day select any event to view its details the view event modal for the event does not show up and the whole events list modal disappears with a greyed out background expected results when a specific event is clicked in the calendar page it should display its respective view event modal actual results when a specific event is clicked in the calendar page there is no view event modal for it and the whole events list modal disappears additional information software browser microsoft edge operating system windows
**binary_label:** 1

---
### Row 102,014

- id: 4,149,711,500
- type: IssuesEvent
- created_at: 2016-06-15 15:12:58
- repo: MinetestForFun/server-minetestforfun
- repo_url: https://api.github.com/repos/MinetestForFun/server-minetestforfun
- action: closed
- title: magasin fini, manque quelques petits trucs pour la mise en place définitive.
- labels: Priority: High Server's World

**body:**

@Ombridride Voici donc notre petit magasin!
Ce dont nous avons besoin, avec @crabman77 , pour terminer la mise en place:
- [x] un merge serveur (avec le reboot qui va bien :) )
- [x] le privilège shop (pour config le magasin)
- [x] un shared chest avec nous-mêmes dans lequel se trouvent au moins 50 minercantile: shop (a priori 40 seront utilisés) et 15 bancomatic (seront placés stratégiquement)
Coordonnées: -7 ;; 80
Merci beaucoup :)
**index:** 1.0

**text_combine:**
magasin fini, manque quelques petits trucs pour la mise en place définitive. - 
@Ombridride Voici donc notre petit magasin!
Ce dont nous avons besoin, avec @crabman77 , pour terminer la mise en place:
- [x] un merge serveur (avec le reboot qui va bien :) )
- [x] le privilège shop (pour config le magasin)
- [x] un shared chest avec nous-mêmes dans lequel se trouvent au moins 50 minercantile: shop (a priori 40 seront utilisés) et 15 bancomatic (seront placés stratégiquement)
Coordonnées: -7 ;; 80
Merci beaucoup :)
**label:** priority

**text:**
magasin fini manque quelques petits trucs pour la mise en place définitive ombridride voici donc notre petit magasin ce dont nous avons besoin avec pour terminer la mise en place un merge serveur avec le reboot qui va bien le privilège shop pour config le magasin un shared chest avec nous mêmes dans lequel se trouvent au moins minercantile shop a priori seront utilisés et bancomatic seront placés stratégiquement coordonnées merci beaucoup
**binary_label:** 1

---
### Row 60,461

- id: 3,129,490,858
- type: IssuesEvent
- created_at: 2015-09-09 01:41:11
- repo: TroyManary/EasyPlow
- repo_url: https://api.github.com/repos/TroyManary/EasyPlow
- action: closed
- title: Operator Account Menu Items are not correct
- labels: Function: Company Priority: High State: Fixed Type: Bug

**body:**
The Operator Account Menu Items (as specified in UI Dev Spreadsheet) should be:
edit your profile
edit payment information
dashboard
service preferences
coupons
log out
However, there are alot more options displayed:

**index:** 1.0

**text_combine:**
Operator Account Menu Items are not correct - The Operator Account Menu Items (as specified in UI Dev Spreadsheet) should be:
edit your profile
edit payment information
dashboard
service preferences
coupons
log out
However, there are alot more options displayed:

**label:** priority

**text:**
operator account menu items are not correct the operator account menu items as specified in ui dev spreadsheet should be edit your profile edit payment information dashboard service preferences coupons log out however there are alot more options displayed
**binary_label:** 1

---
### Row 390,548

- id: 11,545,149,936
- type: IssuesEvent
- created_at: 2020-02-18 12:55:13
- repo: AshenGaming/whisperquest
- repo_url: https://api.github.com/repos/AshenGaming/whisperquest
- action: closed
- title: QID1 - "The Adventure Begins" - Newborough
- labels: Priority: High Status: Completed

**body:**
**Quest Progress - "The Adventure Begins"**
- [x] NPC Created - Town Guard
- [x] Starter NPC Dialogue - Town Guard
- [x] WG Location - Newborough Main Thoroughfare
- [x] Find NPC - Reverend
- [x] NPC Dialogue - Reverend
- [x] WG Location - Newborough Docks
- [x] Find NPC - Fisherman
- [x] NPC Dialogue - Fisherman
- [x] Purchase Item - 5 Cooked Cod
- [x] Find NPC - Reverend
- [x] End NPC Dialogue - Reverend
- [x] End Rewards - Reverend
**index:** 1.0

**text_combine:**
QID1 - "The Adventure Begins" - Newborough - **Quest Progress - "The Adventure Begins"**
- [x] NPC Created - Town Guard
- [x] Starter NPC Dialogue - Town Guard
- [x] WG Location - Newborough Main Thoroughfare
- [x] Find NPC - Reverend
- [x] NPC Dialogue - Reverend
- [x] WG Location - Newborough Docks
- [x] Find NPC - Fisherman
- [x] NPC Dialogue - Fisherman
- [x] Purchase Item - 5 Cooked Cod
- [x] Find NPC - Reverend
- [x] End NPC Dialogue - Reverend
- [x] End Rewards - Reverend
**label:** priority

**text:**
the adventure begins newborough quest progress the adventure begins npc created town guard starter npc dialogue town guard wg location newborough main thoroughfare find npc reverend npc dialogue reverend wg location newborough docks find npc fisherman npc dialogue fisherman purchase item cooked cod find npc reverend end npc dialogue reverend end rewards reverend
**binary_label:** 1

---
### Row 60,779

- id: 3,134,229,135
- type: IssuesEvent
- created_at: 2015-09-10 08:51:49
- repo: OCHA-DAP/hdx-ckan
- repo_url: https://api.github.com/repos/OCHA-DAP/hdx-ckan
- action: closed
- title: Org page controller - featured items
- labels: Priority-High

**body:**
Basically for an MVP we need a way for the data team to be able to select the featured orgs and to modify the controller to pull that data into the page.
In the org edit page we need a checkbox to mark an org as featured and possibly other fields:
- either: upload field where we can put a preview screenshot of the org page
- either: text field or other way of specifying from where inside the org page we can get the container for the preview image
**index:** 1.0

**text_combine:**
Org page controller - featured items - Basically for an MVP we need a way for the data team to be able to select the featured orgs and to modify the controller to pull that data into the page.
In the org edit page we need a checkbox to mark an org as featured and possibly other fields:
- either: upload field where we can put a preview screenshot of the org page
- either: text field or other way of specifying from where inside the org page we can get the container for the preview image
**label:** priority

**text:**
org page controller featured items basically for an mvp we need a way for the data team to be able to select the featured orgs and to modify the controller to pull that data into the page in the org edit page we need a checkbox to mark an org as featured and possibly other fields either upload field where we can put a preview screenshot of the org page either text field or other way of specifying from where inside the org page we can get the container for the preview image
**binary_label:** 1

---
### Row 146,767

- id: 5,627,281,768
- type: IssuesEvent
- created_at: 2017-04-05 00:51:10
- repo: ampproject/ampstart
- repo_url: https://api.github.com/repos/ampproject/ampstart
- action: closed
- title: Landing page - make preview header more user friendly
- labels: P1: High Priority Page: Home Type: Feature Request

**body:**
- we have a new design for this in the final files folder
- add rollovers - need to work this out with dev
- fix selected state
- update with new icons (black, in final files folder)
**index:** 1.0

**text_combine:**
Landing page - make preview header more user friendly - - we have a new design for this in the final files folder
- add rollovers - need to work this out with dev
- fix selected state
- update with new icons (black, in final files folder)
**label:** priority

**text:**
landing page make preview header more user friendly we have a new design for this in the final files folder add rollovers need to work this out with dev fix selected state update with new icons black in final files folder
**binary_label:** 1

---
### Row 68,341

- id: 3,286,507,782
- type: IssuesEvent
- created_at: 2015-10-29 03:15:16
- repo: pombase/canto
- repo_url: https://api.github.com/repos/pombase/canto
- action: closed
- title: bug adding comment interactions
- labels: bug high priority next user interface

**body:**
from user:
Hi,
I have a problem with annotating genetic interactions. Could you pls help?
I create a genetic interaction between cbf11 and e.g. sty1.
I click the "edit" link next to the interaction.
I type in a comment (such as in which phenotypes the interaction manifests) and click OK.
I click the "edit" link again, but the comment is gone. It doesn't seem to be saved.
Thanks for any help!
Best,
Martin
Also I spotted that Martins name isn't displayed correctly in the rightmost column of the annotations (he has a lot of czech apostrophies in his name)
**index:** 1.0

**text_combine:**
bug adding comment interactions - from user:
Hi,
I have a problem with annotating genetic interactions. Could you pls help?
I create a genetic interaction between cbf11 and e.g. sty1.
I click the "edit" link next to the interaction.
I type in a comment (such as in which phenotypes the interaction manifests) and click OK.
I click the "edit" link again, but the comment is gone. It doesn't seem to be saved.
Thanks for any help!
Best,
Martin
Also I spotted that Martins name isn't displayed correctly in the rightmost column of the annotations (he has a lot of czech apostrophies in his name)
**label:** priority

**text:**
bug adding comment interactions from user hi i have a problem with annotating genetic interactions could you pls help i create a genetic interaction between and e g i click the edit link next to the interaction i type in a comment such as in which phenotypes the interaction manifests and click ok i click the edit link again but the comment is gone it doesn t seem to be saved thanks for any help best martin also i spotted that martins name isn t displayed correctly in the rightmost column of the annotations he has a lot of czech apostrophies in his name
**binary_label:** 1

---
### Row 679,126

- id: 23,222,135,350
- type: IssuesEvent
- created_at: 2022-08-02 19:19:14
- repo: dominicm00/ham
- repo_url: https://api.github.com/repos/dominicm00/ham
- action: closed
- title: `EXIT` statement does not stop execution
- labels: high priority bug:compliance

**body:**
After an `EXIT` statement, Ham still tries to build available targets.
**index:** 1.0

**text_combine:**
`EXIT` statement does not stop execution - After an `EXIT` statement, Ham still tries to build available targets.
**label:** priority

**text:**
exit statement does not stop execution after an exit statement ham still tries to build available targets
**binary_label:** 1

---
### Row 529,174

- id: 15,382,567,463
- type: IssuesEvent
- created_at: 2021-03-03 00:53:47
- repo: pytorch/pytorch
- repo_url: https://api.github.com/repos/pytorch/pytorch
- action: closed
- title: Computation for CosineEmbeddingLoss is incorrect when target has a batch dim
- labels: high priority module: correctness (silent) module: loss module: nn triaged

**body:**
## 🐛 Bug
Identified in #52732
## To Reproduce
As demonstrated in the linked issue, passing a target of shape `(N, 1)` produces a different loss value than passing a target of shape `N`:
```python
import torch
import torch.nn as nn
torch.manual_seed(0)
a = torch.randn(16, 5)
b = torch.randn(16, 5)
labels = torch.empty(16).bernoulli_().mul_(2).sub_(1)
loss_fn = nn.CosineEmbeddingLoss()
print(loss_fn(a, b, labels)) # 0.4538
print(loss_fn(a, b, labels.unsqueeze(1))) # 0.4810
```
## Expected behavior
Resultant loss should not be different for target of shape `N` vs. target of shape `(N, 1)`. Either the computation should be corrected or an error should be thrown for the latter case.
## Additional Context
The problem is due to the usage of `torch.where` within the loss computation. Its broadcasting behavior results in a different value when the target has the extra dim.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @albanD @mruberry
**index:** 1.0

**text_combine:**
Computation for CosineEmbeddingLoss is incorrect when target has a batch dim - ## 🐛 Bug
Identified in #52732
## To Reproduce
As demonstrated in the linked issue, passing a target of shape `(N, 1)` produces a different loss value than passing a target of shape `N`:
```python
import torch
import torch.nn as nn
torch.manual_seed(0)
a = torch.randn(16, 5)
b = torch.randn(16, 5)
labels = torch.empty(16).bernoulli_().mul_(2).sub_(1)
loss_fn = nn.CosineEmbeddingLoss()
print(loss_fn(a, b, labels)) # 0.4538
print(loss_fn(a, b, labels.unsqueeze(1))) # 0.4810
```
## Expected behavior
Resultant loss should not be different for target of shape `N` vs. target of shape `(N, 1)`. Either the computation should be corrected or an error should be thrown for the latter case.
## Additional Context
The problem is due to the usage of `torch.where` within the loss computation. Its broadcasting behavior results in a different value when the target has the extra dim.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @albanD @mruberry
**label:** priority

**text:**
computation for cosineembeddingloss is incorrect when target has a batch dim 🐛 bug identified in to reproduce as demonstrated in the linked issue passing a target of shape n produces a different loss value than passing a target of shape n python import torch import torch nn as nn torch manual seed a torch randn b torch randn labels torch empty bernoulli mul sub loss fn nn cosineembeddingloss print loss fn a b labels print loss fn a b labels unsqueeze expected behavior resultant loss should not be different for target of shape n vs target of shape n either the computation should be corrected or an error should be thrown for the latter case additional context the problem is due to the usage of torch where within the loss computation its broadcasting behavior results in a different value when the target has the extra dim cc ezyang gchanan bdhirsh jbschlosser alband mruberry
**binary_label:** 1

---
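The `torch.where` broadcasting pitfall described in the row above can be shown without PyTorch: NumPy's `where` broadcasts the same way, so a `(N, 1)` condition against `(N,)` values silently produces an `(N, N)` result. This is an illustrative sketch of the mechanism only, not PyTorch's actual loss code, and the arrays are invented:

```python
import numpy as np

target_1d = np.array([1, -1, 1, -1])   # shape (4,), like a target of shape N
target_2d = target_1d.reshape(-1, 1)   # shape (4, 1), like a target of shape (N, 1)
values = np.arange(4.0)                # stand-in per-sample loss terms, shape (4,)

out_1d = np.where(target_1d == 1, values, 0.0)  # elementwise: shape (4,)
out_2d = np.where(target_2d == 1, values, 0.0)  # broadcast blow-up: shape (4, 4)

# The mean over the broadcast result differs from the elementwise one,
# mirroring the differing loss values in the issue.
print(out_1d.shape, out_2d.shape, out_1d.mean(), out_2d.mean())
```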
### Row 530,088

- id: 15,415,730,108
- type: IssuesEvent
- created_at: 2021-03-05 03:18:08
- repo: AlaskaAirlines/WebCoreStyleSheets
- repo_url: https://api.github.com/repos/AlaskaAirlines/WebCoreStyleSheets
- action: closed
- title: Do not remove focus styles by default
- labels: Priority: High Status: Complete / Ready to Merge Type: Feature

**body:**
## Is your feature request related to a problem? Please describe.
The default behavior when importing `essentials.css` is to remove focus styles from everything on the page. This makes it way too easy for devs to accidentally remove focus styles from elements that do not have a focus visible fallback. Most will not notice that the focus styles are missing, resulting in a major accessibility violation.
See also AlaskaAirlines/auro-button#97, where focus styles do not properly apply to a custom element inside a shadow root. Because the focus visible polyfill did not load for the inner component, the focus styles inside the entire component are missing. If the default was to show focus styles and only remove them once focus visible was loaded, this issue would not have an a11y impact.
It is better for a page's design to be wrong (focus styles shown when not using the keyboard) than for a page to be inaccessible to keyboard users. Turning off all focus styles by default is an a11y footgun and should be changed.
It is better for focus styles to accidentally be shown than to accidentally be removed. The former means the visual design is incorrect, the latter means the page is inaccessible to keyboard users. Turning off all focus styles by default is an a11y footgun and should be changed.
## Describe the solution you'd like
We should not remove focus styles by default. We should only selectively remove focus styles when the polyfill is loaded, as suggested in the [focus-visible polyfill README](https://github.com/WICG/focus-visible#2-update-your-css).
```css
/*
This will hide the focus indicator if the element receives focus via the mouse,
but it will still show up on keyboard focus.
*/
.js-focus-visible :focus:not(.focus-visible) {
outline: none;
}
```
We could do something similar with the native `:focus-visible` selector since both [Chrome and Firefox](https://caniuse.com/css-focus-visible) have support.
## Describe alternatives you've considered
Rely on education to solve this problem. However, education will not be 100% effective and missing focus styles will slip through the cracks. Instead, we should provide an accessible default and educate how to use focus-visible to meet our design goals.
**index:** 1.0

**text_combine:**
Do not remove focus styles by default - ## Is your feature request related to a problem? Please describe.
The default behavior when importing `essentials.css` is to remove focus styles from everything on the page. This makes it way too easy for devs to accidentally remove focus styles from elements that do not have a focus visible fallback. Most will not notice that the focus styles are missing, resulting in a major accessibility violation.
See also AlaskaAirlines/auro-button#97, where focus styles do not properly apply to a custom element inside a shadow root. Because the focus visible polyfill did not load for the inner component, the focus styles inside the entire component are missing. If the default was to show focus styles and only remove them once focus visible was loaded, this issue would not have an a11y impact.
It is better for a page's design to be wrong (focus styles shown when not using the keyboard) than for a page to be inaccessible to keyboard users. Turning off all focus styles by default is an a11y footgun and should be changed.
It is better for focus styles to accidentally be shown than to accidentally be removed. The former means the visual design is incorrect, the latter means the page is inaccessible to keyboard users. Turning off all focus styles by default is an a11y footgun and should be changed.
## Describe the solution you'd like
We should not remove focus styles by default. We should only selectively remove focus styles when the polyfill is loaded, as suggested in the [focus-visible polyfill README](https://github.com/WICG/focus-visible#2-update-your-css).
```css
/*
This will hide the focus indicator if the element receives focus via the mouse,
but it will still show up on keyboard focus.
*/
.js-focus-visible :focus:not(.focus-visible) {
outline: none;
}
```
We could do something similar with the native `:focus-visible` selector since both [Chrome and Firefox](https://caniuse.com/css-focus-visible) have support.
## Describe alternatives you've considered
Rely on education to solve this problem. However, education will not be 100% effective and missing focus styles will slip through the cracks. Instead, we should provide an accessible default and educate how to use focus-visible to meet our design goals.
**label:** priority

**text:**
do not remove focus styles by default is your feature request related to a problem please describe the default behavior when importing essentials css is to remove focus styles from everything on the page this makes it way too easy for devs to accidentally remove focus styles from elements that do not have a focus visible fallback most will not notice that the focus styles are missing resulting in a major accessibility violation see also alaskaairlines auro button where focus styles do not properly apply to a custom element inside a shadow root because the focus visible polyfill did not load for the inner component the focus styles inside the entire component are missing if the default was to show focus styles and only remove them once focus visible was loaded this issue would not have an impact it is better for a page s design to be wrong focus styles shown when not using the keyboard than for a page to be inaccessible to keyboard users turning off all focus styles by default is an footgun and should be changed it is better for focus styles to accidentally be shown than to accidentally be removed the former means the visual design is incorrect the latter means the page is inaccessible to keyboard users turning off all focus styles by default is an footgun and should be changed describe the solution you d like we should not remove focus styles by default we should only selectively remove focus styles when the polyfill is loaded as suggested in the css this will hide the focus indicator if the element receives focus via the mouse but it will still show up on keyboard focus js focus visible focus not focus visible outline none we could do something similar with the native focus visible selector since both have support describe alternatives you ve considered rely on education to solve this problem however education will not be effective and missing focus styles will slip through the cracks instead we should provide an accessible default and educate how to use focus 
visible to meet our design goals
**binary_label:** 1

---
### Row 78,708

- id: 3,516,514,326
- type: IssuesEvent
- created_at: 2016-01-12 00:09:47
- repo: trevorberman/TGB_selfSite
- repo_url: https://api.github.com/repos/trevorberman/TGB_selfSite
- action: closed
- title: index.html semantic structure? <section class="social">
- labels: high priority structure

**body:**
Is ```<section class="social">``` more semantic as ```<nav class="social-nav">``` with ```<ul>``` of icon-links?
- research, and if so, refactor (don't forget corresponding CSS)
**index:** 1.0

**text_combine:**
index.html semantic structure? <section class="social"> - Is ```<section class="social">``` more semantic as ```<nav class="social-nav">``` with ```<ul>``` of icon-links?
- research, and if so, refactor (don't forget corresponding CSS)
**label:** priority

**text:**
index html semantic structure is more semantic as with of icon links research and if so refactor don t forget corresponding css
**binary_label:** 1

---
### Row 163,835

- id: 6,206,941,937
- type: IssuesEvent
- created_at: 2017-07-06 19:36:51
- repo: Polymer/polymer-cli
- repo_url: https://api.github.com/repos/Polymer/polymer-cli
- action: closed
- title: "polymer build" doesn't inject ES5 adapter when bundling
- labels: Priority: High Type: Bug

**body:**
### Description
`polymer build` doesn't inject `custom-elements-es5-adapter.js` like expected, as the doc says [here][1] :
`
When you use the Polymer CLI to compile your app, the CLI automatically compiles the correct files and injects custom-elements-es5-adapter.js, a lightweight polyfill that lets compiled ES5 work on browsers that support native custom elements.
`
and [here][2] :
`
The --js-compile flag adds the custom-elements-es5-adapter.js adapter for running ES5 code on browsers that support ES6.
`
[1]: https://www.polymer-project.org/2.0/docs/es6#compile
[2]: https://www.polymer-project.org/2.0/toolbox/build-for-production#compiling
### Versions & Environment
- Polymer CLI: 1.2.0
- node: v7.10.0
- Operating System: Linux Mint 17.3 Rosa
#### Steps to Reproduce
1. Create an `/demo/polymer2.html` file with the following content :
```html
<!doctype html>
<html lang="fr">
<head>
<title>polymer2</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, minimum-scale=1, initial-scale=1, user-scalable=yes">
<script src="/bower_components/webcomponentsjs/webcomponents-loader.js"></script>
<link rel="import" href="../src/demo/polymer2-app.html">
</head>
<body>
<polymer2-app></polymer2-app>
</body>
</html>
```
2. Create a `/src/demo/polymer2-app.html` file with the following content :
```html
<link rel="import" href="../../bower_components/polymer/polymer.html">
<dom-module id="polymer2-app">
<template>
<style>
:host {
display: block;
}
</style>
<h1>polymer2-app</h1>
</template>
<script>
class Polymer2App extends Polymer.Element {
static get is() {
return 'polymer2-app';
}
}
customElements.define(Polymer2App.is, Polymer2App);
</script>
</dom-module>
```
3. Create a `polymer.json` file with the following content :
```json
{
"builds": [{
"bundle": true,
"css": {"minify": true},
"html": {"minify": true},
"js": {"compile": true, "minify": true}
}],
"lint": {
"rules": [
"polymer-2"
]
}
}
```
4. Build : `polymer build --entrypoint demo/polymer2.html`
#### Expected Results
1. `polymer build` should inject either a `<script src="/bower_components/webcomponentsjs/custom-elements-es5-adapter.js"></script>` or the content of the script directly into the compiled `polymer2.html` file
2. When visiting this compiled file in a ES6-ready browser (e.g. Chrome 58), a header with the content `polymer2-app` should appear
#### Actual Results
1. None of that happens
2. Blank page with the following errors in console : `Uncaught TypeError: Failed to construct 'HTMLElement': Please use the 'new' operator, this DOM object constructor cannot be called as a function.`
#### Additional information
If I manually add `<script src="/bower_components/webcomponentsjs/custom-elements-es5-adapter.js"></script>` in the index file `/demo/polymer2.html`, the compiled ES5 code works but `polymer serve` displays a blank page with the following error in console for every custom element used : `Uncaught TypeError: Class constructor DomModule cannot be invoked without 'new'`.
**index:** 1.0

**text_combine:**
"polymer build" doesn't inject ES5 adapter when bundling - ### Description
`polymer build` doesn't inject `custom-elements-es5-adapter.js` like expected, as the doc says [here][1] :
`
When you use the Polymer CLI to compile your app, the CLI automatically compiles the correct files and injects custom-elements-es5-adapter.js, a lightweight polyfill that lets compiled ES5 work on browsers that support native custom elements.
`
and [here][2] :
`
The --js-compile flag adds the custom-elements-es5-adapter.js adapter for running ES5 code on browsers that support ES6.
`
[1]: https://www.polymer-project.org/2.0/docs/es6#compile
[2]: https://www.polymer-project.org/2.0/toolbox/build-for-production#compiling
### Versions & Environment
- Polymer CLI: 1.2.0
- node: v7.10.0
- Operating System: Linux Mint 17.3 Rosa
#### Steps to Reproduce
1. Create an `/demo/polymer2.html` file with the following content :
```html
<!doctype html>
<html lang="fr">
<head>
<title>polymer2</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, minimum-scale=1, initial-scale=1, user-scalable=yes">
<script src="/bower_components/webcomponentsjs/webcomponents-loader.js"></script>
<link rel="import" href="../src/demo/polymer2-app.html">
</head>
<body>
<polymer2-app></polymer2-app>
</body>
</html>
```
2. Create a `/src/demo/polymer2-app.html` file with the following content :
```html
<link rel="import" href="../../bower_components/polymer/polymer.html">
<dom-module id="polymer2-app">
<template>
<style>
:host {
display: block;
}
</style>
<h1>polymer2-app</h1>
</template>
<script>
class Polymer2App extends Polymer.Element {
static get is() {
return 'polymer2-app';
}
}
customElements.define(Polymer2App.is, Polymer2App);
</script>
</dom-module>
```
3. Create a `polymer.json` file with the following content :
```json
{
"builds": [{
"bundle": true,
"css": {"minify": true},
"html": {"minify": true},
"js": {"compile": true, "minify": true}
}],
"lint": {
"rules": [
"polymer-2"
]
}
}
```
4. Build : `polymer build --entrypoint demo/polymer2.html`
#### Expected Results
1. `polymer build` should inject either a `<script src="/bower_components/webcomponentsjs/custom-elements-es5-adapter.js"></script>` or the content of the script directly into the compiled `polymer2.html` file
2. When visiting this compiled file in a ES6-ready browser (e.g. Chrome 58), a header with the content `polymer2-app` should appear
#### Actual Results
1. None of that happens
2. Blank page with the following errors in console : `Uncaught TypeError: Failed to construct 'HTMLElement': Please use the 'new' operator, this DOM object constructor cannot be called as a function.`
#### Additional information
If I manually add `<script src="/bower_components/webcomponentsjs/custom-elements-es5-adapter.js"></script>` in the index file `/demo/polymer2.html`, the compiled ES5 code works but `polymer serve` displays a blank page with the following error in console for every custom element used : `Uncaught TypeError: Class constructor DomModule cannot be invoked without 'new'`.
**label:** priority

**text:**
polymer build doesn t inject adapter when bundling description polymer build doesn t inject custom elements adapter js like expected as the doc says when you use the polymer cli to compile your app the cli automatically compiles the correct files and injects custom elements adapter js a lightweight polyfill that lets compiled work on browsers that support native custom elements and the js compile flag adds the custom elements adapter js adapter for running code on browsers that support versions environment polymer cli node operating system linux mint rosa steps to reproduce create an demo html file with the following content html create a src demo app html file with the following content html host display block app class extends polymer element static get is return app customelements define is create a polymer json file with the following content json builds bundle true css minify true html minify true js compile true minify true lint rules polymer build polymer build entrypoint demo html expected results polymer build should inject either a or the content of the script directly into the compiled html file when visiting this compiled file in a ready browser e g chrome a header with the content app should appear actual results none of that happens blank page with the following errors in console uncaught typeerror failed to construct htmlelement please use the new operator this dom object constructor cannot be called as a function additional information if i manually add in the index file demo html the compiled code works but polymer serve displays a blank page with the following error in console for every custom element used uncaught typeerror class constructor dommodule cannot be invoked without new
| 1
|
352,111
| 10,532,058,977
|
IssuesEvent
|
2019-10-01 09:52:48
|
wso2-cellery/sdk
|
https://api.github.com/repos/wso2-cellery/sdk
|
closed
|
Cellery run command fails in docker for desktop
|
Priority/Highest Severity/Blocker Type/Bug
|
**Description:**
When starting a cell instance by executing `cellery run` command, a failure occurs.
cellery run pzfreo/hello:0.4.0
✔
Extracting Cell Image pzfreo/hello:0.4.0
✔
Reading Image pzfreo/hello:0.4.0
ballerina: insufficient arguments to call the 'main' function
⠼ Starting main instance
Failed to start Cell instance : failed to execute run method in Cell instance due to exit status 1
**Affected Product Version:**
0.4.0-RC
**OS, DB, other environment details and versions:**
Docker for desktop
**Steps to reproduce:**
1. Execute `cellery init` command.
2. Build the cell using `cellery build` command.
3. Run the cell using `cellery run` command.
|
1.0
|
Cellery run command fails in docker for desktop - **Description:**
When starting a cell instance by executing `cellery run` command, a failure occurs.
cellery run pzfreo/hello:0.4.0
✔
Extracting Cell Image pzfreo/hello:0.4.0
✔
Reading Image pzfreo/hello:0.4.0
ballerina: insufficient arguments to call the 'main' function
⠼ Starting main instance
Failed to start Cell instance : failed to execute run method in Cell instance due to exit status 1
**Affected Product Version:**
0.4.0-RC
**OS, DB, other environment details and versions:**
Docker for desktop
**Steps to reproduce:**
1. Execute `cellery init` command.
2. Build the cell using `cellery build` command.
3. Run the cell using `cellery run` command.
|
priority
|
cellery run command fails in docker for desktop description when starting a cell instance by executing cellery run command a failure occurs cellery run pzfreo hello ✔ extracting cell image pzfreo hello ✔ reading image pzfreo hello ballerina insufficient arguments to call the main function ⠼ starting main instance failed to start cell instance failed to execute run method in cell instance due to exit status affected product version rc os db other environment details and versions docker for desktop steps to reproduce execute cellery init command build the cell using cellery build command run the cell using cellery run command
| 1
|
771,965
| 27,100,013,253
|
IssuesEvent
|
2023-02-15 07:51:36
|
therealbluepandabear/PixaPencil_Classic
|
https://api.github.com/repos/therealbluepandabear/PixaPencil_Classic
|
closed
|
[I] Tablet mode/ui
|
high priority improvement difficulty: unknown
|
Test the app in tablet mode (with a tablet EMU) as I am not sure how the UI looks in tablet mode.
If the UI looks funky in tablet mode, create a separate layout file for tablet mode.
|
1.0
|
[I] Tablet mode/ui - Test the app in tablet mode (with a tablet EMU) as I am not sure how the UI looks in tablet mode.
If the UI looks funky in tablet mode, create a separate layout file for tablet mode.
|
priority
|
tablet mode ui test the app in tablet mode with a tablet emu as i am not sure how the ui looks in tablet mode if the ui looks funky in tablet mode create a separate layout file for tablet mode
| 1
|
733,424
| 25,305,603,150
|
IssuesEvent
|
2022-11-17 13:58:30
|
Hexlet/runit
|
https://api.github.com/repos/Hexlet/runit
|
opened
|
Fix style
|
good first issue help wanted frontend Priority: High
|
Now the wrong styles in the modal when the user is not registered. We need to do what we do elsewhere
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/77053797/202465262-b3ea3772-8d36-4d22-a11b-8932ffc4a983.png">
|
1.0
|
Fix style - Now the wrong styles in the modal when the user is not registered. We need to do what we do elsewhere
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/77053797/202465262-b3ea3772-8d36-4d22-a11b-8932ffc4a983.png">
|
priority
|
fix style now the wrong styles in the modal when the user is not registered we need to do what we do elsewhere img width alt image src
| 1
|
188,590
| 6,777,984,795
|
IssuesEvent
|
2017-10-28 03:50:26
|
minio/mint
|
https://api.github.com/repos/minio/mint
|
closed
|
minio-go test fails on azure gateway
|
priority: high
|
While running mint against azure gateway fails with below error
```
Running minio-go tests ... FAILED in 3 seconds
{
"alert": "",
"args": {},
"duration": 72,
"error": "A header you provided implies functionality that is not implemented",
"function": "testFunctionalV2()",
"message": "SetBucketPolicy failed",
"name": "minio-go",
"status": "fail"
}
```
Respective server side log
```
ERRO[0408] Unable to read bucket policy. cause=Policy not found source=[gateway-handlers.go:554:gatewayAPIHandlers.GetBucketPolicyHandler()] stack=gateway-azure.go:853:(*azureObjects).GetBucketPolicies gateway-handlers.go:552:gatewayAPIHandlers.GetBucketPolicyHandler gateway-router.go:97:(gatewayAPIHandlers).GetBucketPolicyHandler-fm
ERRO[0466] Unable to validate content-md5 format. cause=illegal base64 data at input byte 4 source=[gateway-handlers.go:205:gatewayAPIHandlers.PutObjectHandler()]
ERRO[0468] {"method":"PUT","reqURI":"/awscli-mint-test-bucket-4052/datafile-1-MB","header":{"Accept-Encoding":["identity"],"Authorization":["AWS4-HMAC-SHA256 Credential=minioazure1/20171021/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;host;x-amz-content-sha256;x-amz-date, Signature=3ccf742f5979dca00548332ace4b914c5ddf309d70d540cbaed2c79dcd19e396"],"Content-Length":["1048576"],"Content-Md5":["/jCUCgXnG9eMBe+0n5jvOw=="],"Expect":["100-continue"],"Host":["10.99.185.131:9000"],"User-Agent":["aws-cli/1.11.112 Python/3.5.2 Linux/4.10.0-24-generic botocore/1.5.75"],"X-Amz-Content-Sha256":["194fda391f171a59875c3917110e0be72d9ac3d1abc6fbefaebede244e53b5f9"],"X-Amz-Date":["20171021T153455Z"]}} cause=Signature does not match source=[gateway-handlers.go:295:gatewayAPIHandlers.PutObjectHandler()]
ERRO[0502] Unable to read bucket policy. cause=Policy not found source=[gateway-handlers.go:554:gatewayAPIHandlers.GetBucketPolicyHandler()] stack=gateway-azure.go:853:(*azureObjects).GetBucketPolicies gateway-handlers.go:552:gatewayAPIHandlers.GetBucketPolicyHandler gateway-router.go:97:(gatewayAPIHandlers).GetBucketPolicyHandler-fm
```
|
1.0
|
minio-go test fails on azure gateway - While running mint against azure gateway fails with below error
```
Running minio-go tests ... FAILED in 3 seconds
{
"alert": "",
"args": {},
"duration": 72,
"error": "A header you provided implies functionality that is not implemented",
"function": "testFunctionalV2()",
"message": "SetBucketPolicy failed",
"name": "minio-go",
"status": "fail"
}
```
Respective server side log
```
ERRO[0408] Unable to read bucket policy. cause=Policy not found source=[gateway-handlers.go:554:gatewayAPIHandlers.GetBucketPolicyHandler()] stack=gateway-azure.go:853:(*azureObjects).GetBucketPolicies gateway-handlers.go:552:gatewayAPIHandlers.GetBucketPolicyHandler gateway-router.go:97:(gatewayAPIHandlers).GetBucketPolicyHandler-fm
ERRO[0466] Unable to validate content-md5 format. cause=illegal base64 data at input byte 4 source=[gateway-handlers.go:205:gatewayAPIHandlers.PutObjectHandler()]
ERRO[0468] {"method":"PUT","reqURI":"/awscli-mint-test-bucket-4052/datafile-1-MB","header":{"Accept-Encoding":["identity"],"Authorization":["AWS4-HMAC-SHA256 Credential=minioazure1/20171021/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;host;x-amz-content-sha256;x-amz-date, Signature=3ccf742f5979dca00548332ace4b914c5ddf309d70d540cbaed2c79dcd19e396"],"Content-Length":["1048576"],"Content-Md5":["/jCUCgXnG9eMBe+0n5jvOw=="],"Expect":["100-continue"],"Host":["10.99.185.131:9000"],"User-Agent":["aws-cli/1.11.112 Python/3.5.2 Linux/4.10.0-24-generic botocore/1.5.75"],"X-Amz-Content-Sha256":["194fda391f171a59875c3917110e0be72d9ac3d1abc6fbefaebede244e53b5f9"],"X-Amz-Date":["20171021T153455Z"]}} cause=Signature does not match source=[gateway-handlers.go:295:gatewayAPIHandlers.PutObjectHandler()]
ERRO[0502] Unable to read bucket policy. cause=Policy not found source=[gateway-handlers.go:554:gatewayAPIHandlers.GetBucketPolicyHandler()] stack=gateway-azure.go:853:(*azureObjects).GetBucketPolicies gateway-handlers.go:552:gatewayAPIHandlers.GetBucketPolicyHandler gateway-router.go:97:(gatewayAPIHandlers).GetBucketPolicyHandler-fm
```
|
priority
|
minio go test fails on azure gateway while running mint against azure gateway fails with below error running minio go tests failed in seconds alert args duration error a header you provided implies functionality that is not implemented function message setbucketpolicy failed name minio go status fail respective server side log erro unable to read bucket policy cause policy not found source stack gateway azure go azureobjects getbucketpolicies gateway handlers go gatewayapihandlers getbucketpolicyhandler gateway router go gatewayapihandlers getbucketpolicyhandler fm erro unable to validate content format cause illegal data at input byte source erro method put requri awscli mint test bucket datafile mb header accept encoding authorization content length content expect host user agent x amz content x amz date cause signature does not match source erro unable to read bucket policy cause policy not found source stack gateway azure go azureobjects getbucketpolicies gateway handlers go gatewayapihandlers getbucketpolicyhandler gateway router go gatewayapihandlers getbucketpolicyhandler fm
| 1
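The "illegal base64 data" failure logged in the record above points at a malformed Content-MD5 header: S3-compatible APIs expect the base64 encoding of the raw 16-byte MD5 digest of the request body, not a hex digest or any other encoding. A minimal sketch of computing a well-formed value (illustrative only, not part of the mint suite):

```python
import base64
import hashlib

def content_md5(payload: bytes) -> str:
    """Value expected in the Content-MD5 header by S3-compatible APIs:
    base64 of the raw 16-byte MD5 digest of the request body."""
    digest = hashlib.md5(payload).digest()
    return base64.b64encode(digest).decode("ascii")
```

Sending the hex digest (or any other encoding of the checksum) in that header is one way to provoke a server-side "illegal base64 data" error like the one in the log.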
|
711,071
| 24,449,563,460
|
IssuesEvent
|
2022-10-06 21:15:31
|
MSRevive/MasterSwordRebirth
|
https://api.github.com/repos/MSRevive/MasterSwordRebirth
|
closed
|
msarea_monsterspawn lags the server
|
bug 🐛 high priority
|
It seems ``msarea_monsterspawn`` can lag the server when spawning mobs. This is probably what is causing this issue https://github.com/MSRevive/MasterSwordRebirth/issues/54
|
1.0
|
msarea_monsterspawn lags the server - It seems ``msarea_monsterspawn`` can lag the server when spawning mobs. This is probably what is causing this issue https://github.com/MSRevive/MasterSwordRebirth/issues/54
|
priority
|
msarea monsterspawn lags the server it seems msarea monsterspawn can lag the server when spawning mobs this is probably what is causing this issue
| 1
|
499,074
| 14,439,479,920
|
IssuesEvent
|
2020-12-07 14:26:31
|
hand-drawn-markers-polsl/marker-recognition
|
https://api.github.com/repos/hand-drawn-markers-polsl/marker-recognition
|
closed
|
Create pixel activation heatmap
|
enhancement priority: high
|
Depends on: #4, #6
*5.4.3 Visualizing heatmaps of class activation* of [Deep learning with Python](https://tanthiamhuat.files.wordpress.com/2018/03/deeplearningwithpython.pdf).
Most recent version of this example from [Keras website](https://keras.io/examples/vision/grad_cam/).
|
1.0
|
Create pixel activation heatmap - Depends on: #4, #6
*5.4.3 Visualizing heatmaps of class activation* of [Deep learning with Python](https://tanthiamhuat.files.wordpress.com/2018/03/deeplearningwithpython.pdf).
Most recent version of this example from [Keras website](https://keras.io/examples/vision/grad_cam/).
|
priority
|
create pixel activation heatmap depends on visualizing heatmaps of class activation of most recent version of this example from
| 1
|
824,806
| 31,224,475,486
|
IssuesEvent
|
2023-08-19 00:24:45
|
juno-fx/report
|
https://api.github.com/repos/juno-fx/report
|
opened
|
Mars: Sync Data to Atlas Bucket
|
high priority
|
We need to setup Mars to have a boto3 or s3sync way of getting the content in and out of the bucket
|
1.0
|
Mars: Sync Data to Atlas Bucket - We need to setup Mars to have a boto3 or s3sync way of getting the content in and out of the bucket
|
priority
|
mars sync data to atlas bucket we need to setup mars to have a or way of getting the content in and out of the bucket
| 1
|
219,444
| 7,342,382,200
|
IssuesEvent
|
2018-03-07 07:36:21
|
my-codeworks/fortnox-api
|
https://api.github.com/repos/my-codeworks/fortnox-api
|
closed
|
Email regex too strict
|
priority:high type:bug
|
We got a validation error from Bolagskraft on a valid email: kanal_75_ab-faktura@mail.unit4agresso.readsoftonline.com (https://app.bugsnag.com/your-codeworks/bolagskraft/errors/5a849e0b7482aa00182c686c for those with access).
We should look over the regex and find one that is more permissible in its matching.
|
1.0
|
Email regex too strict - We got a validation error from Bolagskraft on a valid email: kanal_75_ab-faktura@mail.unit4agresso.readsoftonline.com (https://app.bugsnag.com/your-codeworks/bolagskraft/errors/5a849e0b7482aa00182c686c for those with access).
We should look over the regex and find one that is more permissible in its matching.
|
priority
|
email regex too strict we got a validation error from bolagskraft on a valid email kanal ab faktura mail readsoftonline com for those with access we should look over the regex and find one that is more permissible in its matching
| 1
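The rejected address in the record above is perfectly valid; it merely combines underscores, digits, and several subdomain labels. One way to be "more permissible" is to validate only the gross shape of an address and leave deliverability to a confirmation step. A hedged sketch of such a check (not the gem's actual pattern):

```python
import re

# Deliberately permissive: exactly one "@", no whitespace, and a dotted
# domain part. Stricter patterns tend to reject real addresses like the
# one in the bug report sooner or later.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def valid_email(address: str) -> bool:
    """Cheap structural check; actual deliverability needs a real email."""
    return EMAIL_RE.match(address) is not None
```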
|
577,957
| 17,139,961,538
|
IssuesEvent
|
2021-07-13 08:31:52
|
returntocorp/semgrep
|
https://api.github.com/repos/returntocorp/semgrep
|
closed
|
[JS] Dictionary matching yield inconsistent results
|
bug lang:javascript priority:high user:external
|
**Describe the bug**
Matching JS dictionaries yield unexpected results, for example the pattern `{}` is equivalent to `{...}`, the pattern `{...$X}` matches anything and the pattern `{x: $X}` also matches any dictionary with that key.
Note that this is different to what happens in Python. (#1003 could be related)
**To Reproduce**
```js
a = {};
a = {...x};
a = {...x, ...y};
a = {...x, y: z};
a = {...x, z: 3, ...y}
a = {x: 1};
a = {x:1, y:2};
```
https://semgrep.dev/s/2bP8 (note that the test code gets overwritten when you load it up)
**Expected behavior**
- Pattern `{}` should only match the empty dictionary `{}`
- Pattern `{...$X}` should only match dictionaries of the form `{...anotherDict}`
- Pattern `{x:$X}` should only match dictionaries with just the key `x`
**What is the priority of the bug to you?**
- [x] P0: blocking your adoption of Semgrep or workflow
- [ ] P1: important to fix or quite annoying
- [ ] P2: regular bug that should get fixed
**Environment**
Semgrep Playground and 0.48.0 installed via homebrew.
|
1.0
|
[JS] Dictionary matching yield inconsistent results - **Describe the bug**
Matching JS dictionaries yield unexpected results, for example the pattern `{}` is equivalent to `{...}`, the pattern `{...$X}` matches anything and the pattern `{x: $X}` also matches any dictionary with that key.
Note that this is different to what happens in Python. (#1003 could be related)
**To Reproduce**
```js
a = {};
a = {...x};
a = {...x, ...y};
a = {...x, y: z};
a = {...x, z: 3, ...y}
a = {x: 1};
a = {x:1, y:2};
```
https://semgrep.dev/s/2bP8 (note that the test code gets overwritten when you load it up)
**Expected behavior**
- Pattern `{}` should only match the empty dictionary `{}`
- Pattern `{...$X}` should only match dictionaries of the form `{...anotherDict}`
- Pattern `{x:$X}` should only match dictionaries with just the key `x`
**What is the priority of the bug to you?**
- [x] P0: blocking your adoption of Semgrep or workflow
- [ ] P1: important to fix or quite annoying
- [ ] P2: regular bug that should get fixed
**Environment**
Semgrep Playground and 0.48.0 installed via homebrew.
|
priority
|
dictionary matching yield inconsistent results describe the bug matching js dictionaries yield unexpected results for example the pattern is equivalent to the pattern x matches anything and the pattern x x also matches any dictionary with that key note that this is different to what happens in python could be related to reproduce js a a x a x y a x y z a x z y a x a x y note that the test code gets overwritten when you load it up expected behavior pattern should only match the empty dictionary pattern x should only match dictionaries of the form anotherdict pattern x x should only match dictionaries with just the key x what is the priority of the bug to you blocking your adoption of semgrep or workflow important to fix or quite annoying regular bug that should get fixed environment semgrep playground and installed via homebrew
| 1
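The expected behavior listed in the record above amounts to exact matching unless an ellipsis (`...`) appears in the pattern. A toy Python sketch of that distinction, purely illustrative and not Semgrep's matcher:

```python
def matches(pattern: dict, target: dict, open_ended: bool = False) -> bool:
    """Toy dict matcher: every pattern key/value must be present in the
    target, and unless open_ended (the `...` ellipsis) is set, the target
    may not carry any extra keys."""
    for key, value in pattern.items():
        if key not in target or target[key] != value:
            return False
    return open_ended or set(target) == set(pattern)
```

Under these semantics `{}` matches only the empty dict, while `{..., x: 1}` style patterns (here, `open_ended=True`) also match dicts carrying additional keys, which is the behavior the report asks for.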
|
716,960
| 24,654,741,005
|
IssuesEvent
|
2022-10-17 22:02:03
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Add PMs to Recent Topics
|
in progress priority: high area: recent-topics release goal
|
At present, the Recent Topics does not display private messages. This means it can't be used as an all-encompassing overview of everything that's happening for a user.
We should add private messages to Recent Topics, with filter settings to show private messages or not, and to show stream messages or not. The filters are necessary, as some users prefer not to see PMs in Recent Topics. Blocker: filter drop-down selector for Recent Topics (#19445).
This change also helps unblock future work to make a Recent Topics-style display available for all message views.
Design details:
1. Change `Stream` and `Topic` table headers to a single `Conversation` header.
2. Since we won't have a `Topic` column header, change the visuals for Stream/Topic rows to use message header styling to separate Stream from Topic:
<img width="256" alt="Screen Shot 2021-07-30 at 4 46 15 PM" src="https://user-images.githubusercontent.com/2090066/127721541-3ce840d2-7176-480d-8d0a-73c5833b5a50.png">
3. For PMs, display the names of the members of the conversation in place of the stream/topic.
4. No mute button for group PMs, since that's not an option. I'm not sure we should show a mute button for 1:1 PMs either, as it's a rare action. If we do show it, it should be an eye-slash, not a bell-slash, to match the icon we use elsewhere.
|
1.0
|
Add PMs to Recent Topics - At present, the Recent Topics does not display private messages. This means it can't be used as an all-encompassing overview of everything that's happening for a user.
We should add private messages to Recent Topics, with filter settings to show private messages or not, and to show stream messages or not. The filters are necessary, as some users prefer not to see PMs in Recent Topics. Blocker: filter drop-down selector for Recent Topics (#19445).
This change also helps unblock future work to make a Recent Topics-style display available for all message views.
Design details:
1. Change `Stream` and `Topic` table headers to a single `Conversation` header.
2. Since we won't have a `Topic` column header, change the visuals for Stream/Topic rows to use message header styling to separate Stream from Topic:
<img width="256" alt="Screen Shot 2021-07-30 at 4 46 15 PM" src="https://user-images.githubusercontent.com/2090066/127721541-3ce840d2-7176-480d-8d0a-73c5833b5a50.png">
3. For PMs, display the names of the members of the conversation in place of the stream/topic.
4. No mute button for group PMs, since that's not an option. I'm not sure we should show a mute button for 1:1 PMs either, as it's a rare action. If we do show it, it should be an eye-slash, not a bell-slash, to match the icon we use elsewhere.
|
priority
|
add pms to recent topics at present the recent topics does not display private messages this means it can t be used as an all encompassing overview of everything that s happening for a user we should add private messages to recent topics with filter settings to show private messages or not and to show stream messages or not the filters are necessary as some users prefer not to see pms in recent topics blocker filter drop down selector for recent topics this change also helps unblock future work to make a recent topics style display available for all message views design details change stream and topic table headers to a single conversation header since we won t have a topic column header change the visuals for stream topic rows to use message header styling to separate stream from topic img width alt screen shot at pm src for pms display the names of the members of the conversation in place of the stream topic no mute button for group pms since that s not an option i m not sure we should show a mute button for pms either as it s a rare action if we do show it it should be an eye slash not a bell slash to match the icon we use elsewhere
| 1
|
786,852
| 27,696,189,912
|
IssuesEvent
|
2023-03-14 02:30:34
|
nci-hcmi-catalog/portal
|
https://api.github.com/repos/nci-hcmi-catalog/portal
|
closed
|
Provide updates to terms as needed, dictionary edits facilitated by CMS manager
|
Priority: High
|
I am honestly a little unsure of exactly how the attached file makes it clear what is needed; unless @mistryrn knows, we'll have to follow up with Eva to clarify.
[SC molecular markers_ETC_20220418.xlsx](https://github.com/nci-hcmi-catalog/portal/files/9853853/SC.molecular.markers_ETC_20220418.xlsx)
|
1.0
|
Provide updates to terms as needed, dictionary edits facilitated by CMS manager - I am honestly a little unsure of exactly how the attached file makes it clear what is needed; unless @mistryrn knows, we'll have to follow up with Eva to clarify.
[SC molecular markers_ETC_20220418.xlsx](https://github.com/nci-hcmi-catalog/portal/files/9853853/SC.molecular.markers_ETC_20220418.xlsx)
|
priority
|
provide updates to terms as needed dictionary edits facilitated by cms manager i am honestly a little unsure of exactly how the attached file makes it clear what is needed unless mistryrn knows we ll have to follow up with eva to clarify
| 1
|
335,605
| 10,163,900,048
|
IssuesEvent
|
2019-08-07 10:18:58
|
AbsaOSS/enceladus
|
https://api.github.com/repos/AbsaOSS/enceladus
|
closed
|
Duplicated run objects after standardization
|
Standardization bug priority: high
|
## Describe the bug
After standardization two run documents are created with the same start time and sequential run ids:
- the first run (run id = i) document contains the checkpoints from the _INFO file only and has a status "running"
- the second run (run id = i+1 ) document contains the proper data and checkpoints
The problem dates back at least to August 1st 2019
## To Reproduce
- The problem can be observed in Menas UI at asgard in 25600cols dataset (dataset -> runs)
- To reproduce the problem, execute standardization and check:
dataset -> runs in Menas
or
the documents in the run collection in the mongodb
## Expected behaviour
The first object is incorrect. Only the second object containing the proper checkpoint data and statuses should be created.
## Screenshots

|
1.0
|
Duplicated run objects after standardization - ## Describe the bug
After standardization two run documents are created with the same start time and sequential run ids:
- the first run (run id = i) document contains the checkpoints from the _INFO file only and has a status "running"
- the second run (run id = i+1 ) document contains the proper data and checkpoints
The problem dates back at least to August 1st 2019
## To Reproduce
- The problem can be observed in Menas UI at asgard in 25600cols dataset (dataset -> runs)
- To reproduce the problem, execute standardization and check:
dataset -> runs in Menas
or
the documents in the run collection in the mongodb
## Expected behaviour
The first object is incorrect. Only the second object containing the proper checkpoint data and statuses should be created.
## Screenshots

|
priority
|
duplicated run objects after standardization describe the bug after standardization two run documents are created with the same start time and sequential run ids the first run run id i document contains the checkpoints from the info file only and has a status running the second run run id i document contains the proper data and checkpoints the problem dates back at leas to august to reproduce the problem can be observed in menas ui at asgard in dataset dataset runs to reproduce the problem execute standardization and check dataset runs in menas or the documents in the run collection in the mongodb expected behaviour the first object is incorrect only the second object containing the proper checkpoint data and statuses should be created screenshots
| 1
|
237,060
| 7,755,320,631
|
IssuesEvent
|
2018-05-31 09:47:31
|
zeit/next.js
|
https://api.github.com/repos/zeit/next.js
|
closed
|
Remove `react-hot-loader`
|
Priority: High
|
It causes way more issues than it fixes.
It's a matter of removing it from babel options
```js
plugins: [dev && !isServer && hotLoaderItem, dev && reactJsxSourceItem].filter(Boolean)
```
becomes
```js
plugins: [dev && reactJsxSourceItem].filter(Boolean)
```
|
1.0
|
Remove `react-hot-loader` - It causes way more issues than it fixes.
It's a matter of removing it from babel options
```js
plugins: [dev && !isServer && hotLoaderItem, dev && reactJsxSourceItem].filter(Boolean)
```
becomes
```js
plugins: [dev && reactJsxSourceItem].filter(Boolean)
```
|
priority
|
remove react hot loader it causes way more issues than it fixes it s a matter of removing it from babel options js plugins filter boolean becomes js plugins filter boolean
| 1
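The `filter(Boolean)` idiom in the record above keeps only truthy entries, so conditionally disabled plugins (the `dev && ...` expressions that evaluate to `false`) drop out of the list. The same pattern in Python, with a hypothetical plugin name standing in for the real Babel item:

```python
def build_plugins(dev):
    """Build a plugin list with conditionally included entries, then drop
    the falsy placeholders, mirroring JavaScript's `.filter(Boolean)`."""
    react_jsx_source = "babel-plugin-transform-react-jsx-source"  # placeholder name
    candidates = [react_jsx_source if dev else None]
    return [plugin for plugin in candidates if plugin]
```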
|
577,829
| 17,136,359,176
|
IssuesEvent
|
2021-07-13 03:05:02
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
closed
|
Checking out with free trial fails when card requires authentication
|
priority: high type: bug
|
Error on checkout when setup intent requires action. To reproduce:
- Put only subscription product(s) with free trial into cart
- Check out with 4000002500003155
Shortcode:
<img width="815" src="https://user-images.githubusercontent.com/1867547/123555559-7f6ce600-d754-11eb-968f-20e35f7658de.png">
Block:
<img width="815" src="https://user-images.githubusercontent.com/1867547/123555557-7da32280-d754-11eb-8172-cad46bbeb1ca.png">
|
1.0
|
Checking out with free trial fails when card requires authentication - Error on checkout when setup intent requires action. To reproduce:
- Put only subscription product(s) with free trial into cart
- Check out with 4000002500003155
Shortcode:
<img width="815" src="https://user-images.githubusercontent.com/1867547/123555559-7f6ce600-d754-11eb-968f-20e35f7658de.png">
Block:
<img width="815" src="https://user-images.githubusercontent.com/1867547/123555557-7da32280-d754-11eb-8172-cad46bbeb1ca.png">
|
priority
|
checking out with free trial fails when card requires authentication error on checkout when setup intent requires action to reproduce put only subscription product s with free trial into cart check out with shortcode img width src block img width src
| 1
|
497,074
| 14,361,672,809
|
IssuesEvent
|
2020-11-30 18:37:14
|
ChainSafe/gossamer
|
https://api.github.com/repos/ChainSafe/gossamer
|
opened
|
update block verifier to track changes by epoch
|
Priority: 2 - High Type: Enhancement
|
<!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- the block verifier currently tracks changes based on block number
- this is ok for `OnDisabled` changes (see spec 6.1.2) but all the other potential parameter changes happen only at epoch changes
- the verifier should be updated to keep a list of epoch changes
- the data to be stored should be: `Next Epoch Data` which contains the updated BABE authority set and randomness; and `Next Config Data` which contains c_numerator, c_denominator (used for threshold calculation) and secondary slot boolean (currently not used by gossamer)
- the verifier interface should look something like:
```
type Verifier interface {
SetNextConfigData(epoch uint64, data *NextConfigData)
SetNextEpochData(epoch uint64, data *NextEpochData)
}
```
- need to implement `NextConfigData` and `NextEpochData` types
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- changes tracked by block
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [ ] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [ ] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
|
1.0
|
update block verifier to track changes by epoch - <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
<!---
If you're describing a bug, tell us what should happen.
If you're suggesting a change/improvement, tell us how it should work.
-->
- the block verifier currently tracks changes based on block number
- this is ok for `OnDisabled` changes (see spec 6.1.2) but all the other potential parameter changes happen only at epoch changes
- the verifier should be updated to keep a list of epoch changes
- the data to be stored should be: `Next Epoch Data` which contains the updated BABE authority set and randomness; and `Next Config Data` which contains c_numerator, c_denominator (used for threshold calculation) and secondary slot boolean (currently not used by gossamer)
- the verifier interface should look something like:
```
type Verifier interface {
SetNextConfigData(epoch uint64, data *NextConfigData)
SetNextEpochData(epoch uint64, data *NextEpochData)
}
```
- need to implement `NextConfigData` and `NextEpochData` types
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
- changes tracked by block
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [ ] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [ ] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
|
priority
|
update block verifier to track changes by epoch please read carefully expected behavior if you re describing a bug tell us what should happen if you re suggesting a change improvement tell us how it should work the block verifier currently tracks changes based on block number this is ok for ondisabled changes see spec but all the other potential parameter changes happen only at epoch changes the verifier should be updated to keep a list of epoch changes the data to be stored should be next epoch data which contains the updated babe authority set and randomness and next config data which contains c numerator c denominator used for threshold calculation and secondary slot boolean currently not used by gossamer the verifier interface should look something like type verifier interface setnextconfigdata epoch data nextconfigdata setnextepochdata epoch data nextepochdata need to implement nextconfigdata and nextepochdata types current behavior if describing a bug tell us what happens instead of the expected behavior if suggesting a change or an improvement explain the difference between your suggestion and current behavior changes tracked by block checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read and i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself
| 1
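The proposed Go interface in the record above boils down to two epoch-keyed stores plus whatever derived values (such as the slot-leader threshold input built from c_numerator and c_denominator) the verifier computes per epoch. A minimal Python sketch under those assumptions; field names follow the issue text, not the actual gossamer types:

```python
from dataclasses import dataclass

@dataclass
class NextEpochData:
    authorities: list   # updated BABE authority set for the coming epoch
    randomness: bytes

@dataclass
class NextConfigData:
    c_numerator: int       # c = c_numerator / c_denominator
    c_denominator: int
    secondary_slots: bool  # currently unused by gossamer, per the issue

class Verifier:
    """Tracks upcoming BABE parameter changes keyed by epoch, not block."""

    def __init__(self):
        self._epoch_data = {}   # epoch -> NextEpochData
        self._config_data = {}  # epoch -> NextConfigData

    def set_next_epoch_data(self, epoch, data):
        self._epoch_data[epoch] = data

    def set_next_config_data(self, epoch, data):
        self._config_data[epoch] = data

    def threshold_ratio(self, epoch):
        # The c constant that feeds the slot-leader threshold calculation.
        cfg = self._config_data[epoch]
        return cfg.c_numerator / cfg.c_denominator
```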
|
164,318
| 6,224,127,601
|
IssuesEvent
|
2017-07-10 13:38:45
|
Prospress/woocommerce-subscribe-all-the-things
|
https://api.github.com/repos/Prospress/woocommerce-subscribe-all-the-things
|
opened
|
[2.0] Review runtime meta - function names & keys
|
housekeeping priority:high refactor request scale:tiny
|
See https://github.com/Prospress/woocommerce-subscribe-all-the-things/pull/136#discussion_r123648135 and https://github.com/Prospress/woocommerce-subscribe-all-the-things/pull/136#discussion_r123646941
Also thought it might be best to use a single meta key to store all object-specific "runtime data" for SATT, e.g. `_satt_runtime_data`.
|
1.0
|
[2.0] Review runtime meta - function names & keys - See https://github.com/Prospress/woocommerce-subscribe-all-the-things/pull/136#discussion_r123648135 and https://github.com/Prospress/woocommerce-subscribe-all-the-things/pull/136#discussion_r123646941
Also thought it might be best to use a single meta key to store all object-specific "runtime data" for SATT, e.g. `_satt_runtime_data`.
|
priority
|
review runtime meta function names keys see and also thought it might be best to use a single meta key to store all object specific runtime data for satt e g satt runtime data
| 1
|
353,521
| 10,553,638,589
|
IssuesEvent
|
2019-10-03 17:40:16
|
crytic/echidna
|
https://api.github.com/repos/crytic/echidna
|
closed
|
ABIEncoderV2 bug encoding structures as parameters?
|
bug high-priority question
|
Echidna fails to execute `f` from `C` (the property will never fail):
```
pragma experimental ABIEncoderV2;
library Lib {
struct Struct {
bytes bs;
}
}
contract C {
using Lib for Lib.Struct;
bool state = true;
function f(Lib.Struct memory o1, Lib.Struct memory o2) public {
state = false;
}
function echidna_state() public returns (bool) {
return state;
}
}
```
If you remove either `o1` or `o2` from the parameter list, it will work. I suspect this is a bug in the ABIEncoderV2 encoding.
|
1.0
|
ABIEncoderV2 bug encoding structures as parameters? - Echidna fails to execute `f` from `C` (the property will never fail):
```
pragma experimental ABIEncoderV2;
library Lib {
struct Struct {
bytes bs;
}
}
contract C {
using Lib for Lib.Struct;
bool state = true;
function f(Lib.Struct memory o1, Lib.Struct memory o2) public {
state = false;
}
function echidna_state() public returns (bool) {
return state;
}
}
```
If you remove either `o1` or `o2` from the parameter list, it will work. I suspect this is a bug in the ABIEncoderV2 encoding.
|
priority
|
bug encoding structures as parameters echidna fails to execute f from c the property will never fail pragma experimental library lib struct struct bytes bs contract c using lib for lib struct bool state true function f lib struct memory lib struct memory public state false function echidna state public returns bool return state if you remove either or from the parameter list it will work i suspect this is bug in the encoding
| 1
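The report above concerns ABIEncoderV2's head/tail layout for dynamic arguments: each head slot stores an offset that must point past *all* heads, which is precisely where encoding two dynamic struct parameters can diverge from encoding one. A rough Python illustration of that layout for two dynamic `bytes` values (a teaching sketch, not the actual Solidity codec):

```python
def encode_bytes(b: bytes) -> bytes:
    # dynamic `bytes`: a 32-byte length word, then the data padded to a
    # multiple of 32 bytes
    padded = b + b"\x00" * ((32 - len(b) % 32) % 32)
    return len(b).to_bytes(32, "big") + padded


def encode_dynamic_args(args):
    # head/tail encoding: one 32-byte head slot per argument, each holding
    # the offset of that argument's tail, measured from the start of the
    # argument block; tails follow after all heads
    heads, tails = [], []
    offset = 32 * len(args)  # first tail starts right after the heads
    for a in args:
        tail = encode_bytes(a)
        heads.append(offset.to_bytes(32, "big"))
        tails.append(tail)
        offset += len(tail)
    return b"".join(heads) + b"".join(tails)
```

With two arguments the second head must account for the first tail's length; an encoder that gets this bookkeeping wrong works with one parameter and fails with two, matching the symptom in the report.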
|
48,612
| 2,998,933,804
|
IssuesEvent
|
2015-07-23 16:24:34
|
FlatBallFlyer/IBM-Data-Merge-Utility
|
https://api.github.com/repos/FlatBallFlyer/IBM-Data-Merge-Utility
|
opened
|
Implement Pool initialization in idmu-war
|
High Priority
|
Add connection pool initialization to the initialize servlet, create a pool for each configured data source, and use addConnectionPool(name,pool) to load these to the TemplateFactory
|
1.0
|
Implement Pool initialization in idmu-war - Add connection pool initialization to the initialize servlet, create a pool for each configured data source, and use addConnectionPool(name,pool) to load these to the TemplateFactory
|
priority
|
implement pool initialization in idmu war add connection pool initialization to the initialize servlet create a pool for each configured data source and use addconnectionpool name pool to load these to the templatefactory
| 1
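The initialization step above (one pool per configured data source, registered via `addConnectionPool(name, pool)`) amounts to a name-keyed registry; a hypothetical Python sketch, with the pool represented as a plain dict:

```python
class TemplateFactory:
    """Minimal registry of named connection pools (illustrative only)."""

    def __init__(self):
        self._pools = {}

    def add_connection_pool(self, name, pool):
        self._pools[name] = pool

    def get_connection_pool(self, name):
        return self._pools[name]


def initialize(factory, data_sources):
    # create a pool for each configured data source and register it by name
    for name, config in data_sources.items():
        factory.add_connection_pool(name, {"config": config, "connections": []})
```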
|
491,948
| 14,174,303,103
|
IssuesEvent
|
2020-11-12 19:41:23
|
re-vault/practical-revault
|
https://api.github.com/repos/re-vault/practical-revault
|
closed
|
It's trivial for the sync server to know all the onchain transactions
|
Brainstorming High priority
|
We can either:
- Accept it, and wait for full noise channels across the sync server in Revault 2.0
- Therefore the txid obfuscation is meaningless and should be removed
- Fix it
- But is [the known leak](https://github.com/re-vault/practical-revault/blob/master/messages.md#sig-1) the only one ?
|
1.0
|
It's trivial for the sync server to know all the onchain transactions - We can either:
- Accept it, and wait for full noise channels across the sync server in Revault 2.0
- Therefore the txid obfuscation is meaningless and should be removed
- Fix it
- But is [the known leak](https://github.com/re-vault/practical-revault/blob/master/messages.md#sig-1) the only one ?
|
priority
|
it s trivial for the sync server to know all the onchain transactions we can either accept it and wait for full noise channels across the sync server in revault therefore the txid obfuscation is meaningless and should be removed fixed it but is the only one
| 1
|
265,701
| 8,357,778,353
|
IssuesEvent
|
2018-10-02 22:57:51
|
bluek8s/kubedirector
|
https://api.github.com/repos/bluek8s/kubedirector
|
opened
|
implement app config choices
|
Priority: High Project: App Model Project: Cluster Model Type: Enhancement
|
Inside the config object, define deploy-time choices and the available selections for each choice. For each selection, optionally define another config to apply when that selection is chosen (activate more roles and more services on roles). That config in turn can define dependent choices etc.
One problem: schema validation of such nested configs REALLY needs support for schema references, which K8s doesn't support yet (and might never). Obviously we could just dynamically validate it, but maybe there's an alternate representation that would be better.
|
1.0
|
implement app config choices - Inside the config object, define deploy-time choices and the available selections for each choice. For each selection, optionally define another config to apply when that selection is chosen (activate more roles and more services on roles). That config in turn can define dependent choices etc.
One problem: schema validation of such nested configs REALLY needs support for schema references, which K8s doesn't support yet (and might never). Obviously we could just dynamically validate it, but maybe there's an alternate representation that would be better.
|
priority
|
implement app config choices inside the config object define deploy time choices and the available selections for each choice for each selection optionally define another config to apply when that selection is chosen activate more roles and more services on roles that config in turn can define dependent choices etc one problem schema validation of such nested configs really needs support for schema references which doesn t support yet and might never obviously we could just dynamically validate it but maybe there s an alternate representation that would be better
| 1
|
125,166
| 4,953,412,083
|
IssuesEvent
|
2016-12-01 15:01:45
|
sdmx-twg/sdmx-vtl
|
https://api.github.com/repos/sdmx-twg/sdmx-vtl
|
opened
|
Boolean operators issues
|
high priority
|
from|No|doc|UM|RM|page/line
----|----|----|----|----|-----
DI|2|RM||85|Boolean operators and functions p. 85-89
DI|3|RM||85|Boolean operators and functions (all)
DI|5|RM||85|Boolean operators and functions (all)
DI|6|RM||85|Boolean operators and functions (in general, possibly in other sections)
DI|7|RM||85|Boolean operators and functions (in general, possibly in other sections)
The behaviour of the relational operators (=, <>, >, >=, etc.) on single-measure and multi-measure data sets is ambiguous. The two presented alternatives for creating the 'condition' and 'xyz_condition' components in the resulting dataset overlap because they are not mutually exclusive. The description is repeated and scattered in many places in the RefMan, which makes it difficult to keep consistent and maintain.
-> Unambiguously define the behaviour of the relational operators on single- and multi-measure datasets.
The behaviour of various Boolean operators on datasets that create 'condition' or 'xyz_condition' in the resulting dataset is not expressible using the core mechanisms: the join expressions and the scalar Boolean operators.
-> Express the behaviour of various Boolean operators applied to datasets using (and make sure it is consistent with) the core mechanisms: scalar relational operators and join expressions.
Operators whose name starts with not are not consistently named and interpreted. Some of them (such as not_exists_in) have an underscore after not, and some do not have an underscore. Therefore the syntax does not follow the same logic and may be difficult to unambiguously interpret.
-> Make the systematic use of not explicitly defined for all operators with the following syntactic forms:The form A not OP B (where OP is a binary operator other than is) is always interpreted as: not(A OP B).Example: A not exists_in B is interpreted by the parser as not (A exists_in B),The form A is not B is equivalent to:not(A is B),The form A not between C and D is equivalent to:
The description of dataset parameters usually starts with the dataset keyword followed by one or more groups enclosed in { .. .}* or {…}+. However, these curly braces seem to be meta-syntax for repetition, while the mandatory opening and closing curly braces are omitted by mistake.
-> Correct the error.
In many cases the description of parameters gives two cases: parameters are datasets (followed with a dataset and a structure specification), and the other is worded as "are input Dataset or Boolean scalars." It is unclear what distinguishes a dataset (the first case) and an input dataset (the second case).
-> Clarify the meaning and put the correct wording.
|
1.0
|
Boolean operators issues - from|No|doc|UM|RM|page/line
----|----|----|----|----|-----
DI|2|RM||85|Boolean operators and functions p. 85-89
DI|3|RM||85|Boolean operators and functions (all)
DI|5|RM||85|Boolean operators and functions (all)
DI|6|RM||85|Boolean operators and functions (in general, possibly in other sections)
DI|7|RM||85|Boolean operators and functions (in general, possibly in other sections)
The behaviour of the relational operators (=, <>, >, >=, etc.) on single-measure and multi-measure data sets is ambiguous. The two presented alternatives for creating the 'condition' and 'xyz_condition' components in the resulting dataset overlap because they are not mutually exclusive. The description is repeated and scattered in many places in the RefMan, which makes it difficult to keep consistent and maintain.
-> Unambiguously define the behaviour of the relational operators on single- and multi-measure datasets.
The behaviour of various Boolean operators on datasets that create 'condition' or 'xyz_condition' in the resulting dataset is not expressible using the core mechanisms: the join expressions and the scalar Boolean operators.
-> Express the behaviour of various Boolean operators applied to datasets using (and make sure it is consistent with) the core mechanisms: scalar relational operators and join expressions.
Operators whose name starts with not are not consistently named and interpreted. Some of them (such as not_exists_in) have an underscore after not, and some do not have an underscore. Therefore the syntax does not follow the same logic and may be difficult to unambiguously interpret.
-> Make the systematic use of not explicitly defined for all operators with the following syntactic forms:The form A not OP B (where OP is a binary operator other than is) is always interpreted as: not(A OP B).Example: A not exists_in B is interpreted by the parser as not (A exists_in B),The form A is not B is equivalent to:not(A is B),The form A not between C and D is equivalent to:
The description of dataset parameters usually starts with the dataset keyword followed by one or more groups enclosed in { .. .}* or {…}+. However, these curly braces seem to be meta-syntax for repetition, while the mandatory opening and closing curly braces are omitted by mistake.
-> Correct the error.
In many cases the description of parameters gives two cases: parameters are datasets (followed with a dataset and a structure specification), and the other is worded as "are input Dataset or Boolean scalars." It is unclear what distinguishes a dataset (the first case) and an input dataset (the second case).
-> Clarify the meaning and put the correct wording.
|
priority
|
boolean operators issues from no doc um rm page line di rm boolean operators and functions p di rm boolean operators and functions all di rm boolean operators and functions all di rm boolean operators and functions in general possibly in other sections di rm boolean operators and functions in general possibly in other sections the behaviour of the relational operators etc on single measure and multi measure data sets is ambiguous the two presented alternatives for creating the condition and xyz condition components in the resulting dataset overlap because they are not mutually exclusive the description is repeated and scattered in many places in the refman which makes it difficult to keep consistent and maintain unambiguously define the behaviour of the relational operators on single and multi measure datasets the behaviour of various boolean operators on datasets that create condition or xyz condition in the resulting dataset is not expressible using the core mechanisms the join expressions and the scalar boolean operators express the behaviour of various boolean operators applied to datasets using and make sure it is consistent with the core mechanisms scalar relational operators and join expressions operators whose name starts with not are not consistently named and interpreted some of them such as not exists in have an underscore after not and some doan underscore therefore the syntax does not follow the same logic and may be difficult to unambiguously interpret make the systematic use of not explicitly defined for all operators with the following syntactic forms the form a not op b where op is a binary operator other than is is always interpreted as not a op b example a not exists in b is interpreted by the parser as not a exists in b the form a is not b is equivalent to not a is b the form a not between c and d is equivalent to the description of dataset parameters usually starts with the dataset keyword followed by one or more groups enclosed in or … however these curly braces seem to be meta syntax for repetition while the mandatory opening and closing curly braces are omitted by mistake correct the error in many cases the the description of parameters gives two cases parameters are datasets followed with a dataset and a structure specification and the other worded as are input dataset or boolean scalars it is unclear what distinguishes a dataset the first case and an input dataset the second case clarify the meaning and put the correct wording
| 1
|
93,562
| 3,905,098,234
|
IssuesEvent
|
2016-04-19 00:22:39
|
Maroski/VRProject
|
https://api.github.com/repos/Maroski/VRProject
|
opened
|
Implement Jumping
|
Priority - High
|
Implement a simple jumping mechanism which can be added easily into the world.
|
1.0
|
Implement Jumping - Implement a simple jumping mechanism which can be added easily into the world.
|
priority
|
implement jumping implement a simple jumping mechanism which can be added easily into the world
| 1
|
221,802
| 7,396,434,834
|
IssuesEvent
|
2018-03-18 12:23:50
|
DistrictDataLabs/yellowbrick
|
https://api.github.com/repos/DistrictDataLabs/yellowbrick
|
closed
|
tSNE Value Error when no classes are specified
|
level: novice priority: high review type: bug
|
See [yellowbrick t-SNE fit raises ValueError](https://stackoverflow.com/questions/48950135/yellowbrick-t-sne-fit-raises-valueerror):
The issue has to do with the way numpy arrays are evaluated in 1.13 - we need to change [yellowbrick.text.tsne Line 256](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/text/tsne.py#L256) to `if y is not None and self.classes_ is None`
code to produce:
```python
import pandas as pd
from yellowbrick.text import TSNEVisualizer
from sklearn.datasets import make_classification
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
n_informative=20, n_redundant=10,
n_classes=3, random_state=42)
## visualize data with t-SNE
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.poof()
```
error raised:
```
Traceback (most recent call last):
File "t.py", line 12, in <module>
tsne.fit(X, y)
File "/Users/benjamin/Workspace/ddl/yellowbrick/yellowbrick/text/tsne.py", line 256, in fit
if y and self.classes_ is None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
|
1.0
|
tSNE Value Error when no classes are specified - See [yellowbrick t-SNE fit raises ValueError](https://stackoverflow.com/questions/48950135/yellowbrick-t-sne-fit-raises-valueerror):
The issue has to do with the way numpy arrays are evaluated in 1.13 - we need to change [yellowbrick.text.tsne Line 256](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/text/tsne.py#L256) to `if y is not None and self.classes_ is None`
code to produce:
```python
import pandas as pd
from yellowbrick.text import TSNEVisualizer
from sklearn.datasets import make_classification
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
n_informative=20, n_redundant=10,
n_classes=3, random_state=42)
## visualize data with t-SNE
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.poof()
```
error raised:
```
Traceback (most recent call last):
File "t.py", line 12, in <module>
tsne.fit(X, y)
File "/Users/benjamin/Workspace/ddl/yellowbrick/yellowbrick/text/tsne.py", line 256, in fit
if y and self.classes_ is None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
|
priority
|
tsne value error when no classes are specified see the issue has to do with the way numpy arrays are evaluated in we need to change to if y is not none and self classes is none code to produce python import pandas as pd from yellowbrick text import tsnevisualizer from sklearn datasets import make classification produce random data x y make classification n samples n features n informative n redundant n classes random state visualize data with t sne tsne tsnevisualizer tsne fit x y tsne poof error raised traceback most recent call last file t py line in tsne fit x y file users benjamin workspace ddl yellowbrick yellowbrick text tsne py line in fit if y and self classes is none valueerror the truth value of an array with more than one element is ambiguous use a any or a all
| 1
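The traceback above is the classic numpy pitfall of evaluating a multi-element array in boolean context; the sketch below reproduces it with a dependency-free stand-in class and shows the proposed `is not None` fix:

```python
class ArrayLike:
    """Stand-in mimicking numpy's ambiguous truth value for multi-element arrays."""

    def __init__(self, items):
        self.items = list(items)

    def __bool__(self):
        if len(self.items) > 1:
            raise ValueError(
                "The truth value of an array with more than one element "
                "is ambiguous. Use a.any() or a.all()")
        return bool(self.items and self.items[0])


def fit_check(y, classes):
    # buggy form: `if y and classes is None` -> ValueError when y has
    # more than one element; compare against None explicitly instead
    return y is not None and classes is None
```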
|
160,914
| 6,105,252,410
|
IssuesEvent
|
2017-06-20 23:08:50
|
redox-os/ion
|
https://api.github.com/repos/redox-os/ion
|
closed
|
Obtain Length of Array/String
|
enhancement high-priority
|
We need to come up with a syntax for obtaining the length of an array/string. Arrays are counted by elements, and strings are counted by graphemes. Perhaps this approach can be taken?
```ion
let length = $len(string)
let length = @len(array)
```
|
1.0
|
Obtain Length of Array/String - We need to come up with a syntax for obtaining the length of an array/string. Arrays are counted by elements, and strings are counted by graphemes. Perhaps this approach can be taken?
```ion
let length = $len(string)
let length = @len(array)
```
|
priority
|
obtain length of array string we need to come up with a syntax for obtaining the length of an array string arrays are counted by elements and strings are counted by graphemes perhaps this approach can be taken ion let length len string let length len array
| 1
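The distinction above (arrays counted by elements, strings by graphemes) can be illustrated in Python; the grapheme counter below is only a rough approximation that skips combining marks rather than doing full Unicode segmentation:

```python
import unicodedata


def array_len(arr):
    # arrays are counted by elements
    return len(arr)


def grapheme_len(s):
    # rough grapheme count: skip combining marks (approximation only,
    # not a full Unicode grapheme-cluster segmentation)
    return sum(1 for ch in s if not unicodedata.combining(ch))
```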
|
415,267
| 12,127,193,466
|
IssuesEvent
|
2020-04-22 18:18:16
|
Cycling74/min-api
|
https://api.github.com/repos/Cycling74/min-api
|
closed
|
Crashes with deferred messages
|
bug priority:high
|
Report from MR:
> These crashes occur on both platforms, but perhaps are more reproducible on Windows (?), for example when an external is used in a M4L patch within a rack. The error happens here:
```
void deferred_message::pop() {
deferred_message x;
m_owning_message->m_deferred_messages.try_dequeue(x);
x.m_owning_message->m_function(m_args, m_inlet);
}
```
> because after try_dequeue x can be null
I've also personally seen this in the context of the *rad.waveslice~* help patcher (a project on a local machine).
|
1.0
|
Crashes with deferred messages - Report from MR:
> These crashes occur on both platforms, but perhaps are more reproducible on Windows (?), for example when an external is used in a M4L patch within a rack. The error happens here:
```
void deferred_message::pop() {
deferred_message x;
m_owning_message->m_deferred_messages.try_dequeue(x);
x.m_owning_message->m_function(m_args, m_inlet);
}
```
> because after try_dequeue x can be null
I've also personally seen this in the context of the *rad.waveslice~* help patcher (a project on a local machine).
|
priority
|
crashes with deferred messages report from mr these crashes occur on both platforms but perhaps are more reproducible on windows for example when an external is used in a patch within a rack the error happens here void deferred message pop deferred message x m owning message m deferred messages try dequeue x x m owning message m function m args m inlet because after try dequeue x can be null i ve also personally seen this in the context of the rad waveslice help patcher a project on a local machine
| 1
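The crash above comes from using the dequeued value even when `try_dequeue` failed; a Python sketch of the guarded pattern (the queue type here is a stand-in for illustration, not the actual concurrent queue used by min-api):

```python
from collections import deque


class DeferredQueue:
    """Stand-in for a try-dequeue style queue (illustrative only)."""

    def __init__(self):
        self._q = deque()

    def push(self, msg):
        self._q.append(msg)

    def try_dequeue(self):
        # returns (success, item); the caller must check success
        if self._q:
            return True, self._q.popleft()
        return False, None


def pop_and_run(q):
    ok, msg = q.try_dequeue()
    if not ok or msg is None:  # the guard missing in the reported code
        return None
    return msg()
```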
|
639,195
| 20,748,635,499
|
IssuesEvent
|
2022-03-15 03:43:00
|
TencentBlueKing/bk-nodeman
|
https://api.github.com/repos/TencentBlueKing/bk-nodeman
|
closed
|
[BUG] Agent installation fails: 'NoneType' object has no attribute 'get'
|
kind/bug module/backend version/V2.2.X priority/high
|
**Problem description**
Agent installation fails: 'NoneType' object has no attribute 'get'
**Steps to reproduce**
Install an Agent; not always reproducible
```
Traceback (most recent call last):
File "/data/bkee/bknodeman/nodeman/apps/utils/exc.py", line 71, in wrapped_executor
return wrapped(*args, **kwargs)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/base.py", line 417, in execute
return self.run(self._execute, data, parent_data, common_data=common_data)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/base.py", line 334, in run
service_func(data, parent_data, **kwargs)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/agent_new/query_password.py", line 78, in _execute
oa_ticket = host.identity.extra_data.get("oa_ticket")
AttributeError: 'NoneType' object has no attribute 'get'
******** End of collected logs *********
```
**Please provide the following information**
- [x] bk-nodeman version (release number or git tag): <!-- `example: V3.1.32-ce or a git sha. Do not use phrases such as "latest version" or "current version" that cannot pinpoint a code version` -->
- [ ] BlueKing PaaS version: <!-- `<example: PaaS 3.0.58, PaaSAgent 3.0.9` -->
- [ ] bk-nodeman error logs:
|
1.0
|
[BUG] Agent installation fails: 'NoneType' object has no attribute 'get' - **Problem description**
Agent installation fails: 'NoneType' object has no attribute 'get'
**Steps to reproduce**
Install an Agent; not always reproducible
```
Traceback (most recent call last):
File "/data/bkee/bknodeman/nodeman/apps/utils/exc.py", line 71, in wrapped_executor
return wrapped(*args, **kwargs)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/base.py", line 417, in execute
return self.run(self._execute, data, parent_data, common_data=common_data)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/base.py", line 334, in run
service_func(data, parent_data, **kwargs)
File "/data/bkee/bknodeman/nodeman/apps/backend/components/collections/agent_new/query_password.py", line 78, in _execute
oa_ticket = host.identity.extra_data.get("oa_ticket")
AttributeError: 'NoneType' object has no attribute 'get'
******** End of collected logs *********
```
**Please provide the following information**
- [x] bk-nodeman version (release number or git tag): <!-- `example: V3.1.32-ce or a git sha. Do not use phrases such as "latest version" or "current version" that cannot pinpoint a code version` -->
- [ ] BlueKing PaaS version: <!-- `<example: PaaS 3.0.58, PaaSAgent 3.0.9` -->
- [ ] bk-nodeman error logs:
|
priority
|
agent 安装失败: nonetype object has no attribute get 问题描述 agent 安装失败: nonetype object has no attribute get 重现方法 安装 agent,非必现 traceback most recent call last file data bkee bknodeman nodeman apps utils exc py line in wrapped executor return wrapped args kwargs file data bkee bknodeman nodeman apps backend components collections base py line in execute return self run self execute data parent data common data common data file data bkee bknodeman nodeman apps backend components collections base py line in run service func data parent data kwargs file data bkee bknodeman nodeman apps backend components collections agent new query password py line in execute oa ticket host identity extra data get oa ticket attributeerror nonetype object has no attribute get end of collected logs 请提供以下信息 bk nodeman 版本 发布版本号 或 git tag : 蓝鲸paas 版本: bk nodeman 异常日志:
| 1
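The `AttributeError` above fires when `extra_data` is `None`; a common defensive pattern is to substitute an empty dict before calling `.get` (the helper below is hypothetical, with the field name taken from the traceback):

```python
def get_oa_ticket(extra_data):
    # extra_data may be None when the host identity was never enriched;
    # fall back to an empty dict so .get returns None instead of raising
    return (extra_data or {}).get("oa_ticket")
```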
|
570,082
| 17,018,455,357
|
IssuesEvent
|
2021-07-02 15:09:29
|
en-Barry/buildesk
|
https://api.github.com/repos/en-Barry/buildesk
|
closed
|
Fix: change the id included in the My Page path
|
Priority High
|
### Current state
- My Page is rendered with ``users#show``, so the path contains the user's id
- There is no suitable attribute to display in place of the id
### Goal
- Avoid exposing the user id in the URL
- When sharing the My Page URL, a link containing an identifier such as an account name looks better
|
1.0
|
Fix: change the id included in the My Page path - ### Current state
- My Page is rendered with ``users#show``, so the path contains the user's id
- There is no suitable attribute to display in place of the id
### Goal
- Avoid exposing the user id in the URL
- When sharing the My Page URL, a link containing an identifier such as an account name looks better
|
priority
|
fix マイページのパスに含まれるidを変更する 現状 マイページを users show で表示しているため、パスにidが含まれている idの代わりに表示するものとして適した属性がない 実現したいこと urlのユーザーid丸見え状態を避けたい マイページのurlをシェアするときに、アカウント名のような識別子が入ったリンクの方が見栄えがいい
| 1
|
258,034
| 8,150,321,860
|
IssuesEvent
|
2018-08-22 12:38:38
|
blakeohare/crayon
|
https://api.github.com/repos/blakeohare/crayon
|
closed
|
Platform inconsistency: Integer division behaves differently in JavaScript.
|
High Priority bug
|
`x = 0;`
`y = (x - 1) / 2; // 0 when running from Windows command-line, something else in JavaScript`
Note that the constant expression `-1 / 2` is `0` on both platforms I tested.
|
1.0
|
Platform inconsistency: Integer division behaves differently in JavaScript. - `x = 0;`
`y = (x - 1) / 2; // 0 when running from Windows command-line, something else in JavaScript`
Note that the constant expression `-1 / 2` is `0` on both platforms I tested.
|
priority
|
platform inconsistency integer division behaves differently in javascript x y x when running from windows command line something else in javascript note that the constant expression is on both platforms i tested
| 1
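The inconsistency above is the floor-versus-truncate split in integer division of negative operands: a `Math.floor`-style port yields -1 for `(0 - 1) / 2`, while C#-style truncation yields 0. A Python sketch of the difference (Python's `//` floors; `math.trunc` truncates toward zero):

```python
import math


def c_style_div(a, b):
    # C/C#-style integer division truncates toward zero
    return math.trunc(a / b)


x = 0
floor_result = (x - 1) // 2           # floor division: -1
trunc_result = c_style_div(x - 1, 2)  # truncating division: 0
```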
|
131,862
| 5,166,437,914
|
IssuesEvent
|
2017-01-17 16:13:53
|
snaiperskaya96/test-import-repo
|
https://api.github.com/repos/snaiperskaya96/test-import-repo
|
opened
|
New task "product_category_snapshotter"
|
Accepted Enhancement High Priority
|
https://trello.com/c/AnufOqBr/477-new-task-product-category-snapshotter
We need to monitor and record changes in categories for each product that exists in BP daily. This should run once a day for each product and record the category that the product is in on that day.
---
Snapshotted data should go in a new table called `product_category_snapshots`. The table should consist of: id, product_id, category_id, snapshotted_on.
|
1.0
|
New task "product_category_snapshotter" - https://trello.com/c/AnufOqBr/477-new-task-product-category-snapshotter
We need to monitor and record changes in categories for each product that exists in BP daily. This should run once a day for each product and record the category that the product is in on that day.
---
Snapshotted data should go in a new table called `product_category_snapshots`. The table should consist of: id, product_id, category_id, snapshotted_on.
|
priority
|
new task product category snapshotter we need to monitor and record changes in categories for each product that exists in bp daily this should run once a day for each product and record the category that the product is in on that day snapshotted data should go in a new table called product category snapshots the table should consist of id product id category id snapshotted on
| 1
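The daily task above reduces to materializing one row per product into `product_category_snapshots`; a minimal sketch of that shape (column names are taken from the card, the function itself is an assumption):

```python
from datetime import date


def snapshot_product_categories(product_categories, snapshot_date=None):
    """Build product_category_snapshots rows: one per (product_id, category_id)."""
    day = (snapshot_date or date.today()).isoformat()
    return [
        {"product_id": pid, "category_id": cid, "snapshotted_on": day}
        for pid, cid in product_categories
    ]
```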
|
65,211
| 3,227,114,943
|
IssuesEvent
|
2015-10-10 22:05:10
|
sci-visus/visus-issues
|
https://api.github.com/repos/sci-visus/visus-issues
|
closed
|
mod_visus look for visus.config in $HOME prior to loading system default
|
MOD_VISUS Priority High
|
Now that we are creating shared installations of mod_visus on supercomputers we must enable the ability for users to provide their own visus.config.
|
1.0
|
mod_visus look for visus.config in $HOME prior to loading system default - Now that we are creating shared installations of mod_visus on supercomputers we must enable the ability for users to provide their own visus.config.
|
priority
|
mod visus look for visus config in home prior to loading system default now that we are creating shared installations of mod visus on supercomputers we must enable the ability for users to provide their own visus config
| 1
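The lookup above (a per-user `visus.config` in `$HOME` taking precedence over the system default) is a standard override pattern; a Python sketch with illustrative paths:

```python
import os


def find_visus_config(home, system_default="/etc/visus/visus.config"):
    # prefer a user-provided config in $HOME over the shared system default
    user_cfg = os.path.join(home, "visus.config")
    return user_cfg if os.path.exists(user_cfg) else system_default
```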
|
112,281
| 4,518,694,119
|
IssuesEvent
|
2016-09-06 00:36:40
|
GluuFederation/oxAuth
|
https://api.github.com/repos/GluuFederation/oxAuth
|
closed
|
Duplicate id_tokens created on login
|
bug High priority
|
I logged into oxauth. Previously, there were no sessions for this client. After login, I noticed four tokens created: access token, refresh token, and two id_tokens. See:
```
GLUU.root@albacore:/opt/opendj/bin# ./ldapsearch -h localhost -p 1636 -Z -X -D "cn=directory manager" -j ~/.pw -b 'oxAuthGrantId=2de41305-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu' "oxAuthTokenType=id_token"
dn: uniqueIdentifier=f9a669e1-fe6d-4972-920f-22d91bba2760,oxAuthGrantId=2de41305
-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE
3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
objectClass: top
objectClass: oxAuthToken
oxAuthScope: user_name email openid profile
oxAuthGrantType: authorization_code
oxAuthAuthenticationTime: Tue Aug 30 19:38:59 UTC 2016
oxAuthGrantId: 2de41305-da6b-4b8a-b72b-ccd916aea36a
oxAuthNonce: nonce
oxAuthCreation: 20160830193859.923Z
oxAuthExpiration: 20160830203859.923Z
oxAuthTokenType: id_token
oxAuthTokenCode: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mbyIsImF1ZCI6IkAhMTM0RC4zQzNELjc5NkUuRkVDRSEwMDAxIUUwMjIuQ0MzQ
yEwMDA4IUNBRTMuNzQyMSIsImV4cCI6MTQ3MjU4OTUzOSwiaWF0IjoxNDcyNTg1OTM5LCJhY3IiOiJ1
MmYiLCJhbXIiOiJbXSIsIm5vbmNlIjoibm9uY2UiLCJhdXRoX3RpbWUiOjE0NzI1ODU5MzksImNfaGF
zaCI6IkRUWFYtajVkeUZWcHNIcWVjWXdWa1EiLCJveFZhbGlkYXRpb25VUkkiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mby9veGF1dGgvb3BpZnJhbWUiLCJveE9wZW5JRENvbm5lY3RWZXJzaW9uIjoib
3BlbmlkY29ubmVjdC0xLjAiLCJ1c2VyX25hbWUiOiJhZG1pbiIsImludW0iOiJAITEzNEQuM0MzRC43
OTZFLkZFQ0UhMDAwMSFFMDIyLkNDM0MhMDAwMCFBOEYyLkRFMUUuRDdGQiIsIm5hbWUiOiJEZWZhdWx
0IEFkbWluIFVzZXIiLCJmYW1pbHlfbmFtZSI6IlVzZXIiLCJnaXZlbl9uYW1lIjoiQWRtaW4iLCJzdW
IiOiJAITEzNEQuM0MzRC43OTZFLkZFQ0UhMDAwMSFFMDIyLkNDM0MhMDAwMCFBOEYyLkRFMUUuRDdGQ
iJ9.pziE6ZZjjUFJqIyWRXeUtwYBskTy0b07lGxtN895AFE
oxAuthSessionDn: uniqueIdentifier=368f90fb-46b6-4df8-be24-6835fc5db994,ou=sessio
n,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
uniqueIdentifier: f9a669e1-fe6d-4972-920f-22d91bba2760
oxAuthenticationMode: u2f
oxAuthUserId: admin
oxAuthAuthorizationCode: 858ac159-839f-445e-aef0-5f5618b4c56c
oxAuthClientId: @!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421
dn: uniqueIdentifier=ea2914f9-45b7-4011-bbaf-0231df466e3f,oxAuthGrantId=2de41305
-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE
3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
objectClass: top
objectClass: oxAuthToken
oxAuthScope: user_name email openid profile
oxAuthGrantType: authorization_code
oxAuthAuthenticationTime: Tue Aug 30 19:38:59 UTC 2016
oxAuthGrantId: 2de41305-da6b-4b8a-b72b-ccd916aea36a
oxAuthNonce: nonce
oxAuthCreation: 20160830193900.501Z
oxAuthExpiration: 20160830203900.501Z
oxAuthTokenType: id_token
oxAuthTokenCode: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mbyIsImF1ZCI6IkAhMTM0RC4zQzNELjc5NkUuRkVDRSEwMDAxIUUwMjIuQ0MzQ
yEwMDA4IUNBRTMuNzQyMSIsImV4cCI6MTQ3MjU4OTU0MCwiaWF0IjoxNDcyNTg1OTQwLCJhY3IiOiJ1
MmYiLCJhbXIiOiJbXSIsIm5vbmNlIjoibm9uY2UiLCJhdXRoX3RpbWUiOjE0NzI1ODU5MzksImF0X2h
hc2giOiIyWVNMTlVGbGhkUTJ6T2dWREVBYUFBIiwib3hWYWxpZGF0aW9uVVJJIjoiaHR0cHM6Ly9hbG
JhY29yZS5nbHV1LmluZm8vb3hhdXRoL29waWZyYW1lIiwib3hPcGVuSURDb25uZWN0VmVyc2lvbiI6I
m9wZW5pZGNvbm5lY3QtMS4wIiwidXNlcl9uYW1lIjoiYWRtaW4iLCJpbnVtIjoiQCExMzRELjNDM0Qu
Nzk2RS5GRUNFITAwMDEhRTAyMi5DQzNDITAwMDAhQThGMi5ERTFFLkQ3RkIiLCJuYW1lIjoiRGVmYXV
sdCBBZG1pbiBVc2VyIiwiZmFtaWx5X25hbWUiOiJVc2VyIiwiZ2l2ZW5fbmFtZSI6IkFkbWluIiwic3
ViIjoiQCExMzRELjNDM0QuNzk2RS5GRUNFITAwMDEhRTAyMi5DQzNDITAwMDAhQThGMi5ERTFFLkQ3R
kIifQ.PslykWxOe_IQrTlyiACR77WZ4k_3GzY6QRjgJ4O9c-E
oxAuthSessionDn: uniqueIdentifier=368f90fb-46b6-4df8-be24-6835fc5db994,ou=sessio
n,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
uniqueIdentifier: ea2914f9-45b7-4011-bbaf-0231df466e3f
oxAuthenticationMode: u2f
oxAuthUserId: admin
oxAuthAuthorizationCode: 858ac159-839f-445e-aef0-5f5618b4c56c
oxAuthClientId: @!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421
```
Apparently this is a result of the hybrid flow. These tokens are different--the id_token contains different claims. The first has c_hash because it was returned from the authorisation endpoint in conjunction with the code token:
```json
{"typ":"JWT","alg":"HS256"}
.
{
"iss":"https://albacore.gluu.info","aud":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421",
"exp":1472589539,"iat":1472585939,
"acr":"u2f","amr":"[]","nonce":"nonce","auth_time":1472585939,
"c_hash":"DTXV-j5dyFVpsHqecYwVkQ",
"oxValidationURI":"https://albacore.gluu.info/oxauth/opiframe","oxOpenIDConnectVersion":"openidconnect-1.0","user_name":"admin","inum":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB","name":"Default Admin User","family_name":"User","given_name":"Admin","sub":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB"}
{"typ":"JWT","alg":"HS256"}
.
{
"iss":"https://albacore.gluu.info","aud":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421",
"exp":1472589540,"iat":1472585940,
"acr":"u2f","amr":"[]","nonce":"nonce","auth_time":1472585939,
"at_hash":"2YSLNUFlhdQ2zOgVDEAaAA",
"oxValidationURI":"https://albacore.gluu.info/oxauth/opiframe","oxOpenIDConnectVersion":"openidconnect-1.0","user_name":"admin","inum":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB","name":"Default Admin User","family_name":"User","given_name":"Admin","sub":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB"}
```
However, instead of creating two entries, it would be better to create one entry with two values for `oxAuthTokenCode`
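(For reference, the `c_hash` and `at_hash` claims visible above are defined by OpenID Connect Core as the base64url encoding, without padding, of the left-most half of the token's hash, where the hash algorithm follows the id_token's `alg` header: SHA-256 for the HS256-signed tokens here. A minimal sketch, assuming nothing about oxAuth's internals:)

```python
import base64
import hashlib

def oidc_half_hash(token: str) -> str:
    """Compute an OIDC at_hash / c_hash value: base64url-encode the
    left-most half of the SHA-256 digest of the ASCII token, with the
    trailing '=' padding stripped (SHA-256 matches the HS256 "alg"
    of the id_tokens in this report)."""
    digest = hashlib.sha256(token.encode("ascii")).digest()
    half = digest[: len(digest) // 2]
    return base64.urlsafe_b64encode(half).rstrip(b"=").decode("ascii")
```

Checking that `c_hash` matches the code from the authorisation response (and `at_hash` the access token) is how a client ties the hybrid-flow artifacts back together.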
|
1.0
|
Duplicate id_tokens created on login - I logged into oxauth. Previously, there were no sessions for this client. After login, I noticed four tokens created: access token, refresh token, and two id_tokens. See:
```
GLUU.root@albacore:/opt/opendj/bin# ./ldapsearch -h localhost -p 1636 -Z -X -D "cn=directory manager" -j ~/.pw -b 'oxAuthGrantId=2de41305-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu' "oxAuthTokenType=id_token"
dn: uniqueIdentifier=f9a669e1-fe6d-4972-920f-22d91bba2760,oxAuthGrantId=2de41305
-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE
3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
objectClass: top
objectClass: oxAuthToken
oxAuthScope: user_name email openid profile
oxAuthGrantType: authorization_code
oxAuthAuthenticationTime: Tue Aug 30 19:38:59 UTC 2016
oxAuthGrantId: 2de41305-da6b-4b8a-b72b-ccd916aea36a
oxAuthNonce: nonce
oxAuthCreation: 20160830193859.923Z
oxAuthExpiration: 20160830203859.923Z
oxAuthTokenType: id_token
oxAuthTokenCode: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mbyIsImF1ZCI6IkAhMTM0RC4zQzNELjc5NkUuRkVDRSEwMDAxIUUwMjIuQ0MzQ
yEwMDA4IUNBRTMuNzQyMSIsImV4cCI6MTQ3MjU4OTUzOSwiaWF0IjoxNDcyNTg1OTM5LCJhY3IiOiJ1
MmYiLCJhbXIiOiJbXSIsIm5vbmNlIjoibm9uY2UiLCJhdXRoX3RpbWUiOjE0NzI1ODU5MzksImNfaGF
zaCI6IkRUWFYtajVkeUZWcHNIcWVjWXdWa1EiLCJveFZhbGlkYXRpb25VUkkiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mby9veGF1dGgvb3BpZnJhbWUiLCJveE9wZW5JRENvbm5lY3RWZXJzaW9uIjoib
3BlbmlkY29ubmVjdC0xLjAiLCJ1c2VyX25hbWUiOiJhZG1pbiIsImludW0iOiJAITEzNEQuM0MzRC43
OTZFLkZFQ0UhMDAwMSFFMDIyLkNDM0MhMDAwMCFBOEYyLkRFMUUuRDdGQiIsIm5hbWUiOiJEZWZhdWx
0IEFkbWluIFVzZXIiLCJmYW1pbHlfbmFtZSI6IlVzZXIiLCJnaXZlbl9uYW1lIjoiQWRtaW4iLCJzdW
IiOiJAITEzNEQuM0MzRC43OTZFLkZFQ0UhMDAwMSFFMDIyLkNDM0MhMDAwMCFBOEYyLkRFMUUuRDdGQ
iJ9.pziE6ZZjjUFJqIyWRXeUtwYBskTy0b07lGxtN895AFE
oxAuthSessionDn: uniqueIdentifier=368f90fb-46b6-4df8-be24-6835fc5db994,ou=sessio
n,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
uniqueIdentifier: f9a669e1-fe6d-4972-920f-22d91bba2760
oxAuthenticationMode: u2f
oxAuthUserId: admin
oxAuthAuthorizationCode: 858ac159-839f-445e-aef0-5f5618b4c56c
oxAuthClientId: @!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421
dn: uniqueIdentifier=ea2914f9-45b7-4011-bbaf-0231df466e3f,oxAuthGrantId=2de41305
-da6b-4b8a-b72b-ccd916aea36a,inum=@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE
3.7421,ou=clients,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
objectClass: top
objectClass: oxAuthToken
oxAuthScope: user_name email openid profile
oxAuthGrantType: authorization_code
oxAuthAuthenticationTime: Tue Aug 30 19:38:59 UTC 2016
oxAuthGrantId: 2de41305-da6b-4b8a-b72b-ccd916aea36a
oxAuthNonce: nonce
oxAuthCreation: 20160830193900.501Z
oxAuthExpiration: 20160830203900.501Z
oxAuthTokenType: id_token
oxAuthTokenCode: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2FsYm
Fjb3JlLmdsdXUuaW5mbyIsImF1ZCI6IkAhMTM0RC4zQzNELjc5NkUuRkVDRSEwMDAxIUUwMjIuQ0MzQ
yEwMDA4IUNBRTMuNzQyMSIsImV4cCI6MTQ3MjU4OTU0MCwiaWF0IjoxNDcyNTg1OTQwLCJhY3IiOiJ1
MmYiLCJhbXIiOiJbXSIsIm5vbmNlIjoibm9uY2UiLCJhdXRoX3RpbWUiOjE0NzI1ODU5MzksImF0X2h
hc2giOiIyWVNMTlVGbGhkUTJ6T2dWREVBYUFBIiwib3hWYWxpZGF0aW9uVVJJIjoiaHR0cHM6Ly9hbG
JhY29yZS5nbHV1LmluZm8vb3hhdXRoL29waWZyYW1lIiwib3hPcGVuSURDb25uZWN0VmVyc2lvbiI6I
m9wZW5pZGNvbm5lY3QtMS4wIiwidXNlcl9uYW1lIjoiYWRtaW4iLCJpbnVtIjoiQCExMzRELjNDM0Qu
Nzk2RS5GRUNFITAwMDEhRTAyMi5DQzNDITAwMDAhQThGMi5ERTFFLkQ3RkIiLCJuYW1lIjoiRGVmYXV
sdCBBZG1pbiBVc2VyIiwiZmFtaWx5X25hbWUiOiJVc2VyIiwiZ2l2ZW5fbmFtZSI6IkFkbWluIiwic3
ViIjoiQCExMzRELjNDM0QuNzk2RS5GRUNFITAwMDEhRTAyMi5DQzNDITAwMDAhQThGMi5ERTFFLkQ3R
kIifQ.PslykWxOe_IQrTlyiACR77WZ4k_3GzY6QRjgJ4O9c-E
oxAuthSessionDn: uniqueIdentifier=368f90fb-46b6-4df8-be24-6835fc5db994,ou=sessio
n,o=@!134D.3C3D.796E.FECE!0001!E022.CC3C,o=gluu
uniqueIdentifier: ea2914f9-45b7-4011-bbaf-0231df466e3f
oxAuthenticationMode: u2f
oxAuthUserId: admin
oxAuthAuthorizationCode: 858ac159-839f-445e-aef0-5f5618b4c56c
oxAuthClientId: @!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421
```
Apparently this is a result of the hybrid flow. These tokens are different--the id_token contains different claims. The first has c_hash because it was returned from the authorisation endpoint in conjunction with the code token:
```json
{"typ":"JWT","alg":"HS256"}
.
{
"iss":"https://albacore.gluu.info","aud":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421",
"exp":1472589539,"iat":1472585939,
"acr":"u2f","amr":"[]","nonce":"nonce","auth_time":1472585939,
"c_hash":"DTXV-j5dyFVpsHqecYwVkQ",
"oxValidationURI":"https://albacore.gluu.info/oxauth/opiframe","oxOpenIDConnectVersion":"openidconnect-1.0","user_name":"admin","inum":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB","name":"Default Admin User","family_name":"User","given_name":"Admin","sub":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB"}
{"typ":"JWT","alg":"HS256"}
.
{
"iss":"https://albacore.gluu.info","aud":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0008!CAE3.7421",
"exp":1472589540,"iat":1472585940,
"acr":"u2f","amr":"[]","nonce":"nonce","auth_time":1472585939,
"at_hash":"2YSLNUFlhdQ2zOgVDEAaAA",
"oxValidationURI":"https://albacore.gluu.info/oxauth/opiframe","oxOpenIDConnectVersion":"openidconnect-1.0","user_name":"admin","inum":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB","name":"Default Admin User","family_name":"User","given_name":"Admin","sub":"@!134D.3C3D.796E.FECE!0001!E022.CC3C!0000!A8F2.DE1E.D7FB"}
```
However, instead of creating two entries, it would be better to create one entry with two values for `oxAuthTokenCode`
|
priority
|
duplicate id tokens created on login i logged into oxauth previously there were no sessions for this client after login i noticed four tokens created access token refresh token and two id tokens see gluu root albacore opt opendj bin ldapsearch h localhost p z x d cn directory manager j pw b oxauthgrantid inum fece ou clients o fece o gluu oxauthtokentype id token dn uniqueidentifier oxauthgrantid inum fece cae ou clients o fece o gluu objectclass top objectclass oxauthtoken oxauthscope user name email openid profile oxauthgranttype authorization code oxauthauthenticationtime tue aug utc oxauthgrantid oxauthnonce nonce oxauthcreation oxauthexpiration oxauthtokentype id token oxauthtokencode oxauthsessiondn uniqueidentifier ou sessio n o fece o gluu uniqueidentifier oxauthenticationmode oxauthuserid admin oxauthauthorizationcode oxauthclientid fece dn uniqueidentifier bbaf oxauthgrantid inum fece cae ou clients o fece o gluu objectclass top objectclass oxauthtoken oxauthscope user name email openid profile oxauthgranttype authorization code oxauthauthenticationtime tue aug utc oxauthgrantid oxauthnonce nonce oxauthcreation oxauthexpiration oxauthtokentype id token oxauthtokencode kiifq pslykwxoe e oxauthsessiondn uniqueidentifier ou sessio n o fece o gluu uniqueidentifier bbaf oxauthenticationmode oxauthuserid admin oxauthauthorizationcode oxauthclientid fece apparently this is a result of the hybrid flow these tokens are different the id token contains different claims the first has c hash because it was returned from the authorisation endpoint in conjunction with the code token json typ jwt alg iss exp iat acr amr nonce nonce auth time c hash dtxv oxvalidationuri admin user family name user given name admin sub fece typ jwt alg iss exp iat acr amr nonce nonce auth time at hash oxvalidationuri admin user family name user given name admin sub fece however instead of creating two entries it would be better to create one enty with two values for oxauthtokencode
| 1
|
153,219
| 5,887,119,987
|
IssuesEvent
|
2017-05-17 06:16:14
|
catmaid/CATMAID
|
https://api.github.com/repos/catmaid/CATMAID
|
closed
|
Connectivity widget needs filters: ways to split neurons and list partners for each part
|
difficulty: low priority: high type: enhancement
|
At the moment several contributors are splitting neurons into pieces so that the connectivity widget can list only the synapses downstream of a specific skeleton node. This practice is harmful and potentially leaves behind split neurons when forgetting to join them back into wholes.
To address this we need the means to express virtual splits and list only the partners of synapses within the desired split.
For example a pulldown menu for each listed source neuron with these options:
1. "downstream of tag" (e.g. all parts of the arbor downstream of "microtubules end" which are the microtubule-free twigs)
2. "upstream of tag" (inverse of 1)
3. "axon" (as determined by centrifugal synapse flow centrality)
4. "dendrite" (idem)
The above requires fetching the skeleton and its tags, as well as adding the treenode ID to the retrieved synapses for the connectivity so that the subset can be filtered for display.
The listing of partners can be done in two ways: either only the partners for the desired parts of the neuron are listed, or given that [1,2] and [3,4] are complementary, list not one but two columns for the skeleton in question, e.g one for axon and one for dendrite, or one for downstream of tag and one for upstream of tag.
These split operations are in effect in the 3D Viewer as well as in the Graph widget, and some of their operations are generalized at the bottom of the synapse_clustering.js file.
|
1.0
|
Connectivity widget needs filters: ways to split neurons and list partners for each part - At the moment several contributors are splitting neurons into pieces so that the connectivity widget can list only the synapses downstream of a specific skeleton node. This practice is harmful and potentially leaves behind split neurons when forgetting to join them back into wholes.
To address this we need the means to express virtual splits and list only the partners of synapses within the desired split.
For example a pulldown menu for each listed source neuron with these options:
1. "downstream of tag" (e.g. all parts of the arbor downstream of "microtubules end" which are the microtubule-free twigs)
2. "upstream of tag" (inverse of 1)
3. "axon" (as determined by centrifugal synapse flow centrality)
4. "dendrite" (idem)
The above requires fetching the skeleton and its tags, as well as adding the treenode ID to the retrieved synapses for the connectivity so that the subset can be filtered for display.
The listing of partners can be done in two ways: either only the partners for the desired parts of the neuron are listed, or given that [1,2] and [3,4] are complementary, list not one but two columns for the skeleton in question, e.g one for axon and one for dendrite, or one for downstream of tag and one for upstream of tag.
These split operations are in effect in the 3D Viewer as well as in the Graph widget, and some of their operations are generalized at the bottom of the synapse_clustering.js file.
|
priority
|
connectivity widget needs filters ways to split neurons and list partners for each part at the moment several contributors are splitting neurons into pieces so that the connectivity widget can list only the synapses downstream of a specific skeleton node this pratice is harmful and potentially leaves behind splitted neurons when forgetting to join them back into wholes to address this we need the means to express virtual splits and list only the partners of synapses within the desired split for example a pulldown menu for each listed source neuron with these options downstream of tag e g all parts of the arbor downstream of microtubules end which are the microtubule free twigs upstream of tag inverse of axon as determined by centrifugal synapse flow centrality dendrite idem the above requires fetching the skeleton and its tags as well as adding the treenode id to the retrieved synapses for the connectivity so that the subset can be filtered for display the listing of partners can be done in two ways either only the partners for the desired parts of the neuron are listed or given that and are complementary list not one but two columns for the skeleton in question e g one for axon and one for dendrite or one for downstream of tag and one for upstream of tag these split operations are in effect in the viewer as well as in the graph widget and some of their operations are generalized at the bottom of the synapse clustering js file
| 1
|
709,933
| 24,397,406,245
|
IssuesEvent
|
2022-10-04 20:38:45
|
pyvista/pyvista
|
https://api.github.com/repos/pyvista/pyvista
|
closed
|
Pooch integration for downloading examples
|
proposed-change flaky priority-high
|
We should change our example downloading utilities to leverage Pooch: https://github.com/fatiando/pooch
Not sure how much effort this will require, but the download utility I made is a bit unpredictable on the CIs - downloads time out a lot... not sure if Pooch would fix that, but it's probably worth switching to Pooch to make downloading data files outside of our example data repo a bit easier
|
1.0
|
Pooch integration for downloading examples - We should change our example downloading utilities to leverage Pooch: https://github.com/fatiando/pooch
Not sure how much effort this will require, but the download utility I made is a bit unpredictable on the CIs - downloads time out a lot... not sure if Pooch would fix that, but it's probably worth switching to Pooch to make downloading data files outside of our example data repo a bit easier
|
priority
|
pooch integration for downloading examples we should change our example downloading utilities to leverage pooch not sure how much effort this will require but the download utility i made is a bit unpredictable on the ci s downloads time out a lot not sure if pooch would fix that but it s probably worth switching to pooch to make downloading data files outside of our example data repo a bit easier
| 1
|
126,860
| 5,006,819,785
|
IssuesEvent
|
2016-12-12 15:12:56
|
tpltnt/SimpleCV
|
https://api.github.com/repos/tpltnt/SimpleCV
|
opened
|
purge iplimage from Image class
|
high priority
|
The current implementation uses iplimage. This is an old C binding which is not exposed anymore. A numpy.ndarray is usually used instead.
DoD: iplimage code is replaced with numpy.ndarray.
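(Assuming the replacement target is the layout OpenCV's modern Python binding uses, an ndarray-backed image might look like the sketch below; the shape and BGR channel order are assumptions noted in the comments, not something SimpleCV mandates:)

```python
import numpy as np

# Height x width x 3 channels, 8-bit, BGR channel order -- the layout
# cv2 returns and the natural replacement for the legacy iplimage binding.
height, width = 480, 640
img = np.zeros((height, width, 3), dtype=np.uint8)
img[:, :, 2] = 255  # BGR: channel 2 is red, so this fills a solid red image
```

Anything the old iplimage code did (ROIs, channel splits, per-pixel access) maps onto plain ndarray slicing.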
|
1.0
|
purge iplimage from Image class - The current implementation uses iplimage. This is an old C binding which is not exposed anymore. A numpy.ndarray is usually used instead.
DoD: iplimage code is replaced with numpy.ndarray.
|
priority
|
purge iplimage from image class the current implementation uses iplimage this is an old c binding which is not exposed anymore a numpy ndarray is usually used instead dod iplimage code is replaced with numpy ndarray
| 1
|
129,513
| 5,097,768,229
|
IssuesEvent
|
2017-01-03 22:35:33
|
GalliumOS/galliumos-distro
|
https://api.github.com/repos/GalliumOS/galliumos-distro
|
closed
|
EFI Boot
|
discussion priority:high
|
I have an Acer C720 which has a new ROM with coreboot + SeaBIOS + TianoCore DUET
I am able to EFI boot Xubuntu just fine on it, however, GalliumOS is missing grub2-efi and the EFI folder, which means that it is unable to boot when TianoCore is used.
|
1.0
|
EFI Boot - I have an Acer C720 which has a new ROM with coreboot + SeaBIOS + TianoCore DUET
I am able to EFI boot Xubuntu just fine on it, however, GalliumOS is missing grub2-efi and the EFI folder, which means that it is unable to boot when TianoCore is used.
|
priority
|
efi boot i have an acer which has a new rom with coreboot seabios tianocore duet i am able to efi boot xubuntu just fine on it however galliumos is missing efi and the efi folder which means that it is unable to boot when tianocore is used
| 1
|
652,976
| 21,566,189,174
|
IssuesEvent
|
2022-05-01 22:11:17
|
Square789/Demomgr
|
https://api.github.com/repos/Square789/Demomgr
|
closed
|
Play functionality useless when TF2 is not installed inside the main steam directory
|
bug high priority
|
- The user data is fetched from there, but TF2 may actually be located somewhere else, likely depending on the demo's path.
- Add some checkboxes to fix that maybe idunno
|
1.0
|
Play functionality useless when TF2 is not installed inside the main steam directory - - The user data is fetched from there, but TF2 may actually be located somewhere else, likely depending on the demo's path.
- Add some checkboxes to fix that maybe idunno
|
priority
|
play functionality useless when is not installed inside the main steam directory the user data is fetched from there but may actually be located somewhere else likely depending on the demo s path add some checkboxes to fix that maybe idunno
| 1
|
674,581
| 23,058,492,409
|
IssuesEvent
|
2022-07-25 07:45:59
|
UCL/TDMS
|
https://api.github.com/repos/UCL/TDMS
|
opened
|
Modify interpolation scheme
|
priority:high
|
Overhaul interpolation from the split grid to the central grid.
1. Different scheme required for each field component.
2. It is implemented for the electric field, although interpolateTimeDomainFieldCentralEBandLimited was hacked to work for the 2D case; this should be checked.
3. More significant work is required for the magnetic field.
4. Enable interpolation to be performed right up to the edge of the computational domain.
|
1.0
|
Modify interpolation scheme - Overhaul interpolation from the split grid to the central grid.
1. Different scheme required for each field component.
2. It is implemented for the electric field, although interpolateTimeDomainFieldCentralEBandLimited was hacked to work for the 2D case; this should be checked.
3. More significant work is required for the magnetic field.
4. Enable interpolation to be performed right up to the edge of the computational domain.
|
priority
|
modify interpolation scheme overhaul interpolation from the split grid to the central grid different scheme required for each field component is implemented for the electric field although interpolatetimedomainfieldcentralebandlimited was hacked to work for the case this should be checked more significant work required for the magnetic field enable interpolation to be performed right up to the edge of the computational domain
| 1
|
49,051
| 3,001,739,230
|
IssuesEvent
|
2015-07-24 13:28:40
|
centreon/centreon
|
https://api.github.com/repos/centreon/centreon
|
closed
|
Import LDAPUtilisateur / Relation Utilisateur et group LDAP
|
Component: Affect Version Component: Resolution Priority: High Status: Rejected Tracker: Bug
|
---
Author Name: **GIBERT Florian** (GIBERT Florian)
Original Redmine Issue: 5706, https://forge.centreon.com/issues/5706
Original Date: 2014-07-30
---
Hello,
After configuring the LDAP user groups under ACL, the automatic and/or manual import of users via LDAP does create the user in the database, but no action is performed in the contactgroup_contact_relation table to apply the ACLs.
Only a full restart of the platform finalizes the configuration to apply to the user (link between group and user) and sets the ACLs.
|
1.0
|
Import LDAPUtilisateur / Relation Utilisateur et group LDAP - ---
Author Name: **GIBERT Florian** (GIBERT Florian)
Original Redmine Issue: 5706, https://forge.centreon.com/issues/5706
Original Date: 2014-07-30
---
Hello,
After configuring the LDAP user groups under ACL, the automatic and/or manual import of users via LDAP does create the user in the database, but no action is performed in the contactgroup_contact_relation table to apply the ACLs.
Only a full restart of the platform finalizes the configuration to apply to the user (link between group and user) and sets the ACLs.
|
priority
|
import ldaputilisateur relation utilisateur et group ldap author name gibert florian gibert florian original redmine issue original date bonjour après avoir configuré sous acl les groupes utilisateurs ldap l import automatique et ou manuel des utilisateurs via le ldap créé bien l utilisateur dans la bdd mais aucune action n est effectuée dans la table contactgroup contact relation afin d appliquer les acl seule une relance complète de la plateforme permet de finaliser la configuration à appliquer à l utilisateur lien entre group et utilisateur et permet de positionner les acl
| 1
|
243,737
| 7,861,273,653
|
IssuesEvent
|
2018-06-21 23:23:24
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
USER ISSUE: treasury - allocate is not working
|
High Priority
|
**Version:** 0.7.5.0 beta staging-bad4361f
Partially game breaking
Treasury "allocate" button is not working, and the treasury does not see world currencies.
|
1.0
|
USER ISSUE: treasury - allocate is not working - **Version:** 0.7.5.0 beta staging-bad4361f
Partially game breaking
Treasury "allocate" button is not working, and the treasury does not see world currencies.
|
priority
|
user issue treasury allocate is not working version beta staging partially game breaking treasury allocate button is not working and treasury is not see world currences
| 1
|
386,519
| 11,440,500,965
|
IssuesEvent
|
2020-02-05 09:47:15
|
woocommerce/woocommerce-gateway-stripe
|
https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe
|
closed
|
No such customer error isn't handled correctly.
|
Many: L Priority: High
|
Users who once had a Stripe customer ID that no longer exists generate an error, and the handler isn't working correctly.
A recent payment request returned the following:
{
"error": {
"code": "resource_missing",
"doc_url": "https://stripe.com/docs/error-codes/resource-missing",
"message": "No such customer: cus_GB59S1zpxxxxxx",
"param": "id",
"type": "invalid_request_error"
}
}
Line 234 of class-wc-stripe-customer.php uses a method is_no_such_customer_error() which looks like it would return true according to the three checks that are run, but it doesn't create a new user and the transaction fails. The message "No such customer: cus_GB59S1zpxxxxxx" appears in the front end.
We're running latest version plugin with latest version wordpress using a multisite setup.
It'd be useful to prefix the stripe customer id key with the site prefix so we can store multiple stripe customer IDs (one per website on the multisite).
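(For illustration only, since the plugin itself is PHP: a plausible reconstruction of the "no such customer" decision, exercised against the error payload from this report. The function name mirrors the PHP method, but the exact checks the plugin runs may differ:)

```python
import json

def is_no_such_customer_error(response_body: str) -> bool:
    """Hypothetical Python rendering of the plugin's check: treat the
    response as a 'no such customer' error when Stripe reports the 'id'
    resource as missing and the message names a customer."""
    try:
        err = json.loads(response_body).get("error", {})
    except ValueError:  # includes json.JSONDecodeError
        return False
    return (
        err.get("code") == "resource_missing"
        and err.get("param") == "id"
        and err.get("message", "").startswith("No such customer")
    )
```

If this returns true, the expected recovery is to discard the stale stored customer ID and create a fresh Stripe customer instead of failing the transaction.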
|
1.0
|
No such customer error isn't handled correctly. - Users who once had a Stripe customer ID that no longer exists generate an error, and the handler isn't working correctly.
A recent payment request returned the following:
{
"error": {
"code": "resource_missing",
"doc_url": "https://stripe.com/docs/error-codes/resource-missing",
"message": "No such customer: cus_GB59S1zpxxxxxx",
"param": "id",
"type": "invalid_request_error"
}
}
Line 234 of class-wc-stripe-customer.php uses a method is_no_such_customer_error() which looks like it would return true according to the three checks that are run, but it doesn't create a new user and the transaction fails. The message "No such customer: cus_GB59S1zpxxxxxx" appears in the front end.
We're running latest version plugin with latest version wordpress using a multisite setup.
It'd be useful to prefix the stripe customer id key with the site prefix so we can store multiple stripe customer IDs (one per website on the multisite).
|
priority
|
no such customer error isn t handled correctly users who had a stripe customer id once that no longer does generate an error and the handler isn t working correctly a recent payment request returned the following error code resource missing doc url message no such customer cus param id type invalid request error at line of class wc stripe customer php uses a method is no such customer error which looks like it would return true according to the three checks that are run but it doesn t create a new user and the transaction fails the message no such customer cus appears in the front end we re running latest version plugin with latest version wordpress using a multisite setup it d be useful to prefix the stripe customer id key with the site prefix so we can store multiple stripe customer ids one per website on the multisite
| 1
|
640,004
| 20,771,045,632
|
IssuesEvent
|
2022-03-16 04:46:24
|
SE701-T1/frontend
|
https://api.github.com/repos/SE701-T1/frontend
|
closed
|
Create API Stubs
|
Status: In Progress Type: Feature Priority: High Team: Messaging
|
**Describe the task that needs to be done.**
API stubs need to be set up to return mock data to different parts of the app.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
Will set up API stubs for communication, pairing, timetable and user, based off the backend. This will allow parts of the app to call these functions and use mock data.
**Notes**
N/A
|
1.0
|
Create API Stubs - **Describe the task that needs to be done.**
API stubs need to be set up to return mock data to different parts of the app.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
Will set up API stubs for communication, pairing, timetable and user, based off the backend. This will allow parts of the app to call these functions and use mock data.
**Notes**
N/A
|
priority
|
create api stubs describe the task that needs to be done api stubs need to be set up to return mock data to different parts of the app describe how a solution to your proposed task might look like and any alternatives considered will set up api stubs for communication pairing timetable and user based off the backend this will allow parts of the app to call these functions and use mock data notes n a
| 1
|
617,521
| 19,359,476,359
|
IssuesEvent
|
2021-12-16 02:15:38
|
matrixorigin/matrixone
|
https://api.github.com/repos/matrixorigin/matrixone
|
closed
|
Improving the performance of getDataFromPipeline
|
priority/high kind/performance
|
There exists a format conversion from column to row in `getDataFromPipeline` which leads to remarkable performance overhead when the result set is larger than 10,000 rows.
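(The conversion in question is essentially a transpose of a column-major result set into rows. A minimal sketch of why the cost scales with rows times columns; the names here are illustrative, not MatrixOne's actual API:)

```python
# A column-major result set: one list per column.
columns = [
    [1, 2, 3],        # column "id"
    ["a", "b", "c"],  # column "name"
]

# Row-wise view for the wire protocol: every cell is touched and copied
# once, so the conversion costs O(rows * cols) -- noticeable once the
# result set grows past roughly 10,000 rows.
rows = list(zip(*columns))
```

Avoiding the materialization (streaming rows, or batching the conversion) is the usual way to reduce this overhead.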
|
1.0
|
Improving the performance of getDataFromPipeline - There exists a format conversion from column to row in `getDataFromPipeline` which leads to remarkable performance overhead when the result set is larger than 10,000 rows.
|
priority
|
improving the performance of getdatafrompipeline there exists a format conversion from column to row in getdatafrompipeline which leads to remarkable performance overhead when the result sets is larger than
| 1
|
786,089
| 27,634,301,230
|
IssuesEvent
|
2023-03-10 13:19:51
|
awslabs/seed-farmer
|
https://api.github.com/repos/awslabs/seed-farmer
|
closed
|
[FEATURE] Refactor manifest dataclasses
|
enhancement High Priority
|
The manifest.py object class is beginning to become unwieldy. Refactor it and look at the following:
1. segment each major class (DeploymentManifest, ModuleManifest, Deployspec, etc) into their own files (DO NOT MODIFY LOGIC!!!)
2. create a new class object for `seedfarmer-project` to declaratively define the objects (remove hard-coded config values from `main`)
|
1.0
|
[FEATURE] Refactor manifest dataclasses - The manifest.py object class is beginning to become unwieldy. Refactor it and look at the following:
1. segment each major class (DeploymentManifest, ModuleManifest, Deployspec, etc) into their own files (DO NOT MODIFY LOGIC!!!)
2. create a new class object for `seedfarmer-project` to declaratively define the objects (remove hard-coded config values from `main`)
|
priority
|
refactor manifest dataclasses the manifest py object class is beginning to become unwieldy refactor it and look at the following segment each major class deploymentmanifest modulemanifest deployspec etc into their own files do not modify logic create a new class object for seedfarmer project to declaratively define the objects remove hard coded config vales from main
| 1
|
360,389
| 10,688,124,001
|
IssuesEvent
|
2019-10-22 17:34:05
|
epam/cloud-pipeline
|
https://api.github.com/repos/epam/cloud-pipeline
|
closed
|
NPE while creating the new pipeline
|
kind/bug priority/high state/verify sys/core sys/vcs
|
**Describe the bug**
NPE is thrown when creating a new pipeline using a fresh deployment of the platform
**To Reproduce**
Steps to reproduce the behavior:
1. (use a fresh deployment of the platform: no pipelines exist, gitlab is empty)
2. Start creating a new pipeline (e.g. using `DEFAULT` templates and a `pipeline1` name)
3. Click `Create`
4. `We've encountered a server error` message is shown
**Expected behavior**
The pipeline is created and it's possible to view it in the GitLab GUI
**Screenshots**
```
[ERROR] 2019-10-07 13:35:31.997 [https-jsse-nio-8080-exec-2] ExceptionHandlerAdvice - This operation has been aborted: uri=/pipeline/restapi/pipeline/register;client=10.244.0.1;session=98444BC4118FE1058B3434C9A9B73D48;user=PIPE_ADMIN.
java.lang.NullPointerException: null
at com.epam.pipeline.manager.git.GitManager.getDefaultGitlabClient(GitManager.java:735) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.git.GitManager.checkProjectExists(GitManager.java:726) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.pipeline.PipelineManager.create(PipelineManager.java:109) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.pipeline.PipelineManager$$FastClassBySpringCGLIB$$cca79522.invoke(<generated>) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
```
**Environment:**
- Cloud Provider: AWS
- Version: 0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f
- Browser: Chrome `Version 77.0.3865.90 (Official Build) (64-bit)`
|
1.0
|
NPE while creating the new pipeline - **Describe the bug**
NPE is thrown when creating a new pipeline using a fresh deployment of the platform
**To Reproduce**
Steps to reproduce the behavior:
1. (use a fresh deployment of the platform: no pipelines exist, gitlab is empty)
2. Start creating a new pipeline (e.g. using `DEFAULT` templates and a `pipeline1` name)
3. Click `Create`
4. `We've encountered a server error` message is shown
**Expected behavior**
The pipeline is created and it's possible to view it in the GitLab GUI
**Screenshots**
```
[ERROR] 2019-10-07 13:35:31.997 [https-jsse-nio-8080-exec-2] ExceptionHandlerAdvice - This operation has been aborted: uri=/pipeline/restapi/pipeline/register;client=10.244.0.1;session=98444BC4118FE1058B3434C9A9B73D48;user=PIPE_ADMIN.
java.lang.NullPointerException: null
at com.epam.pipeline.manager.git.GitManager.getDefaultGitlabClient(GitManager.java:735) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.git.GitManager.checkProjectExists(GitManager.java:726) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.pipeline.PipelineManager.create(PipelineManager.java:109) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
at com.epam.pipeline.manager.pipeline.PipelineManager$$FastClassBySpringCGLIB$$cca79522.invoke(<generated>) ~[classes!/:0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f]
```
**Environment:**
- Cloud Provider: AWS
- Version: 0.16.0.2877.ddc9c1e5f7ffec93edd29b1dcfb5bff353340e7f
- Browser: Chrome `Version 77.0.3865.90 (Official Build) (64-bit)`
|
priority
|
npe while creating the new pipeline describe the bug npe is thrown when creating a new pipeline using the fresh deployment of the platform to reproduce steps to reproduce the behavior use a fresh deployment of the platform no pipelines exist gitlab is empty start creating a new pipeline e g using default templates and a name click create we ve encountered a server error message is shown expected behavior the pipeline is created and it s possible to view it in the gitlab gui screenshots exceptionhandleradvice this operation has been aborted uri pipeline restapi pipeline register client session user pipe admin java lang nullpointerexception null at com epam pipeline manager git gitmanager getdefaultgitlabclient gitmanager java at com epam pipeline manager git gitmanager checkprojectexists gitmanager java at com epam pipeline manager pipeline pipelinemanager create pipelinemanager java at com epam pipeline manager pipeline pipelinemanager fastclassbyspringcglib invoke environment cloud provider aws version browser chrome version official build bit
| 1
|
667,893
| 22,513,944,413
|
IssuesEvent
|
2022-06-24 00:11:20
|
acreloaded/acr
|
https://api.github.com/repos/acreloaded/acr
|
closed
|
159: Mouse Sensitivity resets itself
|
bug invalid priority-3-high
|
**Mouse Sensitivity resets itself** by `Monsterhuntergg`
Issue 159 posted to Google Code on 2014 Mar 9 at 19:12:20
Current status: Invalid
Current labels: `Type-Defect`, `Priority-High`, `Usability`
`Monsterhuntergg` on 2014 Mar 9 at 19:12:20:
> <b>What steps will reproduce the problem?</b>
> 1.Looking around
> <b>2.</b>
> <b>3.</b>
> Thread: http://forum.acr.victorz.ca/thread-1272.html
>
> <b>What version are you using? On what operating system?</b>
> Newest version (09.03.14)
> Windows 7
`Monsterhuntergg` on 2014 Mar 9 at 19:45:55:
> *it resets itself when rightclicking
`Monsterhuntergg` on 2014 Mar 9 at 20:36:09:
> *scoping
`theonlypwner` on 2014 Mar 10 at 23:26:17:
> This is expected to be fixed by the redo.
>
> **Labels**: -Priority-Unset Priority-High Usability
> **Status**: Started-Redo
`theonlypwner` on 2014 Aug 23 at 01:41:51:
> This issue is one of those assumed to affect version 2.6 and previous versions.
>
> **Status**: Invalid
|
1.0
|
159: Mouse Sensitivity resets itself - **Mouse Sensitivity resets itself** by `Monsterhuntergg`
Issue 159 posted to Google Code on 2014 Mar 9 at 19:12:20
Current status: Invalid
Current labels: `Type-Defect`, `Priority-High`, `Usability`
`Monsterhuntergg` on 2014 Mar 9 at 19:12:20:
> <b>What steps will reproduce the problem?</b>
> 1.Looking around
> <b>2.</b>
> <b>3.</b>
> Thread: http://forum.acr.victorz.ca/thread-1272.html
>
> <b>What version are you using? On what operating system?</b>
> Newest version (09.03.14)
> Windows 7
`Monsterhuntergg` on 2014 Mar 9 at 19:45:55:
> *it resets itself when rightclicking
`Monsterhuntergg` on 2014 Mar 9 at 20:36:09:
> *scoping
`theonlypwner` on 2014 Mar 10 at 23:26:17:
> This is expected to be fixed by the redo.
>
> **Labels**: -Priority-Unset Priority-High Usability
> **Status**: Started-Redo
`theonlypwner` on 2014 Aug 23 at 01:41:51:
> This issue is one of those assumed to affect version 2.6 and previous versions.
>
> **Status**: Invalid
|
priority
|
mouse sensitivity resets itself mouse sensitivity resets itself by monsterhuntergg issue posted to google code on mar at current status invalid current labels type defect priority high usability monsterhuntergg on mar at what steps will reproduce the problem looking around thread what version are you using on what operating system newest version windows monsterhuntergg on mar at it resets itself when rightclicking monsterhuntergg on mar at scoping theonlypwner on mar at this is expected to be fixed by the redo labels priority unset priority high usability status started redo theonlypwner on aug at this issue is one of those assumed to affect version and previous versions status invalid
| 1
|
528,359
| 15,365,012,385
|
IssuesEvent
|
2021-03-01 22:49:02
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
opened
|
Encumbered Localities with Public Events Not Mapping in Berkeley Mapper?
|
Function-Encumbrances Function-Locality/Event/Georeferencing Priority-High
|
1) To hide sensitive locality data, we have the tools in Arctos to create private localities using the locality access attribute, while creating a second, public specimen event and locality with fuzzy data. However, it would be useful if the public event were mappable, e.g. to county level. Berkeley Mapper right now is disabled for either event for public access if any event has the private access locality attribute. Can this be fixed without causing sensitive locality data breaches in Berkeley Mapper?
See for example https://arctos.database.museum/guid/MSB:Bird:49622
2) On a similar subject, what tools exist if any to georeference a locality to county level or above using only WKT boundaries, rather than displaying randomly selected and misleading point coordinates in Berkeley Mapper? For WKTs for higher geography, localities that use them (e.g. "New Mexico, no specific locality recorded") still map to a single point in Berkeley Mapper, rather than showing the WKT boundary as uncertainty. This is misleading, as in the above example, New Mexico records get mapped to a point outside of a species range, merely because that point happened to be chosen as the center default for the state "New Mexico". See https://arctos.database.museum/guid/MSB:Mamm:21317
|
1.0
|
Encumbered Localities with Public Events Not Mapping in Berkeley Mapper? - 1) To hide sensitive locality data, we have the tools in Arctos to create private localities using the locality access attribute, while creating a second, public specimen event and locality with fuzzy data. However, it would be useful if the public event were mappable, e.g. to county level. Berkeley Mapper right now is disabled for either event for public access if any event has the private access locality attribute. Can this be fixed without causing sensitive locality data breaches in Berkeley Mapper?
See for example https://arctos.database.museum/guid/MSB:Bird:49622
2) On a similar subject, what tools exist if any to georeference a locality to county level or above using only WKT boundaries, rather than displaying randomly selected and misleading point coordinates in Berkeley Mapper? For WKTs for higher geography, localities that use them (e.g. "New Mexico, no specific locality recorded") still map to a single point in Berkeley Mapper, rather than showing the WKT boundary as uncertainty. This is misleading, as in the above example, New Mexico records get mapped to a point outside of a species range, merely because that point happened to be chosen as the center default for the state "New Mexico". See https://arctos.database.museum/guid/MSB:Mamm:21317
|
priority
|
encumbered localities with public events not mapping in berkeley mapper to hide sensitive locality data we have the tools in arctos to create private localities using the locality access attribute while creating a second public specimen event and locality with fuzzy data however it would be useful if the public event were mappable e g to county level berkeley mapper right now is disabled for either event for public access if any event has the private access locality attribute can this be fixed without causing sensitive locality data breaches in berkeley mapper see for example on a similar subject what tools exist if any to georeference a locality to county level or above using only wkt boundaries rather than displaying randomly selected and misleading point coordinates in berkeley mapper for wkts for higher geography localities that use them e g new mexico no specific locality recorded still map to a single point in berkeley mapper rather than showing the wkt boundary as uncertainty this is misleading as in the above example new mexico records get mapped to a point outside of a species range merely because that point happened to be chosen as the center default for the state new mexico see
| 1
|
570,090
| 17,018,637,583
|
IssuesEvent
|
2021-07-02 15:23:16
|
HEPData/hepdata
|
https://api.github.com/repos/HEPData/hepdata
|
closed
|
records: add missing associated_recid and publication_inspire_id to datasubmission
|
complexity: medium priority: high type: bug
|
Some `datasubmission` database tables of finalised records have a null `associated_recid` and `publication_inspire_id`, specifically version 1 of these records:
```sql
hepdata=> select distinct(datasubmission.publication_recid) from datasubmission, hepsubmission where datasubmission.associated_recid is null and datasubmission.publication_inspire_id is null and hepsubmission.overall_status='finished' and hepsubmission.publication_recid=datasubmission.publication_recid and hepsubmission.version=datasubmission.version order by datasubmission.publication_recid;
publication_recid
-------------------
5477
15988
18381
21666
23151
31972
34612
61925
73908
75023
75115
75323
76245
76247
76740
(15 rows)
```
The fields `associated_recid` and `publication_inspire_id` should have been added by the `finalise_datasubmission` function:
https://github.com/HEPData/hepdata/blob/ac009974b012493f6b04133283ff16b64cc62b52/hepdata/modules/records/utils/submission.py#L905-L906
But the finalisation must have partially failed for these records when migrating from the old HepData site in 2016. It should be straightforward to set the `publication_inspire_id` to be the same as the `inspire_id` of the `hepsubmission` table with the same `publication_recid`. But it's not clear how to recover the correct `associated_recid`, so part of the finalisation process might need to be repeated to create new data records if they do not already exist.
The missing `publication_inspire_id` means that the download links for individual tables are currently broken. The missing `associated_recid` means that data tables do not appear in searches. This issue was identified when running `hepdata utils cleanup-index` which gave a [Sentry exception](https://hepdata-sentry.web.cern.ch/sentry/hepdata-prod/issues/17674/?query=is%3Aunresolved) for https://www.hepdata.net/record/15988?version=1 due to the null `associated_recid` values. [This line](https://github.com/HEPData/hepdata/blob/a64ee94f7e47892907e4896de22c23bc92674da6/hepdata/ext/elasticsearch/api.py#L315) of `cleanup_index_batch` could be modified to filter out `None` values. However, it should not be necessary to run `hepdata utils cleanup-index` again now that the indexing code has been improved.
|
1.0
|
records: add missing associated_recid and publication_inspire_id to datasubmission - Some `datasubmission` database tables of finalised records have a null `associated_recid` and `publication_inspire_id`, specifically version 1 of these records:
```sql
hepdata=> select distinct(datasubmission.publication_recid) from datasubmission, hepsubmission where datasubmission.associated_recid is null and datasubmission.publication_inspire_id is null and hepsubmission.overall_status='finished' and hepsubmission.publication_recid=datasubmission.publication_recid and hepsubmission.version=datasubmission.version order by datasubmission.publication_recid;
publication_recid
-------------------
5477
15988
18381
21666
23151
31972
34612
61925
73908
75023
75115
75323
76245
76247
76740
(15 rows)
```
The fields `associated_recid` and `publication_inspire_id` should have been added by the `finalise_datasubmission` function:
https://github.com/HEPData/hepdata/blob/ac009974b012493f6b04133283ff16b64cc62b52/hepdata/modules/records/utils/submission.py#L905-L906
But the finalisation must have partially failed for these records when migrating from the old HepData site in 2016. It should be straightforward to set the `publication_inspire_id` to be the same as the `inspire_id` of the `hepsubmission` table with the same `publication_recid`. But it's not clear how to recover the correct `associated_recid`, so part of the finalisation process might need to be repeated to create new data records if they do not already exist.
The missing `publication_inspire_id` means that the download links for individual tables are currently broken. The missing `associated_recid` means that data tables do not appear in searches. This issue was identified when running `hepdata utils cleanup-index` which gave a [Sentry exception](https://hepdata-sentry.web.cern.ch/sentry/hepdata-prod/issues/17674/?query=is%3Aunresolved) for https://www.hepdata.net/record/15988?version=1 due to the null `associated_recid` values. [This line](https://github.com/HEPData/hepdata/blob/a64ee94f7e47892907e4896de22c23bc92674da6/hepdata/ext/elasticsearch/api.py#L315) of `cleanup_index_batch` could be modified to filter out `None` values. However, it should not be necessary to run `hepdata utils cleanup-index` again now that the indexing code has been improved.
|
priority
|
records add missing associated recid and publication inspire id to datasubmission some datasubmission database tables of finalised records have a null associated recid and publication inspire id specifically version of these records sql hepdata select distinct datasubmission publication recid from datasubmission hepsubmission where datasubmission associated recid is null and datasubmission publication inspire id is null and hepsubmission overall status finished and hepsubmission publication recid datasubmission publication recid and hepsubmission version datasubmission version order by datasubmission publication recid publication recid rows the fields associated recid and publication inspire id should have been added by the finalise datasubmission function but the finalisation must have partially failed for these records when migrating from the old hepdata site in it should be straightforward to set the publication inspire id to be the same as the inspire id of the hepsubmission table with the same publication recid but it s not clear how to recover the correct associated recid so part of the finalisation process might need to be repeated to create new data records if they do not already exist the missing publication inspire id means that the download links for individual tables are currently broken the missing associated recid means that data tables do not appear in searches this issue was identified when running hepdata utils cleanup index which gave a for due to the null associated recid values of cleanup index batch could be modified to filter out none values however it should not be necessary to run hepdata utils cleanup index again now that the indexing code has been improved
| 1
|
414,900
| 12,121,235,731
|
IssuesEvent
|
2020-04-22 08:59:12
|
ComPWA/pycompwa
|
https://api.github.com/repos/ComPWA/pycompwa
|
closed
|
pip install does not work
|
Priority: High
|
```
pip install pycompwa
Collecting pycompwa
Using cached https://files.pythonhosted.org/packages/3b/92/a62d2192624f31493530a94e38dfe033b77b0a24f3a139c2c8a99640d6c2/pycompwa-0.1.0.tar.gz
ERROR: Complete output from command python setup.py egg_info:
ERROR: Searching for cmake
Reading https://pypi.org/simple/cmake/
Downloading https://files.pythonhosted.org/packages/a5/7c/6525cadf99abbabbcb29676f53de0441e8d2f8d0114ab52aae2b31223a3b/cmake-3.16.3.tar.gz#sha256=6fadcaef9b52be35c26d52d9701dd031e96db06231964a5dc29f5d2155285da8
Best match: cmake 3.16.3
Processing cmake-3.16.3.tar.gz
Writing /tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.cfg
Running cmake-3.16.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ltdnbqii/cmake-3.16.3/egg-dist-tmp-f7nku42m
/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py:470: UserWarning: Normalizing '0.1.00' to '0.1.0'
normalized_version,
Traceback (most recent call last):
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.py", line 78, in <module>
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 425, in setup
_parse_setuptools_arguments(kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in _parse_setuptools_arguments
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in <listcomp>
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/usr/lib64/python3.6/distutils/dist.py", line 847, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 161, in __init__
_Command.__init__(self, dist)
File "/usr/lib64/python3.6/distutils/cmd.py", line 57, in __init__
raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-xt_l36ze/pycompwa/setup.py", line 42, in <module>
'uproot'],
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 456, in setup
return upstream_setup(*args, **kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 142, in setup
_install_setup_requires(attrs)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 137, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py", line 586, in fetch_build_eggs
replace_conflicting=True,
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 780, in resolve
replace_conflicting=replace_conflicting
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 1063, in best_match
return self.obtain(req, installer)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 1075, in obtain
return installer(requirement)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py", line 653, in fetch_build_egg
return cmd.easy_install(req)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/usr/lib64/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/lib64/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.py", line 78, in <module>
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 425, in setup
_parse_setuptools_arguments(kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in _parse_setuptools_arguments
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in <listcomp>
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/usr/lib64/python3.6/distutils/dist.py", line 847, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 161, in __init__
_Command.__init__(self, dist)
File "/usr/lib64/python3.6/distutils/cmd.py", line 57, in __init__
raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
----------------------------------------
ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-xt_l36ze/pycompwa/
```
|
1.0
|
pip install does not work - ```
pip install pycompwa
Collecting pycompwa
Using cached https://files.pythonhosted.org/packages/3b/92/a62d2192624f31493530a94e38dfe033b77b0a24f3a139c2c8a99640d6c2/pycompwa-0.1.0.tar.gz
ERROR: Complete output from command python setup.py egg_info:
ERROR: Searching for cmake
Reading https://pypi.org/simple/cmake/
Downloading https://files.pythonhosted.org/packages/a5/7c/6525cadf99abbabbcb29676f53de0441e8d2f8d0114ab52aae2b31223a3b/cmake-3.16.3.tar.gz#sha256=6fadcaef9b52be35c26d52d9701dd031e96db06231964a5dc29f5d2155285da8
Best match: cmake 3.16.3
Processing cmake-3.16.3.tar.gz
Writing /tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.cfg
Running cmake-3.16.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ltdnbqii/cmake-3.16.3/egg-dist-tmp-f7nku42m
/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py:470: UserWarning: Normalizing '0.1.00' to '0.1.0'
normalized_version,
Traceback (most recent call last):
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.py", line 78, in <module>
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 425, in setup
_parse_setuptools_arguments(kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in _parse_setuptools_arguments
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in <listcomp>
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/usr/lib64/python3.6/distutils/dist.py", line 847, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 161, in __init__
_Command.__init__(self, dist)
File "/usr/lib64/python3.6/distutils/cmd.py", line 57, in __init__
raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-xt_l36ze/pycompwa/setup.py", line 42, in <module>
'uproot'],
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 456, in setup
return upstream_setup(*args, **kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 142, in setup
_install_setup_requires(attrs)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 137, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py", line 586, in fetch_build_eggs
replace_conflicting=True,
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 780, in resolve
replace_conflicting=replace_conflicting
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 1063, in best_match
return self.obtain(req, installer)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/pkg_resources/__init__.py", line 1075, in obtain
return installer(requirement)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/dist.py", line 653, in fetch_build_egg
return cmd.easy_install(req)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/command/easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/usr/lib64/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/lib64/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-ltdnbqii/cmake-3.16.3/setup.py", line 78, in <module>
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 425, in setup
_parse_setuptools_arguments(kw)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in _parse_setuptools_arguments
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/skbuild/setuptools_wrap.py", line 212, in <listcomp>
for cmd in [dist.get_command_obj(command) for command in dist.commands]:
File "/usr/lib64/python3.6/distutils/dist.py", line 847, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/home/tau/sjaeger/compwa/lib64/python3.6/site-packages/setuptools/__init__.py", line 161, in __init__
_Command.__init__(self, dist)
File "/usr/lib64/python3.6/distutils/cmd.py", line 57, in __init__
raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
----------------------------------------
ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-xt_l36ze/pycompwa/
```
|
priority
|
pip install does not work pip install pycompwa collecting pycompwa using cached error complete output from command python setup py egg info error searching for cmake reading downloading best match cmake processing cmake tar gz writing tmp easy install ltdnbqii cmake setup cfg running cmake setup py q bdist egg dist dir tmp easy install ltdnbqii cmake egg dist tmp home tau sjaeger compwa site packages setuptools dist py userwarning normalizing to normalized version traceback most recent call last file home tau sjaeger compwa site packages setuptools sandbox py line in save modules yield saved file home tau sjaeger compwa site packages setuptools sandbox py line in setup context yield file home tau sjaeger compwa site packages setuptools sandbox py line in run setup execfile setup script ns file home tau sjaeger compwa site packages setuptools sandbox py line in execfile exec code globals locals file tmp easy install ltdnbqii cmake setup py line in file home tau sjaeger compwa site packages skbuild setuptools wrap py line in setup parse setuptools arguments kw file home tau sjaeger compwa site packages skbuild setuptools wrap py line in parse setuptools arguments for cmd in file home tau sjaeger compwa site packages skbuild setuptools wrap py line in for cmd in file usr distutils dist py line in get command obj cmd obj self command obj klass self file home tau sjaeger compwa site packages setuptools init py line in init command init self dist file usr distutils cmd py line in init raise typeerror dist must be a distribution instance typeerror dist must be a distribution instance during handling of the above exception another exception occurred traceback most recent call last file line in file tmp pip install xt pycompwa setup py line in uproot file home tau sjaeger compwa site packages skbuild setuptools wrap py line in setup return upstream setup args kw file home tau sjaeger compwa site packages setuptools init py line in setup install setup requires attrs file 
home tau sjaeger compwa site packages setuptools init py line in install setup requires dist fetch build eggs dist setup requires file home tau sjaeger compwa site packages setuptools dist py line in fetch build eggs replace conflicting true file home tau sjaeger compwa site packages pkg resources init py line in resolve replace conflicting replace conflicting file home tau sjaeger compwa site packages pkg resources init py line in best match return self obtain req installer file home tau sjaeger compwa site packages pkg resources init py line in obtain return installer requirement file home tau sjaeger compwa site packages setuptools dist py line in fetch build egg return cmd easy install req file home tau sjaeger compwa site packages setuptools command easy install py line in easy install return self install item spec dist location tmpdir deps file home tau sjaeger compwa site packages setuptools command easy install py line in install item dists self install eggs spec download tmpdir file home tau sjaeger compwa site packages setuptools command easy install py line in install eggs return self build and install setup script setup base file home tau sjaeger compwa site packages setuptools command easy install py line in build and install self run setup setup script setup base args file home tau sjaeger compwa site packages setuptools command easy install py line in run setup run setup setup script args file home tau sjaeger compwa site packages setuptools sandbox py line in run setup raise file usr contextlib py line in exit self gen throw type value traceback file home tau sjaeger compwa site packages setuptools sandbox py line in setup context yield file usr contextlib py line in exit self gen throw type value traceback file home tau sjaeger compwa site packages setuptools sandbox py line in save modules saved exc resume file home tau sjaeger compwa site packages setuptools sandbox py line in resume six reraise type exc self tb file home tau sjaeger compwa site 
packages setuptools vendor six py line in reraise raise value with traceback tb file home tau sjaeger compwa site packages setuptools sandbox py line in save modules yield saved file home tau sjaeger compwa site packages setuptools sandbox py line in setup context yield file home tau sjaeger compwa site packages setuptools sandbox py line in run setup execfile setup script ns file home tau sjaeger compwa site packages setuptools sandbox py line in execfile exec code globals locals file tmp easy install ltdnbqii cmake setup py line in file home tau sjaeger compwa site packages skbuild setuptools wrap py line in setup parse setuptools arguments kw file home tau sjaeger compwa site packages skbuild setuptools wrap py line in parse setuptools arguments for cmd in file home tau sjaeger compwa site packages skbuild setuptools wrap py line in for cmd in file usr distutils dist py line in get command obj cmd obj self command obj klass self file home tau sjaeger compwa site packages setuptools init py line in init command init self dist file usr distutils cmd py line in init raise typeerror dist must be a distribution instance typeerror dist must be a distribution instance error command python setup py egg info failed with error code in tmp pip install xt pycompwa
| 1
|
66,322
| 3,252,427,598
|
IssuesEvent
|
2015-10-19 14:50:20
|
OpenBEL/openbel-server
|
https://api.github.com/repos/OpenBEL/openbel-server
|
closed
|
Creating evidence (POST /api/evidence) may not conform to evidence schema
|
bug high priority in progress
|
BEL formats are converted into the Evidence JSON schema, which currently doesn't conform to http://next.belframework.org/schema/evidence.schema.json.
- ~~Make citation.authors an array or null~~
- ~~Metadata needs to be objects with name/value (optional uri) properties~~
- ~~Metadata object items should be allowed~~
- ~~Return single evidence resources as object instead of array~~
- ~~The references section should be added, but treated as an unchecked object (for now)~~
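The checklist above can be sketched as a minimal, hypothetical conformance spot-check. The field names beyond those mentioned in the issue (citation `type`/`id`, the sample metadata keys) are illustrative assumptions, not the actual schema; only the bullet points themselves are checked.

```python
# Hypothetical evidence payload shaped per the checklist above; extra field
# names are illustrative only.
evidence = {
    "citation": {
        "type": "PubMed",
        "id": "12345",
        "authors": ["Smith J", "Doe A"],  # an array (or None), not a string
    },
    "metadata": [
        # name/value objects, with an optional "uri" key
        {"name": "species", "value": "9606", "uri": "http://example.org/tax/9606"},
        {"name": "status", "value": {"reviewed": True}},  # object values allowed
    ],
    "references": {"free": "form"},  # present but unchecked for now
}

def conforms(ev):
    """Spot-check the bullet points from the issue (not the full schema)."""
    authors = ev["citation"]["authors"]
    if not (authors is None or isinstance(authors, list)):
        return False
    for item in ev["metadata"]:
        if "name" not in item or "value" not in item:
            return False
    return True

print(conforms(evidence))  # True
```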
|
1.0
|
Creating evidence (POST /api/evidence) may not conform to evidence schema - BEL formats are converted into Evidence JSON schema which currently doesn't conform to http://next.belframework.org/schema/evidence.schema.json.
- ~~Make citation.authors an array or null~~
- ~~Metadata needs to be objects with name/value (optional uri) properties~~
- ~~Metadata object items should be allowed~~
- ~~Return single evidence resources as object instead of array~~
- ~~The references section should be added, but treated as an unchecked object (for now)~~
|
priority
|
creating evidence post api evidence may not conform to evidence schema bel formats are converted into evidence json schema which currently doesn t conform to make citation authors an array or null metadata needs to be objects with name value optional uri properties metadata object items should be allowed return single evidence resources as object instead of array the references section should be added but treated as an unchecked object for now
| 1
|
656,295
| 21,725,940,092
|
IssuesEvent
|
2022-05-11 07:39:29
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Getting 400 Bad Request when trying outbound provisioning with Salesforce for a user with @gmail.com pattern
|
Priority/Highest Severity/Major bug 5.12.0-bug-fixing Affected-5.12.0 QA-Reported
|
**How to reproduce:**
1. Enable email as a username https://is.docs.wso2.com/en/latest/learn/using-email-address-as-the-username/
2. Set up outbound provisioning with salesforce as per guide https://is.docs.wso2.com/en/latest/learn/outbound-provisioning-with-salesforce/
3. Add a new user as test@wso2.com from the management console
4. Log in to Salesforce and check whether the user is provisioned successfully (the user IS provisioned on the Salesforce side)
5. Log in to the management console and add a user as test@gmail.com
6. Log in to Salesforce and check whether the user is provisioned successfully (the user is NOT provisioned on the Salesforce side)
A 400 Bad Request appears in the server logs for the @gmail.com user; the @wso2.com user was provisioned to Salesforce successfully.
With debug logs enabled for the provisioning component:


busercontent.com/31848014/166674885-236cd89a-198c-43c7-92c3-207389724e96.png)

```
[2022-05-04 16:35:32,426] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.listener.DefaultInboundUserProvisioningListener} - Adding domain name : PRIMARY to user : test8@gmail.com
[2022-05-04 16:35:32,428] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.OutboundProvisioningManager} - Provisioning cache HIT for org.wso2.carbon.identity.application.common.model.ServiceProvider@65c0d51c of carbon.super
[2022-05-04 16:35:32,428] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Cache entry not found for Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com. Fetching entity from DB
[2022-05-04 16:35:32,429] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Entry for Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com not found in cache or DB
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] INFO {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Provisioning pattern is not defined, hence using default provisioning pattern
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] INFO {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Provisioning separator is not defined, hence using default provisioning separator
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: ProfileId , value is: 00e8d000001riLw
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Email , value is: samuel@wso2.com
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LocaleSidKey , value is: en_US
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsOfflineUser , value is: false
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LanguageLocaleKey , value is: en_US
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: TimeZoneSidKey , value is: America/Los_Angeles
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Username , value is: test8@gmail.com
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsCallCenterAutoLogin , value is: false
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Alias , value is: Samuel
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LastName , value is: Gnaniah
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: EmailEncodingKey , value is: UTF-8
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsMarketingUser , value is: false
[2022-05-04 16:35:32,434] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - JSON object of User
{
"ProfileId": "00e8d000001riLw",
"Email": "samuel@wso2.com",
"LocaleSidKey": "en_US",
"UserPermissionsOfflineUser": false,
"LanguageLocaleKey": "en_US",
"TimeZoneSidKey": "America/Los_Angeles",
"Username": "test8@gmail.com",
"UserPermissionsCallCenterAutoLogin": false,
"Alias": "Samuel",
"LastName": "Gnaniah",
"EmailEncodingKey": "UTF-8",
"UserPermissionsMarketingUser": false
}
[2022-05-04 16:35:32,434] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Built user endpoint url : https://wso212-dev-ed.my.salesforce.com/services/data/v54.0/sobjects/user/
[2022-05-04 16:35:35,101] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authentication to salesforce returned with response code: 200
[2022-05-04 16:35:35,102] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authenticate response: {
"access_token": "00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.",
"signature": "zvu4VoUEk8/rHlrFDv7cbVoPTZVoa5ol3NrsSYWqkrg=",
"instance_url": "https://wso212-dev-ed.my.salesforce.com",
"id": "https://login.salesforce.com/id/00D8d000004HeMKEA0/0058d000002nkLgAAI",
"token_type": "Bearer",
"issued_at": "1651662334987"
}
[2022-05-04 16:35:35,102] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Access token is: 00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.
[2022-05-04 16:35:35,103] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Setting authorization header for method: POST as follows,
[2022-05-04 16:35:35,103] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authorization: OAuth 00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.
[2022-05-04 16:35:36,232] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - HTTP status 400 creating user
[2022-05-04 16:35:36,233] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] ERROR {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Received response status code: 400 text: Bad Request
[2022-05-04 16:35:36,234] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Error response : {"ProfileId":"00e8d000001riLw","Email":"samuel@wso2.com","LocaleSidKey":"en_US","UserPermissionsOfflineUser":false,"LanguageLocaleKey":"en_US","TimeZoneSidKey":"America/Los_Angeles","Username":"test8@gmail.com","UserPermissionsCallCenterAutoLogin":false,"Alias":"Samuel","LastName":"Gnaniah","EmailEncodingKey":"UTF-8","UserPermissionsMarketingUser":false}
[2022-05-04 16:35:36,236] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Returning created user's ID: null
[2022-05-04 16:35:36,239] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Caching newly added Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com&& provisioningIdentifier=e65bac16-be34-4ea0-b3d4-911d9d06894e
```
https://user-images.githubusercontent.com/31848014/166675099-a6767cb2-d006-44cf-9477-5829362f0a25.mp4
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 beta
default/h2
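For reference, the REST call the connector logs above can be reproduced with a minimal sketch. The endpoint path and JSON payload are taken verbatim from the debug log; the instance URL and access token are placeholders you would substitute from the OAuth step.

```python
import json
import urllib.request

INSTANCE = "https://example.my.salesforce.com"  # assumption: your org's instance URL
TOKEN = "<access-token>"                        # assumption: token from the OAuth call

def build_user_request(user):
    """Recreate the POST /services/data/v54.0/sobjects/user/ request from the log."""
    return urllib.request.Request(
        INSTANCE + "/services/data/v54.0/sobjects/user/",
        data=json.dumps(user).encode("utf-8"),
        headers={
            "Authorization": "OAuth " + TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Payload values copied from the failing log entry above.
user = {
    "Username": "test8@gmail.com",
    "Email": "samuel@wso2.com",
    "LastName": "Gnaniah",
    "Alias": "Samuel",
    "ProfileId": "00e8d000001riLw",
    "TimeZoneSidKey": "America/Los_Angeles",
    "LocaleSidKey": "en_US",
    "EmailEncodingKey": "UTF-8",
    "LanguageLocaleKey": "en_US",
}
req = build_user_request(user)
print(req.get_method(), req.full_url)
```

Calling `urllib.request.urlopen(req)` would perform the request; in the failing case above Salesforce answered 400, and reading the `HTTPError` body would surface the rejection reason, which the connector's error log currently does not include.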
|
1.0
|
Getting 400 Bad Request when trying outbound provisioning with Salesforce for a user with @gmail.com pattern - **How to reproduce:**
1. Enable email as a username https://is.docs.wso2.com/en/latest/learn/using-email-address-as-the-username/
2. Set up outbound provisioning with salesforce as per guide https://is.docs.wso2.com/en/latest/learn/outbound-provisioning-with-salesforce/
3. Add a new user as test@wso2.com from the management console
4. Log in to Salesforce and check whether the user is provisioned successfully (the user IS provisioned on the Salesforce side)
5. Log in to the management console and add a user as test@gmail.com
6. Log in to Salesforce and check whether the user is provisioned successfully (the user is NOT provisioned on the Salesforce side)
A 400 Bad Request appears in the server logs for the @gmail.com user; the @wso2.com user was provisioned to Salesforce successfully.
With debug logs enabled for the provisioning component:


busercontent.com/31848014/166674885-236cd89a-198c-43c7-92c3-207389724e96.png)

```
[2022-05-04 16:35:32,426] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.listener.DefaultInboundUserProvisioningListener} - Adding domain name : PRIMARY to user : test8@gmail.com
[2022-05-04 16:35:32,428] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.OutboundProvisioningManager} - Provisioning cache HIT for org.wso2.carbon.identity.application.common.model.ServiceProvider@65c0d51c of carbon.super
[2022-05-04 16:35:32,428] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Cache entry not found for Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com. Fetching entity from DB
[2022-05-04 16:35:32,429] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Entry for Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com not found in cache or DB
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] INFO {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Provisioning pattern is not defined, hence using default provisioning pattern
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] INFO {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Provisioning separator is not defined, hence using default provisioning separator
[2022-05-04 16:35:32,431] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: ProfileId , value is: 00e8d000001riLw
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Email , value is: samuel@wso2.com
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LocaleSidKey , value is: en_US
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsOfflineUser , value is: false
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LanguageLocaleKey , value is: en_US
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: TimeZoneSidKey , value is: America/Los_Angeles
[2022-05-04 16:35:32,432] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Username , value is: test8@gmail.com
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsCallCenterAutoLogin , value is: false
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: Alias , value is: Samuel
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: LastName , value is: Gnaniah
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: EmailEncodingKey , value is: UTF-8
[2022-05-04 16:35:32,433] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - The key is: UserPermissionsMarketingUser , value is: false
[2022-05-04 16:35:32,434] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - JSON object of User
{
"ProfileId": "00e8d000001riLw",
"Email": "samuel@wso2.com",
"LocaleSidKey": "en_US",
"UserPermissionsOfflineUser": false,
"LanguageLocaleKey": "en_US",
"TimeZoneSidKey": "America/Los_Angeles",
"Username": "test8@gmail.com",
"UserPermissionsCallCenterAutoLogin": false,
"Alias": "Samuel",
"LastName": "Gnaniah",
"EmailEncodingKey": "UTF-8",
"UserPermissionsMarketingUser": false
}
[2022-05-04 16:35:32,434] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Built user endpoint url : https://wso212-dev-ed.my.salesforce.com/services/data/v54.0/sobjects/user/
[2022-05-04 16:35:35,101] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authentication to salesforce returned with response code: 200
[2022-05-04 16:35:35,102] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authenticate response: {
"access_token": "00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.",
"signature": "zvu4VoUEk8/rHlrFDv7cbVoPTZVoa5ol3NrsSYWqkrg=",
"instance_url": "https://wso212-dev-ed.my.salesforce.com",
"id": "https://login.salesforce.com/id/00D8d000004HeMKEA0/0058d000002nkLgAAI",
"token_type": "Bearer",
"issued_at": "1651662334987"
}
[2022-05-04 16:35:35,102] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Access token is: 00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.
[2022-05-04 16:35:35,103] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Setting authorization header for method: POST as follows,
[2022-05-04 16:35:35,103] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Authorization: OAuth 00D8d000004HeMK!AQYAQMKy0VOTZx8xx8iAfOqm111pN9K22YdnvMMp.Ay_4iqIK0HGEIgB_SXqaVHTOWrDPMRi1kxPIOvWZEt5ZeWlp37XJd3.
[2022-05-04 16:35:36,232] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - HTTP status 400 creating user
[2022-05-04 16:35:36,233] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] ERROR {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Received response status code: 400 text: Bad Request
[2022-05-04 16:35:36,234] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Error response : {"ProfileId":"00e8d000001riLw","Email":"samuel@wso2.com","LocaleSidKey":"en_US","UserPermissionsOfflineUser":false,"LanguageLocaleKey":"en_US","TimeZoneSidKey":"America/Los_Angeles","Username":"test8@gmail.com","UserPermissionsCallCenterAutoLogin":false,"Alias":"Samuel","LastName":"Gnaniah","EmailEncodingKey":"UTF-8","UserPermissionsMarketingUser":false}
[2022-05-04 16:35:36,236] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.connector.salesforce.SalesforceProvisioningConnector} - Returning created user's ID: null
[2022-05-04 16:35:36,239] [ca4616d7-b62d-4ae7-a3b1-1ee4b6fab6ae] DEBUG {org.wso2.carbon.identity.provisioning.dao.CacheBackedProvisioningMgtDAO} - Caching newly added Provisioning Entity : identityProviderName=Salesforce.com&& connectorType=salesforce&& provisioningEntityType=USER&& provisioningEntityName=test8@gmail.com&& provisioningIdentifier=e65bac16-be34-4ea0-b3d4-911d9d06894e
```
https://user-images.githubusercontent.com/31848014/166675099-a6767cb2-d006-44cf-9477-5829362f0a25.mp4
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 beta
default/h2
|
priority
|
getting bad request when trying outbound provisioning with salesforce for a user with gmail com pattern how to reproduce enable email as a username set up outbound provisioning with salesforce as per guide add a new user as test com from managemnet console login to salesforce and check whether the user is provisioned succesfully user is provisioned to salesforce side login to managemnt console and add a user as test gmail com login to salesforce and check whether the user is provisioned succesfully user is not provisioned to salesforce side getting bad request in the server logs for the gmail com user com user got provisioned to salesforce succesfully when debug logs enabled for provisioning component busercontent com png debug org carbon identity provisioning listener defaultinbounduserprovisioninglistener adding domain name primary to user gmail com debug org carbon identity provisioning outboundprovisioningmanager provisioning cache hit for org carbon identity application common model serviceprovider of carbon super debug org carbon identity provisioning dao cachebackedprovisioningmgtdao cache entry not found for provisioning entity identityprovidername salesforce com connectortype salesforce provisioningentitytype user provisioningentityname gmail com fetching entity from db debug org carbon identity provisioning dao cachebackedprovisioningmgtdao entry for provisioning entity identityprovidername salesforce com connectortype salesforce provisioningentitytype user provisioningentityname gmail com not found in cache or db info org carbon identity provisioning connector salesforce salesforceprovisioningconnector provisioning pattern is not defined hence using default provisioning pattern info org carbon identity provisioning connector salesforce salesforceprovisioningconnector provisioning separator is not defined hence using default provisioning separator debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is 
profileid value is debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is email value is samuel com debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is localesidkey value is en us debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is userpermissionsofflineuser value is false debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is languagelocalekey value is en us debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is timezonesidkey value is america los angeles debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is username value is gmail com debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is userpermissionscallcenterautologin value is false debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is alias value is samuel debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is lastname value is gnaniah debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is emailencodingkey value is utf debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector the key is userpermissionsmarketinguser value is false debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector json object of user profileid email samuel com localesidkey en us userpermissionsofflineuser false languagelocalekey en us timezonesidkey america los angeles username gmail com userpermissionscallcenterautologin false alias samuel lastname gnaniah emailencodingkey utf userpermissionsmarketinguser false debug org carbon identity provisioning connector salesforce 
salesforceprovisioningconnector built user endpoint url debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector authentication to salesforce returned with response code debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector authenticate response access token ay signature instance url id token type bearer issued at debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector access token is ay debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector setting authorization header for method post as follows debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector authorization oauth ay debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector http status creating user error org carbon identity provisioning connector salesforce salesforceprovisioningconnector received response status code text bad request debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector error response profileid email samuel com localesidkey en us userpermissionsofflineuser false languagelocalekey en us timezonesidkey america los angeles username gmail com userpermissionscallcenterautologin false alias samuel lastname gnaniah emailencodingkey utf userpermissionsmarketinguser false debug org carbon identity provisioning connector salesforce salesforceprovisioningconnector returning created user s id null debug org carbon identity provisioning dao cachebackedprovisioningmgtdao caching newly added provisioning entity identityprovidername salesforce com connectortype salesforce provisioningentitytype user provisioningentityname gmail com provisioningidentifier environment information please complete the following information remove any unnecessary fields is beta default
| 1
|
368,130
| 10,866,592,036
|
IssuesEvent
|
2019-11-14 21:36:55
|
tsgrp/HPI
|
https://api.github.com/repos/tsgrp/HPI
|
opened
|
Folder Notes Enable Contentless Mode Unset In Chained Action Calls
|
High Priority issue
|
When an OCMS action makes a chained call to folder notes, if the new enableContentlessMode flag is not passed in, the action will fail to cast the flag on the backend. Since folder notes is a commonly used sub-action throughout OCMS, the solution needs to be general in order to avoid finding and updating each call to folder notes.
**Proposed Solution:** Since "Contentless Mode" is repo-specific, the property can be set in hpi-defaults to off (false) and then overridden in the repo/project-specific areas that want Contentless Mode turned on (hpi-demo-hbase). So that OCMS still has access to the flag for branching logic, the property can be set in the config-project and read on the app side, either pulling that value or defaulting to false.
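The proposed default-with-override lookup can be sketched as follows. The property key and config shapes are hypothetical illustrations, not the actual HPI/OCMS configuration names; the point is only that chained callers which never pass the flag fall through to the hpi-defaults value.

```python
# Hypothetical property key; the real HPI/OCMS key may differ.
KEY = "folderNotes.enableContentlessMode"

# hpi-defaults: Contentless Mode is off unless a project overrides it.
HPI_DEFAULTS = {KEY: False}

def enable_contentless_mode(project_config):
    """Return the flag, falling back to the default when the caller omits it."""
    return bool(project_config.get(KEY, HPI_DEFAULTS[KEY]))

print(enable_contentless_mode({}))            # False: chained call without the flag
print(enable_contentless_mode({KEY: True}))   # True: repo-specific override
```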
|
1.0
|
Folder Notes Enable Contentless Mode Unset In Chained Action Calls - When an OCMS action makes a chained call to folder notes, if the new enableContentlessMode flag is not passed in, the action will fail to cast the flag on the backend. Since folder notes is a commonly used sub-action throughout OCMS, the solution needs to be general in order to avoid finding and updating each call to folder notes.
**Proposed Solution:** Since "Contentless Mode" is repo-specific, the property can be set in hpi-defaults to off (false) and then overridden in the repo/project-specific areas that want Contentless Mode turned on (hpi-demo-hbase). So that OCMS still has access to the flag for branching logic, the property can be set in the config-project and read on the app side, either pulling that value or defaulting to false.
|
priority
|
folder notes enable contentless mode unset in chained action calls when an ocms action makes a chained call to folder notes if the new enablecontentlessmode flag is not passed in the action will fail to cast the flag on the backend since folder notes is a commonly used sub action throughout ocms the solution needs to be general in order to avoid finding and updating each call to folder notes proposed solutions since contentless mode is repo specific the property can be set in the hpi defaults to be off false and then override in the repo project specific areas that want to turn contentless mode on hpi demo hbase in ocms to still have access to the flag for branching logic the property can be set in the config project and set on app to either pull that value or default to false
| 1
|
120,099
| 4,781,603,472
|
IssuesEvent
|
2016-10-28 09:58:00
|
voteamerica/voteamerica.github.io
|
https://api.github.com/repos/voteamerica/voteamerica.github.io
|
closed
|
RIDER signup form updates
|
high priority
|
- [x] Add rider address text input with Label 'Pick up address' and `name="RiderCollectionAddress"`, immediately above the pick up zip code.
- [x] Remove state drop-down: We no longer need this, now that we also have a polling location lookup
- [x] Add notes box just before Personal details section:
Notes: [Temporary text inside box: I have the following accommodation requirements (ex: service animal, assistance folding equipment, assistance entering and exiting car)]
- [ ] Add full destination address input boxes with autocomplete for zip code: Riders are unlikely to know their destination zip code and drivers need the full address to decide whether or not to accept a ride request `[DEPRECATED]`
- [ ] Add box for town name, that gets autocompleted once collection zip code has been entered: Drivers will need this information to decide whether or not to accept a ride. `[DEPRECATED]`
- [x] Add Preferred method of contact tick-boxes: Email / Phone / SMS
- [x] Make phone number entry compulsory
- [x] Make email address entry optional, unless this is specified as the preferred method of contact.
- [x] Add check box next to other legal ones: I understand that Carpool Vote provides introductions between riders and volunteer drivers who have signed up on the platform. I understand that anybody can sign up to drive and Carpool Vote is unable to perform any background checks on people who use the platform. As with any other environment where I meet new people, I will take steps to keep myself and my possessions safe and accept that Carpool Vote cannot be responsible if anything goes wrong.
|
1.0
|
RIDER signup form updates - - [x] Add rider address text input with Label 'Pick up address' and `name="RiderCollectionAddress"`, immediately above the pick up zip code.
- [x] Remove state drop-down: We no longer need this, now that we also have a polling location lookup
- [x] Add notes box just before Personal details section:
Notes: [Temporary text inside box: I have the following accommodation requirements (ex: service animal, assistance folding equipment, assistance entering and exiting car)]
- [ ] Add full destination address input boxes with autocomplete for zip code: Riders are unlikely to know their destination zip code and drivers need the full address to decide whether or not to accept a ride request `[DEPRECATED]`
- [ ] Add box for town name, that gets autocompleted once collection zip code has been entered: Drivers will need this information to decide whether or not to accept a ride. `[DEPRECATED]`
- [x] Add Preferred method of contact tick-boxes: Email / Phone / SMS
- [x] Make phone number entry compulsory
- [x] Make email address entry optional, unless this is specified as the preferred method of contact.
- [x] Add check box next to other legal ones: I understand that Carpool Vote provides introductions between riders and volunteer drivers who have signed up on the platform. I understand that anybody can sign up to drive and Carpool Vote is unable to perform any background checks on people who use the platform. As with any other environment where I meet new people, I will take steps to keep myself and my possessions safe and accept that Carpool Vote cannot be responsible if anything goes wrong.
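The contact-detail rules in the checklist above (phone compulsory; email optional unless email is the preferred contact method) can be sketched as a small validator. Field names and the `validate_contact` helper are illustrative, not the real form's names.

```python
# Sketch of the contact-detail validation rules from the rider signup checklist.

def validate_contact(phone, email, preferred):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not phone:
        errors.append("Phone number is required.")
    if preferred == "email" and not email:
        errors.append("Email is required when it is the preferred contact method.")
    if preferred not in ("email", "phone", "sms"):
        errors.append("Preferred contact method must be email, phone, or SMS.")
    return errors

print(validate_contact("555-0100", "", "sms"))    # [] -- valid without an email
print(validate_contact("", "a@b.org", "email"))   # phone missing -> one error
```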
|
priority
|
rider signup form updates add rider address text input with label pick up address and name ridercollectionaddress immediately above the pick up zip code remove state drop down we no longer need this now that we also have a polling location lookup add notes box just before personal details section notes add full destination address input boxes with autocomplete for zip code riders are unlikely to know their destination zip code and drivers need the full address to decide whether or not to accept a ride request add box for town name that gets autocompleted once collection zip code has been entered drivers will need this information to decide whether or not to accept a ride add preferred method of contact tick boxes email phone sms make phone number entry compulsory make email address entry optional unless this is specified as the preferred method of contact add check box next to other legal ones i understand that carpool vote provides introductions between riders and volunteer drivers who have signed up on the platform i understand that anybody can sign up to drive and carpool vote is unable to perform any background checks on people who use the platform as with any other environment where i meet new people i will take steps to keep myself and my possessions safe and accept that carpool vote cannot be responsible if anything goes wrong
| 1
|
160,272
| 6,085,651,731
|
IssuesEvent
|
2017-06-17 16:43:40
|
department-of-veterans-affairs/caseflow
|
https://api.github.com/repos/department-of-veterans-affairs/caseflow
|
closed
|
Strange VACOLS Diary Notes are showing up for Dispatch claims whose EPs have already been cested
|
bug-high-priority caseflow-dispatch In Validation tango
|
AMO contacted @abbyraskin and me this afternoon to let us know that they were seeing odd VACOLS Diary notes show up when they ran the 97 VACOLS report (in transit to ARC but not cested), even though the EPs had already been cested.
'So they had an assistant coach review some of them and he decided on his own the diaries were the reason these cases were showing up on the report. That's when they decided to start deleting the diaries.' (John subsequently asked them to stop doing this.)
Another strange part of this is that at least two of the three instances pointed out to us were claims sent by Caseflow Dispatch to be worked at ARC (we don't usually add diary notes for those).
**Example of the VACOLS view:**
<img width="564" alt="screen shot 2017-06-08 at 4 29 27 pm" src="https://user-images.githubusercontent.com/4975959/26953033-2d71524a-4c76-11e7-9dbb-3426ca78dec4.png">
|
1.0
|
Strange VACOLS Diary Notes are showing up for Dispatch claims whose EPs have already been cested - AMO contacted @abbyraskin and me this afternoon to let us know that they were seeing odd VACOLS Diary notes show up when they ran the 97 VACOLS report (in transit to ARC but not cested), even though the EPs had already been cested.
'So they had an assistant coach review some of them and he decided on his own the diaries were the reason these cases were showing up on the report. That's when they decided to start deleting the diaries.' (John subsequently asked them to stop doing this.)
Another strange part of this is that at least two of the three instances pointed out to us were claims sent by Caseflow Dispatch to be worked at ARC (we don't usually add diary notes for those).
**Example of the VACOLS view:**
<img width="564" alt="screen shot 2017-06-08 at 4 29 27 pm" src="https://user-images.githubusercontent.com/4975959/26953033-2d71524a-4c76-11e7-9dbb-3426ca78dec4.png">
|
priority
|
strange vacols diary notes are showing up for dispatch claims whose eps that have already been cested amo contacted abbyraskin and i this afternoon to let us know that they were seeing odd vacols diary notes show up when they ran the vacols report in transit to arc but not cested even though the eps already had been cested so they had an assistant coach review some of them and he decided on his own the diaries were the reason these cases were showing up on the report that when they decided to start deleting the diaries john subsequently asked them to stop doing this another strange parts of this is that at least two of the three instances pointed out to us were claims sent by caseflow dispatch to be worked at arc we don t usually add diary notes for those example of the vacols view img width alt screen shot at pm src
| 1
|
310,152
| 9,486,402,298
|
IssuesEvent
|
2019-04-22 13:54:14
|
IBM/carbon-components-react
|
https://api.github.com/repos/IBM/carbon-components-react
|
reopened
|
AVT 1 - Form Item: DAP violations related to label
|
priority: high type: a11y ♿
|
Environment
macOS Mojave version 10.14.2
Chrome version 72.0.3626.96 (Official Build) (64-bit)
Detailed Description
Run DAP with Feb. 2019 ruleset on the Form Item Component
The following violations are reported:
1. Each form control must have an associated label
[H44: Using label elements to associate text labels with form controls](https://www.w3.org/TR/WCAG20-TECHS/H44)
[ARIA16: Using aria-labelledby to provide a name for user interface controls](https://www.w3.org/TR/WCAG20-TECHS/ARIA16.html)
2. Provide descriptive text in label element
4.1.2 Name, Role, Value
Related to #1932
|
1.0
|
AVT 1 - Form Item: DAP violations related to label - Environment
macOS Mojave version 10.14.2
Chrome version 72.0.3626.96 (Official Build) (64-bit)
Detailed Description
Run DAP with Feb. 2019 ruleset on the Form Item Component
The following violations are reported:
1. Each form control must have an associated label
[H44: Using label elements to associate text labels with form controls](https://www.w3.org/TR/WCAG20-TECHS/H44)
[ARIA16: Using aria-labelledby to provide a name for user interface controls](https://www.w3.org/TR/WCAG20-TECHS/ARIA16.html)
2. Provide descriptive text in label element
4.1.2 Name, Role, Value
Related to #1932
|
priority
|
avt form item dap violations related to label environment macos mojave version chrome version official build bit detailed description run dap with feb ruleset on the form item component the following violations are reported each form control must have an associated label provide descriptive text in label element name role value related to
| 1
|
656,927
| 21,780,041,965
|
IssuesEvent
|
2022-05-13 17:47:12
|
Maptio/maptio
|
https://api.github.com/repos/Maptio/maptio
|
opened
|
Investigate how to achieve user account linking in Auth0
|
bug area: auth priority: high
|
Currently, users can't have a combined Google/social + password-based account. They must choose one method and stick to it. This has led to quite a number of issues recently. They're mostly easy to fix now that I've got more of a handle on the authentication and it's dramatically simplified. However, they're still non-trivial issues that have already consumed a non-trivial amount of time when taken together. It'd be good to address this.
Firstly, let's dig into what Auth0's recommendations are, can we follow those and achieve this fairly quickly or is this going to be difficult? Let's treat this issue as an initial simple, time-boxed investigation. This is one of the few most important things I wanted to do as part of improving user management / auth refactoring work that we never got round to. Would be good to get to this.
And this is also related to #670 so might be worth working on these two together.
Related intercom conversations:
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/all/conversations/106323200002109
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/conversation/106323200002144
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/all/conversations/106323200002085
|
1.0
|
Investigate how to achieve user account linking in Auth0 - Currently, users can't have a combined Google/social + password-based account. They must choose one method and stick to it. This has led to quite a number of issues recently. They're mostly easy to fix now that I've got more of a handle on the authentication and it's dramatically simplified. However, they're still non-trivial issues that have already consumed a non-trivial amount of time when taken together. It'd be good to address this.
Firstly, let's dig into what Auth0's recommendations are, can we follow those and achieve this fairly quickly or is this going to be difficult? Let's treat this issue as an initial simple, time-boxed investigation. This is one of the few most important things I wanted to do as part of improving user management / auth refactoring work that we never got round to. Would be good to get to this.
And this is also related to #670 so might be worth working on these two together.
Related intercom conversations:
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/all/conversations/106323200002109
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/conversation/106323200002144
* https://app.intercom.com/a/apps/q3x5lnhp/inbox/inbox/all/conversations/106323200002085
|
priority
|
investigate how to achieve user account linking in currently users can t have a combined google social password based account they must choose one method and stick to it this has lead to quite a number of issues recently they re mostly easy to fix now that i ve got more of a handle on the authentication and it s dramatically simplified however they re still non trivial issues that have already consumed a non trivial amount of time when taken together it d be good to address this firstly let s dig into what s recommendations are can we follow those and achieve this fairly quickly or is this going to be difficult let s treat this issue as an initial simple time boxed investigation this is one of the few most important things i wanted to do as part of improving user management auth refactoring work that we never got round to would be good to get to this and this is also related to so might be worth working on these two together related intercom conversations
| 1
|
374,346
| 11,088,378,915
|
IssuesEvent
|
2019-12-14 10:34:59
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
After reloading 'Select Categories to Hide AMP', it shows empty
|
NEXT UPDATE [Priority: HIGH] bug
|
If we select a category in the 'Select Categories to Hide AMP' option and reload the page, the 'Select Categories to Hide AMP' option shows empty, but the functionality still works.
For reference: https://wordpress.org/support/topic/all-the-settings-automatically-resetting/
|
1.0
|
After reloading 'Select Categories to Hide AMP', it shows empty - If we select a category in the 'Select Categories to Hide AMP' option and reload the page, the 'Select Categories to Hide AMP' option shows empty, but the functionality still works.
For reference: https://wordpress.org/support/topic/all-the-settings-automatically-resetting/
|
priority
|
after reloading select categories to hide amp it shows empty if we select a category in select categories to hide amp option and reload the page then select categories to hide amp option shows empty but the functionality works for reference
| 1
|
801,194
| 28,479,287,779
|
IssuesEvent
|
2023-04-18 00:16:32
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Shift + M should unmute topics in muted streams
|
help wanted area: keyboard UI in progress priority: high release goal
|
At present, `Shift + M` mutes unmuted topics, and unmutes muted topics, regardless of the mute state of the stream the topic is in.
As part of adding the "unmute topic" feature, "Shift + M" should now by default *unmute* topics in muted streams. (If the topic has already been unmuted, it should mute it.)
|
1.0
|
Shift + M should unmute topics in muted streams - At present, `Shift + M` mutes unmuted topics, and unmutes muted topics, regardless of the mute state of the stream the topic is in.
As part of adding the "unmute topic" feature, "Shift + M" should now by default *unmute* topics in muted streams. (If the topic has already been unmuted, it should mute it.)
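The toggle described above can be sketched as a small state function: in a muted stream the default action is to unmute the topic (muting it again if already unmuted), while in an unmuted stream the default remains mute. This illustrates the stated behavior only; it is not Zulip's actual implementation.

```python
# Sketch of the Shift+M behavior described in the issue text.

def shift_m_action(stream_muted, topic_state):
    """Return the topic's new state after pressing Shift+M.

    topic_state is "muted", "unmuted", or None (no explicit per-topic setting).
    """
    if stream_muted:
        # Muted stream: default is to unmute; an already-unmuted topic is re-muted.
        return "muted" if topic_state == "unmuted" else "unmuted"
    # Unmuted stream: default is to mute; an already-muted topic is unmuted.
    return "unmuted" if topic_state == "muted" else "muted"

print(shift_m_action(True, None))        # unmuted
print(shift_m_action(True, "unmuted"))   # muted
print(shift_m_action(False, None))       # muted
```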
|
priority
|
shift m should unmute topics in muted streams at present shift m mutes unmuted topics and unmutes muted topics regardless of the mute state of the stream the topic is in as part of adding the unmute topic feature shift m should now by default unmute topics in muted streams if the topic has already been unmuted it should mute it
| 1
|
299,462
| 9,205,524,700
|
IssuesEvent
|
2019-03-08 10:49:35
|
Wotuu/keystone.guru
|
https://api.github.com/repos/Wotuu/keystone.guru
|
closed
|
Improve robustness of the MDT Importer
|
bug high priority
|
If one little thing goes wrong now, the entire import fails. Ideally there should be a 'warnings' list of things that went wrong while trying to import the route. It shouldn't just completely fail; there's no need for that.
|
1.0
|
Improve robustness of the MDT Importer - If one little thing goes wrong now, the entire import fails. Ideally there should be a 'warnings' list of things that went wrong while trying to import the route. It shouldn't just completely fail; there's no need for that.
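The suggested fix can be sketched as a fault-tolerant import loop: each failed item becomes a warning instead of aborting the whole import. `parse_pull` here is a hypothetical per-item parser used only for illustration, not part of the real MDT importer.

```python
# Sketch of importing with per-item warnings instead of an all-or-nothing failure.

def import_route(items, parse_item):
    imported, warnings = [], []
    for index, item in enumerate(items):
        try:
            imported.append(parse_item(item))
        except (ValueError, KeyError) as exc:
            warnings.append(f"Item {index} skipped: {exc}")
    return imported, warnings

def parse_pull(item):  # toy parser standing in for the real per-pull parsing
    if "enemies" not in item:
        raise KeyError("missing 'enemies'")
    return item["enemies"]

routes, warns = import_route([{"enemies": [1, 2]}, {}, {"enemies": [3]}], parse_pull)
print(routes)  # the two well-formed items are imported
print(warns)   # one warning for the malformed second item
```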
|
priority
|
improve robustness of the mdt importer if one little thing goes wrong now the entire import fails ideally there should be a warnings of things that went wrong while trying to import the route it shouldn t just completely fail there s no need for that
| 1
|
192,423
| 6,849,837,581
|
IssuesEvent
|
2017-11-13 23:49:32
|
Cyberjusticelab/JusticeAI
|
https://api.github.com/repos/Cyberjusticelab/JusticeAI
|
closed
|
Fact clustering
|
Machine Learning Points (8) Priority (High) Risk (High)
|
*Description*
As a user, I would like the system to determine categories of facts based on precedent data.
Major work will be conducted on #105. The same approach will then be used for #106.
*Scope of Work*
- [x] Preprocess data of decisions.
- [x] Extract all facts from all obtained precedents.
- [x] Cluster facts based on sentence similarity.
- [x] Create a text file for each cluster of facts where the similar facts are saved, for manual review.
*Story Points*
- 8
*Priority*
- High
*Risk*
- High
*Acceptance Criteria*
- Given a fact, the system should return the cluster label for it.
- Looking at the label's text file, the written facts should be similar to the input fact.
|
1.0
|
Fact clustering - *Description*
As a user, I would like the system to determine categories of facts based on precedent data.
Major work will be conducted on #105. The same approach will then be used for #106.
*Scope of Work*
- [x] Preprocess data of decisions.
- [x] Extract all facts from all obtained precedents.
- [x] Cluster facts based on sentence similarity.
- [x] Create a text file for each cluster of facts where the similar facts are saved, for manual review.
*Story Points*
- 8
*Priority*
- High
*Risk*
- High
*Acceptance Criteria*
- Given a fact, the system should return the cluster label for it.
- Looking at the label's text file, the written facts should be similar to the input fact.
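The clustering step in the scope of work can be sketched with a deliberately simplified approach: greedily group facts whose word-overlap (Jaccard) similarity exceeds a threshold. The real pipeline would use proper sentence-similarity measures; this only illustrates the cluster-then-review workflow, and all function names are assumptions.

```python
# Toy fact clustering by Jaccard word overlap, greedily assigning each fact to
# the first cluster whose representative (first fact) is similar enough.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster_facts(facts, threshold=0.5):
    clusters = []  # each cluster is a list of similar facts
    for fact in facts:
        for cluster in clusters:
            if jaccard(fact, cluster[0]) >= threshold:
                cluster.append(fact)
                break
        else:
            clusters.append([fact])
    return clusters

facts = [
    "the tenant paid the rent late",
    "the tenant paid the rent late twice",
    "the landlord refused repairs",
]
clusters = cluster_facts(facts)
print(len(clusters))  # 2: the two rent facts group together
```

Each resulting cluster could then be written to its own text file for the manual review mentioned in the acceptance criteria.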
|
priority
|
fact clustering description as a user i would like the system to determine categories of facts based on precedent data major work will be conducted on the same approach will then be used for scope of work preprocess data of decisions extract all facts from all obtained precedents cluster facts based on sentence similarity create a text file for each cluster of facts where the similar facts are saved for manual review story points priority high risk high acceptance criteria given a fact the system should return the cluster label for it looking at the label s text file the written facts should be similar to the input fact
| 1
|
382,320
| 11,303,860,136
|
IssuesEvent
|
2020-01-17 21:14:30
|
sunpy/sunpy
|
https://api.github.com/repos/sunpy/sunpy
|
closed
|
Tighter definition of the bottom left and top right coordinate map properties required?
|
Affects Release Bug(?) Priority High map
|
<!-- These comments are hidden when you submit the issue, so you do not need to remove them!
Please be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst
Please be sure to check out our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->
<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue! -->
### Description
<!-- Provide a general description of the bug. -->
Map provides the bottom left and top right coordinates as properties. The issue is that the meaning of the positions described by these coordinates is not obvious.
### Expected behavior
<!-- What did you expect to happen. -->
The expected behavior is that the meaning of these coordinates is consistent.
### Actual behavior
<!-- What actually happened. -->
<!-- Was the output confusing or poorly described? -->
It appears that map.bottom_left_coord is the position of the lower left hand vertex of that pixel, and map.top_right_coord is the position of the upper right hand vertex of that pixel. Therefore the positions that these coordinates refer to are different relative to the pixel. This is not explicit to the user. There may be an implicit assumption that these positions define the extent of the map in the world.
### Steps to Reproduce
<!-- Ideally a code example could be provided so we can run it ourselves. -->
<!-- If you are pasting code, use triple backticks (```) around your code snippet. -->
```
import numpy as np
import sunpy.data.test
import sunpy.map
import os
import astropy.units as u
testpath = sunpy.data.test.rootdir
m = sunpy.map.Map((os.path.join(testpath, 'aia_171_level1.fits')))
m_bl = m.bottom_left_coord
m_tr = m.top_right_coord
x, y = np.meshgrid(*[np.arange(v.value) for v in m.dimensions]) * u.pix
ac = m.pixel_to_world(x, y)
ac_bl = np.nanmin(ac.Tx.value), np.nanmin(ac.Ty.value)
ac_tr = np.nanmax(ac.Tx.value), np.nanmax(ac.Ty.value)
print(m_bl, ac_bl)
print(m_tr, ac_tr)
```
### System Details
<!-- We at least need to know the SunPy version you are using. -->
<!-- We provide a short function in SunPy that will provide some of the below information. -->
<!-- It is sunpy.util.system_info(), this is optional but strongly recommended. -->
==========================================================
SunPy Installation Information
==========================================================
###########
General
###########
System : Darwin
Processor : i386
Arch : 64bit
SunPy : 1.0.0.dev10506
SunPy_git : 10a1729ebc2c9acce33c849fe531e547a359b5fe
OS: Mac OS X 10.13.6 (i386)
###########
Required Libraries
###########
Python: 3.6.8
NumPy: 1.15.4
SciPy: 1.1.0
matplotlib: 3.0.2
Astropy: 3.1.1
Pandas: 0.23.4
###########
Recommended Libraries
###########
beautifulsoup: 4.6.3
PyQt: NOT INSTALLED
Zeep: NOT INSTALLED
Sqlalchemy: 1.2.15
|
1.0
|
Tighter definition of the bottom left and top right coordinate map properties required? - <!-- These comments are hidden when you submit the issue, so you do not need to remove them!
Please be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst
Please be sure to check out our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->
<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue! -->
### Description
<!-- Provide a general description of the bug. -->
Map provides the bottom left and top right coordinates as properties. The issue is that the meaning of the positions described by these coordinates is not obvious.
### Expected behavior
<!-- What did you expect to happen. -->
The expected behavior is that the meaning of these coordinates is consistent.
### Actual behavior
<!-- What actually happened. -->
<!-- Was the output confusing or poorly described? -->
It appears that map.bottom_left_coord is the position of the lower left hand vertex of that pixel, and map.top_right_coord is the position of the upper right hand vertex of that pixel. Therefore the positions that these coordinates refer to are different relative to the pixel. This is not explicit to the user. There may be an implicit assumption that these positions define the extent of the map in the world.
### Steps to Reproduce
<!-- Ideally a code example could be provided so we can run it ourselves. -->
<!-- If you are pasting code, use triple backticks (```) around your code snippet. -->
```
import numpy as np
import sunpy.data.test
import sunpy.map
import os
import astropy.units as u
testpath = sunpy.data.test.rootdir
m = sunpy.map.Map((os.path.join(testpath, 'aia_171_level1.fits')))
m_bl = m.bottom_left_coord
m_tr = m.top_right_coord
x, y = np.meshgrid(*[np.arange(v.value) for v in m.dimensions]) * u.pix
ac = m.pixel_to_world(x, y)
ac_bl = np.nanmin(ac.Tx.value), np.nanmin(ac.Ty.value)
ac_tr = np.nanmax(ac.Tx.value), np.nanmax(ac.Ty.value)
print(m_bl, ac_bl)
print(m_tr, ac_tr)
```
### System Details
<!-- We at least need to know the SunPy version you are using. -->
<!-- We provide a short function in SunPy that will provide some of the below information. -->
<!-- It is sunpy.util.system_info(), this is optional but strongly recommended. -->
==========================================================
SunPy Installation Information
==========================================================
###########
General
###########
System : Darwin
Processor : i386
Arch : 64bit
SunPy : 1.0.0.dev10506
SunPy_git : 10a1729ebc2c9acce33c849fe531e547a359b5fe
OS: Mac OS X 10.13.6 (i386)
###########
Required Libraries
###########
Python: 3.6.8
NumPy: 1.15.4
SciPy: 1.1.0
matplotlib: 3.0.2
Astropy: 3.1.1
Pandas: 0.23.4
###########
Recommended Libraries
###########
beautifulsoup: 4.6.3
PyQt: NOT INSTALLED
Zeep: NOT INSTALLED
Sqlalchemy: 1.2.15
|
priority
|
tighter definition of the bottom left and top right coordinate map properties required this comments are hidden when you submit the issue so you do not need to remove them please be sure to check out our contributing guidelines please be sure to check out our code of conduct please have a search on our github repository to see if a similar issue has already been posted if a similar issue is closed have a quick look to see if you are satisfied by the resolution if not please go ahead and open an issue description map provides the bottom left and top right coordinates as properties the issue is that the meaning of the positions described by these coordinates is not obvious expected behavior the expected behavior is that the meaning of these coordinates is consistent actual behavior it appears that map bottom left coord is the position of the lower left hand vertex of that pixel and map top right coord is the position of the upper right hand vertex of that pixel therefore the positions that these coordinates refer to are different relative to the pixel this is not explicit to the user there may be an implicit assumption that these positions define the extent of the map in the world steps to reproduce import numpy as np import sunpy data test import sunpy map import os import astropy units as u testpath sunpy data test rootdir m sunpy map map os path join testpath aia fits m bl m bottom left coord m tr m top right coord x y np meshgrid u pix ac m pixel to world x y ac bl np nanmin ac tx value np nanmin ac ty value ac tr np nanmax ac tx value np nanmax ac ty value print m bl ac bl print m tr ac tr system details sunpy installation information general system darwin processor arch sunpy sunpy git os mac os x required libraries python numpy scipy matplotlib astropy pandas recommended libraries beautifulsoup pyqt not installed zeep not installed sqlalchemy
| 1
|
24,647
| 2,671,305,574
|
IssuesEvent
|
2015-03-24 04:44:22
|
nickpaventi/culligan-diy
|
https://api.github.com/repos/nickpaventi/culligan-diy
|
opened
|
Global Navigation [Mobile]: DIY Text adjustment needed
|
bug High Priority
|
If the 'Do it yourself' text needs to wrap, it should align with the left edge of the logo

|
1.0
|
Global Navigation [Mobile]: DIY Text adjustment needed - If the 'Do it yourself' text needs to wrap, it should align with the left edge of the logo

|
priority
|
global navigation diy text adjustment needed if the do it yourself text needs to wrap it should align with the left edge of the logo
| 1
|
438,330
| 12,626,476,056
|
IssuesEvent
|
2020-06-14 16:45:56
|
a2000-erp-team/WEBERP
|
https://api.github.com/repos/a2000-erp-team/WEBERP
|
opened
|
SAL-POS-POS-ADD-[Total Invoice amount is $246, tender with cash $250. Change not functioning! Invoice balance with - $4?]
|
ADRIAN High Priority
|

41. Total Invoice amount is $246, tender with cash $250. Change not functioning! Invoice balance with - $4?
|
1.0
|
SAL-POS-POS-ADD-[Total Invoice amount is $246, tender with cash $250. Change not functioning! Invoice balance with - $4?] - 
41. Total Invoice amount is $246, tender with cash $250. Change not functioning! Invoice balance with - $4?
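The expected arithmetic for the tender scenario above is straightforward: $250 cash against a $246 invoice should yield $4 change and a zero invoice balance, never a balance of -$4. A minimal sketch (the `settle` helper is illustrative, not the POS module's actual API):

```python
# Change/balance computation for a cash tender against an invoice total.

def settle(total, tendered):
    """Return (change_due, remaining_balance) for a cash tender."""
    if tendered >= total:
        return tendered - total, 0
    return 0, total - tendered

change, balance = settle(246, 250)
print(change, balance)  # 4 0 -- change is due and the invoice is fully paid
```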
|
priority
|
sal pos pos add total invoice amount is tender with cash change not functioning invoice balance with
| 1
|
301,126
| 9,216,311,350
|
IssuesEvent
|
2019-03-11 07:38:21
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] Clustering - Error in the logs when creating a site named the same as a site that has been deleted
|
bug priority: high
|
## Describe the bug
Error in the logs when creating a site named the same as a site that has been deleted
## To Reproduce
Steps to reproduce the behavior:
1. On server A, setup studio for clustering and create a site using website_editorial, `sitea`
2. On another server, server B, setup studio for clustering and create a site using website_editorial, `siteb`
3. Wait until the two nodes are synced
4. Stop craftercms on one of the servers, say server A.
5. On server B, delete `sitea`, then create a new site named `sitea` using the empty bp
6. Start server A and notice the logs for both node A and node B
## Expected behavior
There should be no errors in the logs
## Screenshots
If applicable, add screenshots to help explain your problem.
## Logs
Here's the log for server A after coming back up: https://gist.github.com/alhambrav/70b734b83b3370f892c9eb562646e323
Notice the errors in the middle of the log, but it looks like it was able to recover
Here's the log for server B after server A comes back up: https://gist.github.com/alhambrav/5a63371522a671181ac8dc8f1429a3f8
Notice the error saying: Error while updating published repo for site sitea
## Specs
### Version
Studio Version Number: 3.1.0-SNAPSHOT-640b78
Build Number: 640b78d072cc507ab75ba36b1d402154c7e52b54
Build Date/Time: 03-05-2019 09:36:35 -0500
### OS
OS X
### Browser
Chrome browser
## Additional context
Add any other context about the problem here.
|
1.0
|
[studio] Clustering - Error in the logs when creating a site named the same as a site that has been deleted - ## Describe the bug
Error in the logs when creating a site named the same as a site that has been deleted
## To Reproduce
Steps to reproduce the behavior:
1. On server A, setup studio for clustering and create a site using website_editorial, `sitea`
2. On another server, server B, setup studio for clustering and create a site using website_editorial, `siteb`
3. Wait until the two nodes are synced
4. Stop craftercms on one of the servers, say server A.
5. On server B, delete `sitea`, then create a new site named `sitea` using the empty bp
6. Start server A and notice the logs for both node A and node B
## Expected behavior
There should be no errors in the logs
## Screenshots
If applicable, add screenshots to help explain your problem.
## Logs
Here's the log for server A after coming back up: https://gist.github.com/alhambrav/70b734b83b3370f892c9eb562646e323
Notice the errors in the middle of the log, but it looks like it was able to recover
Here's the log for server B after server A comes back up: https://gist.github.com/alhambrav/5a63371522a671181ac8dc8f1429a3f8
Notice the error saying: Error while updating published repo for site sitea
## Specs
### Version
Studio Version Number: 3.1.0-SNAPSHOT-640b78
Build Number: 640b78d072cc507ab75ba36b1d402154c7e52b54
Build Date/Time: 03-05-2019 09:36:35 -0500
### OS
OS X
### Browser
Chrome browser
## Additional context
Add any other context about the problem here.
|
priority
|
clustering error in the logs when creating a site named the same as a site that has been deleted describe the bug error in the logs when creating a site named the same as a site that has been deleted to reproduce steps to reproduce the behavior on server a setup studio for clustering and create a site using website editorial sitea on another server server b setup studio for clustering and create a site using website editorial siteb wait until the two nodes are synced stop craftercms on one of the servers say server a on server b delete sitea then create a new site named sitea using the empty bp start server a and notice the logs for both node a and node b expected behavior there should be no errors in the logs screenshots if applicable add screenshots to help explain your problem logs here s the log for server a after coming back up notice the errors in the middle of the log but it looks like it was able to recover here s the log for server b after server a comes back up notice the error saying error while updating published repo for site sitea specs version studio version number snapshot build number build date time os os x browser chrome browser additional context add any other context about the problem here
| 1
|
515,498
| 14,964,297,282
|
IssuesEvent
|
2021-01-27 11:46:23
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Add 3D Plugin to contexts
|
Accepted C169-Rennes-Métropole-2020-GeOrchestra2 New Feature Priority: High
|
## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
This issue collects all the activities involved for the implementation of the 3D MS's plugin for geOrchestra.
https://github.com/georchestra/mapstore2-georchestra/issues/248
## Acceptance criteria
<!-- Describe here the list of acceptance criteria -->
- [ ] **Enable 3D plugin:** inclusion of the 3D plugin in geOrchestra MS project
- [ ] **3D plugin config:** ability to configure the 3D plugin inside the application context wizard including the configuration properties to configure the DEM layers as terrain model directly inside the plugin instead of in the localConfig
- [ ] **Ability to switch in 3D mode:** ability to switch in 3D mode (if the 3D plugin has been enabled by the administrator for the app context) having the DEM layer available in the Cesium map as terrain model
## Other useful information
this can be done in a subsequent phase
- [ ] **Ability to configure one DEM layer:** publishing data for terrain model in GeoServer is allowed using the DDS/BIL format. MapStore supports it as terrain model (the documentation [here](https://mapstore.readthedocs.io/en/latest/developer-guide/maps-configuration/#special-case-the-elevation-layer)) but it is needed to improve the current configuration tier especially to make it available in the app context wizard
|
1.0
|
Add 3D Plugin to contexts - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
This issue collects all the activities involved for the implementation of the 3D MS's plugin for geOrchestra.
https://github.com/georchestra/mapstore2-georchestra/issues/248
## Acceptance criteria
<!-- Describe here the list of acceptance criteria -->
- [ ] **Enable 3D plugin:** inclusion of the 3D plugin in geOrchestra MS project
- [ ] **3D plugin config:** ability to configure the 3D plugin inside the application context wizard including the configuration properties to configure the DEM layers as terrain model directly inside the plugin instead of in the localConfig
- [ ] **Ability to switch in 3D mode:** ability to switch in 3D mode (if the 3D plugin has been enabled by the administrator for the app context) having the DEM layer available in the Cesium map as terrain model
## Other useful information
this can be done in a subsequent phase
- [ ] **Ability to configure one DEM layer:** publishing data for terrain model in GeoServer is allowed using the DDS/BIL format. MapStore supports it as terrain model (the documentation [here](https://mapstore.readthedocs.io/en/latest/developer-guide/maps-configuration/#special-case-the-elevation-layer)) but it is needed to improve the current configuration tier especially to make it available in the app context wizard
|
priority
|
add plugin to contexts description this issue collects all the activities involved for the implementation of the ms s plugin for georchestra acceptance criteria enable plugin inclusion of the plugin in georchestra ms project plugin config ability to configure the plugin inside the application context wizard including the configuration properties to configure the dem layers as terrain model directly inside the plugin instead of in the localconfig ability to switch in mode ability to switch in mode if the plugin has been enabled by the administrator for the app context having the dem layer available in the cesium map as terrain model other useful information this can be done in a subsequent phase ability to configure one dem layer publishing data for terrain model in geoserver is allowed using the dds bil format mapstore supports it as terrain model the documentation but it is needed to improve the current configuration tier especially to make it available in the app context wizard
| 1
|
146,125
| 5,611,811,812
|
IssuesEvent
|
2017-04-03 00:46:25
|
zmaster587/AdvancedRocketry
|
https://api.github.com/repos/zmaster587/AdvancedRocketry
|
closed
|
Fatal Error Connecting
|
bug High Priority
|
Updating existing world from AR 1.1.1 to 1.1.2 (LibVulpes 0.2.0 to 0.2.1)
Get a fatal error when connecting to the server. Error begins at line 18112 in client log. Full logs attached:
[client latest.txt](https://github.com/zmaster587/AdvancedRocketry/files/888694/client.latest.txt)
[server latest.txt](https://github.com/zmaster587/AdvancedRocketry/files/888695/server.latest.txt)
```javascript
[16:54:38] [Netty Client IO #1/ERROR]: There was a critical exception handling a packet on channel libVulpes
io.netty.handler.codec.DecoderException: java.lang.NullPointerException
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:169) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at net.minecraftforge.fml.common.network.internal.FMLProxyPacket.func_148833_a(FMLProxyPacket.java:100) [FMLProxyPacket.class:?]
at net.minecraft.network.NetworkManager.channelRead0(NetworkManager.java:149) [eo.class:?]
at net.minecraft.network.NetworkManager.channelRead0(NetworkManager.java:51) [eo.class:?]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.handleClientSideCustomPacket(NetworkDispatcher.java:407) [NetworkDispatcher.class:?]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.channelRead0(NetworkDispatcher.java:273) [NetworkDispatcher.class:?]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.channelRead0(NetworkDispatcher.java:73) [NetworkDispatcher.class:?]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.timeout.ReadTimeoutHandler.channelRead(ReadTimeoutHandler.java:150) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_25]
Caused by: java.lang.NullPointerException
at zmaster587.advancedRocketry.dimension.DimensionProperties.setStar(DimensionProperties.java:317) ~[DimensionProperties.class:?]
at zmaster587.advancedRocketry.dimension.DimensionProperties.readFromNBT(DimensionProperties.java:1074) ~[DimensionProperties.class:?]
at zmaster587.advancedRocketry.network.PacketDimInfo.readClient(PacketDimInfo.java:75) ~[PacketDimInfo.class:?]
at zmaster587.libVulpes.network.PacketHandler$Codec.decodeInto(PacketHandler.java:210) ~[PacketHandler$Codec.class:unspecified]
at zmaster587.libVulpes.network.PacketHandler$Codec.decodeInto(PacketHandler.java:192) ~[PacketHandler$Codec.class:unspecified]
at net.minecraftforge.fml.common.network.FMLIndexedMessageToMessageCodec.decode(FMLIndexedMessageToMessageCodec.java:103) ~[FMLIndexedMessageToMessageCodec.class:?]
at net.minecraftforge.fml.common.network.FMLIndexedMessageToMessageCodec.decode(FMLIndexedMessageToMessageCodec.java:40) ~[FMLIndexedMessageToMessageCodec.class:?]
at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
... 40 more
```
|
1.0
|
Fatal Error Connecting - Updating existing world from AR 1.1.1 to 1.1.2 (LibVulpes 0.2.0 to 0.2.1)
Get a fatal error when connecting to the server. Error begins at line 18112 in client log. Full logs attached:
[client latest.txt](https://github.com/zmaster587/AdvancedRocketry/files/888694/client.latest.txt)
[server latest.txt](https://github.com/zmaster587/AdvancedRocketry/files/888695/server.latest.txt)
```javascript
[16:54:38] [Netty Client IO #1/ERROR]: There was a critical exception handling a packet on channel libVulpes
io.netty.handler.codec.DecoderException: java.lang.NullPointerException
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:169) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at net.minecraftforge.fml.common.network.internal.FMLProxyPacket.func_148833_a(FMLProxyPacket.java:100) [FMLProxyPacket.class:?]
at net.minecraft.network.NetworkManager.channelRead0(NetworkManager.java:149) [eo.class:?]
at net.minecraft.network.NetworkManager.channelRead0(NetworkManager.java:51) [eo.class:?]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.handleClientSideCustomPacket(NetworkDispatcher.java:407) [NetworkDispatcher.class:?]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.channelRead0(NetworkDispatcher.java:273) [NetworkDispatcher.class:?]
at net.minecraftforge.fml.common.network.handshake.NetworkDispatcher.channelRead0(NetworkDispatcher.java:73) [NetworkDispatcher.class:?]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.timeout.ReadTimeoutHandler.channelRead(ReadTimeoutHandler.java:150) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_25]
Caused by: java.lang.NullPointerException
at zmaster587.advancedRocketry.dimension.DimensionProperties.setStar(DimensionProperties.java:317) ~[DimensionProperties.class:?]
at zmaster587.advancedRocketry.dimension.DimensionProperties.readFromNBT(DimensionProperties.java:1074) ~[DimensionProperties.class:?]
at zmaster587.advancedRocketry.network.PacketDimInfo.readClient(PacketDimInfo.java:75) ~[PacketDimInfo.class:?]
at zmaster587.libVulpes.network.PacketHandler$Codec.decodeInto(PacketHandler.java:210) ~[PacketHandler$Codec.class:unspecified]
at zmaster587.libVulpes.network.PacketHandler$Codec.decodeInto(PacketHandler.java:192) ~[PacketHandler$Codec.class:unspecified]
at net.minecraftforge.fml.common.network.FMLIndexedMessageToMessageCodec.decode(FMLIndexedMessageToMessageCodec.java:103) ~[FMLIndexedMessageToMessageCodec.class:?]
at net.minecraftforge.fml.common.network.FMLIndexedMessageToMessageCodec.decode(FMLIndexedMessageToMessageCodec.java:40) ~[FMLIndexedMessageToMessageCodec.class:?]
at io.netty.handler.codec.MessageToMessageCodec$2.decode(MessageToMessageCodec.java:81) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
... 40 more
```
|
priority
|
fatal error connecting updating existing world from ar to libvulpes to get a fatal error when connecting to the server error begins at line in client log full logs attached javascript there was a critical exception handling a packet on channel libvulpes io netty handler codec decoderexception java lang nullpointerexception at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java at io netty handler codec messagetomessagecodec channelread messagetomessagecodec java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel embedded embeddedchannel writeinbound embeddedchannel java at net minecraftforge fml common network internal fmlproxypacket func a fmlproxypacket java at net minecraft network networkmanager networkmanager java at net minecraft network networkmanager networkmanager java at io netty channel simplechannelinboundhandler channelread simplechannelinboundhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at net minecraftforge fml common network handshake networkdispatcher handleclientsidecustompacket networkdispatcher java at net minecraftforge fml common network handshake networkdispatcher networkdispatcher java at net minecraftforge fml common network handshake networkdispatcher networkdispatcher java at io netty channel simplechannelinboundhandler channelread simplechannelinboundhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty 
handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler timeout readtimeouthandler channelread readtimeouthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java 
at java lang thread run thread java caused by java lang nullpointerexception at advancedrocketry dimension dimensionproperties setstar dimensionproperties java at advancedrocketry dimension dimensionproperties readfromnbt dimensionproperties java at advancedrocketry network packetdiminfo readclient packetdiminfo java at libvulpes network packethandler codec decodeinto packethandler java at libvulpes network packethandler codec decodeinto packethandler java at net minecraftforge fml common network fmlindexedmessagetomessagecodec decode fmlindexedmessagetomessagecodec java at net minecraftforge fml common network fmlindexedmessagetomessagecodec decode fmlindexedmessagetomessagecodec java at io netty handler codec messagetomessagecodec decode messagetomessagecodec java at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java more
| 1
|
154,731
| 5,930,742,749
|
IssuesEvent
|
2017-05-24 02:50:16
|
MikeJeffers/MSCD-Thesis
|
https://api.github.com/repos/MikeJeffers/MSCD-Thesis
|
closed
|
NN: Don't learn on repeat moves
|
HIGH PRIORITY ML Optimization
|
Detect and prevent learning iterations on repeat moves for AI and user
|
1.0
|
NN: Don't learn on repeat moves - Detect and prevent learning iterations on repeat moves for AI and user
|
priority
|
nn don t learn on repeat moves detect and prevent learning iterations on repeat moves for ai and user
| 1
|
107,628
| 4,312,160,468
|
IssuesEvent
|
2016-07-22 03:13:39
|
idevelopment/Hcrm
|
https://api.github.com/repos/idevelopment/Hcrm
|
closed
|
Set flash messages to translation files
|
bug enhancement High Priority
|
All the flash message's are now coded in english. We need to set it to translation files.
Relates #100
|
1.0
|
Set flash messages to translation files - All the flash message's are now coded in english. We need to set it to translation files.
Relates #100
|
priority
|
set flash messages to translation files all the flash message s are now coded in english we need to set it to translation files relates
| 1
|
596,115
| 18,097,364,113
|
IssuesEvent
|
2021-09-22 10:31:09
|
alan-turing-institute/distr6
|
https://api.github.com/repos/alan-turing-institute/distr6
|
closed
|
Abstract ParameterSet to {param6}
|
high priority optimization
|
{param6} is significantly faster than `ParameterSet` and {paradox} by several orders of magnitude.
Bottlenecks in {distr6} will not be fixed until this abstraction.
|
1.0
|
Abstract ParameterSet to {param6} - {param6} is significantly faster than `ParameterSet` and {paradox} by several orders of magnitude.
Bottlenecks in {distr6} will not be fixed until this abstraction.
|
priority
|
abstract parameterset to is significantly faster than parameterset and paradox by several orders of magnitude bottlenecks in will not be fixed until this abstraction
| 1
|
463,247
| 13,262,178,301
|
IssuesEvent
|
2020-08-20 21:15:15
|
Arzov/arv-api
|
https://api.github.com/repos/Arzov/arv-api
|
closed
|
Refactor AWS backend
|
backend high priority research
|
## Description
Document, test, and reorganize the backend code in AWS. Automate backend deploy and testing with AWS SAM.
## Task list
- [x] AWS SAM tests.
- [x] Document code in AWS Lambda, Appsync, and DynamoDB.
- [x] Code to better standards.
- [x] Rename AWS resources with better nomenclature.
- [x] Make the corresponding changes in the frontend.
|
1.0
|
Refactor AWS backend - ## Description
Document, test, and reorganize the backend code in AWS. Automate backend deploy and testing with AWS SAM.
## Task list
- [x] AWS SAM tests.
- [x] Document code in AWS Lambda, Appsync, and DynamoDB.
- [x] Code to better standards.
- [x] Rename AWS resources with better nomenclature.
- [x] Make the corresponding changes in the frontend.
|
priority
|
refactor aws backend description document test and reorganize the backend code in aws automate deploy and testing of the backend with aws sam task list aws sam tests document code in aws lambda appsync and dynamodb code to better standards rename resources in aws with better nomenclature make the corresponding changes in frontend
| 1
|
773,499
| 27,159,695,048
|
IssuesEvent
|
2023-02-17 10:48:27
|
FastcampusMini/mini-project
|
https://api.github.com/repos/FastcampusMini/mini-project
|
closed
|
Purchase cancellation feature
|
For: Backend Priority: High Status: Completed Type: Feature
|
## Title
Apply the changed Service according to the changed Entity structure
Apply Cascade
## Description
Change Service
Change Repository
## References
- [Reference blog](https://velog.io/@max9106/JPA%EC%97%94%ED%8B%B0%ED%8B%B0-%EC%83%81%ED%83%9C-Cascade)
|
1.0
|
Purchase cancellation feature - ## Title
Apply the changed Service according to the changed Entity structure
Apply Cascade
## Description
Change Service
Change Repository
## References
- [Reference blog](https://velog.io/@max9106/JPA%EC%97%94%ED%8B%B0%ED%8B%B0-%EC%83%81%ED%83%9C-Cascade)
|
priority
|
purchase cancellation feature title apply the changed service according to the changed entity structure apply cascade description change service change repository references
| 1
|
79,048
| 3,520,016,124
|
IssuesEvent
|
2016-01-12 19:05:32
|
IQSS/dataverse
|
https://api.github.com/repos/IQSS/dataverse
|
closed
|
Clean up Solr field names and attributes
|
Component: Search/Browse Priority: High Status: QA
|
Before we ship Dataverse 4.0 we need to clean up the Solr field names and attributes that are *not* driven by dataset fields from the database.
We'll make them look nicer by removing "_en", "_s", and other dynamic field suffixes. For example, "dvName_en" will become just "dvName".
We'll also want to make sure the fields that are for internal use (such as "entityid") are indexed=false so they can't be searched on.
This will involve everyone updating their Solr schema.xml.
|
1.0
|
Clean up Solr field names and attributes - Before we ship Dataverse 4.0 we need to clean up the Solr field names and attributes that are *not* driven by dataset fields from the database.
We'll make them look nicer by removing "_en", "_s", and other dynamic field suffixes. For example, "dvName_en" will become just "dvName".
We'll also want to make sure the fields that are for internal use (such as "entityid") are indexed=false so they can't be searched on.
This will involve everyone updating their Solr schema.xml.
|
priority
|
clean up solr field names and attributes before we ship dataverse we need to clean up the solr field names and attributes that are not driven by dataset fields from the database we ll make them look nicer by removing en s and other dynamic field suffixes for example dvname en will become just dvname we ll also want to make sure the fields are that are for internal use such as entityid are indexed false so they can t be searched on this will involve everyone updating their solr schema xml
| 1
|
653,094
| 21,571,977,757
|
IssuesEvent
|
2022-05-02 09:19:23
|
bitsongofficial/sinfonia-ui
|
https://api.github.com/repos/bitsongofficial/sinfonia-ui
|
closed
|
Use "My" instead of "Your"
|
High Priority Review
|
Give a look at all the texts and substitute **Your** into **My**.
E.g., **Your pools** into **My pools**.
|
1.0
|
Use "My" instead of "Your" - Give a look at all the texts and substitute **Your** into **My**.
E.g., **Your pools** into **My pools**.
|
priority
|
use my instead of your give a look at all the texts and substitute your into my e g your pools into my pools
| 1
|
239,704
| 7,799,933,904
|
IssuesEvent
|
2018-06-09 02:19:15
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0006264:
vacation templates administration
|
Feature Request Felamimail Mantis high priority
|
**Reported by pschuele on 16 Apr 2012 11:19**
vacation templates administration
- put vacation template txt files into vfs and allow access via webdav
- admins can change them / users only read
- create container in init
- allow custom templates (for system accounts)
- placeholders:
* start/end (with locale)
* contact person
|
1.0
|
0006264:
vacation templates administration - **Reported by pschuele on 16 Apr 2012 11:19**
vacation templates administration
- put vacation template txt files into vfs and allow access via webdav
- admins can change them / users only read
- create container in init
- allow custom templates (for system accounts)
- placeholders:
* start/end (with locale)
* contact person
|
priority
|
vacation templates administration reported by pschuele on apr vacation templates administration put vacation template txt files into vfs and allow access via webdav admins can change them users only read create container in init allow custom templates for system accounts placeholders start end with locale contact person
| 1
|
626,653
| 19,830,649,493
|
IssuesEvent
|
2022-01-20 11:36:36
|
amosproj/amos2021ws05-fin-prod-port-quick-check
|
https://api.github.com/repos/amosproj/amos2021ws05-fin-prod-port-quick-check
|
closed
|
Product area page - Create Products
|
est. size: 2 type: feature real size: 2 priority: high frontend
|
## User story
1. As a user
2. I want to create products
3. So that i can rate and evaluate the created products afterwards
## Acceptance criteria
* button for creating products is implemented
## Definition of done
* Approved by product owner
* Tests have been written (e.g. Unit test, integration test etc..)
* Code has been peer reviewed and approved
* No syntax or runtime errors emerged
* Code has to be included in the release candidate
|
1.0
|
Product area page - Create Products - ## User story
1. As a user
2. I want to create products
3. So that i can rate and evaluate the created products afterwards
## Acceptance criteria
* button for creating products is implemented
## Definition of done
* Approved by product owner
* Tests have been written (e.g. Unit test, integration test etc..)
* Code has been peer reviewed and approved
* No syntax or runtime errors emerged
* Code has to be included in the release candidate
|
priority
|
product area page create products user story as a user i want to create products so that i can rate and evaluate the created products afterwards acceptance criteria button for creating products is implemented definition of done approved by product owner tests have been written e g unit test integration test etc code has been peer reviewed and approved no syntax or runtime errors emerged code has to be included in the release candidate
| 1
|
785,341
| 27,610,135,226
|
IssuesEvent
|
2023-03-09 15:28:01
|
AY2223S2-CS2113-T11-4/tp
|
https://api.github.com/repos/AY2223S2-CS2113-T11-4/tp
|
closed
|
Implement task deadlines
|
priority.High type.Story
|
As a user, I can add a time/deadline to a task, so that I can record when a task needs to be done.
|
1.0
|
Implement task deadlines - As a user, I can add a time/deadline to a task, so that I can record when a task needs to be done.
|
priority
|
implement task deadlines as a user i can add a time deadline to a task so that i can record when a task needs to be done
| 1
|
721,877
| 24,841,565,339
|
IssuesEvent
|
2022-10-26 13:06:50
|
zitadel/zitadel
|
https://api.github.com/repos/zitadel/zitadel
|
closed
|
Delete instance doesn't remove loginnames from projection
|
type: bug category: backend priority: high state: triage
|
**Describe the bug**
If I remove an instance the projections of the loginnames are not cleared.
- [ ] Remove loginnames
- [ ] Check if all other instances are cleared
**To Reproduce**
Steps to reproduce the behavior:
1. Call remove instance
2. loginnames still in projection
**Expected behavior**
All projections should be cleared
|
1.0
|
Delete instance doesn't remove loginnames from projection - **Describe the bug**
If I remove an instance the projections of the loginnames are not cleared.
- [ ] Remove loginnames
- [ ] Check if all other instances are cleared
**To Reproduce**
Steps to reproduce the behavior:
1. Call remove instance
2. loginnames still in projection
**Expected behavior**
All projections should be cleared
|
priority
|
delete instance doesn t remove loginnames from projection describe the bug if i remove an instance the projections of the loginnames are not cleared remove loginnames check if all other instances are cleared to reproduce steps to reproduce the behavior call remove instance loginnames still in projection expected behavior all projections should be cleared
| 1
|
594,053
| 18,022,745,618
|
IssuesEvent
|
2021-09-16 21:51:16
|
zebscripts/AFK-Daily
|
https://api.github.com/repos/zebscripts/AFK-Daily
|
closed
|
Add GitHub Wiki Action to autoupdate wiki based on .wiki folder
|
Type: Feature request :green_heart: Priority: High :fire:
|
> https://github.com/marketplace/actions/github-wiki-action
After merging the PR that touch the Wiki #91
|
1.0
|
Add GitHub Wiki Action to autoupdate wiki based on .wiki folder - > https://github.com/marketplace/actions/github-wiki-action
After merging the PR that touch the Wiki #91
|
priority
|
add github wiki action to autoupdate wiki based on wiki folder after merging the pr that touch the wiki
| 1
|
2,717
| 2,532,798,419
|
IssuesEvent
|
2015-01-23 18:34:21
|
thompsct/SemGen
|
https://api.github.com/repos/thompsct/SemGen
|
opened
|
Replace ontology term button not working
|
bug high priority
|
Nothing happens when I click the "Replace a reference ontology term with another" button in the Annotator.
|
1.0
|
Replace ontology term button not working - Nothing happens when I click the "Replace a reference ontology term with another" button in the Annotator.
|
priority
|
replace ontology term button not working nothing happens when i click the replace a reference ontology term with another button in the annotator
| 1
|
126,179
| 4,973,695,949
|
IssuesEvent
|
2016-12-06 02:15:56
|
yairodriguez/yairodriguez.github.io
|
https://api.github.com/repos/yairodriguez/yairodriguez.github.io
|
opened
|
Landing Page
|
[priority] high [status] accepted [type] feature
|
### Description
Create the landing page for **yairodriguez.com**
---
### Issue Checklist
- [ ] Write the needed tests.
- [ ] Create the markup for the component.
- [ ] Add the correct styles.
- [ ] Use the *lint* to clean `CSS` code.
- [ ] Document all things.
---
### Assignees
- [ ] Final assign @yairodriguez
|
1.0
|
Landing Page - ### Description
Create the landing page for **yairodriguez.com**
---
### Issue Checklist
- [ ] Write the needed tests.
- [ ] Create the markup for the component.
- [ ] Add the correct styles.
- [ ] Use the *lint* to clean `CSS` code.
- [ ] Document all things.
---
### Assignees
- [ ] Final assign @yairodriguez
|
priority
|
landing page description create the landing page for yairodriguez com issue checklist write the needed tests create the markup for the component add the correct styles use the lint to clean css code document all things assignees final assign yairodriguez
| 1
|
110,203
| 4,423,259,010
|
IssuesEvent
|
2016-08-16 07:51:05
|
learnweb/moodle-mod_ratingallocate
|
https://api.github.com/repos/learnweb/moodle-mod_ratingallocate
|
closed
|
Redesign of Choice Definition
|
Effort: High Priority: Very High
|
The choice definition is currently part of the activity settings. For a better structure, the definition of the choices should be extracted from the activity settings page and get a separate page.
|
1.0
|
Redesign of Choice Definition - The choice definition is currently part of the activity settings. For a better structure, the definition of the choices should be extracted from the activity settings page and get a separate page.
|
priority
|
redesign of choice definition the choice definition is currently part of the activity settings for a better structure the definition of the choices should be extracted from the activity settings page and get a separate page
| 1
|
228,985
| 7,570,088,677
|
IssuesEvent
|
2018-04-23 07:51:12
|
Mandiklopper/People-Connect
|
https://api.github.com/repos/Mandiklopper/People-Connect
|
closed
|
Content of sanction/ disciplinary notifications
|
High Priority
|
[Content of Notification of Suspension Pending Determination of Case.docx](https://github.com/Mandiklopper/UBA-HR-Queries-Issues/files/1457821/Content.of.Notification.of.Suspension.Pending.Determination.of.Case.docx)
The content of notification for disciplinary cases and sanctions should be the same as the letter templates earlier provided to the team. The content of the email notification for case against A20533 (suspension pending investigation) logged yesterday did not have all the required details. I have attached a sample letter for suspension pending investigation. The areas highlighted in red are permanent and uneditable while the other parts will pull from the details entered into the system at time of logging the case.
Also looking at the wrong content going to the employee with letter of displeasure.
Please implement all letter templates as notifications for all other sanctions.
|
1.0
|
Content of sanction/ disciplinary notifications - [Content of Notification of Suspension Pending Determination of Case.docx](https://github.com/Mandiklopper/UBA-HR-Queries-Issues/files/1457821/Content.of.Notification.of.Suspension.Pending.Determination.of.Case.docx)
The content of notification for disciplinary cases and sanctions should be the same as the letter templates earlier provided to the team. The content of the email notification for case against A20533 (suspension pending investigation) logged yesterday did not have all the required details. I have attached a sample letter for suspension pending investigation. The areas highlighted in red are permanent and uneditable while the other parts will pull from the details entered into the system at time of logging the case.
Also looking at the wrong content going to the employee with letter of displeasure.
Please implement all letter templates as notifications for all other sanctions.
|
priority
|
content of sanction disciplinary notifications the content of notification for disciplinary cases and sanctions should be the same as the letter templates earlier provided to the team the content of the email notification for case against suspension pending investigation logged yesterday did not have all the required details i have attached a sample letter for suspension pending investigation the areas highlighted in red are permanent and uneditable while the other parts will pull from the details entered into the system at time of logging the case also looking at the wrong content going to the employee with letter of displeasure please implement all letter templates as notifications for all other sanctions
| 1
|
101,698
| 4,128,688,898
|
IssuesEvent
|
2016-06-10 07:49:47
|
democratic-coin/dcoin-go
|
https://api.github.com/repos/democratic-coin/dcoin-go
|
opened
|
Not downloaded Keys
|
Critical bug High priority
|
Не скачиваются ключи к реферальной программе, открывается главная страница.
|
1.0
|
Not downloaded Keys - Не скачиваются ключи к реферальной программе, открывается главная страница.
|
priority
|
not downloaded keys не скачиваются ключи к реферальной программе открывается главная страница
| 1
|
756,791
| 26,485,868,240
|
IssuesEvent
|
2023-01-17 17:58:08
|
pfmc-assessments/PacFIN.Utilities
|
https://api.github.com/repos/pfmc-assessments/PacFIN.Utilities
|
closed
|
Update the sql.bds function to only pull from current data tables
|
type: bug :bug: topic: database priority: high
|
**Issue**
The existing sql code pull data from both the pacfin_marts.COMPREHENSIVE_BDS_COMM and pacfin.bds_sample_odfw tables with the ODFW table providing the number of unknown sex samples and weights. However, the ODFW is no longer being maintained
**Potential Fix**
Update the sql.bds function to only use the pacfin_marts.COMPREHENSIVE_BDS_COMM to retrieve bds data. Processing functions in the package will need to be checked to ensure that removing the ODFW UNK_NUM and UNK_WT columns will not impact calculations.
|
1.0
|
Update the sql.bds function to only pull from current data tables - **Issue**
The existing sql code pull data from both the pacfin_marts.COMPREHENSIVE_BDS_COMM and pacfin.bds_sample_odfw tables with the ODFW table providing the number of unknown sex samples and weights. However, the ODFW is no longer being maintained
**Potential Fix**
Update the sql.bds function to only use the pacfin_marts.COMPREHENSIVE_BDS_COMM to retrieve bds data. Processing functions in the package will need to be checked to ensure that removing the ODFW UNK_NUM and UNK_WT columns will not impact calculations.
|
priority
|
update the sql bds function to only pull from current data tables issue the existing sql code pull data from both the pacfin marts comprehensive bds comm and pacfin bds sample odfw tables with the odfw table providing the number of unknown sex samples and weights however the odfw is no longer being maintained potential fix update the sql bds function to only use the pacfin marts comprehensive bds comm to retrieve bds data processing functions in the package will need to be checked to ensure that removing the odfw unk num and unk wt columns will not impact calculations
| 1
|
390,229
| 11,540,566,510
|
IssuesEvent
|
2020-02-18 00:35:20
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
reopened
|
[0.9.0 staging-1355] Wetland ans Rainforest overlapping eachother
|
Priority: High Status: Reopen
|
That is what we have on the fresh worlds.


Wetland is overlapping with rainforest. Ceiba and other plants are feeling quite bad there.
We need to find a place for the wetlands somewhere near rivers and lakes. And leave our gorgeous rainforests for themselves.
|
1.0
|
[0.9.0 staging-1355] Wetland ans Rainforest overlapping eachother - That is what we have on the fresh worlds.


Wetland is overlapping with rainforest. Ceiba and other plants are feeling quite bad there.
We need to find a place for the wetlands somewhere near rivers and lakes. And leave our gorgeous rainforests for themselves.
|
priority
|
wetland ans rainforest overlapping eachother that is what we have on the fresh worlds wetland is overlapping with rainforest ceiba and other plants are feeling quite bad there we need to find a place for the wetlands somewhere near rivers and lakes and leave our gorgeous rainforests for themselves
| 1
|
293
| 2,495,023,769
|
IssuesEvent
|
2015-01-06 05:28:23
|
wayneyu/merapp
|
https://api.github.com/repos/wayneyu/merapp
|
opened
|
Write unit and functional tests
|
high_priority
|
SInce we have implemented so many features and are adding more every day, it's time to start writing unit and functional tests before the project gets too big to test by hand.
|
1.0
|
Write unit and functional tests - SInce we have implemented so many features and are adding more every day, it's time to start writing unit and functional tests before the project gets too big to test by hand.
|
priority
|
write unit and functional tests since we have implemented so many features and are adding more every day it s time to start writing unit and functional tests before the project gets too big to test by hand
| 1
|
821,012
| 30,799,882,743
|
IssuesEvent
|
2023-07-31 23:53:25
|
cancervariants/variation-normalization
|
https://api.github.com/repos/cancervariants/variation-normalization
|
opened
|
Improve free text query performance
|
performance priority:high
|
Specifically looking at protein free text queries (E.g. BRAF V600E) for the manuscript analysis. I ran for ~420s locally and could only normalize ~3 free text queries per second. I tried making some work in the [draft PR](https://github.com/GenomicMedLab/cool-seq-tool/pull/172) in cool-seq-tool, but tests weren't passing (BRAF V600E and BRAF V512E did not return the same concept).
|
1.0
|
Improve free text query performance - Specifically looking at protein free text queries (E.g. BRAF V600E) for the manuscript analysis. I ran for ~420s locally and could only normalize ~3 free text queries per second. I tried making some work in the [draft PR](https://github.com/GenomicMedLab/cool-seq-tool/pull/172) in cool-seq-tool, but tests weren't passing (BRAF V600E and BRAF V512E did not return the same concept).
|
priority
|
improve free text query performance specifically looking at protein free text queries e g braf for the manuscript analysis i ran for locally and could only normalize free text queries per second i tried making some work in the in cool seq tool but tests weren t passing braf and braf did not return the same concept
| 1
|