Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24,902 | 2,674,557,099 | IssuesEvent | 2015-03-25 04:06:01 | bowdidge/switchlist | https://api.github.com/repos/bowdidge/switchlist | closed | Mark industry track capacity, and don't send cars to a track if there's no space. | auto-migrated Milestone-Release0.9 Priority-High Type-Enhancement | ```
Currently, SwitchList doesn't have any idea about the capacity of each
industry. If the rates for cargos is set wrong so too many cars are sent to
the industry, there might not be space for new cars.
SwitchList ought to keep track of the length of each car and the space at each
industry, and avoid directing cars to an industry if there is no space.
This will require changing the UI and file database to store car sizes and
capacity, add switches so that users can ignore the feature if necessary, and
change the car assignment algorithm to do the right thing when a track
overflows.
The workaround is to match the cargo rates to the track size better; maybe
there's some simple UI or hints to help users understand what a given "cars per
week" means in terms of the total cars likely to arrive on the track.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 23 Apr 2011 at 5:37 | 1.0 | Mark industry track capacity, and don't send cars to a track if there's no space. - ```
Currently, SwitchList doesn't have any idea about the capacity of each
industry. If the rates for cargos is set wrong so too many cars are sent to
the industry, there might not be space for new cars.
SwitchList ought to keep track of the length of each car and the space at each
industry, and avoid directing cars to an industry if there is no space.
This will require changing the UI and file database to store car sizes and
capacity, add switches so that users can ignore the feature if necessary, and
change the car assignment algorithm to do the right thing when a track
overflows.
The workaround is to match the cargo rates to the track size better; maybe
there's some simple UI or hints to help users understand what a given "cars per
week" means in terms of the total cars likely to arrive on the track.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 23 Apr 2011 at 5:37 | priority | mark industry track capacity and don t send cars to a track if there s no space currently switchlist doesn t have any idea about the capacity of each industry if the rates for cargos is set wrong so too many cars are sent to the industry there might not be space for new cars switchlist ought to keep track of the length of each car and the space at each industry and avoid directing cars to an industry if there is no space this will require changing the ui and file database to store car sizes and capacity add switches so that users can ignore the feature if necessary and change the car assignment algorithm to do the right thing when a track overflows the workaround is to match the cargo rates to the track size better maybe there s some simple ui or hints to help users understand what a given cars per week means in terms of the total cars likely to arrive on the track original issue reported on code google com by rwbowdi gmail com on apr at | 1 |
681,265 | 23,303,209,083 | IssuesEvent | 2022-08-07 16:32:16 | OpenCubicChunks/CubicChunks2 | https://api.github.com/repos/OpenCubicChunks/CubicChunks2 | closed | Implement a real HeightmapStorage implementation | High priority | This should also come with tests that verify nothing changed on save/load | 1.0 | Implement a real HeightmapStorage implementation - This should also come with tests that verify nothing changed on save/load | priority | implement a real heightmapstorage implementation this should also come with tests that verify nothing changed on save load | 1 |
108,559 | 4,347,353,964 | IssuesEvent | 2016-07-29 19:12:01 | codebuddiesdotorg/cb-v2-scratch | https://api.github.com/repos/codebuddiesdotorg/cb-v2-scratch | closed | On the profile page, hangouts that the user did not participate in show up on the lefthand side. | bug help wanted high-priority ready | The # of hangouts attended count is correct, but ALL the hangouts show up below.
Check: server/hangouts/methods.js | 1.0 | On the profile page, hangouts that the user did not participate in show up on the lefthand side. - The # of hangouts attended count is correct, but ALL the hangouts show up below.
Check: server/hangouts/methods.js | priority | on the profile page hangouts that the user did not participate in show up on the lefthand side the of hangouts attended count is correct but all the hangouts show up below check server hangouts methods js | 1 |
128,488 | 5,065,597,947 | IssuesEvent | 2016-12-23 13:04:48 | DiCarloLab-Delft/PycQED_py3 | https://api.github.com/repos/DiCarloLab-Delft/PycQED_py3 | opened | Kernel object robustness and simplification | enhancement priority: must/high | The current kernel object handles predistortions. However, there are several other points that also handle distortions leading to two problems.
1. It is unclear what information is stored where, leading to human mistakes
2. The different allocation of information leads to inefficiencies in calculating the convolutions. This leads to a ~10-20s of unneeded convolution time everytime we upload a sequence.
The following points are improvements to address these issues. Note I do not intend to do all of these at once.
- calculate only if parameters changed
- Add RT corrections to the kernel object.
- Rename distortions class
- path for saving the kernels should not be the notebook directory
- test saving and loading
- Change parameters to SI units
- Automatically pick order of distortions to speed
up convolutions
- Add shortening of kernel if possible
- Clear functions to interact with the distortions (instead of the 4 snippered ones there are now). | 1.0 | Kernel object robustness and simplification - The current kernel object handles predistortions. However, there are several other points that also handle distortions leading to two problems.
1. It is unclear what information is stored where, leading to human mistakes
2. The different allocation of information leads to inefficiencies in calculating the convolutions. This leads to a ~10-20s of unneeded convolution time everytime we upload a sequence.
The following points are improvements to address these issues. Note I do not intend to do all of these at once.
- calculate only if parameters changed
- Add RT corrections to the kernel object.
- Rename distortions class
- path for saving the kernels should not be the notebook directory
- test saving and loading
- Change parameters to SI units
- Automatically pick order of distortions to speed
up convolutions
- Add shortening of kernel if possible
- Clear functions to interact with the distortions (instead of the 4 snippered ones there are now). | priority | kernel object robustness and simplification the current kernel object handles predistortions however there are several other points that also handle distortions leading to two problems it is unclear what information is stored where leading to human mistakes the different allocation of information leads to inefficiencies in calculating the convolutions this leads to a of unneeded convolution time everytime we upload a sequence the following points are improvements to address these issues note i do not intend to do all of these at once calculate only if parameters changed add rt corrections to the kernel object rename distortions class path for saving the kernels should not be the notebook directory test saving and loading change parameters to si units automatically pick order of distortions to speed up convolutions add shortening of kernel if possible clear functions to interact with the distortions instead of the snippered ones there are now | 1 |
291,623 | 8,940,966,749 | IssuesEvent | 2019-01-24 02:05:37 | mRemoteNG/mRemoteNG | https://api.github.com/repos/mRemoteNG/mRemoteNG | closed | putty panel not fitted into window (you can drag it around like a mdichild) | Bug High Priority Verified | <!--- Provide a general summary of the issue in the Title above -->
Reference #1261
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide an unambiguous set of steps to reproduce -->
<!--- this bug. Include code to reproduce, if relevant -->
1. Open putty connection
2. See a sliver of the putty window

3. this can be dragged within the connection panel

## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Operating System and version (e.g. Windows 10 1709 x64):
| 1.0 | putty panel not fitted into window (you can drag it around like a mdichild) - <!--- Provide a general summary of the issue in the Title above -->
Reference #1261
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide an unambiguous set of steps to reproduce -->
<!--- this bug. Include code to reproduce, if relevant -->
1. Open putty connection
2. See a sliver of the putty window

3. this can be dragged within the connection panel

## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Operating System and version (e.g. Windows 10 1709 x64):
| priority | putty panel not fitted into window you can drag it around like a mdichild reference expected behavior current behavior possible solution steps to reproduce for bugs open putty connection see a sliver of the putty window this can be dragged within the connection panel context your environment version used operating system and version e g windows | 1 |
693,032 | 23,759,997,241 | IssuesEvent | 2022-09-01 08:04:32 | redhat-developer/odo | https://api.github.com/repos/redhat-developer/odo | closed | devfile variables inside kubernetes component are not replaced | kind/bug triage/duplicate priority/High | Devfile variables inside `kubernetes` component files referenced by `uri` should work the same way as they do when `inline` is used
```yaml
#devfile.yaml
commands:
- exec:
commandLine: npm install
component: runtime
group:
isDefault: true
kind: build
workingDir: $PROJECT_SOURCE
id: install
- exec:
commandLine: npm start
component: runtime
group:
isDefault: true
kind: run
workingDir: $PROJECT_SOURCE
id: run
- id: build-image
apply:
component: prod-image
- id: deployk8s
apply:
component: outerloop-deploy
- id: deploy
composite:
commands:
- build-image
- deployk8s
group:
kind: deploy
isDefault: true
components:
- container:
endpoints:
- name: http-3000
targetPort: 3000
image: registry.access.redhat.com/ubi8/nodejs-14:latest
memoryLimit: 1024Mi
mountSources: true
name: runtime
- name: prod-image
image:
imageName: "{{CONTAINER_IMAGE}}"
dockerfile:
uri: ./Dockerfile
buildContext: ${PROJECT_SOURCE}
- name: outerloop-deploy
kubernetes:
uri: kubernetes/deployment.yaml
variables:
CONTAINER_IMAGE: quay.io/tkral/test:latest
metadata:
language: javascript
name: nodejs-nodejs-kkty
projectType: nodejs
schemaVersion: 2.2.0
starterProjects:
- git:
remotes:
origin: https://github.com/odo-devfiles/nodejs-ex.git
name: nodejs-starter
```
```yaml
# kubernetes/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: mynode
spec:
replicas: 1
selector:
matchLabels:
app: node-app
template:
metadata:
labels:
app: node-app
spec:
containers:
- name: main
image: "{{CONTAINER_IMAGE}}"
resources: {}
```
```
odo deploy
```
```
$ k get deployment mynode -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-11-23T08:58:43Z"
generation: 1
labels:
app.kubernetes.io/managed-by: odo
name: mynode
namespace: test
resourceVersion: "31444"
uid: 98f8a389-0a3d-4abf-b9f1-c691ad7c4cf0
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: node-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: node-app
spec:
containers:
- image: '{{ CONTAINER_IMAGE }}'
imagePullPolicy: IfNotPresent
name: main
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2021-11-23T08:58:43Z"
lastUpdateTime: "2021-11-23T08:58:43Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-11-23T09:08:44Z"
lastUpdateTime: "2021-11-23T09:08:44Z"
message: ReplicaSet "mynode-5db6864ffd" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
```
/kind bug
/priority high
| 1.0 | devfile variables inside kubernetes component are not replaced - Devfile variables inside `kubernetes` component files referenced by `uri` should work the same way as they do when `inline` is used
```yaml
#devfile.yaml
commands:
- exec:
commandLine: npm install
component: runtime
group:
isDefault: true
kind: build
workingDir: $PROJECT_SOURCE
id: install
- exec:
commandLine: npm start
component: runtime
group:
isDefault: true
kind: run
workingDir: $PROJECT_SOURCE
id: run
- id: build-image
apply:
component: prod-image
- id: deployk8s
apply:
component: outerloop-deploy
- id: deploy
composite:
commands:
- build-image
- deployk8s
group:
kind: deploy
isDefault: true
components:
- container:
endpoints:
- name: http-3000
targetPort: 3000
image: registry.access.redhat.com/ubi8/nodejs-14:latest
memoryLimit: 1024Mi
mountSources: true
name: runtime
- name: prod-image
image:
imageName: "{{CONTAINER_IMAGE}}"
dockerfile:
uri: ./Dockerfile
buildContext: ${PROJECT_SOURCE}
- name: outerloop-deploy
kubernetes:
uri: kubernetes/deployment.yaml
variables:
CONTAINER_IMAGE: quay.io/tkral/test:latest
metadata:
language: javascript
name: nodejs-nodejs-kkty
projectType: nodejs
schemaVersion: 2.2.0
starterProjects:
- git:
remotes:
origin: https://github.com/odo-devfiles/nodejs-ex.git
name: nodejs-starter
```
```yaml
# kubernetes/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: mynode
spec:
replicas: 1
selector:
matchLabels:
app: node-app
template:
metadata:
labels:
app: node-app
spec:
containers:
- name: main
image: "{{CONTAINER_IMAGE}}"
resources: {}
```
```
odo deploy
```
```
$ k get deployment mynode -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-11-23T08:58:43Z"
generation: 1
labels:
app.kubernetes.io/managed-by: odo
name: mynode
namespace: test
resourceVersion: "31444"
uid: 98f8a389-0a3d-4abf-b9f1-c691ad7c4cf0
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: node-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: node-app
spec:
containers:
- image: '{{ CONTAINER_IMAGE }}'
imagePullPolicy: IfNotPresent
name: main
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2021-11-23T08:58:43Z"
lastUpdateTime: "2021-11-23T08:58:43Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-11-23T09:08:44Z"
lastUpdateTime: "2021-11-23T09:08:44Z"
message: ReplicaSet "mynode-5db6864ffd" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
```
/kind bug
/priority high
| priority | devfile variables inside kubernetes component are not replaced devfile variables inside kubernetes component files referenced by uri should work the same way as they do when inline is used yaml devfile yaml commands exec commandline npm install component runtime group isdefault true kind build workingdir project source id install exec commandline npm start component runtime group isdefault true kind run workingdir project source id run id build image apply component prod image id apply component outerloop deploy id deploy composite commands build image group kind deploy isdefault true components container endpoints name http targetport image registry access redhat com nodejs latest memorylimit mountsources true name runtime name prod image image imagename container image dockerfile uri dockerfile buildcontext project source name outerloop deploy kubernetes uri kubernetes deployment yaml variables container image quay io tkral test latest metadata language javascript name nodejs nodejs kkty projecttype nodejs schemaversion starterprojects git remotes origin name nodejs starter yaml kubernetes deployment yaml kind deployment apiversion apps metadata name mynode spec replicas selector matchlabels app node app template metadata labels app node app spec containers name main image container image resources odo deploy k get deployment mynode o yaml apiversion apps kind deployment metadata annotations deployment kubernetes io revision creationtimestamp generation labels app kubernetes io managed by odo name mynode namespace test resourceversion uid spec progressdeadlineseconds replicas revisionhistorylimit selector matchlabels app node app strategy rollingupdate maxsurge maxunavailable type rollingupdate template metadata creationtimestamp null labels app node app spec containers image container image imagepullpolicy ifnotpresent name main resources terminationmessagepath dev termination log terminationmessagepolicy file dnspolicy clusterfirst restartpolicy always schedulername default scheduler securitycontext terminationgraceperiodseconds status conditions lasttransitiontime lastupdatetime message deployment does not have minimum availability reason minimumreplicasunavailable status false type available lasttransitiontime lastupdatetime message replicaset mynode has timed out progressing reason progressdeadlineexceeded status false type progressing observedgeneration replicas unavailablereplicas updatedreplicas kind bug priority high | 1 |
558,010 | 16,524,296,367 | IssuesEvent | 2021-05-26 17:57:49 | Javacord/Javacord | https://api.github.com/repos/Javacord/Javacord | opened | Add support for components | high priority | Discord now has components (buttons) for messages and responses,
https://github.com/discord/discord-api-docs/pull/3007 | 1.0 | Add support for components - Discord now has components (buttons) for messages and responses,
https://github.com/discord/discord-api-docs/pull/3007 | priority | add support for components discord now has components buttons for messages and responses | 1 |
479,245 | 13,793,433,470 | IssuesEvent | 2020-10-09 14:56:52 | eclipse-glsp/glsp | https://api.github.com/repos/eclipse-glsp/glsp | closed | Split public API from internal implementations | high-priority | A generic point that we should start thinking about is to make it clear (by package name) what is internal and public API meant to be directly used and extended by clients. This will give us more flexibility in the future when doing modifications as we'll know what we can change without affecting existing language-specific client implementations.
[migrated from https://github.com/eclipsesource/graphical-lsp/issues/363] | 1.0 | Split public API from internal implementations - A generic point that we should start thinking about is to make it clear (by package name) what is internal and public API meant to be directly used and extended by clients. This will give us more flexibility in the future when doing modifications as we'll know what we can change without affecting existing language-specific client implementations.
[migrated from https://github.com/eclipsesource/graphical-lsp/issues/363] | priority | split public api from internal implementations a generic point that we should start thinking about is to make it clear by package name what is internal and public api meant to be directly used and extended by clients this will give us more flexibility in the future when doing modifications as we ll know what we can change without affecting existing language specific client implementations | 1 |
130,346 | 5,114,437,591 | IssuesEvent | 2017-01-06 18:29:29 | aayaffe/SailingRaceCourseManager | https://api.github.com/repos/aayaffe/SailingRaceCourseManager | opened | Admin revoked | Priority: High Type: Bug | Admin options are not available after using other features of the phone (such as camera) and then returning to map activity.
Closing and opening, Fixes issue | 1.0 | Admin revoked - Admin options are not available after using other features of the phone (such as camera) and then returning to map activity.
Closing and opening, Fixes issue | priority | admin revoked admin options are not available after using other features of the phone such as camera and then returning to map activity closing and opening fixes issue | 1 |
241,784 | 7,834,436,134 | IssuesEvent | 2018-06-16 13:59:16 | kaytotes/ImprovedBlizzardUI | https://api.github.com/repos/kaytotes/ImprovedBlizzardUI | closed | Kill Feed | 8.0 high priority ptr | The Kill Feed is no longer functional and causes instant errors on the PTR.

| 1.0 | Kill Feed - The Kill Feed is no longer functional and causes instant errors on the PTR.

| priority | kill feed the kill feed is no longer functional and causes instant errors on the ptr | 1 |
420,449 | 12,238,258,513 | IssuesEvent | 2020-05-04 19:29:18 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | Group Capacity Off By One Exception | Fixed in v10.3 Priority: High Status: Confirmed Topic: Group Type: Bug | ### Description
On a Group Type with a _Group Capacity_ rule of _Hard_, the defined Capacity is not honored and an exception is thrown when trying to add someone that would make the group AT capacity, but NOT over it.
### Steps to Reproduce
1. Identify a group type that has a Group Capacity rule of Hard. (Or just edit Small Group type and add that capacity rule)
2. Go to Group Viewer and create a new group of that type.
3. Edit the new group and set capacity to 1
4. Add someone to the group and observe the exception.
**Expected behavior:**
Group capacity would be honored and you would be able to add people up TO the defined group capacity.
**Actual behavior:**
An exception is thrown when adding a member that would make the group AT the defined capacity. To work around the issue you must increase the desired capacity by 1 to allow the group to be at the desired capacity. This is new behavior as of 10.x and didn't occur in 9.4.
Duped on demo and prealpha.

### Versions
* **Rock Version:** 10.0-11.0 | 1.0 | Group Capacity Off By One Exception - ### Description
On a Group Type with a _Group Capacity_ rule of _Hard_, the defined Capacity is not honored and an exception is thrown when trying to add someone that would make the group AT capacity, but NOT over it.
### Steps to Reproduce
1. Identify a group type that has a Group Capacity rule of Hard. (Or just edit Small Group type and add that capacity rule)
2. Go to Group Viewer and create a new group of that type.
3. Edit the new group and set capacity to 1
4. Add someone to the group and observe the exception.
**Expected behavior:**
Group capacity would be honored and you would be able to add people up TO the defined group capacity.
**Actual behavior:**
An exception is thrown when adding a member that would make the group AT the defined capacity. To work around the issue you must increase the desired capacity by 1 to allow the group to be at the desired capacity. This is new behavior as of 10.x and didn't occur in 9.4.
Duped on demo and prealpha.

### Versions
* **Rock Version:** 10.0-11.0 | priority | group capacity off by one exception description on a group type with a group capacity rule of hard the defined capacity is not honored and an exception is thrown when trying to add someone that would make the group at capacity but not over it steps to reproduce identify a group type that has a group capacity rule of hard or just edit small group type and add that capacity rule go to group viewer and create a new group of that type edit the new group and set capacity to add someone to the group and observe the exception expected behavior group capacity would be honored and you would be able to add people up to the defined group capacity actual behavior an exception is thrown when adding a member that would make the group at the defined capacity to work around the issue you must increase the desired capacity by to allow the group to be at the desired capacity this is new behavior as of x and didn t occur in duped on demo and prealpha versions rock version | 1 |
253,512 | 8,057,055,843 | IssuesEvent | 2018-08-02 14:26:00 | Linaro/squad | https://api.github.com/repos/Linaro/squad | closed | Be able to specify baseline when using email api | enhancement high priority | When calling the API like so:
https://qa-reports.linaro.org/api/builds/8275/email/
Note the line "compared to build v4.14.56-93-gec86d5e19e14".
Squad chooses the baseline (build v4.14.56-93-gec86d5e19e14 in this case) by naively using the previous build. However, my API client is able to determine a better baseline because it has knowledge of the project.
If the email endpoint allowed a 'baseline' build to be specified, perhaps like so: "https://qa-reports.linaro.org/api/builds/8275/email?baseline=8156", then the fixes and regressions that an email report produces would be much more useful. | 1.0 | Be able to specify baseline when using email api - When calling the API like so:
https://qa-reports.linaro.org/api/builds/8275/email/
Note the line "compared to build v4.14.56-93-gec86d5e19e14".
Squad chooses the baseline (build v4.14.56-93-gec86d5e19e14 in this case) by naively using the previous build. However, my API client is able to determine a better baseline because it has knowledge of the project.
If the email endpoint allowed a 'baseline' build to be specified, perhaps like so: "https://qa-reports.linaro.org/api/builds/8275/email?baseline=8156", then the fixes and regressions that an email report produces would be much more useful. | priority | be able to specify baseline when using email api when calling the api like so note the line compared to build squad chooses the baseline build in this case by naively using the previous build however my api client is able to determine a better baseline because it has knowledge of the project if the email endpoint allowed a baseline build to be specified perhaps like so then the fixes and regressions that an email report produces would be much more useful | 1 |
386,837 | 11,451,406,386 | IssuesEvent | 2020-02-06 11:32:10 | balena-io/balena-supervisor | https://api.github.com/repos/balena-io/balena-supervisor | opened | The supervisor can report a spurious target state when moving between applications | High priority | This exhibited itself as the supervisor complaining it did not support multiple applications, and the state endpoint had correctly only reported a single app as the target state.
It seems to be related to volatile state, as when the healthcheck kicked in, the supervisor self-recovered. | 1.0 | The supervisor can report a spurious target state when moving between applications - This exhibited itself as the supervisor complaining it did not support multiple applications, and the state endpoint had correctly only reported a single app as the target state.
It seems to be related to volatile state, as when the healthcheck kicked in, the supervisor self-recovered. | priority | the supervisor can report a spurious target state when moving between applications this exhibited itself as the supervisor complaining it did not support multiple applications and the state endpoint had correctly only reported a single app as the target state it seems to be related to volatile state as when the healthcheck kicked in the supervisor self recovered | 1 |
303,730 | 9,310,061,007 | IssuesEvent | 2019-03-25 17:51:52 | ConsenSys/mythril-classic | https://api.github.com/repos/ConsenSys/mythril-classic | closed | Refactor: mythril/mythril.py | Priority: High Review maintenance | ## Description
This issue tracks maintenance of mythril/mythril.py
## Checkpoints:
- [ ] Cleanup Code
- [ ] Ensure 80% code coverage
- [ ] Ensure all public functions have been outfitted with the proper documentation | 1.0 | Refactor: mythril/mythril.py - ## Description
This issue tracks maintenance of mythril/mythril.py
## Checkpoints:
- [ ] Cleanup Code
- [ ] Ensure 80% code coverage
- [ ] Ensure all public functions have been outfitted with the proper documentation | priority | refactor mythril mythril py description this issue tracks maintenance of mythril mythril py checkpoints cleanup code ensure code coverage ensure all public functions have been outfitted with the proper documentation | 1 |
594,690 | 18,051,507,232 | IssuesEvent | 2021-09-19 20:34:35 | robotcoral/coral-app | https://api.github.com/repos/robotcoral/coral-app | opened | Flags can be set underneath a slab | bug High Priority | **Describe the bug**
If karol stands on 1 or more slabs he can place a flag on the floor underneath the slabs.
**To Reproduce**
Steps to reproduce the behavior:
1. Put down one or more slabs as karol
2. Move karol such that he stands on the slabs
3. Place a flag on the current position
**Expected behavior**
Karol places the flag on top of the slabs.
**Additional context**
Version 0.1.10 nightly
| 1.0 | Flags can be set underneath a slab - **Describe the bug**
If karol stands on 1 or more slabs he can place a flag on the floor underneath the slabs.
**To Reproduce**
Steps to reproduce the behavior:
1. Put down one or more slabs as karol
2. Move karol such that he stands on the slabs
3. Place a flag on the current position
**Expected behavior**
Karol places the flag on top of the slabs.
**Additional context**
Version 0.1.10 nightly
| priority | flags can be set underneath a slab describe the bug if karol stands on or more slabs he can place a flag on the floor underneath the slabs to reproduce steps to reproduce the behavior put down one or more slabs as karol move karol such that he stands on the slabs place a flag on the current position expected behavior karol places the flag on top of the slabs additional context version nightly | 1 |
534,762 | 15,648,429,126 | IssuesEvent | 2021-03-23 05:42:57 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | When Super Magician appears as a boss due to Frog's rule, he doesn't have increased HP | High Priority reported in v1.11 | Just missing a field in the frog HP modifiers file I believe | 1.0 | When Super Magician appears as a boss due to Frog's rule, he doesn't have increased HP - Just missing a field in the frog HP modifiers file I believe | priority | when super magician appears as a boss due to frog s rule he doesn t have increased hp just missing a field in the frog hp modifiers file i believe | 1 |
511,993 | 14,886,685,660 | IssuesEvent | 2021-01-20 17:16:08 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | opened | fix out of memory panic when handling block body | Priority: 2 - High Type: Bug | ## Describe the bug
<!-- A clear and concise description of what the bug is. -->
- node panics with "out of memory" when handling block body: https://github.com/paritytech/cumulus/blob/master/consensus/src/lib.rs
- fix this and re-enable
- may be due to decoding issues
- not sure if this happens with just gossamer, may only be with kusama
<!-- Thank you 🙏 --> | 1.0 | fix out of memory panic when handling block body - ## Describe the bug
<!-- A clear and concise description of what the bug is. -->
- node panics with "out of memory" when handling block body: https://github.com/paritytech/cumulus/blob/master/consensus/src/lib.rs
- fix this and re-enable
- may be due to decoding issues
- not sure if this happens with just gossamer, may only be with kusama
<!-- Thank you 🙏 --> | priority | fix out of memory panic when handling block body describe the bug node panics with out of memory when handling block body fix this and re enable may be due to decoding issues not sure if this happens with just gossamer may only be with kusama | 1 |
497,471 | 14,371,307,330 | IssuesEvent | 2020-12-01 12:22:48 | FEUP-ESOF-2020-21/open-cx-t4g2-codemasters | https://api.github.com/repos/FEUP-ESOF-2020-21/open-cx-t4g2-codemasters | closed | US: As a user I want to be able to rate a talk | conference manager high priority iteration-3 user-story | As a user I want to be able to rate a talk.
Scenario: Rate a talk.
Given: A conference that I have attended
When: I tap “Rate this talk”
Then: I give a score between 0 and 10
Value: Must have
Effort: XL | 1.0 | US: As a user I want to be able to rate a talk - As a user I want to be able to rate a talk.
Scenario: Rate a talk.
Given: A conference that I have attended
When: I tap “Rate this talk”
Then: I give a score between 0 and 10
Value: Must have
Effort: XL | priority | us as a user i want to be able to rate a talk as a user i want to be able to rate a talk scenario rate a talk given a conference that i have attended when i tap “rate this talk” then i give a score between and value must have effort xl | 1 |
552,216 | 16,218,476,649 | IssuesEvent | 2021-05-06 00:26:35 | lalitpagaria/obsei | https://api.github.com/repos/lalitpagaria/obsei | closed | HTTP Sink is not working due to date time serialization issue on AppStore and PlayStore Scrapper Sources | bug high priority | below issue is coming :
TypeError: datetime.datetime(...) is not JSON serializable
**To Reproduce**
Select PlayStore & AppStore Scrapper and use some HTTP mock server or HTTP local server to receive sentiments data.
**Expected behavior**
Should work with any date time format
**Stacktrace**
TypeError: datetime.datetime(...) is not JSON serializable
**Please complete the following information:**
- OS: windows
- Version:
**Additional context**
Add any other context about the problem here.
| 1.0 | HTTP Sink is not working due to date time serialization issue on AppStore and PlayStore Scrapper Sources - below issue is coming :
TypeError: datetime.datetime(...) is not JSON serializable
**To Reproduce**
Select PlayStore & AppStore Scrapper and use some HTTP mock server or HTTP local server to receive sentiments data.
**Expected behavior**
Should work with any date time format
**Stacktrace**
TypeError: datetime.datetime(...) is not JSON serializable
**Please complete the following information:**
- OS: windows
- Version:
**Additional context**
Add any other context about the problem here.
| priority | http sink is not working due to date time serialization issue on appstore and playstore scrapper sources below issue is coming typeerror datetime datetime is not json serializable to reproduce select playstore appstore scrapper and use some http mock server or http local server to receive sentiments data expected behavior should work with any date time format stacktrace typeerror datetime datetime is not json serializable please complete the following information os windows version additional context add any other context about the problem here | 1 |
185,726 | 6,727,089,271 | IssuesEvent | 2017-10-17 12:25:48 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | Undo action removes all latest added attributes after an immediate add of an attribute | 0.94-pre-release Priority/High | Browser: Chrome Version 61.0.3163.100 (Official Build) (64-bit)
**Steps**
1. Add one or two attributes (att1, att2) with values for any configuration
2. Add another attribute (att3)
3. Click undo or press Ctrl+Z
**Acutal result**
All attributes including att1, att2 get undone
**Expected result**
Only att3 should be undone | 1.0 | Undo action removes all latest added attributes after an immediate add of an attribute - Browser: Chrome Version 61.0.3163.100 (Official Build) (64-bit)
**Steps**
1. Add one or two attributes (att1, att2) with values for any configuration
2. Add another attribute (att3)
3. Click undo or press Ctrl+Z
**Acutal result**
All attributes including att1, att2 get undone
**Expected result**
Only att3 should be undone | priority | undo action removes all latest added attributes after an immediate add of an attribute browser chrome version official build bit steps add one or two attributes with values for any configuration add another attribute click undo or press ctrl z acutal result all attributes including get undone expected result only should be undone | 1 |
99,676 | 4,059,209,061 | IssuesEvent | 2016-05-25 08:45:29 | icatproject/python-icat | https://api.github.com/repos/icatproject/python-icat | closed | Ensure compatibility with ICAT 4.7 and IDS 1.6 | blocked enhancement in progress Priority-High | The first snapshot releases for icat.server 4.7.0 and ids.server 1.6.0 are out. Need to make sure python-icat copes well with these upcoming versions and supports all important new features. ICAT 4.7 is supposed to have some minor schema changes, so at least this will require changes in python-icat.
Blocked until the final release versions of icat.server 4.7.0 and ids.server 1.6.0 are available for testing. | 1.0 | Ensure compatibility with ICAT 4.7 and IDS 1.6 - The first snapshot releases for icat.server 4.7.0 and ids.server 1.6.0 are out. Need to make sure python-icat copes well with these upcoming versions and supports all important new features. ICAT 4.7 is supposed to have some minor schema changes, so at least this will require changes in python-icat.
Blocked until the final release versions of icat.server 4.7.0 and ids.server 1.6.0 are available for testing. | priority | ensure compatibility with icat and ids the first snapshot releases for icat server and ids server are out need to make sure python icat copes well with these upcoming versions and supports all important new features icat is supposed to have some minor schema changes so at least this will require changes in python icat blocked until the final release versions of icat server and ids server are available for testing | 1 |
487,458 | 14,047,112,968 | IssuesEvent | 2020-11-02 06:28:43 | wso2/product-apim-tooling | https://api.github.com/repos/wso2/product-apim-tooling | closed | Revamp the existing "apictl" commands and the new commands to be added to API Controller | Affected/3.1.0 Next Release - 4.x Priority/High Type/Improvement | **Description:**
There are two (2) types of command signatures (structures) in API Controller such as **apictl [verb] [noun] [flags]** and **apictl [command] [flags]**. Below are the commands belonging to those two (2) categories.
<table>
<tbody>
<tr>
<th>apictl [verb] [noun] [flags]</th>
<th>apictl [command] [flags]</strong></th>
</tr>
<tr>
<td valign="top">
<p><strong>Existing commands:</strong></p>
<ul>
<li>apictl list apis [flags]</li>
<li>apictl list apps [flags]</li>
<li>apictl login <env-name> [flags]</li>
<li>apictl logout <env-name> [flags]</li>
<li>apictl install api-operator [flags]</li>
<li>apictl uninstall api-operator [flags]</li>
<li>apictl change registry [flags]</li>
<li>apictl version <---- apictl noun only</li>
<li>apictl help <----- apictl verb only</li>
</ul>
<br />
<p><strong>Newly added commands:</strong></p>
<ul>
<li>apictl list api-products [flags]</li>
</ul>
</td>
<td>
<p><strong>Existing commands:</strong></p>
<ul>
<li >apictl add [flags]</li>
<li>apictl add-env [flags]</li>
<li>apictl remove-env [flags]</li>
<li>apictl export-api [flags]</li>
<li>apictl export-apis [flags]</li>
<li>apictl export-app [flags]</li>
<li>apictl import-api [flags]</li>
<li>apictl import-app [flags]</li>
<li>apictl init [flags]</li>
<li>apictl get-keys [flags]</li>
<li>apictl set [flags]</li>
<li>apictl update [flags]</li>
</ul>
<br />
<p><strong>Newly added commands:</strong></p>
<ul>
<li>apictl delete-api [flags]</li>
<li>apictl change-api-status [flags]</li>
<li>apictl delete-api-product [flag]</li>
</ul>
<br />
<p><strong>Commands to be added:</strong></p>
<ul>
<li>apictl import-api-product [flags]</li>
<li>apictl export-api-product [flags]</li>
</ul>
</td>
</tr>
</tbody>
</table>
It would be better if all the commands can be revamped into one then all the commands will be more consistent.
**Suggested new structure:** _**apictl [verb] [noun] [flags]** (The structure that already has been used in the left column)_
The recently added new commands (check right column **Newly added commands:**) and the commands to be added (check right column **Commands to be added:**) can be easily restructured to the suggested new structure.
The existing commands (check right column **Existing commands:**) should be migrated in a manner without breaking any user functionality. These existing commands can be deprecated first without directly removing them which will address the backward compatibility.
**Suggested Labels:**
Type/Improvement
Affected/3.1.0
**Affected Product Version:**
APICTL 3.1.0 | 1.0 | Revamp the existing "apictl" commands and the new commands to be added to API Controller - **Description:**
There are two (2) types of command signatures (structures) in API Controller such as **apictl [verb] [noun] [flags]** and **apictl [command] [flags]**. Below are the commands belonging to those two (2) categories.
<table>
<tbody>
<tr>
<th>apictl [verb] [noun] [flags]</th>
<th>apictl [command] [flags]</strong></th>
</tr>
<tr>
<td valign="top">
<p><strong>Existing commands:</strong></p>
<ul>
<li>apictl list apis [flags]</li>
<li>apictl list apps [flags]</li>
<li>apictl login <env-name> [flags]</li>
<li>apictl logout <env-name> [flags]</li>
<li>apictl install api-operator [flags]</li>
<li>apictl uninstall api-operator [flags]</li>
<li>apictl change registry [flags]</li>
<li>apictl version <---- apictl noun only</li>
<li>apictl help <----- apictl verb only</li>
</ul>
<br />
<p><strong>Newly added commands:</strong></p>
<ul>
<li>apictl list api-products [flags]</li>
</ul>
</td>
<td>
<p><strong>Existing commands:</strong></p>
<ul>
<li >apictl add [flags]</li>
<li>apictl add-env [flags]</li>
<li>apictl remove-env [flags]</li>
<li>apictl export-api [flags]</li>
<li>apictl export-apis [flags]</li>
<li>apictl export-app [flags]</li>
<li>apictl import-api [flags]</li>
<li>apictl import-app [flags]</li>
<li>apictl init [flags]</li>
<li>apictl get-keys [flags]</li>
<li>apictl set [flags]</li>
<li>apictl update [flags]</li>
</ul>
<br />
<p><strong>Newly added commands:</strong></p>
<ul>
<li>apictl delete-api [flags]</li>
<li>apictl change-api-status [flags]</li>
<li>apictl delete-api-product [flag]</li>
</ul>
<br />
<p><strong>Commands to be added:</strong></p>
<ul>
<li>apictl import-api-product [flags]</li>
<li>apictl export-api-product [flags]</li>
</ul>
</td>
</tr>
</tbody>
</table>
It would be better if all the commands can be revamped into one then all the commands will be more consistent.
**Suggested new structure:** _**apictl [verb] [noun] [flags]** (The structure that already has been used in the left column)_
The recently added new commands (check right column **Newly added commands:**) and the commands to be added (check right column **Commands to be added:**) can be easily restructured to the suggested new structure.
The existing commands (check right column **Existing commands:**) should be migrated in a manner without breaking any user functionality. These existing commands can be deprecated first without directly removing them which will address the backward compatibility.
**Suggested Labels:**
Type/Improvement
Affected/3.1.0
**Affected Product Version:**
APICTL 3.1.0 | priority | revamp the existing apictl commands and the new commands to be added to api controller description there are two types of command signatures structures in api controller such as apictl and apictl below are the commands belonging to those two categories apictl apictl existing commands apictl list apis apictl list apps apictl login lt env name gt apictl logout lt env name gt apictl install api operator apictl uninstall api operator apictl change registry apictl version lt apictl noun only apictl help nbsp nbsp lt apictl verb only newly added nbsp commands apictl list api products existing commands apictl add apictl add env apictl remove env apictl export api apictl export apis apictl export app apictl import api apictl import app apictl init apictl get keys apictl set apictl update newly added commands apictl delete api apictl change api status apictl delete api product commands to be added apictl import api product apictl export api product it would be better if all the commands can be revamped into one then all the commands will be more consistent suggested new structure apictl the structure that already has been used in the left column the recently added new commands check right column newly added commands and the commands to be added check right column commands to be added can be easily restructured to the suggested new structure the existing commands check right column existing commands should be migrated in a manner without breaking any user functionality these existing commands can be deprecated first without directly removing them which will address the backward compatibility suggested labels type improvement affected affected product version apictl | 1 |
463,541 | 13,283,379,793 | IssuesEvent | 2020-08-24 03:00:55 | yidongnan/grpc-spring-boot-starter | https://api.github.com/repos/yidongnan/grpc-spring-boot-starter | closed | Stub creation has no fallback | bug high priority | I believe this to be a blocking regression issue.
**The context**
Use [grpc-kotlin](https://github.com/grpc/grpc-kotlin) library to make gRPC calls.
**The bug**
In `grpc-spring-boot-starter:v2.9.0.RELEASE`, `GrpcClientBeanPostProcessor` tries to figure out (reflectively) the appropriate `static` method to create a stub. If it fails, it **falls back** to using the constructor (which is assumed as `private`, but may not be, as in the case of `grpc-kotlin`). In `grpc-spring-boot-starter:v2.10.0.RELEASE`, the fallback is not present, and thus, the stub injection blows up with an error `java.lang.IllegalArgumentException: Unsupported stub type`. See the class hierarchy below:
```
class ServiceNameCoroutineStub @JvmOverloads constructor(
channel: Channel,
callOptions: CallOptions = DEFAULT
) : AbstractCoroutineStub<ServiceNameCoroutineStub>(channel, callOptions)
```
```
abstract class AbstractCoroutineStub<S: AbstractCoroutineStub<S>>(
channel: Channel,
callOptions: CallOptions = CallOptions.DEFAULT
): AbstractStub<S>(channel, callOptions)
```
The problem, that already existed in `grpc-spring-boot-starter:v2.9.0.RELEASE` but was hidden by the fallback, is that there's no code in `GrpcClientBeanPostProcessor` to check for `AbstractStub` subclasses. I think the fallback to using the constructor directly is ok, as seen in the [example code](https://github.com/grpc/grpc-kotlin/blob/master/examples/src/main/kotlin/io/grpc/examples/helloworld/HelloWorldClient.kt#L31), and should not be removed. However, the constructor can be assumed `public` and there's no need to try to change its visibility.
Note that this too, will fail, if grpc-kotlin makes the constructor `private` and adds a static method (I created https://github.com/grpc/grpc-kotlin/issues/163). The futureproof way to fix this is not to check for the stub type, but to check for the existence of static methods named `new*Stub`, and then comparing the return types with the declared stub type.
**Stacktrace and logs**
```
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.mycompany.ServiceNameGrpcKt$ServiceNameCoroutineStub]: Unsupported stub type: com.mycompany.ServiceNameGrpcKt$ServiceNameCoroutineStub -> Please report this issue.
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.lambda$createStub$1(GrpcClientBeanPostProcessor.java:244)
at java.base/java.util.Optional.orElseThrow(Optional.java:408)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.createStub(GrpcClientBeanPostProcessor.java:243)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.valueForMember(GrpcClientBeanPostProcessor.java:218)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.processInjectionPoint(GrpcClientBeanPostProcessor.java:127)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.postProcessBeforeInitialization(GrpcClientBeanPostProcessor.java:83)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:416)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1788)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
```
**Steps to Reproduce**
N/A
**The application's environment**
Which versions do you use?
* Spring (boot): 2.3.1.RELEASE
* grpc-java: 1.30.2
* grpc-spring-boot-starter: 2.9.0.RELEASE
**Additional context**
* Did it ever work before? Yes.
* Do you have a demo? No.
| 1.0 | Stub creation has no fallback - I believe this to be a blocking regression issue.
**The context**
Use [grpc-kotlin](https://github.com/grpc/grpc-kotlin) library to make gRPC calls.
**The bug**
In `grpc-spring-boot-starter:v2.9.0.RELEASE`, `GrpcClientBeanPostProcessor` tries to figure out (reflectively) the appropriate `static` method to create a stub. If it fails, it **falls back** to using the constructor (which is assumed as `private`, but may not be, as in the case of `grpc-kotlin`). In `grpc-spring-boot-starter:v2.10.0.RELEASE`, the fallback is not present, and thus, the stub injection blows up with an error `java.lang.IllegalArgumentException: Unsupported stub type`. See the class hierarchy below:
```
class ServiceNameCoroutineStub @JvmOverloads constructor(
channel: Channel,
callOptions: CallOptions = DEFAULT
) : AbstractCoroutineStub<ServiceNameCoroutineStub>(channel, callOptions)
```
```
abstract class AbstractCoroutineStub<S: AbstractCoroutineStub<S>>(
channel: Channel,
callOptions: CallOptions = CallOptions.DEFAULT
): AbstractStub<S>(channel, callOptions)
```
The problem, that already existed in `grpc-spring-boot-starter:v2.9.0.RELEASE` but was hidden by the fallback, is that there's no code in `GrpcClientBeanPostProcessor` to check for `AbstractStub` subclasses. I think the fallback to using the constructor directly is ok, as seen in the [example code](https://github.com/grpc/grpc-kotlin/blob/master/examples/src/main/kotlin/io/grpc/examples/helloworld/HelloWorldClient.kt#L31), and should not be removed. However, the constructor can be assumed `public` and there's no need to try to change its visibility.
Note that this too, will fail, if grpc-kotlin makes the constructor `private` and adds a static method (I created https://github.com/grpc/grpc-kotlin/issues/163). The futureproof way to fix this is not to check for the stub type, but to check for the existence of static methods named `new*Stub`, and then comparing the return types with the declared stub type.
**Stacktrace and logs**
```
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.mycompany.ServiceNameGrpcKt$ServiceNameCoroutineStub]: Unsupported stub type: com.mycompany.ServiceNameGrpcKt$ServiceNameCoroutineStub -> Please report this issue.
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.lambda$createStub$1(GrpcClientBeanPostProcessor.java:244)
at java.base/java.util.Optional.orElseThrow(Optional.java:408)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.createStub(GrpcClientBeanPostProcessor.java:243)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.valueForMember(GrpcClientBeanPostProcessor.java:218)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.processInjectionPoint(GrpcClientBeanPostProcessor.java:127)
at net.devh.boot.grpc.client.inject.GrpcClientBeanPostProcessor.postProcessBeforeInitialization(GrpcClientBeanPostProcessor.java:83)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:416)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1788)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
```
**Steps to Reproduce**
N/A
**The application's environment**
Which versions do you use?
* Spring (boot): 2.3.1.RELEASE
* grpc-java: 1.30.2
* grpc-spring-boot-starter: 2.9.0.RELEASE
**Additional context**
* Did it ever work before? Yes.
* Do you have a demo? No.
| priority | stub creation has no fallback i believe this to be a blocking regression issue the context use library to make grpc calls the bug in grpc spring boot starter release grpcclientbeanpostprocessor tries to figure out reflectively the appropriate static method to create a stub if it fails it falls back to using the constructor which is assumed as private but may not be as in the case of grpc kotlin in grpc spring boot starter release the fallback is not present and thus the stub injection blows up with an error java lang illegalargumentexception unsupported stub type see the class hierarchy below class servicenamecoroutinestub jvmoverloads constructor channel channel calloptions calloptions default abstractcoroutinestub channel calloptions abstract class abstractcoroutinestub channel channel calloptions calloptions calloptions default abstractstub channel calloptions the problem that already existed in grpc spring boot starter release but was hidden by the fallback is that there s no code in grpcclientbeanpostprocessor to check for abstractstub subclasses i think the fallback to using the constructor directly is ok as seen in the and should not be removed however the constructor can be assumed public and there s no need to try to change its visibility note that this too will fail if grpc kotlin makes the constructor private and adds a static method i created the futureproof way to fix this is not to check for the stub type but to check for the existence of static methods named new stub and then comparing the return types with the declared stub type stacktrace and logs caused by org springframework beans beaninstantiationexception failed to instantiate unsupported stub type com mycompany servicenamegrpckt servicenamecoroutinestub please report this issue at net devh boot grpc client inject grpcclientbeanpostprocessor lambda createstub grpcclientbeanpostprocessor java at java base java util optional orelsethrow optional java at net devh boot grpc client inject grpcclientbeanpostprocessor createstub grpcclientbeanpostprocessor java at net devh boot grpc client inject grpcclientbeanpostprocessor valueformember grpcclientbeanpostprocessor java at net devh boot grpc client inject grpcclientbeanpostprocessor processinjectionpoint grpcclientbeanpostprocessor java at net devh boot grpc client inject grpcclientbeanpostprocessor postprocessbeforeinitialization grpcclientbeanpostprocessor java at org springframework beans factory support abstractautowirecapablebeanfactory applybeanpostprocessorsbeforeinitialization abstractautowirecapablebeanfactory java at org springframework beans factory support abstractautowirecapablebeanfactory initializebean abstractautowirecapablebeanfactory java at org springframework beans factory support abstractautowirecapablebeanfactory docreatebean abstractautowirecapablebeanfactory java steps to reproduce n a the application s environment which versions do you use spring boot release grpc java grpc spring boot starter release additional context did it ever work before yes do you have a demo no | 1 |
517,831 | 15,020,327,818 | IssuesEvent | 2021-02-01 14:35:31 | Ameelio/pathways-client | https://api.github.com/repos/Ameelio/pathways-client | opened | [Calls] Inc people are shown a notification when they receive a DOC alert | good first issue high-priority | **Is your feature request related to a problem? Please describe.**
Alerts should be given the prominence given that they can lead to a call termination
Currently, they only show up on the regular chat interface
**Describe the solution you'd like**
When users receive a DOC message, it should also be shown as an alert
**Additional context**
There's a `openwithnotification` util function that triggers a notif
| 1.0 | [Calls] Inc people are shown a notification when they receive a DOC alert - **Is your feature request related to a problem? Please describe.**
Alerts should be given the prominence given that they can lead to a call termination
Currently, they only show up on the regular chat interface
**Describe the solution you'd like**
When users receive a DOC message, it should also be shown as an alert
**Additional context**
There's a `openwithnotification` util function that triggers a notif
| priority | inc people are shown a notification when they receive a doc alert is your feature request related to a problem please describe alerts should be given the prominence given that they can lead to a call termination currently they only show up on the regular chat interface describe the solution you d like when users receive a doc message it should also be shown as an alert additional context there s a openwithnotification util function that triggers a notif | 1 |
268,814 | 8,414,909,390 | IssuesEvent | 2018-10-13 08:40:50 | Pack4Duck/ClassicUO | https://api.github.com/repos/Pack4Duck/ClassicUO | closed | ClassicUO fails to load if you're missing an anim*.idx file | Priority: high bug | With the installation of UOR, the following files exist:
- anim.idx
- anim4.idx
- anim5.idx
In Animation.cs, lines 140-149 assume that you have anim, anim2, anim3, anim4, and anim5.
Steps to reproduce the behavior:
1. Delete/rename of of the anim*.idx files
2. Run ClassicUO
3. Observe error
**Expected behavior**
ClassicUO can handle data directories that don't come with all of the anim* files.
**Desktop (please complete the following information):**
- OS: Win10
- Version: master branch, 5.0.8.3 of client from UOR
| 1.0 | ClassicUO fails to load if you're missing an anim*.idx file - With the installation of UOR, the following files exist:
- anim.idx
- anim4.idx
- anim5.idx
In Animation.cs, lines 140-149 assume that you have anim, anim2, anim3, anim4, and anim5.
Steps to reproduce the behavior:
1. Delete/rename of of the anim*.idx files
2. Run ClassicUO
3. Observe error
**Expected behavior**
ClassicUO can handle data directories that don't come with all of the anim* files.
**Desktop (please complete the following information):**
- OS: Win10
- Version: master branch, 5.0.8.3 of client from UOR
| priority | classicuo fails to load if you re missing an anim idx file with the installation of uor the following files exist anim idx idx idx in animation cs lines assume that you have anim and steps to reproduce the behavior delete rename of of the anim idx files run classicuo observe error expected behavior classicuo can handle data directories that don t come with all of the anim files desktop please complete the following information os version master branch of client from uor | 1 |
604,503 | 18,685,695,906 | IssuesEvent | 2021-11-01 12:09:00 | betagouv/service-national-universel | https://api.github.com/repos/betagouv/service-national-universel | opened | fix(inscription): parametrage écran disponibilité | enhancement priority-HIGH inscription | ### Fonctionnalité liée à un problème ?
_No response_
### Fonctionnalité
**fevrier**
- Dispo février - Géo : France entière + Outre Mer seulement Guadeloupe, Guyane et Martinique
- Dispo février - Niveau scolaire : exclusion des 1ère et Terminale
- Dispo février : nés entre le 25/02/2004 et le 14/02/2007 (hors 1ère et term)
**juin**
- Dispo juin : nés entre le 24/06/2004 et le 13/06/2007(hors 1ère et term)
- Dispo juin - Niveau scolaire : exclusion des 1ère et Terminale
- Dispo juin - Niveau scolaire : exclusion 3e et 2nde pro ? (à confirmer)
**juillet**
- Dispo juillet : nés entre le 15/07/2004 et le 04/07/2007
### Commentaires
_No response_ | 1.0 | fix(inscription): parametrage écran disponibilité - ### Fonctionnalité liée à un problème ?
_No response_
### Fonctionnalité
**fevrier**
- Dispo février - Géo : France entière + Outre Mer seulement Guadeloupe, Guyane et Martinique
- Dispo février - Niveau scolaire : exclusion des 1ère et Terminale
- Dispo février : nés entre le 25/02/2004 et le 14/02/2007 (hors 1ère et term)
**juin**
- Dispo juin : nés entre le 24/06/2004 et le 13/06/2007(hors 1ère et term)
- Dispo juin - Niveau scolaire : exclusion des 1ère et Terminale
- Dispo juin - Niveau scolaire : exclusion 3e et 2nde pro ? (à confirmer)
**juillet**
- Dispo juillet : nés entre le 15/07/2004 et le 04/07/2007
### Commentaires
_No response_ | priority | fix inscription parametrage écran disponibilité fonctionnalité liée à un problème no response fonctionnalité fevrier dispo février géo france entière outre mer seulement guadeloupe guyane et martinique dispo février niveau scolaire exclusion des et terminale dispo février nés entre le et le hors et term juin dispo juin nés entre le et le hors et term dispo juin niveau scolaire exclusion des et terminale dispo juin niveau scolaire exclusion et pro à confirmer juillet dispo juillet nés entre le et le commentaires no response | 1 |
404,677 | 11,861,489,924 | IssuesEvent | 2020-03-25 16:25:25 | levelkdev/BC-DAPP | https://api.github.com/repos/levelkdev/BC-DAPP | closed | Wrong use of units in dapp | bug high-priority | At this moment the dapp is always using WEI units, where WEI units should be use only when we communicate to the contracts and in the UI it should always display ETH units (which uses 18 decimals by default)
In order to fix I propose to use the BN library provided by web3, accessible by `web3.utils.BN` and use the methods `web3.utils.fromWei` and `web3.utils.toWei` | 1.0 | Wrong use of units in dapp - At this moment the dapp is always using WEI units, where WEI units should be use only when we communicate to the contracts and in the UI it should always display ETH units (which uses 18 decimals by default)
In order to fix I propose to use the BN library provided by web3, accessible by `web3.utils.BN` and use the methods `web3.utils.fromWei` and `web3.utils.toWei` | priority | wrong use of units in dapp at this moment the dapp is always using wei units where wei units should be use only when we communicate to the contracts and in the ui it should always display eth units which uses decimals by default in order to fix i propose to use the bn library provided by accessible by utils bn and use the methods utils fromwei and utils towei | 1 |
111,922 | 4,494,880,802 | IssuesEvent | 2016-08-31 08:11:48 | code-corps/code-corps-phoenix | https://api.github.com/repos/code-corps/code-corps-phoenix | closed | Add in post_type filtering to the posts endpoint | awaiting review high priority | We don't have any `post_type` filtering on the posts endpoint right now. We need to add that filtering in to our queries for the `index` actions. | 1.0 | Add in post_type filtering to the posts endpoint - We don't have any `post_type` filtering on the posts endpoint right now. We need to add that filtering in to our queries for the `index` actions. | priority | add in post type filtering to the posts endpoint we don t have any post type filtering on the posts endpoint right now we need to add that filtering in to our queries for the index actions | 1 |
558,419 | 16,533,021,614 | IssuesEvent | 2021-05-27 08:32:33 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | myspace.com - design is broken | browser-firefox-ios bugbug-probability-high os-ios priority-normal | <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/75143 -->
**URL**: https://myspace.com/nscrepresenta/mixes/classic-my-photos-464417/photo/193447851
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.4
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Picture didn’t load. Need old pictures
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | myspace.com - design is broken - <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/75143 -->
**URL**: https://myspace.com/nscrepresenta/mixes/classic-my-photos-464417/photo/193447851
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.4
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Picture didn’t load. Need old pictures
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | myspace com design is broken url browser version firefox ios operating system ios tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce picture didn’t load need old pictures browser configuration none from with ❤️ | 1 |
320,140 | 9,770,022,415 | IssuesEvent | 2019-06-06 09:56:17 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | CTCLoss produces NaNs in some situations | high priority module: cuda module: nn topic: determinism triaged | ## 🐛 Bug
when I train a cnn-rnn-ctc text recognize model, I meet nan loss after some iters, but it's ok at pytorch 0.4 with warpctc
## To Reproduce
Steps to reproduce the behavior:
1. download the code from https://github.com/WenmuZhou/crnn.pytorch
2. change the ctc loss from warpctc to nn.CTCloss()
3. run
- PyTorch Version (e.g., 1.0): 1.0.0.dev20181115
- OS (e.g., Linux): ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): pip3
- Build command you used (if compiling from source):
- Python version: 3.5.2
- CUDA/cuDNN version: 8.0/6.0
- GPU models and configuration: 1080ti
- Any other relevant information: | 1.0 | CTCLoss produces NaNs in some situations - ## 🐛 Bug
when I train a cnn-rnn-ctc text recognize model, I meet nan loss after some iters, but it's ok at pytorch 0.4 with warpctc
## To Reproduce
Steps to reproduce the behavior:
1. download the code from https://github.com/WenmuZhou/crnn.pytorch
2. change the ctc loss from warpctc to nn.CTCloss()
3. run
- PyTorch Version (e.g., 1.0): 1.0.0.dev20181115
- OS (e.g., Linux): ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): pip3
- Build command you used (if compiling from source):
- Python version: 3.5.2
- CUDA/cuDNN version: 8.0/6.0
- GPU models and configuration: 1080ti
- Any other relevant information: | priority | ctcloss produces nans in some situations 🐛 bug when i train a cnn rnn ctc text recognize model i meet nan loss after some iters but it s ok at pytorch with warpctc to reproduce steps to reproduce the behavior download the code from change the ctc loss from warpctc to nn ctcloss run pytorch version e g os e g linux ubuntu how you installed pytorch conda pip source build command you used if compiling from source python version cuda cudnn version gpu models and configuration any other relevant information | 1 |
525,658 | 15,257,740,422 | IssuesEvent | 2021-02-21 03:01:24 | aneuhold/BestCommunityService | https://api.github.com/repos/aneuhold/BestCommunityService | closed | External Services Page | priority: High | A page for external services.
AC 1: It has something that shows a number to call for in-home services.
AC 2: A description of each thing that is available
| 1.0 | External Services Page - A page for external services.
AC 1: It has something that shows a number to call for in-home services.
AC 2: A description of each thing that is available
| priority | external services page a page for external services ac it has something that shows a number to call for in home services ac a description of each thing that is available | 1 |
270,842 | 8,471,068,560 | IssuesEvent | 2018-10-24 07:21:01 | CS2113-AY1819S1-T12-1/main | https://api.github.com/repos/CS2113-AY1819S1-T12-1/main | opened | [1.3] Check OOP for new method design | priority.high | Check design for new method and class related to export command, sort and new filter | 1.0 | [1.3] Check OOP for new method design - Check design for new method and class related to export command, sort and new filter | priority | check oop for new method design check design for new method and class related to export command sort and new filter | 1 |
636,900 | 20,612,445,224 | IssuesEvent | 2022-03-07 09:58:24 | NicholasG04/Prom | https://api.github.com/repos/NicholasG04/Prom | closed | Lack of discrete axolotl references | bug enhancement UI User Panel Admin Panel High Priority | There is an atrocious lack of hidden, inconspicuous axolotl references throughout the website, this must be corrected immediately for the safety and wellbeing of the human population. | 1.0 | Lack of discrete axolotl references - There is an atrocious lack of hidden, inconspicuous axolotl references throughout the website, this must be corrected immediately for the safety and wellbeing of the human population. | priority | lack of discrete axolotl references there is an atrocious lack of hidden inconspicuous axolotl references throughout the website this must be corrected immediately for the safety and wellbeing of the human population | 1 |
665,063 | 22,298,163,795 | IssuesEvent | 2022-06-13 05:41:42 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | ERROR {org.wso2.carbon.is.migration.MigrationClientImpl} - Migration process was stopped. while scanning a simple key in 'reader', line 367 when generating dry run reports in migration | Priority/High Severity/Critical bug Component/Migration 6.0.0-Migration Affected-6.0.0 QA-Reported | **How to reproduce:**
1. Get IS 5.11.0 U2 updated for latest level 151
2. Get IS 6.0.0 m3 pack
3. Point for mssql 2019
```
[server]
hostname = "localhost"
node_ip = "127.0.0.1"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"
offset = "1"
[super_admin]
username = "admin"
password = "admin"
create_admin_account = true
[user_store]
type = "database_unique_id"
[database.identity_db]
url = "jdbc:sqlserver://localhost:1433;databaseName=migrt4;SendStringParametersAsUnicode=false"
username = "sa"
password = "MyPassword001"
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
[database.identity_db.pool_options]
maxActive = "80"
maxWait="6000"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1"
validationInterval="30000"
defaultAutoCommit=false
[database.shared_db]
url = "jdbc:sqlserver://localhost:1433;databaseName=migrt4;SendStringParametersAsUnicode=false"
username = "sa"
password = "MyPassword001"
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
[database.shared_db.pool_options]
maxActive = "80"
maxWait="6000"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1"
validationInterval="30000"
defaultAutoCommit=false
[keystore.primary]
file_name = "wso2carbon.jks"
password = "wso2carbon"
[truststore]
file_name="client-truststore.jks"
password="wso2carbon"
type="JKS"
[account_recovery.endpoint.auth]
hash= "66cd9688a2ae068244ea01e70f0e230f5623b7fa4cdecb65070a09ec06452262"
[identity.auth_framework.endpoint]
app_password= "dashboard"
[system_applications]
read_only_apps = []
```
4. Create some data using 5.11.0 pack (apps,users,roles,groups,scopes,workflows etc)
5. Get the migration resources and copy it for relavent places https://github.com/wso2-extensions/identity-migration-resources/releases/tag/v1.0.185
6. In migration-config.yaml file uncomment the paramater reportPath ( it was present in 5 places. uncommented from all the 5 places and added the absolute path as below) as instructed in the doc https://is.docs.wso2.com/en/latest/setup/migrating-userstore-managers/
` reportPath:/home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/reportpath
`
`reportPath: /home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/reportpath`
tried both above ways
7. Run the migration client with dry run
`sh wso2server.sh -Dmigrate -Dcomponent=identity -DdryRun
`
Getting below exception
```
ce {super-tenant}
[2022-06-06 19:45:51,237] [] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: mex-ut {super-tenant}
[2022-06-06 19:45:51,241] [] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: mex-ut2 {super-tenant}
[2022-06-06 19:45:51,925] [] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Repository : /home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/600/wso2is-6.0.0-m3/repository/deployment/server/
[2022-06-06 19:45:51,970] [] INFO {org.wso2.carbon.core.multitenancy.eager.TenantLoadingConfig} - Using tenant lazy loading policy...
[2022-06-06 19:45:51,996] [] INFO {org.wso2.carbon.core.internal.permission.update.PermissionUpdater} - Permission cache updated for tenant -1234
[2022-06-06 19:45:52,226] [] INFO {org.wso2.carbon.identity.core.internal.IdentityCoreServiceComponent} - Executing Migration client : org.wso2.carbon.is.migration.MigrationClientImpl
[2022-06-06 19:45:52,243] [] INFO {org.wso2.carbon.is.migration.config.Config} - WSO2 Product Migration Service Task : Loading Migration Configs, PATH:/home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/600/wso2is-6.0.0-m3/migration-resources/migration-config.yaml
[2022-06-06 19:45:52,288] [] ERROR {org.wso2.carbon.is.migration.MigrationClientImpl} - Migration process was stopped. while scanning a simple key
in 'reader', line 367, column 8:
reportPath:/home/shanika/WSO2/5. ...
^
could not find expected ':'
in 'reader', line 368, column 8:
# If migrating only few tenants, ...
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:558)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:224)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:199)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:153)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:199)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:153)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:450)
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:410)
at org.wso2.carbon.is.migration.util.Utility.loadMigrationConfig(Utility.java:94)
at org.wso2.carbon.is.migration.config.Config.getInstance(Config.java:64)
at org.wso2.carbon.is.migration.MigrationClientImpl.execute(MigrationClientImpl.java:39)
at org.wso2.carbon.identity.core.internal.IdentityCoreServiceComponent.activate(IdentityCoreServiceComponent.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544)
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:529)
at org.wso2.carbon.core.init.CarbonServerManager.removePendingItem(CarbonServerManager.java:305)
at org.wso2.carbon.core.init.PreAxis2ConfigItemListener.bundleChanged(PreAxis2ConfigItemListener.java:118)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:973)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
[2022-06-06 19:45:52,294] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,294] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO
```
Attaching carbon.log
[wso2carbon.log](https://github.com/wso2/product-is/files/8845107/wso2carbon.log)
attaching migration-config.yaml
[yaml.zip](https://github.com/wso2/product-is/files/8845247/yaml.zip)
Needs to get the migration guide updated with the correct information if the way I have included the reportPath: is wrong
A report was not generated in the given folder path
**Env**
mssql 2019
IS 5.11.0 U2 151 to IS 6.0.0 m3
migration client v185
jdbc
java 1.8 | 1.0 | ERROR {org.wso2.carbon.is.migration.MigrationClientImpl} - Migration process was stopped. while scanning a simple key in 'reader', line 367 when generating dry run reports in migration - **How to reproduce:**
1. Get IS 5.11.0 U2 updated for latest level 151
2. Get IS 6.0.0 m3 pack
3. Point for mssql 2019
```
[server]
hostname = "localhost"
node_ip = "127.0.0.1"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"
offset = "1"
[super_admin]
username = "admin"
password = "admin"
create_admin_account = true
[user_store]
type = "database_unique_id"
[database.identity_db]
url = "jdbc:sqlserver://localhost:1433;databaseName=migrt4;SendStringParametersAsUnicode=false"
username = "sa"
password = "MyPassword001"
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
[database.identity_db.pool_options]
maxActive = "80"
maxWait="6000"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1"
validationInterval="30000"
defaultAutoCommit=false
[database.shared_db]
url = "jdbc:sqlserver://localhost:1433;databaseName=migrt4;SendStringParametersAsUnicode=false"
username = "sa"
password = "MyPassword001"
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
[database.shared_db.pool_options]
maxActive = "80"
maxWait="6000"
minIdle ="5"
testOnBorrow = true
validationQuery="SELECT 1"
validationInterval="30000"
defaultAutoCommit=false
[keystore.primary]
file_name = "wso2carbon.jks"
password = "wso2carbon"
[truststore]
file_name="client-truststore.jks"
password="wso2carbon"
type="JKS"
[account_recovery.endpoint.auth]
hash= "66cd9688a2ae068244ea01e70f0e230f5623b7fa4cdecb65070a09ec06452262"
[identity.auth_framework.endpoint]
app_password= "dashboard"
[system_applications]
read_only_apps = []
```
4. Create some data using 5.11.0 pack (apps,users,roles,groups,scopes,workflows etc)
5. Get the migration resources and copy it for relavent places https://github.com/wso2-extensions/identity-migration-resources/releases/tag/v1.0.185
6. In migration-config.yaml file uncomment the paramater reportPath ( it was present in 5 places. uncommented from all the 5 places and added the absolute path as below) as instructed in the doc https://is.docs.wso2.com/en/latest/setup/migrating-userstore-managers/
` reportPath:/home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/reportpath
`
`reportPath: /home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/reportpath`
tried both above ways
7. Run the migration client with dry run
`sh wso2server.sh -Dmigrate -Dcomponent=identity -DdryRun
`
Getting below exception
```
ce {super-tenant}
[2022-06-06 19:45:51,237] [] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: mex-ut {super-tenant}
[2022-06-06 19:45:51,241] [] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: mex-ut2 {super-tenant}
[2022-06-06 19:45:51,925] [] INFO {org.wso2.carbon.core.init.CarbonServerManager} - Repository : /home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/600/wso2is-6.0.0-m3/repository/deployment/server/
[2022-06-06 19:45:51,970] [] INFO {org.wso2.carbon.core.multitenancy.eager.TenantLoadingConfig} - Using tenant lazy loading policy...
[2022-06-06 19:45:51,996] [] INFO {org.wso2.carbon.core.internal.permission.update.PermissionUpdater} - Permission cache updated for tenant -1234
[2022-06-06 19:45:52,226] [] INFO {org.wso2.carbon.identity.core.internal.IdentityCoreServiceComponent} - Executing Migration client : org.wso2.carbon.is.migration.MigrationClientImpl
[2022-06-06 19:45:52,243] [] INFO {org.wso2.carbon.is.migration.config.Config} - WSO2 Product Migration Service Task : Loading Migration Configs, PATH:/home/shanika/WSO2/5.12.0/MigrationTesting/dryrunreport/600/wso2is-6.0.0-m3/migration-resources/migration-config.yaml
[2022-06-06 19:45:52,288] [] ERROR {org.wso2.carbon.is.migration.MigrationClientImpl} - Migration process was stopped. while scanning a simple key
in 'reader', line 367, column 8:
reportPath:/home/shanika/WSO2/5. ...
^
could not find expected ':'
in 'reader', line 368, column 8:
# If migrating only few tenants, ...
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:558)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:224)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:199)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:153)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:199)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:153)
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:155)
at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:450)
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:410)
at org.wso2.carbon.is.migration.util.Utility.loadMigrationConfig(Utility.java:94)
at org.wso2.carbon.is.migration.config.Config.getInstance(Config.java:64)
at org.wso2.carbon.is.migration.MigrationClientImpl.execute(MigrationClientImpl.java:39)
at org.wso2.carbon.identity.core.internal.IdentityCoreServiceComponent.activate(IdentityCoreServiceComponent.java:151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:525)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:544)
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:529)
at org.wso2.carbon.core.init.CarbonServerManager.removePendingItem(CarbonServerManager.java:305)
at org.wso2.carbon.core.init.PreAxis2ConfigItemListener.bundleChanged(PreAxis2ConfigItemListener.java:118)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:973)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
[2022-06-06 19:45:52,294] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,294] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO {org.wso2.carbon.is.migration.MigrationClientImpl} - ............................................................................................
[2022-06-06 19:45:52,295] [] INFO
```
Attaching carbon.log
[wso2carbon.log](https://github.com/wso2/product-is/files/8845107/wso2carbon.log)
attaching migration-config.yaml
[yaml.zip](https://github.com/wso2/product-is/files/8845247/yaml.zip)
Needs to get the migration guide updated with the correct information if the way I have included the reportPath: is wrong
A report was not generated in the given folder path
**Env**
mssql 2019
IS 5.11.0 U2 151 to IS 6.0.0 m3
migration client v185
jdbc
java 1.8 | priority | error org carbon is migration migrationclientimpl migration process was stopped while scanning a simple key in reader line when generating dry run reports in migration how to reproduce get is updated for latest level get is pack point for mssql hostname localhost node ip base path offset username admin password admin create admin account true type database unique id url jdbc sqlserver localhost databasename sendstringparametersasunicode false username sa password driver com microsoft sqlserver jdbc sqlserverdriver maxactive maxwait minidle testonborrow true validationquery select validationinterval defaultautocommit false url jdbc sqlserver localhost databasename sendstringparametersasunicode false username sa password driver com microsoft sqlserver jdbc sqlserverdriver maxactive maxwait minidle testonborrow true validationquery select validationinterval defaultautocommit false file name jks password file name client truststore jks password type jks hash app password dashboard read only apps create some data using pack apps users roles groups scopes workflows etc get the migration resources and copy it for relavent places in migration config yaml file uncomment the paramater reportpath it was present in places uncommented from all the places and added the absolute path as below as instructed in the doc reportpath home shanika migrationtesting dryrunreport reportpath reportpath home shanika migrationtesting dryrunreport reportpath tried both above ways run the migration client with dry run sh sh dmigrate dcomponent identity ddryrun getting below exception ce super tenant info org carbon core deployment deploymentinterceptor deploying service mex ut super tenant info org carbon core deployment deploymentinterceptor deploying service mex super tenant info org carbon core init carbonservermanager repository home shanika migrationtesting dryrunreport repository deployment server info org carbon core multitenancy eager tenantloadingconfig using tenant lazy loading policy info org carbon core internal permission update permissionupdater permission cache updated for tenant info org carbon identity core internal identitycoreservicecomponent executing migration client org carbon is migration migrationclientimpl info org carbon is migration config config product migration service task loading migration configs path home shanika migrationtesting dryrunreport migration resources migration config yaml error org carbon is migration migrationclientimpl migration process was stopped while scanning a simple key in reader line column reportpath home shanika could not find expected in reader line column if migrating only few tenants at org yaml snakeyaml scanner scannerimpl stalepossiblesimplekeys scannerimpl java at org yaml snakeyaml scanner scannerimpl needmoretokens scannerimpl java at org yaml snakeyaml scanner scannerimpl checktoken scannerimpl java at org yaml snakeyaml parser parserimpl parseblockmappingkey produce parserimpl java at org yaml snakeyaml parser parserimpl peekevent parserimpl java at org yaml snakeyaml parser parserimpl checkevent parserimpl java at org yaml snakeyaml composer composer composemappingnode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composemappingnode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composesequencenode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composemappingnode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composesequencenode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composemappingnode composer java at org yaml snakeyaml composer composer composenode composer java at org yaml snakeyaml composer composer composedocument composer java at org yaml snakeyaml composer composer getsinglenode composer java at org yaml snakeyaml constructor baseconstructor getsingledata baseconstructor java at org yaml snakeyaml yaml loadfromreader yaml java at org yaml snakeyaml yaml loadas yaml java at org carbon is migration util utility loadmigrationconfig utility java at org carbon is migration config config getinstance config java at org carbon is migration migrationclientimpl execute migrationclientimpl java at org carbon identity core internal identitycoreservicecomponent activate identitycoreservicecomponent java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org eclipse equinox internal ds model servicecomponent activate servicecomponent java at org eclipse equinox internal ds model servicecomponentprop activate servicecomponentprop java at org eclipse equinox internal ds model servicecomponentprop build servicecomponentprop java at org eclipse equinox internal ds instanceprocess buildcomponent instanceprocess java at org eclipse equinox internal ds instanceprocess buildcomponents instanceprocess java at org eclipse equinox internal ds resolver geteligible resolver java at org eclipse equinox internal ds scrmanager servicechanged scrmanager java at org eclipse osgi internal serviceregistry filteredservicelistener servicechanged filteredservicelistener java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr listenerqueue dispatcheventsynchronous listenerqueue java at org eclipse osgi internal serviceregistry serviceregistry publishserviceeventprivileged serviceregistry java at org eclipse osgi internal serviceregistry serviceregistry publishserviceevent serviceregistry java at org eclipse osgi internal serviceregistry serviceregistrationimpl register serviceregistrationimpl java at org eclipse osgi internal serviceregistry serviceregistry registerservice serviceregistry java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org eclipse osgi internal framework bundlecontextimpl registerservice bundlecontextimpl java at org carbon core init carbonservermanager initializecarbon carbonservermanager java at org carbon core init carbonservermanager removependingitem carbonservermanager java at org carbon core init bundlechanged java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr eventmanager eventthread run eventmanager java info org carbon is migration migrationclientimpl info org carbon is migration migrationclientimpl info org carbon is migration migrationclientimpl info org carbon is migration migrationclientimpl info attaching carbon log attaching migration config yaml needs to get the migration guide updated with the correct information if the way i have included the reportpath is wrong a report was not generated in the given folder path env mssql is to is migration client jdbc java | 1 |
50,536 | 3,006,465,749 | IssuesEvent | 2015-07-27 10:34:42 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | New class to be added to OpenCV | auto-transferred category: highgui-images feature priority: normal | Transferred from http://code.opencv.org/issues/3810
```
|| Robby Longhorn on 2014-07-13 18:34
|| Priority: Normal
|| Affected: None
|| Category: highgui-images
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
New class to be added to OpenCV
-----------
```
I wrote a class that is very usefull for all Windows users of OpenCV.
I ask herewith to add this class to the OpenCV project in a subfolder "AddOn's for Windows".
My class solves problems that are faced by Windows users of OpenCV.
Please read my detailed description on StackOverflow where you also find the source code of my class.
http://stackoverflow.com/questions/24725155/opencv-tesseract-how-to-replace-libpng-libtiff-etc-with-gdi-bitmap-load-in
```
History
-------
##### Daniil Osokin on 2014-07-14 06:13
```
Hi, thanks for response! You can try to make a pull request (http://code.opencv.org/projects/opencv/wiki/How_to_contribute) with this (since you have the code), developers will guide you.
- Assignee set to Robby Longhorn
- Category set to highgui-images
``` | 1.0 | New class to be added to OpenCV - Transferred from http://code.opencv.org/issues/3810
```
|| Robby Longhorn on 2014-07-13 18:34
|| Priority: Normal
|| Affected: None
|| Category: highgui-images
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
New class to be added to OpenCV
-----------
```
I wrote a class that is very usefull for all Windows users of OpenCV.
I ask herewith to add this class to the OpenCV project in a subfolder "AddOn's for Windows".
My class solves problems that are faced by Windows users of OpenCV.
Please read my detailed description on StackOverflow where you also find the source code of my class.
http://stackoverflow.com/questions/24725155/opencv-tesseract-how-to-replace-libpng-libtiff-etc-with-gdi-bitmap-load-in
```
History
-------
##### Daniil Osokin on 2014-07-14 06:13
```
Hi, thanks for response! You can try to make a pull request (http://code.opencv.org/projects/opencv/wiki/How_to_contribute) with this (since you have the code), developers will guide you.
- Assignee set to Robby Longhorn
- Category set to highgui-images
``` | priority | new class to be added to opencv transferred from robby longhorn on priority normal affected none category highgui images tracker feature difficulty pr platform none none new class to be added to opencv i wrote a class that is very usefull for all windows users of opencv i ask herewith to add this class to the opencv project in a subfolder addon s for windows my class solves problems that are faced by windows users of opencv please read my detailed description on stackoverflow where you also find the source code of my class history daniil osokin on hi thanks for response you can try to make a pull request with this since you have the code developers will guide you assignee set to robby longhorn category set to highgui images | 1 |
210,584 | 7,191,251,362 | IssuesEvent | 2018-02-02 20:16:22 | Fourdee/DietPi | https://api.github.com/repos/Fourdee/DietPi | closed | DietPi-Software | Uninstalling multiple items results in endless loop | Priority High Whoopsie! bug | Whiptail uninstall.
Fix here: https://github.com/Fourdee/DietPi/issues/1416 Possibly broke this. | 1.0 | DietPi-Software | Uninstalling multiple items results in endless loop - Whiptail uninstall.
Fix here: https://github.com/Fourdee/DietPi/issues/1416 Possibly broke this. | priority | dietpi software uninstalling multiple items results in endless loop whiptail uninstall fix here possibly broke this | 1 |
623,905 | 19,682,932,550 | IssuesEvent | 2022-01-11 18:38:41 | BTAA-Geospatial-Data-Project/geomg | https://api.github.com/repos/BTAA-Geospatial-Data-Project/geomg | closed | Exports not happening | type:bug priority:high | We are not able to export CSVs or JSONs. The console log shows a websocket connection error.
### Expected behavior:
- Filter list of items to export
- Select All (optional -> All results that match this search)
- Choose an export type (CSV or JSON)
- File compiles and Notifications in the menu displays a red icon
- Navigate to Notifications and download file
### Actual behavior
- Filter list of items to export
- Select All (optional -> All results that match this search)
- Choose an export type (CSV or JSON)
- Nothing happens. Console shows multiple errors listed as `action_cable.js:241 WebSocket connection to 'wss://geomg.lib.umn.edu/cable' failed: `
@mberkowski Do you know if this is a server configuration issue? I am not sure when exports stopped working. I believe that it was functional when Eric pushed the enhancement last month. | 1.0 | Exports not happening - We are not able to export CSVs or JSONs. The console log shows a websocket connection error.
### Expected behavior:
- Filter list of items to export
- Select All (optional -> All results that match this search)
- Choose an export type (CSV or JSON)
- File compiles and Notifications in the menu displays a red icon
- Navigate to Notifications and download file
### Actual behavior
- Filter list of items to export
- Select All (optional -> All results that match this search)
- Choose an export type (CSV or JSON)
- Nothing happens. Console shows multiple errors listed as `action_cable.js:241 WebSocket connection to 'wss://geomg.lib.umn.edu/cable' failed: `
@mberkowski Do you know if this is a server configuration issue? I am not sure when exports stopped working. I believe that it was functional when Eric pushed the enhancement last month. | priority | exports not happening we are not able to export csvs or jsons the console log shows a websocket connection error expected behavior filter list of items to export select all optional all results that match this search choose an export type csv or json file compiles and notifications in the menu displays a red icon navigate to notifications and download file actual behavior filter list of items to export select all optional all results that match this search choose an export type csv or json nothing happens console shows multiple errors listed as action cable js websocket connection to wss geomg lib umn edu cable failed mberkowski do you know if this is a server configuration issue i am not sure when exports stopped working i believe that it was functional when eric pushed the enhancement last month | 1 |
720,906 | 24,810,518,747 | IssuesEvent | 2022-10-25 09:03:34 | bounswe/bounswe2022group7 | https://api.github.com/repos/bounswe/bounswe2022group7 | closed | Token Handling in the Frontend | Status: Completed Priority: High Difficulty: Hard Type: Implementation Target: Frontend | We need to manage the JWT token returned from the backend in the frontend. There are two basic functionalities:
- Block access to some pages: Visitors won't be able to access some pages such as the create art item page.
- Show different components: Liking a comment can only be done when the user is logged in.
We need to research methods of handling tokens and implement them.
Deadline: 25.10.2022, 23:59
Reviewer: @erimerkin | 1.0 | Token Handling in the Frontend - We need to manage the JWT token returned from the backend in the frontend. There are two basic functionalities:
- Block access to some pages: Visitors won't be able to access some pages such as the create art item page.
- Show different components: Liking a comment can only be done when the user is logged in.
We need to research methods of handling tokens and implement them.
Deadline: 25.10.2022, 23:59
Reviewer: @erimerkin | priority | token handling in the frontend we need to manage the jwt token returned from the backend in the frontend there are two basic functionalities block access to some pages visitors won t be able to access some pages such as the create art item page show different components liking a comment can only be done when the user is logged in we need to research methods of handling tokens and implement them deadline reviewer erimerkin | 1 |
21,069 | 2,633,024,992 | IssuesEvent | 2015-03-08 19:12:11 | gdietz/OpenMEE | https://api.github.com/repos/gdietz/OpenMEE | closed | Menu layout change | high priority | Make the randomization tests and bootstrapping an option within the meta-analysis and meta-regression menu. | 1.0 | Menu layout change - Make the randomization tests and bootstrapping an option within the meta-analysis and meta-regression menu. | priority | menu layout change make the randomization tests and bootstrapping an option within the meta analysis and meta regression menu | 1 |
99,118 | 4,047,752,765 | IssuesEvent | 2016-05-23 07:33:33 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | Dashboard: My visualizations | new feature Priority-High | This section will display the powerviews for each user/ created by each user. Similar to My organizations, My Datasets.
@yumiendo to provide a mockup | 1.0 | Dashboard: My visualizations - This section will display the powerviews for each user/ created by each user. Similar to My organizations, My Datasets.
@yumiendo to provide a mockup | priority | dashboard my visualizations this section will display the powerviews for each user created by each user similar to my organizations my datasets yumiendo to provide a mockup | 1 |
819,071 | 30,718,793,605 | IssuesEvent | 2023-07-27 14:37:08 | Green-Party-of-Canada-Members/gpc-decidim | https://api.github.com/repos/Green-Party-of-Canada-Members/gpc-decidim | closed | Official Proposals-only space? | priority-high customer feedback | So I think we had this issue before, but we again have a space where we want only official proposals:
https://wedecide.green.ca/processes/create-proposals/f/415/
Didn't we do this last time by hiding the "new proposal" button from the participant's interface? | 1.0 | Official Proposals-only space? - So I think we had this issue before, but we again have a space where we want only official proposals:
https://wedecide.green.ca/processes/create-proposals/f/415/
Didn't we do this last time by hiding the "new proposal" button from the participant's interface? | priority | official proposals only space so i think we had this issue before but we again have a space where we want only official proposals didn t we do this last time by hiding the new proposal button from the participant s interface | 1 |
325,772 | 9,935,767,472 | IssuesEvent | 2019-07-02 17:23:03 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Single post main title should be H1 and sub title should be H2 or H3 in design-1 | NEED FAST REVIEW [Priority: HIGH] bug | Ref:https://secure.helpscout.net/conversation/882175727/71091?folderId=2770543
Here you see the screenshot regarding this :
1.https://monosnap.com/file/3ldXCUFCNlDTChuOTIDO22XuztEEb4
| 1.0 | Single post main title should be H1 and sub title should be H2 or H3 in design-1 - Ref:https://secure.helpscout.net/conversation/882175727/71091?folderId=2770543
Here you see the screenshot regarding this :
1.https://monosnap.com/file/3ldXCUFCNlDTChuOTIDO22XuztEEb4
| priority | single post main title should be and sub title should be or in design ref here you see the screenshot regarding this | 1 |
559,336 | 16,556,173,694 | IssuesEvent | 2021-05-28 14:11:02 | Team-uMigrate/umigrate | https://api.github.com/repos/Team-uMigrate/umigrate | closed | API: Combine common API view decorators | easy high priority | We tend to use the same 6 class decorators over ever model view set class, with the only difference being the list of strings representing the tags.

We should create a new class decorator that takes in a parameter representing the list of tags, and applies all 6 of these decorators to the class.
We can then repeat for these 2 decorators as well.

To create a decorator in python, checkout https://www.geeksforgeeks.org/chain-multiple-decorators-in-python/ | 1.0 | API: Combine common API view decorators - We tend to use the same 6 class decorators over ever model view set class, with the only difference being the list of strings representing the tags.

We should create a new class decorator that takes in a parameter representing the list of tags, and applies all 6 of these decorators to the class.
We can then repeat for these 2 decorators as well.

To create a decorator in python, checkout https://www.geeksforgeeks.org/chain-multiple-decorators-in-python/ | priority | api combine common api view decorators we tend to use the same class decorators over ever model view set class with the only difference being the list of strings representing the tags we should create a new class decorator that takes in a parameter representing the list of tags and applies all of these decorators to the class we can then repeat for these decorators as well to create a decorator in python checkout | 1 |
747,739 | 26,096,977,380 | IssuesEvent | 2022-12-26 21:56:26 | bounswe/bounswe2022group6 | https://api.github.com/repos/bounswe/bounswe2022group6 | closed | Implementing Post Edit Functionality Backend Integration | Priority: High State: In Progress Frontend | Necessary endpoints from the backend needs to be integrated to frontend for post edit functionality. Edit button should be on the post's detail page and only post owner can edit her post.
Deadline: 26.12.2022 | 1.0 | Implementing Post Edit Functionality Backend Integration - Necessary endpoints from the backend needs to be integrated to frontend for post edit functionality. Edit button should be on the post's detail page and only post owner can edit her post.
Deadline: 26.12.2022 | priority | implementing post edit functionality backend integration necessary endpoints from the backend needs to be integrated to frontend for post edit functionality edit button should be on the post s detail page and only post owner can edit her post deadline | 1 |
783,647 | 27,539,842,120 | IssuesEvent | 2023-03-07 07:42:53 | teambit/bit | https://api.github.com/repos/teambit/bit | closed | Inconsistency when importing dependency with a different version | type/bug priority/high area/import | Assuming there are two components `is-string` and `is-type`. Both have versions: 0.0.1 and 0.0.2. `is-string` depends on `is-type`.
### Scenario 1: importing `is-string@0.0.1`, then importing `is-type@0.0.2`
So, the dependency `is-type` is imported with a newer version after the workspace has the dependent.
**installing dependencies as components**
status is modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.2.
Filesystem: is-string uses is-type@0.0.2. (nested is-type@0.0.1 is deleted).
**installing dependencies as NPM packages**
status is modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.2.
Filesystem: is-string uses is-type@0.0.1. (package is-type@0.0.1 is nested inside is-string/node_modules).
This is a bug. It physically uses v0.0.1 from the FS. However, bit-show and bit-status display it as if it uses v0.0.2. (changing package.json of is-string to have is-type@0.0.1 instead of the relative path, fixes it to have them all as 0.0.1).
### Scenario 2: importing `is-type@0.0.2`, then importing `is-string@0.0.1`
**installing dependencies as components**
status is not modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.1.
Filesystem: is-string uses is-type@0.0.1.
**installing dependencies as NPM packages**
status is not modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.1.
Filesystem: is-string uses is-type@0.0.1. (package is-type@0.0.1 is nested inside is-string/node_modules).
is-string package.json has is-type with the version 0.0.1.
### Scenario 3: importing `is-string@0.0.1` and `is-type@0.0.2` at the same time
(e.g. `bit import is-string@0.0.1 is-type@0.0.2`).
Behavior is exactly the same as scenario 2 for both, local and NPM.
For this scenario, the order doesn't matter. it works the same for `bit import is-type@0.0.2 is-string@0.0.1`
### Conclusion
1. Scenario 1 is not consistent between installing as packages and as components. Installing as packages, it keeps using the older dependency, however, if it installs as components it uses the newly imported dependency.
1. Scenario 1 and scenario 2 are not consistent. It matters whether importing the dependency first then the dependents or the vice versa. It might be fine and desirable but not sure if it's intuitive.
1. there is a bug in scenario 1, when installing as packages, see there the details.
GiladShoham , itaymendel , it needs your output.
| 1.0 | Inconsistency when importing dependency with a different version - Assuming there are two components `is-string` and `is-type`. Both have versions: 0.0.1 and 0.0.2. `is-string` depends on `is-type`.
### Scenario 1: importing `is-string@0.0.1`, then importing `is-type@0.0.2`
So, the dependency `is-type` is imported with a newer version after the workspace has the dependent.
**installing dependencies as components**
status is modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.2.
Filesystem: is-string uses is-type@0.0.2. (nested is-type@0.0.1 is deleted).
**installing dependencies as NPM packages**
status is modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.2.
Filesystem: is-string uses is-type@0.0.1. (package is-type@0.0.1 is nested inside is-string/node_modules).
This is a bug. It physically uses v0.0.1 from the FS. However, bit-show and bit-status display it as if it uses v0.0.2. (changing package.json of is-string to have is-type@0.0.1 instead of the relative path, fixes it to have them all as 0.0.1).
### Scenario 2: importing `is-type@0.0.2`, then importing `is-string@0.0.1`
**installing dependencies as components**
status is not modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.1.
Filesystem: is-string uses is-type@0.0.1.
**installing dependencies as NPM packages**
status is not modified.
bit-show of `is-string` shows `is-type` dependency as 0.0.1.
Filesystem: is-string uses is-type@0.0.1. (package is-type@0.0.1 is nested inside is-string/node_modules).
is-string package.json has is-type with the version 0.0.1.
### Scenario 3: importing `is-string@0.0.1` and `is-type@0.0.2` at the same time
(e.g. `bit import is-string@0.0.1 is-type@0.0.2`).
Behavior is exactly the same as scenario 2 for both, local and NPM.
For this scenario, the order doesn't matter. it works the same for `bit import is-type@0.0.2 is-string@0.0.1`
### Conclusion
1. Scenario 1 is not consistent between installing as packages and as components. Installing as packages, it keeps using the older dependency, however, if it installs as components it uses the newly imported dependency.
1. Scenario 1 and scenario 2 are not consistent. It matters whether importing the dependency first then the dependents or the vice versa. It might be fine and desirable but not sure if it's intuitive.
1. there is a bug in scenario 1, when installing as packages, see there the details.
GiladShoham , itaymendel , it needs your output.
| priority | inconsistency when importing dependency with a different version assuming there are two components is string and is type both have versions and is string depends on is type scenario importing is string then importing is type so the dependency is type is imported with a newer version after the workspace has the dependent installing dependencies as components status is modified bit show of is string shows is type dependency as filesystem is string uses is type nested is type is deleted installing dependencies as npm packages status is modified bit show of is string shows is type dependency as filesystem is string uses is type package is type is nested inside is string node modules this is a bug it physically uses from the fs however bit show and bit status display it as if it uses changing package json of is string to have is type instead of the relative path fixes it to have them all as scenario importing is type then importing is string installing dependencies as components status is not modified bit show of is string shows is type dependency as filesystem is string uses is type installing dependencies as npm packages status is not modified bit show of is string shows is type dependency as filesystem is string uses is type package is type is nested inside is string node modules is string package json has is type with the version scenario importing is string and is type at the same time e g bit import is string is type behavior is exactly the same as scenario for both local and npm for this scenario the order doesn t matter it works the same for bit import is type is string conclusion scenario is not consistent between installing as packages and as components installing as packages it keeps using the older dependency however if it installs as components it uses the newly imported dependency scenario and scenario are not consistent it matters whether importing the dependency first then the dependents or the vice versa it might be fine and desirable but not sure if it s intuitive there is a bug in scenario when installing as packages see there the details giladshoham itaymendel it needs your output | 1 |
66,251 | 3,251,439,916 | IssuesEvent | 2015-10-19 09:48:35 | cs2103aug2015-w15-3j/main | https://api.github.com/repos/cs2103aug2015-w15-3j/main | closed | parser doesn't catch invalid inputs like "add" "delete" that has no more arguments | priority.high type.bug | e.g. when user simply types in "add", "edit" or "add " (with a space only) etc, console returns nullpointerexception instead of raijin throwing an illegalcommandargument. | 1.0 | parser doesn't catch invalid inputs like "add" "delete" that has no more arguments - e.g. when user simply types in "add", "edit" or "add " (with a space only) etc, console returns nullpointerexception instead of raijin throwing an illegalcommandargument. | priority | parser doesn t catch invalid inputs like add delete that has no more arguments e g when user simply types in add edit or add with a space only etc console returns nullpointerexception instead of raijin throwing an illegalcommandargument | 1 |
766,573 | 26,889,521,193 | IssuesEvent | 2023-02-06 07:43:07 | ltelab/disdrodb | https://api.github.com/repos/ltelab/disdrodb | closed | [FEATURE] Avoid removal of L0A data if --l0a_processing=False --force=True | high priority | **Is your feature request related to a problem? Please describe.**
A separate generation of L0A and L0B products is currently not possible if `--force=True`
To do so I need to run the reader 2 times with the following arguments:
- first processing step: ` --l0a_processing=True --l0b_processing=False`
- second processing step: ` --l0a_processing=False --l0b_processing=True`
If `--force=True` is in the second step, the current code will remove and recreate the entire campaign directory.
As a consequence, the software raises an error because L0A data are not available anymore as input to L0B processing.
**Describe the solution you'd like**
Therefore, in case `--force=True` and `l0a_processing=False` I guess we need to change the behavior of the code to NOT delete and recreate the campaign directory, but instead to remove the content of the L0B directory !!!
| 1.0 | [FEATURE] Avoid removal of L0A data if --l0a_processing=False --force=True - **Is your feature request related to a problem? Please describe.**
A separate generation of L0A and L0B products is currently not possible if `--force=True`
To do so I need to run the reader 2 times with the following arguments:
- first processing step: ` --l0a_processing=True --l0b_processing=False`
- second processing step: ` --l0a_processing=False --l0b_processing=True`
If `--force=True` is in the second step, the current code will remove and recreate the entire campaign directory.
As a consequence, the software raises an error because L0A data are not available anymore as input to L0B processing.
**Describe the solution you'd like**
Therefore, in case `--force=True` and `l0a_processing=False` I guess we need to change the behavior of the code to NOT delete and recreate the campaign directory, but instead to remove the content of the L0B directory !!!
| priority | avoid removal of data if processing false force true is your feature request related to a problem please describe a separate generation of and products is currently not possible if force true to do so i need to run the reader times with the following arguments first processing step processing true processing false second processing step processing false processing true if force true is in the second step the current code will remove and recreate the entire campaign directory as a consequence the software raises an error because data are not available anymore as input to processing describe the solution you d like therefore in case force true and processing false i guess we need to change the behavior of the code to not delete and recreate the campaign directory but instead to remove the content of the directory | 1 |
127,012 | 5,011,558,975 | IssuesEvent | 2016-12-13 08:21:56 | AnotherCodeArtist/medien-transparenz.at | https://api.github.com/repos/AnotherCodeArtist/medien-transparenz.at | closed | Aggregate Organizations/Media to Groups | feature request high priority needs refinement | - It should be possible to combine different organizations ("Stadt Wien", "Wiener Linien", ...) to groups. As a result such a group should be rendered as if it was single organization. The same thing should also work for media
- Every single user (which does not require registration or login) should be able to create such groups (they could be store in the browser local storage so that are at least partially persistent).
- If a member (=registered user) creates such groups, it should be possible to share them with other members or the general public (might require some staging mechanism).
- Generally these groups should be available on all pages | 1.0 | Aggregate Organizations/Media to Groups - - It should be possible to combine different organizations ("Stadt Wien", "Wiener Linien", ...) to groups. As a result such a group should be rendered as if it was single organization. The same thing should also work for media
- Every single user (which does not require registration or login) should be able to create such groups (they could be store in the browser local storage so that are at least partially persistent).
- If a member (=registered user) creates such groups, it should be possible to share them with other members or the general public (might require some staging mechanism).
- Generally these groups should be available on all pages | priority | aggregate organizations media to groups it should be possible to combine different organizations stadt wien wiener linien to groups as a result such a group should be rendered as if it was single organization the same thing should also work for media every single user which does not require registration or login should be able to create such groups they could be store in the browser local storage so that are at least partially persistent if a member registered user creates such groups it should be possible to share them with other members or the general public might require some staging mechanism generally these groups should be available on all pages | 1 |
452,742 | 13,058,784,608 | IssuesEvent | 2020-07-30 09:34:00 | projectdissolve/dissolve | https://api.github.com/repos/projectdissolve/dissolve | opened | Epic / 0.7 / Handle X-Ray Data | Priority: High | ### Focus
Implement calculation of X-ray structure factors and incorporation into refinement process
### Workload
- [ ] #42
- [ ] #321
- [ ] #322 | 1.0 | Epic / 0.7 / Handle X-Ray Data - ### Focus
Implement calculation of X-ray structure factors and incorporation into refinement process
### Workload
- [ ] #42
- [ ] #321
- [ ] #322 | priority | epic handle x ray data focus implement calculation of x ray structure factors and incorporation into refinement process workload | 1 |
369,560 | 10,914,706,324 | IssuesEvent | 2019-11-21 09:43:25 | charleskorn/batect | https://api.github.com/repos/charleskorn/batect | closed | One-off tasks against a dependent container | is:enhancement priority:high state:work in progress | I have a service and a local S3 instance. I need to run an aws-cli command to create a bucket before my app can run. What I would like to do is:
- Launch the local S3
- Run aws-cli command to create a bucket
- Launch my local server
How I’ve attempted it:
```yaml
containers:
s3: ...
aws-cli: ...
service: ...
tasks:
start:
container: service
dependencies:
- s3
prerequisites:
- createS3Bucket
createS3Bucket:
container: aws-cli
run:
command: aws s3 mb s3://example
```
This doesn't work because the prerequisites are executed before the task's dependencies are started.
Then I tried adding the S3 container as a dependency to the `createS3Bucket` task:
```yaml
containers:
s3: ...
aws-cli: ...
service: ...
tasks:
start:
container: service
dependencies:
- s3
prerequisites:
- createS3Bucket
createS3Bucket:
container: aws-cli
dependencies:
- s3
run:
command: aws s3 mb s3://example
```
This also doesn't work, because when `createS3Bucket` is complete it tears down the `s3` container, and then `start` spins up a new one.
It'd be great if Batect had a way to model this flow. | 1.0 | One-off tasks against a dependent container - I have a service and a local S3 instance. I need to run an aws-cli command to create a bucket before my app can run. What I would like to do is:
- Launch the local S3
- Run aws-cli command to create a bucket
- Launch my local server
How I’ve attempted it:
```yaml
containers:
s3: ...
aws-cli: ...
service: ...
tasks:
start:
container: service
dependencies:
- s3
prerequisites:
- createS3Bucket
createS3Bucket:
container: aws-cli
run:
command: aws s3 mb s3://example
```
This doesn't work because the prerequisites are executed before the task's dependencies are started.
Then I tried adding the S3 container as a dependency to the `createS3Bucket` task:
```yaml
containers:
s3: ...
aws-cli: ...
service: ...
tasks:
start:
container: service
dependencies:
- s3
prerequisites:
- createS3Bucket
createS3Bucket:
container: aws-cli
dependencies:
- s3
run:
command: aws s3 mb s3://example
```
This also doesn't work, because when `createS3Bucket` is complete it tears down the `s3` container, and then `start` spins up a new one.
It'd be great if Batect had a way to model this flow. | priority | one off tasks against a dependent container i have a service and a local instance i need to run an aws cli command to create a bucket before my app can run what i would like to do is launch the local run aws cli command to create a bucket launch my local server how i’ve attempted it yaml containers aws cli service tasks start container service dependencies prerequisites container aws cli run command aws mb example this doesn t work because the prerequisites are executed before the task s dependencies are started then i tried adding the container as a dependency to the task yaml containers aws cli service tasks start container service dependencies prerequisites container aws cli dependencies run command aws mb example this also doesn t work because when is complete it tears down the container and then start spins up a new one it d be great if batect had a way to model this flow | 1 |
123,921 | 4,882,690,156 | IssuesEvent | 2016-11-17 10:14:02 | brian-team/brian2 | https://api.github.com/repos/brian-team/brian2 | closed | `SpikeMonitor` recording from subgroups on weave can record spikes it shouldn't record | bug high priority | Argh, I just found another bug, very similar to #772. This time it is about `SpikeMonitor` and it also affects recording from subgroups. To trigger it you need a similar situation as for #772: record from a subgroup and have no neuron in that subgroup spike during that time step. However, you also need to:
* use the weave target (in contrast to #772, Cython and C++ standalone are not affected)
* record from a subgroup in the middle of a group
Also, the error should have been pretty obvious, except when only asking the `SpikeMonitor` for its total number of spikes: the erroneously recorded spikes have indices that are outside of the range of the subgroup. For example, with `SpikeMonitor(group[100:200])` you should get indices in the range `[0, 100[`, but due to this bug you could record spikes with indices `>=100`.
The fix is trivial, I'll open a PR right away. | 1.0 | `SpikeMonitor` recording from subgroups on weave can record spikes it shouldn't record - Argh, I just found another bug, very similar to #772. This time it is about `SpikeMonitor` and it also affects recording from subgroups. To trigger it you need a similar situation as for #772: record from a subgroup and have no neuron in that subgroup spike during that time step. However, you also need to:
* use the weave target (in contrast to #772, Cython and C++ standalone are not affected)
* record from a subgroup in the middle of a group
Also, the error should have been pretty obvious, except when only asking the `SpikeMonitor` for its total number of spikes: the erroneously recorded spikes have indices that are outside of the range of the subgroup. For example, with `SpikeMonitor(group[100:200])` you should get indices in the range `[0, 100[`, but due to this bug you could record spikes with indices `>=100`.
The fix is trivial, I'll open a PR right away. | priority | spikemonitor recording from subgroups on weave can record spikes it shouldn t record argh i just found another bug very similar to this time it is about spikemonitor and it also affects recording from subgroups to trigger it you need a similar situation as for record from a subgroup and have no neuron in that subgroup spike during that time step however you also need to use the weave target in contrast to cython and c standalone are not affected record from a subgroup in the middle of a group also the error should have been pretty obvious except when only asking the spikemonitor for its total number of spikes the erroneously recorded spikes have indices that are outside of the range of the subgroup for example with spikemonitor group you should get indices in the range but due to this bug you could record spikes with indices the fix is trivial i ll open a pr right away | 1 |
461,290 | 13,228,083,384 | IssuesEvent | 2020-08-18 05:14:46 | moonwards1/Moonwards-Virtual-Moon | https://api.github.com/repos/moonwards1/Moonwards-Virtual-Moon | closed | Create Road material | Department: Graphics/GFX Priority: High Type: Feature | This material would also be used for `Airlock_Bay > Bay_Grounds` and `Spaceport > Road` and `Spaceport > Terminal_Tarmac`
This material is sort of a concrete - it was laid down as molten rock with chunks and chips of other kinds of rocks in it. It ends up looking a lot like very worn asphalt. Except it should be much lighter in color. (Unlike the current version.)
Pinterest page:
https://www.pinterest.com.mx/holder3884/road-surface/ | 1.0 | Create Road material - This material would also be used for `Airlock_Bay > Bay_Grounds` and `Spaceport > Road` and `Spaceport > Terminal_Tarmac`
This material is sort of a concrete - it was laid down as molten rock with chunks and chips of other kinds of rocks in it. It ends up looking a lot like very worn asphalt. Except it should be much lighter in color. (Unlike the current version.)
Pinterest page:
https://www.pinterest.com.mx/holder3884/road-surface/ | priority | create road material this material would also be used for airlock bay bay grounds and spaceport road and spaceport terminal tarmac this material is sort of a concrete it was laid down as molten rock with chunks and chips of other kinds of rocks in it it ends up looking a lot like very worn asphalt except it should be much lighter in color unlike the current version pinterest page | 1 |
689,135 | 23,609,240,905 | IssuesEvent | 2022-08-24 10:55:59 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Investigate DBS ValidStatus spec parameter and add validation function | New Feature Operations WMAgent High Priority ReqMgr2 | **Impact of the new feature**
ReqMgr2 (and WMAgent)
**Is your feature request related to a problem? Please describe.**
As discussed over the last few days, people are wondering if WMAgent could inject data into DBS with a `VALID` status (instead of the default `PRODUCTION` status). This capability would be especially important for the growing Nano workflows.
**Describe the solution you'd like**
A few actions/deliverables must be done with this ticket:
* first, test whether the StdBase `ValidStatus` creation parameter is fully functional (i.e., accepting a value provided during creation and propagating it all the way to the agent and DBS3Upload component, ultimately injecting a dataset into DBS with status: VALID).
* if it works, we should add a validation function which will ensure users provide a sound value. List of allowed values should be (PRODUCTION, VALID).
**Describe alternatives you've considered**
None
**Additional context**
None
| 1.0 | Investigate DBS ValidStatus spec parameter and add validation function - **Impact of the new feature**
ReqMgr2 (and WMAgent)
**Is your feature request related to a problem? Please describe.**
As discussed over the last few days, people are wondering if WMAgent could inject data into DBS with a `VALID` status (instead of the default `PRODUCTION` status). This capability would be especially important for the growing Nano workflows.
**Describe the solution you'd like**
A few actions/deliverables must be done with this ticket:
* first, test whether the StdBase `ValidStatus` creation parameter is fully functional (i.e., accepting a value provided during creation and propagating it all the way to the agent and DBS3Upload component, ultimately injecting a dataset into DBS with status: VALID).
* if it works, we should add a validation function which will ensure users provide a sound value. List of allowed values should be (PRODUCTION, VALID).
**Describe alternatives you've considered**
None
**Additional context**
None
| priority | investigate dbs validstatus spec parameter and add validation function impact of the new feature and wmagent is your feature request related to a problem please describe as discussed over the last few days people are wondering if wmagent could inject data into dbs with a valid status instead of the default production status this capability would be especially important for the growing nano workflows describe the solution you d like a few actions deliverables must be done with this ticket first test whether the stdbase validstatus creation parameter is fully functional i e accepting a value provided during creation and propagating it all the way to the agent and component ultimately injecting a dataset into dbs with status valid if it works we should add a validation function which will ensure users provide a sound value list of allowed values should be production valid describe alternatives you ve considered none additional context none | 1 |
441,243 | 12,709,778,398 | IssuesEvent | 2020-06-23 12:55:34 | RonAsis/Wsep202 | https://api.github.com/repos/RonAsis/Wsep202 | opened | fix approveOwner(..) in Store | High priority bug | approveOwner returns true when the owner who's trying to approve is not really an owner.
[tested in approveOwnerNegative() in storeTest]
the solution:
public boolean approveOwner(UserSystem ownerUser, String ownerToApprove, boolean status) {
appointingAgreements.stream()
.filter(appointingAgreement -> appointingAgreement.getNewOwner().getUserName().equals(ownerToApprove))
.findFirst()
.map(appointingAgreement -> {
appointingAgreement.changeApproval(ownerUser.getUserName(), status ? StatusOwner.APPROVE : StatusOwner.NOT_APPROVE);
ownerUser.removeAgreement(storeId, ownerToApprove);
isApproveOwner(appointingAgreement.getNewOwner());
log.info("The owner: "+ownerUser.getUserName()+" approved: "+ownerToApprove+"" +
"with status: "+status);
return true;
});
return false;
} | 1.0 | fix approveOwner(..) in Store - approveOwner returns true when the owner who's trying to approve is not really an owner.
[tested in approveOwnerNegative() in storeTest]
the solution:
public boolean approveOwner(UserSystem ownerUser, String ownerToApprove, boolean status) {
appointingAgreements.stream()
.filter(appointingAgreement -> appointingAgreement.getNewOwner().getUserName().equals(ownerToApprove))
.findFirst()
.map(appointingAgreement -> {
appointingAgreement.changeApproval(ownerUser.getUserName(), status ? StatusOwner.APPROVE : StatusOwner.NOT_APPROVE);
ownerUser.removeAgreement(storeId, ownerToApprove);
isApproveOwner(appointingAgreement.getNewOwner());
log.info("The owner: "+ownerUser.getUserName()+" approved: "+ownerToApprove+"" +
"with status: "+status);
return true;
});
return false;
} | priority | fix approveowner in store approveowner returns true when the owner who s trying to approve is not really an owner the solution public boolean approveowner usersystem owneruser string ownertoapprove boolean status appointingagreements stream filter appointingagreement appointingagreement getnewowner getusername equals ownertoapprove findfirst map appointingagreement appointingagreement changeapproval owneruser getusername status statusowner approve statusowner not approve owneruser removeagreement storeid ownertoapprove isapproveowner appointingagreement getnewowner log info the owner owneruser getusername approved ownertoapprove with status status return true return false | 1 |
422,139 | 12,266,738,504 | IssuesEvent | 2020-05-07 09:27:50 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Private messaging to a user keeps adding some random users to the chat. | bug can't reproduce component: messages priority: high | **Describe the bug**
Private messaging to a user keeps adding some random users to the chat. This has something to do with the connection option "Require users to be connected before they can message each other".
I have received three tickets for this issue and all the users said that they had this option disabled first and then enabled it lately. That is causing the issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Profile > Messages
2. Compose a personal message to one of your connections
3. See error
**Expected behavior**
Private messaging should not add some random users to the chat.
**Screenshots**
- https://prnt.sc/s2hpdo
**Support ticket links**
- https://secure.helpscout.net/conversation/1154852884/71818/
- https://secure.helpscout.net/conversation/1135218417/68405/
| 1.0 | Private messaging to a user keeps adding some random users to the chat. - **Describe the bug**
Private messaging to a user keeps adding some random users to the chat. This has something to do with the connection option "Require users to be connected before they can message each other".
I have received three tickets for this issue and all the users said that they had this option disabled first and then enabled it lately. That is causing the issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Profile > Messages
2. Compose a personal message to one of your connections
3. See error
**Expected behavior**
Private messaging should not add some random users to the chat.
**Screenshots**
- https://prnt.sc/s2hpdo
**Support ticket links**
- https://secure.helpscout.net/conversation/1154852884/71818/
- https://secure.helpscout.net/conversation/1135218417/68405/
| priority | private messaging to a user keeps adding some random users to the chat describe the bug private messaging to a user keeps adding some random users to the chat this has something to do with the connection option require users to be connected before they can message each other i have received three tickets for this issue and all the users said that they had this option disabled first and then enabled it lately that is causing the issue to reproduce steps to reproduce the behavior go to profile messages compose a personal message to one of your connections see error expected behavior private messaging should not add some random users to the chat screenshots support ticket links | 1 |
318,199 | 9,683,217,598 | IssuesEvent | 2019-05-23 10:57:48 | 0xProject/0x-launch-kit-frontend | https://api.github.com/repos/0xProject/0x-launch-kit-frontend | closed | Metamask shouldn't prompt after every refresh | high priority wontfix | Every time the page reloads, it shows the user the metamask connection prompt.
<img width="428" alt="Screen Shot 2019-05-21 at 5 26 43 PM" src="https://user-images.githubusercontent.com/29830192/58139002-a6f43a80-7bed-11e9-8268-872ccf43da99.png">
| 1.0 | Metamask shouldn't prompt after every refresh - Every time the page reloads, it shows the user the metamask connection prompt.
<img width="428" alt="Screen Shot 2019-05-21 at 5 26 43 PM" src="https://user-images.githubusercontent.com/29830192/58139002-a6f43a80-7bed-11e9-8268-872ccf43da99.png">
| priority | metamask shouldn t prompt after every refresh every time the page reloads it shows the user the metamask connection prompt img width alt screen shot at pm src | 1 |
185,235 | 6,720,201,141 | IssuesEvent | 2017-10-16 06:39:01 | CS2103AUG2017-T15-B2/main | https://api.github.com/repos/CS2103AUG2017-T15-B2/main | opened | As a user I want to delete multiple persons at once | priority.high type.story | ... so that I can avoid going through the hassle of removing one at each time | 1.0 | As a user I want to delete multiple persons at once - ... so that I can avoid going through the hassle of removing one at each time | priority | as a user i want to delete multiple persons at once so that i can avoid going through the hassle of removing one at each time | 1 |
713,756 | 24,538,493,853 | IssuesEvent | 2022-10-11 23:51:14 | Couchers-org/web-frontend | https://api.github.com/repos/Couchers-org/web-frontend | closed | "guest" and "friend" pills on /profile/references are swapped | bug good first issue priority: high good value | It looks like the coding for "guest" and "friend" pills on the references page are swapped. "Friend" is displayed when "guest" should be displayed, and "guest" is displayed when "friend" should be displayed. | 1.0 | "guest" and "friend" pills on /profile/references are swapped - It looks like the coding for "guest" and "friend" pills on the references page are swapped. "Friend" is displayed when "guest" should be displayed, and "guest" is displayed when "friend" should be displayed. | priority | guest and friend pills on profile references are swapped it looks like the coding for guest and friend pills on the references page are swapped friend is displayed when guest should be displayed and guest is displayed when friend should be displayed | 1 |
612,518 | 19,024,362,325 | IssuesEvent | 2021-11-24 00:20:24 | AgoraCloud/ui-edu | https://api.github.com/repos/AgoraCloud/ui-edu | closed | Update and Delete Existing Workstations If the User Is an Admin | enhancement priority:high SEG4105 | # Overview
In this issue, we will allow teachers and IT admins (both termed admins) to create update and delete existing student workstations via the UI. This will be accomplished by adding a menu for each workstation entry in the workstation table.
When the menu icon is clicked, a dropdown will appear containing `Update` and `Delete` options.
- If the user clicks on `Delete`, the user will have to confirm, via a pop up modal, whether they would like to delete the deployment. If the user confirms, the UI will invoke the `DELETE` workstations API to delete the workstaiton.
- If the user clicks on `Update`, the user will be taken to a page, similar to the create workstation page, where the user can update the workstation information.
# To Do
- [x] When the delete button is clicked, I should receive a confirmation modal that prompts me to confirm whether I really want to delete this workstation or not (`CANCEL`). If `DELETE` is clicked, a DELETE API request should be made
- [x] When the edit button is clicked, I should be routed to `/ws/<workstation_id>edit` page where I can edit the appropriate workstation fields. I can click `UPDATE` to update the information (PUT request) or `CANCEL` to abandon any changes | 1.0 | Update and Delete Existing Workstations If the User Is an Admin - # Overview
In this issue, we will allow teachers and IT admins (both termed admins) to create update and delete existing student workstations via the UI. This will be accomplished by adding a menu for each workstation entry in the workstation table.
When the menu icon is clicked, a dropdown will appear containing `Update` and `Delete` options.
- If the user clicks on `Delete`, the user will have to confirm, via a pop up modal, whether they would like to delete the deployment. If the user confirms, the UI will invoke the `DELETE` workstations API to delete the workstaiton.
- If the user clicks on `Update`, the user will be taken to a page, similar to the create workstation page, where the user can update the workstation information.
# To Do
- [x] When the delete button is clicked, I should receive a confirmation modal that prompts me to confirm whether I really want to delete this workstation or not (`CANCEL`). If `DELETE` is clicked, a DELETE API request should be made
- [x] When the edit button is clicked, I should be routed to `/ws/<workstation_id>edit` page where I can edit the appropriate workstation fields. I can click `UPDATE` to update the information (PUT request) or `CANCEL` to abandon any changes | priority | update and delete existing workstations if the user is an admin overview in this issue we will allow teachers and it admins both termed admins to create update and delete existing student workstations via the ui this will be accomplished by adding a menu for each workstation entry in the workstation table when the menu icon is clicked a dropdown will appear containing update and delete options if the user clicks on delete the user will have to confirm via a pop up modal whether they would like to delete the deployment if the user confirms the ui will invoke the delete workstations api to delete the workstaiton if the user clicks on update the user will be taken to a page similar to the create workstation page where the user can update the workstation information to do when the delete button is clicked i should receive a confirmation modal that prompts me to confirm whether i really want to delete this workstation or not cancel if delete is clicked a delete api request should be made when the edit button is clicked i should be routed to ws edit page where i can edit the appropriate workstation fields i can click update to update the information put request or cancel to abandon any changes | 1 |
661,720 | 22,066,573,819 | IssuesEvent | 2022-05-31 04:24:52 | Knowledge-Management-Capstone/knowledge-management-dashboard | https://api.github.com/repos/Knowledge-Management-Capstone/knowledge-management-dashboard | closed | KMND-42 Migrate to Vite | type:bug estimated-sp:2 priority:high | ## Description
- some `Dialog` from `@headlessui/react` components doesn't work with `create-react-app`
- `vite` is faster and fully supports `vitest` for unit testing | 1.0 | KMND-42 Migrate to Vite - ## Description
- some `Dialog` from `@headlessui/react` components doesn't work with `create-react-app`
- `vite` is faster and fully supports `vitest` for unit testing | priority | kmnd migrate to vite description some dialog from headlessui react components doesn t work with create react app vite is faster and fully supports vitest for unit testing | 1 |
779,881 | 27,370,047,305 | IssuesEvent | 2023-02-27 22:33:14 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | reopened | [PTR][Dungeon] Deadmines Heroic lvl 85 | Dungeon Gameobject NPC Waypoints Priority: High Status: Confirmed | [**How to reproduce:**
1. Glubtok is not using his blink ability at all also damages of fires wall around him on second phase is inconsistent sometimes does 50k+ 2-3 times back to back and cause players to die instantly that should not happen it should be per sec

2. olaf smash should take away 60% of your hp on heroic and it does no damage
his grab is not working properly sometimes he doesnt grab anything just does the move after killing the olaf helix didnt cast leap to jump on people, he didnt cast heli'x cew as well adding 4 additional members to the guys that drop bombs ( 8 crew members in total)
also this https://github.com/gamefreedomgit/Maelstrom/issues/1869
3. During vanessa vancleef event fires should not be attackable, also looks like jumping through them makes you to not take damage, might be related to the parachute that we get on cauldron and on next area worgons were not fighting humans

also in this event ship is not on fire at all, fire should get more and more over time and with explosion, there are some animation bugs with rope usage as well,
you can see the whole fight here it should work like the video below.
https://www.youtube.com/watch?v=tkNxQ176Bhs
4. spell harvest on foe reaper 5000 hitting you behind the boss as well by the look of it
5. blackspots where cannons hit on last area don't have a black spot it should be visible for the players (i could only see them with gm on) https://youtu.be/5JuLT_qkGRg?t=419

| 1.0 | [PTR][Dungeon] Deadmines Heroic lvl 85 - [**How to reproduce:**
1. Glubtok is not using his blink ability at all also damages of fires wall around him on second phase is inconsistent sometimes does 50k+ 2-3 times back to back and cause players to die instantly that should not happen it should be per sec

2. olaf smash should take away 60% of your hp on heroic and it does no damage
his grab is not working properly sometimes he doesnt grab anything just does the move after killing the olaf helix didnt cast leap to jump on people, he didnt cast heli'x cew as well adding 4 additional members to the guys that drop bombs ( 8 crew members in total)
also this https://github.com/gamefreedomgit/Maelstrom/issues/1869
3. During vanessa vancleef event fires should not be attackable, also looks like jumping through them makes you to not take damage, might be related to the parachute that we get on cauldron and on next area worgons were not fighting humans

also in this event ship is not on fire at all, fire should get more and more over time and with explosion, there are some animation bugs with rope usage as well,
you can see the whole fight here it should work like the video below.
https://www.youtube.com/watch?v=tkNxQ176Bhs
4. spell harvest on foe reaper 5000 hitting you behind the boss as well by the look of it
5. blackspots where cannons hit on last area don't have a black spot it should be visible for the players (i could only see them with gm on) https://youtu.be/5JuLT_qkGRg?t=419

| priority | deadmines heroic lvl how to reproduce glubtok is not using his blink ability at all also damages of fires wall around him on second phase is inconsistent sometimes does times back to back and cause players to die instantly that should not happen it should be per sec olaf smash should take away of your hp on heroic and it does no damage his grab is not working properly sometimes he doesnt grab anything just does the move after killing the olaf helix didnt cast leap to jump on people he didnt cast heli x cew as well adding additional members to the guys that drop bombs crew members in total also this during vanessa vancleef event fires should not be attackable also looks like jumping through them makes you to not take damage might be related to the parachute that we get on cauldron and on next area worgons were not fighting humans also in this event ship is not on fire at all fire should get more and more over time and with explosion there are some animation bugs with rope usage as well you can see the whole fight here it should work like the video below spell harvest on foe reaper hitting you behind the boss as well by the look of it blackspots where cannons hit on last area don t have a black spot it should be visible for the players i could only see them with gm on | 1 |
6,011 | 2,582,234,178 | IssuesEvent | 2015-02-15 00:28:52 | david415/HoneyBadger | https://api.github.com/repos/david415/HoneyBadger | closed | Add proper injection attack full-take logging. | enhancement help wanted highest priority |
HoneyBadger is designed to be a long running daemon that concurrently processes many long (or short) running TCP connections. SInce we need to perform a full-take on all data and metadata... therefore we must be able to enforce maximum disk quota, memory usage maximum and concurrent connection maximum. In this issue/ticket we are only concerned with enforcing a maximum disk quota. Files written have a max enforced size and total stream contents can be split into many files with a 32 bit integer suffix to order them... called a split-ID.
User specifies a target logging directory. HoneyBadger creates (if non-existent) a "active" and "inactive" subdirectories for active and inactive connections. After a timeout/close/reset the connection data is moved from the active to the inactive directory. The files associated with a connection shall be named flow_id.pseuodo-randomly-generated-32bit-integer-value.split-ID like this:
192.168.1.1:2345-192.168.1.2:3563.0.0
further suffixes can be added to indicate what kind of data/metadata the file contains about that connection.
We need to log several things to disk:
- full-take log of both TCP reassembled stream directions
- log all the overlap bytes and it's TCP Sequence boundaries
- full packet log; we should log all the packets belonging to this connection to a pcap file
| 1.0 | Add proper injection attack full-take logging. -
HoneyBadger is designed to be a long running daemon that concurrently processes many long (or short) running TCP connections. SInce we need to perform a full-take on all data and metadata... therefore we must be able to enforce maximum disk quota, memory usage maximum and concurrent connection maximum. In this issue/ticket we are only concerned with enforcing a maximum disk quota. Files written have a max enforced size and total stream contents can be split into many files with a 32 bit integer suffix to order them... called a split-ID.
User specifies a target logging directory. HoneyBadger creates (if non-existent) a "active" and "inactive" subdirectories for active and inactive connections. After a timeout/close/reset the connection data is moved from the active to the inactive directory. The files associated with a connection shall be named flow_id.pseuodo-randomly-generated-32bit-integer-value.split-ID like this:
192.168.1.1:2345-192.168.1.2:3563.0.0
further suffixes can be added to indicate what kind of data/metadata the file contains about that connection.
We need to log several things to disk:
- full-take log of both TCP reassembled stream directions
- log all the overlap bytes and it's TCP Sequence boundaries
- full packet log; we should log all the packets belonging to this connection to a pcap file
| priority | add proper injection attack full take logging honeybadger is designed to be a long running daemon that concurrently processes many long or short running tcp connections since we need to perform a full take on all data and metadata therefore we must be able to enforce maximum disk quota memory usage maximum and concurrent connection maximum in this issue ticket we are only concerned with enforcing a maximum disk quota files written have a max enforced size and total stream contents can be split into many files with a bit integer suffix to order them called a split id user specifies a target logging directory honeybadger creates if non existent a active and inactive subdirectories for active and inactive connections after a timeout close reset the connection data is moved from the active to the inactive directory the files associated with a connection shall be named flow id pseuodo randomly generated integer value split id like this further suffixes can be added to indicate what kind of data metadata the file contains about that connection we need to log several things to disk full take log of both tcp reassembled stream directions log all the overlap bytes and it s tcp sequence boundaries full packet log we should log all the packets belonging to this connection to a pcap file | 1 |
635,935 | 20,514,307,162 | IssuesEvent | 2022-03-01 10:08:23 | LycheeOrg/Lychee | https://api.github.com/repos/LycheeOrg/Lychee | closed | Cannot import some pictures : error 22P02 | bug High Priority | ### Detailed description of the problem [REQUIRED]
Some pictures cannot be uploaded. Seems to be all pictures from my smartphone.
This happens in both command line (lychee:sync) and web interface.
I'm using postgresql 14 and I haven't setup imagick yet.
As far as I can tell, there are floats in the metadata and the sql request is chocking expecting an integer
I hope this can be easily solved, so that I can start using it. Keep the good work! :+1:
### Steps to reproduce the issue
**Steps to reproduce the behavior:**
Just upload a picture but not sure this is related to this particular pictures taken by my smartphone.
### Output of the diagnostics [REQUIRED]
```
Diagnostics
-------
Warning: '/mnt/raid/lychee/dist/user.css' does not exist or has insufficient read/write privileges.
Warning: Dropbox import not working. dropbox_key is empty.
Warning: Pictures that are rotated lose their metadata! Please install Imagick to avoid that.
System Information
--------------
Lychee Version (release): 4.4.0
DB Version: 4.4.0
composer install: --no-dev
APP_ENV: production
APP_DEBUG: true
System: Linux
PHP Version: 8.1
PHP User agent: Lychee/4 (https://lycheeorg.github.io/)
Max uploaded file size: 16G
Max post size: 16G
Max execution time: 3600
PostgreSQL Version: PostgreSQL 14.1 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (Gentoo Hardened 11.2.1_p20220115 p4) 11.2.1 20220115, 64-bit
Imagick: -
Imagick Active: 1
Imagick Version: -
GD Version: bundled (2.1.0 compatible)
Config Information
--------------
version: 040400
check_for_updates: 0
sorting_Photos_col: taken_at
sorting_Photos_order: ASC
sorting_Albums_col: max_taken_at
sorting_Albums_order: ASC
imagick: 1
skip_duplicates: 0
small_max_width: 0
small_max_height: 360
medium_max_width: 1920
medium_max_height: 1080
lang: en
layout: 1
image_overlay_type: desc
default_license: none
compression_quality: 90
full_photo: 1
delete_imported: 0
Mod_Frame: 1
Mod_Frame_refresh: 30
thumb_2x: 1
small_2x: 1
medium_2x: 1
landing_page_enable: 0
landing_owner: John Smith
landing_title: John Smith
landing_subtitle: Cats, Dogs & Humans Photography
landing_facebook: https://www.facebook.com/JohnSmith
landing_flickr: https://www.flickr.com/JohnSmith
landing_twitter: https://www.twitter.com/JohnSmith
landing_instagram: https://instagram.com/JohnSmith
landing_youtube: https://www.youtube.com/JohnSmith
landing_background: dist/cat.jpg
site_title: Lychee v4
site_copyright_enable: 1
site_copyright_begin: 2019
site_copyright_end: 2019
additional_footer_text:
display_social_in_gallery: 0
public_search: 0
SL_enable: 0
SL_for_admin: 0
public_recent: 0
recent_age: 1
public_starred: 0
downloadable: 0
photos_wraparound: 1
map_display: 0
zip64: 1
map_display_public: 0
map_provider: Wikimedia
force_32bit_ids: 0
map_include_subalbums: 0
update_check_every_days: 3
has_exiftool: 0
share_button_visible: 0
import_via_symlink: 0
has_ffmpeg: 0
location_decoding: 0
location_decoding_timeout: 30
location_show: 1
location_show_public: 0
rss_enable: 0
rss_recent_days: 7
rss_max_items: 100
prefer_available_xmp_metadata: 0
editor_enabled: 1
lossless_optimization: 0
swipe_tolerance_x: 150
swipe_tolerance_y: 250
local_takestamp_video_formats: .avi|.mov
log_max_num_line: 1000
unlock_password_photos_with_url_param: 0
nsfw_visible: 1
nsfw_blur: 0
nsfw_warning: 0
nsfw_warning_admin: 0
map_display_direction: 1
album_subtitle_type: oldstyle
upload_processing_limit: 4
public_photos_hidden: 1
new_photos_notification: 0
```
### Browser and system
This is the relevant log part:
```
2022-02-28 18:16:18 -- error -- App\Actions\Photo\Extensions\Save::recover -- 59 -- Something went wrong, error 22P02, SQLSTATE[22P02]: Invalid text representation: 7 ERROR: invalid input syntax for type integer: "607.5"
CONTEXT: unnamed portal parameter $32 = '...' (SQL: insert into "photos" ("id", "checksum", "title", "url", "description", "tags", "width", "height", "type", "filesize", "iso", "aperture", "make", "model", "lens", "shutter", "focal", "taken_at", "taken_at_orig_tz", "latitude", "longitude", "altitude", "imgDirection", "location", "livePhotoContentID", "public", "star", "album_id", "owner_id", "thumb2x", "thumbUrl", "medium_width", "medium_height", "medium2x_width", "medium2x_height", "small_width", "small_height", "small2x_width", "small2x_height", "updated_at", "created_at") values (16460721747793, 76085e58611e0d9a321115b322d07f238d34b358, 20170414_223757(0), 76085e58611e0d9a321115b322d07f23.jpg, , , 2268, 4032, image/jpeg, 4092045, 200, f/1.7, samsung, SM-G935F, , 1/10 s, 4 mm, 2017-04-14 22:37:57, UTC, 36.718333333333, 10.367777777778, 37, ?, ?, ?, 0, 0, ?, 0, 1, 76085e58611e0d9a321115b322d07f23.jpeg, 607.5, 1080, 1215, 2160, 202.5, 360, 405, 720, 2022-02-28 18:16:18, 2022-02-28 18:16:18) returning "id")
2022-02-28 18:16:15 -- notice -- App\Actions\Photo\Extensions\ImageEditing::createThumb -- 165 -- Photo URL is 76085e58611e0d9a321115b322d07f23.jpg
2022-02-28 18:16:14 -- notice -- App\Models\Extensions\ConfigsHas::hasImagick -- 19 -- hasImagick : false
```
| 1.0 | Cannot import some pictures : error 22P02 - ### Detailed description of the problem [REQUIRED]
Some pictures cannot be uploaded. Seems to be all pictures from my smartphone.
This happens in both command line (lychee:sync) and web interface.
I'm using postgresql 14 and I haven't setup imagick yet.
As far as I can tell, there are floats in the metadata and the sql request is chocking expecting an integer
I hope this can be easily solved, so that I can start using it. Keep the good work! :+1:
### Steps to reproduce the issue
**Steps to reproduce the behavior:**
Just upload a picture but not sure this is related to this particular pictures taken by my smartphone.
### Output of the diagnostics [REQUIRED]
```
Diagnostics
-------
Warning: '/mnt/raid/lychee/dist/user.css' does not exist or has insufficient read/write privileges.
Warning: Dropbox import not working. dropbox_key is empty.
Warning: Pictures that are rotated lose their metadata! Please install Imagick to avoid that.
System Information
--------------
Lychee Version (release): 4.4.0
DB Version: 4.4.0
composer install: --no-dev
APP_ENV: production
APP_DEBUG: true
System: Linux
PHP Version: 8.1
PHP User agent: Lychee/4 (https://lycheeorg.github.io/)
Max uploaded file size: 16G
Max post size: 16G
Max execution time: 3600
PostgreSQL Version: PostgreSQL 14.1 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (Gentoo Hardened 11.2.1_p20220115 p4) 11.2.1 20220115, 64-bit
Imagick: -
Imagick Active: 1
Imagick Version: -
GD Version: bundled (2.1.0 compatible)
Config Information
--------------
version: 040400
check_for_updates: 0
sorting_Photos_col: taken_at
sorting_Photos_order: ASC
sorting_Albums_col: max_taken_at
sorting_Albums_order: ASC
imagick: 1
skip_duplicates: 0
small_max_width: 0
small_max_height: 360
medium_max_width: 1920
medium_max_height: 1080
lang: en
layout: 1
image_overlay_type: desc
default_license: none
compression_quality: 90
full_photo: 1
delete_imported: 0
Mod_Frame: 1
Mod_Frame_refresh: 30
thumb_2x: 1
small_2x: 1
medium_2x: 1
landing_page_enable: 0
landing_owner: John Smith
landing_title: John Smith
landing_subtitle: Cats, Dogs & Humans Photography
landing_facebook: https://www.facebook.com/JohnSmith
landing_flickr: https://www.flickr.com/JohnSmith
landing_twitter: https://www.twitter.com/JohnSmith
landing_instagram: https://instagram.com/JohnSmith
landing_youtube: https://www.youtube.com/JohnSmith
landing_background: dist/cat.jpg
site_title: Lychee v4
site_copyright_enable: 1
site_copyright_begin: 2019
site_copyright_end: 2019
additional_footer_text:
display_social_in_gallery: 0
public_search: 0
SL_enable: 0
SL_for_admin: 0
public_recent: 0
recent_age: 1
public_starred: 0
downloadable: 0
photos_wraparound: 1
map_display: 0
zip64: 1
map_display_public: 0
map_provider: Wikimedia
force_32bit_ids: 0
map_include_subalbums: 0
update_check_every_days: 3
has_exiftool: 0
share_button_visible: 0
import_via_symlink: 0
has_ffmpeg: 0
location_decoding: 0
location_decoding_timeout: 30
location_show: 1
location_show_public: 0
rss_enable: 0
rss_recent_days: 7
rss_max_items: 100
prefer_available_xmp_metadata: 0
editor_enabled: 1
lossless_optimization: 0
swipe_tolerance_x: 150
swipe_tolerance_y: 250
local_takestamp_video_formats: .avi|.mov
log_max_num_line: 1000
unlock_password_photos_with_url_param: 0
nsfw_visible: 1
nsfw_blur: 0
nsfw_warning: 0
nsfw_warning_admin: 0
map_display_direction: 1
album_subtitle_type: oldstyle
upload_processing_limit: 4
public_photos_hidden: 1
new_photos_notification: 0
```
### Browser and system
This is the relevant log part:
```
2022-02-28 18:16:18 -- error -- App\Actions\Photo\Extensions\Save::recover -- 59 -- Something went wrong, error 22P02, SQLSTATE[22P02]: Invalid text representation: 7 ERROR: invalid input syntax for type integer: "607.5"
CONTEXT: unnamed portal parameter $32 = '...' (SQL: insert into "photos" ("id", "checksum", "title", "url", "description", "tags", "width", "height", "type", "filesize", "iso", "aperture", "make", "model", "lens", "shutter", "focal", "taken_at", "taken_at_orig_tz", "latitude", "longitude", "altitude", "imgDirection", "location", "livePhotoContentID", "public", "star", "album_id", "owner_id", "thumb2x", "thumbUrl", "medium_width", "medium_height", "medium2x_width", "medium2x_height", "small_width", "small_height", "small2x_width", "small2x_height", "updated_at", "created_at") values (16460721747793, 76085e58611e0d9a321115b322d07f238d34b358, 20170414_223757(0), 76085e58611e0d9a321115b322d07f23.jpg, , , 2268, 4032, image/jpeg, 4092045, 200, f/1.7, samsung, SM-G935F, , 1/10 s, 4 mm, 2017-04-14 22:37:57, UTC, 36.718333333333, 10.367777777778, 37, ?, ?, ?, 0, 0, ?, 0, 1, 76085e58611e0d9a321115b322d07f23.jpeg, 607.5, 1080, 1215, 2160, 202.5, 360, 405, 720, 2022-02-28 18:16:18, 2022-02-28 18:16:18) returning "id")
2022-02-28 18:16:15 -- notice -- App\Actions\Photo\Extensions\ImageEditing::createThumb -- 165 -- Photo URL is 76085e58611e0d9a321115b322d07f23.jpg
2022-02-28 18:16:14 -- notice -- App\Models\Extensions\ConfigsHas::hasImagick -- 19 -- hasImagick : false
```
| priority | cannot import some pictures error detailed description of the problem some pictures cannot be uploaded seems to be all pictures from my smartphone this happens in both command line lychee sync and web interface i m using postgresql and i haven t setup imagick yet as far as i can tell there are floats in the metadata and the sql request is chocking expecting an integer i hope this can be easily solved so that i can start using it keep the good work steps to reproduce the issue steps to reproduce the behavior just upload a picture but not sure this is related to this particular pictures taken by my smartphone output of the diagnostics diagnostics warning mnt raid lychee dist user css does not exist or has insufficient read write privileges warning dropbox import not working dropbox key is empty warning pictures that are rotated lose their metadata please install imagick to avoid that system information lychee version release db version composer install no dev app env production app debug true system linux php version php user agent lychee max uploaded file size max post size max execution time postgresql version postgresql on pc linux gnu compiled by pc linux gnu gcc gentoo hardened bit imagick imagick active imagick version gd version bundled compatible config information version check for updates sorting photos col taken at sorting photos order asc sorting albums col max taken at sorting albums order asc imagick skip duplicates small max width small max height medium max width medium max height lang en layout image overlay type desc default license none compression quality full photo delete imported mod frame mod frame refresh thumb small medium landing page enable landing owner john smith landing title john smith landing subtitle cats dogs humans photography landing facebook landing flickr landing twitter landing instagram landing youtube landing background dist cat jpg site title lychee site copyright enable site copyright begin site copyright end additional footer text display social in gallery public search sl enable sl for admin public recent recent age public starred downloadable photos wraparound map display map display public map provider wikimedia force ids map include subalbums update check every days has exiftool share button visible import via symlink has ffmpeg location decoding location decoding timeout location show location show public rss enable rss recent days rss max items prefer available xmp metadata editor enabled lossless optimization swipe tolerance x swipe tolerance y local takestamp video formats avi mov log max num line unlock password photos with url param nsfw visible nsfw blur nsfw warning nsfw warning admin map display direction album subtitle type oldstyle upload processing limit public photos hidden new photos notification browser and system this is the relevant log part error app actions photo extensions save recover something went wrong error sqlstate invalid text representation error invalid input syntax for type integer context unnamed portal parameter sql insert into photos id checksum title url description tags width height type filesize iso aperture make model lens shutter focal taken at taken at orig tz latitude longitude altitude imgdirection location livephotocontentid public star album id owner id thumburl medium width medium height width height small width small height width height updated at created at values jpg image jpeg f samsung sm s mm utc jpeg returning id notice app actions photo extensions imageediting createthumb photo url is jpg notice app models extensions configshas hasimagick hasimagick false | 1 |
510,193 | 14,786,529,177 | IssuesEvent | 2021-01-12 05:44:38 | OpenSRP/opensrp-client-reveal | https://api.github.com/repos/OpenSRP/opensrp-client-reveal | closed | RVL-755 - Inactive versus ineligible structure colours | Blocked Priority: High Size: Medium (2-3) | **Current state:**
1. Structures marked as ineligible (ie they are not eligible during this plan) are black
2. Inactive structures (ie they are not structures that will ever be eligible, maybe were enumerated accidentally or entered in error) are grey.
**Desired Operation**
1. Ineligible - Black (determined based on current in-field data collection) - retain current functionality (ability to view & edit the status of the black structure)
2. Inactive - Grey (this only applies to NON-residential structures that were determined to be non-residential and ineligible in the previous campaign) - need to be able to update ‘inactive’ to ‘not-visited’ (yellow structure) and then proceed as normal.
See the acceptance Criteria [here](https://smartregister.atlassian.net/browse/RVL-755):
| 1.0 | RVL-755 - Inactive versus ineligible structure colours - **Current state:**
1. Structures marked as ineligible (ie they are not eligible during this plan) are black
2. Inactive structures (ie they are not structures that will ever be eligible, maybe were enumerated accidentally or entered in error) are grey.
**Desired Operation**
1. Ineligible - Black (determined based on current in-field data collection) - retain current functionality (ability to view & edit the status of the black structure)
2. Inactive - Grey (this only applies to NON-residential structures that were determined to be non-residential and ineligible in the previous campaign) - need to be able to update ‘inactive’ to ‘not-visited’ (yellow structure) and then proceed as normal.
See the acceptance Criteria [here](https://smartregister.atlassian.net/browse/RVL-755):
| priority | rvl inactive versus ineligible structure colours current state structures marked as ineligible ie they are not eligible during this plan are black inactive structures ie they are not structures that will ever be eligible maybe were enumerated accidentally or entered in error are grey desired operation ineligible black determined based on current in field data collection retain current functionality ability to view edit the status of the black structure inactive grey this only applies to non residential structures that were determined to be non residential and ineligible in the previous campaign need to be able to update ‘inactive’ to ‘not visited’ yellow structure and then proceed as normal see the acceptance criteria | 1 |
221,330 | 7,382,083,488 | IssuesEvent | 2018-03-15 02:35:12 | Unibeautify/vscode | https://api.github.com/repos/Unibeautify/vscode | closed | Unable to publish "peer dep missing" | bug help wanted high priority | Related to: https://github.com/npm/npm/issues/19877 , https://github.com/npm/npm/issues/15708 , https://github.com/yarnpkg/yarn/issues/4850 , https://stackoverflow.com/a/48318878/2578205 , https://github.com/yarnpkg/yarn/issues/4743 , https://github.com/yarnpkg/yarn/pull/5088
```
❯ vsce publish
Executing prepublish script 'npm run vscode:prepublish'...
> unibeautify-vscode@0.1.0 vscode:prepublish /Users/glavin/Development/unibeautify/vscode
> npm run build
> unibeautify-vscode@0.1.0 build /Users/glavin/Development/unibeautify/vscode
> tsc
Error: Command failed: npm list --production --parseable --depth=99999
npm ERR! peer dep missing: unibeautify@^0.9.2, required by @unibeautify/beautifier-eslint@0.4.0
npm ERR! peer dep missing: unibeautify@^0.8.0, required by @unibeautify/beautifier-js-beautify@0.3.1
npm ERR! peer dep missing: unibeautify@^0.8.0, required by @unibeautify/beautifier-prettier@0.7.3
npm ERR! peer dep missing: unibeautify@^0.9.1, required by @unibeautify/beautifier-prettydiff@0.5.2
```
and
```
❯ npm install
> unibeautify-vscode@0.1.0 postinstall /Users/glavin/Development/unibeautify/vscode
> node ./node_modules/vscode/bin/install
Detected VS Code engine version: ^1.6.0
Found minimal version that qualifies engine range: 1.6.0
Fetching vscode.d.ts from: https://raw.githubusercontent.com/Microsoft/vscode/e52fb0bc87e6f5c8f144e172639891d8d8c9aa55/src/vs/vscode.d.ts
vscode.d.ts successfully installed!
npm WARN @unibeautify/beautifier-eslint@0.4.0 requires a peer of unibeautify@^0.9.2 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-js-beautify@0.3.1 requires a peer of unibeautify@^0.8.0 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettier@0.7.3 requires a peer of unibeautify@^0.8.0 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettydiff@0.5.2 requires a peer of unibeautify@^0.9.1 but none is installed. You must install peer dependencies yourself.
up to date in 6.171s
``` | 1.0 | Unable to publish "peer dep missing" - Related to: https://github.com/npm/npm/issues/19877 , https://github.com/npm/npm/issues/15708 , https://github.com/yarnpkg/yarn/issues/4850 , https://stackoverflow.com/a/48318878/2578205 , https://github.com/yarnpkg/yarn/issues/4743 , https://github.com/yarnpkg/yarn/pull/5088
```
❯ vsce publish
Executing prepublish script 'npm run vscode:prepublish'...
> unibeautify-vscode@0.1.0 vscode:prepublish /Users/glavin/Development/unibeautify/vscode
> npm run build
> unibeautify-vscode@0.1.0 build /Users/glavin/Development/unibeautify/vscode
> tsc
Error: Command failed: npm list --production --parseable --depth=99999
npm ERR! peer dep missing: unibeautify@^0.9.2, required by @unibeautify/beautifier-eslint@0.4.0
npm ERR! peer dep missing: unibeautify@^0.8.0, required by @unibeautify/beautifier-js-beautify@0.3.1
npm ERR! peer dep missing: unibeautify@^0.8.0, required by @unibeautify/beautifier-prettier@0.7.3
npm ERR! peer dep missing: unibeautify@^0.9.1, required by @unibeautify/beautifier-prettydiff@0.5.2
```
and
```
❯ npm install
> unibeautify-vscode@0.1.0 postinstall /Users/glavin/Development/unibeautify/vscode
> node ./node_modules/vscode/bin/install
Detected VS Code engine version: ^1.6.0
Found minimal version that qualifies engine range: 1.6.0
Fetching vscode.d.ts from: https://raw.githubusercontent.com/Microsoft/vscode/e52fb0bc87e6f5c8f144e172639891d8d8c9aa55/src/vs/vscode.d.ts
vscode.d.ts successfully installed!
npm WARN @unibeautify/beautifier-eslint@0.4.0 requires a peer of unibeautify@^0.9.2 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-js-beautify@0.3.1 requires a peer of unibeautify@^0.8.0 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettier@0.7.3 requires a peer of unibeautify@^0.8.0 but none is installed. You must install peer dependencies yourself.
npm WARN @unibeautify/beautifier-prettydiff@0.5.2 requires a peer of unibeautify@^0.9.1 but none is installed. You must install peer dependencies yourself.
up to date in 6.171s
``` | priority | unable to publish peer dep missing related to ❯ vsce publish executing prepublish script npm run vscode prepublish unibeautify vscode vscode prepublish users glavin development unibeautify vscode npm run build unibeautify vscode build users glavin development unibeautify vscode tsc error command failed npm list production parseable depth npm err peer dep missing unibeautify required by unibeautify beautifier eslint npm err peer dep missing unibeautify required by unibeautify beautifier js beautify npm err peer dep missing unibeautify required by unibeautify beautifier prettier npm err peer dep missing unibeautify required by unibeautify beautifier prettydiff and ❯ npm install unibeautify vscode postinstall users glavin development unibeautify vscode node node modules vscode bin install detected vs code engine version found minimal version that qualifies engine range fetching vscode d ts from vscode d ts successfully installed npm warn unibeautify beautifier eslint requires a peer of unibeautify but none is installed you must install peer dependencies yourself npm warn unibeautify beautifier js beautify requires a peer of unibeautify but none is installed you must install peer dependencies yourself npm warn unibeautify beautifier prettier requires a peer of unibeautify but none is installed you must install peer dependencies yourself npm warn unibeautify beautifier prettydiff requires a peer of unibeautify but none is installed you must install peer dependencies yourself up to date in | 1 |
318,791 | 9,702,319,833 | IssuesEvent | 2019-05-27 08:27:39 | mojaloop/project | https://api.github.com/repos/mojaloop/project | closed | Updating the simulator to support (QA, etc) for ALS | Priority: High Story | ## **Goal**:
As a maintainer of the ML OSS Switch software
I want _the Simulators used for QA, testing purposes to simulate an Oracle Registry_
so that end-to-end testing, QA can be performed on a deployable ML Switch with ALS
As a community member
I want to be able to deploy the ML OSS Switch with an ALS and supporting components
so that I can perform end-to-end QA, Testing and analysis
**Tasks**:
- [x] Refactor/Update Simulator implementation to simulate an Oracle Registry service to support the account-lookup-service (ALS) [ @rmothilal ]
- [x] Sanity testing against ALS [ @rmothilal ]
- [x] Helm changes needed to make simulators part of the Mojaloop deployment [ @mdebarros ]
- [x] Deprecate (remove) central-directory and dependencies from the Mojaloop deployment [ @mdebarros ]
- [x] Sanity testing of Helm Charts [ @mdebarros ]
**Acceptance Criteria**:
- [x] The OSS Simulator is updated to simulate an Oracle/Directory to support the QA, testing & deployment of the ML Switch
- [x] Simulators are made part of the Mojaloop deployment
## **Pull Requests**:
- [x] https://github.com/mojaloop/simulator/pull/17 [ @rmothilal ]
- [x] https://github.com/mojaloop/helm/pull/181 [ @mdebarros ]
## **Follow-up**:
- [x] #787
**Dependencies**:
- N/A
## **Accountability**:
- Owner: @rmothilal
- QA/Review: TBC
| 1.0 | Updating the simulator to support (QA, etc) for ALS - ## **Goal**:
As a maintainer of the ML OSS Switch software
I want _the Simulators used for QA, testing purposes to simulate an Oracle Registry_
so that end-to-end testing, QA can be performed on a deployable ML Switch with ALS
As a community member
I want to be able to deploy the ML OSS Switch with an ALS and supporting components
so that I can perform end-to-end QA, Testing and analysis
**Tasks**:
- [x] Refactor/Update Simulator implementation to simulate an Oracle Registry service to support the account-lookup-service (ALS) [ @rmothilal ]
- [x] Sanity testing against ALS [ @rmothilal ]
- [x] Helm changes needed to make simulators part of the Mojaloop deployment [ @mdebarros ]
- [x] Deprecate (remove) central-directory and dependencies from the Mojaloop deployment [ @mdebarros ]
- [x] Sanity testing of Helm Charts [ @mdebarros ]
**Acceptance Criteria**:
- [x] The OSS Simulator is updated to simulate an Oracle/Directory to support the QA, testing & deployment of the ML Switch
- [x] Simulators are made part of the Mojaloop deployment
## **Pull Requests**:
- [x] https://github.com/mojaloop/simulator/pull/17 [ @rmothilal ]
- [x] https://github.com/mojaloop/helm/pull/181 [ @mdebarros ]
## **Follow-up**:
- [x] #787
**Dependencies**:
- N/A
## **Accountability**:
- Owner: @rmothilal
- QA/Review: TBC
| priority | updating the simulator to support qa etc for als goal as a maintainer of the ml oss switch software i want the simulators used for qa testing purposes to simulate an oracle registry so that end to end testing qa can be performed on a deployable ml switch with als as a community member i want to be able to deploy the ml oss switch with an als and supporting components so that i can perform end to end qa testing and analysis tasks refactor update simulator implementation to simulate an oracle registry service to support the account lookup service als sanity testing against als helm changes needed to make simulators part of the mojaloop deployment deprecate remove central directory and dependencies from the mojaloop deployment sanity testing of helm charts acceptance criteria the oss simulator is updated to simulate an oracle directory to support the qa testing deployment of the ml switch simulators are made part of the mojaloop deployment pull requests follow up dependencies n a accountability owner rmothilal qa review tbc | 1 |
261,094 | 8,224,035,948 | IssuesEvent | 2018-09-06 12:36:41 | bio-tools/biotoolsRegistry | https://api.github.com/repos/bio-tools/biotoolsRegistry | closed | Rename 'Github page' to 'Source-code repository' and other small helpful fixes | GUI clarification needed high priority | Rename **Github page** to **Source-code repository**.
Add mention of **URL** to all **Documentation** fields.
Add mention of **DOI (preferred), PubMedID, or 'None'** to **Publication** fields.
| 1.0 | Rename 'Github page' to 'Source-code repository' and other small helpful fixes - Rename **Github page** to **Source-code repository**.
Add mention of **URL** to all **Documentation** fields.
Add mention of **DOI (preferred), PubMedID, or 'None'** to **Publication** fields.
| priority | rename github page to source code repository and other small helpful fixes rename github page to source code repository add mention of url to all documentation fields add mention of doi preferred pubmedid or none to publication fields | 1 |
233,849 | 7,707,684,506 | IssuesEvent | 2018-05-22 00:09:59 | sul-dlss/preservation_catalog | https://api.github.com/repos/sul-dlss/preservation_catalog | closed | Travis Live S3 tests can collide during highly simultaneous builds | high priority needs review | ### Background:
- The code that tests live S3 integration does a `put` and a `get` of an object to a test bucket.
- The same S3 object (key) is being targeted (to minimize junk sprawl/cost)
- To defend against a stale object still passing the test when `put` is actually failing, a timestamp is put in the metadata, and checked upon retrieval.
### Problem:
If two Travis builds hit the same test time (well, specifically, if the second test `put` hits before the first's `get`, but at a different clock second -- technically could appear "before" because different systems have different clocks), then the test will fail, because the value that was written to the object doesn't match upon retrieval.
### Possible Directions:
- Change granularity of timestamp to, say, hour. On the whole, CI would still detect if `put` started silently failing (but not immediately). The frequency of near-simultaneity/collision/failure would be 1/3600th what it is now.
- **Use different object (key) every time. To avoid inducing unnecessary costs, we would have to build more assiduous object cleanup or S3 configuration on the bucket to do cleanup automatically.** **<-- this is the chosen solution**
- Finish getting rspec to run in `random` order. For a fragment of a ~2 second test in a ~4 min run, that would seem to reduce collision likelihood by >120x.
- Limit number of concurrent Travis builds to 1. | 1.0 | Travis Live S3 tests can collide during highly simultaneous builds - ### Background:
- The code that tests live S3 integration does a `put` and a `get` of an object to a test bucket.
- The same S3 object (key) is being targeted (to minimize junk sprawl/cost)
- To defend against a stale object still passing the test when `put` is actually failing, a timestamp is put in the metadata, and checked upon retrieval.
### Problem:
If two Travis builds hit the same test time (well, specifically, if the second test `put` hits before the first's `get`, but at a different clock second -- technically could appear "before" because different systems have different clocks), then the test will fail, because the value that was written to the object doesn't match upon retrieval.
### Possible Directions:
- Change granularity of timestamp to, say, hour. On the whole, CI would still detect if `put` started silently failing (but not immediately). The frequency of near-simultaneity/collision/failure would be 1/3600th what it is now.
- **Use different object (key) every time. To avoid inducing unnecessary costs, we would have to build more assiduous object cleanup or S3 configuration on the bucket to do cleanup automatically.** **<-- this is the chosen solution**
- Finish getting rspec to run in `random` order. For a fragment of a ~2 second test in a ~4 min run, that would seem to reduce collision likelihood by >120x.
- Limit number of concurrent Travis builds to 1. | priority | travis live tests can collide during highly simultaneous builds background the code that tests live integration does a put and a get of an object to a test bucket the same object key is being targeted to minimize junk sprawl cost to defend against a stale object still passing the test when put is actually failing a timestamp is put in the metadata and checked upon retrieval problem if two travis builds hit the same test time well specifically if the second test put hits before the first s get but at a different clock second technically could appear before because different systems have different clocks then the test will fail because the value that was written to the object doesn t match upon retrieval possible directions change granularity of timestamp to say hour on the whole ci would still detect if put started silently failing but not immediately the frequency of near simultaneity collision failure would be what it is now use different object key every time to avoid inducing unnecessary costs we would have to build more assiduous object cleanup or configuration on the bucket to do cleanup automatically this is the chosen solution finish getting rspec to run in random order for a fragment of a second test in a min run that would seem to reduce collision likelihood by limit number of concurrent travis builds to | 1 |
252,665 | 8,038,726,116 | IssuesEvent | 2018-07-30 16:09:20 | MARKETProtocol/MARKET.js | https://api.github.com/repos/MARKETProtocol/MARKET.js | closed | [api] Return the txHash from tradeOrderAsync | Bounty Attached Priority: High Status: Completed | ## Before you `start work`
Please read our contribution [guidelines](https://docs.marketprotocol.io/#contributing) and if there is a bounty involved please also see [here](https://docs.marketprotocol.io/#gitcoin-and-bounties)
If you have ongoing work from other bounties with us where funding has not been released, please do not pick up a new issue. We would like to involve as many contributors as possible and parallelize the work flow as much as possible.
Please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline.
Please also note that in order for work to be accepted, all code must be accompanied by test cases as well.
### User Story
As a dev using MARKET.js returning a promise<txHash> rather than a value after a transaction has been mined is probably more useful.
### Why Is this Needed?
*Summary*: Mining could take a very very long time
### Description
*Type*: Feature
### Current Behavior
We aren't always returning a txHash (see traderOrderAsync)
### Expected Behavior
For traderOrderAsync, we return a promise<string> with the txHash
### Definition of Done
- [ ] implement promise<string> return value
- [ ] fix all tests (big issue)
- [ ] implement new methods for getting results of important txs (for instance tradedQty from tradeOrderAsync) | 1.0 | [api] Return the txHash from tradeOrderAsync - ## Before you `start work`
Please read our contribution [guidelines](https://docs.marketprotocol.io/#contributing) and if there is a bounty involved please also see [here](https://docs.marketprotocol.io/#gitcoin-and-bounties)
If you have ongoing work from other bounties with us where funding has not been released, please do not pick up a new issue. We would like to involve as many contributors as possible and parallelize the work flow as much as possible.
Please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline.
Please also note that in order for work to be accepted, all code must be accompanied by test cases as well.
### User Story
As a dev using MARKET.js returning a promise<txHash> rather than a value after a transaction has been mined is probably more useful.
### Why Is this Needed?
*Summary*: Mining could take a very very long time
### Description
*Type*: Feature
### Current Behavior
We aren't always returning a txHash (see traderOrderAsync)
### Expected Behavior
For traderOrderAsync, we return a promise<string> with the txHash
### Definition of Done
- [ ] implement promise<string> return value
- [ ] fix all tests (big issue)
- [ ] implement new methods for getting results of important txs (for instance tradedQty from tradeOrderAsync) | priority | return the txhash from tradeorderasync before you start work please read our contribution and if there is a bounty involved please also see if you have ongoing work from other bounties with us where funding has not been released please do not pick up a new issue we would like to involve as many contributors as possible and parallelize the work flow as much as possible please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline please also note that in order for work to be accepted all code must be accompanied by test cases as well user story as a dev using market js returning a promise rather than a value after a transaction has been mined is probably more useful why is this needed summary mining could take a very very long time description type feature current behavior we aren t always returning a txhash see traderorderasync expected behavior for traderorderasync we return a promise with the txhash definition of done implement promise return value fix all tests big issue implement new methods for getting results of important txs for instance tradedqty from tradeorderasync | 1 |
6,534 | 2,589,089,290 | IssuesEvent | 2015-02-18 09:41:23 | olga-jane/prizm | https://api.github.com/repos/olga-jane/prizm | closed | Connection the elements with multiple common diameters | bug bug - functional Coding Construction HIGH priority | Scenario:
1. Open "New Joint"
2. Select elements with multiple common diameters for connection (like 1, 2, 2 and 1, 2)
3. Fill all required fields
4. Click "Save" button
5. Close "SelectDiameterDialog"
6. Click "Ok" in MassegeBox
7. Click "Save" button again
Result:
Joint successfully saved, but elements still disconnected. | 1.0 | Connection the elements with multiple common diameters - Scenario:
1. Open "New Joint"
2. Select elements with multiple common diameters for connection (like 1, 2, 2 and 1, 2)
3. Fill all required fields
4. Click "Save" button
5. Close "SelectDiameterDialog"
6. Click "Ok" in MassegeBox
7. Click "Save" button again
Result:
Joint successfully saved, but elements still disconnected. | priority | connection the elements with multiple common diameters scenario open new joint select elements with multiple common diameters for connection like and fill all required fields click save button close selectdiameterdialog click ok in massegebox click save button again result joint successfully saved but elements still disconnected | 1 |
777,614 | 27,288,080,822 | IssuesEvent | 2023-02-23 14:50:30 | wso2/docs-apim | https://api.github.com/repos/wso2/docs-apim | closed | Improvement on Enable password recovery | Priority/Highest API-M 4.2.0 | **Description:**
When a user forgot his/her password, APIM lets users to reset the password through a link sent to the mail.
This is the [4.2.0 documentation](https://apim.docs.wso2.com/en/4.2.0/consume/user-account-management/recover-password/) for that scenario.
As mentioned in the documentation, an email server needs to be configured to be able to send the password recovery email for the APIM prior to that task. The link for that is [here](https://apim.docs.wso2.com/en/4.2.0/install-and-setup/setup/security/user-account-management/#enable-password-recovery).
But since Google has disabled the less secure apps to access Gmail from May 30, 2022 ([read more from this link](https://support.google.com/accounts/answer/6010255?hl=en#:~:text=To%20help%20keep,continue%20to%20read.)), Normal Gmail password as mentioned [in the document](https://apim.docs.wso2.com/en/4.2.0/install-and-setup/setup/security/user-account-management/#enable-password-recovery:~:text=Password%20used%20to%20authenticate%20the%20mail%20server.) for the deployment configuration will not work from now on.
I found a way to work around it. Now in order to get the email server work, we have to create an app password for the email and that app password has to be added as the email password.
It is better to mention that or any other possible solution in the documentation and not just **Password used to authenticate the mail server** which is the explanation we used in our documentation before Google disabling less secure apps to access Gmail. | 1.0 | Improvement on Enable password recovery - **Description:**
When a user forgot his/her password, APIM lets users to reset the password through a link sent to the mail.
This is the [4.2.0 documentation](https://apim.docs.wso2.com/en/4.2.0/consume/user-account-management/recover-password/) for that scenario.
As mentioned in the documentation, an email server needs to be configured to be able to send the password recovery email for the APIM prior to that task. The link for that is [here](https://apim.docs.wso2.com/en/4.2.0/install-and-setup/setup/security/user-account-management/#enable-password-recovery).
But since Google has disabled the less secure apps to access Gmail from May 30, 2022 ([read more from this link](https://support.google.com/accounts/answer/6010255?hl=en#:~:text=To%20help%20keep,continue%20to%20read.)), Normal Gmail password as mentioned [in the document](https://apim.docs.wso2.com/en/4.2.0/install-and-setup/setup/security/user-account-management/#enable-password-recovery:~:text=Password%20used%20to%20authenticate%20the%20mail%20server.) for the deployment configuration will not work from now on.
I found a way to work around it. Now in order to get the email server work, we have to create an app password for the email and that app password has to be added as the email password.
It is better to mention that or any other possible solution in the documentation and not just **Password used to authenticate the mail server** which is the explanation we used in our documentation before Google disabling less secure apps to access Gmail. | priority | improvement on enable password recovery description when a user forgot his her password apim lets users to reset the password through a link sent to the mail this is the for that scenario as mentioned in the documentation an email server needs to be configured to be able to send the password recovery email for the apim prior to that task the link for that is but since google has disabled the less secure apps to access gmail from may normal gmail password as mentioned for the deployment configuration will not work from now on i found a way to work around it now in order to get the email server work we have to create an app password for the email and that app password has to be added as the email password it is better to mention that or any other possible solution in the documentation and not just password used to authenticate the mail server which is the explanation we used in our documentation before google disabling less secure apps to access gmail | 1 |
329,542 | 10,021,278,898 | IssuesEvent | 2019-07-16 14:20:41 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | closed | [Spell] [Druid] rejuvenation | Class Confirmed Fixed in Dev Priority-High | **Links:**
http://cata-shoot.tauri.hu/?spell=774
**What is happening:**
with my current gear it is healing for 10k less than the tooltip on dr damage shows it should, its supposed to heal for 3510 every 2.82 seconds, but its currently healing for 2226, thats quite alot, works out about 10k less healing over the course of 1 spell, when you take into account you can hot up loads of people in a raid environement and 5 people in a 5 man instance, thats 50k damage that you cannot heal because of missing values
**What should happen:**
should be healing for alot more ! im still new to testing spells with coeffs and the maths im still struggling with so i wont go into that, but it should at my current gear and spell power should be healing for 10k more baseline with 0 crits, but its healing for about 11 k not 21 k as dr damage tooltip states
| 1.0 | [Spell] [Druid] rejuvenation - **Links:**
http://cata-shoot.tauri.hu/?spell=774
**What is happening:**
with my current gear it is healing for 10k less than the tooltip on dr damage shows it should, its supposed to heal for 3510 every 2.82 seconds, but its currently healing for 2226, thats quite alot, works out about 10k less healing over the course of 1 spell, when you take into account you can hot up loads of people in a raid environement and 5 people in a 5 man instance, thats 50k damage that you cannot heal because of missing values
**What should happen:**
should be healing for alot more ! im still new to testing spells with coeffs and the maths im still struggling with so i wont go into that, but it should at my current gear and spell power should be healing for 10k more baseline with 0 crits, but its healing for about 11 k not 21 k as dr damage tooltip states
| priority | rejuvenation links what is happening with my current gear it is healing for less than the tooltip on dr damage shows it should its supposed to heal for every seconds but its currently healing for thats quite alot works out about less healing over the course of spell when you take into account you can hot up loads of people in a raid environement and people in a man instance thats damage that you cannot heal because of missing values what should happen should be healing for alot more im still new to testing spells with coeffs and the maths im still struggling with so i wont go into that but it should at my current gear and spell power should be healing for more baseline with crits but its healing for about k not k as dr damage tooltip states | 1 |
469,203 | 13,503,454,308 | IssuesEvent | 2020-09-13 13:38:51 | IFB-ElixirFr/ifbcat | https://api.github.com/repos/IFB-ElixirFr/ifbcat | closed | Add logo_url field to Event model | high priority | Missing [fields ](https://docs.google.com/spreadsheets/d/1tMzwkZINvFTj5mUOCBA8MrR8kxjrD6GRfyqH_mT1pi4/edit#gid=346104586) are:
* organisedBy
* sponsoredBy
* logo_url
| 1.0 | Add logo_url field to Event model - Missing [fields ](https://docs.google.com/spreadsheets/d/1tMzwkZINvFTj5mUOCBA8MrR8kxjrD6GRfyqH_mT1pi4/edit#gid=346104586) are:
* organisedBy
* sponsoredBy
* logo_url
| priority | add logo url field to event model missing are organisedby sponsoredby logo url | 1 |
201,973 | 7,042,879,832 | IssuesEvent | 2017-12-30 19:33:29 | mattbdean/Helium | https://api.github.com/repos/mattbdean/Helium | closed | Part table can have different master keys | bug high priority | Part table keys that are part of the master table should always be hidden/ grayed out.
We can submit a master + part table referring to different master keys:

When we navigate to this table we also get this error on the [console](https://gist.github.com/LiuDaveLiu/a9e686b2f7d01e615194432009136e06)
Tell us if you need permissions to the server
| 1.0 | Part table can have different master keys - Part table keys that are part of the master table should always be hidden/ grayed out.
We can submit a master + part table referring to different master keys:

When we navigate to this table we also get this error on the [console](https://gist.github.com/LiuDaveLiu/a9e686b2f7d01e615194432009136e06)
Tell us if you need permissions to the server
| priority | part table can have different master keys part table keys that are part of the master table should always be hidden grayed out we can submit a master part table referring to different master keys when we navigate to this table we also get this error on the tell us if you need permissions to the server | 1 |
594,940 | 18,057,721,443 | IssuesEvent | 2021-09-20 10:21:48 | transport-nantes/tn_web | https://api.github.com/repos/transport-nantes/tn_web | opened | Get SES working | 1-priority high | We use Amazon SES for mail.
TODO:
* [ ] @JeffAbrahamson gives @Shriukan33 AWS access to admin
* [ ] @Shriukan33 gets "thank you for your donation" mails working, cf #49 when starting to send mail
* [ ] @Shriukan33 consults with GJ on text of said mail so that we all agree. That mail should be a TBv2 item with "view on web" link.
| 1.0 | Get SES working - We use Amazon SES for mail.
TODO:
* [ ] @JeffAbrahamson gives @Shriukan33 AWS access to admin
* [ ] @Shriukan33 gets "thank you for your donation" mails working, cf #49 when starting to send mail
* [ ] @Shriukan33 consults with GJ on text of said mail so that we all agree. That mail should be a TBv2 item with "view on web" link.
| priority | get ses working we use amazon ses for mail todo jeffabrahamson gives aws access to admin gets thank you for your donation mails working cf when starting to send mail consults with gj on text of said mail so that we all agree that mail should be a item with view on web link | 1 |
535,387 | 15,687,398,636 | IssuesEvent | 2021-03-25 13:38:26 | sopra-fs21-group-24/client | https://api.github.com/repos/sopra-fs21-group-24/client | opened | Create multiplayer-waiting-room page and display all participants | high priority task | Estimate: 3h
This is part of user story #10 | 1.0 | Create multiplayer-waiting-room page and display all participants - Estimate: 3h
This is part of user story #10 | priority | create multiplayer waiting room page and display all participants estimate this is part of user story | 1 |
448,362 | 12,948,916,394 | IssuesEvent | 2020-07-19 06:51:10 | dhowe/Website | https://api.github.com/repos/dhowe/Website | closed | Website Update | needs-verification priority: high ready for work | Primary change are to homepage:
1. Show only images in a grid similar to [this](http://www.paglen.com/?l=work), though wider (either 1-2-1 or 1-2-1-2 arrangement).
1. Show title/text only on hover as in above link
1. Design/integrate news blog as discussed, with data like [this](http://billposters.ch/blog-news/)
Work on this in a separate branch (1. should probably collapse to fewer columns for responsive/mobile view)
| 1.0 | Website Update - Primary change are to homepage:
1. Show only images in a grid similar to [this](http://www.paglen.com/?l=work), though wider (either 1-2-1 or 1-2-1-2 arrangement).
1. Show title/text only on hover as in above link
1. Design/integrate news blog as discussed, with data like [this](http://billposters.ch/blog-news/)
Work on this in a separate branch (1. should probably collapse to fewer columns for responsive/mobile view)
| priority | website update primary change are to homepage show only images in a grid similar to though wider either or arrangement show title text only on hover as in above link design integrate news blog as discussed with data like work on this in a separate branch should probably collapse to fewer columns for responsive mobile view | 1 |
606,601 | 18,765,876,208 | IssuesEvent | 2021-11-06 00:05:33 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | [bug] XcodeDeps generates bad xcconfig variables with dash-cased pkg and armv8 arch | type: bug priority: high complex: low | # XcodeDeps generates bad xcconfig variables with dash-cased pkg and armv8 arch
### Environment Details (include every applicable attribute)
* Conan version: 1.42
### Steps to reproduce (Include if Applicable)
- Create a conanfile with a requirement on a dash-cased package: `requires = "test-lib/x.y.z"`
- Set the XcodeDeps generator: `generators = "XcodeDeps"`
- Install armv8 deps `conan install . -s arch=armv8`
### Logs (Executed commands with output) (Include/Attach if Applicable)
Here is the content of one of the generated `.xcconfig` file:

Possible fixed naming using snake_case:

On top of that, if I try to build with Xcode with an arm64 iOS target, those variables will never be used as Xcode uses `arm64` instead of `armv8` that is generated by conan with XcodeDeps.
| 1.0 | [bug] XcodeDeps generates bad xcconfig variables with dash-cased pkg and armv8 arch - # XcodeDeps generates bad xcconfig variables with dash-cased pkg and armv8 arch
### Environment Details (include every applicable attribute)
* Conan version: 1.42
### Steps to reproduce (Include if Applicable)
- Create a conanfile with a requirement on a dash-cased package: `requires = "test-lib/x.y.z"`
- Set the XcodeDeps generator: `generators = "XcodeDeps"`
- Install armv8 deps `conan install . -s arch=armv8`
### Logs (Executed commands with output) (Include/Attach if Applicable)
Here is the content of one of the generated `.xcconfig` file:

Possible fixed naming using snake_case:

On top of that, if I try to build with Xcode with an arm64 iOS target, those variables will never be used as Xcode uses `arm64` instead of `armv8` that is generated by conan with XcodeDeps.
| priority | xcodedeps generates bad xcconfig variables with dash cased pkg and arch xcodedeps generates bad xcconfig variables with dash cased pkg and arch environment details include every applicable attribute conan version steps to reproduce include if applicable create a conanfile with a requirement on a dash cased package requires test lib x y z set the xcodedeps generator generators xcodedeps install deps conan install s arch logs executed commands with output include attach if applicable here is the content of one of the generated xcconfig file possible fixed naming using snake case on top of that if i try to build with xcode with an ios target those variables will never be used as xcode uses instead of that is generated by conan with xcodedeps | 1 |
577,173 | 17,104,681,171 | IssuesEvent | 2021-07-09 15:53:18 | ranking-agent/strider | https://api.github.com/repos/ranking-agent/strider | closed | two hop (both ends pinned) not finding results. | Priority: High standup | ```
{
"message": {
"query_graph": {
"nodes": {
"n0": {
"ids": [
"MONDO:0007743"
],
"categories": [
"biolink:Disease"
]
},
"n1": {
"categories": [
"biolink:Gene"
]
},
"n2": {
"ids": [
"CHEBI:31859"
],
"categories": [
"biolink:ChemicalSubstance"
]
}
},
"edges": {
"e0": {
"subject": "n0",
"object": "n1"
},
"e1": {
"subject": "n2",
"object": "n1"
}
}
}
}
}
```
https://github.com/NCATSTranslator/testing/issues/77
I think that this is showing up as a timeout in the ARS, as strider takes 7+ minutes to complete.
When strider is complete, it is not returning any results. There is an expected result of gene: SLC 6A3 | 1.0 | two hop (both ends pinned) not finding results. - ```
{
"message": {
"query_graph": {
"nodes": {
"n0": {
"ids": [
"MONDO:0007743"
],
"categories": [
"biolink:Disease"
]
},
"n1": {
"categories": [
"biolink:Gene"
]
},
"n2": {
"ids": [
"CHEBI:31859"
],
"categories": [
"biolink:ChemicalSubstance"
]
}
},
"edges": {
"e0": {
"subject": "n0",
"object": "n1"
},
"e1": {
"subject": "n2",
"object": "n1"
}
}
}
}
}
```
https://github.com/NCATSTranslator/testing/issues/77
I think that this is showing up as a timeout in the ARS, as strider takes 7+ minutes to complete.
When strider is complete, it is not returning any results. There is an expected result of gene: SLC 6A3 | priority | two hop both ends pinned not finding results message query graph nodes ids mondo categories biolink disease categories biolink gene ids chebi categories biolink chemicalsubstance edges subject object subject object i think that this is showing up as a timeout in the ars as strider takes minutes to complete when strider is complete it is not returning any results there is an expected result of gene slc | 1 |
556,918 | 16,494,980,191 | IssuesEvent | 2021-05-25 09:21:37 | SmashMC-Development/Bugs-and-Issues | https://api.github.com/repos/SmashMC-Development/Bugs-and-Issues | closed | Survival /buy command | custom dev high priority survival | **Describe the Bug**
Clicking on the items in the GUI for /buy doesn't do anything
**To Reproduce**
Steps to reproduce the behavior:
1. Go to survival
2. Type /buy
3. Try clicking on the items in the GUI
**Servers with the Bug**
Survival
**Expected behavior**
Clicking on an item in the GUI of /buy should give you a link to the corresponding store page
**Screenshots**
N/A
**Additional context**
N/A | 1.0 | Survival /buy command - **Describe the Bug**
Clicking on the items in the GUI for /buy doesn't do anything
**To Reproduce**
Steps to reproduce the behavior:
1. Go to survival
2. Type /buy
3. Try clicking on the items in the GUI
**Servers with the Bug**
Survival
**Expected behavior**
Clicking on an item in the GUI of /buy should give you a link to the corresponding store page
**Screenshots**
N/A
**Additional context**
N/A | priority | survival buy command describe the bug clicking on the items in the gui for buy doesn t do anything to reproduce steps to reproduce the behavior go to survival type buy try clicking on the items in the gui servers with the bug survival expected behavior clicking on an item in the gui of buy should give you a link to the corresponding store page screenshots n a additional context n a | 1 |
415,755 | 12,133,817,987 | IssuesEvent | 2020-04-23 09:40:05 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | CI: ad machine doesn't start correctly | Priority: High Type: Bug | **Describe the bug**
Sometimes, `test` stage in pipeline failed during initial booting of `ad` virtual machine. Issue is random because it could work for CentOS and not for Debian (and vice versa).
Error message:
```
2020-04-21 05:05:03 [ERROR] Error occurred: An error occurred executing a remote WinRM command.
Shell: Cmd
Command: hostname
Message: unknown type: 1235072778
```
I set `VAGRANT_LOG` to `debug` in CI and get two pipelines this night:
- [a passed pipeline for CentOS](https://gitlab.com/inverse-inc/packetfence/-/jobs/519508213)
- [a failed pipeline for Debian](https://gitlab.com/inverse-inc/packetfence/-/jobs/519508227)
| 1.0 | CI: ad machine doesn't start correctly - **Describe the bug**
Sometimes, `test` stage in pipeline failed during initial booting of `ad` virtual machine. Issue is random because it could work for CentOS and not for Debian (and vice versa).
Error message:
```
2020-04-21 05:05:03 [ERROR] Error occurred: An error occurred executing a remote WinRM command.
Shell: Cmd
Command: hostname
Message: unknown type: 1235072778
```
I set `VAGRANT_LOG` to `debug` in CI and get two pipelines this night:
- [a passed pipeline for CentOS](https://gitlab.com/inverse-inc/packetfence/-/jobs/519508213)
- [a failed pipeline for Debian](https://gitlab.com/inverse-inc/packetfence/-/jobs/519508227)
| priority | ci ad machine doesn t start correctly describe the bug sometimes test stage in pipeline failed during initial booting of ad virtual machine issue is random because it could work for centos and not for debian and vice versa error message error occurred an error occurred executing a remote winrm command shell cmd command hostname message unknown type i set vagrant log to debug in ci and get two pipelines this night | 1 |
303,481 | 9,307,603,751 | IssuesEvent | 2019-03-25 12:43:51 | keepassxreboot/keepassxc | https://api.github.com/repos/keepassxreboot/keepassxc | closed | Release link for v2.4.0 for macOS is not present | distribution high priority platform: macOS | [TIP]: # ( Provide a general summary of the issue in the title above ^^ )
[TIP]: # ( DO NOT include screenshots of your actual database! )
## Expected Behavior
In the releases page, I expected to find a link to the download for a dmg file for macOS.
## Current Behavior
Link is not present for v2.4.0
## Possible Solution
Please add the link
## Steps to Reproduce
[NOTE]: # ( Provide a link to a live example, or an unambiguous set of steps to )
[NOTE]: # ( reproduce this bug. Include code to reproduce, if relevant )
1.
2.
3.
## Context
[NOTE]: # ( How has this issue affected you? What unique circumstances do you have? )
## Debug Info
[NOTE]: # ( Paste debug info from Help → About here )
KeePassXC - VERSION
Revision: REVISION
Libraries:
- LIBS
Operating system: OS
CPU architecture: ARCH
Kernel: KERNEL
Enabled extensions:
- EXTENSIONS
| 1.0 | Release link for v2.4.0 for macOS is not present - [TIP]: # ( Provide a general summary of the issue in the title above ^^ )
[TIP]: # ( DO NOT include screenshots of your actual database! )
## Expected Behavior
In the releases page, I expected to find a link to the download for a dmg file for macOS.
## Current Behavior
Link is not present for v2.4.0
## Possible Solution
Please add the link
## Steps to Reproduce
[NOTE]: # ( Provide a link to a live example, or an unambiguous set of steps to )
[NOTE]: # ( reproduce this bug. Include code to reproduce, if relevant )
1.
2.
3.
## Context
[NOTE]: # ( How has this issue affected you? What unique circumstances do you have? )
## Debug Info
[NOTE]: # ( Paste debug info from Help → About here )
KeePassXC - VERSION
Revision: REVISION
Libraries:
- LIBS
Operating system: OS
CPU architecture: ARCH
Kernel: KERNEL
Enabled extensions:
- EXTENSIONS
| priority | release link for for macos is not present provide a general summary of the issue in the title above do not include screenshots of your actual database expected behavior in the releases page i expected to find a link to the download for a dmg file for macos current behavior link is not present for possible solution please add the link steps to reproduce provide a link to a live example or an unambiguous set of steps to reproduce this bug include code to reproduce if relevant context how has this issue affected you what unique circumstances do you have debug info paste debug info from help → about here keepassxc version revision revision libraries libs operating system os cpu architecture arch kernel kernel enabled extensions extensions | 1 |
604,730 | 18,717,791,405 | IssuesEvent | 2021-11-03 08:12:28 | JYGC/OffPeakMediaFetcher | https://api.github.com/repos/JYGC/OffPeakMediaFetcher | closed | Enable downloading of multiple videos at once | enhancement investigation high priority | Use youtube-dl's ability to download multiple videos at once. | 1.0 | Enable downloading of multiple videos at once - Use youtube-dl's ability to download multiple videos at once. | priority | enable downloading of multiple videos at once use youtube dl s ability to download multiple videos at once | 1 |
16,405 | 2,614,998,292 | IssuesEvent | 2015-03-01 02:35:23 | ceylon/ceylon-ide-eclipse | https://api.github.com/repos/ceylon/ceylon-ide-eclipse | opened | removal of DeclarationWithProject broke OpenDeclarationDialog | bug high priority | Declaration.equals() doesn't take into account the actual archive/source folder containing the declaration, so we _do_ need to wrap the Declaration in a proxy with a different notion of equality. | 1.0 | removal of DeclarationWithProject broke OpenDeclarationDialog - Declaration.equals() doesn't take into account the actual archive/source folder containing the declaration, so we _do_ need to wrap the Declaration in a proxy with a different notion of equality. | priority | removal of declarationwithproject broke opendeclarationdialog declaration equals doesn t take into account the actual archive source folder containing the declaration so we do need to wrap the declaration in a proxy with a different notion of equality | 1 |
668,031 | 22,549,554,290 | IssuesEvent | 2022-06-27 03:02:27 | nkalupahana/baseline | https://api.github.com/repos/nkalupahana/baseline | closed | Safari flickity fullscreen bug | area: summary type: bugfix high priority | Right now, the hardware acceleration on the MoodLogList on Safari is causing position: fixed to target the list instead of the body when fullscreening. This is high priority and should be fixed ASAP. | 1.0 | Safari flickity fullscreen bug - Right now, the hardware acceleration on the MoodLogList on Safari is causing position: fixed to target the list instead of the body when fullscreening. This is high priority and should be fixed ASAP. | priority | safari flickity fullscreen bug right now the hardware acceleration on the moodloglist on safari is causing position fixed to target the list instead of the body when fullscreening this is high priority and should be fixed asap | 1 |
363,137 | 10,738,280,466 | IssuesEvent | 2019-10-29 14:32:38 | IBM/gWhisper | https://api.github.com/repos/IBM/gWhisper | closed | doc test fails sometimes | High Priority bug | 2: #################################################################
2: Executing test 'oneof input choices again' at line 120
2: execute cmd '/home/travis/build/IBM/gWhisper/build/gwhisper --complete 127.0.0.1 examples.ComplexTypeRpcs sendNumberOrStringOneOf both=:number=5 str=5:'
2: Received:
2: number= (Both, a number and a string)
2: str= (Only a string)
2: both= (Both, a number and a string)
2: Expected:
2: number= (Only a number)
2: str= (Only a string)
2: both= (Both, a number and a string)
2: FAIL: line 1 received and expected text does not match.
| 1.0 | doc test fails sometimes - 2: #################################################################
2: Executing test 'oneof input choices again' at line 120
2: execute cmd '/home/travis/build/IBM/gWhisper/build/gwhisper --complete 127.0.0.1 examples.ComplexTypeRpcs sendNumberOrStringOneOf both=:number=5 str=5:'
2: Received:
2: number= (Both, a number and a string)
2: str= (Only a string)
2: both= (Both, a number and a string)
2: Expected:
2: number= (Only a number)
2: str= (Only a string)
2: both= (Both, a number and a string)
2: FAIL: line 1 received and expected text does not match.
| priority | doc test fails sometimes executing test oneof input choices again at line execute cmd home travis build ibm gwhisper build gwhisper complete examples complextyperpcs sendnumberorstringoneof both number str received number both a number and a string str only a string both both a number and a string expected number only a number str only a string both both a number and a string fail line received and expected text does not match | 1 |
764,063 | 26,783,479,659 | IssuesEvent | 2023-01-31 23:40:21 | phetsims/axon | https://api.github.com/repos/phetsims/axon | opened | Add Validator.valueComparisonStrategy | priority:2-high | For a while there has a been a discrepancy between Property deep equality and Validation's version. Currently we use reference equality to see if validValues match:
https://github.com/phetsims/axon/blob/40e574c241fce41a37dec0539b93a0c70fcd35cc/js/Validation.ts#L290
This is not acceptable, and I'm glad we got to a point where we want to change! From a conversation over in https://github.com/phetsims/studio/issues/291 with @samreid, we should add a more general strategy that helps us get this coverage in our central validation code.
While we are at it, we can loop into the conversation ReadOnlyProperty.useDeepEquality. This can also use our more useful, general algorithm.
In terms of the size of this change, it is not huge. I have Validator.valueCompareStrategy working well in my working copy, and we will just want to expand that to Property next. | 1.0 | Add Validator.valueComparisonStrategy - For a while there has a been a discrepancy between Property deep equality and Validation's version. Currently we use reference equality to see if validValues match:
https://github.com/phetsims/axon/blob/40e574c241fce41a37dec0539b93a0c70fcd35cc/js/Validation.ts#L290
This is not acceptable, and I'm glad we got to a point where we want to change! From a conversation over in https://github.com/phetsims/studio/issues/291 with @samreid, we should add a more general strategy that helps us get this coverage in our central validation code.
While we are at it, we can loop into the conversation ReadOnlyProperty.useDeepEquality. This can also use our more useful, general algorithm.
In terms of the size of this change, it is not huge. I have Validator.valueCompareStrategy working well in my working copy, and we will just want to expand that to Property next. | priority | add validator valuecomparisonstrategy for a while there has a been a discrepancy between property deep equality and validation s version currently we use reference equality to see if validvalues match this is not acceptable and i m glad we got to a point where we want to change from a conversation over in with samreid we should add a more general strategy that helps us get this coverage in our central validation code while we are at it we can loop into the conversation readonlyproperty usedeepequality this can also use our more useful general algorithm in terms of the size of this change it is not huge i have validator valuecomparestrategy working well in my working copy and we will just want to expand that to property next | 1 |
728,580 | 25,084,976,721 | IssuesEvent | 2022-11-07 22:50:33 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Fix buggy $current_focus_elem code in recent conversations | bug priority: high area: recent-conversations release goal | In some code paths in Recent Conversations, we seem to have code that's a bit confused about the type of `$current_focus_elem`, which can sometimes be the literal string "table".
```
if ($current_focus_elem && input_key !== "escape") {
$current_focus_elem.trigger("focus");
if ($current_focus_elem.hasClass("btn-recent-filters")) {
compose_closed_ui.set_standard_text_for_reply_button();
}
return true;
}
```
@alya somehow managed to get this code to throw an exception earlier; I don't have a reproducer but the code seems on the face of it incorrect. Maybe `$current_focus_elem` should be unset rather than `"table"` in the state where it isn't set? Or the conditional should check for a value of `table`? I'm not sure, but obviously "table".trigger("focus") is invalid, since strings don't have a `.trigger` method :).
@amanagr can you pick a plan for fixing this one? | 1.0 | Fix buggy $current_focus_elem code in recent conversations - In some code paths in Recent Conversations, we seem to have code that's a bit confused about the type of `$current_focus_elem`, which can sometimes be the literal string "table".
```
if ($current_focus_elem && input_key !== "escape") {
$current_focus_elem.trigger("focus");
if ($current_focus_elem.hasClass("btn-recent-filters")) {
compose_closed_ui.set_standard_text_for_reply_button();
}
return true;
}
```
@alya somehow managed to get this code to throw an exception earlier; I don't have a reproducer but the code seems on the face of it incorrect. Maybe `$current_focus_elem` should be unset rather than `"table"` in the state where it isn't set? Or the conditional should check for a value of `table`? I'm not sure, but obviously "table".trigger("focus") is invalid, since strings don't have a `.trigger` method :).
@amanagr can you pick a plan for fixing this one? | priority | fix buggy current focus elem code in recent conversations in some code paths in recent conversations we seem to have code that s a bit confused about the type of current focus elem which can sometimes be the literal string table if current focus elem input key escape current focus elem trigger focus if current focus elem hasclass btn recent filters compose closed ui set standard text for reply button return true alya somehow managed to get this code to throw an exception earlier i don t have a reproducer but the code seems on the face of it incorrect maybe current focus elem should be unset rather than table in the state where it isn t set or the conditional should check for a value of table i m not sure but obviously table trigger focus is invalid since strings don t have a trigger method amanagr can you pick a plan for fixing this one | 1 |
695,580 | 23,864,837,999 | IssuesEvent | 2022-09-07 10:06:23 | WordPress/Learn | https://api.github.com/repos/WordPress/Learn | closed | Converting a Shortcode into a Block - Tutorial | [Priority] High [Component] Tutorials [Experience Level] Intermediate [Audience] Developers [Content Type] Tutorial Ready to publish | # Topic Description
Following on from the [Using the create-block tool](https://learn.wordpress.org/tutorial/using-the-create-block-tool/) tutorial, this tutorial will guide the new block developer on the process of converting a PHP shortcode into a block.
Due to time limit constraints, this tutorial covers the most basic aspects of this process and will be followed up by supplementary tutorials on:
- Styling a block
- Adding and Using attributes to allow user input
- Making your block translation ready
- and [more](https://drive.google.com/drive/folders/1tOJBip5mPH6lokf5muQCXeiArdIEo7A3?usp=sharing)
Therefore, those topics are not covered in this tutorial.
# Related Resources
Links to related content on Learn, HelpHub, DevHub, GitHub Gutenberg Issues, DevNotes, etc.
- [Using the create-block tool](https://learn.wordpress.org/tutorial/using-the-create-block-tool/)
- [create-block documentation](https://developer.wordpress.org/block-editor/reference-guides/packages/packages-create-block/)
- [Create a Block Tutorial](https://developer.wordpress.org/block-editor/getting-started/create-block/)
# Guidelines
Review the [team guidelines] (https://make.wordpress.org/training/handbook/guidelines/)
# Tutorial Development Checklist
- [x] Vetted by instructional designers for content idea
- [x] Provide feedback of the idea
- [x] Gather links to Support and Developer Docs
- [ ] Consider any MarComms (marketing communications) resources and link to those
- [x] Review any related material on Learn
- [ ] Define several SEO keywords to use in the article and where they should be prominently used
- [x] Description and Objectives finalized
- [x] Create an outline of the workshop
- [x] Tutorial submitted to the team for Q/A review https://blog.wordpress.tv/submission-guidelines/ & https://make.wordpress.org/training/2021/08/17/proposal-brand-guidelines-for-learn-wordpress-content/
- [x] Tutorial submitted to WPTV https://wordpress.tv/submit-video/
- [x] Tutorial published on WPTV
- [x] Tutorial is captioned https://make.wordpress.org/training/handbook/workshops/workshop-subtitles-and-transcripts/
- [x] Tutorial created on Learn.WordPress.org
- [x] Tutorial post is reviewed for grammar, spelling, etc.
- [x] Tutorial published on Learn.WordPress.org
- [ ] Tutorial announced to training team
- [ ] Tutorial announced to creator
- [ ] Tutorial announced to Marketing Team for promotion
- [ ] Gather feedback from workshop viewers/participants
| 1.0 | Converting a Shortcode into a Block - Tutorial - # Topic Description
Following on from the [Using the create-block tool](https://learn.wordpress.org/tutorial/using-the-create-block-tool/) tutorial, this tutorial will guide the new block developer on the process of converting a PHP shortcode into a block.
Due to time limit constraints, this tutorial covers the most basic aspects of this process and will be followed up by supplementary tutorials on:
- Styling a block
- Adding and Using attributes to allow user input
- Making your block translation ready
- and [more](https://drive.google.com/drive/folders/1tOJBip5mPH6lokf5muQCXeiArdIEo7A3?usp=sharing)
Therefore, those topics are not covered in this tutorial.
# Related Resources
Links to related content on Learn, HelpHub, DevHub, GitHub Gutenberg Issues, DevNotes, etc.
- [Using the create-block tool](https://learn.wordpress.org/tutorial/using-the-create-block-tool/)
- [create-block documentation](https://developer.wordpress.org/block-editor/reference-guides/packages/packages-create-block/)
- [Create a Block Tutorial](https://developer.wordpress.org/block-editor/getting-started/create-block/)
# Guidelines
Review the [team guidelines] (https://make.wordpress.org/training/handbook/guidelines/)
# Tutorial Development Checklist
- [x] Vetted by instructional designers for content idea
- [x] Provide feedback of the idea
- [x] Gather links to Support and Developer Docs
- [ ] Consider any MarComms (marketing communications) resources and link to those
- [x] Review any related material on Learn
- [ ] Define several SEO keywords to use in the article and where they should be prominently used
- [x] Description and Objectives finalized
- [x] Create an outline of the workshop
- [x] Tutorial submitted to the team for Q/A review https://blog.wordpress.tv/submission-guidelines/ & https://make.wordpress.org/training/2021/08/17/proposal-brand-guidelines-for-learn-wordpress-content/
- [x] Tutorial submitted to WPTV https://wordpress.tv/submit-video/
- [x] Tutorial published on WPTV
- [x] Tutorial is captioned https://make.wordpress.org/training/handbook/workshops/workshop-subtitles-and-transcripts/
- [x] Tutorial created on Learn.WordPress.org
- [x] Tutorial post is reviewed for grammar, spelling, etc.
- [x] Tutorial published on Learn.WordPress.org
- [ ] Tutorial announced to training team
- [ ] Tutorial announced to creator
- [ ] Tutorial announced to Marketing Team for promotion
- [ ] Gather feedback from workshop viewers/participants
| priority | converting a shortcode into a block tutorial topic description following on from the tutorial this tutorial will guide the new block developer on the process of converting a php shortcode into a block due to time limit constraints this tutorial covers the most basic aspects of this process and will be followed up by supplementary tutorials on styling a block adding and using attributes to allow user input making your block translation ready and therefore those topics are not covered in this tutorial related resources links to related content on learn helphub devhub github gutenberg issues devnotes etc guidelines review the tutorial development checklist vetted by instructional designers for content idea provide feedback of the idea gather links to support and developer docs consider any marcomms marketing communications resources and link to those review any related material on learn define several seo keywords to use in the article and where they should be prominently used description and objectives finalized create an outline of the workshop tutorial submitted to the team for q a review tutorial submitted to wptv tutorial published on wptv tutorial is captioned tutorial created on learn wordpress org tutorial post is reviewed for grammar spelling etc tutorial published on learn wordpress org tutorial announced to training team tutorial announced to creator tutorial announced to marketing team for promotion gather feedback from workshop viewers participants | 1 |
783,186 | 27,521,564,370 | IssuesEvent | 2023-03-06 15:20:39 | YoruNoKen/miaosu | https://api.github.com/repos/YoruNoKen/miaosu | closed | compare not working with mentions | bug high priority | > when replying to a map and trying to get user by tagging them in compare command, the userargs is processed as `message.author.id`


[Original Message by @yoru#9267](https://canary.discord.com/channels/913176314419220500/913545872552394774/1075085464354160710) | 1.0 | compare not working with mentions - > when replying to a map and trying to get user by tagging them in compare command, the userargs is processed as `message.author.id`


[Original Message by @yoru#9267](https://canary.discord.com/channels/913176314419220500/913545872552394774/1075085464354160710) | priority | compare not working with mentions when replying to a map and trying to get user by tagging them in compare command the userargs is processed as message author id | 1 |
156,038 | 5,963,307,296 | IssuesEvent | 2017-05-30 04:11:19 | Wuzzy2/MineClone2-Bugs | https://api.github.com/repos/Wuzzy2/MineClone2-Bugs | closed | Items fall through floor | bug HIGH PRIORITY non-mob entities | Some items now fall through floor when broken. About a 50% chance of not being able to collect the items.
Minetest 0.4.15 & commit cf9f4ba976c3cef4c269014defb8470c45ab0a2e | 1.0 | Items fall through floor - Some items now fall through floor when broken. About a 50% chance of not being able to collect the items.
Minetest 0.4.15 & commit cf9f4ba976c3cef4c269014defb8470c45ab0a2e | priority | items fall through floor some items now fall through floor when broken about a chance of not being able to collect the items minetest commit | 1 |
3,361 | 2,537,769,202 | IssuesEvent | 2015-01-26 22:53:26 | newca12/gapt | https://api.github.com/repos/newca12/gapt | closed | create two versions of the inductive structure of the Abs object in typed lambda calculus | 1 star bug Component-Logic imported Milestone-Release2.0 Priority-High | _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on January 13, 2010 14:56:40_
Since implementing the de bruijn indices when creating Abs new terms are
created for the variable and the expression there that will contain bound
variables indexed by db-index. It should be possible to access both the
modified version of the expression as is now but also to be able to
decompose the element naturally (so some bound variables will no longer be
bound).
The current solution is to store both versions in Abs and have two
different extractors and names:
variable and expression for the original ones
variableInScope and expressionInScope for the modified ones
and also objects Abs and AbsInScope
_Original issue: http://code.google.com/p/gapt/issues/detail?id=63_ | 1.0 | create two versions of the inductive structure of the Abs object in typed lambda calculus - _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on January 13, 2010 14:56:40_
Since implementing the de bruijn indices when creating Abs new terms are
created for the variable and the expression there that will contain bound
variables indexed by db-index. It should be possible to access both the
modified version of the expression as is now but also to be able to
decompose the element naturally (so some bound variables will no longer be
bound).
The current solution is to store both versions in Abs and have two
different extractors and names:
variable and expression for the original ones
variableInScope and expressionInScope for the modified ones
and also objects Abs and AbsInScope
_Original issue: http://code.google.com/p/gapt/issues/detail?id=63_ | priority | create two versions of the inductive structure of the abs object in typed lambda calculus from on january since implementing the de bruijn indices when creating abs new terms are created for the variable and the expression there that will contain bound variables indexed by db index it should be possible to access both the modified version of the expression as is now but also to be able to decompose the element naturally so some bound variables will no longer be bound the current solution is to store both versions in abs and have two different extractors and names variable and expression for the original ones variableinscope and expressioninscope for the modified ones and also objects abs and absinscope original issue | 1 |
768,083 | 26,952,469,600 | IssuesEvent | 2023-02-08 12:44:37 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [Bug]: More CancellationException with module imports | Type/Bug Priority/High Team/LanguageServer Points/2 Reason/Other | ### Description
Observed more CancellationException in LS debug logs, when the program has a http import. This leads to slowness in suggestions.
### Steps to Reproduce
Remove `@display` annotation and type it again in the program 1. Then remove http module import and try to write it again.
Program 1 - withhttp
```ballerina
import ballerina/http;
import ballerina/io;
public function main() returns error? {
http:Client cl = check new("https://example.com/");
io:println("Hello");
}
type SomeConfig record {
string username;
@display {
label: "abc",
iconPath: "a"
}
string token;
};
```
Program 2 - withouthttp
```ballerina
import ballerina/io;
public function main() returns error? {
io:println("Hello");
}
type SomeConfig record {
string username;
@display {
label: "abc",
iconPath: "a"
}
string token;
};
```
### Affected Version(s)
Ballerina version output: Ballerina 2201.2.0 (Swan Lake Update 2)
Language specification 2022R3
Update Tool 1.3.10
Plugin version: 3.1.0
Windows 10
### OS, DB, other environment details and versions
_No response_
### Labels
LanguageServer
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_ | 1.0 | [Bug]: More CancellationException with module imports - ### Description
Observed more CancellationException in LS debug logs, when the program has a http import. This leads to slowness in suggestions.
### Steps to Reproduce
Remove `@display` annotation and type it again in the program 1. Then remove http module import and try to write it again.
Program 1 - withhttp
```ballerina
import ballerina/http;
import ballerina/io;
public function main() returns error? {
http:Client cl = check new("https://example.com/");
io:println("Hello");
}
type SomeConfig record {
string username;
@display {
label: "abc",
iconPath: "a"
}
string token;
};
```
Program 2 - withouthttp
```ballerina
import ballerina/io;
public function main() returns error? {
io:println("Hello");
}
type SomeConfig record {
string username;
@display {
label: "abc",
iconPath: "a"
}
string token;
};
```
### Affected Version(s)
Ballerina version output: Ballerina 2201.2.0 (Swan Lake Update 2)
Language specification 2022R3
Update Tool 1.3.10
Plugin version: 3.1.0
Windows 10
### OS, DB, other environment details and versions
_No response_
### Labels
LanguageServer
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_ | priority | more cancellationexception with module imports description observed more cancellationexception in ls debug logs when the program has a http import this leads to slowness in suggestions steps to reproduce remove display annotation and type it again in the program then remove http module import and try to write it again program withhttp ballerina import ballerina http import ballerina io public function main returns error http client cl check new io println hello type someconfig record string username display label abc iconpath a string token program withouthttp ballerina import ballerina io public function main returns error io println hello type someconfig record string username display label abc iconpath a string token affected version s ballerina version output ballerina swan lake update language specification update tool plugin version windows os db other environment details and versions no response labels languageserver related issue s optional no response suggested label s optional no response suggested assignee s optional no response | 1 |
298,219 | 9,197,228,112 | IssuesEvent | 2019-03-07 09:29:38 | dm-drogeriemarkt/foreman_git_templates | https://api.github.com/repos/dm-drogeriemarkt/foreman_git_templates | closed | setting host to build mode fails with error message | bug priority/high | When setting a host to build mode, it currently always fails with this error message:
```
No PXELinux templates were found for this host, make sure you define at least one in your CoreOS 1967.4.0 settings or change PXE loader
``` | 1.0 | setting host to build mode fails with error message - When setting a host to build mode, it currently always fails with this error message:
```
No PXELinux templates were found for this host, make sure you define at least one in your CoreOS 1967.4.0 settings or change PXE loader
``` | priority | setting host to build mode fails with error message when setting a host to build mode it currently always fails with this error message no pxelinux templates were found for this host make sure you define at least one in your coreos settings or change pxe loader | 1 |
418,025 | 12,192,114,407 | IssuesEvent | 2020-04-29 12:24:06 | eclipse/deeplearning4j | https://api.github.com/repos/eclipse/deeplearning4j | reopened | OpenBLAS 0.3.8 issue on AMD Threadripper cpu | Bug High Priority | OpenBLAS 0.3.8 crashes on AMD Threadripper 3970X cpu, with SIGILL error code.
We always got following crash while testing TFGraphTestAllHelper.log_determinant.rank3:
- A fatal error has been detected by the Java Runtime Environment:
- SIGILL (0x4) at pc=0x00007f86f33181ff, pid=130947, tid=0x00007f8839009700
- C [libopenblas_nolapack.so.0+0x10a61ff] sgemm_kernel_direct+0x126f
This issue is a blocker for us, since we basically unable to use devbox for java tests | 1.0 | OpenBLAS 0.3.8 issue on AMD Threadripper cpu - OpenBLAS 0.3.8 crashes on AMD Threadripper 3970X cpu, with SIGILL error code.
We always got following crash while testing TFGraphTestAllHelper.log_determinant.rank3:
- A fatal error has been detected by the Java Runtime Environment:
- SIGILL (0x4) at pc=0x00007f86f33181ff, pid=130947, tid=0x00007f8839009700
- C [libopenblas_nolapack.so.0+0x10a61ff] sgemm_kernel_direct+0x126f
This issue is a blocker for us, since we basically unable to use devbox for java tests | priority | openblas issue on amd threadripper cpu openblas crashes on amd threadripper cpu with sigill error code we always got following crash while testing tfgraphtestallhelper log determinant a fatal error has been detected by the java runtime environment sigill at pc pid tid c sgemm kernel direct this issue is a blocker for us since we basically unable to use devbox for java tests | 1 |
750,826 | 26,219,631,885 | IssuesEvent | 2023-01-04 13:54:03 | ramp4-pcar4/story-ramp | https://api.github.com/repos/ramp4-pcar4/story-ramp | closed | Exiting full screen in a map takes you back to the top of the page | RAMP Quality Assurance Bug Priority: High | If you trigger a full screen RAMP map, and then exit out of it, you will get jumped back to the top of the page. If possible, can we stay at the triggering page when the full screen session is closed? | 1.0 | Exiting full screen in a map takes you back to the top of the page - If you trigger a full screen RAMP map, and then exit out of it, you will get jumped back to the top of the page. If possible, can we stay at the triggering page when the full screen session is closed? | priority | exiting full screen in a map takes you back to the top of the page if you trigger a full screen ramp map and then exit out of it you will get jumped back to the top of the page if possible can we stay at the triggering page when the full screen session is closed | 1 |
566,343 | 16,819,190,298 | IssuesEvent | 2021-06-17 11:02:37 | Stranger6667/jsonschema-rs | https://api.github.com/repos/Stranger6667/jsonschema-rs | closed | Regular expressions aren't transformed to ECMAScript-compatible within `format: `regex` | Priority: High Type: Bug | For this reason, some regular expressions are rejected, but actually can be handled by this lib. | 1.0 | Regular expressions aren't transformed to ECMAScript-compatible within `format: `regex` - For this reason, some regular expressions are rejected, but actually can be handled by this lib. | priority | regular expressions aren t transformed to ecmascript compatible within format regex for this reason some regular expressions are rejected but actually can be handled by this lib | 1 |
44,035 | 2,898,754,118 | IssuesEvent | 2015-06-17 06:42:15 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | [DEV]Blood drips do not appear, are not GC'd | priority: high | A bleeding mob no longer leaves blood drips all over the floor, other blood items (splatter, footprints) appear, but not regular blood drips.
They are also not properly garbage collected as this report, of which I got many, MANY of, state.
## TESTING: GC: -- [0x2002239] | /obj/effect/decal/cleanable/blood/drip was unable to be GC'd and was deleted -- | 1.0 | [DEV]Blood drips do not appear, are not GC'd - A bleeding mob no longer leaves blood drips all over the floor, other blood items (splatter, footprints) appear, but not regular blood drips.
They are also not properly garbage collected as this report, of which I got many, MANY of, state.
## TESTING: GC: -- [0x2002239] | /obj/effect/decal/cleanable/blood/drip was unable to be GC'd and was deleted -- | priority | blood drips do not appear are not gc d a bleeding mob no longer leaves blood drips all over the floor other blood items splatter footprints appear but not regular blood drips they are also not properly garbage collected as this report of which i got many many of state testing gc obj effect decal cleanable blood drip was unable to be gc d and was deleted | 1 |
333,507 | 10,127,399,864 | IssuesEvent | 2019-08-01 10:07:12 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | System stdlib should accept untainted paths | Component/System Priority/High Type/Bug | **Description:**
`path`s accepted in `system` stdlib should all be `@untainted` to make sure untainted data cannot get passed in as paths.
[1] https://github.com/ballerina-platform/ballerina-lang/blob/f566a88558cf86ab9753810a5ace633ebf1e762b/stdlib/system/src/main/ballerina/src/system/system.bal#L48
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 1.0 | System stdlib should accept untainted paths - **Description:**
`path`s accepted in `system` stdlib should all be `@untainted` to make sure untainted data cannot get passed in as paths.
[1] https://github.com/ballerina-platform/ballerina-lang/blob/f566a88558cf86ab9753810a5ace633ebf1e762b/stdlib/system/src/main/ballerina/src/system/system.bal#L48
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| priority | system stdlib should accept untainted paths description path s accepted in system stdlib should all be untainted to make sure untainted data cannot get passed in as paths steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
793,651 | 28,006,177,316 | IssuesEvent | 2023-03-27 15:23:28 | AY2223S2-CS2103T-W11-3/tp | https://api.github.com/repos/AY2223S2-CS2103T-W11-3/tp | closed | Give clearer message if user keys in invalid command in the wrong mode | priority.High | Right now, the only feedback the user receives if they key in addDeck when no deck is selected is "Invalid Command".
To add clearer messages when a wrong command is given in any of the 3 possible states
- Main mode with no deck selected
- Main mode with a deck selected
- Review mode
| 1.0 | Give clearer message if user keys in invalid command in the wrong mode - Right now, the only feedback the user receives if they key in addDeck when no deck is selected is "Invalid Command".
To add clearer messages when a wrong command is given in any of the 3 possible states
- Main mode with no deck selected
- Main mode with a deck selected
- Review mode
| priority | give clearer message if user keys in invalid command in the wrong mode right now the only feedback the user receives if they key in adddeck when no deck is selected is invalid command to add clearer messages when a wrong command is given in any of the possible states main mode with no deck selected main mode with a deck selected review mode | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.